CN114001668B - Three-dimensional measuring system for surface of reflecting object, measuring method thereof and storage medium - Google Patents

Three-dimensional measuring system for surface of reflecting object, measuring method thereof and storage medium

Info

Publication number
CN114001668B
CN114001668B (application CN202111300606.3A)
Authority
CN
China
Prior art keywords: image, camera, object point, value, point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111300606.3A
Other languages
Chinese (zh)
Other versions
CN114001668A (en)
Inventor
吴伟锋
王国安
王明毅
陈晓铭
彭粤龙
Current Assignee
Hypersen Technologies Co., Ltd.
Original Assignee
Hypersen Technologies Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Hypersen Technologies Co., Ltd.
Priority to CN202111300606.3A
Publication of CN114001668A
Application granted
Publication of CN114001668B
Legal status: Active

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01B - MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 - Measuring arrangements characterised by the use of optical techniques
    • G01B11/24 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures

Abstract

A three-dimensional measuring system for the surface of a light-reflecting object, a measuring method thereof and a storage medium. A first image of an object to be measured, shot by a first camera, and a second image of the object, shot by a second camera, are acquired, the first image and the second image each containing a fringe image reflected by the surface of the object. A height value and a gradient value of a first object point are obtained from the first image and the second image, the first object point being an object point in the overlapping area of the fields of view of the first camera and the second camera. The initial value of a first height value of a second object point is determined as the height value of the first object point, the second object point lying in the adjacent area of the first object point and in the non-overlapping area of the fields of view of the two cameras. Iterative processing based on the initial value of the first height value of the second object point and a preset step length determines the gradient value of the second object point. The surface of the object is then reconstructed according to the gradient values of all the object points to obtain the shape data of the object to be measured, thereby enlarging the measurement range.

Description

Three-dimensional measuring system for surface of reflecting object, measuring method thereof and storage medium
Technical Field
The present application relates to the technical field of object surface measurement, and in particular to a three-dimensional measurement system for the surface of a light-reflecting object, a measurement method thereof and a storage medium.
Background
Reflective objects, such as mirrors or mirror-like objects, require precise measurement in order to achieve three-dimensional reconstruction of their surfaces or detection of surface defects.
In the prior art, a single-camera deflectometry system is used to measure highly reflective objects. Such measurement, however, suffers from the 'non-unique normal' ambiguity, so the traditional measurement conditions are stringent. To address this problem of the traditional single-camera system, deflectometry systems based on the two-camera normal-consistency principle are now widely adopted.
However, such deflectometry systems have a small measurement range.
Disclosure of Invention
The technical problem mainly addressed by this application is the limited measurement range of deflectometry systems for three-dimensional measurement of the surfaces of light-reflecting objects.
According to a first aspect, an embodiment provides a three-dimensional measurement system of a surface of a light reflecting object, comprising:
the display screen is used for displaying the stripe image;
the first camera is used for shooting a first image of the fringe image reflected by the surface of the object to be measured and sending the first image to the processor;
the second camera is used for shooting a second image of the fringe image reflected by the surface of the object to be measured and sending the second image to the processor; the field of view of the first camera and the field of view of the second camera have an overlapping area, and when an object to be measured is measured, at least part of the object is in the overlapping area;
a processor, configured to obtain a height value and a gradient value of a first object point according to the first image and the second image, where the first object point is an object point in the overlapping area of the fields of view of the first camera and the second camera; determine an initial value of a first height value of a second object point as the height value of the first object point, where the second object point is in the adjacent area of the first object point and in the non-overlapping area of the fields of view of the first camera and the second camera; perform iterative processing based on the initial value of the first height value of the second object point and a preset step length to determine a gradient value of the second object point; and reconstruct the surface of the object to be measured according to the gradient values of all the object points to obtain the topography data of the object to be measured.
Optionally, the system further comprises:
a display stripe controller for generating a stripe image and transmitting the stripe image to the display screen.
Optionally, the system further includes:
the third camera is used for shooting a third image of the fringe image reflected by the surface of the object to be detected and sending the third image to the processor;
the processor is further configured to:
determine, according to the third image and a target image, the overlapping area of the fields of view of the target camera and the third camera, where the target image is the first image or the second image, and the target camera is the camera that shot the target image; acquire a height value of a fourth object point in the overlapping area of the fields of view of the target camera and the third camera; determine an initial value of a first height value of a fifth object point as the height value of the fourth object point, where the fifth object point is in the adjacent area of the fourth object point and in the non-overlapping area of the fields of view of the target camera and the third camera; and perform iterative processing according to the initial value of the first height value of the fifth object point and a preset step length to determine the gradient value of the fifth object point.
According to a second aspect, an embodiment provides a method for three-dimensional measurement of a surface of a light reflecting object, comprising:
acquiring a first image and a second image, wherein the first image is an image of an object to be detected shot by a first camera, the second image is an image of the object to be detected shot by a second camera, and the first image and the second image both comprise stripe images reflected by the surface of the object to be detected;
obtaining a height value and a gradient value of a first object point according to the first image and the second image, wherein the first object point is an object point in the overlapping area of the fields of view of the first camera and the second camera;
determining an initial value of a first height value of a second object point as the height value of the first object point, wherein the second object point is in the adjacent area of the first object point and in the non-overlapping area of the fields of view of the first camera and the second camera;
performing iterative processing based on the initial value of the first height value of the second object point and a preset step length to determine a gradient value of the second object point;
and reconstructing the surface of the object to be detected according to the gradient values of all the object points to obtain the appearance data of the object to be detected.
Optionally, the performing iterative processing based on the initial value of the first height value of the second object point and a preset step length to determine the gradient value of the second object point includes:
starting from the initial value of the first height value of the second object point, determining a first gradient value of the second object point according to the first height value of the second object point; obtaining a second height value of the second object point according to the first gradient value of the second object point;
and updating the first height value of the second object point according to the preset step length, and returning to the step of determining the first gradient value of the second object point according to the first height value of the second object point, until the difference between the current second height value and the previous second height value of the second object point is less than or equal to a preset threshold, at which point the gradient value of the second object point is determined to be the first gradient value corresponding to the current second height value of the second object point.
Optionally, the determining the first gradient value of the second object point according to the first height value of the second object point includes:
obtaining the first gradient value of the second object point according to the following formula:
(Equation image not reproduced in the source text; it gives tan α in terms of S1, S2, θ and Δφ′.)
where tan α is the first gradient value of the second object point, S1 is the distance from the light-emitting point of the display device corresponding to the second object point to the reference plane, S2 is the first height value of the second object point, θ is the angle between the line from the light-emitting point to the corresponding point of the second object point on the reference plane and the perpendicular from the light-emitting point to the reference plane, and Δφ′ is the phase difference between the corresponding point of the second object point on the reference plane and the intersection of the reflected light ray with the reference plane.
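The formula referenced above survives only as an image in the source. As a hedged sketch of the relations implied by this variable glossary (not the patent's exact closed form), the phase difference first converts to a lateral displacement on the reference plane; the fringe period p on the reference plane is an assumed parameter that the text does not give:

```latex
% Lateral displacement on the reference plane encoded by the phase
% difference (p denotes the fringe period on the reference plane,
% an assumed parameter not stated in the text):
\Delta d = \frac{\Delta\varphi'}{2\pi}\, p
```

The incident direction at the object point is fixed by S1 and θ; applying the law of reflection at height S2 so that the reflected ray passes through the point displaced by Δd then yields tan α, which is what the equation image expresses.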
Optionally, the obtaining a height value and a gradient value of a first object point according to the first image and the second image includes:
decoding the first image and the second image according to the fringe coding rule to obtain the phase corresponding to each image;
obtaining a first normal direction of the first object point according to the first incident ray of the first object point and the first reflected ray received by the first camera;
obtaining a second normal direction of the first object point according to a second incident ray of the first object point and a second reflected ray received by the second camera;
and obtaining the height value and the gradient value of the first object point from the condition that the first normal direction and the second normal direction coincide.
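The patent does not disclose the specific fringe coding rule. Purely as an illustration of what "decoding according to a fringe coding rule" can look like, the sketch below assumes standard N-step phase shifting with evenly spaced shifts of 2π/N; the function name and the (N, H, W) image layout are our choices, not the patent's:

```python
import numpy as np

def decode_phase(images):
    """Wrapped phase from an (N, H, W) stack of N-step phase-shifted fringe
    images I_k = A + B*cos(phi + 2*pi*k/N). Standard N-step algorithm; the
    evenly spaced phase shifts are an assumption, not taken from the patent."""
    n = images.shape[0]
    k = np.arange(n).reshape(-1, 1, 1)
    s = np.sum(images * np.sin(2 * np.pi * k / n), axis=0)  # proportional to -sin(phi)
    c = np.sum(images * np.cos(2 * np.pi * k / n), axis=0)  # proportional to  cos(phi)
    return np.arctan2(-s, c)  # wrapped phase in (-pi, pi]
```

With N >= 3 the background term A cancels in both sums, so the wrapped phase is recovered per pixel regardless of local fringe contrast.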
Optionally, the performing iterative processing based on the initial value of the first height value of the second object point and a preset step length to determine the gradient value of the second object point includes:
performing iterative processing based on the initial value of the first height value of the second object point and the preset step length to determine the height value and the gradient value of the second object point;
the method further comprises:
determining an initial value of a first height value of a third object point as the height value of the second object point, wherein the third object point is in the adjacent area of the second object point and is neither the first object point nor the second object point;
and performing iterative processing based on the initial value of the first height value of the third object point and a preset step length to determine the gradient value of the third object point.
Optionally, the method further includes:
acquiring a third image, wherein the third image is an image of the object to be detected shot by a third camera, and the third image comprises a stripe image reflected by the surface of the object to be detected;
determining, according to the third image and a target image, the overlapping area of the fields of view of the target camera and the third camera, wherein the target image is the first image or the second image, and the target camera is the camera that shot the target image;
acquiring a height value of a fourth object point in the overlapping area of the fields of view of the target camera and the third camera;
determining an initial value of a first height value of a fifth object point as a height value of the fourth object point, wherein the fifth object point is in a neighboring region of the fourth object point and the fifth object point is in a non-overlapping region of the field of view of the target camera and the third camera;
and performing iterative processing according to the initial value of the first height value of the fifth object point and a preset step length, and determining the gradient value of the fifth object point.
According to a third aspect, an embodiment provides a computer-readable storage medium having a program stored thereon, the program being executable by a processor to implement the method according to the second aspect.
According to the three-dimensional measurement system for the surface of a light-reflecting object, the measurement method and the storage medium of the above embodiments, the first camera and the second camera are arranged so that their fields of view have an overlapping area. The processing device acquires the first image shot by the first camera and the second image shot by the second camera, and determines the height value and gradient value of each first object point in the overlapping area. For a second object point in the non-overlapping area of the two cameras' fields of view, its gradient value can be rapidly determined from the height value of an adjacent first object point in the overlapping area, so that the object to be measured can be three-dimensionally reconstructed and its topography data obtained. Because the gradient values of the non-overlapping area can be rapidly determined, the measurement range is enlarged, the angular capability of the system is improved, and the efficiency of object surface reconstruction is improved. In addition, compared with the traditional system only one camera is added, so neither the cost nor the complexity of the system increases significantly; the problem of the existing system is thus solved while controlling cost and complexity. Moreover, no requirement is imposed on the height of the object to be measured, so the method has strong universality.
Drawings
FIG. 1 is a schematic diagram of a three-dimensional measurement system for measuring a surface of a reflective object according to an embodiment of the present disclosure;
FIG. 2 is a schematic flow chart of a method for three-dimensional measurement of a surface of a reflective object according to an embodiment of the present disclosure;
FIG. 3 is a schematic flow chart of another method for three-dimensional measurement of the surface of a reflective object according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a system geometry provided in an embodiment of the present application;
FIG. 5 is a schematic diagram illustrating an optical path of a first object point according to an embodiment of the present disclosure;
fig. 6 is a schematic flow chart of another method for measuring the surface of a light-reflecting object according to an embodiment of the present disclosure.
Detailed Description
The present application will be described in further detail below with reference to the accompanying drawings by way of specific embodiments. Wherein like elements in different embodiments are numbered with like associated elements. In the following description, numerous specific details are set forth in order to provide a better understanding of the present application. However, those skilled in the art will readily recognize that some of the features may be omitted or replaced with other elements, materials, methods in different instances. In some instances, certain operations related to the present application have not been shown or described in detail in order to avoid obscuring the core of the present application from excessive description, and it is not necessary for those skilled in the art to describe these operations in detail, so that they may be fully understood from the description in the specification and the general knowledge in the art.
Furthermore, the features, operations, or characteristics described in the specification may be combined in any suitable manner to form various embodiments. Also, the various steps or actions in the method descriptions may be swapped or reordered, as will be apparent to a person skilled in the art. Thus, the various sequences in the specification and drawings are for the purpose of describing certain embodiments only and are not intended to imply a required sequence unless otherwise indicated where such sequence must be followed.
The numbering of the components as such, e.g., "first", "second", etc., is used herein only to distinguish the objects as described, and does not have any sequential or technical meaning. The term "connected" and "coupled" as used herein includes both direct and indirect connections (couplings), unless otherwise specified.
The reflective object in this context means an object with a mirror surface or a mirror-like surface, and may also be referred to as a highly reflective object.
A three-dimensional measurement system for a surface of a reflective object to which the method for three-dimensional measurement of a surface of a reflective object according to the embodiment of the present application can be applied will be described with reference to fig. 1.
Referring to fig. 1, fig. 1 is a schematic diagram of a three-dimensional measurement system for the surface of a light-reflecting object according to an embodiment of the present disclosure. As shown in fig. 1, the system includes: a display device 1, at least two cameras (two cameras, a first camera 2 and a second camera 3, are exemplarily shown in this embodiment), and a processing device 4. The first camera 2 and the second camera 3 are each communicatively connected to the processing device 4, either by wire or wirelessly.
The above-described display device 1 is used to display a striped image. The stripe image may be generated by the processing device 4 and transmitted to the display device 1, or the stripe image may be generated by the display stripe controller and transmitted to the display device 1.
The first camera 2 and the second camera 3 have overlapping areas in their visual fields, and when measuring the object 5, at least a part of the object 5 needs to be in the overlapping area.
The processing device 4 may be a computer, a server, a mobile phone, a tablet device, or the like.
Alternatively, the display device 1 may be a display. The display device 1 may be a black and white display or a color display.
Alternatively, the first camera 2 and the second camera 3 may be black and white cameras or color cameras.
If the display device 1, the first camera 2 and the second camera 3 are all color devices, the measurement cycle rate of the system can be increased, thereby improving the measurement efficiency.
Alternatively, the system may be calibrated before the measurement is performed using the three-dimensional measurement system of the surface of the light-reflecting object.
During the calibration process, the positions of the display device 1, the first camera 2 and the second camera 3 can be adjusted so that the fields of view of the first camera 2 and the second camera 3 form an overlapping area, and the reference plane can be calibrated.
In practical applications, the display device 1 displays a fringe image; the light is reflected by the surface of the object 5 to be measured and then imaged in the first camera 2 and the second camera 3. Fig. 1 exemplarily shows the optical path of light emitted by the display device 1, reflected by the surface of the object 5, and incident into the first camera 2. The first camera 2 and the second camera 3 each transmit the captured images to the processing device 4. From these images, the processing device 4 obtains the height values and gradient values of the object points in the overlapping area of the fields of view of the first camera 2 and the second camera 3. For an object point in the non-overlapping area, the height value of an adjacent object point in the overlapping area is used as the initial value of its height, so that its gradient value is quickly determined. The surface of the object 5 is then reconstructed (this may also be called three-dimensional reconstruction) from the gradient values of all the object points, yielding the topography data of the object to be measured.
In the method provided by the embodiments of the application, the height values and gradient values of object points in the overlapping area of the two cameras' fields of view are determined from the images shot by the two cameras. For an object point in the non-overlapping area, its gradient value can be rapidly determined from the height value of an adjacent object point in the overlapping area, so that the object to be measured can be three-dimensionally reconstructed. Because the gradient values of the non-overlapping area can be rapidly determined, the measurement range is enlarged, the angular capability of the system is improved, and the efficiency of object surface reconstruction is improved.
The technical solutions provided in the present application are described in detail below with specific examples.
The first embodiment is as follows:
referring to fig. 2, fig. 2 is a schematic flow chart of a three-dimensional measurement method for a surface of a reflective object according to an embodiment of the present disclosure, where the method of the present embodiment is executed by an electronic device, and the electronic device may be the processing device 4 in the system shown in fig. 1. The first camera in this embodiment may be the first camera 2 shown in fig. 1, the second camera may be the second camera 3 shown in fig. 1, and the stripe image may be a stripe image generated by the display device 1 shown in fig. 1. The method provided by the embodiment comprises the following steps:
S201: A first image and a second image are acquired.
The first image is an image of an object to be detected shot by a first camera, the second image is an image of the object to be detected shot by a second camera, and the first image and the second image both comprise fringe images reflected by the surface of the object to be detected.
There may be one or more first images and one or more second images.
Optionally, the numbers of first images and second images are equal. Illustratively, when there are multiple first images and multiple second images, the first camera and the second camera shoot the object to be measured synchronously to obtain them.
The object to be measured is an object to be measured by using a three-dimensional measurement system of the surface of the reflective object, and may be the object to be measured 5 shown in fig. 1.
The display device displays the fringe images. After the fringe images are reflected by the surface of the object to be measured, the first camera and the second camera each shoot the object, so that the images they capture contain the fringe pattern reflected by the surface. The first camera and the second camera then transmit the first image and the second image, respectively, to the processing device.
S202: The height value and the gradient value of the first object point are obtained according to the first image and the second image.
The first object point is an object point in an overlapping area of the visual field ranges of the first camera and the second camera. The overlapping area of the fields of view of the first camera and the second camera is hereinafter referred to as overlapping area.
For object points in the overlapping area, the height value and gradient value of each first object point can be obtained based on the principle of binocular normal matching, i.e. the requirement that the normals implied by the light rays reflected from any object point into the first camera and into the second camera coincide.
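As a toy illustration of this normal-consistency principle (the ray geometry, point coordinates, brute-force height search and cross-product error metric below are our assumptions, not the patent's method): for each candidate height along a camera's viewing ray, the normal implied by each camera is the bisector of its reversed incident ray and its reflected ray, and the true surface point is where the two normals coincide.

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def normal_from_rays(incident, reflected):
    """Surface normal implied by an incident ray (screen point -> surface
    point) and a reflected ray (surface point -> camera): their bisector."""
    return unit(unit(-incident) + unit(reflected))

def normal_consistency_height(cam1, scr1, cam2, scr2, ray_origin, ray_dir, heights):
    """Brute-force search along one viewing ray: at the true surface point the
    normals implied by the two camera/screen-point pairs coincide, so the
    cross product of the two normals vanishes."""
    best_h, best_err = None, np.inf
    for h in heights:
        p = ray_origin + h * ray_dir              # candidate surface point
        n1 = normal_from_rays(p - scr1, cam1 - p)
        n2 = normal_from_rays(p - scr2, cam2 - p)
        err = np.linalg.norm(np.cross(n1, n2))    # 0 when the normals coincide
        if err < best_err:
            best_h, best_err = h, err
    return best_h, best_err
```

In practice the candidate surface points come from the decoded phases rather than an explicit height list; the sketch only shows why the coincidence of the two normals pins down both the height and the gradient at once.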
S203: and determining the initial value of the first height value of the second object point as the height value of the first object point.
Wherein the second object point is in the adjacent region of the first object point and the second object point is in the non-overlapping region of the field of view of the first camera and the second camera. The non-overlapping region of the field of view of the first camera and the second camera is hereinafter referred to as a non-overlapping region.
The adjacent area of the first object point refers to the object points within a certain range around the first object point. Illustratively, taking the first image as an example, the pixels adjacent to the pixel at which the first object point is imaged in the first image constitute the adjacent area.
Since the surface of an object is usually continuous, the height values of adjacent object points (adjacent pixels, for an image shot by a camera) differ little. Therefore, for a second object point lying in the non-overlapping area within the adjacent area of a first object point, the first height value of the second object point differs little from the height value of the first object point, so the height value of the first object point can be used as the initial value of the first height value of the second object point.
S204: Iterative processing is performed based on the initial value of the first height value of the second object point and a preset step length to determine the gradient value of the second object point.
The preset step length is a non-zero real number set in advance; it may be positive or negative and can be chosen according to the typical difference between the height values of adjacent object points on the object surface.
Starting from the initial value of the first height value of the second object point, iterative processing is performed based on the first height value of the second object point and the preset step length until the gradient value of the second object point is determined. In each iteration, the current first height value of the second object point is assumed to be its true height value, and the accuracy of this assumption is verified through the optical geometric relationship; after each iteration, the first height value of the second object point is updated by the preset step length and the next iteration is performed, until the verification passes. The gradient value of the second object point is thereby obtained through iteration.
Optionally, a height value range may be set, for example according to the initial value of the first height value of the second object point: the range can be defined as the set of values whose difference from that initial value lies within a preset bound, for example the range in which the difference from the initial value of the first height value of the second object point is less than or equal to 200 nm.
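A minimal sketch of this iteration (the steps labelled S2041 to S2044 below), with the slope-from-height and height-from-slope computations abstracted as black-box functions since they depend on the system geometry; the function names and the convergence loop structure are our assumptions, not the patent's exact procedure:

```python
def iterate_gradient(h0, step, tol, gradient_from_height, height_from_gradient,
                     max_iter=100000):
    """Iterative determination of an object point's gradient in the
    non-overlapping area: assume a first height value, derive the gradient it
    implies, reconstruct a second height value from that gradient, and stop
    once two successive reconstructed heights differ by at most `tol`."""
    h = h0                      # initial value = height of the adjacent first object point
    prev_h2 = None
    for _ in range(max_iter):
        g = gradient_from_height(h)       # gradient implied by the assumed height
        h2 = height_from_gradient(g)      # height reconstructed from that gradient
        if prev_h2 is not None and abs(h2 - prev_h2) <= tol:
            return g, h2                  # converged: accept this gradient value
        prev_h2 = h2
        h += step                         # update the assumed height by the preset step
    raise RuntimeError("iteration did not converge")
```

The sign of `step` decides whether candidate heights are scanned upward or downward from the initial value, which matches the patent's remark that the preset step length may be positive or negative.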
S205: and reconstructing the surface of the object to be detected according to the gradient values of all the object points to obtain the shape data of the object to be detected.
At this point the gradient values of the first object points in the overlapping area and of the adjacent second object points in the non-overlapping area have been obtained. Surface-shape reconstruction of the object to be measured is then performed from the gradient values of the first and second object points based on an integration algorithm, yielding the shape data of the object to be measured.
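The patent does not name the integration algorithm. As a minimal illustration only, the sketch below integrates a gradient field sampled on a regular grid by simple path integration; a least-squares integrator (e.g. Frankot-Chellappa) would be the usual noise-robust choice in practice, and the grid layout and function name here are our assumptions:

```python
import numpy as np

def integrate_gradients(gx, gy, dx=1.0, dy=1.0):
    """Reconstruct a height map from gradient fields gx (d h/d x) and gy
    (d h/d y) by path integration: integrate gy down the first column, then
    gx along each row. Exact for noise-free linear surfaces; real data would
    call for a least-squares integrator instead."""
    h = np.zeros_like(gx, dtype=float)
    h[1:, 0] = np.cumsum(gy[1:, 0], axis=0) * dy           # first column from vertical gradients
    h[:, 1:] = h[:, [0]] + np.cumsum(gx[:, 1:], axis=1) * dx
    return h
```

The reconstructed surface is determined only up to a constant offset (here fixed by setting the corner point to zero), which is why deflectometry recovers shape rather than absolute position.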
In this embodiment, the first camera and the second camera are arranged so that their fields of view have an overlapping area. The processing device obtains the first image captured by the first camera and the second image captured by the second camera and determines the height value and gradient value of each first object point in the overlapping area. For a second object point in the non-overlapping area of the two cameras' fields of view, the gradient value can be quickly determined from the height value of an adjacent first object point in the overlapping area, so the object to be measured can be three-dimensionally reconstructed and its topography data obtained. Because the gradient values of the non-overlapping area can be rapidly determined, the measurement range is enlarged, the angular capability of the system is improved, and the efficiency of object surface reconstruction is improved. In addition, compared with the traditional system only one camera is added, so neither the cost nor the complexity of the system increases significantly; the problem of the existing system is thus solved while controlling cost and complexity. Moreover, no requirement is imposed on the height of the object to be measured, so the method has strong universality.
Referring to fig. 3, fig. 3 is a schematic flow chart of another method for measuring the surface of a reflective object in three dimensions according to an embodiment of the present application, and fig. 3 is based on the embodiment shown in fig. 2, and further, S204 may include the following steps:
S2041: determining a first gradient value of the second object point according to the first height value of the second object point.
Assuming that the height value of the second object point equals its first height value, the first gradient value of the second object point can be obtained from the geometric relationship.
S2042: obtaining a second height value of the second object point according to the first gradient value of the second object point.
That is, a second height value of the second object point is obtained by reconstruction from the first gradient value of the second object point.
S2043: judging whether the difference between the current second height value and the previous second height value of the second object point is less than or equal to a preset threshold.
Here, "less than or equal to" covers both the "less than" case and the "equal to" case.
The preset threshold is preset and can be set according to a system allowable error. For example, the preset threshold may be 5nm, 10nm, or 50nm, and the like, which is not limited in this application.
And comparing the difference value between the second height values obtained by two adjacent iterations to determine whether the difference value is within a preset range.
If yes, go to S2044. If not, execution continues with S2045.
S2044: determining that the gradient value of the second object point is the first gradient value corresponding to the current second height value of the second object point.
If so, the difference between the currently assumed height of the second object point and its actual height is considered to be within the allowable error, so the current second height value is taken as the actual height value of the second object point, and the first gradient value corresponding to that second height value is the gradient value of the second object point. Execution continues with S205.
S2045: updating the first height value of the second object point according to the preset step length.
The preset step length may be added to or subtracted from the first height value of the second object point to obtain the first height value for the next iteration. Execution then returns to S2041.
This embodiment provides an efficient way of determining the gradient value of the second object point in the non-overlapping area.
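The steps S2041 to S2045 above can be sketched as a simple loop. The callables `gradient_from_height` (standing in for formula (1)) and `height_from_gradient` (standing in for the reconstruction step) are hypothetical placeholders for the system-specific geometry; only the loop structure follows the described steps.

```python
def iterate_second_point(h0, step, gradient_from_height, height_from_gradient,
                         threshold=10e-9, max_iter=10000):
    """Iteration S2041-S2045 for a non-overlap object point.
    gradient_from_height(h): placeholder for formula (1)      (S2041)
    height_from_gradient(g): placeholder for reconstruction   (S2042)
    Returns (gradient, reconstructed_height) once two consecutive
    reconstructed heights differ by at most `threshold` (S2043/S2044);
    otherwise the assumed height advances by `step` (S2045)."""
    h_assumed = h0
    prev = None
    for _ in range(max_iter):
        g = gradient_from_height(h_assumed)        # S2041
        rec = height_from_gradient(g)              # S2042
        if prev is not None and abs(rec - prev) <= threshold:  # S2043
            return g, rec                          # S2044: converged
        prev = rec
        h_assumed += step                          # S2045: next assumption
    raise RuntimeError("iteration did not converge")
```

The `threshold` plays the role of the preset threshold (e.g. 5 nm to 50 nm in the examples above), and `step` the preset step length.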
Based on the embodiment shown in fig. 3, a specific implementation of S2041 is further described below with reference to the system geometry diagram shown in fig. 4.
Referring to fig. 4, fig. 4 is a schematic diagram of the system geometry according to an embodiment of the present disclosure. As shown in fig. 4, assume that a second object point P on the surface of the object 5 lies within the field of view of the camera 2, and that a light ray emitted from a certain light-emitting point of the display device 1 is reflected at P and enters the camera 2. The quantities in fig. 4 are as follows: tan α is the first gradient value of the second object point; S1 is the distance from the corresponding light-emitting point of the display device 1 to the reference plane 6; S2 is the height value of the second object point; θ is the angle between the line connecting the light-emitting point to the corresponding point of the second object point on the reference plane 6 and the perpendicular from the light-emitting point to the reference plane; and Δφ' is the phase difference between the corresponding point P2 of the second object point on the reference plane 6 and the intersection point P1 of the reflected ray with the reference plane. Δφ', S1 and θ are determined at system calibration and are therefore known quantities; during each iteration S2 is taken as the first height value of the second object point, from which the first gradient value of the second object point is calculated.
S2041 may include the steps of:
The gradient value of the second object point is obtained according to the following formula (1):
[formula (1) is rendered as an image in the original document]
where tan α is the first gradient value of the second object point, S1 is the distance from the corresponding light-emitting point of the display device to the reference plane, S2 is the first height value of the second object point, θ is the angle between the line connecting the light-emitting point to the corresponding point of the second object point on the reference plane and the perpendicular from the light-emitting point to the reference plane, and Δφ' is the phase difference between the corresponding point of the second object point on the reference plane and the intersection of the reflected ray with the reference plane.
In this embodiment, a formula is constructed from the specific geometric relationship. In each iteration, the assumed height of the second object point is taken as its first height value and the first gradient value is determined quickly from the formula, so that the gradient value of the second object point is obtained by iteration and efficiency is improved.
Based on the above embodiment, a specific implementation of determining the height value and the gradient value of the first object point in the overlap region in S202 is further described below with reference to fig. 5.
Referring to fig. 5, fig. 5 is a schematic diagram of the optical path at a first object point according to an embodiment of the present disclosure; the first object point A is taken as an example. The ray r1 from the display device 1 is reflected at the first object point A toward the camera 2, where s1 is the reflected ray and n1 is the normal direction. The ray r2 from the display device 1 is reflected at the first object point A toward the camera 3, where s2 is the reflected ray and n2 is the normal direction.
The method provided by this embodiment is based on the foregoing embodiment, and further, S202 may include the following steps:
step 1: and carrying out phase encoding on the pixel coordinates of the display equipment to generate orthogonal horizontal and vertical stripe images.
When the display equipment displays the stripe image, the first camera and the second camera synchronously shoot to respectively obtain a first image and a second image.
The display device can display the stripe image sequence, and the first camera and the second camera shoot to obtain a first image sequence and a second image sequence.
Step 2: decoding the first image sequence and the second image sequence respectively, according to the encoding and decoding rules of the horizontal and vertical stripes of the display screen, to obtain the orthogonal phase values corresponding to each pixel of the first camera and the second camera.
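Steps 1 and 2 amount to standard phase-shifting fringe encoding and decoding. The sketch below generates an n-step phase-shifted vertical-fringe sequence (phase-encoding the horizontal display coordinate; horizontal fringes encoding the vertical coordinate are generated analogously) and recovers the wrapped phase with the usual arctangent estimator. The patent does not fix the exact coding, so the sinusoidal four-step scheme here is an assumption.

```python
import numpy as np

def make_fringes(width, height, period, n_steps=4):
    """Generate an n-step phase-shifted vertical-fringe sequence that
    phase-encodes the horizontal display coordinate."""
    x = np.arange(width)
    phase = 2 * np.pi * x / period
    shifts = 2 * np.pi * np.arange(n_steps) / n_steps
    # broadcast each 1-D profile to a (height, width) image in [0, 1]
    return [0.5 + 0.5 * np.cos(phase + s) * np.ones((height, 1)) for s in shifts]

def decode_phase(images):
    """Recover the wrapped phase from an n-step sequence with the
    standard arctangent estimator."""
    n = len(images)
    shifts = 2 * np.pi * np.arange(n) / n
    num = sum(img * np.sin(s) for img, s in zip(images, shifts))
    den = sum(img * np.cos(s) for img, s in zip(images, shifts))
    return np.arctan2(-num, den)  # wrapped to (-pi, pi]
```

Decoding both camera image sequences this way yields the per-pixel orthogonal phase values that link each camera pixel to a display coordinate; phase unwrapping (not shown) is needed when the surface spans more than one fringe period.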
Step 3: starting from the initial value of the predicted height value of the first object point, obtaining a first normal direction of the first object point according to the predicted height value of the first object point, the first incident ray at the first object point, and the first reflected ray received by the first camera.
Wherein the initial value of the predicted height value of the first object point is 0, i.e., a height of 0 relative to the reference plane.
According to the optical principle (the law of reflection), the first normal direction n1 of the first object point can be obtained according to the following equation (2):

n1 = (s1 − r1) / |s1 − r1|    (2)

where r1 is the unit direction vector of the first incident ray at the first object point, and s1 is the unit direction vector of the first reflected ray received by the first camera.
Step 4: obtaining a second normal direction n2 of the first object point according to the second incident ray at the first object point and the second reflected ray received by the second camera.
According to the optical principle, the second normal direction n2 of the first object point can be obtained according to the following equation (3):

n2 = (s2 − r2) / |s2 − r2|    (3)

where r2 is the unit direction vector of the second incident ray at the first object point, and s2 is the unit direction vector of the second reflected ray received by the second camera.
Step 5: judging whether the first normal direction and the second normal direction coincide.
When n1 and n2 coincide, the height value and the gradient value of the first object point A are obtained according to the coincidence principle.
If so, determining the height value of the first object point as the predicted height value of the first object point, and obtaining the gradient value of the first object point.
If not, the predicted height value of the first object point is updated, and execution returns to obtaining the first normal direction of the first object point from the predicted height value, the first incident ray at the first object point and the first reflected ray received by the first camera, until the first normal direction and the second normal direction coincide.
When updating the predicted height value of the first object point, a spatial point search and matching may be performed within a certain range around the initial value of the predicted height value.
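Steps 3 to 5 can be sketched as a search over candidate heights for the height at which the two per-camera normals coincide. The bisector form of the normal follows the law of reflection; `rays_cam1` and `rays_cam2` are hypothetical callables that would be built from the calibrated system geometry, and the candidate-list search is one way to implement the spatial point search mentioned above.

```python
import numpy as np

def reflection_normal(incident, reflected):
    """Unit surface normal implied by unit incident/reflected ray
    directions (both along propagation): by the law of reflection,
    the normal is parallel to reflected - incident."""
    n = reflected - incident
    return n / np.linalg.norm(n)

def find_height(candidates, rays_cam1, rays_cam2, tol_deg=0.01):
    """Steps 3-5: search candidate heights for the one at which the
    normals implied by the two camera views coincide. rays_cam1(h) and
    rays_cam2(h) must return the (incident, reflected) unit-ray pair
    for the object point assumed at height h."""
    best_h, best_angle = None, float("inf")
    for h in candidates:
        n1 = reflection_normal(*rays_cam1(h))      # step 3: normal from camera 1
        n2 = reflection_normal(*rays_cam2(h))      # step 4: normal from camera 2
        cos_a = np.clip(np.dot(n1, n2), -1.0, 1.0)
        angle = np.degrees(np.arccos(cos_a))       # step 5: coincidence test
        if angle < best_angle:
            best_h, best_angle = h, angle
    return best_h if best_angle <= tol_deg else None
```

At the true height the two normals agree; away from it the reflection geometry of the two views implies inconsistent normals, which is the "normal uniqueness" the embodiment relies on.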
Optionally, before step 1, the method may further include:
step 0: and obtaining calibration parameters of the three-dimensional measuring system of the surface of the reflecting object.
Wherein the calibration parameters comprise: the intrinsic parameters of the first camera, the distortion parameters of the first camera, the intrinsic parameters of the second camera, the distortion parameters of the second camera, the pose parameters of the first camera relative to the reference plane, the pose parameters of the first camera relative to the display device, the pose parameters of the second camera relative to the reference plane, and the pose parameters of the second camera relative to the display device.
If the three-dimensional measurement system for the surface of the reflective object has already been calibrated, its calibration parameters can be obtained directly; otherwise, the geometric structure of the system is calibrated to obtain the calibration parameters.
The calibration need only be performed once: as long as the relative positions of the devices in the system do not change, the calibration parameters can be reused without re-calibration.
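The calibration parameters listed in step 0 can be collected in a simple structure for reuse across measurements; the field names below are illustrative, not taken from the patent.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class CalibrationParams:
    """Calibration parameters obtained in step 0 (field names illustrative)."""
    K1: np.ndarray       # intrinsic parameters of the first camera (3x3)
    dist1: np.ndarray    # distortion parameters of the first camera
    K2: np.ndarray       # intrinsic parameters of the second camera (3x3)
    dist2: np.ndarray    # distortion parameters of the second camera
    T1_ref: np.ndarray   # pose of the first camera w.r.t. the reference plane (4x4)
    T1_disp: np.ndarray  # pose of the first camera w.r.t. the display device (4x4)
    T2_ref: np.ndarray   # pose of the second camera w.r.t. the reference plane (4x4)
    T2_disp: np.ndarray  # pose of the second camera w.r.t. the display device (4x4)
```

Serializing one such object after calibration and reloading it for later measurements matches the reuse described above.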
According to the embodiment, the height value and the gradient value of the first object point in the overlapping area can be rapidly determined through the normal uniqueness of the first object point in the overlapping area of the visual field ranges of the first camera and the second camera, and the reconstruction efficiency is improved.
In some embodiments, the height value of the first object point in the overlapping region is determined in S202, and the iterative process in S204 may also determine the height value of the second object point, so that the gradient values of the object points in the neighboring regions of the second object point may be determined according to the height value of the second object point in a manner similar to S203 and S204. The embodiment shown in FIG. 6 will be described in detail below.
Referring to fig. 6, fig. 6 is a schematic flow chart of another method for measuring the surface of a light-reflecting object in three dimensions according to an embodiment of the present disclosure, and fig. 6 is based on the embodiment shown in fig. 2 or fig. 3, and further, S204 may include the following steps:
S204a: performing iterative processing based on the initial value of the first height value of the second object point and the preset step length, to determine the height value and the gradient value of the second object point.
That is, the iterative processing based on the initial value of the first height value of the second object point and the preset step length determines not only the gradient value of the second object point but also its height value.
Correspondingly, the method provided by the embodiment further comprises the following steps:
s401: and determining the initial value of the first height value of the third object point as the height value of the second object point.
Wherein the third object point is in an adjacent region of the second object point and the third object point is not the first object point and the second object point.
S402: and performing iterative processing based on the initial value of the first height value of the third object point and a preset step length, and determining the gradient value of the third object point.
Execution continues with S205. All object points in S205 also include a third object point.
Steps S401 and S402 are similar to steps S203 and S204 described above and will not be described here again.
In this embodiment, the gradient value of the third object point in the adjacent area of the second object point can be further determined by the height value already obtained by the second object point, so that the measurement range is further expanded.
Further, for object points in the adjacent region of the third object point other than the first, second and third object points, gradient values can be obtained from the height value of the third object point in a manner similar to S203 and S204, and so on, so that the gradient values of all object points within the fields of view of the first camera and the second camera are obtained, further enlarging the measurement range.
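The point-to-point propagation described above (S203/S204, then S401/S402, and so on) behaves like breadth-first region growing seeded from the overlap region. A sketch, with `neighbors` and `solve_point` as hypothetical placeholders for the pixel adjacency and the per-point iterative refinement:

```python
from collections import deque

def propagate_heights(seed_points, neighbors, solve_point):
    """Grow height/gradient estimates outward from the overlap region.
    seed_points: dict point -> (height, gradient) for the overlap region.
    neighbors(p): adjacent object points of p.
    solve_point(q, h0): runs the iterative refinement for point q with
    initial height h0 and returns (height, gradient)."""
    solved = dict(seed_points)
    queue = deque(seed_points)            # breadth-first frontier
    while queue:
        p = queue.popleft()
        h_p = solved[p][0]
        for q in neighbors(p):
            if q not in solved:
                # initial value of q's first height = neighbour's solved height
                solved[q] = solve_point(q, h_p)
                queue.append(q)
    return solved
```

Each newly solved point seeds its own unsolved neighbours, so the estimates spread until every object point in both cameras' fields of view is covered.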
In some embodiments, the three-dimensional measurement system of the surface of the light reflecting object further comprises: a third camera.
Among them, the third camera may be one or more.
In one possible implementation, the field of view of the third camera overlaps neither that of the first camera nor that of the second camera. The third image captured by the third camera can then be used to extend the measurement area: the gradient values of object points in the third image are obtained, in a manner similar to S203 and S204, from the third image and from the height values of object points already determined.
In another possible implementation, the third camera has an overlapping region with the field of view of the first camera, and/or the third camera has an overlapping region with the field of view of the second camera. The third image captured by the third camera, similar to the above implementation, may be used as an extension to the measurement area, so that the gradient value of the object point in the non-overlapping area in the third image is obtained in a manner similar to S203 and S204 according to the value of the object point that has been determined and the third image.
In another possible implementation, the third camera forms a camera group with the first camera and/or with the second camera. Using a method similar to the above embodiments, the gradient values of object points in the overlapping region of the third camera and the first camera (and/or of the third camera and the second camera) are determined first, and then the gradient values of object points in the corresponding non-overlapping regions, so that the object to be measured is reconstructed in three dimensions and the measurement range is further enlarged. This is explained below with specific examples.
On the basis of the foregoing embodiment, further, the method provided in this embodiment may further include the following steps:
step 206: a third image is acquired.
The third image is an image of the object to be detected shot by the third camera, and the third image comprises a stripe image reflected by the surface of the object to be detected.
Step 207: determining an overlapping area of the visual field range of the target camera and the visual field range of the third camera in the third image and the target image according to the third image and the target image, wherein the target image is the first image or the second image, and the target camera is a camera for shooting the target image;
step 208: acquiring a height value of a fourth object point in the visual field range of the target camera and the third camera;
step 209: determining an initial value of the first height value of the fifth object point as a height value of the fourth object point, wherein the fifth object point is in an adjacent region of the fourth object point and is in a non-overlapping region of the field of view of the target camera and the third camera;
step 210: and performing iterative processing according to the initial value of the first height value of the fifth object point and a preset step length to determine the gradient value of the fifth object point.
In this embodiment, the measurement range can be further expanded through the third camera, the measurement efficiency is improved, and the expandability of the system is strong.
On the basis of the above embodiments, further, there may be one or more display devices.
Example two:
the present embodiment provides a three-dimensional measurement system for a surface of a light reflecting object, including:
the display screen is used for displaying the stripe image;
the first camera is used for shooting a first image of a stripe image reflected by the surface of the object to be detected and sending the first image to the processor;
the second camera is used for shooting a second image of the fringe image reflected by the surface of the object to be detected and sending the second image to the processor; the vision range of the first camera and the vision range of the second camera have an overlapping area, and when an object to be measured is measured, at least part of the object to be measured is in the overlapping area;
the processor is used for obtaining a height value and a gradient value of a first object point according to the first image and the second image, wherein the first object point is an object point in an overlapping area of the visual field ranges of the first camera and the second camera; determining that an initial value of a first height value of a second object point is a height value of the first object point, wherein the second object point is in an adjacent area of the first object point, and the second object point is in a non-overlapping area of the field of view of the first camera and the second camera; performing iterative processing based on the initial value of the first height value of the second object point and a preset step length to determine the gradient value of the second object point; and reconstructing the surface of the object to be detected according to the gradient values of all the object points to obtain the shape data of the object to be detected.
The display screen corresponds to the display device 1 in the above embodiment, the first camera corresponds to the camera 2 in the above embodiment, the second camera corresponds to the camera 3 in the above embodiment, and the processor corresponds to the processing device 4 in the above embodiment.
Optionally, the system further comprises:
and the display stripe controller is used for generating a stripe image and sending the stripe image to the display screen.
Optionally, the system further comprises:
the third camera is used for shooting a third image of the stripe image reflected by the surface of the object to be detected and sending the third image to the processor;
the processor is further configured to:
determining an overlapping area of the visual field range of a target camera and a third camera in the third image and the target image according to the third image and the target image, wherein the target image is a first image or a second image, and the target camera is a camera for shooting the target image; acquiring a height value of a fourth object point in the visual field range of the target camera and the third camera; determining an initial value of the first height value of the fifth object point as a height value of the fourth object point, wherein the fifth object point is in an adjacent region of the fourth object point and is in a non-overlapping region of the field of view of the target camera and the third camera; and performing iterative processing according to the initial value of the first height value of the fifth object point and a preset step length to determine the gradient value of the fifth object point.
The implementation principle and effect of the system of this embodiment are similar to those of the above embodiments, and are not described herein again.
Example three:
the embodiment of the application provides a computer readable storage medium, on which a program is stored, and the program can be executed by a processor to realize the method in the first embodiment.
Those skilled in the art will appreciate that all or part of the functions of the various methods in the above embodiments may be implemented by hardware, or may be implemented by computer programs. When all or part of the functions of the above embodiments are implemented by a computer program, the program may be stored in a computer-readable storage medium, and the storage medium may include: a read only memory, a random access memory, a magnetic disk, an optical disk, a hard disk, etc., and the program is executed by a computer to realize the above functions. For example, the program may be stored in a memory of the device, and when the program in the memory is executed by the processor, all or part of the functions described above may be implemented. In addition, when all or part of the functions in the above embodiments are implemented by a computer program, the program may be stored in a storage medium such as a server, another computer, a magnetic disk, an optical disk, a flash disk, or a removable hard disk, and may be downloaded or copied to a memory of a local device, or may be version-updated in a system of the local device, and when the program in the memory is executed by a processor, all or part of the functions in the above embodiments may be implemented.
The present application has been described with reference to specific examples, which are provided only to facilitate the understanding of the present application and are not intended to limit the present application. Numerous simple deductions, modifications or substitutions may also be made by those skilled in the art to which the present application pertains, according to the idea of the present application.

Claims (9)

1. A three-dimensional measurement system for a surface of a reflective object, comprising:
the display screen is used for displaying the stripe image;
the first camera is used for shooting a first image of the fringe image reflected by the surface of the object to be measured and sending the first image to the processor;
the second camera is used for shooting a second image of the fringe image reflected by the surface of the object to be detected and sending the second image to the processor; the vision range of the first camera and the vision range of the second camera have an overlapping area, and when an object to be measured is measured, at least part of the object to be measured is in the overlapping area;
the processor is used for obtaining a height value and a gradient value of a first object point according to the first image and the second image, wherein the first object point is an object point in an overlapping area of the visual field ranges of the first camera and the second camera; determining an initial value of a first height value of a second object point as a height value of the first object point, wherein the second object point is in a neighboring region of the first object point and the second object point is in a non-overlapping region of the field of view of the first camera and the second camera; performing iterative processing based on the initial value of the first height value of the second object point and a preset step length to determine a gradient value of the second object point; reconstructing the surface of the object to be detected according to the gradient values of all the object points to obtain the appearance data of the object to be detected;
wherein performing iterative processing based on the initial value of the first height value of the second object point and the preset step length to determine the gradient value of the second object point comprises:
performing iterative processing based on the initial value of the first height value of the second object point and the preset step length to determine the height value and the gradient value of the second object point;
the processor is further configured to:
determining an initial value of a first height value of a third object point as a height value of the second object point, wherein the third object point is in a neighboring region of the second object point and the third object point is not the first object point and the second object point; and performing iterative processing based on the initial value of the first height value of the third object point and a preset step length, and determining the gradient value of the third object point.
2. The system of claim 1, wherein the system further comprises:
and the display stripe controller is used for generating a stripe image and sending the stripe image to the display screen.
3. The system of claim 1, wherein the system further comprises:
the third camera is used for shooting a third image of the fringe image reflected by the surface of the object to be detected and sending the third image to the processor;
the processor is further configured to:
determining an overlapping area of the third image and a visual field range of a target camera in the target image according to the third image and the target image, wherein the target image is the first image or the second image, and the target camera is a camera for shooting the target image; acquiring a height value of a fourth object point of the visual field range of the target camera and the third camera; determining an initial value of a first height value of a fifth object point as a height value of the fourth object point, wherein the fifth object point is in a neighboring region of the fourth object point and the fifth object point is in a non-overlapping region of the field of view of the target camera and the third camera; and performing iterative processing according to the initial value of the first height value of the fifth object point and a preset step length to determine the gradient value of the fifth object point.
4. A method for three-dimensional measurement of a surface of a light-reflecting object, comprising:
acquiring a first image and a second image, wherein the first image is an image of an object to be detected shot by a first camera, the second image is an image of the object to be detected shot by a second camera, and the first image and the second image both comprise stripe images reflected by the surface of the object to be detected;
obtaining a height value and a gradient value of a first object point according to the first image and the second image, wherein the first object point is an object point in an overlapping area of the visual field ranges of the first camera and the second camera;
determining an initial value of a first height value of a second object point as a height value of the first object point, wherein the second object point is in a neighboring region of the first object point and the second object point is in a non-overlapping region of the field of view of the first camera and the second camera;
performing iterative processing based on the initial value of the first height value of the second object point and a preset step length to determine a gradient value of the second object point;
reconstructing the surface of the object to be detected according to the gradient values of all the object points to obtain the appearance data of the object to be detected;
wherein performing iterative processing based on the initial value of the first height value of the second object point and the preset step length to determine the gradient value of the second object point comprises:
performing iterative processing based on the initial value of the first height value of the second object point and the preset step length to determine the height value and the gradient value of the second object point;
the method further comprises the following steps:
determining an initial value of a first height value of a third object point as a height value of the second object point, wherein the third object point is in a neighboring region of the second object point and the third object point is not the first object point and the second object point;
and performing iterative processing based on the initial value of the first height value of the third object point and a preset step length, and determining the gradient value of the third object point.
5. The method of claim 4, wherein performing iterative processing based on the initial value of the first height value of the second object point and the preset step length to determine the gradient value of the second object point comprises:
determining a first gradient value of the second object point according to the first height value of the second object point, starting from the initial value of the first height value of the second object point; obtaining a second height value of the second object point according to the first gradient value of the second object point;
and updating the first height value of the second object point according to the preset step length, and returning to the step of determining the first gradient value of the second object point according to the first height value of the second object point, until the difference between the current second height value and the previous second height value of the second object point is less than or equal to a preset threshold, and determining that the gradient value of the second object point is the first gradient value corresponding to the current second height value of the second object point.
6. The method of claim 5, wherein determining the first gradient value of the second object point according to the first height value of the second object point comprises:
obtaining the gradient value of the second object point according to the following formula:
[formula of claim 6 is rendered as an image in the original document]
where tan α is the first gradient value of the second object point, S1 is the distance from the corresponding light-emitting point of the display device to the reference plane, S2 is the first height value of the second object point, θ is the angle between the line connecting the light-emitting point to the corresponding point of the second object point on the reference plane and the perpendicular from the light-emitting point to the reference plane, and Δφ' is the phase difference between the corresponding point of the second object point on the reference plane and the intersection of the reflected ray with the reference plane.
7. The method of claim 4, wherein deriving the height value and the gradient value for the first object point from the first image and the second image comprises:
encoding pixel coordinates of the display device to generate horizontal and vertical stripe image sequences, and decoding the first image sequence and the second image sequence respectively according to the encoding and decoding rules of the horizontal and vertical stripes of the display screen, to obtain the orthogonal phase values corresponding to each pixel of the first camera and the second camera;
starting from an initial value of the predicted height value of the first object point, the following processing is performed for the predicted height value of each first object point:
obtaining a first normal direction of the first object point according to the predicted height value of the first object point, the first incident ray of the first object point and the first reflected ray received by the first camera;
obtaining a second normal direction of the first object point according to a second incident ray of the first object point and a second reflected ray received by the second camera;
determining whether the first normal direction and the second normal direction coincide;
if the first normal direction and the second normal direction are coincident, determining that the height value of the first object point is the predicted height value of the first object point, and obtaining the gradient value of the first object point;
and if the first normal direction and the second normal direction do not coincide, updating the predicted height value of the first object point, and returning to the step of obtaining the first normal direction of the first object point according to the predicted height value of the first object point, the first incident ray of the first object point and the first reflected ray received by the first camera, until the first normal direction and the second normal direction coincide; then determining the height value of the first object point to be the predicted height value of the first object point, and obtaining the gradient value of the first object point.
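The normal-coincidence search of claim 7 rests on the law of reflection: the surface normal bisects the angle between the incident ray and the reflected ray, so a candidate height is accepted when the normals implied by the two cameras agree. A minimal sketch, under assumptions not stated in the claim: `rays_for_height` is a hypothetical helper returning the four rays implied by a candidate height (in the real system these follow from the decoded phase maps and calibration), and coincidence is tested with an angular tolerance.

```python
import numpy as np

def bisector_normal(incident, reflected):
    """Surface normal as the unit bisector of incident and reflected rays,
    per the law of reflection (both rays taken pointing away from the surface)."""
    i = incident / np.linalg.norm(incident)
    r = reflected / np.linalg.norm(reflected)
    n = i + r
    return n / np.linalg.norm(n)

def search_height(h0, step, rays_for_height, tol_deg=0.05, max_iter=10000):
    """Sketch of the claim-7 height scan for one object point.

    rays_for_height(h) -> (incident1, reflected1, incident2, reflected2),
    a hypothetical placeholder for the geometry implied by height h.
    """
    h = h0
    for _ in range(max_iter):
        i1, r1, i2, r2 = rays_for_height(h)
        n1 = bisector_normal(i1, r1)                 # first normal direction
        n2 = bisector_normal(i2, r2)                 # second normal direction
        angle = np.degrees(np.arccos(np.clip(np.dot(n1, n2), -1.0, 1.0)))
        if angle <= tol_deg:                         # normals coincide: height found
            return h, n1                             # gradient follows from the normal
        h += step                                    # otherwise update predicted height
    raise RuntimeError("no coincident normal within max_iter")
```

The returned normal directly yields the gradient value of the object point, which is why the claim obtains both quantities from the same test.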
8. The method of any one of claims 4-7, further comprising:
acquiring a third image, wherein the third image is an image of the object to be detected shot by a third camera, and the third image comprises a stripe image reflected by the surface of the object to be detected;
determining an overlapping area of the third image and the view range of a target camera in the target image according to the third image and the target image, wherein the target image is the first image or the second image, and the target camera is a camera for shooting the target image;
acquiring a height value of a fourth object point within the overlapping area of the fields of view of the target camera and the third camera;
setting the initial value of a first height value of a fifth object point to the height value of the fourth object point, wherein the fifth object point lies in a neighboring region of the fourth object point and within the non-overlapping region of the fields of view of the target camera and the third camera;
and performing iterative processing according to the initial value of the first height value of the fifth object point and a preset step length to determine the gradient value of the fifth object point.
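Claim 8 seeds points outside the camera overlap with the heights of adjacent points inside it, then refines each seeded point with the claim-4/5 iteration. A minimal sketch of the seeding step, with illustrative data structures (point ids and the neighbor map are assumptions, not the patent's representation):

```python
def seed_initial_heights(overlap_heights, neighbors):
    """Seed initial first height values for points outside the overlap region.

    overlap_heights: map from object-point id in the overlap region
        (a "fourth object point") to its stereo-determined height value.
    neighbors: map from each non-overlap point id (a "fifth object point")
        to an adjacent overlap point id. Both structures are illustrative.
    """
    return {p: overlap_heights[q] for p, q in neighbors.items()}
```

Each seeded value would then serve as the initial first height value in the single-camera iterative refinement, so the multi-camera system extends coverage beyond the stereo overlap without re-measuring.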
9. A computer-readable storage medium, characterized in that the medium has stored thereon a program which is executable by a processor to implement the method according to any one of claims 4-8.
CN202111300606.3A 2021-11-04 2021-11-04 Three-dimensional measuring system for surface of reflecting object, measuring method thereof and storage medium Active CN114001668B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111300606.3A CN114001668B (en) 2021-11-04 2021-11-04 Three-dimensional measuring system for surface of reflecting object, measuring method thereof and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111300606.3A CN114001668B (en) 2021-11-04 2021-11-04 Three-dimensional measuring system for surface of reflecting object, measuring method thereof and storage medium

Publications (2)

Publication Number Publication Date
CN114001668A CN114001668A (en) 2022-02-01
CN114001668B true CN114001668B (en) 2022-07-19

Family

ID=79927634

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111300606.3A Active CN114001668B (en) 2021-11-04 2021-11-04 Three-dimensional measuring system for surface of reflecting object, measuring method thereof and storage medium

Country Status (1)

Country Link
CN (1) CN114001668B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104111036A (en) * 2013-04-18 2014-10-22 中国科学院沈阳自动化研究所 Mirror object measuring device and method based on binocular vision
CN105783775A (en) * 2016-04-21 2016-07-20 清华大学 Device and method of measuring surface topographies of mirror and mirror-like objects
CN106546192A (en) * 2016-10-12 2017-03-29 上海大学 A kind of high reflection Free-Form Surface and system
CN109357632A (en) * 2018-12-26 2019-02-19 河北工业大学 A kind of mirror article 3 D measuring method and device
DE102018208417A1 (en) * 2018-05-28 2019-11-28 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Projection device and projection method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5751470B2 (en) * 2008-08-20 2015-07-22 国立大学法人東北大学 Shape / tilt detection and / or measurement optical apparatus and method and related apparatus

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104111036A (en) * 2013-04-18 2014-10-22 中国科学院沈阳自动化研究所 Mirror object measuring device and method based on binocular vision
CN105783775A (en) * 2016-04-21 2016-07-20 清华大学 Device and method of measuring surface topographies of mirror and mirror-like objects
CN106546192A (en) * 2016-10-12 2017-03-29 上海大学 A kind of high reflection Free-Form Surface and system
DE102018208417A1 (en) * 2018-05-28 2019-11-28 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Projection device and projection method
CN109357632A (en) * 2018-12-26 2019-02-19 河北工业大学 A kind of mirror article 3 D measuring method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Three-dimensional shape measurement method for glossy surfaces based on binocular stereo vision; Cheng Ziyi et al.; Laser & Optoelectronics Progress; 2020-12-30 (No. 07); full text *
Review of three-dimensional surface shape measurement of specular objects based on full-field fringe reflection; Wang Yuemin et al.; Optics and Precision Engineering; 2018-05-30; Vol. 26 (No. 5); full text *

Also Published As

Publication number Publication date
CN114001668A (en) 2022-02-01

Similar Documents

Publication Publication Date Title
CN109961468B (en) Volume measurement method and device based on binocular vision and storage medium
US11039121B2 (en) Calibration apparatus, chart for calibration, chart pattern generation apparatus, and calibration method
CA2819956C (en) High accuracy camera modelling and calibration method
US7711182B2 (en) Method and system for sensing 3D shapes of objects with specular and hybrid specular-diffuse surfaces
JP6079333B2 (en) Calibration apparatus, method and program
US10719740B2 (en) Method and a system for identifying reflective surfaces in a scene
Willi et al. Robust geometric self-calibration of generic multi-projector camera systems
JP2014115109A (en) Device and method for measuring distance
CN110490938A (en) For verifying the method, apparatus and electronic equipment of camera calibration parameter
Aliaga et al. Photogeometric structured light: A self-calibrating and multi-viewpoint framework for accurate 3d modeling
WO2018156224A1 (en) Three-dimensional imager
Wilm et al. Accurate and simple calibration of DLP projector systems
CN115876124A (en) High-light-reflection surface three-dimensional reconstruction method and device based on polarized structured light camera
CN117053707A (en) Three-dimensional reconstruction method, device and system, three-dimensional scanning method and three-dimensional scanner
US10252417B2 (en) Information processing apparatus, method of controlling information processing apparatus, and storage medium
Knyaz Multi-media projector–single camera photogrammetric system for fast 3D reconstruction
CN114001668B (en) Three-dimensional measuring system for surface of reflecting object, measuring method thereof and storage medium
Draréni et al. Geometric video projector auto-calibration
CN109741384B (en) Multi-distance detection device and method for depth camera
CN110470216B (en) Three-lens high-precision vision measurement method and device
Budge et al. Automatic registration of fused lidar/digital imagery (texel images) for three-dimensional image creation
CN110691228A (en) Three-dimensional transformation-based depth image noise marking method and device and storage medium
US20200402260A1 (en) Camera Calibration and/or Use of a Calibrated Camera
Morinaga et al. Underwater active oneshot scan with static wave pattern and bundle adjustment
CN115701871A (en) Point cloud fusion method and device, three-dimensional scanning equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant