CN112700502B - Binocular camera system and binocular camera space calibration method - Google Patents

Binocular camera system and binocular camera space calibration method

Info

Publication number
CN112700502B
CN112700502B (application CN202011593703.1A)
Authority
CN
China
Prior art keywords
camera
event
image
address
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011593703.1A
Other languages
Chinese (zh)
Other versions
CN112700502A (en
Inventor
Wu Jinjian (吴金建)
Li Hanbiao (李汉标)
Shi Guangming (石光明)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University
Priority to CN202011593703.1A
Publication of CN112700502A
Application granted
Publication of CN112700502B
Active legal status
Anticipated expiration


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 - Image enhancement or restoration
    • G06T 5/80 - Geometric correction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 - Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/40 - Extraction of image or video features
    • G06V 10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462 - Salient features, e.g. scale invariant feature transforms [SIFT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides a binocular camera system and a binocular camera spatial calibration method, solving the problems of poor imaging quality and missing depth information in the prior art. The binocular camera system is formed by cascading, in sequence, a camera module, a correction module, a calibration module and a storage module; the camera module comprises an event camera and an ordinary CMOS camera mounted in parallel. The spatial calibration method comprises the following steps: the camera module acquires an address-event data stream, images and shooting times; the correction module performs distortion correction and epipolar correction on the data stream and images; the calibration module performs spatial position calibration on the corrected data stream and images; the storage module stores the corrected data stream, the corrected images and a set of homography matrices expressing their correspondence in persistent memory. The invention uses no beam splitter and thus avoids the imaging degradation a beam splitter causes. Parallax exists between the two cameras, and spatial depth information can be recovered from it. The method is used for reconstructing data with high frame rate and high resolution and for identifying and tracking high-speed moving targets.

Description

Binocular camera system and binocular camera space calibration method
Technical Field
The invention belongs to the technical field of computer vision and relates to the structure and spatial calibration of a binocular camera, in particular to a binocular camera system and a binocular camera spatial calibration method, which are used for reconstructing image data with high temporal resolution and high spatial resolution and for improving the accuracy of identifying and tracking high-speed moving targets, and which can also be used for the spatial calibration of a binocular camera system comprising an event camera and an ordinary CMOS camera.
Background
A binocular camera system can obtain the distance between an object and the cameras from the parallax of its two cameras, and is widely applied in robotics and SLAM. However, a binocular camera system built from conventional CMOS cameras has a serious drawback: when the relative speed between the system and the target is too high, severe image blurring occurs and the binocular camera works poorly.
The event camera is a novel camera in which every pixel responds independently: when the light intensity sensed by a pixel changes, the event camera outputs data; otherwise it outputs nothing. Even when the relative speed between the event camera and the target is high, no blurring occurs. The event camera also offers a large dynamic range and a small data volume. Its output format differs from an ordinary CMOS camera's: the event camera outputs an address-event data stream.
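As a concrete illustration of this data format, the sketch below models one event as the 4-tuple (x, y, g, t) used throughout this description, stored as a NumPy structured array so a whole stream can be sliced by time. The field widths and the toy values are assumptions for illustration only, not the format of any particular event-camera SDK.

```python
import numpy as np

# Hypothetical layout of one address-event record: pixel address (x, y),
# gray value g, and trigger timestamp t (assumed microseconds).
event_dtype = np.dtype([("x", np.uint16), ("y", np.uint16),
                        ("g", np.uint16), ("t", np.uint64)])

# A toy stream E of three events, already sorted by trigger time.
E = np.array([(10, 20, 35, 1000), (11, 20, 40, 1005), (300, 150, 12, 1010)],
             dtype=event_dtype)
print(E[E["t"] <= 1005])  # events triggered no later than t = 1005
```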
Combining an event camera and an ordinary CMOS camera into a binocular camera system unites the advantages of both cameras: image data with high dynamic range, high frame rate and high spatial resolution can be obtained, and the data can be used to compute spatial depth information.
The application with publication number CN108038888A, entitled "Hybrid camera system and space calibration method and apparatus thereof", discloses a hybrid camera system and a spatial calibration method that combine an event camera, an ordinary CMOS camera and a beam-splitting sheet into a binocular camera system and realize spatial calibration by minimizing spatial projection errors. Its shortcomings are: the beam-splitting sheet is used for coarse calibration of the spatial position between the event camera and the ordinary CMOS camera, so each camera receives less than 50% of the light and imaging quality drops; and because of the beam-splitting sheet there is no parallax between the two cameras, losing spatial depth information.
Disclosure of Invention
The invention aims to overcome the above defects of the prior art and provides a binocular camera system and a binocular camera spatial calibration method that can identify and track high-speed moving targets while retaining parallax information and ensuring imaging quality.
The binocular camera system of the invention is formed by cascading, in sequence, a camera module, a correction module, a calibration module and a storage module; an event camera and an ordinary CMOS camera, mounted side by side with the same lens orientation, form the camera module. Each module is described as follows:
the camera module comprises an event camera and a CMOS camera and is used for acquiring an address-event data stream, images and the shooting time of each image: the event camera acquires the address-event data stream, and the CMOS camera acquires the images and their shooting times; the event camera and the CMOS camera output data simultaneously, their output images are the same size, the principal axes of their lenses are parallel, and their sensor planes lie in the same plane;
the correction module is used for performing distortion correction and epipolar correction, respectively, on the address-event data stream output by the event camera and the images output by the CMOS camera, and for outputting the corrected address-event data stream and corrected images; the correction module stores a pre-calibrated camera matrix and distortion matrix of the event camera, a pre-calibrated camera matrix and distortion matrix of the CMOS camera, and a pre-calibrated rotation matrix and translation matrix between the event camera and the CMOS camera;
the calibration module is used for performing space position calibration on the address-event data stream and the image after distortion correction and epipolar correction, obtaining and outputting a homography matrix representing the corresponding relation between the event coordinates in the address-event data stream and the pixel coordinates of the image;
the storage module comprises a persistent memory and is used for persistent storage of the calibration result, where the calibration result comprises the corrected address-event data stream, the corrected images and the homography matrices relating event coordinates in the address-event data stream to image pixel coordinates.
The invention also provides a binocular camera space calibration method which is realized on a binocular camera system and is characterized by comprising the following steps:
(1) The camera module acquires an address-event data stream, an image, and a shooting time of each image:
The event camera in the camera module acquires an address-event data stream E = {e_i | 0 < i ≤ N_1} and outputs it, while the CMOS camera in the camera module acquires N_2 images and the shooting time of each image, S = {s_r | 0 < r ≤ N_2}, and outputs them, where e_i denotes the i-th event, e_i = (x_i, y_i, g_i, t_i), x_i and y_i denote the horizontal and vertical coordinates of the pixel at which e_i was triggered, g_i denotes the gray value of e_i, g_i > 0, t_i denotes the time at which e_i was triggered, s_r = (I_r, tp_r), I_r denotes the r-th image in the image sequence S, tp_r denotes the moment at which image I_r was shot, N_1 > 0 denotes the number of events in the address-event data stream E, and N_2 > 0;
(2) The correction module performs distortion correction and epipolar correction on each event e_i and each image I_r:
The correction module performs distortion correction on each event e_i through the camera matrix and distortion matrix of the event camera to obtain the distortion-corrected event e_{1,i}, performs distortion correction on each image I_r through the camera matrix and distortion matrix of the CMOS camera to obtain the distortion-corrected image I_{1,r}, and performs epipolar correction on event e_{1,i} and image I_{1,r} through the rotation matrix and translation matrix between the event camera and the CMOS camera to obtain the epipolar-corrected event e_{2,i} and image I_{2,r};
(3) The calibration module performs spatial position calibration on the corrected address-event data stream and the corrected image:
For each image I_{2,r}, the calibration module splits the corrected address-event data stream E_2 into multiple address-event data stream segments E_{2,r} according to the shooting time tp_r of image I_{2,r}, accumulates the events of segment E_{2,r} into a matrix according to their coordinates, matches feature points between the accumulation matrix and image I_{2,r} with the SIFT operator to obtain several pairs of matching points, and computes from the matching points the homography matrix between the matrix and image I_{2,r}; the homography matrix expresses the correspondence between event coordinates of the address-event data stream and image pixel coordinates;
(4) The storage module stores the calibration result: the storage module stores the corrected address-event data stream, the corrected images and the set of homography matrices relating event coordinates in the address-event data stream to image pixel coordinates in the persistent memory, completing the spatial calibration of the event camera and the CMOS camera.
The aim of the invention is to provide a binocular camera system and to perform spatial calibration on it without using a beam splitter.
Compared with the prior art, the invention has the following advantages:
imaging quality is improved: because the invention does not use a beam splitter, the event camera and the CMOS camera can receive more light rays, and the imaging quality is higher compared with a binocular camera system using the beam splitter.
Parallax exists between the cameras, so depth images can be obtained: because of the parallax between the event camera and the CMOS camera of the invention, depth information can be obtained from the acquired address-event data stream and images, whereas it is difficult to obtain a depth image from a beam-splitter binocular camera system, whose two cameras have no parallax.
The system can be used to identify and track objects moving at high speed: since the event camera of the invention does not blur when shooting a high-speed moving object, the trajectory and pose of the object can be obtained through the event camera. The ordinary CMOS camera complements the background and color information of the scene that the event camera cannot obtain, and overcomes the event camera's inability to shoot a static target. The invention combines the event camera with the ordinary CMOS camera, unites the advantages of both, and improves the accuracy of high-speed moving target identification and trajectory tracking.
Drawings
Fig. 1 is a schematic diagram of the overall structure of the binocular camera system of the present invention.
FIG. 2 is a flow chart of an implementation of the binocular camera spatial calibration method of the present invention.
Detailed Description
The invention is described in further detail below with reference to the attached drawings and to specific embodiments:
example 1
Due to limitations of its imaging principle, a conventional CMOS camera exhibits smear when shooting a high-speed moving object, blurring the image, so image data shot by an ordinary CMOS camera is difficult to use effectively. The event camera, whose imaging principle differs from an ordinary CMOS camera's, imitates the imaging characteristics of the biological retina and images only moving targets, so its data volume is smaller and it can shoot targets moving at high speed. Combining a CMOS camera and an event camera into a binocular camera system effectively exploits the advantages of both and yields data with high frame rate, high dynamic range and high spatial resolution.
The binocular camera system of the invention, referring to fig. 1, is formed by cascading, in sequence, a camera module, a correction module, a calibration module and a storage module. An event camera and an ordinary CMOS camera, mounted side by side with the same lens orientation, form the camera module. Each module is described as follows:
the camera module comprises an event camera and a CMOS camera, wherein the event camera is used for acquiring address-event data streams, images and image shooting time, the CMOS camera is used for acquiring the image shooting time of the images and the images, the event camera and the CMOS camera output data simultaneously, the output images of the event camera and the CMOS camera are the same in size, the main axes of the event camera and the CMOS camera lens are parallel, and the sensor planes of the event camera and the CMOS camera are on the same plane.
The event camera used in this embodiment is a Celex-V type camera, and both the event camera and the ordinary CMOS camera use a lens with a focal length of 16 mm.
The correction module is used for performing distortion correction and epipolar correction, respectively, on the address-event data stream output by the event camera and the images output by the CMOS camera, and for outputting the corrected address-event data stream and corrected images; the correction module stores a pre-calibrated camera matrix and distortion matrix of the event camera, a pre-calibrated camera matrix and distortion matrix of the CMOS camera, and a pre-calibrated rotation matrix and translation matrix between the event camera and the CMOS camera.
In this embodiment, the correction module is implemented by using Nvidia TX2, and is connected to the event camera and the normal CMOS camera through a USB interface.
The calibration module is used for performing spatial position calibration on the address-event data stream and images after distortion correction and epipolar correction, obtaining and outputting a homography matrix relating event coordinates in the address-event data stream to image pixel coordinates. The calibration module in this embodiment is implemented using Nvidia TX2.
The storage module of the invention comprises a persistent memory for persistently storing the corrected address-event data stream, the corrected images and a set of homography matrices relating event coordinates in the address-event data stream to image pixel coordinates.
Because the two cameras of a prior-art binocular camera system using a beam splitter have no parallax between them, such a system only needs spatial calibration once; but the beam splitter degrades the quality of the images the system obtains, and the system has a complex structure with demanding assembly precision. The invention provides a binocular camera system without a beam splitter: no alignment between a beam splitter and the camera lenses is required, which simplifies the system structure and reduces cost. And since no beam splitter is used, the imaging quality of both cameras is preserved. Because there is parallax between the two cameras, the data shot by the invention does not lose spatial depth information and can be used to compute a depth map.
Since the event camera of the invention does not blur when shooting a high-speed moving object, the trajectory and pose of the object can be obtained through the event camera. The ordinary CMOS camera complements the background and color information of the scene that the event camera cannot obtain, and overcomes the event camera's inability to shoot a static target. The invention combines the event camera with the ordinary CMOS camera, unites the advantages of both, and improves the accuracy of high-speed moving target identification and trajectory tracking.
The binocular camera system can be used for acquiring data with high time resolution and high spatial resolution, can acquire depth information, can be applied to visual navigation of an unmanned system, and improves safety of the unmanned system.
Example 2
The overall constitution of the binocular camera system is the same as in Embodiment 1. The pre-calibrated camera matrix and distortion matrix of the event camera stored in the correction module are obtained by the Zhang Zhengyou calibration method, as are the pre-calibrated camera matrix and distortion matrix of the CMOS camera; the pre-calibrated rotation matrix and translation matrix between the event camera and the CMOS camera stored by the correction module are obtained by the Bouguet epipolar rectification method. In this embodiment, a checkerboard calibration board is used when calibrating the event camera and the ordinary CMOS camera. When calibrating the event camera, full-frame gray images are acquired through the Celex-V camera.
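As an illustration of how these matrices could be produced, the sketch below uses OpenCV, whose cv2.calibrateCamera implements Zhang Zhengyou's method and whose cv2.stereoRectify implements Bouguet rectification. The patent does not prescribe a library; the checkerboard inner-corner count and the paired image lists are assumptions.

```python
import cv2
import numpy as np

def calibrate_pair(event_gray_images, cmos_images, pattern=(9, 6)):
    """Zhang calibration of both cameras plus Bouguet rectification.
    event_gray_images: full-frame gray views from the event camera's
    gray-image mode; cmos_images: the paired CMOS views (assumed inputs)."""
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)
    obj_pts, pts_e, pts_c = [], [], []
    for img_e, img_c in zip(event_gray_images, cmos_images):
        ok_e, c_e = cv2.findChessboardCorners(img_e, pattern)
        ok_c, c_c = cv2.findChessboardCorners(img_c, pattern)
        if ok_e and ok_c:                      # keep views seen by both cameras
            obj_pts.append(objp); pts_e.append(c_e); pts_c.append(c_c)
    size = cmos_images[0].shape[::-1]          # (width, height); equal for both
    _, K_e, d_e, _, _ = cv2.calibrateCamera(obj_pts, pts_e, size, None, None)
    _, K_c, d_c, _, _ = cv2.calibrateCamera(obj_pts, pts_c, size, None, None)
    _, _, _, _, _, R, T, _, _ = cv2.stereoCalibrate(
        obj_pts, pts_e, pts_c, K_e, d_e, K_c, d_c, size,
        flags=cv2.CALIB_FIX_INTRINSIC)         # rotation R, translation T
    R1, R2, P1, P2, _, _, _ = cv2.stereoRectify(K_e, d_e, K_c, d_c, size, R, T)
    return K_e, d_e, K_c, d_c, R, T, R1, R2, P1, P2
```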
The invention corrects the distortion of each camera in the binocular pair separately, which reduces image distortion and improves image quality. The invention uses no beam splitter, which both simplifies the system structure and avoids precise alignment of the camera lenses with a beam splitter.
Example 3
Because the binocular camera system of the invention has parallax between its two cameras, the coordinates of the same object on the two cameras' image planes depend on the relative position of the object and the system; when that relative position changes, the object's coordinates on each camera's image plane change too, so the binocular camera system must be calibrated in real time according to the real-time relative position of the object and the system. The spatial calibration method of the invention shoots data in real time with the binocular camera system of the invention, then corrects and calibrates the data in real time.
The present invention is also a binocular camera space calibration method implemented on the binocular camera system described above, referring to fig. 2, including the steps of:
(1) The camera module acquires an address-event data stream, an image, and a shooting time of each image:
An event camera in the camera module acquires and outputs an address-event data stream E = {e_i | 0 < i ≤ N_1}, where i is the serial number of an event, while the CMOS camera in the camera module acquires and outputs N_2 images and the shooting time of each image, S = {s_r | 0 < r ≤ N_2}, where e_i denotes the i-th event, e_i = (x_i, y_i, g_i, t_i), x_i and y_i denote the horizontal and vertical coordinates of the pixel at which e_i was triggered, g_i denotes the gray value of e_i, g_i > 0, t_i denotes the time at which e_i was triggered, s_r = (I_r, tp_r) is an image together with its shooting time, I_r denotes the r-th image in the image sequence S, r is the serial number of the image, tp_r denotes the moment at which image I_r was shot, N_1 > 0 denotes the total number of events in the address-event data stream E, and N_2 > 0. In this embodiment, the time difference between the shooting moments of two adjacent images is less than 30 ms, to reduce the number of events the event camera generates between two adjacent shooting moments.
(2) The correction module performs distortion correction and epipolar correction on each event e_i and each image I_r separately:
The correction module performs distortion correction on each event e_i through the camera matrix and distortion matrix of the event camera to obtain the distortion-corrected event e_{1,i}, performs distortion correction on each image I_r through the camera matrix and distortion matrix of the CMOS camera to obtain the distortion-corrected image I_{1,r}, and performs epipolar correction on e_{1,i} and I_{1,r} through the rotation matrix and translation matrix between the event camera and the CMOS camera to obtain the epipolar-corrected event e_{2,i} and image I_{2,r}. In this embodiment, distortion correction and epipolar correction of event e_i act on the coordinates of e_i; the time t_i and gray value g_i of event e_i are unchanged. Likewise, distortion correction and epipolar correction of the CMOS camera act on the pixel coordinates in image I_r; the shooting time tp_r of image I_r is unchanged.
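A hedged sketch of this step with OpenCV follows, assuming the matrices K_e, d_e, K_c, d_c and the rectification outputs R1, R2, P1, P2 from the earlier calibration sketch: images are remapped whole, while each event is corrected point-wise, with g_i and t_i passed through unchanged as stated above.

```python
import cv2
import numpy as np

size = (W, H)  # output image size: W columns, H rows (assumed known)

# Image path: precompute undistort+rectify maps once, then remap every I_r.
map1, map2 = cv2.initUndistortRectifyMap(K_c, d_c, R2, P2, size, cv2.CV_32FC1)

def correct_image(img_r):
    return cv2.remap(img_r, map1, map2, interpolation=cv2.INTER_LINEAR)  # I_{2,r}

# Event path: undistort+rectify only the coordinates; g and t are unchanged.
def correct_event(e):
    x, y, g, t = e
    pt = np.array([[[float(x), float(y)]]], np.float32)
    x2, y2 = cv2.undistortPoints(pt, K_e, d_e, R=R1, P=P1).ravel()
    return (x2, y2, g, t)  # e_{2,i}
```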
(3) The calibration module performs spatial position calibration on the corrected address-event data stream and the corrected image:
For each image I_{2,r}, the calibration module splits the epipolar-corrected address-event data stream E_2 into multiple address-event data stream segments E_{2,r} according to the shooting time tp_r of image I_{2,r}, and accumulates the events of segment E_{2,r} into an all-zero matrix M according to their coordinates, where M is an all-zero matrix constructed by the calibration module for computing matching points between the address-event data stream and image I_{2,r}. Feature points of matrix M and of image I_{2,r} are matched with the SIFT operator to obtain several matching points, and the homography matrix between matrix M and image I_{2,r} is computed from the matching points; the homography matrix expresses the correspondence between event coordinates of the address-event data stream and image pixel coordinates.
(4) The storage module stores the calibration result: the storage module stores the corrected address-event data stream, the corrected images and the set H = {H_r | 0 < r ≤ N_2} of homography matrices relating event coordinates in the address-event data stream to image pixel coordinates in the persistent memory, completing the spatial calibration of the event camera and the CMOS camera. In this embodiment the data are stored in hdf5 format: an image, the address-event data stream segment corresponding to the image, and the homography matrix expressing the correspondence between the image's pixel coordinates and the event coordinates of the corresponding segment are stored in one file. Multiple files may also be used to store an image, its corresponding address-event data stream segment and the corresponding homography matrix, and the storage format is not limited to hdf5.
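The hdf5 layout below is one possibility consistent with this description, sketched with h5py; the file name, group naming scheme and dtypes are assumptions.

```python
import h5py
import numpy as np

def store_calibration(path, images_2, segments_2, homographies):
    """Persist, per image r: the corrected image I_{2,r}, its event segment
    E_{2,r} as a (B, 4) array of (x, y, g, t) rows, and homography H_r."""
    with h5py.File(path, "w") as f:
        for r, (img, seg, H_r) in enumerate(
                zip(images_2, segments_2, homographies), start=1):
            grp = f.create_group(f"frame_{r:06d}")
            grp.create_dataset("image", data=img, compression="gzip")
            grp.create_dataset("events", data=np.asarray(seg, np.float64))
            grp.create_dataset("homography", data=np.asarray(H_r))  # 3x3 H_r
```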
The invention segments the address-event data stream output by the event camera by utilizing the shooting time of the image, realizes the time alignment of the image and the address-event data stream, and reduces the error of the binocular camera space calibration.
Example 4
Because the data output by the event camera is an unstructured address-event data stream, the address-event data stream must be converted into structured data before it can be processed. Feature points are then matched between the address-event data stream and the image to obtain several pairs of matching points, and a homography matrix is computed from the matching points to complete the calibration of the binocular camera system.
A binocular camera system and its binocular camera spatial calibration method are the same as in Embodiments 1-3; the calibration module of step (3) performs spatial position calibration on the corrected address-event data stream and images through the following steps:
(3a) Construct an all-zero matrix M = zeros(H, W), where H and W denote the total number of rows and the total number of columns of the CMOS camera output image respectively, H ≥ 32, W ≥ 32; let every element m of M be m = 0 and let i = 1.
(3b) Let r = 1, where r denotes the serial number of a corrected ordinary CMOS camera image.
(3c) Divide the address-event data stream into address-event data stream segments according to the shooting time tp_r of image I_{2,r}:
(3c1) Let the set of address-event data stream events corresponding to image I_r be E_{2,r}, and let E_{2,r} be an empty set.
(3c2) Judge whether event e_{2,i} = (x_{2,i}, y_{2,i}, g_i, t_i) satisfies 0 < x_{2,i} ≤ W and 0 < y_{2,i} ≤ H; if so, execute step (3c3); otherwise let i = i + 1 and execute step (3c2).
(3c3) Add event e_{2,i} = (x_{2,i}, y_{2,i}, g_i, t_i) to set E_{2,r} and execute step (3c4).
(3c4) Judge whether t_i ≤ tp_r; if so, let i = i + 1 and execute step (3c2); otherwise the address-event data stream segment E_{2,r} = {e_{2,r,b} | 0 < b ≤ B} corresponding to image I_r is obtained, where e_{2,r,b} = (x_{2,r,b}, y_{2,r,b}, g_{2,r,b}, t_{2,r,b}) and B denotes the total number of events in set E_{2,r}.
In step (3c), the invention uses the shooting time tp_r of image I_{2,r} and the event information of events in the address-event data stream to find the events generated by the event camera around the moment image I_{2,r} was shot, which helps improve the accuracy of feature-point matching between the image and the address-event data stream.
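Steps (3c1)-(3c4) above can be sketched as follows, under the assumption that the corrected events arrive as an (N, 4) float array with columns (x, y, g, t) sorted by t; np.searchsorted plays the role of the running index i.

```python
import numpy as np

def split_stream(events, capture_times, W, H):
    """Split the corrected stream E_2 into one segment E_{2,r} per image:
    keep only events inside the image plane (0 < x <= W, 0 < y <= H) and
    assign each remaining event to the first image with t <= tp_r."""
    x, y = events[:, 0], events[:, 1]
    events = events[(x > 0) & (x <= W) & (y > 0) & (y <= H)]
    t = events[:, 3]
    segments, start = [], 0
    for tp_r in capture_times:                  # tp_r for r = 1 .. N_2
        end = np.searchsorted(t, tp_r, side="right")
        segments.append(events[start:end])      # E_{2,r}
        start = end
    return segments
```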
(3d) Begin accumulating the events of the address-event data stream segment into matrix M; let b = 1, where b is the serial number of an event in address-event data stream segment E_{2,r}.
(3e) Let M(x_{2,r,b}, y_{2,r,b}) = M(x_{2,r,b}, y_{2,r,b}) + g_{2,r,b}, where g_{2,r,b} is the gray value of event e_{2,r,b} in the address-event data stream segment and M(x_{2,r,b}, y_{2,r,b}) is the element of matrix M at coordinates (x_{2,r,b}, y_{2,r,b}); matrix M is no longer an all-zero matrix at this point.
(3f) Judge whether b ≤ B; if so, let b = b + 1 and execute step (3e); otherwise, when b > B, the accumulation of the address-event data stream segment is finished; execute step (3g).
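Steps (3d)-(3f) amount to the accumulation sketched below; coordinates are assumed 0-based and in range here, and np.add.at handles repeated events at the same pixel, which a plain fancy-index assignment would not.

```python
import numpy as np

def accumulate_segment(segment, H, W):
    """Accumulate the gray values of one segment E_{2,r} into an all-zero
    H x W matrix M at each event's (x, y) address."""
    M = np.zeros((H, W), np.float64)
    xs = segment[:, 0].astype(np.intp)
    ys = segment[:, 1].astype(np.intp)
    np.add.at(M, (ys, xs), segment[:, 2])  # M[y, x] += g for every event
    return M
```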
(3g) Begin matching feature points between image I_{2,r} and matrix M: extract the feature points in matrix M with the SIFT operator to obtain N_{r,3} feature points HP_r = {p_{r,l} | 0 < l ≤ N_{r,3}}, where p_{r,l} = (x_{r,l}, y_{r,l}, F_{r,l}), l is the serial number of feature point p_{r,l}, (x_{r,l}, y_{r,l}) denotes the coordinates of the feature point, F_{r,l} denotes the feature descriptor of the feature point, and N_{r,3} > 8.
(3h) Let l = 1, where l is the serial number of a feature point in the feature point set of matrix M.
(3i) Begin finding the feature point of image I_{2,r} that matches feature point p_{r,l}: compute with the SIFT operator the feature points of the pixels on row y_{r,l} of image I_{2,r}, obtaining W feature points HP'_{r,l} = {p'_{r,l,k} | 0 < k ≤ W}, where p'_{r,l,k} denotes the k-th feature point on row y_{r,l} of image I_{2,r}, p'_{r,l,k} = (x'_{r,l,k}, y'_{r,l,k}, F'_{r,l,k}), k is the serial number of a feature point on row y_{r,l} of image I_{2,r}, (x'_{r,l,k}, y'_{r,l,k}) denotes the coordinates of feature point p'_{r,l,k}, and F'_{r,l,k} denotes the feature descriptor of feature point p'_{r,l,k}.
(3j) Let k = 1, where k indexes the k-th feature point on row y_{r,l} of image I_{2,r}.
(3k) Compute the Euclidean distance o_{r,l,k} between F_{r,l} and F'_{r,l,k}.
(3l) Judge whether k < W holds; if so, let k = k + 1 and execute step (3k); otherwise the feature descriptor distance set O_{r,l} = {o_{r,l,k} | 0 < k ≤ W} is obtained; execute step (3m).
(3m) Let the minimum value of O_{r,l} be o_{r,l,k'}; the feature point corresponding to p_{r,l} is then p'_{r,l,k'}.
(3n) Judge whether l ≤ N_{r,3}; if so, let l = l + 1 and execute step (3i); otherwise the set KP_r = {f_{r,l} = (p_{r,l}, p'_{r,l,k'}) | 0 < l ≤ N_{r,3}} of corresponding points between events in address-event data stream set E_{2,r} and pixels of image I_{2,r} is obtained; execute step (3o).
(3o) Compute by the eight-point method, from KP_r, the homography matrix H_r between event coordinates in address-event data stream set E_{2,r} and pixel coordinates in image I_r.
(3p) Judge whether r ≤ N_2; if so, let r = r + 1 and execute step (3c); otherwise the calibration of all images and address-event data streams is finished, and the set H = {H_r | 0 < r ≤ N_2} of homography matrices expressing the correspondence between event coordinates in the address-event data streams and image pixel coordinates is obtained.
The invention segments the address-event data stream according to the shooting times of the images and the time information of events in the address-event data stream, so feature matching is performed only between an image and the part of the address-event data stream close to its shooting time, at which moment the two cameras shot the same scene; this improves the accuracy of spatial calibration. After spatial calibration, the parallax between the event camera and the ordinary CMOS camera can be obtained and used to recover spatial depth information. After spatial calibration, the data of the event camera and of the ordinary CMOS camera can also be fused: the ordinary CMOS camera complements the background and color information of the scene that the event camera cannot obtain, the event camera provides data with high temporal resolution, and fusing the two yields data that combines high temporal resolution and high spatial resolution.
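A sketch of the matching and homography steps (3g)-(3o) with OpenCV's SIFT follows. Two simplifications are assumed and flagged: SIFT keypoints are detected in I_{2,r} and matched within a one-pixel row band (the text instead computes a descriptor at every pixel of row y_{r,l}), and cv2.findHomography stands in for the eight-point computation of H_r.

```python
import cv2
import numpy as np

def homography_for_frame(M, img_2r):
    """Match SIFT features of the accumulation matrix M against image I_{2,r}
    row by row (rectification makes matches row-aligned), then fit H_r."""
    sift = cv2.SIFT_create()
    M8 = cv2.normalize(M, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    kp_m, des_m = sift.detectAndCompute(M8, None)
    kp_i, des_i = sift.detectAndCompute(img_2r, None)
    src, dst = [], []
    for p, d in zip(kp_m, des_m):
        # Candidates on (approximately) the same row as p: assumed 1-px band.
        cand = [(np.linalg.norm(d - des_i[j]), j)
                for j, q in enumerate(kp_i) if abs(q.pt[1] - p.pt[1]) < 1.0]
        if cand:
            _, j = min(cand)                 # nearest descriptor, as in (3m)
            src.append(p.pt); dst.append(kp_i[j].pt)
    if len(src) < 4:
        return None                          # too few pairs for a homography
    H_r, _ = cv2.findHomography(np.float32(src), np.float32(dst), cv2.RANSAC)
    return H_r
```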
The invention solves the problems of poor imaging quality and loss of spatial depth information in the prior art. The binocular camera spatial calibration method comprises: the camera module acquires an address-event data stream, images and the shooting time of each image; the correction module performs distortion correction and epipolar correction on the events and images; the calibration module performs spatial position calibration on the corrected address-event data stream and images; and the storage module stores the corrected address-event data stream, the corrected images and a set of homography matrices expressing the correspondence between event coordinates in the address-event data stream and image pixel coordinates in the persistent memory.
The invention will be further described below in conjunction with a binocular camera system and a calibration method therefor.
Example 5
A binocular camera system and a binocular camera space calibration method are the same as those of embodiments 1 to 4.
Referring to fig. 1, the binocular camera system of the invention is formed by cascading, in sequence, a camera module, a correction module, a calibration module and a storage module. An event camera and an ordinary CMOS camera, mounted side by side with the same lens orientation, form the camera module. Each module is described as follows:
The camera module comprises an event camera and a CMOS camera and is used for acquiring an address-event data stream, images and the shooting time of each image: the event camera acquires the address-event data stream, and the CMOS camera acquires the images and their shooting times; the event camera and the CMOS camera output data simultaneously, their output images are the same size, the principal axes of their lenses are parallel, and their sensor planes lie in the same plane.
The correction module is used for respectively carrying out distortion correction and polar line correction on the address-event data stream output by the event camera and the image output by the CMOS camera, and outputting the corrected address-event data stream and the corrected image, wherein the correction module stores a camera matrix and a distortion matrix of the event camera calibrated in advance, a camera matrix and a distortion matrix of the CMOS camera, and a rotation matrix and a translation matrix between the event camera and the CMOS camera.
The calibration module is used for carrying out space position calibration on the address-event data stream and the image after distortion correction and epipolar correction, obtaining and outputting a homography matrix of event coordinates and image pixel coordinates in the address-event data stream.
The storage module comprises a persistent memory for persistently storing the calibration result, where the calibration result comprises the corrected address-event data stream, the corrected images and the homography matrices relating event coordinates in the address-event data stream to image pixel coordinates.
On the basis of this binocular camera system, a binocular camera spatial calibration method is also designed; referring to fig. 2, it can be used for data acquisition and for identifying and tracking high-speed moving targets, and it obtains the parallax between the event camera and the ordinary CMOS camera, which is used to recover spatial depth information.
The method comprises the following steps:
(1) The camera module acquires an address-event data stream, an image, and a shooting time of each image:
The event camera in the camera module acquires an address-event data stream E = {e_i | 0 < i ≤ N_1} and outputs it, while the CMOS camera in the camera module acquires N_2 images and the shooting time of each image, S = {s_r | 0 < r ≤ N_2}, and outputs them, where e_i denotes the i-th event, e_i = (x_i, y_i, g_i, t_i), x_i and y_i denote the horizontal and vertical coordinates of the pixel at which e_i was triggered, g_i denotes the gray value of e_i, g_i > 0, t_i denotes the time at which e_i was triggered, s_r = (I_r, tp_r), I_r denotes the r-th image in the image sequence S, tp_r denotes the moment at which image I_r was shot, N_1 > 0 denotes the number of events in the address-event data stream E, and N_2 > 0.
(2) The correction module performs distortion correction and epipolar correction on each event e_i and each image I_r:
The correction module performs distortion correction on each event e_i through the camera matrix and distortion matrix of the event camera to obtain the distortion-corrected event e_{1,i}, performs distortion correction on each image I_r through the camera matrix and distortion matrix of the CMOS camera to obtain the distortion-corrected image I_{1,r}, and performs epipolar correction on e_{1,i} and I_{1,r} through the rotation matrix and translation matrix between the event camera and the CMOS camera to obtain the epipolar-corrected event e_{2,i} and image I_{2,r}.
(3) The calibration module performs spatial position calibration on the corrected address-event data stream and the corrected image:
(3a) Construct an all-zero matrix M = zeros(H, W), where H and W denote the total number of rows and the total number of columns of the CMOS camera output image respectively, H ≥ 32, W ≥ 32; let every element m of M be m = 0 and let i = 1.
(3b) Let r = 1, where r denotes the serial number of a corrected ordinary CMOS camera image.
(3c) Divide the address-event data stream into address-event data stream segments according to the shooting time tp_r of image I_{2,r}:
(3c1) Let the set of address-event data stream events corresponding to image I_r be E_{2,r}, and let E_{2,r} be an empty set.
(3c2) Judge whether event e_{2,i} = (x_{2,i}, y_{2,i}, g_i, t_i) satisfies 0 < x_{2,i} ≤ W and 0 < y_{2,i} ≤ H; if so, execute step (3c3); otherwise let i = i + 1 and execute step (3c2).
(3c3) Add event e_{2,i} = (x_{2,i}, y_{2,i}, g_i, t_i) to set E_{2,r} and execute step (3c4).
(3c4) Judge whether t_i ≤ tp_r; if so, let i = i + 1 and execute step (3c2); otherwise the address-event data stream segment E_{2,r} = {e_{2,r,b} | 0 < b ≤ B} corresponding to image I_r is obtained, where e_{2,r,b} = (x_{2,r,b}, y_{2,r,b}, g_{2,r,b}, t_{2,r,b}) and B denotes the total number of events in set E_{2,r}.
(3d) Begin accumulating the events of the address-event data stream segment into matrix M; let b = 1, where b is the serial number of an event in address-event data stream segment E_{2,r}.
(3e) Let M(x_{2,r,b}, y_{2,r,b}) = M(x_{2,r,b}, y_{2,r,b}) + g_{2,r,b}, where g_{2,r,b} is the gray value of event e_{2,r,b} in the address-event data stream segment and M(x_{2,r,b}, y_{2,r,b}) is the element of matrix M at coordinates (x_{2,r,b}, y_{2,r,b}); matrix M is no longer an all-zero matrix at this point.
(3f) Judge whether b ≤ B; if so, let b = b + 1 and execute step (3e); otherwise, when b > B, the accumulation of the address-event data stream segment is finished; execute step (3g).
(3g) Begin matching feature points between image I_{2,r} and matrix M: extract the feature points in matrix M with the SIFT operator to obtain N_{r,3} feature points HP_r = {p_{r,l} | 0 < l ≤ N_{r,3}}, where p_{r,l} = (x_{r,l}, y_{r,l}, F_{r,l}), l is the serial number of feature point p_{r,l}, (x_{r,l}, y_{r,l}) denotes the coordinates of the feature point, F_{r,l} denotes the feature descriptor of the feature point, and N_{r,3} > 8.
(3h) Let l = 1, where l is the serial number of a feature point in the feature point set of matrix M.
(3i) Begin finding the feature point of image I_{2,r} that matches feature point p_{r,l}: compute with the SIFT operator the feature points of the pixels on row y_{r,l} of image I_{2,r}, obtaining W feature points HP'_{r,l} = {p'_{r,l,k} | 0 < k ≤ W}, where p'_{r,l,k} denotes the k-th feature point on row y_{r,l} of image I_{2,r}, p'_{r,l,k} = (x'_{r,l,k}, y'_{r,l,k}, F'_{r,l,k}), k is the serial number of a feature point on row y_{r,l} of image I_{2,r}, (x'_{r,l,k}, y'_{r,l,k}) denotes the coordinates of feature point p'_{r,l,k}, and F'_{r,l,k} denotes the feature descriptor of feature point p'_{r,l,k}.
(3j) Let k = 1, where k indexes the k-th feature point on row y_{r,l} of image I_{2,r}.
(3k) Compute the Euclidean distance o_{r,l,k} between F_{r,l} and F'_{r,l,k}.
(3l) Judge whether k < W holds; if so, let k = k + 1 and execute step (3k); otherwise the feature descriptor distance set O_{r,l} = {o_{r,l,k} | 0 < k ≤ W} is obtained; execute step (3m).
(3m) Let the minimum value of O_{r,l} be o_{r,l,k'}; the feature point corresponding to p_{r,l} is then p'_{r,l,k'}.
(3n) Judge whether l ≤ N_{r,3}; if so, let l = l + 1 and execute step (3i); otherwise the set KP_r = {f_{r,l} = (p_{r,l}, p'_{r,l,k'}) | 0 < l ≤ N_{r,3}} of corresponding points between events in address-event data stream set E_{2,r} and pixels of image I_{2,r} is obtained; execute step (3o).
(3o) Compute by the eight-point method, from KP_r, the homography matrix H_r between event coordinates in address-event data stream set E_{2,r} and pixel coordinates in image I_r.
(3p) Judge whether r ≤ N_2; if so, let r = r + 1 and execute step (3c); otherwise the calibration of all images and address-event data streams is finished, and the set H = {H_r | 0 < r ≤ N_2} of homography matrices expressing the correspondence between event coordinates in the address-event data streams and image pixel coordinates is obtained.
In summary, the binocular camera system and binocular camera spatial calibration method provided by the invention solve the problems of poor imaging quality and loss of spatial depth information in the prior art. The binocular camera system is formed by cascading, in sequence, a camera module, a correction module, a calibration module and a storage module, where the camera module comprises an event camera and an ordinary CMOS camera mounted in parallel. The binocular camera spatial calibration method comprises the following steps: the camera module acquires an address-event data stream, images and the shooting time of each image; the correction module performs distortion correction and epipolar correction on the events and images; the calibration module performs spatial position calibration on the corrected address-event data stream and images; the storage module stores the corrected address-event data stream, the corrected images and a set of homography matrices relating event coordinates in the address-event data stream to image pixel coordinates in a persistent memory. The binocular camera system uses no beam splitter, avoiding the reduction in imaging quality a beam splitter causes. Parallax exists between the two cameras of the invention, and spatial depth information can be recovered from it. The invention can reconstruct image data with high temporal resolution, high spatial resolution and high dynamic range for identifying and tracking high-speed moving targets. It can also recover spatial depth information and can be used for visual navigation of unmanned systems, reconstructing environment information and enhancing their safety.

Claims (4)

1. A binocular camera system, characterized in that a camera module, a correction module, a calibration module and a storage module are cascaded in sequence to form the binocular camera system; an event camera and an ordinary CMOS camera, mounted side by side with the same lens orientation, form the camera module, where each module is described as follows:
the camera module comprises an event camera and a CMOS camera and is used for acquiring an address-event data stream, images and the shooting time of each image: the event camera acquires the address-event data stream, and the CMOS camera acquires the images and their shooting times; the event camera and the CMOS camera output data simultaneously, the output images of the event camera and the CMOS camera are the same size, the lens principal axes of the event camera and the CMOS camera are parallel, and the sensor planes of the event camera and the CMOS camera lie in the same plane;
the correction module is used for respectively carrying out distortion correction and polar line correction on an address-event data stream output by the event camera and an image output by the CMOS camera, and outputting a corrected address-event data stream and a corrected image, wherein the correction module stores a camera matrix and a distortion matrix of the event camera which are calibrated in advance, a camera matrix and a distortion matrix of the CMOS camera which are calibrated in advance, and a rotation matrix and a translation matrix between the event camera and the CMOS camera which are calibrated in advance;
the calibration module is used for performing space position calibration on the address-event data stream and the image after distortion correction and epipolar correction, obtaining and outputting a homography matrix representing the corresponding relation between the event coordinates in the address-event data stream and the pixel coordinates of the image;
the storage module comprises a persistence memory and is used for persistence storage of calibration results, wherein the calibration results comprise corrected address-event data streams, corrected images and homography matrixes which represent the corresponding relation between event coordinates and image pixel coordinates in the address-event data streams.
2. The binocular camera system of claim 1, wherein the camera matrix and distortion matrix of the pre-calibrated event camera stored in the correction module is obtained by a Zhang Zhengyou calibration method, and the camera matrix and distortion matrix of the pre-calibrated CMOS camera stored in the correction module is obtained by a Zhang Zhengyou calibration method; the rotation matrix and the translation matrix between the event camera and the CMOS camera which are calibrated in advance and stored by the correction module are obtained by a Bouguet polar line correction method.
3. A binocular camera space calibration method implemented on the binocular camera system of claims 1-2, comprising the steps of:
(1) The camera module acquires an address-event data stream, an image, and a shooting time of each image:
the event camera in the camera module acquires an address-event data stream E = {e_i | 0 < i ≤ N_1} and outputs it, while the CMOS camera in the camera module acquires N_2 images and the shooting time of each image, S = {s_r | 0 < r ≤ N_2}, and outputs them, where e_i denotes the i-th event, e_i = (x_i, y_i, g_i, t_i), x_i and y_i denote the horizontal and vertical coordinates of the pixel at which e_i was triggered, g_i denotes the gray value of e_i, g_i > 0, t_i denotes the time at which e_i was triggered, s_r = (I_r, tp_r), I_r denotes the r-th image in the image sequence S, tp_r denotes the moment at which image I_r was shot, N_1 > 0 denotes the number of events in the address-event data stream E, and N_2 > 0;
(2) The correction module performs distortion correction and epipolar correction on each event e_i and each image I_r:
the correction module performs distortion correction on each event e_i through the camera matrix and distortion matrix of the event camera to obtain the distortion-corrected event e_{1,i}, performs distortion correction on each image I_r through the camera matrix and distortion matrix of the CMOS camera to obtain the distortion-corrected image I_{1,r}, and performs epipolar correction on event e_{1,i} and image I_{1,r} through the rotation matrix and translation matrix between the event camera and the CMOS camera to obtain the epipolar-corrected event e_{2,i} and image I_{2,r};
(3) The calibration module performs spatial position calibration on the corrected address-event data stream and the corrected image:
for each image I_{2,r}, the calibration module splits the corrected address-event data stream E_2 into multiple address-event data stream segments E_{2,r} according to the shooting time tp_r of image I_{2,r}, accumulates the events of segment E_{2,r} into a matrix according to their coordinates, matches feature points between the accumulation matrix and image I_{2,r} with the SIFT operator to obtain several pairs of matching points, and computes from the matching points the homography matrix between the matrix and image I_{2,r}, the homography matrix expressing the correspondence between event coordinates of the address-event data stream and image pixel coordinates;
(4) The storage module stores the calibration result: the storage module stores the corrected address-event data stream, the corrected image and a set of homography matrixes representing the corresponding relation between the event coordinates and the image pixel coordinates in the address-event data stream in the persistent memory, and the space calibration of the event camera and the CMOS camera is completed.
4. A binocular camera spatial calibration method of a binocular camera system according to claim 3, wherein the calibration module in step (3) performs spatial location calibration on the corrected address-event data stream and image, comprising the steps of:
(3a) constructing an all-zero matrix M = zeros(H, W), where H and W denote the number of rows and the number of columns of the CMOS camera output image respectively, H ≥ 32, W ≥ 32, every element m of M satisfies m = 0, and letting i = 1;
(3b) letting r = 1;
(3c) dividing the address-event data stream into address-event data stream segments according to the shooting time tp_r of image I_{2,r}:
(3c1) letting the set of address-event data stream events corresponding to image I_r be E_{2,r}, and letting E_{2,r} be an empty set;
(3c2) judging whether event e_{2,i} = (x_{2,i}, y_{2,i}, g_i, t_i) satisfies 0 < x_{2,i} ≤ W and 0 < y_{2,i} ≤ H; if so, executing step (3c3); otherwise letting i = i + 1 and executing step (3c2);
(3c3) adding event e_{2,i} = (x_{2,i}, y_{2,i}, g_i, t_i) to set E_{2,r} and executing step (3c4);
(3c4) judging whether t_i ≤ tp_r; if so, letting i = i + 1 and executing step (3c2); otherwise obtaining the address-event data stream segment E_{2,r} = {e_{2,r,b} | 0 < b ≤ B} corresponding to image I_r, where e_{2,r,b} = (x_{2,r,b}, y_{2,r,b}, g_{2,r,b}, t_{2,r,b}) and B denotes the number of events in set E_{2,r};
(3d) letting b = 1;
(3e) letting M(x_{2,r,b}, y_{2,r,b}) = M(x_{2,r,b}, y_{2,r,b}) + g_{2,r,b}, where M(x_{2,r,b}, y_{2,r,b}) denotes the element of matrix M at coordinates (x_{2,r,b}, y_{2,r,b});
(3f) judging whether b ≤ B; if so, letting b = b + 1 and executing step (3e); otherwise executing step (3g);
(3g) extracting the feature points in matrix M with the SIFT operator to obtain N_{r,3} feature points HP_r = {p_{r,l} | 0 < l ≤ N_{r,3}}, where p_{r,l} = (x_{r,l}, y_{r,l}, F_{r,l}), (x_{r,l}, y_{r,l}) denotes the coordinates of a feature point, F_{r,l} denotes the feature descriptor of a feature point, and N_{r,3} > 8;
(3h) letting l = 1;
(3i) computing with the SIFT operator the feature points of the pixels on row y_{r,l} of image I_{2,r}, obtaining W feature points HP'_{r,l} = {p'_{r,l,k} | 0 < k ≤ W}, where p'_{r,l,k} denotes the k-th feature point on row y_{r,l} of image I_{2,r}, p'_{r,l,k} = (x'_{r,l,k}, y'_{r,l,k}, F'_{r,l,k}), (x'_{r,l,k}, y'_{r,l,k}) denotes the coordinates of feature point p'_{r,l,k}, and F'_{r,l,k} denotes the feature descriptor of feature point p'_{r,l,k};
(3j) letting k = 1;
(3k) computing the Euclidean distance o_{r,l,k} between F_{r,l} and F'_{r,l,k};
(3l) judging whether k < W holds; if so, letting k = k + 1 and executing step (3k); otherwise obtaining the feature descriptor distance set O_{r,l} = {o_{r,l,k} | 0 < k ≤ W} and executing step (3m);
(3m) letting the minimum value of O_{r,l} be o_{r,l,k'}; the feature point corresponding to p_{r,l} is then p'_{r,l,k'};
(3n) judging whether l ≤ N_{r,3}; if so, letting l = l + 1 and executing step (3i); otherwise obtaining the set KP_r = {f_{r,l} = (p_{r,l}, p'_{r,l,k'}) | 0 < l ≤ N_{r,3}} of corresponding points between events in address-event data stream set E_{2,r} and pixels of image I_r, and executing step (3o);
(3o) computing by the eight-point method, from KP_r, the homography matrix H_r expressing the correspondence between event coordinates in address-event data stream set E_{2,r} and pixel coordinates in image I_r;
(3p) judging whether r ≤ N_2; if so, letting r = r + 1 and executing step (3c); otherwise obtaining the set H = {H_r | 0 < r ≤ N_2} of homography matrices expressing the correspondence between event coordinates in the address-event data streams and image pixel coordinates.
CN202011593703.1A 2020-12-29 2020-12-29 Binocular camera system and binocular camera space calibration method Active CN112700502B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011593703.1A CN112700502B (en) 2020-12-29 2020-12-29 Binocular camera system and binocular camera space calibration method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011593703.1A CN112700502B (en) 2020-12-29 2020-12-29 Binocular camera system and binocular camera space calibration method

Publications (2)

Publication Number Publication Date
CN112700502A CN112700502A (en) 2021-04-23
CN112700502B true CN112700502B (en) 2023-08-01

Family

ID=75511809

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011593703.1A Active CN112700502B (en) 2020-12-29 2020-12-29 Binocular camera system and binocular camera space calibration method

Country Status (1)

Country Link
CN (1) CN112700502B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113822945A (en) * 2021-09-28 2021-12-21 Tianjin Langshuo Robot Technology Co., Ltd. Workpiece identification and positioning method based on binocular vision
CN114401391B (en) * 2021-12-09 2023-01-06 Beijing University of Posts and Telecommunications Virtual viewpoint generation method and device
CN114092569B (en) * 2022-01-19 2022-08-05 Anweier Information Technology (Tianjin) Co., Ltd. Binocular camera online calibration method and system based on multi-sensor fusion
CN117911540A (en) * 2024-03-18 2024-04-19 Anhui University Event camera calibration device and method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103776419A (en) * 2014-01-24 2014-05-07 South China University of Technology Binocular-vision distance measurement method capable of widening measurement range
KR101589167B1 (en) * 2015-02-09 2016-01-27 Dong-Eui University Industry-Academic Cooperation Foundation System and Method for Correcting Perspective Distortion Image Using Depth Information
CN107255443A (en) * 2017-07-14 2017-10-17 Beihang University Binocular vision sensor field calibration method and device under a complex environment
CN107729893A (en) * 2017-10-12 2018-02-23 Tsinghua University A vision positioning method of a clapper die spotting press, system and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103776419A (en) * 2014-01-24 2014-05-07 South China University of Technology Binocular-vision distance measurement method capable of widening measurement range
KR101589167B1 (en) * 2015-02-09 2016-01-27 Dong-Eui University Industry-Academic Cooperation Foundation System and Method for Correcting Perspective Distortion Image Using Depth Information
CN107255443A (en) * 2017-07-14 2017-10-17 Beihang University Binocular vision sensor field calibration method and device under a complex environment
CN107729893A (en) * 2017-10-12 2018-02-23 Tsinghua University A vision positioning method of a clapper die spotting press, system and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
VLSI hardware circuit structure design for binocular image rectification; Wei Qinzhi; Chen Song; Information Technology and Network Security (No. 06); full text *
Obstacle measurement method for a reversing environment based on binocular stereo vision; Liu Yugang; Wang Zhuojun; Wang Fujing; Zhang Zutao; Xu Hong; Journal of Transportation Systems Engineering and Information Technology (No. 04); full text *

Also Published As

Publication number Publication date
CN112700502A (en) 2021-04-23

Similar Documents

Publication Publication Date Title
CN112700502B (en) Binocular camera system and binocular camera space calibration method
KR102487546B1 (en) Improved camera calibration system, target, and process
CN109272570B (en) Space point three-dimensional coordinate solving method based on stereoscopic vision mathematical model
CN109859272B (en) Automatic focusing binocular camera calibration method and device
CN109559355B (en) Multi-camera global calibration device and method without public view field based on camera set
CN113175899B (en) Camera and galvanometer combined three-dimensional imaging model of variable sight line system and calibration method thereof
CN105279771B (en) A kind of moving target detecting method based on the modeling of online dynamic background in video
CN109712232B (en) Object surface contour three-dimensional imaging method based on light field
CN109325981B (en) Geometric parameter calibration method for micro-lens array type optical field camera based on focusing image points
CN112525107B (en) Structured light three-dimensional measurement method based on event camera
CN111080705B (en) Calibration method and device for automatic focusing binocular camera
CN110889829A (en) Monocular distance measurement method based on fisheye lens
CN105959669A (en) Remapping-based integral imaging micro-image array rapid generation method
CN107038753B (en) Stereoscopic vision three-dimensional reconstruction system and method
CN108596960B (en) Sub-aperture image alignment method of light field camera
CN110751601A (en) Distortion correction method based on RC optical system
CN112258581B (en) On-site calibration method for panoramic camera with multiple fish glasses heads
CN112489141B (en) Production line calibration method and device for single-board single-image strip relay lens of vehicle-mounted camera
CN112950727B (en) Large-view-field multi-target simultaneous ranging method based on bionic curved compound eye
CN115439541A (en) Glass orientation calibration system and method for refraction imaging system
CN107330933B (en) Arbitrary focal surface shooting method based on camera array
CN111553955B (en) Multi-camera three-dimensional system and calibration method thereof
CN111508071B (en) Binocular camera-based 3D modeling method and shooting terminal
CN110992258B (en) High-precision RGB-D point cloud splicing method and system based on weak chromatic aberration information
CN103927757B (en) Target object stereo vision three-dimensional analysis and processing method based on cascade sampling

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant