CN110823094B - Point light source three-dimensional coordinate measuring method and device - Google Patents

Point light source three-dimensional coordinate measuring method and device

Info

Publication number
CN110823094B
CN110823094B (application CN201911086603.7A)
Authority
CN
China
Prior art keywords
light source
deep learning
light field
point light
learning framework
Prior art date
Legal status
Active
Application number
CN201911086603.7A
Other languages
Chinese (zh)
Other versions
CN110823094A (en)
Inventor
胡摇
袁诗翥
郝群
曹睿
Current Assignee
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT
Priority to CN201911086603.7A
Publication of CN110823094A
Application granted
Publication of CN110823094B
Active legal-status: Current
Anticipated expiration legal-status

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01B - MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 - Measuring arrangements characterised by the use of optical techniques
    • G01B 11/002 - Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates
    • G01B 11/005 - Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates; coordinate measuring machines
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

A method and a device for measuring the three-dimensional coordinates of a point light source, offering passive measurement, a simple structure, small random error, and real-time tracking. The method comprises the following steps: (1) establishing a light field imaging system; (2) establishing a deep learning framework; (3) initializing the deep learning framework by determining and applying the node weights of each neural network in the framework so that every network reaches a usable state; (4) acquiring the light field of the target point light source through the light field imaging system; (5) extracting feature information from the light field of the target point light source, the feature information being the information contained in that light field that is needed for measuring the three-dimensional coordinates of the point light source; (6) inputting the feature information of the target point light source light field into the deep learning framework; (7) calculating the three-dimensional coordinates of the point light source with the deep learning framework.

Description

Point light source three-dimensional coordinate measuring method and device
Technical Field
The invention relates to the technical field of photoelectric measurement, and in particular to a method and a device for measuring the three-dimensional coordinates of a point light source.
Background
A scattering medium is a medium that scatters light significantly as the light passes through it. For a traditional optical imaging system, a scattering medium blurs the image of the target and hinders its observation. Typical scattering media include clouds, fog, smoke, ground glass, and cytoplasm.
Imaging through scattering media is a major challenge in the field of photoelectric measurement, yet it has great application value in biomedicine, remote sensing, security, and other fields. Spatial localization of a point light source inside a scattering medium is a typical requirement in this field and has broad application prospects; specific examples include vehicle distance measurement in dense fog, tracking and positioning of an aircraft inside a cloud layer, and localization of cells in fluorescence imaging.
The drawback of imaging through a scattering medium is that, when the point light source lies inside the scattering medium, the medium strongly disturbs the three-dimensional coordinate measurement of the point light source and produces large random errors in the measurement process.
Disclosure of Invention
To overcome the defects of the prior art, the invention provides a point light source three-dimensional coordinate measuring method that offers passive measurement, a simple structure, small random error, and real-time tracking.
The technical scheme of the invention is as follows. The point light source three-dimensional coordinate measuring method comprises the following steps:
(1) establishing a light field imaging system, i.e., an instrument or device capable of acquiring the light field of a target scene;
(2) establishing a deep learning framework, i.e., a software- or hardware-based computing system built on artificial neural network algorithms and comprising one or more trained or untrained artificial neural networks;
(3) initializing the deep learning framework: determining and applying the node weights of each neural network in the framework so that every network reaches a usable state;
(4) acquiring the light field of the target point light source through the light field imaging system;
(5) extracting feature information from the light field of the target point light source, the feature information being the information contained in that light field that is needed for measuring the three-dimensional coordinates of the point light source;
(6) inputting the feature information of the target point light source light field into the deep learning framework;
(7) calculating the three-dimensional coordinates of the point light source with the deep learning framework.
By establishing a light field imaging system and applying a deep learning method, the invention solves the problem of measuring the three-dimensional coordinates of a point light source inside a scattering medium. The measurement is entirely passive: no energy carriers such as electromagnetic waves or ultrasonic waves need to be transmitted toward the target, so the covertness of the measurement is preserved. The method offers passive measurement, a simple structure, small random error, and real-time tracking.
There is also provided a point light source three-dimensional coordinate measuring device, which is a microlens array type light field imaging system comprising, from left to right: an LED light source (1), ground glass (2), a main lens (3), a microlens array (4), and an image detector (5).
Drawings
Fig. 1 is a flowchart of a point light source three-dimensional coordinate measuring method according to the present invention.
Fig. 2 is a schematic view of a point light source three-dimensional coordinate measuring apparatus according to the present invention.
Fig. 3 shows a light field image.
Fig. 4 shows the sub-aperture division of a light field image.
Reference numerals: 1 - LED light source; 2 - ground glass; 3 - main lens; 4 - microlens array; 5 - image detector.
Detailed Description
Deep learning is a new research direction in machine learning; it was introduced to bring machine learning closer to its original goal, artificial intelligence. Deep learning learns the intrinsic laws and representation levels of sample data, and the information obtained during learning greatly aids the interpretation of data such as text, images, and sound. Its ultimate aim is to give machines an analytic and learning ability comparable to that of humans, so that they can recognize text, images, sound, and other data. Deep learning has produced many results in search, data mining, machine learning, machine translation, natural language processing, multimedia learning, speech, recommendation and personalization, and related fields. It enables machines to imitate human activities such as seeing, hearing, and thinking, solves many complex pattern-recognition problems, and has driven great progress in artificial intelligence technologies.
Light field imaging is a new type of imaging technology. A traditional optical imaging system records only the intensity of the light from a scene and ignores its direction; a light field imaging system records both the intensity and the direction of the light, i.e., the light field of the scene. The rich information in the light field contains features that are difficult to capture with conventional imaging methods.
After long consideration and repeated tests, the applicant has integrated deep learning and light field imaging into point light source three-dimensional coordinate measurement and developed the following method and device.
As shown in Fig. 1, the point light source three-dimensional coordinate measuring method comprises the following steps:
(1) establishing a light field imaging system, i.e., an instrument or device capable of acquiring the light field of a target scene;
(2) establishing a deep learning framework, i.e., a software- or hardware-based computing system built on artificial neural network algorithms and comprising one or more trained or untrained artificial neural networks;
(3) initializing the deep learning framework: determining and applying the node weights of each neural network in the framework so that every network reaches a usable state;
(4) acquiring the light field of the target point light source through the light field imaging system;
(5) extracting feature information from the light field of the target point light source, the feature information being the information contained in that light field that is needed for measuring the three-dimensional coordinates of the point light source;
(6) inputting the feature information of the target point light source light field into the deep learning framework;
(7) calculating the three-dimensional coordinates of the point light source with the deep learning framework.
By establishing a light field imaging system and applying a deep learning method, the invention solves the problem of measuring the three-dimensional coordinates of a point light source inside a scattering medium. The measurement is entirely passive: no energy carriers such as electromagnetic waves or ultrasonic waves need to be transmitted toward the target, so the covertness of the measurement is preserved. The method offers passive measurement, a simple structure, small random error, and real-time tracking.
Preferably, the light field imaging system in step (1) comprises a camera array, a 4f system, and a microlens array.
Preferably, the deep learning framework in step (2) includes three artificial neural network software programs: an X-axis coordinate calculation network, a Y-axis coordinate calculation network, and a Z-axis coordinate calculation network. The network structure of each is shown in Table 1, and a code sketch of this structure follows the table.
TABLE 1
Layer                Activation function    Regularization term
BatchNormalization   -                      -
Dense(400)           Sigmoid                L2
Dropout(0.2)         -                      -
Dense(1)             -                      -
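To make the Table 1 structure concrete, the following is a minimal sketch of one of the three coordinate-calculation networks, assuming a TensorFlow/Keras implementation; it is not part of the patent text, and the input length, L2 weight, optimizer, and loss function are illustrative assumptions.

```python
# Hypothetical sketch of one coordinate network following Table 1:
# BatchNormalization -> Dense(400, sigmoid, L2) -> Dropout(0.2) -> Dense(1).
import tensorflow as tf
from tensorflow.keras import layers, regularizers

def build_coordinate_network(n_features: int) -> tf.keras.Model:
    """One of the three networks (X-, Y- or Z-axis coordinate regression)."""
    inputs = tf.keras.Input(shape=(n_features,))
    x = layers.BatchNormalization()(inputs)                        # BatchNormalization row
    x = layers.Dense(400, activation="sigmoid",
                     kernel_regularizer=regularizers.l2(1e-4))(x)  # Dense(400), Sigmoid, L2
    x = layers.Dropout(0.2)(x)                                     # Dropout(0.2) row
    outputs = layers.Dense(1)(x)                                   # Dense(1): one coordinate value
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")                    # optimizer/loss are assumptions
    return model

# Three independent networks, one per coordinate axis.
nets = {axis: build_coordinate_network(n_features=128) for axis in ("x", "y", "z")}
```

Each axis gets its own copy of the Table 1 structure, matching the X-, Y-, and Z-axis coordinate calculation networks described above.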
Preferably, step (3) comprises the following substeps:
(3.1) for each trained neural network in the deep learning framework, reading the node weight information from storage and applying the node weights to that network;
(3.2) training each untrained neural network in the deep learning framework with the training set, as sketched below.
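As an illustration of substeps (3.1) and (3.2), the sketch below loads stored node weights when a trained network is available and otherwise trains the network on a prepared training set; the weight file path, the training hyperparameters, and the Keras API are assumptions for this sketch, not details given in the patent.

```python
# Hypothetical initialization of one network in the deep learning framework.
import os

def initialize_network(model, weights_path, train_x=None, train_y=None):
    if os.path.exists(weights_path):
        model.load_weights(weights_path)              # (3.1) read node weights from storage
    else:
        model.fit(train_x, train_y,                   # (3.2) train the untrained network
                  epochs=200, batch_size=32, validation_split=0.1)
        model.save_weights(weights_path)              # keep the weights for later reuse
    return model
```

Running this once per network (X, Y, Z) brings the whole framework into the usable state required by step (3).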
Preferably, in step (4), the light field image of the target point light source is acquired with a microlens array type light field imaging system.
Preferably, step (5) comprises the following substeps:
(5.1) performing sub-aperture division on the light field image acquired in step (4);
(5.2) extracting the coordinates of the center of each sub-aperture image according to formula (1):
P_{ic} = \frac{\sum x \cdot I}{\sum I} \quad (1)
where P_{ic} is the coordinate of the center of the sub-aperture image, x is the coordinate of a pixel within the sub-aperture, and I is the gray value of that pixel.
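As an illustration of substeps (5.1) and (5.2), the following NumPy sketch divides the raw light field image into square sub-aperture tiles and computes, for each tile, the intensity-weighted center of formula (1); the fixed tile size and the assumption of a regular microlens grid are illustrative simplifications rather than requirements of the patent.

```python
# Hypothetical sub-aperture division and centroid extraction (formula (1)).
import numpy as np

def subaperture_centers(light_field_image: np.ndarray, tile: int) -> np.ndarray:
    """Return an (n_tiles, 2) array of centroid coordinates, one per sub-aperture."""
    h, w = light_field_image.shape
    centers = []
    for r in range(0, h - tile + 1, tile):            # (5.1) cut into sub-aperture tiles
        for c in range(0, w - tile + 1, tile):
            sub = light_field_image[r:r + tile, c:c + tile].astype(float)
            ys, xs = np.mgrid[0:tile, 0:tile]
            total = sub.sum()
            if total == 0:                            # empty tile: fall back to its middle
                cy = cx = (tile - 1) / 2.0
            else:
                cy = (ys * sub).sum() / total         # formula (1): sum(x * I) / sum(I)
                cx = (xs * sub).sum() / total
            centers.append((r + cy, c + cx))          # centroid in full-image coordinates
    return np.asarray(centers)
```

The flattened list of these centers is the feature information fed to the deep learning framework in step (6).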
As shown in Fig. 2, there is also provided a point light source three-dimensional coordinate measuring device, which is a microlens array type light field imaging system comprising, from left to right: an LED light source 1, ground glass 2, a main lens 3, a microlens array 4, and an image detector 5.
One embodiment of the present invention is described in detail below.
Take the measurement of the LED light source 1 placed behind the ground glass 2 as an example.
The point light source three-dimensional coordinate measuring method based on deep learning and light field imaging disclosed in this embodiment comprises the following steps:
the method comprises the following steps: and establishing a micro-lens array type light field imaging system.
The structure of the microlens array type light field imaging system in the first step is shown in fig. 2, and an optical system of the microlens array type light field imaging system specifically comprises a main lens 3, a microlens array 4 and an image detector 5.
Step two: establish a deep learning framework.
The deep learning framework in step two comprises three artificial neural network software programs, whose structure is shown in Table 1.
Step three: initialize the deep learning framework.
Step three is implemented as follows:
the neural network node weights stored in a file are read and applied to the respective artificial neural networks in the deep learning framework.
Step four: acquire the light field of the target point light source with the light field imaging system, obtaining the light field image of the target point light source shown in Fig. 3. For clarity of illustration, Fig. 3 is shown inverted.
Step five: extract the feature information of the target point light source light field.
Step five is implemented as follows:
Step 5.1: perform sub-aperture division on the light field image acquired in step four; the division is shown in Fig. 4. For clarity, Fig. 4 is shown inverted.
Step 5.2: extract the coordinates of the center of each sub-aperture image according to formula (1).
step six: and inputting the characteristic information of the target point light source light field into a deep learning frame, and calculating the three-dimensional coordinates of the point light source by the deep learning frame.
According to this point light source three-dimensional coordinate measuring method based on deep learning and light field imaging, the light field of the target point light source is acquired by the light field imaging system and solved by the deep learning method, so the influence of the scattering medium on the three-dimensional coordinate measurement is avoided and the random error of the measurement is suppressed.
The invention has the following beneficial effects:
1. The point light source three-dimensional coordinate measuring method based on deep learning and light field imaging solves the problem of measuring the three-dimensional coordinates of a point light source inside a scattering medium by establishing a light field imaging system and applying a deep learning method. The measurement is entirely passive: no energy carriers such as electromagnetic waves or ultrasonic waves need to be transmitted toward the target, so the covertness of the measurement is preserved.
2. The disclosed method can fit the influence of the scattering medium on the light field, and can therefore be applied to measuring the three-dimensional coordinates of a point light source inside a scattering medium, including vehicle distance measurement in dense fog, tracking and positioning of an aircraft inside a cloud layer, and localization of cells in fluorescence imaging.
3. In the disclosed method, a light field imaging system is established to acquire the light field of the target, and the light field is solved by a deep learning method.
4. In the disclosed method, the light field image of the target is acquired by the light field imaging system, and the artificial neural network algorithms supported by the deep learning framework are easy to parallelize when processing the light field data, so the computation can be accelerated effectively by multithreading or GPU computing, enabling real-time localization and tracking of the target.
The above description is only a preferred embodiment of the present invention and is not intended to limit it in any way; all simple modifications, equivalent variations, and refinements made to the above embodiment according to the technical spirit of the present invention still fall within the protection scope of the technical solution of the present invention.

Claims (4)

1. A point light source three-dimensional coordinate measuring method, characterized by comprising the following steps:
(1) establishing a light field imaging system, i.e., an instrument or device capable of acquiring the light field of a target scene;
(2) establishing a deep learning framework, i.e., a software- or hardware-based computing system built on artificial neural network algorithms and comprising one or more trained or untrained artificial neural networks;
(3) initializing the deep learning framework: determining and applying the node weights of each neural network in the framework so that every network reaches a usable state;
(4) acquiring the light field of the target point light source through the light field imaging system;
(5) extracting feature information from the light field of the target point light source, the feature information being the information contained in that light field that is needed for measuring the three-dimensional coordinates of the point light source;
(6) inputting the feature information of the target point light source light field into the deep learning framework;
(7) calculating the three-dimensional coordinates of the point light source with the deep learning framework;
wherein the light field imaging system in step (1) comprises a camera array, a 4f system, and a microlens array;
in step (4), the light field image of the target point light source is acquired with the microlens array type light field imaging system;
step (5) comprises the following substeps:
(5.1) performing sub-aperture division on the light field image acquired in step (4);
(5.2) extracting the coordinates of the center of each sub-aperture image according to formula (1):
P_{ic} = \frac{\sum x \cdot I}{\sum I} \quad (1)
where P_{ic} is the coordinate of the center of the sub-aperture image, x is the coordinate of a pixel within the sub-aperture, and I is the gray value of that pixel.
2. The point light source three-dimensional coordinate measuring method according to claim 1, characterized in that: the deep learning framework in the step (2) comprises three artificial neural network software programs.
3. The point light source three-dimensional coordinate measuring method according to claim 1, characterized in that: the step (3) comprises the following sub-steps:
(3.1) for each trained neural network in the deep learning framework, reading the node weight information from storage and applying the node weights to that network;
(3.2) training each untrained neural network in the deep learning framework with the training set.
4. An apparatus for implementing the point light source three-dimensional coordinate measuring method according to claim 1,
characterized in that it is a microlens array type light field imaging system comprising, from left to right: an LED light source (1), ground glass (2), a main lens (3), a microlens array (4), and an image detector (5).
CN201911086603.7A 2019-11-08 2019-11-08 Point light source three-dimensional coordinate measuring method and device Active CN110823094B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911086603.7A CN110823094B (en) 2019-11-08 2019-11-08 Point light source three-dimensional coordinate measuring method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911086603.7A CN110823094B (en) 2019-11-08 2019-11-08 Point light source three-dimensional coordinate measuring method and device

Publications (2)

Publication Number Publication Date
CN110823094A (en) 2020-02-21
CN110823094B (en) 2021-03-30

Family

ID=69553490

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911086603.7A Active CN110823094B (en) 2019-11-08 2019-11-08 Point light source three-dimensional coordinate measuring method and device

Country Status (1)

Country Link
CN (1) CN110823094B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105300307A (en) * 2015-11-20 2016-02-03 北京理工大学 Device and method for optical mirror distortion measurement of relevant techniques of two-dimensional digital speckling
CN105844589A (en) * 2016-03-21 2016-08-10 深圳市未来媒体技术研究院 Method for realizing light field image super-resolution based on mixed imaging system
CN106846463A (en) * 2017-01-13 2017-06-13 清华大学 Micro-image three-dimensional rebuilding method and system based on deep learning neutral net
CN106840398A (en) * 2017-01-12 2017-06-13 南京大学 A kind of multispectral light-field imaging method
CN107704917A (en) * 2017-08-24 2018-02-16 北京理工大学 A kind of method of effectively training depth convolutional neural networks
CN107993260A (en) * 2017-12-14 2018-05-04 浙江工商大学 A kind of light field image depth estimation method based on mixed type convolutional neural networks
CN109489559A (en) * 2018-10-08 2019-03-19 北京理工大学 Point light source space-location method based on time frequency analysis and optical field imaging technology
CN109506589A (en) * 2018-12-25 2019-03-22 东南大学苏州医疗器械研究院 A kind of measuring three-dimensional profile method based on light field imaging
CN109949354A (en) * 2019-03-13 2019-06-28 北京信息科技大学 A kind of light field depth information estimation method based on full convolutional neural networks

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3516757B2 (en) * 1994-09-19 2004-04-05 大日本スクリーン製造株式会社 Illuminance distribution deriving method and apparatus
CN103399298B (en) * 2013-07-30 2015-12-02 中国科学院深圳先进技术研究院 A kind of multisensor indoor positioning apparatus and method based on light intensity
US10378963B2 (en) * 2016-06-24 2019-08-13 Ushio Denki Kabushiki Kaisha Optical system phase acquisition method and optical system evaluation method
CN106792549A (en) * 2017-02-05 2017-05-31 南京阿尔法莱瑞通信技术有限公司 Indoor locating system based on WiFi fingerprints and its stop pick-up navigation system
CN108230223A (en) * 2017-12-28 2018-06-29 清华大学 Light field angle super-resolution rate method and device based on convolutional neural networks
CN109883324A (en) * 2019-02-21 2019-06-14 大连理工大学 The method that research background light influences the 3 d space coordinate measurement based on PSD
CN109857351A (en) * 2019-02-22 2019-06-07 北京航天泰坦科技股份有限公司 The Method of printing of traceable invoice
CN110070068B (en) * 2019-04-30 2021-03-02 苏州大学 Human body action recognition method
CN110276793A (en) * 2019-06-05 2019-09-24 北京三快在线科技有限公司 A kind of method and device for demarcating three-dimension object
CN110360954B (en) * 2019-08-14 2021-05-04 山东师范大学 Surface shape measuring method and system based on space coordinate calculation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Light field imaging technology and its applications in computer vision; Zhang Chi et al.; Journal of Image and Graphics (中国图象图形学报); 2016-03-31; Vol. 21, No. 3; pp. 263-281 *

Also Published As

Publication number Publication date
CN110823094A (en) 2020-02-21

Similar Documents

Publication Publication Date Title
Mao et al. What can help pedestrian detection?
Kim et al. High-speed drone detection based on yolo-v8
Khoshboresh-Masouleh et al. Multiscale building segmentation based on deep learning for remote sensing RGB images from different sensors
Sommer et al. Comprehensive analysis of deep learning-based vehicle detection in aerial images
CN113191222B (en) Underwater fish target detection method and device
CN108154133B (en) Face portrait-photo recognition method based on asymmetric joint learning
CN109272543B (en) Method and apparatus for generating a model
CN113762009B (en) Crowd counting method based on multi-scale feature fusion and double-attention mechanism
CN110136162B (en) Unmanned aerial vehicle visual angle remote sensing target tracking method and device
CN110490818A (en) Calculating ghost imaging reconstruction restoration methods based on CGAN
CN111428650B (en) Pedestrian re-recognition method based on SP-PGGAN style migration
Chormai et al. Disentangled explanations of neural network predictions by finding relevant subspaces
Guo et al. Dim space target detection via convolutional neural network in single optical image
Koziarski et al. Marine snow removal using a fully convolutional 3d neural network combined with an adaptive median filter
CN116091946A (en) Yolov 5-based unmanned aerial vehicle aerial image target detection method
Zhou et al. EASE: EM-Assisted Source Extraction from calcium imaging data
Zhang et al. Particle field positioning with a commercial microscope based on a developed CNN and the depth-from-defocus method
CN110823094B (en) Point light source three-dimensional coordinate measuring method and device
CN118089669A (en) Topography mapping system and method based on aviation mapping technology
CN112036072B (en) Three-dimensional tracer particle matching method and speed field measuring device
Yang et al. SerialTrack: ScalE and rotation invariant augmented Lagrangian particle tracking
CN110111307A (en) A kind of immune teaching immune system feedback analog system and method
Benisty et al. Review of data processing of functional optical microscopy for neuroscience
Dong et al. Retrieving object motions from coded shutter snapshot in dark environment
Durant et al. Variation in the local motion statistics of real-life optic flow scenes

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant