CN106875435B - Method and system for obtaining depth image - Google Patents
- Publication number
- CN106875435B (application CN201611155673.XA)
- Authority
- CN
- China
- Prior art keywords
- image
- depth
- structured light
- camera
- acquiring
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20016—Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
Landscapes
- Length Measuring Devices By Optical Means (AREA)
Abstract
The invention discloses a method for acquiring a depth image. A depth camera is used to acquire a structured light image on each of at least two planes as reference images; a reference image is then selected and the depth values of a first depth image are calculated; finally, a reference image is re-selected according to the depth values of the first depth image, and the depth values of a second depth image are calculated. Because the reference image is re-selected, on the basis of the first depth image, to obtain the second depth image, the depth measurement error is reduced and the measurement accuracy of the depth camera is improved, so that the accuracy no longer drops markedly as the measurement distance increases. In addition, when the second depth image is calculated from a second structured light image and the reference image selected according to the first depth image is used as the reference image for the second structured light image, the amount of calculation is reduced, high measurement accuracy is maintained, and the acquisition speed of the depth image is increased. The invention also provides a system for acquiring a depth image.
Description
Technical Field
The invention relates to the technical field of computer science, in particular to a method and a system for acquiring a high-precision depth image.
Background
Structured light based depth cameras are currently a popular type of device for measuring the depth of objects. Because they offer relatively high resolution and depth frame rates, human-computer interaction based on depth cameras is regarded as the next-generation interaction technology. Depth cameras can also be used for applications such as 3D scanning, indoor environment reconstruction for robots, and obstacle avoidance.
A structured light depth camera can acquire depth video at high speed, but its measurement accuracy drops markedly as the measurement distance increases. The main reason is that the contrast of the structured light image decreases sharply with distance, so the accuracy of the image matching calculation deteriorates. One way to address this is to increase the power of the structured light projector; however, higher power may be harmful to the human body, increases the power consumption of the depth camera, and makes heat dissipation difficult, so this approach has not been adopted.
In the prior art, the reference image is a structured light image collected in advance on a plane at a known distance from the depth camera, and the position of this plane is generally determined by the measurement range of the depth camera. It is typically located at the near end of the measurement range, so the depth camera is accurate at close range but produces large errors when measuring distant targets.
Disclosure of Invention
The invention provides a method for acquiring a depth image, which aims to solve the problem that the measurement accuracy of a depth camera drops markedly as the distance increases.
The technical problem of the invention is solved by the following technical scheme, a method of acquiring a depth image comprising the steps of: S1: acquiring a structured light image on each of at least two planes with a depth camera as reference images; S2: selecting a reference image, and calculating a first depth image; S3: selecting a reference image according to the depth values of the first depth image, and calculating a second depth image.
Specifically, step S2 includes the step of: calculating the first depth image by using the selected reference image and a first structured light image.
Specifically, step S3 includes the step of: calculating the second depth image by using the selected reference image and the first structured light image or a second structured light image, the first structured light image and the second structured light image being temporally adjacent, one after the other.
Specifically, in step S3, the position of the plane of the selected reference image is within the depth area to which the depth value of the first depth image belongs.
Specifically, in step S1, the reference image that is selected is the one acquired on the plane closest to the depth camera.
Specifically, in step S1, the planes are within the measurement range of the depth camera, and the spacing between adjacent planes may be equal or unequal.
Specifically, the step of acquiring the structured light image includes:
S11: projecting a structured light pattern into a target space or onto a plane with a laser projector of the depth camera;
S12: acquiring the structured light image in the target space or on the plane with an image acquisition camera of the depth camera.
Specifically, the step of calculating the depth values of the first depth image or the second depth image includes:
T1: calculating a deviation value Δ of each pixel of the first structured light image or the second structured light image relative to the reference image by using a matching algorithm;
T2: calculating the depth value of each pixel according to the following formula to obtain a depth image,
where B is the distance (baseline) between the laser projector and the image acquisition camera, Z0 is the distance from the depth camera to the plane of the reference image, and f is the focal length of the lens of the image acquisition camera.
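The formula referred to in step T2 is embedded as an image in the published patent and is not reproduced in this text. A standard reference-plane triangulation relation consistent with the variables defined above is given below as an assumed reconstruction, with Z_D denoting the depth value of the pixel being calculated:

```latex
% Assumed reconstruction of the T2 formula (the published formula is an image and is not reproduced here).
% Z_D    : depth value of the pixel
% Z_0    : distance of the reference plane from the depth camera
% B      : baseline between the laser projector and the image acquisition camera
% f      : focal length of the image acquisition camera lens
% \Delta : deviation value of the pixel obtained by the matching algorithm
Z_D = \frac{B \, f \, Z_0}{B \, f + \Delta \, Z_0}
```

With this form, a pixel whose deviation Δ is zero is assigned the reference-plane distance Z0, and the sign of Δ determines whether the pixel lies nearer to or farther from the camera than the reference plane.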
The invention also provides a system using any one of the above methods of acquiring a depth image, comprising: a laser projector for projecting a structured light pattern into space; an image acquisition camera for acquiring a reference image and a structured light image of the target space, the reference image being a structured light image acquired on at least two planes; and a processor for acquiring the depth image.
Specifically, the laser projector is an infrared laser projector whose light source is an infrared edge-emitting laser or an infrared vertical-cavity surface-emitting laser (VCSEL), the image acquisition camera is an infrared camera, and the structured light image is a speckle image.
Compared with the prior art, the invention has the following advantages: the reference image is re-selected, on the basis of the first depth image, to obtain the second depth image, so the depth measurement error is reduced, the measurement accuracy of the depth camera is improved, and the accuracy no longer drops markedly as the distance increases. When the second depth image is calculated from the second structured light image and the reference image re-selected according to the first depth image is used as the reference image for the second structured light image, the amount of calculation is reduced, high measurement accuracy is maintained, and the acquisition speed of the depth image is improved.
Drawings
Fig. 1 is a flowchart of a method for obtaining a depth image in embodiment 1.
Fig. 2 is a flowchart of a method for obtaining a depth image according to embodiment 2 of the present invention.
Fig. 3 is a schematic diagram of a target space provided by the present invention.
Fig. 4 is a schematic diagram of a system for acquiring a depth image in embodiment 3 provided by the invention.
Detailed Description
The invention will be further described with reference to the accompanying drawings and preferred embodiments.
It should be noted that corresponding depth areas are divided according to the selected planes: depth area a corresponds to reference plane 1, depth area b corresponds to reference plane 2, and so on. The first structured light image and the second structured light image are temporally adjacent structured light images, one after the other.
Embodiment 1
A method of acquiring a depth image, as shown in Fig. 1, comprises the following steps:
S1: acquiring a structured light image on each of at least two planes with a depth camera as reference images Ri (i = 1, ..., n);
S2: selecting a reference image, and calculating a first depth image;
S3: selecting a reference image according to the depth values of the first depth image, and calculating a second depth image.
In this embodiment, step S2 further includes: calculating the first depth image by using the selected reference image and a first structured light image.
In this embodiment, step S3 further includes: calculating the second depth image by using the selected reference image and the first structured light image or a second structured light image, the first structured light image and the second structured light image being temporally adjacent, one after the other.
In this embodiment, in step S3, the plane where the selected reference image is located corresponds to the depth value of the first depth image.
In this embodiment, in step S1, the selected reference image is the reference image closest to the depth camera.
In this embodiment, in step S1, the planes are within the measurement range of the depth camera and the spacing between adjacent planes is equal; in some embodiments the spacing may be unequal.
In this embodiment, the acquiring of the structured light image includes:
S11: projecting a structured light pattern into a target space or onto a plane with a laser projector of the depth camera;
S12: acquiring the structured light image in the target space or on the plane with an image acquisition camera of the depth camera.
In this embodiment, the step of calculating the depth values of the first depth image or the second depth image includes:
T1: calculating a deviation value Δ of each pixel of the first structured light image or the second structured light image relative to the reference image by using a matching algorithm;
T2: calculating the depth value of each pixel according to the foregoing formula to obtain a depth image.
in the selection of the plane distance, every other distance can be selected within the depth measurement range, and 3 equidistant planes are selected in total, and in some embodiments, unequal distance selection can also be performed.
In the embodiment, the measurement range is 0.6-6 m, and the plane positions are respectively 0.9m, 2.7m and 4.5 m.
It should be noted that the closer a target is to the plane of the reference image, the higher the contrast consistency and the smaller the error caused by the reference image, while the farther away it is, the larger the error. The spacing between the planes may therefore decrease with increasing distance, i.e. the planes may be placed more densely the farther they are from the depth camera.
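As an illustration of the two spacing strategies described above, the short sketch below generates reference-plane positions within a given measurement range. The equal-spacing call reproduces the 0.9 m / 2.7 m / 4.5 m planes of this embodiment, while the shrinking-gap rule (a geometric progression) is only an assumed example of unequal spacing, since the patent does not specify a particular rule.

```python
def equally_spaced_planes(start, step, count):
    """Reference-plane positions at equal intervals (e.g. 0.9 m, 2.7 m, 4.5 m)."""
    return [round(start + step * i, 3) for i in range(count)]


def denser_far_planes(z_min, z_max, count, ratio=0.7):
    """Reference-plane positions whose spacing shrinks with distance.

    Each successive gap is `ratio` times the previous one, so the planes become
    denser toward the far end of the measurement range. The geometric rule is an
    assumption used only for illustration.
    """
    gaps = [ratio ** i for i in range(count + 1)]
    scale = (z_max - z_min) / sum(gaps)
    positions, z = [], z_min
    for gap in gaps[:-1]:
        z += gap * scale
        positions.append(round(z, 3))
    return positions


if __name__ == "__main__":
    print(equally_spaced_planes(0.9, 1.8, 3))   # [0.9, 2.7, 4.5]
    print(denser_far_planes(0.6, 6.0, 3))       # gaps shrink toward the 6 m end
```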
Specifically, the acquisition of the reference images comprises the following steps:
D1: projecting a structured light pattern into the target space with a laser projector of the depth camera;
D2: placing planes at at least two known distances from the depth camera in the space;
D3: acquiring a structured light image on each plane in sequence with an image acquisition camera of the depth camera, and taking these images as the reference images.
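A minimal sketch of the reference acquisition procedure D1–D3 is shown below. `project_pattern` and `capture_frame` are hypothetical placeholders for the projector and infrared camera drivers, which the patent does not specify; they are passed in as callables so the sketch stays hardware-agnostic.

```python
from typing import Callable, Dict

import numpy as np


def acquire_references(plane_distances,
                       project_pattern: Callable[[], None],
                       capture_frame: Callable[[], np.ndarray]) -> Dict[float, np.ndarray]:
    """Acquire one reference structured light image per known plane distance.

    D1: keep the speckle pattern projected; D2: a flat plane is physically placed
    at each known distance; D3: the captured image is stored as reference R_i.
    Both driver callables are placeholders supplied by the caller.
    """
    references: Dict[float, np.ndarray] = {}
    project_pattern()                                        # D1
    for distance in plane_distances:                         # D2
        input(f"Place a flat plane at {distance} m and press Enter...")
        references[distance] = capture_frame()               # D3
    return references
```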
It should be noted that in the first depth image obtained in step S2, the pixel values corresponding to objects in the target region that are close to the reference plane have higher accuracy. In this embodiment the structured light image acquired on reference plane 1 is used as the reference image, so the depth information of objects located in depth area a is accurate. The main reason is that the contrast of the structured light image decreases with increasing distance, producing a large difference between the structured light image and the reference image and hence a large error when the deviation value is calculated.
In this embodiment, in the first depth image obtained in step S2, the depth values of pixels close to the reference plane are accurate, while the depth values of the other pixels have larger errors. To reduce these errors, the following steps are taken to obtain a high-precision depth image:
S31: selecting, for each pixel in the first depth image, the reference image corresponding to that pixel's depth value;
S32: calculating a second depth image of the current target area by using the selected reference images.
It should be noted that, in this embodiment, for the depth value of each pixel in the first depth image, the reference plane closest to that depth value is selected, and the reference image corresponding to that reference plane is used as the reference image when the depth value of the pixel is recalculated.
In this embodiment, in step S1 the selected planes divide the measurement range into corresponding depth areas: reference plane 1 corresponds to depth area a, reference plane 2 corresponds to depth area b, and so on. In this embodiment the first depth image is calculated from reference image 1, so for the pixels of the first depth image whose depth values fall in depth areas b and c, reference image 2 and reference image 3, respectively, are selected in this step.
In this embodiment, the pixels of the first structured light image whose depth values fall in depth areas b and c in step S31 are recalculated according to the algorithm of step S2, and the accuracy is improved because the selected reference image is close to the actual depth value. The pixels whose depth values fall in depth area a do not need to be recalculated.
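The two-pass computation of this embodiment can be sketched as follows. `match_fn` stands for the matching algorithm of step T1 and is passed in as a placeholder; `triangulate` uses the reference-plane relation assumed after step T2 above, so its exact form should be treated as an assumption rather than the patent's own formula.

```python
import numpy as np


def triangulate(delta, z0, baseline, focal):
    """Deviation-to-depth conversion using the assumed reference-plane relation
    Z = B*f*Z0 / (B*f + delta*Z0)."""
    return baseline * focal * z0 / (baseline * focal + delta * z0)


def two_pass_depth(structured_img, references, plane_dists, match_fn,
                   baseline, focal, init_idx=0):
    """Embodiment-1-style computation: a first depth image from one reference,
    then a second depth image in which each pixel uses the reference image of
    the plane nearest to its first-pass depth value.

    references  : list of reference images, one per plane
    plane_dists : distances of those planes from the depth camera (metres)
    match_fn    : match_fn(img, ref) -> per-pixel deviation map (placeholder)
    """
    dists = np.asarray(plane_dists, dtype=float)

    # Pass 1 (S2): depth from the initially selected reference image.
    delta0 = match_fn(structured_img, references[init_idx])
    depth1 = triangulate(delta0, dists[init_idx], baseline, focal)

    # Assign every pixel to the depth area of the reference plane nearest to it.
    nearest = np.abs(depth1[..., None] - dists).argmin(axis=-1)

    # Pass 2 (S31/S32): recompute only pixels whose nearest plane is not the
    # initial one; pixels already in that depth area keep their first-pass value.
    depth2 = depth1.copy()
    for idx in range(len(references)):
        if idx == init_idx:
            continue
        mask = nearest == idx
        if not mask.any():
            continue
        delta = match_fn(structured_img, references[idx])
        depth2[mask] = triangulate(delta[mask], dists[idx], baseline, focal)
    return depth1, depth2


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((4, 4))
    refs = [rng.random((4, 4)) for _ in range(3)]

    def fake_match(im, ref):
        return rng.random(im.shape) * 4 - 2   # dummy deviations, not a real matcher

    d1, d2 = two_pass_depth(img, refs, [0.9, 2.7, 4.5], fake_match, 0.05, 600.0)
    print(d1.shape, d2.shape)
```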
Specifically, as shown in Fig. 3, the person, the sofa and the photo frame in the current target space are at different distances, corresponding to depth areas a, b and c, respectively.
First, a first depth image of the current target space is obtained from the reference image corresponding to reference plane 1; from this first depth image it can be determined that the person is located in depth area a, close to reference plane 1, so the depth information of the person is already accurate.
Secondly, it can be determined from the first depth image that the sofa and the photo frame are located in depth areas b and c, respectively.
Finally, the depth values of the sofa and the photo frame are recalculated using reference images 2 and 3, respectively, to obtain the second depth image.
Embodiment 2
The method for acquiring a depth image in this embodiment, as shown in Fig. 2, differs from embodiment 1 in that obtaining the second depth image additionally requires a second structured light image.
It should be noted that the method of embodiment 1 can be regarded as two steps: a preliminary acquisition of the depth image, followed by a correction of the depth image. In practical use the output frame rate of the depth video must be considered; the correction step added in embodiment 1 increases the amount of calculation for a single frame and therefore lowers the output frame rate. The method of this embodiment reduces the depth measurement error and improves accuracy while keeping the output frame rate unchanged. The key idea is to exploit the fact that the depth of the target region does not change noticeably between adjacent frames: the depth image obtained from the previous frame is used to select the reference image for each pixel of the next frame. This is described below with reference to Fig. 3.
The method for acquiring the depth image comprises the following steps:
First, a first frame of structured light image (i.e. the first structured light image) is acquired, and the first depth image of the current target space is calculated by combining it with the reference image corresponding to reference plane 1; this depth image serves as a rough depth estimate for the next frame.
Secondly, a second frame of structured light image (i.e. the second structured light image) is acquired; from the first depth image it can be determined that the person is located in depth area a, close to reference plane 1, and that the sofa and the photo frame are located in depth areas b and c, respectively.
Finally, the second depth image is calculated from the second structured light image using, for each pixel, the corresponding selected reference image.
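A corresponding sketch of the embodiment-2 processing loop is given below: the depth map of the previous frame decides, per pixel, which reference image the current frame is matched against, so each frame needs only a single matching pass and the output frame rate is unchanged. As before, `match_fn` and the triangulation formula are placeholders and assumptions rather than the patent's exact algorithm.

```python
import numpy as np


def triangulate(delta, z0, baseline, focal):
    """Assumed reference-plane relation Z = B*f*Z0 / (B*f + delta*Z0)."""
    return baseline * focal * z0 / (baseline * focal + delta * z0)


def stream_depth(frames, references, plane_dists, match_fn, baseline, focal):
    """Yield one depth map per structured light frame.

    The first frame is computed against the reference of the nearest plane; every
    later frame selects, per pixel, the reference image of the plane closest to
    that pixel's depth in the previous frame's depth map.
    """
    dists = np.asarray(plane_dists, dtype=float)
    prev_depth = None
    for img in frames:
        if prev_depth is None:
            # First frame: only reference plane 1 is used.
            delta = match_fn(img, references[0])
            depth = triangulate(delta, dists[0], baseline, focal)
        else:
            # Later frames: per-pixel reference chosen from the previous depth map.
            nearest = np.abs(prev_depth[..., None] - dists).argmin(axis=-1)
            depth = np.empty_like(prev_depth)
            for idx, ref in enumerate(references):
                mask = nearest == idx
                if not mask.any():
                    continue
                # (A real implementation would match only the masked pixels.)
                delta = match_fn(img, ref)
                depth[mask] = triangulate(delta[mask], dists[idx], baseline, focal)
        prev_depth = depth
        yield depth
```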
Embodiment 3
A system for acquiring a depth image, as shown in Fig. 4, comprises: a laser projector 1 for projecting a structured light pattern into space; an image acquisition camera 2 for acquiring a reference image and a structured light image of the target space, the reference image being a structured light image acquired on at least two planes; and a processor 3 for acquiring the depth image.
Specifically, the laser projector 1 is an infrared laser projector whose light source is an infrared edge-emitting laser or an infrared vertical-cavity surface-emitting laser (VCSEL), the image acquisition camera 2 is an infrared camera, and the structured light image is a speckle image.
The method and system are mainly directed to depth cameras based on structured light technology. Such a depth camera comprises a laser projector and an image acquisition camera: the laser projector is an infrared projector that projects an infrared structured light speckle pattern into the target space, and the image acquisition camera is an infrared camera that acquires the infrared image carrying the structured light pattern. The depth camera further comprises a processor, which performs a correlation calculation between the acquired structured light image and a reference image stored in the system to obtain a deviation value, and then uses this deviation value to obtain a depth image of the target space through the triangulation principle.
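The correlation calculation performed by the processor can be illustrated with the minimal block matcher below, which plays the role of the `match_fn` placeholder used in the earlier sketches. It uses a zero-mean correlation over a horizontal search window; the block size, search range and cost function are assumptions, and a production matcher would add sub-pixel refinement and a more robust cost.

```python
import numpy as np


def block_match(img, ref, block=9, max_shift=8):
    """For every pixel, find the horizontal shift of the reference block that best
    matches the image block (zero-mean correlation). The returned integer shifts
    play the role of the deviation value used for triangulation."""
    h, w = img.shape
    half = block // 2
    pad = half + max_shift
    img_p = np.pad(img.astype(np.float64), half, mode="edge")
    ref_p = np.pad(ref.astype(np.float64), pad, mode="edge")
    delta = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            patch = img_p[y:y + block, x:x + block]
            patch = patch - patch.mean()
            best_score, best_shift = -np.inf, 0
            for s in range(-max_shift, max_shift + 1):
                cand = ref_p[y + max_shift:y + max_shift + block,
                             x + max_shift + s:x + max_shift + s + block]
                cand = cand - cand.mean()
                score = float((patch * cand).sum())
                if score > best_score:
                    best_score, best_shift = score, s
            delta[y, x] = best_shift
    return delta


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    ref = rng.random((32, 64))
    img = np.roll(ref, -3, axis=1)           # image content sits 3 px to the right in ref
    print(np.median(block_match(img, ref)))  # ~3.0, except near the wrap-around columns
```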
It should be noted that, in the prior art, the reference image is a structured light image collected in advance on a plane at a known distance from the depth camera, the position of the plane being generally determined by the measurement range of the depth camera and typically located at its near end; such an arrangement makes the depth camera accurate at close range but inaccurate at long range. In this embodiment, reference images are collected on a plurality of planes at different distances, one of them is selected as the reference image for an initial depth calculation, and the reference image is then re-selected according to the resulting depth information to update the depth calculation, so that a depth image of higher accuracy is obtained.
Claims (8)
1. A method of obtaining a depth image, comprising the steps of:
S1: projecting and acquiring a structured light image of a target space by using a depth camera, and acquiring a structured light image on each of at least two planes at known distances as reference images, wherein the planes are selected at equal or unequal intervals within the depth measurement range of the depth camera and a corresponding depth area is divided for each plane;
S2: selecting a reference image, and calculating a first depth image of the target space;
S3: selecting, according to the depth value of each pixel in the first depth image, the reference plane closest to that depth value, taking the reference image corresponding to that reference plane as the reference image selected when the depth value of the pixel is calculated, and re-calculating the depth values of some of the pixels to obtain the depth values of a second depth image, wherein if the depth value of a pixel in the first depth image already lies within the depth area of the reference image used, no additional depth value needs to be calculated and the depth value of that pixel in the first depth image is taken as its depth value in the second depth image;
the step of calculating the depth values of the first depth image or the second depth image comprising:
T1: calculating a deviation value Δ of each pixel of the first structured light image or the second structured light image relative to the reference image by using a matching algorithm;
T2: calculating the depth value of each pixel according to the following formula to obtain a depth image,
where B is the distance (baseline) between the laser projector and the image acquisition camera, Z0 is the distance from the depth camera to the plane of the reference image, and f is the focal length of the lens of the image acquisition camera.
2. The method of acquiring a depth image according to claim 1, wherein step S2 includes the step of: calculating the first depth image by using the selected reference image and the first structured light image.
3. The method of acquiring a depth image according to claim 1, wherein step S3 includes the step of: calculating the second depth image by using the selected reference image and the first structured light image or the second structured light image, the first structured light image and the second structured light image being temporally adjacent, one after the other.
4. The method of claim 1 or 3, wherein in step S3, the position of the plane of the selected reference image is within the depth area to which the depth value of the first depth image belongs.
5. The method of claim 1, wherein in step S1, the selected one of the reference images is the reference image closest to the depth camera.
6. A method of acquiring a depth image as claimed in any one of claims 1 to 3, wherein the step of acquiring the structured light image comprises:
S11: projecting a structured light pattern into a target space or onto a plane with a laser projector of the depth camera;
S12: acquiring the structured light image in the target space or on the plane with an image acquisition camera of the depth camera.
7. A system for using the method of acquiring a depth image of any of claims 1-6, comprising:
a laser projector for projecting a structured light pattern into space;
an image acquisition camera for acquiring a reference image and a target spatial structured light image, the reference image being a structured light image acquired on at least two planes;
a processor for acquiring a depth image.
8. The system of claim 7, wherein the laser projector is an infrared laser projector whose light source comprises an infrared edge-emitting laser or an infrared vertical-cavity surface-emitting laser (VCSEL), the image acquisition camera is an infrared camera, and the structured light image is a speckle image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611155673.XA CN106875435B (en) | 2016-12-14 | 2016-12-14 | Method and system for obtaining depth image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106875435A CN106875435A (en) | 2017-06-20 |
CN106875435B true CN106875435B (en) | 2021-04-30 |
Family
ID=59164634
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611155673.XA Active CN106875435B (en) | 2016-12-14 | 2016-12-14 | Method and system for obtaining depth image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106875435B (en) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107564051B (en) * | 2017-09-05 | 2020-06-02 | 歌尔股份有限公司 | Depth information acquisition method and system |
CN107682607B (en) * | 2017-10-27 | 2019-10-22 | Oppo广东移动通信有限公司 | Image acquiring method, device, mobile terminal and storage medium |
CN108701361A (en) * | 2017-11-30 | 2018-10-23 | 深圳市大疆创新科技有限公司 | Depth value determines method and apparatus |
CN109661683B (en) * | 2017-12-15 | 2020-09-15 | 深圳配天智能技术研究院有限公司 | Structured light projection method, depth detection method and structured light projection device based on image content |
CN108227707B (en) * | 2017-12-25 | 2021-11-26 | 清华大学苏州汽车研究院(吴江) | Automatic driving method based on laser radar and end-to-end deep learning method |
CN110088563B (en) * | 2019-03-13 | 2021-03-19 | 深圳市汇顶科技股份有限公司 | Image depth calculation method, image processing device and three-dimensional measurement system |
CN111885311B (en) * | 2020-03-27 | 2022-01-21 | 东莞埃科思科技有限公司 | Method and device for adjusting exposure of infrared camera, electronic equipment and storage medium |
CN111882596B (en) * | 2020-03-27 | 2024-03-22 | 东莞埃科思科技有限公司 | Three-dimensional imaging method and device for structured light module, electronic equipment and storage medium |
CN112752088B (en) * | 2020-07-28 | 2023-03-28 | 腾讯科技(深圳)有限公司 | Depth image generation method and device, reference image generation method and electronic equipment |
CN112818874A (en) * | 2021-02-03 | 2021-05-18 | 东莞埃科思科技有限公司 | Image processing method, device, equipment and storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103778643A (en) * | 2014-01-10 | 2014-05-07 | 深圳奥比中光科技有限公司 | Method and device for generating target depth information in real time |
CN105120257A (en) * | 2015-08-18 | 2015-12-02 | 宁波盈芯信息科技有限公司 | Vertical depth sensing device based on structured light coding |
CN106170086A (en) * | 2016-08-19 | 2016-11-30 | 深圳奥比中光科技有限公司 | The method of drawing three-dimensional image and device, system |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7852461B2 (en) * | 2007-11-15 | 2010-12-14 | Microsoft International Holdings B.V. | Dual mode depth imaging |
US8786682B2 (en) * | 2009-03-05 | 2014-07-22 | Primesense Ltd. | Reference image techniques for three-dimensional sensing |
CN104463880B (en) * | 2014-12-12 | 2017-06-30 | 中国科学院自动化研究所 | A kind of RGB D image acquiring methods |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| CB02 | Change of applicant information | Address after: 11-13/F, Joint Headquarters Building, High-tech Zone, 63 Xuefu Road, Yuehai Street, Nanshan District, Shenzhen, Guangdong 518000; Applicant after: Obi Zhongguang Technology Group Co., Ltd. Address before: A808, Zhongdi Building, Industry-University-Research Base, China University of Geosciences, No. 8 Yuexing Third Road, Nanshan District, Shenzhen, Guangdong 518000; Applicant before: SHENZHEN ORBBEC Co., Ltd.
| GR01 | Patent grant |