CN105374039B - Monocular image depth information method of estimation based on contour acuity - Google Patents

Monocular image depth information method of estimation based on contour acuity

Info

Publication number
CN105374039B
CN105374039B CN201510786727.1A
Authority
CN
China
Prior art keywords
contour
image
depth
edge
gradient
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510786727.1A
Other languages
Chinese (zh)
Other versions
CN105374039A (en)
Inventor
马利
景源
李鹏
张玉奇
胡彬彬
牛斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Liaoning University
Original Assignee
Liaoning University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Liaoning University filed Critical Liaoning University
Priority to CN201510786727.1A priority Critical patent/CN105374039B/en
Publication of CN105374039A publication Critical patent/CN105374039A/en
Application granted granted Critical
Publication of CN105374039B publication Critical patent/CN105374039B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The present invention proposes a monocular image depth information estimation method based on contour sharpness. The method uses edge contour sharpness as the blur-estimation feature and extracts depth information from low-level cue information. First, edge detection is performed on the image. Then, for the edges in the image, the edge energy and the contour sharpness are calculated and taken as the external energy of the contour; combined with the internal energy of the contour, namely the contour line characteristic energy and the contour line distance energy, a contour tracking model is established, the minimum of the energy function is solved, and the image contours are searched. Next, taking a depth gradient as the prior gradient model, depth values are filled into the regions bounded by different contour lines and the depth distribution is calculated. Finally, the resulting depth image is optimized using the original image information and the obtained depth image information to produce the final disparity map. Experimental results show that the depth estimation algorithm of the invention is simple and can estimate the depth map of a monocular image quickly and accurately.

Description

Monocular image depth information estimation method based on contour sharpness
Technical Field
The invention relates to a method for estimating depth information of a monocular image, which obtains the depth information of the objects shown in the image from low-level image cues.
Background
Depth information perception is a precondition for generating stereoscopic vision, and applications built on depth information play an important role in fields such as scene parsing, three-dimensional reconstruction, pattern recognition, and target tracking. In practice, either a monocular camera or a depth camera can be used to extract depth information. Depth estimation based on a monocular camera has clear advantages: it is simple to operate, requires inexpensive hardware, and can extract depth information directly from existing monocular image material.
Monocular image depth information estimation extracts two- and three-dimensional geometric information of a target, such as color, shape, and coplanarity, from a single image, and thereby obtains the spatial three-dimensional information of the target from a small number of known conditions. Most current algorithms rely on high-level and mid-level image cues. For example, semantic labels learned from reference images can be assigned to the target image to obtain relative depth information. Alternatively, the target image can be segmented using image structure information, such as color, texture, and shape, trained on real disparity images, and the depth information derived with a discriminatively trained Markov random field. In contrast, low-level cue information can recover depth from an image by applying local information directly, without analyzing the image content, and the resulting algorithms are relatively simple.
Blur information can serve as an important feature for depth estimation in monocular images. Blur mostly arises during imaging from defocus or from objects lying at different depths in the imaged area. Exploiting this property, monocular depth estimation from defocus determines the foreground and background of an image from its degree of blur and thereby estimates scene depth. One approach achieves relative depth extraction by propagating the blur values at edge locations across the entire image using vanishing lines and Markov random fields. Another models the defocus blur process as heat diffusion and recovers scene depth by estimating the blur amount at edge locations with an inhomogeneous inverse heat-diffusion equation. However, such blur-diffusion methods are complex, inefficient, and of limited practicality.
The invention therefore simplifies monocular image depth information estimation by exploiting low-level image cues and a new blur information feature.
Disclosure of Invention
The invention provides a monocular image depth information estimation method based on contour sharpness.
The purpose of the invention is realized by the following technical scheme:
the monocular image depth information estimation method based on contour sharpness uses gradient magnitude and contour sharpness information to characterize object contours at different depths in the image. During depth estimation it considers not only the gradient magnitude of the edges but also their spatial information, so the blur trend along object edges is reflected more fully. The method comprises the following steps:
(1) performing edge detection on the input image to obtain the edge points P = {p1, p2, …, pn} of the objects in the image;
(2) According to the prior depth gradient model, defining a group of mutually parallel, equally spaced contour lines as the initial contour V = {v0, v1, …, vm}, where v(s) = [x(s), y(s)] and x(s), y(s) are the coordinates of point s on the initial contour line;
the total energy of the contour is defined as:
E_total = w1·E_edge + w2·E_sharp + w3·E_s + w4·E_d
where E_edge is the contour edge energy function, E_sharp the contour sharpness energy function, E_s the contour line characteristic energy function, and E_d the contour line distance energy function; w1, w2, w3, w4 are weight control parameters for the respective functions and can be assigned according to the characteristics of the specific image.
The contour edge energy function E_edge represents the gradient magnitude of the contour and is defined from the magnitude of the gradient of the image I(x, y) along the gradient direction, where |∇I(x, y)| denotes the gradient magnitude of I(x, y);
the contour sharpness energy function E_sharp characterizes the degree of blur of the contour and is obtained from the gradient profile sharpness. The gradient profile sharpness σ(p(q0)) is the root mean square of the variance of the gradient profile variable. The gradient profile is the gradient magnitude curve along the one-dimensional path p(q0) obtained by starting at an edge pixel q0(x0, y0) in the image and tracing along the gradient direction to the boundary of the edge until the gradient magnitude no longer changes. In the sharpness definition, d_c(q, q0) is the curve length between points q and q0 on the gradient profile, G(q) is the gradient magnitude at point q, G(q0) is the sum of the gradient magnitudes of all points on the gradient profile, s is any point on the gradient profile, and b is a weight control parameter.
The contour line characteristic energy function E_s is used to constrain the smoothness of the contour;
the contour line distance energy function E_d is used to keep the contour tracking curve within the search area;
(3) starting from the left side of the image and proceeding from the bottom initial contour line to the top, updating the position of each contour point and calculating the contour tracking energy for each contour point according to the total energy defined in step (2);
(4) solving the minimum value of the energy function: among the pixels in the column adjacent to the current contour point, those belonging to the edge-point set P = {p1, p2, …, pn} and satisfying the energy function definition are searched, and the pixel with the minimum energy is selected as the new contour point; this minimum search is repeated from the left side of the image to the right side to obtain the final contour search result, i.e. the target contour line V';
(5) filling depth values of areas with different contour lines by taking a depth gradient hypothesis as a priori hypothesis gradient model, and calculating to obtain depth distribution;
each target contour line v'_i in the contour set V' = {v'0, v'1, …, v'm} is assigned a corresponding depth value Depth:
(6) and optimizing the obtained depth image by using the gray information of the original image and the obtained depth image information:
where Depth(x_i) is the input depth image, Ω(x_i) is the neighborhood centered on pixel x_i, I(x_i) is the luminance (I) component of pixel x_i, x_j is a pixel in the neighborhood Ω(x_i), and W(x_i) is the normalization factor of the filter; ||x_i − x_j|| is the spatial Euclidean distance between the two pixels, and I(x_i) − I(x_j) expresses the luminance similarity of the two pixels; pixels x_i and x_j have spatial coordinates (x_ix, x_iy) and (x_jx, x_jy) respectively. The spatial weight coefficient and the chroma weight coefficient are defined as:
where σ_s is the variance of the spatial weight and σ_r is the variance of the chroma weight.
(7) The disparity map obtained by estimating the depth information of the input monocular image is output.
The advantage of the method is that it estimates monocular image depth information from contour sharpness. Traditional monocular depth estimation methods require steps such as learning, training, and image understanding based on high-level and mid-level image cues, and their algorithms are complex. The present method uses only low-level image cues and is computationally simple. Unlike traditional methods that distinguish object depth purely by blur information, the method uses contour sharpness to distinguish object contours effectively when computing blur information, obtaining object contours at different depths while avoiding steps such as blur diffusion, which improves its practicality. By using gradient magnitude together with contour sharpness to characterize object contours at different depths, the method considers not only the gradient magnitude of edges during depth estimation but also their spatial information, reflecting the blur trend along object edges more fully.
Drawings
FIG. 1 is a flow chart of the method.
Fig. 2 shows the definition of contour sharpness.
FIG. 3 is a diagram illustrating the relative relationship of depth estimation assigned by contour lines.
Detailed Description
The following detailed description of the embodiments of the invention is provided in connection with the accompanying drawings.
(1) For the input image, edge detection is performed with the Canny edge detection algorithm to obtain the edge points P = {p1, p2, …, pn} of the objects in the image;
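As an illustrative sketch only (not part of the claimed method, which specifies the Canny detector), the edge-point extraction of step (1) can be approximated by thresholding a simple gradient magnitude; the threshold value and helper names here are assumptions:

```python
# Minimal stand-in for step (1): extract edge points P = {p1, ..., pn}
# by thresholding a central-difference gradient magnitude.  The patent
# uses the Canny detector; this simplified sketch is for illustration.

def gradient_magnitude(img):
    """Central-difference gradient magnitude of a 2-D grayscale image."""
    h, w = len(img), len(img[0])
    g = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y][x + 1] - img[y][x - 1]) / 2.0
            gy = (img[y + 1][x] - img[y - 1][x]) / 2.0
            g[y][x] = (gx * gx + gy * gy) ** 0.5
    return g

def edge_points(img, threshold=10.0):
    """Return the edge-point set P as (x, y) tuples."""
    g = gradient_magnitude(img)
    return [(x, y)
            for y in range(len(img))
            for x in range(len(img[0]))
            if g[y][x] >= threshold]

# Tiny test image with one vertical step edge between columns 2 and 3.
img = [[0, 0, 0, 100, 100, 100] for _ in range(5)]
P = edge_points(img)
```

On this toy image the detected points straddle the step edge in the three interior rows, which is the behavior the contour search in the later steps relies on.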
(2) According to the prior depth gradient model, a group of mutually parallel, equally spaced contour lines is defined as the initial contour V = {v0, v1, …, vm}, where v(s) = [x(s), y(s)] and x(s), y(s) are the coordinates of point s on the initial contour line;
the total energy of the contour line is defined as
E_total = w1·E_edge + w2·E_sharp + w3·E_s + w4·E_d
where E_edge is the contour edge energy function, E_sharp the contour sharpness energy function, E_s the contour line characteristic energy function, and E_d the contour line distance energy function; w1, w2, w3, w4 are weight control parameters, which for typical images can be set to w1 = 0.25, w2 = 0.5, w3 = 0.125, w4 = 0.125.
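The weighted combination above is straightforward; a minimal sketch with the default weights stated in the text:

```python
# E_total = w1*E_edge + w2*E_sharp + w3*E_s + w4*E_d, with the default
# weights given in the description (w1=0.25, w2=0.5, w3=0.125, w4=0.125).

def total_energy(e_edge, e_sharp, e_s, e_d,
                 w=(0.25, 0.5, 0.125, 0.125)):
    """Weighted sum of the four contour energy terms."""
    return w[0] * e_edge + w[1] * e_sharp + w[2] * e_s + w[3] * e_d

E = total_energy(1.0, 2.0, 4.0, 8.0)  # 0.25 + 1.0 + 0.5 + 1.0 = 2.75
```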
(3) The contour edge energy function E_edge is defined as the magnitude of the gradient of the image I(x, y) along the gradient direction, where |∇I(x, y)| denotes the gradient magnitude of I(x, y);
wherein the parameter a is a weight control parameter.
(4) In the method, the gradient profile sharpness of an edge characterizes the degree of blur of the contour, so the contour sharpness energy function E_sharp, obtained from the gradient profile sharpness, is defined as the characteristic value of the degree of contour blur. As shown in Fig. 2, starting from an edge pixel q0(x0, y0) and tracing along the gradient direction to both sides of the edge until the gradient magnitude no longer changes yields a path p(q0). The gradient magnitude curve along the one-dimensional path p(q0) is called the gradient profile. The profile sharpness is defined using the root mean square of the variance of the gradient profile variable, expressed as:
where d_c(q, q0) is the curve length between points q and q0 on the gradient profile, G(q) is the gradient magnitude at point q, G(q0) is the sum of the gradient magnitudes of all points on the gradient profile, and s is any point on the gradient profile.
The sharpness energy function E_sharp is defined as:
wherein the parameter b is a weight control parameter.
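The formula images for the sharpness are not reproduced in this text, so the sketch below is a hedged reconstruction from the verbal definition above: it reads σ(p(q0)) as the square root of the magnitude-weighted variance of the curve-length variable along the profile, i.e. sqrt(Σ_q (G(q)/ΣG)·d_c(q, q0)²). This reading is an assumption, not the patent's exact formula:

```python
# Assumed reconstruction of the gradient-profile sharpness: a sharp edge
# concentrates gradient magnitude near the profile start (small sigma),
# while a blurred edge spreads it out (large sigma).

def profile_sharpness(distances, magnitudes):
    """distances[i]: curve length d_c(q_i, q0); magnitudes[i]: G(q_i)."""
    total = sum(magnitudes)  # G(q0): sum over the whole gradient profile
    variance = sum(g / total * d * d
                   for d, g in zip(distances, magnitudes))
    return variance ** 0.5

# Magnitude concentrated at d = 0 (sharp) vs. spread evenly (blurred).
sharp = profile_sharpness([0, 1, 2], [10, 1, 1])
blurred = profile_sharpness([0, 1, 2], [4, 4, 4])
```

The ordering sharp < blurred is what lets the tracking model prefer crisp, near-focus contours over diffuse background ones.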
(5) The contour line characteristic energy function E_s controls contour tracking as a smoothness constraint, ensuring that the traced curve is smooth and preventing the solution from falling into local extrema.
The contour points on a contour line are defined as N = {n0, n1, …, nn}, where n0 is the starting point of the contour; the contour line characteristic energy function is defined as
where d_s(n_i, n_{i-1}) is the curve length between points n_i and n_{i-1}, and the parameter c is a weight control parameter.
(6) The contour line distance energy function E_d is an elastic constraint term that limits the displacement of each contour point during contour tracking, so that the contour tracking curve does not leave the search area and contour lines do not cross.
where d_e(n_i, n_0) is the distance between point n_i and point n_0, and d is a weight control parameter.
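The exact formulas for E_s and E_d are given as images not reproduced here, so the sketch below assumes simple squared-penalty forms purely for illustration; c and d are the weight control parameters named in the text, and the radius parameter is hypothetical:

```python
# Illustrative (assumed) forms of the two internal energy terms:
# E_s penalizes uneven spacing between consecutive contour points
# (smoothness), E_d penalizes a point drifting outside the search area
# around its starting position (elastic distance constraint).

def contour_smoothness_energy(points, c=1.0):
    """E_s sketch: variance-style penalty on consecutive row gaps."""
    gaps = [abs(points[i][1] - points[i - 1][1])
            for i in range(1, len(points))]
    mean = sum(gaps) / len(gaps)
    return c * sum((g - mean) ** 2 for g in gaps)

def contour_distance_energy(point, start, d=1.0, radius=5.0):
    """E_d sketch: zero inside the search radius, quadratic outside."""
    dist = ((point[0] - start[0]) ** 2 + (point[1] - start[1]) ** 2) ** 0.5
    return d * max(0.0, dist - radius) ** 2
```

A perfectly even contour incurs zero smoothness energy, and points inside the search radius incur zero distance energy, so both terms only act when the tracked curve misbehaves.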
(7) Starting from the left side of the image and proceeding from the bottom initial contour line to the top, the position of each contour point is updated and the contour tracking energy is calculated for each contour point according to the total energy defined in steps (2)–(6);
(8) The minimum of the energy function is solved: among the pixels in the column adjacent to the current contour point, those belonging to the edge-point set P = {p1, p2, …, pn} and satisfying the energy function definition are searched, and the pixel with the minimum energy value is selected as the new contour point; this minimum search is repeated from the left side of the image to the right side to obtain the final contour search result, i.e. the target contour line V'.
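The left-to-right minimum search of step (8) can be sketched as a greedy column-by-column update; the window size is a hypothetical parameter and the energy function is passed in, so only the search structure (not the patent's energy) is shown:

```python
# Greedy left-to-right contour search sketch: in each pixel column, pick
# the candidate row within a window around the previous contour point
# whose tracking energy is lowest.

def trace_contour(n_cols, start_row, energy, window=2):
    """energy(col, row) -> contour tracking energy at that pixel."""
    rows = [start_row]
    for col in range(1, n_cols):
        prev = rows[-1]
        candidates = range(prev - window, prev + window + 1)
        rows.append(min(candidates, key=lambda r: energy(col, r)))
    return rows

# Toy energy whose minimum lies along row 3 in every column: the traced
# contour converges onto that row within a few columns.
contour = trace_contour(5, 0, lambda c, r: (r - 3) ** 2)
```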
(9) Taking a depth gradient that deepens gradually from bottom to top as the prior gradient model, depth values are filled into the regions bounded by different contour lines according to Fig. 3, and the depth distribution is calculated.
Each target contour line v'_i in the contour set V' = {v'0, v'1, …, v'm} is assigned a corresponding depth value.
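The exact assignment formula is given as an image not reproduced here; the sketch below assumes, for illustration only, that the region between consecutive contour lines receives a depth proportional to the contour index under the bottom-to-top prior (255·i/m is an assumed mapping, not the patent's formula):

```python
# Assumed depth-filling sketch for step (9): rows between contour lines
# share one depth value that grows linearly with the contour index, so
# depth deepens from the bottom of the image toward the top.

def fill_depth(height, contour_rows, max_depth=255):
    """contour_rows: row indices of the contour lines v'_0..v'_m,
    ordered from the bottom of the image (largest row) to the top."""
    m = len(contour_rows) - 1
    depth = []
    for row in range(height):
        # Region index i = number of contour lines strictly below this row.
        i = sum(1 for r in contour_rows if r > row)
        depth.append(max_depth * min(i, m) // m)
    return depth

d = fill_depth(6, [5, 3, 1])  # three contour lines, m = 2
```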
(10) And optimizing the obtained depth image by using the original image information and the obtained depth image information:
where Depth(x_i) is the input depth image, Ω(x_i) is the neighborhood centered on pixel x_i, I(x_i) is the luminance (I) component of pixel x_i, x_j is a pixel in the neighborhood Ω(x_i), and W(x_i) is the normalization factor of the filter; ||x_i − x_j|| is the spatial Euclidean distance between the two pixels, and I(x_i) − I(x_j) expresses the luminance similarity of the two pixels; pixels x_i and x_j have spatial coordinates (x_ix, x_iy) and (x_jx, x_jy) respectively. The spatial weight coefficient and the chroma weight coefficient are defined as:
where σ_s is the variance of the spatial weight and σ_r is the variance of the chroma weight.
(11) The disparity map obtained by estimating the depth information of the input monocular image is output.

Claims (3)

1. The monocular image depth information estimation method based on the contour sharpness is characterized by comprising the following steps of:
(1) performing edge detection on the input image to obtain the edge points P = {p1, p2, …, pn} of the objects in the image;
(2) According to the prior depth gradient model, defining a group of mutually parallel, equally spaced contour lines as the initial contour V = {v0, v1, …, vm}, where v(s) = [x(s), y(s)] and x(s), y(s) are the coordinates of point s on the initial contour line;
the total energy of the contour line is defined as
E_total = w1·E_edge + w2·E_sharp + w3·E_s + w4·E_d
Wherein EedgeRepresenting the gradient amplitude of the contour as a function of the contour edge energy;
Esharpa contour sharpness energy function representing a degree of blurring of the contour;
Esis a contour characteristic energy function used for restraining the smoothness of the contour;
Edthe contour line distance energy function is used for controlling the contour tracing curve not to exceed the search area;
w1、w2、w3、w4controlling parameters for each function weight;
(3) starting from the left side of the image and proceeding from the bottom initial contour line to the top, updating the position of each contour point and calculating the contour tracking energy for each contour point according to the total energy defined in step (2);
(4) solving the minimum value of the energy function: among the pixels in the column adjacent to the current contour point, those belonging to the edge-point set P = {p1, p2, …, pn} and satisfying the energy function definition are searched, and the pixel with the minimum energy is selected as the new contour point; this minimum search is repeated from the left side of the image to the right side to obtain the final contour search result, i.e. the target contour line V';
(5) filling depth values of areas with different contour lines by taking a depth gradient hypothesis as a priori hypothesis gradient model, and calculating to obtain depth distribution;
each target contour line v'_i in the contour set V' = {v'0, v'1, …, v'm}, i = 0 … m, is assigned a corresponding depth value Depth:
(6) and optimizing the obtained depth image by using the gray information of the original image and the obtained depth image information:
where Depth(x_i) is the input depth image, Ω(x_i) is the neighborhood centered on pixel x_i, I(x_i) is the luminance (I) component of pixel x_i, x_j is a pixel in the neighborhood Ω(x_i), and W(x_i) is the normalization factor of the filter; ||x_i − x_j|| is the spatial Euclidean distance between the two pixels, and I(x_i) − I(x_j) expresses the luminance similarity of the two pixels; pixels x_i and x_j have spatial coordinates (x_ix, x_iy) and (x_jx, x_jy) respectively; the spatial weight coefficient and the chroma weight coefficient are defined as:
where σ_s is the variance of the spatial weight and σ_r is the variance of the chroma weight;
(7) the disparity map obtained by estimating the depth information of the input monocular image is output.
2. The method of claim 1, wherein the contour edge energy function E_edge is defined as the magnitude of the gradient of the image I(x, y) along the gradient direction, where |∇I(x, y)| denotes the gradient magnitude of I(x, y);
wherein the parameter a is a weight control parameter.
3. The method of claim 1, wherein the contour sharpness energy function E_sharp is obtained from the gradient profile sharpness, the gradient profile sharpness σ(p(q0)) being the root mean square of the variance of the gradient profile variable; the gradient profile is the gradient magnitude curve along the one-dimensional path p(q0) obtained by starting at an edge pixel q0(x0, y0) in the image and tracing along the gradient direction to the boundary of the edge until the gradient magnitude no longer changes; in the sharpness definition, d_c(q, q0) is the curve length between points q and q0 on the gradient profile, G(q) is the gradient magnitude at point q, G(q0) is the sum of the gradient magnitudes of all points on the gradient profile, s is any point on the gradient profile, and b is a weight control parameter.
CN201510786727.1A 2015-11-16 2015-11-16 Monocular image depth information method of estimation based on contour acuity Active CN105374039B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510786727.1A CN105374039B (en) 2015-11-16 2015-11-16 Monocular image depth information method of estimation based on contour acuity

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510786727.1A CN105374039B (en) 2015-11-16 2015-11-16 Monocular image depth information method of estimation based on contour acuity

Publications (2)

Publication Number Publication Date
CN105374039A CN105374039A (en) 2016-03-02
CN105374039B true CN105374039B (en) 2018-09-21

Family

ID=55376211

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510786727.1A Active CN105374039B (en) 2015-11-16 2015-11-16 Monocular image depth information method of estimation based on contour acuity

Country Status (1)

Country Link
CN (1) CN105374039B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107204010B (en) * 2017-04-28 2019-11-19 中国科学院计算技术研究所 A kind of monocular image depth estimation method and system
CN107582001B (en) * 2017-10-20 2020-08-11 珠海格力电器股份有限公司 Dish washing machine and control method, device and system thereof
CN108647713B (en) * 2018-05-07 2021-04-02 宁波华仪宁创智能科技有限公司 Embryo boundary identification and laser track fitting method
TWI678681B (en) 2018-05-15 2019-12-01 緯創資通股份有限公司 Method, image processing device, and system for generating depth map
CN108932734B (en) 2018-05-23 2021-03-09 浙江商汤科技开发有限公司 Monocular image depth recovery method and device and computer equipment
CN109087346B (en) * 2018-09-21 2020-08-11 北京地平线机器人技术研发有限公司 Monocular depth model training method and device and electronic equipment
CN112446946B (en) * 2019-08-28 2024-07-09 深圳市光鉴科技有限公司 Depth reconstruction method, system, equipment and medium based on sparse depth and boundary
CN112396645B (en) * 2020-11-06 2022-05-31 华中科技大学 Monocular image depth estimation method and system based on convolution residual learning
CN116503821B (en) * 2023-06-19 2023-08-25 成都经开地理信息勘测设计院有限公司 Road identification recognition method and system based on point cloud data and image recognition

Citations (3)

Publication number Priority date Publication date Assignee Title
CN101840574A (en) * 2010-04-16 2010-09-22 西安电子科技大学 Depth estimation method based on edge pixel features
CN102883175A (en) * 2012-10-23 2013-01-16 青岛海信信芯科技有限公司 Methods for extracting depth map, judging video scene change and optimizing edge of depth map
CN103793918A (en) * 2014-03-07 2014-05-14 深圳市辰卓科技有限公司 Image definition detecting method and device

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
US8184196B2 (en) * 2008-08-05 2012-05-22 Qualcomm Incorporated System and method to generate depth data using edge detection
US8248410B2 (en) * 2008-12-09 2012-08-21 Seiko Epson Corporation Synthesizing detailed depth maps from images

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
CN101840574A (en) * 2010-04-16 2010-09-22 西安电子科技大学 Depth estimation method based on edge pixel features
CN102883175A (en) * 2012-10-23 2013-01-16 青岛海信信芯科技有限公司 Methods for extracting depth map, judging video scene change and optimizing edge of depth map
CN103793918A (en) * 2014-03-07 2014-05-14 深圳市辰卓科技有限公司 Image definition detecting method and device

Non-Patent Citations (2)

Title
Yong Ju Jung et al., "A novel 2D-to-3D conversion technique based on relative height depth cue," Proceedings of the SPIE, 2009-02-18, pp. 1–8. *
Natalia Neverova et al., "Edge Based Method for Sharp Region Extraction from Low Depth of Field Images," Visual Communications and Image Processing (VCIP), 2012 IEEE, 2012-11, pp. 1–6. *

Also Published As

Publication number Publication date
CN105374039A (en) 2016-03-02

Similar Documents

Publication Publication Date Title
CN105374039B (en) Monocular image depth information method of estimation based on contour acuity
US11983893B2 (en) Systems and methods for hybrid depth regularization
Huang et al. Indoor depth completion with boundary consistency and self-attention
CN107025660B (en) Method and device for determining image parallax of binocular dynamic vision sensor
EP2087466B1 (en) Generation of depth map for an image
CN105404888B (en) The conspicuousness object detection method of color combining and depth information
US20180231871A1 (en) Depth estimation method for monocular image based on multi-scale CNN and continuous CRF
CN107622480B (en) Kinect depth image enhancement method
CN103927016A (en) Real-time three-dimensional double-hand gesture recognition method and system based on binocular vision
CN102436671B (en) Virtual viewpoint drawing method based on depth value non-linear transformation
CN103473743B (en) A kind of method obtaining image depth information
WO2018053952A1 (en) Video image depth extraction method based on scene sample library
CN105023253A (en) Visual underlying feature-based image enhancement method
KR101478709B1 (en) Method and apparatus for extracting and generating feature point and feature descriptor rgb-d image
CN104036481A (en) Multi-focus image fusion method based on depth information extraction
Loghman et al. SGM-based dense disparity estimation using adaptive census transform
US9171357B2 (en) Method, apparatus and computer-readable recording medium for refocusing photographed image
KR101125061B1 (en) A Method For Transforming 2D Video To 3D Video By Using LDI Method
Fan et al. Collaborative three-dimensional completion of color and depth in a specified area with superpixels
CN103198464A (en) Human face video light and shadow migration generation method based on single reference video
CN107194931A (en) It is a kind of that the method and system for obtaining target depth information is matched based on binocular image
CN108305269B (en) Image segmentation method and system for binocular image
KR101626679B1 (en) Method for generating stereoscopic image from 2D image and for medium recording the same
CN118379436B (en) Three-dimensional virtual scene generation method, device, equipment and storage medium
Dickson et al. User-centred Depth Estimation Benchmarking for VR Content Creation from Single Images.

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant