CN109064512B - Multi-calibration-point coordinate value detection method for interactive classroom teaching system - Google Patents


Info

Publication number
CN109064512B
Authority
CN
China
Prior art keywords
image
calibration
container
calibration points
point
Prior art date
Legal status: Active
Application number
CN201810852775.XA
Other languages
Chinese (zh)
Other versions
CN109064512A (en)
Inventor
林传文
汪俊锋
高祥
张巧云
Current Assignee
Anhui Huishi Jintong Technology Co ltd
Original Assignee
Anhui Huishi Jintong Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Anhui Huishi Jintong Technology Co ltd
Priority to CN201810852775.XA
Publication of CN109064512A
Application granted
Publication of CN109064512B

Classifications

    • G06T 7/70: Image analysis; determining position or orientation of objects or cameras
    • G06F 3/0418: Control or interface arrangements specially adapted for digitisers, for error correction or compensation, e.g. based on parallax, calibration or alignment
    • G06T 7/11: Region-based segmentation
    • G06T 7/136: Segmentation involving thresholding
    • G06T 7/187: Segmentation involving region growing, region merging, or connected component labelling
    • G06T 7/194: Segmentation involving foreground-background segmentation
    • G06T 7/66: Analysis of geometric attributes of image moments or centre of gravity

Abstract

The invention discloses a method for detecting the coordinate values of multiple calibration points in an interactive classroom teaching system, and relates to the field of coordinate detection. The method comprises the following steps. Step S01: collect a current image, a background image, and a mask image. Step S02: image processing, namely traversing each pixel of the current image, the background image, and the mask image, and extracting the connected domain of each calibration point in the foreground image. Step S03: obtain the calibration-point centroids, i.e. after judging the area of each connected domain, calculate the central moments of the remaining connected domains to obtain the centroid coordinate of each calibration point on the foreground image. Step S04: obtain the coordinate values corresponding to the calibration points. The invention collects the images captured by the camera and the projector, processes them to obtain connected domains, derives the centroid coordinate of each calibration point from its connected domain, and computes the coordinate value corresponding to each calibration point through traversal, slope, and sorting operations. Because multiple calibration points are obtained at one time, the whole calibration process takes less time and the calibration efficiency is improved.

Description

Multi-calibration-point coordinate value detection method for interactive classroom teaching system
Technical Field
The invention belongs to the field of coordinate detection, and particularly relates to a multi-calibration-point coordinate value detection method of an interactive classroom teaching system.
Background
The interactive classroom teaching system integrates a computer, a projector, a camera, an infrared emitter, and other products to realize an interactive function. The interactive projection equipment allows a user to touch the projection surface with a finger or an infrared stylus and thereby achieve the same effect as operating the computer screen. Automatic calibration calculates the mapping relationship between the projection plane and the computer screen, so that operations on the projection plane can simulate mouse events and realize the interactive function. During automatic calibration, coordinate values of different positions on the screen need to be obtained; each position corresponds to a calibration point, and each calibration point is a drawn circle.
However, the conventional method for detecting calibration-point coordinate values can only detect a single calibration point at a time; when multiple coordinate values are required, one calibration point is detected at each fixed interval. Such a calibration process takes about 1-2 minutes to detect 49 calibration points, which is time-consuming and inefficient.
Therefore, there is a need for a method that can acquire the coordinate values of multiple calibration points at one time and thus save the time the system spends on the whole coordinate calibration.
Disclosure of Invention
The invention aims to provide a method for detecting coordinate values of multiple calibration points of an interactive classroom teaching system.
In order to solve the technical problems, the invention is realized by the following technical scheme:
the invention relates to a method for detecting coordinate values of multiple calibration points of an interactive classroom teaching system, which comprises the following steps:
step S01: collecting samples, wherein collected sample information comprises a current image, a background image and a mask image;
step S02: image processing, namely traversing each pixel of the current image, the background image and the mask image, calculating to obtain a foreground image, and extracting a connected domain of each calibration point in the foreground image;
step S03: obtaining the calibration-point centroids: after the area of each connected domain is judged, the connected domains with areas smaller than the threshold value are excluded, the central moments of the remaining connected domains are calculated to obtain the centroid coordinates of each calibration point on the foreground image, and the coordinates are placed in a centers container;
step S04: obtaining coordinate values corresponding to the calibration points;
in step S04, the specific steps of obtaining the coordinate values corresponding to the calibration points are as follows:
step S041: traversing the centroid coordinates of the calibration points in the centers container and sorting the ordinate y of each calibration point;
step S042: taking centers[0] as the base point, calculating the absolute value of the slope between the base point and every other calibration point in the centers container, storing the values in a slopecompare container and, sorted in ascending order, in a slopecount container, and obtaining the number of other calibration points in the same row as the base point;
step S043: extracting the calibration points located in the same row into the same_row container, and placing the remaining calibration points back into the centers container;
step S044: in the outermost for loop, traversing to obtain the abscissas of one row of calibration points in the same_row container, arranging them in ascending order, putting them in one-to-one correspondence with the ordinates y of the calibration points in the centers container from step S041, and storing the coordinates in the counts_row two-dimensional container;
step S045: traversing the whole counts_row two-dimensional container, putting the abscissa x of each calibration point into a center_x array and the ordinate y of each calibration point into a center_y array, thereby obtaining the coordinate value of each corresponding calibration point.
Preferably, in step S01, the current image is an image of a plurality of calibration points captured by a camera; the background image is the camera image with the calibration points removed; and the mask image is an image of the operation range of the projector's projection surface.
Preferably, in step S02, the specific process of extracting the connected component is as follows:
step 1: traversing each pixel of the current image, the background image and the mask image;
step 2: subtracting the pixel value of the background image from the pixel value of the current image to obtain a difference image;
step 3: performing an AND operation between the pixel values of the difference image and the pixel values of the mask image to obtain a foreground image;
step 4: extracting the connected domain of each calibration point in the foreground image.
Preferably, in step S03, the threshold value ranges from 4 to 6; connected domains with areas smaller than or equal to the threshold value are excluded, and connected domains with areas larger than the threshold value are retained.
Preferably, in step S042, the calculation formula of the slope is:
slope=abs((pt1.y-pt2.y)/(pt1.x-pt2.x+0.000001f));
in the formula, slope is the angular coefficient, i.e. the slope; abs() is the absolute-value function; pt1.y-pt2.y is the difference of the ordinates of the two calibration points; and pt1.x-pt2.x is the difference of the abscissas of the two calibration points.
Preferably, in step S043, a two-layer for loop is used: the outer for loop traverses the data in the slopecompare container, the inner for loop traverses the data in the slopecount container, and each time one element of the slopecompare container is traversed, the whole slopecount container is traversed again.
Preferably, each container is a sequential container encapsulating a dynamic-size array, used for storing objects of various types.
The invention has the following beneficial effects:
according to the invention, the images captured by the camera and the projector are collected, the connected domain is obtained by processing the images, the centroid coordinate of each calibration point is obtained according to the connected domain, the coordinate value corresponding to each calibration point is calculated by algorithms such as traversal, slope, sequencing and the like, a plurality of calibration points are obtained at one time, the time of the whole calibration process is saved, and the calibration efficiency is improved.
Of course, it is not necessary for any product in which the invention is practiced to achieve all of the above-described advantages at the same time.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a step diagram of a method for detecting coordinate values of multiple calibration points of an interactive classroom teaching system according to the present invention;
FIG. 2 is a current image acquired by a camera;
FIG. 3 is a mask image acquired by a projector;
FIG. 4 is a schematic diagram of an image processing structure;
fig. 5 is a schematic diagram of the principle of traversing the slopecompare container and the slopecount container.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, the invention relates to a method for detecting coordinate values of multiple calibration points in an interactive classroom teaching system, comprising the following steps:
step S01: collecting samples, wherein collected sample information comprises a current image, a background image and a mask image;
step S02: image processing, namely traversing each pixel of the current image, the background image and the mask image, calculating to obtain a foreground image, and extracting a connected domain of each calibration point in the foreground image;
step S03: obtaining the calibration-point centroids: after the area of each connected domain is judged, the connected domains with areas smaller than a threshold value are excluded, the central moments of the remaining connected domains are calculated to obtain the centroid coordinate of each calibration point on the foreground image, and the coordinates are placed in a vector<Point2d> centers container;
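The centroid of each remaining connected domain follows from its image moments. As a rough sketch (not the patent's code; the pixel-list representation and the function name are assumptions for illustration), the centroid is obtained from the zeroth-order moment M00 and the first-order moments M10 and M01:

```cpp
#include <utility>
#include <vector>

// Hypothetical sketch: a connected domain is modeled as a list of (x, y)
// pixel coordinates. The centroid is (M10/M00, M01/M00), where M00 is the
// pixel count and M10, M01 are the sums of x and y respectively.
std::pair<double, double> domain_centroid(
        const std::vector<std::pair<int, int>>& pixels) {
    double m00 = 0.0, m10 = 0.0, m01 = 0.0;
    for (const auto& p : pixels) {
        m00 += 1.0;        // zeroth-order moment: area in pixels
        m10 += p.first;    // first-order moment in x
        m01 += p.second;   // first-order moment in y
    }
    return {m10 / m00, m01 / m00};
}
```

In practice this corresponds to what an image-processing library computes from a labelled connected domain; the sketch only shows where the centroid coordinates placed in the centers container come from.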
step S04: obtaining coordinate values corresponding to the calibration points;
in step S04, the specific steps of obtaining the coordinate values corresponding to the calibration points are as follows:
step S041: traversing the centroid coordinates of the calibration points in the centers container and sorting the ordinate y of each calibration point, the sorting being implemented with the sort() function; referring to fig. 2, if the top-left calibration point is identified as the first calibration point, the points are numbered 1, 2, 3, 4, 5, 6, and 7 from left to right and sorted in ascending order; if the bottom-left calibration point is identified as the first calibration point, the points are numbered 1, 2, 3, 4, 5, 6, and 7 from left to right and sorted in descending order; when the image captured by the camera has the same orientation as the image projected onto the curtain board, the top-left corner is the first point, and when the captured image is rotated 180 degrees relative to the projected image, the bottom-left corner is the first point;
step S042: taking centers[0] as the base point, calculating the absolute value of the slope between it and every other calibration point in the centers container, storing the values in a slopecompare container, and storing a copy sorted in ascending order in a vector<double> slopecount container; this makes it possible to distinguish crosstalk between rows of calibration points and to obtain the number of other calibration points in the same row as the base point; as can be seen from fig. 2, the 6 corresponding calibration points are located in the same row;
step S043: extracting the calibration points located in the same row into a vector<Point2d> same_row container, and placing the remaining calibration points back into the centers container;
step S044: in the outermost for loop, traversing to obtain the abscissas of one row of calibration points in the same_row container, arranging them in ascending order with the sort() function, putting them in one-to-one correspondence with the ordinates y of the calibration points in the centers container from step S041, and storing them in the counts_row two-dimensional container;
step S045: traversing the whole counts_row two-dimensional container, putting the abscissa x of each calibration point into a center_x array and the ordinate y of each calibration point into a center_y array, thereby obtaining the coordinate value of each corresponding calibration point.
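Steps S041 to S045 can be sketched as follows. This is an illustrative reconstruction, not the patent's code: the container names follow the description, but the same-row test here uses a small slope tolerance directly, rather than the patent's value matching between the slopecount and slopecompare containers.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Point2d { double x, y; };

// Sketch of S041-S045: sort centroids by y, then repeatedly take the first
// point as the base of a row, pull every point whose |slope| to the base is
// near zero into same_row, sort that row by x, and append it to counts_row.
std::vector<std::vector<Point2d>> group_rows(std::vector<Point2d> centers,
                                             double slope_tol = 0.1) {
    // S041: sort all centroid coordinates by ordinate y.
    std::sort(centers.begin(), centers.end(),
              [](const Point2d& a, const Point2d& b) { return a.y < b.y; });
    std::vector<std::vector<Point2d>> counts_row;  // two-dimensional container
    while (!centers.empty()) {
        const Point2d base = centers.front();  // S042: centers[0] is the base
        std::vector<Point2d> same_row, rest;
        for (const Point2d& p : centers) {
            // Epsilon as in the patent's slope formula avoids division by zero.
            double slope = std::fabs((p.y - base.y) / (p.x - base.x + 0.000001));
            // S043: a near-horizontal slope means the same row as the base.
            if (slope < slope_tol) same_row.push_back(p);
            else rest.push_back(p);  // remaining points go back into centers
        }
        // S044: arrange the row's abscissas in ascending order.
        std::sort(same_row.begin(), same_row.end(),
                  [](const Point2d& a, const Point2d& b) { return a.x < b.x; });
        counts_row.push_back(same_row);
        centers = rest;
    }
    return counts_row;
}
```

The returned rows then only need to be flattened into the center_x and center_y arrays of step S045.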
In step S01, the current image, shown in fig. 2, is an image in which the camera captures a plurality of calibration points; the background image is the camera image with the calibration points removed; and the mask image, shown in fig. 3, is an image of the operation range of the projector's projection surface.
Referring to fig. 4, in step S02, the specific process of extracting the connected component is as follows:
step 1: traversing each pixel of the current image, the background image and the mask image;
step 2: subtracting the pixel value of the background image from the pixel value of the current image to obtain a difference image;
step 3: performing an AND operation between the pixel values of the difference image and the pixel values of the mask image to obtain a foreground image, the AND operation eliminating interference with the calibration points on the current image;
step 4: extracting the connected domain of each calibration point in the foreground image.
in step S03, the threshold value range is 5; and removing the connected domain with the area less than or equal to 5, and filtering the interference points by one step to reserve the connected domain with the area greater than the threshold value.
In step S042, the calculation formula of the slope is:
slope=abs((pt1.y-pt2.y)/(pt1.x-pt2.x+0.000001f));
in the formula, slope is the angular coefficient, i.e. the slope; abs() is the absolute-value function; pt1.y-pt2.y is the difference of the ordinates of the two calibration points; and pt1.x-pt2.x is the difference of the abscissas of the two calibration points.
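The formula maps directly to a small helper (the Pt struct is an assumption standing in for the patent's point type):

```cpp
#include <cmath>

struct Pt { double x, y; };

// Absolute slope between two calibration points; the tiny constant added to
// the x-difference prevents division by zero for points in the same column.
double slope_abs(const Pt& pt1, const Pt& pt2) {
    return std::fabs((pt1.y - pt2.y) / (pt1.x - pt2.x + 0.000001f));
}
```

Points in the same row yield a slope near zero, which is what the row-detection step relies on.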
Referring to fig. 5, in step S043, each time one element of the slopecompare container is traversed, the slopecount container needs to be traversed again; arrow 1 represents the traversal direction of slopecount, and since the calibration points corresponding to the first 6 slopes are located in the same row, 6 traversals are needed; arrow 2 represents the traversal direction of slopecompare; a two-layer for loop is used, in which the outer for loop traverses the data in the slopecompare container and the inner for loop traverses the data in the slopecount container; if a value in the slopecompare container equals a value found in the slopecount container, the corresponding calibration point is placed in the same_row container, and otherwise it is placed back into the centers container.
Each container is a sequential container encapsulating a dynamic-size array, used for storing objects of various types.
It should be noted that, in the above system embodiment, each included unit is only divided according to functional logic, but is not limited to the above division as long as the corresponding function can be implemented; in addition, specific names of the functional units are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present invention.
In addition, it is understood by those skilled in the art that all or part of the steps in the method for implementing the embodiments described above may be implemented by a program instructing associated hardware, and the corresponding program may be stored in a computer-readable storage medium.
The preferred embodiments of the invention disclosed above are intended to be illustrative only. The preferred embodiments are not intended to be exhaustive or to limit the invention to the precise embodiments disclosed. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and the practical application, to thereby enable others skilled in the art to best utilize the invention. The invention is limited only by the claims and their full scope and equivalents.

Claims (6)

1. A method for detecting coordinate values of multiple calibration points of an interactive classroom teaching system is characterized by comprising the following steps:
step S01: collecting samples, wherein collected sample information comprises a current image, a background image and a mask image;
step S02: image processing, namely traversing each pixel of the current image, the background image and the mask image, calculating to obtain a foreground image, and extracting a connected domain of each calibration point in the foreground image;
step S03: obtaining the calibration-point centroids: after the area of each connected domain is judged, the connected domains with areas smaller than the threshold value are excluded, the central moments of the remaining connected domains are calculated to obtain the centroid coordinates of each calibration point on the foreground image, and the coordinates are placed in a centers container;
step S04: obtaining coordinate values corresponding to the calibration points;
in step S04, the specific steps of obtaining the coordinate values corresponding to the calibration points are as follows:
step S041: traversing the centroid coordinates of the calibration points in the centers container and sorting the ordinate y of each calibration point;
step S042: taking centers[0] as the base point, calculating the absolute value of the slope between the base point and every other calibration point in the centers container, storing the values in a slopecompare container and, sorted in ascending order, in a slopecount container, and obtaining the number of other calibration points in the same row as the base point;
step S043: extracting the calibration points located in the same row into the same_row container, and placing the remaining calibration points back into the centers container;
step S044: in the outermost for loop, traversing to obtain the abscissas of one row of calibration points in the same_row container, arranging them in ascending order, putting them in one-to-one correspondence with the ordinates y of the calibration points in the centers container from step S041, and storing the coordinates in the counts_row two-dimensional container;
step S045: traversing the whole counts_row two-dimensional container, putting the abscissa x of each calibration point into a center_x array and the ordinate y of each calibration point into a center_y array, thereby obtaining the coordinate value of each corresponding calibration point;
in step S043, a two-layer for loop is used: the outer for loop traverses the data in the slopecompare container, the inner for loop traverses the data in the slopecount container, and each time one element of the slopecompare container is traversed, the whole slopecount container is traversed again.
2. The method for detecting coordinate values of multiple calibration points in an interactive classroom teaching system as claimed in claim 1, wherein in said step S01, said current image is an image of a plurality of calibration points captured by a camera; the background image is the camera image with the calibration points removed; and the mask image is an image of the operation range of the projector's projection surface.
3. The method for detecting coordinate values of multiple calibration points in an interactive classroom teaching system as claimed in claim 1, wherein in step S02, the specific process of extracting connected components is as follows:
step 1: traversing each pixel of the current image, the background image and the mask image;
step 2: subtracting the pixel value of the background image from the pixel value of the current image to obtain a difference image;
and step 3: carrying out AND operation on the pixel value of the difference image and the pixel value of the mask image to obtain a foreground image;
and 4, step 4: and extracting the connected domain of each index point of the foreground image.
4. The method for detecting coordinate values of multiple calibration points in an interactive classroom teaching system as claimed in claim 1, wherein in said step S03, the threshold value is in the range of 4-6; connected domains with areas smaller than or equal to the threshold value are excluded, and connected domains with areas larger than the threshold value are retained.
5. The method according to claim 1, wherein in step S042, the slope is calculated by the following formula:
slope=abs((pt1.y-pt2.y)/(pt1.x-pt2.x+0.000001f));
in the formula, slope is the angular coefficient, i.e. the slope; abs() is the absolute-value function; pt1.y-pt2.y is the difference of the ordinates of the two calibration points; and pt1.x-pt2.x is the difference of the abscissas of the two calibration points.
6. The method for detecting coordinate values of multiple calibration points in an interactive classroom teaching system as claimed in any one of claims 1-5, wherein each container is a sequential container encapsulating a dynamic-size array for storing objects of various types.
CN201810852775.XA 2018-07-30 2018-07-30 Multi-calibration-point coordinate value detection method for interactive classroom teaching system Active CN109064512B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810852775.XA CN109064512B (en) 2018-07-30 2018-07-30 Multi-calibration-point coordinate value detection method for interactive classroom teaching system


Publications (2)

Publication Number Publication Date
CN109064512A CN109064512A (en) 2018-12-21
CN109064512B (en) 2022-01-04

Family

ID=64831760

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810852775.XA Active CN109064512B (en) 2018-07-30 2018-07-30 Multi-calibration-point coordinate value detection method for interactive classroom teaching system

Country Status (1)

Country Link
CN (1) CN109064512B (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102034236B (en) * 2010-12-01 2012-12-26 北京航空航天大学 Multi-camera layered calibration method based on one-dimensional object
CN102063718B (en) * 2010-12-24 2012-10-10 江南大学 Field calibration and precision measurement method for spot laser measuring system
CN102567989A (en) * 2011-11-30 2012-07-11 重庆大学 Space positioning method based on binocular stereo vision
CN104794704B (en) * 2015-03-27 2017-11-17 华为技术有限公司 A kind of calibrating template, template detection method, apparatus and terminal
US10235771B2 (en) * 2016-11-11 2019-03-19 Qualcomm Incorporated Methods and systems of performing object pose estimation

Also Published As

Publication number Publication date
CN109064512A (en) 2018-12-21

Similar Documents

Publication Publication Date Title
CN104616275B (en) A kind of defect inspection method and device
CN107657244B (en) Human body falling behavior detection system based on multiple cameras and detection method thereof
EP2709039A1 (en) Device and method for detecting the presence of a logo in a picture
CN106845383A People's head inspecting method and device
US20130243343A1 (en) Method and device for people group detection
CN109059770B (en) Wrapping volume measuring method based on TOF depth camera
EP3709266A1 (en) Human-tracking methods, apparatuses, systems, and storage media
AU2015354783B2 (en) System for real-time moving target detection using vision based image segmentation
CN107004265A (en) Information processor, the method for processing information, discriminator generating means, the method and program for generating discriminator
CN105868708A (en) Image object identifying method and apparatus
CN107247926B (en) A kind of human body detecting method and device
CN110533654A (en) The method for detecting abnormality and device of components
CN103065163B (en) A kind of fast target based on static images detects recognition system and method
CN110210428B (en) MSER-based smoke root node detection method in remote complex environment
CN111223129A (en) Detection method, detection device, monitoring equipment and computer readable storage medium
CN110580481A (en) Light field image key position detection method based on EPI
CN106067031B (en) Based on artificial mechanism for correcting errors and deep learning network cooperation machine vision recognition system
CN111383244A (en) Target detection tracking method
CN109064512B (en) Multi-calibration-point coordinate value detection method for interactive classroom teaching system
CN113065454B (en) High-altitude parabolic target identification and comparison method and device
CN106845378A Method for recognizing a human body target in an image
US7440636B2 (en) Method and apparatus for image processing
CN106526651A (en) Detector crystal position table establishing method and detector crystal position table establishing system
CN109978916B (en) Vibe moving target detection method based on gray level image feature matching
CN110866917A (en) Tablet type and arrangement mode identification method based on machine vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 230000 Yafu Park, Juchao Economic Development Zone, Chaohu City, Hefei City, Anhui Province

Applicant after: ANHUI HUISHI JINTONG TECHNOLOGY Co.,Ltd.

Address before: 230000 Room 102, 1st Floor, C District, Science Park, Hefei National University, 602 Huangshan Road, Hefei High-tech Zone, Anhui Province

Applicant before: ANHUI HUISHI JINTONG TECHNOLOGY Co.,Ltd.

GR01 Patent grant