CN106683501B - AR children's scene role-playing projection teaching method and system - Google Patents

AR children's scene role-playing projection teaching method and system Download PDF

Info

Publication number
CN106683501B
CN106683501B (application CN201611213221.2A)
Authority
CN
China
Prior art keywords
user
image
model
projection
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611213221.2A
Other languages
Chinese (zh)
Other versions
CN106683501A (en)
Inventor
伍永豪
赵亚丁
彭泉
曾贵平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Molio Network Co Ltd
Original Assignee
Wuhan Molio Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Molio Network Co Ltd filed Critical Wuhan Molio Network Co Ltd
Priority to CN201611213221.2A priority Critical patent/CN106683501B/en
Publication of CN106683501A publication Critical patent/CN106683501A/en
Application granted granted Critical
Publication of CN106683501B publication Critical patent/CN106683501B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 5/00: Electrically-operated educational appliances
    • G09B 5/06: Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
    • G09B 5/067: Combinations of audio and projected visual presentation, e.g. film, slides
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Abstract

The invention discloses an AR children's scene role-playing projection teaching method and system. The method includes: acquiring an AR interactive card image, a user face image, real-time user limb motion data, and user speech; recognizing the information in the AR interactive card image and calling the corresponding 3D scene play template, where the 3D scene play template includes a 3D character model and a background model, and the 3D character model consists of a face model and a limb model; cropping the user face image and compositing it onto the face model; performing data interaction between the real-time user limb motion data and the limb model to control the limb movements of the 3D character model; and converting the called 3D scene play template into a projection on a screen. Beneficial effects: by compositing the face image onto the face model and interacting the user's limb motion data with the limb model, the character in the scene play takes on part of the user's facial features and performs actions that follow the user's movements, giving strong human-computer interaction and a strong sense of story immersion.

Description

AR children's scene role-playing projection teaching method and system
Technical field
The present invention relates to the field of AR projection, and more particularly to an AR children's scene role-playing projection teaching method and system.
Background technique
With the rapid development of information technology, video teaching has entered children's education and strongly supports educational activities for children. Children have broad interests, but their thinking is at the concrete-image stage, so relatively abstract content is hard for them to absorb; promoting the development of concrete-image thinking is the main route to developing children's learning potential early. Video teaching, with its vivid and visual form of presentation, is well suited to this requirement of children's education.
At present, children's video teaching is mostly done by directly playing videos with a player. The children's sense of story immersion is weak, there is no human-computer interaction, the material is not engaging enough, and the teaching process is rather dull.
Summary of the invention
The object of the present invention is to overcome the above technical deficiencies by proposing an AR children's scene role-playing projection teaching method and system, thereby solving the technical problem in the prior art that children's video teaching lacks immersion and human-computer interaction.
To achieve the above technical purpose, the technical solution of the present invention provides an AR children's scene role-playing projection teaching method, which includes:
S1. Acquire an AR interactive card image, a user face image, real-time user limb motion data, and user speech, the real-time user limb motion data being acquired with a depth-sensing device;
S2. Recognize the information in the AR interactive card image and call the 3D scene play template corresponding to the AR interactive card, the 3D scene play template including a 3D character model and a background model, the 3D character model consisting of a face model and a limb model, and the background model being dynamic or static;
S3. Crop the user face image and composite the cropped face image onto the face model of the 3D character model;
S4. Perform data interaction between the real-time user limb motion data and the limb model of the 3D character model to control the limb movements of the 3D character model;
S5. Apply voice-changing processing to the user speech;
S6. Convert the 3D scene play template called in S2 into a projection on a projection screen, where the background model is converted into a dynamic or static background plane, the 3D character model is converted into a dynamic 3D character projection that follows the real-time user limb motion, and the voice-changed user speech is played during projection.
The technical solution of the present invention also provides an AR children's scene role-playing projection teaching system, which includes:
An acquisition module, which acquires an AR interactive card image, a user face image, real-time user limb motion data, and user speech, the real-time user limb motion data being acquired with a depth-sensing device;
A scene play selection module, which recognizes the information in the AR interactive card image and calls the 3D scene play template corresponding to the AR interactive card, the 3D scene play template including a 3D character model and a background model, the 3D character model consisting of a face model and a limb model, and the background model being dynamic or static;
A face image synthesis module, which crops the user face image and composites the cropped face image onto the face model of the 3D character model;
A limb motion synthesis module, which performs data interaction between the real-time user limb motion data and the limb model of the 3D character model to control the limb movements of the 3D character model;
A sound processing module, which applies voice-changing processing to the user speech;
A scene play projection module, which converts the 3D scene play template called by the scene play selection module into a projection on a projection screen, where the background model is converted into a dynamic or static background plane, the 3D character model is converted into a dynamic 3D character projection that follows the real-time user limb motion, and the voice-changed user speech is played during projection.
Compared with the prior art, the beneficial effects of the present invention are: different scene plays can be switched by switching AR interactive cards and projected onto the projection screen; by compositing the face image onto the 3D character model and interacting the real-time user limb motion data with the 3D character model, the character in the scene play takes on the user's facial features and performs actions that follow the user's movements, enabling strong human-computer interaction, a strong sense of story immersion for children, and high engagement.
Detailed description of the invention
Fig. 1 is a flow chart of the AR children's scene role-playing projection teaching method provided by the invention;
Fig. 2 is a structural diagram of the AR children's scene role-playing projection teaching system provided by the invention.
In the drawings: 1, AR children's scene role-playing projection teaching system; 11, acquisition module; 12, scene play selection module; 13, face image synthesis module; 14, limb motion synthesis module; 15, sound processing module; 16, scene play projection module.
Specific embodiment
To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is further described below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here only illustrate the present invention and do not limit it.
As shown in Fig. 1, the present invention provides an AR children's scene role-playing projection teaching method, which includes:
S1. Acquire an AR interactive card image, a user face image, real-time user limb motion data, and user speech, the real-time user limb motion data being acquired with a depth-sensing device;
S2. Recognize the information in the AR interactive card image and call the 3D scene play template corresponding to the AR interactive card, the 3D scene play template including a 3D character model and a background model, the 3D character model consisting of a face model and a limb model, and the background model being dynamic or static;
S3. Crop the user face image and composite the cropped face image onto the face model of the 3D character model;
S4. Perform data interaction between the real-time user limb motion data and the limb model of the 3D character model to control the limb movements of the 3D character model;
S5. Apply voice-changing processing to the user speech;
S6. Convert the 3D scene play template called in S2 into a projection on a projection screen, where the background model is converted into a dynamic or static background plane, the 3D character model is converted into a dynamic 3D character projection that follows the real-time user limb motion, and the voice-changed user speech is played during projection.
In the AR children's scene role-playing projection teaching method of the present invention, step S1 includes:
The side of the AR interactive card that carries the image is placed 10 cm to 15 cm in front of the camera, and the camera captures the AR interactive card image. After the information in the AR interactive card image has been recognized, the interactive card is removed from in front of the camera; the camera then captures the user face image, and a voice acquisition device captures the user speech.
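The patent describes only the capture and recognition of the card; as a rough sketch, the recognition could be done with a standard OpenCV feature-matching pass over a small library of known card images. The OpenCV calls below are real APIs, but the card file names, the distance cutoff, and the match threshold are assumptions and not the patented implementation.

    import cv2

    # Hypothetical card library: card image file -> 3D scene play template id.
    # The real card designs and the mapping are not disclosed in the patent.
    CARD_TEMPLATES = {
        "cards/story_card_a.png": "scene_play_001",
        "cards/story_card_b.png": "scene_play_002",
    }

    def recognize_card(frame_bgr, min_good_matches=30):
        """Return the template id of the card that best matches the camera frame,
        or None if no card is confidently recognized (threshold is an assumption)."""
        orb = cv2.ORB_create(nfeatures=1000)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        kp_f, des_f = orb.detectAndCompute(gray, None)
        if des_f is None:
            return None
        best_id, best_count = None, 0
        for path, template_id in CARD_TEMPLATES.items():
            card = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            if card is None:
                continue
            kp_c, des_c = orb.detectAndCompute(card, None)
            if des_c is None:
                continue
            matches = matcher.match(des_c, des_f)
            good = [m for m in matches if m.distance < 40]  # cutoff is an assumption
            if len(good) > best_count:
                best_id, best_count = template_id, len(good)
        return best_id if best_count >= min_good_matches else None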
In the AR children's scene role-playing projection teaching method of the present invention, step S2 includes:
Each AR interactive card carries a specific image; recognizing the information in that specific image calls a specific 3D scene play template, which in turn projects a specific background and 3D character;
Each 3D scene play template provides multiple 3D character models to choose from;
Switching the AR interactive card switches the 3D scene play template and thereby switches the projected scene play; a scene play consists of the characters and the background behind them.
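As an illustration of what "calling the corresponding 3D scene play template" could look like in code, the sketch below assumes a simple in-memory registry keyed by the recognized card id; the template ids, asset paths, and the data structure itself are illustrative and not taken from the patent.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ScenePlayTemplate:
        # Illustrative container: the patent only states that a template bundles a
        # dynamic or static background model plus 3D character models, each character
        # consisting of a face model and a limb model.
        background: str                      # e.g. a looping clip or a still image
        characters: List[str] = field(default_factory=list)  # character model assets

    # Hypothetical registry from recognized card id to its loaded scene play template.
    TEMPLATE_REGISTRY = {
        "scene_play_001": ScenePlayTemplate("backgrounds/forest_loop.mp4",
                                            ["models/wolf.fbx", "models/girl.fbx"]),
        "scene_play_002": ScenePlayTemplate("backgrounds/farmyard.png",
                                            ["models/pig_a.fbx", "models/pig_b.fbx"]),
    }

    def call_scene_play_template(template_id):
        """Return the 3D scene play template for the recognized card, or None."""
        return TEMPLATE_REGISTRY.get(template_id)

    # Swapping the card yields a different template_id, which swaps the whole scene play.
    print(call_scene_play_template("scene_play_001"))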
In the AR children's scene role-playing projection teaching method of the present invention, step S3 includes:
The user face image is processed with the Molioopencv technique: the facial contour is identified, the eyes, mouth, and nose are marked and the face is cropped accordingly, and before the cropped face image is composited onto the face model of the 3D character model, the cropped face image is given Q-version (cartoon-style) post-processing.
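The internals of the Molioopencv processing are not disclosed in the patent; purely as an illustration, a comparable crop can be obtained with the stock OpenCV Haar cascades shown below, where the cascade choice, the two-eye check, the padding, and the bilateral-filter "cartoon" pass are all assumptions.

    import cv2

    FACE_CASCADE = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    EYE_CASCADE = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")

    def crop_user_face(bgr_image, pad=0.15):
        """Detect the largest face, check that two eyes are visible, and return a
        padded face crop with a simple cartoon-style smoothing pass applied."""
        gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
        faces = FACE_CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            return None
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # keep the largest face

        eyes = EYE_CASCADE.detectMultiScale(gray[y:y + h, x:x + w])
        if len(eyes) < 2:
            return None        # require both eyes before compositing (assumption)

        dx, dy = int(w * pad), int(h * pad)                  # keep the full contour
        crop = bgr_image[max(y - dy, 0):y + h + dy, max(x - dx, 0):x + w + dx]

        # Rough stand-in for the Q-version post-processing: smooth colors, keep edges.
        return cv2.bilateralFilter(crop, d=9, sigmaColor=75, sigmaSpace=75)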
In the AR children's scene role-playing projection teaching method of the present invention, step S5 includes:
Voice-changing processing is applied to the captured user speech according to the needs of the scene play.
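The patent does not say how the voice is changed; a common way to match a user's voice to a scene-play character is a pitch shift, sketched below with librosa and soundfile purely as an example. The libraries, the semitone amount, and the file-based interface are assumptions.

    import librosa
    import soundfile as sf

    def change_voice(in_wav, out_wav, semitones=4.0):
        """Pitch-shift the recorded user speech so the voice better matches the
        scene-play character (illustrative voice-changing only)."""
        audio, sr = librosa.load(in_wav, sr=None, mono=True)
        shifted = librosa.effects.pitch_shift(audio, sr=sr, n_steps=semitones)
        sf.write(out_wav, shifted, sr)

    # Example: raise the captured speech by five semitones for a small-animal role.
    # change_voice("user_speech.wav", "character_speech.wav", semitones=5.0)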
In the AR children's scene role-playing projection teaching method of the present invention, step S6 includes:
The playback speed of the projection of the 3D scene play template is adjustable, and the projection of the 3D scene play template can be paused and stopped;
Through steps S3, S4, and S5, the face model of the 3D character model is combined with the cropped user face image and the limb model interacts with the real-time user limb motion data, so that when the 3D scene play template is converted into a projection, the character in the projection carries the user's facial features and performs actions that follow the user's movements; the projection includes the background, and the voice-changed user speech is played during projection.
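As an illustration of the data interaction of step S4 that drives the character during projection, joint positions reported by a depth-sensing device could be converted into per-bone rotations applied to the limb model; the joint names, bone pairs, and the 2D angle convention below are assumptions rather than the patented scheme.

    import math

    # Hypothetical bone list: (parent joint, child joint) pairs from the depth sensor.
    LIMB_BONES = [
        ("shoulder_left", "elbow_left"), ("elbow_left", "wrist_left"),
        ("shoulder_right", "elbow_right"), ("elbow_right", "wrist_right"),
        ("hip_left", "knee_left"), ("knee_left", "ankle_left"),
        ("hip_right", "knee_right"), ("knee_right", "ankle_right"),
    ]

    def limb_rotations(joints):
        """Map depth-sensor joint positions (name -> (x, y) in the sensor's image
        plane) to one 2D rotation angle per bone of the limb model, in degrees."""
        rotations = {}
        for parent, child in LIMB_BONES:
            if parent not in joints or child not in joints:
                continue                  # joint occluded or not tracked this frame
            px, py = joints[parent]
            cx, cy = joints[child]
            rotations[(parent, child)] = math.degrees(math.atan2(cy - py, cx - px))
        return rotations

    # Example frame: raising the left forearm changes the elbow-to-wrist bone angle,
    # which the limb motion synthesis step would apply to the 3D character model.
    frame = {"shoulder_left": (0.40, 0.50), "elbow_left": (0.50, 0.50),
             "wrist_left": (0.52, 0.38)}
    print(limb_rotations(frame))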
As shown in Fig. 2, the present invention provides an AR children's scene role-playing projection teaching system 1, which includes:
Acquisition module 11: acquires an AR interactive card image, a user face image, real-time user limb motion data, and user speech, the real-time user limb motion data being acquired with a depth-sensing device;
Scene play selection module 12: recognizes the information in the AR interactive card image and calls the 3D scene play template corresponding to the AR interactive card, the 3D scene play template including a 3D character model and a background model, the 3D character model consisting of a face model and a limb model, and the background model being dynamic or static;
Face image synthesis module 13: crops the user face image and composites the cropped face image onto the face model of the 3D character model;
Limb motion synthesis module 14: performs data interaction between the real-time user limb motion data and the limb model of the 3D character model to control the limb movements of the 3D character model;
Sound processing module 15: applies voice-changing processing to the user speech;
Scene play projection module 16: converts the 3D scene play template called by the scene play selection module 12 into a projection on a projection screen, where the background model is converted into a dynamic or static background plane, the 3D character model is converted into a dynamic 3D character projection that follows the real-time user limb motion, and the voice-changed user speech is played during projection.
In the AR children's scene role-playing projection teaching system 1 of the present invention, in the scene play selection module 12:
Each AR interactive card carries a specific image; recognizing the information in that specific image calls a specific 3D scene play template, which in turn projects a specific background and 3D character;
Each 3D scene play template provides multiple 3D character models to choose from.
In the AR children's scene role-playing projection teaching system 1 of the present invention, in the face image synthesis module 13:
The user face image is processed with the Molioopencv technique: the facial contour is identified, the eyes, mouth, and nose are marked and the face is cropped accordingly, and before the cropped face image is composited onto the face model of the 3D character model, the cropped face image is given Q-version (cartoon-style) post-processing.
In the AR children's scene role-playing projection teaching system 1 of the present invention, in the scene play projection module 16:
The playback speed of the projection of the 3D scene play template is adjustable, and the projection of the 3D scene play template can be paused and stopped.
In use, the present invention acquires an AR interactive card image, a user face image, real-time user limb motion data, and user speech, the real-time user limb motion data being acquired with a depth-sensing device; recognizes the information in the AR interactive card image and calls the 3D scene play template corresponding to the AR interactive card, the 3D scene play template including a 3D character model and a background model, the 3D character model consisting of a face model and a limb model, and the background model being dynamic or static; crops the user face image and composites the cropped face image onto the face model of the 3D character model; performs data interaction between the real-time user limb motion data and the limb model of the 3D character model to control the limb movements of the 3D character model; applies voice-changing processing to the user speech; and converts the called 3D scene play template into a projection on the projection screen, where the background model is converted into a dynamic or static background plane, the 3D character model is converted into a dynamic 3D character projection that follows the real-time user limb motion, and the voice-changed user speech is played during projection.
The beneficial effects of the present invention are that different scene plays can be switched by switching AR interactive cards and projected onto the projection screen; by compositing the face image onto the 3D character model and interacting the real-time user limb motion data with the 3D character model, the character in the scene play takes on the user's facial features and performs actions that follow the user's movements, enabling strong human-computer interaction, a strong sense of story immersion for children, and high engagement.
The specific embodiments of the present invention described above are not intended to limit the scope of the present invention. Any other changes and modifications made on the basis of the technical concept of the present invention shall fall within the protection scope of the claims of the present invention.

Claims (6)

1. An AR children's scene role-playing projection teaching method, characterized by comprising:
S1. acquiring an AR interactive card image, a user face image, real-time user limb motion data, and user speech, the real-time user limb motion data being acquired with a depth-sensing device;
S2. recognizing the information in the AR interactive card image and calling the 3D scene play template corresponding to the AR interactive card, the 3D scene play template including a 3D character model and a background model, the 3D character model consisting of a face model and a limb model, and the background model being dynamic or static;
S3. cropping the user face image and compositing the cropped face image onto the face model of the 3D character model, which comprises processing the user face image with the Molioopencv technique, identifying the facial contour, then marking and cropping the eyes, mouth, and nose, and, before the cropped face image is composited onto the face model of the 3D character model, applying Q-version (cartoon-style) post-processing to the cropped face image;
S4. performing data interaction between the real-time user limb motion data and the limb model of the 3D character model to control the limb movements of the 3D character model;
S5. applying voice-changing processing to the user speech;
S6. converting the 3D scene play template called in S2 into a projection on a projection screen, wherein the background model is converted into a dynamic or static background plane, the 3D character model is converted into a dynamic 3D character projection that follows the real-time user limb motion, and the voice-changed user speech is played during projection.
2. The AR children's scene role-playing projection teaching method of claim 1, characterized in that step S2 includes:
each AR interactive card carries a specific image, and recognizing the information in the specific image calls a specific 3D scene play template, which in turn projects a specific background and 3D character;
each 3D scene play template provides multiple 3D character models to choose from.
3. The AR children's scene role-playing projection teaching method of claim 1, characterized in that step S6 includes:
the playback speed of the projection of the 3D scene play template is adjustable, and the projection of the 3D scene play template can be paused and stopped.
4. An AR children's scene role-playing projection teaching system, characterized by comprising:
an acquisition module that acquires an AR interactive card image, a user face image, real-time user limb motion data, and user speech, the real-time user limb motion data being acquired with a depth-sensing device;
a scene play selection module that recognizes the information in the AR interactive card image and calls the 3D scene play template corresponding to the AR interactive card, the 3D scene play template including a 3D character model and a background model, the 3D character model consisting of a face model and a limb model, and the background model being dynamic or static;
a face image synthesis module that crops the user face image and composites the cropped face image onto the face model of the 3D character model, processing the user face image with the Molioopencv technique, identifying the facial contour, then marking and cropping the eyes, mouth, and nose, and, before the cropped face image is composited onto the face model of the 3D character model, applying Q-version (cartoon-style) post-processing to the cropped face image;
a limb motion synthesis module that performs data interaction between the real-time user limb motion data and the limb model of the 3D character model to control the limb movements of the 3D character model;
a sound processing module that applies voice-changing processing to the user speech;
a scene play projection module that converts the 3D scene play template called by the scene play selection module into a projection on a projection screen, wherein the background model is converted into a dynamic or static background plane, the 3D character model is converted into a dynamic 3D character projection that follows the real-time user limb motion, and the voice-changed user speech is played during projection.
5. The AR children's scene role-playing projection teaching system of claim 4, characterized in that in the scene play selection module:
each AR interactive card carries a specific image, and recognizing the information in the specific image calls a specific 3D scene play template, which in turn projects a specific background and 3D character;
each 3D scene play template provides multiple 3D character models to choose from.
6. The AR children's scene role-playing projection teaching system of claim 4, characterized in that in the scene play projection module:
the playback speed of the projection of the 3D scene play template is adjustable, and the projection of the 3D scene play template can be paused and stopped.
CN201611213221.2A 2016-12-23 2016-12-23 AR children's scene role-playing projection teaching method and system Active CN106683501B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611213221.2A CN106683501B (en) 2016-12-23 2016-12-23 AR children's scene role-playing projection teaching method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611213221.2A CN106683501B (en) 2016-12-23 2016-12-23 AR children's scene role-playing projection teaching method and system

Publications (2)

Publication Number Publication Date
CN106683501A CN106683501A (en) 2017-05-17
CN106683501B true CN106683501B (en) 2019-05-14

Family

ID=58870494

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611213221.2A Active CN106683501B (en) 2016-12-23 2016-12-23 AR children's scene role-playing projection teaching method and system

Country Status (1)

Country Link
CN (1) CN106683501B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109039851B (en) * 2017-06-12 2020-12-29 腾讯科技(深圳)有限公司 Interactive data processing method and device, computer equipment and storage medium
CN107240319B (en) * 2017-07-25 2019-04-02 深圳市鹰硕技术有限公司 A kind of interaction Scene Teaching system for the K12 stage
CN108509473B (en) * 2017-08-28 2022-07-12 胜典科技股份有限公司 Video and audio works system and recording medium combining self-creation elements by augmented reality technology
CN107396001A (en) * 2017-08-30 2017-11-24 郝翻翻 A kind of method of record personal
CN108245881A (en) * 2017-12-29 2018-07-06 武汉市马里欧网络有限公司 Three-dimensional jointed plate model buildings system based on AR
CN108288419A (en) * 2017-12-31 2018-07-17 广州市坤腾软件技术有限公司 A kind of vocational education craftsman's platform based on AR/VR technologies
CN109255990A (en) * 2018-09-30 2019-01-22 杭州乔智科技有限公司 A kind of tutoring system based on AR augmented reality
CN109326154A (en) * 2018-12-05 2019-02-12 北京汉谷教育科技有限公司 A method of human-computer interaction teaching is carried out by speech recognition engine
CN109917907B (en) * 2019-01-29 2022-05-03 长安大学 Card-based dynamic storyboard interaction method
CN112068709A (en) * 2020-11-12 2020-12-11 广州志胜游艺设备有限公司 AR display interactive learning method based on books for children
JP7414707B2 (en) * 2020-12-18 2024-01-16 トヨタ自動車株式会社 image display system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104102412A (en) * 2014-07-24 2014-10-15 央数文化(上海)股份有限公司 Augmented reality technology-based handheld reading equipment and reading method thereof
CN104346451A (en) * 2014-10-29 2015-02-11 山东大学 Situation awareness system based on user feedback, as well as operating method and application thereof
CN105139701A (en) * 2015-09-16 2015-12-09 华中师范大学 Interactive children teaching system
CN105306862A (en) * 2015-11-17 2016-02-03 广州市英途信息技术有限公司 Scenario video recording system and method based on 3D virtual synthesis technology and scenario training learning method
CN205622745U (en) * 2016-05-10 2016-10-05 倪宏伟 Real -time synthesis system of virtual reality true man
CN106131530A (en) * 2016-08-26 2016-11-16 万象三维视觉科技（北京）有限公司 A naked-eye 3D virtual reality display system and display method thereof

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8963836B2 (en) * 2010-09-17 2015-02-24 Tencent Technology (Shenzhen) Company Limited Method and system for gesture-based human-machine interaction and computer-readable medium thereof

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104102412A (en) * 2014-07-24 2014-10-15 央数文化(上海)股份有限公司 Augmented reality technology-based handheld reading equipment and reading method thereof
CN104346451A (en) * 2014-10-29 2015-02-11 山东大学 Situation awareness system based on user feedback, as well as operating method and application thereof
CN105139701A (en) * 2015-09-16 2015-12-09 华中师范大学 Interactive children teaching system
CN105306862A (en) * 2015-11-17 2016-02-03 广州市英途信息技术有限公司 Scenario video recording system and method based on 3D virtual synthesis technology and scenario training learning method
CN205622745U (en) * 2016-05-10 2016-10-05 倪宏伟 Real -time synthesis system of virtual reality true man
CN106131530A (en) * 2016-08-26 2016-11-16 万象三维视觉科技（北京）有限公司 A naked-eye 3D virtual reality display system and display method thereof

Also Published As

Publication number Publication date
CN106683501A (en) 2017-05-17

Similar Documents

Publication Publication Date Title
CN106683501B (en) AR children's scene role-playing projection teaching method and system
US10304208B1 (en) Automated gesture identification using neural networks
CN105425953B (en) A kind of method and system of human-computer interaction
US11452941B2 (en) Emoji-based communications derived from facial features during game play
CN110418095B (en) Virtual scene processing method and device, electronic equipment and storage medium
CN107944542A (en) A kind of multi-modal interactive output method and system based on visual human
CN105975239B (en) A kind of generation method and device of vehicle electronic device display screen dynamic background
US20100146052A1 (en) method and a system for setting up encounters between persons in a telecommunications system
CN106730815B (en) Somatosensory interaction method and system easy to realize
US11048326B2 (en) Information processing system, information processing method, and program
CN109333544B (en) Doll interaction method for marionette performance participated by audience
CN111459452B (en) Driving method, device and equipment of interaction object and storage medium
CN106529502B (en) Lip reading recognition methods and device
CN107679519A (en) A kind of multi-modal interaction processing method and system based on visual human
Yargıç et al. A lip reading application on MS Kinect camera
CN111383642B (en) Voice response method based on neural network, storage medium and terminal equipment
WO2021196644A1 (en) Method, apparatus and device for driving interactive object, and storage medium
CN109116981A (en) A kind of mixed reality interactive system of passive touch feedback
CN105835071A (en) Projection type humanoid robot
CN106502382A (en) Active exchange method and system for intelligent robot
JP2021068404A (en) Facial expression generation system for avatar and facial expression generation method for avatar
CN109343695A (en) Exchange method and system based on visual human's behavioral standard
KR20120091625A (en) Speech recognition device and speech recognition method using 3d real-time lip feature point based on stereo camera
KR20180132364A (en) Method and device for videotelephony based on character
CN109739353A (en) A kind of virtual reality interactive system identified based on gesture, voice, Eye-controlling focus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant