CN106940899B - Layer fusion method applied to weapon aiming system in AR scene - Google Patents

Layer fusion method applied to weapon aiming system in AR scene

Info

Publication number
CN106940899B
CN106940899B (application CN201710200873.0A)
Authority
CN
China
Prior art keywords
glasses
image
sighting
scene
sighting telescope
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201710200873.0A
Other languages
Chinese (zh)
Other versions
CN106940899A (en
Inventor
林星森
林敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN201710200873.0A priority Critical patent/CN106940899B/en
Publication of CN106940899A publication Critical patent/CN106940899A/en
Application granted granted Critical
Publication of CN106940899B publication Critical patent/CN106940899B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20 Input arrangements for video game devices
    • A63F13/21 Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/219 Input arrangements for video game devices characterised by their sensors, purposes or types for aiming at specific areas on the display, e.g. light-guns
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/25 Output arrangements for video game devices
    • A63F13/28 Output arrangements for video game devices responding to control signals received from the game device for affecting ambient conditions, e.g. for vibrating players' seats, activating scent dispensers or affecting temperature or light
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80 Special adaptations for executing a specific game genre or game mode
    • A63F13/837 Shooting of targets
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/20 Scenes; Scene-specific elements in augmented reality scenes
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B9/00 Simulators for teaching or training purposes
    • G09B9/003 Simulators for teaching or training purposes for military purposes and tactics
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/10 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals
    • A63F2300/1025 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals details of the interface with the game device, e.g. USB version detection
    • A63F2300/1031 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals details of the interface with the game device, e.g. USB version detection using a wireless connection, e.g. Bluetooth, infrared connections
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/66 Methods for processing data by generating or executing the game program for rendering three dimensional images
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/80 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
    • A63F2300/8082 Virtual reality

Abstract

The invention relates to a layer fusion method applied to a weapon aiming system in an AR scene, comprising the following steps. Step S1: provide a weapon aiming system comprising a pair of glasses and a sighting device; the glasses and the sighting device are connected by wire or wirelessly and exchange data with each other. Step S2: process the image collected by the sighting device in the weapon aiming system and transmit it to the glasses for display. Step S3: in the weapon aiming system, fuse the layers of the glasses and the sighting device; the user sees a real scene part and a virtual scene part through the glasses, where the real scene part is the scene actually seen by the human eye and the virtual scene part is the superimposed layer generated by the image processing module. The invention can generate a virtual 3D target in a real scene, enabling a glasses user to complete simulation training in game-interaction applications and, in military applications, to aim in real time without shouldering the weapon and to shoot with machine-vision assistance.

Description

Layer fusion method applied to weapon aiming system in AR scene
Technical Field
The invention relates to the field of AR, in particular to a layer fusion method applied to a weapon aiming system in an AR scene.
Background
With the development of AR technology, many systems for game interaction and simulation training are already on the market, but when the weapon aiming system in such a system combines glasses with a sighting device, the layer fusion methods in the prior art have many defects, so the experience is unsatisfactory. In addition, in the military field, a traditional weapon aiming system lacks advanced machine-vision assistance and cannot aim in real time at every moment; AR-assisted aiming can solve this problem, couple the weapon and the shooter more closely, enable quick reaction, and allow the weapon to be used to better effect.
Disclosure of Invention
In view of this, the present invention provides a layer fusion method for a weapon aiming system in an AR scene, which generates a virtual 3D target in a real scene, enables a glasses user to complete simulation training in game-interaction applications, and, in military applications, provides real-time aiming without shouldering the weapon together with machine-vision-assisted shooting.
The invention is realized by adopting the following scheme: a layer fusion method applied to a weapon aiming system in an AR scene comprises the following steps:
step S1: providing a weapon aiming system comprising a pair of glasses and a sighting device; the glasses and the sighting device are connected by wire or wirelessly and exchange data with each other;
step S2: processing the image collected by the sighting device in the weapon aiming system and transmitting it to the glasses for display;
step S3: in the weapon aiming system, fusing the layers of the glasses and the sighting device; the user sees a real scene part and a virtual scene part through the glasses, where the real scene part is the scene actually seen by the human eye and the virtual scene part is the superimposed layer generated by the image processing module.
Further, in the weapon aiming system, the layers of the glasses and the sighting device are fused in one of two modes: in the first mode, the sighting-telescope image from the sighting device is mixed directly with the glasses image; in the second mode, integrated sensor information is used to judge whether the current viewing angle of the sighting telescope intersects the viewing angle of the glasses, and layer fusion is started if they intersect; if they do not intersect, layer fusion is not started.
Further, the magnification of the sighting telescope in the sighting device is greater than or equal to 1.
Further, when layer fusion is performed in the second mode and the viewing angle of the sighting device intersects the viewing angle of the glasses, image recognition is started to match the positional relationship between objects in the sighting-telescope image and objects in the glasses image, in the following two cases:
when the magnification of the sighting telescope is greater than 1, the image needs to be scaled first; because only local information about the object remains in the magnified sighting-telescope image, it is relatively difficult to match it directly against the glasses image, so the sighting-telescope image is first matched against an image taken by an optical camera on the sighting telescope, and the result is then sent to the glasses side for image comparison;
when the magnification of the sighting telescope is 1, if the sighting-telescope image differs from the glasses image because of the difference in viewing perspective, an image fuzzy-matching algorithm, visual recognition technology, or SLAM technology is adopted for auxiliary matching;
and after matching is finished, only the aiming-point information from the sighting device is drawn on the glasses layer.
Further, when SLAM technology is adopted for auxiliary matching, after matching is finished the user sees a three-dimensional virtual central axis of the sighting telescope on the glasses layer.
In particular, SLAM can perform map reconstruction and target tracking in three-dimensional (3D) space in real time, so accurate viewing-angle matching can be achieved and the pose (position and attitude) of the sighting telescope can be calibrated and compensated in real time. When matching is realized with SLAM technology, a three-dimensional virtual central axis of the sighting telescope can be seen on the glasses layer after matching is finished.
Compared with the prior art, the invention has the following beneficial effects. The system and method provided by the invention are mainly applicable to military combat, simulation training, game entertainment, and the like. For military use, the method can be applied in a relatively safe visual environment; the information transmitted from the sighting device supplements the glasses and mainly helps the glasses user know the current working state of the sighting device (such as the aiming position and the target distance) in real time. The advantages are: 1. aiming can be completed without shouldering the gun, which saves physical strength during combat and keeps a wide field of view; 2. because the two 'redundant' actions of raising the gun and aiming down the sight are avoided, the time needed for a shooting decision is shortened, shooting speed in battle is improved, attacks can be initiated faster than the enemy, and battlefield survivability is improved; 3. the target can be kept aimed at in real time with machine-vision assistance, maintaining high shooting precision and a rapid grasp of battlefield information. For simulation training and game entertainment, the method can generate a virtual 3D target in a real scene for game interaction or simulation training with the glasses user, who gains better immersion in the scene.
Drawings
Fig. 1 is a schematic diagram of layer fusion in the first mode in the embodiment of the present invention.
Fig. 2 is a schematic diagram of layer fusion in the second mode in the embodiment of the present invention.
Detailed Description
The invention is further explained below with reference to the drawings and the embodiments.
The embodiment provides a layer fusion method applied to a weapon aiming system in an AR scene, which comprises the following steps:
step S1: providing a weapon aiming system comprising a pair of glasses and a sighting device; the glasses and the sighting device are connected by wire or wirelessly and exchange data with each other;
step S2: processing the image collected by the sighting device in the weapon aiming system and transmitting it to the glasses for display;
step S3: in the weapon aiming system, fusing the layers of the glasses and the sighting device; the user sees a real scene part and a virtual scene part through the glasses, where the real scene part is the scene actually seen by the human eye and the virtual scene part is the superimposed layer generated by the image processing module.
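As an illustrative aid (not part of the original patent disclosure), the following minimal Python sketch shows one way steps S1 to S3 could be wired together. The Scope and Glasses classes and their methods are hypothetical stand-ins, since the patent does not define a software interface.

```python
# Minimal sketch of the S1-S3 pipeline (hypothetical device interfaces).
import numpy as np

class Scope:
    """Hypothetical sighting-device interface (step S1: wired/wireless link)."""
    def capture_frame(self) -> np.ndarray:
        # Placeholder: a real sighting device would return its sensor image here.
        return np.zeros((480, 640, 3), dtype=np.uint8)

class Glasses:
    """Hypothetical AR-glasses interface; the real scene is seen optically,
    only the virtual layer is rendered on the see-through display."""
    def render_overlay(self, layer: np.ndarray) -> None:
        pass  # Placeholder: push the virtual layer to the display.

def process_scope_image(frame: np.ndarray) -> np.ndarray:
    """Step S2: the image processing module turns the scope frame into an
    overlay layer (here simply passed through with an alpha channel added)."""
    alpha = np.full(frame.shape[:2] + (1,), 255, dtype=np.uint8)
    return np.concatenate([frame, alpha], axis=2)  # RGBA virtual layer

def run_once(scope: Scope, glasses: Glasses) -> None:
    """Step S3: fuse the virtual layer with the user's real view."""
    frame = scope.capture_frame()
    overlay = process_scope_image(frame)
    glasses.render_overlay(overlay)
```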
In this embodiment, the layers of the glasses and the sighting device in the weapon aiming system are fused in one of two modes: in the first mode, the sighting-telescope image from the sighting device is mixed directly with the glasses image, as shown in Fig. 1; in the second mode, integrated sensor information is used to judge whether the current viewing angle of the sighting telescope intersects the viewing angle of the glasses, and if they intersect, layer fusion is started; if they do not intersect (for example, the muzzle points in another direction), layer fusion is not enabled, which reduces computation and power consumption, as shown in Fig. 2.
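A minimal sketch of the second-mode gate follows, assuming the integrated sensors report yaw and pitch for both the sighting telescope and the glasses; the 60-degree glasses field of view and the example angles are illustrative assumptions, not values given by the patent.

```python
# Sketch of the mode-two gate: decide from sensor orientations whether the
# scope's view direction falls inside the glasses' field of view.
import numpy as np

def direction_from_yaw_pitch(yaw_deg: float, pitch_deg: float) -> np.ndarray:
    """Unit view direction from yaw (azimuth) and pitch (elevation) in degrees."""
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    return np.array([np.cos(pitch) * np.sin(yaw),
                     np.sin(pitch),
                     np.cos(pitch) * np.cos(yaw)])

def views_intersect(scope_yaw_pitch, glasses_yaw_pitch,
                    glasses_fov_deg: float = 60.0) -> bool:
    """True if the angle between the two boresights is within half the glasses FOV."""
    d_scope = direction_from_yaw_pitch(*scope_yaw_pitch)
    d_glasses = direction_from_yaw_pitch(*glasses_yaw_pitch)
    cos_angle = np.clip(np.dot(d_scope, d_glasses), -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle)) <= glasses_fov_deg / 2.0

# Layer fusion is started only when the views intersect, saving computation
# when the muzzle points elsewhere.
if views_intersect((5.0, -2.0), (0.0, 0.0)):
    pass  # start layer fusion / image recognition
```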
In this embodiment, the first mode has the advantages of direct fusion, no need for calibration, and low image delay; its disadvantage is that two layers are displayed, which is less intuitive. The second mode has the advantage that the image on the glasses stays uncluttered: when the muzzle moves, only the red aiming point moves with it. Its disadvantages are that calibration is needed, the amount of computation may be large, and some delay may be introduced.
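For the first mode, the direct mix can be as simple as insetting the sighting-telescope image into the glasses virtual layer. The sketch below is illustrative only; the corner position, inset scale, and the assumption that both images share the same channel layout (e.g., BGR) are choices made for the example, not details given by the patent.

```python
# Sketch of mode one: the scope image is mixed directly into the glasses layer
# as a picture-in-picture window.
import numpy as np
import cv2

def mix_scope_into_glasses(glasses_layer: np.ndarray, scope_img: np.ndarray,
                           corner=(20, 20), scale: float = 0.3) -> np.ndarray:
    """Return the glasses virtual layer with the scope image inset at a corner."""
    h, w = glasses_layer.shape[:2]
    inset_w, inset_h = int(w * scale), int(h * scale)
    inset = cv2.resize(scope_img, (inset_w, inset_h))  # dsize is (width, height)
    out = glasses_layer.copy()
    y, x = corner
    out[y:y + inset_h, x:x + inset_w] = inset          # overwrite the inset region
    return out
```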
In this embodiment, the magnification of the sighting telescope in the sighting device is greater than or equal to 1.
In this embodiment, when layer fusion is performed in the second mode and the viewing angle of the sighting device intersects the viewing angle of the glasses, image recognition is started to match the positional relationship between objects in the sighting-telescope image and objects in the glasses image, in the following two cases:
when the magnification of the sighting telescope is greater than 1, the image needs to be scaled first; because only local information about the object remains in the magnified sighting-telescope image, it is relatively difficult to match it directly against the glasses image, so the sighting-telescope image is first matched against an image taken by an optical camera on the sighting telescope, and the result is then sent to the glasses side for image comparison;
when the magnification of the sighting telescope is 1, if the sighting-telescope image differs from the glasses image because of the difference in viewing perspective, an image fuzzy-matching algorithm, visual recognition technology, or SLAM technology is adopted for auxiliary matching;
and after matching is finished, only the aiming-point information from the sighting device is drawn on the glasses layer.
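The following sketch illustrates the magnification-1 case and the final drawing step, using ORB feature matching with a homography as a stand-in for the unspecified image fuzzy-matching algorithm. It assumes the glasses have a scene camera supplying glasses_img; the aim-point coordinates, ratio-test threshold, and minimum match count are illustrative values.

```python
# Sketch: match the scope image to the glasses image, then draw only the aim point.
import numpy as np
import cv2

def map_aim_point(scope_img: np.ndarray, glasses_img: np.ndarray,
                  aim_point_scope=(320, 240)):
    """Estimate where the scope's aim point falls in the glasses image."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp_s, des_s = orb.detectAndCompute(scope_img, None)
    kp_g, des_g = orb.detectAndCompute(glasses_img, None)
    if des_s is None or des_g is None:
        return None  # not enough texture to match

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = matcher.knnMatch(des_s, des_g, k=2)
    good = [m for m, n in (p for p in pairs if len(p) == 2)
            if m.distance < 0.75 * n.distance]          # Lowe ratio test
    if len(good) < 8:
        return None  # views probably do not overlap enough

    src = np.float32([kp_s[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_g[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return None

    pt = np.float32([[aim_point_scope]])                # shape (1, 1, 2)
    return tuple(cv2.perspectiveTransform(pt, H)[0, 0])

def draw_aim_point(glasses_layer: np.ndarray, point) -> None:
    """Only the aiming point is drawn on the glasses layer after matching."""
    if point is not None:
        cv2.circle(glasses_layer, (int(point[0]), int(point[1])), 6, (0, 0, 255), -1)
```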
In this embodiment, when the SLAM technique is used to assist matching, after matching is completed, the user sees a three-dimensional virtual central axis of the sighting telescope on the glasses image layer.
In this embodiment, when the image fuzzy-matching algorithm is adopted, the amount of computation is small, at the cost of a certain loss of precision; with the visual recognition technology, the targets can be recognized as the same object and then associated and combined.
In this embodiment, SLAM can perform map reconstruction and target tracking in 3D space in real time, so accurate viewing-angle matching can be achieved and the pose (position and attitude) of the sighting telescope can be calibrated and compensated in real time; furthermore, if this technique works well, the projection of the sighting device onto the glasses-view image can be obtained by resolving the sighting-telescope pose information even without image matching. When matching is realized with SLAM technology, a three-dimensional virtual central axis of the sighting telescope can be seen on the glasses layer after matching is finished.
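The sketch below illustrates how, once SLAM supplies the sighting-telescope pose and the glasses camera pose, the three-dimensional virtual central axis could be projected onto the glasses layer. The pinhole-projection model, the intrinsic matrix K, the world-to-camera transform, and the 50 m axis length are assumptions for illustration; a real system would take these from its SLAM and calibration pipeline.

```python
# Sketch: project the scope's boresight (central axis) into the glasses view.
import numpy as np
import cv2

def draw_virtual_axis(glasses_layer, scope_pos_w, scope_dir_w,
                      rvec_w2c, tvec_w2c, K, length_m=50.0, samples=20):
    """Render the scope's central axis as a polyline on the glasses layer.

    scope_pos_w / scope_dir_w : scope origin and boresight direction, world frame
    rvec_w2c, tvec_w2c        : world-to-glasses-camera transform (Rodrigues vector
                                and translation), e.g. from SLAM tracking
    K                         : 3x3 glasses camera intrinsic matrix
    Note: a real implementation would also clip points behind the camera.
    """
    d = np.asarray(scope_dir_w, dtype=np.float64)
    d = d / np.linalg.norm(d)
    ts = np.linspace(0.0, length_m, samples)
    pts_w = np.asarray(scope_pos_w, dtype=np.float64) + ts[:, None] * d  # (N, 3)

    dist = np.zeros(5)  # assume no lens distortion for the sketch
    img_pts, _ = cv2.projectPoints(pts_w, rvec_w2c, tvec_w2c, K, dist)
    img_pts = img_pts.reshape(-1, 2).astype(int)
    for a, b in zip(img_pts[:-1], img_pts[1:]):
        cv2.line(glasses_layer, (int(a[0]), int(a[1])), (int(b[0]), int(b[1])),
                 (0, 255, 0), 2)
```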
The above description is only a preferred embodiment of the present invention, and all equivalent changes and modifications made in accordance with the claims of the present invention should be covered by the present invention.

Claims (3)

1. A layer fusion method applied to a weapon aiming system in an AR scene, characterized in that the method comprises the following steps:
step S1: providing a weapon aiming system comprising a pair of glasses and a sighting device, wherein the glasses and the sighting device are connected by wire or wirelessly and exchange data with each other;
step S2: processing the image collected by the sighting device in the weapon aiming system and transmitting it to the glasses for display;
step S3: in the weapon aiming system, fusing the layers of the glasses and the sighting device, so that a user sees a real scene part and a virtual scene part through the glasses, the real scene part being the scene actually seen by the human eye and the virtual scene part being the superimposed layer generated by the image processing module;
wherein, in the weapon aiming system, the layers of the glasses and the sighting device are fused in one of two modes: in the first mode, the sighting-telescope image from the sighting device is mixed directly with the glasses image; in the second mode, integrated sensor information is used to judge whether the current viewing angle of the sighting telescope intersects the viewing angle of the glasses, layer fusion being started if they intersect and not started if they do not intersect;
when layer fusion is performed in the second mode and the viewing angle of the sighting device intersects the viewing angle of the glasses, image recognition is started to match the positional relationship between objects in the sighting-telescope image and objects in the glasses image, in the following two cases:
when the magnification of the sighting telescope is greater than 1, the image needs to be scaled first; because only local information about the object remains in the magnified sighting-telescope image, it is relatively difficult to match it directly against the glasses image, so the sighting-telescope image is first matched against an image taken by an optical camera on the sighting telescope, and the result is then sent to the glasses side for image comparison;
when the magnification of the sighting telescope is 1, if the sighting-telescope image differs from the glasses image because of the difference in viewing perspective, an image fuzzy-matching algorithm, visual recognition technology, or SLAM technology is adopted for auxiliary matching;
and after matching is finished, only the aiming-point information from the sighting device is drawn on the glasses layer.
2. The layer fusion method applied to the weapon aiming system in the AR scene as claimed in claim 1, wherein the magnification of the sighting telescope in the sighting device is greater than or equal to 1.
3. The layer fusion method applied to the weapon aiming system in the AR scene as claimed in claim 1, wherein, when SLAM technology is adopted for auxiliary matching, after matching is finished the user sees a three-dimensional virtual central axis of the sighting telescope on the glasses layer.
CN201710200873.0A 2017-03-30 2017-03-30 Layer fusion method applied to weapon aiming system in AR scene Expired - Fee Related CN106940899B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710200873.0A CN106940899B (en) 2017-03-30 2017-03-30 Layer fusion method applied to weapon aiming system in AR scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710200873.0A CN106940899B (en) 2017-03-30 2017-03-30 Layer fusion method applied to weapon aiming system in AR scene

Publications (2)

Publication Number Publication Date
CN106940899A CN106940899A (en) 2017-07-11
CN106940899B (en) 2020-06-05

Family

ID=59464431

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710200873.0A Expired - Fee Related CN106940899B (en) 2017-03-30 2017-03-30 Layer fusion method applied to weapon aiming system in AR scene

Country Status (1)

Country Link
CN (1) CN106940899B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108346179B (en) * 2018-02-11 2021-08-03 北京小米移动软件有限公司 AR equipment display method and device
CN108579083B (en) 2018-04-27 2020-04-21 腾讯科技(深圳)有限公司 Virtual scene display method and device, electronic device and storage medium
CN109260703A (en) * 2018-09-28 2019-01-25 重庆第五维科技有限公司 True man's gunbattle game information exchange method based on AR scene

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102735100A (en) * 2012-06-08 2012-10-17 重庆邮电大学 Individual light weapon shooting training method and system by using augmented reality technology
CN104660995A (en) * 2015-02-11 2015-05-27 尼森科技(湖北)有限公司 Disaster relief visual system
CN204612583U (en) * 2015-05-15 2015-09-02 朱铭强 The heavy sniper rifle of remote control counter radar
CN104883556A (en) * 2015-05-25 2015-09-02 深圳市虚拟现实科技有限公司 Three dimensional display method based on augmented reality and augmented reality glasses
CN106075915A (en) * 2016-07-15 2016-11-09 成都定为电子技术有限公司 A kind of unmanned plane aerial fight device that can receive multiple directions shooting laser beam

Also Published As

Publication number Publication date
CN106940899A (en) 2017-07-11

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
Granted publication date: 20200605