CN109426783A - Gesture identification method and system based on augmented reality - Google Patents
- Publication number
- CN109426783A (application CN201710758657.8A)
- Authority
- CN
- China
- Prior art keywords
- gesture
- dimensional object
- user
- information
- augmented reality
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/28—Recognition of hand or arm movements, e.g. recognition of deaf sign language
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Health & Medical Sciences (AREA)
- Psychiatry (AREA)
- Social Psychology (AREA)
- Multimedia (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The present invention provides a gesture recognition method and system based on augmented reality. The method comprises: obtaining, according to a user's selection, a three-dimensional virtual object corresponding to a virtual article; virtually displaying the three-dimensional virtual object at a preset position and detecting motion-sensing information of the user's hand in real time; when the motion-sensing information indicates that the user's hand virtually touches the three-dimensional virtual object, displaying internal structure information of the three-dimensional virtual object; and obtaining gesture information input by the user and performing a corresponding operation on the three-dimensional virtual object. By receiving the user's selection instruction for a virtual article, obtaining and virtually displaying the corresponding three-dimensional virtual object, displaying the object's internal structure information when it is detected that the user's hand virtually touches it, receiving the user's gesture information, and performing corresponding operations on the object according to that gesture information, the present invention makes gesture interaction with virtual articles more engaging and improves the interaction efficiency of augmented reality scenes.
Description
Technical field
The present invention relates to the field of augmented reality, and more particularly to a gesture recognition method and system based on augmented reality.
Background technique
With the development of science and technology, human-machine interface technology has become an important direction in the development of smart devices, and human-machine interface technology based on augmented reality (Augmented Reality, AR) has emerged accordingly. Augmented reality is a new technology that "seamlessly" integrates real-world information with virtual-world information: entity information (visual information, sound, taste, touch, etc.) that would otherwise be difficult to experience within a certain time and spatial range of the real world is simulated by computers and other technologies and then superimposed, so that virtual information is applied to the real world and perceived by human senses, achieving a sensory experience beyond reality. The real environment and virtual objects are superimposed into the same picture or space in real time and exist simultaneously.
At present, in augmented reality scenes, a user can perform only simple operations on virtually generated articles using simple gestures, such as the simple interaction functions of zooming in and zooming out. For example, when viewing a virtual car, the user can input a simple gesture to magnify and inspect some exterior details of the virtual car. The interest of such gesture interaction is low, and the interaction efficiency of the scene is low.
Summary of the invention
Embodiments of the present invention provide a gesture recognition method and system based on augmented reality, which can improve the interaction efficiency of augmented reality scenes.
Embodiments of the present invention provide the following technical solutions:
A gesture recognition method based on augmented reality, comprising:
receiving a user's selection instruction for a virtual article, and obtaining, according to the selection instruction, a three-dimensional virtual object corresponding to the virtual article;
virtually displaying the three-dimensional virtual object at a preset position, and detecting motion-sensing information of the user's hand in real time;
when it is detected that the motion-sensing information indicates that the user's hand virtually touches the three-dimensional virtual object, displaying internal structure information of the three-dimensional virtual object and generating gesture prompt information, the gesture prompt information being used to prompt the user to input a specified gesture to operate on the three-dimensional virtual object;
obtaining gesture information input by the user based on the gesture prompt information, and performing a corresponding operation on the three-dimensional virtual object according to the gesture information.
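As a rough illustration only (not part of the claims), the four claimed steps could be sketched as a minimal session object in Python; every name here (GestureARSession, on_selection, and so on) is hypothetical, and real systems would rely on a rendering engine, a glove SDK, and a camera-based gesture recognizer:

```python
# Minimal sketch of the claimed flow, with hypothetical names and
# placeholder data structures instead of real AR components.

class GestureARSession:
    def __init__(self, model_library):
        # model_library maps a virtual-article id to its 3D model data
        self.model_library = model_library
        self.object3d = None
        self.show_internals = False

    def on_selection(self, article_id):
        """Step 1: obtain the 3D virtual object for the selected article."""
        self.object3d = self.model_library[article_id]

    def on_hand_sample(self, hand_touches_object):
        """Steps 2-3: react to real-time motion-sensing information."""
        if hand_touches_object and not self.show_internals:
            self.show_internals = True  # display internal structure
            return "prompt: pinch to resize, double-tap to edit internals"
        return None

    def on_gesture(self, gesture, handlers):
        """Step 4: dispatch the recognized gesture to an operation."""
        if gesture in handlers:
            return handlers[gesture](self.object3d)
        return None

session = GestureARSession({"car": {"name": "virtual car", "scale": 1.0}})
session.on_selection("car")
prompt = session.on_hand_sample(hand_touches_object=True)
result = session.on_gesture("pinch", {"pinch": lambda obj: dict(obj, scale=2.0)})
```

The sketch only fixes the order of the four steps; how touch and gestures are actually sensed is elaborated in the embodiments below.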
In order to solve the above technical problems, embodiments of the present invention also provide the following technical solutions:
A gesture recognition system based on augmented reality, comprising:
a receiving module, configured to receive a user's selection instruction for a virtual article, and obtain, according to the selection instruction, a three-dimensional virtual object corresponding to the virtual article;
a detection module, configured to virtually display the three-dimensional virtual object at a preset position, and detect motion-sensing information of the user's hand in real time;
a generation module, configured to, when it is detected that the motion-sensing information indicates that the user's hand virtually touches the three-dimensional virtual object, display internal structure information of the three-dimensional virtual object and generate gesture prompt information, the gesture prompt information being used to prompt the user to input a specified gesture to operate on the three-dimensional virtual object;
an operation module, configured to obtain gesture information input by the user based on the gesture prompt information, and perform a corresponding operation on the three-dimensional virtual object according to the gesture information.
In the gesture recognition method and system based on augmented reality provided in these embodiments, a user's selection instruction for a virtual article is received, the corresponding three-dimensional virtual object is obtained and virtually displayed, the internal structure information of the three-dimensional virtual object is displayed when it is detected that the user's hand virtually touches it, the user's gesture information is received, and a corresponding operation is performed on the three-dimensional virtual object according to the gesture information. This makes it more engaging for the user to interact with virtual articles through gestures and improves the interaction efficiency of augmented reality scenes.
Detailed description of the invention
With reference to the accompanying drawings and the following detailed description of specific embodiments of the present invention, the technical solutions of the present invention and their other beneficial effects will become apparent.
Fig. 1 is a schematic diagram of a scenario of the gesture recognition method based on augmented reality provided by an embodiment of the present invention.
Fig. 2 is a schematic flowchart of the gesture recognition method based on augmented reality provided by an embodiment of the present invention.
Fig. 3 is another schematic flowchart of the gesture recognition method based on augmented reality provided by an embodiment of the present invention.
Fig. 4 is a schematic module diagram of the gesture recognition system based on augmented reality provided by an embodiment of the present invention.
Fig. 5 is another schematic module diagram of the gesture recognition system based on augmented reality provided by an embodiment of the present invention.
Fig. 6 is a schematic structural diagram of the augmented reality server provided by an embodiment of the present invention.
Specific embodiment
Referring to the drawings, in which like reference numerals denote like components, the principles of the present invention are illustrated as implemented in a suitable computing environment. The following description is based on the illustrated specific embodiments of the present invention and should not be regarded as limiting other specific embodiments of the present invention not detailed herein.
The term "module" as used herein may be regarded as a software object executed on a computing system. The different components, modules, engines, and services described herein may be regarded as implementation objects on the computing system. The apparatus and method described herein are preferably implemented in software, but may certainly also be implemented in hardware, both of which fall within the scope of the present invention.
Referring to Fig. 1, Fig. 1 is a schematic diagram of a scenario of the gesture recognition method based on augmented reality provided by an embodiment of the present invention. The scenario includes an augmented reality server 31, a virtual display device 33, a user 34, a wearable receiving device 35, and at least one camera device 36.
The augmented reality server 31 is used to store a three-dimensional virtual image 32 of a virtual article. The augmented reality server 31 may be connected to the virtual display device 33, the wearable receiving device 35, and the at least one camera device 36 through a wireless network, Bluetooth, or infrared.
The virtual display device 33 includes, but is not limited to, a smart data helmet and a terminal.
The wearable receiving device 35 includes, but is not limited to, a smart data helmet, smart data gloves, and smart data shoes.
When the augmented reality server 31 receives the user's selection instruction for a virtual article, it obtains the three-dimensional virtual object 32 corresponding to the virtual article according to the selection instruction and displays the three-dimensional virtual object 32 on the virtual display device 33. The motion-sensing information of the hand of the user 34 is detected in real time. When it is detected that the motion-sensing information indicates that the hand of the user 34 virtually touches the three-dimensional virtual object 32, the internal structure information of the three-dimensional virtual object is displayed and gesture prompt information is generated. Through the at least one camera device 36, the gesture information input by the user 34 based on the gesture prompt information is obtained, and a corresponding operation is performed on the three-dimensional virtual object 32 according to the gesture information.
A detailed analysis is provided below.
Referring to Fig. 2, Fig. 2 is a schematic flowchart of the gesture recognition method based on augmented reality provided by an embodiment of the present invention.
Specifically, the method comprises:
In step S101, a user's selection instruction for a virtual article is received, and a three-dimensional virtual object corresponding to the virtual article is obtained according to the selection instruction.
The virtual article may include virtual articles such as a virtual car, a virtual desk, and a virtual interior. The virtual article may be obtained by three-dimensional modeling of a real article, or may be custom-made according to user demand for an article that does not exist; no specific limitation is imposed here.
Further, thumbnails of virtual articles may be displayed on an external display device through a display interface, and the user may select a corresponding virtual article through the display device. When the user has selected a virtual article, the three-dimensional virtual object corresponding to that virtual article is obtained. The three-dimensional virtual object is a polygonal representation of an object and may be displayed through a head-mounted display device. The displayed object may be an entity in the real world or a virtual object; anything existing in nature may be represented by a three-dimensional virtual object.
In step S102, the three-dimensional virtual object is virtually displayed at a preset position, and motion-sensing information of the user's hand is detected in real time.
The user may view the three-dimensional virtual object by wearing a head-mounted display device. The three-dimensional virtual object is displayed in real space at the preset position. The user may interact with the three-dimensional virtual object by wearing data gloves, and the data gloves can detect the motion-sensing information of the user's hand.
In step S103, when it is detected that the motion-sensing information indicates that the user's hand virtually touches the three-dimensional virtual object, the internal structure information of the three-dimensional virtual object is displayed and gesture prompt information is generated.
It can be understood that the gesture prompt information is used to prompt the user to input a specified gesture to operate on the three-dimensional virtual object.
When it is detected that the user's motion-sensing information indicates that the user's hand virtually touches the three-dimensional virtual object, this indicates that the user needs to operate on the three-dimensional virtual object. The three-dimensional virtual object may be displayed semi-transparently so that the user can observe its internal structure information in real time, and corresponding gesture prompt information is generated. The gesture prompt information is used to prompt the user to input a specified gesture to operate on the three-dimensional virtual object; for example, the size of the three-dimensional virtual object may be adjusted through the cooperation of the index finger and the middle finger, and a double-click of the index finger may be used to design the internal structure of the three-dimensional virtual object.
In step S104, the gesture information input by the user based on the gesture prompt information is obtained, and a corresponding operation is performed on the three-dimensional virtual object according to the gesture information.
The gesture information input by the user based on the gesture prompt information may be obtained in real time through a camera device, and the operation corresponding to the gesture information is analyzed. The operation may include, but is not limited to, size adjustment, appearance design, internal structure design, and dynamic demonstration of the three-dimensional virtual object. The operation is then executed to process the three-dimensional virtual object.
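The mapping from analyzed gesture information to an operation family can be pictured as a dispatch table; the sketch below is purely illustrative (the patent does not prescribe an implementation), and the gesture labels and handlers are assumptions:

```python
# Hypothetical dispatch table covering two of the operation families the
# text names (size adjustment and appearance design); unrecognized gesture
# labels leave the object unchanged.

def resize(obj, factor):
    return {**obj, "scale": obj["scale"] * factor}

def set_color(obj, color):
    return {**obj, "color": color}

OPERATIONS = {
    "index_middle_pinch": lambda obj: resize(obj, 1.5),    # size adjustment
    "thumb_tap":          lambda obj: set_color(obj, "red"),  # appearance design
}

def apply_gesture(obj, gesture_label):
    handler = OPERATIONS.get(gesture_label)
    return handler(obj) if handler else obj

car = {"scale": 1.0, "color": "blue"}
bigger = apply_gesture(car, "index_middle_pinch")
recolored = apply_gesture(car, "thumb_tap")
```

Returning a new dict per operation keeps each handler side-effect free, which makes the table easy to extend with the internal-design and demonstration operations described later.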
As can be seen from the above, in the gesture recognition method based on augmented reality provided in this embodiment, a user's selection instruction for a virtual article is received, the corresponding three-dimensional virtual object is obtained and virtually displayed, the internal structure information of the three-dimensional virtual object is displayed when it is detected that the user's hand virtually touches it, the user's gesture information is received, and a corresponding operation is performed on the three-dimensional virtual object according to the gesture information. This makes gesture interaction with virtual articles more engaging and improves the interaction efficiency of augmented reality scenes.
The method described in the above embodiment is further described in detail below by way of example.
Referring to Fig. 3, Fig. 3 is another schematic flowchart of the gesture recognition method based on augmented reality provided by an embodiment of the present invention.
Specifically, the method comprises:
In step S201, the size, color, and internal structure information of a virtual article are obtained.
The virtual article may include virtual articles such as a virtual car, a virtual desk, and a virtual interior. The virtual article may be obtained by three-dimensional modeling of a real article, or may be custom-made according to user demand for an article that does not exist; no specific limitation is imposed here.
It should be noted that, for better understanding of the present embodiment, the virtual article is illustrated by taking a virtual car as an example.
Further, the size of the virtual car is obtained; the size may include the brand logo, body contour, body lines, wheel rim pattern, lamp shape, and the like of the car. The color of the car and the internal structure information are also obtained; the internal structure information may be the engine, chassis, and electrical equipment of the car.
In step S202, three-dimensional modeling is performed according to the size, color, and internal structure information of the virtual article, to generate the three-dimensional virtual object corresponding to the virtual article.
Three-dimensional virtual reality modeling is performed according to the brand logo, body contour, body lines, wheel rim pattern, and lamp shape of the virtual car, the color of the car, and the engine, chassis, and electrical equipment of the car, to generate the three-dimensional virtual object of the virtual car.
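The modeling inputs that S201 and S202 enumerate can be grouped into a single model record; the toy sketch below only organizes those attributes (a real pipeline would emit polygon meshes), and all field names are illustrative assumptions:

```python
# Illustrative only: a toy "model" assembled from the attributes listed in
# steps S201-S202. The polygons list stands in for real modeling output.

def build_virtual_car_model(exterior, color, internals):
    """Assemble a 3D-model record from size, color and internal structure."""
    return {
        "exterior": exterior,    # brand logo, body contour, lines, rims, lamps
        "color": color,
        "internals": internals,  # engine, chassis, electrical equipment
        "polygons": [],          # a real modeler would fill this with mesh data
    }

model = build_virtual_car_model(
    exterior={"contour": "sedan", "rim_pattern": "5-spoke", "lamp_shape": "oval"},
    color="silver",
    internals={"engine": "V6", "chassis": "steel", "electrical": "12V"},
)
```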
In step S203, the user's selection instruction for the virtual article is received, and the three-dimensional virtual object corresponding to the virtual article is obtained according to the selection instruction.
Thumbnails of virtual articles may be displayed on an external display device through a display interface, and the user may select a corresponding virtual article through the display device. When the user has selected the virtual car, the three-dimensional virtual object corresponding to the virtual car is obtained. The three-dimensional virtual object is a polygonal representation of an object and may be displayed through a head-mounted display device.
In step S204, the three-dimensional virtual object is virtually displayed at a preset position, and the action information of the user's hand is detected in real time through a data glove device.
The user may view the three-dimensional virtual object of the virtual car by wearing a head-mounted display device. The three-dimensional virtual object is displayed in real space at the preset position. The action information of the user's hand is detected in real time through the data gloves worn by the user. The data gloves can detect the spatial position information of the user's hand in real time, and the user's hand can be located in real time according to the spatial position information.
In step S205, corresponding motion-sensing information is generated in real time according to the action information of the user's hand.
The action information of the user's hand is subjected to data analysis to generate the corresponding motion-sensing information.
In step S206, when it is detected that the motion-sensing information indicates that the user's hand virtually touches the three-dimensional virtual object, the internal structure information of the three-dimensional virtual object is displayed and gesture prompt information is generated.
It can be understood that the gesture prompt information is used to prompt the user to input a specified gesture to operate on the three-dimensional virtual object.
When the user's motion-sensing information indicates that the user's hand virtually touches the three-dimensional virtual object of the virtual car, this indicates that the user needs to operate on the virtual car. The three-dimensional virtual object may be displayed semi-transparently, and the user can observe the internal structure information (engine, chassis, and electrical equipment) of the three-dimensional virtual object in real time; corresponding gesture prompt information is generated. The gesture prompt information is used to prompt the user to input a specified gesture to operate on the three-dimensional virtual object; the operation may include size adjustment, internal structure design, and dynamic demonstration of the three-dimensional virtual object.
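One plausible way to realize the virtual-touch test of S206, assuming the glove reports a hand position and the object occupies an axis-aligned bounding box, is a containment check that flips the object to semi-transparent and emits the prompt. This is a sketch under those assumptions, not the patent's prescribed mechanism:

```python
# Sketch of a virtual-touch test: the hand position comes from the data
# glove, the box is the object's axis-aligned bounds. On touch, the object
# becomes semi-transparent (revealing internals) and a prompt is generated.

def hand_touches(hand_pos, box_min, box_max):
    """True if the tracked hand position lies inside the object's AABB."""
    return all(lo <= p <= hi for p, lo, hi in zip(hand_pos, box_min, box_max))

def on_touch(obj, hand_pos):
    if hand_touches(hand_pos, obj["box_min"], obj["box_max"]):
        obj["opacity"] = 0.5  # semi-transparent display of internal structure
        return "prompt: pinch to resize, middle-finger tap to edit internals"
    return None

car = {"box_min": (0, 0, 0), "box_max": (4, 2, 1.5), "opacity": 1.0}
prompt = on_touch(car, hand_pos=(1.0, 1.0, 0.5))   # inside the box
miss = on_touch(car, hand_pos=(9.0, 1.0, 0.5))     # outside the box
```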
In step S207, finger images of the user are captured in real time through an activated camera device.
In step S208, the finger images are analyzed to obtain corresponding gesture information.
The finger images of the user are continuously captured in real time through the activated camera device, and feature analysis is performed on the finger images to obtain the user's current gesture information.
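The feature analysis of S208 is left open by the text. As a greatly simplified stand-in, suppose an upstream detector already reports which fingers are extended in a frame; the classifier below (all labels assumed, not from the patent) then maps that to the gestures the embodiments use:

```python
# Toy classifier: real feature analysis would process camera frames; here
# the "features" are simply the set of extended fingers per frame.

def classify_gesture(extended_fingers):
    """Map a set of extended fingers to one of the gestures the text names."""
    fingers = set(extended_fingers)
    if fingers == {"index", "middle"}:
        return "size_adjust"       # index + middle cooperation (S209)
    if fingers == {"middle"}:
        return "internal_design"   # middle-finger operation (S210)
    if fingers == {"thumb"}:
        return "external_design"   # thumb operation
    return "unknown"

label = classify_gesture(["index", "middle"])
```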
In step S209, when it is detected that the gesture information indicates size adjustment of the three-dimensional virtual object, corresponding size adjustment is performed on the three-dimensional virtual object according to the amplitude of the gesture motion.
By default, the cooperation of the index finger and the middle finger may be used to perform size adjustment on the three-dimensional virtual object. When the detected gesture information is the cooperation of the index finger and the middle finger, the virtual car is correspondingly enlarged or reduced according to the change amplitude of the index finger and the middle finger.
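The patent only says the adjustment follows the "change amplitude" of the two fingers. One common convention, assumed here rather than stated in the text, takes the scale factor as the ratio of the current fingertip distance to the distance when the gesture began:

```python
# Pinch-to-scale sketch: scale factor = current fingertip distance divided
# by the distance at gesture start, clamped to a sane range.
import math

def fingertip_distance(index_tip, middle_tip):
    return math.dist(index_tip, middle_tip)

def scaled(obj, start_dist, current_dist, min_scale=0.1, max_scale=10.0):
    factor = current_dist / start_dist
    new_scale = min(max(obj["scale"] * factor, min_scale), max_scale)
    return {**obj, "scale": new_scale}

car = {"scale": 1.0}
d0 = fingertip_distance((0, 0, 0), (0.04, 0, 0))  # fingers 4 cm apart at start
d1 = fingertip_distance((0, 0, 0), (0.08, 0, 0))  # fingers spread to 8 cm
enlarged = scaled(car, d0, d1)                    # spreading -> enlarge
```

The clamp keeps a jittery glove or camera reading from collapsing the model to zero or blowing it up unboundedly.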
In step S210, when it is detected that the gesture information indicates internal structure design of the three-dimensional virtual object, the internal structure is designed according to the gesture motion.
By default, a middle-finger operation may be used to perform internal structure design on the three-dimensional virtual object. In one embodiment, when the user's middle-finger operation is detected, replacement components for the internal structure of the virtual car (thumbnails of different engines, chassis, and electrical equipment) may be generated, and the user may click with the middle finger to replace the internal structure.
In one embodiment, when it is detected that the gesture information indicates external structure design of the three-dimensional virtual object, the external structure is designed according to the gesture motion.
By default, a thumb operation may be used to perform external structure design on the three-dimensional virtual object. In one embodiment, when the user's thumb operation is detected, replacement components for the external structure of the virtual car (car color, doors, windows, lights, etc.) may be generated, and the user may point and click with the thumb to replace the external structure.
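Both the internal and external design embodiments amount to swapping a chosen replacement component into the model. A minimal sketch, with all catalogs and part names invented for illustration:

```python
# Component replacement sketch: the generated "replacement components" are
# modeled as option lists, and a click swaps the chosen part into the car.

INTERNAL_OPTIONS = {"engine": ["V6", "V8", "electric"]}
EXTERNAL_OPTIONS = {"color": ["blue", "red", "black"]}

def replace_part(car, catalog, slot, choice):
    """Swap `slot` (e.g. 'engine' or 'color') to `choice` if it is offered."""
    if choice not in catalog.get(slot, []):
        raise ValueError(f"{choice!r} is not an available {slot}")
    return {**car, slot: choice}

car = {"engine": "V6", "color": "blue"}
car = replace_part(car, INTERNAL_OPTIONS, "engine", "electric")  # middle-finger click
car = replace_part(car, EXTERNAL_OPTIONS, "color", "red")        # thumb click
```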
In step S211, when it is detected that the gesture information indicates a dynamic demonstration operation on the three-dimensional virtual object, a demonstration video of the three-dimensional virtual object is obtained for dynamic demonstration.
A consecutive-click operation may be used to perform the dynamic demonstration operation on the three-dimensional virtual object. In one embodiment, when the user consecutively clicks the three-dimensional virtual object, the demonstration videos of the three-dimensional virtual object of the virtual car, such as a driving demonstration video and a collision demonstration video, may be obtained, and the user may select the demonstration video of interest for dynamic playback.
As can be seen from the above, in the gesture recognition method based on augmented reality provided in this embodiment, a user's selection instruction for a virtual article is received, the corresponding three-dimensional virtual object is obtained and virtually displayed, and the internal structure information of the three-dimensional virtual object is displayed when it is detected that the user's hand virtually touches it. The user's gesture information is then received: when the gesture information indicates size adjustment of the three-dimensional virtual object, corresponding size adjustment is performed according to the gesture amplitude; when the gesture information indicates internal structure design of the three-dimensional virtual object, the internal structure is designed according to the gesture motion; and when the gesture information indicates a dynamic demonstration operation on the three-dimensional virtual object, a demonstration video of the three-dimensional virtual object is obtained for dynamic demonstration. This makes gesture interaction with virtual articles more engaging and improves the interaction efficiency of augmented reality scenes.
To better implement the gesture recognition method based on augmented reality provided by the embodiments of the present invention, an embodiment of the present invention also provides a system based on the above gesture recognition method based on augmented reality. The meanings of the terms are the same as in the above gesture recognition method based on augmented reality, and for specific implementation details, reference may be made to the description in the method embodiments.
Referring to Fig. 4, Fig. 4 is a schematic module diagram of the gesture recognition system based on augmented reality provided by an embodiment of the present invention.
Specifically, the gesture recognition system 300 based on augmented reality comprises a receiving module 31, a detection module 32, a generation module 33, and an operation module 34.
The receiving module 31 is configured to receive a user's selection instruction for a virtual article, and obtain, according to the selection instruction, the three-dimensional virtual object corresponding to the virtual article.
The virtual article may include virtual articles such as a virtual car, a virtual desk, and a virtual interior. The virtual article may be obtained by three-dimensional modeling of a real article, or may be custom-made according to user demand for an article that does not exist; no specific limitation is imposed here.
Further, thumbnails of virtual articles may be displayed on an external display device through a display interface, and the user may select a corresponding virtual article through the display device. When the user has selected a virtual article, the receiving module 31 obtains the three-dimensional virtual object corresponding to that virtual article. The three-dimensional virtual object is a polygonal representation of an object and may be displayed through a head-mounted display device. The displayed object may be an entity in the real world or a virtual object; anything existing in nature may be represented by a three-dimensional virtual object.
The detection module 32 is configured to virtually display the three-dimensional virtual object at a preset position and detect the motion-sensing information of the user's hand in real time.
The user may view the three-dimensional virtual object by wearing a head-mounted display device. The three-dimensional virtual object is displayed in real space at the preset position. The user may interact with the three-dimensional virtual object by wearing data gloves, and the data gloves can detect the motion-sensing information of the user's hand.
The generation module 33 is configured to, when it is detected that the motion-sensing information indicates that the user's hand virtually touches the three-dimensional virtual object, display the internal structure information of the three-dimensional virtual object and generate gesture prompt information, the gesture prompt information being used to prompt the user to input a specified gesture to operate on the three-dimensional virtual object.
When the generation module 33 detects that the user's motion-sensing information indicates that the user's hand virtually touches the three-dimensional virtual object, this indicates that the user needs to operate on the three-dimensional virtual object. The three-dimensional virtual object may be displayed semi-transparently so that the user can observe its internal structure information in real time, and corresponding gesture prompt information is generated. The gesture prompt information is used to prompt the user to input a specified gesture to operate on the three-dimensional virtual object; for example, the size of the three-dimensional virtual object may be adjusted through the cooperation of the index finger and the middle finger, and a double-click of the index finger may be used to design the internal structure of the three-dimensional virtual object.
The operation module 34 is configured to obtain the gesture information input by the user based on the gesture prompt information, and perform a corresponding operation on the three-dimensional virtual object according to the gesture information.
The operation module 34 may obtain, in real time through a camera device, the gesture information input by the user based on the gesture prompt information, and analyze the operation corresponding to the gesture information. The operation may include, but is not limited to, size adjustment, appearance design, internal structure design, and dynamic demonstration of the three-dimensional virtual object. The operation is then executed to process the three-dimensional virtual object.
Referring also to Fig. 5, Fig. 5 is another schematic module diagram of the gesture recognition system based on augmented reality provided by an embodiment of the present invention. The gesture recognition system 300 based on augmented reality may further include the following.
The detection module 32 may further include a detection submodule 321 and a generation submodule 322.
Specifically, the detection submodule 321 is configured to virtually display the three-dimensional virtual object at a preset position and detect the action information of the user's hand in real time through a data glove device. The generation submodule 322 is configured to generate the corresponding motion-sensing information in real time according to the action information of the user's hand.
The operation module 34 may further include an adjustment submodule 341, a design submodule 342, and a demonstration submodule 343.
Specifically, the adjusting submodule 341, detects that gesture information instruction carries out greatly three-dimensional object for working as
When small adjustment, corresponding size adjustment is carried out to the three-dimensional object according to the amplitude of gesture motion.The design submodule
342, for when detecting gesture information instruction to the internal structure design of three-dimensional object, according to gesture motion to inside
Structure is designed.The demonstration submodule 343 detects that gesture information instruction carries out dynamic to three-dimensional object and drills for working as
When showing operation, the demonstration video for obtaining three-dimensional object carries out dynamic demonstration.
In one embodiment, the operation module can also be configured to: capture the finger image of the user in real time through an activated image-capturing device; and analyze the finger image to obtain the corresponding gesture information, and perform the corresponding operation on the three-dimensional virtual object according to the gesture information.
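The camera path above can be sketched with a toy analysis step. Everything here is an assumption for illustration: the input is a pre-binarized hand mask rather than a raw frame, and the open-hand/fist heuristic (how densely the hand fills its bounding box) stands in for whatever classifier a real system would use.

```python
# Sketch of the camera path: analyse a (pre-binarised) finger image and
# report coarse gesture information. A production system would use a
# vision library; the mask input and the pose heuristic are assumptions.

def analyze_finger_image(mask):
    """mask: 2-D list of 0/1 pixels, 1 = hand."""
    pts = [(x, y) for y, row in enumerate(mask) for x, v in enumerate(row) if v]
    if not pts:
        return {"hand_present": False}
    area = len(pts)
    cx = sum(x for x, _ in pts) / area
    cy = sum(y for _, y in pts) / area
    xs = [x for x, _ in pts]
    ys = [y for _, y in pts]
    bbox_area = (max(xs) - min(xs) + 1) * (max(ys) - min(ys) + 1)
    # An open hand fills its bounding box sparsely; a fist fills it densely.
    return {"hand_present": True, "centroid": (cx, cy),
            "pose": "fist" if area / bbox_area > 0.8 else "open"}
```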
An acquisition module 35, configured to obtain the size, color and internal structure information of the virtual object;
A modeling module 36, configured to perform three-dimensional modeling according to the size, color and internal structure information of the virtual object, so as to generate the three-dimensional virtual object corresponding to the virtual object.
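The hand-off from acquisition module 35 to modeling module 36 can be sketched as building a record from the three acquired attributes. The attribute names and the record layout are assumptions for illustration; a real implementation would emit an actual renderable mesh.

```python
# Sketch of acquisition module 35 feeding modeling module 36. The record
# layout and attribute names are assumptions made for illustration.
from dataclasses import dataclass, field

@dataclass
class ThreeDVirtualObject:
    size: tuple                                    # (width, height, depth)
    color: str
    internal: list = field(default_factory=list)   # internal structure parts

    def volume(self):
        w, h, d = self.size
        return w * h * d

def build_model(size, color, internal_parts):
    """Modeling module 36: turn the acquired attributes into a 3-D object."""
    return ThreeDVirtualObject(size=tuple(size), color=color,
                               internal=list(internal_parts))
```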
It can be seen from the above that the gesture recognition system based on augmented reality provided in this embodiment receives a user's selection instruction for a virtual object, obtains the corresponding three-dimensional virtual object and displays it virtually. When it is detected that the user's hand virtually touches the three-dimensional virtual object, the internal structure information of the three-dimensional virtual object is displayed and the user's gesture information is received. When it is detected that the gesture information indicates a size adjustment of the three-dimensional virtual object, a corresponding size adjustment is performed on the three-dimensional virtual object according to the gesture amplitude; when it is detected that the gesture information indicates a design of the internal structure of the three-dimensional virtual object, the internal structure is designed according to the gesture motion; and when it is detected that the gesture information indicates a dynamic demonstration operation on the three-dimensional virtual object, the demonstration video of the three-dimensional virtual object is obtained and a dynamic demonstration is performed. This makes the user's gesture interaction with virtual objects more engaging, and improves the interaction efficiency of the augmented reality scene.
Correspondingly, an embodiment of the present invention also provides an augmented reality server. As shown in Fig. 6, the augmented reality server may include components such as a radio frequency (RF, Radio Frequency) circuit 401, a memory 402 including one or more computer-readable storage media, an input unit 403, a display unit 404, a sensor 405, an audio circuit 406, a Wireless Fidelity (WiFi, Wireless Fidelity) module 407, a processor 408 including one or more processing cores, and a power supply 409. Those skilled in the art will understand that the augmented reality server structure shown in Fig. 6 does not constitute a limitation on the augmented reality server, which may include more or fewer components than illustrated, combine certain components, or adopt a different component arrangement. Wherein:
The RF circuit 401 can be used to receive and send signals during information transmission and reception or during a call; in particular, after receiving downlink information from a base station, it transfers the information to one or more processors 408 for processing; in addition, it sends uplink data to the base station. In general, the RF circuit 401 includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a subscriber identity module (SIM, Subscriber Identity Module) card, a transceiver, a coupler, a low-noise amplifier (LNA, Low Noise Amplifier), a duplexer, and so on. In addition, the RF circuit 401 can also communicate with networks and other devices by wireless communication. The wireless communication can use any communication standard or protocol, including but not limited to Global System for Mobile Communications (GSM, Global System of Mobile communication), General Packet Radio Service (GPRS, General Packet Radio Service), Code Division Multiple Access (CDMA, Code Division Multiple Access), Wideband Code Division Multiple Access (WCDMA, Wideband Code Division Multiple Access), Long Term Evolution (LTE, Long Term Evolution), e-mail, Short Messaging Service (SMS, Short Messaging Service), etc.
The memory 402 can be used to store software programs and modules; the processor 408 executes various functional applications and data processing by running the software programs and modules stored in the memory 402. The memory 402 can mainly include a program storage area and a data storage area, wherein the program storage area can store the operating system, the application program required by at least one function (for example, the virtual image of a product), and so on; the data storage area can store data created according to the use of the augmented reality server (for example, component information and repair information), and so on. In addition, the memory 402 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other solid-state storage devices. Correspondingly, the memory 402 can also include a memory controller to provide the processor 408 and the input unit 403 with access to the memory 402.
The input unit 403 can be used to receive input numeric or character information, and to generate microphone, touch-screen, body-sensing input device, keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control. Specifically, in one specific embodiment, the input unit 403 may include a touch-sensitive surface and other input devices. The touch-sensitive surface, also referred to as a touch display screen or a touch pad, collects the user's touch operations on or near it (such as the user's operations on or near the touch-sensitive surface using a finger, a stylus, or any other suitable object or attachment), and drives the corresponding connected device according to a preset program. Optionally, the touch-sensitive surface may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch orientation, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 408, and can receive and execute commands sent by the processor 408. Furthermore, the touch-sensitive surface can be realized in multiple types, such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch-sensitive surface, the input unit 403 can also include other input devices. Specifically, the other input devices can include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, a joystick, and the like.
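The two-part touch pipeline above (detection device emits raw signals, controller converts them into contact coordinates for the processor 408) can be sketched as a single conversion step. The ADC-style raw encoding and the 12-bit range are assumptions for illustration; real controllers differ per panel type.

```python
# Sketch of the touch pipeline: the detection device reports a raw
# (x_adc, y_adc) reading, the controller maps it to panel coordinates
# before sending it on. The 12-bit ADC encoding is an assumption.

def touch_controller(raw_signal, panel_w, panel_h, adc_max=4095):
    """Convert a raw ADC reading into integer panel coordinates."""
    x_adc, y_adc = raw_signal
    return (round(x_adc / adc_max * panel_w),
            round(y_adc / adc_max * panel_h))
```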
The display unit 404 can be used to display information input by the user or information provided to the user, as well as the various graphical user interfaces of the terminal; these graphical user interfaces can be composed of graphics, text, icons, video, and any combination thereof. The display unit 404 may include a display panel; optionally, the display panel can be configured in the form of a liquid crystal display (LCD, Liquid Crystal Display), an organic light-emitting diode (OLED, Organic Light-Emitting Diode), and so on. Furthermore, the touch-sensitive surface can cover the display panel; after the touch-sensitive surface detects a touch operation on or near it, it transmits the operation to the processor 408 to determine the type of the touch event, and the processor 408 then provides a corresponding visual output on the display panel according to the type of the touch event. Although in Fig. 6 the touch-sensitive surface and the display panel realize the input and output functions as two independent components, in some embodiments the touch-sensitive surface and the display panel can be integrated to realize the input and output functions.
The augmented reality server may also include at least one sensor 405, such as an optical sensor, a motion sensor, and other sensors. Specifically, the optical sensor may include an ambient light sensor and a proximity sensor, wherein the ambient light sensor can adjust the brightness of the display panel according to the brightness of the ambient light, and the proximity sensor can turn off the display panel and/or the backlight when the device is moved close to the ear. As a kind of motion sensor, the gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally three axes), can detect the magnitude and direction of gravity when static, and can be used for applications that recognize the posture of the terminal (such as horizontal/vertical screen switching, related games, and magnetometer pose calibration), vibration-recognition related functions (such as a pedometer or tapping), and so on. The terminal can also be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, and details are not described herein. It can be understood that these do not belong to the necessary configuration of the augmented reality server and can be omitted as needed within the scope of not changing the essence of the invention.
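The gravity-sensor uses named above (detecting the gravity direction when static and switching horizontal/vertical screen modes) can be sketched from a three-axis reading. The axis convention (y along the long edge of the screen) and the simple dominance test are assumptions for illustration.

```python
# Sketch of the gravity-acceleration use described above: recover the
# gravity magnitude from a three-axis reading and decide the screen
# posture. The axis convention and the dominance test are assumptions.

def posture(ax, ay, az):
    g = (ax * ax + ay * ay + az * az) ** 0.5  # magnitude of gravity
    mode = "portrait" if abs(ay) >= abs(ax) else "landscape"
    return mode, g
```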
The audio circuit 406, a loudspeaker, and a microphone can provide an audio interface between the user and the terminal. The audio circuit 406 can transfer the electrical signal converted from the received audio data to the loudspeaker, which converts it into a sound signal for output; on the other hand, the microphone converts the collected sound signal into an electrical signal, which is received by the audio circuit 406 and converted into audio data; after the audio data is output to the processor 408 for processing, it is sent through the RF circuit 401 to, for example, another terminal, or the audio data is output to the memory 402 for further processing. The audio circuit 406 may also include an earphone jack to provide communication between a peripheral earphone and the terminal.
WiFi belongs to short-range wireless transmission technology. Through the WiFi module 407, the terminal can help the user send and receive e-mail, browse web pages, access streaming media, and so on; it provides the user with wireless broadband Internet access. Although Fig. 6 shows the WiFi module 407, it can be understood that it does not belong to the necessary configuration of the augmented reality server and can be omitted as needed within the scope of not changing the essence of the invention.
The processor 408 is the control center of the terminal. It connects the various parts of the entire augmented reality server using various interfaces and lines, and executes the various functions of the augmented reality server and processes data by running or executing the software programs and/or modules stored in the memory 402 and calling the data stored in the memory 402, thereby monitoring the augmented reality server as a whole. Optionally, the processor 408 may include one or more processing cores; preferably, the processor 408 can integrate an application processor and a modem processor, wherein the application processor mainly handles the operating system, the user interface, the application programs, and so on, and the modem processor mainly handles wireless communication. It can be understood that the above modem processor may also not be integrated into the processor 408.
The augmented reality server further includes the power supply 409 (such as a battery) that supplies power to all the components. Preferably, the power supply can be logically connected to the processor 408 through a power management system, so that functions such as charging management, discharging management, and power consumption management are realized through the power management system. The power supply 409 can also include any components such as one or more direct-current or alternating-current power sources, a recharging system, a power failure detection circuit, a power adapter or inverter, and a power status indicator.
Although not shown, the augmented reality server can also include a camera, a Bluetooth module, and so on, and details are not described herein. Specifically, in this embodiment, the processor 408 in the augmented reality server loads, according to the following instructions, the executable files corresponding to the processes of one or more application programs into the memory 402, and runs the application programs stored in the memory 402, so as to realize various functions:
receiving a user's selection instruction for a virtual object, and obtaining the three-dimensional virtual object corresponding to the virtual object according to the selection instruction;
displaying the three-dimensional virtual object virtually according to a preset position, and detecting the body-sensing information of the user's hand in real time;
when it is detected that the body-sensing information indicates that the user's hand virtually touches the three-dimensional virtual object, displaying the internal structure information of the three-dimensional virtual object and generating gesture prompt information, the gesture prompt information being used to prompt the user to input a specified gesture to operate the three-dimensional virtual object;
obtaining the gesture information input by the user based on the gesture prompt information, and performing the corresponding operation on the three-dimensional virtual object according to the gesture information.
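The four steps realized by the processor 408 can be strung together as a minimal control flow. Everything below is an illustrative assumption rather than the patented implementation: the object catalogue, the prompt wording, and the resize rule are invented for the sketch.

```python
# End-to-end sketch of the four steps run by processor 408. The object
# catalogue, touch handling and prompt wording are assumptions.

CATALOGUE = {"car": {"scale": 1.0, "internal": ["engine", "gearbox"]}}

def handle_selection(selection):
    # Step 1: receive the selection instruction, fetch the 3-D object.
    return dict(CATALOGUE[selection], name=selection)

def on_body_sensing(obj, touching):
    # Steps 2-3: once a virtual touch is detected, reveal the internal
    # structure and emit the gesture prompt information.
    if touching:
        return {"show_internal": obj["internal"],
                "prompt": "input a specified gesture to operate the object"}
    return None

def on_gesture(obj, gesture):
    # Step 4: apply the operation that the gesture information indicates.
    if gesture["kind"] == "resize":
        obj["scale"] *= 1.0 + gesture["amplitude"]
    return obj
```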
In specific implementation, each of the above units can be realized as an independent entity, or can be combined arbitrarily and realized as the same entity or as several entities. For the specific implementation of each of the above units, reference can be made to the foregoing method embodiments, and details are not repeated herein.
In the above embodiments, the description of each embodiment has its own emphasis. For the parts that are not described in detail in a certain embodiment, reference can be made to the detailed description of the gesture identification method based on augmented reality above, and details are not repeated herein.
As for the gesture identification method and system based on augmented reality provided in the embodiments of the present invention, the gesture recognition system based on augmented reality and the gesture identification method based on augmented reality belong to the same concept. Any method provided in the embodiments of the gesture identification method based on augmented reality can be run on the gesture recognition system based on augmented reality; the specific implementation process is detailed in the embodiments of the gesture identification method based on augmented reality, and details are not repeated herein.
It should be noted that, as for the gesture identification method based on augmented reality of the present invention, a person of ordinary skill in the art can understand that all or part of the process of realizing the gesture identification method based on augmented reality of the embodiments of the present invention can be completed by controlling the relevant hardware through a computer program. The computer program can be stored in a computer-readable storage medium, for example in the memory of the terminal, and be executed by at least one processor in the terminal; the execution process can include the process of the embodiments of the gesture identification method based on augmented reality. Wherein, the storage medium can be a magnetic disk, an optical disc, a read-only memory (ROM, Read Only Memory), a random access memory (RAM, Random Access Memory), and so on.
As for the gesture recognition system based on augmented reality of the embodiments of the present invention, its functional modules can be integrated into one processing chip, or each module can exist alone physically, or two or more modules can be integrated into one module. The above integrated module can be realized either in the form of hardware or in the form of a software functional module. If the integrated module is realized in the form of a software functional module and is sold or used as an independent product, it can also be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disc.
The gesture identification method and system based on augmented reality provided in the embodiments of the present invention have been described in detail above. Specific examples are used herein to illustrate the principle and implementation of the present invention, and the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, for those skilled in the art, according to the idea of the present invention, there will be changes in the specific implementation and the scope of application. In conclusion, the content of this specification should not be understood as a limitation of the present invention.
Claims (10)
1. A gesture identification method based on augmented reality, characterized by comprising:
receiving a user's selection instruction for a virtual object, and obtaining a three-dimensional virtual object corresponding to the virtual object according to the selection instruction;
displaying the three-dimensional virtual object virtually according to a preset position, and detecting body-sensing information of the user's hand in real time;
when it is detected that the body-sensing information indicates that the user's hand virtually touches the three-dimensional virtual object, displaying internal structure information of the three-dimensional virtual object and generating gesture prompt information, the gesture prompt information being used to prompt the user to input a specified gesture to operate the three-dimensional virtual object;
obtaining gesture information input by the user based on the gesture prompt information, and performing a corresponding operation on the three-dimensional virtual object according to the gesture information.
2. The gesture identification method based on augmented reality according to claim 1, characterized in that the detecting body-sensing information of the user's hand in real time comprises:
detecting motion information of the user's hand in real time through a data glove device;
generating corresponding body-sensing information in real time according to the motion information of the user's hand.
3. The gesture identification method based on augmented reality according to claim 2, characterized in that the operation on the three-dimensional virtual object comprises performing a size adjustment, an internal structure design and a dynamic demonstration operation on the three-dimensional virtual object;
the obtaining gesture information input by the user based on the gesture prompt information, and performing a corresponding operation on the three-dimensional virtual object according to the gesture information, comprises:
when it is detected that the gesture information indicates a size adjustment of the three-dimensional virtual object, performing the corresponding size adjustment on the three-dimensional virtual object according to the amplitude of the gesture motion;
when it is detected that the gesture information indicates a design of the internal structure of the three-dimensional virtual object, designing the internal structure according to the gesture motion;
when it is detected that the gesture information indicates a dynamic demonstration operation on the three-dimensional virtual object, obtaining a demonstration video of the three-dimensional virtual object and performing a dynamic demonstration.
4. The gesture identification method based on augmented reality according to claim 1, characterized in that the obtaining gesture information input by the user based on the gesture prompt information comprises:
capturing a finger image of the user in real time through an activated image-capturing device;
analyzing the finger image to obtain the corresponding gesture information.
5. The gesture identification method based on augmented reality according to claim 1, characterized in that, before the receiving a user's selection instruction for a virtual object, the method further comprises:
obtaining size, color and internal structure information of the virtual object;
performing three-dimensional modeling according to the size, color and internal structure information of the virtual object, so as to generate the three-dimensional virtual object corresponding to the virtual object.
6. A gesture recognition system based on augmented reality, characterized by comprising:
a receiving module, configured to receive a user's selection instruction for a virtual object, and to obtain a three-dimensional virtual object corresponding to the virtual object according to the selection instruction;
a detection module, configured to display the three-dimensional virtual object virtually according to a preset position, and to detect body-sensing information of the user's hand in real time;
a generation module, configured to, when it is detected that the body-sensing information indicates that the user's hand virtually touches the three-dimensional virtual object, display internal structure information of the three-dimensional virtual object and generate gesture prompt information, the gesture prompt information being used to prompt the user to input a specified gesture to operate the three-dimensional virtual object;
an operation module, configured to obtain gesture information input by the user based on the gesture prompt information, and to perform a corresponding operation on the three-dimensional virtual object according to the gesture information.
7. The gesture recognition system based on augmented reality according to claim 6, characterized in that the detection module comprises:
a detection sub-module, configured to display the three-dimensional virtual object virtually according to a preset position, and to detect motion information of the user's hand in real time through a data glove device;
a generation sub-module, configured to generate corresponding body-sensing information in real time according to the motion information of the user's hand.
8. The gesture recognition system based on augmented reality according to claim 7, characterized in that the operation module comprises:
an adjusting sub-module, configured to, when it is detected that the gesture information indicates a size adjustment of the three-dimensional virtual object, perform the corresponding size adjustment on the three-dimensional virtual object according to the amplitude of the gesture motion;
a design sub-module, configured to, when it is detected that the gesture information indicates a design of the internal structure of the three-dimensional virtual object, design the internal structure according to the gesture motion;
a demonstration sub-module, configured to, when it is detected that the gesture information indicates a dynamic demonstration operation on the three-dimensional virtual object, obtain a demonstration video of the three-dimensional virtual object and perform a dynamic demonstration.
9. The gesture recognition system based on augmented reality according to claim 6, characterized in that the operation module is configured to:
capture a finger image of the user in real time through an activated image-capturing device;
analyze the finger image to obtain the corresponding gesture information, and perform a corresponding operation on the three-dimensional virtual object according to the gesture information.
10. The gesture recognition system based on augmented reality according to claim 6, characterized in that the system further comprises:
an acquisition module, configured to obtain size, color and internal structure information of the virtual object;
a modeling module, configured to perform three-dimensional modeling according to the size, color and internal structure information of the virtual object, so as to generate the three-dimensional virtual object corresponding to the virtual object.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710758657.8A CN109426783A (en) | 2017-08-29 | 2017-08-29 | Gesture identification method and system based on augmented reality |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109426783A true CN109426783A (en) | 2019-03-05 |
Family
ID=65503636
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710758657.8A Pending CN109426783A (en) | 2017-08-29 | 2017-08-29 | Gesture identification method and system based on augmented reality |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109426783A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070273610A1 (en) * | 2006-05-26 | 2007-11-29 | Itt Manufacturing Enterprises, Inc. | System and method to display maintenance and operational instructions of an apparatus using augmented reality |
CN104050859A (en) * | 2014-05-08 | 2014-09-17 | 南京大学 | Interactive digital stereoscopic sand table system |
CN104641400A (en) * | 2012-07-19 | 2015-05-20 | 戈拉夫·瓦茨 | User-controlled 3D simulation for providing realistic and enhanced digital object viewing and interaction experience |
CN105278685A (en) * | 2015-09-30 | 2016-01-27 | 陕西科技大学 | Assistant instructing system and assistant instructing system method based on EON |
CN106125938A (en) * | 2016-07-01 | 2016-11-16 | 联想(北京)有限公司 | A kind of information processing method and electronic equipment |
CN106937531A (en) * | 2014-06-14 | 2017-07-07 | 奇跃公司 | Method and system for producing virtual and augmented reality |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110333785A (en) * | 2019-07-11 | 2019-10-15 | Oppo广东移动通信有限公司 | Information processing method, device, storage medium and augmented reality equipment |
CN110333785B (en) * | 2019-07-11 | 2022-10-28 | Oppo广东移动通信有限公司 | Information processing method and device, storage medium and augmented reality equipment |
CN111651050A (en) * | 2020-06-09 | 2020-09-11 | 浙江商汤科技开发有限公司 | Method and device for displaying urban virtual sand table, computer equipment and storage medium |
CN112061137B (en) * | 2020-08-19 | 2022-01-14 | 一汽奔腾轿车有限公司 | Man-vehicle interaction control method outside vehicle |
CN112061137A (en) * | 2020-08-19 | 2020-12-11 | 一汽奔腾轿车有限公司 | Man-vehicle interaction control method outside vehicle |
CN112286363A (en) * | 2020-11-19 | 2021-01-29 | 网易(杭州)网络有限公司 | Virtual subject form changing method and device, storage medium and electronic equipment |
CN112613389A (en) * | 2020-12-18 | 2021-04-06 | 上海影创信息科技有限公司 | Eye gesture control method and system and VR glasses thereof |
CN113034668A (en) * | 2021-03-01 | 2021-06-25 | 中科数据(青岛)科技信息有限公司 | AR-assisted mechanical simulation operation method and system |
CN113034668B (en) * | 2021-03-01 | 2023-04-07 | 中科数据(青岛)科技信息有限公司 | AR-assisted mechanical simulation operation method and system |
CN115273636A (en) * | 2021-04-29 | 2022-11-01 | 北京华录新媒信息技术有限公司 | Naked eye 3D immersive experience system |
CN113325952A (en) * | 2021-05-27 | 2021-08-31 | 百度在线网络技术(北京)有限公司 | Method, apparatus, device, medium and product for presenting virtual objects |
CN113421343A (en) * | 2021-05-27 | 2021-09-21 | 深圳市晨北科技有限公司 | Method for observing internal structure of equipment based on augmented reality |
CN113421343B (en) * | 2021-05-27 | 2024-06-04 | 深圳市晨北科技有限公司 | Method based on internal structure of augmented reality observation equipment |
WO2024077872A1 (en) * | 2022-10-09 | 2024-04-18 | 网易(杭州)网络有限公司 | Display position adjustment method and apparatus, storage medium, and electronic device |
CN115630415A (en) * | 2022-12-06 | 2023-01-20 | 广东时谛智能科技有限公司 | Method and device for designing shoe body model based on gestures |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109426783A (en) | Gesture identification method and system based on augmented reality | |
CN105487649B (en) | A kind of reminding method and mobile terminal | |
CN109213885A (en) | Car show method and system based on augmented reality | |
CN109918975A (en) | A kind of processing method of augmented reality, the method for Object identifying and terminal | |
CN105912918B (en) | A kind of unlocked by fingerprint method and terminal | |
CN109905754A (en) | Virtual present collection methods, device and storage equipment | |
CN104238893B (en) | A kind of method and apparatus that video preview picture is shown | |
CN107105093A (en) | Camera control method, device and terminal based on hand track | |
CN103813127B (en) | A kind of video call method, terminal and system | |
CN108958606B (en) | Split screen display method and device, storage medium and electronic equipment | |
CN108984064A (en) | Multi-screen display method, device, storage medium and electronic equipment | |
CN104899912B (en) | Animation method and back method and equipment | |
CN107071129B (en) | A kind of bright screen control method and mobile terminal | |
CN109215130A (en) | A kind of product repairing method and system based on augmented reality | |
CN105955597B (en) | Information display method and device | |
CN106127829A (en) | The processing method of a kind of augmented reality, device and terminal | |
CN109067981A (en) | Split screen application switching method, device, storage medium and electronic equipment | |
CN106708554A (en) | Program running method and device | |
CN109426343A (en) | Cooperation training method and system based on virtual reality | |
CN104820546B (en) | Function information methods of exhibiting and device | |
CN108170358A (en) | Mobile phone and head-up display exchange method | |
CN109871358A (en) | A kind of management method and terminal device | |
CN108958629A (en) | Split screen exits method, apparatus, storage medium and electronic equipment | |
CN107276602A (en) | Radio frequency interference processing method, device, storage medium and terminal | |
CN110442297A (en) | Multi-screen display method, split screen display available device and terminal device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20190305 |
RJ01 | Rejection of invention patent application after publication |