CN108273265A - Display method and device for virtual objects - Google Patents
Display method and device for virtual objects
- Publication number
- CN108273265A CN108273265A CN201710314092.4A CN201710314092A CN108273265A CN 108273265 A CN108273265 A CN 108273265A CN 201710314092 A CN201710314092 A CN 201710314092A CN 108273265 A CN108273265 A CN 108273265A
- Authority
- CN
- China
- Prior art keywords
- virtual objects
- operational order
- mark
- reality scene
- scene image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/40—Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
- A63F13/42—Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/50—Controlling the output signals based on the game progress
- A63F13/52—Controlling the output signals based on the game progress involving aspects of the displayed game scene
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04883—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/167—Audio in a user interface, e.g. using voice commands for navigating, audio feedback
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/30—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/60—Methods for processing data by generating or executing the game program
- A63F2300/66—Methods for processing data by generating or executing the game program for rendering three dimensional images
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- General Health & Medical Sciences (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention discloses a display method and device for virtual objects. The method includes: when a first operation instruction is acquired, rendering and generating a first interactive interface, the first interactive interface including a first display area; acquiring a real-scene image and displaying it in the first display area; monitoring a second operation instruction of a user, and selecting a marker graphic in the real-scene image according to the second operation instruction; when a third operation instruction is received, acquiring three-dimensional model data of a corresponding virtual object according to the third operation instruction, and rendering and generating the virtual object according to the three-dimensional model data; and controlling the virtual object to follow the movement of the marker graphic. The invention solves the technical problem in the related art that the acquisition of virtual objects lacks a novel interaction mode, resulting in a poor user experience.
Description
Technical field
The present invention relates to the field of games, and in particular to a display method and device for virtual objects.
Background technology
In existing game software, a user usually needs to draw virtual objects such as cards, characters, or props in order to play; in card games in particular, the drawing and display of cards is one of the most important game contents. However, in existing games, virtual objects such as cards, characters, or props are usually acquired by the user clicking a draw button, after which the system generates a specific virtual object according to a preset probability and displays it to the user; and the display method is usually that the user clicks an icon of the virtual object, after which the system generates a specific 2D image of the virtual object and displays it to the user. This operation flow has the following defects: it lacks interactivity between the user and the virtual objects acquired in the game; in addition, the display of virtual objects in the game is very monotonous, so the user experience is poor.
Summary of the invention
An embodiment of the present invention provides a display method and device for virtual objects, so as to at least solve the technical problem in the related art that the acquisition of virtual objects lacks a novel interaction mode, resulting in a poor user experience.
According to one aspect of an embodiment of the present invention, a display method for virtual objects is provided, including: when a first operation instruction is acquired, rendering and generating a first interactive interface, the first interactive interface including a first display area; acquiring a real-scene image and displaying it in the first display area; monitoring a second operation instruction of a user, and selecting a marker graphic in the real-scene image according to the second operation instruction; when a third operation instruction is received, acquiring three-dimensional model data of a corresponding virtual object according to the third operation instruction, and rendering and generating the virtual object according to the three-dimensional model data; and controlling the virtual object to follow the movement of the marker graphic.
According to another aspect of an embodiment of the present invention, a display device for virtual objects is also provided, including: an interface rendering unit, configured to render and generate a first interactive interface when a first operation instruction of a user is acquired, the first interactive interface including a first display area; a display unit, configured to acquire a real-scene image and display it in the first display area; a generation unit, configured to monitor a second operation instruction of the user and select a marker graphic in the real-scene image according to the second operation instruction; an acquiring unit, configured to, when a third operation instruction is received, acquire three-dimensional model data of a corresponding virtual object according to the third operation instruction, and render and generate the virtual object according to the three-dimensional model data; and a follow-movement unit, configured to control the virtual object to follow the movement of the marker graphic.
According to one aspect of an embodiment of the present invention, a storage medium is provided, which includes a stored program, wherein, when the program runs, a device on which the storage medium is located is controlled to execute the above display method for virtual objects.
According to one aspect of an embodiment of the present invention, a processor is provided for running a program, wherein the program, when running, executes the above display method for virtual objects.
According to one aspect of an embodiment of the present invention, a terminal is provided, including: one or more processors, a memory, a display device, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the programs including instructions for executing the above display method for virtual objects.
In an embodiment of the present invention, a game virtual object is displayed in a real scene, achieving the purpose of combining the display of the virtual object with the real scene. This achieves the technical effect of effectively improving the user experience through a novel interaction mode between the game virtual scene and the real-world scene, and in turn solves the technical problem in the related art that the acquisition of virtual objects lacks a novel interaction mode, resulting in a poor user experience.
Description of the drawings
The drawings described herein are provided for a further understanding of the present invention and constitute a part of this application. The illustrative embodiments of the present invention and their description are used to explain the present invention and do not constitute an improper limitation of the present invention. In the drawings:
Fig. 1 is a flowchart of a display method for virtual objects according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of a display method for virtual objects according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of a display method for virtual objects according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of a display method for virtual objects according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of a display method for virtual objects according to an embodiment of the present invention;
Fig. 6 is a structural block diagram of a display device for virtual objects according to an embodiment of the present invention.
Detailed description of the embodiments
In order to enable those skilled in the art to better understand the solution of the present invention, the technical solutions in the embodiments of the present invention will be described clearly and completely below in conjunction with the drawings in the embodiments of the present invention. Obviously, the described embodiments are only a part of the embodiments of the present invention, rather than all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work shall fall within the scope of protection of the present invention.
It should be noted that the terms "first", "second", etc. in the description, the claims, and the above drawings are used to distinguish similar objects, and are not necessarily used to describe a specific order or sequence. It should be understood that data used in this way are interchangeable where appropriate, so that the embodiments of the present invention described herein can be implemented in an order other than those illustrated or described herein. In addition, the terms "comprising" and "having" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device containing a series of steps or units is not necessarily limited to the steps or units clearly listed, but may include other steps or units that are not clearly listed or that are inherent to the process, method, product, or device.
According to an embodiment of the present invention, a display method for virtual objects is provided. The method can be applied while a software application is executed on a processor of a terminal; that is, the method can be embodied by a software application, in particular by game software, including mobile games and other game software. It should be noted that although the steps illustrated in the flowchart of the drawings can be executed in a computer system such as a set of computer-executable instructions, and although a logical order is shown in the flowchart, in some cases the steps shown or described may be performed in an order different from the one herein.
Fig. 1 is a flowchart of a display method for virtual objects according to an embodiment of the present invention. As shown in Fig. 1, the method includes the following steps:
Step S11: when a first operation instruction is acquired, rendering and generating a first interactive interface, the first interactive interface including a first display area;
Step S13: acquiring a real-scene image and displaying it in the first display area;
Step S15: monitoring a second operation instruction of a user, and selecting a marker graphic in the real-scene image according to the second operation instruction;
Step S17: when a third operation instruction is received, acquiring three-dimensional model data of a corresponding virtual object according to the third operation instruction, and rendering and generating the virtual object according to the three-dimensional model data;
Step S19: controlling the virtual object to follow the movement of the marker graphic.
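The step sequence S11-S19 can be sketched as a small event-driven skeleton. All names below (run_display_flow, pick_marker, fetch_model, the event labels) are hypothetical illustrations, not an API that the patent defines:

```python
def run_display_flow(events, pick_marker, fetch_model):
    """Sketch of steps S11-S19 as a tiny state machine.

    events: iterable of (name, payload) pairs, e.g. ("op1", None),
            ("frame", image), ("op2", touch_point), ("op3", object_id).
    pick_marker(image, touch) -> marker graphic   (step S15)
    fetch_model(object_id)    -> 3D model data    (step S17)
    """
    state = {"interface": None, "frame": None, "marker": None, "object": None}
    for name, payload in events:
        if name == "op1":
            # S11: first instruction -> interactive interface with a display area
            state["interface"] = {"display_area": None}
        elif name == "frame" and state["interface"] is not None:
            # S13: show the live real-scene image in the first display area
            state["interface"]["display_area"] = payload
            state["frame"] = payload
        elif name == "op2" and state["frame"] is not None:
            # S15: select the marker graphic around the user's touch point
            state["marker"] = pick_marker(state["frame"], payload)
        elif name == "op3" and state["marker"] is not None:
            # S17: fetch and "render" the virtual object's model data
            state["object"] = fetch_model(payload)
    return state
```

Step S19 (follow movement) would run continuously on new frames and is sketched separately further below.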
Through the above steps, the present invention associates a virtual object with a specific physical object in the real scene, and controls the position, display angle, etc. of the virtual object in the captured real-scene image through the position of that physical object, thereby realizing a freer interaction with virtual objects.
To further disclose the technical solution of the present invention, more specific or preferred embodiments are provided below to illustrate each specific step and the technical principles of the present invention:
In a preferred implementation, the above virtual objects can be cards, characters, pets, heroes, props, etc. in a game.
Optionally, in step S11, the first operation instruction includes, but is not limited to, a user behavior such as the user touching, clicking, or sliding over a specific region or button, or the user uttering a specific sound; it can also be an operation instruction generated by the system according to a predetermined rule when a condition specified by the rule is met, for example: after the user completes a specified mission objective in the game, the game system automatically generates the operation instruction.
After the first operation instruction is received, a first interactive interface is rendered and generated. The first interactive interface includes a first display area and may also include a UI control layer; the first display area can be a partial display area of the first interactive interface, or occupy the whole area of the first interactive interface. Preferably, a layered structure is used: the first display area is the whole area of the first interactive interface, the UI control layer is located on the first display area, and the blank space of the UI control layer is made transparent or translucent, so that the content of the first display area can be shown to the user.
If the first display area only occupies part of the first interactive interface, it can be arranged at any position of the interface; the present invention is not specifically limited in this respect.
Optionally, in step S13, acquiring the real-scene image and displaying it in the first display area may include the following step:
Step S131: calling a camera device of the equipment, shooting the real scene in real time through the camera device, and displaying the real-scene image in real time in the first display area. For example, when the method of this embodiment is applied in a mobile phone game system, step S131 can be: calling the camera of the mobile phone to shoot the real scene in real time so as to obtain the real-scene image, and then dynamically displaying it in the first display area.
In a particular embodiment, the camera device can be the built-in camera of the mobile phone or an external camera device; likewise, the camera device can be front-facing or rear-facing, and the number of camera devices can be one, two, or more.
In a preferred embodiment, before the real scene is shot in real time through the camera device, parameters also need to be acquired to initialize and adapt the camera device; otherwise the first model may deform. The parameters include: parameters of the camera device itself (white balance, focusing parameters, etc.) and parameters of the test image (aspect ratio, field-of-view size, etc.). The specific adaptation process is: acquiring the parameters of the preset test image and a parameter configuration file from the equipment carrying the camera device, and extracting the parameters of the camera device itself from the parameter configuration file to adapt the camera device. The parameter configuration file is generated automatically by the system in advance according to the model information, or is acquired in advance from a third-party server.
Specifically, each type of iOS device is currently adapted by model information. An Android device can first attempt to connect to an Artoolkit server for adaptation; if the connection fails, it can attempt to adapt using the profile of an iOS machine of a similar model (a camera of identical or close pixel count and identical or close resolution); if no adaptation is possible, full-screen post-processing is performed, with appropriate stretching to a standard proportion and use of the preset white balance and auto-focusing, so as to ensure, at least to a certain degree, that the first three-dimensional model is displayed without deformation.
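The adaptation fallback chain described above (server lookup, then a similar-camera profile, then full-screen post-processing defaults) can be sketched as follows. The function and field names are hypothetical; the patent does not specify a data format:

```python
def adapt_camera(model_info, server_lookup, known_profiles, pixel_tol=0.1):
    """Pick a camera profile via the fallback chain sketched in the text.

    model_info: {"model": ..., "pixels": ...} for this device.
    server_lookup(model) -> profile dict or None (stands in for the
    Artoolkit-server step; None models a link failure).
    known_profiles: model name -> profile dict with a "pixels" field.
    """
    profile = server_lookup(model_info["model"])
    if profile is not None:
        return profile                              # adapted by the server
    target = model_info["pixels"]
    for prof in known_profiles.values():            # similar camera fallback
        if abs(prof["pixels"] - target) / target <= pixel_tol:
            return prof
    # last resort: full-screen post-processing with preset defaults
    return {"mode": "fullscreen", "white_balance": "auto", "focus": "auto"}
```

The tolerance `pixel_tol` is an invented knob standing in for "identical or close pixel count".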
Optionally, the purpose of step S15 is to freely select a marker graphic in the real-scene image, without presetting a test image in advance and without performing complicated graphic matching, which effectively improves processing efficiency and greatly improves the user experience.
Here, the marker graphic refers to: the graphic, selected by the user, that is formed in the real-scene image when a physical object in the real scene is shot by the camera device. The graphic can be collected and stored, and can serve as the basis for rendering and generating the virtual object and for making the virtual object follow movement. The graphic can be arbitrary (including but not limited to: shape, color, size, pattern); a non-centrosymmetric graphic is preferred, so as to facilitate detecting angle changes of the graphic.
The second operation instruction can be an instruction issued after the user touches or clicks a specific region, indicating that a marker graphic is to be selected in the real-scene image.
In a preferred embodiment, the specific steps of selecting a marker graphic in the real-scene image according to the second operation instruction may include:
acquiring the touch point of the second operation instruction; intercepting, in the real-scene image, the image within a preset range centered on the touch point; and setting the image as the marker graphic.
In this embodiment, the touch point is first obtained according to the second operation instruction of the user; then, centered on the touch point, the image of a specific region of the real-scene image is intercepted with a preset pixel range (for example: a 100x100 square, a circle with a radius of 150, etc.), wherein the specific range and shape of the interception can be arbitrary values set in advance, and the content of the intercepted region image can be arbitrary, for example: an image containing half a cup, an image containing one corner of a keyboard, or an image containing two mobile phones. Finally, the intercepted region image is set as the marker graphic, which serves as the basis for rendering and generating the virtual object and for making the virtual object follow movement.
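The interception step can be sketched as a clamped crop around the touch point. This is a minimal sketch assuming a row-major image and the 100x100-square example from the text; the patent fixes neither the data layout nor the patch size:

```python
def crop_marker(image, touch, half=50):
    """Cut a (2*half) x (2*half) patch centered on the touch point,
    clamped so the patch stays inside the image bounds.

    image: row-major list of rows (image[y][x]); touch: (x, y).
    """
    h, w = len(image), len(image[0])
    x, y = touch
    left = max(0, min(x - half, w - 2 * half))   # clamp horizontally
    top = max(0, min(y - half, h - 2 * half))    # clamp vertically
    return [row[left:left + 2 * half] for row in image[top:top + 2 * half]]
```

Clamping (rather than padding) is one design choice among several; the patent only requires a region of preset range around the touch point.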
In a preferred embodiment, the specific steps of selecting a marker graphic in the real-scene image according to the second operation instruction may also include:
acquiring the touch point of the second operation instruction; recognizing, in the real-scene image, a figure within a preset range centered on the touch point; and setting the figure as the marker graphic.
In this embodiment, the touch point is first obtained according to the second operation instruction of the user; then, centered on the touch point, a specific figure containing the touch point is obtained from the real scene by intelligent recognition using image processing algorithms such as outline recognition. The specific figure can be any content, for example: a "teacup", a "cap", a "palm", etc. Finally, the acquired specific figure is set as the marker graphic, which serves as the basis for rendering and generating the virtual object and for making the virtual object follow movement.
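The patent names "outline recognition" only generically. As one deliberately simple stand-in, a figure can be grown from the touch point over pixels of similar intensity (a flood fill), which captures the "figure containing the touch point" idea without full contour extraction:

```python
from collections import deque

def grow_region(image, touch, tol=10):
    """Flood-fill from the touch point over 4-connected pixels whose value
    is within `tol` of the seed pixel; returns the set of (x, y) pixels.

    A simplified stand-in for the outline-recognition step; a production
    system would use a real segmentation or contour algorithm.
    """
    h, w = len(image), len(image[0])
    sx, sy = touch
    seed = image[sy][sx]
    region, queue = {(sx, sy)}, deque([(sx, sy)])
    while queue:
        x, y = queue.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nx < w and 0 <= ny < h and (nx, ny) not in region
                    and abs(image[ny][nx] - seed) <= tol):
                region.add((nx, ny))
                queue.append((nx, ny))
    return region
```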
In a preferred embodiment, to make it easy for the user to know or obtain the marker graphic, the method further includes, before step S15: rendering and generating a first prompt message, the prompt message being used to prompt the user to carry out the second operation. The first prompt message can be an identification example control set in the first interactive interface, the control being used to show the user a test image that can serve as a marker graphic; it can also be a graphic, text, or audio prompt for prompting the user to make a selection in the real-scene image.
Optionally, after step S15, the method can also include:
Step S16: rendering and generating a second prompt message, the prompt message being used to prompt the user to carry out the third operation.
After the marker graphic is determined in the real-scene image, the second prompt message is rendered and generated so that the user can perceive that the operation has succeeded, which helps prompt the user to carry out the next operation.
In a preferred implementation, step S16 is specifically: rendering and generating a first three-dimensional model at the position of the marker graphic, or at a position determined by a preset rule according to the information of the marker graphic. The position of the marker graphic here is its position in the real-scene image. The first three-dimensional model can be any preset three-dimensional body, for example: a magic circle, an arched door, a summoning platform, or a will-o'-the-wisp. Additionally or alternatively, a third-operation-instruction control is generated in the interactive interface, the control being used to receive the third operation instruction of the user.
In a preferred embodiment, according to the size information and position information of the screen area occupied by the marker graphic, a space conversion matrix of the marker graphic in the virtual space is calculated; the matrix of the first three-dimensional model is then directly set to this space conversion matrix and rendering is performed, whereby the positioned display of the three-dimensional model is realized.
Optionally, besides generating a three-dimensional model, the second prompt message can also be any other prompt, for example: emitting a prompt sound, popping up a prompt box, generating a vibration prompt signal, etc.
Optionally, in step S17, acquiring the three-dimensional model data of the corresponding virtual object according to the third operation instruction can be done in any one of the following ways:
acquiring the three-dimensional model data from a preset three-dimensional model library according to the second operation instruction; or
sending a request corresponding to the second operation instruction to a preset server side, and receiving the three-dimensional model data fed back by the server side; or
sending a request corresponding to the second operation instruction to a preset server side, receiving response information fed back by the server side, and acquiring the three-dimensional model data from a preset three-dimensional model library according to the response information, the response information at least including: an identifier of the virtual object, the identifier being used to acquire the three-dimensional model data corresponding to the virtual object.
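The three routes above can be sketched as one loader covering the local library, a direct server fetch, and the identifier-indirection path. All names here are hypothetical; the patent names no concrete API:

```python
def load_model_data(instruction, local_library, server=None):
    """Resolve 3D model data by the three routes listed above.

    local_library: dict mapping object id -> model data.
    server: optional callable taking the object id; it may return either
    the model data itself (route 2) or a response dict {"id": ...} whose
    identifier is then looked up in the local library (route 3).
    """
    obj_id = instruction["object"]
    if server is None:
        return local_library[obj_id]              # route 1: local model library
    response = server(obj_id)
    if isinstance(response, dict) and "id" in response:
        return local_library[response["id"]]      # route 3: id indirection
    return response                               # route 2: server sends data
```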
Here, the third operation instruction includes, but is not limited to, a user behavior such as the user touching, clicking, or sliding over a specific region or button, or the user uttering a specific sound; it can also be an operation instruction generated by the system according to a predetermined rule when a condition specified by the rule is met, for example: after the user completes a specified mission objective in the game, the game system automatically generates the operation instruction, so as to indicate that a request for acquiring a virtual object has been issued.
In a preferred embodiment, the following is also possible: in a specific region of the first user interface, displaying the names or icons of all or some of the acquirable virtual objects, and receiving the third operation instruction for selecting a specific virtual object and acquiring the corresponding three-dimensional model data according to the selected virtual object.
Optionally, in step S17, generating the virtual object includes:
Step S175: determining the rendering position of the virtual object according to the position of the marker graphic in the first display area;
Step S177: rendering and generating the virtual object at the rendering position.
Preferably, the first user interface further includes a virtual object display layer, and the virtual object is correspondingly rendered on the virtual object display layer. The virtual object display layer is preferably superimposed on the first display area, and the UI control layer is superimposed on the virtual object display layer.
More preferably, parameters such as the position, direction, and size of the virtual object are determined by the display state of the marker graphic in the first display area. In a preferred implementation, the position information of the marker graphic in the first display area is extracted to determine the position of the virtual object; the direction of the marker graphic in the first display area is extracted to determine the direction of the virtual object; and the depth information of the marker graphic in the first display area is extracted to determine the size of the virtual object. After the parameter information of the virtual object is determined, the corresponding virtual object is rendered and generated according to the parameter information. Preferably, the above function can be realized by mounting hanging points, i.e.: a first hanging point is set on the marker graphic, a second hanging point is set on the virtual object, and the linkage between the virtual object and the marker graphic is realized by mounting the second hanging point onto the first hanging point, wherein the second hanging point is preferably a mount node of the 3D model of the virtual object.
In a preferred embodiment, parameters such as the position, direction, and size of the virtual object are obtained in the following way: according to the size information of the screen area occupied by the marker graphic and its direction and position in the terminal display space, a world transformation matrix of the marker graphic can be calculated; the matrix of the virtual object is then directly set to this world transformation matrix and rendering is performed, whereby the positioned display of the virtual object is realized.
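One possible shape of that world transformation matrix is a translate-rotate-scale (TRS) matrix built from the marker's on-screen position, in-plane rotation, and apparent size (size relative to a reference standing in for the depth-derived scale). The layout and the reference size are illustrative assumptions; the patent does not define the matrix convention:

```python
import math

def marker_world_matrix(pos, angle_deg, size_px, ref_px=100.0):
    """Build a 4x4 row-major TRS world matrix for the virtual object.

    pos: (x, y, z) marker position in the display space.
    angle_deg: in-plane rotation of the marker.
    size_px / ref_px: apparent marker size relative to a reference,
    used as a uniform scale (a proxy for depth).
    """
    s = size_px / ref_px
    c = math.cos(math.radians(angle_deg))
    n = math.sin(math.radians(angle_deg))
    x, y, z = pos
    return [
        [s * c, -s * n, 0.0, x],   # rotation+scale block, translation column
        [s * n,  s * c, 0.0, y],
        [0.0,    0.0,   s,   z],
        [0.0,    0.0,   0.0, 1.0],
    ]
```

Assigning this matrix directly to the virtual object, as the text describes, positions, orients, and scales it to the marker in one step.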
In a preferred implementation, in step S19, when the mark figure moves relative to the first display area, the relative movement may be caused by movement of the body corresponding to the mark figure in the real scene, or by movement of the photographic device. The movement includes, but is not limited to, displacement, change of orientation and change of depth. Controlling the virtual object to follow the movement of the mark figure can be realized in at least the following two ways. First, the movement of the mark figure relative to the first display area is detected in real time, and a virtual object adjusted for that movement is rendered in real time; this method is computationally heavy and may introduce latency. Second, a motion-state prediction technique may be used: a predetermined sampling interval is set, the motion state of the mark figure is obtained at each sampling instant, the motion state at the next instant is predicted from the sampled states, the adjusted virtual object is rendered, and the adjusted virtual object is smoothed, thereby greatly reducing the amount of computation while ensuring continuity of movement. In the motion-state prediction technique, the orientations and positions of the preceding several frames can be interpolated to keep the movement smooth, while high-pass filtering is used to suppress small jitter during movement.
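The sampling-based prediction and smoothing described above can be sketched with a constant-velocity extrapolation followed by interpolation toward the predicted position. Both the prediction model and the interpolation factor are illustrative assumptions; the patent only names prediction, interpolation and filtering without fixing their form.

```python
# Illustrative constant-velocity prediction plus interpolation, one way
# to realize the sampling-based marker tracking described above.

def predict_next(prev_pos, curr_pos):
    """Predict the marker position at the next sampling instant by
    extrapolating the motion between the last two samples."""
    return tuple(c + (c - p) for p, c in zip(prev_pos, curr_pos))

def smooth(displayed_pos, target_pos, alpha=0.5):
    """Interpolate toward the predicted position so the displayed
    movement stays continuous and small jitter is damped."""
    return tuple(d + alpha * (t - d) for d, t in zip(displayed_pos, target_pos))

samples = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]   # marker moving right
predicted = predict_next(samples[-2], samples[-1])
print(predicted)                                  # (3.0, 0.0)
displayed = smooth(samples[-1], predicted)
print(displayed)                                  # (2.5, 0.0)
```

Orientation can be interpolated the same way per sampling instant, which is far cheaper than re-detecting the marker every rendered frame.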
Optionally, the following step is executed after step S19:
Step S111: after it is detected that the mark figure has disappeared, rendering of the virtual object is terminated, and the reality scene image is monitored in real time for a figure matching the mark figure; if such a figure is present, steps S175, S177 and S19 are repeated.
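The monitor-and-resume behaviour of step S111 can be sketched as a loop over camera frames. `detect_marker` is a hypothetical detector standing in for the patent's matching step; the frame encoding is purely illustrative.

```python
# Sketch of step S111: stop rendering when the mark figure disappears,
# then keep monitoring frames and resume when a matching figure returns.

def run_tracking(frames, detect_marker):
    """Yield ('render', pos) while the marker is visible and
    ('stopped', None) once it disappears; rendering resumes
    automatically when the marker is detected again."""
    rendering = False
    for frame in frames:
        pos = detect_marker(frame)
        if pos is not None:
            rendering = True
            yield ("render", pos)      # the rendering steps repeat here
        elif rendering:
            rendering = False
            yield ("stopped", None)    # terminate rendering (S111)

# Toy frames: "m@N" means the marker is visible at position N.
frames = ["m@1", "m@2", "empty", "m@5"]
detect = lambda f: int(f[2:]) if f.startswith("m@") else None
print(list(run_tracking(frames, detect)))
```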
Preferably, the method further includes step S21: a fourth operational instruction of the user is monitored, and after the fourth operational instruction of the user is received, the currently displayed image is exported in a preset picture format; this image is preferably the image obtained by superimposing the virtual object on the reality scene image.
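The superposition exported in step S21 can be sketched as standard "over" alpha compositing of the rendered virtual-object layer onto the camera frame. Modeling pixels as RGB(A) tuples and using the "over" blend are assumptions; the patent only names the superposition and the preset picture format.

```python
# Sketch of step S21: composite the rendered virtual-object layer over
# the reality scene image before exporting the result.

def over(bg_rgb, fg_rgba):
    """Alpha-composite one foreground pixel over one background pixel."""
    a = fg_rgba[3] / 255.0
    return tuple(round(f * a + b * (1.0 - a)) for f, b in zip(fg_rgba[:3], bg_rgb))

def composite(scene, overlay):
    """Blend the virtual-object layer onto the camera frame pixel by pixel."""
    return [[over(b, f) for b, f in zip(row_b, row_f)]
            for row_b, row_f in zip(scene, overlay)]

scene = [[(0, 0, 0), (100, 100, 100)]]          # 1x2 stand-in camera frame
overlay = [[(255, 0, 0, 255), (255, 0, 0, 0)]]  # opaque and fully transparent
print(composite(scene, overlay))                # [[(255, 0, 0), (100, 100, 100)]]
```

The composited pixel array would then be encoded in the preset format (PNG, JPEG, etc.) by an image library.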
Through the above steps, the present invention associates the virtual object with a specific body in the real scene, and controls the position, display angle and the like of the virtual object in the captured reality scene image through the position of that specific body, thereby realizing freer interaction with virtual objects.
Figs. 2-5 illustrate the flow of the display method of virtual objects according to a preferred embodiment of the present invention.
When the user's real-world summoning instruction (the first operational instruction) is obtained, as shown in Fig. 2, a first interactive interface is rendered and generated. The interactive interface includes a first display area, which in the present embodiment occupies the entire interactive interface; a reality scene image is obtained through the camera of the terminal and displayed in the first display area.
The user is prompted to select a mark figure in the reality scene image (the second operational instruction). When the user taps the card pattern in the reality scene image, the system automatically extracts the selected pattern as the mark figure, and a three-dimensional magic circle (the first three-dimensional model) is rendered and generated at the position of the mark figure to notify the user of the successful match; as shown in Fig. 3, the magic circle is located on the mark figure.
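Extracting the selected pattern can be sketched as cropping a preset-size region centred on the touch point out of the reality scene image, as claim 5 later describes. The frame representation and sizes are illustrative assumptions.

```python
# Sketch of choosing the mark figure from the touch point of the second
# operational instruction: crop a preset range centred on the tap,
# clamped to the frame bounds.

def crop_mark_figure(frame, touch_xy, half_size):
    """Return the pixels within the preset range area around the
    touch point; this cropped patch becomes the mark figure."""
    h, w = len(frame), len(frame[0])
    x, y = touch_xy
    x0, x1 = max(0, x - half_size), min(w, x + half_size + 1)
    y0, y1 = max(0, y - half_size), min(h, y + half_size + 1)
    return [row[x0:x1] for row in frame[y0:y1]]

# Toy 5x5 frame where each "pixel" encodes its own coordinates.
frame = [[y * 10 + x for x in range(5)] for y in range(5)]
mark = crop_mark_figure(frame, touch_xy=(2, 2), half_size=1)
print(mark)   # [[11, 12, 13], [21, 22, 23], [31, 32, 33]]
```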
Upon detecting the user's screen-swipe operation (the third operational instruction), which requests acquisition of a shikigami (the virtual object), as shown in Fig. 4, the request is sent and the corresponding response message is received. The response message includes at least the identifier of the corresponding shikigami, which is used to obtain the three-dimensional model data of the shikigami.
The rendering position of the shikigami is determined from the position of the mark figure in the first display area, and the shikigami is generated at that rendering position; as shown in Fig. 5, the shikigami stands on the mark figure and the magic circle. By moving the card pattern in the real scene or moving the camera, the shikigami and the magic circle can be made to follow the movement.
According to an embodiment of the present invention, an embodiment of a display device of virtual objects is also provided. Fig. 6 is a structural diagram of the display device of virtual objects according to an embodiment of the present invention. As shown in Fig. 6, the device may include: an interface rendering unit 10, configured to render and generate a first interactive interface when the first operational instruction of the user is obtained, the first interactive interface including a first display area; a display unit 20, configured to obtain a reality scene image and display it in the first display area; a generation unit 30, configured to monitor a second operational instruction of the user and choose a mark figure in the reality scene image according to the second operational instruction; an acquiring unit 40, configured to obtain, when the third operational instruction is received, the three-dimensional model data of the corresponding virtual object according to the third operational instruction, and to render and generate the virtual object according to the three-dimensional model data; and a following movement unit 50, configured to control the virtual object to follow movement of the mark figure.
The display device of virtual objects provided in this embodiment can execute the display method of virtual objects provided by the method embodiments of the present invention, and possesses the corresponding functional modules and beneficial effects for executing that method.
According to an embodiment of the present invention, a storage medium is also provided. The storage medium includes a stored program, wherein, when the program runs, the device on which the storage medium resides is controlled to execute the above display method of virtual objects. The storage medium may include, but is not limited to, various media capable of storing program code, such as a USB flash disk, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk or an optical disc.
According to an embodiment of the present invention, a processor is also provided. The processor is configured to run a program, wherein the above display method of virtual objects is executed when the program runs. The processor may include, but is not limited to, a processing unit such as a microcontroller (MCU) or a programmable logic device (FPGA).
According to an embodiment of the present invention, a terminal is also provided, including: one or more processors, a memory, a display device, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, and the programs include instructions for performing the above display method of virtual objects. In some embodiments, the terminal may be a terminal device such as a smart phone (e.g., an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID) or a PAD. The display device may be a touch-screen liquid crystal display (LCD), which enables the user to interact with the user interface of the terminal. In addition, the terminal may further include an input/output interface (I/O interface), a universal serial bus (USB) port, a network interface, a power supply and/or a camera.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
In the above embodiments of the present invention, the description of each embodiment has its own emphasis; for parts not detailed in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technical content may be realized in other ways. The device embodiments described above are merely illustrative; for example, the division of units may be a division of logical functions, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, units or modules, and may be electrical or in other forms.
The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be realized in the form of hardware or in the form of a software functional unit.
If the integrated unit is realized in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the existing technology, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to execute all or part of the steps of the methods of the embodiments of the present invention.
The above are only preferred embodiments of the present invention. It should be noted that those of ordinary skill in the art may make various improvements and modifications without departing from the principle of the present invention, and these improvements and modifications shall also be regarded as falling within the protection scope of the present invention.
Claims (16)
1. A display method of virtual objects, characterized by comprising:
when a first operational instruction is obtained, rendering and generating a first interactive interface, the first interactive interface comprising a first display area;
obtaining a reality scene image and displaying it in the first display area;
monitoring a second operational instruction of a user, and choosing a mark figure in the reality scene image according to the second operational instruction;
when a third operational instruction is received, obtaining three-dimensional model data of a corresponding virtual object according to the third operational instruction, and rendering and generating the virtual object according to the three-dimensional model data; and
controlling the virtual object to follow movement of the mark figure.
2. The method as claimed in claim 1, characterized in that obtaining the reality scene image and displaying it in the first display area comprises: capturing the real scene in real time by a photographic device, and displaying the reality scene image in the first display area in real time.
3. The method as claimed in claim 1, characterized in that the mark figure is a non-centrosymmetric image.
4. The method as claimed in claim 1, characterized in that, after obtaining the reality scene image and displaying it in the first display area, the method further comprises: rendering and generating first prompt information, the prompt information being used to prompt the user to perform the second operation.
5. The method as claimed in claim 1, characterized in that choosing the mark figure in the reality scene image according to the second operational instruction comprises:
obtaining a touch point of the second operational instruction, and intercepting, in the reality scene image, the image within a preset range area centred on the touch point; and
setting said image as the mark figure.
6. The method as claimed in claim 1, characterized in that choosing the mark figure in the reality scene image according to the second operational instruction comprises:
obtaining a touch point of the second operational instruction, and identifying, in the reality scene image, the figure within a preset range area centred on the touch point; and
setting said figure as the mark figure.
7. The method as claimed in claim 1, characterized in that, after choosing the mark figure in the reality scene image according to the second operational instruction, the method further comprises: rendering and generating second prompt information, the prompt information being used to prompt the user to perform the third operation.
8. The method as claimed in claim 7, characterized in that rendering and generating the second prompt information comprises: rendering and generating a first three-dimensional model at the position of the mark figure, and/or generating a third-operational-instruction control on the interactive interface, the third-operational-instruction control being used to receive the third operational instruction of the user.
9. The method as claimed in claim 1, characterized in that obtaining the three-dimensional model data of the corresponding virtual object according to the third operational instruction comprises:
obtaining the three-dimensional model data from a preset three-dimensional model library according to the third operational instruction; or
sending a request corresponding to the third operational instruction to a preset server side, and receiving the three-dimensional model data fed back by the server side; or
sending a request corresponding to the third operational instruction to a preset server side, receiving a response message fed back by the server side, and obtaining the three-dimensional model data from a preset three-dimensional model library according to the response message, the response message including at least an identifier of the virtual object, the identifier being used to obtain the three-dimensional model data corresponding to the virtual object.
10. The method as claimed in claim 1, characterized in that rendering and generating the virtual object according to the three-dimensional model data comprises:
determining a rendering position of the virtual object according to the position of the mark figure in the first display area; and
rendering and generating the virtual object at the rendering position.
11. The method as claimed in claim 1, characterized in that, after controlling the virtual object to follow movement of the mark figure, the method further comprises:
terminating rendering of the virtual object after it is detected that the mark figure has disappeared.
12. The method as claimed in any one of claims 1-11, characterized by further comprising the following steps:
monitoring a fourth operational instruction of the user, and after the fourth operational instruction of the user is received, exporting the currently displayed image in a preset picture format.
13. A display device of virtual objects, characterized by comprising:
an interface rendering unit, configured to render and generate a first interactive interface when the first operational instruction of the user is obtained, the first interactive interface comprising a first display area;
a display unit, configured to obtain a reality scene image and display it in the first display area;
a generation unit, configured to monitor a second operational instruction of the user and choose a mark figure in the reality scene image according to the second operational instruction;
an acquiring unit, configured to obtain, when the third operational instruction is received, three-dimensional model data of a corresponding virtual object according to the third operational instruction, and to render and generate the virtual object according to the three-dimensional model data; and
a following movement unit, configured to control the virtual object to follow movement of the mark figure.
14. A storage medium, characterized in that the storage medium includes a stored program, wherein, when the program runs, the device on which the storage medium resides is controlled to execute the display method of virtual objects as claimed in any one of claims 1 to 12.
15. A processor, characterized in that the processor is configured to run a program, wherein the display method of virtual objects as claimed in any one of claims 1 to 12 is executed when the program runs.
16. A terminal, comprising: one or more processors, a memory, a display device and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, and the programs include instructions for performing the display method of virtual objects as claimed in any one of claims 1 to 12.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710061376 | 2017-01-25 | ||
CN2017100613767 | 2017-01-25 |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108273265A true CN108273265A (en) | 2018-07-13 |
Family
ID=62801204
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710313563.XA Pending CN108305325A (en) | 2017-01-25 | 2017-05-05 | The display methods and device of virtual objects |
CN201710314092.4A Pending CN108273265A (en) | 2017-01-25 | 2017-05-05 | The display methods and device of virtual objects |
CN201710314103.9A Pending CN108288306A (en) | 2017-01-25 | 2017-05-05 | The display methods and device of virtual objects |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710313563.XA Pending CN108305325A (en) | 2017-01-25 | 2017-05-05 | The display methods and device of virtual objects |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710314103.9A Pending CN108288306A (en) | 2017-01-25 | 2017-05-05 | The display methods and device of virtual objects |
Country Status (1)
Country | Link |
---|---|
CN (3) | CN108305325A (en) |
Cited By (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109078327A (en) * | 2018-08-28 | 2018-12-25 | 百度在线网络技术(北京)有限公司 | Game implementation method and equipment based on AR |
CN109107156A (en) * | 2018-08-10 | 2019-01-01 | 腾讯科技(深圳)有限公司 | Game object acquisition methods, device, electronic equipment and readable storage medium storing program for executing |
CN109472873A (en) * | 2018-11-02 | 2019-03-15 | 北京微播视界科技有限公司 | Generation method, device, the hardware device of threedimensional model |
CN109685910A (en) * | 2018-11-16 | 2019-04-26 | 成都生活家网络科技有限公司 | Room setting setting method, device and VR wearable device based on VR |
CN110069125A (en) * | 2018-09-21 | 2019-07-30 | 北京微播视界科技有限公司 | The control method and device of virtual objects |
CN110109726A (en) * | 2019-04-30 | 2019-08-09 | 网易(杭州)网络有限公司 | Receiving handling method and transmission method, the device and storage medium of virtual objects |
CN111103967A (en) * | 2018-10-25 | 2020-05-05 | 北京微播视界科技有限公司 | Control method and device of virtual object |
CN111752161A (en) * | 2020-06-18 | 2020-10-09 | 格力电器(重庆)有限公司 | Electric appliance control method, system and storage medium |
CN111913624A (en) * | 2020-08-18 | 2020-11-10 | 腾讯科技(深圳)有限公司 | Interaction method and device for objects in virtual scene |
CN112051961A (en) * | 2020-09-04 | 2020-12-08 | 脸萌有限公司 | Virtual interaction method and device, electronic equipment and computer readable storage medium |
CN112221124A (en) * | 2020-10-21 | 2021-01-15 | 腾讯科技(深圳)有限公司 | Virtual object generation method and device, electronic equipment and storage medium |
CN112516593A (en) * | 2019-09-19 | 2021-03-19 | 上海哔哩哔哩科技有限公司 | Card drawing method, card drawing system and computer equipment |
CN113058267A (en) * | 2021-04-06 | 2021-07-02 | 网易(杭州)网络有限公司 | Control method and device of virtual object and electronic equipment |
CN113101647A (en) * | 2021-04-14 | 2021-07-13 | 北京字跳网络技术有限公司 | Information display method, device, equipment and storage medium |
CN113289334A (en) * | 2021-05-14 | 2021-08-24 | 网易(杭州)网络有限公司 | Game scene display method and device |
CN113691796A (en) * | 2021-08-16 | 2021-11-23 | 福建凯米网络科技有限公司 | Three-dimensional scene interaction method through two-dimensional simulation and computer-readable storage medium |
CN114047998A (en) * | 2021-11-30 | 2022-02-15 | 珠海金山数字网络科技有限公司 | Object updating method and device |
CN115185374A (en) * | 2022-07-14 | 2022-10-14 | 北京奇岱松科技有限公司 | Data processing system based on virtual reality |
CN115350475A (en) * | 2022-06-30 | 2022-11-18 | 元素创造(深圳)网络科技有限公司 | Virtual object control method and device |
WO2022241701A1 (en) * | 2021-05-20 | 2022-11-24 | 华为技术有限公司 | Image processing method and device |
WO2023160072A1 (en) * | 2022-02-23 | 2023-08-31 | 华为技术有限公司 | Human-computer interaction method and apparatus in augmented reality (ar) scene, and electronic device |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109939433B (en) * | 2019-03-11 | 2022-09-30 | 网易(杭州)网络有限公司 | Operation control method and device of virtual card, storage medium and electronic equipment |
CN110058685B (en) * | 2019-03-20 | 2021-07-09 | 北京字节跳动网络技术有限公司 | Virtual object display method and device, electronic equipment and computer-readable storage medium |
CN110404250B (en) * | 2019-08-26 | 2023-08-22 | 网易(杭州)网络有限公司 | Card drawing method and device in game |
CN110533780B (en) | 2019-08-28 | 2023-02-24 | 深圳市商汤科技有限公司 | Image processing method and device, equipment and storage medium thereof |
CN110639202B (en) * | 2019-10-29 | 2021-11-12 | 网易(杭州)网络有限公司 | Display control method and device in card game |
CN110975285B (en) * | 2019-12-06 | 2024-03-22 | 北京像素软件科技股份有限公司 | Smooth cutter light acquisition method and device |
CN111821691A (en) * | 2020-07-24 | 2020-10-27 | 腾讯科技(深圳)有限公司 | Interface display method, device, terminal and storage medium |
CN112710254A (en) * | 2020-12-21 | 2021-04-27 | 珠海格力智能装备有限公司 | Object measuring method, system, device, storage medium and processor |
CN113590013B (en) * | 2021-07-13 | 2023-08-25 | 网易(杭州)网络有限公司 | Virtual resource processing method, nonvolatile storage medium and electronic device |
CN114307138B (en) * | 2021-12-28 | 2023-09-26 | 北京字跳网络技术有限公司 | Interaction method and device based on card, computer equipment and storage medium |
CN114758042B (en) * | 2022-06-14 | 2022-09-02 | 深圳智华科技发展有限公司 | Novel virtual simulation engine, virtual simulation method and device |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150356812A1 (en) * | 2010-12-15 | 2015-12-10 | Bally Gaming, Inc. | System and method for augmented reality using a player card |
CN105844714A (en) * | 2016-04-12 | 2016-08-10 | 广州凡拓数字创意科技股份有限公司 | Augmented reality based scenario display method and system |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20100138193A (en) * | 2009-06-24 | 2010-12-31 | 넥스트키 주식회사 | The augmented reality content providing system and equipment for the user interaction based on touchscreen |
CN102902710B (en) * | 2012-08-08 | 2015-08-26 | 成都理想境界科技有限公司 | Based on the augmented reality method of bar code, system and mobile terminal |
CN106157359B (en) * | 2015-04-23 | 2020-03-10 | 中国科学院宁波材料技术与工程研究所 | Design method of virtual scene experience system |
CN105929945A (en) * | 2016-04-18 | 2016-09-07 | 展视网(北京)科技有限公司 | Augmented reality interaction method and device, mobile terminal and mini-computer |
-
2017
- 2017-05-05 CN CN201710313563.XA patent/CN108305325A/en active Pending
- 2017-05-05 CN CN201710314092.4A patent/CN108273265A/en active Pending
- 2017-05-05 CN CN201710314103.9A patent/CN108288306A/en active Pending
Non-Patent Citations (1)
Title |
---|
Anonymous: "New AR gameplay: real-world summoning experience now open!", 《HTTP://YYS.163.COM/M/ZLP/20170118/24874_668345.HTML》 *
Also Published As
Publication number | Publication date |
---|---|
CN108305325A (en) | 2018-07-20 |
CN108288306A (en) | 2018-07-17 |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20180713 |