CN108305325A - Display method and device for virtual objects - Google Patents
Display method and device for virtual objects
- Publication number
- CN108305325A (application number CN201710313563.XA / CN201710313563A)
- Authority
- CN
- China
- Prior art keywords
- virtual objects
- operational order
- preset
- reality scene
- scene image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/40—Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
- A63F13/42—Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/50—Controlling the output signals based on the game progress
- A63F13/52—Controlling the output signals based on the game progress involving aspects of the displayed game scene
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04883—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/167—Audio in a user interface, e.g. using voice commands for navigating, audio feedback
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/30—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/60—Methods for processing data by generating or executing the game program
- A63F2300/66—Methods for processing data by generating or executing the game program for rendering three dimensional images
Abstract
The invention discloses a display method and device for virtual objects. The method includes: when a first operational instruction from a user is obtained, rendering and generating a first interactive interface, the first interactive interface including a first display area; obtaining a real-scene image and displaying it in the first display area; judging whether a first figure matching a preset test pattern exists in the real-scene image; if so, monitoring for a second operational instruction from the user; and, when the second operational instruction is received, obtaining three-dimensional model data of the corresponding virtual object according to the second operational instruction and rendering the virtual object from the three-dimensional model data. The invention solves the technical problem in the related art that the acquisition of virtual objects lacks a novel mode of interaction, resulting in a poor user experience.
Description
Technical field
The present invention relates to the field of games, and in particular to a display method and device for virtual objects.
Background technology
In existing game software, a user usually needs to draw virtual objects such as cards, characters or props in order to play; in card games in particular, the drawing and display of cards is one of the most important parts of the game content. In existing games, however, such virtual objects are usually acquired by the user clicking a draw button, after which the system generates a specific virtual object according to a preset probability and shows it to the user; and the display method is usually that the user clicks the icon of a virtual object, after which the system generates a specific 2D image of the object and shows it to the user. This flow has the following defects: there is little interaction between the user and the virtual objects obtained in the game, and, in addition, the display of virtual objects in the game is very monotonous, so the user experience is poor.
Summary of the invention
An embodiment of the present invention provides a display method and device for virtual objects, so as to at least solve the technical problem in the related art that the acquisition of virtual objects lacks a novel mode of interaction, resulting in a poor user experience.
According to one aspect of an embodiment of the present invention, a display method for virtual objects is provided, including: when a first operational instruction from a user is obtained, rendering and generating a first interactive interface, the first interactive interface including a first display area; obtaining a real-scene image and displaying it in the first display area; judging whether a first figure matching a preset test pattern exists in the real-scene image and, if so, monitoring for a second operational instruction from the user; and, when the second operational instruction is received, obtaining three-dimensional model data of the corresponding virtual object according to the second operational instruction and rendering the virtual object from the three-dimensional model data.
According to another aspect of an embodiment of the present invention, a display device for virtual objects is further provided, including: an interface rendering unit, configured to render and generate a first interactive interface when a first operational instruction from a user is obtained, the first interactive interface including a first display area; a display unit, configured to obtain a real-scene image and display it in the first display area; a matching unit, configured to judge whether a first figure matching a preset test pattern exists in the real-scene image and, if so, to monitor for a second operational instruction from the user; and an acquiring unit, configured to obtain, when the second operational instruction is received, three-dimensional model data of the corresponding virtual object according to the second operational instruction and to render the virtual object from the three-dimensional model data.
According to one aspect of an embodiment of the present invention, a storage medium is provided, including a stored program, wherein, when the program runs, the device on which the storage medium resides is controlled to execute the above display method for virtual objects.
According to one aspect of an embodiment of the present invention, a processor is provided for running a program, wherein the program, when run, executes the above display method for virtual objects.
According to one aspect of an embodiment of the present invention, a terminal is provided, including: one or more processors, a memory, a display device and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the programs including instructions for executing the above display method for virtual objects.
In an embodiment of the present invention, the game's virtual objects are displayed within the real scene, achieving the purpose of combining the display of virtual objects with the real scene. By realizing a novel mode of interaction between the virtual game scene and the real-world scene, the user experience is effectively improved, which in turn solves the technical problem in the related art that the acquisition of virtual objects lacks a novel mode of interaction, resulting in a poor user experience.
Description of the drawings
The drawings described herein are provided for a further understanding of the present invention and constitute a part of this application. The illustrative embodiments of the present invention and their description are used to explain the present invention and do not constitute an improper limitation of the present invention. In the drawings:
Fig. 1 is a flow chart of a display method for virtual objects according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of a display method for virtual objects according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of a display method for virtual objects according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of a display method for virtual objects according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of a display method for virtual objects according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of a display method for virtual objects according to an embodiment of the present invention;
Fig. 7 is a structure diagram of a display device for virtual objects according to an embodiment of the present invention.
Detailed description of the embodiments
In order to enable those skilled in the art to better understand the solution of the present invention, the technical solutions in the embodiments of the present invention will be described clearly and completely below in conjunction with the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work shall fall within the scope of protection of the present invention.
It should be noted that the terms "first", "second" and the like in the description, the claims and the above drawings are used to distinguish similar objects and are not necessarily used to describe a specific order or precedence. It should be understood that data so used may be interchanged where appropriate, so that the embodiments of the present invention described herein can be implemented in an order other than the one illustrated or described herein. In addition, the terms "comprising" and "having" and any variants thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product or device containing a series of steps or units is not necessarily limited to the steps or units clearly listed, but may include other steps or units that are not clearly listed or that are intrinsic to the process, method, product or device.
According to an embodiment of the present invention, a display method for virtual objects is provided. The method can be applied while a software application is being executed on a processor of a terminal; that is, the method can be embodied by a software application, in particular game software, including mobile games and other game software. It should be noted that, although the steps illustrated in the flow chart of the drawings can be executed in a computer system such as a set of computer-executable instructions, and although a logical order is shown in the flow chart, in some cases the steps shown or described may be performed in an order different from the one herein.
Fig. 1 is a flow chart of a display method for virtual objects according to an embodiment of the present invention. As shown in Fig. 1, the method includes the following steps:
Step S11: when a first operational instruction from a user is obtained, rendering and generating a first interactive interface, the first interactive interface including a first display area;
Step S13: obtaining a real-scene image and displaying it in the first display area;
Step S15: judging whether a first figure matching a preset test pattern exists in the real-scene image and, if so, monitoring for a second operational instruction from the user;
Step S17: when the second operational instruction is received, obtaining three-dimensional model data of the corresponding virtual object according to the second operational instruction, and rendering the virtual object from the three-dimensional model data.
Through the above steps, the present invention associates a virtual object with a specific physical object in the real scene, and controls the position, display angle and so on of the virtual object in the captured real-scene image through the position of that physical object, thereby realizing a freer interaction with virtual objects.
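Steps S11 to S17 can be sketched as a minimal control flow. Every helper below (instruction source, frame capture, pattern matcher, model loader, renderer) is a hypothetical stand-in for engine or AR-framework hooks that the patent does not name; the sketch only illustrates the ordering and the two early-exit conditions.

```python
# Minimal sketch of the S11-S17 flow; all callables are illustrative
# stand-ins, not real engine APIs.

def display_virtual_object(instructions, capture_frame, match_pattern,
                           load_model_data, render):
    """Run the S11-S17 flow once; return the rendered model or None."""
    it = iter(instructions)
    if next(it, None) != "first":              # S11: wait for the first instruction
        return None
    render("first_interactive_interface")       # S11: render UI with display area
    frame = capture_frame()                     # S13: capture the real scene
    render(("display_area", frame))             # S13: show it in the display area
    if not match_pattern(frame):                # S15: look for the first figure
        return None
    if next(it, None) != "second":              # S15: monitor the second instruction
        return None
    model = load_model_data()                   # S17: fetch 3D model data
    render(("virtual_object", model))           # S17: render the virtual object
    return model
```

Note that the flow exits silently if either the pattern is absent or the second instruction never arrives, matching the conditional wording of steps S15 and S17.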
To further disclose the technical solution of the present invention, more specific or preferred embodiments are provided below to illustrate each specific step and the technical principles of the present invention.
In a preferred implementation, the virtual object may be a card, character, pet, hero, prop or the like in a game.
Optionally, in step S11, the first operational instruction includes, but is not limited to, a user behavior such as the user touching, clicking or swiping a specific region or button, or the user uttering a specific sound; it may also be an operational instruction generated by the system according to a predetermined rule when the condition specified by the rule is met, for example: after the user completes a specified mission objective in the game, the game system automatically generates the operational instruction.
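The two trigger sources described above (a user gesture, or a rule whose condition the system evaluates against the game state) might be combined as in the sketch below; the event names, rule format and `pending_instruction` helper are all hypothetical, chosen only to illustrate the idea.

```python
# Hypothetical sketch: an operational instruction is raised either by a
# user gesture or automatically when a preset rule's condition is met
# (e.g. a mission objective has been completed).

def pending_instruction(user_event, game_state, rules):
    """Return an instruction dict, or None if nothing triggers one."""
    if user_event in ("touch", "click", "swipe", "voice"):
        return {"source": "user", "event": user_event}
    for rule in rules:                  # system-generated instructions
        if rule["condition"](game_state):
            return {"source": "system", "rule": rule["name"]}
    return None
```

A rule is just a named predicate over the game state, so new system triggers can be added without touching the dispatch logic.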
After the first operational instruction is received, the first interactive interface is rendered and generated. The first interactive interface includes the first display area and may also include a UI control layer. The first display area may be a partial display area of the first interactive interface, or may occupy the whole region of the first interactive interface. Preferably, a layered structure is used: the first display area is the whole region of the first interactive interface, the UI control layer is located on the first display area, and the blank space of the UI control layer is processed to be transparent or translucent so that the content of the first display area can be shown to the user.
If the first display area occupies only part of the first interactive interface, it can be arranged at any position of the interface; the present invention is not specially limited in this respect.
Optionally, in step S13, obtaining a real-scene image and displaying it in the first display area may include the following step:
Step S131: calling the camera device of the equipment, capturing the real scene in real time through the camera device, and displaying the real-scene image in real time in the first display area. For example, when the method of this embodiment is applied in a mobile-phone game system, step S131 may be: calling the camera of the mobile phone to capture the real scene in real time so as to obtain the real-scene image, and then displaying it dynamically in the first display area.
In a particular embodiment, the camera device may be the mobile phone's built-in camera or an external camera device; likewise, the camera may be front-facing or rear-facing, and the number of camera devices may be one, two or more.
In a preferred embodiment, before the real scene is captured in real time by the camera device, it is also necessary to obtain parameters and perform initialization adaptation on the camera device; otherwise the first model may be deformed. The parameters include: parameters of the camera device itself (white balance, focusing parameters, etc.) and parameters of the test pattern (width-to-height ratio, field-of-view size, etc.). The specific adaptation process is: obtaining the parameters of the preset test pattern and a parameter configuration file from the equipment carrying the camera device, and extracting the parameters of the camera device itself from the parameter configuration file to adapt the camera device. The parameter configuration file is automatically generated in advance by the system according to the device-model information, or acquired in advance from a third-party server.
Specifically, each type of iOS device is currently adapted by model information. An Android device can first attempt to connect to an ARToolKit server for adaptation; if the link fails, it can attempt to adapt to an iOS machine of a similar model (a camera of identical or close pixel count and identical or close resolution). If it really cannot be adapted, full-screen post-processing is then carried out: the image is appropriately stretched to a standard proportion and preset white balance and auto-focusing are attempted, ensuring that, at least to a certain degree, the first three-dimensional model is displayed without deformation.
Optionally, the purpose of step S15 is to determine whether a first figure matching the preset test pattern exists in the real-scene image and, if it exists, to trigger monitoring for the second operational instruction from the user.
The test pattern is preset in advance; there may be one or more, and it can be any figure (including, but not limited to, any shape, color, size or pattern). A non-centrosymmetric figure is preferably used, so that changes in the angle of the test pattern can be detected more easily. The preset pattern is preferably pre-stored in a specified storage region.
The first figure refers to: the figure formed in the captured real-scene image when a physical object in the real scene is shot by the camera device, where the figure can match the test pattern according to a predetermined matching rule.
The second operational instruction includes, but is not limited to, a user behavior such as the user touching, clicking or swiping a specific region or button, or the user uttering a specific sound; it may also be an operational instruction generated by the system according to a predetermined rule when the condition specified by the rule is met, for example: after the user completes a specified mission objective in the game, the game system automatically generates the operational instruction.
In a more specific embodiment, determining whether a first figure matching the preset test pattern exists in the displayed scene image may include the following specific steps:
Step S151, identification step: identifying the obtained real-scene image and determining the position of the figure to be matched. The identification of the real-scene image may cover the complete real-scene image, or only the image within a preset identification region of the first display area so as to reduce the amount of computation; preferably, only the image within the preset identification region of the first display area is identified, for example: only the image in the central area of the first display area. More preferably, the identification region and/or the boundary of the identification region can be rendered in the first interactive interface to form a region identification figure that helps the user observe and determine the identification region. The region identification figure may be a region frame (e.g. a polygonal wire frame) or a special color (e.g. a color block covering the identification region), and is preferably placed in the UI control layer included in the first interactive interface. Meanwhile, a prompt message can also be set in the UI control layer to prompt the user to move the first figure to the specified region, for example: prompting the user by text to move the mobile-phone camera so that the first figure is located in the central region of the screen.
To make it easier for the user to know or obtain the first figure, a preferred embodiment further includes, before step S15, a test-pattern display step and/or a test-pattern output step. The test-pattern display step is used to show the standard pattern to the user, and the test-pattern output step is used to output the standard figure for the user to save, print and so on. These two steps are preferably triggered only after an instruction from the user is received; more preferably, a test-pattern display control and/or a test-pattern output control can be set in the first interactive interface, and the corresponding test-pattern display step or test-pattern output step is triggered when the user's operation on the control is detected.
In a preferred implementation, after the real-scene image is obtained, the scale-invariant feature transform (SIFT) matching algorithm can be used to detect feature points, determine the position of the figure to be matched according to the feature points, and judge whether it is the required figure. The SIFT algorithm has rotational invariance and scaling invariance: even if the figure to be matched is tilted or rotated, the feature-point density does not change, so the figure can still be picked up correctly.
Step S153, matching step: preprocessing the figure to be matched. The preprocessing is intended to make it easy to extract valid feature information from the figure to be matched, so as to help judge whether the figure to be matched matches the preset test pattern. The preprocessing may include, but is not limited to: graying, binarization, feature-point extraction and so on. By first determining the position of the figure to be matched in the real-scene image and then carrying out image matching on that figure only, the amount of computation can be effectively reduced and the delay of image matching substantially shortened. The position of the figure to be matched can be determined by techniques such as edge-contour detection.
In a preferred implementation, the preprocessing uses downsampling, color binarization and a high-pass filter. Downsampling improves detection performance: the pattern to be detected only needs to be sufficiently large, without having to be identical to the original image. Color binarization is used because the selected feature points do not depend on color, and the image is clearer after binarization. High-pass filtering reduces fine noise and prevents small blemishes in the figure, such as cuts or handwriting, from interfering.
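The three preprocessing passes can be illustrated with toy, pure-Python versions operating on a grayscale image stored as a list of rows. A real implementation would use an image-processing library; the particular kernel choices here (stride sampling, a fixed binarization threshold, a horizontal difference as a crude high-pass) are assumptions for illustration only.

```python
# Toy versions of the three preprocessing passes described above,
# operating on a grayscale image given as a list of rows of 0-255 values.

def downsample(img, factor=2):
    """Keep every `factor`-th pixel to cut the matching workload."""
    return [row[::factor] for row in img[::factor]]

def binarize(img, threshold=128):
    """Map pixels to 0/1 so matching does not depend on color."""
    return [[1 if p >= threshold else 0 for p in row] for row in img]

def high_pass(img):
    """Keep only abrupt horizontal changes (a crude high-pass),
    suppressing slowly varying regions and fine uniform noise."""
    return [[0] + [abs(row[i] - row[i - 1]) for i in range(1, len(row))]
            for row in img]
```

Chaining `downsample`, then `binarize`, then the filter mirrors the order given in the text: the expensive passes run on progressively smaller, simpler data.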
Step S155, judgment step: judging whether the figure to be matched is a first figure matching the preset test pattern. In a preferred implementation, a similarity threshold can be set for the above matching, for example: when the similarity between the figure to be matched and the standard figure is greater than or equal to 80%, the two are regarded as matching. The threshold can be set arbitrarily according to actual needs.
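The threshold judgment might be sketched as follows. The patent does not fix a specific similarity measure, so the pixel-agreement metric below is an illustrative assumption; only the thresholding against a configurable value (0.80 as in the example above) reflects the text.

```python
# Sketch of the judgment step: a candidate counts as "the first figure"
# when its similarity to the preset test pattern reaches a threshold.
# The similarity metric (fraction of agreeing binary pixels) is an
# illustrative assumption, not the patent's prescribed measure.

def pixel_similarity(a, b):
    """Fraction of positions where two equal-sized binary images agree."""
    flat_a = [p for row in a for p in row]
    flat_b = [p for row in b for p in row]
    same = sum(1 for x, y in zip(flat_a, flat_b) if x == y)
    return same / len(flat_a)

def matches_test_pattern(candidate, pattern, threshold=0.80):
    """Judgment step: compare similarity against the preset threshold."""
    return pixel_similarity(candidate, pattern) >= threshold
```

Raising the threshold trades missed detections for fewer false matches, which is why the text leaves it freely configurable.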
In another more specific embodiment, determining whether a first figure matching the preset test pattern exists in the displayed scene image may also include the following specific steps:
Step S152, preprocessing step: preprocessing the obtained real-scene image. The preprocessing is intended to make it easy to extract valid feature information from the figure to be matched, so as to help judge whether the figure to be matched matches the preset test pattern. The preprocessing may include, but is not limited to: graying, binarization, feature-point extraction and so on. This effectively reduces the amount of computation while substantially shortening the delay of image matching.
In a preferred implementation, the preprocessing uses downsampling, color binarization and a high-pass filter. Downsampling improves detection performance: the pattern to be detected only needs to be sufficiently large, without having to be identical to the original image. Color binarization is used because the selected feature points do not depend on color, and the image is clearer after binarization. High-pass filtering reduces fine noise and prevents small blemishes in the figure, such as cuts or handwriting, from interfering.
Step S154, identification step: identifying the preprocessed real-scene image and determining the position of the figure to be matched. The identification of the real-scene image may cover the complete real-scene image, or only the image within a preset identification region of the first display area so as to reduce the amount of computation; preferably, only the image within the preset identification region of the first display area is identified, for example: only the image in the central area of the first display area. More preferably, the identification region and/or the boundary of the identification region can be rendered in the first interactive interface to form a region identification figure that helps the user observe and determine the identification region. The region identification figure may be a region frame (e.g. a polygonal wire frame) or a special color (e.g. a color block covering the identification region), and is preferably placed in the UI control layer included in the first interactive interface. Meanwhile, a prompt message can also be set in the UI control layer to prompt the user to move the first figure to the specified region, for example: prompting the user by text to move the mobile-phone camera so that the first figure is located in the central region of the screen. The position of the figure to be matched can be determined by techniques such as edge-contour detection.
Step S156, judgment step: judging whether the figure to be matched is a first figure matching the preset test pattern. In a preferred implementation, a similarity threshold can be set for the above matching, for example: when the similarity between the figure to be matched and the standard figure is greater than or equal to 80%, the two are regarded as matching. The threshold can be set arbitrarily according to actual needs.
For the image, the earlier the preprocessing is done the better, because interference can be excluded as early as possible; otherwise some figures that obviously cannot match may still be identified as correct. Early preprocessing therefore effectively reduces the probability of identification errors.
Optionally, the following step is executed after step S155 or step S156:
Step S157: if there is no first figure matching the test pattern, returning to step S151 or step S152.
Optionally, in step S15, it is judged whether a first figure matching the preset test pattern exists in the real-scene image; if it does, the following step is further included:
Step S16: rendering and generating a first prompt message, the prompt message being used to prompt the user to carry out the second operation.
When there is the first figure to match with preset test pattern in reality scene image, rendering generation first and carrying
Show information, this identification can be perceived in order to user and has been succeeded, contributes to the operation for prompting user to carry out next step.
In a preferred implementation, step S16 is specifically: rendering and generating a first three-dimensional model at the position of the first figure, or at a position determined from the information of the first figure according to a preset rule. The position of the first figure here is its position in the reality scene image. The first three-dimensional model can be any preset three-dimensional body, such as a magic array, an arched door, a summoning platform, or a will-o'-the-wisp. Alternatively or additionally, a second-operational-instruction control is generated in the interactive interface, the control being used to receive the second operational instruction of the user.
In a preferred embodiment, a space transformation matrix of the first figure in the virtual space is calculated according to the standard size information of the preset test pattern and the on-screen size information and position information of the first figure; the matrix of the first three-dimensional model is then set directly to this space transformation matrix before rendering and display, thereby achieving positioned display of the three-dimensional model.
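As a rough sketch of that space transformation matrix: a uniform scale can be derived from the ratio of the figure's on-screen size to its standard size, and a translation from its on-screen center. The row-major layout, the reference-size convention, and the use of screen coordinates for translation are all assumptions for illustration, not the patent's actual math.

```python
def space_transform(std_size_px, observed_size_px, center_xy):
    """Row-major 4x4 transform: uniform scale from apparent size plus a
    translation to the figure's on-screen centre. Assigning this matrix
    to the model before rendering places it over the figure."""
    s = observed_size_px / std_size_px
    cx, cy = center_xy
    return [
        [s,   0.0, 0.0, cx],
        [0.0, s,   0.0, cy],
        [0.0, 0.0, s,   0.0],
        [0.0, 0.0, 0.0, 1.0],
    ]

m = space_transform(100, 50, (320, 240))
print(m[0][0])           # 0.5: figure appears at half its standard size
print(m[0][3], m[1][3])  # 320 240: model translated to the figure's centre
```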
Optionally, besides generating a three-dimensional model, the prompt may take any other form, such as playing a prompt sound, popping up a prompt box, or generating a vibration prompt signal.
Optionally, in step S17, obtaining the three-dimensional model data of the corresponding virtual object according to the second operational instruction may be any one of the following:
obtaining the three-dimensional model data from a preset three-dimensional model library according to the second operational instruction; or
sending a request corresponding to the second operational instruction to a preset server side, and receiving the three-dimensional model data fed back by the server side; or
sending a request corresponding to the second operational instruction to a preset server side, receiving response information fed back by the server side, and obtaining the three-dimensional model data from a preset three-dimensional model library according to the response information, the response information including at least an identifier of the virtual object, the identifier being used to obtain the three-dimensional model data corresponding to the virtual object.
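The three acquisition routes above can be sketched as a single resolver. Everything here is hypothetical scaffolding: the library contents, the dictionary shapes, and the injected `request_server` callable stand in for a real model store and network client.

```python
LOCAL_LIBRARY = {"shikigami_01": {"mesh": "youko.obj"}}  # hypothetical preset 3D model library

def obtain_model_data(instruction, request_server=None):
    """Resolve the second operational instruction to 3D model data via the
    three routes described above: the local library, the server returning
    the data directly, or a server response carrying only an identifier."""
    wanted = instruction["object"]
    if request_server is None:                    # route 1: preset local library
        return LOCAL_LIBRARY[wanted]
    response = request_server(wanted)
    if "model_data" in response:                  # route 2: server feeds back the data
        return response["model_data"]
    return LOCAL_LIBRARY[response["identifier"]]  # route 3: identifier -> local library

print(obtain_model_data({"object": "shikigami_01"}))
print(obtain_model_data({"object": "shikigami_01"},
                        request_server=lambda o: {"identifier": o}))
```

Route 3 keeps heavy mesh data client-side while letting the server decide which object the draw produces, which matches the identifier-only response described in the text.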
In a preferred embodiment, the following manner is also possible: displaying, in a specific region of the first user interface, the names or icons of all or part of the obtainable virtual objects; receiving the second operational instruction, which selects a specific virtual object; and obtaining the corresponding three-dimensional model data according to the selected virtual object.
Optionally, in step S17, generating the virtual object includes:
Step S175: determining the rendering position of the virtual object according to the position of the first figure in the first display area;
Step S177: rendering and generating the virtual object at the rendering position.
Optionally, the second operational instruction includes the quantity of virtual objects to be obtained, and the quantity of first figures is one or more. In step S17, generating the virtual objects may also include:
Step S176: determining one or more rendering positions of the virtual objects according to the quantity of the first figures and the positions of the first figures in the first display area;
Step S178: rendering and generating all the virtual objects at the rendering positions according to preset rules.
That is, the second operational instruction may include the quantity of virtual objects to be obtained, and one or more first figures may be matched in the reality scene image. When there are multiple virtual objects and/or multiple first figures, all the virtual objects can be rendered and generated at the rendering positions according to the preset rules.
Preferably, the first user interface further includes a virtual object display layer; the virtual objects are correspondingly rendered on the virtual object display layer, which is preferably superimposed on the first display area, with the UI control layer superimposed on the virtual object display layer.
More preferably, parameters such as the position, orientation, and size of the virtual object are determined by the display state of the first figure in the first display area. In a preferred implementation, the position information of the first figure in the first display area is extracted to determine the position of the virtual object; the orientation of the first figure in the first display area is extracted to determine the orientation of the virtual object; and the depth information of the first figure in the first display area is extracted to determine the size of the virtual object. Once the parameter information of the virtual object is determined, the corresponding virtual object is rendered and generated according to it. Preferably, this function can be realized by hanging-point mounting: a first hanging point is set on the first figure, a second hanging point is set on the virtual object, and linkage between the virtual object and the first figure is realized by mounting the second hanging point onto the first hanging point, the second hanging point preferably being a node of the 3D model of the virtual object.
In a preferred embodiment, the parameters such as position, orientation, and size of the virtual object are obtained as follows: according to the standard size information of the preset test pattern and the on-screen size information of the first figure, the depth of the first figure can be inversely derived; the feature points of the first figure are obtained by the SIFT algorithm, from which the orientation of the first figure can be obtained; a world transformation matrix is then calculated from the above together with the position of the first figure in the terminal display space. The matrix of the virtual object is set directly to this world transformation matrix before rendering and display, thereby achieving positioned display of the virtual object.
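The depth inversion mentioned above follows directly from the pinhole model: a figure's apparent size shrinks in proportion to its distance. A minimal sketch (the reference depth is an assumed calibration constant; the full pipeline would also feed SIFT-derived orientation into the world matrix):

```python
def infer_depth(std_size_px, observed_size_px, reference_depth=1.0):
    """Pinhole-style inverse: apparent size halves when depth doubles, so
    depth scales with std_size / observed_size. `reference_depth` is the
    assumed depth at which the figure shows its standard size."""
    return reference_depth * std_size_px / observed_size_px

print(infer_depth(100, 100))  # 1.0: figure at the reference depth
print(infer_depth(100, 50))   # 2.0: twice as far, so the model is drawn smaller
print(infer_depth(100, 200))  # 0.5: closer, so the model is drawn larger
```

This inferred depth is what lets the virtual object's size track the physical marker without any dedicated depth sensor.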
Optionally, in step S178, rendering and generating all the virtual objects at the rendering positions according to preset rules includes the following steps:
when the quantity of virtual objects is less than or equal to the quantity of first figures, rendering and generating all the virtual objects at one or more of the rendering positions according to a preset priority rule; alternatively,
when the quantity of virtual objects is greater than the quantity of first figures, rendering and generating all the virtual objects successively in a preset order at one or more of the rendering positions, paced by a preset time interval or by a third operational instruction.
The third operational instruction includes, but is not limited to, user behaviors such as touching, clicking, sliding off a specific region, pressing a button, or uttering a specific sound. It may also be an operational instruction generated by the system according to a predetermined rule when the conditions specified by that rule are met; for example, after the user completes a specified mission objective in a game, the game system automatically generates an operational instruction indicating a request to continue obtaining virtual objects.
In a preferred embodiment, if the quantity of virtual objects is less than or equal to the quantity of first figures, the number of rendering positions obtained from the first figures is greater than or equal to the number of virtual objects, so all the virtual objects can be rendered and generated at one or more of the rendering positions according to a preset priority rule. The priority rule may be, without limitation, the order in which the first figures matched successfully, the size order of the first figures, and so on. If the quantity of virtual objects is greater than the quantity of first figures, the number of rendering positions obtained from the first figures is less than the number of virtual objects, and all the virtual objects cannot be generated at the rendering positions in a single pass; they are therefore rendered and generated successively in a preset order at one or more of the rendering positions, paced by a preset time interval or by the third operational instruction. For example, if 6 rendering positions are determined from the first figures and the quantity of virtual objects is 11, the first 6 virtual objects are first rendered and generated at all 6 rendering positions in order (which may be the numbering order of the virtual objects, the order in which they were obtained, etc.); then, after the preset time interval has elapsed or the user's third operational instruction has been received, the 6 previously generated virtual objects are terminated, and the last 5 virtual objects are rendered and generated at the first 5 rendering positions according to the preset priority rule (which may be the order of successful matching of the first figures, the size order of the first figures, etc.). In this embodiment, batch display of virtual objects is realized according to the quantity of virtual objects and the quantity of first figures, which effectively increases the efficiency of virtual object display and greatly improves the user experience.
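The batching logic just described (11 objects over 6 positions → a batch of 6, then a batch of 5) can be sketched as follows. List order stands in for the preset priority rule; the names are illustrative.

```python
def batch_render_plan(object_ids, positions):
    """Pair virtual objects with rendering positions in priority order;
    when there are more objects than positions, later batches wait for
    the preset time interval or the third operational instruction."""
    k = len(positions)
    return [list(zip(object_ids[i:i + k], positions))  # zip truncates the last batch
            for i in range(0, len(object_ids), k)]

objs = [f"shikigami_{i}" for i in range(11)]   # 11 objects requested
spots = [f"figure_{i}" for i in range(6)]      # 6 first figures matched
plan = batch_render_plan(objs, spots)
print(len(plan))     # 2 batches
print(len(plan[0]))  # 6 rendered first
print(len(plan[1]))  # 5 rendered after the interval / third instruction
```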
In a preferred implementation, after step S177, the method further includes step S19: controlling the virtual object to follow the movement of the first figure.
The first figure may move relative to the first display area either because the physical object corresponding to the first figure has moved in the reality scene, or because the camera device has moved. The movement includes, but is not limited to, displacement, change of orientation, and change of depth. Controlling the virtual object to follow the first figure can be realized in at least the following two ways. First, the movement of the first figure relative to the first display area is detected in real time, and the adjusted virtual object is re-rendered in real time to track the first figure; this method is computationally intensive and may introduce latency. Second, a motion-state prediction technique may be used: a predetermined sampling interval is set; the motion state of the first figure is obtained at each sampling moment; the motion state at the next moment is predicted from the sampled states; the adjusted virtual object is rendered and then smoothed. This greatly reduces the amount of computation while preserving the continuity of the movement. When the motion-state prediction technique is used, the orientations and positions of the preceding several frames can be interpolated to keep the movement smooth, while tiny jitter during movement is prevented by means of high-pass filtering.
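A minimal sketch of the prediction-and-smoothing idea: constant-velocity extrapolation between samples, plus jitter suppression. The patent names high-pass filtering for the anti-shake step; a simple dead-band threshold is substituted here as an assumption, since it serves the same purpose of keeping sub-pixel shake out of the render in a few lines.

```python
def predict_next(prev_xy, curr_xy):
    """Constant-velocity extrapolation of the figure's next sampled
    position (a minimal stand-in for motion-state prediction)."""
    return tuple(2 * c - p for p, c in zip(prev_xy, curr_xy))

def apply_deadband(curr_xy, pred_xy, eps=1.0):
    """Suppress sub-threshold motion so tiny shake never reaches the
    render; `eps` (pixels) is an assumed jitter threshold."""
    if max(abs(p - c) for p, c in zip(pred_xy, curr_xy)) <= eps:
        return curr_xy
    return pred_xy

print(predict_next((0, 0), (3, 4)))            # (6, 8)
print(apply_deadband((10, 10), (10.4, 10.2)))  # (10, 10): jitter ignored
print(apply_deadband((10, 10), (16, 10)))      # (16, 10): real movement passes through
```

Because the predictor only runs at sampling moments and the renderer interpolates between predictions, per-frame detection cost drops substantially, which is the trade-off the text describes.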
Optionally, after step S17, the method further includes:
Step S111: after detecting that the first figure has disappeared, terminating rendering of the virtual object, and monitoring in real time whether a first figure matching the preset test pattern exists in the reality scene image; if one exists, repeating steps S175 and S177.
Preferably, the method further includes step S21: monitoring the user's fourth operational instruction, and after receiving the fourth operational instruction, exporting the currently displayed image in a preset picture format; the exported image is preferably the virtual object superimposed on the reality scene image.
Through the above steps, the present invention associates virtual objects with specific objects in the reality scene, and controls the position, display angle, and the like of the virtual object in the captured reality scene image through the position of the specific object, thereby realizing freer interaction with virtual objects.
Figs. 2-6 are flow charts of a display method of virtual objects according to one preferred embodiment of the present invention.
When the user's summoning instruction (the first operational instruction) is obtained, as shown in Fig. 2, a first interactive interface is rendered and generated, as shown in Fig. 3. The interactive interface includes a first display area which, in this embodiment, occupies the whole interactive interface. A reality scene image is obtained by the terminal's camera and displayed in the first display area.
It is judged whether a first figure matching the preset test pattern exists in the reality scene image. As shown in Figs. 3-4, the 田-shaped pattern is the preset test pattern; when the reality scene image is scanned, if a first figure matching the standard 田-shaped pattern exists, a three-dimensional magic array (the first three-dimensional model) is rendered and generated at the position of the first figure to prompt the user that matching has succeeded. As shown in Fig. 4, the magic array is located on the first figure.
An operational instruction in which the user slides the screen (the second operational instruction) is monitored; the instruction indicates a request to obtain a shikigami (the virtual object), as shown in Fig. 5. The request is sent, and the corresponding response information is received; the response information includes at least the identifier of the corresponding shikigami, the identifier being used to obtain the shikigami's three-dimensional model data.
The rendering position of the shikigami is determined from the position of the first figure in the first display area, and the shikigami is generated at that rendering position; as shown in Fig. 6, the shikigami stands on the first figure and the magic array.
By moving the 田-shaped pattern in the reality scene or moving the camera, the shikigami and the magic array can be made to follow the movement.
According to one embodiment of the present invention, a further embodiment of the display method of virtual objects is provided.
When the user's summoning instruction (the first operational instruction) is obtained, a first interactive interface is rendered and generated; the interactive interface includes a first display area which, in this embodiment, occupies the whole interactive interface. A reality scene image is obtained by the terminal's camera and displayed in the first display area.
It is judged whether one or more first figures matching the preset test pattern exist in the reality scene image. In this embodiment, the preset test pattern is the 田-shaped pattern; when the reality scene image is scanned, if one or more first figures matching the 田-shaped test pattern exist, one or more three-dimensional magic arrays (the first three-dimensional model) are rendered and generated at all the positions of the first figures, to prompt the user that matching has succeeded; each magic array is located on a first figure.
An operational instruction in which the user slides the screen (the second operational instruction) is monitored; the instruction indicates a request to obtain shikigami (the virtual objects) and includes the quantity of shikigami to be obtained, for example an eleven-pull draw. The request is sent, and the corresponding response information is received; the response information includes at least the identifier of each corresponding shikigami, the identifier being used to obtain the shikigami's three-dimensional model data.
One or more rendering positions of the shikigami are determined according to the quantity of the first figures and the positions of the first figures in the first display area, and the shikigami are generated at the rendering positions, each standing on a first figure and its magic array. If the quantity of shikigami is less than or equal to the quantity of first figures, all the shikigami are rendered and generated at one or more of the rendering positions according to a preset priority rule. If the quantity of shikigami is greater than the quantity of first figures, all the shikigami are rendered and generated successively in a preset order at one or more of the rendering positions, paced by a preset time interval or by a touch instruction issued by the user.
For example, if 6 rendering positions are determined from the first figures and the quantity of shikigami is 11, the first 6 shikigami are first rendered and generated at all 6 rendering positions in order; then, after the preset time interval has elapsed or the user's third operational instruction has been received, the 6 previously generated virtual objects are terminated, and the last 5 virtual objects are rendered and generated at the first 5 rendering positions according to the preset priority rule. In this embodiment, batch display of virtual objects is realized according to the quantity of virtual objects and the quantity of first figures, which effectively increases the efficiency of virtual object display and greatly improves the user experience.
According to one embodiment of the present invention, an embodiment of a display device of virtual objects is also provided. Fig. 7 is a structure diagram of a display device of virtual objects according to one embodiment of the present invention. As shown in Fig. 7, the device may include: an interface rendering unit 10, configured to render and generate a first interactive interface when the user's first operational instruction is obtained, the first interactive interface including a first display area; a display unit 20, configured to obtain a reality scene image and display it in the first display area; a matching unit 30, configured to judge whether a first figure matching the preset test pattern exists in the reality scene image and, if one exists, to monitor the user's second operational instruction; and an acquiring unit 40, configured to, when the second operational instruction is received, obtain the three-dimensional model data of the corresponding virtual object according to the second operational instruction, and render and generate the virtual object according to the three-dimensional model data.
Optionally, the device further includes a following-movement unit 50 (not shown), configured to control the virtual object to follow the movement of the first figure.
The display device for virtual objects provided in this embodiment can perform the display method of virtual objects provided by the method embodiments of the present invention, and has the corresponding functional modules and advantageous effects of the executed method.
According to one embodiment of the present invention, a storage medium is also provided. The storage medium includes a stored program; when the program runs, the device on which the storage medium is located is controlled to execute the above display method of virtual objects. The storage medium may include, but is not limited to, various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
According to one embodiment of the present invention, a processor is also provided. The processor is configured to run a program; when the program runs, the above display method of virtual objects is executed. The processor may include, but is not limited to, a processing unit such as a microcontroller unit (MCU) or a programmable logic device (FPGA).
According to one embodiment of the present invention, a terminal is also provided, including: one or more processors, a memory, a display device, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, and the programs include instructions for performing the above display method of virtual objects. In some embodiments, the terminal may be a terminal device such as a smartphone (e.g., an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (Mobile Internet Device, MID for short), or a PAD. The display device may be a touch-screen liquid crystal display (LCD) that enables the user to interact with the user interface of the terminal. In addition, the terminal may further include: an input/output interface (I/O interface), a universal serial bus (USB) port, a network interface, a power supply, and/or a camera.
The above embodiment numbers of the present invention are for description only and do not represent the relative merits of the embodiments.
In the above embodiments of the present invention, the description of each embodiment has its own emphasis; for any part not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed technical content may be implemented in other ways. The device embodiments described above are merely illustrative. For example, the division of units may be a division by logical function; in actual implementation there may be other ways of division, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual coupling, direct coupling, or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, units, or modules, and may be electrical or in other forms.
The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods of the embodiments of the present invention.
The above are only preferred embodiments of the present invention. It should be noted that a person of ordinary skill in the art may make various improvements and modifications without departing from the principle of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.
Claims (18)
1. A display method of virtual objects, characterized by including:
when a first operational instruction is obtained, rendering and generating a first interactive interface, the first interactive interface including a first display area;
obtaining a reality scene image and displaying it in the first display area;
judging whether a first figure matching a preset test pattern exists in the reality scene image, and if it does, monitoring a second operational instruction of the user;
when the second operational instruction is received, obtaining three-dimensional model data of a corresponding virtual object according to the second operational instruction, and rendering and generating the virtual object according to the three-dimensional model data.
2. The method according to claim 1, characterized in that obtaining a reality scene image and displaying it in the first display area includes: capturing the reality scene in real time by a camera device, and displaying the reality scene image in real time in the first display area.
3. The method according to claim 1, characterized in that the first figure is a non-centrosymmetric image.
4. The method according to claim 1, characterized in that, when it is judged that a first figure matching the preset test pattern exists in the reality scene image, first prompt information is rendered and generated, the prompt information being used to prompt the user to perform the second operation.
5. The method according to claim 4, characterized in that rendering and generating the first prompt information includes: rendering and generating a first three-dimensional model at the position of the first figure, and/or generating a second-operational-instruction control in the interactive interface, the second-operational-instruction control being used to receive the second operational instruction of the user.
6. The method according to claim 1, characterized in that obtaining the three-dimensional model data of the corresponding virtual object according to the second operational instruction includes:
obtaining the three-dimensional model data from a preset three-dimensional model library according to the second operational instruction; or
sending a request corresponding to the second operational instruction to a preset server side, and receiving the three-dimensional model data fed back by the server side; or
sending a request corresponding to the second operational instruction to a preset server side, receiving response information fed back by the server side, and obtaining the three-dimensional model data from a preset three-dimensional model library according to the response information, the response information including at least an identifier of the virtual object, the identifier being used to obtain the three-dimensional model data corresponding to the virtual object.
7. The method according to claim 1, characterized in that the second operational instruction includes: the quantity of the virtual objects to be obtained; the quantity of the first figures is one or more; and rendering and generating the virtual objects according to the three-dimensional model data includes:
determining one or more rendering positions of the virtual objects according to the quantity of the first figures and the positions of the first figures in the first display area;
rendering and generating all the virtual objects at the rendering positions according to preset rules.
8. The method according to claim 7, characterized in that rendering and generating all the virtual objects at the rendering positions according to preset rules includes:
when the quantity of the virtual objects is less than or equal to the quantity of the first figures, rendering and generating all the virtual objects at one or more of the rendering positions according to a preset priority rule;
when the quantity of the virtual objects is greater than the quantity of the first figures, rendering and generating all the virtual objects successively in a preset order at one or more of the rendering positions according to a preset time interval or a third operational instruction.
9. The method according to any one of claims 1-8, characterized in that judging whether a first figure matching the preset test pattern exists in the reality scene includes:
an identification step: parsing the obtained reality scene image and identifying the position of a figure to be matched;
a matching step: preprocessing the figure to be matched;
a judgment step: judging whether the figure to be matched is a first figure matching the preset test pattern.
10. The method according to any one of claims 1-8, characterized in that judging whether a first figure matching the preset test pattern exists in the reality scene includes:
a preprocessing step: preprocessing the obtained reality scene image;
an identification step: identifying the preprocessed reality scene image and determining the position of a figure to be matched;
a judgment step: judging whether the figure to be matched is a first figure matching the preset test pattern.
11. The method according to claim 1, characterized in that, if no first figure matching the preset test pattern exists, prompt information of recognition failure is displayed.
12. The method according to claim 1, characterized in that, after rendering and generating the multiple virtual objects according to the three-dimensional model data, the method further includes:
after detecting that the first figure has disappeared, terminating rendering of the virtual objects.
13. The method according to claim 1, characterized in that, after rendering and generating the multiple virtual objects according to the three-dimensional model data, the method further includes:
controlling the virtual objects to follow the movement of the first figure.
14. The method according to any one of claims 1-8, characterized by further including the following step:
monitoring a fourth operational instruction of the user, and after receiving the fourth operational instruction of the user, exporting the currently displayed image in a preset picture format.
15. A display device of virtual objects, characterized by including:
an interface rendering unit, configured to render and generate a first interactive interface when a first operational instruction of the user is obtained, the first interactive interface including a first display area;
a display unit, configured to obtain a reality scene image and display it in the first display area;
a matching unit, configured to judge whether a first figure matching a preset test pattern exists in the reality scene image, and if it does, to monitor a second operational instruction of the user;
an acquiring unit, configured to, when the second operational instruction is received, obtain three-dimensional model data of a corresponding virtual object according to the second operational instruction, and render and generate the virtual object according to the three-dimensional model data.
16. a kind of storage medium, which is characterized in that the storage medium includes the program of storage, wherein run in described program
When control the storage medium where equipment perform claim require the display sides of the virtual objects described in any one of 1 to 14
Method.
17. a kind of processor, which is characterized in that the processor is for running program, wherein right of execution when described program is run
Profit requires the display methods of the virtual objects described in any one of 1 to 14.
18. A terminal, comprising: one or more processors, a memory, a display device, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the programs comprising instructions for performing the method for displaying virtual objects according to any one of claims 1 to 14.
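The flow claimed above (display a real-scene image, detect a preset "first graphic", load and render a virtual object on a user instruction, and keep the object tracking the graphic per claim 13) can be sketched in pseudocode form. This is a minimal illustrative model, not the patented implementation: all class, method, and marker names (`ARDisplayDevice`, `summon`, `summon_card`, etc.) are invented for this sketch, and real detection/rendering (e.g. via a CV and 3D engine) is replaced by dictionary lookups.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class VirtualObject:
    """A rendered virtual object anchored at a screen-space position."""
    model_id: str
    position: Tuple[int, int]


class ARDisplayDevice:
    """Mirrors the claimed units: display, matching, and acquiring."""

    def __init__(self, preset_marker: str):
        self.preset_marker = preset_marker          # the preset "test graphic"
        self.objects: List[VirtualObject] = []
        self.frame = {"markers": {}}

    def show_scene(self, frame: dict) -> None:
        # Display unit: show the real-scene image in the first display area.
        self.frame = frame

    def find_marker(self) -> Optional[Tuple[int, int]]:
        # Matching unit: does the scene contain a graphic matching the preset one?
        return self.frame.get("markers", {}).get(self.preset_marker)

    def summon(self, model_id: str) -> Optional[VirtualObject]:
        # Acquiring unit: on the second operation instruction, "load" the
        # 3D model data and render the virtual object at the marker position.
        pos = self.find_marker()
        if pos is None:
            return None                             # no matching first graphic
        obj = VirtualObject(model_id, pos)
        self.objects.append(obj)
        return obj

    def update(self, frame: dict) -> None:
        # Claim 13: rendered objects follow the first graphic as it moves.
        self.show_scene(frame)
        pos = self.find_marker()
        if pos is not None:
            for obj in self.objects:
                obj.position = pos


device = ARDisplayDevice(preset_marker="summon_card")
device.show_scene({"markers": {"summon_card": (120, 80)}})
obj = device.summon("model_01")
print(obj.position)   # object rendered at the marker position
device.update({"markers": {"summon_card": (200, 150)}})
print(obj.position)   # object followed the moving marker
```

The point of the sketch is the control flow: no object is created unless the matching step succeeds, and tracking is a per-frame update of each object's anchor to the marker's new position.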
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710061376 | 2017-01-25 | ||
CN2017100613767 | 2017-01-25 |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108305325A true CN108305325A (en) | 2018-07-20 |
Family
ID=62801204
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710313563.XA Pending CN108305325A (en) | 2017-01-25 | 2017-05-05 | Method and apparatus for displaying virtual objects
CN201710314092.4A Pending CN108273265A (en) | 2017-01-25 | 2017-05-05 | Method and apparatus for displaying virtual objects
CN201710314103.9A Pending CN108288306A (en) | 2017-01-25 | 2017-05-05 | Method and apparatus for displaying virtual objects
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710314092.4A Pending CN108273265A (en) | 2017-01-25 | 2017-05-05 | Method and apparatus for displaying virtual objects
CN201710314103.9A Pending CN108288306A (en) | 2017-01-25 | 2017-05-05 | Method and apparatus for displaying virtual objects
Country Status (1)
Country | Link |
---|---|
CN (3) | CN108305325A (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109939433A (en) * | 2019-03-11 | 2019-06-28 | NetEase (Hangzhou) Network Co., Ltd. | Operation control method and apparatus for virtual cards, storage medium, and electronic device
CN110058685A (en) * | 2019-03-20 | 2019-07-26 | Beijing ByteDance Network Technology Co., Ltd. | Virtual object display method and apparatus, electronic device, and computer-readable storage medium
CN110404250A (en) * | 2019-08-26 | 2019-11-05 | NetEase (Hangzhou) Network Co., Ltd. | Card drawing method and apparatus in a game
CN110533780A (en) * | 2019-08-28 | 2019-12-03 | Shenzhen SenseTime Technology Co., Ltd. | Image processing method and apparatus, device, and storage medium
CN111821691A (en) * | 2020-07-24 | 2020-10-27 | Tencent Technology (Shenzhen) Co., Ltd. | Interface display method, apparatus, terminal, and storage medium
CN112710254A (en) * | 2020-12-21 | 2021-04-27 | Zhuhai Gree Intelligent Equipment Co., Ltd. | Object measurement method, system, apparatus, storage medium, and processor
CN114758042A (en) * | 2022-06-14 | 2022-07-15 | Shenzhen Zhihua Technology Development Co., Ltd. | Virtual simulation engine, virtual simulation method, and apparatus
Families Citing this family (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109107156A (en) * | 2018-08-10 | 2019-01-01 | Tencent Technology (Shenzhen) Co., Ltd. | Game object acquisition method and apparatus, electronic device, and readable storage medium
CN109078327A (en) * | 2018-08-28 | 2018-12-25 | Baidu Online Network Technology (Beijing) Co., Ltd. | AR-based game implementation method and device
CN110069125B (en) * | 2018-09-21 | 2023-12-22 | Beijing Microlive Vision Technology Co., Ltd. | Virtual object control method and apparatus
CN111103967A (en) * | 2018-10-25 | 2020-05-05 | Beijing Microlive Vision Technology Co., Ltd. | Control method and apparatus for virtual objects
CN109472873B (en) * | 2018-11-02 | 2023-09-19 | Beijing Microlive Vision Technology Co., Ltd. | Three-dimensional model generation method, apparatus, and hardware device
CN109685910A (en) * | 2018-11-16 | 2019-04-26 | Chengdu Shenghuojia Network Technology Co., Ltd. | VR-based room furnishing configuration method, apparatus, and VR wearable device
CN110109726B (en) * | 2019-04-30 | 2022-08-23 | NetEase (Hangzhou) Network Co., Ltd. | Virtual object receiving and processing method and apparatus, sending method and apparatus, and storage medium
CN112516593B (en) * | 2019-09-19 | 2023-01-24 | Shanghai Bilibili Technology Co., Ltd. | Card drawing method, card drawing system, and computer device
CN110639202B (en) * | 2019-10-29 | 2021-11-12 | NetEase (Hangzhou) Network Co., Ltd. | Display control method and apparatus in a card game
CN110975285B (en) * | 2019-12-06 | 2024-03-22 | Beijing Pixel Software Technology Co., Ltd. | Method and apparatus for obtaining smooth blade-light effects
CN111752161B (en) * | 2020-06-18 | 2023-06-30 | Gree Electric Appliances (Chongqing) Co., Ltd. | Electrical appliance control method, system, and storage medium
CN111913624B (en) * | 2020-08-18 | 2022-06-07 | Tencent Technology (Shenzhen) Co., Ltd. | Method and apparatus for interaction between objects in a virtual scene
CN112051961A (en) * | 2020-09-04 | 2020-12-08 | 脸萌有限公司 | Virtual interaction method and apparatus, electronic device, and computer-readable storage medium
CN112221124B (en) * | 2020-10-21 | 2022-11-08 | Tencent Technology (Shenzhen) Co., Ltd. | Virtual object generation method and apparatus, electronic device, and storage medium
CN113058267B (en) * | 2021-04-06 | 2024-02-02 | NetEase (Hangzhou) Network Co., Ltd. | Virtual object control method and apparatus, and electronic device
CN113101647B (en) * | 2021-04-14 | 2023-10-24 | Beijing Zitiao Network Technology Co., Ltd. | Information display method, apparatus, device, and storage medium
CN113289334A (en) * | 2021-05-14 | 2021-08-24 | NetEase (Hangzhou) Network Co., Ltd. | Game scene display method and apparatus
WO2022241701A1 (en) * | 2021-05-20 | 2022-11-24 | Huawei Technologies Co., Ltd. | Image processing method and device
CN113590013B (en) * | 2021-07-13 | 2023-08-25 | NetEase (Hangzhou) Network Co., Ltd. | Virtual resource processing method, nonvolatile storage medium, and electronic device
CN113691796B (en) * | 2021-08-16 | 2023-06-02 | Fujian Kaimi Network Technology Co., Ltd. | Three-dimensional scene interaction method via two-dimensional simulation, and computer-readable storage medium
CN114047998B (en) * | 2021-11-30 | 2024-04-19 | Zhuhai Kingsoft Digital Network Technology Co., Ltd. | Object updating method and apparatus
CN114307138B (en) * | 2021-12-28 | 2023-09-26 | Beijing Zitiao Network Technology Co., Ltd. | Card-based interaction method and apparatus, computer device, and storage medium
CN116679824A (en) * | 2022-02-23 | 2023-09-01 | Huawei Technologies Co., Ltd. | Human-computer interaction method and apparatus in an augmented reality (AR) scene, and electronic device
CN115350475B (en) * | 2022-06-30 | 2023-06-23 | Element Creation (Shenzhen) Network Technology Co., Ltd. | Virtual object control method and apparatus
CN115185374B (en) * | 2022-07-14 | 2023-04-07 | Beijing Qidaisong Technology Co., Ltd. | Virtual-reality-based data processing system
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20100138193A (en) * | 2009-06-24 | 2010-12-31 | Nextkey Co., Ltd. | Augmented reality content providing system and device for touchscreen-based user interaction
CN102902710A (en) * | 2012-08-08 | 2013-01-30 | Chengdu Idealsee Technology Co., Ltd. | Barcode-based augmented reality method, system, and mobile terminal
CN106157359A (en) * | 2015-04-23 | 2016-11-23 | Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences | Design method for a virtual scene experience system
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9111418B2 (en) * | 2010-12-15 | 2015-08-18 | Bally Gaming, Inc. | System and method for augmented reality using a player card |
CN105844714A (en) * | 2016-04-12 | 2016-08-10 | Guangzhou Fantuo Digital Creative Technology Co., Ltd. | Augmented-reality-based scene display method and system
CN105929945A (en) * | 2016-04-18 | 2016-09-07 | Zhanshiwang (Beijing) Technology Co., Ltd. | Augmented reality interaction method and apparatus, mobile terminal, and minicomputer
-
2017
- 2017-05-05 CN CN201710313563.XA patent/CN108305325A/en active Pending
- 2017-05-05 CN CN201710314092.4A patent/CN108273265A/en active Pending
- 2017-05-05 CN CN201710314103.9A patent/CN108288306A/en active Pending
Non-Patent Citations (1)
Title |
---|
Anonymous: "New AR gameplay: Real-World Summoning experience now live!", 《HTTP://YYS.163.COM/M/ZLP/20170118/24874_668345.HTML》 *
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109939433A (en) * | 2019-03-11 | 2019-06-28 | NetEase (Hangzhou) Network Co., Ltd. | Operation control method and apparatus for virtual cards, storage medium, and electronic device
CN110058685A (en) * | 2019-03-20 | 2019-07-26 | Beijing ByteDance Network Technology Co., Ltd. | Virtual object display method and apparatus, electronic device, and computer-readable storage medium
CN110404250A (en) * | 2019-08-26 | 2019-11-05 | NetEase (Hangzhou) Network Co., Ltd. | Card drawing method and apparatus in a game
CN110404250B (en) * | 2019-08-26 | 2023-08-22 | NetEase (Hangzhou) Network Co., Ltd. | Card drawing method and apparatus in a game
CN110533780A (en) * | 2019-08-28 | 2019-12-03 | Shenzhen SenseTime Technology Co., Ltd. | Image processing method and apparatus, device, and storage medium
CN110533780B (en) * | 2019-08-28 | 2023-02-24 | Shenzhen SenseTime Technology Co., Ltd. | Image processing method and apparatus, device, and storage medium
US11880956B2 (en) | 2019-08-28 | 2024-01-23 | Shenzhen Sensetime Technology Co., Ltd. | Image processing method and apparatus, and computer storage medium
CN111821691A (en) * | 2020-07-24 | 2020-10-27 | Tencent Technology (Shenzhen) Co., Ltd. | Interface display method, apparatus, terminal, and storage medium
CN112710254A (en) * | 2020-12-21 | 2021-04-27 | Zhuhai Gree Intelligent Equipment Co., Ltd. | Object measurement method, system, apparatus, storage medium, and processor
CN114758042A (en) * | 2022-06-14 | 2022-07-15 | Shenzhen Zhihua Technology Development Co., Ltd. | Virtual simulation engine, virtual simulation method, and apparatus
Also Published As
Publication number | Publication date |
---|---|
CN108273265A (en) | 2018-07-13 |
CN108288306A (en) | 2018-07-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108305325A (en) | Method and apparatus for displaying virtual objects | |
CN108229239B (en) | Image processing method and device | |
US20180088663A1 (en) | Method and system for gesture-based interactions | |
US20190236259A1 (en) | Method for 3d graphical authentication on electronic devices | |
CN108229329A (en) | Face anti-spoofing detection method and system, electronic device, program, and medium | |
CN108525299B (en) | System and method for enhancing computer applications for remote services | |
CN111324253B (en) | Virtual article interaction method and device, computer equipment and storage medium | |
CN108874114B (en) | Method and device for realizing emotion expression of virtual object, computer equipment and storage medium | |
US20130235045A1 (en) | Systems and methods for creating and distributing modifiable animated video messages | |
CN108108748A (en) | Information processing method and electronic device | |
US20210089639A1 (en) | Method and system for 3d graphical authentication on electronic devices | |
CN109064390A (en) | Image processing method, image processing apparatus, and mobile terminal | |
CN106200960A (en) | Content display method and apparatus for electronic interactive products | |
CN109035415B (en) | Virtual model processing method, device, equipment and computer readable storage medium | |
JP2011159329A (en) | Automatic 3D modeling system and method | |
US20230177755A1 (en) | Predicting facial expressions using character motion states | |
CN111273777A (en) | Virtual content control method and device, electronic equipment and storage medium | |
CN107817701A (en) | Device control method and apparatus, computer-readable storage medium, and terminal | |
CN107797661A (en) | Device control apparatus, method, and mobile terminal | |
CN103839032B (en) | Recognition method and electronic device | |
TW202138971A (en) | Interaction method and apparatus, interaction system, electronic device, and storage medium | |
CN111651054A (en) | Sound effect control method and device, electronic equipment and storage medium | |
CN110719415A (en) | Video image processing method and device, electronic equipment and computer readable medium | |
JP2022092745A (en) | Operation method using gesture in extended reality and head-mounted display system | |
CN114546103A (en) | Operation method through gestures in augmented reality and head-mounted display system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
RJ01 | Rejection of invention patent application after publication ||
Application publication date: 20180720 |