CN108288306A - Display method and apparatus for virtual objects - Google Patents

Display method and apparatus for virtual objects

Info

Publication number
CN108288306A
Authority
CN
China
Prior art keywords
virtual object
operation instruction
real scene
preset
scene image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710314103.9A
Other languages
Chinese (zh)
Inventor
唐梓文
张颖鹏
沈俊毅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Publication of CN108288306A
Legal status: Pending


Classifications

    • G06T19/006 — Manipulating 3D models or images for computer graphics: mixed reality
    • A63F13/42 — Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment, by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F13/52 — Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • G06F3/0481 — Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0487 — Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/04883 — Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, e.g. input of commands through traced gestures, for inputting data by handwriting, e.g. gesture or text
    • G06F3/167 — Sound input/output: audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • A63F2300/30 — Features of games using an electronically generated display having two or more dimensions, characterized by output arrangements for receiving control signals generated by the game device
    • A63F2300/66 — Methods for processing data by generating or executing the game program for rendering three-dimensional images

Abstract

The invention discloses a display method and apparatus for virtual objects. The method includes: when a first operation instruction from a user is received, rendering and generating a first interactive interface, the first interactive interface including a first display area; acquiring a real-scene image and displaying it in the first display area; determining whether the real-scene image contains a first pattern that matches a preset standard pattern; if so, monitoring for a second operation instruction from the user; when the second operation instruction is received, obtaining three-dimensional model data of the corresponding virtual object according to the second operation instruction; rendering and generating the virtual object from the three-dimensional model data; and controlling the virtual object to follow the movement of the first pattern. The invention solves the technical problem in the related art that the acquisition of virtual objects lacks a novel interaction mode and the user experience is poor.

Description

Display method and apparatus for virtual objects
Technical field
The present invention relates to the field of games, and in particular to a display method and apparatus for virtual objects.
Background
In existing game software, a user usually needs to draw virtual objects such as cards, characters, or props in order to play; in card games in particular, drawing and displaying cards is one of the most important parts of the game content. In existing games, however, virtual objects such as cards, characters, or props are usually acquired by the user clicking a draw button, after which the system generates a specific virtual object according to a preset probability and shows it to the user; the object is usually displayed by the user clicking an identifier of the virtual object, after which the system generates a specific 2D image of the virtual object and shows it to the user. This workflow has the following drawbacks: there is little interaction between the user and the virtual objects acquired in the game, and the way virtual objects are displayed is very monotonous, so the user experience is poor.
Summary of the invention
An embodiment of the present invention provides a display method and apparatus for virtual objects, so as to at least solve the technical problem in the related art that the acquisition of virtual objects lacks a novel interaction mode and the user experience is poor.
According to one aspect of an embodiment of the present invention, a display method for virtual objects is provided, including: when a first operation instruction is received, rendering and generating a first interactive interface, the first interactive interface including a first display area; acquiring a real-scene image and displaying it in the first display area; determining whether the real-scene image contains a first pattern that matches a preset standard pattern, and if so, monitoring for a second operation instruction from the user; when the second operation instruction is received, obtaining three-dimensional model data of the corresponding virtual object according to the second operation instruction and rendering and generating the virtual object from the three-dimensional model data; and controlling the virtual object to follow the movement of the first pattern.
According to another aspect of an embodiment of the present invention, a display apparatus for virtual objects is further provided, including: an interface rendering unit, configured to render and generate a first interactive interface when a first operation instruction from the user is received, the first interactive interface including a first display area; a display unit, configured to acquire a real-scene image and display it in the first display area; a matching unit, configured to determine whether the real-scene image contains a first pattern that matches a preset standard pattern and, if so, to monitor for a second operation instruction from the user; an acquiring unit, configured to obtain, when the second operation instruction is received, three-dimensional model data of the corresponding virtual object according to the second operation instruction and to render and generate the virtual object from the three-dimensional model data; and a follow unit, configured to control the virtual object to follow the movement of the first pattern.
According to one aspect of an embodiment of the present invention, a storage medium is provided, which includes a stored program, wherein when the program runs, a device where the storage medium is located is controlled to execute the above display method for virtual objects.
According to one aspect of an embodiment of the present invention, a processor is provided for running a program, wherein the program, when running, executes the above display method for virtual objects.
According to one aspect of an embodiment of the present invention, a terminal is provided, including: one or more processors, a memory, a display device, and one or more programs, wherein the one or more programs are stored in the memory and are configured to be executed by the one or more processors, and the programs include instructions for executing the above display method for virtual objects.
In an embodiment of the present invention, a game virtual object is displayed in a real scene in the manner described above, so that the display of the virtual object is combined with the real scene. This realizes a novel interaction mode between the virtual game scene and the real-world scene and effectively improves the user experience, thereby solving the technical problem in the related art that the acquisition of virtual objects lacks a novel interaction mode and the user experience is poor.
Description of the drawings
The accompanying drawings described herein are provided for a further understanding of the present invention and form a part of this application. The illustrative embodiments of the present invention and their descriptions are used to explain the present invention and do not constitute an improper limitation of the present invention. In the drawings:
Fig. 1 is a flowchart of a display method for virtual objects according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of a display method for virtual objects according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of a display method for virtual objects according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of a display method for virtual objects according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of a display method for virtual objects according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of a display method for virtual objects according to an embodiment of the present invention;
Fig. 7 is a structural block diagram of a display apparatus for virtual objects according to an embodiment of the present invention.
Detailed description of the embodiments
In order to enable those skilled in the art to better understand the solutions of the present invention, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work shall fall within the protection scope of the present invention.
It should be noted that the terms "first", "second", and the like in the description, the claims, and the above drawings are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data used in this way may be interchanged where appropriate, so that the embodiments of the present invention described herein can be implemented in orders other than those illustrated or described herein. In addition, the terms "comprising" and "having" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that comprises a series of steps or units is not necessarily limited to the steps or units expressly listed, but may include other steps or units that are not expressly listed or that are inherent to the process, method, product, or device.
According to an embodiment of the present invention, a display method for virtual objects is provided. The method can be applied while a software application is executed on a processor of a terminal; that is, the method can be embodied in a software application, in particular in game software, including mobile games and other game software. It should be noted that although the steps illustrated in the flowcharts of the drawings may be executed in a computer system such as a set of computer-executable instructions, and although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from that described herein.
Fig. 1 is a flowchart of a display method for virtual objects according to an embodiment of the present invention. As shown in Fig. 1, the method includes the following steps:
Step S11: when a first operation instruction from the user is received, render and generate a first interactive interface, the first interactive interface including a first display area;
Step S13: acquire a real-scene image and display it in the first display area;
Step S15: determine whether the real-scene image contains a first pattern that matches a preset standard pattern, and if so, monitor for a second operation instruction from the user;
Step S17: when the second operation instruction is received, obtain three-dimensional model data of the corresponding virtual object according to the second operation instruction, and render and generate the virtual object from the three-dimensional model data;
Step S19: control the virtual object to follow the movement of the first pattern.
Through the above steps, the present invention associates a virtual object with a specific physical object in the real scene, so that the position, display angle, and the like of the virtual object in the captured real-scene image are controlled by the position of that physical object, thereby providing a freer way of interacting with virtual objects. A minimal sketch of this overall flow is given below.
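To make steps S11-S19 easier to follow, the following is a minimal, non-authoritative sketch of the overall loop; the `ui`, `camera`, `matcher`, and `model_store` objects and all of their methods are hypothetical placeholders rather than names used in the patent.

```python
# Hypothetical sketch of the S11-S19 flow; every object and method here is a placeholder.
def run_display_flow(ui, camera, matcher, model_store):
    ui.render_first_interface()                         # S11: triggered by the first operation instruction
    while True:
        frame = camera.capture()                        # S13: real-scene image
        ui.show_in_display_area(frame)
        marker = matcher.find_first_pattern(frame)      # S15: match against the preset standard pattern
        if marker is None:
            continue
        op = ui.wait_for_second_operation()             # S15: monitor the second operation instruction
        model = model_store.load(op.virtual_object_id)  # S17: obtain the 3D model data
        obj = ui.render_virtual_object(model, marker.pose)
        while marker is not None:                       # S19: follow the first pattern as it moves
            frame = camera.capture()
            marker = matcher.find_first_pattern(frame)
            if marker is not None:
                obj.set_pose(marker.pose)
            ui.show_in_display_area(frame)
```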
To further disclose the technical solution of the present invention, more specific or preferred embodiments are given below to explain each step and its technical principle:
In a preferred implementation, the virtual object may be a card, character, pet, hero, prop, or the like in a game.
Optionally, in step S11, the first operation instruction includes, but is not limited to, a user behavior such as touching, clicking, or sliding over a specific region or button, or the user uttering a specific voice command; it may also be an operation instruction generated by the system according to a predetermined rule when a condition specified by that rule is met, for example an operation instruction automatically generated by the game system after the user completes a specified mission objective in the game.
After the first operation instruction is received, the first interactive interface is rendered and generated. The interface includes the first display area and may also include a UI control layer. The first display area may be a part of the first interactive interface or may occupy the entire first interactive interface. Preferably, a layered structure is used: the first display area occupies the entire first interactive interface, the UI control layer is located above the first display area, and the blank space of the UI control layer is made transparent or translucent so that the content of the first display area remains visible to the user. A minimal compositing sketch of this layering is given after this paragraph.
If the first display area occupies only part of the first interactive interface, it may be placed at any position of the interface; the present invention imposes no particular limitation here.
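As a minimal illustration of this layering (real-scene image at the bottom, virtual-object layer above it, UI control layer on top with transparent blank areas), the following alpha-over compositing sketch assumes same-sized RGBA arrays; it is not an API defined by the patent.

```python
import numpy as np

# Illustrative layer stack: camera frame, then virtual objects, then UI controls on top.
# Layers are assumed to be uint8 RGBA arrays of the same size.
def compose_interface(camera_frame, virtual_object_layer, ui_control_layer):
    out = camera_frame[..., :3].astype(float)
    for layer in (virtual_object_layer, ui_control_layer):
        alpha = layer[..., 3:4].astype(float) / 255.0            # transparent blank areas pass through
        out = (1.0 - alpha) * out + alpha * layer[..., :3].astype(float)
    return out.astype(np.uint8)
```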
Optionally, in step S13, acquiring the real-scene image and displaying it in the first display area may include the following step:
Step S131: call the camera of the device, capture the real scene in real time with the camera, and display the real-scene image in the first display area in real time. For example, when the method of this embodiment is applied to a mobile phone game system, step S131 may be: call the camera of the mobile phone to capture the real scene in real time, obtain the real-scene image, and then display it dynamically in the first display area.
In a specific embodiment, the camera may be a built-in camera of the mobile phone or an external camera; likewise, the camera may be front-facing or rear-facing, and there may be one, two, or more cameras.
In a preferred embodiment, before the real scene is captured in real time by the camera, parameters also need to be obtained to initialize and adapt the camera; otherwise the first model may be displayed with distortion. The parameters include the parameters of the camera itself (white balance, focusing parameters, and the like) and the parameters of the captured picture (aspect ratio, field-of-view angle, and the like). The adaptation process is as follows: the parameters of the preset standard pattern and a parameter configuration file are obtained from the device carrying the camera, and the parameters of the camera itself are extracted from the configuration file to adapt the camera; the parameter configuration file is generated automatically by the system in advance according to the device model information, or is obtained in advance from a third-party server.
Specifically, each iOS device model is currently adapted by its model information. An Android device may first try to connect to an ARToolKit server for adaptation; if the connection fails, it may try the adaptation of an iOS device with a similar camera (identical or similar pixel count and resolution); if no adaptation can be found, full-screen post-processing is performed, the image is stretched appropriately to a standard ratio, and preset white balance and auto-focus are applied, which ensures at least to some degree that the first three-dimensional model is displayed without distortion.
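The adaptation fallback described above might be sketched as follows; the profile store, the calibration server, and every field name are assumptions introduced for illustration.

```python
# Hypothetical sketch of the adaptation fallback; `profiles`, `server`, and the profile
# fields are assumptions, not an API defined by the patent.
def adapt_camera(device, profiles, server=None):
    if device.os == "ios":
        return profiles.by_model(device.model)             # iOS: per-model profile
    if server is not None:
        try:
            return server.fetch_profile(device.model)      # Android: try the calibration server first
        except ConnectionError:
            pass
    similar = profiles.closest_ios_match(device.camera)    # fall back to a similar iOS camera
    if similar is not None:
        return similar
    # Last resort: generic full-screen post-processing with preset white balance / autofocus.
    return {"stretch_to_standard_ratio": True,
            "white_balance": "preset",
            "autofocus": True}
```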
Optionally, the purpose of step S15 is to determine whether the real-scene image contains a first pattern that matches the preset standard pattern and, if so, to trigger monitoring for the second operation instruction from the user.
The standard pattern is preset in advance; there may be one or more standard patterns, and a standard pattern may be any figure (including but not limited to a shape, color, size, or design). A non-centrosymmetric figure is preferably used, so that changes in the angle of the standard pattern can be detected. The preset pattern is preferably stored in advance in a designated storage area.
The first pattern refers to the figure formed in the captured real-scene image when a physical object in the real scene is photographed by the camera; this figure can be matched against the standard pattern according to a predetermined matching rule.
The second operation instruction includes, but is not limited to, a user behavior such as touching, clicking, or sliding over a specific region or button, or the user uttering a specific voice command; it may also be an operation instruction generated by the system according to a predetermined rule when a condition specified by that rule is met, for example an operation instruction automatically generated by the game system after the user completes a specified mission objective in the game.
In a more specific embodiment, determining whether the real-scene image contains a first pattern that matches the preset standard pattern may include the following steps:
Step S151, identification step: identify the acquired real-scene image and determine the position of the pattern to be matched. The identification may be performed on the complete real-scene image, or only on the image within a preset identification region of the first display area in order to reduce the amount of computation; preferably, only the image within the preset identification region of the first display area is identified, for example only the image in the central region of the first display area. More preferably, the boundary of the identification region and/or the identification region itself may be rendered in the first interactive interface to form a region identification figure that helps the user locate the identification region. The region identification figure may be a region frame (for example a polygonal wireframe) or a special color (for example a color block overlaid on the identification region), and is preferably placed in the UI control layer included in the first interactive interface. A prompt may also be placed in the UI control layer to prompt the user to move the first pattern into the specified region, for example a text prompt asking the user to move the mobile phone camera so that the first pattern is located in the central region of the screen.
To make it easier for the user to learn about or obtain the first pattern, in a preferred embodiment the method further includes, before step S15, a standard-pattern display step and/or a standard-pattern export step. The standard-pattern display step shows the standard pattern to the user, and the standard-pattern export step outputs the standard pattern so that the user can save it, print it, and so on. The standard-pattern display step and the standard-pattern export step are preferably triggered only after an instruction from the user is received; more preferably, a standard-pattern display control and/or a standard-pattern export control may be provided in the first interactive interface, and the corresponding step is triggered when an operation on the control is detected.
In a preferred implementation, after the real-scene image is obtained, feature points can be detected with the scale-invariant feature transform (SIFT) matching algorithm, the position of the pattern to be matched can be determined from the feature points, and it can then be judged whether it is the required pattern. The SIFT algorithm is invariant to rotation and scaling, so even if the pattern to be matched is tilted or rotated, the feature-point density does not change and the pattern can still be picked up correctly.
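As one possible illustration of this step (not the patent's own implementation), OpenCV's SIFT interface can locate the candidate pattern and estimate its homography; the 0.75 ratio-test threshold and the minimum match count of 10 are arbitrary example values.

```python
import cv2
import numpy as np

# Sketch: locate the pattern to be matched in the real-scene frame with SIFT.
# Thresholds (0.75 ratio test, 10 minimum matches) are illustrative, not from the patent.
def locate_pattern(frame_gray, standard_gray):
    sift = cv2.SIFT_create()
    kp_std, des_std = sift.detectAndCompute(standard_gray, None)
    kp_frm, des_frm = sift.detectAndCompute(frame_gray, None)
    if des_std is None or des_frm is None:
        return None
    matcher = cv2.BFMatcher()
    matches = matcher.knnMatch(des_std, des_frm, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    if len(good) < 10:
        return None
    # A homography from the standard pattern to the frame gives the pattern's position and orientation.
    src = np.float32([kp_std[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_frm[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H
```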
Step S153, matching step: preprocess the pattern to be matched. The preprocessing makes it easier to extract effective feature information from the pattern to be matched, which helps to judge whether the pattern to be matched actually matches the preset standard pattern. Preprocessing may include, but is not limited to, grayscale conversion, binarization, and feature-point extraction. By first determining the position of the pattern to be matched in the real-scene image and then matching only that pattern, the amount of computation can be effectively reduced and the latency of image matching greatly shortened. The position of the pattern to be matched can be determined by techniques such as edge-contour detection.
In a preferred implementation, the preprocessing consists of downsampling, color binarization, and a high-pass filtering pass. Downsampling improves detection performance: the pattern to be detected only needs to be sufficiently large and does not have to be the same size as the original. Color binarization is used because the feature-point selection does not depend on color and the image is clearer after binarization. High-pass filtering reduces fine noise and prevents small blemishes on the pattern, such as scratches or handwriting, from affecting the result.
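A minimal sketch of such a preprocessing chain with OpenCV is shown below; the downsampling factor, the use of Otsu thresholding, and the kernel size are illustrative assumptions rather than values taken from the patent.

```python
import cv2

# Illustrative preprocessing: downsample, binarize, then suppress low-frequency content.
# Parameter values are assumptions, not taken from the patent.
def preprocess(patch_bgr):
    small = cv2.pyrDown(patch_bgr)                                   # downsample
    gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)   # color/gray binarization
    blurred = cv2.GaussianBlur(binary, (21, 21), 0)
    highpass = cv2.subtract(binary, blurred)                         # crude high-pass filtering pass
    return highpass
```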
Step S155, judgment step: judge whether the pattern to be matched is the first pattern that matches the preset standard pattern. In a preferred implementation, a similarity threshold can be set for the matching; for example, when the similarity between the pattern to be matched and the standard pattern is greater than or equal to 80%, the two are regarded as matching. The threshold can be set arbitrarily according to actual needs.
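One way to express such a similarity score, assuming the SIFT matches from the earlier sketch, is the fraction of standard-pattern keypoints that find a good match; the 0.8 default mirrors the 80% example above, and this particular definition of similarity is an assumption rather than the patent's.

```python
# Sketch of the judgment step: a similarity score derived from SIFT matches.
# The similarity definition and the 0.8 default are assumptions.
def is_first_pattern(good_matches, standard_keypoints, threshold=0.8):
    if not standard_keypoints:
        return False
    similarity = len(good_matches) / len(standard_keypoints)
    return similarity >= threshold
```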
In a more specific embodiment, determining whether the real-scene image contains a first pattern that matches the preset standard pattern may also include the following steps:
Step S152, preprocessing step: preprocess the acquired real-scene image. The preprocessing makes it easier to extract effective feature information from the pattern to be matched, which helps to judge whether the pattern to be matched matches the preset standard pattern. Preprocessing may include, but is not limited to, grayscale conversion, binarization, and feature-point extraction. This effectively reduces the amount of computation and greatly shortens the latency of image matching.
In a preferred implementation, the preprocessing consists of downsampling, color binarization, and a high-pass filtering pass. Downsampling improves detection performance: the pattern to be detected only needs to be sufficiently large and does not have to be the same size as the original. Color binarization is used because the feature-point selection does not depend on color and the image is clearer after binarization. High-pass filtering reduces fine noise and prevents small blemishes on the pattern, such as scratches or handwriting, from affecting the result.
Step S154, identification step: identify the preprocessed real-scene image and determine the position of the pattern to be matched. The identification may be performed on the complete real-scene image, or only on the image within a preset identification region of the first display area in order to reduce the amount of computation; preferably, only the image within the preset identification region of the first display area is identified, for example only the image in the central region of the first display area. More preferably, the boundary of the identification region and/or the identification region itself may be rendered in the first interactive interface to form a region identification figure that helps the user locate the identification region. The region identification figure may be a region frame (for example a polygonal wireframe) or a special color (for example a color block overlaid on the identification region), and is preferably placed in the UI control layer included in the first interactive interface. A prompt may also be placed in the UI control layer to prompt the user to move the first pattern into the specified region, for example a text prompt asking the user to move the mobile phone camera so that the first pattern is located in the central region of the screen. The position of the pattern to be matched can be determined by techniques such as edge-contour detection.
Step S156, judgment step: judge whether the pattern to be matched is the first pattern that matches the preset standard pattern. In a preferred implementation, a similarity threshold can be set for the matching; for example, when the similarity between the pattern to be matched and the standard pattern is greater than or equal to 80%, the two are regarded as matching. The threshold can be set arbitrarily according to actual needs.
The earlier the image preprocessing is done, the better, because interference should be excluded as early as possible; otherwise figures that obviously cannot match may still be identified as correct. Early preprocessing thus effectively reduces the probability of recognition errors.
Optionally, after step S155 or step S156, the method further includes the following step:
Step S157: if there is no first pattern matching the standard pattern, return to step S151 or step S152.
Optionally, in step S15, it is determined whether the real-scene image contains a first pattern that matches the preset standard pattern; if it does, the method further includes the following step:
Step S16: render and generate a first prompt message, the prompt message being used to prompt the user to perform the second operation.
When a first pattern matching the preset standard pattern exists in the real-scene image, the first prompt message is rendered and generated so that the user can perceive that the recognition has succeeded, which helps to prompt the user to perform the next operation.
In a preferred implementation, step S16 specifically is: render and generate a first three-dimensional model at the position of the first pattern, or at a position determined from the information of the first pattern according to a preset rule; the position of the first pattern is its position in the real-scene image. The first three-dimensional model may be any preset three-dimensional body, such as a magic circle, an arch, a summoning platform, or will-o'-the-wisps. Additionally or alternatively, a second-operation-instruction control is generated in the interactive interface, the second-operation-instruction control being used to receive the second operation instruction from the user.
In a preferred embodiment, a space transformation matrix of the first pattern in the virtual space is calculated from the standard size information of the preset standard pattern and the on-screen size information and position information of the first pattern; the matrix of the first three-dimensional model is then set directly to this space transformation matrix before rendering and display, so that the three-dimensional model is positioned and displayed correctly.
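The calculation described above can be illustrated with a simple pinhole-camera estimate; the focal length in pixels and all helper names are assumptions, and a full implementation would also recover rotation.

```python
import numpy as np

# Illustrative estimate of the marker's transform from its known physical size and its
# on-screen size/position, assuming a pinhole camera; focal_length_px is an assumed input.
def marker_transform(marker_size_m, on_screen_size_px, on_screen_center_px,
                     focal_length_px, image_center_px):
    # Apparent size shrinks with distance: depth = f * real_size / pixel_size.
    depth = focal_length_px * marker_size_m / on_screen_size_px
    # Back-project the on-screen position to a 3D point at that depth.
    x = (on_screen_center_px[0] - image_center_px[0]) * depth / focal_length_px
    y = (on_screen_center_px[1] - image_center_px[1]) * depth / focal_length_px
    transform = np.eye(4)
    transform[:3, 3] = [x, y, depth]   # translation; rotation could be added from the homography
    return transform
```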
Optionally, besides generating a three-dimensional model, the first prompt message may also be any other kind of prompt, for example a prompt sound, a pop-up prompt box, or a vibration prompt signal. Optionally, in step S17, obtaining the three-dimensional model data of the corresponding virtual object according to the second operation instruction may be any one of the following ways:
obtaining the three-dimensional model data from a preset three-dimensional model library according to the second operation instruction; or
sending a request corresponding to the second operation instruction to a preset server and receiving the three-dimensional model data fed back by the server; or
sending a request corresponding to the second operation instruction to a preset server, receiving response information fed back by the server, and obtaining the three-dimensional model data from a preset three-dimensional model library according to the response information, the response information at least including an identifier of the virtual object, the identifier being used to obtain the three-dimensional model data corresponding to the virtual object.
In a preferred embodiment, the following way may also be used: displaying, in a specific region of the first user interface, the names or icons of all or some of the virtual objects that can be obtained; receiving the second operation instruction, which selects a specific virtual object; and obtaining the corresponding three-dimensional model data according to the selected virtual object. A sketch of these acquisition paths is given below.
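The sketch below illustrates the three acquisition paths listed above; the local model library, the server object, and the field names are assumptions introduced for illustration only.

```python
# Hypothetical sketch of the three acquisition paths for the 3D model data;
# `local_library`, `server`, and all attribute names are placeholders.
def fetch_model_data(second_op, local_library, server=None):
    if server is None:
        # Path 1: read directly from a preset local 3D model library.
        return local_library[second_op.virtual_object_id]
    reply = server.request(second_op)
    if hasattr(reply, "model_data"):
        # Path 2: the server returns the model data itself.
        return reply.model_data
    # Path 3: the server returns only an identifier; resolve it against the local library.
    return local_library[reply.virtual_object_identifier]
```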
Optionally, in step S17, generating the virtual object includes:
Step S175: determine the rendering position of the virtual object according to the position of the first pattern in the first display area;
Step S177: render and generate the virtual object at the rendering position.
Preferably, the first user interface further includes a virtual-object display layer, and the virtual object is rendered on this virtual-object display layer; the virtual-object display layer is preferably superimposed on the first display area, and the UI control layer is superimposed on the virtual-object display layer.
More preferably, parameters such as the position, orientation, and size of the virtual object are determined by the display state of the first pattern in the first display area. In a preferred implementation, the position information of the first pattern in the first display area is extracted to determine the position of the virtual object, the orientation of the first pattern in the first display area is extracted to determine the orientation of the virtual object, and the depth information of the first pattern in the first display area is extracted to determine the size of the virtual object. Once the parameters of the virtual object are determined, the corresponding virtual object is rendered and generated according to these parameters. Preferably, this function can be realized by attachment points ("hang points"): a first attachment point is set on the first pattern and a second attachment point is set on the virtual object, and the virtual object is linked to the first pattern by attaching the second attachment point to the first attachment point, the second attachment point preferably being a node of the 3D model of the virtual object.
In a preferred embodiment, parameters such as the position, orientation, and size of the virtual object are obtained as follows: the depth of the first pattern can be inferred from the standard size information of the preset standard pattern and the on-screen size of the first pattern; the feature points of the first pattern obtained by the SIFT algorithm give the orientation of the first pattern and its position in the terminal display space; a world transformation matrix is then calculated from these data, and the matrix of the virtual object is set directly to this world transformation matrix before rendering and display, so that the virtual object is positioned and displayed correctly.
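Reusing the homography from the SIFT sketch and the depth estimate above, a world transformation matrix might be assembled as follows; selecting the physically valid decomposition solution is omitted, so this is only a sketch.

```python
import cv2
import numpy as np

# Sketch: build a world transform for the virtual object from the marker homography H
# and the camera intrinsic matrix K; solution selection and scaling are simplified.
def world_transform(H, K, depth_translation):
    _, rotations, _, _ = cv2.decomposeHomographyMat(H, K)
    R = rotations[0]                  # in practice the physically valid solution must be chosen
    T = np.eye(4)
    T[:3, :3] = R                     # orientation of the first pattern
    T[:3, 3] = depth_translation      # translation from the depth/position estimate above
    return T
```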
In a preferred implementation, in step S19, when the first pattern moves relative to the first display area, the relative movement may be caused by movement of the physical object corresponding to the first pattern in the real scene, or by movement of the camera. The movement includes, but is not limited to, displacement, change of orientation, and change of depth. Controlling the virtual object to follow the movement of the first pattern can be realized in at least the following two ways. First, the movement of the first pattern relative to the first display area is detected in real time, and the virtual object adjusted for that movement is rendered in real time; this method is computationally intensive and may introduce latency. Second, a motion-state prediction technique may be used: a predetermined sampling interval is set, the movement state of the first pattern is obtained at each sampling moment, the movement state at the next moment is predicted from the sampled states, the adjusted virtual object is rendered, and the adjusted virtual object is then smoothed; this greatly reduces the amount of computation while preserving continuity of motion. When the motion-state prediction technique is used, the orientations and positions of several preceding frames can be interpolated to keep the motion smooth, and high-pass filtering can be used to suppress fine jitter during the movement.
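A minimal sketch of the second (prediction-based) approach is given below, assuming poses are sampled at a fixed interval; the linear extrapolation and the 0.3 smoothing factor are illustrative choices, not values from the patent.

```python
import numpy as np

# Sketch of pose prediction plus smoothing for the follow step (S19).
# Linear extrapolation and the 0.3 smoothing factor are assumptions.
class PoseFollower:
    def __init__(self, alpha=0.3):
        self.prev = None        # pose at the previous sampling moment
        self.curr = None        # pose at the current sampling moment
        self.smoothed = None
        self.alpha = alpha

    def sample(self, pose):
        self.prev, self.curr = self.curr, np.asarray(pose, dtype=float)

    def predicted(self):
        # Predict the next pose from the last two samples (simple linear extrapolation).
        if self.curr is None:
            return None
        if self.prev is None:
            return self.curr
        return self.curr + (self.curr - self.prev)

    def smooth(self):
        # Blend toward the prediction to keep motion continuous and damp small jitter.
        target = self.predicted()
        if target is None:
            return None
        if self.smoothed is None:
            self.smoothed = target
        else:
            self.smoothed = (1 - self.alpha) * self.smoothed + self.alpha * target
        return self.smoothed
```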
Optionally, after step S19, the method further includes the following step:
Step S111: after it is detected that the first pattern has disappeared, stop rendering the virtual object, and monitor in real time whether the real-scene image contains a first pattern that matches the preset standard pattern; if so, repeat steps S175, S177, and S19.
Preferably, the method further includes step S21: monitor for a third operation instruction from the user, and after the third operation instruction is received, export the currently displayed image in a preset picture format, the image preferably being the image obtained by superimposing the virtual object on the real-scene image.
Through the above steps, the present invention associates a virtual object with a specific physical object in the real scene, so that the position, display angle, and the like of the virtual object in the captured real-scene image are controlled by the position of that physical object, thereby providing a freer way of interacting with virtual objects.
Figs. 2-6 illustrate a display method for virtual objects according to a preferred embodiment of the present invention.
When a summoning instruction from the user (the first operation instruction) is received, as shown in Fig. 2, the first interactive interface is rendered and generated, as shown in Fig. 3. The interactive interface includes the first display area; in this embodiment, the first display area occupies the entire interactive interface. The real-scene image is acquired by the camera of the terminal and displayed in the first display area.
It is determined whether the real-scene image contains a first pattern that matches the preset standard pattern. As shown in Figs. 3-4, a 田-shaped (four-square grid) pattern is the preset standard pattern. When the real-scene image is scanned and a first pattern matching the standard 田-shaped pattern is found, a three-dimensional magic circle (the first three-dimensional model) is rendered and generated at the position of the first pattern to prompt the user that the match has succeeded; as shown in Fig. 4, the magic circle is located on the first pattern.
An operation instruction in which the user slides the screen (the second operation instruction) is detected; this operation instruction indicates a request to obtain a shikigami (the virtual object), as shown in Fig. 5. The request is sent and the corresponding response information is received, the response information at least including an identifier of the corresponding shikigami, the identifier being used to obtain the three-dimensional model data corresponding to the shikigami.
The rendering position of the shikigami is determined according to the position of the first pattern in the first display area, and the shikigami is generated at that rendering position; as shown in Fig. 6, the shikigami stands on the first pattern and the magic circle.
By moving the 田-shaped pattern in the real scene or moving the camera, the shikigami and the magic circle can be made to follow the movement.
According to an embodiment of the present invention, an embodiment of a display apparatus for virtual objects is further provided. Fig. 7 is a structural block diagram of the display apparatus for virtual objects according to an embodiment of the present invention. As shown in Fig. 7, the apparatus may include: an interface rendering unit 10, configured to render and generate a first interactive interface when a first operation instruction from the user is received, the first interactive interface including a first display area; a display unit 20, configured to acquire a real-scene image and display it in the first display area; a matching unit 30, configured to determine whether the real-scene image contains a first pattern that matches the preset standard pattern and, if so, to monitor for a second operation instruction from the user; an acquiring unit 40, configured to obtain, when the second operation instruction is received, three-dimensional model data of the corresponding virtual object according to the second operation instruction and to render and generate the virtual object from the three-dimensional model data; and a follow unit 50, configured to control the virtual object to follow the movement of the first pattern.
The display apparatus for virtual objects provided in this embodiment can execute the display method for virtual objects provided in the method embodiments of the present invention, and has the corresponding functional modules and beneficial effects of the method.
According to an embodiment of the present invention, a storage medium is further provided. The storage medium includes a stored program, wherein when the program runs, a device where the storage medium is located is controlled to execute the above display method for virtual objects. The storage medium may include, but is not limited to, various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
According to an embodiment of the present invention, a processor is further provided. The processor is configured to run a program, wherein the program, when running, executes the above display method for virtual objects. The processor may include, but is not limited to, a processing unit such as a microcontroller (MCU) or a programmable logic device (FPGA).
According to an embodiment of the present invention, a terminal is further provided, including: one or more processors, a memory, a display device, and one or more programs, wherein the one or more programs are stored in the memory and are configured to be executed by the one or more processors, and the programs include instructions for executing the above display method for virtual objects. In some embodiments, the terminal may be a terminal device such as a smartphone (for example an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), or a PAD. The display device may be a touch-screen liquid crystal display (LCD), which enables the user to interact with the user interface of the terminal. In addition, the terminal may further include an input/output interface (I/O interface), a universal serial bus (USB) port, a network interface, a power supply, and/or a camera.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the superiority or inferiority of the embodiments.
In the above embodiments of the present invention, the description of each embodiment has its own emphasis; for parts not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed technical content may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division of units may be a division of logical functions, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, units, or modules, and may be electrical or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of the present invention.
The above are only preferred embodiments of the present invention. It should be noted that those of ordinary skill in the art can make various improvements and modifications without departing from the principle of the present invention, and these improvements and modifications shall also fall within the protection scope of the present invention.

Claims (17)

1. A display method for virtual objects, characterized by comprising:
when a first operation instruction is received, rendering and generating a first interactive interface, the first interactive interface comprising a first display area;
acquiring a real-scene image and displaying it in the first display area;
determining whether the real-scene image contains a first pattern that matches a preset standard pattern, and if so, monitoring for a second operation instruction from the user;
when the second operation instruction is received, obtaining three-dimensional model data of the corresponding virtual object according to the second operation instruction, and rendering and generating the virtual object from the three-dimensional model data;
controlling the virtual object to follow the movement of the first pattern.
2. The method according to claim 1, characterized in that acquiring the real-scene image and displaying it in the first display area comprises: capturing the real scene in real time by a camera and displaying the real-scene image in the first display area in real time.
3. The method according to claim 1, characterized in that when it is determined that the real-scene image contains a first pattern matching the preset standard pattern, a first prompt message is rendered and generated, the prompt message being used to prompt the user to perform the second operation.
4. The method according to claim 3, characterized in that rendering and generating the first prompt message comprises: rendering and generating a first three-dimensional model at the position of the first pattern, and/or generating a second-operation-instruction control in the interactive interface, the second-operation-instruction control being used to receive the second operation instruction from the user.
5. The method according to claim 1, characterized in that obtaining the three-dimensional model data of the corresponding virtual object according to the second operation instruction comprises:
obtaining the three-dimensional model data from a preset three-dimensional model library according to the second operation instruction; or
sending a request corresponding to the second operation instruction to a preset server and receiving the three-dimensional model data fed back by the server; or
sending a request corresponding to the second operation instruction to a preset server, receiving response information fed back by the server, and obtaining the three-dimensional model data from a preset three-dimensional model library according to the response information, the response information at least comprising an identifier of the virtual object, the identifier being used to obtain the three-dimensional model data corresponding to the virtual object.
6. The method according to claim 1, characterized in that rendering and generating the virtual object from the three-dimensional model data comprises:
determining a rendering position of the virtual object according to the position of the first pattern in the first display area;
rendering and generating the virtual object at the rendering position.
7. The method according to claim 1, characterized in that the first pattern is a non-centrosymmetric image.
8. The method according to any one of claims 1-7, characterized in that determining whether the real scene contains a first pattern that matches the preset standard pattern comprises:
an identification step: parsing the acquired real-scene image and identifying the position of a pattern to be matched;
a matching step: preprocessing the pattern to be matched;
a judgment step: judging whether the pattern to be matched is the first pattern that matches the preset standard pattern.
9. The method according to claim 8, characterized in that, in determining whether the real-scene image contains a first pattern that matches the preset standard pattern, if no such pattern exists, the identification step is repeated, and after all patterns to be matched have been traversed, a recognition-failure prompt message is displayed.
10. The method according to any one of claims 1-7, characterized in that determining whether the real scene contains a first pattern that matches the preset standard pattern comprises:
a preprocessing step: preprocessing the acquired real-scene image;
an identification step: identifying the preprocessed real-scene image and determining the position of a pattern to be matched;
a judgment step: judging whether the pattern to be matched is the first pattern that matches the preset standard pattern.
11. The method according to claim 10, characterized in that, in determining whether the real-scene image contains a first pattern that matches the preset standard pattern, if no such pattern exists, the identification step is repeated, and after all patterns to be matched have been traversed, a recognition-failure prompt message is displayed.
12. The method according to claim 1, characterized in that, after controlling the virtual object to follow the movement of the first pattern, the method further comprises:
after it is detected that the first pattern has disappeared, stopping rendering the virtual object.
13. The method according to any one of claims 1-7, characterized in that the method further comprises the following step:
monitoring for a third operation instruction from the user, and after the third operation instruction from the user is received, exporting the currently displayed image in a preset picture format.
14. A display apparatus for virtual objects, characterized by comprising:
an interface rendering unit, configured to render and generate a first interactive interface when a first operation instruction from the user is received, the first interactive interface comprising a first display area;
a display unit, configured to acquire a real-scene image and display it in the first display area;
a matching unit, configured to determine whether the real-scene image contains a first pattern that matches a preset standard pattern and, if so, to monitor for a second operation instruction from the user;
an acquiring unit, configured to obtain, when the second operation instruction is received, three-dimensional model data of the corresponding virtual object according to the second operation instruction, and to render and generate the virtual object from the three-dimensional model data;
a follow unit, configured to control the virtual object to follow the movement of the first pattern.
15. A storage medium, characterized in that the storage medium comprises a stored program, wherein when the program runs, a device where the storage medium is located is controlled to execute the display method for virtual objects according to any one of claims 1 to 13.
16. A processor, characterized in that the processor is configured to run a program, wherein the program, when running, executes the display method for virtual objects according to any one of claims 1 to 13.
17. A terminal, comprising: one or more processors, a memory, a display device, and one or more programs, wherein the one or more programs are stored in the memory and are configured to be executed by the one or more processors, and the programs comprise instructions for executing the display method for virtual objects according to any one of claims 1 to 13.
CN201710314103.9A 2017-01-25 2017-05-05 The display methods and device of virtual objects Pending CN108288306A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2017100613767 2017-01-25
CN201710061376 2017-01-25

Publications (1)

Publication Number Publication Date
CN108288306A 2018-07-17

Family

ID=62801204

Family Applications (3)

Application Number Title Priority Date Filing Date
CN201710313563.XA Pending CN108305325A (en) 2017-01-25 2017-05-05 The display methods and device of virtual objects
CN201710314092.4A Pending CN108273265A (en) 2017-01-25 2017-05-05 The display methods and device of virtual objects
CN201710314103.9A Pending CN108288306A (en) 2017-01-25 2017-05-05 The display methods and device of virtual objects

Family Applications Before (2)

Application Number Title Priority Date Filing Date
CN201710313563.XA Pending CN108305325A (en) 2017-01-25 2017-05-05 The display methods and device of virtual objects
CN201710314092.4A Pending CN108273265A (en) 2017-01-25 2017-05-05 The display methods and device of virtual objects

Country Status (1)

Country Link
CN (3) CN108305325A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110639202A (en) * 2019-10-29 2020-01-03 网易(杭州)网络有限公司 Display control method and device in card game
CN110975285A (en) * 2019-12-06 2020-04-10 北京像素软件科技股份有限公司 Smooth knife light obtaining method and device
CN113590013A (en) * 2021-07-13 2021-11-02 网易(杭州)网络有限公司 Virtual resource processing method, nonvolatile storage medium, and electronic device
CN114307138A (en) * 2021-12-28 2022-04-12 北京字跳网络技术有限公司 Card-based interaction method and device, computer equipment and storage medium

Families Citing this family (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109107156A (en) * 2018-08-10 2019-01-01 腾讯科技(深圳)有限公司 Game object acquisition methods, device, electronic equipment and readable storage medium storing program for executing
CN109078327A (en) * 2018-08-28 2018-12-25 百度在线网络技术(北京)有限公司 Game implementation method and equipment based on AR
CN110069125B (en) * 2018-09-21 2023-12-22 北京微播视界科技有限公司 Virtual object control method and device
CN111103967A (en) * 2018-10-25 2020-05-05 北京微播视界科技有限公司 Control method and device of virtual object
CN109472873B (en) * 2018-11-02 2023-09-19 北京微播视界科技有限公司 Three-dimensional model generation method, device and hardware device
CN109685910A (en) * 2018-11-16 2019-04-26 成都生活家网络科技有限公司 Room setting setting method, device and VR wearable device based on VR
CN109939433B (en) * 2019-03-11 2022-09-30 网易(杭州)网络有限公司 Operation control method and device of virtual card, storage medium and electronic equipment
CN110058685B (en) * 2019-03-20 2021-07-09 北京字节跳动网络技术有限公司 Virtual object display method and device, electronic equipment and computer-readable storage medium
CN110109726B (en) * 2019-04-30 2022-08-23 网易(杭州)网络有限公司 Virtual object receiving processing method, virtual object transmitting method, virtual object receiving processing device and virtual object transmitting device, and storage medium
CN110404250B (en) * 2019-08-26 2023-08-22 网易(杭州)网络有限公司 Card drawing method and device in game
CN110533780B (en) * 2019-08-28 2023-02-24 深圳市商汤科技有限公司 Image processing method and device, equipment and storage medium thereof
CN112516593B (en) * 2019-09-19 2023-01-24 上海哔哩哔哩科技有限公司 Card drawing method, card drawing system and computer equipment
CN111752161B (en) * 2020-06-18 2023-06-30 格力电器(重庆)有限公司 Electrical appliance control method, system and storage medium
CN111821691A (en) * 2020-07-24 2020-10-27 腾讯科技(深圳)有限公司 Interface display method, device, terminal and storage medium
CN111913624B (en) * 2020-08-18 2022-06-07 腾讯科技(深圳)有限公司 Interaction method and device for objects in virtual scene
CN112051961A (en) * 2020-09-04 2020-12-08 脸萌有限公司 Virtual interaction method and device, electronic equipment and computer readable storage medium
CN112221124B (en) * 2020-10-21 2022-11-08 腾讯科技(深圳)有限公司 Virtual object generation method and device, electronic equipment and storage medium
CN112710254A (en) * 2020-12-21 2021-04-27 珠海格力智能装备有限公司 Object measuring method, system, device, storage medium and processor
CN113058267B (en) * 2021-04-06 2024-02-02 网易(杭州)网络有限公司 Virtual object control method and device and electronic equipment
CN113101647B (en) * 2021-04-14 2023-10-24 北京字跳网络技术有限公司 Information display method, device, equipment and storage medium
CN113289334A (en) * 2021-05-14 2021-08-24 网易(杭州)网络有限公司 Game scene display method and device
CN117296082A (en) * 2021-05-20 2023-12-26 华为技术有限公司 Image processing method and device
CN113691796B (en) * 2021-08-16 2023-06-02 福建凯米网络科技有限公司 Three-dimensional scene interaction method through two-dimensional simulation and computer readable storage medium
CN116679824A (en) * 2022-02-23 2023-09-01 华为技术有限公司 Man-machine interaction method and device in augmented reality AR scene and electronic equipment
CN114758042B (en) * 2022-06-14 2022-09-02 深圳智华科技发展有限公司 Novel virtual simulation engine, virtual simulation method and device
CN115350475B (en) * 2022-06-30 2023-06-23 元素创造(深圳)网络科技有限公司 Virtual object control method and device
CN115185374B (en) * 2022-07-14 2023-04-07 北京奇岱松科技有限公司 Data processing system based on virtual reality

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20100138193A (en) * 2009-06-24 2010-12-31 넥스트키 주식회사 The augmented reality content providing system and equipment for the user interaction based on touchscreen
CN102902710B (en) * 2012-08-08 2015-08-26 成都理想境界科技有限公司 Based on the augmented reality method of bar code, system and mobile terminal
CN106157359B (en) * 2015-04-23 2020-03-10 中国科学院宁波材料技术与工程研究所 Design method of virtual scene experience system
CN105844714A (en) * 2016-04-12 2016-08-10 广州凡拓数字创意科技股份有限公司 Augmented reality based scenario display method and system

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150356812A1 (en) * 2010-12-15 2015-12-10 Bally Gaming, Inc. System and method for augmented reality using a player card
CN105929945A (en) * 2016-04-18 2016-09-07 展视网(北京)科技有限公司 Augmented reality interaction method and device, mobile terminal and mini-computer

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
无: "AR新玩法——现世召唤开启体验!", 《HTTP://YYS.163.COM/M/ZLP/20170118/24874_668345.HTML》 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110639202A (en) * 2019-10-29 2020-01-03 网易(杭州)网络有限公司 Display control method and device in card game
CN110639202B (en) * 2019-10-29 2021-11-12 网易(杭州)网络有限公司 Display control method and device in card game
CN110975285A (en) * 2019-12-06 2020-04-10 北京像素软件科技股份有限公司 Smooth knife light obtaining method and device
CN110975285B (en) * 2019-12-06 2024-03-22 北京像素软件科技股份有限公司 Smooth cutter light acquisition method and device
CN113590013A (en) * 2021-07-13 2021-11-02 网易(杭州)网络有限公司 Virtual resource processing method, nonvolatile storage medium, and electronic device
CN113590013B (en) * 2021-07-13 2023-08-25 网易(杭州)网络有限公司 Virtual resource processing method, nonvolatile storage medium and electronic device
CN114307138A (en) * 2021-12-28 2022-04-12 北京字跳网络技术有限公司 Card-based interaction method and device, computer equipment and storage medium
CN114307138B (en) * 2021-12-28 2023-09-26 北京字跳网络技术有限公司 Interaction method and device based on card, computer equipment and storage medium

Also Published As

Publication number Publication date
CN108273265A (en) 2018-07-13
CN108305325A (en) 2018-07-20

Similar Documents

Publication Publication Date Title
CN108288306A (en) The display methods and device of virtual objects
US20180088663A1 (en) Method and system for gesture-based interactions
CN108229239B (en) Image processing method and device
CN106897658B (en) Method and device for identifying human face living body
CN108229329A (en) Face false-proof detection method and system, electronic equipment, program and medium
US20190236259A1 (en) Method for 3d graphical authentication on electronic devices
CN108525299B (en) System and method for enhancing computer applications for remote services
CN111324253B (en) Virtual article interaction method and device, computer equipment and storage medium
CN108108748A (en) A kind of information processing method and electronic equipment
CN109064390A (en) A kind of image processing method, image processing apparatus and mobile terminal
US20210089639A1 (en) Method and system for 3d graphical authentication on electronic devices
CN113112614B (en) Interaction method and device based on augmented reality
CN115003396A (en) Detecting counterfeit virtual objects
CN109035415B (en) Virtual model processing method, device, equipment and computer readable storage medium
JP2011159329A (en) Automatic 3d modeling system and method
CN111580652A (en) Control method and device for video playing, augmented reality equipment and storage medium
CN111273777A (en) Virtual content control method and device, electronic equipment and storage medium
CN106536004B (en) enhanced gaming platform
US20230177755A1 (en) Predicting facial expressions using character motion states
CN103839032B (en) A kind of recognition methods and electronic equipment
TW202138971A (en) Interaction method and apparatus, interaction system, electronic device, and storage medium
CN111651054A (en) Sound effect control method and device, electronic equipment and storage medium
CN111159609A (en) Attribute information modification method and related device
CN114917590B (en) Virtual reality game system
CN113963355B (en) OCR character recognition method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20180717