WO2020253655A1 - Method, apparatus, device, and storage medium for controlling multiple virtual characters - Google Patents
Method, apparatus, device, and storage medium for controlling multiple virtual characters
- Publication number
- WO2020253655A1 (PCT/CN2020/096180)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- information
- virtual
- virtual character
- user interface
- virtual characters
- Prior art date
Classifications
- H04N5/265—Mixing (studio circuits for mixing, switching-over, or special effects)
- H04N5/272—Means for inserting a foreground image in a background image, i.e. inlay, outlay
- G06T19/006—Mixed reality
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/04815—Interaction with a metaphor-based environment or interaction object displayed as three-dimensional
- G06F3/04842—Selection of displayed objects or displayed text elements
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06T13/40—3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
- G06T15/02—Non-photorealistic rendering
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
- H04N23/60—Control of cameras or camera modules
- H04N23/62—Control of parameters via user interfaces
- H04N23/631—Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
- H04N23/632—Graphical user interfaces [GUI] for displaying or modifying preview images prior to image capturing
- H04N23/698—Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
- H04N23/74—Circuitry for compensating brightness variation in the scene by influencing the scene brightness using illuminating means
Definitions
- the embodiments of the present application relate to the computer field, and in particular to a method, apparatus, device, and storage medium for controlling multiple virtual characters.
- Augmented Reality (AR) technology is a technology that seamlessly integrates real world information and virtual world information, and can superimpose the real environment and virtual objects on the same screen in real time.
- AR technology is applied in some application programs; such an AR application provides a virtual character, which may be an object in the form of an animal or a cartoon character.
- the AR application can superimpose the virtual character in the real environment in real time.
- the user can observe the real environment surrounding the virtual character through the display.
- the AR application can also implement AR camera technology to capture an augmented reality picture in which the real environment and the virtual character are superimposed on the same screen.
- users can perform operations on the virtual character, such as zoom in, zoom out, drag, and so on.
- however, when at least two virtual characters are superimposed, clipping (model intersection) occurs, so the user may trigger an incorrect operation when selecting a virtual character.
- for example, the first virtual character partially overlaps the second virtual character.
- when the user selects the first virtual character, the finally selected character may be the second virtual character, which increases the difficulty of human-computer interaction.
- the embodiments of the present application provide a method, device, device, and storage medium for controlling multiple virtual characters.
- a user can accurately select a target virtual character.
- the technical solution is as follows:
- a method for controlling multiple virtual characters is provided.
- the method is applied to a terminal, and an application program with an augmented reality function is run in the terminal.
- the method includes:
- a first user interface of the application is displayed, where the first user interface includes: selection items of multiple virtual characters;
- a first selection operation of at least two virtual characters on the first user interface is received;
- a second user interface of the application is displayed, where a real-world background picture and at least two virtual characters on the background picture are displayed on the second user interface;
- the at least two virtual characters are rendered after a rendering order of the at least two virtual characters is determined according to depth information;
- the depth information is set according to the order of the first selection operation;
- a second selection operation on the second user interface is received;
- the target virtual character is determined from the at least two virtual characters according to the second selection operation and the rendering sequence.
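The claimed steps above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the depth of each character is set from the order of the first selection operation, and the rendering order is derived by sorting on that depth so the characters are drawn back-to-front. All names and the DEPTH_STEP spacing are hypothetical.

```python
# Sketch: depth information set by selection order -> back-to-front render order.
from dataclasses import dataclass

DEPTH_STEP = 1.0  # assumed depth spacing between successively selected characters

@dataclass
class VirtualCharacter:
    name: str
    depth: float = 0.0  # depth of field; smaller = closer to the camera

def set_depth_and_render_order(selected):
    """Assign depth by selection order; return the back-to-front render order."""
    for i, character in enumerate(selected):
        # the character selected first is placed closest to the camera
        character.depth = (i + 1) * DEPTH_STEP
    # draw the farthest character first so nearer ones are painted on top
    return sorted(selected, key=lambda c: c.depth, reverse=True)
```

Under this sketch, selecting character A and then character B gives them depths 1.0 and 2.0 and the render order [B, A]: A is drawn last and visually covers B where they overlap, which is what prevents clipping from confusing a later selection.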
- a control device for multiple virtual characters in which an application program with an augmented reality function runs, and the device includes:
- the display module is used to display the first user interface of the application program, the first user interface includes: selection items of multiple virtual characters;
- a receiving module configured to receive a first selection operation of at least two virtual characters on the first user interface
- the display module is used to display the second user interface of the application program.
- the second user interface displays a real-world background picture and at least two virtual characters on the background picture; the at least two virtual characters are rendered after a rendering order of the at least two virtual characters is determined according to depth information, and the depth information is set according to the order of the first selection operation;
- a receiving module configured to receive a second selection operation on the second user interface
- the determining module is used to determine the target virtual character from the at least two virtual characters according to the second selection operation and the rendering sequence.
- a computer device, which includes:
- a memory and a processor electrically connected to the memory;
- the processor is configured to load and execute executable instructions to implement the method for controlling multiple virtual characters as described in the above aspect.
- a computer-readable storage medium stores at least one instruction, at least one program, a code set, or an instruction set; the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the method for controlling multiple virtual characters as described in the above aspect.
- the rendering order is determined by the depth information, and the at least two virtual characters are drawn according to the rendering order, which avoids clipping when the virtual characters overlap, so that the user can accurately determine the target virtual character when selecting a virtual character;
- FIG. 1 is a schematic diagram of an implementation environment of a method for controlling multiple virtual characters provided by an exemplary embodiment of the present application.
- FIG. 2 is a flowchart of a method for controlling multiple virtual characters provided by an exemplary embodiment of the present application.
- FIG. 3 is a schematic diagram of an interface implemented by a method for controlling multiple virtual characters provided by an exemplary embodiment of the present application.
- FIG. 4 is a schematic diagram of an interface implemented by a method for controlling multiple virtual characters provided by another exemplary embodiment of the present application.
- FIG. 5 is a flowchart of a pixel processing method for a virtual character provided by an exemplary embodiment of the present application.
- FIG. 6 is a schematic diagram of an interface implemented by a method for controlling multiple virtual characters provided by another exemplary embodiment of the present application.
- FIG. 7 is a flowchart of a method for controlling multiple virtual characters provided by another exemplary embodiment of the present application.
- FIG. 8 is a schematic diagram of an interface implemented by a method for controlling multiple virtual characters provided by another exemplary embodiment of the present application.
- FIG. 9 is a flowchart of a method for controlling multiple virtual characters according to another exemplary embodiment of the present application.
- FIG. 10 is a schematic diagram of an interface implemented by a method for controlling multiple virtual characters provided by another exemplary embodiment of the present application.
- FIG. 11 is a schematic diagram of a grid provided by an exemplary embodiment of the present application.
- FIG. 12 is a schematic diagram of an information structure of information encoding provided by an exemplary embodiment of the present application.
- FIG. 13 is a schematic diagram of an interface implemented by a method for controlling multiple virtual characters provided by another exemplary embodiment of the present application.
- FIG. 14 is a schematic diagram of an interface implemented by a method for controlling multiple virtual characters provided by another exemplary embodiment of the present application.
- FIG. 15 is a schematic diagram of an interface implemented by a method for controlling multiple virtual characters provided by another exemplary embodiment of the present application.
- FIG. 16 is a schematic diagram of an interface implemented by a method for controlling multiple virtual characters provided by another exemplary embodiment of the present application.
- FIG. 17 is a schematic diagram of an interface implemented by a method for controlling multiple virtual characters provided by another exemplary embodiment of the present application.
- FIG. 18 is a flowchart of a method for controlling multiple virtual characters according to another exemplary embodiment of the present application.
- FIG. 19 is a schematic diagram of an interface implemented by a method for controlling multiple virtual characters provided by another exemplary embodiment of the present application.
- FIG. 20 is a block diagram of a device for controlling multiple virtual characters according to an exemplary embodiment of the present application.
- FIG. 21 is a block diagram of a terminal provided by an exemplary embodiment of the present application.
- FIG. 22 is a block diagram of a server provided by an exemplary embodiment of the present application.
- AR technology is a technology that seamlessly integrates real-world information and virtual-world information, and can superimpose the real environment and virtual objects on the same screen in real time.
- the application program provides a three-dimensional virtual environment when running on a terminal by using AR technology.
- the three-dimensional virtual environment includes the real environment collected by the camera and virtual objects and virtual characters generated by computer simulation.
- the virtual character refers to a movable object in the above-mentioned three-dimensional virtual environment
- the movable object may be at least one of a virtual person, a virtual animal, and a cartoon character.
- the virtual character is a three-dimensional model created based on animation skeleton technology.
- Each virtual character has its own shape and volume in the three-dimensional virtual environment, occupying a part of the space in the augmented reality environment.
- FIG. 1 shows a schematic diagram of an implementation environment of a method for controlling multiple virtual characters provided by an exemplary embodiment of the present application.
- the implementation environment includes: a terminal 120, a server cluster 140, and a communication network 160.
- the terminal 120 is connected to the server cluster 140 through the communication network 160.
- An application program with an augmented reality function is installed and running in the terminal 120, and the application program also has a function of supporting virtual characters.
- the application program may be any one of an AR game program, an AR education program, and an AR navigation program.
- an application program with information sharing channels is installed and running in the terminal 120, and the first account or the second account is logged in the application program.
- the terminal 120 may be at least one of a smart phone, a game console, a desktop computer, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Layer IV) player, and a laptop computer.
- the server cluster 140 includes at least one of one server, multiple servers, a cloud computing platform, and a virtualization center.
- the server cluster 140 is used to provide background services for applications with augmented reality functions.
- the server cluster 140 is responsible for the main calculation work and the terminal 120 is responsible for the secondary calculation work; or the server cluster 140 is responsible for the secondary calculation work and the terminal 120 is responsible for the main calculation work; or the server cluster 140 and the terminal 120 perform collaborative computing using a distributed computing architecture.
- the server cluster 140 includes: an access server and a background server.
- the access server is used to provide access services and information transceiving services for the terminal 120, and to forward valid information between the terminal 120 and the background server.
- the background server is used to provide background services for the application, such as at least one of a game logic service, a material providing service, a virtual character generation service, a virtual character three-dimensional image generation service, a virtual character two-dimensional image conversion and storage service, a virtual character trading service, and a virtual character display service.
- the communication network 160 may be a wired network and/or a wireless network.
- the wired network may be a metropolitan area network, a local area network, an optical fiber network, etc.
- the wireless network may be a mobile communication network or a wireless fidelity (WiFi) network.
- for example, the method for controlling multiple virtual characters is applied in an AR game program: multiple virtual characters are displayed on the user interface through the AR game program; a selection operation triggered on the user interface is received, and a physical ray is emitted from the trigger position of the selection operation.
- the virtual character that the physical ray collides with is determined as the target virtual character, and the target virtual character is controlled through control operations, such as controlling the position of the virtual character in the three-dimensional virtual environment or controlling the virtual character to perform continuous actions.
- the control method of multiple virtual characters is used in AR education programs.
- the AR education program displays and controls multiple virtual characters on the user interface.
- for example, the AR education program simulates a chemistry experiment and displays experimental equipment and chemicals in the user interface, and receives user operations on them.
- the control method of multiple virtual characters is applied in the AR military simulation program, and multiple virtual characters are displayed and controlled in the user interface through the AR military simulation program.
- for example, a military layout is carried out through the AR military simulation program, and multiple outposts are displayed in the user interface; a selection operation triggered on the user interface is received, a physical ray is emitted from the trigger position of the selection operation, and the outpost that the physical ray collides with is determined as the target outpost; the target outpost is dragged to a suitable position to build a cordon.
- the control method of multiple virtual characters is applied to the AR building program.
- a variety of buildings are displayed in the user interface, such as residences, shops, garages, traffic lights, and viaducts; a selection operation triggered on the user interface is received, a physical ray is emitted from the trigger position of the selection operation, and the building that the physical ray collides with is determined as the target building; the geographic location or the display angle of the target building is set through a control operation.
- FIG. 2 shows a flowchart of a method for controlling multiple virtual characters provided by an exemplary embodiment of the present application.
- the method is applied to the implementation environment shown in FIG. 1 as an example.
- the method includes:
- Step 201 Display the first user interface of the application.
- An application program with an augmented reality function runs in the terminal, and the application program has a function of supporting virtual characters.
- the first user interface of the application is displayed on the terminal, and the above-mentioned first user interface includes: selection items of multiple virtual characters.
- the application program includes at least one of an AR game program, an AR education program, an AR military simulation program, an AR construction program, and an AR navigation program.
- the terminal displaying the first user interface of the application program may include the following illustrative steps:
- the second user interface of the application is displayed on the terminal, and the second user interface includes a list item control;
- the list item control is a control used to trigger display of selection items of multiple virtual characters
- the terminal receives the trigger operation on the list item control
- the first user interface of the application program is displayed according to the above-mentioned triggering operation.
- the first user interface includes selection items of multiple virtual characters; wherein, a real-world background picture is displayed on the AR camera interface.
- that is, when the user reselects a virtual character or chooses to add a virtual character, the terminal displays the first user interface by triggering the list item control on the second user interface.
- the terminal displaying the first user interface of the application program may further include the following illustrative steps:
- the AR main page of the application is displayed on the terminal, and the AR main page includes a display control;
- the display control is a control used to trigger the display of selection items of multiple virtual characters;
- the terminal receives the trigger operation on the display control
- the first user interface of the application program is displayed according to the above-mentioned trigger operation, and the first user interface includes selection items of multiple virtual characters; wherein, a real-world background screen is displayed on the AR homepage.
- the aforementioned trigger operation includes at least one of a single-click operation, a multiple-click operation, a long-press operation, a sliding operation, a dragging operation, and combinations thereof.
- for example, the AR main page 11 of the AR application is displayed on the terminal, and the AR main page 11 includes the display control 12; the terminal receives a long-press operation on the display control 12 and displays the first user interface 13, on which the selection items 14 of multiple virtual characters are displayed.
- Step 202 Receive a first selection operation of at least two virtual characters on the first user interface.
- the terminal receives a first selection operation of at least two virtual characters on the first user interface; optionally, the first selection operation includes at least one of a single-click operation, a multiple-click operation, a sliding operation, a dragging operation, a long-press operation, and combinations thereof.
- Step 203 Display the second user interface of the application.
- the second user interface displays a real-world background picture and at least two virtual characters on the background picture.
- the at least two virtual characters are rendered after a rendering order of the at least two virtual characters is determined according to depth information, and the depth information is set according to the order of the first selection operation.
- the depth information of a virtual character includes the depth of field of the virtual character, where the depth of field refers to the range of distance in front of and behind the subject, measured from the front edge of the camera lens or another imager, within which the subject can be clearly imaged.
- displaying the second user interface of the application on the terminal may include the following illustrative steps:
- At least two virtual characters are rendered according to the rendering order and displayed in the second user interface.
- rendering refers to coloring the model of the virtual character according to the principle of a real camera; in this embodiment, the terminal renders the model of the virtual character so that the virtual character appears three-dimensional in the three-dimensional virtual environment; the rendering order refers to the sequence in which the models of the at least two virtual characters are rendered.
- for example, the rendering order is: the virtual character 17 first, then the virtual character 18; in the resulting visual effect, the virtual character 17 appears in front of the virtual character 18 in the second user interface.
- the terminal draws the camera texture corresponding to the virtual character at the pixel point.
- the camera texture refers to the grooves and/or patterns on the surface of the object collected by the camera; the camera refers to the camera model when the user views the virtual world.
- stroke refers to drawing edge lines; in the process shown in Figure 5, when the pixel is an effective pixel, the terminal draws the edge of the image; an effective pixel refers to a pixel located on the edge of the image of the virtual character.
- special effects refer to dynamic display effects; for example, the clothes of the virtual character constantly change color, or the skirt of the virtual character constantly sways, and so on.
- the terminal draws the special effect on the pixel.
- the model refers to the structure that describes the morphological structure of the object; the terminal draws the model on the pixel.
- in step 2036, the depth information of the current pixel is cleared, and the process returns to step 2032 to draw the next pixel; this is repeated until the rendering of the virtual character is completed.
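The per-pixel pass described in steps 2032 to 2036 above can be sketched as follows. This is a minimal, hypothetical Python sketch that uses dictionaries as frame and depth buffers; a real implementation performs these stages in a GPU shader, and all names are illustrative:

```python
# Simplified sketch of the per-pixel render pass (steps 2032-2036).
# `framebuffer` and `depth_buffer` are dicts keyed by (x, y); all field
# names are illustrative assumptions, not the patented implementation.

def render_character(character, framebuffer, depth_buffer):
    for pixel in character["pixels"]:
        x, y = pixel["pos"]
        # Step 2032: draw the camera texture at this pixel.
        framebuffer[(x, y)] = pixel["camera_texture"]
        # Step 2033: if the pixel lies on the character's silhouette
        # (an "effective pixel"), draw the edge line (stroke).
        if pixel.get("on_edge"):
            framebuffer[(x, y)] = "stroke"
        # Step 2034: draw any special effect (e.g. color-shifting clothes).
        if pixel.get("effect"):
            framebuffer[(x, y)] = pixel["effect"]
        # Step 2035: draw the model itself.
        framebuffer[(x, y)] = pixel["model_color"]
        # Step 2036: clear the depth value so the character drawn next in
        # the rendering order is not occluded by stale depth information.
        depth_buffer[(x, y)] = None
    return framebuffer
```

Clearing the depth value after each character is what lets the rendering order, rather than raw depth testing, decide which overlapping character appears in front.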
- Step 204 Receive a second selection operation on the second user interface.
- the terminal receives a second selection operation of the virtual character on the second user interface; optionally, the second selection operation includes at least one of a single-click operation, a multiple-click operation, and a long-press operation.
- Step 205 Determine the target virtual character from the at least two virtual characters according to the second selection operation and the rendering sequence.
- the terminal determines the virtual character selected corresponding to the second selection operation as the target virtual character.
- the terminal emits a physical ray from the trigger position of the second selection operation; the virtual character that collides with the physical ray is determined as the target virtual character.
- the terminal determines the virtual character at the front end in the rendering sequence as the target virtual character according to the second selection operation.
- the schematic steps for the terminal to determine the target virtual character are as follows:
- the terminal emits physical rays from the trigger position of the second selection operation
- the terminal determines the virtual character that collides with the physical rays according to the rendering order as the target virtual character.
- the collision between the physical rays and the virtual character refers to the collision between the elements of the physical rays and the elements of the virtual character.
- the terminal determines the collision between the physical ray and the virtual character through collision detection. Among them, the physical rays collide with the virtual character at the front end in the rendering sequence.
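The collision-detection selection described above can be sketched as follows. This is an illustrative simplification in which the set of characters intersected by the physical ray is precomputed, and the rendering order decides which hit wins; all names are assumptions:

```python
def pick_target(rendering_order, ray_hits):
    """Return the character hit by the physical ray that is earliest in
    the rendering order (i.e. the front-most character), or None.

    rendering_order -- character names, front-most first
    ray_hits        -- set of character names the ray collides with
    """
    for name in rendering_order:  # front-most character is checked first
        if name in ray_hits:
            return name
    return None
```

For example, with rendering order `["17", "16", "18"]` and a ray that passes through both character 17 and character 18, the front-most character 17 is selected as the target virtual character.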
- the rendering order is determined by the depth information, and the at least two virtual characters are drawn in accordance with the rendering order, which can avoid the model-clipping (piercing) phenomenon when virtual characters are superimposed, so that the user can accurately determine the target virtual character when selecting a virtual character, and can control the three-dimensional models like a two-dimensional picture in the AR scene.
- the terminal can accurately determine the first virtual character as the target virtual character according to the selection operation.
- the user interface displayed by the terminal in this embodiment is a two-dimensional image.
- the first user interface and the second user interface are both two-dimensional images.
- steps 301 to 302 are added after step 205, and the terminal updates the second user interface after determining the target virtual character, as shown in Fig. 7, the schematic steps are as follows:
- Step 301 Determine the target virtual character as the virtual character at the front end in the rendering sequence, and update the rendering sequence.
- the terminal sets the rendering sequence of the target virtual character to the front end, the rendering sequence of the remaining virtual characters remains unchanged, and the rendering sequence is updated.
- the rendering sequence of virtual character 16, virtual character 17, and virtual character 18 is: virtual character 16, virtual character 17, virtual character 18; when the terminal determines virtual character 17 as the target virtual character, the rendering sequence is updated to: virtual character 17, virtual character 16, virtual character 18.
- Step 302 Display at least two virtual characters according to the updated rendering sequence.
- the terminal redraws and displays at least two virtual characters according to the updated rendering sequence.
- the rendering sequence of the three virtual characters in the second user interface 19 is: virtual character 17, virtual character 16, virtual character 18; after the terminal determines the virtual character 18 as the target virtual character, the rendering sequence is updated to: virtual character 18, virtual character 17, virtual character 16.
- the above three virtual characters are displayed according to the updated rendering sequence; in terms of visual effect, the virtual character 18 is clearly in front of the virtual character 16 and the virtual character 17.
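The update of the rendering sequence described above amounts to a move-to-front operation; a minimal sketch (character names are illustrative):

```python
def move_to_front(rendering_order, target):
    """Place the target character at the front of the rendering order;
    the remaining characters keep their relative order unchanged."""
    return [target] + [c for c in rendering_order if c != target]
```

With the order `["17", "16", "18"]` and target character 18, this yields `["18", "17", "16"]`, matching the worked example above.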
- the second user interface is a photographing interface.
- steps 401 to 407 are added after step 205.
- the terminal implements the function of photographing multiple virtual characters and taking group photos with them, as shown in Figure 9; the schematic steps are as follows:
- Step 401 Receive a gesture setting operation triggered on a target virtual character.
- the posture setting operation is used to set posture information of the target virtual character; optionally, the posture information includes at least one of position information, action information, and size information of the virtual character.
- the location information includes geographic location information of the target virtual character in the three-dimensional virtual environment and its own rotation information.
- the gesture setting operation includes at least one of a sliding operation, a single-click operation, a multiple-click operation, a drag operation, and a zoom operation.
- the terminal receiving the setting operation triggered on the target virtual character may include the following illustrative steps:
- the aforementioned drag operation is used to set the position information of the target virtual character in the three-dimensional virtual environment.
- the above zoom operation is used to set the size information of the target virtual character.
- a click operation on the second user interface is received; the click operation is used to freeze the action of the target virtual character, and the freeze action is used to set the action information of the target virtual character.
- Step 402 Set the posture information of the target virtual character according to the posture setting operation.
- the terminal rotates the display angle of the target virtual character according to the posture setting operation; for example, the terminal rotates and adjusts the display angle of the target virtual character according to the sliding operation in the left and right directions.
- the terminal sets the geographic location of the target virtual character in the three-dimensional virtual environment according to the posture setting operation; for example, the terminal moves the target virtual character up and down in the second user interface according to a drag operation in the up and down direction.
- the terminal sets the size information of the target virtual character according to the posture setting operation; for example, the terminal sets the size of the target virtual object according to the zoom operation.
- the terminal sets the action information of the target virtual character according to the posture setting operation; for example, the terminal plays the continuous action of the target virtual character, and freezes the action of the target virtual character according to the click operation.
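The mapping from posture-setting gestures to posture information described in the steps above can be sketched as follows; the gesture dictionary layout and all field names are assumptions made for illustration:

```python
class Pose:
    """Posture information of a target virtual character: position,
    display angle (rotation), size, and a frozen action, as listed above."""
    def __init__(self):
        self.position = (0.0, 0.0, 0.0)  # geographic location in the 3D scene
        self.rotation_y = 0.0            # display angle in degrees
        self.scale = 1.0                 # size information
        self.frozen_action = None        # action information (frozen frame)

def apply_gesture(pose, gesture):
    # Each gesture type sets one piece of posture information.
    if gesture["type"] == "drag":        # drag: sets position information
        dx, dy = gesture["delta"]
        x, y, z = pose.position
        pose.position = (x + dx, y + dy, z)
    elif gesture["type"] == "slide":     # slide: rotates the display angle
        pose.rotation_y = (pose.rotation_y + gesture["degrees"]) % 360
    elif gesture["type"] == "zoom":      # zoom: sets size information
        pose.scale *= gesture["factor"]
    elif gesture["type"] == "click":     # click: freezes the current action
        pose.frozen_action = gesture["frame"]
    return pose
```

A slide of 370 degrees, for instance, wraps around to a display angle of 10 degrees, since rotation is expressed in the 0-360 degree range used later for encoding.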
- Step 403 Receive a photographing operation triggered on the photographing control.
- the second user interface includes a camera control; the terminal receives a camera operation triggered on the camera control.
- the photographing operation is used to take a photograph of at least two virtual characters after posture information is set, or to take a photo with at least two virtual characters after posture information is set.
- Step 404 Taking pictures of at least two virtual characters according to the photographing operation to obtain photographed pictures.
- the terminal takes pictures of at least two virtual characters according to the photographing operation to obtain a photographed picture; wherein the photographed picture includes the target virtual character displayed in the posture information setting.
- the captured picture also includes objects in the real environment.
- the objects in the real environment can be real objects, real animals, and real people.
- when at least two virtual characters are photographed in the application for the first time, the terminal prompts that the photograph can be taken in landscape orientation.
- a prompt 32 is displayed in the user interface 31: "Rotate the mobile phone to take pictures in landscape orientation."
- the terminal prompts to unlock the mobile phone screen. For example, the prompt "Please unlock the phone screen first" is displayed in the user interface.
- Step 405 Display the third user interface of the application.
- the terminal displays the captured picture on the third user interface of the application, and the third user interface also includes a share button control.
- the share button control is used to share at least one of the taken picture and the posture information of the at least two virtual characters in the taken picture.
- Step 406 Receive the sharing operation on the sharing button control.
- the terminal receives the sharing operation on the sharing button control.
- the sharing operation can be a single-click operation.
- Step 407 Share the information code from the first account to the second account according to the sharing operation.
- the foregoing information encoding includes posture information of at least two virtual characters in the photographed picture, and the information encoding is used to set the postures of the at least two virtual characters.
- the terminal maps the geographic location information onto a grid (Grid layout) for encoding; for example, the terminal converts the 16 floating-point numbers representing the geographic location information into two integers with known upper limits, where the upper limits are determined by the display size of the terminal screen.
- the terminal maps the Euler angles representing the rotation information onto a two-dimensional plane, and encodes the hash value obtained after the Euler angle mapping through a hash function; for example, the terminal expresses the Euler angle as an angle value in the range of 0 to 360 degrees, and calculates a hash value from the angle value through a hash function.
- the terminal reduces the accuracy of the size information, and encodes the reduced-precision size information.
- the above geographic location information can be represented by two integers relative to the Grid origin (0,0), with each integer encoded as a 16-bit index, such as "xGrid index" and "yGrid index"; the above rotation information combines three 16-bit integers (representing the Euler angles) into one 32-bit integer, such as "rotation Hash"; the above size information reduces the precision of three floating-point numbers to one 16-bit integer, such as "zoom"; each small cell in the figure represents 8 bits.
- the above posture information can therefore be transmitted through a 10-byte information encoding.
- the information encoding may also be processed.
- the terminal encodes the information code through Base64 (which represents binary data using 64 printable characters).
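The 10-byte layout described above (two 16-bit grid indices, one 32-bit rotation hash, one 16-bit zoom value: 2 + 2 + 4 + 2 = 10 bytes) followed by Base64 can be sketched as follows. The grid upper limit, the hash function, and the zoom quantization used here are illustrative assumptions, not the patented scheme's exact formulas:

```python
import base64
import struct

GRID = 1000  # illustrative upper limit, derived from the screen size

def encode_pose(x, y, euler, scale):
    """Pack one character's posture information into a 10-byte code,
    then make it shareable as a short printable string via Base64."""
    # Map the position onto the grid: two 16-bit integers.
    x_idx = min(int(x * GRID), 0xFFFF)
    y_idx = min(int(y * GRID), 0xFFFF)
    # Hash the three Euler angles (each 0-360 degrees) into one 32-bit value.
    rot_hash = (int(euler[0]) * 360 * 360
                + int(euler[1]) * 360
                + int(euler[2])) & 0xFFFFFFFF
    # Reduce the size information to a single 16-bit fixed-point integer.
    zoom = min(int(scale * 256), 0xFFFF)
    # "<HHIH" packs 2 + 2 + 4 + 2 = 10 bytes with no padding.
    raw = struct.pack("<HHIH", x_idx, y_idx, rot_hash, zoom)
    return base64.b64encode(raw).decode("ascii")

def decode_pose(code):
    """Recover (x_idx, y_idx, rotation_hash, zoom) from the shared code."""
    return struct.unpack("<HHIH", base64.b64decode(code))
```

The "<" prefix forces an unpadded little-endian layout, so the packed buffer is exactly the 10 bytes shown in the figure before Base64 expands it into printable characters.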
- the data type of the posture information is Transform.
- the terminal automatically copies and pastes the information code into the information sharing channel according to the sharing operation; or the user copies and pastes the information code into the information sharing channel.
- the information sharing channel may be an instant messaging program or a network platform.
- the first account is logged in the application program of the terminal, and the information code is shared with the second account through the first account.
- the application program of the terminal is an instant messaging program
- the instant messaging program is logged in with a first account
- the first account and the second account are social friends
- the terminal sends information codes to the second account through the first account.
- the application program of the terminal is a network platform
- the network platform is logged in with a first account
- the first account and the second account are strangers
- the terminal uses the first account to publish the information code on the network platform
- the second account can access the network platform to obtain the information code.
- the third user interface 43 includes a share button control 41; clicking the share button control 41 obtains the information code; when the information code is successfully obtained, the terminal displays the prompt message "The information code has been copied, go share it".
- the third user interface also includes a share button control 42 for sharing the captured picture.
- the method for controlling multiple virtual characters also uses information encoding to share the posture information of at least two virtual characters, so that users can share the various poses of the virtual characters; moreover, the information encoding can be propagated as a short character string, which reduces the difficulty of propagating posture information; for example, the information encoding in Figure 12 occupies only 10 bytes.
- the terminal can set the action of the virtual character through a progress bar; for example, in FIG. 14, the terminal receives a trigger operation on the selection control 51 and displays the action options; receives a trigger operation on the action option 52 and plays the action corresponding to option 52, that is, the virtual character 54 performs the above action; receiving a drag operation on the progress bar 53 selects an action frame; receiving a freeze operation on the progress bar 53 freezes the action of the virtual character 54; the frozen action is the final action.
- actions can include leisure, energy accumulation, standby, jumping, attack, skill, falling to the ground, dizziness and so on.
- the terminal can delete the target virtual character; as shown in Figure 14, when the terminal determines that the target virtual character is the virtual character 54, a delete button control 55 is displayed on the user interface 56; when the terminal receives a trigger operation on the delete button control 55, it deletes the virtual character 54 and displays the user interface 57.
- the terminal can switch the camera between the front and the rear.
- the user interface 57 includes a camera switch button 58.
- the terminal can also restore the posture information of the virtual character to the default posture information with one click, as shown in Figure 14.
- the user interface 57 includes a restore control 59; the terminal receives a trigger operation on the restore control 59 and displays a card 60.
- the card 60 includes reminder information, and whether to restore is determined by confirming or canceling.
- the displayed virtual characters can be added.
- the user interface 61 includes a list control 62; the terminal receives a trigger operation on the list control 62 and displays a selection item 63 of a virtual character; the terminal can add and display a virtual character according to the selection operation.
- the terminal can add filters or beauty to the virtual character.
- a filter control 65 is displayed on the user interface 64; a trigger operation on the filter control 65 is received, and the filter list is displayed; when the terminal receives a trigger operation on the filter control 66, the corresponding filter list is displayed; as shown in the user interface 67 in the figure, when the terminal receives a trigger operation on the beauty control 68, the corresponding filter list under the beauty control is displayed.
- the terminal also uses the progress bar to set the corresponding value of the filter, for example, adjust the percentage of beauty.
- the user interface 71 includes a texture button control 72; the terminal receives a trigger operation on the texture button control 72 and displays a list of textures; the terminal receives a selection operation on a template and displays the template 75.
- the template 75 includes a close button 74; when the terminal receives a trigger operation on the close button 74, the display of the template 75 is cancelled.
- the template 75 includes a rotation button 73. When the terminal receives a drag operation on the rotation button 73, the terminal rotates the template 75 according to the drag operation.
- a user interface 76 is displayed.
- the user interface 76 includes a keyboard 77 for inputting text content in the template 75.
- the terminal can set the posture information of the virtual character through information encoding.
- step 203 is replaced with step 501 to step 502, as shown in FIG. 18.
- the steps are as follows:
- Step 501 Obtain information codes.
- the information encoding is obtained by encoding the target posture information, and the target posture information is used to set the postures of at least two virtual characters.
- the terminal receives the information code shared by the second account to the first account through an information sharing channel.
- Step 502 Display at least two virtual characters with target posture information on the second user interface.
- the target pose information includes target depth information;
- the setting of the pose information of the at least two virtual characters includes the following schematic steps:
- the terminal determines the first correspondence between the target depth information and the rendering order; determines the second correspondence between the n pieces of target posture information and the n virtual characters according to the first correspondence; and sets the i-th target posture information as the posture information of the j-th virtual character.
- n, i, and j are positive integers, and i and j are all less than n.
- for example, the rendering order corresponding to the target posture information is 1, 2, 3; the third target posture information corresponds to the first virtual character, the second target posture information corresponds to the second virtual character, and the first target posture information corresponds to the third virtual character; the terminal sets the third target posture information as the posture information of the first virtual character, the second target posture information as the posture information of the second virtual character, and the first target posture information as the posture information of the third virtual character.
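The correspondence in the worked example above (target posture information applied to the virtual characters in reverse relative to the rendering order) can be sketched as follows; the reversal follows the example, and all names are illustrative:

```python
def assign_poses(target_poses, characters):
    """Map n pieces of target posture information onto n characters.

    Per the worked example: with rendering order 1, 2, 3, the 3rd target
    posture is applied to the 1st character, the 2nd to the 2nd, and the
    1st to the 3rd -- i.e. the pose list is reversed against the
    character list.
    """
    return {ch: pose for ch, pose in zip(characters, reversed(target_poses))}
```

For three characters c1, c2, c3 and poses p1, p2, p3, this yields c1 with p3, c2 with p2, and c3 with p1, exactly as in the example.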
- the way of obtaining the information code can be as shown in Fig. 19: the first user interface 81 includes a code control 82; the terminal receives a trigger operation on the code control 82 and displays a card 83; the card 83 displays a code input control, into which the user can copy and paste the information code, and clicking the OK button sets the posture information of the virtual characters; the user interface 84 is then displayed, and the user interface 84 includes the prompt message "the information code corresponds to 4 characters", prompting the user with the number of virtual characters corresponding to the information code.
- the method for controlling multiple virtual characters also uses information encoding to share the posture information of at least two virtual characters, so that users can share various poses of the virtual characters; and information encoding It can use very short character strings to spread, which reduces the difficulty of spreading posture information.
- FIG. 20 shows a control device for multiple virtual characters provided by an exemplary embodiment of the present application.
- the device runs an application with augmented reality function.
- the device includes:
- the display module 601 is configured to display a first user interface of the application program, and the first user interface includes: selection items of multiple virtual characters;
- the first receiving module 602 is configured to receive a first selection operation of at least two virtual characters on the first user interface
- the display module 601 is used to display the second user interface of the application program.
- the second user interface displays a background picture of the real world and at least two virtual characters on the background picture.
- the at least two virtual characters are rendered after the rendering order of the at least two virtual characters is determined according to the depth information, and the depth information is set according to the order of the first selection operation;
- the first receiving module 602 is configured to receive a second selection operation on the second user interface
- the determining module 603 is configured to determine the target virtual character from the at least two virtual characters according to the second selection operation and the rendering sequence.
- the determining module 603 is configured to determine the virtual character at the front end in the rendering sequence as the target virtual character according to the second selection operation when at least two virtual characters overlap.
- the determining module 603 is configured to emit physical rays from the trigger position of the second selection operation in the three-dimensional virtual environment where the virtual characters are located, and to determine the virtual character that collides with the physical rays according to the rendering order as the target virtual character; where the physical rays collide with the virtual character at the front end of the rendering sequence.
- the device further includes:
- the update module 604 is used to determine the target virtual character as the virtual character at the front end in the rendering sequence, and to update the rendering sequence;
- the display module 601 is configured to display at least two virtual characters according to the updated rendering sequence.
- the second user interface includes a camera control
- the device also includes:
- the first receiving module 602 is configured to receive a gesture setting operation triggered on the target virtual character
- the setting module 605 is used to set the posture information of the target virtual character according to the posture setting operation
- the first receiving module 602 is configured to receive a photographing operation triggered on the photographing control
- the photographing module 606 is configured to take pictures of at least two virtual characters according to the photographing operation to obtain photographed pictures; the photographed pictures include the target virtual characters displayed in the posture information setting.
- the device further includes:
- the display module 601 is used to display the third user interface of the application, and the third user interface includes the taken picture and a share button control;
- the first receiving module 602 is configured to receive the sharing operation on the sharing button control
- the sharing module 607 is configured to share the information code from the first account to the second account according to the sharing operation; the information code includes posture information of at least two virtual characters in the taken picture, and the information code is used to set the posture of the at least two virtual characters.
- the sharing module 607 is configured to obtain the posture information of the at least two virtual characters according to the sharing operation and generate the information code; copy and paste the information code into the information sharing channel; and share the information code from the first account to the second account.
- the display module 601 is used to obtain the information code; the information code is obtained by encoding the target posture information, and the target posture information is used to set the postures of the at least two virtual characters; the display module 601 displays, on the second user interface, the at least two virtual characters having the target posture information.
- the target pose information includes target depth information
- the display module 601 is configured to determine the first correspondence between the target depth information and the rendering order; determine the second correspondence between the n target pose information and the n virtual characters according to the first correspondence;
- the i target posture information is set as the posture information of the jth virtual character; the jth virtual character is displayed on the second user interface;
- n, i, and j are positive integers, and i and j are all less than n.
- the device further includes:
- the second receiving module 608 is configured to receive the information code shared by the second account to the first account through the information sharing channel.
- the control device for multiple virtual characters determines the rendering order through depth information and draws at least two virtual characters according to the rendering order, which can avoid the model-clipping (piercing) phenomenon when the virtual characters are superimposed, so that the user can accurately determine the target virtual character when performing a selection operation; it enables the user to control the three-dimensional models like a two-dimensional picture in the AR scene, ensures that touch judgment is consistent with what is seen, and solves the problem of human-computer interaction with three-dimensional virtual characters; for example, when the first virtual character and the second virtual character overlap, and the first virtual character precedes the second virtual character in the rendering sequence, the terminal can accurately determine the first virtual character as the target virtual character according to the selection operation.
- FIG. 21 shows a structural block diagram of a terminal 700 provided by an exemplary embodiment of the present application.
- the terminal 700 may be: a smartphone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a laptop, or a desktop computer.
- the terminal 700 may also be called user equipment, portable terminal, laptop terminal, desktop terminal and other names.
- the terminal 700 includes a processor 701 and a memory 702.
- the processor 701 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on.
- the processor 701 may be implemented in at least one hardware form among DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array).
- the processor 701 may also include a main processor and a coprocessor.
- the main processor is a processor used to process data in the awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor used to process data in the standby state.
- the processor 701 may be integrated with a GPU (Graphics Processing Unit, image processor), and the GPU is used to render and draw content that needs to be displayed on the display screen.
- the processor 701 may also include an AI (Artificial Intelligence) processor, which is used to process computing operations related to machine learning.
- the memory 702 may include one or more computer-readable storage media, which may be non-transitory.
- the memory 702 may also include high-speed random access memory and non-volatile memory, such as one or more disk storage devices and flash memory storage devices.
- the non-transitory computer-readable storage medium in the memory 702 is used to store at least one instruction, and the at least one instruction is used to be executed by the processor 701 to implement the method for controlling multiple virtual characters provided by the method embodiments of the present application.
- the terminal 700 may optionally further include: a peripheral device interface 703 and at least one peripheral device.
- the processor 701, the memory 702, and the peripheral device interface 703 may be connected by a bus or a signal line.
- Each peripheral device can be connected to the peripheral device interface 703 through a bus, a signal line or a circuit board.
- the peripheral device includes: at least one of a radio frequency circuit 704, a touch display screen 705, a camera 706, an audio circuit 707, a positioning component 708, and a power supply 709.
- the peripheral device interface 703 may be used to connect at least one peripheral device related to I/O (Input/Output) to the processor 701 and the memory 702.
- in some embodiments, the processor 701, the memory 702, and the peripheral device interface 703 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 701, the memory 702, and the peripheral device interface 703 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
- the radio frequency circuit 704 is used for receiving and transmitting RF (Radio Frequency, radio frequency) signals, also called electromagnetic signals.
- the radio frequency circuit 704 communicates with a communication network and other communication devices through electromagnetic signals.
- the radio frequency circuit 704 converts electrical signals into electromagnetic signals for transmission, or converts received electromagnetic signals into electrical signals.
- the radio frequency circuit 704 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on.
- the radio frequency circuit 704 can communicate with other terminals through at least one wireless communication protocol.
- the wireless communication protocol includes but is not limited to: World Wide Web, Metropolitan Area Network, Intranet, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area network and/or WiFi (Wireless Fidelity, wireless fidelity) network.
- the radio frequency circuit 704 may also include NFC (Near Field Communication) related circuits, which is not limited in this application.
- the display screen 705 is used to display UI (User Interface, user interface).
- the UI can include graphics, text, icons, videos, and any combination thereof.
- the display screen 705 also has the ability to collect touch signals on or above the surface of the display screen 705.
- the touch signal may be input to the processor 701 as a control signal for processing.
- the display screen 705 may also be used to provide virtual buttons and/or virtual keyboards, also called soft buttons and/or soft keyboards.
- there may be one display screen 705, provided on the front panel of the terminal 700; in other embodiments, there may be at least two display screens 705, respectively provided on different surfaces of the terminal 700 or in a folding design; in still other embodiments, the display screen 705 may be a flexible display screen arranged on a curved or folding surface of the terminal 700; furthermore, the display screen 705 can also be set as a non-rectangular irregular figure, that is, a special-shaped screen.
- the display screen 705 may be an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode) display, or the like.
- the camera assembly 706 is used to capture images or videos.
- the camera assembly 706 includes a front camera and a rear camera.
- the front camera is set on the front panel of the terminal, and the rear camera is set on the back of the terminal.
- the camera assembly 706 may also include a flash.
- the flash can be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash, and can be used for light compensation under different color temperatures.
- the audio circuit 707 may include a microphone and a speaker.
- the microphone is used to collect sound waves of the user and the environment, and convert the sound waves into electrical signals and input them to the processor 701 for processing, or input to the radio frequency circuit 704 to implement voice communication. For the purpose of stereo collection or noise reduction, there may be multiple microphones, which are respectively set in different parts of the terminal 700.
- the microphone can also be an array microphone or an omnidirectional acquisition microphone.
- the speaker is used to convert the electrical signal from the processor 701 or the radio frequency circuit 704 into sound waves.
- the speaker can be a traditional membrane speaker or a piezoelectric ceramic speaker.
- when the speaker is a piezoelectric ceramic speaker, it can not only convert electrical signals into sound waves audible to humans, but also convert electrical signals into sound waves inaudible to humans for purposes such as distance measurement.
- the audio circuit 707 may also include a headphone jack.
- the positioning component 708 is used to locate the current geographic location of the terminal 700 to implement navigation or LBS (Location Based Service, location-based service).
- the positioning component 708 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
- the power supply 709 is used to supply power to various components in the terminal 700.
- the power source 709 may be alternating current, direct current, disposable batteries, or rechargeable batteries.
- the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery.
- a wired rechargeable battery is a battery charged through a wired line.
- a wireless rechargeable battery is a battery charged through a wireless coil.
- the rechargeable battery can also be used to support fast charging technology.
- the terminal 700 further includes one or more sensors 710.
- the one or more sensors 710 include, but are not limited to, an acceleration sensor 711, a gyroscope sensor 712, a pressure sensor 713, a fingerprint sensor 714, an optical sensor 715, and a proximity sensor 716.
- the acceleration sensor 711 can detect the magnitude of acceleration on the three coordinate axes of the coordinate system established by the terminal 700.
- the acceleration sensor 711 may be used to detect the components of the gravitational acceleration on three coordinate axes.
- the processor 701 may control the touch screen 705 to display the user interface in a horizontal view or a vertical view according to the gravity acceleration signal collected by the acceleration sensor 711.
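The horizontal/vertical view decision described above can be sketched by comparing the gravity components on the two screen axes. This is a minimal illustration only; the function name and axis convention are assumptions, and a real system filters the signal and applies hysteresis before rotating the UI.

```python
def choose_orientation(gx: float, gy: float) -> str:
    """Pick a display orientation from gravity components (m/s^2)
    measured along the device's x (short edge) and y (long edge) axes.

    Illustrative sketch only, not the application's implementation.
    """
    # Gravity mostly along the long (y) axis means the device is upright.
    return "portrait" if abs(gy) >= abs(gx) else "landscape"
```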
- the acceleration sensor 711 may also be used for the collection of game or user motion data.
- the gyroscope sensor 712 can detect the body direction and the rotation angle of the terminal 700, and the gyroscope sensor 712 can cooperate with the acceleration sensor 711 to collect the user's 3D actions on the terminal 700.
- the processor 701 can implement the following functions according to the data collected by the gyroscope sensor 712: motion sensing (for example, changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.
- the pressure sensor 713 may be disposed on the side frame of the terminal 700 and/or the lower layer of the touch screen 705.
- the processor 701 performs left-hand/right-hand recognition or a quick operation according to the holding signal collected by the pressure sensor 713.
- the processor 701 controls the operable controls on the UI according to the user's pressure operation on the touch display screen 705.
- the operable controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
- the fingerprint sensor 714 is used to collect the user's fingerprint.
- the processor 701 can identify the user's identity based on the fingerprint collected by the fingerprint sensor 714, or the fingerprint sensor 714 can itself identify the user's identity based on the collected fingerprint. When the user's identity is recognized as a trusted identity, the processor 701 authorizes the user to perform related sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, and changing settings.
- the fingerprint sensor 714 may be provided on the front, back or side of the terminal 700. When a physical button or a manufacturer logo is provided on the terminal 700, the fingerprint sensor 714 can be integrated with the physical button or the manufacturer logo.
- the optical sensor 715 is used to collect the ambient light intensity.
- the processor 701 may control the display brightness of the touch screen 705 according to the ambient light intensity collected by the optical sensor 715. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 705 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 705 is decreased.
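The brightness adjustment described in the preceding paragraph amounts to a monotonic mapping from ambient light to display brightness. The sketch below uses a clamped linear ramp; the thresholds and the ramp shape are assumptions for illustration, not values from the application.

```python
def brightness_from_lux(lux: float, lo: float = 10.0, hi: float = 10000.0) -> float:
    """Map ambient light intensity (lux) to a display brightness in [0.1, 1.0].

    Illustrative sketch: real devices use tuned curves, not this linear ramp.
    """
    if lux <= lo:
        return 0.1  # dim floor in the dark
    if lux >= hi:
        return 1.0  # full brightness in direct sunlight
    # Linear ramp between the two thresholds.
    return 0.1 + 0.9 * (lux - lo) / (hi - lo)
```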
- the processor 701 may also dynamically adjust the shooting parameters of the camera assembly 706 according to the ambient light intensity collected by the optical sensor 715.
- the proximity sensor 716, also called a distance sensor, is usually arranged on the front panel of the terminal 700.
- the proximity sensor 716 is used to collect the distance between the user and the front of the terminal 700.
- when the proximity sensor 716 detects that the distance between the user and the front of the terminal 700 gradually decreases, the processor 701 controls the touch display screen 705 to switch from the bright-screen state to the rest-screen state; when the proximity sensor 716 detects that the distance between the user and the front of the terminal 700 gradually increases, the processor 701 controls the touch display screen 705 to switch from the rest-screen state to the bright-screen state.
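The proximity-driven screen switching above reduces to a threshold test on the measured distance. A minimal sketch follows; the 5 cm threshold is an assumption for illustration, not a value from the application.

```python
def next_screen_state(distance_cm: float, threshold_cm: float = 5.0) -> str:
    """Decide the screen state from the user-to-panel distance.

    Close to the face (e.g. during a call) -> screen off; otherwise on.
    Illustrative only: real firmware adds hysteresis around the threshold.
    """
    return "off" if distance_cm < threshold_cm else "on"
```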
- the structure shown in FIG. 21 does not constitute a limitation on the terminal 700, which may include more or fewer components than shown in the figure, combine certain components, or adopt a different component arrangement.
- the present application also provides a computer-readable storage medium in which at least one instruction, at least one program, a code set, or an instruction set is stored, the at least one instruction, the at least one program, the code set, or the instruction set being loaded and executed by a processor to implement the method for controlling multiple virtual characters provided by the foregoing method embodiments.
- FIG. 22 is a schematic structural diagram of a server provided by an embodiment of the present application.
- the server 800 includes a central processing unit (CPU) 801, a system memory 804 including a random access memory (RAM) 802 and a read-only memory (ROM) 803, and a system bus 805 connecting the system memory 804 and the central processing unit 801.
- the server 800 also includes a basic input/output system (I/O system) 806 that helps transfer information between the devices in the computer, and a mass storage device 807 for storing an operating system 813, application programs 814, and other program modules 815.
- the basic input/output system 806 includes a display 808 for displaying information and an input device 809 such as a mouse and a keyboard for the user to input information.
- the display 808 and the input device 809 are both connected to the central processing unit 801 through the input/output controller 810 connected to the system bus 805.
- the basic input/output system 806 may also include an input/output controller 810 for receiving and processing input from multiple other devices such as a keyboard, a mouse, or an electronic stylus.
- the input/output controller 810 also provides output to a display screen, a printer, or other types of output devices.
- the mass storage device 807 is connected to the central processing unit 801 through a mass storage controller (not shown) connected to the system bus 805.
- the mass storage device 807 and its associated computer-readable medium provide non-volatile storage for the server 800. That is, the mass storage device 807 may include a computer-readable medium (not shown) such as a hard disk or a CD-ROM (Compact Disc Read-Only Memory) drive.
- Computer-readable media may include computer storage media and communication media.
- Computer storage media include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for the storage of information such as computer-readable instructions, data structures, program modules, or other data.
- Computer storage media include RAM, ROM, erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other solid-state storage technology, CD-ROM, digital versatile discs (DVD) or other optical storage, tape cartridges, magnetic tape, disk storage, or other magnetic storage devices.
- the aforementioned system memory 804 and mass storage device 807 may be collectively referred to as memory.
- the server 800 may also run through a remote computer connected to a network such as the Internet. That is, the server 800 can be connected to the network 812 through the network interface unit 811 connected to the system bus 805; in other words, the network interface unit 811 can also be used to connect to other types of networks or remote computer systems (not shown).
- the present application also provides a computer program product, which, when it runs on an electronic device, causes the electronic device to execute the control method for multiple virtual characters described in the foregoing method embodiments.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computer Graphics (AREA)
- Human Computer Interaction (AREA)
- Software Systems (AREA)
- Computer Hardware Design (AREA)
- Architecture (AREA)
- User Interface Of Digital Computer (AREA)
- Processing Or Creating Images (AREA)
Abstract
Description
Claims (22)
- A method for controlling multiple virtual characters, applied to a terminal running an application with an augmented reality function, the method comprising: displaying a first user interface of the application, the first user interface comprising selection items of multiple virtual characters; receiving a first selection operation on at least two of the virtual characters on the first user interface; displaying a second user interface of the application, a background picture of the real world and the at least two virtual characters located on the background picture being displayed on the second user interface, the at least two virtual characters being rendered after a rendering order of the at least two virtual characters is determined according to depth information, and the depth information being set according to the order of the first selection operation; receiving a second selection operation on the second user interface; and determining a target virtual character from the at least two virtual characters according to the second selection operation and the rendering order.
- The method according to claim 1, wherein determining the target virtual character from the at least two virtual characters according to the second selection operation and the rendering order comprises: when the at least two virtual characters overlap, determining, according to the second selection operation, the virtual character located at the front of the rendering order as the target virtual character.
- The method according to claim 2, wherein determining, according to the second selection operation, the virtual character located at the front of the rendering order as the target virtual character comprises: emitting a physical ray from a trigger position of the second selection operation in the three-dimensional virtual environment in which the virtual characters are located; and determining the virtual character that collides with the physical ray according to the rendering order as the target virtual character, the physical ray colliding with the virtual character located at the front of the rendering order.
- The method according to any one of claims 1 to 3, further comprising, after determining the target virtual character from the at least two virtual characters according to the second selection operation and the rendering order: determining the target virtual character as the virtual character located at the front of the rendering order, and updating the rendering order; and displaying the at least two virtual characters according to the updated rendering order.
- The method according to any one of claims 1 to 3, wherein the second user interface comprises a photographing control, and the method further comprises, after determining the target virtual character from the at least two virtual characters according to the second selection operation and the rendering order: receiving a posture setting operation triggered on the target virtual character; setting posture information of the target virtual character according to the posture setting operation; receiving a photographing operation triggered on the photographing control; and photographing the at least two virtual characters according to the photographing operation to obtain a captured picture, the captured picture comprising the target virtual character displayed with the set posture information.
- The method according to claim 5, further comprising, after photographing the at least two virtual characters according to the photographing operation to obtain the captured picture: displaying a third user interface of the application, the third user interface comprising the captured picture and a share button control; receiving a share operation on the share button control; and sharing an information code from a first account to a second account according to the share operation, the information code comprising the posture information of the at least two virtual characters in the captured picture and being used for setting the postures of the at least two virtual characters.
- The method according to claim 6, wherein sharing the information code from the first account to the second account according to the share operation comprises: acquiring the posture information of the at least two virtual characters according to the share operation, and generating the information code; copying and pasting the information code into an information sharing channel; and sharing the information code from the first account to the second account through the information sharing channel.
- The method according to any one of claims 1 to 3, wherein displaying the second user interface of the application comprises: acquiring an information code, the information code being obtained by encoding target posture information, and the target posture information being used for setting the postures of the at least two virtual characters; and displaying, on the second user interface, the at least two virtual characters set with the target posture information.
- The method according to claim 8, wherein the target posture information comprises target depth information, and displaying, on the second user interface, the at least two virtual characters set with the target posture information comprises: determining a first correspondence between the target depth information and the rendering order; determining a second correspondence between n pieces of the target posture information and n virtual characters according to the first correspondence; setting the i-th piece of the target posture information as the posture information of the j-th virtual character according to the second correspondence; and displaying the j-th virtual character on the second user interface, n, i, and j being positive integers, and i and j each being less than n.
- The method according to claim 8, wherein acquiring the information code comprises: receiving the information code shared from a second account to a first account through an information sharing channel.
- An apparatus for controlling multiple virtual characters, the apparatus running an application with an augmented reality function and comprising: a display module, configured to display a first user interface of the application, the first user interface comprising selection items of multiple virtual characters; a receiving module, configured to receive a first selection operation on at least two of the virtual characters on the first user interface; the display module being configured to display a second user interface of the application, a background picture of the real world and the at least two virtual characters located on the background picture being displayed on the second user interface, the at least two virtual characters being rendered after a rendering order of the at least two virtual characters is determined according to depth information, and the depth information being set according to the order of the first selection operation; the receiving module being configured to receive a second selection operation on the second user interface; and a determining module, configured to determine a target virtual character from the at least two virtual characters according to the second selection operation and the rendering order.
- The apparatus according to claim 11, wherein the determining module is configured to: when the at least two virtual characters overlap, determine, according to the second selection operation, the virtual character located at the front of the rendering order as the target virtual character.
- The apparatus according to claim 12, wherein the determining module is configured to: emit a physical ray from a trigger position of the second selection operation in the three-dimensional virtual environment in which the virtual characters are located; and determine the virtual character that collides with the physical ray according to the rendering order as the target virtual character, the physical ray colliding with the virtual character located at the front of the rendering order.
- The apparatus according to any one of claims 11 to 13, further comprising an update module, the update module being configured to determine the target virtual character as the virtual character located at the front of the rendering order and to update the rendering order; and the display module being configured to display the at least two virtual characters according to the updated rendering order.
- The apparatus according to any one of claims 11 to 13, wherein the second user interface comprises a photographing control, and the apparatus further comprises: a first receiving module, configured to receive a posture setting operation triggered on the target virtual character; a setting module, configured to set posture information of the target virtual character according to the posture setting operation; the first receiving module being configured to receive a photographing operation triggered on the photographing control; and a photographing module, configured to photograph the at least two virtual characters according to the photographing operation to obtain a captured picture, the captured picture comprising the target virtual character displayed with the set posture information.
- The apparatus according to claim 15, further comprising a sharing module; the display module being configured to display a third user interface of the application, the third user interface comprising the captured picture and a share button control; the first receiving module being configured to receive a share operation on the share button control; and the sharing module being configured to share an information code from a first account to a second account according to the share operation, the information code comprising the posture information of the at least two virtual characters in the captured picture and being used for setting the postures of the at least two virtual characters.
- The apparatus according to claim 16, wherein the sharing module is configured to: acquire the posture information of the at least two virtual characters according to the share operation, and generate the information code; copy and paste the information code into an information sharing channel; and share the information code from the first account to the second account through the information sharing channel.
- The apparatus according to any one of claims 11 to 13, wherein the display module is configured to: acquire an information code, the information code being obtained by encoding target posture information, and the target posture information being used for setting the postures of the at least two virtual characters; and display, on the second user interface, the at least two virtual characters set with the target posture information.
- The apparatus according to claim 18, wherein the display module is configured to: determine a first correspondence between the target depth information and the rendering order; determine a second correspondence between n pieces of the target posture information and n virtual characters according to the first correspondence; set the i-th piece of the target posture information as the posture information of the j-th virtual character according to the second correspondence; and display the j-th virtual character on the second user interface, n, i, and j being positive integers, and i and j each being less than n.
- The apparatus according to claim 18, further comprising a second receiving module, the second receiving module being configured to receive the information code shared from a second account to a first account through an information sharing channel.
- A computer device, comprising: a memory; and a processor electrically connected to the memory, wherein the processor is configured to load and execute executable instructions to implement the method for controlling multiple virtual characters according to any one of claims 1 to 10.
- A computer-readable storage medium storing at least one instruction, at least one program, a code set, or an instruction set, the at least one instruction, the at least one program, the code set, or the instruction set being loaded and executed by a processor to implement the method for controlling multiple virtual characters according to any one of claims 1 to 10.
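To make claim 1's ordering rule concrete, here is a minimal sketch under one stated assumption (illustration only, not from the application): each later-selected character is placed one unit deeper, so the first-selected character ends up at the front; the result is the rendering order, listed back-to-front as drawing order.

```python
def rendering_order(selection_order):
    """Derive depth information from the order of the first selection
    operation, then a rendering order from that depth information.

    Assumption for illustration: selection index == depth, so the
    first-selected character is nearest the viewer. Returned list is
    in back-to-front drawing order (deepest character drawn first).
    """
    depth = {cid: i for i, cid in enumerate(selection_order)}
    return sorted(selection_order, key=lambda cid: depth[cid], reverse=True)
```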
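Claim 3's collision rule can be sketched as a walk along the rendering order: among the characters whose colliders the physical ray (cast from the tap position) hits, the one nearest the front of the rendering order is the target. The set-based hit test below is an assumption standing in for a real physics engine.

```python
def pick_target(ray_hits, front_to_back):
    """Return the front-most character hit by the physical ray.

    ray_hits: set of character ids the ray's collision test reports
              (stand-in for a physics-engine raycast).
    front_to_back: rendering order, front-most character first.
    Returns None when the ray hits no character.
    """
    for cid in front_to_back:  # walk from the front-most character backwards
        if cid in ray_hits:
            return cid
    return None
```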
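Claim 9's two correspondences can be sketched by sorting on both sides: posture records are matched to characters through depth, the smallest-depth posture going to the character rendered front-most. The record shapes (`render_index`, `depth` keys) are illustrative assumptions, not structures from the application.

```python
def apply_postures(characters, postures):
    """Match n posture records to n characters through depth information.

    characters: list of dicts with a 'render_index' (0 = front-most).
    postures:   list of dicts with a 'depth' (smaller = nearer viewer).
    Each character receives the posture whose depth rank matches its
    rendering rank; returns the mutated character list.
    """
    by_front = sorted(characters, key=lambda c: c["render_index"])
    by_depth = sorted(postures, key=lambda p: p["depth"])
    for character, posture in zip(by_front, by_depth):
        character["posture"] = posture
    return characters
```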
Priority Applications (7)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
SG11202105103QA SG11202105103QA (en) | 2019-06-21 | 2020-06-15 | Method for controlling multiple virtual characters, device, apparatus, and storage medium |
KR1020217025341A KR102497683B1 (ko) | 2019-06-21 | 2020-06-15 | Method, device, apparatus and storage medium for controlling multiple virtual characters |
EP20826732.8A EP3989177A4 (en) | 2019-06-21 | 2020-06-15 | METHOD FOR CONTROLLING MULTIPLE VIRTUAL CHARACTERS, DEVICE, APPARATUS, AND MEDIA |
JP2021549944A JP7344974B2 (ja) | 2019-06-21 | 2020-06-15 | Method, apparatus, and computer program for controlling multiple virtual characters |
KR1020237004089A KR102595150B1 (ko) | 2019-06-21 | 2020-06-15 | Method, device, apparatus and storage medium for controlling multiple virtual characters |
US17/367,267 US11962930B2 (en) | 2019-06-21 | 2021-07-02 | Method and apparatus for controlling a plurality of virtual characters, device, and storage medium |
US18/600,457 US20240214513A1 (en) | 2019-06-21 | 2024-03-08 | Method and apparatus for controlling a plurality of virtual characters, device, and storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910544446.3 | 2019-06-21 | ||
CN201910544446.3A CN110276840B (zh) | 2019-06-21 | 2019-06-21 | Method, apparatus, device and storage medium for controlling multiple virtual characters
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/367,267 Continuation US11962930B2 (en) | 2019-06-21 | 2021-07-02 | Method and apparatus for controlling a plurality of virtual characters, device, and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020253655A1 true WO2020253655A1 (zh) | 2020-12-24 |
Family
ID=67961576
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/096180 WO2020253655A1 (zh) | 2019-06-21 | 2020-06-15 | Method, apparatus, device and storage medium for controlling multiple virtual characters |
Country Status (7)
Country | Link |
---|---|
US (2) | US11962930B2 (zh) |
EP (1) | EP3989177A4 (zh) |
JP (1) | JP7344974B2 (zh) |
KR (2) | KR102497683B1 (zh) |
CN (1) | CN110276840B (zh) |
SG (1) | SG11202105103QA (zh) |
WO (1) | WO2020253655A1 (zh) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113793288A (zh) * | 2021-08-26 | 2021-12-14 | 广州微咔世纪信息科技有限公司 | Method and apparatus for co-photographing virtual characters, and computer-readable storage medium |
CN113946265A (zh) * | 2021-09-29 | 2022-01-18 | 北京五八信息技术有限公司 | Data processing method and apparatus, electronic device, and storage medium |
US11962930B2 (en) | 2019-06-21 | 2024-04-16 | Tencent Technology (Shenzhen) Company Limited | Method and apparatus for controlling a plurality of virtual characters, device, and storage medium |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111142669B (zh) * | 2019-12-28 | 2023-08-29 | 上海米哈游天命科技有限公司 | Interaction method, apparatus, device and storage medium from a two-dimensional interface to a three-dimensional scene |
CN111640166B (zh) * | 2020-06-08 | 2024-03-26 | 上海商汤智能科技有限公司 | AR group-photo method and apparatus, computer device, and storage medium |
CN112001824A (zh) * | 2020-07-31 | 2020-11-27 | 天津洪恩完美未来教育科技有限公司 | Augmented-reality-based data processing method and apparatus |
CN111913624B (zh) * | 2020-08-18 | 2022-06-07 | 腾讯科技(深圳)有限公司 | Method and apparatus for object interaction in a virtual scene |
CN114217689A (zh) * | 2020-09-04 | 2022-03-22 | 本田技研工业(中国)投资有限公司 | Virtual character control method, vehicle-mounted terminal, and server |
CN113350785A (zh) * | 2021-05-08 | 2021-09-07 | 广州三七极创网络科技有限公司 | Virtual character rendering method and apparatus, and electronic device |
CN113413594B (zh) * | 2021-06-24 | 2024-09-03 | 网易(杭州)网络有限公司 | Virtual photographing method and apparatus for virtual characters, storage medium, and computer device |
CN114028807A (zh) * | 2021-11-05 | 2022-02-11 | 腾讯科技(深圳)有限公司 | Virtual object rendering method, apparatus, device, and readable storage medium |
CN114422698B (zh) * | 2022-01-19 | 2023-09-26 | 北京字跳网络技术有限公司 | Video generation method, apparatus, device, and storage medium |
CN114546227B (zh) * | 2022-02-18 | 2023-04-07 | 北京达佳互联信息技术有限公司 | Virtual camera control method, apparatus, computer device, and medium |
CN115050228B (zh) * | 2022-06-15 | 2023-09-22 | 北京新唐思创教育科技有限公司 | Material collection method and apparatus, and electronic device |
CN115379195B (zh) * | 2022-08-26 | 2023-10-03 | 维沃移动通信有限公司 | Video generation method, apparatus, electronic device, and readable storage medium |
KR102632973B1 (ko) * | 2023-11-30 | 2024-02-01 | 이수민 | Electronic device providing a user interface for generating multimedia content, and operating method thereof |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103080885A (zh) * | 2010-08-27 | 2013-05-01 | 富士胶片株式会社 | Method and apparatus for editing object layout |
US8773468B1 (en) * | 2010-08-27 | 2014-07-08 | Disney Enterprises, Inc. | System and method for intuitive manipulation of the layering order of graphics objects |
CN108765541A (zh) * | 2018-05-23 | 2018-11-06 | 歌尔科技有限公司 | 3D scene object display method, apparatus, device, and storage medium |
CN110276840A (zh) * | 2019-06-21 | 2019-09-24 | 腾讯科技(深圳)有限公司 | Method, apparatus, device and storage medium for controlling multiple virtual characters |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20110105516A (ko) * | 2010-03-19 | 2011-09-27 | (주) 써니그라피 | Two-way interactive lecture system, and method for conducting, recording, and replaying lectures using the same |
US20140108979A1 (en) | 2012-10-17 | 2014-04-17 | Perceptive Pixel, Inc. | Controlling Virtual Objects |
US9818225B2 (en) * | 2014-09-30 | 2017-11-14 | Sony Interactive Entertainment Inc. | Synchronizing multiple head-mounted displays to a unified space and correlating movement of objects in the unified space |
WO2017039348A1 (en) | 2015-09-01 | 2017-03-09 | Samsung Electronics Co., Ltd. | Image capturing apparatus and operating method thereof |
CN106484086B (zh) * | 2015-09-01 | 2019-09-20 | 北京三星通信技术研究有限公司 | Method for assisting photographing and photographing device thereof |
IL258705B (en) | 2015-10-20 | 2022-07-01 | Magic Leap Inc | Selection of virtual objects in three-dimensional space |
KR102219304B1 (ko) * | 2016-11-07 | 2021-02-23 | 스냅 인코포레이티드 | Selective identification and ordering of image modifiers |
CN107661630A (zh) * | 2017-08-28 | 2018-02-06 | 网易(杭州)网络有限公司 | Shooting game control method and apparatus, storage medium, processor, and terminal |
CN109078326B (zh) * | 2018-08-22 | 2022-03-08 | 网易(杭州)网络有限公司 | Game control method and apparatus |
CN109350964B (zh) * | 2018-09-28 | 2020-08-11 | 腾讯科技(深圳)有限公司 | Method, apparatus, device and storage medium for controlling a virtual character |
CN109550247B (zh) * | 2019-01-09 | 2022-04-08 | 网易(杭州)网络有限公司 | In-game virtual scene adjustment method and apparatus, electronic device, and storage medium |
-
2019
- 2019-06-21 CN CN201910544446.3A patent/CN110276840B/zh active Active
-
2020
- 2020-06-15 WO PCT/CN2020/096180 patent/WO2020253655A1/zh active Application Filing
- 2020-06-15 EP EP20826732.8A patent/EP3989177A4/en active Pending
- 2020-06-15 KR KR1020217025341A patent/KR102497683B1/ko active IP Right Grant
- 2020-06-15 KR KR1020237004089A patent/KR102595150B1/ko active IP Right Grant
- 2020-06-15 SG SG11202105103QA patent/SG11202105103QA/en unknown
- 2020-06-15 JP JP2021549944A patent/JP7344974B2/ja active Active
-
2021
- 2021-07-02 US US17/367,267 patent/US11962930B2/en active Active
-
2024
- 2024-03-08 US US18/600,457 patent/US20240214513A1/en active Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103080885A (zh) * | 2010-08-27 | 2013-05-01 | 富士胶片株式会社 | Method and apparatus for editing object layout |
US8773468B1 (en) * | 2010-08-27 | 2014-07-08 | Disney Enterprises, Inc. | System and method for intuitive manipulation of the layering order of graphics objects |
CN108765541A (zh) * | 2018-05-23 | 2018-11-06 | 歌尔科技有限公司 | 3D scene object display method, apparatus, device, and storage medium |
CN110276840A (zh) * | 2019-06-21 | 2019-09-24 | 腾讯科技(深圳)有限公司 | Method, apparatus, device and storage medium for controlling multiple virtual characters |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11962930B2 (en) | 2019-06-21 | 2024-04-16 | Tencent Technology (Shenzhen) Company Limited | Method and apparatus for controlling a plurality of virtual characters, device, and storage medium |
CN113793288A (zh) * | 2021-08-26 | 2021-12-14 | 广州微咔世纪信息科技有限公司 | Method and apparatus for co-photographing virtual characters, and computer-readable storage medium |
CN113946265A (zh) * | 2021-09-29 | 2022-01-18 | 北京五八信息技术有限公司 | Data processing method and apparatus, electronic device, and storage medium |
Also Published As
Publication number | Publication date |
---|---|
KR20230023824A (ko) | 2023-02-17 |
KR102497683B1 (ko) | 2023-02-07 |
JP7344974B2 (ja) | 2023-09-14 |
KR102595150B1 (ko) | 2023-10-26 |
SG11202105103QA (en) | 2021-06-29 |
KR20210113333A (ko) | 2021-09-15 |
EP3989177A4 (en) | 2022-08-03 |
CN110276840B (zh) | 2022-12-02 |
US11962930B2 (en) | 2024-04-16 |
US20240214513A1 (en) | 2024-06-27 |
CN110276840A (zh) | 2019-09-24 |
EP3989177A1 (en) | 2022-04-27 |
US20210337138A1 (en) | 2021-10-28 |
JP2022537614A (ja) | 2022-08-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020253655A1 (zh) | Method, apparatus, device and storage medium for controlling multiple virtual characters | |
US11221726B2 (en) | Marker point location display method, electronic device, and computer-readable storage medium | |
US11151773B2 (en) | Method and apparatus for adjusting viewing angle in virtual environment, and readable storage medium | |
CN110992493B (zh) | Image processing method and apparatus, electronic device, and storage medium | |
CN110841285B (zh) | Interface element display method and apparatus, computer device, and storage medium | |
CN110427110B (zh) | Live streaming method and apparatus, and live streaming server | |
KR102602074B1 (ko) | Method and device for observing virtual objects in a virtual environment, and readable storage medium | |
CN108694073B (zh) | Virtual scene control method, apparatus, device, and storage medium | |
CN111701238A (zh) | Virtual scroll painting display method, apparatus, device, and storage medium | |
US11954200B2 (en) | Control information processing method and apparatus, electronic device, and storage medium | |
WO2019179237A1 (zh) | Method, apparatus, device and storage medium for acquiring a real-scene electronic map | |
WO2022052620A1 (zh) | Image generation method and electronic device | |
KR102633468B1 (ko) | Hotspot map display method and apparatus, computer device, and readable storage medium | |
JP2021520540A (ja) | Camera positioning method and apparatus, terminal, and computer program | |
WO2022134632A1 (zh) | Work processing method and apparatus | |
WO2022142295A1 (zh) | Bullet-comment display method and electronic device | |
KR20210097765A (ko) | Virtual-environment-based object construction method and apparatus, computer device, and readable storage medium | |
US12061773B2 (en) | Method and apparatus for determining selected target, device, and storage medium | |
CN110290191B (zh) | Resource transfer result processing method, apparatus, server, terminal, and storage medium | |
WO2022199102A1 (zh) | Image processing method and apparatus | |
CN111598981B (zh) | Character model display method, apparatus, device, and storage medium | |
CN113018865A (zh) | Climbing line generation method, apparatus, computer device, and storage medium | |
CN112000899A (zh) | Scenic spot information display method, apparatus, electronic device, and storage medium | |
CN111539794A (zh) | Credential information acquisition method, apparatus, electronic device, and storage medium | |
CN114115660B (zh) | Media resource processing method, apparatus, terminal, and storage medium | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20826732 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 20217025341 Country of ref document: KR Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2021549944 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2020826732 Country of ref document: EP |