WO2022267729A1 - Interaction method, apparatus, device, medium and program product based on virtual scene - Google Patents
Interaction method, apparatus, device, medium and program product based on virtual scene
- Publication number
- WO2022267729A1 (PCT/CN2022/092190)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- scene
- virtual
- cutout
- virtual scene
- camera
- Prior art date
Classifications
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/50—Controlling the output signals based on the game progress
- A63F13/52—Controlling the output signals based on the game progress involving aspects of the displayed game scene
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/50—Controlling the output signals based on the game progress
- A63F13/53—Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
- A63F13/537—Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/55—Controlling game characters or game objects based on the game progress
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/80—Special adaptations for executing a specific game genre or game mode
- A63F13/837—Shooting of targets
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/64—Circuits for processing colour signals
- H04N9/74—Circuits for processing colour signals for obtaining special effects
- H04N9/75—Chroma key
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/30—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device
- A63F2300/308—Details of the user interface
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/60—Methods for processing data by generating or executing the game program
- A63F2300/66—Methods for processing data by generating or executing the game program for rendering three dimensional images
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/80—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
- A63F2300/8076—Shooting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Definitions
- the embodiments of the present application relate to the field of virtual environments, and in particular to an interaction method, apparatus, device, medium and program product based on virtual scenes.
- an application program based on a virtual scene is usually a program that runs in a virtual environment constructed from three-dimensional models.
- the player can interact with the virtual environment by controlling the movement of virtual objects in the virtual environment.
- when a player controls a virtual object in a virtual environment, the player can do so by touching the display screen or by inputting control signals through an external input device, and the virtual object moves in the virtual environment according to the player's control.
- however, interaction realized in this way is limited to the virtual object's interaction with the virtual environment: the interaction process is relatively simple, and because the player must control the virtual object to complete each interaction, the interaction process is relatively cumbersome.
- the embodiments of the present application provide an interaction method, device, equipment, medium and program product based on a virtual scene, which can improve the diversity and efficiency of interaction between a player and a virtual environment. The technical solution is as follows:
- an interaction method based on a virtual scene is provided, the method is executed by a first terminal configured with a camera, and the method includes:
- the first scene image includes a first object, and the first object is located within the shooting range of the camera of the first terminal;
- the virtual environment picture is a picture for displaying a virtual scene
- the virtual scene includes a cutout object
- the cutout object includes a first object obtained by cutting out the first scene image.
- an interactive device based on a virtual scene comprising:
- a receiving module configured to receive a virtual scene display operation
- a collection module configured to collect a first scene image through the camera, the first scene image includes a first object, and the first object is located within the shooting range of the camera of the first terminal;
- a display module configured to display a virtual environment picture
- the virtual environment picture is a picture for displaying a virtual scene
- the virtual scene includes a cutout object
- the cutout object includes a first object obtained by cutting out the first scene image;
- in another aspect, a computer device is provided, including a processor and a memory; at least one program is stored in the memory, and the at least one program is loaded and executed by the processor to implement the virtual scene-based interaction method described in any one of the above embodiments.
- in another aspect, a computer-readable storage medium is provided, in which at least one program is stored; the at least one program is loaded and executed by a processor to implement the virtual scene-based interaction method described in any one of the above embodiments of the present application.
- in another aspect, a computer program product is provided, comprising computer instructions stored on a computer-readable storage medium.
- the processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the virtual scene-based interaction method described in any one of the above embodiments.
- in the method provided by the embodiments, the first object and the second object are displayed in the virtual scene, where both are obtained from scene images collected by cameras; that is, real people and objects are combined with the virtual scene, so that they can interact with the virtual scene directly rather than in the form of virtual objects. This improves the diversity of interaction between the virtual scene and users and, because the player no longer needs to control a virtual object to interact with the virtual scene, improves interaction efficiency.
- in addition, since the camera directly captures real people and objects to add them to the virtual scene, no data modeling is required for new objects, which reduces the resource consumption of both generating and storing model data.
- FIG. 1 is a schematic diagram of a cutout object generation process provided by an exemplary embodiment of the present application
- Fig. 2 is a structural block diagram of an electronic device provided by an exemplary embodiment of the present application.
- Fig. 3 is a schematic diagram of an implementation environment provided by an exemplary embodiment of the present application.
- Fig. 4 is a flowchart of an interactive method based on a virtual scene provided by an exemplary embodiment of the present application
- Fig. 5 is a schematic interface diagram of a virtual environment screen provided based on the embodiment shown in Fig. 4;
- FIG. 6 is an overall schematic diagram of an implementation environment provided by an exemplary embodiment of the present application.
- Fig. 7 is a flowchart of an interactive method based on a virtual scene provided by another exemplary embodiment of the present application.
- Fig. 8 is a schematic diagram of the viewing angle change process provided based on the embodiment shown in Fig. 7;
- Fig. 9 is a flow chart of an interactive observation method based on a virtual scene provided by another exemplary embodiment of the present application.
- Fig. 10 is a schematic interface diagram of an angle adjustment control provided based on the embodiment shown in Fig. 9;
- Fig. 11 is an overall flowchart of an interactive process based on a virtual scene provided by an exemplary embodiment of the present application.
- Fig. 12 is a structural block diagram of an interactive device based on a virtual scene provided by an exemplary embodiment of the present application.
- Fig. 13 is a structural block diagram of an interactive device based on a virtual scene provided by another exemplary embodiment of the present application.
- Fig. 14 is a structural block diagram of a terminal provided by an exemplary embodiment of the present application.
- Virtual Environment is the virtual environment displayed (or provided) by the application when it is run on the terminal.
- the virtual environment can be a simulation environment of the real world, a semi-simulation and semi-fictional environment, or a purely fictitious environment.
- the virtual environment may be any one of a two-dimensional virtual environment, a 2.5-dimensional virtual environment and a three-dimensional virtual environment, which is not limited in this application.
- the following embodiments are described using a three-dimensional virtual environment as an example. In the embodiments of the present application, the virtual environment is also called a virtual scene.
- a cutout object refers to a specified object cut out of a scene image after the scene image is captured by a real-scene camera.
- the following takes cutting one object out of one scene image as an example.
- FIG. 1 shows a schematic diagram of a matting object generation process provided by an exemplary embodiment of the present application.
- the image collection range of the real-scene camera 100 includes the person 120, so the scene image 110 includes the corresponding object 121, and the object 121 is cut out from the scene image 110 to obtain the cutout object 122.
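The patent leaves the concrete matting technique open (its classifications point to both foreground-background segmentation, G06T7/194, and chroma keying, H04N9/75). As a minimal illustrative sketch only, the following Python function produces a cutout such as object 122 by simple background subtraction, assuming a clean background frame is available (the calibration procedure of Fig. 9, described later, shows one way to obtain one):

```python
import cv2
import numpy as np

def cut_out_object(scene_bgr: np.ndarray, background_bgr: np.ndarray,
                   threshold: int = 30) -> np.ndarray:
    """Return the scene image as BGRA, transparent wherever it matches the background."""
    diff = cv2.absdiff(scene_bgr, background_bgr)
    # Pixels that differ enough from the background count as foreground.
    mask = (diff.max(axis=2) > threshold).astype(np.uint8) * 255
    # Morphological opening removes speckle noise from the matte.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    cutout = cv2.cvtColor(scene_bgr, cv2.COLOR_BGR2BGRA)
    cutout[:, :, 3] = mask
    return cutout
```

A learned segmentation model could replace the thresholding step without changing the surrounding flow.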
- in this way, an experience in which the player interacts within the virtual scene is created.
- the terminal in this application can be a desktop computer, a laptop computer, a mobile phone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a vehicle-mounted terminal, an aircraft, etc.
- An application program supporting a virtual environment such as an application program supporting a three-dimensional virtual environment, is installed and run in the terminal.
- the application program can be any one of a virtual reality application program, a three-dimensional map program, a third-person shooter (TPS) game, a first-person shooter (FPS) game, and a multiplayer online battle arena (MOBA) game.
- the application program may be a stand-alone version of the application, such as a stand-alone version of a 3D game program, or an online version of the application.
- Fig. 2 shows a structural block diagram of an electronic device provided by an exemplary embodiment of the present application.
- the electronic device 200 includes: an operating system 220 and an application program 222 .
- Operating system 220 is the underlying software that provides application programs 222 with secure access to computer hardware.
- Application 222 is an application that supports a virtual environment.
- the application program 222 is an application program supporting a three-dimensional virtual environment.
- the application program 222 may be any one of a virtual reality application program, a three-dimensional map program, a TPS game, an FPS game, a MOBA game, and a multiplayer survival game.
- the application program 222 may be a stand-alone version of the application, such as a stand-alone version of a 3D game program, or an online version of the application.
- Fig. 3 shows a structural block diagram of a computer system provided by an exemplary embodiment of the present application.
- the computer system 300 includes: a first device 320 , a server 340 and a second device 360 .
- the first device 320 has installed and runs an application supporting a virtual environment.
- the application program may be any one of a virtual reality application program, a three-dimensional map program, a TPS game, an FPS game, a MOBA game, and a multiplayer gun battle survival game.
- the first device 320 is a device used by the first user. The first user uses the first device 320 to control the activities of the first cutout object located in the virtual environment.
- the first device 320 is configured with a first camera; after the first camera collects images of the first user (or other users within its image collection range) and the images are cut out, the first cutout object is displayed in the virtual environment.
- the first device 320 is connected to the server 340 through a wireless network or a wired network.
- the server 340 includes at least one of a server, multiple servers, a cloud computing platform, and a virtualization center.
- the server 340 is used to provide background services for applications supporting the 3D virtual environment.
- the server 340 undertakes the main calculation work, and the first device 320 and the second device 360 undertake the secondary calculation work; or, the server 340 undertakes the secondary calculation work, and the first device 320 and the second device 360 undertake the main calculation work;
- the server 340, the first device 320, and the second device 360 use a distributed computing architecture to perform collaborative computing.
- the second device 360 has installed and runs an application supporting a virtual environment.
- the second device 360 is a device used by the second user, and the second user uses the second device 360 to control the activity of the second cutout object located in the virtual environment. The second device 360 is configured with a second camera; after the second camera collects images of the second user (or other users within its image collection range) and the images are cut out, the second cutout object is displayed in the virtual environment.
- the first cutout object and the second cutout object are in the same virtual environment.
- the first cutout object and the second cutout object may belong to the same team or the same organization, be friends, or have temporary communication rights.
- the first cutout object and the second cutout object may also belong to different teams, different organizations, or two mutually hostile groups.
- the application programs installed on the first device 320 and the second device 360 are the same, or the application programs installed on the two devices are the same type of application programs on different control system platforms.
- the first device 320 may generally refer to one of multiple devices, and the second device 360 may generally refer to one of multiple devices.
- This embodiment only uses the first device 320 and the second device 360 as examples.
- the device types of the first device 320 and the second device 360 are the same or different, and the device types include at least one of: game consoles, desktop computers, smart phones, tablet computers, e-book readers, MP3 players, MP4 players, laptop computers, vehicle-mounted terminals, and aircraft. The following embodiments are described taking a desktop computer as an example.
- the number of the above-mentioned devices may be more or less.
- the above-mentioned device may be only one, or there may be dozens or hundreds of the above-mentioned devices, or more.
- the embodiment of the present application does not limit the number and type of devices.
- the server 340 can be implemented as a physical server or as a cloud server in the cloud.
- cloud technology refers to a hosting technology that unifies a series of resources such as hardware, software, and networks within a wide area network or a local area network to realize the calculation, storage, processing, and sharing of data.
- the method provided by the embodiments of the present application can be applied to cloud game scenarios, so that the calculation of data logic during the game process is completed through the cloud server, and the terminal is responsible for the display of the game interface.
- the above-mentioned server 340 can also be implemented as a node in a blockchain system.
- Blockchain is a new application model of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanism, and encryption algorithm.
- the first scenario applies to games: the game can be implemented as a cloud game, that is, the cloud server completes the computation logic during the game, and the terminal completes the display logic during the game.
- the game can be implemented as at least one of dance games, shooting games, and puzzle games.
- player A collects a scene image through a first terminal configured with a first camera, and the scene image is cut out to obtain a cutout object a corresponding to player A;
- player B collects a scene image through a second terminal configured with a second camera, and the scene image is cut out to obtain a cutout object b corresponding to player B.
- the cutout object a, the cutout object b, and the preset virtual scene are displayed on the terminal interface, realizing the process in which player A and player B interact in the virtual scene and participate in the game.
- the second scenario applies to live streaming, where a live broadcast application involves an anchor and an audience.
- the anchor refers to the user who creates the live broadcast room
- the audience refers to the user who watches the live broadcast room.
- the audience can interact with the anchor in the virtual scene, or the anchor can interact with the audience in the virtual scene.
- in one example, anchor 1 creates a virtual-scene interactive activity in the live broadcast room and invites the audience to participate.
- audience member 2 is invited to participate in the virtual-scene interactive activity with anchor 1.
- anchor 1 collects a scene image through a first terminal equipped with a first camera, and the scene image is cut out to obtain a cutout object m corresponding to anchor 1;
- audience member 2 collects a scene image through a second terminal equipped with a second camera, and the scene image is cut out to obtain a cutout object n corresponding to audience member 2.
- the cutout object m, the cutout object n, and the preset virtual scene are displayed on the terminal interface, so that anchor 1 and audience member 2 can interact in the virtual scene.
- audience members other than audience member 2 can watch the interaction between anchor 1 and audience member 2 in the virtual scene.
- Fig. 4 shows a flowchart of the virtual scene-based interaction method provided by an exemplary embodiment of the present application. The method is illustrated taking execution by a first terminal equipped with a camera as an example. As shown in Fig. 4, the method includes:
- Step 401: receive a virtual scene display operation.
- the virtual scene display operation refers to an operation in which the user instructs to start the virtual scene.
- the implementation of the virtual scene display operation includes at least one of the following methods:
- the user triggers the game start operation as a virtual scene display operation, that is, starts the cloud game match and enters the game interface according to the game start operation.
- the user can form a team with friends to enter the cloud game match, or invite friends to form a team after entering the cloud game match.
- the anchor account triggers the opening operation of the interactive space as a virtual scene display operation, that is, the virtual scene where the anchor account interacts with the audience account is opened according to the opening operation of the interactive space.
- the player opens the link of the game live broadcast on the smart device, and at the same time fixes the camera device in front of the display to ensure that the designated part or the whole body of the player is within the viewing range of the camera.
- the camera configured on the first terminal is a 2D camera, that is, used to collect planar images.
- the camera configured on the first terminal is a 3D camera, that is, the depth of field information is collected during the image collection process.
- when the camera configured on the first terminal is a 2D camera, the data interaction between the terminal and the server is based on planar images, which reduces the amount of data exchanged;
- when the camera configured on the first terminal is a 3D camera, the terminal can build a 3D model corresponding to the player based on the collected depth-of-field information, improving the authenticity of the cutout objects displayed in the virtual scene.
- Step 402: collect a first scene image through the camera.
- the first scene image includes the first object within the shooting range of the camera of the first terminal. That is, the first scene image refers to the image collected by the camera configured on the first terminal, where the first terminal continuously collects first scene images in the form of a video stream, and during collection the first object is within the shooting range of the camera and displayed in the first scene image.
- in some embodiments, when the first terminal receives the virtual scene display operation, it turns on the camera and collects the first scene image through the camera; that is, in the process of displaying the virtual scene after receiving the virtual scene display operation, the camera is turned on immediately to collect images of the first scene.
- in some embodiments, when the first terminal receives the virtual scene display operation, it displays the virtual environment picture. At this time, there is no cutout object in the virtual environment picture, or the picture contains only cutout objects other than the first object corresponding to the first terminal.
- in response to receiving the joining operation, the camera is turned on to collect images of the first scene.
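For illustration only, the continuous collection of first scene images "in the form of a video stream" (step 402 above) could look like the following OpenCV loop; the camera index and the surrounding application logic are assumptions:

```python
import cv2

def collect_scene_frames(camera_index: int = 0):
    """Yield frames from the terminal's camera as a continuous video stream."""
    capture = cv2.VideoCapture(camera_index)
    if not capture.isOpened():
        raise RuntimeError("camera could not be opened")
    try:
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            yield frame  # each frame is one first scene image
    finally:
        capture.release()
```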
- the first scene image needs to be cut out, and the first object is obtained from the first scene image; the above-mentioned first object is the real person or thing displayed in the first scene image.
- the above-mentioned first object is determined after the first terminal uploads the first scene image to the server and the server performs object recognition on the first scene image; or, the object recognition process may also be implemented by the first terminal, which is not limited here.
- in some embodiments, the first terminal selects the first object from the candidate objects by receiving an object selection operation; the first object determined from the plurality of candidate objects may be one or multiple.
- the process of cutting out the first scene image can be completed by the terminal or by the server.
- when the cutout of the first scene image is completed by the terminal, the first scene image is cut out directly after being collected by the camera, which reduces the amount of data exchanged between the terminal and the server;
- when the cutout of the first scene image is completed by the server, the first scene image is sent to the server after being captured by the camera, and the server cuts out the first scene image to obtain the first object.
- before the terminal turns on the camera to obtain the first scene image, it needs to obtain the user's authorization for the application program to capture the first scene image through the camera. That is, authorization prompt information is displayed, including a prompt that the camera needs to be enabled and a prompt describing the purpose for which the scene images collected by the camera will be used; in response to receiving a confirmation operation on the authorization prompt information, the terminal turns on the camera.
- Step 403: display a virtual environment picture, where the virtual environment picture is a picture for displaying a virtual scene, and the virtual scene includes cutout objects.
- the cutout object includes a first object obtained by cutting out the first scene image, and a second object obtained by cutting out the second scene image.
- the first object is a real person or object displayed in the first scene image
- the second object is a real person or object displayed in the second scene image.
- the first object and the second object can be different real people or objects captured by different cameras, or the same real person or object captured by different cameras.
- the first object and the second object obtained by cutout are added to the virtual scene as cutout objects, and the first terminal displays the virtual environment picture according to the virtual scene.
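As an illustrative sketch of how a terminal might render a cutout object into the virtual environment picture, the following compositing step alpha-blends a cutout (such as the BGRA cutout sketched earlier) over a rendered scene frame; bounds handling is simplified and all names are assumptions:

```python
import numpy as np

def composite_cutout(scene_rgb: np.ndarray, cutout_rgba: np.ndarray,
                     top: int, left: int) -> np.ndarray:
    """Alpha-blend a cutout object onto a rendered virtual-scene frame in place."""
    h, w = cutout_rgba.shape[:2]
    # Assumes the cutout lies fully inside the frame.
    region = scene_rgb[top:top + h, left:left + w].astype(np.float32)
    alpha = cutout_rgba[:, :, 3:4].astype(np.float32) / 255.0  # per-pixel opacity
    fg = cutout_rgba[:, :, :3].astype(np.float32)
    blended = alpha * fg + (1.0 - alpha) * region
    scene_rgb[top:top + h, left:left + w] = blended.astype(np.uint8)
    return scene_rgb
```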
- the first scene image is an image collected by a first terminal configured with a camera
- the second scene image is an image collected by a second terminal configured with a camera.
- the first scene is the scene taken by the first terminal through the camera
- the second scene is the scene taken by the second terminal through the camera.
- the first scene and the second scene may be different real scenes captured by different cameras, or the same real scene captured by different cameras.
- the above-mentioned virtual scene is a scene constructed by an application program; it can be a two-dimensional or three-dimensional animation scene, or a scene obtained by computer simulation of reality. That is, a virtual scene is a computer-generated fiction, whereas the first scene and the second scene are real scenes captured by cameras.
- the aforementioned virtual scene may be a scene composed of cutout objects and virtual elements, wherein the aforementioned virtual elements include at least one of a virtual environment, a virtual object, and a virtual prop.
- the aforementioned virtual objects are fictitious objects in the virtual scene, and the cutout objects are objects in the virtual scene used to display real people or things that exist in reality.
- the first object and the second object are objects participating in the same virtual match or virtual room. It is worth noting that this embodiment takes a virtual scene containing the first object and the second object as an example; in an optional embodiment, the virtual scene may also contain only one object, or three or more objects, which is not limited in this embodiment.
- the number of cutout objects obtained from the same scene image can be one or multiple; that is, the user can set the number of cutout objects to which he or she is mapped in the virtual scene.
- a target account is logged in on the first terminal, and the first terminal receives an object quantity setting operation indicated by the target account, where the quantity indicated by the object quantity setting operation is a target quantity; the first terminal then displays the target quantity of first objects in the virtual environment picture according to the operation.
- the cutout image in the first scene image is displayed on the virtual environment screen
- taking the case where the server cuts out the scene image as an example: the terminal sends the first scene image to the server and receives the picture display data fed back by the server, where the picture display data includes the scene data corresponding to the virtual scene and the object data corresponding to the cutout object.
- the server obtains the scene images collected by the terminals and cuts them out to obtain object data, so that multiple different terminals are uniformly configured with picture display data corresponding to the same virtual scene, which reduces wasted processing resources.
- in addition, differences caused by different hardware conditions when cutting out scene images are avoided, ensuring uniform picture display across the terminals in the same virtual scene.
- the scene data of the virtual scene is data corresponding to a preset virtual scene; or, the scene data corresponds to a virtual scene selected by the user, which increases the diversity of virtual-scene interaction; or, the scene data corresponds to a randomly obtained virtual scene. This embodiment does not limit this.
- the terminal displays the virtual environment picture based on the scene data and object data fed back by the server.
- the display positions of the first object and the second object in the virtual scene are randomly determined among preset candidate positions, increasing the uncertainty of object display; or, the display positions are indicated to the terminal by the server according to preset display rules; or, the display positions are indicated by the users through the first terminal and the second terminal respectively, that is, each user sets the display position of his or her cutout object in the virtual scene.
- the display positions of the first object and the second object in the virtual scene may be fixed, or may change following changes in position indications.
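A compact sketch of the three position-determination options listed above (random candidate, server rule, user indication); all names are illustrative, not part of the patent:

```python
import random

def choose_display_position(candidates, user_choice=None, server_rule=None):
    """Resolve a cutout object's display position per the three options above."""
    if user_choice is not None:
        return user_choice               # the user sets the position via the terminal
    if server_rule is not None:
        return server_rule(candidates)   # a preset display rule applied on the server
    return random.choice(candidates)     # random pick among preset candidate positions
```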
- when the object data includes the object display position, the virtual scene is displayed based on the scene data, and the cutout object is displayed at the object display position in the virtual scene.
- the above object data may be data configured for the terminal after the server receives the virtual scene display operation and obtains the scene images; that is, the server cuts out the scene images obtained from different terminals to obtain the cutout objects, configures the display positions of the cutout objects to obtain the object data, and sends the object data to each terminal.
- when the terminals in the same virtual scene display the virtual environment picture, the virtual environment content they display is the same or similar. Therefore, when configuring object data, the server performs a unified configuration for multiple terminals, which improves the efficiency of object data configuration and reduces the resource consumption of object data configuration.
- the object data includes first object data corresponding to the first object and second object data corresponding to the second object, the first object data includes the display position of the first object, and the second object data includes the display position of the second object, so that according to The first object display position displays the first object at a corresponding position in the virtual scene, and displays the second object at a corresponding position in the virtual scene according to the second object display position.
- the display position of the object is realized in the form of coordinates, that is, the display position of the cutout object is indicated by placing the designated identification point of the cutout object at the object display position.
- alternatively, the display position of the cutout object is indicated in such a way that the center point of the smallest bounding box of the cutout object coincides with the object display position.
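Sketch of the per-cutout display data and of the bounding-box-center placement rule just described; the field names are assumptions, since the embodiment only requires that object data carry a display position:

```python
from dataclasses import dataclass

@dataclass
class ObjectData:
    """Per-cutout display data a server could feed to every terminal (illustrative)."""
    object_id: str
    display_x: int  # object display position, x
    display_y: int  # object display position, y
    width: int      # width of the cutout's smallest bounding box
    height: int     # height of the cutout's smallest bounding box

def placement_top_left(data: ObjectData) -> tuple:
    """Top-left draw origin such that the bounding-box center coincides with
    the object display position."""
    return (data.display_x - data.width // 2,
            data.display_y - data.height // 2)
```

The returned origin can feed directly into a compositing step such as the one sketched earlier.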
- the display size of the cutout object is related to the display size of the object in the scene image; that is, the closer the player is to the camera, the larger the display area of the object corresponding to the player in the collected scene image, and the larger the cutout object.
- alternatively, the display size of the cutout object is adjusted by the server according to preset size requirements: after cutting out the scene images to obtain the cutout objects, the server unifies the display sizes of the cutout objects in the same virtual scene to ensure that the cutout objects are displayed reasonably; or, in other embodiments, the display size of the cutout object may be determined by a size adjustment operation indicated by the terminal corresponding to that cutout object. In one example, the user may input a size adjustment operation through the first device to adjust the display size of the first object.
- Fig. 5 is a schematic interface diagram of a virtual environment picture provided by an exemplary embodiment of the present application. As shown in Fig. 5, a first scene image 510 and a second scene image 520 are displayed superimposed on the virtual environment picture 500.
- the first scene image 510 includes the first object 511 corresponding to the first object 511 displayed in the virtual environment picture 500; that is, the first object 511 displayed in the virtual environment picture 500 is obtained by cutting out the first scene image 510.
- the second scene image 520 includes the second object 521 corresponding to the second object 521 displayed in the virtual environment picture 500; that is, the second object 521 displayed in the virtual environment picture 500 is obtained by cutting out the second scene image 520.
- the viewing angle of the virtual scene can be adjusted.
- the target account corresponds to one camera model for observing the virtual scene in the virtual scene; or, the target account corresponds to multiple camera models for observing the virtual scene in the virtual scene.
- the above-mentioned target account is an account logged into the application program providing the virtual scene in the current terminal.
- a camera model identifier 530 is also displayed on the virtual environment picture 500, which includes a first camera model 531, a second camera model 532, a third camera model 533, and a fourth camera model 534. When the target account corresponds to one camera model, taking the target account corresponding to the first camera model 531 as an example, the second camera model 532, the third camera model 533, and the fourth camera model 534 are camera models used by other accounts participating in the virtual scene to observe it.
- when the first camera model is the camera model currently used to observe the virtual scene, the viewing-angle adjustment operation switches the camera model used to observe the virtual scene from the first camera model to the second camera model, and the virtual scene is then observed from the second viewing angle. The first viewing angle of the first camera model differs from the second viewing angle of the second camera model. That is, through the one-to-one correspondence between camera models and observation angles, a view-switching function is provided: the viewing-angle adjustment operation quickly determines which camera model supplies the displayed virtual environment picture, improving the efficiency of switching the displayed picture.
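The one-to-one correspondence between camera models and observation angles can be pictured as a simple lookup, as in this hypothetical sketch (camera poses are placeholder values, not part of the patent):

```python
class SceneViewer:
    """One camera model per observation angle; switching models switches the view."""

    def __init__(self, camera_models: dict):
        self.camera_models = camera_models       # e.g. {"camera_531": pose, ...}
        self.active = next(iter(camera_models))  # model currently supplying the picture

    def adjust_view(self, model_id: str):
        if model_id not in self.camera_models:
            raise KeyError(f"account has no camera model {model_id!r}")
        self.active = model_id                   # e.g. switch 531 -> 532
        return self.camera_models[model_id]      # render the next frame from this pose
```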
- the camera model can be assigned according to the account's observation authority on the virtual scene, so as to avoid displaying illegal content on the virtual environment screen.
- a camera model identifier 530 is also displayed on the virtual environment picture 500, including a first camera model 531, a second camera model 532, a third camera model 533 and a fourth camera model 534;
- the first camera model 531 , the second camera model 532 , the third camera model 533 and the fourth camera model 534 are all camera models corresponding to the target account in the virtual scene.
- the target account currently observes the virtual scene through the first camera model 531. When the target account needs to adjust its observation angle, it switches from the first camera model 531 to the second camera model 532 based on the viewing-angle adjustment operation, and observes the virtual scene and the cutout objects through the second camera model 532.
- the display direction of the cutout object is adjusted in real time according to the angle change, and the cutout object is kept facing the viewing angle for display.
- to sum up, in the method provided by this embodiment, the first object and the second object are displayed in the virtual scene, where both are cut out of scene images collected by cameras; that is, real people and objects are combined with the virtual scene so that they can interact with it directly rather than in the form of virtual objects. This improves the diversity of interaction between the virtual scene and users and, because the player no longer needs to control a virtual object to interact with the virtual scene, improves interaction efficiency.
- FIG. 6 is an overall schematic diagram of an implementation environment provided by an exemplary embodiment of the present application.
- player A operates a smart device 610 to participate in the game, and the smart device 610 is configured with a camera to collect images of player A;
- Player B operates the smart device 620 to participate in the game, and the smart device 620 is equipped with a camera to collect images of player B.
- the server 640 receives the collected scene images sent by the smart device 610 and the smart device 620, cuts out the scene images to obtain the cutout objects corresponding to players A and B, and feeds the virtual scene and the cutout objects back to the viewing terminals 650 for display, where the smart device 610 and the smart device 620 are themselves also viewing terminals.
- Fig. 7 is a flowchart of a virtual scene-based interaction method provided by another exemplary embodiment of the present application. The method is illustrated as applied to a first terminal equipped with a camera. As shown in Fig. 7, the method includes:
- Step 701: receive a virtual scene display operation.
- the virtual scene display operation refers to an operation in which the user instructs to start the virtual scene.
- the player opens the link of the game live broadcast on the smart device, and at the same time fixes the camera device in front of the display to ensure that the designated part or the whole body of the player is within the viewing range of the camera.
- the first scene image includes the first object within the shooting range of the camera of the first terminal. That is, the first scene image refers to an image collected by a camera configured on the first terminal.
- Step 703: display a calibration picture, where the calibration picture includes the first scene image, and the first scene image includes an indication frame and an indication line.
- the indication frame is used to indicate the frame selection of the first object, and the indication line is located at a specified position of the first scene image and divides the first scene image into a first area and a second area.
- FIG. 8 shows a schematic diagram of a calibration screen provided by an exemplary embodiment of the present application.
- a first scene image 810 is displayed in the calibration picture 800, and the first scene image 810 includes a first object 820, an indication frame 830, and an indication line 840, where the indication frame 830 frames the first object 820 through a pre-trained object recognition model, and the indication line 840 is displayed vertically in the middle of the first scene image 810, dividing the first scene image 810 into a left half area 841 and a right half area 842.
- Step 704: determine the background part of the first scene image by collecting staged images of the first object moving from a first position to a second position.
- FIG. 9 shows a schematic diagram of a calibration process provided by an exemplary embodiment of the present application.
- the first object 910 is located in the middle of the first scene image 900 at the initial moment.
- the player first moves to the left, so that the first object 910 is on the left side of the first scene image 900 and the indication frame 920 is entirely to the left of the indication line 930; the player then moves to the right, so that the first object 910 is on the right side of the first scene image 900 and the indication frame 920 is entirely to the right of the indication line 930. From the image to the right of the indication line 930 captured while the indication frame 920 is entirely on the left, and the image to the left of the indication line 930 captured while the indication frame 920 is entirely on the right, the background image of the first scene image 900 is obtained, which provides a basis for cutting out the first object.
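Assuming, as in Fig. 9, that the indication line is the vertical midline of the frame, the two calibration stages can be combined into a full background image roughly as follows (a sketch, not the patent's exact procedure):

```python
import numpy as np

def reconstruct_background(frame_player_left: np.ndarray,
                           frame_player_right: np.ndarray) -> np.ndarray:
    """Stitch the full background from the two calibration stages of Fig. 9."""
    mid = frame_player_left.shape[1] // 2
    background = np.empty_like(frame_player_left)
    # While the player stands left of the line, the right half is pure background.
    background[:, mid:] = frame_player_left[:, mid:]
    # While the player stands right of the line, the left half is pure background.
    background[:, :mid] = frame_player_right[:, :mid]
    return background
```

The result can serve as the clean background frame assumed by the background-subtraction cutout sketched earlier.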
- the calibration process of the first scene image is taken as an example for illustration; the calibration process of the second scene image is consistent with that of the first scene image, and this application will not repeat it here.
- the above calibration process may be performed before the cutout object corresponding to the current terminal is displayed; or, it may also be performed when the terminal detects that the background in the scene image has changed, that is, in response to a sensor detecting that the shooting direction of the camera corresponding to the terminal has changed, the terminal prompts that calibration needs to be performed, or, in response to detecting a change in the background content of the collected scene images, the terminal prompts that calibration needs to be performed.
- Step 705: display a virtual environment picture, where the virtual environment picture is a picture for displaying a virtual scene, and the virtual scene includes cutout objects.
- in response to completion of the calibration processes of the first object and the second object, the virtual environment picture is displayed: the server cuts the first object out of the first scene image according to the calibrated first background area of the first object; similarly, the server cuts the second object out of the second scene image according to the calibrated second background area of the second object, so that the first object and the second object are displayed in the virtual scene.
- to sum up, the method provided in this embodiment determines the background areas of the first scene image and the second scene image through a calibration process, which provides a basis for subsequently cutting the first object out of the first scene image and the second object out of the second scene image, improving the accuracy with which the first object is obtained from the first scene image and the second object is obtained from the second scene image.
- Fig. 10 is a flowchart of a virtual scene-based interaction method provided by another exemplary embodiment of the present application. The method is illustrated as applied to a first terminal equipped with a camera. As shown in Fig. 10, the method includes:
- Step 1001: receive a virtual scene display operation.
- the virtual scene display operation refers to an operation in which the user instructs to start the virtual scene.
- the first scene image includes the first object within the shooting range of the camera of the first terminal. That is, the first scene image refers to an image collected by a camera configured on the first terminal.
- Step 1003: display a virtual environment picture, which includes the virtual scene and cutout objects.
- the cutout object includes a first object obtained by cutting out the first scene image, and a second object obtained by cutting out a second scene image, where the second scene image is an image collected by a second terminal equipped with a camera.
- in response to completion of the calibration processes of the first object and the second object, the virtual environment picture is displayed, and the server cuts the first object out of the first scene image according to the calibrated first background area of the first object.
- an interactive animation is displayed through the virtual environment screen.
- in response to the cutout action of the first object meeting an action requirement, the interactive animation is displayed; or, in response to the cutout actions of the first object and the second object meeting the action requirements, the interactive animation is displayed.
- the action requirement includes at least one of the following situations:
- the virtual scene includes an interaction-triggering object. In response to the cutout action of the first object coming into contact with the interaction-triggering object, an interactive animation between the first object and the virtual scene is displayed through the virtual environment picture; or, in response to the cutout action of the second object coming into contact with the interaction-triggering object, an interactive animation between the second object and the virtual scene is displayed through the virtual environment picture.
- in response to the cutout actions of the first object and the second object matching a preset reference action, an interactive animation corresponding to the preset reference action is displayed on the virtual environment picture.
- in this way, interactive-animation control of the cutout objects in the virtual scene can be realized quickly, which reduces the user's input operations on the terminal during the interaction and improves interaction efficiency.
- the matching relationship between the cutout actions of multiple objects and the preset reference action is used to realize interactive animation, which enhances the sense of interaction between different objects in the virtual scene.
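The embodiment does not specify the matching rule between a cutout action and the preset reference action; one plausible sketch, assuming the skeleton calculation (step 1112 below) yields normalized keypoints, is a mean keypoint-distance test:

```python
import numpy as np

def matches_reference(pose: np.ndarray, reference: np.ndarray,
                      tolerance: float = 0.15) -> bool:
    """Match a cutout action against a preset reference action.

    `pose` and `reference` are hypothetical (N, 2) arrays of normalized
    skeleton keypoints; a mean keypoint distance under `tolerance` counts
    as a match and would trigger the corresponding interactive animation.
    """
    if pose.shape != reference.shape:
        return False
    return float(np.linalg.norm(pose - reference, axis=1).mean()) < tolerance
```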
- the three-dimensional virtual objects corresponding to the first object and the second object can be three-dimensional models obtained by taking the preset three-dimensional virtual model and adjusting model parts and model parameters such as hair style, hair color, clothing color, bottom clothing type, bottom color, shoe type, and shoe color according to the image recognition results of the first object and the second object.
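The adjustable model parts listed above can be pictured as a parameter set applied on top of the preset three-dimensional virtual model; this sketch is purely illustrative and the field names are assumptions:

```python
from dataclasses import dataclass

@dataclass
class AvatarParams:
    """Model parts recognized from the scene image (names are illustrative)."""
    hair_style: str
    hair_color: str
    clothing_color: str
    bottom_type: str
    bottom_color: str
    shoe_type: str
    shoe_color: str

def customize_base_model(base_model: dict, params: AvatarParams) -> dict:
    """Apply the recognized attributes on top of the preset 3D virtual model."""
    customized = dict(base_model)
    customized.update(vars(params))  # override only the recognized parts
    return customized
```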
- to sum up, in the method provided by this embodiment, the first object and the second object are displayed in the virtual scene, where both are cut out of scene images collected by cameras; that is, real people and objects are combined with the virtual scene so that they can interact with it directly rather than in the form of virtual objects. This improves the diversity of interaction between the virtual scene and users and, because the player no longer needs to control a virtual object to interact with the virtual scene, improves interaction efficiency.
- in addition, the method provided in this embodiment provides interaction between the first object and the virtual scene, between the second object and the virtual scene, and between the first object and the second object, which increases the interaction modes in the virtual scene and improves the diversity of interaction between users and the virtual scene, and between users.
- Step 1103: the player joins the room through the lobby.
- Step 1104: the cloud server initializes the player data.
- the cloud server first initializes the game account data corresponding to the player.
- Step 1105: the cloud server creates a personal rendering camera.
- Step 1106: the cloud server creates a personal audio group.
- the aforementioned personal audio group is the backend support for providing voice communication function and audio transmission for players in the same game scene.
- Step 1107: the player establishes a connection with the cloud server, and the two sides exchange codec information.
- encoding and decoding are complementary stages of video processing.
- the cloud server encodes the video of the game process and sends it to the terminal; the terminal encodes the video stream captured by the camera and sends it to the cloud server for decoding and subsequent processing such as cutting out.
- Step 1108: the cloud server sends the camera-rendered video stream and the encoded audio stream to the player.
- Step 1111: the data processing server transmits the data stream to the artificial intelligence (AI) computing-power server.
- Step 1112: the AI computing-power server performs skeleton calculation and video cutout.
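A per-frame sketch of step 1112, with `pose_model` and `matting_model` standing in for whatever skeleton-estimation and matting networks the AI computing-power server actually runs (the patent names neither):

```python
import numpy as np

def process_frame(frame_bgr: np.ndarray, pose_model, matting_model):
    """One AI-server step: skeleton calculation plus video matting (step 1112)."""
    keypoints = pose_model(frame_bgr)    # skeleton calculation on the decoded frame
    alpha = matting_model(frame_bgr)     # per-pixel foreground alpha in [0, 1]
    # Attach the matte as an alpha channel to form the cutout for compositing.
    cutout = np.dstack([frame_bgr, (alpha * 255).astype(np.uint8)])
    return keypoints, cutout
```

The keypoints can feed the reference-action matching sketched earlier, and the cutout feeds the compositing step.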
- a receiving module 1210 configured to receive a virtual scene display operation
- the display module 1230 is configured to display a virtual environment picture, where the virtual environment picture is a picture for displaying a virtual scene, the virtual scene includes cutout objects, and the cutout objects include the first object obtained by cutting out the first scene image and the second object obtained by cutting out a second scene image, where the second scene image is an image captured by a second terminal configured with a camera.
- the device also includes:
- the display module 1230 is further configured to display the virtual environment picture based on the scene data and the object data.
- the object data includes the display position of the object
- the display module 1230 is further configured to display the virtual scene based on the scene data
- the display module 1230 is further configured to locate a display position of the cutout object corresponding to the virtual scene based on the object display position; and display the cutout object at the display position.
- the display module 1230 is further configured to display a calibration picture, the calibration picture includes the first scene image, the first scene image includes an indication frame and an indication line, the indication frame is used to frame-select the first object, and the indication line is located at a specified position of the first scene image and divides the first scene image into a first area and a second area;
- the display module 1230 is further configured to display the interactive animation through the virtual environment picture in response to the cutout actions of the first object and the second object meeting the action requirements.
- the virtual scene further includes an interaction triggering object
- the display module 1230 is further configured to display an interactive animation between the second object and the virtual scene through the virtual environment picture in response to the cutout action of the second object coming into contact with the interaction triggering object.
- the display module 1230 is further configured to display, through the virtual environment picture, an interactive animation corresponding to a preset reference action in response to the cutout actions of the first object and the second object matching the preset reference action.
- the virtual scene-based interactive device provided by the above embodiments is illustrated only by the division of the above functional modules.
- in practical applications, the above functions can be assigned to different functional modules as needed, that is, the internal structure of the device is divided into different functional modules to complete all or part of the functions described above.
- the virtual scene-based interaction device and the virtual scene-based interaction method embodiments provided by the above embodiments belong to the same idea, and the specific implementation process thereof is detailed in the method embodiments, and will not be repeated here.
- the processor 1401 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like.
- Memory 1402 may include one or more computer-readable storage media, which may be non-transitory.
- the non-transitory computer-readable storage medium in the memory 1402 is used to store at least one instruction, and the at least one instruction is used to be executed by the processor 1401 to implement the virtual-based The interaction method of the scene.
- the terminal 1400 may optionally further include: a peripheral device interface 1403 and at least one peripheral device.
- the processor 1401, the memory 1402, and the peripheral device interface 1403 may be connected through buses or signal lines.
- Each peripheral device can be connected to the peripheral device interface 1403 through a bus, a signal line or a circuit board.
- the peripheral device includes: at least one of a radio frequency circuit 1404 , a display screen 1405 , a camera 1406 , an audio circuit 1407 and a power supply 1409 .
- the peripheral device interface 1403 may be used to connect at least one I/O (Input/Output) related peripheral device to the processor 1401 and the memory 1402 .
- the radio frequency circuit 1404 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals.
- the radio frequency circuit 1404 communicates with the communication network and other communication devices through electromagnetic signals.
- the display screen 1405 is used to display a UI (User Interface).
- the UI can include graphics, text, icons, video, and any combination thereof.
- the camera assembly 1406 is used to capture images or video.
- Audio circuitry 1407 may include a microphone and speakers.
- the power supply 1409 is used to supply power to various components in the terminal 1400 .
- FIG. 14 does not limit the terminal 1400, which may include more or fewer components than shown in the figure, combine certain components, or adopt a different component arrangement.
- the computer-readable storage medium may include: a read-only memory (ROM), a random access memory (RAM), a solid-state drive (SSD), an optical disc, or the like.
- the random access memory may include a resistive random access memory (ReRAM) and a dynamic random access memory (DRAM).
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Human Computer Interaction (AREA)
- General Engineering & Computer Science (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Optics & Photonics (AREA)
- Signal Processing (AREA)
- Processing Or Creating Images (AREA)
Abstract
An interaction method, apparatus, device, medium, and program product based on a virtual scene, relating to the field of virtual environments. The method includes: receiving a virtual scene display operation (401); capturing a first scene image through a camera (402); and displaying a virtual environment picture that includes a virtual scene and cutout objects, the cutout objects including a first object obtained by matting the first scene image and a second object obtained by matting a second scene image, where the second scene image is an image captured by a second terminal configured with a camera (403).
Description
This application claims priority to Chinese Patent Application No. 202110703616.5, filed on June 24, 2021 and entitled "Interaction Method, Apparatus, and Device Based on Virtual Scene, and Readable Storage Medium", the entire contents of which are incorporated herein by reference.
The embodiments of this application relate to the field of virtual environments, and in particular to an interaction method, apparatus, device, medium, and program product based on a virtual scene.
An application based on a virtual scene is usually a program that runs on a virtual environment after the virtual environment has been constructed from three-dimensional models. When the application runs, the player can interact with the virtual environment by controlling a virtual object to move within it.
In the related art, when controlling a virtual object in the virtual environment, the player can control it by touching the display screen or by inputting control signals through an external input device, and the virtual object moves in the virtual environment according to the player's control.
However, the interaction realized in the above way is limited to the interaction of virtual objects within the virtual environment; the interaction mode is relatively monotonous, and the player has to control a virtual object to complete the interaction, making the interaction process rather cumbersome.
Summary of the Invention
The embodiments of this application provide an interaction method, apparatus, device, medium, and program product based on a virtual scene, which can improve the diversity and efficiency of the player's interaction with the virtual environment. The technical solutions are as follows:
In one aspect, an interaction method based on a virtual scene is provided, the method being executed by a first terminal configured with a camera, the method including:
receiving a virtual scene display operation;
capturing a first scene image through the camera, the first scene image including a first object, the first object being located within the shooting range of the camera of the first terminal;
displaying a virtual environment picture, the virtual environment picture being a picture that displays a virtual scene, the virtual scene including cutout objects, the cutout objects including the first object obtained by matting the first scene image and a second object obtained by matting a second scene image, where the second scene image is an image captured by a second terminal configured with a camera.
In another aspect, an interaction apparatus based on a virtual scene is provided, the apparatus including:
a receiving module, configured to receive a virtual scene display operation;
a capture module, configured to capture a first scene image through the camera, the first scene image including a first object, the first object being located within the shooting range of the camera of the first terminal;
a display module, configured to display a virtual environment picture, the virtual environment picture being a picture that displays a virtual scene, the virtual scene including cutout objects, the cutout objects including the first object obtained by matting the first scene image and a second object obtained by matting a second scene image, where the second scene image is an image captured by a second terminal configured with a camera.
In another aspect, a computer device is provided, the computer device including a processor and a memory, the memory storing at least one program, the at least one program being loaded and executed by the processor to implement the interaction method based on a virtual scene according to any of the above embodiments of this application.
In another aspect, a computer-readable storage medium is provided, the computer-readable storage medium storing at least one program, the at least one program being loaded and executed by a processor to implement the interaction method based on a virtual scene according to any of the above embodiments of this application.
In another aspect, a computer program product is provided, the computer program product including computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the interaction method based on a virtual scene according to any of the above embodiments.
The beneficial effects of the technical solutions provided by the embodiments of this application include at least the following:
While the virtual scene is displayed, a first object and a second object are additionally displayed in the virtual scene, where the first object and the second object are cut out of scene images captured by cameras; that is, real people and things are combined with the virtual scene so that they can interact with the virtual scene directly, without interacting in the form of virtual objects. This improves the diversity of interaction between the virtual scene and the user and, since the player does not need to control a virtual object to interact with the virtual scene, improves interaction efficiency. Meanwhile, when adding objects to the virtual scene, real people and things are captured directly by cameras, so no data modeling is needed for new objects, which reduces the resource consumption of generating model data and of storing model data.
FIG. 1 is a schematic diagram of a cutout object generation process provided by an exemplary embodiment of this application;
FIG. 2 is a structural block diagram of an electronic device provided by an exemplary embodiment of this application;
FIG. 3 is a schematic diagram of an implementation environment provided by an exemplary embodiment of this application;
FIG. 4 is a flowchart of an interaction method based on a virtual scene provided by an exemplary embodiment of this application;
FIG. 5 is a schematic interface diagram of a virtual environment picture provided based on the embodiment shown in FIG. 4;
FIG. 6 is an overall schematic diagram of an implementation environment provided by an exemplary embodiment of this application;
FIG. 7 is a flowchart of an interaction method based on a virtual scene provided by another exemplary embodiment of this application;
FIG. 8 is a schematic diagram of an observation angle change process provided based on the embodiment shown in FIG. 7;
FIG. 9 is a flowchart of an interactive observation method based on a virtual scene provided by another exemplary embodiment of this application;
FIG. 10 is a schematic interface diagram of an angle adjustment control provided based on the embodiment shown in FIG. 9;
FIG. 11 is an overall flowchart of an interaction process based on a virtual scene provided by an exemplary embodiment of this application;
FIG. 12 is a structural block diagram of an interaction apparatus based on a virtual scene provided by an exemplary embodiment of this application;
FIG. 13 is a structural block diagram of an interaction apparatus based on a virtual scene provided by another exemplary embodiment of this application;
FIG. 14 is a structural block diagram of a terminal provided by an exemplary embodiment of this application.
First, the terms involved in the embodiments of this application are briefly introduced:
Virtual environment: the virtual environment displayed (or provided) by an application when it runs on a terminal. The virtual environment may be a simulated environment of the real world, a semi-simulated and semi-fictional environment, or a purely fictional environment. The virtual environment may be any one of a two-dimensional virtual environment, a 2.5-dimensional virtual environment, and a three-dimensional virtual environment, which is not limited in this application. The following embodiments are described with the example of the virtual environment being a three-dimensional virtual environment. In the embodiments of this application, the virtual environment is also referred to as a virtual scene.
Cutout object: a specified object cut out of a scene image after the scene image is captured by a real-scene camera. Illustratively, the embodiments of this application are described with the example of matting a human figure from the scene image to obtain the cutout object. Illustratively, refer to FIG. 1, which shows a schematic diagram of a cutout object generation process provided by an exemplary embodiment of this application. As shown in FIG. 1, a scene is captured by a real-scene camera 100 to obtain a scene image 110, where the image capture range of the real-scene camera 100 includes a person 120, so that the scene image 110 includes a corresponding object 121; the object 121 is cut out of the scene image 110 to obtain a cutout object 122.
In the embodiments of this application, by displaying in the virtual environment picture the virtual scene and the cutout objects located in the virtual scene, the experience of the player personally interacting in the virtual scene is created.
The terminal in this application may be a desktop computer, a laptop portable computer, a mobile phone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a vehicle-mounted terminal, an aircraft, and so on. An application supporting a virtual environment, such as an application supporting a three-dimensional virtual environment, is installed and runs on the terminal. The application may be any one of a virtual reality application, a three-dimensional map program, a third-person shooting game (TPS), a first-person shooting game (FPS), and a multiplayer online battle arena game (MOBA). Optionally, the application may be a standalone application, such as a standalone three-dimensional game program, or a network-based online application.
FIG. 2 shows a structural block diagram of an electronic device provided by an exemplary embodiment of this application. The electronic device 200 includes an operating system 220 and an application 222.
The operating system 220 is the base software that provides the application 222 with secure access to the computer hardware.
The application 222 is an application supporting a virtual environment. Optionally, the application 222 is an application supporting a three-dimensional virtual environment. The application 222 may be any one of a virtual reality application, a three-dimensional map program, a TPS game, an FPS game, a MOBA game, and a multiplayer gunfight survival game. The application 222 may be a standalone application, such as a standalone three-dimensional game program, or a network-based online application.
FIG. 3 shows a structural block diagram of a computer system provided by an exemplary embodiment of this application. The computer system 300 includes a first device 320, a server 340, and a second device 360.
An application supporting a virtual environment is installed and runs on the first device 320. The application may be any one of a virtual reality application, a three-dimensional map program, a TPS game, an FPS game, a MOBA game, and a multiplayer gunfight survival game. The first device 320 is a device used by a first user, and the first user uses the first device 320 to control the activity of a first cutout object located in the virtual environment, where the first device 320 is configured with a first camera; after the first camera captures images of the first user or of other users within its capture range and the images are matted, the first cutout object is displayed in the virtual environment.
The first device 320 is connected to the server 340 through a wireless network or a wired network.
The server 340 includes at least one of a single server, multiple servers, a cloud computing platform, and a virtualization center. The server 340 is used to provide backend services for the application supporting the three-dimensional virtual environment. Optionally, the server 340 undertakes the primary computing work and the first device 320 and the second device 360 undertake the secondary computing work; or the server 340 undertakes the secondary computing work and the first device 320 and the second device 360 undertake the primary computing work; or the server 340, the first device 320, and the second device 360 perform collaborative computing using a distributed computing architecture.
An application supporting a virtual environment is installed and runs on the second device 360. The second device 360 is a device used by a second user, and the second user uses the second device 360 to control the activity of a second cutout object located in the virtual environment, where the second device 360 is configured with a second camera; after the second camera captures images of the second user or of other users within its capture range and the images are matted, the second cutout object is displayed in the virtual environment.
Optionally, the first cutout object and the second cutout object are in the same virtual environment. Optionally, the first cutout object and the second cutout object may belong to the same team or the same organization, have a friend relationship, or have temporary communication permissions. Optionally, the first cutout object and the second cutout object may also belong to different teams, different organizations, or two hostile groups.
Optionally, the applications installed on the first device 320 and the second device 360 are the same, or the applications installed on the two devices are the same type of application on different control-system platforms. The first device 320 may generally refer to one of multiple devices, and the second device 360 may generally refer to one of multiple devices; this embodiment is illustrated only with the first device 320 and the second device 360. The device types of the first device 320 and the second device 360 are the same or different, and include at least one of a game console, a desktop computer, a smartphone, a tablet computer, an e-book reader, an MP3 player, an MP4 player, a laptop portable computer, a vehicle-mounted terminal, and an aircraft. The following embodiments are illustrated with the example of the device being a desktop computer.
Those skilled in the art will appreciate that the number of the above devices may be larger or smaller. For example, there may be only one such device, or dozens, hundreds, or more such devices. The embodiments of this application do not limit the number and types of devices.
It is worth noting that the server 340 may be implemented as a physical server or as a cloud server, where cloud technology refers to a hosting technology that unifies a series of resources such as hardware, software, and networks within a wide area network or a local area network to realize the computing, storage, processing, and sharing of data.
In some embodiments, the method provided by the embodiments of this application can be applied to a cloud gaming scenario, so that the computation of the data logic during the game is completed by a cloud server, while the terminal is responsible for displaying the game interface.
In some embodiments, the server 340 may also be implemented as a node in a blockchain system. Blockchain is a new application mode of computer technologies such as distributed data storage, peer-to-peer transmission, consensus mechanisms, and encryption algorithms.
The application scenarios of the embodiments of this application include at least one of the following:
First, application in a game scenario, where the game may be implemented as a cloud game, that is, the cloud server completes the computation logic during the game and the terminal completes the display logic during the game.
Illustratively, the game may be implemented as at least one of a dancing game, a shooting game, and a puzzle game. Player A captures a scene image through a first terminal configured with a first camera, and the cutout object a corresponding to player A is matted from the scene image; player B captures a scene image through a second terminal configured with a second camera, and the cutout object b corresponding to player B is matted from the scene image. The cutout object a, the cutout object b, and the preset virtual scene are displayed in the terminal interface, thereby realizing the process of player A and player B interacting in the virtual scene and participating in the game.
Second, application in a livestreaming scenario, where the livestreaming application includes a streamer and viewers; the streamer is the user who creates the live room, and the viewers are the users who watch the live room. In the live room, a viewer can interact with other viewers in the virtual scene, or the streamer can interact with viewers in the virtual scene.
Illustratively, streamer 1 creates a virtual scene interactive activity in the live room and invites viewers to participate, and viewer 2 is invited to participate in the virtual scene interactive activity together with streamer 1. Streamer 1 captures a scene image through a first terminal configured with a first camera, and the cutout object m corresponding to streamer 1 is matted from the scene image; viewer 2 captures a scene image through a second terminal configured with a second camera, and the cutout object n corresponding to viewer 2 is matted from the scene image. The cutout object m, the cutout object n, and the preset virtual scene are displayed in the terminal interface, thereby realizing the interaction of streamer 1 and viewer 2 in the virtual scene, where viewers other than viewer 2 can watch the interaction of streamer 1 and viewer 2 in the virtual scene.
With reference to the above introduction of terms and the description of the implementation environment, the interaction method based on a virtual scene provided by the embodiments of this application is described. Refer to FIG. 4, which shows a flowchart of an interaction method based on a virtual scene provided by an exemplary embodiment of this application, described with the example of the method being executed by a first terminal configured with a camera. As shown in FIG. 4, the method includes:
Step 401: receive a virtual scene display operation.
In some embodiments, the virtual scene display operation is an operation by which the user instructs to open the virtual scene.
Illustratively, for different application scenarios, the virtual scene display operation is implemented in at least one of the following ways:
First, for the cloud gaming application scenario, the user triggers a game start operation as the virtual scene display operation, that is, a cloud game match is started and the game interface is entered according to the game start operation. The user may team up with friends before entering the cloud game match, or invite friends to team up after entering the cloud game match.
Second, for the livestreaming application scenario, the streamer account triggers an interactive space opening operation as the virtual scene display operation, that is, a virtual scene in which the streamer account interacts with viewer accounts is opened according to the interactive space opening operation.
Illustratively, the player opens the game livestream link on a smart device and fixes a camera device in front of the display, ensuring that the specified body parts or the whole body are within the framing range of the camera.
In some embodiments, the camera configured on the first terminal is a 2D camera, that is, it captures planar images; or the camera configured on the first terminal is a 3D camera, that is, it captures depth-of-field information in the process of capturing images.
When the camera configured on the first terminal is a 2D camera, the data exchange between the terminal and the server is data exchange based on planar images, which reduces the amount of data exchanged; when the camera configured on the first terminal is a 3D camera, the terminal can construct a three-dimensional model corresponding to the player from the captured depth information, improving the realism of the cutout object displayed in the virtual scene.
Step 402: capture a first scene image through the camera.
The first scene image includes the first object located within the shooting range of the camera of the first terminal. That is, the first scene image is the image captured by the camera configured on the first terminal, where the first terminal captures first scene images continuously in the form of a video stream, and during the capture of the first scene images the first object is within the shooting range of the camera and is shown in the first scene image.
In some embodiments, when the first terminal receives the virtual scene display operation, it turns on the camera and captures the first scene image through the camera; that is, in the process of displaying the virtual scene upon receiving the virtual scene display operation, the camera is turned on immediately to capture the first scene image.
In other embodiments, after the first terminal receives the virtual scene display operation, it displays the virtual environment picture; at this point, no cutout object exists in the virtual environment picture, or only cutout objects other than the first object corresponding to the first terminal exist. In response to receiving a join operation, the camera is turned on to capture the first scene image.
After the first scene image is captured, it also needs to be matted, and the first object is cut out of the first scene image, the first object being the real person or thing shown in the first scene image. In some embodiments, after the first terminal uploads the first scene image to the server, the server performs object recognition on the first scene image and determines the first object in the first scene image; alternatively, the recognition of the first object may also be implemented by the first terminal, which is not limited here.
In some embodiments, when the first scene image includes multiple objects, after object recognition is performed by the first terminal or the server, multiple candidate objects are displayed, and the first terminal determines the first object among the candidate objects by receiving an object selection operation; optionally, one or more first objects may be determined from the multiple candidate objects, as the sketch below illustrates.
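Purely as an illustration of this selection step, the sketch below shows one way a terminal could map an object selection operation onto recognized candidates; the Candidate structure, its fields, and the pick_first_objects helper are hypothetical names introduced here, not part of the patent's data model.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Candidate:
    object_id: int
    box: tuple          # (x, y, w, h) bounding box in the scene image
    confidence: float   # detection score from the recognition model

def pick_first_objects(candidates: List[Candidate],
                       selected_ids: List[int]) -> List[Candidate]:
    """Return the candidates whose ids the user picked via the
    object selection operation; one or several may be chosen."""
    by_id = {c.object_id: c for c in candidates}
    return [by_id[i] for i in selected_ids if i in by_id]

# Example: the recognizer found three people; the user taps ids 0 and 2.
detected = [Candidate(0, (40, 60, 120, 300), 0.97),
            Candidate(1, (300, 80, 100, 280), 0.91),
            Candidate(2, (520, 70, 110, 290), 0.88)]
first_objects = pick_first_objects(detected, selected_ids=[0, 2])
print([c.object_id for c in first_objects])  # -> [0, 2]
```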
The matting of the first scene image may be completed by the terminal or by the server. When the matting of the first scene image is completed by the terminal, the first scene image is matted directly after the camera captures it, which saves the amount of data exchanged between the terminal and the server; when the matting of the first scene image is completed by the server, after the camera captures the first scene image, the first scene image is sent to the server, and the server mats the first scene image to obtain the first object.
In some embodiments, before the terminal turns on the camera to obtain the first scene image, it needs to obtain the user's authorization for the application to capture the first scene image through the camera; that is, an authorization prompt is displayed, the authorization prompt including a notice that the camera needs to be enabled and a notice about the intended use of the scene images captured by the camera, and in response to a confirmation operation being received for the authorization prompt, the terminal turns on the camera.
Step 403: display a virtual environment picture, the virtual environment picture being a picture that displays a virtual scene, the virtual scene including cutout objects.
The cutout objects include the first object obtained by matting the first scene image and the second object obtained by matting a second scene image. The first object is the real person or thing shown in the first scene image, and the second object is the real person or thing shown in the second scene image; optionally, the first object and the second object may be different real people or things captured by different cameras, or the same real person or thing captured by different cameras. Illustratively, the first object and the second object obtained by matting are added to the virtual scene as cutout objects, and the first terminal displays the virtual environment picture according to the virtual scene.
The first scene image is an image captured by the first terminal configured with a camera, and the second scene image is an image captured by the second terminal configured with a camera. The first scene is the scene shot by the camera of the first terminal, and the second scene is the scene shot by the camera of the second terminal; optionally, the first scene and the second scene may be different real scenes shot by different cameras, or the same real scene shot by different cameras.
The virtual scene is a scene constructed by the application; the virtual scene may be a two-dimensional animated scene, a three-dimensional animated scene, or a scene obtained by computer simulation of reality, that is, the virtual scene is a scene fabricated by the computer, while the first scene and the second scene are real scenes shot by cameras.
Illustratively, the virtual scene may be a scene composed of cutout objects and virtual elements together, where the virtual elements include at least one of a virtual environment, virtual objects, and virtual props. The virtual objects are fictional objects in the virtual scene, while the cutout objects are objects in the virtual scene that are used to show people or things that really exist in reality.
The first object and the second object are objects participating together in the same virtual match or virtual room. It is worth noting that this embodiment is described with the example of the virtual scene including the first object and the second object; in some optional embodiments, the virtual scene may also include only one object, or three or more objects, which is not limited in this embodiment.
Optionally, among the cutout objects displayed in the virtual environment picture, the number of displayed instances corresponding to the object matted from the same scene image may be one or several; that is, the user can set the number of cutout objects that map the user into the virtual scene. In one example, a target account is logged in on the first terminal, and the first terminal receives an object-count setting operation indicated by the target account, the display count indicated by the object-count setting operation being a target count; the first terminal displays the target count of first objects in the virtual environment picture according to the object-count setting operation. For example, if the target count indicated by the object-count setting operation of the first terminal is 3 and the target count indicated by the object-count setting operation of the second terminal is 2, the virtual environment picture displays the first object A1, the first object A2, and the first object A3 matted from the first scene image, as well as the second object B1 and the second object B2 matted from the second scene image, where the first object A1, the first object A2, and the first object A3 are objects with the same appearance, and the second object B1 and the second object B2 are objects with the same appearance.
This embodiment is described with the example of the server matting the scene images: the terminal sends the first scene image to the server and receives the picture display data fed back by the server, the picture display data including the scene data corresponding to the virtual scene and the object data corresponding to the cutout objects. When the virtual scene contains cutout objects obtained by matting scene images provided by multiple different terminals, the server obtains the scene images captured by the terminals and mats the scene images to obtain the object data, thereby uniformly configuring the picture display data corresponding to the same virtual scene for multiple different terminals, which reduces the waste of processing resources; at the same time, since the terminals do not need to perform local matting, differences caused by matting scene images under different hardware conditions are avoided, ensuring the uniformity of picture display among terminals displaying the same virtual scene.
The scene data of the virtual scene is data corresponding to the virtual scene determined according to a preset virtual scene; or the scene data is data corresponding to a selection result determined according to the user's choice of virtual scene, which provides a scheme for the user to choose the virtual scene and increases the diversity of interaction in virtual scenes; or the scene data is data corresponding to a random result determined according to a randomly obtained virtual scene. This embodiment does not limit this.
The terminal displays the virtual environment picture based on the scene data and the object data fed back by the server.
In some embodiments, the display positions of the first object and the second object in the virtual scene are determined randomly among preset candidate positions, which adds uncertainty to the object display; or the display positions of the first object and the second object in the virtual scene are indicated to the terminal by the server according to a preset display rule; or the display positions of the first object and the second object in the virtual scene are indicated by the users through the first terminal and the second terminal respectively, that is, the users themselves set the display positions of the cutout objects in the virtual scene. Optionally, the display positions of the first object and the second object in the virtual scene may be fixed, or may change following changes in the position indication.
Illustratively, the object data includes an object display position: the virtual scene is displayed based on the scene data, and the cutout object is displayed at the object display position in the virtual scene. In some embodiments, the object data may be data configured for the terminals by the server after it receives the virtual scene display operation and obtains the scene images; that is, the server mats the scene images obtained from the different terminals to obtain the cutout objects, configures the display positions of the cutout objects to obtain the object data, and sends the object data to each terminal. Since the terminals in the same virtual scene display virtual environment pictures containing the same or similar virtual environment content, having the server configure the object data uniformly for multiple terminals improves the efficiency of object data configuration and reduces the resource consumption of object data configuration.
The object data includes first object data corresponding to the first object and second object data corresponding to the second object; the first object data includes a first object display position and the second object data includes a second object display position, so that the first object is displayed at the corresponding position in the virtual scene according to the first object display position, and the second object is displayed at the corresponding position in the virtual scene according to the second object display position.
The object display position is implemented in the form of coordinates; that is, the display position of the cutout object is indicated by placing a specified marker point of the cutout object at the object display position. Illustratively, the display position of the cutout object is indicated by making the center point of the minimum bounding box of the cutout object coincide with the object display position.
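A minimal sketch of this anchoring rule, assuming for illustration screen-space coordinates and an axis-aligned minimum bounding box, derives the top-left draw origin of a cutout from its bounding-box center:

```python
def cutout_draw_origin(display_pos, bbox_size):
    """Place the center of the cutout's minimum bounding box at
    the object display position; return the top-left draw origin."""
    cx, cy = display_pos          # object display position (coordinates)
    w, h = bbox_size              # size of the cutout's minimum bounding box
    return (cx - w / 2, cy - h / 2)

# Example: a 120x300 cutout anchored at display position (400, 260).
print(cutout_draw_origin((400, 260), (120, 300)))  # -> (340.0, 110.0)
```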
In some embodiments, the display size of the cutout object is related to the display size of the object in the scene image; that is, the closer the player is to the camera, the larger the object display area corresponding to the player in the captured scene image, and the larger the display size of the resulting cutout object. Or, in other embodiments, the display size of the cutout object is obtained by the server adjusting the matted size according to preset size requirements; that is, after matting the obtained scene images to obtain the cutout objects, the server unifies the display sizes of the cutout objects in the same virtual scene to ensure that the cutout objects are displayed reasonably in the virtual scene. Or, in other embodiments, the display size of the cutout object may also be determined by a size adjustment operation indicated by the terminal corresponding to the cutout object; in one example, the user may input a size adjustment operation through the first device to adjust the display size of the first object.
In some embodiments, the scene images are also displayed overlaid on the virtual environment picture, for example, the first scene image and the second scene image are displayed overlaid in the upper right corner of the virtual environment picture. Illustratively, FIG. 5 is a schematic interface diagram of a virtual environment picture provided by an exemplary embodiment of this application. As shown in FIG. 5, a first scene image 510 and a second scene image 520 are displayed overlaid on the virtual environment picture 500; the first scene image 510 includes the first object 511, corresponding to the first object 511 displayed in the virtual environment picture 500, that is, the first object 511 displayed in the virtual environment picture 500 is matted from the first scene image 510; the second scene image 520 includes the second object 521, corresponding to the second object 521 displayed in the virtual environment picture 500, that is, the second object 521 displayed in the virtual environment picture 500 is matted from the second scene image 520.
In some embodiments, the angle from which the virtual scene is observed can be adjusted, where the target account corresponds in the virtual scene to one camera model that observes the virtual scene, or the target account corresponds in the virtual scene to multiple camera models that observe the virtual scene. The target account is the account logged in to the application providing the virtual scene on the current terminal.
When the target account can control only one camera model in the virtual scene, then when the observation angle needs to be adjusted, the observation position, observation focal length, and observation angle of the camera model in the virtual scene are adjusted. Illustratively, as shown in FIG. 5, camera model identifiers 530 are also displayed on the virtual environment picture 500, including a first camera model 531, a second camera model 532, a third camera model 533, and a fourth camera model 534. When the target account corresponds to one camera model, taking the target account corresponding to the first camera model 531 as an example, the second camera model 532, the third camera model 533, and the fourth camera model 534 are camera models used by other accounts participating in the virtual scene to observe the virtual scene. When the target account needs to adjust the angle from which the virtual scene is observed, the observation position of the first camera model 531 in the virtual scene is adjusted; or the observation focal length of the first camera model 531 in the virtual scene is adjusted; or the observation angle of the first camera model 531 in the virtual scene is adjusted.
When the target account can control multiple camera models in the virtual scene, then when the observation angle needs to be adjusted, switching is performed among the camera models observing the virtual environment. That is, a viewing-angle adjustment operation is received, and based on the viewing-angle adjustment operation, the first observation angle for observing the virtual scene and the cutout objects is adjusted to a second observation angle, where the first observation angle corresponds to the first camera model in the virtual scene and the second observation angle corresponds to the second camera model in the virtual scene. Optionally, the camera models corresponding to the target account in the virtual scene are determined, the camera models including at least the first camera model and the second camera model, the first camera model being the camera model currently used to observe the virtual scene; based on the viewing-angle adjustment operation, the first camera model observing the virtual scene is switched to the second camera model, and the virtual scene is observed from the second observation angle, where the first observation angle of the first camera model differs from the second observation angle of the second camera model. That is, by mapping camera models one-to-one to observation angles, a switching function for the observation angle is provided, and the viewing-angle adjustment operation can quickly determine which camera model supplies the displayed virtual environment picture, which improves the efficiency of picture switching when the observation angle is changed. Moreover, when the viewing angle is adjusted by binding accounts to camera models, the camera models can be allocated according to each account's permission to observe the virtual scene, preventing the virtual environment picture from displaying non-compliant content.
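The one-to-one binding between an account's permitted camera models and observation angles could be sketched as follows; the class names, the permission check, and the numbering borrowed from FIG. 5 are illustrative assumptions rather than the patent's implementation:

```python
class CameraModel:
    def __init__(self, cam_id, position, angle):
        self.cam_id = cam_id
        self.position = position  # observation position in the scene
        self.angle = angle        # observation angle

class AccountView:
    """Bind an account to the camera models it is allowed to use,
    and switch among them on a viewing-angle adjustment operation."""
    def __init__(self, account_id, cameras):
        self.account_id = account_id
        self.cameras = {c.cam_id: c for c in cameras}
        self.active = next(iter(self.cameras))  # first camera by default

    def switch_to(self, cam_id):
        if cam_id not in self.cameras:
            raise PermissionError(f"account {self.account_id} may not use camera {cam_id}")
        self.active = cam_id
        return self.cameras[cam_id]

# Example: the target account owns cameras 531 and 532 and switches views.
view = AccountView("target", [CameraModel(531, (0, 0, 5), 0),
                              CameraModel(532, (4, 0, 5), 90)])
cam = view.switch_to(532)
print(cam.cam_id, cam.angle)  # -> 532 90
```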
Illustratively, as shown in FIG. 5, camera model identifiers 530 are also displayed on the virtual environment picture 500, including the first camera model 531, the second camera model 532, the third camera model 533, and the fourth camera model 534. When the target account corresponds to multiple camera models, illustratively, the first camera model 531, the second camera model 532, the third camera model 533, and the fourth camera model 534 are all camera models corresponding to the target account in the virtual scene. Illustratively, the target account currently observes the virtual scene through the first camera model 531; when the target account needs to adjust the angle from which the virtual scene is observed, based on the viewing-angle adjustment operation, it switches from the first camera model 531 to the second camera model 532 and observes the virtual scene and the cutout objects through the second camera model 532.
In some embodiments, when the angle from which the virtual scene is observed is adjusted, the display direction of the cutout objects is adjusted in real time according to the angle change, keeping the cutout objects displayed facing the observation angle.
In summary, with the method provided by the embodiments of this application, while the virtual scene is displayed, a first object and a second object are additionally displayed in the virtual scene, where the first object and the second object are cut out of scene images captured by cameras; that is, real people and things are combined with the virtual scene so that they can interact with the virtual scene directly, without interacting in the form of virtual objects. This improves the diversity of interaction between the virtual scene and the user and, since the player does not need to control a virtual object to interact with the virtual scene, improves interaction efficiency. Meanwhile, when adding objects to the virtual scene, real people and things are captured directly by cameras, so no data modeling is needed for new objects, which reduces the resource consumption of generating model data and of storing model data.
Illustratively, FIG. 6 is an overall schematic diagram of an implementation environment provided by an exemplary embodiment of this application. As shown in FIG. 6, player A operates a smart device 610 to participate in the game, the smart device 610 being configured with a camera to capture images of player A; player B operates a smart device 620 to participate in the game, the smart device 620 being configured with a camera to capture images of player B. The server 640 receives the captured scene images sent by the smart device 610 and the smart device 620, mats the scene images to obtain the cutout objects corresponding to player A and player B, and feeds the virtual scene and the cutout objects back to the viewing terminals 650 for display, where the smart device 610 and the smart device 620 are also viewing terminals.
In some embodiments, the user first needs to perform picture calibration so as to distinguish the object from the image background and improve the matting accuracy for the object. FIG. 7 is a flowchart of an interaction method based on a virtual scene provided by another exemplary embodiment of this application, described with the example of the method being applied to a first terminal configured with a camera. As shown in FIG. 7, the method includes:
Step 701: receive a virtual scene display operation.
In some embodiments, the virtual scene display operation is an operation by which the user instructs to open the virtual scene.
Illustratively, the player opens the game livestream link on a smart device and fixes a camera device in front of the display, ensuring that the specified body parts or the whole body are within the framing range of the camera.
Step 702: capture a first scene image through the camera.
The first scene image includes the first object located within the shooting range of the camera of the first terminal. That is, the first scene image is the image captured by the camera configured on the first terminal.
Step 703: display a calibration picture, the calibration picture including the first scene image, the first scene image including an indication frame and an indication line.
The indication frame is used to frame-select the first object, and the indication line is located at a specified position of the first scene image and divides the first scene image into a first area and a second area.
Illustratively, refer to FIG. 8, which shows a schematic diagram of a calibration picture provided by an exemplary embodiment of this application. As shown in FIG. 8, a first scene image 810 is displayed in the calibration picture 800; the first scene image 810 includes the first object 820, an indication frame 830, and an indication line 840, where the indication frame 830 frame-selects the first object 820 through a pre-trained object recognition model, and the indication line 840 is displayed vertically in the middle of the first scene image 810, dividing the first scene image 810 into a left half area 841 and a right half area 842.
It is worth noting that in FIG. 8 above, the indication line is described with the example of being displayed vertically; in some embodiments, the indication line may also be implemented as being displayed horizontally, which is not limited in this embodiment.
Step 704: indicate the background part of the first scene image by capturing staged images of the first object moving from a first position to a second position.
The first position is a position where the indication frame is within the first area, and the second position is a position where the indication frame is within the second area. That is, the indication frame is controlled to move from the first area of the first scene image to the second area of the first scene image, so that while the indication frame is displayed within the first area, the content displayed in the second area is a complete background image, and while the indication frame is displayed within the second area, the content displayed in the first area is a complete background image. Combining the two complete parts of the background image yields the complete background image of the first scene image excluding the first object; when the first object is subsequently matted from the first scene image, the matting can be performed based on the background image already identified, which improves the accuracy of the matting result.
Illustratively, refer to FIG. 9, which shows a schematic diagram of the calibration process provided by an exemplary embodiment of this application. As shown in FIG. 9, at the initial moment the first object 910 is in the middle of the first scene image 900. First the player moves to the left, so that the first object 910 is at the left side of the first scene image 900 and the indication frame 920 lies entirely to the left of the indication line 930; then the player moves to the right, so that the first object 910 is at the right side of the first scene image 900 and the indication frame 920 lies entirely to the right of the indication line 930. From the image to the right of the indication line 930 while the indication frame 920 lies entirely to the left of the indication line 930, and the image to the left of the indication line 930 while the indication frame 920 lies entirely to the right of the indication line 930, the background image of the first scene image 900 is obtained, providing the basis for matting the first object 910 from the first scene image 900.
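A minimal sketch of this stitching step, assuming for illustration a vertical indication line at pixel column split_x and frames held as NumPy arrays, might look like this:

```python
import numpy as np

def compose_background(frame_person_left: np.ndarray,
                       frame_person_right: np.ndarray,
                       split_x: int) -> np.ndarray:
    """Stitch a clean background plate from two calibration frames:
    while the subject stands left of the indication line, the right
    half is pure background, and vice versa."""
    h, w, c = frame_person_left.shape
    background = np.empty((h, w, c), dtype=frame_person_left.dtype)
    background[:, split_x:] = frame_person_left[:, split_x:]   # right half
    background[:, :split_x] = frame_person_right[:, :split_x]  # left half
    return background

# Example with dummy 480x640 frames split at the vertical midline.
left_frame = np.zeros((480, 640, 3), dtype=np.uint8)
right_frame = np.full((480, 640, 3), 255, dtype=np.uint8)
plate = compose_background(left_frame, right_frame, split_x=320)
print(plate.shape)  # -> (480, 640, 3)
```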
It is worth noting that the above process is described with the calibration of the first scene image as an example; in the embodiments of this application, the calibration process of the second scene image is consistent with that of the first scene image and is not described again.
In some embodiments, the above calibration process may be executed before the cutout object corresponding to the current terminal is displayed; or it may be executed when the terminal detects that the background image in the scene images has changed, that is, in response to detecting through a sensor that the shooting direction of the camera corresponding to the terminal has changed, the user is prompted that the calibration process needs to be executed, or in response to detecting that the background image in the captured scene images has changed, the user is prompted that the calibration process needs to be executed.
Step 705: display a virtual environment picture, the virtual environment picture being a picture that displays the virtual scene, the virtual scene including cutout objects.
The cutout objects include the first object obtained by matting the first scene image and the second object obtained by matting a second scene image, the second scene image being an image captured by a second terminal configured with a camera.
In some embodiments, in response to the calibration processes of the first object and the second object being completed, the virtual environment picture is displayed; the server mats the first object from the first scene image according to the first background area calibrated for the first object and, in the same way, mats the second object from the second scene image according to the second background area calibrated for the second object, thereby displaying the first object and the second object in the virtual scene.
In summary, with the method provided by the embodiments of this application, while the virtual scene is displayed, a first object and a second object are additionally displayed in the virtual scene, where the first object and the second object are cut out of scene images captured by cameras; that is, real people and things are combined with the virtual scene so that they can interact with the virtual scene directly, without interacting in the form of virtual objects. This improves the diversity of interaction between the virtual scene and the user and, since the player does not need to control a virtual object to interact with the virtual scene, improves interaction efficiency. Meanwhile, when adding objects to the virtual scene, real people and things are captured directly by cameras, so no data modeling is needed for new objects, which reduces the resource consumption of generating model data and of storing model data.
With the method provided by this embodiment, through the calibration process the background areas of the first scene image and the second scene image are determined, providing the basis for subsequently matting the first object from the first scene image and the second object from the second scene image, which improves the accuracy of matting the first object from the first scene image and the accuracy of matting the second object from the second scene image.
In an optional embodiment, the first object and the second object can also interact in the virtual scene. FIG. 10 is a flowchart of an interaction method based on a virtual scene provided by another exemplary embodiment of this application, described with the example of the method being applied to a first terminal configured with a camera. As shown in FIG. 10, the method includes:
Step 1001: receive a virtual scene display operation.
In some embodiments, the virtual scene display operation is an operation by which the user instructs to open the virtual scene.
Illustratively, the player opens the game livestream link on a smart device and fixes a camera device in front of the display, ensuring that the specified body parts or the whole body are within the framing range of the camera.
Step 1002: capture a first scene image through the camera.
The first scene image includes the first object located within the shooting range of the camera of the first terminal. That is, the first scene image is the image captured by the camera configured on the first terminal.
Step 1003: display a virtual environment picture, the virtual environment picture including the virtual scene and cutout objects.
The cutout objects include the first object obtained by matting the first scene image and the second object obtained by matting a second scene image, the second scene image being an image captured by a second terminal configured with a camera.
In some embodiments, in response to the calibration processes of the first object and the second object being completed, the virtual environment picture is displayed, and the server mats the first object from the first scene image according to the first background area obtained by calibrating the picture of the first object.
Step 1004: in response to the first object and the second object meeting interaction requirements in the virtual scene, display an interactive animation through the virtual environment picture.
In some embodiments, in response to the cutout actions of the first object and the second object meeting action requirements, the interactive animation is displayed through the virtual environment picture; the interactive animation is displayed in response to the cutout action of either one of the first object and the second object meeting the action requirements, or the interactive animation is displayed in response to the cutout actions of both the first object and the second object meeting the action requirements.
The action requirements include at least one of the following cases:
First, the virtual scene includes an interaction triggering object; in response to the cutout action of the first object coming into contact with the interaction triggering object, an interactive animation between the first object and the virtual scene is displayed through the virtual environment picture; or, in response to the cutout action of the second object coming into contact with the interaction triggering object, an interactive animation between the second object and the virtual scene is displayed through the virtual environment picture.
Illustratively, taking the interaction process of the first object as an example, a target object is displayed in the virtual scene, and the first object contacts the target object by performing any action, thereby triggering a display special effect of the first object in the virtual scene, for example, after the first object contacts the target object, a fireworks special effect is displayed around the first object in the virtual scene. That is, an interaction mode between the cutout objects and virtual items in the virtual scene is provided, which enriches the interaction diversity when the user realizes the display of an object in the virtual scene through a cutout object; and since the cutout object can be a real person in the scene image, the sense of interaction between people and virtual items can be enhanced.
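One plausible way to detect such contact is an axis-aligned bounding-box overlap test; this is a sketch under that assumption, since the patent does not specify the geometry test:

```python
def boxes_touch(a, b):
    """Axis-aligned overlap test between a cutout's bounding box and
    an interaction triggering object's region; boxes are (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def maybe_trigger_effect(cutout_box, trigger_box, play_effect):
    # play the special effect around the cutout when contact occurs
    if boxes_touch(cutout_box, trigger_box):
        play_effect("fireworks")

# Example: the first object's raised hand reaches the trigger object.
maybe_trigger_effect((340, 110, 120, 300), (430, 90, 60, 60),
                     lambda name: print("playing:", name))
```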
It is worth noting that the above example is described with the interactive animation between the first object and the virtual scene; this embodiment may also be implemented such that, after the first object contacts the target object, an interaction with the second object is produced, for example, an animation of the first object handing a flower to the second object is displayed.
Second, in response to the cutout actions of the first object and the second object matching a preset reference action, an interactive animation corresponding to the preset reference action is displayed through the virtual environment picture.
Illustratively, when the actions of both the first object and the second object are arms-spread actions, it indicates that the first object and the second object match the preset reference action, and an animation of the first object and the second object dancing hand in hand is displayed.
When judging whether the actions of the first object and the second object match the preset reference action, taking the first object as an example, skeleton points are set on the first object through the server's recognition of the first object, to represent the skeletal action of the first object, so that the skeletal action of the first object is matched against the preset reference action to obtain the match between the first object's action and the preset reference action. The skeleton points are set by computing, from the human pose, the coordinates of the limb skeleton points such as the head, hands, and feet. That is, by detecting the objects' actions, control of the interactive animation of the cutout objects in the virtual scene is realized quickly, which reduces the user's input operations on the terminal during the interaction and improves interaction efficiency; at the same time, since the interactive animation is realized from the matching relationship between the cutout actions of multiple objects and the preset reference action, the sense of interaction between different objects in the virtual scene is enhanced.
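As an illustrative sketch of this matching step, the keypoint names, the normalization, the distance metric, and the threshold below are all assumptions; the patent only states that skeleton-point coordinates computed from the human pose are compared with a preset reference action:

```python
import math

# Hypothetical skeleton: a few named keypoints with (x, y) coordinates,
# normalized so that poses from different cameras are comparable.
REFERENCE_ARMS_SPREAD = {"l_wrist": (-1.0, 0.0), "r_wrist": (1.0, 0.0),
                         "l_shoulder": (-0.4, 0.0), "r_shoulder": (0.4, 0.0)}

def pose_distance(pose, reference):
    """Mean Euclidean distance between matching keypoints."""
    dists = [math.dist(pose[k], reference[k]) for k in reference]
    return sum(dists) / len(dists)

def matches_reference(pose, reference, threshold=0.25):
    return pose_distance(pose, reference) <= threshold

# Example: the player spreads both arms, so the preset action matches.
player_pose = {"l_wrist": (-0.95, 0.05), "r_wrist": (1.02, -0.03),
               "l_shoulder": (-0.41, 0.01), "r_shoulder": (0.39, 0.02)}
print(matches_reference(player_pose, REFERENCE_ARMS_SPREAD))  # -> True
```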
Third, in response to the cutout action of the first object matching a preset reference action, an interactive animation between the first object and the virtual scene is displayed through the virtual environment picture; or, in response to the cutout action of the second object matching a preset reference action, an interactive animation between the second object and the virtual scene is displayed through the virtual environment picture.
In some embodiments, the terminal displays prompt information corresponding to the preset reference action on the interface corresponding to the virtual environment picture. Optionally, the prompt information may be information in text form, image form, or animation form. Optionally, the prompt information may be displayed after an action prompt operation is received; or the prompt information may be called up through an action wheel, the action wheel including prompt information corresponding to at least two candidate preset reference actions; or the prompt information may be displayed automatically when it is detected that the similarity between the cutout action of a cutout object and a preset reference action reaches a preset threshold.
It is worth noting that since the first scene image and the second scene image in the embodiments of this application are two-dimensional images, the first object matted from the first scene image and the second object matted from the second scene image also correspond to two-dimensional images; therefore, the display of the interactive animation includes at least one of the following cases:
1. Three-dimensional virtual objects corresponding to the first object and the second object are created to perform the interactive action, thereby displaying the interactive animation.
The three-dimensional virtual objects corresponding to the first object and the second object may be three-dimensional models obtained by adjusting, on the basis of a preset three-dimensional virtual model and according to the image recognition results of the first object and the second object, model parts and model parameters such as hair style, hair color, top color, bottom type, bottom color, shoe type, and shoe color.
2. A head texture of the first object is pasted at the head position of the preset three-dimensional virtual model as the model with which the first object participates in the interaction, and a head texture of the second object is pasted at the head position of the preset three-dimensional virtual model as the model with which the second object participates in the interaction.
3. An animation of the first object and the second object interacting in the transverse plane is displayed; for example, when the actions of the first object and the second object are stretching out both hands to the sides of the body, a dancing animation of the first object and the second object holding hands and facing the camera is displayed.
In summary, with the method provided by the embodiments of this application, while the virtual scene is displayed, a first object and a second object are additionally displayed in the virtual scene, where the first object and the second object are cut out of scene images captured by cameras; that is, real people and things are combined with the virtual scene so that they can interact with the virtual scene directly, without interacting in the form of virtual objects. This improves the diversity of interaction between the virtual scene and the user and, since the player does not need to control a virtual object to interact with the virtual scene, improves interaction efficiency. Meanwhile, when adding objects to the virtual scene, real people and things are captured directly by cameras, so no data modeling is needed for new objects, which reduces the resource consumption of generating model data and of storing model data.
The method provided by this embodiment enables interaction between the first object and the virtual scene, between the second object and the virtual scene, and between the first object and the second object, which adds interaction modes to the virtual scene and improves the diversity of interaction between the user and virtual objects or between users.
Illustratively, take as an example the application of the interaction method based on a virtual scene provided by the embodiments of this application to a cloud game. FIG. 11 is an overall flowchart of an interaction process based on a virtual scene provided by an exemplary embodiment of this application. As shown in FIG. 11, the process includes:
Step 1101: the cloud server starts the cloud game.
The cloud game runs on the cloud server, so that players can connect to the cloud server to play.
Step 1102: the players log in to the lobby.
In some embodiments, multiple players log in to the cloud game lobby using the account system.
Step 1103: the players join a room through the lobby.
A cloud game room includes at least one player; in the embodiments of this application, a cloud game room includes at least two players, and the at least two players can interact with each other.
Step 1104: the cloud server initializes the player data.
The cloud server first initializes the game account data corresponding to each player.
Step 1105: the cloud server creates a personal rendering camera.
The cloud server creates in the game scene a virtual camera unique to each player, bound one-to-one to that player, which captures the game picture from the specified angle and returns it to the specified player.
Step 1106: the cloud server creates a personal audio group.
The personal audio group provides the backend support for voice communication and audio transmission among players in the same game scene.
Step 1107: the players establish connections with the cloud server and exchange codec information.
Encoding and decoding are complementary stages of video processing: the cloud server encodes the gameplay video and sends it to the terminal, and the terminal decodes the encoded video to obtain a decoded video stream for playback; the terminal, in turn, encodes the video stream captured by its camera and sends it to the cloud server for decoding and subsequent processing such as matting.
Step 1108: the cloud server sends the camera-rendered video stream and the encoded audio stream to the players.
The terminal on the player side decodes the received video stream and audio stream to obtain audio and video data that can be rendered and played. In some embodiments, the terminal may transcode the received video stream and audio stream into audio and video data of different playback formats, thereby meeting the playback needs of different devices.
Step 1109: the players perform simulated input of data streams.
In the cloud gaming scenario implemented by the cloud server, the player inputs a control operation through the terminal, the control operation is transmitted to the cloud server, and the cloud server executes the control logic; that is, in the cloud gaming scenario, since the terminal does not need to install the application providing the virtual scene, the terminal does not execute the actual control logic, and the terminal simulates input by sending data streams to the cloud server.
Step 1110: the players send the camera capture streams to the data processing server.
The terminal on the player side transmits the image or video data captured by its configured camera to the data processing server in real time; that is, the camera captures the images or video, and the terminal transmits them.
Step 1111: the data processing server passes the data streams to the artificial intelligence (AI) computing power server.
The data processing server, which routes the data, distributes the data to other dedicated servers (such as the AI computing power server) according to the processing needs of the different data.
Step 1112: the AI computing power server performs skeleton calculation and video matting.
The AI computing power server mats every frame of the video stream captured by the terminal's camera, removing the background and cutting out the human figure, and computes the coordinates of the limb skeleton points, such as the hands, feet, and head, from the human pose.
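A minimal background-difference matting sketch is shown below, assuming the background plate produced by the earlier calibration step and a simple per-pixel threshold; a production system would more likely use a learned segmentation model, and the threshold and RGBA convention here are illustrative only:

```python
import numpy as np

def matte_frame(frame: np.ndarray, background: np.ndarray,
                threshold: int = 30) -> np.ndarray:
    """Cut the foreground out of a frame by differencing against the
    calibrated background plate; returns an RGBA image whose alpha is
    transparent wherever the pixel matches the background."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    foreground_mask = (diff.sum(axis=2) > threshold).astype(np.uint8) * 255
    rgba = np.dstack([frame, foreground_mask])
    return rgba

# Example with dummy frames: only the changed region keeps alpha 255.
bg = np.zeros((480, 640, 3), dtype=np.uint8)
frame = bg.copy()
frame[100:400, 250:380] = 200  # the person entered this region
print(matte_frame(frame, bg).shape)  # -> (480, 640, 4)
```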
Step 1113: the AI computing power server passes the data streams to the cloud server.
After completing the matting, the AI computing power server passes the video stream (picture stream) with a transparent background, together with the skeleton-point coordinate data, to the cloud server; the cloud server receives the matted video stream and renders it into the game, realizing functions such as touch special effects according to the skeleton-point coordinates; the cameras in the scene render the scene content containing the human figures and transmit it to the player side for display of the game picture; the terminal on the player side decodes the obtained game picture data and then renders the virtual environment picture that can be shown on the display screen.
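How the transparent-background stream might be composited into the rendered scene can be sketched as a per-pixel alpha blend; this is an assumption for illustration, since the patent leaves the actual rendering to the game engine:

```python
import numpy as np

def composite(scene: np.ndarray, cutout_rgba: np.ndarray,
              origin: tuple) -> np.ndarray:
    """Alpha-blend a transparent-background cutout frame onto the
    rendered scene at the cutout's display origin (top-left corner)."""
    x, y = origin
    h, w = cutout_rgba.shape[:2]
    alpha = cutout_rgba[:, :, 3:4].astype(np.float32) / 255.0
    region = scene[y:y + h, x:x + w].astype(np.float32)
    blended = alpha * cutout_rgba[:, :, :3] + (1.0 - alpha) * region
    scene[y:y + h, x:x + w] = blended.astype(np.uint8)
    return scene

# Example: paste a 300x120 matted cutout into a 720p scene at (340, 110).
scene = np.zeros((720, 1280, 3), dtype=np.uint8)
cutout = np.zeros((300, 120, 4), dtype=np.uint8)
print(composite(scene, cutout, (340, 110)).shape)  # -> (720, 1280, 3)
```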
FIG. 12 is a structural block diagram of an interaction apparatus based on a virtual scene provided by an exemplary embodiment of this application. As shown in FIG. 12, the apparatus is applied to a first terminal configured with a camera and includes:
a receiving module 1210, configured to receive a virtual scene display operation;
a capture module 1220, configured to capture a first scene image through the camera, the first scene image including a first object, the first object being located within the shooting range of the camera of the first terminal;
a display module 1230, configured to display a virtual environment picture, the virtual environment picture being a picture that displays a virtual scene, the virtual scene including cutout objects, the cutout objects including the first object obtained by matting the first scene image and a second object obtained by matting a second scene image, where the second scene image is an image captured by a second terminal configured with a camera.
In an optional embodiment, the apparatus further includes:
a sending module 1240, configured to send the first scene image to a server;
the receiving module 1210 is further configured to receive picture display data fed back by the server, the picture display data including scene data corresponding to the virtual scene and object data corresponding to the cutout objects;
the display module 1230 is further configured to display the virtual environment picture based on the scene data and the object data.
In an optional embodiment, the object data includes an object display position;
the display module 1230 is further configured to display the virtual scene based on the scene data;
the display module 1230 is further configured to locate, based on the object display position, the display position of the cutout object in the virtual scene, and to display the cutout object at that display position.
In an optional embodiment, the display module 1230 is further configured to display a calibration picture, the calibration picture including the first scene image, the first scene image including an indication frame and an indication line, the indication frame being used to frame-select the first object, and the indication line being located at a specified position of the first scene image and dividing the first scene image into a first area and a second area;
the capture module 1220 is further configured to indicate the background part of the first scene image by capturing staged images of the first object moving from a first position to a second position, where the first position is a position where the indication frame is within the first area and the second position is a position where the indication frame is within the second area.
In an optional embodiment, the display module 1230 is further configured to display an interactive animation through the virtual environment picture in response to the first object and the second object meeting interaction requirements in the virtual scene.
In an optional embodiment, the display module 1230 is further configured to display the interactive animation through the virtual environment picture in response to the cutout actions of the first object and the second object meeting action requirements.
In an optional embodiment, the virtual scene further includes an interaction triggering object;
the display module 1230 is further configured to display an interactive animation between the first object and the virtual scene through the virtual environment picture in response to the cutout action of the first object coming into contact with the interaction triggering object;
or,
the display module 1230 is further configured to display an interactive animation between the second object and the virtual scene through the virtual environment picture in response to the cutout action of the second object coming into contact with the interaction triggering object.
In an optional embodiment, the display module 1230 is further configured to display, through the virtual environment picture, an interactive animation corresponding to a preset reference action in response to the cutout actions of the first object and the second object matching the preset reference action.
In summary, with the apparatus provided by the embodiments of this application, while the virtual scene is displayed, a first object and a second object are additionally displayed in the virtual scene, where the first object and the second object are cut out of scene images captured by cameras; that is, real people and things are combined with the virtual scene so that they can interact with the virtual scene directly, without interacting in the form of virtual objects. This improves the diversity of interaction between the virtual scene and the user and, since the player does not need to control a virtual object to interact with the virtual scene, improves interaction efficiency. Meanwhile, when adding objects to the virtual scene, real people and things are captured directly by cameras, so no data modeling is needed for new objects, which reduces the resource consumption of generating model data and of storing model data.
It should be noted that the interaction apparatus based on a virtual scene provided by the above embodiments is illustrated only by the division of the above functional modules; in practical applications, the above functions may be assigned to different functional modules as needed, that is, the internal structure of the device is divided into different functional modules to complete all or part of the functions described above. In addition, the interaction apparatus based on a virtual scene provided by the above embodiments and the embodiments of the interaction method based on a virtual scene belong to the same concept; for the specific implementation process, see the method embodiments, which are not repeated here.
FIG. 14 shows a structural block diagram of a terminal 1400 provided by an exemplary embodiment of this application.
Generally, the terminal 1400 includes a processor 1401 and a memory 1402.
The processor 1401 may include one or more processing cores, such as a 4-core processor or an 8-core processor.
The memory 1402 may include one or more computer-readable storage media, which may be non-transitory. In some embodiments, the non-transitory computer-readable storage medium in the memory 1402 is used to store at least one instruction, the at least one instruction being executed by the processor 1401 to implement the interaction method based on a virtual scene provided by the method embodiments of this application.
In some embodiments, the terminal 1400 may optionally further include a peripheral device interface 1403 and at least one peripheral device. The processor 1401, the memory 1402, and the peripheral device interface 1403 may be connected through buses or signal lines. Each peripheral device may be connected to the peripheral device interface 1403 through a bus, a signal line, or a circuit board. Specifically, the peripheral devices include at least one of a radio frequency circuit 1404, a display screen 1405, a camera 1406, an audio circuit 1407, and a power supply 1409.
The peripheral device interface 1403 may be used to connect at least one I/O (Input/Output) related peripheral device to the processor 1401 and the memory 1402.
The radio frequency circuit 1404 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1404 communicates with communication networks and other communication devices through electromagnetic signals.
The display screen 1405 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof.
The camera assembly 1406 is used to capture images or video.
The audio circuit 1407 may include a microphone and a speaker.
The power supply 1409 is used to supply power to the components in the terminal 1400.
In some embodiments, the terminal 1400 further includes one or more sensors 1410. The one or more sensors 1410 include, but are not limited to, an acceleration sensor 1411, a gyroscope sensor 1412, a pressure sensor 1413, an optical sensor 1415, and a proximity sensor 1416.
Those skilled in the art will understand that the structure shown in FIG. 14 does not limit the terminal 1400, which may include more or fewer components than shown, combine certain components, or adopt a different component arrangement.
Optionally, the computer-readable storage medium may include a read-only memory (ROM), a random access memory (RAM), a solid-state drive (SSD), an optical disc, or the like. The random access memory may include a resistive random access memory (ReRAM) and a dynamic random access memory (DRAM). The serial numbers of the above embodiments of this application are for description only and do not represent the merits of the embodiments.
Claims (14)
- An interaction method based on a virtual scene, the method being executed by a first terminal configured with a camera, the method comprising: receiving a virtual scene display operation; capturing a first scene image through the camera, the first scene image including a first object, the first object being located within the shooting range of the camera of the first terminal; and displaying a virtual environment picture, the virtual environment picture being a picture that displays a virtual scene, the virtual scene including cutout objects, the cutout objects including the first object obtained by matting the first scene image and a second object obtained by matting a second scene image, wherein the second scene image is an image captured by a second terminal configured with a camera.
- The method according to claim 1, wherein displaying the virtual environment picture comprises: sending the first scene image to a server; receiving picture display data fed back by the server, the picture display data including scene data corresponding to the virtual scene and object data corresponding to the cutout objects; and displaying the virtual environment picture based on the scene data and the object data.
- The method according to claim 2, wherein the object data includes an object display position; and displaying the virtual environment picture based on the scene data and the object data comprises: displaying the virtual scene based on the scene data; and displaying the cutout object at the object display position in the virtual scene.
- The method according to any one of claims 1 to 3, wherein before displaying the virtual environment picture, the method further comprises: displaying a calibration picture, the calibration picture including the first scene image, the first scene image including an indication frame and an indication line, the indication frame being used to frame-select the first object, and the indication line being located at a specified position of the first scene image and dividing the first scene image into a first area and a second area; and indicating the background part of the first scene image by capturing staged images of the first object moving from a first position to a second position, wherein the first position is a position where the indication frame is within the first area, and the second position is a position where the indication frame is within the second area.
- The method according to any one of claims 1 to 3, wherein after displaying the virtual environment picture, the method further comprises: in response to the first object and the second object meeting interaction requirements in the virtual scene, displaying an interactive animation through the virtual environment picture.
- The method according to claim 5, wherein displaying the interactive animation through the virtual environment picture in response to the first object and the second object meeting the interaction requirements in the virtual scene comprises: in response to the cutout actions of the first object and the second object meeting action requirements, displaying the interactive animation through the virtual environment picture.
- The method according to claim 6, wherein the virtual scene further includes an interaction triggering object; and displaying the interactive animation through the virtual environment picture in response to the cutout actions of the first object and the second object meeting the action requirements comprises: in response to the cutout action of the first object coming into contact with the interaction triggering object, displaying an interactive animation between the first object and the virtual scene through the virtual environment picture; or, in response to the cutout action of the second object coming into contact with the interaction triggering object, displaying an interactive animation between the second object and the virtual scene through the virtual environment picture.
- The method according to claim 6, wherein displaying the interactive animation through the virtual environment picture in response to the cutout actions of the first object and the second object meeting the action requirements comprises: in response to the cutout actions of the first object and the second object matching a preset reference action, displaying an interactive animation corresponding to the preset reference action through the virtual environment picture.
- The method according to any one of claims 1 to 3, wherein the method further comprises: receiving a viewing-angle adjustment operation; and based on the viewing-angle adjustment operation, adjusting a first observation angle for observing the virtual scene and the cutout objects to a second observation angle, wherein the first observation angle corresponds to a first camera model in the virtual scene, and the second observation angle corresponds to a second camera model in the virtual scene.
- The method according to claim 9, wherein adjusting, based on the viewing-angle adjustment operation, the first observation angle for observing the virtual scene and the cutout objects to the second observation angle comprises: determining the camera models corresponding to a target account in the virtual scene, the camera models including at least the first camera model and the second camera model, the first camera model being the camera model currently used to observe the virtual scene; and based on the viewing-angle adjustment operation, switching the first camera model observing the virtual scene to the second camera model, and observing the virtual scene from the second observation angle.
- An interaction apparatus based on a virtual scene, the apparatus comprising: a receiving module, configured to receive a virtual scene display operation; a capture module, configured to capture a first scene image through the camera, the first scene image including a first object, the first object being located within the shooting range of the camera of the first terminal; and a display module, configured to display a virtual environment picture, the virtual environment picture being a picture that displays a virtual scene, the virtual scene including cutout objects, the cutout objects including the first object obtained by matting the first scene image and a second object obtained by matting a second scene image, wherein the second scene image is an image captured by a second terminal configured with a camera.
- A computer device, wherein the computer device comprises a processor and a memory, the memory storing at least one program, the at least one program being loaded and executed by the processor to implement the interaction method based on a virtual scene according to any one of claims 1 to 10.
- A computer-readable storage medium, wherein the computer-readable storage medium stores at least one program, the at least one program being loaded and executed by a processor to implement the interaction method based on a virtual scene according to any one of claims 1 to 10.
- A computer program product, comprising a computer program or instructions, wherein the computer program or instructions, when executed by a processor, implement the interaction method based on a virtual scene according to any one of claims 1 to 10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US18/299,772 US20230245385A1 (en) | 2021-06-24 | 2023-04-13 | Interactive method and apparatus based on virtual scene, device, and medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110703616.5 | 2021-06-24 | ||
CN202110703616.5A CN113244616B (zh) | 2021-06-24 | Interaction method, apparatus, and device based on virtual scene, and readable storage medium
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/299,772 Continuation US20230245385A1 (en) | 2021-06-24 | 2023-04-13 | Interactive method and apparatus based on virtual scene, device, and medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022267729A1 (zh) | 2022-12-29 |
Family
ID=77189499
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/092190 WO2022267729A1 (zh) | 2022-05-11 | Interaction method and apparatus based on virtual scene, device, medium, and program product
Country Status (3)
Country | Link |
---|---|
US (1) | US20230245385A1 (zh) |
CN (1) | CN113244616B (zh) |
WO (1) | WO2022267729A1 (zh) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113244616B (zh) * | 2021-06-24 | 2023-09-26 | 腾讯科技(深圳)有限公司 | Interaction method, apparatus, and device based on virtual scene, and readable storage medium |
CN113709515A (zh) * | 2021-09-06 | 2021-11-26 | 广州麦田信息技术有限公司 | New-media livestreaming and online user interaction method |
CN115997385A (zh) * | 2022-10-12 | 2023-04-21 | 广州酷狗计算机科技有限公司 | Augmented reality-based interface display method, apparatus, device, medium, and product |
CN116954412A (zh) * | 2022-12-15 | 2023-10-27 | 腾讯科技(深圳)有限公司 | Display method, apparatus, device, and storage medium for virtual character |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160284134A1 (en) * | 2015-03-24 | 2016-09-29 | Intel Corporation | Augmentation modification based on user interaction with augmented reality scene |
CN106792246A (zh) * | 2016-12-09 | 2017-05-31 | 福建星网视易信息系统有限公司 | Fusion-type virtual scene interaction method and system |
CN111050189A (zh) * | 2019-12-31 | 2020-04-21 | 广州酷狗计算机科技有限公司 | Livestreaming method, apparatus, device, storage medium, and program product |
CN112044068A (zh) * | 2020-09-10 | 2020-12-08 | 网易(杭州)网络有限公司 | Human-computer interaction method, apparatus, storage medium, and computer device |
CN112148197A (zh) * | 2020-09-23 | 2020-12-29 | 北京市商汤科技开发有限公司 | Augmented reality (AR) interaction method, apparatus, electronic device, and storage medium |
CN113244616A (zh) * | 2021-06-24 | 2021-08-13 | 腾讯科技(深圳)有限公司 | Interaction method, apparatus, and device based on virtual scene, and readable storage medium |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2786303A4 (en) * | 2011-12-01 | 2015-08-26 | Lightcraft Technology Llc | TRANSPARENCY SYSTEM WITH AUTOMATIC TRACKING |
CN105184787B (zh) * | 2015-08-31 | 2018-04-06 | 广州市幸福网络技术有限公司 | ID-photo camera and method for automatically matting human figures |
CN110969905A (zh) * | 2019-11-29 | 2020-04-07 | 塔普翊海（上海）智能科技有限公司 | Mixed-reality remote teaching interaction and teaching-aid interaction system and interaction method thereof |
CN111698390B (zh) * | 2020-06-23 | 2023-01-10 | 网易(杭州)网络有限公司 | Virtual camera control method and apparatus, and virtual studio implementation method and system |
CN111701238B (zh) * | 2020-06-24 | 2022-04-26 | 腾讯科技(深圳)有限公司 | Display method, apparatus, device, and storage medium for virtual scroll painting |
- 2021-06-24 CN CN202110703616.5A patent/CN113244616B/zh active Active
- 2022-05-11 WO PCT/CN2022/092190 patent/WO2022267729A1/zh unknown
- 2023-04-13 US US18/299,772 patent/US20230245385A1/en active Pending
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160284134A1 (en) * | 2015-03-24 | 2016-09-29 | Intel Corporation | Augmentation modification based on user interaction with augmented reality scene |
CN106792246A (zh) * | 2016-12-09 | 2017-05-31 | 福建星网视易信息系统有限公司 | Fusion-type virtual scene interaction method and system |
CN111050189A (zh) * | 2019-12-31 | 2020-04-21 | 广州酷狗计算机科技有限公司 | Livestreaming method, apparatus, device, storage medium, and program product |
CN112044068A (zh) * | 2020-09-10 | 2020-12-08 | 网易(杭州)网络有限公司 | Human-computer interaction method, apparatus, storage medium, and computer device |
CN112148197A (zh) * | 2020-09-23 | 2020-12-29 | 北京市商汤科技开发有限公司 | Augmented reality (AR) interaction method, apparatus, electronic device, and storage medium |
CN113244616A (zh) * | 2021-06-24 | 2021-08-13 | 腾讯科技(深圳)有限公司 | Interaction method, apparatus, and device based on virtual scene, and readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN113244616B (zh) | 2023-09-26 |
CN113244616A (zh) | 2021-08-13 |
US20230245385A1 (en) | 2023-08-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2022267729A1 (zh) | Interaction method and apparatus based on virtual scene, device, medium, and program product | |
US10424077B2 (en) | 2019-09-24 | Maintaining multiple views on a shared stable virtual space |
TWI468734B (zh) | Method, portable device, and computer program for maintaining multiple views in a shared stable virtual space | |
CN113633973B (zh) | Display method, apparatus, device, and storage medium for game pictures | |
CN111744202B (zh) | Method and apparatus for loading virtual game, storage medium, and electronic apparatus | |
RU2617914C2 (ru) | Systems and methods for cloud processing and overlaying of content on streaming video frames of remotely processed applications | |
WO2022083452A1 (zh) | Two-dimensional image display method, apparatus, and device for virtual objects, and storage medium | |
US9092910B2 (en) | 2015-07-28 | Systems and methods for cloud processing and overlaying of content on streaming video frames of remotely processed applications |
CN108876878B (zh) | Avatar generation method and apparatus | |
CN117085322B (zh) | Interactive observation method, apparatus, device, and medium based on virtual scene | |
JP2023527846A (ja) | Data processing method, apparatus, computer device, and computer program in virtual scene | |
US20230072463A1 (en) | 2023-03-09 | Contact information presentation |
CN114288654A (zh) | Livestream interaction method, apparatus, device, storage medium, and computer program product | |
CN112774185B (zh) | Virtual card control method, apparatus, and device in card-based virtual scene | |
CN112156454B (zh) | Virtual object generation method, apparatus, terminal, and readable storage medium | |
JP2023531128A (ja) | Effect generation method, apparatus, device, and computer program in virtual environment | |
CN113171613B (zh) | Team match method, apparatus, device, and storage medium | |
CN114425162A (zh) | Video processing method and related apparatus | |
CN113599829B (zh) | Virtual object selection method, apparatus, terminal, and storage medium | |
WO2024060895A1 (zh) | Group establishment method, apparatus, device, and storage medium for virtual scenes | |
CN118236700A (zh) | Character interaction method, apparatus, device, and medium based on virtual world | |
TW202111480A (zh) | Interactive system and method for virtual reality and augmented reality | |
CN118504719A (zh) | Reservation method, apparatus, device, storage medium, and program product for virtual matches | |
CN116943243A (zh) | Interaction method, apparatus, device, medium, and program product based on virtual scene | |
CN118142173A (zh) | Control method, apparatus, device, medium, and program product for virtual projectiles |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22827227 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 14.05.2024) |