CN115761106A - Information processing method, information processing apparatus, storage medium, and electronic apparatus


Info

Publication number
CN115761106A
Authority
CN
China
Prior art keywords
scene
virtual
sub
area
shadow
Prior art date
Legal status
Pending
Application number
CN202211289323.8A
Other languages
Chinese (zh)
Inventor
冯玮轩
Current Assignee
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202211289323.8A
Publication of CN115761106A


Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The application discloses an information processing method, an information processing apparatus, a storage medium and an electronic apparatus. The method comprises the following steps: determining a scene area of a virtual scene and contour information of a virtual character in the virtual scene; dividing the scene area based on the contour information to obtain a plurality of sub-scene areas, wherein the contour of each sub-scene area is larger than the contour of the virtual character; baking a first virtual object contained in the sub-scene area to obtain a first depth map of the first virtual object; creating a first shadow map using the first depth map; and rendering and displaying a first target shadow representation on the virtual character based on the first shadow map, wherein the first target shadow representation is a shadow formed by the first virtual object on the virtual character. This solves the technical problem that a shadow effect meeting the requirements of the virtual scene cannot be achieved.

Description

Information processing method, information processing apparatus, storage medium, and electronic apparatus
Technical Field
The present application relates to the field of computers, and in particular, to an information processing method, an information processing apparatus, a storage medium, and an electronic apparatus.
Background
In the related art, shadows in a virtual scene may be implemented by an Unreal Engine, for example, by the cast shadow switch (CastShadow) provided by Unreal Engine 4 (abbreviated as UE4). However, this method can only make all objects in the virtual scene cast shadows onto each other; it cannot achieve the non-physically-real representation in which objects cast shadows onto virtual characters while virtual characters cast no shadows onto objects, and it also suffers precision loss, thereby causing the technical problem that a shadow effect meeting the requirements of the virtual scene cannot be achieved.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
At least some embodiments of the present application provide an information processing method, an information processing apparatus, a storage medium, and an electronic apparatus, so as to at least solve a technical problem that a shadow effect that conforms to a requirement of a virtual scene cannot be realized.
According to one embodiment of the present application, there is provided an information processing method. The method comprises the following steps: determining a scene area of a virtual scene and contour information of a virtual character in the virtual scene; dividing the scene area based on the contour information to obtain a plurality of sub-scene areas, wherein the contour of each sub-scene area is larger than the contour of the virtual character; baking a first virtual object contained in the sub-scene area to obtain a first depth map of the first virtual object; creating a first shadow map using the first depth map; and rendering and displaying a first target shadow representation on the virtual character based on the first shadow map, wherein the first target shadow representation is a shadow formed by the first virtual object on the virtual character.
According to one embodiment of the application, an information processing apparatus is also provided. The apparatus includes: a determining unit, configured to determine a scene area of a virtual scene and contour information of a virtual character in the virtual scene; a dividing unit, configured to divide the scene area based on the contour information to obtain a plurality of sub-scene areas, wherein the contour of each sub-scene area is larger than the contour of the virtual character; a baking unit, configured to bake a first virtual object contained in the sub-scene area to obtain a first depth map of the first virtual object; a creating unit, configured to create a first shadow map using the first depth map; and a rendering unit, configured to render and display a first target shadow representation on the virtual character based on the first shadow map, wherein the first target shadow representation is a shadow formed by the first virtual object on the virtual character.
According to an embodiment of the present application, there is also provided a computer-readable storage medium in which a computer program is stored, where, when the computer program is executed by a processor, the device on which the computer-readable storage medium is located is controlled to perform the information processing method according to an embodiment of the present application.
According to an embodiment of the present application, there is also provided an electronic apparatus including a memory and a processor, the memory storing a computer program therein, and the processor being configured to execute the computer program to perform the information processing method in any one of the above.
In at least some embodiments of the present application, a scene area of a virtual scene and contour information of a virtual character in the virtual scene are determined; dividing the scene area based on the outline information to obtain a plurality of sub-scene areas; baking a first virtual object contained in the sub-scene area to obtain a first depth map of the first virtual object; creating a first shadow map using the first depth map; and rendering and displaying the first target shadow expression on the virtual character based on the first shadow map. That is to say, the embodiment of the present application implements a method for dividing a scene area, and by dividing the scene area into a plurality of sub-scene areas, a virtual character in a virtual scene can be baked in the sub-scene areas to generate a depth map, and a shadow map is created based on the depth map, and the corresponding shadow map is used on the virtual character to render and display a shadow representation, so as to achieve the purpose of not only ensuring the accuracy requirement of the depth map, but also not causing shadows to other scene objects, thereby solving the technical problem that a shadow effect that conforms to the requirement of the virtual scene cannot be achieved, and achieving the technical effect of achieving the shadow effect that conforms to the requirement of the virtual scene.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a block diagram of a hardware configuration of a mobile terminal according to an information processing method of an embodiment of the present application;
FIG. 2 is a flow chart of a method of information processing according to an embodiment of the present application;
FIG. 3 (a) is a schematic diagram of a shadow effect according to an embodiment of the present application;
FIG. 3 (b) is a schematic diagram of another implementation of a shadow effect according to an embodiment of the present application;
FIG. 4 is a schematic illustration of a zone bake operator interface according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a depth map corresponding to each region in a virtual scene according to an embodiment of the present disclosure;
fig. 6 is a schematic diagram of an information processing apparatus according to an embodiment of the present application;
fig. 7 is a schematic diagram of an electronic device according to an embodiment of the application.
Detailed Description
In order to make the technical solutions of the present application better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only some embodiments of the present application, and not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the accompanying drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances such that embodiments of the application described herein may be implemented in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
First, some of the nouns or terms appearing in the description of the embodiments of the present application are explained below:
Unreal Engine 4 (hereinafter referred to as UE4): a graphics engine for developing three-dimensional software and games;
Blueprint system: the business-logic control layer of the UE4 engine, which can write gameplay logic and pass parameter values into the material system;
Material system (Shader): the rendering system of the UE4 engine, in which the rendering effect of an object can be written in a shader language;
Shadow Mask: a method in which a shadow map is statically baked offline and applied to character shadows;
Render Target (RT): used to record information, e.g., depth, of rendered scene objects;
Depth camera (SceneCapture2D): used in conjunction with the RT, e.g., to capture the depth of scene objects onto the RT;
Capture: the act of capturing scene depth onto the RT with the depth camera;
Static Mesh: an object that cannot be moved or changed in a virtual scene;
game Character: an object that moves in a virtual scene;
camera shooting area width (Ortho Projection Width): the width of the capture camera's shooting area;
Cells: regions obtained by partitioning a designated virtual scene, where each Cell represents a small cuboid region.
The above method embodiments of the present application may be implemented in a mobile terminal, a computer terminal or a similar computing device. Taking a mobile terminal as an example, the mobile terminal may be a terminal device such as a smartphone, a tablet computer, a palmtop computer, a mobile internet device, a PAD, or a game console. Fig. 1 is a block diagram of a hardware structure of a mobile terminal for an information processing method according to an embodiment of the present application. As shown in fig. 1, the mobile terminal may include one or more processors 102 (only one is shown in fig. 1; the processor 102 may include, but is not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a Digital Signal Processing (DSP) chip, a Microcontroller Unit (MCU), a programmable logic device (FPGA), a Neural network Processing Unit (NPU), a Tensor Processing Unit (TPU), an Artificial Intelligence (AI) processor, etc.) and a memory 104 for storing data; in one embodiment of the present application, the mobile terminal may further include an input/output device 108 and a display device 110.
In some optional embodiments based on game scenarios, the device may further provide a human-machine interface with a touch-sensitive surface. The human-machine interface may sense finger contact and/or gestures to interact with a Graphical User Interface (GUI), and the human-machine interaction functions may include the following interactions: creating web pages, drawing, word processing, making electronic documents, games, video conferencing, instant messaging, e-mailing, call interfacing, playing digital video, playing digital music and/or web browsing. Executable instructions for performing the above human-machine interaction functions are configured/stored in one or more processor-executable computer program products or readable storage media.
It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration, and does not limit the structure of the mobile terminal. For example, the mobile terminal may also include more or fewer components than shown in FIG. 1, or have a different configuration than shown in FIG. 1.
In accordance with one embodiment of the present application, there is provided an information processing method, wherein the steps shown in the flowchart of the figure may be executed in a computer system such as a set of computer executable instructions, and wherein, although a logical order is shown in the flowchart, in some cases, the steps shown or described may be executed in an order different from that shown.
In one possible implementation, an embodiment of the present application provides an information processing method. Fig. 2 is a flowchart of an information processing method according to an embodiment of the present application, and as shown in fig. 2, the method includes the steps of:
step S202, determining a scene area of the virtual scene and outline information of the virtual character in the virtual scene.
In the technical solution provided by the above step S202 of the present application, a scene area of a virtual scene and contour information of a virtual character in the virtual scene are determined, where the virtual scene may be a game scene, the scene area may be an area of the virtual scene for which a depth map needs to be baked, the virtual character may be a virtual game character controlled by a player through a terminal device in the virtual scene, that is, a controlled virtual object in the virtual scene, and the contour information may be used to represent a geometric space enclosing the virtual character, for example, a bounding box, which is not specifically limited herein.
Step S204, dividing the scene area based on the outline information to obtain a plurality of sub-scene areas.
In the technical solution provided by the above step S204, a division parameter may be obtained by dividing the volume of the scene area of the virtual scene by the volume indicated by the contour information of the virtual character and rounding, and the scene area is then divided by using the division parameter to obtain a plurality of sub-scene areas, where a sub-scene area may be a partitioned area of the virtual scene, for example, a small cuboid area (Cell).
Optionally, the contour of each sub-scene region is larger than the contour of the virtual character, that is, the volume of each sub-scene region obtained by dividing the scene region may be larger than the volume of the virtual character, so as to avoid the boundary of a sub-scene region crossing the body of the virtual character; for example, the situation in which the upper half of the virtual character is in scene region A while the lower half is in scene region B cannot occur.
Optionally, the contour information of the virtual character may also be adjusted according to the motion amplitude of the virtual character in the virtual scene, for example, the side length of the bounding box of the virtual character is adjusted to obtain the side length of the bounding box of the sub-scene region, the side length of the bounding box of the sub-scene region is determined as a division parameter, and then the scene region is divided by using the division parameter to obtain a plurality of sub-scene regions.
Optionally, after the scene area is divided according to the division parameters to obtain a plurality of sub-scene areas, an area position of each sub-scene area in the virtual scene, a position of the depth camera corresponding to each sub-scene area, and an identifier of each sub-scene area may be further determined, for example, the identifier may be an Identity document (Id), where the area position of each sub-scene area is uniquely identified by Id, and the area position of the corresponding sub-scene area may be determined by Id.
Optionally, the corresponding relationship between the Id of the sub-scene region and the virtual object included in the sub-scene region may be stored as list data in a Key-Value pair (Key-Value) manner, and the corresponding virtual object may be directly called from the list data by the Id of the sub-scene region.
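As an illustration of this key-value list data, a minimal sketch in plain C++ is given below; the ObjectId type and the use of a standard container (rather than the engine's own TMap) are assumptions made only to show the lookup of a sub-scene area's objects by its Id.
```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

// Hypothetical identifier for a virtual object in the scene (not a name from the patent).
using ObjectId = std::uint32_t;

// Sub-scene-area Id -> list of virtual objects contained in that area,
// stored as key-value pairs (the "list data" described above).
using CellObjectTable = std::unordered_map<int, std::vector<ObjectId>>;

// Call up the objects of one sub-scene area directly by its Id;
// returns an empty list if the Id is unknown.
std::vector<ObjectId> ObjectsInCell(const CellObjectTable& table, int cellId) {
    const auto it = table.find(cellId);
    return it != table.end() ? it->second : std::vector<ObjectId>{};
}
```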
Step S206, baking the first virtual object included in the sub-scene region to obtain a first depth map of the first virtual object.
In the technical solution provided by the above step S206 of the present application, the first virtual objects contained in the sub-scene areas are respectively baked by a depth camera to obtain a plurality of first depth maps of the first virtual objects, and a shadow effect is implemented on the virtual character in the virtual scene through the plurality of first depth maps, where a first virtual object may be an object that is contained in the sub-scene area and casts a shadow on the virtual character, and may be an object that is stationary in the sub-scene area, such as a window or a vase, which is only described by way of example and is not specifically limited herein.
Optionally, the shooting area of the depth camera for the sub-scene area may be determined according to the volume of the bounding box of the sub-scene area. To ensure that the virtual character does not cross the sub-scene-area division boundary while in motion, the shooting area of the depth camera needs to be larger than the volume of the bounding box of the virtual character in the sub-scene area, that is, the sub-scene area needs to be located within the shooting area of the depth camera; for example, the volume of the bounding box of the sub-scene area multiplied by √2 may be taken as the shooting area of the depth camera in the virtual scene, which is only an example and is not specifically limited herein.
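A minimal sketch of this sizing rule follows, assuming the cell's bounding-box size is available as an axis-aligned extent; enlarging the longest edge by √2 is the overlap margin described above, and the names are illustrative.
```cpp
#include <algorithm>
#include <cmath>

// Axis-aligned size of a sub-scene area's bounding box, in world units (illustrative type).
struct CellExtent { float x, y, z; };

// The depth camera's orthographic shooting width: the cell's longest edge
// enlarged by sqrt(2), so that the shooting area is larger than the cell and
// adjacent captures overlap, keeping a moving character inside the captured region.
float CaptureOrthoWidth(const CellExtent& cell) {
    const float longestEdge = std::max({cell.x, cell.y, cell.z});
    return longestEdge * std::sqrt(2.0f);
}
```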
Alternatively, if a dynamic object appears in the sub-scene area, the first virtual object may be directly called from the list data through the Id of the sub-scene area, so that the depth camera only needs to capture the depth maps of the first virtual object and the dynamic object on the shooting area, create the depth maps as shadow maps, and render and display the shadow generated by the dynamic object and the first virtual object on the virtual character based on the shadow maps, thereby achieving the purpose that the dynamic object only causes the shadow effect on the virtual character, but does not cause the shadow effect on other scene objects (e.g., a floor).
In step S208, a first shadow map is created by using the first depth map.
In the technical solution provided by the foregoing step S208 in the present application, a first shadow map is created based on a first depth map obtained by a depth camera shooting a first virtual object in a sub-scene area, where the first shadow map may be used to render and display a shadow generated by the first virtual object on a virtual character.
Step S210, the first target shadow representation is rendered and displayed on the virtual character based on the first shadow map.
In the technical solution provided in the above step S210 of the present application, a first target shadow representation is rendered and displayed on the virtual character through the created first shadow map, where the first target shadow representation may be a shadow display effect formed by the first virtual object on the virtual character, for example, a shadow display effect generated by a window on the virtual character.
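For context, applying the shadow map on the character amounts to the usual shadow-map depth comparison: the shaded point is transformed into the depth camera's (light) space and its depth is compared against the baked occluder depth. The sketch below only illustrates that comparison; the light-space transform, the sampling, and the bias value are assumptions, not the patent's shader code.
```cpp
// Position of a shaded point on the character, expressed in the depth camera's
// (light) space: texture coordinates into the baked depth map plus its own depth.
// The transform producing these values is assumed, not shown.
struct LightSpacePoint { float u, v, depth; };

// The point is in shadow when the occluder recorded in the first depth map at
// (u, v) lies closer to the light than the point itself; a small bias guards
// against self-shadow acne (the bias value is an assumption).
bool InShadow(const LightSpacePoint& p, float bakedOccluderDepth, float bias = 0.001f) {
    return bakedOccluderDepth + bias < p.depth;
}
```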
Through the above steps S202 to S210, a scene area of a virtual scene and contour information of a virtual character in the virtual scene are determined; the scene area is divided based on the contour information to obtain a plurality of sub-scene areas, wherein the contour of each sub-scene area is larger than the contour of the virtual character; a first virtual object contained in the sub-scene area is baked to obtain a first depth map of the first virtual object; a first shadow map is created using the first depth map; and a first target shadow representation is rendered and displayed on the virtual character based on the first shadow map, wherein the first target shadow representation is a shadow formed by the first virtual object on the virtual character. That is to say, the embodiment of the present application implements a method for dividing a scene area: by dividing the scene area into a plurality of sub-scene areas, a depth map can be generated for the virtual character in the virtual scene by baking on a per-sub-scene-area basis, a shadow map is created based on the depth map, and the corresponding shadow map is used on the virtual character to render and display a shadow representation, so as to both satisfy the precision requirement of the depth map and avoid casting shadows onto other scene objects, thereby solving the technical problem that a shadow effect meeting the requirements of the virtual scene cannot be achieved and achieving the technical effect of realizing a shadow effect meeting the requirements of the virtual scene.
The above method of this embodiment is further described below.
As an optional implementation manner, in step S204, dividing the scene area based on the contour information to obtain a plurality of sub-scene areas includes: determining a division parameter based on the contour information of the virtual character; and dividing the scene area according to the division parameter to obtain a plurality of sub-scene areas.
In this embodiment, the dividing parameter may be obtained by dividing the volume of the scene area of the virtual scene by the contour information of the virtual character and rounding, and then dividing (Split) the scene area by using the dividing parameter to obtain a plurality of sub-scene areas, where the dividing parameter may be a dividing value, that is, the number of partitions for dividing the scene area of the virtual scene.
Optionally, the volume of the bounding box of the scene area of the virtual scene is divided by the volume of the bounding box of the virtual character and rounded to obtain a division parameter, and the scene area is divided (Split) by using the division parameter to obtain a plurality of sub-scene areas, where the division parameter may be a subdivision value for each axis of the bounding box of the scene area of the virtual scene.
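The division parameter here is described as a per-axis subdivision value; a plausible per-axis computation, consistent with the bounding-box side-length adjustment described below (cell side = character side × √2, scene extent divided by the cell side, minimum of 1), is sketched next. The names and the rounding choice are assumptions, not the patent's exact formula.
```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };        // axis-aligned bounding-box size (illustrative)
struct SplitParams { int x, y, z; };   // number of cells along each axis

// Per-axis division parameter: the cell side is the character's bounding-box
// side enlarged by a factor (sqrt(2) in the text's example), and each scene
// axis is divided by that side and rounded down, with a minimum of 1 so that
// every cell stays larger than the character.
SplitParams ComputeSplit(const Vec3& sceneBoxSize, const Vec3& characterBoxSize,
                         float cellSideFactor = 1.41421356f) {
    const auto axisSplit = [&](float sceneSide, float characterSide) {
        const float cellSide = characterSide * cellSideFactor;
        return std::max(1, static_cast<int>(std::floor(sceneSide / cellSide)));
    };
    return { axisSplit(sceneBoxSize.x, characterBoxSize.x),
             axisSplit(sceneBoxSize.y, characterBoxSize.y),
             axisSplit(sceneBoxSize.z, characterBoxSize.z) };
}
```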
As an optional implementation, the method may further include determining a partitioning parameter based on the outline information of the virtual character, including: determining a first bounding box of the virtual character; a side length of the sub-scene region is determined based on the side length of the first bounding box.
In this embodiment, in order to avoid a situation that the boundary of the sub-scene region passes through the body of the virtual character, the volume of each sub-scene region obtained by dividing the scene region needs to be larger than the volume of the virtual character, that is, the boundary of the sub-scene region cannot pass through the body of the virtual character, so that the first bounding box of the virtual character may be determined first, then the side length of the first bounding box is adjusted, and the adjusted side length is determined as the side length of the sub-scene region, that is, the side length of the bounding box of the sub-scene region, where the first bounding box is used to represent the contour information of the region occupied by the virtual character in the virtual scene.
As an optional implementation, determining the side length of the sub-scene region based on the side length of the first bounding box may include: determining the result of adjusting the side length of the first bounding box by a target coefficient as the side length of the sub-scene region.
In this embodiment, in order to make the side length of the sub-scene region larger than the motion amplitude of the virtual character in the virtual scene, the side length of the first bounding box may be adjusted by a target coefficient, and the adjusted side length is determined as the side length of the sub-scene region, where the target coefficient may be determined by the motion amplitude of the virtual character and may be an amplification factor of the side length of the first bounding box; for example, the value of the target coefficient is √2, and the side length of the first bounding box multiplied by √2 is determined as the side length of the sub-scene region, which is only described by way of example and is not specifically limited herein.
As an optional implementation manner, determining a division parameter based on the contour information of the virtual character may include: determining a first bounding box of the virtual character and a second bounding box of the scene area; and determining the division parameter based on the first bounding box and the second bounding box.
In this embodiment, a value obtained by dividing a volume of a second bounding box of the scene area by a volume of a first bounding box of the virtual character and then taking an integer may be determined as the partition parameter, where the second bounding box is used to represent the contour information of the scene area, and the first bounding box is used to represent the contour information of the area occupied by the virtual character in the virtual scene.
As an optional implementation manner, in step S204, dividing the scene area according to the division parameter to obtain a plurality of sub-scene areas includes: determining, according to the division parameter, at least one of the following for each sub-scene region: a region position, a position of the corresponding depth camera, and a region identifier.
In this embodiment, after the division parameter is determined based on the contour information of the virtual character, the scene region is divided according to the division parameter, and a region position of each sub-scene region, a position of the depth camera corresponding to each sub-scene region, and a region identifier of each sub-scene region are obtained, where the region position may be the position of each sub-scene region within the scene region of the virtual scene and may be represented by three-dimensional coordinates, the depth camera may be used to capture a depth map of the sub-scene region, and the region identifier may be the Id of the sub-scene region.
Alternatively, the region Location of each sub-scene region is uniquely identified by Id; for example, starting from the minimum coordinate (Min Location) of the bounding box of the scene region and gradually increasing toward the maximum coordinate (Max Location), Id is increased by 1 in the axis-by-axis increasing order of the coordinate axes (ZYX), so as to obtain the region location and Id of each sub-scene region. For example, if one scene region is divided into 8 sub-scene regions, the Id and region location may be expressed as: 1: (0, 0, 0), 2: (0, 0, 1), 3: (0, 1, 0), 4: (0, 1, 1), 5: (1, 0, 0), 6: (1, 0, 1), 7: (1, 1, 0), 8: (1, 1, 1), which is described here by way of example only and is not limiting.
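A small sketch of this Id assignment, reproducing the ordering of the example above (Id starts at 1 and the last axis varies fastest); the container and coordinate types are illustrative assumptions.
```cpp
#include <array>
#include <map>

// Assign Ids to the cells of a split scene box, starting at 1, with the last
// coordinate varying fastest; a 2 x 2 x 2 split yields
// 1:(0,0,0), 2:(0,0,1), 3:(0,1,0), ..., 8:(1,1,1) as in the example above.
std::map<int, std::array<int, 3>> EnumerateCells(int splitX, int splitY, int splitZ) {
    std::map<int, std::array<int, 3>> cells;
    int id = 1;
    for (int x = 0; x < splitX; ++x)
        for (int y = 0; y < splitY; ++y)
            for (int z = 0; z < splitZ; ++z)
                cells[id++] = {x, y, z};
    return cells;
}
```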
As an alternative implementation, in step S206, baking the first virtual object included in the sub-scene region to obtain a first depth map includes: determining a shooting area of the depth camera for the virtual scene based on the sub-scene area; a first depth map captured by a depth camera over a capture area is acquired.
In this embodiment, a shooting area of the virtual scene by the depth camera is determined according to the volume of the sub-scene area, and a first depth map corresponding to the sub-scene area is shot by the depth camera, where the first depth map may be used to characterize the distance between the virtual character in the sub-scene area and the depth camera.
Alternatively, the sub-scene regions may be located within the shooting region so that adjacent sub-scene regions overlap at their transitions, that is, the depth maps of neighboring sub-scene regions may partially overlap, so that the sub-scene-region division boundary is not exceeded even when the virtual character is in motion, for example running or jumping.
Alternatively, the shooting area of the depth camera for the virtual scene may be larger than the volume of the bounding box of the virtual character in the sub-scene area; for example, the volume of the bounding box of the sub-scene area multiplied by √2 may be taken as the shooting area of the depth camera, which is not limited herein.
As an optional implementation manner, the method may further include that the first depth maps respectively corresponding to two adjacent sub-scene regions include depth maps of the same part of the first virtual object.
In this embodiment, since the sub-scene regions are located within the shooting region of the depth camera, the first depth maps acquired by the depth camera for two adjacent sub-scene regions include a depth map of the same part of the first virtual object, so that when the virtual character enters a different sub-scene region, the first depth map of that sub-scene region can be loaded with seamless switching.
As an optional embodiment, the method may further include, in response to the second virtual object moving to the sub-scene area, acquiring a second depth map of both the first virtual object and the second virtual object captured by the depth camera on the capture area; and creating the second depth map as a second shadow map, and rendering and displaying a second target shadow representation on the virtual character based on the second shadow map.
In this embodiment, if a dynamic object appears in the sub-scene area, that is, when the second virtual object moves to the sub-scene area where the first virtual object is located, the first virtual object and the second virtual object need to form a shadow together for the virtual character, a second depth map of the first virtual object and the second virtual object captured by the depth camera on the capturing area may be dynamically obtained through Id of the sub-scene area, and the second depth map may be created as a second shadow map, and a second target shadow representation is rendered and displayed on the virtual character based on the second shadow map, where the second depth map may be used to represent a distance between the first virtual object and the depth camera captured by the depth camera on the capturing area and a distance between the second virtual object and the depth camera, the first virtual object may be an object in a stationary state in the sub-scene area, and the second virtual object may be an object in a moving state in the sub-scene area, for example, a moving box.
Optionally, the second depth maps of the first virtual object and the second virtual object may be obtained by directly overlaying the depth map corresponding to the first virtual object and the depth map corresponding to the second virtual object, or may be obtained by first creating the depth map corresponding to the first virtual object as a shadow map of the first virtual object, creating the depth map corresponding to the second virtual object as a shadow map of the second virtual object, and then overlaying the shadow map of the first virtual object and the shadow map of the second virtual object, which is not specifically limited herein.
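Assuming both depth maps store the distance to the depth camera at each texel, one plausible way to superpose them is a per-texel minimum, so the occluder closest to the light wins; this is a sketch of the overlay idea, not the patent's exact operation.
```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Combine the static (baked) depth map of a cell with a dynamic object's depth
// map of the same resolution: each texel keeps the smaller distance, i.e. the
// occluder closest to the depth camera, which is what a joint capture would record.
std::vector<float> CombineDepthMaps(const std::vector<float>& staticDepth,
                                    const std::vector<float>& dynamicDepth) {
    std::vector<float> combined(staticDepth.size());
    for (std::size_t i = 0; i < staticDepth.size(); ++i)
        combined[i] = std::min(staticDepth[i], dynamicDepth[i]);  // assumes equal sizes
    return combined;
}
```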
As an optional embodiment, the method may further include determining, as the first virtual object, a virtual object called from a database based on a region identifier of the sub-scene region, where a correspondence between the region identifier and the virtual object included in the sub-scene region is stored in the database.
In this embodiment, the correspondence between the region identifier of the sub-scene region and the virtual objects contained in the sub-scene region may be stored as list data by means of key-value pairs, for example, (region identifier of the sub-scene region, virtual object list), and the list data is stored in the database. When a dynamic object appears in the sub-scene region, the first virtual object may be directly called from the list data in the database through the region identifier of the sub-scene region, so that the depth camera only needs to capture the depth map of the first virtual object. As with the statically baked shadow-map technique, this step may also be performed in an offline state, without incurring additional performance consumption.
The technical solutions of the embodiments of the present application are further described below by way of example with reference to preferred embodiments.
During virtual scene development, shadows can enhance the realism of the picture, but implementing shadows consumes a large amount of game performance and is difficult to handle; achieving a shadow effect that is both visually appealing and low in performance consumption is a difficult optimization problem in the industry, for example, realizing shadows without edge aliasing or moiré fringes, or improving shadow precision.
In a virtual scene, shadow effects can be divided into two types according to whether an object is the caster or the receiver, that is, the shadow an object casts onto other objects and the shadow other objects cast onto the object. The performance optimization method in this embodiment of the present application mainly addresses the shadow cast by other objects (e.g., scene objects) onto an object (e.g., a virtual character).
Cartoon rendering is a popular rendering style, and it usually includes some non-physically-real representations: for example, the virtual character can receive the shadow cast by a scene object onto the character, while at the same time the scene object itself has no shadow and the virtual character casts no shadow onto the scene objects. Fig. 3 (a) is a schematic diagram of a shadow effect according to an embodiment of the present application; as shown in fig. 3 (a), only the window 32 and the wall 33 cast a shadow onto the virtual character 31, as indicated by the black oval area in the figure, and the virtual character casts no shadow onto the ground 34. Fig. 3 (b) is a schematic diagram of another shadow effect according to an embodiment of the present application; as shown in fig. 3 (b), the black oval area on the virtual character 31 represents the shadow cast by the window 32 onto the virtual character 31.
In one related art, the light source is set to be dynamically movable and the CastShadow switch of the UE4 engine is enabled on scene objects, so that all objects can cast shadows onto each other. However, this method causes the objects in the scene (windows, boxes, people) to shadow each other, and generating the depth map of the whole area in real time is very resource-consuming and may cause precision loss.
In another related art, the shadow effect is achieved by statically baking a depth map of the scene using the Shadow Mask technique, converting the depth-map matrix into a shadow map, and then applying the shadow map on the virtual character. For example, with the Shadow Mask technique, the scene (window) is depth-captured onto an RT by a SceneCapture2D camera, and the RT texture is loaded directly into the character Shader at runtime to produce the shadow effect. However, if the scene area is large (for example, 1 km × 1 km) and the baked depth map is required to retain sufficient precision, the RT texture becomes very large, so it has to be combined with cascaded shadow maps (CascadeShadowMap) to load shadow maps of different levels for near and far distances. This not only increases the amount of multi-level cascaded shadow maps held in memory, but also cannot guarantee, owing to precision loss, that the shadow cast by the scene onto the character is represented clearly and correctly. In addition, a statically baked depth map is fixed at offline baking time and cannot be updated at runtime, so a movable object in the scene cannot correctly cast a shadow onto the character.
To this end, this embodiment of the present application provides a Shadow Mask-based shadow optimization method, which, while achieving the above shadow effect, can optimize the generation area of the shadow to guarantee the precision requirement of the depth map, and can also make a movable object in the virtual scene cast a shadow only onto the character, without casting a shadow onto other scene objects such as the floor.
Further describing a Shadow Mask-based Shadow optimization method provided by this embodiment of the present application, the method may include the following two parts.
The first part is implemented in the offline (Editor) state: the logic of baking the Shadow Map region by region is added on the basis of the Shadow Mask.
Fig. 4 is a schematic diagram of a regional baking operation interface according to an embodiment of the present application. As shown in fig. 4, each small box represents a Cell and the virtual character is located in the scene region to be baked; a region Box may first be placed to cover the entire virtual scene for which the depth map is to be baked, and then a subdivision value of the number of divisions (Split) is defined for each axis, for example, 5 × 5.
Alternatively, the Split value may be obtained by calculating the volume of the entire scene area Box, dividing it by the volume of the virtual character's Bounds, and rounding.
It should be noted that the volume of a divided region needs to be larger than the volume occupied by the virtual character in the virtual scene, that is, the boundary of a divided region cannot pass through the body of the virtual character, for example with the upper half of the virtual character in region A and the lower half in region B, because the Shader of one virtual character can only load the depth of one region; if the above situation occurred, the virtual character's Shader would load the region depth map incorrectly.
Optionally, when the depth map is captured, the volume of the minimum region × √2 may be taken as the shooting region, so that the minimum regions partially overlap each other; thus, when the virtual character's Shader enters a different minimum region, the depth map of that minimum region can be loaded with seamless switching, and even if the virtual character is in motion, for example running or jumping, it will not cross the boundary of the minimum region. The volume of the minimum region may be determined by obtaining the bounding box of the virtual character; it only needs to be larger than the whole virtual character, and is not specifically limited herein.
It should be noted that the √2 in "volume of the minimum region × √2" is merely an example and is not specifically limited.
Next, a Box covering the whole scene area is defined and divided into a plurality of partitioned areas, each area being a Cell, and then the position and Id of each Cell and the position of its corresponding depth camera are calculated. This may include: preparing the Shadow Map data of each Cell, i.e., defining the length, width and height of each Cell, the minimum world position point of the Box, the longest edge × √2, an Id list of the Cells, an array storing all static meshes in the scene, the region division and three-dimensional variables, and matching the position of each Cell to a depth camera; constructing a function that calculates these values in the editor, ensuring that the minimum split value on each of the X, Y and Z axes is 1, resetting the Id list, calculating the data of each Cell, defining a Box covering the whole scene area through the minimum and maximum XYZ positions of the Box, dividing the Box into a plurality of Cells according to Split, traversing the Cell subscripts in the X, Y and Z directions, and storing the Id of each Cell in a data list; outputting the length, width and height of the Cell, the longest edge × √2, and the minimum world position point of the Box; and calling a function (for example, a lambda function) to store all Static Meshes in the scene in an array, keeping only objects that are not movable, so as to obtain the static meshes of the scene.
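A rough sketch of the per-Cell record implied by these steps (cell size, the Box's minimum world position point, the longest edge × √2, the matched depth camera, and the static meshes stored for the Cell); all field names and types are assumptions for illustration.
```cpp
#include <string>
#include <vector>

struct Vec3f { float x, y, z; };

// Data gathered in the editor for one Cell before its depth map is baked.
struct CellBakeData {
    int id = 0;                                // Cell Id, assigned in ZYX order
    Vec3f cellSize{};                          // length, width and height of the Cell
    Vec3f boxMinWorldPosition{};               // minimum world position point of the scene Box
    float captureWidth = 0.0f;                 // longest edge of the Cell multiplied by sqrt(2)
    Vec3f depthCameraPosition{};               // position of the depth camera matched to this Cell
    std::vector<std::string> staticMeshNames;  // static meshes stored for the Cell (illustrative)
};

// The data list output by the editor step: one record per Cell.
using CellBakeList = std::vector<CellBakeData>;
```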
Alternatively, starting from the minimum coordinate (Min Location) of the entire scene area Box and gradually increasing toward the maximum coordinate (Max Location), Id may be increased by 1 in the axis-by-axis increasing order of the coordinate axes (ZYX) to obtain the Location and Id of each Cell. For example, if one scene area Box is divided into 8 Cells, the Id and Location of each Cell may be: 1: (0, 0, 0), 2: (0, 0, 1), 3: (0, 1, 0), 4: (0, 1, 1), 5: (1, 0, 0), 6: (1, 0, 1), 7: (1, 1, 0), 8: (1, 1, 1).
Finally, the static depth map corresponding to each partitioned area in the virtual scene is acquired by the depth camera, and the data list is output, obtaining a plurality of static depth maps, i.e., the basis for producing the shadow maps. This includes: calling a function on a timer to process each Cell; allocating storage space for the partitioned block Shadow Maps, a shadow-texture storage list, the matrix transformation of each Cell relative to the depth camera, the Id number of the Capture and the corresponding Cell number, and the sky directional light source and the light direction of the directional light; performing timing statistics, and storing the depth map captured by the Capture, all static meshes of the scene and the camera used for depth capture; checking that the save path is not null; acquiring the single directional light source of the scene and its light direction, and calling the function every 0.01 seconds until all Cells have been processed; setting the depth camera to be allowed to capture all objects in the current scene; dynamically creating a single-channel RT and assigning it, attaching the RT to the SceneCapture component, setting the shooting width of the camera, acquiring the position of the light source, setting the depth camera to rotate relative to the bounding box, offsetting the depth camera by 10000 units in the light-source direction and updating its position, converting the inverse matrix of the depth camera into an array, creating the depth RT as a static texture and saving it under a file, and processing the next Cell in the next loop; if the index (Index) exceeds the maximum Cell count, clearing the timer, which indicates that all Cells have finished the depth Capture.
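The geometric part of this capture pass can be pictured with the sketch below, which only plans where each Cell's depth camera would be placed (offset 10000 units toward the light source, orthographic width equal to the Cell's capture width); the actual RT creation, SceneCapture shooting and texture saving go through the engine's API and are not reproduced here, and the sign convention for the light direction is an assumption.
```cpp
#include <vector>

struct Vec3d { float x, y, z; };
Vec3d operator-(const Vec3d& a, const Vec3d& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3d operator*(const Vec3d& a, float s)        { return {a.x * s, a.y * s, a.z * s}; }

struct CellCaptureInfo  { int id;     Vec3d center;   float captureWidth; };  // from the editor data
struct DepthCameraSetup { int cellId; Vec3d position; float orthoWidth;   };  // planned capture

// For every Cell, place the depth camera 10000 units away from the cell center
// toward the light source (assuming lightDirection points from the light into
// the scene) and shoot with an orthographic width equal to the capture width.
std::vector<DepthCameraSetup> PlanCaptures(const std::vector<CellCaptureInfo>& cells,
                                           const Vec3d& lightDirection) {
    std::vector<DepthCameraSetup> setups;
    for (const CellCaptureInfo& cell : cells)
        setups.push_back({cell.id, cell.center - lightDirection * 10000.0f, cell.captureWidth});
    return setups;
}
```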
Fig. 5 is a schematic diagram of the depth maps corresponding to the regions in a virtual scene according to an embodiment of the present application. As shown in fig. 5, the figure contains the depth maps corresponding to a plurality of different sub-regions, the number below each depth map represents the Id of the depth map, and the shadow texture 109 and the shadow texture 110 are the parts of the water-tower-like object that overlap in the two regions, which illustrates the sub-region boundary-transition problem solved by the embodiment of the present application.
The second part is implemented in the runtime (Runtime) state: based on the obtained data list, it is determined whether to dynamically re-Capture the depth map of a partitioned region, and this determination process is optimized.
When a game runs, a dynamic object may appear in a virtual scene, and the movement of the dynamic object may affect the shadow effect of the scene projected onto the character Shader, so that when a moving object enters the divided area where the virtual character is located, the depth map of the cell can be re-captured dynamically instead of re-capturing the whole virtual scene area.
In order to achieve the above effect, optimization may be performed on the basis of the data list obtained in the first part, including: acquiring the bounding boxes of all objects in the virtual scene by calling a UE4 function (for example, a GetBound function); traversing each Cell given the divided region Cells; determining which object bounding boxes are contained in each Cell, adding those objects to the list corresponding to the Cell, and storing the result as list data by means of key-value pairs (for example, TMap<Cell Id, object list>). When the depth of a Cell is dynamically updated, the object list is obtained according to the Id of the Cell, so that the depth camera only captures the depth of those objects and of the movable object (the box).
Optionally, the data obtained here is an extension of the data list of the first part, so this step may also be performed in an offline state without incurring additional performance consumption.
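A minimal offline sketch of this preprocessing, assuming simple axis-aligned bounding boxes and an overlap test for "contained in the Cell"; at runtime, the list retrieved by a Cell's Id (plus the moving object) is then all the depth camera has to capture. Names and the containment criterion are assumptions.
```cpp
#include <string>
#include <unordered_map>
#include <vector>

struct AABB { float minX, minY, minZ, maxX, maxY, maxZ; };

struct SceneObject { std::string name; AABB bounds; };
struct Cell        { int id;           AABB bounds; };

// An object is attributed to a Cell when their axis-aligned bounding boxes
// overlap (an object may therefore appear in the lists of several Cells).
bool Overlaps(const AABB& a, const AABB& b) {
    return a.minX <= b.maxX && a.maxX >= b.minX &&
           a.minY <= b.maxY && a.maxY >= b.minY &&
           a.minZ <= b.maxZ && a.maxZ >= b.minZ;
}

// Offline preprocessing: for every Cell, record which scene objects it contains,
// keyed by the Cell Id, i.e. the key-value list data consulted when a Cell's
// depth is dynamically re-captured at runtime.
std::unordered_map<int, std::vector<std::string>>
BuildCellObjectList(const std::vector<Cell>& cells, const std::vector<SceneObject>& objects) {
    std::unordered_map<int, std::vector<std::string>> table;
    for (const Cell& cell : cells)
        for (const SceneObject& obj : objects)
            if (Overlaps(cell.bounds, obj.bounds))
                table[cell.id].push_back(obj.name);
    return table;
}
```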
The beneficial effects brought by the technical solution of the embodiment of the present application may include: the problem of transition at the region-division boundary is solved; the preprocessed object-list data is obtained, so that objects can be captured on demand when the depth map is dynamically acquired, realizing partial performance optimization; and on the basis of the Shadow Mask technique, the optimization of the region-division algorithm is added, so that "static objects" and "dynamic objects" in the scene can generate a depth map at the granularity of a minimum region and project shadows onto the virtual character without casting shadows onto other scene objects.
In the embodiment of the application, the optimization content of the region division algorithm is added on the basis of the Shadow Mask technology, so that a static depth map of each Cell in a region in a scene is obtained, and a data list is output; when the depth of the Cell is dynamically updated, the corresponding object list is obtained from the data list according to the Id of the Cell, and the depth camera is enabled to only capture the depth of the object and the moving object so as to achieve the purposes of optimizing the generation area of the shadow and ensuring the precision of the depth map, thereby solving the technical problem that the shadow effect which is in line with the requirement of the virtual scene cannot be realized, and achieving the technical effect of realizing the shadow effect which is in line with the requirement of the virtual scene.
Through the description of the foregoing embodiments, it is clear to those skilled in the art that the method according to the foregoing embodiments may be implemented by software plus a necessary general hardware platform, and certainly may also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present application or portions thereof that contribute to the prior art may be embodied in the form of a software product, where the computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, and an optical disk), and includes several instructions for enabling a terminal device (which may be a mobile phone, a computer, a server, or a network device) to execute the method described in the embodiments of the present application.
In this embodiment, an information processing apparatus for implementing the embodiment shown in fig. 2 is further provided. The apparatus has already been used to implement the above embodiments and preferred implementations, and what has been described is not repeated. As used below, the terms "unit" and "module" may refer to a combination of software and/or hardware that implements a predetermined function. Although the apparatus described in the following embodiments is preferably implemented in software, an implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
Fig. 6 is a schematic diagram of an information processing apparatus according to an embodiment of the present application, and as shown in fig. 6, the information processing apparatus 600 includes: a determination unit 601, a division unit 602, a baking unit 603, a creation unit 604, and a rendering unit 605.
A determining unit 601, configured to determine a scene area of the virtual scene and contour information of the virtual character in the virtual scene.
A dividing unit 602, configured to divide the scene area based on the contour information to obtain a plurality of sub-scene areas, where a contour of each sub-scene area is greater than a contour of the virtual character.
A baking unit 603, configured to bake a first virtual object included in the sub-scene region, so as to obtain a first depth map of the first virtual object;
a creating unit 604 for creating a first shadow map using the first depth map.
A rendering unit 605, configured to render and display a first target shadow representation on the virtual character based on the first shadow map, where the first target shadow representation is a shadow formed by the first virtual object on the virtual character.
Optionally, the dividing unit 602 includes: the first division module is used for determining division parameters based on the outline information of the virtual role; and the second division module is used for dividing the scene areas according to the division parameters to obtain a plurality of sub-scene areas.
Optionally, the first partitioning module comprises: the first determining submodule is used for determining a first bounding box of the virtual character, wherein the first bounding box is used for representing outline information of an area occupied by the virtual character in a virtual scene; and the second determining submodule is used for determining the side length of the sub-scene area based on the side length of the first enclosure box.
Optionally, the second determining sub-module is further configured to determine the side length of the sub-scene region based on the side length of the first bounding box by: and determining the result of adjusting the side length of the first bounding box by the target coefficient as the side length of the sub-scene area, wherein the target coefficient is determined by the motion amplitude of the virtual character, so that the side length of the sub-scene area is greater than the motion amplitude of the virtual character.
Optionally, the first dividing module includes: a third determining submodule, configured to determine a first bounding box of the virtual character and a second bounding box of the scene area, wherein the first bounding box is used for representing the contour information of the area occupied by the virtual character in the virtual scene, and the second bounding box is used for representing the contour information of the scene area; and a fourth determining submodule, configured to determine the division parameter based on the first bounding box and the second bounding box.
Optionally, the dividing unit 602 includes: a first determining module, configured to determine, according to the division parameter, at least one of the following for each sub-scene region: a region position, a position of the corresponding depth camera, and a region identifier.
Optionally, the baking unit 603 includes: a second determining module, configured to determine a shooting area of the depth camera for the virtual scene based on the sub-scene area, wherein the sub-scene area is located within the shooting area; and an acquiring module, configured to acquire a first depth map captured by the depth camera over the shooting area.
Optionally, the first depth maps respectively corresponding to two adjacent sub-scene regions include a depth map of the same part of the first virtual object.
Optionally, the apparatus further comprises: an acquisition unit configured to acquire a second depth map of both the first virtual object and the second virtual object captured by the depth camera on the capture area in response to the second virtual object moving to the sub-scene area; and the processing unit is used for creating the second depth map as a second shadow map and rendering and displaying a second target shadow expression on the virtual character based on the second shadow map, wherein the second target shadow expression is a shadow formed by the first virtual object and the second virtual object on the virtual character together.
In the information processing apparatus of this embodiment, the determining unit is configured to determine a scene area of a virtual scene and contour information of a virtual character in the virtual scene; the dividing unit is configured to divide the scene area based on the contour information to obtain a plurality of sub-scene areas, wherein the contour of each sub-scene area is larger than the contour of the virtual character; the baking unit is configured to bake a first virtual object contained in the sub-scene area to obtain a first depth map of the first virtual object; the creating unit is configured to create a first shadow map using the first depth map; and the rendering unit is configured to render and display a first target shadow representation on the virtual character based on the first shadow map, wherein the first target shadow representation is a shadow formed by the first virtual object on the virtual character, thereby solving the technical problem that a shadow effect meeting the requirements of the virtual scene cannot be achieved and achieving the technical effect of realizing a shadow effect meeting the requirements of the virtual scene.
It should be noted that, the above units and modules may be implemented by software or hardware, and for the latter, the following may be implemented, but not limited to: the units and the modules are all positioned in the same processor; alternatively, the units and modules may be located in different processors in any combination.
Embodiments of the present application further provide a computer-readable storage medium having a computer program stored therein, wherein the computer program is configured to perform the steps in any of the above method embodiments when executed.
Optionally, in this embodiment, the computer-readable storage medium may include, but is not limited to: various media capable of storing computer programs, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Optionally, in this embodiment, the computer-readable storage medium may be located in any one of a group of computer terminals in a computer network, or in any one of a group of mobile terminals.
Alternatively, in the present embodiment, the above-mentioned computer-readable storage medium may be configured to store a computer program for executing the steps of:
s1, determining a scene area of a virtual scene and outline information of a virtual character in the virtual scene;
s2, dividing the scene area based on the outline information to obtain a plurality of sub-scene areas, wherein the outline of each sub-scene area is larger than the outline of the virtual character;
s3, baking the first virtual object contained in the sub-scene area to obtain a first depth map of the first virtual object;
s4, creating a first shadow map by using the first depth map;
and S5, rendering and displaying the first target shadow representation on the virtual character based on the first shadow map, wherein the first target shadow representation is a shadow formed by the first virtual object to the virtual character.
Optionally, the computer-readable storage medium is further configured to store program code for performing the following steps: determining a dividing parameter based on the outline information of the virtual role; and dividing the scene area according to the division parameters to obtain a plurality of sub-scene areas.
Optionally, the computer-readable storage medium is further configured to store program code for performing the following steps: determining a first bounding box of the virtual character, wherein the first bounding box is used for representing outline information of an area occupied by the virtual character in a virtual scene; a side length of the sub-scene region is determined based on the side length of the first bounding box.
Optionally, the computer-readable storage medium is further configured to store program code for performing the following steps: determining, as the side length of the sub-scene area, the result of adjusting the side length of the first bounding box by the target coefficient, wherein the target coefficient is determined by the motion amplitude of the virtual character, so that the side length of the sub-scene area is larger than the motion amplitude of the virtual character.
Optionally, the computer-readable storage medium is further configured to store program code for performing the following steps: determining a first bounding box of the virtual character and a second bounding box of the scene area, wherein the first bounding box is used for representing the outline information of the area occupied by the virtual character in the virtual scene, and the second bounding box is used for representing the outline information of the scene area; a partitioning parameter is determined based on the first bounding box and the second bounding box.
Optionally, the computer-readable storage medium is further configured to store program code for performing the following steps: determining, for the sub-scene areas according to the partitioning parameters, at least one of the following: a region location, a location of the corresponding depth camera, and a region identifier.
Optionally, the computer-readable storage medium is further configured to store program code for performing the following steps: determining a shooting area of the depth camera for the virtual scene based on the sub-scene area, wherein the sub-scene area is located in the shooting area; a first depth map captured by a depth camera on a capture area is acquired.
Optionally, the computer-readable storage medium is further configured to store program code for performing the following steps: the first depth maps respectively corresponding to two adjacent sub-scene areas include depth maps of the same part of the first virtual object.
Optionally, the computer-readable storage medium is further configured to store program code for performing the following steps: acquiring, in response to a second virtual object moving into the sub-scene area, a second depth map of both the first virtual object and the second virtual object shot by the depth camera on the shooting area; and creating the second depth map as a second shadow map, and rendering and displaying a second target shadow expression on the virtual character based on the second shadow map, wherein the second target shadow expression is a shadow formed on the virtual character by the first virtual object and the second virtual object together.
Optionally, the computer-readable storage medium is further configured to store program code for performing the following steps: determining, as the first virtual object, a virtual object retrieved from a database based on the area identifier of the sub-scene area, wherein the database stores the correspondence between area identifiers and the virtual objects contained in the sub-scene areas.
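A minimal sketch of this lookup, with an in-memory dictionary standing in for the database and invented object names, might look as follows.

```python
# Illustrative stand-in for the database described above: the region
# identifier of the sub-scene area keys a stored mapping from regions to the
# virtual objects they contain, and the returned objects are treated as the
# first virtual objects to bake. All identifiers are assumptions.

REGION_OBJECTS = {            # illustrative "database" contents
    0: ["stone_pillar", "market_stall"],
    1: ["well"],
    2: [],
}

def first_virtual_objects(region_id: int) -> list:
    """Look up which virtual objects the sub-scene area contains."""
    return REGION_OBJECTS.get(region_id, [])

print(first_virtual_objects(0))  # ['stone_pillar', 'market_stall']
```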
In the computer-readable storage medium of this embodiment, a scene area of a virtual scene and outline information of a virtual character in the virtual scene are determined; dividing the scene area based on the outline information to obtain a plurality of sub-scene areas, wherein the outline of each sub-scene area is larger than the outline of the virtual role; baking a first virtual object contained in the sub-scene area to obtain a first depth map of the first virtual object; creating a first shadow map using the first depth map; and rendering and displaying the first target shadow representation on the virtual character based on the first shadow map, wherein the first target shadow representation is a shadow formed by the first virtual object on the virtual character, so that the technical problem that a shadow effect conforming to the requirement of the virtual scene cannot be realized is solved, and the technical effect of realizing the shadow effect conforming to the requirement of the virtual scene is achieved.
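Read end to end, the five steps could be strung together as in the following simplified sketch, where every helper and data structure is an illustrative stand-in rather than the claimed implementation.

```python
# End-to-end sketch of the five steps in this embodiment: determine the scene
# area and character outline, divide the scene into sub-areas larger than the
# outline, bake a depth map per sub-area, use it as a shadow map, and shade
# only the character with it. Everything here is a simplified assumption.

import math

def run_pipeline(scene_side, char_side, motion_amplitude, objects_by_region):
    # S1: scene area and character outline (both taken as given here).
    # S2: divide into square sub-areas larger than the character outline.
    sub_side = max(char_side, motion_amplitude) * 1.5
    regions = range(math.ceil(scene_side / sub_side) ** 2)
    # S3: bake one depth map per sub-area from its first virtual objects.
    depth_maps = {r: {name: h for name, h in objects_by_region.get(r, [])}
                  for r in regions}
    # S4: each depth map is used directly as that region's shadow map.
    shadow_maps = depth_maps
    # S5: render the shadow on the character in whichever region it stands.
    def character_lit(region_id, char_height):
        casters = shadow_maps.get(region_id, {})
        return all(h <= char_height for h in casters.values())
    return character_lit

character_lit = run_pipeline(100.0, 1.0, 3.0,
                             {0: [("pillar", 2.0)], 1: []})
print(character_lit(0, 1.7))  # False: the pillar shadows the character
print(character_lit(1, 1.7))  # True: nothing here casts a shadow
```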
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present application may be embodied in the form of a software product, which may be stored in a computer-readable storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a terminal device, or a network device, etc.) to execute the method according to the embodiments of the present application.
In an exemplary embodiment of the present application, a computer readable storage medium has stored thereon a program product capable of implementing the above-described method of the present embodiment. In some possible implementations, the various aspects of the embodiments of the present application may also be implemented in the form of a program product that includes program code for causing a terminal device to perform the steps according to various exemplary implementations of the present application described in the above section "exemplary method" of the present embodiment, when the program product is run on the terminal device.
According to the program product for implementing the method, the portable compact disc read only memory (CD-ROM) can be adopted and comprises program codes, and the program product can be operated on terminal equipment, such as a personal computer. However, the program product of the embodiments of the present application is not limited thereto, and in the embodiments of the present application, the computer readable storage medium may be any tangible medium that can contain or store the program, which can be used by or in connection with the instruction execution system, apparatus, or device.
The program product described above may employ any combination of one or more computer-readable media. The computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
It should be noted that the program code embodied on the computer readable storage medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Embodiments of the present application further provide an electronic device comprising a memory having a computer program stored therein and a processor configured to execute the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
s1, determining a scene area of a virtual scene and outline information of a virtual character in the virtual scene;
s2, dividing the scene area based on the outline information to obtain a plurality of sub-scene areas, wherein the outline of each sub-scene area is larger than the outline of the virtual character;
s3, baking the first virtual object contained in the sub-scene area to obtain a first depth map of the first virtual object;
s4, creating a first shadow map by using the first depth map;
and S5, rendering and displaying the first target shadow representation on the virtual character based on the first shadow map, wherein the first target shadow representation is a shadow formed by the first virtual object to the virtual character.
Optionally, the processor may be further configured to execute the following steps by a computer program: determining a division parameter based on the outline information of the virtual role; and dividing the scene area according to the division parameters to obtain a plurality of sub-scene areas.
Optionally, the processor may be further configured to execute the following steps by a computer program: determining a first bounding box of the virtual character, wherein the first bounding box is used for representing outline information of an area occupied by the virtual character in a virtual scene; a side length of the sub-scene region is determined based on the side length of the first bounding box.
Optionally, the processor may be further configured to execute the following steps by a computer program: determining, as the side length of the sub-scene area, the result of adjusting the side length of the first bounding box by the target coefficient, wherein the target coefficient is determined by the motion amplitude of the virtual character, so that the side length of the sub-scene area is larger than the motion amplitude of the virtual character.
Optionally, the processor may be further configured to execute the following steps by a computer program: determining a first bounding box of the virtual character and a second bounding box of the scene area, wherein the first bounding box is used for representing the outline information of the area occupied by the virtual character in the virtual scene, and the second bounding box is used for representing the outline information of the scene area; a partitioning parameter is determined based on the first bounding box and the second bounding box.
Optionally, the processor may be further configured to execute the following steps by a computer program: determining, for the sub-scene areas according to the partitioning parameter, at least one of the following: a region location, a location of the corresponding depth camera, and a region identifier.
Optionally, the processor may be further configured to execute the following steps by a computer program: determining a shooting area of the depth camera for the virtual scene based on the sub-scene area, wherein the sub-scene area is located in the shooting area; a first depth map captured by a depth camera over a capture area is acquired.
Optionally, the processor may be further configured to execute the following steps by a computer program: the first depth maps respectively corresponding to two adjacent sub-scene areas include depth maps of the same part of the first virtual object.
Optionally, the processor may be further configured to execute the following steps by a computer program: acquiring, in response to a second virtual object moving into the sub-scene area, a second depth map of both the first virtual object and the second virtual object shot by the depth camera on the shooting area; and creating the second depth map as a second shadow map, and rendering and displaying a second target shadow expression on the virtual character based on the second shadow map, wherein the second target shadow expression is a shadow formed on the virtual character by the first virtual object and the second virtual object together.
Optionally, the processor may be further configured to execute the following steps by a computer program: determining, as the first virtual object, a virtual object retrieved from a database based on the area identifier of the sub-scene area, wherein the database stores the correspondence between area identifiers and the virtual objects contained in the sub-scene areas.
In the electronic apparatus of this embodiment, a scene area of a virtual scene and outline information of a virtual character in the virtual scene are determined; dividing the scene area based on the outline information to obtain a plurality of sub-scene areas, wherein the outline of each sub-scene area is larger than the outline of the virtual role; baking a first virtual object contained in the sub-scene area to obtain a first depth map of the first virtual object; creating a first shadow map using the first depth map; and rendering and displaying the first target shadow representation on the virtual character based on the first shadow map, wherein the first target shadow representation is a shadow formed by the first virtual object on the virtual character, so that the technical problem that a shadow effect conforming to the requirement of the virtual scene cannot be realized is solved, and the technical effect of realizing the shadow effect conforming to the requirement of the virtual scene is achieved.
Fig. 7 is a schematic diagram of an electronic device according to an embodiment of the present application. As shown in Fig. 7, the electronic device 700 is only an example and should not impose any limitation on the functions or scope of use of the embodiments.
As shown in Fig. 7, the electronic device 700 is in the form of a general-purpose computing device. The components of the electronic device 700 may include, but are not limited to: at least one processor 710, at least one memory 720, a bus 730 connecting the various system components (including the memory 720 and the processor 710), and a display 740.
Wherein the above-mentioned memory 720 stores program code, which can be executed by the processor 710, to cause the processor 710 to perform the steps according to various exemplary embodiments of the present application described in the above-mentioned method section of the embodiments of the present application.
The memory 720 may include readable media in the form of volatile memory units, such as a random access memory unit (RAM) 7201 and/or a cache memory unit 7202, may further include a read only memory unit (ROM) 7203, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid state memory.
In some examples, memory 720 may also include programs/utilities 7204 having a set (at least one) of program modules 7205, such program modules 7205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment. The memory 720 may further include memory located remotely from the processor 710, which may be connected to the electronic device 700 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Bus 730 may represent one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, the processor 710, or a local bus using any of a variety of bus architectures.
Display 740 may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of electronic device 700.
Optionally, the electronic device 700 may also communicate with one or more external devices (e.g., a keyboard, a pointing device, a Bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 700, and/or with any device (e.g., a router, a modem, etc.) that enables the electronic device 700 to communicate with one or more other computing devices. Such communication may occur via an input/output (I/O) interface 750. Also, the electronic device 700 may communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) via the network adapter 760. As shown in Fig. 7, the network adapter 760 communicates with the other modules of the electronic device 700 via the bus 730. It should be appreciated that although not shown in Fig. 7, other hardware and/or software modules may be used in conjunction with the electronic device 700, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
The electronic device 700 may further include: a keyboard, a cursor control device (e.g., a mouse), an input/output interface (I/O interface), a network interface, a power source, and/or a camera.
It will be understood by those skilled in the art that the structure shown in Fig. 7 is only an illustration and is not intended to limit the structure of the electronic device. For example, the electronic device 700 may also include more or fewer components than shown in Fig. 7, or have a different configuration from that shown in Fig. 7. The memory 720 may be used for storing computer programs and corresponding data, such as the computer program and corresponding data of the information processing method in the embodiments of the present application. The processor 710 executes various functional applications and data processing by running the computer programs stored in the memory 720, that is, implements the information processing method described above.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
In the embodiments of the present application, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to the related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
The foregoing is only a preferred embodiment of the present application and it should be noted that, as will be apparent to those skilled in the art, numerous modifications and adaptations can be made without departing from the principles of the present application and such modifications and adaptations are intended to be considered within the scope of the present application.

Claims (13)

1. An information processing method, characterized by comprising:
determining a scene area of a virtual scene and outline information of a virtual character in the virtual scene;
dividing the scene area based on the outline information to obtain a plurality of sub-scene areas, wherein the outline of each sub-scene area is larger than the outline of the virtual character;
baking a first virtual object contained in the sub-scene area to obtain a first depth map of the first virtual object;
creating a first shadow map using the first depth map;
rendering and displaying a first target shadow representation on the virtual character based on the first shadow map, wherein the first target shadow representation is a shadow formed by the first virtual object on the virtual character.
2. The method of claim 1, wherein dividing the scene area based on the contour information to obtain a plurality of sub-scene areas comprises:
determining a partitioning parameter based on the outline information of the virtual character;
and dividing the scene area according to the division parameters to obtain the plurality of sub-scene areas.
3. The method of claim 2, wherein determining a partitioning parameter based on the outline information of the virtual character comprises:
determining a first bounding box of the virtual character, wherein the first bounding box is used for representing outline information of an area occupied by the virtual character in the virtual scene;
determining a side length of the sub-scene region based on the side length of the first bounding box.
4. The method of claim 3, wherein determining the side length of the sub-scene region based on the side length of the first bounding box comprises:
determining, as the side length of the sub-scene area, the result of adjusting the side length of the first bounding box by a target coefficient, wherein the target coefficient is determined by the motion amplitude of the virtual character, so that the side length of the sub-scene area is larger than the motion amplitude of the virtual character.
5. The method of claim 2, wherein determining a partitioning parameter based on the outline information of the virtual character comprises:
determining a first bounding box of the virtual character and a second bounding box of the scene area, wherein the first bounding box is used for representing outline information of an area occupied by the virtual character in the virtual scene, and the second bounding box is used for representing outline information of the scene area;
determining the partitioning parameter based on the first bounding box and the second bounding box.
6. The method of claim 5, wherein dividing the scene area according to the division parameter to obtain the plurality of sub-scene areas comprises:
determining, for the sub-scene areas according to the partitioning parameter, at least one of the following: a region location, a location of the corresponding depth camera, and a region identification.
7. The method of claim 6, wherein baking the first virtual object contained in the sub-scene region to obtain the first depth map comprises:
determining a shooting area of the depth camera for the virtual scene based on the sub-scene area, wherein the sub-scene area is located within the shooting area;
and acquiring the first depth map shot by the depth camera on the shooting area.
8. The method of claim 7, wherein the first depth maps corresponding to two adjacent sub-scene regions respectively comprise depth maps of the same part of the first virtual object.
9. The method of claim 7, further comprising:
acquiring, in response to a second virtual object moving to the sub-scene region, a second depth map of both the first virtual object and the second virtual object photographed by the depth camera on the shooting area;
and creating the second depth map as a second shadow map, and rendering and displaying a second target shadow expression on the virtual character based on the second shadow map, wherein the second target shadow expression is a shadow formed by the first virtual object and the second virtual object on the virtual character together.
10. The method of claim 6, further comprising:
determining, as the first virtual object, a virtual object retrieved from a database based on the area identifier of the sub-scene area, wherein the database stores the correspondence between the area identifier and the virtual objects contained in the sub-scene area.
11. An information processing apparatus characterized by comprising:
the virtual character recognition device comprises a determining unit, a judging unit and a judging unit, wherein the determining unit is used for determining a scene area of a virtual scene and outline information of a virtual character in the virtual scene;
the dividing unit is used for dividing the scene area based on the outline information to obtain a plurality of sub-scene areas, wherein the outline of each sub-scene area is larger than the outline of the virtual role;
the baking unit is used for baking a first virtual object contained in the sub-scene area to obtain a first depth map of the first virtual object;
a creating unit configured to create a first shadow map using the first depth map;
and the rendering unit is used for rendering and displaying a first target shadow expression on the virtual character based on the first shadow map, wherein the first target shadow expression is a shadow formed by the first virtual object on the virtual character.
12. A computer-readable storage medium, in which a computer program is stored, wherein the computer program is arranged to, when executed by a processor, perform the method of any one of claims 1 to 10.
13. An electronic device comprising a memory and a processor, wherein the memory has a computer program stored therein, and the processor is configured to execute the computer program to perform the method of any one of claims 1 to 10.
CN202211289323.8A 2022-10-20 2022-10-20 Information processing method, information processing apparatus, storage medium, and electronic apparatus Pending CN115761106A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211289323.8A CN115761106A (en) 2022-10-20 2022-10-20 Information processing method, information processing apparatus, storage medium, and electronic apparatus


Publications (1)

Publication Number Publication Date
CN115761106A (en) 2023-03-07

Family

ID=85352393


Country Status (1)

Country Link
CN (1) CN115761106A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination