WO2023173828A1 - Scene element processing method, apparatus, device and medium - Google Patents

Scene element processing method, apparatus, device and medium

Info

Publication number
WO2023173828A1
Authority
WO
WIPO (PCT)
Prior art keywords
scene
target area
element density
candidate position
mask image
Prior art date
Application number
PCT/CN2022/137148
Other languages
English (en)
French (fr)
Inventor
朱植秀
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司
Priority to US18/238,413 (US20230401806A1)
Publication of WO2023173828A1

Classifications

    • G06F 3/04845: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06T 19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • A63F 13/52: Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F 13/60: Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • G06F 3/04847: Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • G06T 11/60: Editing figures and text; Combining figures or text
    • G06T 3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 7/20: Analysis of motion
    • G06V 10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06T 2207/30242: Counting objects in image
    • G06T 2219/2004: Aligning objects, relative positioning of parts

Definitions

  • the present application relates to the field of rendering technology, and in particular to a scene element processing method, device, equipment and medium.
  • Scene element rendering technology is a technology that automatically renders and generates scene elements in virtual scenes based on computers. Taking the gaming field as an example, scene elements in the game scene can be generated through scene element rendering technology. For example, for plant-growing games, virtual plant objects in plant-growing game scenes can be generated through scene element rendering technology.
  • scene elements are usually generated individually or in the form of cells.
  • the distribution of scene elements generated individually or in the form of cells is monotonously repetitive, so the scene elements cannot blend well into the virtual scene, resulting in a relatively poor rendering effect.
  • this application provides a scene element processing method, which is executed by a terminal.
  • the method includes: displaying the target area, in the virtual scene, on which scene element rendering processing is to be performed;
  • in response to a scene element addition operation for the target area, obtaining scene element density distribution information corresponding to the target area; the scene element density distribution information is used to indicate the distribution of at least one scene element to be generated in the target area;
  • determining, based on the scene element density distribution information, the element density value corresponding to each candidate position in the target area;
  • determining the element generation position from each candidate position based on the element density value, and rendering and generating the corresponding scene element at the element generation position.
  • this application provides a scene element processing device, which includes:
  • the display module is used to display the target area in the virtual scene to be rendered for scene elements
  • An acquisition module configured to obtain scene element density distribution information corresponding to the target area in response to a scene element addition operation for the target area; the scene element density distribution information is used to indicate the distribution of at least one scene element to be generated in the target area;
  • a determination module configured to determine the element density value corresponding to each candidate position in the target area based on the scene element density distribution information
  • a generation module configured to determine an element generation position from each candidate position based on the element density value, and render and generate a corresponding scene element in the element generation position.
  • the present application provides a computer device, including one or more memories and a processor.
  • the memory stores computer readable instructions.
  • when the processor executes the computer-readable instructions, the steps in each method embodiment of the present application are implemented.
  • the present application provides one or more computer-readable storage media, which store computer-readable instructions.
  • when the computer-readable instructions are executed by one or more processors, the steps in each method embodiment of the present application are implemented.
  • the present application provides a computer program product, which includes computer-readable instructions.
  • when the computer-readable instructions are executed by one or more processors, the steps in each method embodiment of the present application are implemented.
  • Figure 1 is an application environment diagram of a scene element processing method in an embodiment
  • Figure 2 is a schematic flowchart of a scene element processing method in one embodiment
  • Figure 3 is a schematic diagram of generating scene elements in units of cells in the traditional scene element generation method
  • Figure 4 is a schematic diagram of the scene element distribution effect of generating scene elements in units of cells in the traditional scene element generation method
  • Figure 5 is a schematic diagram of a description interface of a virtual tool in an embodiment
  • Figure 6 is a schematic diagram of a movement area determined by controlling the movement of a virtual tool in one embodiment
  • Figure 7 is a schematic diagram of the adjustment interface for the virtual brush range size in one embodiment
  • Figure 8 is a schematic diagram of a movement area determined by a virtual brush with a large range and a movement area determined by a virtual brush with a small range in one embodiment
  • Figure 9 is a schematic diagram of each candidate position in the pixels of the mask image in one embodiment
  • Figure 10 is a schematic diagram of scene elements of various element types in one embodiment
  • Figure 11 is a schematic diagram of the rendering and generation effect of scene elements in one embodiment
  • Figure 12 is a schematic flowchart of a scene element processing method in another embodiment
  • Figure 13 is a structural block diagram of a scene element processing device in one embodiment
  • Figure 14 is an internal structure diagram of a computer device in one embodiment.
  • the terminal 102 may display a target area in the virtual scene to be subjected to scene element rendering processing.
  • the terminal 102 may respond to the scene element adding operation for the target area and obtain scene element density distribution information corresponding to the target area; the scene element density distribution information is used to indicate the distribution of at least one scene element to be generated in the target area.
  • the terminal 102 can determine the element density value corresponding to each candidate position in the target area based on the scene element density distribution information.
  • the terminal 102 can determine the element generation position from each candidate position based on the element density value, and render and generate the corresponding scene element in the element generation position.
  • the terminal 102 can obtain relevant data of the target area in the virtual scene to be rendered for scene elements from the server 104, and further, the terminal 102 can display the target area of the virtual scene to be rendered for scene elements based on the obtained relevant data. It can also be understood that after the terminal 102 renders and generates the corresponding scene elements in the element generation position, the rendered scene elements can be synchronized to the server 104 .
  • a scene element processing method is provided. This method can be applied to the terminal 102 in Figure 1, and can be executed independently by the terminal 102 itself, or can also be implemented through interaction between the terminal 102 and the server 104.
  • This embodiment takes the method applied to the terminal 102 in Figure 1 as an example to illustrate, including the following steps:
  • Step 202 Display the target area in the virtual scene to be rendered for scene elements.
  • the virtual scene is a fictitious scene. It can be understood that the virtual scene is a non-real scene, such as a scene in a game.
  • a scene element is a component element in a virtual scene.
  • the virtual scene element can be at least one of a virtual character, a virtual animal, a virtual plant, a virtual prop, etc. in the game.
  • Scene element rendering processing is a processing method for rendering and generating scene elements.
  • the target area is an area in the virtual scene where scene element rendering processing is to be performed. It can be understood that the virtual scene may include at least one area, and the target area is an area in at least one area of the virtual scene where scene element rendering processing is to be performed.
  • the virtual scene may include at least one of a scene in a game, a visual design, and a VR (Virtual Reality, virtual reality) scene.
  • the virtual scene may include at least one area
  • the terminal may determine a target area to be subjected to scene element rendering processing from at least one area of the virtual scene in response to a selection operation for the area.
  • the terminal can display the target area in the virtual scene to be rendered for scene elements.
  • Step 204 In response to the scene element adding operation for the target area, obtain scene element density distribution information corresponding to the target area; the scene element density distribution information is used to indicate the distribution of at least one scene element to be generated in the target area.
  • the scene element adding operation is an operation of adding scene elements in the target area. It can be understood that before performing the scene element adding operation, the corresponding scene element may not yet have been rendered and generated in the target area, or the corresponding scene element may already have been rendered and generated. If the corresponding scene element has not been rendered and generated in the target area before performing the scene element adding operation, the corresponding scene element can be newly added in the target area by performing the scene element adding operation. If corresponding scene elements have already been rendered and generated in the target area before performing the scene element adding operation, then by performing the scene element adding operation, further corresponding scene elements can be added in the target area, so that the number of scene elements in the target area is greater than the number of scene elements in the target area before the scene element adding operation was performed.
  • the scene element density distribution information includes multiple element density values.
  • the element density value is used to represent the density value of the scene element to be generated after the scene element addition operation is performed. Density value refers to the number of rendered scene elements per unit area.
  • the user can perform a scene element adding operation in the target area
  • the terminal can, in response to the user's scene element adding operation for the target area, update the initial scene element density distribution information corresponding to the target area before the scene element adding operation was performed, to obtain the updated scene element density distribution information corresponding to the target area.
  • the target area is covered with an elemental density recording layer.
  • the user can perform a scene element addition operation on the element density recording layer covered on the target area, and the terminal can obtain the scene element density distribution information corresponding to the target area in response to the user's scene element addition operation on the element density recording layer.
  • the element density recording layer is an information recording layer used to record scene element density distribution information corresponding to the target area.
  • Step 206 Based on the scene element density distribution information, determine the element density value corresponding to each candidate position in the target area.
  • the candidate position is the position in the target area that has a certain probability of rendering and generating the corresponding scene element. It can be understood that scene elements can only be rendered and generated at each candidate position in the target area, and will not be rendered and generated at other positions in the target area other than each candidate position.
  • the scene element density distribution information may include element density values.
  • the terminal can determine the element density value corresponding to each candidate position in the target area based on the element density value in the scene element density distribution information.
  • In this way, the element density value corresponding to each candidate position in the target area can be determined quickly and accurately, improving the efficiency and accuracy of obtaining the element density value corresponding to each candidate position.
  • the terminal may perform upsampling in the element density recording layer based on the candidate position, and use the upsampled element density value directly as the element density value corresponding to the candidate position.
  • Step 208 Determine the element generation position from each candidate position based on the element density value, and render and generate the corresponding scene element in the element generation position.
  • the element generation position is the position in the target area used to render and generate scene elements.
  • the terminal can determine an element generation position where the scene element can be rendered and generated from each candidate position, and render and generate the corresponding scene element in the element generation position.
  • the terminal may compare the element density value corresponding to each candidate position with a preset element density threshold, and determine the candidate position whose element density value is greater than the preset element density threshold as the element generation position. Furthermore, the terminal can render and generate corresponding scene elements in the element generation position. In this way, by comparing the element density values corresponding to each candidate position with the preset element density threshold, the element generation position can be quickly determined, thereby improving the scene element generation efficiency.
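  • As an illustration of the threshold-based selection described above, the following is a minimal sketch in Python; the list layout, the candidate coordinates, and the threshold value are hypothetical and are not taken from the application.

```python
def select_generation_positions(candidate_positions, density_values, density_threshold=0.5):
    """Keep the candidate positions whose element density value exceeds a preset
    element density threshold; these become the element generation positions."""
    return [pos for pos, density in zip(candidate_positions, density_values)
            if density > density_threshold]

# Hypothetical example: four candidate positions with their element density values.
candidates = [(1.0, 2.0), (3.5, 0.5), (2.0, 4.0), (0.5, 0.5)]
densities = [0.8, 0.3, 0.6, 0.1]
print(select_generation_positions(candidates, densities))  # -> [(1.0, 2.0), (2.0, 4.0)]
```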
  • the scene element density distribution information corresponding to the target area can be obtained, where the scene element density distribution information can be used to indicate the distribution of at least one scene element to be generated in the target area.
  • the element density value corresponding to each candidate position in the target area can be determined, and then the element generation position can be determined from each candidate position based on the element density value, and the corresponding scene element can be rendered and generated in the element generation position.
  • the scene element processing method of this application can personalize the scene element density distribution information corresponding to the target area by performing scene element addition operations for the target area.
  • the density of the scene elements generated in the target area is controllable, which can provide a more natural transition between scene elements. It can also avoid a monotonously repetitive distribution of the generated scene elements, so that the scene elements can be better integrated into the virtual scene, improving the rendering effect.
  • Based on the updated scene element density distribution information, the element generation position is determined from each candidate position in the target area, and the corresponding scene elements are rendered at the element generation position, which avoids a monotonously repetitive distribution of the generated scene elements, so that the scene elements can be better integrated into the virtual scene, thereby improving the rendering effect.
  • the target area is covered with a mask image; the pixel values of the pixels in the mask image are used to represent the scene element density distribution information; the scene element adding operation includes a pixel value modification operation; the pixel value modification operation is used to modify the pixel values of the pixels in the mask image. In response to the scene element addition operation for the target area, obtaining the scene element density distribution information corresponding to the target area includes: in response to the pixel value modification operation for the mask image covering the target area, updating the pixel values of the pixels in the mask image to obtain the scene element density distribution information corresponding to the target area.
  • the mask image is an image that stores pixel values in the form of a map.
  • the pixel values of each pixel in the mask image can be edited through the map method.
  • the above element density recording layer includes a mask image.
  • the scene element density distribution information includes multiple element density values.
  • the pixel values of the pixels in the mask image are used to represent the scene element density distribution information. It can be understood that the pixel values of the pixels in the mask image and the element density values in the scene element density distribution information have a mapping relationship.
  • the pixel value of a pixel corresponds to an element density value. It can be understood that the pixel value of the pixel in the mask image can be used to represent the element density value of the scene element to be generated.
  • the user can perform a pixel value modification operation on the mask image covering the target area, and the terminal can, in response to the user's pixel value modification operation on the mask image covering the target area, update the original pixel value of the corresponding pixel in the mask image to obtain the updated pixel value of the corresponding pixel point.
  • the terminal can determine the scene element density distribution information corresponding to the target area based on the updated pixel value of the corresponding pixel point.
  • the pixel value modification operation can be implemented by selecting pixels in the mask image.
  • the terminal may determine the selected pixel area on the mask image in response to the selection operation on the pixel points in the mask image.
  • the terminal can modify the pixel values of the pixels in the pixel area to obtain scene element density distribution information corresponding to the target area based on the pixel values of each pixel in the modified mask image.
  • the pixel area is the area where the pixel points are located based on the selection operation of the pixel points in the mask image.
  • the pixel area to be modified is determined by selecting the pixel points in the mask image, which provides a flexible and fast way to determine the pixel area. Then, by modifying the pixel values of pixels in the pixel area, the rendering generation for controlling scene elements is realized, thereby improving the rendering effect.
  • the mask image is overlaid on the target area, and the scene element density distribution information is represented by the pixel values of the pixels in the mask image; by updating the pixel values of the pixels in the mask image, the scene element density distribution information corresponding to the target area can be obtained quickly, improving the efficiency of obtaining the scene element density distribution information.
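  • A minimal sketch of this embodiment is given below, assuming the mask image is stored as a 2D array whose pixel values stand for element density values; the array shape, the value range 0 to 15, and the rectangular selection used in place of the pixel value modification operation are assumptions made only for illustration.

```python
import numpy as np

# Hypothetical 64x64 mask image covering the target area; pixel values 0..15
# represent the element density values of the scene elements to be generated.
mask = np.zeros((64, 64), dtype=np.uint8)

def modify_pixel_area(mask, row_slice, col_slice, new_value):
    """Pixel value modification operation: set the pixel values of the selected
    pixel area, which updates the scene element density distribution information."""
    mask[row_slice, col_slice] = new_value
    return mask

# The user selects a pixel area and raises its density value to 12 (assumed 0..15 scale).
modify_pixel_area(mask, slice(10, 20), slice(30, 50), new_value=12)
```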
  • the target area is covered with at least one mask image; the target area supports rendering and generating scene elements of at least one element type; the number of mask images covering the target area is the same as the number of element types supported by the target area; one mask image corresponds to one element type.
  • the element types of scene elements can include multiple types.
  • the rendering of scene elements of an element type can be controlled through the mask image corresponding to that element type; that is, the mask image corresponding to an element type records the scene element density distribution information of the scene elements belonging to that element type.
  • the mask images corresponding to various element types are independent of each other and do not affect each other.
  • the target area is covered with three mask images, namely mask image A, mask image B and mask image C.
  • These three mask images can respectively correspond to element type a, element type b and element type c.
  • mask image A is used to record scene element density distribution information of scene elements belonging to element type a
  • mask image B is used to record scene element density distribution information of scene elements belonging to element type b
  • mask image C is used to record scene element density distribution information of scene elements belonging to element type c.
  • the pixel value modification operation is implemented by controlling the virtual tool to move on the mask image. In response to the pixel value modification operation for the mask image covering the target area, updating the pixel values of the pixels in the mask image to obtain the scene element density distribution information corresponding to the target area includes: in response to the operation of controlling the virtual tool to move on the mask image, determining the movement area of the virtual tool on the mask image, the movement area being the area that the virtual tool passes through when moving on the mask image; and modifying the pixel values of the pixels in the movement area, to obtain the scene element density distribution information corresponding to the target area based on the pixel values of each pixel in the modified mask image.
  • virtual tools are drawing tools in virtual scenes. That is, the virtual tool can be provided in the virtual scene for operation. It can be understood that controlling the movement of the virtual tool on the mask image is one of the implementation methods for implementing the selection operation for pixels in the mask image. By controlling the virtual tool to move on the mask image, the pixel values of the pixels in the moved area can be modified. That is, the virtual tool is a tool that modifies the pixel values of each point on the mask image through movement to render and generate scene elements.
  • the user can control the virtual tool to move on the mask image
  • the terminal can determine the movement area that the virtual tool passes through when moving on the mask image in response to the operation of controlling the virtual tool to move on the mask image.
  • the terminal can modify the pixel values of the pixels in the moving area to obtain the pixel values of each pixel in the modified mask image.
  • the terminal can determine the scene element density distribution information corresponding to the target area based on the pixel value of each pixel in the modified mask image.
  • the size of the virtual tool range can be changed. It can be understood that the larger the virtual tool range, the larger the movement area the virtual tool passes through when moving on the mask image, that is, the greater the number of pixels in the movement area whose values the terminal can modify, and the larger the range of scene elements that can be rendered at one time. The smaller the virtual tool range, the smaller the movement area that the virtual tool passes through when moving on the mask image, that is, the fewer the pixels in the movement area that the terminal can modify, and the smaller the range of scene elements that can be rendered and generated at one time. In this way, by controlling the size of the virtual tool range, scene elements of different range sizes can be generated in one rendering pass on demand. Compared with the traditional way of generating scene elements one by one or in units of cells, the operation of rendering and generating scene elements in this application is more convenient, and a wide range of scene elements can be rendered at once.
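  • The brush-style editing described above could be sketched as follows; the circular brush footprint, the brush radius, and the painted density value are illustrative assumptions rather than details fixed by the application.

```python
import numpy as np

def paint_brush_stroke(mask, path, radius, density_value):
    """Modify the pixel values of the pixels inside the movement area, i.e. the
    pixels covered by a circular brush stamped at each point along the path."""
    rows, cols = np.indices(mask.shape)
    for cy, cx in path:  # positions the virtual brush passes through on the mask image
        inside = (rows - cy) ** 2 + (cols - cx) ** 2 <= radius ** 2
        mask[inside] = density_value
    return mask

mask = np.zeros((64, 64), dtype=np.uint8)
stroke = [(32, x) for x in range(10, 55, 2)]                  # a horizontal drag of the brush
paint_brush_stroke(mask, stroke, radius=5, density_value=15)  # larger radius -> larger moved area
```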
  • the terminal can display scene elements of various element types through the display interface 501 , for example, element type 1 to element type 12 displayed in the display interface 501 .
  • the terminal can display the selected element type and the related description for the virtual tool through the virtual tool description interface 503, that is, "scene elements can be generated in batches through virtual tools, and the range size of the virtual tool can be controlled at the same time.”
  • the terminal may display the selected element type 10 and related descriptions for the virtual tool through the virtual tool description interface 503.
  • in response to an operation of controlling the movement of the virtual tool on the mask image 601, the terminal may determine a movement area 602 of the virtual tool on the mask image.
  • the virtual tool may be a virtual brush.
  • the terminal can display the brush range adjustment control 702 for adjusting the size of the virtual brush range through the brush range adjustment interface 701.
  • The size of the virtual brush range can be adjusted through the brush range adjustment control 702.
  • in response to an operation of controlling the virtual brush to move on the mask image 801, the terminal may determine the movement area 802 and the movement area 803 of the virtual brush on the mask image, wherein the virtual brush range corresponding to the movement area 802 is larger than the virtual brush range corresponding to the movement area 803, and the area of the movement area 802 is larger than the area of the movement area 803.
  • the pixel values of the pixels in the movement area are modified, thereby controlling the rendering and generation of scene elements, providing a new way of rendering and generating scene elements, and making the operation more convenient.
  • the first element density value is obtained; the first element density value is the maximum element density value corresponding to the target element type that the target area supports rendering; the target element type is the element type to which the scene element to be generated belongs; the target number of scene elements to be generated is determined according to the first element density value and the size of the target area; and candidate positions that meet the target number are determined in the target area.
  • the first element density value is the preset maximum element density value corresponding to the target element type that the target area supports rendering.
  • the terminal can obtain the preset first element density value corresponding to the target element type that the target area supports rendering.
  • the terminal may determine the target number of scene elements to be generated based on the first element density value and the size of the target area. Furthermore, the terminal can determine candidate locations that meet the target number in the target area.
  • the terminal may determine the target number of scene elements to be generated based on the first element density value and the area of the target area.
  • the terminal may directly use the product of the first element density value and the area of the target area as the target number of scene elements to be generated.
  • the target number of scene elements to be generated is determined based on the first element density value corresponding to the target element type that the target area supports rendering and the size of the target area, and candidate positions that meet the target number are determined in the target area. In this way, different numbers of candidate positions are set in the target area according to the different element types of the scene elements, which improves the visual effect of rendering and generating scene elements in the target area.
  • the target number of candidate positions in the target area may also be preset.
  • the terminal may determine candidate locations that meet a preset target number in the target area.
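  • A minimal sketch of how the target number of candidate positions could be derived and scattered is given below; treating the target area as an axis-aligned rectangle and scattering candidates uniformly at random are simplifying assumptions made for the example.

```python
import random

def candidate_positions_for_area(first_density_value, area_width, area_height):
    """Target number = first element density value (maximum density for the target
    element type) multiplied by the area of the target area, as described above."""
    target_number = int(first_density_value * area_width * area_height)
    # Assumption: candidates are scattered uniformly inside a rectangular target area.
    return [(random.uniform(0, area_width), random.uniform(0, area_height))
            for _ in range(target_number)]

# Hypothetical numbers: a maximum density of 0.5 elements per unit area over a
# 10 x 8 target area gives 40 candidate positions.
candidates = candidate_positions_for_area(0.5, 10.0, 8.0)
print(len(candidates))  # 40
```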
  • the target area is covered with a mask image; the pixel values of the pixels in the mask image are used to represent the scene element density distribution information; determining, based on the scene element density distribution information, the element density value corresponding to each candidate position in the target area includes: for each candidate position in the target area, upsampling in the mask image based on the candidate position, and using the upsampled pixel value as the element density value corresponding to the candidate position.
  • upsampling is an image processing method that improves image resolution.
  • the resolution of the image can be improved by upsampling the image. It can be understood that upsampling can determine the pixel values of missing pixels in the image based on the pixel values of the original pixels in the image.
  • the terminal can perform upsampling in the mask image based on the candidate position. Furthermore, the terminal can use the upsampled pixel value as the element density value corresponding to the candidate position. It can be understood that the pixel value obtained by upsampling may be the pixel value of the pixel point in the mask image, or it may be a new pixel value calculated based on the pixel value of the pixel point in the mask image.
  • upsampling can be implemented through single linear interpolation sampling, or upsampling can be implemented through bilinear interpolation sampling. This embodiment does not specifically limit the upsampling method.
  • the accuracy of the element density values corresponding to the candidate positions can be improved.
  • performing upsampling in the mask image based on the candidate position, and using the pixel value obtained by the upsampling as the element density value corresponding to the candidate position, includes: determining a mapping position in the mask image that has a mapping relationship with the candidate position; determining, among the pixel points of the mask image, multiple target pixel points adjacent to the mapping position; and determining the pixel value corresponding to the mapping position according to the pixel values corresponding to the multiple target pixel points, to obtain the element density value corresponding to the candidate position.
  • the mapping position is a position in the mask image that has a mapping relationship with the candidate position of the target area.
  • the target pixel is a pixel among each pixel in the mask image and adjacent to the mapping position.
  • the terminal can determine a mapping position that has a mapping relationship with the candidate position in the mask image, and determine, among the pixel points of the mask image, multiple target pixel points adjacent to the mapping position.
  • the terminal can obtain the pixel values corresponding to multiple target pixels.
  • the terminal can determine the pixel value corresponding to the mapping position based on the pixel values corresponding to the multiple target pixel points, and determine the element density value corresponding to the candidate position based on the pixel value corresponding to the mapping position.
  • the terminal may directly use the pixel value corresponding to the mapping position as the element density value corresponding to the candidate position.
  • the terminal can determine four target pixels adjacent to the mapping position among each pixel of the mask image.
  • the terminal can obtain the pixel values corresponding to these four target pixels.
  • the terminal can determine the pixel value corresponding to the mapping position based on the pixel values corresponding to the four target pixel points.
  • the pixel value corresponding to the mapping position can be determined based on the pixel values corresponding to the target pixel points adjacent to the mapping position, and the element density value corresponding to the candidate position can be determined based on the pixel value corresponding to the mapping position. This further improves the accuracy of the element density value corresponding to the candidate position.
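  • The bilinear variant of this upsampling step could look like the sketch below; the normalised mapping from a candidate position to mask coordinates and the clamping at the image border are assumptions made for the example.

```python
import numpy as np

def sample_density(mask, u, v):
    """Bilinear upsampling: map the candidate position (u, v in [0, 1]) to a
    mapping position in the mask image, take the four adjacent target pixels,
    and blend their pixel values to get the element density value."""
    h, w = mask.shape
    x = u * (w - 1)                      # mapping position in the mask image
    y = v * (h - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = mask[y0, x0] * (1 - fx) + mask[y0, x1] * fx
    bottom = mask[y1, x0] * (1 - fx) + mask[y1, x1] * fx
    return top * (1 - fy) + bottom * fy

mask = np.array([[0, 15], [0, 15]], dtype=float)
print(sample_density(mask, 0.5, 0.5))  # 7.5, halfway between pixel values 0 and 15
```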
  • determining the element generation position from each candidate position based on the element density value includes: determining, based on the element density value corresponding to each candidate position, the element generation probability corresponding to each candidate position, the element generation probability referring to the probability of rendering and generating a scene element at the candidate position; and determining the element generation position from each candidate position according to the element generation probability corresponding to each candidate position.
  • the terminal may determine the element generation probability of rendering and generating scene elements at each candidate position based on the element density value corresponding to each candidate position. Furthermore, the terminal can determine the element generation position from each candidate position in the target area according to the element generation probability corresponding to each candidate position.
  • the terminal may use the element density value corresponding to each candidate position as the element generation probability of rendering and generating scene elements at each candidate position.
  • the element generation probability corresponding to each candidate position can be determined through the element density value corresponding to each candidate position. According to the element generation probability corresponding to each candidate position, the element generation position can be determined from each candidate position, which improves the accuracy of the determined element generation position.
  • the terminal may directly use the ratio of the element density value corresponding to the candidate position to the second element density value (the maximum element density value at which the candidate position supports rendering and generating scene elements) as the element generation probability corresponding to the candidate position.
  • the pixel value of each pixel point in the mask image ranges from 0 to 15.
  • the pixel value of pixel point 901 is 0 and the pixel value of pixel point 902 is 15.
  • Candidate position A is located at the pixel center of pixel point 901. It can be seen that the pixel value of candidate position A is the pixel value of pixel point 901, that is, the pixel value of candidate position A is 0.
  • Candidate position C is located at the pixel center of pixel point 902. It can be seen that the pixel value of candidate position C is the pixel value of pixel point 902, that is, the pixel value of candidate position C is 15.
  • the candidate position B is located midway between pixel point 901 and pixel point 902. It can be seen that the pixel value of candidate position B is the average of the pixel value 0 of pixel point 901 and the pixel value 15 of pixel point 902, that is, the pixel value of candidate position B is 7.5. It can be understood that the element density value corresponding to candidate position A is 0, the element density value corresponding to candidate position B is 7.5, and the element density value corresponding to candidate position C is 15.
  • the terminal can directly use the ratio of the element density value corresponding to each candidate position to the maximum element density value 15 of the scene element supported by the candidate position as the element generation probability corresponding to the candidate position. It can be seen that the element generation probability corresponding to candidate position A is 0 , the element generation probability corresponding to candidate position B is 0.5, and the element generation probability corresponding to candidate position C is 1.
  • the element generation probability corresponding to the candidate position can be determined through the ratio of the element density value corresponding to the candidate position and the second element density value of the scene element supported by rendering at the candidate position, which improves the accuracy of the element generation probability.
  • the method further includes: generating a random number corresponding to the target area; and determining the element generation position from each candidate position according to the element generation probability corresponding to each candidate position includes: determining the element generation position from each candidate position according to the magnitude relationship between the element generation probability corresponding to each candidate position and the random number.
  • the random number is a randomly generated value corresponding to the target area.
  • the terminal can compare the element generation probability corresponding to each candidate position with the random number, and obtain the relationship between the element generation probability corresponding to each candidate position and the random number. Furthermore, the terminal can determine the element generation position from each candidate position based on the relationship between the element generation probability and the random number corresponding to each candidate position.
  • the element generation position can be determined from each candidate position.
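  • A minimal sketch of this probability-based selection is given below, assuming the element generation probability is the ratio of the element density value to the second (maximum) element density value, that a single random number is generated for the target area as described, and that a candidate becomes a generation position when its probability is greater than or equal to that random number; the exact comparison rule is an assumption.

```python
import random

def pick_generation_positions(candidates, density_values, max_density=15.0, seed=None):
    """Element generation probability = element density value / maximum element
    density value; compare each probability with a random number generated for
    the target area to decide where scene elements are rendered."""
    rng = random.Random(seed)
    r = rng.random()  # the random number corresponding to the target area
    positions = []
    for pos, density in zip(candidates, density_values):
        probability = density / max_density
        if probability >= r:  # assumed comparison rule
            positions.append(pos)
    return positions

# Using the worked example above: densities 0, 7.5 and 15 give probabilities 0, 0.5 and 1.
print(pick_generation_positions([(0, 0), (1, 0), (2, 0)], [0.0, 7.5, 15.0], seed=42))
```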
  • the initial element density value is used to characterize the density value of the scene element to be generated before the scene element adding operation is performed.
  • the terminal may obtain the initial scene element density distribution information corresponding to the target area before performing the scene element addition operation.
  • the terminal can compare each element density value in the scene element density distribution information with each initial element density value in the initial scene element density distribution information, and, based on the comparison result, filter out the updated element density values from the element density values in the scene element density distribution information.
  • the terminal can synchronize the updated element density value to the server, so that the server stores the updated element density value. It can be understood that the initial scene element density distribution information corresponding to the target area before performing the scene element addition operation has been stored in the server.
  • After receiving the updated element density values synchronized by the terminal, the server can, according to the position coordinates corresponding to the updated element density values, find the corresponding initial element density values in the initial scene element density distribution information, and modify the found initial element density values to the updated element density values.
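  • The comparison and synchronisation step could be sketched as follows; representing the density distributions as dictionaries keyed by position coordinates and the send_to_server stub are hypothetical, introduced only to illustrate filtering out the updated element density values.

```python
def updated_density_values(initial_info, current_info):
    """Compare each element density value with the corresponding initial element
    density value and keep only the entries that changed, keyed by position."""
    return {pos: value for pos, value in current_info.items()
            if initial_info.get(pos) != value}

def send_to_server(changes):
    """Hypothetical stub: the server would look up each position coordinate in its
    stored initial distribution and overwrite it with the updated density value."""
    print(f"synchronising {len(changes)} updated element density values")

initial = {(0, 0): 0, (0, 1): 5, (1, 0): 5}
current = {(0, 0): 0, (0, 1): 12, (1, 0): 5}
send_to_server(updated_density_values(initial, current))  # only (0, 1) changed
```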
  • this application can provide eight element types of scene elements, namely element type 1 to element type 8.
  • if element type 3 in Figure 10 is selected, the terminal can render and generate scene elements belonging to element type 3 in the target area.
  • the distribution of scene elements in the target area can be lush in the middle and sparse on both sides, which avoids the monotonous repetitive distribution of the generated scene elements, thus enabling the scene elements to be better integrated into the virtual scene.
  • the virtual scene is a virtual game scene; the target area includes a virtual plot in the virtual game scene; the scene elements include virtual plants in the virtual game scene; and the scene element density distribution information is used to indicate the distribution of at least one virtual plant to be generated in the virtual plot.
  • the terminal can display the virtual plot in the virtual game scene that is to be subjected to scene element rendering processing, and can obtain the scene element density distribution information corresponding to the virtual plot in response to the scene element adding operation for the virtual plot, wherein the scene Element density distribution information, used to indicate the distribution of at least one virtual plant to be generated in the virtual plot.
  • the terminal can determine the element density value corresponding to each candidate location in the virtual plot based on the scene element density distribution information.
  • the terminal can determine the element generation position from each candidate position based on the element density value, and render and generate the corresponding virtual plant in the element generation position.
  • the above element type may be a plant type of a virtual plant.
  • the virtual plants in the simulated game scene can be better integrated into the virtual game scene.
  • a scene element processing method is provided. This method can be applied to the terminal 102 in Figure 1, and can be executed independently by the terminal 102 itself, or can also be implemented through interaction between the terminal 102 and the server 104.
  • the method specifically includes the following steps:
  • Step 1202 display the target area in the virtual scene to be rendered for scene elements; the target area is covered with a mask image; the pixel values of the pixels in the mask image are used to represent the scene element density distribution information corresponding to the target area; the scene elements Density distribution information is used to indicate the distribution of at least one scene element to be generated in the target area.
  • Step 1204 In response to the operation of controlling the movement of the virtual tool on the mask image, determine the movement area of the virtual tool on the mask image; the movement area is the area that the virtual tool passes through when moving on the mask image.
  • Step 1206 Modify the pixel values of the pixels in the moving area to obtain scene element density distribution information corresponding to the target area based on the pixel values of each pixel in the modified mask image.
  • the target area is covered with at least one mask image; the target area supports rendering and generating scene elements of at least one element type; the number of mask images covering the target area is the same as the number of element types supported by the target area; one mask image corresponds to one element type.
  • Step 1208 Obtain the first element density value; the first element density value is the maximum element density value corresponding to the target element type that supports rendering in the target area; the target element type is the element type to which the scene element to be generated belongs.
  • Step 1210 Determine the target number of scene elements to be generated based on the first element density value and the size of the target area.
  • Step 1212 Determine candidate positions that meet the target number in the target area, and determine mapping positions that have a mapping relationship with the candidate positions in the mask image.
  • Step 1214 Determine multiple target pixel points adjacent to the mapping position among each pixel point in the mask image.
  • Step 1216 Determine the pixel value corresponding to the mapping position according to the pixel values corresponding to the multiple target pixel points, so as to obtain the element density value corresponding to the candidate position.
  • Step 1218 For each candidate position, obtain the second element density value corresponding to the candidate position; the second element density value is the maximum element density value that supports rendering and generating scene elements at the candidate position.
  • Step 1220 Determine the element generation probability corresponding to the candidate position based on the ratio of the element density value corresponding to the candidate position and the second element density value; the element generation probability refers to the probability of rendering and generating scene elements at each candidate position.
  • Step 1222 Generate a random number corresponding to the target area.
  • Step 1224 Based on the relationship between the element generation probability and the random number corresponding to each candidate position, determine the element generation position from each candidate position, and render and generate the corresponding scene element in the element generation position.
  • Step 1226 Obtain the initial scene element density distribution information corresponding to the target area before performing the scene element addition operation; the initial scene element density distribution information includes multiple initial element density values.
  • Step 1228 Compare each element density value in the scene element density distribution information with each initial element density value, and filter out the updated element density values.
  • Step 1230 Synchronize the updated element density values to the server.
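  • Putting steps 1202 to 1230 together, an end-to-end sketch might look like the following; it reuses the simplifying assumptions already noted (rectangular target area, pixel values 0 to 15 as density values, bilinear sampling, a single random number per area) and is not a reproduction of the application's implementation.

```python
import random
import numpy as np

MAX_DENSITY = 15.0  # assumed maximum pixel/density value

def bilinear(mask, u, v):
    h, w = mask.shape
    x, y = u * (w - 1), v * (h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = mask[y0, x0] * (1 - fx) + mask[y0, x1] * fx
    bot = mask[y1, x0] * (1 - fx) + mask[y1, x1] * fx
    return top * (1 - fy) + bot * fy

def generate_scene_elements(mask, first_density_value, area, seed=None):
    """Steps 1208-1224: derive the target number of candidates, upsample the mask
    to get each candidate's density value, convert densities to generation
    probabilities and pick generation positions with a random number for the area."""
    rng = random.Random(seed)
    width, height = area
    target_number = int(first_density_value * width * height)                 # step 1210
    candidates = [(rng.random(), rng.random()) for _ in range(target_number)]  # step 1212
    r = rng.random()                                                           # step 1222
    positions = []
    for u, v in candidates:
        density = bilinear(mask, u, v)                                         # steps 1214-1216
        probability = density / MAX_DENSITY                                    # step 1220
        if probability >= r:                                                   # step 1224 (assumed rule)
            positions.append((u * width, v * height))
    return positions

# Hypothetical mask painted denser in the middle of the plot than at the edges.
mask = np.zeros((32, 32), dtype=float)
mask[8:24, 8:24] = 15.0
print(len(generate_scene_elements(mask, first_density_value=0.5, area=(20.0, 20.0), seed=1)))
```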
  • the scene element processing method can be applied to scenes where virtual plants are generated in games.
  • the terminal can display the virtual plot in the virtual game scene to be rendered for scene elements; the virtual plot is covered with a mask image; the pixel values of the pixels in the mask image are used to represent the scene element density distribution information corresponding to the virtual plot ;Scene element density distribution information, used to indicate the distribution of at least one virtual plant to be generated in the virtual plot.
  • a movement area of the virtual tool on the mask image is determined; the movement area is an area that the virtual tool passes through when moving on the mask image. Modify the pixel values of the pixels in the moving area to obtain scene element density distribution information corresponding to the virtual plot based on the pixel values of each pixel in the modified mask image.
  • the virtual plot can be covered with at least one mask image; the virtual plot can support rendering and generating virtual plants of at least one element type; the number of mask images covered on the virtual plot is related to the virtual plot. The number of supported element types is the same; one mask image corresponds to one element type.
  • the terminal can obtain the first element density value; the first element density value is the maximum element density value corresponding to the target element type that the virtual plot supports rendering; the target element type is the element type to which the virtual plant to be generated belongs.
  • the target number of virtual plants to be generated is determined. Determine the candidate positions that meet the target number in the virtual plot, and determine the mapping position that has a mapping relationship with the candidate positions in the mask image.
  • a plurality of target pixel points adjacent to the mapping position are determined among each pixel point of the mask image. According to the pixel values corresponding to the multiple target pixel points, the pixel value corresponding to the mapping position is determined to obtain the element density value corresponding to the candidate position.
  • the terminal can obtain the second element density value corresponding to the candidate position; the second element density value is the maximum element density value that supports rendering and generating virtual plants at the candidate position.
  • the element generation probability corresponding to the candidate position is determined; the element generation probability refers to the probability of rendering and generating virtual plants at each candidate position.
  • the terminal can obtain the initial scene element density distribution information corresponding to the virtual plot before executing the scene element adding operation; the initial scene element density distribution information includes multiple initial element density values; and each element density value in the scene element density distribution information is compared with each The initial element density values are compared to filter out the updated element density values from each element density value of the scene element density distribution information; the updated element density values are synchronized to the server.
  • the scene element processing method of this application can be applied to a multiplayer online role-playing game.
  • a home system entrance can be provided in the multiplayer online role-playing game.
  • Players can enter the home system in the multiplayer online role-playing game through the home system entrance.
  • virtual tools are used to edit the mask image covering the virtual plot and change the element density value of the virtual plant to be generated, so that virtual plants can be planted on the virtual plot.
  • the scene element processing method can be applied to scenes where virtual characters are generated in games or scenes where virtual animals are generated in games. It can be understood that this scene element processing method can also be applied to scenes generated by virtual elements (ie scene elements) in visual design and VR (Virtual Reality, virtual reality). It can be understood that the virtual elements may include one of virtual plants, virtual characters, virtual animals, virtual props, etc. For example, in a VR scene, through the scene element processing method of this application, corresponding virtual elements can be rendered and generated in the target area of the VR scene, so that the virtual elements can be better integrated into the VR scene.
  • the scene element processing method of this application can also be applied to application scenarios in industrial design.
  • the scene element processing method of this application can be applied to batch generation of scene elements such as virtual buildings in industrial design software.
  • through the scene element processing method of this application, the corresponding virtual building can be rendered and generated in the target area of the industrial design scene, so that the virtual building can be better integrated into the industrial design scene, thereby effectively assisting industrial design and meeting more complex industrial design requirements.
  • Although the steps in the flowcharts of the above embodiments are shown in sequence, these steps are not necessarily executed in that sequence. Unless explicitly stated herein, there is no strict order restriction on the execution of these steps, and they can be executed in other orders. Moreover, at least some of the steps in the above embodiments may include multiple sub-steps or stages. These sub-steps or stages are not necessarily executed at the same time, but may be executed at different times, and their execution order is not necessarily sequential; they may be performed in turn or alternately with other steps, or with at least some of the sub-steps or stages of other steps.
  • a scene element processing device 1300 is provided.
  • the device can adopt a software module or a hardware module, or a combination of the two to become part of a computer device.
  • the device specifically includes:
  • the display module 1302 is used to display the target area in the virtual scene to be rendered for scene elements.
  • Acquisition module 1304, configured to obtain scene element density distribution information corresponding to the target area in response to a scene element addition operation for the target area; the scene element density distribution information is used to indicate the distribution of at least one scene element to be generated in the target area.
  • the determination module 1306 is used to determine the element density value corresponding to each candidate position in the target area based on the scene element density distribution information.
  • the generation module 1308 is used to determine the element generation position from each candidate position based on the element density value, and render and generate the corresponding scene element in the element generation position.
  • the target area is covered with a mask image; the pixel values of the pixels in the mask image are used to represent the scene element density distribution information; the scene element adding operation includes a pixel value modification operation, which is used to modify the pixel values of the pixels in the mask image; the acquisition module 1304 is also used to update the pixel values of the pixels in the mask image in response to the pixel value modification operation on the mask image covering the target area, to obtain the scene element density distribution information corresponding to the target area.
  • the target area is covered with at least one mask image; the target area supports rendering and generating scene elements of at least one element type; the number of mask images covering the target area is the same as the number of element types supported by the target area; one mask image corresponds to one element type.
  • the pixel value modification operation is implemented by controlling the movement of a virtual tool on the mask image; the acquisition module 1304 is also configured to determine, in response to the operation of controlling the virtual tool to move on the mask image, the moving area of the virtual tool on the mask image; the moving area is the area that the virtual tool passes through when moving on the mask image; the pixel values of the pixels in the moving area are modified, so that the scene element density distribution information corresponding to the target area is obtained from the pixel values of the modified mask image (a brush-editing sketch follows this summary).
  • the determination module 1306 is also used to obtain a first element density value; the first element density value is the maximum element density value corresponding to the target element type that the target area supports rendering; the target element type is the element type to which the scene elements to be generated belong; the target number of scene elements to be generated is determined based on the first element density value and the size of the target area, and candidate positions matching the target number are determined in the target area (sketched after this summary).
  • the target area is covered with a mask image; the pixel values of the pixels in the mask image are used to represent the scene element density distribution information; the determination module 1306 is also used to, for each candidate position in the target area, perform upsampling in the mask image based on the candidate position and use the pixel value obtained by the upsampling as the element density value corresponding to the candidate position.
  • the determination module 1306 is also configured to determine a mapping position in the mask image that has a mapping relationship with the candidate position; determine, among the pixels of the mask image, multiple target pixels adjacent to the mapping position; and determine the pixel value corresponding to the mapping position from the pixel values of the multiple target pixels, to obtain the element density value corresponding to the candidate position (an interpolation sketch follows this summary).
  • the generation module 1308 is also used to determine the element generation probability corresponding to each candidate position based on the element density value corresponding to that candidate position; the element generation probability is the probability of rendering and generating a scene element at the candidate position; the element generation position is determined from the candidate positions according to their element generation probabilities.
  • the generation module 1308 is also used to obtain, for each candidate position, a second element density value corresponding to the candidate position; the second element density value is the maximum element density value of the scene elements that the candidate position supports rendering and generating; the element generation probability corresponding to the candidate position is determined based on the ratio of the element density value corresponding to the candidate position to the second element density value.
  • the generation module 1308 is also used to generate a random number corresponding to the target area, and to determine the element generation position from the candidate positions according to the magnitude relationship between each candidate position's element generation probability and the random number (a selection sketch follows this summary).
  • the scene element density distribution information corresponding to the target area includes multiple element density values; the device further includes a synchronization module, used to obtain the initial scene element density distribution information corresponding to the target area before the scene element adding operation is performed; the initial scene element density distribution information includes multiple initial element density values; each element density value in the scene element density distribution information is compared with the corresponding initial element density value, so as to filter out the updated element density values from the scene element density distribution information; and the updated element density values are synchronized to the server.
  • the virtual scene is a virtual game scene; the target area includes a virtual plot in the virtual game scene; the scene elements include virtual plants in the virtual game scene; and the scene element density distribution information is used to indicate the distribution of at least one virtual plant to be generated in the virtual plot.
  • by displaying the target area in the virtual scene on which scene element rendering is to be performed, the above scene element processing device can obtain, in response to the scene element adding operation for the target area, the scene element density distribution information corresponding to the target area, where the scene element density distribution information can be used to indicate the distribution of at least one scene element to be generated in the target area.
  • the element density value corresponding to each candidate position in the target area can be determined, and then the element generation position can be determined from each candidate position based on the element density value, and the corresponding scene element can be rendered and generated in the element generation position.
  • by performing scene element adding operations for the target area, the scene element processing method of this application can update the scene element density distribution information corresponding to the target area in a personalized way.
  • this avoids the generated scene elements showing a monotonous, repetitive distribution, so that the scene elements blend better into the virtual scene, thereby improving the rendering effect.
  • Each module in the above scene element processing device can be implemented in whole or in part by software, hardware and combinations thereof.
  • Each of the above modules may be embedded in or independent of the processor of the computer device in the form of hardware, or may be stored in the memory of the computer device in the form of software, so that the processor can call and execute the operations corresponding to the above modules.
  • a computer device is provided.
  • the computer device may be a terminal, and its internal structure diagram may be as shown in Figure 14.
  • the computer device includes a processor, memory, input/output interface, communication interface, display unit and input device.
  • the processor, memory and input/output interface are connected through the system bus, and the communication interface, display unit and input device are connected to the system bus through the input/output interface.
  • the processor of the computer device is used to provide computing and control capabilities.
  • the memory of the computer device includes non-volatile storage media and internal memory.
  • the non-volatile storage medium stores an operating system and computer-readable instructions. The internal memory provides an environment for running the operating system and computer-readable instructions stored in the non-volatile storage medium.
  • the input/output interface of the computer device is used to exchange information between the processor and external devices.
  • the communication interface of the computer device is used for wired or wireless communication with external terminals.
  • the wireless mode can be implemented through WIFI, mobile cellular network, NFC (Near Field Communication) or other technologies.
  • the computer-readable instructions, when executed by the processor, implement a scene element processing method.
  • the display unit of the computer device is used to present a visible picture and can be a display screen, a projection device, or a virtual reality imaging device.
  • the display screen can be a liquid crystal display screen or an electronic ink display screen.
  • the input device of the computer device can be a touch layer covering the display screen, buttons, a trackball, or a touchpad provided on the housing of the computer device, or an external keyboard, touchpad, or mouse.
  • Figure 14 is only a block diagram of a partial structure related to the solution of the present application and does not constitute a limitation on the computer device to which the solution is applied. A specific computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
  • a computer device is provided, including one or more memories and a processor.
  • Computer-readable instructions are stored in the memory.
  • when the processor executes the computer-readable instructions, the steps in the above method embodiments are implemented.
  • one or more computer-readable storage media are provided, storing computer-readable instructions.
  • when the computer-readable instructions are executed by one or more processors, the steps in the above method embodiments are implemented.
  • a computer program product which includes computer-readable instructions. When executed by one or more processors, the computer-readable instructions implement the steps in each of the above method embodiments.
  • the user information involved in this application (including but not limited to user device information and user personal information) and data (including but not limited to data used for analysis, stored data, and displayed data) are information and data authorized by the user or fully authorized by all parties, and the collection, use, and processing of the relevant data must comply with the relevant laws, regulations, and standards of the relevant countries and regions.
  • Non-volatile memory can include read-only memory (ROM), magnetic tape, floppy disk, flash memory or optical memory, etc.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • RAM can be in many forms, such as static random access memory (Static Random Access Memory, SRAM) or dynamic random access memory (Dynamic Random Access Memory, DRAM).
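The bullets above summarize the method's main steps; the following minimal Python sketches (not part of the original filing) illustrate them under stated assumptions. First, the mask-editing step: a per-element-type mask whose pixels are overwritten wherever the virtual brush passes. The names MaskImage and apply_brush, the circular brush footprint, and the 0–15 density range are illustrative assumptions.

```python
import numpy as np

class MaskImage:
    """One mask per element type; pixel values encode the element density to generate."""

    def __init__(self, width, height, max_density=15.0):
        self.max_density = max_density
        self.pixels = np.zeros((height, width), dtype=np.float32)

    def apply_brush(self, center_x, center_y, radius, density):
        """Write a density value into every pixel inside the circular brush footprint."""
        ys, xs = np.ogrid[:self.pixels.shape[0], :self.pixels.shape[1]]
        inside = (xs - center_x) ** 2 + (ys - center_y) ** 2 <= radius ** 2
        self.pixels[inside] = np.clip(density, 0.0, self.max_density)


# One mask per supported element type (the type names are illustrative).
masks = {"grass": MaskImage(256, 256), "flower": MaskImage(256, 256)}

# Dragging the virtual brush over the plot modifies the pixels along its path.
for x, y in [(40, 40), (42, 41), (45, 43)]:   # sampled brush positions while moving
    masks["grass"].apply_brush(x, y, radius=8, density=12.0)
```

A larger brush radius covers a larger moving area and therefore rewrites more pixels per stroke, which matches the adjustable brush range described above.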
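Next, the candidate-position step: the target count follows from the first element density value (the maximum density of the target element type) and the size of the target area. This sketch assumes the density is expressed per square unit and that candidates are scattered uniformly; the filing does not prescribe how candidate positions are laid out.

```python
import random

def candidate_positions(area_width, area_height, first_element_density, rng=None):
    """Target number of scene elements = max density of the target element type x area size."""
    rng = rng or random.Random()
    target_count = int(first_element_density * area_width * area_height)
    return [(rng.uniform(0.0, area_width), rng.uniform(0.0, area_height))
            for _ in range(target_count)]

positions = candidate_positions(10.0, 10.0, first_element_density=0.5)
print(len(positions))  # 50 candidate positions for a 10 x 10 plot
```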
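For the upsampling step, the pixel value at a candidate's mapping position is derived from the adjacent target pixels. The sketch below uses bilinear interpolation over the four nearest pixels, which reproduces the two-pixel example in the description (values 0 and 15 giving 7.5 halfway between them); treating the interpolation as bilinear is an assumption, since the filing also allows single-linear sampling.

```python
import numpy as np

def density_at(mask, u, v):
    """Bilinearly sample the mask at normalized coordinates (u, v) in [0, 1].

    The four pixels surrounding the mapping position act as the target pixels
    from which the candidate's element density value is derived.
    """
    h, w = mask.shape
    x, y = u * (w - 1), v * (h - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = mask[y0, x0] * (1 - fx) + mask[y0, x1] * fx
    bottom = mask[y1, x0] * (1 - fx) + mask[y1, x1] * fx
    return top * (1 - fy) + bottom * fy

mask = np.array([[0.0, 15.0]])       # two adjacent pixels, as in the description's example
print(density_at(mask, 0.5, 0.0))    # 7.5, halfway between the two pixel centres
```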
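Element generation positions are then picked by comparing each candidate's generation probability (its density divided by the second element density value, the maximum the candidate supports) against a random number. The filing ties the random number to the target area; seeding a per-area generator and drawing once per candidate is one possible reading and is an assumption here.

```python
import random

def select_generation_positions(candidates, densities, second_element_density, area_seed=0):
    """Keep a candidate when its generation probability is at least as large as the draw."""
    rng = random.Random(area_seed)          # random number(s) tied to the target area
    chosen = []
    for position, density in zip(candidates, densities):
        probability = density / second_element_density   # in [0.0, 1.0]
        if probability >= rng.random():
            chosen.append(position)
    return chosen

# With densities 0, 7.5 and 15 and a maximum of 15, the probabilities are 0, 0.5 and 1.
picked = select_generation_positions([(0, 0), (1, 0), (2, 0)], [0.0, 7.5, 15.0],
                                      second_element_density=15.0)
```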
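Finally, the synchronization module uploads only the density values that actually changed. A minimal sketch of that delta computation follows; the dictionary keyed by position coordinates is an assumed data layout, as the filing does not specify the storage format.

```python
def changed_density_values(initial, updated):
    """Compare updated element density values with the initial ones and keep only the delta."""
    return {position: value
            for position, value in updated.items()
            if initial.get(position) != value}

initial = {(0, 0): 0.0, (0, 1): 3.0}
updated = {(0, 0): 0.0, (0, 1): 7.5, (1, 0): 12.0}
delta = changed_density_values(initial, updated)
print(delta)   # {(0, 1): 7.5, (1, 0): 12.0} -- only these are synchronized to the server
```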

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Computer Graphics (AREA)
  • Architecture (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)
  • Image Generation (AREA)

Abstract

This application relates to a scene element processing method and belongs to the field of rendering technology. The method includes: displaying a target area in a virtual scene on which scene element rendering is to be performed (202); in response to a scene element adding operation for the target area, obtaining scene element density distribution information corresponding to the target area, the scene element density distribution information being used to indicate the distribution of at least one scene element to be generated in the target area (204); determining, based on the scene element density distribution information, an element density value corresponding to each candidate position in the target area (206); and determining an element generation position from the candidate positions based on the element density values, and rendering and generating the corresponding scene element at the element generation position (208).

Description

场景元素处理方法、装置、设备和介质
本申请要求于2022年03月18日提交中国专利局,申请号为2022102671791、发明名称为“场景元素处理方法、装置、设备和介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及渲染技术领域,特别是涉及一种场景元素处理方法、装置、设备和介质。
背景技术
随着计算机技术的发展,出现了场景元素渲染技术,场景元素渲染技术是一种基于计算机自动在虚拟场景中渲染生成场景元素的技术。以游戏领域为例,通过场景元素渲染技术可生成游戏场景中的场景元素。比如,针对植物种植类的游戏,通过场景元素渲染技术可生成植物种植类游戏场景中的虚拟植物对象。
传统技术中,通常是通过单个或以单元格的形式生成场景元素,然而,通过单个或以单元格的形式生成的场景元素的分布单调重复,从而使得场景元素无法很好地融入虚拟场景,导致渲染效果比较差。
发明内容
基于此,有必要针对上述技术问题,提供一种场景元素处理方法、装置、设备和介质。
第一方面,本申请提供了一种场景元素处理方法,由终端执行,所述方法包括:
展示虚拟场景中待进行场景元素渲染处理的目标区域;
响应于针对所述目标区域的场景元素添加操作,获取所述目标区域对应的场景元素密度分布信息;所述场景元素密度分布信息,用于指示至少一个待生成的场景元素在所述目标区域中的分布情况;
基于所述场景元素密度分布信息,确定所述目标区域中各候选位置对应的元素密度值;
基于所述元素密度值从各候选位置中确定元素生成位置,并在所述元素生成位置中渲染生成相应的场景元素。
第二方面,本申请提供了一种场景元素处理装置,所述装置包括:
展示模块,用于展示虚拟场景中待进行场景元素渲染处理的目标区域;
获取模块,用于响应于针对所述目标区域的场景元素添加操作,获取所述目标区域对应的场景元素密度分布信息;所述场景元素密度分布信息,用于指示至少一个待生成的场景元素在所述目标区域中的分布情况;
确定模块,用于基于所述场景元素密度分布信息,确定所述目标区域中各候选位置对应的元素密度值;
生成模块,用于基于所述元素密度值从各候选位置中确定元素生成位置,并在所述元素生成位置中渲染生成相应的场景元素。
第三方面,本申请提供了一种计算机设备,包括一个或多个存储器和处理器,存储器中存储有计算机可读指令,该处理器执行计算机可读指令时实现本申请各方法实施例中的步骤。
第四方面,本申请提供了一个或多个计算机可读存储介质,存储有计算机可读指令,该计算机可读指令被一个或多个处理器执行时实现本申请各方法实施例中的步骤。
第五方面,本申请提供了一种计算机程序产品,包括计算机可读指令,计算机可读指令被一个或多个处理器执行时实现本申请各方法实施例中的步骤。
本申请的一个或多个实施例的细节在下面的附图和描述中提出。本申请的其它特征、目的和优点将从说明书、附图以及权利要求书变得明显。
附图说明
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1为一个实施例中场景元素处理方法的应用环境图;
图2为一个实施例中场景元素处理方法的流程示意图;
图3为传统场景元素生成方法中,以单元格为单位生成场景元素的示意图;
图4为传统场景元素生成方法中,以单元格为单位生成场景元素的场景元素分布效果示意图;
图5为一个实施例中虚拟工具的描述界面示意图;
图6为一个实施例中通过控制虚拟工具移动确定得到的移动区域示意图;
图7为一个实施例中虚拟笔刷范围大小的调节界面示意图;
图8为一个实施例中通过范围大的虚拟笔刷确定得到的移动区域,以及通过范围小的虚拟笔刷确定得到的移动区域的示意图;
图9为一个实施例中掩膜图像的像素点中的各个候选位置示意图;
图10为一个实施例中各种元素类型的场景元素示意图;
图11为一个实施例中场景元素的渲染生成效果示意图;
图12为另一个实施例中场景元素处理方法的流程示意图;
图13为一个实施例中场景元素处理装置的结构框图;
图14为一个实施例中计算机设备的内部结构图。
具体实施方式
为了使本申请的目的、技术方案及优点更加清楚明白,以下结合附图及实施例,对本申请进行进一步详细说明。应当理解,此处描述的具体实施例仅仅用以解释本申请,并不用于限定本申请。
本申请提供的场景元素处理方法,可以应用于如图1所示的应用环境中。其中,终端102通过网络与服务器104进行通信。数据存储系统可以存储服务器104需要处理的数据。数据存储系统可以集成在服务器104上,也可以放在云上或其他服务器上。其中,终端102可以但不限于是各种台式计算机、笔记本电脑、智能手机、平板电脑、物联网设备和便携式可穿戴设备,物联网设备可为智能音箱、智能电视、智能空调、智能车载设备等。便携式可穿戴设备可为智能手表、智能手环、头戴设备等。服务器104可以是独立的物理服务器,也可以是多个物理服务器构成的服务器集群或者分布式系统,还可以是提供云服务、云数据库、云计算、云函数、云存储、网络服务、云通信、中间件服务、域名服务、安全服务、CDN、以及大数据和人工智能平台等基础云计算服务的云服务器。终端102以及服务器104可以通过有线或无线通信方式进行直接或间接地连接,本申请在此不做限制。
终端102可展示虚拟场景中待进行场景元素渲染处理的目标区域。终端102可响应于针对目标区域的场景元素添加操作,获取目标区域对应的场景元素密度分布信息;场景元素密度分布信息,用于指示至少一个待生成的场景元素在目标区域中的分布情况。终端102可基于场景元素密度分布信息,确定目标区域中各候选位置对应的元素密度值。终端102可基于元素密度值从各候选位置中确定元素生成位置,并在元素生成位置中渲染生成相应的场景元素。
可以理解,终端102可从服务器104获取虚拟场景中待进行场景元素渲染处理的目标区域的相关数据,进而,终端102可基于获取的相关数据展示虚拟场景中待进行场景元素渲染处理的目标区域。还可以理解,终端102在元素生成位置中渲染生成相应的场景元素之后,可将渲染生成的场景元素同步至服务器104。
在一个实施例中,如图2所示,提供了一种场景元素处理方法,该方法可应用于图1中的终端102,由终端102自身单独执行,也可以通过终端102和服务器104之间的交互来实现。本实施例以该方法应用于图1中的终端102为例进行说明,包括以下步骤:
步骤202,展示虚拟场景中待进行场景元素渲染处理的目标区域。
其中,虚拟场景是一种虚构的场景,可以理解,虚拟场景是一种非真实存在的场景,比如,游戏中的场景。场景元素是虚拟场景中的组成元素,比如,针对游戏中的场景,虚拟场景元素可以是游戏中的虚拟人物、虚拟动物、虚拟植物和虚拟道具等中的至少一种。场景元素渲染处理,是一种渲染生成场景元素的处理方式。目标区域,是虚拟场景中待进行场景元素渲染处理的区域,可以理解,虚拟场景中可包括至少一个区域,目标区域是虚拟场景的至少一个区域中将要进行场景元素渲染处理的区 域。
在一个实施例中,虚拟场景可包括游戏中的场景、可视化设计和VR(Virtual Reality,虚拟现实)等场景中的至少一种。
具体地,虚拟场景中可包括至少一个区域,终端可响应于针对区域的选择操作,从虚拟场景的至少一个区域中,确定待进行场景元素渲染处理的目标区域。进而,终端可展示虚拟场景中待进行场景元素渲染处理的目标区域。
步骤204,响应于针对目标区域的场景元素添加操作,获取目标区域对应的场景元素密度分布信息;场景元素密度分布信息,用于指示至少一个待生成的场景元素在目标区域中的分布情况。
其中,场景元素添加操作,是在目标区域中添加场景元素的操作。可以理解,在执行场景元素添加操作之前,目标区域中可以未渲染生成有相应的场景元素,也可以已经渲染生成有相应的场景元素。若在执行场景元素添加操作之前,目标区域中未渲染生成有相应的场景元素,则通过执行场景元素添加操作,可在目标区域中新添加相应的场景元素。若在执行场景元素添加操作之前,目标区域中已经渲染生成有相应的场景元素,则通过执行场景元素添加操作,可在目标区域中再继续添加相应的场景元素,使得目标区域中的场景元素的数量,多于执行场景元素添加操作之前目标区域中的场景元素的数量。场景元素密度分布信息包括多个元素密度值。元素密度值,是用于表征执行场景元素添加操作后的、且待生成的场景元素的密度值。密度值,是指单位面积上渲染生成场景元素的数量。
具体地,用户可在目标区域中执行场景元素添加操作,终端可响应于用户针对目标区域的场景元素添加操作,对执行场景元素添加操作之前目标区域对应的初始场景元素密度分布信息进行更新,得到更新后的目标区域对应的场景元素密度分布信息。
在一个实施例中,目标区域上覆盖有元素密度记录层。用户可对目标区域上覆盖的元素密度记录层执行场景元素添加操作,终端可响应于用户针对元素密度记录层的场景元素添加操作,获取目标区域对应的场景元素密度分布信息。其中,元素密度记录层,是用于记录目标区域对应的场景元素密度分布信息的信息记录层。
步骤206,基于场景元素密度分布信息,确定目标区域中各候选位置对应的元素密度值。
其中,候选位置,是目标区域中具有一定概率渲染生成相应的场景元素的位置。可以理解,场景元素仅可在目标区域的各候选位置上渲染生成,不会在目标区域中各候选位置之外的其他位置渲染生成。
具体地,场景元素密度分布信息可包括元素密度值。终端可基于场景元素密度分布信息中的元素密度值,确定目标区域中各候选位置对应的元素密度值。
在一个实施例中,目标区域上覆盖有元素密度记录层,元素密度记录层中记录有元素密度值。针对目标区域中的每一个候选位置,终端可基于候选位置在元素密度记录层中进行上采样,并基于上采样得到的元素密度值,确定候选位置对应的元素密度值。
上述实施例中,通过在目标区域上覆盖元素密度记录层,并基于目标区域中的每一个候选位置在元素密度记录层中进行上采样,可以快速、准确地确定目标区域中的每一个候选位置对应的元素密度值,提升了各个候选位置对应的元素密度值的获取效率和准确率。
在一个实施例中,终端可基于候选位置在元素密度记录层中进行上采样,并将上采样得到的元素密度值,直接作为候选位置对应的元素密度值。
步骤208,基于元素密度值从各候选位置中确定元素生成位置,并在元素生成位置中渲染生成相应的场景元素。
其中,元素生成位置,是目标区域中用于渲染生成场景元素的位置。
具体地,目标区域的各候选位置中可能有一部分候选位置可渲染生成场景元素,有一部分不可渲染生成场景元素,可以理解,可渲染生成场景元素的候选位置即为元素生成位置。终端可基于目标区域中各候选位置对应的元素密度值,从各候选位置中确定可渲染生成场景元素的元素生成位置,并在元素生成位置中渲染生成相应的场景元素。
在一个实施例中,终端可将各候选位置对应的元素密度值与预设的元素密度阈值进行比对,并将 元素密度值大于预设的元素密度阈值的候选位置,确定为元素生成位置。进而,终端可在元素生成位置中渲染生成相应的场景元素。这样,通过将各候选位置分别对应的元素密度值与预设的元素密度阈值进行比对的方式,可以快速确定元素生成位置,从而提升了场景元素的生成效率。
上述场景元素处理方法中,通过展示虚拟场景中待进行场景元素渲染处理的目标区域,响应于针对目标区域的场景元素添加操作,可获取目标区域对应的场景元素密度分布信息,其中,场景元素密度分布信息,可用于指示至少一个待生成的场景元素在目标区域中的分布情况。基于场景元素密度分布信息,可确定目标区域中各候选位置对应的元素密度值,进而基于元素密度值可从各候选位置中确定元素生成位置,并在元素生成位置中渲染生成相应的场景元素。相较于传统的通过单个或以单元格生成场景元素的方式,本申请的场景元素处理方法通过执行针对目标区域的场景元素添加操作,可个性化地对目标区域对应的场景元素密度分布信息进行更新,以提升目标区域中待生成的场景元素的元素密度值,进而可基于更新后的场景元素密度分布信息,从目标区域的各候选位置中确定元素生成位置,并在元素生成位置中渲染生成相应的场景元素。这样,通过修改目标区域中待生成的场景元素的元素密度值,可避免出现方块感和边界感。同时,在目标区域中生成场景元素的疏密是可控的,可以提供场景元素之间更自然的过度,同时还可避免生成的场景元素呈现单调重复分布的情况,使场景元素可以更好地融入虚拟场景,从而能够提高渲染效果。
可以理解,传统的场景元素生成方式中,如图3所示,场景元素3a、3b、3c、3d和3e都是以单元格为单位生成的,每一个单元格中场景元素的元素密度值也是固定不变的,然而,通过以单元格为单位生成场景元素,会导致生成的场景元素的分布单调重复。如图4所示,以场景元素3b为例,在目标区域中渲染生成的场景元素呈现单调重复分布的情况,从而导致场景元素无法很好地融入虚拟场景。而本申请的场景元素处理方法通过执行针对目标区域的场景元素添加操作,可个性化地对目标区域对应的场景元素密度分布信息进行更新,基于更新后的场景元素密度分布信息,从目标区域的各候选位置中确定元素生成位置,并在元素生成位置中渲染生成相应的场景元素,可避免生成的场景元素呈现单调重复分布的情况,使场景元素可以更好地融入虚拟场景,从而能够提高渲染效果。
在一个实施例中,目标区域上覆盖有掩膜图像;掩膜图像中像素点的像素值用于表征场景元素密度分布信息;场景元素添加操作包括像素值修改操作;像素值修改操作,用于修改掩膜图像中像素点的像素值;响应于针对目标区域的场景元素添加操作,获取目标区域对应的场景元素密度分布信息,包括:响应于针对目标区域上覆盖的掩膜图像的像素值修改操作,对掩膜图像中像素点的像素值进行更新,得到目标区域对应的场景元素密度分布信息。
其中,掩膜图像,是一种以贴图的方式存储像素值的图像,通过贴图的方式可对掩膜图像中各个像素点的像素值进行编辑。可以理解,上述元素密度记录层包括掩膜图像。场景元素密度分布信息包括多个元素密度值,掩膜图像中像素点的像素值用于表征场景元素密度分布信息,可以理解,掩膜图像中像素点的像素值与场景元素密度分布信息中的各个元素密度值具有映射关系,一个像素点的像素值对应一个元素密度值,可以理解,掩膜图像中像素点的像素值,可以用于表征待生成的场景元素的元素密度值。
具体地,用户可对目标区域上覆盖的掩膜图像执行像素值修改操作,终端可响应于用户针对目标区域上覆盖的掩膜图像的像素值修改操作,对掩膜图像中相应像素点的原始的像素值进行更新,得到更新后的相应像素点的像素值,终端可基于更新后的相应像素点的像素值,确定目标区域对应的场景元素密度分布信息。
在一个实施例中,像素值修改操作可以通过针对掩膜图像中像素点的选择操作来实现。终端可响应于针对掩膜图像中像素点的选择操作,确定在掩膜图像上所选择的像素区域。终端可修改像素区域内的像素点的像素值,以基于修改后的掩膜图像中各像素点的像素值,得到目标区域对应的场景元素密度分布信息。其中,像素区域,是基于针对掩膜图像中像素点的选择操作所选择的像素点所在的区域。这样,通过针对掩膜图像中像素点的选择操作来确定待修改的像素区域,提供了一种灵活、快速的像素区域的确定方式。进而通过修改像素区域内的像素点的像素值,实现了针对控制场景元素的渲染生成,从而提高渲染效果。
上述实施例中,通过在目标区域上覆盖掩膜图像,并通过掩膜图像中像素点的像素值来表征场景元素密度分布信息,进而可通过对掩膜图像中像素点的像素值进行更新,快速获得目标区域对应的场景元素密度分布信息,提升场景元素密度分布信息获取效率。
在一个实施例中,目标区域上覆盖有至少一张掩膜图像;目标区域支持渲染生成至少一种元素类型的场景元素;目标区域上覆盖掩膜图像的数量与目标区域支持的元素类型的数量相同;一张掩膜图像对应一种元素类型。
可以理解,场景元素的元素类型可包括多种,针对每一种元素类型,该种元素类型的场景元素,可通过与该种元素类型对应的掩膜图像进行控制,即通过与该种元素类型对应的掩膜图像,记录属于该种元素类型的场景元素的场景元素密度分布信息。各种元素类型对应的掩膜图像之间互相独立,互不影响。
举例说明,目标区域上覆盖有三张掩膜图像,分别是掩膜图像A,掩膜图像B和掩膜图像C。这三张掩膜图像可分别元素类型a、元素类型b和元素类型c。具体地,掩膜图像A用于记录属于元素类型a的场景元素的场景元素密度分布信息,掩膜图像B用于记录属于元素类型b的场景元素的场景元素密度分布信息,掩膜图像C用于记录属于元素类型c的场景元素的场景元素密度分布信息。
上述实施例中,通过在目标区域上覆盖至少一张掩膜图像,并通过每一张掩膜图像对应一种元素类型,可实现在目标区域分别独立渲染生成不同元素类型的场景元素,丰富目标区域上渲染生成的场景元素的元素类型。
在一个实施例中,像素值修改操作通过控制虚拟工具在掩膜图像上移动来实现;响应于针对目标区域上覆盖的掩膜图像的像素值修改操作,对掩膜图像中像素点的像素值进行更新,得到目标区域对应的场景元素密度分布信息,包括:响应于控制虚拟工具在掩膜图像上移动的操作,确定虚拟工具在掩膜图像上的移动区域;移动区域是虚拟工具在掩膜图像上移动时所经过的区域;修改移动区域内的像素点的像素值,以基于修改后的掩膜图像中各像素点的像素值,得到目标区域对应的场景元素密度分布信息。
其中,虚拟工具,是虚拟场景中的绘制工具。即虚拟场景中可以提供该虚拟工具,以供操作使用。可以理解,控制虚拟工具在掩膜图像上移动,是上述实现针对掩膜图像中像素点的选择操作的其中一种实现方式。通过控制虚拟工具在掩膜图像上进行移动,可以修改移动区域内的像素点的像素值。即,虚拟工具是通过移动修改掩膜图像上各点像素值以渲染生成场景元素的工具。
具体地,用户可控制虚拟工具在掩膜图像上移动,终端可响应于控制虚拟工具在掩膜图像上移动的操作,确定虚拟工具在掩膜图像上移动时所经过的移动区域。终端可修改移动区域内的像素点的像素值,得到修改后的掩膜图像中各像素点的像素值。进而,终端可基于修改后的掩膜图像中各像素点的像素值,确定目标区域对应的场景元素密度分布信息。
在一个实施例中,虚拟工具范围的大小可改变,可以理解,虚拟工具范围越大,虚拟工具在掩膜图像上移动时所经过的移动区域就越大,即,终端可修改移动区域内的像素点的数量就越多,可一次性渲染生成场景元素的范围就越大。虚拟工具范围越小,虚拟工具在掩膜图像上移动时所经过的移动区域就越小,即,终端可修改移动区域内的像素点的数量就越少,可一次性渲染生成场景元素的范围就越小。这样,通过控制虚拟工具范围大小,可按需一次性渲染生成不同范围大小的场景元素,相较于传统的按颗或按单元格渲染生成场景元素的方式,本申请渲染生成场景元素的操作更为便捷,可以一次性渲染生成大范围的场景元素。
在一个实施例中,如图5所示,终端可通过展示界面501展示各种元素类型的场景元素,比如,展示界面501中展示的元素类型1至元素类型12。在选定元素类型之后,终端可通过虚拟工具描述界面503展示所选定的元素类型和针对虚拟工具的相关描述,即“通过虚拟工具可批量生成场景元素,同时可控制虚拟工具范围大小”。比如,在选定元素类型10之后,终端可通过虚拟工具描述界面503展示所选定的元素类型10和针对虚拟工具的相关描述。
在一个实施例中,如图6所示,响应于控制虚拟工具在掩膜图像601上移动的操作,终端可确定虚拟工具在掩膜图像上的移动区域602。
在一个实施例中,虚拟工具可以是虚拟笔刷。如图7所示,以虚拟工具为虚拟笔刷为例,终端可通过笔刷范围调节界面701展示用于调节虚拟笔刷范围大小的笔刷范围调节控件702,终端可通过笔刷范围调节控件702调节虚拟笔刷范围的大小。
在一个实施例中,如图8所示,响应于控制虚拟笔刷在掩膜图像801上移动的操作,终端可确定虚拟笔刷在掩膜图像上的移动区域802和移动区域803,其中,移动区域802对应的虚拟笔刷范围大于移动区域803对应的虚拟笔刷范围,移动区域802的面积大于移动区域803的面积。
上述实施例中,通过控制虚拟工具在掩膜图像上移动的操作来修改移动区域内的像素点的像素值,从而实现控制场景元素的渲染生成,提供了一种全新的场景元素渲染生成方式,通过控制虚拟工具来渲染生成场景元素,操作更为便捷。
在一个实施例中,获取第一元素密度值;第一元素密度值,是目标区域支持渲染的目标元素类型所对应的最大元素密度值;目标元素类型是待生成场景元素所属的元素类型;根据第一元素密度值和目标区域的大小,确定待生成的场景元素的目标数量;在目标区域中确定符合目标数量的候选位置。
具体地,第一元素密度值是通过预先设置得到的、且目标区域支持渲染的目标元素类型所对应的最大元素密度值。终端可获取预先设置的、且目标区域支持渲染的目标元素类型所对应的第一元素密度值。终端可根据第一元素密度值和目标区域的大小,确定待生成的场景元素的目标数量。进而,终端可在目标区域中确定符合目标数量的候选位置。
在一个实施例中,终端可根据第一元素密度值和目标区域的面积,确定待生成的场景元素的目标数量。
在一个实施例中,终端可将第一元素密度值和目标区域的面积的乘积,直接作为待生成的场景元素的目标数量。
上述实施例中,根据目标区域支持渲染的目标元素类型所对应的第一元素密度值和目标区域的大小,确定待生成的场景元素的目标数量,并在目标区域中确定符合目标数量的候选位置,这样,根据场景元素的元素类型不同,在目标区域中设定不同数量的候选位置,提升了在目标区域中渲染生成场景元素的视觉效果。
在一个实施例中,目标区域中候选位置的目标数量,也可是预先设置的。终端可在目标区域中确定符合预设设置的目标数量的候选位置。
在一个实施例中,目标区域上覆盖有掩膜图像;掩膜图像中像素点的像素值用于表征场景元素密度分布信息;基于场景元素密度分布信息,确定目标区域中各候选位置对应的元素密度值,包括:针对目标区域中的每一个候选位置,基于候选位置在掩膜图像中进行上采样,并将上采样得到的像素值作为候选位置对应的元素密度值。
其中,上采样是一种提升图像分辨率的图像处理方式,通过对图像进行上采样可以提升图像的分辨率。可以理解,上采样可在图像原有像素点的像素值基础上,确定图像中已丢失像素点的像素值。
具体地,针对目标区域中的每一个候选位置,终端可基于该候选位置,在掩膜图像中进行上采样。进而,终端可将上采样得到的像素值作为候选位置对应的元素密度值。可以理解,上采样得到的像素值可以是掩膜图像中像素点的像素值,也可以是基于掩膜图像中像素点的像素值计算得到的新的像素值。
在一个实施例中,可通过单线性插值采样的方式实现上采样,也可通过双线性插值采样的方式实现上采样,本实施例对上采样的方式不做具体限定。
上述实施例中,通过基于候选位置在掩膜图像中进行上采样,将上采样得到的像素值作为候选位置对应的元素密度值,可提升候选位置对应的元素密度值的准确率。
在一个实施例中,针对目标区域中的每一个候选位置,基于候选位置在掩膜图像中进行上采样,并将上采样得到的像素值作为候选位置对应的元素密度值,包括:在掩膜图像中确定与候选位置具有映射关系的映射位置;在掩膜图像的各个像素点中确定与映射位置相邻的多个目标像素点;根据多个目标像素点分别对应的像素值,确定映射位置对应的像素值,以得到候选位置对应的元素密度值。
其中,映射位置,是掩膜图像中的、且与目标区域的候选位置具有映射关系的位置。目标像素点, 是掩膜图像的各个像素点中的、且与映射位置相邻的像素点。
具体地,针对每一个候选位置,终端可在掩膜图像中确定与该候选位置具有映射关系的映射位置,并在掩膜图像的各个像素点中,确定与映射位置相邻的多个目标像素点。终端可获取多个目标像素点分别对应的像素值。进而,终端可根据多个目标像素点分别对应的像素值,确定映射位置对应的像素值,并基于映射位置对应的像素值,确定候选位置对应的元素密度值。
在一个实施例中,终端可将映射位置对应的像素值,直接作为候选位置对应的元素密度值。
举例说明,终端可在掩膜图像的各个像素点中,确定与映射位置相邻的四个目标像素点。终端可获取这四个目标像素点分别对应的像素值。进而,终端可根据这四个目标像素点分别对应的像素值,确定映射位置对应的像素值。
上述实施例中,通过根据与映射位置相邻的目标像素点所对应的像素值,可以确定映射位置对应的像素值,根据映射位置对应的像素值可确定候选位置对应的元素密度值,这样可进一步提升候选位置对应的元素密度值的准确率。
在一个实施例中,基于元素密度值从各候选位置中确定元素生成位置,包括:基于各候选位置对应的元素密度值,分别确定各候选位置对应的元素生成概率;元素生成概率是指在各候选位置渲染生成场景元素的概率;根据各候选位置对应的元素生成概率,从各候选位置中确定元素生成位置。
具体地,终端可基于各候选位置对应的元素密度值,分别确定在各候选位置渲染生成场景元素的元素生成概率。进而,终端可根据各候选位置对应的元素生成概率,从目标区域的各候选位置中确定元素生成位置。
在一个实施例中,终端可将各候选位置对应的元素密度值,分别作为在各候选位置渲染生成场景元素的元素生成概率。
上述实施例中,通过各候选位置对应的元素密度值,可分别确定各候选位置对应的元素生成概率,根据各候选位置对应的元素生成概率,可从各候选位置中确定元素生成位置,提升了所确定的元素生成位置的准确性。
在一个实施例中,基于各候选位置对应的元素密度值,分别确定各候选位置对应的元素生成概率,包括:针对每一个候选位置,获取候选位置对应的第二元素密度值;第二元素密度值,是候选位置支持渲染生成场景元素的最大元素密度值;基于候选位置对应的元素密度值与第二元素密度值的比值,确定候选位置对应的元素生成概率。
具体地,针对每一个候选位置,终端可获取候选位置支持渲染生成场景元素的第二元素密度值,并确定候选位置对应的元素密度值与第二元素密度值的比值。进而,终端可基于候选位置对应的元素密度值与第二元素密度值的比值,确定候选位置对应的元素生成概率。
在一个实施例中,终端可将候选位置对应的元素密度值与第二元素密度值的比值,直接作为候选位置对应的元素生成概率。
在一个实施例中,若掩膜图像中各个像素点的像素值的取值范围为0至15之间。如图9所示,掩膜图像中任意两个相邻的像素点901和902,其中,像素点901的像素值为0,像素点902的像素值为15。候选位置A位于像素点901的像素中心,可知候选位置A的像素值为像素点901的像素值,即候选位置A的像素值0,候选位置C位于像素点902的像素中心,可知候选位置C的像素值为像素点902的像素值,即候选位置C的像素值15,候选位置B位于像素点901与像素点902的中间,可知候选位置B的像素值为像素点901的像素值0与像素点902的像素值15的平均值,即候选位置B的像素值7.5。可以理解,候选位置A对应的元素密度值为0,候选位置B对应的元素密度值为7.5,候选位置C对应的元素密度值为1。终端可将各候选位置对应的元素密度值与候选位置支持渲染生成场景元素的最大元素密度值15的比值,直接作为候选位置对应的元素生成概率,可知,候选位置A对应的元素生成概率为0,候选位置B对应的元素生成概率为0.5,候选位置C对应的元素生成概率为1。
上述实施例中,通过候选位置对应的元素密度值与候选位置支持渲染生成场景元素的第二元素密度值的比值,可确定候选位置对应的元素生成概率,提升了元素生成概率的准确性。
在一个实施例中,方法还包括:生成与目标区域对应的随机数;根据各候选位置对应的元素生成 概率,从各候选位置中确定元素生成位置,包括:根据各候选位置对应的元素生成概率和随机数的大小关系,从各候选位置中确定元素生成位置。
其中,随机数,是随机生成的、且与目标区域对应的数值。
具体地,终端可将各候选位置对应的元素生成概率分别和随机数进行大小比较,得到各候选位置对应的元素生成概率和随机数的大小关系。进而,终端可根据各候选位置对应的元素生成概率和随机数的大小关系,从各候选位置中确定元素生成位置。
在一个实施例中,终端可将元素生成概率大于或等于随机数的候选位置,确定为元素生成位置。
上述实施例中,通过各候选位置对应的元素生成概率和随机数的大小关系,可以从各候选位置中确定元素生成位置。通过增加随机数,可避免元素在生成概率相同的多个候选位置上同时生成或同时不生成场景元素,从而避免了重复感。
在一个实施例中,目标区域对应的场景元素密度分布信息包括多个元素密度值;方法还包括:获取执行场景元素添加操作之前目标区域对应的初始场景元素密度分布信息;初始场景元素密度分布信息包括多个初始元素密度值;将场景元素密度分布信息中的各个元素密度值分别与各个初始元素密度值进行比对,以从场景元素密度分布信息的各个元素密度值中,筛选出已更新的元素密度值;将已更新的元素密度值同步至服务器。
其中,初始元素密度值,是用于表征执行场景元素添加操作之前的、且待生成的场景元素的密度值。
具体地,终端可获取执行场景元素添加操作之前目标区域对应的初始场景元素密度分布信息。终端可将场景元素密度分布信息中的各个元素密度值,分别与初始场景元素密度分布信息中的各个初始元素密度值进行比对,并根据比对结果,从场景元素密度分布信息的各个元素密度值中,筛选出已更新的元素密度值。进而,终端可将已更新的元素密度值同步至服务器,以便服务器存储已更新的元素密度值。可以理解,服务器中已存储有执行场景元素添加操作之前目标区域对应的初始场景元素密度分布信息,在接收到终端同步的已更新的元素密度值后,服务器可根据已更新的元素密度值对应的位置坐标,查找初始场景元素密度分布信息中相应的初始元素密度值,并将查找到的相应的初始元素密度值修改为已更新的元素密度值。
上述实施例中,通过将场景元素密度分布信息中的各个元素密度值分别与各个初始元素密度值进行比对,可以从场景元素密度分布信息的各个元素密度值中,筛选出已更新的元素密度值,通过增量更新的方式,仅将已更新的元素密度值同步至服务器,相较于传统的将全量的元素密度值发送至服务器的数据更新方式,本申请增量更新的方式减少了每次同步的数据量,提升了数据更新效率。
在一个实施例中,如图10所示,本申请可提供8种元素类型的场景元素,即元素类型1至元素类型8。如图11,若选定了图10中的元素类型3,终端可在目标区域中渲染生成属于元素类型3的场景元素。从图11可知,场景元素在目标区域中的分布可呈现中间茂盛两边稀疏的情况,避免了生成的场景元素呈现单调重复分布的情况,从而能够使场景元素更好地融入虚拟场景。
在一个实施例中,虚拟场景为虚拟游戏场景;目标区域包括虚拟游戏场景中的虚拟地块;场景元素包括虚拟游戏场景中的虚拟植物;场景元素密度分布信息,用于指示至少一个待生成的虚拟植物在虚拟地块中的分布情况。
具体地,终端可展示虚拟游戏场景中待进行场景元素渲染处理的虚拟地块,并可响应于针对虚拟地块的场景元素添加操作,获取虚拟地块对应的场景元素密度分布信息,其中,场景元素密度分布信息,用于指示至少一个待生成的虚拟植物在虚拟地块中的分布情况。终端可基于场景元素密度分布信息,确定虚拟地块中各候选位置对应的元素密度值。进而,终端可基于元素密度值从各候选位置中确定元素生成位置,并在元素生成位置中渲染生成相应的虚拟植物。可以理解,上述的元素类型可以是虚拟植物的植物类型。
上述实施例中,通过本申请的将场景元素处理方法应用于虚拟游戏场景,可使得拟游戏场景中的虚拟植物能够更好地融入虚拟游戏场景。
如图12所示,在一个实施例中,提供了一种场景元素处理方法,该方法可应用于图1中的终端 102,由终端102自身单独执行,也可以通过终端102和服务器104之间的交互来实现。该方法具体包括以下步骤:
步骤1202,展示虚拟场景中待进行场景元素渲染处理的目标区域;目标区域上覆盖有掩膜图像;掩膜图像中像素点的像素值用于表征目标区域对应的场景元素密度分布信息;场景元素密度分布信息,用于指示至少一个待生成的场景元素在目标区域中的分布情况。
步骤1204,响应于控制虚拟工具在掩膜图像上移动的操作,确定虚拟工具在掩膜图像上的移动区域;移动区域是虚拟工具在掩膜图像上移动时所经过的区域。
步骤1206,修改移动区域内的像素点的像素值,以基于修改后的掩膜图像中各像素点的像素值,得到目标区域对应的场景元素密度分布信息。
在一个实施例中,目标区域上覆盖有至少一张掩膜图像;目标区域支持渲染生成至少一种元素类型的场景元素;目标区域上覆盖掩膜图像的数量与目标区域支持的元素类型的数量相同;一张掩膜图像对应一种元素类型。
步骤1208,获取第一元素密度值;第一元素密度值,是目标区域支持渲染的目标元素类型所对应的最大元素密度值;目标元素类型是待生成场景元素所属的元素类型。
步骤1210,根据第一元素密度值和目标区域的大小,确定待生成的场景元素的目标数量。
步骤1212,在目标区域中确定符合目标数量的候选位置,在掩膜图像中确定与候选位置具有映射关系的映射位置。
步骤1214,在掩膜图像的各个像素点中确定与映射位置相邻的多个目标像素点。
步骤1216,根据多个目标像素点分别对应的像素值,确定映射位置对应的像素值,以得到候选位置对应的元素密度值。
步骤1218,针对每一个候选位置,获取候选位置对应的第二元素密度值;第二元素密度值,是候选位置支持渲染生成场景元素的最大元素密度值。
步骤1220,基于候选位置对应的元素密度值与第二元素密度值的比值,确定候选位置对应的元素生成概率;元素生成概率是指在各候选位置渲染生成场景元素的概率。
步骤1222,生成与目标区域对应的随机数。
步骤1224,根据各候选位置对应的元素生成概率和随机数的大小关系,从各候选位置中确定元素生成位置,并在元素生成位置中渲染生成相应的场景元素。
步骤1226,获取执行场景元素添加操作之前目标区域对应的初始场景元素密度分布信息;初始场景元素密度分布信息包括多个初始元素密度值。
步骤1228,将场景元素密度分布信息中的各个元素密度值分别与各个初始元素密度值进行比对,以从场景元素密度分布信息的各个元素密度值中,筛选出已更新的元素密度值。
步骤1230,将已更新的元素密度值同步至服务器。
本申请还提供一种应用场景,该应用场景应用上述的场景元素处理方法。具体地,该场景元素处理方法可应用于游戏中虚拟植物生成的场景。终端可展示虚拟游戏场景中待进行场景元素渲染处理的虚拟地块;虚拟地块上覆盖有掩膜图像;掩膜图像中像素点的像素值用于表征虚拟地块对应的场景元素密度分布信息;场景元素密度分布信息,用于指示至少一个待生成的虚拟植物在虚拟地块中的分布情况。响应于控制虚拟工具在掩膜图像上移动的操作,确定虚拟工具在掩膜图像上的移动区域;移动区域是虚拟工具在掩膜图像上移动时所经过的区域。修改移动区域内的像素点的像素值,以基于修改后的掩膜图像中各像素点的像素值,得到虚拟地块对应的场景元素密度分布信息。
需要说明的是,虚拟地块上可覆盖有至少一张掩膜图像;虚拟地块中可支持渲染生成至少一种元素类型的虚拟植物;虚拟地块上覆盖掩膜图像的数量与虚拟地块中支持的元素类型的数量相同;一张掩膜图像对应一种元素类型。
终端可获取第一元素密度值;第一元素密度值,是虚拟地块支持渲染的目标元素类型所对应的最大元素密度值;目标元素类型是待生成虚拟植物所属的元素类型。根据第一元素密度值和虚拟地块的大小,确定待生成的虚拟植物的目标数量。在虚拟地块中确定符合目标数量的候选位置,在掩膜图像 中确定与候选位置具有映射关系的映射位置。在掩膜图像的各个像素点中确定与映射位置相邻的多个目标像素点。根据多个目标像素点分别对应的像素值,确定映射位置对应的像素值,以得到候选位置对应的元素密度值。
针对每一个候选位置,终端可获取候选位置对应的第二元素密度值;第二元素密度值,是候选位置支持渲染生成虚拟植物的最大元素密度值。基于候选位置对应的元素密度值与第二元素密度值的比值,确定候选位置对应的元素生成概率;元素生成概率是指在各候选位置渲染生成虚拟植物的概率。生成与虚拟地块对应的随机数;根据各候选位置对应的元素生成概率和随机数的大小关系,从各候选位置中确定元素生成位置,并在元素生成位置中渲染生成相应的虚拟植物。
终端可获取执行场景元素添加操作之前虚拟地块对应的初始场景元素密度分布信息;初始场景元素密度分布信息包括多个初始元素密度值;将场景元素密度分布信息中的各个元素密度值分别与各个初始元素密度值进行比对,以从场景元素密度分布信息的各个元素密度值中,筛选出已更新的元素密度值;将已更新的元素密度值同步至服务器。
举例说明,本申请的场景元素处理方法可应用于多人在线角色扮演游戏中,可以理解,多人在线角色扮演游戏中可提供一个家园系统入口,通过家园系统入口玩家可进入家园系统,并在家园系统中通过虚拟工具编辑覆盖在虚拟地块上的掩膜图像,改变待生成的虚拟植物的元素密度值,从而可实现在虚拟地块上种植虚拟植物。通过修改虚拟地块中待生成的虚拟植物的元素密度值,可避免生成的虚拟植物呈现单调重复分布的情况,使虚拟植物可以更好地融入家园系统中的游戏场景,从而能够提高渲染效果。
本申请还另外提供一种应用场景,该应用场景应用上述的场景元素处理方法。具体地,该场景元素处理方法可应用于游戏中虚拟人物生成的场景或游戏中虚拟动物生成的场景。可以理解,该场景元素处理方法还可应用于可视化设计和VR(Virtual Reality,虚拟现实)等中虚拟元素(即场景元素)生成的场景。可以理解,虚拟元素可包括虚拟植物、虚拟人物、虚拟动物和虚拟道具等中的一种。比如,在VR场景中,通过本申请的场景元素处理方法,可在VR场景的目标区域中渲染生成相应的虚拟元素,使虚拟元素更好地融入VR场景。可以理解,本申请的场景元素处理方法还可应用于工业化设计当中的应用场景,比如,可将本申请的场景元素处理方法应用于工业设计软件中批量生成虚拟建筑等场景元素。通过本申请的场景元素处理方法,可在工业设计场景的目标区域中渲染生成相应的虚拟建筑,使虚拟建筑更好地融入工业设计场景,从而有效地辅助工业设计,满足工业设计较为复杂的要求。
应该理解的是,虽然上述各实施例的流程图中的各个步骤按照顺序依次显示,但是这些步骤并不是必然按照顺序依次执行。除非本文中有明确的说明,这些步骤的执行并没有严格的顺序限制,这些步骤可以以其它的顺序执行。而且,上述各实施例中的至少一部分步骤可以包括多个子步骤或者多个阶段,这些子步骤或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,这些子步骤或者阶段的执行顺序也不必然是依次进行,而是可以与其它步骤或者其它步骤的子步骤或者阶段的至少一部分轮流或者交替地执行。
在一个实施例中,如图13所示,提供了一种场景元素处理装置1300,该装置可以采用软件模块或硬件模块,或者是二者的结合成为计算机设备的一部分,该装置具体包括:
展示模块1302,用于展示虚拟场景中待进行场景元素渲染处理的目标区域。
获取模块1304,用于响应于针对目标区域的场景元素添加操作,获取目标区域对应的场景元素密度分布信息;场景元素密度分布信息,用于指示至少一个待生成的场景元素在目标区域中的分布情况。
确定模块1306,用于基于场景元素密度分布信息,确定目标区域中各候选位置对应的元素密度值。
生成模块1308,用于基于元素密度值从各候选位置中确定元素生成位置,并在元素生成位置中渲染生成相应的场景元素。
在一个实施例中,目标区域上覆盖有掩膜图像;掩膜图像中像素点的像素值用于表征场景元素密度分布信息;场景元素添加操作包括像素值修改操作;像素值修改操作,用于修改掩膜图像中像素点的像素值;获取模块1304还用于响应于针对目标区域上覆盖的掩膜图像的像素值修改操作,对掩膜图像中像素点的像素值进行更新,得到目标区域对应的场景元素密度分布信息。
在一个实施例中,目标区域上覆盖有至少一张掩膜图像;目标区域支持渲染生成至少一种元素类型的场景元素;目标区域上覆盖掩膜图像的数量与目标区域支持的元素类型的数量相同;一张掩膜图像对应一种元素类型。
在一个实施例中,像素值修改操作通过控制虚拟工具在掩膜图像上移动来实现;获取模块1304还用于响应于控制虚拟工具在掩膜图像上移动的操作,确定虚拟工具在掩膜图像上的移动区域;移动区域是虚拟工具在掩膜图像上移动时所经过的区域;修改移动区域内的像素点的像素值,以基于修改后的掩膜图像中各像素点的像素值,得到目标区域对应的场景元素密度分布信息。
在一个实施例中,确定模块1306还用于获取第一元素密度值;第一元素密度值,是目标区域支持渲染的目标元素类型所对应的最大元素密度值;目标元素类型是待生成场景元素所属的元素类型;根据第一元素密度值和目标区域的大小,确定待生成的场景元素的目标数量;在目标区域中确定符合目标数量的候选位置。
在一个实施例中,目标区域上覆盖有掩膜图像;掩膜图像中像素点的像素值用于表征场景元素密度分布信息;确定模块1306还用于针对目标区域中的每一个候选位置,基于候选位置在掩膜图像中进行上采样,并将上采样得到的像素值作为候选位置对应的元素密度值。
在一个实施例中,确定模块1306还用于在掩膜图像中确定与候选位置具有映射关系的映射位置;在掩膜图像的各个像素点中确定与映射位置相邻的多个目标像素点;根据多个目标像素点分别对应的像素值,确定映射位置对应的像素值,以得到候选位置对应的元素密度值。
在一个实施例中,生成模块1308还用于基于各候选位置对应的元素密度值,分别确定各候选位置对应的元素生成概率;元素生成概率是指在各候选位置渲染生成场景元素的概率;根据各候选位置对应的元素生成概率,从各候选位置中确定元素生成位置。
在一个实施例中,生成模块1308还用于针对每一个候选位置,获取候选位置对应的第二元素密度值;第二元素密度值,是候选位置支持渲染生成场景元素的最大元素密度值;基于候选位置对应的元素密度值与第二元素密度值的比值,确定候选位置对应的元素生成概率。
在一个实施例中,生成模块1308还用于生成与目标区域对应的随机数;根据各候选位置对应的元素生成概率和随机数的大小关系,从各候选位置中确定元素生成位置。
在一个实施例中,目标区域对应的场景元素密度分布信息包括多个元素密度值;装置还包括:同步模块,用于获取执行场景元素添加操作之前目标区域对应的初始场景元素密度分布信息;初始场景元素密度分布信息包括多个初始元素密度值;将场景元素密度分布信息中的各个元素密度值分别与各个初始元素密度值进行比对,以从场景元素密度分布信息的各个元素密度值中,筛选出已更新的元素密度值;将已更新的元素密度值同步至服务器。
在一个实施例中,虚拟场景为虚拟游戏场景;目标区域包括虚拟游戏场景中的虚拟地块;场景元素包括虚拟游戏场景中的虚拟植物;场景元素密度分布信息,用于指示至少一个待生成的虚拟植物在虚拟地块中的分布情况。
上述场景元素处理装置,通过展示虚拟场景中待进行场景元素渲染处理的目标区域,响应于针对目标区域的场景元素添加操作,可获取目标区域对应的场景元素密度分布信息,其中,场景元素密度分布信息,可用于指示至少一个待生成的场景元素在目标区域中的分布情况。基于场景元素密度分布信息,可确定目标区域中各候选位置对应的元素密度值,进而基于元素密度值可从各候选位置中确定元素生成位置,并在元素生成位置中渲染生成相应的场景元素。相较于传统的通过单个或以单元格生成场景元素的方式,本申请的场景元素处理方法通过执行针对目标区域的场景元素添加操作,可个性化地对目标区域对应的场景元素密度分布信息进行更新,以提升目标区域中待生成的场景元素的元素密度值,进而可基于更新后的场景元素密度分布信息,从目标区域的各候选位置中确定元素生成位置,并在元素生成位置中渲染生成相应的场景元素。这样,通过修改目标区域中待生成的场景元素的元素密度值,可避免生成的场景元素呈现单调重复分布的情况,使场景元素可以更好地融入虚拟场景,从而能够提高渲染效果。
上述场景元素处理装置中的各个模块可全部或部分通过软件、硬件及其组合来实现。上述各模块可以硬件形式内嵌于或独立于计算机设备中的处理器中,也可以以软件形式存储于计算机设备中的存储器中,以便于处理器调用执行以上各个模块对应的操作。
在一个实施例中,提供了一种计算机设备,该计算机设备可以是终端,其内部结构图可以如图14所示。该计算机设备包括处理器、存储器、输入/输出接口、通信接口、显示单元和输入装置。其中,处理器、存储器和输入/输出接口通过系统总线连接,通信接口、显示单元和输入装置通过输入/输出接口连接到系统总线。其中,该计算机设备的处理器用于提供计算和控制能力。该计算机设备的存储器包括非易失性存储介质、内存储器。该非易失性存储介质存储有操作系统和计算机可读指令。该内存储器为非易失性存储介质中的操作系统和计算机可读指令的运行提供环境。该计算机设备的输入/输出接口用于处理器与外部设备之间交换信息。该计算机设备的通信接口用于与外部的终端进行有线或无线方式的通信,无线方式可通过WIFI、移动蜂窝网络、NFC(近场通信)或其他技术实现。该计算机可读指令被处理器执行时以实现一种场景元素处理方法。该计算机设备的显示单元用于形成视觉可见的画面,可以是显示屏、投影装置或虚拟现实成像装置,显示屏可以是液晶显示屏或电子墨水显示屏,该计算机设备的输入装置可以是显示屏上覆盖的触摸层,也可以是计算机设备外壳上设置的按键、轨迹球或触控板,还可以是外接的键盘、触控板或鼠标等。
本领域技术人员可以理解,图14中示出的结构,仅仅是与本申请方案相关的部分结构的框图,并不构成对本申请方案所应用于其上的计算机设备的限定,具体的计算机设备可以包括比图中所示更多或更少的部件,或者组合某些部件,或者具有不同的部件布置。
在一个实施例中,还提供了一种计算机设备,包括一个或多个存储器和处理器,存储器中存储有计算机可读指令,该处理器执行计算机可读指令时实现上述各方法实施例中的步骤。
在一个实施例中,提供了一个或多个计算机可读存储介质,存储有计算机可读指令,该计算机可读指令被一个或多个处理器执行时实现上述各方法实施例中的步骤。
在一个实施例中,提供了一种计算机程序产品,包括计算机可读指令,计算机可读指令被一个或多个处理器执行时实现上述各方法实施例中的步骤。
需要说明的是,本申请所涉及的用户信息(包括但不限于用户设备信息、用户个人信息等)和数据(包括但不限于用于分析的数据、存储的数据、展示的数据等),均为经用户授权或者经过各方充分授权的信息和数据,且相关数据的收集、使用和处理需要遵守相关国家和地区的相关法律法规和标准。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机可读指令来指令相关的硬件来完成,所述的计算机可读指令可存储于一非易失性计算机可读取存储介质中,该计算机可读指令在执行时,可包括如上述各方法的实施例的流程。其中,本申请所提供的各实施例中所使用的对存储器、存储、数据库或其它介质的任何引用,均可包括非易失性和易失性存储器中的至少一种。非易失性存储器可包括只读存储器(Read-Only Memory,ROM)、磁带、软盘、闪存或光存储器等。易失性存储器可包括随机存取存储器(Random Access Memory,RAM)或外部高速缓冲存储器。作为说明而非局限,RAM可以是多种形式,比如静态随机存取存储器(Static Random Access Memory,SRAM)或动态随机存取存储器(Dynamic Random Access Memory,DRAM)等。
以上实施例的各技术特征可以进行任意的组合,为使描述简洁,未对上述实施例中的各个技术特征所有可能的组合都进行描述,然而,只要这些技术特征的组合不存在矛盾,都应当认为是本说明书记载的范围。
以上所述实施例仅表达了本申请的几种实施方式,其描述较为具体和详细,但并不能因此而理解为对发明专利范围的限制。应当指出的是,对于本领域的普通技术人员来说,在不脱离本申请构思的前提下,还可以做出若干变形和改进,这些都属于本申请的保护范围。因此,本申请专利的保护范围应以所附权利要求为准。

Claims (19)

  1. 一种场景元素处理方法,其特征在于,由终端执行,所述方法包括:
    展示虚拟场景中待进行场景元素渲染处理的目标区域;
    响应于针对所述目标区域的场景元素添加操作,获取所述目标区域对应的场景元素密度分布信息;所述场景元素密度分布信息,用于指示至少一个待生成的场景元素在所述目标区域中的分布情况;
    基于所述场景元素密度分布信息,确定所述目标区域中各候选位置对应的元素密度值;
    基于所述元素密度值从各候选位置中确定元素生成位置,并在所述元素生成位置中渲染生成相应的场景元素。
  2. 根据权利要求1所述的方法,其特征在于,所述目标区域上覆盖有掩膜图像;所述掩膜图像中像素点的像素值用于表征所述场景元素密度分布信息;所述场景元素添加操作包括像素值修改操作;所述像素值修改操作,用于修改所述掩膜图像中像素点的像素值;
    所述响应于针对所述目标区域的场景元素添加操作,获取所述目标区域对应的场景元素密度分布信息,包括:
    响应于针对所述目标区域上覆盖的掩膜图像的像素值修改操作,对所述掩膜图像中像素点的像素值进行更新,得到所述目标区域对应的场景元素密度分布信息。
  3. 根据权利要求2所述的方法,其特征在于,所述目标区域上覆盖有至少一张掩膜图像;所述目标区域支持渲染生成至少一种元素类型的场景元素;所述目标区域上覆盖掩膜图像的数量与所述目标区域支持的元素类型的数量相同;一张掩膜图像对应一种元素类型。
  4. 根据权利要求2所述的方法,其特征在于,所述像素值修改操作通过控制虚拟工具在所述掩膜图像上移动来实现;所述响应于针对所述目标区域上覆盖的掩膜图像的像素值修改操作,对所述掩膜图像中像素点的像素值进行更新,得到所述目标区域对应的场景元素密度分布信息,包括:
    响应于控制所述虚拟工具在所述掩膜图像上移动的操作,确定所述虚拟工具在所述掩膜图像上的移动区域;所述移动区域是所述虚拟工具在所述掩膜图像上移动时所经过的区域;
    修改所述移动区域内的像素点的像素值,以基于修改后的掩膜图像中各像素点的像素值,得到所述目标区域对应的场景元素密度分布信息。
  5. 根据权利要求4所述的方法,其特征在于,所述虚拟工具范围的大小支持改变;所述虚拟工具范围越大,所述虚拟工具在所述掩膜图像上移动时所经过的所述移动区域越大;同步修改所述移动区域内的像素点的数量越多;所述虚拟工具范围越小,所述虚拟工具在所述掩膜图像上移动时所经过的所述移动区域越小;同步修改所述移动区域内的像素点的数量越少。
  6. 根据权利要求2所述的方法,其特征在于,所述像素值修改操作通过针对所述掩膜图像中像素点的选择操作来实现;所述响应于针对所述目标区域上覆盖的掩膜图像的像素值修改操作,对所述掩膜图像中像素点的像素值进行更新,得到所述目标区域对应的场景元素密度分布信息,包括:
    终响应于针对所述掩膜图像中像素点的选择操作,确定在所述掩膜图像上所选择的像素区域;所述像素区域,是基于针对所述掩膜图像中像素点的选择操作所选择的像素点所在的区域;
    修改所述像素区域内的像素点的像素值,并基于修改后的所述掩膜图像中各像素点的像素值,得到所述目标区域对应的场景元素密度分布信息。
  7. 根据权利要求1所述的方法,其特征在于,所述方法包括:
    获取第一元素密度值;所述第一元素密度值,是所述目标区域支持渲染的目标元素类型所对应的最大元素密度值;所述目标元素类型是待生成场景元素所属的元素类型;
    根据所述第一元素密度值和所述目标区域的大小,确定所述待生成的场景元素的目标数量;
    在所述目标区域中确定符合所述目标数量的候选位置。
  8. 根据权利要求1所述的方法,其特征在于,所述目标区域上覆盖有掩膜图像;所述掩膜图像中像素点的像素值用于表征所述场景元素密度分布信息;
    所述基于所述场景元素密度分布信息,确定所述目标区域中各候选位置对应的元素密度值,包括:
    针对所述目标区域中的每一个候选位置,基于所述候选位置在所述掩膜图像中进行上采样,并将 上采样得到的像素值作为所述候选位置对应的元素密度值。
  9. 根据权利要求8所述的方法,其特征在于,所述针对所述目标区域中的每一个候选位置,基于所述候选位置在所述掩膜图像中进行上采样,并将上采样得到的像素值作为所述候选位置对应的元素密度值,包括:
    在所述掩膜图像中确定与所述候选位置具有映射关系的映射位置;
    在所述掩膜图像的各个像素点中确定与所述映射位置相邻的多个目标像素点;
    根据所述多个目标像素点分别对应的像素值,确定所述映射位置对应的像素值,以得到所述候选位置对应的元素密度值。
  10. 根据权利要求1所述的方法,其特征在于,所述基于所述元素密度值从各候选位置中确定元素生成位置,包括:
    基于各候选位置对应的元素密度值,分别确定各候选位置对应的元素生成概率;所述元素生成概率是指在所述各候选位置渲染生成场景元素的概率;
    根据所述各候选位置对应的元素生成概率,从所述各候选位置中确定元素生成位置。
  11. 根据权利要求10所述的方法,其特征在于,所述基于各候选位置对应的元素密度值,分别确定各候选位置对应的元素生成概率,包括:
    针对每一个候选位置,获取所述候选位置对应的第二元素密度值;所述第二元素密度值,是所述候选位置支持渲染生成所述场景元素的最大元素密度值;
    基于所述候选位置对应的元素密度值与所述第二元素密度值的比值,确定所述候选位置对应的元素生成概率。
  12. 根据权利要求10所述的方法,其特征在于,所述方法还包括:
    生成与所述目标区域对应的随机数;
    所述根据所述各候选位置对应的元素生成概率,从所述各候选位置中确定元素生成位置,包括:
    根据所述各候选位置对应的元素生成概率和所述随机数的大小关系,从所述各候选位置中确定元素生成位置。
  13. 根据权利要求1所述的方法,其特征在于,所述基于所述元素密度值从各候选位置中确定元素生成位置,包括:
    将各候选位置分别对应的所述元素密度值与预设的元素密度阈值进行比对,并将所述元素密度值大于所述元素密度阈值的候选位置,确定为元素生成位置。
  14. 根据权利要求1所述的方法,其特征在于,所述目标区域对应的场景元素密度分布信息包括多个元素密度值;所述方法还包括:
    获取执行所述场景元素添加操作之前所述目标区域对应的初始场景元素密度分布信息;所述初始场景元素密度分布信息包括多个初始元素密度值;
    将所述场景元素密度分布信息中的各个元素密度值分别与各个所述初始元素密度值进行比对,以从所述场景元素密度分布信息的各个元素密度值中,筛选出已更新的元素密度值;
    将所述已更新的元素密度值同步至服务器。
  15. 根据权利要求1至14中任一项所述的方法,其特征在于,所述虚拟场景为虚拟游戏场景;所述目标区域包括所述虚拟游戏场景中的虚拟地块;所述场景元素包括所述虚拟游戏场景中的虚拟植物;所述场景元素密度分布信息,用于指示至少一个待生成的虚拟植物在所述虚拟地块中的分布情况。
  16. 一种场景元素处理装置,其特征在于,所述装置包括:
    展示模块,用于展示虚拟场景中待进行场景元素渲染处理的目标区域;
    获取模块,用于响应于针对所述目标区域的场景元素添加操作,获取所述目标区域对应的场景元素密度分布信息;所述场景元素密度分布信息,用于指示至少一个待生成的场景元素在所述目标区域中的分布情况;
    确定模块,用于基于所述场景元素密度分布信息,确定所述目标区域中各候选位置对应的元素密度值;
    生成模块,用于基于所述元素密度值从各候选位置中确定元素生成位置,并在所述元素生成位置中渲染生成相应的场景元素。
  17. 一种计算机设备,包括存储器和一个或多个处理器,所述存储器存储有计算机可读指令,其特征在于,所述处理器执行所述计算机可读指令时实现权利要求1至15中任一项所述的方法的步骤。
  18. 一个或多个计算机可读存储介质,存储有计算机可读指令,其特征在于,所述计算机可读指令被一个或多个处理器执行时实现权利要求1至15中任一项所述的方法的步骤。
  19. 一种计算机程序产品,包括计算机可读指令,其特征在于,所述计算机可读指令被一个或多个处理器执行时实现权利要求1至15中任一项所述的方法的步骤。
PCT/CN2022/137148 2022-03-18 2022-12-07 场景元素处理方法、装置、设备和介质 WO2023173828A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/238,413 US20230401806A1 (en) 2022-03-18 2023-08-25 Scene element processing method and apparatus, device, and medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210267179.1 2022-03-18
CN202210267179.1A CN114344894B (zh) 2022-03-18 2022-03-18 场景元素处理方法、装置、设备和介质

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/238,413 Continuation US20230401806A1 (en) 2022-03-18 2023-08-25 Scene element processing method and apparatus, device, and medium

Publications (1)

Publication Number Publication Date
WO2023173828A1 true WO2023173828A1 (zh) 2023-09-21

Family

ID=81094776

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/137148 WO2023173828A1 (zh) 2022-03-18 2022-12-07 场景元素处理方法、装置、设备和介质

Country Status (3)

Country Link
US (1) US20230401806A1 (zh)
CN (1) CN114344894B (zh)
WO (1) WO2023173828A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114344894B (zh) * 2022-03-18 2022-06-03 腾讯科技(深圳)有限公司 场景元素处理方法、装置、设备和介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080238919A1 (en) * 2007-03-27 2008-10-02 Utah State University System and method for rendering of texel imagery
CN108543311A (zh) * 2018-04-20 2018-09-18 苏州蜗牛数字科技股份有限公司 一种自动生成场景植被系统的方法
CN112245926A (zh) * 2020-11-16 2021-01-22 腾讯科技(深圳)有限公司 虚拟地形的渲染方法、装置、设备及介质
CN112587921A (zh) * 2020-12-16 2021-04-02 成都完美时空网络技术有限公司 模型处理方法和装置、电子设备和存储介质
CN114344894A (zh) * 2022-03-18 2022-04-15 腾讯科技(深圳)有限公司 场景元素处理方法、装置、设备和介质

Also Published As

Publication number Publication date
CN114344894B (zh) 2022-06-03
CN114344894A (zh) 2022-04-15
US20230401806A1 (en) 2023-12-14

Similar Documents

Publication Publication Date Title
US20230053462A1 (en) Image rendering method and apparatus, device, medium, and computer program product
CN109102560B (zh) 三维模型渲染方法及装置
CN108959392B (zh) 在3d模型上展示富文本的方法、装置及设备
WO2023231537A1 (zh) 地形图像渲染方法、装置、设备及计算机可读存储介质及计算机程序产品
CN110689604A (zh) 个性化脸部模型显示方法、装置、设备及存储介质
WO2023173828A1 (zh) 场景元素处理方法、装置、设备和介质
US20230290043A1 (en) Picture generation method and apparatus, device, and medium
CN110688506A (zh) 一种模板生成方法、装置、电子设备及存储介质
CN113470092B (zh) 地形的渲染方法和装置、电子设备和存储介质
CN109598672A (zh) 一种地图道路渲染方法及装置
CN115487495A (zh) 数据渲染方法以及装置
CN113192173B (zh) 三维场景的图像处理方法、装置及电子设备
CN115830210A (zh) 虚拟对象的渲染方法、装置、电子设备及存储介质
WO2022100059A1 (zh) 一种数据存储的管理方法、对象渲染的方法及设备
JP7301453B2 (ja) 画像処理方法、画像処理装置、コンピュータプログラム、及び電子機器
WO2023221683A1 (zh) 图像渲染方法、装置、设备和介质
CN113064539A (zh) 特效控制方法、装置、电子设备及存储介质
CN111681317A (zh) 数据处理方法、装置、电子设备及存储介质
US20220351479A1 (en) Style transfer program and style transfer method
WO2023216771A1 (zh) 虚拟天气交互方法、装置、电子设备、计算机可读存储介质及计算机程序产品
WO2024045787A1 (zh) 拾取对象的检测方法、装置、计算机设备、可读存储介质和计算机程序产品
CN116863065A (zh) 多模型同屏渲染方法、装置、电子设备及存储介质
CN117745892A (zh) 粒子生成表现控制方法、装置、存储介质及电子装置
CN114565715A (zh) 用于三维变电站模型复用的渲染方法、装置和计算机设备
CN116016892A (zh) 智能眼镜的图像显示方法、装置、电子设备和存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22931839

Country of ref document: EP

Kind code of ref document: A1