WO2023184381A1 - Image processing method - Google Patents

Image processing method

Info

Publication number
WO2023184381A1
WO2023184381A1, PCT/CN2022/084512, CN2022084512W
Authority
WO
WIPO (PCT)
Prior art keywords
model
image
point cloud data
Prior art date
Application number
PCT/CN2022/084512
Other languages
English (en)
French (fr)
Inventor
侯大海
苏琦
殷浩越
Original Assignee
京东方科技集团股份有限公司 (BOE Technology Group Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by BOE Technology Group Co., Ltd. (京东方科技集团股份有限公司)
Priority to CN202280000665.5A (CN117178296A)
Priority to PCT/CN2022/084512
Publication of WO2023184381A1


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering

Definitions

  • the present disclosure relates to the field of image processing, and more specifically to an image processing method, an image processing plug-in, an electronic device, a computer-readable storage medium and a computer program product.
  • the present disclosure proposes an image processing method, an image processing device, an electronic device, a computer-readable storage medium, and a computer program product to solve the problem of high production costs and modification costs when processing image elements that need to appear repeatedly and randomly.
  • Technical problems include unnatural image presentation and low efficiency of the image generation process.
  • Embodiments of the present disclosure provide an image processing method, including: obtaining a basic model corresponding to an image element, the basic model indicating model data corresponding to the image element; obtaining a bearing model corresponding to the image element, the bearing model indicating the distribution area of the image element on the image; randomly generating point cloud data based at least in part on the bearing model, where the data corresponding to each point in the point cloud data indicates spatial distribution information and/or posture information of the basic model on the bearing model; and generating an image in which a plurality of the image elements are randomly arranged based at least in part on the point cloud data, the basic model and the bearing model.
  • randomly generating point cloud data based at least in part on the bearing model includes: in response to the distribution area of the image elements on the image indicated by the bearing model being a closed area, sampling (optionally using a random function) multiple points in the closed area as the point cloud data; and in response to the distribution area of the image elements on the image indicated by the bearing model being a non-closed curve, sampling (optionally using a random function) multiple points on the non-closed curve as the point cloud data.
  • the point cloud data is a data set of multiple points in the image coordinate system corresponding to the image.
  • Each point in the point cloud data has a corresponding relationship with a basic model randomly arranged on the bearing model.
  • the randomly generating point cloud data includes: assigning attributes to each point in the point cloud based on the basic model, so that the data corresponding to each point in the point cloud data indicates the spatial distribution information and/or posture information of the basic model on the bearing model.
  • the spatial distribution information of each point in the point cloud data is the value of each coordinate axis, in the image coordinate system, of the basic model corresponding to the point, and the posture information of each point in the point cloud data is the normal attribute and/or orientation attribute of the basic model corresponding to the point.
  • the randomly generating point cloud data includes: based on the type of the basic model, determining whether to use an ordered random process or an unordered random process to randomly generate the point cloud data, wherein the ordered random process indicates that the proportion of the regular part determined by ordered rules is higher than that of the disordered part determined by disordered rules, and the unordered random process indicates that the proportion of the regular part is lower than that of the disordered part.
  • the randomly generating point cloud data includes: in response to determining to use an ordered random process to randomly generate the point cloud data, uniformly sampling the distribution area corresponding to the bearing model to generate the point cloud data; in response to determining to use an unordered random process to randomly generate the point cloud data, randomly sampling the distribution area corresponding to the bearing model to generate the point cloud data.
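  • purely as an illustration (not the claimed implementation), the following Python sketch contrasts the two sampling strategies just described: uniform sampling of a rectangular distribution area with a small jitter for the ordered random process, and fully random sampling for the unordered random process; the rectangle, spacing, jitter amount and function names are assumptions.

```python
import random

def ordered_sampling(width, height, spacing, jitter=0.1):
    """Uniformly sample a grid over a rectangular area, with a small random jitter.

    The regular (grid) part dominates, so the result reads as 'ordered random'.
    """
    points = []
    y = 0.0
    while y <= height:
        x = 0.0
        while x <= width:
            points.append((x + random.uniform(-jitter, jitter) * spacing,
                           y + random.uniform(-jitter, jitter) * spacing))
            x += spacing
        y += spacing
    return points

def unordered_sampling(width, height, count):
    """Sample fully random positions inside the same rectangle ('unordered random')."""
    return [(random.uniform(0.0, width), random.uniform(0.0, height))
            for _ in range(count)]

print(len(ordered_sampling(10.0, 10.0, spacing=1.0)))   # grid-like, lightly jittered
print(len(unordered_sampling(10.0, 10.0, count=100)))   # fully random scatter
```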
  • the point cloud data has one or more of the following point cloud data attributes: random value, random degree, random size range, random color range, random number of image element types, random rotation range, and spatial parameters within which the basic model is not drawn on the bearing model.
  • the randomly generating point cloud data includes: selecting one or more points from the point cloud data as a first blocking point, wherein the first blocking point indicates that the basic model is not arranged at points within a certain distance from the first blocking point; and generating a blocking sphere based on the first blocking point, and deleting points of the point cloud data located inside the blocking sphere.
  • the randomly generating point cloud data includes: selecting one or more points from the point cloud data as a second blocking point, wherein the second blocking point indicates that a first basic model is not arranged at points within a certain distance from the second blocking point, and a second basic model is arranged at the second blocking point, where the first basic model corresponds to a first image element and the second basic model corresponds to a second image element.
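  • as a hedged sketch of the blocking-point idea above (the function name, the fixed radius and the example coordinates are assumptions, not the patented implementation), points of the point cloud that fall inside a blocking sphere can simply be discarded before any basic model is instanced there:

```python
import math

def delete_points_inside_blocking_spheres(points, blocking_points, radius):
    """Remove every point that lies inside a sphere of the given radius centred on
    any blocking point, so that no basic model is arranged there."""
    kept = []
    for p in points:
        blocked = any(math.dist(p, b) < radius for b in blocking_points)
        if not blocked:
            kept.append(p)
    return kept

# Points and blocking points are (x, y, z) tuples in the image coordinate system.
cloud = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (5.0, 5.0, 0.0)]
blockers = [(0.5, 0.0, 0.0)]
print(delete_points_inside_blocking_spheres(cloud, blockers, radius=1.0))
# -> [(5.0, 5.0, 0.0)]
```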
  • the model data corresponding to the basic model is data corresponding to the three-dimensional virtual object associated with the image element
  • the bearer model indicates the drawable area corresponding to the three-dimensional virtual object in the virtual scene.
  • a variety of basic models are arranged on the drawable area on the load-bearing model.
  • the method may optionally include: using the generated image as an asset to generate a basic model for another image.
  • the method may optionally include: generating a base model for another image based on the point cloud data, the base model and the bearing model.
  • obtaining the basic model corresponding to the image element further includes: selecting at least one basic model corresponding to the image element from the basic models corresponding to multiple image elements.
  • An embodiment of the present disclosure provides an image processing plug-in, the image processing plug-in being configured to perform the following operations: obtain a basic model corresponding to an image element, the basic model indicating model data corresponding to the image element; obtain a bearing model corresponding to the image element, the bearing model indicating the distribution area of the image element on the image; randomly generate point cloud data based at least in part on the bearing model, where the data corresponding to each point in the point cloud data indicates the spatial distribution information and/or posture information of the basic model on the bearing model; and generate, based at least in part on the point cloud data, the basic model and the bearing model, an image in which a plurality of the image elements are randomly arranged.
  • Embodiments of the present disclosure provide an image processing device, including: a basic model acquisition module configured to acquire a basic model corresponding to an image element, where the basic model indicates model data corresponding to the image element; a bearing model acquisition module configured to obtain a bearing model corresponding to the image element, the bearing model indicating the distribution area of the image element on the image; a point cloud generation module configured to randomly generate point cloud data based at least in part on the bearing model, where the data corresponding to each point in the point cloud data indicates spatial distribution information and/or posture information of the basic model on the bearing model; and an image generation module configured to generate, based at least in part on the point cloud data, the basic model and the bearing model, an image in which a plurality of the image elements are randomly arranged.
  • Some embodiments of the present disclosure provide an electronic device, including: a processor; and a memory.
  • the memory stores computer instructions, and when the computer instructions are executed by the processor, the above method is implemented.
  • Some embodiments of the present disclosure provide a computer-readable storage medium on which computer instructions are stored, and when the computer instructions are executed by a processor, the above-mentioned method is implemented.
  • Some embodiments of the present disclosure provide a computer program product, which includes computer-readable instructions. When executed by a processor, the computer-readable instructions cause the processor to perform the above method.
  • various embodiments of the present disclosure generate basic models and bearing models corresponding to image elements, and use point clouds to realize the random arrangement of basic models on the bearing model. This solves technical problems such as high production and modification costs, unnatural image presentation effects, and low efficiency of the image generation process when processing image elements that need to appear repeatedly and randomly.
  • FIG. 1 is an example schematic diagram illustrating an application scenario according to an embodiment of the present disclosure.
  • FIG. 2 is a flowchart illustrating an image processing method according to an embodiment of the present disclosure.
  • Figure 3 is a schematic diagram illustrating an image generated by a method according to an embodiment of the present disclosure.
  • FIG. 4 is a structural diagram showing an image processing device according to an embodiment of the present disclosure.
  • FIG. 5 is a schematic flowchart illustrating an image processing method performed using an image processing apparatus according to an embodiment of the present disclosure.
  • FIG. 6 is an example user interface illustrating the basic model acquisition module acquiring a basic model according to an embodiment of the present disclosure.
  • FIG. 7A shows a flow chart in which the bearer model acquisition module acquires the bearer model according to an embodiment of the present disclosure.
  • FIG. 7B shows four example user interfaces for the bearer model acquisition module to acquire the bearer model according to an embodiment of the present disclosure.
  • Figure 7C illustrates an example user interface for parameterizing a bearer model in accordance with an embodiment of the present disclosure.
  • FIG. 8A shows a flow chart of point cloud data generated by a point cloud data processing module according to an embodiment of the present disclosure.
  • FIG. 8B shows a user interface in which the point cloud data processing module assigns attributes of each point in the point cloud based on the basic model according to an embodiment of the present disclosure.
  • FIG. 8C illustrates a user interface of a point cloud data processing module selecting a blocking model according to an embodiment of the present disclosure.
  • FIG. 8D illustrates a user interface for setting attributes of point cloud data by the point cloud data processing module according to an embodiment of the present disclosure.
  • FIG. 9 shows a user interface for generating an image preview by an image generation module according to an embodiment of the present disclosure.
  • Figure 10A is yet another schematic diagram illustrating generation of point cloud data according to an embodiment of the present disclosure.
  • FIG. 10B is a user interface illustrating further settings for the generation of point cloud data according to an embodiment of the present disclosure.
  • FIGS. 11 to 15 are schematic diagrams of some operations that may be involved in an ordered random process according to embodiments of the present disclosure.
  • FIGS. 16 to 18 are schematic diagrams of some operations that may be involved in a disordered random process according to embodiments of the present disclosure.
  • Figure 19 shows a schematic diagram of an electronic device according to an embodiment of the present disclosure.
  • Figure 20 shows a schematic diagram of the architecture of an exemplary computing device in accordance with an embodiment of the present disclosure.
  • Figure 21 shows a schematic diagram of a storage medium according to an embodiment of the present disclosure.
  • for example, without departing from the scope of the present application, first data may be referred to as second data, and similarly, second data may be referred to as first data. Both the first data and the second data may be data, and in some cases, may be separate and different data.
  • the term "at least one" in this application means one or more, and the term “plurality” in this application means two or more.
  • for example, multiple audio frames means two or more audio frames.
  • the size of the sequence number of each process does not imply the order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application. It should also be understood that determining B according to (based on) A does not mean determining B only according to (based on) A; B can also be determined according to (based on) A and/or other information.
  • the term “if” may be interpreted to mean “when” or “upon” or “in response to determining” or “in response to detecting.”
  • the phrase “if it is determined" or “if [the stated condition or event] is detected” may be interpreted to mean “when it is determined" or “in response to the determination... ” or “on detection of [stated condition or event]” or “in response to detection of [stated condition or event].”
  • the terms model and 3D model refer to a shape or structure composed of electronic data through digital representation.
  • a 3D model is an example of a model.
  • the 3D model can be composed of vertices, which are connected to form triangles and quadrilaterals; countless such polygons ultimately form a complex three-dimensional model.
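  • for readers unfamiliar with polygon meshes, a minimal sketch (illustrative only; not the data layout of any particular DCC tool) of a model stored as a vertex list plus triangles indexing into it:

```python
from __future__ import annotations

from dataclasses import dataclass, field

@dataclass
class Mesh:
    """A 3D model as a vertex list plus triangles indexing into it."""
    vertices: list[tuple[float, float, float]] = field(default_factory=list)
    triangles: list[tuple[int, int, int]] = field(default_factory=list)

# A single quad split into two triangles.
quad = Mesh(
    vertices=[(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)],
    triangles=[(0, 1, 2), (0, 2, 3)],
)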
  • FIG. 1 shows a schematic diagram of an application scenario 100 according to an embodiment of the present disclosure, in which a server 110 and a plurality of terminals 120 are schematically shown.
  • the terminal 120 and the server 110 can be connected directly or indirectly through wired or wireless communication methods, and this disclosure is not limited here.
  • the embodiments of the present disclosure adopt Internet technology, especially Internet of Things (IoT) technology.
  • the Internet of Things can be used as an extension of the Internet. It includes the Internet and all resources on the Internet, and is compatible with all applications of the Internet. With the application of IoT technology in various fields, various new smart IoT application fields have emerged, such as smart homes, smart transportation, and smart health.
  • the images described in this disclosure may be 2D images or 3D images.
  • various virtual scenes and virtual objects related to the above-mentioned smart homes, smart transportation, and smart health can be presented on the images of the present disclosure.
  • the image of the present disclosure can also be an operable/interactive image.
  • the user can perform various operations (for example, click, drag, touch, etc.) on the image through an interactive interface to present the image from different perspectives.
  • Virtual scenes and virtual objects are not limited to this.
  • methods according to some embodiments of the present disclosure may be fully or partially piggybacked on the server 110 to process images.
  • for example, the server 110 can be used to analyze the basic model and the bearing model corresponding to the image elements and to generate point cloud data; then, based on the basic model, the bearing model and the point cloud data, the server 110 can use a rendering engine on the server to generate an image including a plurality of randomly arranged image elements.
  • the server 110 here can be an independent server, a server cluster or a distributed system composed of multiple physical servers, or a cloud server that provides cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, and cloud communications.
  • the server 110 is also referred to as the cloud.
  • the method according to the embodiment of the present disclosure may also be fully or partially mounted on the terminal 120 to process images.
  • the terminal 120 will be used to collect data related to the basic model and the bearer model.
  • the terminal 120 will be used to present an image including a plurality of image elements randomly arranged.
  • the terminal 120 may be an interactive device that includes a user interface through which images can be displayed, and the user can interact with the interactive device. This disclosure does not limit this.
  • each terminal of the plurality of terminals 120 may be a fixed terminal such as a desktop computer, or a mobile terminal with network functions such as a smartphone, a tablet computer, a portable computer, a handheld device, a personal digital assistant, a smart wearable device (e.g., smart glasses), a smart head-mounted device, a camera, or a vehicle-mounted terminal, or any combination thereof; this is not specifically limited in the embodiments of the present disclosure.
  • image element creation tools and image rendering tools may be installed on both the terminal 120 and the server 110 according to the embodiment of the present disclosure for generating images.
  • for example, the terminal 120 and the server 110 can be equipped with various Digital Content Creation (DCC) software as image element creation tools.
  • DCC software include: 3DsMax, C4d, Blender, Maya, Houdini, and more.
  • the files that DCC software can export include geometry files (.CGF), skinned character files (.CHR), character animation files (.CAF), material files (.mtl), etc.
  • a brush component can also be included in the DCC software; the brush component can apply some preset patterns in the form of brushes.
  • the terminal 120 and the server 110 can be equipped with Unreal Engine (UE Engine) as an image rendering tool.
  • Unreal Engine is a game engine that supports console games, PC games, mobile game development, etc.
  • the currently popular versions of Unreal Engine are UE4 and UE5.
  • the UE4 editor has a visual editing window, which can directly modify the characters and props in the game, and supports real-time rendering.
  • the rendering of images in each example of the present disclosure can be implemented based on the UE4 engine/UE5 engine.
  • Image assets include model parameters, audio, video, primitives, material parameters, etc., used to achieve the creation of virtual environments.
  • various embodiments of the present disclosure generate basic models and bearing models corresponding to image elements, and use point clouds to realize the random arrangement of basic models on the bearing model. This solves technical problems such as high production and modification costs, unnatural image presentation effects, and low efficiency of the image generation process when processing image elements that need to appear repeatedly and randomly.
  • FIG. 2 is a flowchart illustrating an image processing method 20 according to an embodiment of the present disclosure.
  • Figure 3 is a schematic diagram illustrating an image generated by method 20 according to an embodiment of the present disclosure.
  • the example method 20 may include one or all of operations S201-S204, and may also include more operations.
  • This disclosure is not limited in this regard.
  • operations S201 to S204 are performed by the terminal 120/server 110 in real time, or performed offline by the terminal 120/server 110.
  • This disclosure does not limit the execution subject of each operation of the example method 20, as long as it can achieve the purpose of the disclosure.
  • Each step in the example method may be performed in whole or in part by the image element creation tool and the image rendering tool, or may be implemented by a plug-in suitable for the image element creation tool and the image rendering tool.
  • a plug-in is a computer program product that interacts with the image element creation tool and the image rendering tool to add specific functions required by the application, so that third-party developers can expand and streamline the application, or to separate the source code from the application to remove incompatibilities caused by software usage rights.
  • two or more of the above image element creation tools, image rendering tools and plug-ins can be integrated into one large application, to achieve the goal of drawing images within a single piece of software.
  • the image element creation tool, image rendering tool and plug-in can also be three independently installed applications, with image-related model data, asset data, etc. transmitted through the mutually open interfaces of the three. This disclosure is not limited in this regard.
  • a basic model corresponding to the image element is obtained, where the basic model indicates model data corresponding to the image element.
  • the image elements are various virtual objects that can be displayed on the image, and these virtual objects may appear repeatedly on the image multiple times to form part or all of the visualized virtual scene.
  • the appearance location of these virtual objects can be random.
  • see Figure 3, which schematically shows an image with randomly arranged trees and buildings.
  • an example of the image element 301 is a tree.
  • the image element can also be any 2D or 3D static background image element.
  • the image elements can also be various cars, static crowds, billboards, street lights, road edges, fences, glass doors, walls, etc.
  • the image elements can also be trees, various vegetation, animal groups, stones, etc.
  • the basic model can be a 3D virtual model, which can be produced by the above-mentioned DCC software.
  • tools for making 3D models in the present disclosure include 3DsMax, C4d, Blender, Maya, Houdini, etc.
  • for example, the geometry file (.CGF), skinned character file (.CHR), character animation file (.CAF), and material file (.mtl) can be obtained through the software as examples of some of the assets of the basic model.
  • the model data can be structurally optimized before creating a three-dimensional virtual model to save computing resources and improve processing efficiency.
  • the above-mentioned DCC software can be software for 3D model analysis, 3D software for visual art creation, 3D printing software, etc.; the basic model can also be generated through computer graphics libraries (that is, graphics libraries used in self-developed programs), for example, OpenGL (Open Graphics Library), DirectX (Direct eXtension), etc.
  • the model data corresponding to the above-mentioned basic model may be various data corresponding to the above-mentioned various virtual objects. That is, the model data corresponding to the above-mentioned basic model may include any data related to drawing a single image element in the image.
  • for example, the model data may be edge information, depth information, vertex information, height information, width information, length information, etc. of the virtual object.
  • the model data can also be any image assets, such as color information, material information, shape information, etc.
  • the model data can also be any audio assets or video assets (such as gif animation).
  • the above-mentioned basic model can be manually set by artists, or any data related to the virtual object can be pulled from the Internet/database. This disclosure does not limit this.
  • a bearing model corresponding to the image element is obtained, where the bearing model indicates the distribution area of the image element on the image.
  • the bearer model indicates the drawable area corresponding to each of the above virtual objects in the virtual scene.
  • the bearer model may be a 3D virtual object created in DCC software, which may be manually designed by an artist.
  • the bearer model may be a primitive model or a combination of primitive models defined in various ways in the image rendering software.
  • the bearing model may also be a 3D virtual model corresponding to a certain area drawn using a plug-in or other software. This disclosure is not limited in this regard.
  • the load-bearing model can indicate the distribution area of the image elements on the image as any connected shape in the image.
  • a white area schematically shows an area corresponding to a bearing model 302.
  • graphic elements in the form of trees are randomly distributed accordingly.
  • although the area corresponding to the bearing model is shown in the form of a white area in FIG. 3, those skilled in the art should understand that the present disclosure is not limited thereto.
  • the bearing model can be obtained in various ways, and the present disclosure is not limited thereto. Some example ways of obtaining the bearing model will be further described below with reference to the accompanying drawings, and will not be repeated here.
  • point cloud data is randomly generated based at least in part on the bearing model, and data corresponding to each point in the point cloud data indicates spatial distribution information of the base model on the bearing model and /or posture information.
  • the point cloud data may be a data set of multiple points in the image coordinate system corresponding to the image, and each point in the point cloud data has a corresponding relationship with a basic model randomly arranged on the bearing model.
  • for example, the spatial distribution information of each point in the point cloud data is the value of each coordinate axis, in the image coordinate system, of the basic model corresponding to the point, and the posture information of each point in the point cloud data is the normal attribute and/or orientation attribute of the basic model corresponding to the point.
  • the data set corresponding to each point in the point cloud data may include multiple types or formats of information to indicate the spatial distribution information and/or posture information of the base model on the bearing model.
  • the spatial distribution information can be the values of each axis of the three-dimensional coordinates (X, Y, Z)/two-dimensional coordinates (X, Y) under the image coordinate system and any possible attribute values.
  • each point in the point cloud data will correspond to an underlying model.
  • the number of points in the point cloud data indicates the number of base models randomly arranged on the bearing model.
  • for example, the values of each axis of the three-dimensional coordinates (X, Y, Z) of each point in the point cloud data indicate the spatial distribution information of the basic model on the bearing model, and the corresponding normal attribute or tangent attribute (hereinafter also referred to as the orientation attribute) of each point indicates the posture information of the basic model on the bearing model.
  • each point may also have deformation information indicating corresponding deformation information when the base model is placed on the load-bearing model.
  • Each point may also have color change information or material change information indicating the corresponding color change or material change when the base model is placed on the carrier model.
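  • one possible, purely illustrative way to hold the per-point data set described above (position in the image coordinate system, normal/orientation attributes as posture information, plus optional deformation, color-change and material-change information) is sketched below; the field names are assumptions:

```python
from __future__ import annotations

from dataclasses import dataclass, field

@dataclass
class PointRecord:
    """Data carried by one point of the point cloud (field names are illustrative).

    position    : (X, Y, Z) of the corresponding basic model in the image coordinate system
    normal      : normal attribute used as posture information
    orientation : optional tangent / orientation attribute
    deformation, color_shift, material_override : optional per-instance variations
    """
    position: tuple[float, float, float]
    normal: tuple[float, float, float] = (0.0, 1.0, 0.0)
    orientation: tuple[float, float, float] | None = None
    deformation: dict = field(default_factory=dict)
    color_shift: tuple[float, float, float] | None = None
    material_override: str | None = None

# A minimal point cloud: one point placed at (12, 0, 3.5) with the default posture.
point_cloud = [PointRecord(position=(12.0, 0.0, 3.5))]
```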
  • an image in which a plurality of the image elements are randomly arranged is generated based at least in part on the point cloud data, the base model and the bearing model.
  • the open interface of the DCC software can be used to transfer the corresponding model data in the basic model to the data set corresponding to each point of the point cloud data, thereby using the basic model itself as one of the attributes of each point of the point cloud data.
  • This disclosure is not limited in this regard.
  • various 2D rendering engines can be used to generate images with randomly arranged image elements.
  • the two-dimensional rendering engine is, for example, a PS (photoshop) tool or an illustrator tool.
  • various three-dimensional rendering engines can be used to generate images with randomly arranged image elements.
  • the 3D rendering engine can generate displayable 2D images from digital 3D scenes.
  • the generated two-dimensional images can be realistic or non-realistic.
  • the three-dimensional rendering process relies on a 3D rendering engine.
  • example rendering engines in this disclosure may use "ray tracing" technology, which generates an image by tracing rays from a camera through a virtual plane of pixels and simulating the effect of their encounters with objects.
  • Example rendering engines in this disclosure may also use "rasterization" technology, which collects information about the various primitives in the scene to determine the value of each pixel in a two-dimensional image. This disclosure does not limit the types of 3D rendering engines or the technologies used.
  • various types of image element creation tools can be used to perform the above operation S201 to obtain at least one basic model.
  • the at least one base model corresponds to a composite asset of different image elements or virtual objects.
  • the composite asset includes at least one forest asset, at least one stone asset, and nested assets including both stone assets and forest assets, etc.
  • then, the image rendering tool (Unreal Engine) can perform the above operations S202 to S204, so as to obtain the bearing model and point cloud data in real time in the image rendering tool, and modify the bearing model and point cloud data in real time (for example, the user can manually modify the random arrangement corresponding to the point cloud data).
  • the image rendering tool can also modify the attributes of each point in the point cloud data manually or automatically.
  • the basic model corresponds to a fence
  • alternatively, various types of image element creation tools can be used to create basic models corresponding to various image elements, and then an image rendering tool (Unreal Engine) can be used to perform the above operations S201 to S204.
  • when the image rendering tool performs the above operation S201, at least one basic model corresponding to the image element may be selected from the basic models corresponding to multiple image elements as the basic model used in operations S202 to S204.
  • example method 20 may also include a first additional operation.
  • the generated image can be used as an asset to generate a base model for another image.
  • the above-mentioned image including various randomly arranged image elements can also be used as an asset of a basic model, so that the basic model can be directly used in the process of generating the other image.
  • for example, a pond consisting of randomly arranged lotus leaves (with an edge of piled stones) can be used as a basic model when drawing a new image, without having to rearrange the lotus leaves and the stones at the pond edge.
  • example method 20 may also include a second additional operation.
  • a base model for another image may be generated based on the point cloud data, the base model and the carrier model. That is, the point cloud data, the basic model and the bearing model in the above-mentioned operations S201 to S203 can all be assets of another basic model, so that the above-mentioned models and data can be used directly when drawing a new image.
  • for example, the point cloud data corresponding to the lotus leaves, the randomly arranged lotus leaves, the pond, the stones, the point cloud data corresponding to the randomly arranged stones on the edge of the pond, etc., as a set, can be the assets of the other basic model (for example, the basic model corresponding to the graphic element "pond with randomly arranged lotus leaves (pond edge with piled stones)").
  • the basic models generated in the first additional operation and the second additional operation that can be used for another image can also be packaged into a plug-in tool, and the basic models and the adjustable bearing model parameters in the plug-in tool can be further categorized and then imported directly into Unreal Engine for use.
  • the present disclosure is not limited to this.
  • various embodiments of the present disclosure generate basic models and bearing models corresponding to image elements, and use point clouds to realize the random arrangement of basic models on the bearing model. This solves technical problems such as high production and modification costs, unnatural image presentation effects, and low efficiency of the image generation process when processing image elements that need to appear repeatedly and randomly.
  • FIG. 4 is a structural diagram showing the image processing device 40 according to the embodiment of the present disclosure. Those skilled in the art should understand that the device shown in Figure 4 is only an example, and the present disclosure is not limited thereto.
  • the image processing device 40 includes a basic model acquisition module 401 , a bearing model acquisition module 402 , a point cloud data processing module 404 and an image generation module 405 .
  • the image processing device 40 optionally further includes a model parameter pickup module 403.
  • the image processing device 40 may also include more or fewer modules, and the present disclosure is not limited thereto.
  • the image processing device 40 can be used as a plug-in as a whole.
  • the base model acquisition module 401 may be configured to perform operation S201.
  • the image elements are, for example, one or several different trees, one or several different flowers and plants, and the basic model is the assets and parameters corresponding to these image elements.
  • the image element is a fence.
  • the basic model can be the assets and parameters corresponding to one longitudinal railing and two horizontal railings in the foundation of the fence.
  • the image element can also be one or several stones, a street lamp, etc.
  • the base model acquisition module 401 may also be configured to update the details of the base model so that the base model is more suitable for the image to be generated.
  • the bearer model acquisition module 402 may be configured to perform operation S202.
  • the bearing model acquisition module 402 may be part of a plug-in suitable for image element creation tools and image rendering tools. For example, users can use this plug-in to manually draw a closed curve to generate a bearing model; the internal area of the closed curve indicates the distribution area of the image elements on the image.
  • the user can also click to select the ground where trees need to be arranged, the mountain where grass needs to grow, the road where street lights need to be placed, etc., thereby automatically generating the load-bearing model.
  • the corresponding ground, mountains and roads correspond to the load-bearing model.
  • the bearing model acquisition module 402 may also be configured to update the bearing model so that the bearing model is more suitable for the image to be generated.
  • both the above-mentioned updates to the basic model and the updates to the bearing model can be previewed in real time in the image rendering tool or plug-in, so that the user can observe and adjust the randomness in the image to be generated at any time during use.
  • the model parameter pickup module 403 may be configured to pass all or part of the parameters or assets of the base model from the base model acquisition module 401 to the bearer model acquisition module 402 .
  • the model parameter pickup module 403 may be configured to pass all or part of the parameters or assets of the bearing model from the bearing model acquisition module 402 to the basic model acquisition module 401.
  • the model parameter picking module 403 can be used as an optional component of the above plug-in.
  • the point cloud generation module 404 may be configured to perform operation S203.
  • the point cloud generation module 404 may also be part of a plug-in suitable for image element creation tools and image rendering tools.
  • various point cloud data attributes can be adjusted.
  • for example, the point cloud data attributes include: random value, random degree, random size range, random color range, random number of image element types, random rotation range, the position and size of the blocking model (which corresponds to the spatial parameters within which the basic model is not drawn on the bearing model), etc. This disclosure is not limited in this regard.
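  • a hedged sketch of how the listed point cloud data attributes might be grouped into a single adjustable configuration object (the attribute names and default values are illustrative assumptions, not values taken from the plug-in):

```python
from __future__ import annotations

from dataclasses import dataclass

@dataclass
class PointCloudAttributes:
    random_seed: int = 0                              # "random value"
    randomness: float = 0.5                           # "random degree", e.g. distribution variance
    size_range: tuple[float, float] = (0.8, 1.2)      # random size range
    color_range: tuple[int, int] = (30, 70)           # random color range (e.g. red component)
    element_type_count: int = 6                       # random number of image element types
    rotation_range_deg: tuple[float, float] = (0.0, 360.0)
    blocking_radius: float = 1.0                      # size of the volume where nothing is drawn

defaults = PointCloudAttributes()
```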
  • Image generation module 405 may be configured to perform operation S204.
  • the image generation module 405 may be a component of an image rendering tool, which is used to render an image to generate the image in which a plurality of the image elements are randomly arranged.
  • the image generation module 405 may also generate a grid 3D model asset, or multiple random repeating grid 3D model assets.
  • the grid 3D model asset can be used as one of the components of the above-mentioned virtual scene, and one slice thereof can be the 3D image.
  • the present disclosure is not limited to this.
  • the image processing device 40 can generate a more natural image, and can also adjust the point cloud data attributes in each of the above components to achieve subtle differences between the various image elements that appear repeatedly in the image.
  • FIG. 5 is a schematic flowchart illustrating using the image processing device 40 to perform the method 20 according to an embodiment of the present disclosure.
  • 6 to 9 are example user interface diagrams or partial flow charts illustrating a process of performing the method 20 using the image processing device 40 according to an embodiment of the present disclosure.
  • the basic model acquisition module 401 can be used to acquire the basic model.
  • Figure 6 shows an example user interface for the basic model acquisition module 401 to acquire the basic model. Among them, on the right side of the example user interface, the user can click to select the basic model that needs to be randomly arranged. For example, in the example of Figure 6, the user selected randomly arranged fences. Below the example user interface, some assets of the selected virtual model will optionally be presented, for example, 3D images or other parameters of different vertical and crossbar configurations of the fence.
  • the bearing model acquisition module 402 may be used to acquire a curve, and then generate a bearing model based on the curve.
  • FIG. 7A shows a flow chart in which the bearer model acquisition module 402 acquires a bearer model according to an embodiment of the present disclosure.
  • FIG. 7B shows four example user interfaces for the bearing model acquisition module 402 to acquire bearing models according to an embodiment of the present disclosure, which correspond to different forms of generating bearing models according to curves.
  • Figure 7C illustrates an example user interface for parameterizing a bearer model in accordance with an embodiment of the present disclosure.
  • the bearing model acquisition module 402 may determine whether a preset area has been set on the image for acquiring the bearing model. In response to the presence of a preset area on the image, the bearing model acquisition module 402 acquires the bearing model based on the preset area. For example, in the curve drawing or extraction interface III of Figure 7B, the background of the image includes a hexagonal area, and this area is a material in the basic material library. By clicking the center of the area, the bearing model acquisition module 402 can automatically obtain the edge of the area as a curve, and optionally smooth the curve, thereby obtaining a closed shape that can indicate the distribution area of image elements on the image.
  • in response to the absence of a preset area on the image, the bearing model acquisition module 402 generates the bearing model based on a drawn curve. For example, in the curve drawing or extraction interface I of FIG. 7B, the user only draws a non-closed curve (shown as a solid line therein).
  • the bearing model acquisition module 402 can set the width parameter W according to the parameters corresponding to the basic model, thereby obtaining a closed area along the solid line (shown as a closed area surrounded by dotted lines).
  • the bearing model acquisition module 402 can use the sweep function to generate a strip-shaped closed area along a non-closed curve.
  • the present disclosure is not limited to this.
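  • a simplified 2D sketch of the sweep idea described above, in which a drawn non-closed polyline is offset by a width W (derived from the basic model) on both sides and joined into a strip-shaped closed area; the perpendicular-offset method and the names are assumptions:

```python
import math

def strip_from_polyline(polyline, width):
    """Offset an open 2D polyline by width / 2 on both sides and join the two offsets
    into one closed polygon: a strip-shaped closed area along the drawn curve."""
    half = width / 2.0
    left, right = [], []
    for i, (x, y) in enumerate(polyline):
        # Segment direction at this vertex (forward difference, backward at the last vertex).
        if i < len(polyline) - 1:
            dx, dy = polyline[i + 1][0] - x, polyline[i + 1][1] - y
        else:
            dx, dy = x - polyline[i - 1][0], y - polyline[i - 1][1]
        length = math.hypot(dx, dy) or 1.0
        nx, ny = -dy / length, dx / length        # unit normal of the segment
        left.append((x + nx * half, y + ny * half))
        right.append((x - nx * half, y - ny * half))
    return left + right[::-1]                     # walk one side out, the other side back

curve = [(0.0, 0.0), (2.0, 0.5), (4.0, 0.0)]      # the drawn non-closed curve
print(strip_from_polyline(curve, width=1.0))      # closed strip around it
```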
  • for example, if the user draws a closed ellipse, the area enclosed by the ellipse is the distribution area of the image elements on the image indicated by the bearing model.
  • the distribution area of the image elements on the image indicated by the bearing model may not only be the above-mentioned closed area with a certain area, but may also be only one or more non-closed curves. There can be different graphic elements distributed randomly or orderly on this curve. For example, in the curve drawing or extraction interface IV of Figure 7B, the user only draws a non-closed curve. At this time, multiple points can be randomly sampled from the curve as subsequent point cloud data. Although FIG. 7B only shows four examples of obtaining the bearing model through curves, the present disclosure is not limited thereto.
  • the bearing model acquisition module 402 can also determine, based on the value input by the user (for example, by checking the corresponding option on the user interface shown in Figure 7C), whether the distribution area of the image element on the image is a closed region with a certain area or one or more non-closed curves.
  • in response to the distribution area of the image element on the image indicated by the bearing model being a closed area (for example, the radio button corresponding to the closed area is selected on the user interface shown in Figure 7C), multiple points are sampled in the closed area as the point cloud data; in response to the distribution area of the image elements on the image indicated by the bearing model being a non-closed curve (for example, the radio button corresponding to the non-closed curve is selected on the user interface shown in Figure 7C), multiple points are sampled on the non-closed curve as the point cloud data.
  • in other words, the user can choose whether to randomly sample multiple points directly on the curve as the subsequent point cloud data, or to randomly sample multiple points within the closed area as the subsequent point cloud data. This disclosure is not limited in this regard.
  • FIG. 8A shows a flowchart of point cloud data generated by the point cloud data processing module 404 according to an embodiment of the present disclosure.
  • FIG. 8B shows a user interface for the point cloud data processing module 404 to assign attributes of each point in the point cloud based on a basic model according to an embodiment of the present disclosure.
  • FIG. 8C illustrates a user interface of the point cloud data processing module 404 for selecting a blocking model according to an embodiment of the present disclosure.
  • FIG. 8D illustrates a user interface for the point cloud data processing module 404 to set attributes of point cloud data according to an embodiment of the present disclosure.
  • the point cloud data processing module 404 can at least generate point cloud data based on the above-mentioned bearer model. For example, in response to the distribution area of the image elements on the image indicated by the bearing model being a non-closed curve, the point cloud data processing module 404 may use various random functions to sample multiple points on the curve as point cloud data. In response to the distribution area of the image elements on the image indicated by the bearing model being a closed area, the point cloud data processing module 404 may use various random functions to sample multiple points in the closed area as point cloud data.
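  • as a minimal sketch of the two sampling cases (rejection sampling inside a closed polygon versus sampling along a non-closed polyline), with the point-in-polygon test and the random functions being illustrative choices rather than the claimed implementation:

```python
import random

def point_in_polygon(pt, polygon):
    """Even-odd (ray casting) test for a 2D point inside a closed polygon."""
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

def sample_closed_area(polygon, count):
    """Rejection-sample `count` random points inside the closed area."""
    xs, ys = [p[0] for p in polygon], [p[1] for p in polygon]
    points = []
    while len(points) < count:
        candidate = (random.uniform(min(xs), max(xs)), random.uniform(min(ys), max(ys)))
        if point_in_polygon(candidate, polygon):
            points.append(candidate)
    return points

def sample_curve(polyline, count):
    """Sample `count` random points on an open polyline (random segment, random parameter)."""
    points = []
    for _ in range(count):
        i = random.randrange(len(polyline) - 1)
        t = random.random()
        (x1, y1), (x2, y2) = polyline[i], polyline[i + 1]
        points.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    return points
```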
  • the point cloud data processing module 404 can further parse the bearing model to inherit some point cloud data attributes associated with the bearing model. These attributes include some attributes corresponding to the bearing model, such as whether the bearing model is flat ground or rolling hills, whether the edges of the bearing model are smooth, whether the bearing model includes straight edges, the area corresponding to the bearing model, the shape corresponding to the bearing model, etc.; the present disclosure is not limited thereto.
  • the point cloud data processing module 404 can also modify some point cloud data attributes inherited from the bearer model according to user input, or add some point cloud data attributes, or delete some point cloud data attributes. This disclosure is not limited in this regard.
  • the point cloud data processing module 404 can further modify the point cloud data, for example, adjust various point cloud data attributes according to the example user interface shown in FIG. 8D.
  • FIG. 8D only shows adjustments to the degree of randomness, the random color range (taking the value of the red color component as an example), and the number of types of random image elements, the present disclosure is not limited thereto.
  • the above adjustment is based at least in part on point cloud data attributes inherited from the bearer model or at least in part on updated point cloud data attributes.
  • for example, a recommended random value within the attribute range of the point cloud data can be automatically generated (for example, the 50% randomness degree shown in Figure 8D, which indicates the variance of the random distribution of the point cloud data); the random color range can also be manually set by the user (for example, the value of the red component shown in Figure 8D is manually set to be between 30 and 70); and the automatically recommended value can also be manually adjusted (for example, in Figure 8D, the automatically recommended number of image element types is 6, which is then manually adjusted to 5).
  • the present disclosure is not limited to this.
  • the point cloud data processing module 404 can also assign values to the attributes of each point in the point cloud based on the basic model, so that the data corresponding to each point in the point cloud data indicates the spatial distribution information and/or posture information of the basic model on the bearing model.
  • the point cloud data processing module 404 may add unreal attributes (unreal) to each point based on the base model.
  • Unreal Properties can expose some parameters or properties that can be used in Unreal Engine.
  • unreal properties can expose parameters and assets that can be obtained from the base model.
  • the point cloud data processing module 404 may convert the adjustable assets of the base model into multiple strings of type string. The point cloud data processing module 404 can pass these strings to the data set of each point in the point cloud data through unreal attributes.
  • the basic model is a fence, and its assets include the horizontal and vertical columns of the fence.
  • the adjustable parameters of the horizontal and vertical bars of the fence are the lengths of the horizontal and vertical bars of the fence.
  • the point cloud data processing module 404 passes the parameter "the length of the horizontal bar of the fence" to the parameter adjustment interface (the right interface of Figure 8B) in the form of a string.
  • the point cloud data processing module 404 can create an adjustable variable for each point, which indicates some of the point cloud data properties that the base model can fine-tune for different points.
  • adjustable attributes include: random value change threshold, size change threshold, random degree threshold, random color range, random range of number of types of image elements, random rotation range, etc.
  • the adjustment range of the indicated parameter "the length of the horizontal rails of the fence" is 1 to 10.
  • normal attributes can be added to each point in the point cloud. Therefore, the point cloud data processing module 404 can rotate the basic model based on the normal attribute of the point, for example, using the normal as an axis to rotate a random angle or an angle set according to certain rules.
  • the point cloud data processing module 404 may generate an adjustment random value based on the adjustable attribute (the adjustment random value satisfies the range set by the adjustable attribute). For example, the random value 5 in the size adjustment shown in Figure 8B.
  • the point cloud data processing module 404 will adjust each asset of the basic model using corresponding adjustment random values.
  • the point cloud data processing module 404 implements random processing of each point in the point cloud data.
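  • a hedged sketch combining the two per-point randomizations just described: drawing an adjustment random value inside an adjustable attribute's range (reusing the fence horizontal rail length range 1 to 10 as an example) and rotating the basic model's orientation about the point's normal by a random angle; the function names and axes are assumptions:

```python
import math
import random

def adjustment_value(attr_range):
    """Draw an adjustment random value that satisfies the adjustable attribute's range."""
    low, high = attr_range
    return random.uniform(low, high)

def rotate_about_normal(vector, normal, angle_rad):
    """Rotate a 3D vector about a unit normal axis by angle_rad (Rodrigues' formula)."""
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    nx, ny, nz = normal
    vx, vy, vz = vector
    dot = nx * vx + ny * vy + nz * vz
    cross = (ny * vz - nz * vy, nz * vx - nx * vz, nx * vy - ny * vx)
    return tuple(v * c + cr * s + n * dot * (1.0 - c)
                 for v, cr, n in zip(vector, cross, normal))

# Per-point randomization for one hypothetical point of the cloud.
rail_length = adjustment_value((1.0, 10.0))          # e.g. the fence rail length range 1-10
orientation = rotate_about_normal((1.0, 0.0, 0.0),   # base model "forward" axis
                                  (0.0, 1.0, 0.0),   # the point's normal (Y up)
                                  random.uniform(0.0, 2.0 * math.pi))
print(rail_length, orientation)
```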
  • the point cloud data processing module 404 can also select one or more points from the point cloud data as the first blocking point.
  • the first blocking point indicates that the basic model is not arranged at points within a certain distance from the first blocking point.
  • the point cloud data processing module 404 can then generate a blocking sphere based on the first blocking point and delete points inside the blocking sphere.
  • the point cloud data processing module 404 can display a blocked area preview interface, in which basic models related to flowers and plants may be randomly arranged inside the area enclosed by the curve, and some stones may be arranged along the curve. In order to prevent flowers and plants from growing inside the stone and causing image distortion, the point cloud data processing module 404 will set up multiple blocking spheres as shown in Figure 8C and delete points located inside the blocking spheres.
  • the point cloud data processing module 404 can also select one or more points from the point cloud data as a second blocking point, wherein the second blocking point indicates that a first basic model is not arranged at points within a certain distance from the second blocking point, and a second basic model is arranged at the second blocking point, wherein the first basic model corresponds to a first image element and the second basic model corresponds to a second image element.
  • a second blocking sphere can be generated based on the second blocking point, and the first basic model can be arranged based on points other than the second blocking sphere.
  • for example, the first basic model is the basic model corresponding to the image element "flowers and grass", and the second basic model is the basic model corresponding to the image element "crowd", so that the crowd is arranged at the second blocking point but the flowers and grass are not arranged around the second blocking point, to avoid conflicts between the arrangement of flowers and plants and the arrangement of crowds.
  • the image generation module 405 may generate a corresponding image.
  • a user interface for the image generation module 405 to generate an image preview is shown.
  • although the lengths of the horizontal rails are roughly the same, different fences still have slightly different lengths, thereby achieving the random processing corresponding to the basic model.
  • Figure 10A is yet another schematic diagram illustrating generation of point cloud data according to an embodiment of the present disclosure.
  • 10B is a user interface illustrating further settings for the generation of point cloud data according to an embodiment of the present disclosure.
  • the process of randomly generating point cloud data described above also includes: based on the type of the basic model, determining whether to use an ordered random process or an unordered random process to randomly generate the point cloud data.
  • the random distribution rules of randomly clustered objects usually include a regular part determined by ordered rules and a disordered part determined by disordered rules.
  • the process of generating point cloud data can be divided into an ordered random process and a disordered random process as shown in Figure 10A.
  • Figure 10B you can also set whether to generate point cloud data using an ordered random process or an unordered random process on the user interface.
  • ordered random means that the regular part accounts for more than the disordered part.
  • for example, the regular part is that each street light needs to be installed at almost the same spacing.
  • the disordered part is that the spacing between street lights may have a slight random difference, and the color and brightness may have slight random differences. Therefore, for basic models such as street lights, it is necessary to consider generating point cloud data with an ordered random scheme.
  • examples of point cloud data attributes corresponding to the ordered random process include the area and curve shape of the strip-shaped area corresponding to the bearing model, the fact that the stones need to be arranged in an array in the bearing area, etc.
  • disordered randomness means that the regular part accounts for less than the disordered part.
  • the disordered part is that the size and position of each tree are disordered.
  • the orderly part is that the trees become smaller and denser toward the edge of the forest, and the spacing between trees becomes smaller.
  • some attributes of the basic model (length, width, height, volume, etc.) can be passed to the point cloud data processing module 404 as unreal attributes, and change thresholds can be generated correspondingly using the adjustable attributes; then, various random functions operating within the corresponding change threshold (such as a Gaussian random function, random functions based on the multiplicative congruential method or the mixed congruential method, random functions based on the central limit theorem, the Box-Muller random function, etc.) generate the individual attributes that vary randomly in an unordered manner.
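  • as one concrete, illustrative choice among the random functions listed above, the following sketch uses the Box-Muller transform to draw a normally distributed variation and clamps it to the change threshold derived from the adjustable attribute (names and the sigma fraction are assumptions):

```python
import math
import random

def box_muller():
    """One standard normal sample from two uniform samples (Box-Muller transform)."""
    u1 = random.random() or 1e-12      # avoid log(0)
    u2 = random.random()
    return math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)

def random_variation(base_value, change_threshold, sigma_fraction=0.3):
    """Vary an attribute (e.g. a tree's height) randomly, clamped to +/- change_threshold."""
    delta = box_muller() * change_threshold * sigma_fraction
    delta = max(-change_threshold, min(change_threshold, delta))
    return base_value + delta

print(random_variation(base_value=10.0, change_threshold=2.0))
```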
  • in the process of making assets according to the present disclosure, if a basic model needs to be processed by an ordered random process, only the rules of the regular part need to be considered; if a basic model needs to be processed by a disordered random process, only the rules of the disordered part need to be considered.
  • the present disclosure is not limited to this.
  • Optionally, depending on whether the smoothing option or the non-smoothing option is selected on the user interface of Figure 10B, the bearing model acquisition module 402 determines whether to smooth the curve corresponding to the bearing model. As an example, for basic models with sharp edges, such as floor tiles or buildings, the user interface in Figure 10B will recommend (or default to) the non-smoothing option, reminding the user that leaving the curve unsmoothed often gives a better arrangement result (as shown in Figure 17, described in detail later). Of course, the smoothing option can also be used for such sharp-edged basic models, giving the arrangement shown in Figure 18.
  • For basic models with fuzzy edges, such as forests, flowers, and grass, the user interface in Figure 10B will instead recommend (or default to) the smoothing option, reminding the user that smoothing the curve often gives a better arrangement result.
  • As an example, assuming the smoothing option is selected in Figure 10B, the bearing model acquisition module 402 smooths the curve corresponding to the bearing model, obtaining the smoothed curve shown in Figure 11.
  • The point cloud data processing module 404 then samples the curve corresponding to the bearing model to generate multiple points with the same spacing.
  • A comparison of the sampling results for the curve before and after smoothing is shown in Figure 12. Next, a normal attribute can be created for each point, showing the relationship between the direction of the normal and the coordinate axes. A sketch of equal-spacing sampling along a polyline is given below.
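The sketch below resamples a polyline (standing in for the smoothed curve of the bearing model) at equal arc-length intervals; the data layout is an assumption for illustration only.

```python
import math

def resample_equal_spacing(polyline, spacing):
    """Walk along a polyline given as [(x, y), ...] and emit points at
    equal arc-length intervals."""
    samples = [polyline[0]]
    dist_to_next = spacing
    for (x0, y0), (x1, y1) in zip(polyline, polyline[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        pos = 0.0
        while seg - pos >= dist_to_next:
            pos += dist_to_next
            r = pos / seg
            samples.append((x0 + r * (x1 - x0), y0 + r * (y1 - y0)))
            dist_to_next = spacing
        dist_to_next -= (seg - pos)
    return samples

# Usage: sample a simple L-shaped curve every 2 units.
curve = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0)]
points = resample_equal_spacing(curve, spacing=2.0)
```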
  • As shown in Figure 13, for ease of illustration, the basic model here is a toy duck: the positive direction of the x-axis is the toy duck's orientation, and the y-axis is the upward-pointing normal. It can be seen that each toy duck's orientation is rotated by a different amount about the normal axis.
  • Continuing with the fence example, as shown in Figure 14, the point cloud data processing module 404 optionally aligns the orientation of each point with the tangent direction of the curve, so that each fence section faces the next one and every section connects cleanly to its neighbor.
  • For example, the point cloud data processing module 404 can determine the tangent direction of the curve at each point in the image coordinate system and set that tangent direction as the point's tangent attribute, so that the horizontal rail of the fence at that point connects accurately to the vertical rail of the next fence. A sketch of this tangent-attribute assignment follows.
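The following sketch shows one way a tangent ("orientation") attribute could be attached to each sampled point using a central difference along the curve; the attribute names are assumptions, not the plug-in's actual point schema.

```python
import math

def add_tangent_attributes(samples):
    """Attach a unit tangent ("orientation") attribute to each sampled point so
    that the fence section at that point faces the next point along the curve."""
    cloud = []
    for i, (x, y) in enumerate(samples):
        nx, ny = samples[min(i + 1, len(samples) - 1)]   # next point (or itself at the end)
        px, py = samples[max(i - 1, 0)]                  # previous point (or itself at the start)
        tx, ty = nx - px, ny - py                        # central difference along the curve
        length = math.hypot(tx, ty) or 1.0
        cloud.append({"position": (x, y, 0.0),
                      "tangent": (tx / length, ty / length, 0.0)})
    return cloud
```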
  • Optionally, as shown in Figure 15, the point cloud data processing module 404 can also treat the start point and end point separately, so that the fences at the start and end no longer have hanging horizontal rails.
  • The point cloud data processing module 404 can further adjust the fence in combination with the various operations described in Figures 5 to 9.
  • For example, the point cloud data processing module 404 randomly adjusts some attributes of each fence, such as its lighting parameters.
  • The point cloud data processing module 404 can also set blocking points to avoid drawing fences at inappropriate locations in the image; a sketch of this filtering step is given below.
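A minimal sketch of the blocking-point filter is given below, assuming 2D blocking points and a single radius; the point dictionary layout is illustrative only.

```python
def remove_blocked_points(cloud, blocking_points, radius):
    """Delete every point that falls inside a blocking sphere (here a circle of
    the given radius around each 2D blocking point), so no basic model is drawn
    at those locations."""
    def blocked(point):
        px, py, _ = point["position"]
        return any((px - bx) ** 2 + (py - by) ** 2 <= radius ** 2
                   for bx, by in blocking_points)
    return [p for p in cloud if not blocked(p)]

# Usage: a few candidate points, kept away from a gate at (12.0, 3.0).
cloud = [{"position": (x, 3.0, 0.0)} for x in range(0, 20, 2)]
filtered = remove_blocked_points(cloud, blocking_points=[(12.0, 3.0)], radius=1.5)
```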
  • Unordered random processes are usually suitable for basic models with completely random arrangements, such as forests, vegetation, rocks, and static crowds; these basic models are usually arranged randomly within a closed area.
  • Here, the distribution area indicated by the bearing model is a closed area as shown in Figure 16.
  • Optionally, the bearing model acquisition module 402 smooths the curve corresponding to the bearing model to obtain a smoothed area.
  • The point cloud data processing module 404 can then use various random functions to sample multiple points within the closed area as the point cloud data. As shown in Figure 16, the sampling results for the closed area bounded by the smoothed curve differ from those for the closed area bounded by the unsmoothed curve, and therefore correspond to different point cloud data. A sketch of sampling inside a closed region follows.
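The sketch below shows one common way to sample uniform points inside a closed region: rejection sampling against the region's bounding box combined with an even-odd point-in-polygon test. It illustrates the idea only and is not the module's actual random function.

```python
import random

def point_in_polygon(x, y, polygon):
    """Even-odd ray-casting test against a closed polygon given as [(x, y), ...]."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x0, y0 = polygon[i]
        x1, y1 = polygon[(i + 1) % n]
        if (y0 > y) != (y1 > y):
            x_cross = x0 + (y - y0) * (x1 - x0) / (y1 - y0)
            if x < x_cross:
                inside = not inside
    return inside

def sample_closed_region(polygon, n_points):
    """Unordered-random sampling: draw uniform points in the polygon's bounding
    box and keep those that land inside the closed distribution area."""
    xs, ys = zip(*polygon)
    min_x, max_x, min_y, max_y = min(xs), max(xs), min(ys), max(ys)
    samples = []
    while len(samples) < n_points:
        x = random.uniform(min_x, max_x)
        y = random.uniform(min_y, max_y)
        if point_in_polygon(x, y, polygon):
            samples.append((x, y))
    return samples

# Usage: 200 tree positions inside a rough pentagon.
region = [(0, 0), (10, 0), (13, 6), (5, 11), (-3, 6)]
trees = sample_closed_region(region, n_points=200)
```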
  • Similarly, the point cloud data processing module 404 can create a normal attribute for each point and show the relationship between the normal direction and the coordinate axes.
  • As shown in Figure 17, for ease of illustration, the basic model here is a cuboid: the direction of the vector sum of its x-axis and y-axis vectors is the cuboid's orientation, and the z-axis is the upward-pointing normal. It can be seen that each cuboid's orientation is rotated by a random amount about the normal axis. The point cloud data processing module 404 can then also adjust the cuboids in combination with the various operations described in Figures 5 to 9. A sketch of the random rotation about the normal is given below.
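A short sketch of the random rotation about the normal axis follows; the normal is taken as the z axis here and the attribute names are assumptions.

```python
import math
import random

def add_random_yaw(cloud):
    """Rotate each basic model by a random angle about its normal (taken as the
    z axis here), so the repeated cuboids do not all face the same way."""
    for point in cloud:
        yaw = random.uniform(0.0, 2.0 * math.pi)
        point["normal"] = (0.0, 0.0, 1.0)
        point["orientation"] = (math.cos(yaw), math.sin(yaw), 0.0)   # rotated in-plane direction
    return cloud

# Usage: assign random orientations to a couple of sampled points.
cuboids = add_random_yaw([{"position": (1.0, 2.0, 0.0)}, {"position": (3.5, 0.5, 0.0)}])
```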
  • For example, the point cloud data processing module 404 randomly adjusts some attributes of each cuboid, such as its color.
  • The point cloud data processing module 404 can also set blocking points to avoid drawing cuboids at inappropriate locations in the image, yielding the randomly arranged cuboids shown in Figure 18.
  • In addition, the basic model acquisition module 401, the bearing model acquisition module 402, and the point cloud data processing module 404 described above can be packaged into a plug-in tool that classifies the adjustable parameters of the basic model and the bearing model, which can then be imported directly into Unreal Engine for use.
  • For example, the adjusted basic model and bearing model can be baked as a whole into a single attribute and adjusted directly in Unreal Engine.
  • For example, if the basic model is flowers and grass and the bearing model is the ground, the basic model and bearing model can be baked together into a vegetation attribute and used directly by Unreal Engine; a purely illustrative packaging sketch is given below.
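Purely as an illustration of this "baking" step, the sketch below bundles the adjusted models and point cloud into a single file; the JSON layout, field names, and file format are hypothetical, since the text does not specify the plug-in's or Unreal Engine's actual import format.

```python
import json

def bake_vegetation_asset(basic_model_params, bearing_model_params, point_cloud, path):
    """Bundle the adjusted basic model, bearing model and point cloud into one
    'baked' attribute file (illustrative layout only)."""
    baked = {
        "type": "vegetation",
        "basic_model": basic_model_params,       # e.g. {"mesh": "grass_clump", "height_range": [0.2, 0.5]}
        "bearing_model": bearing_model_params,   # e.g. {"region": "ground_patch_03"}
        "points": point_cloud,
    }
    with open(path, "w", encoding="utf-8") as f:
        json.dump(baked, f, indent=2)

# Usage (hypothetical names and values):
bake_vegetation_asset({"mesh": "grass_clump"}, {"region": "ground"},
                      [{"position": [0, 0, 0]}], "vegetation_baked.json")
```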
  • Thus, to meet the need to conveniently handle image elements that must appear repeatedly and randomly, the various embodiments of the present disclosure generate a basic model and a bearing model corresponding to an image element and use a point cloud to realize the random arrangement of the basic model on the bearing model. This solves technical problems such as high production and modification costs, unnatural image presentation, and low efficiency of the image generation process when handling such image elements.
  • According to yet another aspect of the present disclosure, an electronic device is also provided for implementing the method according to the embodiments of the present disclosure.
  • Figure 19 shows a schematic diagram of an electronic device 2000 according to an embodiment of the present disclosure.
  • As shown in Figure 19, the electronic device 2000 may include one or more processors 2010 and one or more memories 2020.
  • The memory 2020 stores computer-readable code which, when run by the one or more processors 2010, can execute the image processing method described above.
  • The processor in the embodiments of the present disclosure may be an integrated circuit chip with signal processing capability. The above processor may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • Such a processor can implement or execute the methods, operations, and logical block diagrams disclosed in the embodiments of the present disclosure. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor, and may be of the X86 architecture or the ARM architecture.
  • In general, the various example embodiments of the present disclosure may be implemented in hardware or special-purpose circuits, software, firmware, logic, or any combination thereof. Some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software that may be executed by a controller, microprocessor, or other computing device. While aspects of the embodiments of the present disclosure are illustrated or described as block diagrams, flowcharts, or some other graphical representation, it will be understood that the blocks, devices, systems, techniques, or methods described herein may be implemented, as non-limiting examples, in hardware, software, firmware, special-purpose circuitry or logic, general-purpose hardware or controllers, or other computing devices, or some combination thereof.
  • For example, the method or apparatus according to the embodiments of the present disclosure may also be implemented by means of the architecture of the computing device 3000 shown in Figure 20. The computing device 3000 may include a bus 3010, one or more CPUs 3020, a read-only memory (ROM) 3030, a random access memory (RAM) 3040, a communication port 3050 connected to a network, input/output components 3060, a hard disk 3070, and so on.
  • A storage device in the computing device 3000, such as the ROM 3030 or the hard disk 3070, can store various data or files used in the processing and/or communication of the methods provided by the present disclosure, as well as the program instructions executed by the CPU.
  • The computing device 3000 may also include a user interface 3080.
  • Of course, the architecture shown in Figure 20 is only exemplary, and when implementing different devices, one or more components of the computing device shown in Figure 20 may be omitted according to actual needs.
  • Figure 21 shows a schematic diagram of a storage medium 4000 according to the present disclosure.
  • Computer readable instructions 4010 are stored on the computer storage medium 4020. When the computer readable instructions 4010 are executed by a processor, the methods according to the embodiments of the present disclosure described with reference to the above figures may be performed.
  • Computer-readable storage media in embodiments of the present disclosure may be volatile memory or non-volatile memory, or may include both volatile and non-volatile memory.
  • Non-volatile memory may be read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), or flash memory.
  • Volatile memory may be random access memory (RAM), which acts as an external cache. By way of example and not limitation, many forms of RAM are available, such as static random access memory (SRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), enhanced synchronous dynamic random access memory (ESDRAM), synchlink dynamic random access memory (SLDRAM), and direct Rambus random access memory (DR RAM). It should be noted that the memory of the methods described herein is intended to include, but not be limited to, these and any other suitable types of memory.
  • Embodiments of the present disclosure also provide a computer program product or computer program, which includes computer instructions stored in a computer-readable storage medium.
  • The processor of the computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the method according to the embodiments of the present disclosure.
  • It should be noted that each block in the flowcharts or block diagrams may represent a module, program segment, or portion of code, which contains one or more executable instructions for implementing the specified logical function(s).
  • It should also be noted that, in some alternative implementations, the functions noted in a block may occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functionality involved.
  • It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by special-purpose hardware-based systems that perform the specified functions or operations, or by a combination of special-purpose hardware and computer instructions.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present disclosure provides an image processing method, an image processing plug-in, an image processing apparatus, an electronic device, a computer-readable storage medium, and a computer program product. The method includes: obtaining a basic model corresponding to an image element, the basic model indicating model data corresponding to the image element; obtaining a bearing model corresponding to the image element, the bearing model indicating a distribution area of the image element on the image; randomly generating point cloud data based at least in part on the bearing model, where the data corresponding to each point in the point cloud data indicates spatial distribution information and/or posture information of the basic model on the bearing model; and generating, based at least in part on the point cloud data, the basic model, and the bearing model, an image in which a plurality of the image elements are randomly arranged.

Description

一种图像处理方法 技术领域
本公开涉及图像处理领域,更具体地涉及一种图像处理方法、图像处理插件、电子设备、计算机可读存储介质和计算机程序产品。
背景技术
在各种游戏、影视、广告、可视化交互的2D或3D场景中,可能存在着大量的需要重复随机出现的图像元素。例如,森林中的树木、岩石、栅栏、围墙、广告牌、路灯、马路边缘、各种植被、停车场的汽车、商场中的人群、动物群体等等。并且这些图像元素会在制作的过程中不断的被美术人员修改,替换,最终达到最好的美术效果。
然而,使用现有的工具来处理这些图像元素往往存在诸多不便。例如,这些工具往往需要人工将在图像元素生成软件/设计软件处生成的多个图像元素导入至图像呈现软件(例如,3D渲染引擎),才能使得美术人员看到这些图像元素的最终的呈现效果。如果美术人员需要对这些图像元素进行进一步的修改,又往往需要在图像元素生成软件/设计软件重新修改这些图像元素,然后再将这些图像元素重新导入至图像呈现软件。此外,现有的工具将单个的图像元素处理成随机重复排布的多个图像元素时,得到的图像往往存在排布效果僵化以及生产的图像不自然的情况。
为此,本公开提出了一种图像处理方法、图像处理装置、电子设备、计算机可读存储介质和计算机程序产品,以解决在处理需要重复随机出现的图像元素时,制作成本和修改成本高、图像呈现效果不自然、图像生成过程效率低等技术问题。
发明内容
本公开的实施例提供了一种图像处理方法,包括:获取图像元素对应的基础模型,所述基础模型指示所述图像元素对应的模型数据,获取所述图像元素对应的承载模型,所述承载模型指示所述图像元素在所述图像上的分布区域; 至少部分地基于所述承载模型,随机地生成点云数据,所述点云数据中的每个点对应的数据指示所述基础模型在所述承载模型上的空间分布信息和/或姿态信息;以及至少部分地基于所述点云数据、所述基础模型和所述承载模型,生成随机排布有多个所述图像元素的图像。
例如,所述至少部分地基于所述承载模型,随机地生成点云数据包括:响应于承载模型指示的所述图像元素在所述图像上的分布区域为闭合区域,(可选地,利用随机函数)在所述闭合区域中采样多个点作为所述点云数据;响应于承载模型指示的所述图像元素在所述图像上的分布区域为非闭合曲线,(可选地,利用随机函数)在所述非闭合曲线上采样多个点作为所述点云数据。
例如,所述点云数据为所述图像对应的图像坐标系下的多个点的数据集,所述点云数据中的每个点与在所述承载模型上随机排布的基础模型存在一一对应的关系。
例如,所述随机地生成点云数据包括:基于所述基础模型对于点云中的各个点的属性进行赋值,以使得点云数据中的每个点对应的数据指示所述基础模型在所述承载模型上的空间分布信息和/或姿态信息。
例如,所述点云数据中的每个点的空间分布信息为所述点对应的基础模型在所述图像坐标系下的各个坐标轴的值,所述点云数据中的每个点的姿态信息为所述点对应的基础模型对应的法线属性和/或朝向属性。
例如,所述随机地生成点云数据包括:基于所述基础模型的类型,确定使用有序随机过程还是无序随机过程来随机地生成所述点云数据,其中,有序随机过程指示以有序的规则确定的规律部分的占比高于以无序的规则确定的无序部分,有序随机过程指示所述规律部分的占比低于所述无序随机。
例如,所述随机地生成点云数据包括:响应于确定使用有序随机过程来随机地生成所述点云数据,对所述承载模型对应的分布区域进行均匀采样,以生成所述点云数据;响应于确定使用无序随机过程来随机地生成所述点云数据,对所述承载模型对应的分布区域进行随机采样,以生成所述点云数据。
例如,所述点云数据具有以下点云数据属性中的一项或多项:随机值、随机程度、随机大小范围、随机颜色范围、随机图像元素的种类数量、随机旋转范围、以及在所述承载模型上不绘制所述基础模型的空间参数。
例如,所述随机地生成点云数据包括:从所述点云数据中选择一个或多个 点作为第一阻塞点,其中,所述第一阻塞点指示距离所述阻塞点一定距离内的点不排布基础模型;以及基于第一阻塞点生成阻塞球体,并删除所述点云数据位于所述阻塞球体内部的点。
例如,所述随机地生成点云数据包括:从所述点云数据中选择一个或多个点作为第二阻塞点,其中,所述第二阻塞点指示距离所述第二阻塞点一定距离内的点不排布第一基础模型,并在所述第二阻塞点处排布第二基础模型,其中,所述第一基础模型对应于第一图像元素,所述第二基础模型对应于第二图像元素。
例如,所述基础模型所对应的模型数据为与所述图像元素相关联的三维虚拟对象对应的数据,所述承载模型指示在虚拟场景中所述三维虚拟对象对应的可绘制区域。例如,所述承载模型上的可绘制区域上排布有多种基础模型。
例如,所述的方法还可选地包括:将所生成的图像作为资产,生成用于另一图像的基础模型。
例如,所述的方法还可选地包括:基于所述点云数据、所述基础模型和所述承载模型,生成用于另一图像的基础模型。
例如,所述获取图像元素对应的基础模型还包括:从多种图像元素对应的基础模型中,选择至少一种图像元素对应的基础模型。
本公开的实施例提供了一种图像处理插件,所述图像处理插件被配置为执行以下操作:获取图像元素对应的基础模型,所述基础模型指示所述图像元素对应的模型数据,获取所述图像元素对应的承载模型,所述承载模型指示所述图像元素在所述图像上的分布区域;至少部分地基于所述承载模型,随机地生成点云数据,所述点云数据中的每个点对应的数据指示所述基础模型在所述承载模型上的空间分布信息和/或姿态信息;以及至少部分地基于所述点云数据、所述基础模型和所述承载模型,生成随机排布有多个所述图像元素的图像。
本公开的实施例提供了一种图像处理装置,包括:基础模型获取模块被配置为获取图像元素对应的基础模型,所述基础模型指示所述图像元素对应的模型数据;承载模型获取模块,被配置为获取所述图像元素对应的承载模型,所述承载模型指示所述图像元素在所述图像上的分布区域;点云生成模块,被 配置为至少部分地基于所述承载模型,随机地生成点云数据,所述点云数据中的每个点对应的数据指示所述基础模型在所述承载模型上的空间分布信息和/或姿态信息;以及图像生成模块,被配置为至少部分地基于所述点云数据、所述基础模型和所述承载模型,生成随机排布有多个所述图像元素的图像。
本公开的一些实施例提供了一种电子设备,包括:处理器;存储器,存储器存储有计算机指令,该计算机指令被处理器执行时实现上述的方法。
本公开的一些实施例提供了一种计算机可读存储介质,其上存储有计算机指令,所述计算机指令被处理器执行时实现上述的方法。
本公开的一些实施例提供了一种计算机程序产品,其包括计算机可读指令,所述计算机可读指令在被处理器执行时,使得所述处理器执行上述的方法。
由此,针对需要便捷地处理重复随机出现的图像元素的需求,本公开的各个实施例生成图像元素对应的基础模型和承载模型,并利用点云实现在承载模型上基础模型的随机排布,从而解决在处理需要重复随机出现的图像元素时,制作成本和修改成本高、图像呈现效果不自然、图像生成过程效率低等技术问题。
附图说明
为了更清楚地说明本公开实施例的技术方案,下面将对实施例的附图作简单地介绍,显而易见地,下面描述的附图仅仅涉及本公开的一些实施例,而非对本公开的限制。
为了更清楚地说明本公开实施例的技术方案,下面将对实施例的描述中所需要使用的附图作简单的介绍。下面描述中的附图仅仅是本公开的示例性实施例。
图1是示出根据本公开实施例的应用场景的示例示意图。
图2是示出根据本公开实施例的图像处理方法的流程图。
图3是示出根据本公开实施例的方法生成的图像的示意图。
图4是示出根据本公开的实施例的图像处理装置的结构图。
图5是示出根据本公开的实施例的使用图像处理装置执行图像处理方法的示意流程图。
图6是示出根据本公开的实施例的基础模型获取模块获取基础模型的一个示例用户界面。
图7A示出了根据本公开的实施例的承载模型获取模块获取承载模型的流程图。
图7B示出了根据本公开的实施例的承载模型获取模块获取承载模型的四种示例用户界面。
图7C示出了根据本公开的实施例的对承载模型进行参数设定的示例用户界面。
图8A示出了根据本公开实施例的点云数据处理模块生成点云数据的流程图。
图8B示出了根据本公开实施例的点云数据处理模块基于基础模型对点云中的各个点的属性进行赋值的用户界面。
图8C示出了根据本公开实施例的点云数据处理模块选择阻塞模型的用户界面。
图8D示出了根据本公开实施例的点云数据处理模块设置点云数据的属性的用户界面。
图9示出了根据本公开实施例的图像生成模块生成图像预览的用户界面。
图10A是示出根据本公开实施例的生成点云数据的又一示意图。
图10B是示出根据本公开实施例的对点云数据的生成进一步设置的用户界面。
图11至图15是根据本公开实施例的有序随机过程可能涉及的部分操作的示意图。
图16至图18是根据本公开实施例的无序随机过程可能涉及的部分操作的示意图。
图19示出了根据本公开实施例的电子设备的示意图。
图20示出了根据本公开实施例的示例性计算设备的架构的示意图。
图21示出了根据本公开实施例的存储介质的示意图。
具体实施方式
为了使得本公开的目的、技术方案和优点更为明显,下面将参照附图详细 描述根据本公开的示例实施例。显然,所描述的实施例仅仅是本公开的一部分实施例,而不是本公开的全部实施例,应理解,本公开不受这里描述的示例实施例的限制。
在本说明书和附图中,具有基本上相同或相似操作和元素用相同或相似的附图标记来表示,且对这些操作和元素的重复描述将被省略。同时,在本公开的描述中,术语“第一”“第二”等字样用于对作用和功能基本相同的相同项或相似项进行区分,应理解,“第一”、“第二”、“第n”之间不具有逻辑或时序上的依赖关系,也不对数量和执行顺序进行限定。还应理解,尽管以下描述使用术语第一、第二等来描述各种元素,但这些元素不应受术语的限制。这些术语只是用于将一元素与另一元素区别分开。例如,在不脱离各种示例的范围的情况下,第一数据可以被称为第二数据,并且类似地,第二数据可以被称为第一数据。第一数据和第二数据都可以是数据,并且在某些情况下,可以是单独且不同的数据。本申请中术语“至少一个”的含义是指一个或多个,本申请中术语“多个”的含义是指两个或两个以上,例如,多个音频帧是指两个或两个以上的音频帧。
应理解,在本文中对各种示例的描述中所使用的术语只是为了描述特定示例,而并非旨在进行限制。如在对各种示例的描述和所附权利要求书中所使用的那样,单数形式“一个(“a”“an”)”和“该”旨在也包括复数形式,除非上下文另外明确地指示。
还应理解,本文中所使用的术语“和/或”是指并且涵盖相关联的所列出的项目中的一个或多个项目的任何和全部可能的组合。术语“和/或”,是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,本申请中的字符“/”,一般表示前后关联对象是一种“或”的关系。
还应理解,在本申请的各个实施例中,各个过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定。还应理解,根据(基于)A确定B并不意味着仅仅根据(基于)A确定B,还可以根据(基于)A和/或其它信息来确定B。
还应理解,术语“包括”(也称“includes”、“including”、“Comprises” 和/或“Comprising”)当在本说明书中使用时指定存在所陈述的特征、整数、操作、操作、元素、和/或部件,但是并不排除存在或添加一个或多个其他特征、整数、操作、操作、元素、部件、和/或其分组。
还应理解,术语“如果”可被解释为意指“当...时”(“when”或“upon”)或“响应于确定”或“响应于检测到”。类似地,根据上下文,短语“如果确定...”或“如果检测到[所陈述的条件或事件]”可被解释为意指“在确定...时”或“响应于确定...”或“在检测到[所陈述的条件或事件]时”或“响应于检测到[所陈述的条件或事件]”。
为便于描述本公开,以下介绍与本公开有关的概念。
模型及3D模型,在本公开中,模型一词指示用电子数据通过数字表现形式构成的形体或结构。而3D模型则是模型的一个示例。在3D模型的一个示例中,3D模型可以由顶点(vertex)组成,顶点之间连成三角形和四边形,并最终由无数个多边形构成复杂的立体模型。
首先,参照图1描述本公开的各个方面的应用场景。图1示出了根据本公开实施例的应用场景100的示意图,其中示意性地示出了服务器110和多个终端120。终端120以及服务器110可以通过有线或无线通信方式进行直接或间接地连接,本公开在此不做限制。
如图1所示,本公开实施例采用互联网技术,尤其是物理网技术。物联网可以作为互联网的一种延伸,它包括互联网及互联网上所有的资源,兼容互联网所有的应用。随着物联网技术在各个领域的应用,出现了诸如智能家居、智能交通、智慧健康等各种新的智慧物联的应用领域。
根据本公开的一些实施例用于处理图像。本公开所述的图像可以是2D图像也可以是3D图像。例如,本公开的图像上可以呈现与上述的智能家居、智能交通、智慧健康相关的各种虚拟场景和虚拟对象。本公开的图像还可以是可操作/可交互的图像,具体地,用户可以通过交互界面以对该图像上进行各种操作(例如,点击、拖动、触摸等),以呈现不同视角下的虚拟场景和虚拟对象。当然,本公开并不以此为限。
例如,根据本公开的一些实施例的方法可以全部或部分地搭载在服务器110上以对图像进行处理。例如,服务器110将用于分析图像元素对应的基础模型和承载模型,并生成点云数据,然后基于基础模型、承载模型和点云数据, 利用服务器上的搭载的渲染引擎,生成包括随机排布的多个图像元素的图像。这里的服务器110可以是独立的服务器,也可以是多个物理服务器构成的服务器集群或者分布式系统,还可以是提供云服务、云数据库、云计算、云函数、云存储、网络服务、云通信、中间件服务、域名服务、安全服务、内容分发网络(CDN,Content Delivery Network)、定位服务以及大数据和人工智能平台等基础云计算服务的云服务器,本公开实施例对此不作具体限制。以下,又将服务器110称为云端。
例如,根据本公开实施例的方法还可以全部或部分地搭载在终端120上以对图像进行处理。例如,终端120将用于采集基础模型和承载模型相关的数据。又例如,终端120将用于呈现包括随机排布的多个图像元素的图像。例如,终端120可以是一种交互装置,其包括用户界面,可通过用户界面对图像进行显示,用户可以与交互装置进行信息交互。本公开对此并不进行限定。
例如,多个终端120中的每个终端可以是诸如台式计算机等的固定终端,诸如,智能手机、平板电脑、便携式计算机、手持设备、个人数字助理、智能可穿戴设备(例如,智能眼镜)、智能头戴设备、摄像机、车载终端等具有网络功能的移动终端,或者它们的任意组合,本公开实施例对此不作具体限制。
例如,根据本公开的实施例的终端120和服务器110上均可以搭载以下图像元素创建工具和图像渲染工具,以用于生成图像。
例如,终端120和服务器上可以搭载各种数字内容创建软件(Digital Content Create或是Digital Content Creating,又称为DCC软件)作为图像元素创建工具。DCC软件的示例包括:3DsMax、C4d、Blender、Maya、Houdini等等。DCC软件可以导出的文件包括几何体文件(.CGF)、蒙皮角色文件(.CHR)、蒙皮角色文件(.CHR)、角色动画文件(.CAF)、材质文件(.mtl)等。可选地,DCC软件上还可以包括笔刷组件。笔刷组件可以将一些预设的图案以画笔的形式进行使用。
例如,终端120和服务器110上可以搭载虚幻引擎(UE引擎)作为图像渲染工具。虚幻引擎是一种游戏引擎,支持主机游戏、PC游戏、手游开发等。目前流行的虚幻引擎的版本为UE4和UE5。其中,UE4编辑器具有可视化编辑窗口,能直接对游戏中角色、道具进行修改,并且支持实时渲染。本公开的各个示例中针对图像的渲染可以基于UE4引擎/UE5引擎来实现。
更进一步地,上述的各种图像元素创建工具和图像渲染工具均可以对图像的资产(asset)进行处理。图像的资产包括模型参数、音频、视频、图元、材质参数等等,以实现虚拟环境/虚拟环境的创建。
目前,使用现有的工具来处理需要重复随机出现图像元素往往存在诸多不便。例如,这些工具往往需要手动地将DCC软件生成的单个图像元素对应的图像模型导入至虚幻引擎,才能使得美术人员看到该图像元素的最终的呈现效果。如果美术人员需要对这些图像元素进行进一步的修改,又往往需要在DCC软件重新修改该图像模型,然后再将修改后的图像模型重新导入至虚幻引擎。此外,现有的工具将单个的图像元素处理成随机重复排布的多个图像元素时,得到的图像往往存在排布效果僵化以及生产的图像不自然的情况。
由此,针对需要便捷地处理重复随机出现的图像元素的需求,本公开的各个实施例生成图像元素对应的基础模型和承载模型,并利用点云实现在承载模型上基础模型的随机排布,从而解决在处理需要重复随机出现的图像元素时,制作成本和修改成本高、图像呈现效果不自然、图像生成过程效率低等技术问题。
以下,参考图2至图21以对本公开实施例进行进一步的描述。
作为示例,图2是示出根据本公开实施例的图像处理方法20的流程图。图3是示出根据本公开实施例的方法20生成的图像的示意图。
参见图2,示例方法20可以包括操作S201-S204之一或全部,也可以包括更多的操作。本公开并不以此为限。如上所述,操作S201至S204是由终端120/服务器110实时执行的,或者由终端120/服务器110离线执行。本公开并不对示例方法20各个操作的执行主体进行限制,只要其能够实现本公开的目的即可。示例方法中的各个步骤可以全部或部分地由图像元素创建工具和图像渲染工具执行,也可以由适用于图像元素创建工具和图像渲染工具的插件(plug-in)实现。插件是一种计算机程序产品,其通过和图像元素创建工具和图像渲染工具的交互,用来替应用程式增加一些所需要的特定的功能以使得第三方的开发者可以对应用程序进行扩充、精简,或者将源代码从应用程序中分离出来,去除因软件使用权限而产生的不兼容。
上述的图像元素创建工具、图像渲染工具和插件中的两项或多项可以集成成一个大型应用以实现在单个软件中绘制图像的目标,图像元素创建工具、 图像渲染工具和插件也可以是三个独立安装的应用程序,但是通过三者互相开放的接口传输图像相关的模型数据和资产数据等等。本公开并不以此为限。
例如,在操作S201中,获取图像元素对应的基础模型,所述基础模型指示所述图像元素对应的模型数据。
例如,所述图像元素为可以在图像上显示的各种虚拟对象,这些虚拟对象可能图像上重复出现多次以形成部分或全部的可视化的虚拟场景。这些虚拟对象的出现位置可以是随机的。例如,如图3所示,其示意性地示出了随机排布有树木和建筑物的图像。其中,图像元素301的示例为树木。此外,图像元素还可以是任意的2D或3D静态背景图像元素。例如,如果需要生成展示城市的图像,图像元素还可以是各种汽车、静态人群、广告牌、路灯、马路边缘、栅栏、玻璃门、围墙等。如果需要生成展示草原的图像,图像元素还可以是树木、各种植被、动物群体、石块等等。
作为一个示例,所述基础模型可以是一种3D虚拟模型,其可通过上述提及的DCC软件制作。结合以下详述的本公开的各个实施例,本公开中制作3D模型的工具例如3DsMax、C4d、Blender、Maya、Houdini等等。在这些示例中,可以通过该软件得到几何体文件(.CGF)、蒙皮角色文件(.CHR)、蒙皮角色文件(.CHR)、角色动画文件(.CAF)、材质文件(.mtl)作为所述基础模型的部分资产的示例。此外,还可在创建三维虚拟模型之前,对模型数据进行结构性优化,以节省计算资源,提高处理效率。值得注意的是,本公开并不对制作3D模型的工具的类型进行限制,例如,上述的DCC软件可为3D模型剖析的软件,可为进行视觉艺术创作的3D软件,还可为3D打印的3D软件,等等;此外,还可通过计算机图形库(即自编程时用到的图形库)制作生成三维模型;例如,(Open Graphics Library,开放图形库)、DirectX(Direct eXtension)等等。
例如,上述基础模型所对应的模型数据可以是上述各种虚拟对象对应的各种数据。也即,上述基础模型所对应的模型数据可以包括与在所述图像中绘制单个图像元素有关的任意数据。例如,所述模型数据可以是从所述虚拟对象的边缘信息、深度信息、顶点信息、高度信息、宽度信息、长度信息、提及信息等等。所述模型数据还可以是任意的图像资产,例如,颜色信息、材质信息、形状信息等等。所述模型数据还可以是任意的音频资产或视频资产(例如gif动画)。上述基础模型可以是美术人员手动设置的,也可以是从互联网/数据 库中拉取与所述虚拟对象相关的任意数据。本公开对此不进行限制。
在操作S202中,获取所述图像元素对应的承载模型,所述承载模型指示所述图像元素在所述图像上的分布区域。
例如,所述承载模型指示上述的各个虚拟对象在虚拟场景中对应的可绘制区域。在一个示例,该承载模型可以是在DCC软件中创建的3D虚拟对象,其可以通过美术人员手动设计。在一个示例中,该承载模型可以是在图像呈现软件中通过各种方式限定的图元模型或图元模型的组合。在又一个示例中,该承载模型还可以是利用插件或其他软件绘制的某个区域对应的3D虚拟模型。本公开并不以此为限。
作为一个示例,承载模型可以以图像中的任意连通形状来指示所述图像元素在所述图像上的分布区域。例如,如图3所示,其示意性地以白色区域示出了一个承载模型302对应的区域,在该承载模型对应的区域上,对应地随机分布着树木形式的图形元素。虽然在图3以白色区域的形式示出了承载模型对应的区域,本领域技术人员应当理解本公开并不以此为限。承载模型可以以各种方式获取,本公开并不以此为限。之后将参考各个附图进一步描述获取承载模型的一些示例方式,本公开在此不再赘述。
在操作S203中,至少部分地基于所述承载模型,随机地生成点云数据,所述点云数据中的每个点对应的数据指示所述基础模型在所述承载模型上的空间分布信息和/或姿态信息。
例如,所述点云数据可以是所述图像对应的图像坐标系下的多个点的数据集,所述点云数据中的每个点与在所述承载模型上随机排布的基础模型存在一一对应的关系。例如,所述点云数据中的每个点的空间分布信息为所述点对应的基础模型在所述图像坐标系下的各个坐标轴的值,所述点云数据中的每个点的姿态信息为所述点对应的基础模型对应的法线属性和/或朝向属性。
例如,点云数据中的每个点对应的数据集可以包括多种类型或格式的信息以用于指示所述基础模型在所述承载模型上的空间分布信息和/或姿态信息。例如,在该数据集中,空间分布信息可以是该图像坐标系下的三维坐标(X,Y,Z)/二维坐标(X,Y)的各个轴的值以及任何可能的属性值。可选地,点云数据中的每个点将对应于一个基础模型。点云数据中的点的数量就指示了承载模型上随机排布的基础模型的数量。
在一个示例中,点云数据中的每个点的三维坐标(X,Y,Z)的各个轴的值指示了所述基础模型在所述承载模型上的空间分布信息,而点云数据中的每个点的对应的法线属性或切线属性(以下又称为朝向属性)指示了所述基础模型在所述承载模型上的姿态信息。此外,每个点还可以具有形变信息,其指示在将所述基础模型置于承载模型之上时对应的形变信息。每个点还可以具有色彩变更信息或材质变更信息,其指示在将所述基础模型置于承载模型之上时对应的色彩变化或材质变化。本领域技术人员应当理解本公开并不以此为限。点云数据可以以各种方式生成,本公开并不以此为限。之后将参考各个附图进一步描述生成点云数据的一些示例方式,本公开在此不再赘述。
在操作S204中,至少部分地基于所述点云数据、所述基础模型和所述承载模型,生成随机排布有多个所述图像元素的图像。
例如,可以利用DCC软件开放的接口,将基础模型中对应的模型数据传输至点云数据的每个点对应的数据集中,从而将基础模型本身作为点云数据的每个点的属性之一。又例如,还可以利用插件,建立DCC软件和图像渲染工具之间的通信管道,从而在渲染点云中的每个点时,考虑到基础模型对应的模型数据。又例如,还可以利用图像渲染工具开放的接口,在渲染点云中的每个点时,对应地利用贴图等技术将基础模型的各类资产附着在该点之上。本公开并不以此为限。之后将参考各个附图进一步描述生成随机排布有多个所述图像元素的图像的一些示例方式,本公开在此不再赘述。
在一些示例中,可以使用各类二维渲染引擎来生成随机排布有图像元素的图像。二维渲染引擎例如是PS(photoshop)工具或illustrator工具等。在又一些示例中,可以使用各类三维渲染引擎来生成随机排布有图像元素的图像。三维渲染引擎能够实现从数字三维场景中生成可显示的二维影像。所生成的二维影像可以是写实的也可以是非写实的。而三维渲染的过程需要依靠3D渲染引擎来生成。结合以下详述的本公开的各个实施例,本公开中的示例渲染引擎可以使用“光线追踪”技术,其通过追踪来自摄影机的光线穿过像素的虚拟平面并模拟其与物体相遇的效果来生成影像。本公开中的示例渲染引擎还可以使用“光栅化”技术,其通过收集各种面元的相关信息来确定二维影像中各个像素的值。本公开并不对3D渲染引擎的种类以及采用的技术进行限制。
根据本公开的一个示例实施例,可以利用各类图像元素创建工具(DCC 软件)执行上述操作S201以获得至少一个基础模型。所述至少一个基础模型对应于不同图像元素或虚拟对象的复合资产。例如,所述复合资产包括至少一种森林资产、至少一种石头资产、以及既包括石头资产又包括树林资产的嵌套资产等。接着使用图像渲染工具(虚幻引擎)执行上述操作S202至操作S204,从而便于在图像渲染工具中实时地获取承载模型和点云数据,实时地修改承载模型和点云数据(例如,用户可以手动地修改点云数据对应的随机排布情况)。此外,在将基础模型中对应的模型数据传输至点云数据的每个点对应的数据集之后,图像渲染工具还可以对应地将点云数据中的每个点属性进行手动修改或自动化的修改。例如,在基础模型对应于栅栏时,既可以自动化地修改栅栏的个数,也可以手动地拖动栅栏的位置或改变栅栏的形状等。
根据本公开的又一个示例实施例,可以利用各类图像元素创建工具(DCC软件)创建多种图像元素对应的基础模型。接着使用图像渲染工具(虚幻引擎)执行上述操作S201至操作S204。在图像渲染工具执行上述操作S201的过程中,可以从多种图像元素对应的基础模型中,选择至少一种图像元素对应的基础模型作为操作S202至操作S204使用的基础模型。由此,可以避免在图像渲染工具中一次次导入不同的基础模型,而是批量导入多个基础模型。
可选地,示例方法20还可以包括第一附加操作。在该第一附加操作中,可以将所生成的图像作为资产,生成用于另一图像的基础模型。也即,上述包括各种随机排布的图像元素的图像也可以作为一种基础模型的资产,从而可以在生成所述另一图像的过程中直接使用该基础模型。例如,包括随机排布的荷叶的池塘(池塘边缘有石块堆积的边缘)可以作为一个基础模型,在绘制一张新的图像时直接使用,而不需重新排布荷叶和池塘边缘的石块。
可选地,示例方法20还可以包括第二附加操作。在该第二附加操作中可以基于所述点云数据、所述基础模型和所述承载模型,生成用于另一图像的基础模型。也即,上述操作S201至S203中的点云数据、基础模型和承载模型都可以是另一基础模型的资产,从而在绘制新的图像时直接使用上述模型和数据。例如,荷叶、随机排布的荷叶对应的点云数据、池塘、石块、随机排布的池塘边缘的石块对应的点云数据等作为一个集合,可以是所述另一基础模型的资产(例如,图形元素“随机排布的荷叶的池塘(池塘边缘有石块堆积的边缘)”对应的基础模型)。
可选地,在该第一附加操作和第二附加操作中生成的可用于另一图像的基础模型还可以被打包成一个插件工具,在该插件工具中的基础模型和承载模型中的可调节参数可被进一步归类,然后直接导入虚幻引擎进行使用。当然本公开并不限于此。
由此,针对需要便捷地处理重复随机出现的图像元素的需求,本公开的各个实施例生成图像元素对应的基础模型和承载模型,并利用点云实现在承载模型上基础模型的随机排布,从而解决在处理需要重复随机出现的图像元素时,制作成本和修改成本高、图像呈现效果不自然、图像生成过程效率低等技术问题。
接下来,参考图4来进一步描述可以用于执行方法20的一种示例装置40。图4是示出根据本公开的实施例的图像处理装置40的结构图。本领域技术人员应当理解图4所示的装置仅是一种示例,本公开并不以此为限。
如图4所示,图像处理装置40包括基础模型获取模块401、承载模型获取模块402、点云数据处理模块404和图像生成模块405。图像处理装置40还可选地包括模型参数拾取模块403。图像处理装置40还可以包括更多或更少的模块,本公开并不以此为限。如上所述,可选地,图像处理装置40可以整体作为一个插件使用。
在图4所示的示例中,基础模型获取模块401可以被配置为执行操作S201。其中,图像元素例如是一种或者几种不同的树、一种或者几种不同的花草,基础模型则是这些图像元素对应的资产和参数。又例如,图像元素是栅栏。那么,基础模型则可以是栅栏中基础的一个纵向栏杆和两个横向栏杆对应的资产和参数。又例如,图像元素还可以是一个或者几种石头,一个路灯等等。在本公开的一些实施例中,虽然针对同一个虚拟对象,图像元素可以有多种(例如上述的多种不同的树、多种不同的花草和多种石头),但是单个虚拟对象对应的图像元素的数量是受到限制的,从而减少图像处理过程中需要的计算资源。可选地,所述基础模型获取模块401还可以被配置为对于基础模型的细节进行更新,以使得基础模型与待生成的图像更适配。
承载模型获取模块402可以被配置为执行操作S202。在一个示例中,承载模型获取模块402可以是适用于图像元素创建工具和图像渲染工具的插件的组成部分。例如,用户可以利用该插件手动绘制一条闭合曲线以生成承载模 型,所述闭合曲线的内部区域指示所述图像元素在所述图像上的分布区域。又例如,用户还可以通过点击选择需要布置树木的地面、需要长草的山体,需要路灯的道路等等,从而自动地生成所述承载模型。而对应的地面、山体和道路即对应于承载模型。可选地,承载模型获取模块402还可以被配置为对承载模型进行更新,以使得基础模型与待生成的图像更适配。
可选地,不论是上述对基础模型的更新还是对承载模型的更新都可以实时的在图像渲染工具或插件中进行预览,以便于用户在使用过程中,随时观察和调整待生成的图像中随机集群的估计形态。
模型参数拾取模块403可以被配置为将基础模型的全部或部分参数或资产从基础模型获取模块401传递至承载模型获取模块402。或者,模型参数拾取模块403可以被配置为将基础模型的全部或部分参数或资产从承载模型获取模块402传递至基础模型获取模块401。可选地,所述模型参数拾取模块403可以作为上述插件的可选组成部件。
点云生成模块404可以被配置为执行操作S203。在一个示例中,点云生成模块404也可以是适用于图像元素创建工具和图像渲染工具的插件的组成部分。在该插件中,可以对各项点云数据属性进行调节。可选地,点云数据属性包括:随机值、随机程度、随机大小范围、随机颜色范围、随机图像元素的种类数量、随机旋转范围、阻塞模型的体积及其大小(其对应于在所述承载模型上不绘制所述基础模型的空间参数)等等。本公开并不以此为限。
图像生成模块405可以被配置为执行操作S204。在一个示例中,图像生成模块405可以是图像渲染工具的组成部分,其用于渲染图像以生成所述随机排布有多个所述图像元素的图像。在所述图像为3D图像的情况下,图像生成模块405也可以生成一个网格3D模型资产,或者多个随机的重复网格3D模型资产。所述网格3D模型资产可以作为上述虚拟场景的组件之一,其的一个切片可以是所述3D图像。当然本公开并不以此为限。
与传统的仅在图像中手动绘制和摆放各个图像元素相比,根据本公开实施例的图像处理装置40能够生成更为自然的图像,并且还可以通过在上述的各个组件中调节点云数据的属性,实现图像中重复出现的各个图像元素之间细微的不同。
接下来参考图5至图9来进一步描述使用图像处理装置40执行方法20 的一个示例。其中,图5是示出根据本公开的实施例的使用图像处理装置40执行方法20的示意流程图。图6至图9是示出根据本公开的实施例的使用图像处理装置40执行方法20的过程中的示例用户界面图或部分流程图。
如图5所示,基础模型获取模块401可以用于获取基础模型。图6示出了基础模型获取模块401获取基础模型的一个示例用户界面。其中,在该示例用户界面右侧,用户可以通过点击选择需要随机排布的基础模型,例如,在图6的示例中,用户选择了随机排布栅栏。在该示例用户界面的下方,可选地将呈现所选择的虚拟模型的一些资产,例如,栅栏的不同竖杆形态和横杆形态的3D图像或其他参数。
承载模型获取模块402可以用于获取曲线,然后基于所述曲线生成承载模型。图7A示出了根据本公开的实施例的承载模型获取模块402获取承载模型的流程图。图7B示出了根据本公开的实施例的承载模型获取模块402获取承载模型的四种示例用户界面,其对应于根据曲线生成承载模型的不同形式。图7C示出了根据本公开的实施例的对承载模型进行参数设定的示例用户界面。
例如,参考图7A,承载模型获取模块402可以判断图像上是否已经设定了预设区域用于获取承载模型。响应于图像上存在预设区域,承载模型获取模块402基于所述预设区域获取承载模型。例如,在图7B的曲线绘制或提取界面III中,所述图像的背景包括一块六边形的区域,这块区域为基础素材库中的材料。用户通过点击该区域的中心,承载模型获取模块402可以自动获取该区域的边缘作为曲线,并可选地对该曲线进行平滑处理,从而得到一个能够指示图像元素在所述图像上的分布区域的闭合图形。
响应于图像上不存在预设区域,承载模型获取模块402基于绘制的曲线来承载模型。例如,在图7B的曲线绘制或提取界面I中,用户仅绘制了一条非闭合的曲线(以其中的实线示出)。承载模型获取模块402可以根据基础模型对应的参数,设置宽度参数W,从而获得沿着该实线的闭合区域(以虚线围成的闭合区域示出)。例如,承载模型获取模块402可以利用sweep函数,生成沿着非闭合的曲线的带状闭合区域。当然本公开并不以此为限。
又例如,在图7B的曲线绘制或提取界面II中,用户绘制了一个闭合的椭圆形,那么该椭圆形围成的区域也即该承载模型所指示的图像元素在所述 图像上的分布区域。
承载模型所指示的所述图像元素在所述图像上的分布区域不仅可以是上述的具有一定面积的闭合区域,还可以仅是一条或多条非闭合的曲线。在该曲线上可以随机或有序的分布有不同的图形元素。例如,在图7B的曲线绘制或提取界面IV中,用户仅绘制了一条非闭合的曲线。此时,可以在从曲线上随机采样多个点作为后续的点云数据。虽然图7B仅示出了四种通过曲线获取承载模型的示例,本公开并不以此为限。
可选地,承载模型获取模块402还可以基于用户输入的值(例如,通过在图7C所示的用户界面上勾选相应的选项)来确定所述图像元素在所述图像上的分布区域是具有一定面积的闭合区域,还是一条或多条非闭合的曲线。也即,响应于承载模型指示的所述图像元素在所述图像上的分布区域为闭合区域(例如,在图7C所示的用户界面上选择闭合区域对应的单选按钮),在所述闭合区域中采样多个点作为所述点云数据;响应于承载模型指示的所述图像元素在所述图像上的分布区域为非闭合曲线(例如,在图7C所示的用户界面上选择非闭合曲线对应的单选按钮),在所述非闭合曲线上采样多个点作为所述点云数据。可选地,可以通过插件中的switchif工具(一种模式切换工具)来对应地根据单选按钮切换点云数据的生成方式,例如,选择是直接在曲线上随机采样多个点作为后续的点云数据还是在闭合区域内部随机采样多个点作为后续的点云数据。本公开并不以此为限。
接着结合图8A至图8C来进一步描述点云数据处理模块404对于点云数据的处理过程。图8A示出了根据本公开实施例的点云数据处理模块404生成点云数据的流程图。图8B示出了根据本公开实施例的点云数据处理模块404以基于基础模型对于点云中的各个点的属性进行赋值的用户界面。图8C示出了根据本公开实施例的点云数据处理模块404选择阻塞模型的用户界面。图8D示出了根据本公开实施例的点云数据处理模块404设置点云数据的属性的用户界面。
例如,点云数据处理模块404至少可以基于上述的承载模型生成点云数据。例如,响应于承载模型指示的所述图像元素在所述图像上的分布区域为非闭合曲线,点云数据处理模块404可以利用各种随机函数在曲线上采样多个点作为点云数据。响应于承载模型指示的所述图像元素在所述图像上的分布 区域为闭合区域,点云数据处理模块404可以利用各种随机函数在所述闭合区域中采样多个点作为点云数据。
可选地,结合图5,点云数据处理模块404还可以对承载模型进行进一步的解析,以继承承载模型与点云数据相关联的部分点云数据属性。这些参数包括承载模型对应的一些属性,例如,承载模型是平整的地面还是起伏的山丘,承载模型的边缘是否平滑,承载模型是否包括直线边缘,承载模型对应的面积,承载模型对应的形状等等,本公开并不以此为限。点云数据处理模块404还可以根据用户输入对从承载模型继承而来的部分点云数据属性进行修改,或者增加部分点云数据属性,或者删除某些点云数据属性。本公开并不以此为限。
可选地,点云数据处理模块404还可以对点云数据进行进一步的修改,例如,按照图8D所示的示例用户界面对各项点云数据属性进行调节。虽然图8D仅示出了针对随机程度、随机颜色范围(其以红色颜色成分的值作为示例)、随机图像元素的种类的数量的调节,但是本公开并不以此为限。还可以针对点云数据的其它属性进行调节,例如,对点云数据的属性中的物体数值、随机程度、随机大小范围、随机颜色范围、随机图像元素的种类数量、随机旋转范围、阻塞模型的体积及其大小等等进行调节。其中,阻塞模型指示不绘制图像元素的空间。本公开并不以此为限。可选地,上述调节至少部分地基于从承载模型继承而来的点云数据属性或者至少部分地基于更新后的点云数据属性。本公开并不以此为限。如图8D所示,针对每个点云数据的参数(属性),可以对应地自动生成在点云数据属性范围之内的推荐随机值(例如,图8D所示的50%的随机程度,其指示点云数据的随机分布的方差),也可以由用户手动设置随机颜色范围(例如,图8D所示的红色成分的值被手动的设置为在30到70之间),还可以针对自动推荐的值进行调节(例如,图8D中,自动推荐的图像元素的种类数量可以是6,然后被手动调节为5)。当然本公开并不以此为限。
可选地,结合图8A,点云数据处理模块404还可以基于基础模型对于点云中的各个点的属性进行赋值,从而使得点云数据中的每个点对应的数据指示所述基础模型在所述承载模型上的空间分布信息和/或姿态信息。
参考图8A和图8B,点云数据处理模块404可以基于基础模型向每个点添加虚幻属性(unreal)。其中,在一些示例中,虚幻属性可以暴露能够用于 将虚幻引擎中的部分参数或属性。在另一示例中,虚幻属性可以暴露能够从基础模型中获得的参数和资产。例如,点云数据处理模块404可以将基础模型的可调节资产转换为string类型的多个字符串。点云数据处理模块404可以将这些字符串通过虚幻属性传递至点云数据中的每个点的数据集里。例如,在图8B中,基础模型为栅栏,其的资产包括栅栏的横栏和竖栏。栅栏的横栏和竖栏的可调节参数为栅栏的横栏和竖栏的长度。用户在点击栅栏的横栏的长度之后,点云数据处理模块404将参数“栅栏的横栏的长度”以string的方式传递至参数调节界面(图8B的右侧界面)。
点云数据处理模块404可以为每个点创建可调节属性(variable),其指示所述基础模型可以针对不同点进行细微调节的部分点云数据属性。例如,可调节属性包括:随机值的变化阈值、尺寸的变化阈值、随机程度的阈值、随机颜色范围、随机图像元素的种类数量的范围、随机旋转范围等等。例如,在图8B的右侧参数调节界面中,其指示参数“栅栏的横栏的长度”的调节范围为1至10。针对栅栏这种朝向可旋转的基础模型,可以为点云中的每个点增加法线属性。由此,点云数据处理模块404可以基于该点的法线属性,将所述基础模型进行旋转,例如,以该法线为轴旋转随机角度或按照一定规则设定的角度。
接着,点云数据处理模块404可以基于可调节属性,生成调节随机值(所述调节随机值满足可调节属性设定的范围)。例如,图8B所示的大小调节中的随机值5。点云数据处理模块404将对应的利用调节随机值对基础模型的各个资产进行调节。由此,点云数据处理模块404实现了点云数据中的每个点的随机处理。
可选地,点云数据处理模块404还可以从点云数据中选择一个或多个点作为第一阻塞点。其中,所述第一阻塞点指示距离所述第一阻塞点一定距离内的点均不排布基础模型。然后点云数据处理模块404可以基于第一阻塞点生成阻塞球体,并删除阻塞球体内部的点。例如,图8C所示,点云数据处理模块404可以展示阻塞区域预览界面,其中,曲线围成的区域内部可以是随机排布花草相关的基础模型,沿着曲线可能会布置一些石块。为了避免花草长到石头内部,造成图像的失真,点云数据处理模块404将设置图8C所示的多个阻塞球体,并删除位于阻塞球体内部的点。
可选地,点云数据处理模块404还可以从所述点云数据中选择一个或多个点作为第二阻塞点,其中,所述第二阻塞点指示距离所述第二阻塞点一定距离内的点不排布第一基础模型,以及在所述第二阻塞点处排布第二基础模型,其中,所述第一基础模型对应于第一图像元素,所述第二基础模型对应于第二图像元素。更进一步的,还可以基于第二阻塞点生成第二阻塞球体,并基于第二阻塞球体以外的点,排布第一基础模型。例如,第一基础模型为图像元素花草对应的基础模型,第二基础模型为图像元素人群对应的基础模型,从而在第二阻塞点处排布人群,而在第二阻塞点周围不排布花草,以避免花草的排布和人群的排布之间的冲突。
继续参考图5,在点云数据处理模块404完成对点云数据中的每个点的属性进行赋值之后,图像生成模块405可以生成对应的图像。例如,参考图9,其示出了图像生成模块405生成图像预览的用户界面。在图9中,横栏的长度虽然大致相同,但是不同的栅栏也具有不同的长度,从而实现基础模型对应的随机处理。
图10A是示出根据本公开实施例的生成点云数据的又一示意图。图10B是示出根据本公开实施例的对点云数据的生成进一步设置的用户界面。
其中,以上描述的随机地生成点云数据的过程还包括:基于所述基础模型的类型,确定使用有序随机过程还是无序随机过程来随机地生成所述点云数据。
在真实世界中,随机集群的对象的随机分布规则通常存在以有序的规则确定的规律部分和以无序的规则确定的无序部分。根据规律部分和无序部分的占比的不同,可以如图10A所示的,将生成点云数据的过程分为有序随机过程和无序随机过程。也可以如图10B所示的,在用户界面上设置是以有序随机过程生成点云数据还是以无序随机过程生成点云数据。
例如,有序随机是指规律部分的占比多于无序部分。例如,针对排列路灯这样的对象,其规律部分是各个路灯需要安装几乎相同的间距排列。然而,无序部分则是每个路灯的间距可能会有轻微的随机颜色和亮度都会有轻微的随机差异。从而针对路灯这样的基础模型,需要考虑以有序随机的方案生成点云数据。例如,结合参考图5至图9描述的示例,可以针对从承载模型继承而来的部分点云数据属性(例如,承载模型对应的带状区域的面积和曲线形状), 生成有序随机对应的规则,比如相邻路灯之间需要沿曲线相同距离。或者,石块需要在承载区域中按阵列排布等等。
又例如,无序随机是指规律部分的占比少于无序部分。例如针对排列山林里的树木这样的对象,其无序部分是各个树木的大小和位置都是无序的。但是,有序部分则是树木越往森林边缘树木越小、密度越大、树木之间的间距越小。例如,结合参考图5至图9描述的示例,可以将基础模型的部分属性(长宽高,体积)等作为虚幻属性传给点云数据处理模块404并对应地利用可调节属性生成这些属性的变化阈值,然后在该对应的变化阈值内利用各种随机函数(例如高斯随机函数、基于乘同余法的随机函数和基于混合同余法的随机函数、基于中心极限定理的随机函数、Box Muller随机函数等)生成无序随机变化的各个属性。
在本公开的一些实施例中,在制作资产的过程中,如果是需要以有序随机过程处理的基础模型,那么可以仅考虑规律部分的规则;如果是需要以无序随机过程处理的基础模型,那么可以仅考虑无序部分的规则。当然本公开并不以此为限。
接下来,参考图11至图18来进一步描述针对有序随机和无序随机的不同的处理过程。其中,响应于确定使用有序随机过程来随机地生成所述点云数据(例如,在图10B中的单选按钮中选择有序随机选项),对所述承载模型对应的分布区域进行均匀采样,以生成所述点云数据;响应于确定使用无序随机过程来随机地生成所述点云数据(例如,在图10B中的单选按钮中选择无序随机选项),对所述承载模型对应的分布区域进行随机采样,以生成所述点云数据。
参考图11至图15,来描述在有序随机过程可能涉及的部分操作。如上所述,有序随机过程主要针对如篱笆、围栏、围墙、路灯等基础模型。这些基础模型之间具有轻微随机的差异并且基础模型的排布间距规律。以下以栅栏为基础模型为例来进一步描述有序随机过程,并假设承载模型所指示的分布区域为如图11所示的曲线。可选地,根据图10B的用户界面上是选择平滑选项还是选择不平滑选项,承载模型获取模块402将确定是否针对承载模型对应的曲线进行平滑处理。作为一个示例,针对地砖或建筑物等边缘较尖锐的基础模型,图10B的用户界面将推荐(或默认)选择不平滑选项,以提示用户不 对曲线进行平滑处理往往得到更好的排布效果(如后续详细描述的图17所示)。当然,针对地砖或建筑物等边缘较尖锐的基础模型也可以使用平滑选项,从而得到后续详细描述的图18的排布效果。而针对树林、花草等边缘较模糊的基础模型,图10B的用户界面将推荐(或默认)选择平滑选项,以提示用户对曲线进行平滑处理往往得到更好的排布效果。
作为一个示例,假设在图10B上选择平滑选项对应的单选按钮,承载模型获取模块402将针对承载模型对应的曲线进行平滑处理,从而得到如图11所示的平滑后的曲线。
点云数据处理模块404将针对承载模型对应的曲线进行采样,以生成多个间距相同的点。平滑前的曲线的采样结果和平滑后的曲线的采样结果对比如图12所示。接着,可以给每个点创建法线的属性,并对应地示出法线的朝向和坐标轴的关系。如图13所示,为便于示出,这里的基础模型是玩具鸭子,其x轴的正方向作为玩具鸭子的朝向,y轴作为法线朝上,可以看到每个玩具鸭子的朝向均以法线为轴具有不同的旋转方向。
继续排布栅栏的示例,可选地,如图14所示,点云数据处理模块404将每个点的朝向沿着曲线的切线方向排列,以使得每个栅栏均朝向下一个栅栏,从而保障每一个栅栏能够严谨的对接上相邻的栅栏。例如,点云数据处理模块404可以确定曲线在图像坐标系下的每个点处的切线方向,并将所述切线方向设定为所述点的切线属性。由此,针对在该点处的栅栏的横栏能够准确地连接到下一个栅栏的竖栏。可选地,如图15所示,点云数据处理模块404还可以对开始点和结束点进行单独地处理,以使得收尾的栅栏均不再具有悬空的横栏。
接着,点云数据处理模块404还可以结合图5至图9中描述的各种操作,对该栅栏进行调整。例如,点云数据处理模块404对各个栅栏的部分属性进行随机的调节,例如,调整栅栏的光照参数等等。点云数据处理模块404还可以设置一些阻塞点,以避免在图像中的不恰当的位置绘制了栅栏。
参考图16至图18,来描述在无序随机过程可能涉及的部分操作。如上所述,无序随机过程通常适用于对森林、植被、岩石、静态人群等完全随机排布的基础模型的处理。这些基础模型通常在闭合区域中随机的排布。
承载模型所指示的分布区域为如图16所示的闭合区域。可选地,承载模 型获取模块402将针对承载模型对应的曲线进行平滑处理,从而得到平滑后的区域。响应于承载模型指示的所述图像元素在所述图像上的分布区域为闭合区域,点云数据处理模块404可以利用各种随机函数在所述闭合区域中采样多个点作为点云数据。如图16所示,平滑的曲线对应的闭合区域和不平滑的曲线对应的闭合区域的采样结果不同,从而对应于不同的点云数据。
类似的,点云数据处理模块404可以给每个点创建法线的属性,并对应地示出法线的朝向和坐标轴的关系。如图17所示,为便于示出,这里的基础模型是长方体,其x轴向量和y轴向量的向量和指向的方向作为长方体的朝向,z轴作为法线朝上,可以看到每个长方体的朝向均以法线为轴具有随机的旋转方向。接着,点云数据处理模块404还可以结合图5至图9中描述的各种操作,对该长方体进行调整。例如,点云数据处理模块404对各个长方体的部分属性进行随机的调节,例如,调整长方体的颜色等等。点云数据处理模块404还可以设置一些阻塞点,以避免在图像中的不恰当的位置绘制了长方体,以得到图18所示的随机排布的长方体。
此外,上述的基础模型获取模块401、承载模型获取模块402、点云数据处理模块404还可以打包成一个插件工具,将基础模型和承载模型中的可调节参数进行归类,然后直接导入虚幻引擎进行使用。例如,可以将调整好的的基础模型和承载模型作为一个整体烘焙成一个属性,直接在虚幻引擎中调整。例如,假设基础模型是花草,承载模型为地面,那么基础模型和承载模型可以被整体烘焙成植被属性,直接供虚幻引擎使用。
由此,针对需要便捷地处理重复随机出现的图像元素的需求,本公开的各个实施例生成图像元素对应的基础模型和承载模型,并利用点云实现在承载模型上基础模型的随机排布,从而解决在处理需要重复随机出现的图像元素时,制作成本和修改成本高、图像呈现效果不自然、图像生成过程效率低等技术问题。
此外根据本公开的又一方面,还提供了一种电子设备,用于实施根据本公开实施例的方法。图19示出了根据本公开实施例的电子设备2000的示意图。
如图19所示,所述电子设备2000可以包括一个或多个处理器2010,和一个或多个存储器2020。其中,所述存储器2020中存储有计算机可读代码,所述计算机可读代码当由所述一个或多个处理器2010运行时,可以执行如上 所述的搜索请求处理方法。
本公开实施例中的处理器可以是一种集成电路芯片,具有信号的处理能力。上述处理器可以是通用处理器、数字信号处理器(DSP)、专用集成电路(ASIC)、现成可编程门阵列(FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。可以实现或者执行本公开实施例中的公开的各方法、操作及逻辑框图。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等,可以是X86架构或ARM架构的。
一般而言,本公开的各种示例实施例可以在硬件或专用电路、软件、固件、逻辑,或其任何组合中实施。某些方面可以在硬件中实施,而其他方面可以在可以由控制器、微处理器或其他计算设备执行的固件或软件中实施。当本公开的实施例的各方面被图示或描述为框图、流程图或使用某些其他图形表示时,将理解此处描述的方框、装置、系统、技术或方法可以作为非限制性的示例在硬件、软件、固件、专用电路或逻辑、通用硬件或控制器或其他计算设备,或其某些组合中实施。
例如,根据本公开实施例的方法或装置也可以借助于图20所示的计算设备3000的架构来实现。如图20所示,计算设备3000可以包括总线3010、一个或多个CPU 3020、只读存储器(ROM)3030、随机存取存储器(RAM)3040、连接到网络的通信端口3050、输入/输出组件3060、硬盘3070等。计算设备3000中的存储设备,例如ROM 3030或硬盘3070可以存储本公开提供的方法的处理和/或通信使用的各种数据或文件以及CPU所执行的程序指令。计算设备3000还可以包括用户界面3080。当然,图20所示的架构只是示例性的,在实现不同的设备时,根据实际需要,可以省略图20示出的计算设备中的一个或多个组件。
根据本公开的又一方面,还提供了一种计算机可读存储介质。图21示出了根据本公开的存储介质4000的示意图。
如图21所示,所述计算机存储介质4020上存储有计算机可读指令4010。当所述计算机可读指令4010由处理器运行时,可以执行参照以上附图描述的根据本公开实施例的方法。本公开实施例中的计算机可读存储介质可以是易失性存储器或非易失性存储器,或可包括易失性和非易失性存储器两者。非易失性存储器可以是只读存储器(ROM)、可编程只读存储器(PROM)、可擦 除可编程只读存储器(EPROM)、电可擦除可编程只读存储器(EEPROM)或闪存。易失性存储器可以是随机存取存储器(RAM),其用作外部高速缓存。通过示例性但不是限制性说明,许多形式的RAM可用,例如静态随机存取存储器(SRAM)、动态随机存取存储器(DRAM)、同步动态随机存取存储器(SDRAM)、双倍数据速率同步动态随机存取存储器(DDRSDRAM)、增强型同步动态随机存取存储器(ESDRAM)、同步连接动态随机存取存储器(SLDRAM)和直接内存总线随机存取存储器(DR RAM)。应注意,本文描述的方法的存储器旨在包括但不限于这些和任意其它适合类型的存储器。应注意,本文描述的方法的存储器旨在包括但不限于这些和任意其它适合类型的存储器。
本公开的实施例还提供了一种计算机程序产品或计算机程序,该计算机程序产品或计算机程序包括计算机指令,该计算机指令存储在计算机可读存储介质中。计算机设备的处理器从计算机可读存储介质读取该计算机指令,处理器执行该计算机指令,使得该计算机设备执行根据本公开实施例的方法。
需要说明的是,附图中的流程图和框图,图示了按照本公开各种实施例的系统、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上,流程图或框图中的每个方框可以代表一个模块、程序段、或代码的一部分,所述模块、程序段、或代码的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。也应当注意,在有些作为替换的实现中,方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如,两个接连地表示的方框实际上可以基本并行地执行,它们有时也可以按相反的顺序执行,这依所涉及的功能而定。也要注意的是,框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合,可以用执行规定的功能或操作的专用的基于硬件的系统来实现,或者可以用专用硬件与计算机指令的组合来实现。
一般而言,本公开的各种示例实施例可以在硬件或专用电路、软件、固件、逻辑,或其任何组合中实施。某些方面可以在硬件中实施,而其他方面可以在可以由控制器、微处理器或其他计算设备执行的固件或软件中实施。当本公开的实施例的各方面被图示或描述为框图、流程图或使用某些其他图形表示时,将理解此处描述的方框、装置、系统、技术或方法可以作为非限制性的示例在硬件、软件、固件、专用电路或逻辑、通用硬件或控制器或其他计算设备,或 其某些组合中实施。
在上面详细描述的本公开的示例实施例仅仅是说明性的,而不是限制性的。本领域技术人员应该理解,在不脱离本公开的原理和精神的情况下,可对这些实施例或其特征进行各种修改和组合,这样的修改应落入本公开的范围内。

Claims (20)

  1. 一种图像处理方法,包括:
    获取图像元素对应的基础模型,所述基础模型指示所述图像元素对应的模型数据,
    获取所述图像元素对应的承载模型,所述承载模型指示所述图像元素在所述图像上的分布区域;
    至少部分地基于所述承载模型,随机地生成点云数据,所述点云数据中的每个点对应的数据指示所述基础模型在所述承载模型上的空间分布信息和/或姿态信息;以及
    至少部分地基于所述点云数据、所述基础模型和所述承载模型,生成随机排布有多个所述图像元素的图像。
  2. 如权利要求1所述的方法,其中,所述至少部分地基于所述承载模型,随机地生成点云数据包括:
    响应于承载模型指示的所述图像元素在所述图像上的分布区域为闭合区域,在所述闭合区域中采样多个点作为所述点云数据;
    响应于承载模型指示的所述图像元素在所述图像上的分布区域为非闭合曲线,在所述非闭合曲线上采样多个点作为所述点云数据。
  3. 如权利要求1所述的方法,其中,所述点云数据为所述图像对应的图像坐标系下的多个点的数据集,所述点云数据中的每个点与在所述承载模型上随机排布的基础模型存在一一对应的关系。
  4. 如权利要求3所述的方法,其中,所述随机地生成点云数据包括:
    基于所述基础模型对于点云中的各个点的属性进行赋值,以使得点云数据中的每个点对应的数据指示所述基础模型在所述承载模型上的空间分布信息和/或姿态信息。
  5. 如权利要求4所述的方法,其中,所述点云数据中的每个点的空间分 布信息为所述点对应的基础模型在所述图像坐标系下的各个坐标轴的值,所述点云数据中的每个点的姿态信息为所述点对应的基础模型对应的法线属性和/或朝向属性。
  6. 如权利要求4所述的方法,其中,所述随机地生成点云数据包括:
    基于所述基础模型的类型,确定使用有序随机过程还是无序随机过程来随机地生成所述点云数据,
    其中,有序随机过程指示以有序的规则确定的规律部分的占比高于以无序的规则确定的无序部分,有序随机过程指示所述规律部分的占比低于所述无序随机。
  7. 如权利要求6所述的方法,其中,所述随机地生成点云数据包括:
    响应于确定使用有序随机过程来随机地生成所述点云数据,对所述承载模型对应的分布区域进行均匀采样,以生成所述点云数据;
    响应于确定使用无序随机过程来随机地生成所述点云数据,对所述承载模型对应的分布区域进行随机采样,以生成所述点云数据。
  8. 如权利要求1所述的方法,其中,所述点云数据具有以下点云数据属性中的一项或多项:随机值、随机程度、随机大小范围、随机颜色范围、随机图像元素的种类数量、随机旋转范围、以及在所述承载模型上不绘制所述基础模型的空间参数。
  9. 如权利要求8所述的方法,其中,所述随机地生成点云数据包括:
    从所述点云数据中选择一个或多个点作为第一阻塞点,其中,所述第一阻塞点指示距离所述阻塞点一定距离内的点不排布基础模型;以及
    基于第一阻塞点生成阻塞球体,并删除所述点云数据位于所述阻塞球体内部的点。
  10. 如权利要求8所述的方法,其中,所述随机地生成点云数据包括:
    从所述点云数据中选择一个或多个点作为第二阻塞点,其中,所述第二阻 塞点指示距离所述第二阻塞点一定距离内的点不排布第一基础模型,并在所述第二阻塞点处排布第二基础模型,
    其中,所述第一基础模型对应于第一图像元素,所述第二基础模型对应于第二图像元素。
  11. 如权利要求1所述的方法,其中,所述基础模型所对应的模型数据为与所述图像元素相关联的三维虚拟对象对应的数据,所述承载模型指示在虚拟场景中所述三维虚拟对象对应的可绘制区域。
  12. 如权利要求11所述的方法,其中,所述承载模型上的可绘制区域上排布有多种基础模型。
  13. 如权利要求1所述的方法,还包括:将所生成的图像作为资产,生成用于另一图像的基础模型。
  14. 如权利要求1所述的方法,还包括:基于所述点云数据、所述基础模型和所述承载模型,生成用于另一图像的基础模型。
  15. 如权利要求1所述的方法,其中,所述获取图像元素对应的基础模型还包括:从多种图像元素对应的基础模型中,选择至少一种图像元素对应的基础模型。
  16. 一种图像处理插件,所述图像处理插件被配置为执行以下操作:
    获取图像元素对应的基础模型,所述基础模型指示所述图像元素对应的模型数据,
    获取所述图像元素对应的承载模型,所述承载模型指示所述图像元素在所述图像上的分布区域;
    至少部分地基于所述承载模型,随机地生成点云数据,所述点云数据中的每个点对应的数据指示所述基础模型在所述承载模型上的空间分布信息和/或姿态信息;以及
    至少部分地基于所述点云数据、所述基础模型和所述承载模型,生成随机排布有多个所述图像元素的图像。
  17. 一种图像处理装置,包括:
    基础模型获取模块,被配置为获取图像元素对应的基础模型,所述基础模型指示所述图像元素对应的模型数据;
    承载模型获取模块,被配置为获取所述图像元素对应的承载模型,所述承载模型指示所述图像元素在所述图像上的分布区域;
    点云生成模块,被配置为至少部分地基于所述承载模型,随机地生成点云数据,所述点云数据中的每个点对应的数据指示所述基础模型在所述承载模型上的空间分布信息和/或姿态信息;以及
    图像生成模块,被配置为至少部分地基于所述点云数据、所述基础模型和所述承载模型,生成随机排布有多个所述图像元素的图像。
  18. 一种电子设备,包括:处理器;存储器,存储器存储有计算机指令,该计算机指令被处理器执行时实现如权利要求1-15中任一项所述的方法。
  19. 一种计算机可读存储介质,其上存储有计算机指令,所述计算机指令被处理器执行时实现如权利要求1-15中任一项所述的方法。
  20. 一种计算机程序产品,其包括计算机可读指令,所述计算机可读指令在被处理器执行时,使得所述处理器执行如权利要求1-15中任一项所述的方法。
PCT/CN2022/084512 2022-03-31 2022-03-31 一种图像处理方法 WO2023184381A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202280000665.5A CN117178296A (zh) 2022-03-31 2022-03-31 一种图像处理方法
PCT/CN2022/084512 WO2023184381A1 (zh) 2022-03-31 2022-03-31 一种图像处理方法

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/084512 WO2023184381A1 (zh) 2022-03-31 2022-03-31 一种图像处理方法

Publications (1)

Publication Number Publication Date
WO2023184381A1 true WO2023184381A1 (zh) 2023-10-05

Family

ID=88198703

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/084512 WO2023184381A1 (zh) 2022-03-31 2022-03-31 一种图像处理方法

Country Status (2)

Country Link
CN (1) CN117178296A (zh)
WO (1) WO2023184381A1 (zh)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109448089A (zh) * 2018-10-22 2019-03-08 美宅科技(北京)有限公司 一种渲染方法及装置
CN112950789A (zh) * 2021-02-03 2021-06-11 天津市爱美丽科技有限公司 一种虚拟增强现实展示物体的方法、装置和存储介质
CN114116109A (zh) * 2021-11-30 2022-03-01 广东利元亨智能装备股份有限公司 一种设备布局的处理方法、系统、装置及存储介质
JP2022047989A (ja) * 2020-09-14 2022-03-25 株式会社日立産機システム シミュレーションシステム及びシミュレーション方法

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109448089A (zh) * 2018-10-22 2019-03-08 美宅科技(北京)有限公司 一种渲染方法及装置
JP2022047989A (ja) * 2020-09-14 2022-03-25 株式会社日立産機システム シミュレーションシステム及びシミュレーション方法
CN112950789A (zh) * 2021-02-03 2021-06-11 天津市爱美丽科技有限公司 一种虚拟增强现实展示物体的方法、装置和存储介质
CN114116109A (zh) * 2021-11-30 2022-03-01 广东利元亨智能装备股份有限公司 一种设备布局的处理方法、系统、装置及存储介质

Also Published As

Publication number Publication date
CN117178296A (zh) 2023-12-05

Similar Documents

Publication Publication Date Title
US10325399B2 (en) Optimal texture memory allocation
Schütz Potree: Rendering large point clouds in web browsers
US9852544B2 (en) Methods and systems for providing a preloader animation for image viewers
CN108090947B (zh) 一种面向3d场景的光线追踪优化方法
WO2019239211A2 (en) System and method for generating simulated scenes from open map data for machine learning
US9684997B2 (en) Efficient rendering of volumetric elements
US9208610B2 (en) Alternate scene representations for optimizing rendering of computer graphics
WO2012083508A1 (zh) 互联网上复杂场景真实感快速绘制方法
US20150178976A1 (en) View Dependent Level-of-Detail for Tree-Based Replicated Geometry
CN109979002A (zh) 基于WebGL三维可视化的场景构建系统及方法
Novák et al. Rasterized bounding volume hierarchies
Bulbul et al. Social media based 3D visual popularity
CN114202622A (zh) 虚拟建筑生成方法、装置、设备及计算机可读存储介质
US9704290B2 (en) Deep image identifiers
Boudon et al. Survey on computer representations of trees for realistic and efficient rendering
WO2023184381A1 (zh) 一种图像处理方法
US8847949B1 (en) Streaming replicated geographic data for display in a three-dimensional environment
Favorskaya et al. Tree modelling in virtual reality environment
Zellmann et al. High-Quality Rendering of Glyphs Using Hardware-Accelerated Ray Tracing.
US8749550B1 (en) Display of replicated geographic data using a hierarchical data structure
WO2024027237A1 (zh) 渲染的优化方法、电子设备和计算机可读存储介质
US11954802B2 (en) Method and system for generating polygon meshes approximating surfaces using iteration for mesh vertex positions
WO2024037116A1 (zh) 三维模型的渲染方法、装置、电子设备及存储介质
Shihan et al. Adaptive volumetric light and atmospheric scattering
US20230394767A1 (en) Method and system for generating polygon meshes approximating surfaces using root-finding and iteration for mesh vertex positions

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22934221

Country of ref document: EP

Kind code of ref document: A1