CN115311395A - Three-dimensional scene rendering method, device and equipment

Info

Publication number: CN115311395A
Application number: CN202110426405.1A
Authority: CN (China)
Legal status: Pending
Prior art keywords: rendering, three-dimensional scene, operation instruction, styles
Other languages: Chinese (zh)
Inventors: 丁治宇, 徐莹
Current assignee: Huawei Cloud Computing Technologies Co Ltd
Original assignee: Huawei Cloud Computing Technologies Co Ltd
Application filed by Huawei Cloud Computing Technologies Co Ltd

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/005: General purpose rendering architectures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, for image manipulation, e.g. dragging, rotation, expansion or change of colour

Abstract

The embodiments of this application disclose a three-dimensional scene rendering method, device, and equipment for improving the flexibility of three-dimensional scene rendering. In the method of the embodiments, a server cluster obtains an operation instruction, transmitted by a user through a terminal device, that instructs the server cluster to render a three-dimensional scene. The operation instruction includes at least two rendering styles, and the server cluster renders different areas of the three-dimensional scene to be rendered in the corresponding rendering styles.

Description

Three-dimensional scene rendering method, device and equipment
Technical Field
The embodiments of this application relate to the field of graphics rendering, and in particular to a three-dimensional scene rendering method, device, and equipment.
Background
With the rapid development of graphics hardware and people's rising aesthetic expectations in recent years, non-photorealistic rendering, the graphics branch complementary to photorealistic rendering, has gained wide attention in three-dimensional model rendering. Non-photorealistic rendering is a graphics technique that generates images in various artistic styles. Its purpose is not to express the photorealism of a scene, but to convey data and business information in a more individual way through a specific artistic style, emphasizing the stylized presentation and visual communication of specific information in the scene. It therefore has broad application prospects in film and television production, advertising, scenic-spot roaming, game entertainment, digital twin cities, digital factories, and other fields.
In the prior art, non-photorealistic rendering performs stylized rendering and visualization in a single specific style over a given spatial range or business scene.
However, such a single stylized expression of a three-dimensional scene offers limited rendering flexibility.
Disclosure of Invention
The embodiments of this application provide a three-dimensional scene rendering method, device, and equipment for improving the flexibility of three-dimensional scene rendering.
A first aspect of the embodiments of this application provides a three-dimensional scene rendering method, including: obtaining an operation instruction from a user for rendering a three-dimensional scene, the operation instruction including at least two rendering styles; and, according to the operation instruction, rendering different areas of the three-dimensional scene in different rendering styles to obtain a rendered three-dimensional scene that contains the at least two rendering styles.
In the first aspect, the method may be executed by a server cluster, which obtains an operation instruction, transmitted by a user through a terminal device, instructing it to render a three-dimensional scene. Because the operation instruction includes at least two rendering styles and the server cluster renders different areas of the three-dimensional scene to be rendered in the corresponding styles, the rendered scene realizes at least two rendering styles, which improves the flexibility of three-dimensional scene rendering.
In a possible implementation, the operation instruction further includes at least two operation ranges, each corresponding to one rendering style, and rendering different areas of the three-dimensional scene in different rendering styles according to the operation instruction specifically includes: rendering the areas within the different operation ranges of the three-dimensional scene in the corresponding rendering styles according to the operation instruction.
In this possible implementation, the operation instruction transmitted by the user through the terminal device may further include at least two operation ranges, in one-to-one correspondence with the rendering styles. Different operation ranges correspond to the different areas; that is, the server cluster renders the area indicated by each operation range in the corresponding rendering style.
In a possible implementation, the operation instruction further includes at least two target types, each corresponding to one rendering style, and rendering different areas of the three-dimensional scene in different rendering styles according to the operation instruction specifically includes: determining, according to the at least two target types, the rendering area corresponding to each target type in the three-dimensional scene; and rendering the rendering area corresponding to each target type in the corresponding rendering style.
In this possible implementation, the operation instruction may further include at least two target types, where a target type indicates a type of three-dimensional element in the three-dimensional scene and each three-dimensional element has a corresponding rendering area. The server cluster may determine the corresponding rendering areas in the scene according to the target types, and then render each area in the rendering style corresponding to its target type.
In a possible implementation, obtaining the operation instruction from the user for rendering the three-dimensional scene specifically includes: obtaining an operation instruction for rendering the three-dimensional scene generated after the user interacts with the terminal device through a mouse, a keyboard, voice, or gestures.
In this possible implementation, the user can transmit the operation instruction to the server cluster in various ways, which makes operation convenient and improves user experience.
In a possible implementation, after the different areas of the three-dimensional scene are rendered in different rendering styles according to the operation instruction, the method further includes: obtaining an adjustment parameter; and adjusting the rendered three-dimensional scene according to the adjustment parameter, where the adjustment parameter includes brightness information, color information, or texture information.
In this possible implementation, after the server cluster obtains the rendered three-dimensional scene, if the user needs to adjust it, the server cluster may further obtain an adjustment parameter transmitted by the user through the terminal device and adjust the rendered scene according to the brightness, color, texture, or other information in that parameter, so as to obtain a better rendering effect.
In a possible implementation, each rendering style includes at least two priority parameters, obtained from settings made by the user, each corresponding to a part of the three-dimensional data of the three-dimensional scene. After the different areas of the three-dimensional scene are rendered in different rendering styles according to the operation instruction, the method further includes: adjusting, according to the at least two priority parameters, the rendering degree of the three-dimensional data corresponding to some or all of the priority parameters.
In this possible implementation, after the server cluster obtains the rendered three-dimensional scene, it may adjust that scene according to the at least two priority parameters included in each rendering style, where each priority parameter corresponds to a part of the three-dimensional data and is set by the user before rendering. The server cluster adjusts the rendering degree of the three-dimensional data corresponding to some or all of the priority parameters, the user having decided which. This reduces the number of subsequent adjustment commands the user must issue for the rendered scene and improves user experience.
In a possible implementation, before the operation instruction from the user for rendering the three-dimensional scene is obtained, the method further includes: obtaining three-dimensional data, where the three-dimensional data is a collection of independent three-dimensional elements that can be displayed on a graphical user interface (GUI); and constructing the three-dimensional scene from the three-dimensional data.
In this possible implementation, before the user sends the operation instruction through the terminal device, the server cluster also needs a constructed three-dimensional scene. The scene may have been built in advance or may be built on demand, that is, a set of three-dimensional elements is obtained and a scene is constructed from them, which improves the feasibility of the solution.
A second aspect of the embodiments of this application provides an apparatus for three-dimensional scene rendering, including: an obtaining unit, configured to obtain an operation instruction from a user for rendering a three-dimensional scene, the operation instruction including at least two rendering styles; and a rendering unit, configured to render different areas of the three-dimensional scene in different rendering styles according to the operation instruction to obtain a rendered three-dimensional scene that contains the at least two rendering styles.
The apparatus for rendering a three-dimensional scene is configured to perform the method of the first aspect or any one of the embodiments of the first aspect.
A third aspect of this application provides a computer device, including: a processor configured to execute instructions stored in a memory to cause the computer device to perform the method provided by the first aspect or any alternative of the first aspect, and a communication interface configured to receive or send indications. For details of the computer device provided by the third aspect, reference may be made to the first aspect or any of its alternatives; they are not repeated here.
A fourth aspect of the present application provides a computer-readable storage medium having a program stored therein, where the program, when executed by a computer, performs the method of the first aspect or any of the alternatives of the first aspect.
A fifth aspect of the present application provides a computer program product for performing the method of the first aspect or any of the alternatives of the first aspect when the computer program product is executed on a computer.
Drawings
FIG. 1 is a system architecture diagram for graphics rendering provided by an embodiment of the present application;
fig. 2 is a schematic diagram of an embodiment of a three-dimensional scene rendering method according to an embodiment of the present application;
fig. 3 is a schematic diagram of a rendering result provided in the embodiment of the present application;
fig. 4 is a schematic structural diagram of an apparatus for rendering a three-dimensional scene according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a computer device provided in an embodiment of the present application;
fig. 6 is a schematic structural diagram of a rendering system according to an embodiment of the present application.
Detailed Description
The embodiment of the application provides a three-dimensional scene rendering method, a three-dimensional scene rendering device and three-dimensional scene rendering equipment, which are used for improving the three-dimensional scene rendering flexibility.
Embodiments of the present application are described below with reference to the accompanying drawings. The described embodiments are merely some, not all, of the embodiments of this application. As those skilled in the art will appreciate, with the development of technology and the emergence of new scenarios, the technical solutions provided in the embodiments of this application are equally applicable to similar technical problems.
The terms "first," "second," and the like in the description and in the claims of the present application and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present application. It will be understood by those skilled in the art that the present application may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present application.
Further, the concepts involved in the embodiments of the present application are briefly described below.
Photorealistic rendering is an important component of computer graphics; its basic requirement is to generate, in a computer, graphic images of three-dimensional scenes that look true to life. Realistic graphics are widely used and play an important role in computer-aided design, multimedia teaching, virtual reality systems, scientific visualization, animation production, film special effects, computer games, and many other areas, and people's increasingly strict demands on the visual quality of computer imagery call for ever more realistic image-generation algorithms. To obtain a realistic image of an object in a scene, the object is first put through perspective projection, hidden surfaces are removed, and the illumination and shading effects of the visible surfaces are then computed to produce a realistic display of the scene. Removing hidden surfaces alone, however, is far from sufficient: how illumination and brightness behave on object surfaces, and how different colors and gray levels are used, are likewise main sources of realism in a scene image.
Non-photorealistic rendering (NPR) is also known as stylized rendering. It does not aim to reproduce the objective world as faithfully as a photograph; it focuses instead on personalized and artistic graphical expression, emphasizing the differentiated expression and transfer of data and information. Non-photorealistic rendering requires active choices about what to render and how: the content creator presents with emphasis what he considers important, in a manner he considers appropriate, while the visual realism of the authored content is not the focus. Non-photorealistic rendering is often implemented by an application that takes an image or a three-dimensional entity as input and outputs an image in a particular artistic style.
Style transfer (Neural Style Transfer): at the image level, the development of art has produced many different artistic styles, such as the ink wash style, cartoon style, impressionist style, and oil painting style. Style transfer refers to changing the visual style of one image into another style.
Rasterization displays a three-dimensional object on a two-dimensional screen; it is fast and produces good results. With rasterization techniques, 3D models of objects are built from virtual meshes of triangles or polygons. In such a mesh, the vertices of each triangle meet the vertices of other triangles of different sizes and shapes. Each vertex carries a large amount of information, including its position in space as well as color, texture, and its normal, which is used to determine the orientation of the object's surface. The computer then converts the triangles of the 3D model into pixels on the 2D screen, and each pixel is assigned an initial color value from the data stored at the triangle's vertices. Further pixel processing, or shading, changes a pixel's color according to how light in the scene strikes it and applies one or more textures to it, producing the final color for that pixel. Rasterization is extremely computationally intensive: the object models in a single scene can together use millions of polygons, a 4K display has roughly 8.3 million pixels, and each frame shown on the screen is typically refreshed 30 to 90 times per second.
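Purely as an illustration and not part of the patent, the pipeline just described (triangle setup, per-pixel coverage testing, and an initial color interpolated from vertex data) can be sketched in a few dozen lines of C++; the struct names and the edge-function coverage test are assumptions of this sketch:

```cpp
#include <algorithm>
#include <array>
#include <vector>

// Minimal sketch of rasterizing one triangle into a framebuffer.
// Vertex positions are assumed already projected to 2D screen space.
struct Vertex { float x, y; float r, g, b; };
struct Pixel  { float r = 0, g = 0, b = 0; };

// Signed area of the parallelogram (a->b, a->p): the classic edge function.
static float edge(const Vertex& a, const Vertex& b, float px, float py) {
    return (b.x - a.x) * (py - a.y) - (b.y - a.y) * (px - a.x);
}

void rasterize(const std::array<Vertex, 3>& t,
               std::vector<Pixel>& fb, int width, int height) {
    // Bounding box of the triangle, clipped to the screen.
    int x0 = std::max(0, (int)std::min({t[0].x, t[1].x, t[2].x}));
    int y0 = std::max(0, (int)std::min({t[0].y, t[1].y, t[2].y}));
    int x1 = std::min(width  - 1, (int)std::max({t[0].x, t[1].x, t[2].x}));
    int y1 = std::min(height - 1, (int)std::max({t[0].y, t[1].y, t[2].y}));

    float area = edge(t[0], t[1], t[2].x, t[2].y);
    if (area == 0) return;                          // degenerate triangle

    for (int y = y0; y <= y1; ++y)
        for (int x = x0; x <= x1; ++x) {
            float px = x + 0.5f, py = y + 0.5f;     // sample at pixel center
            float w0 = edge(t[1], t[2], px, py) / area;
            float w1 = edge(t[2], t[0], px, py) / area;
            float w2 = edge(t[0], t[1], px, py) / area;
            if (w0 < 0 || w1 < 0 || w2 < 0) continue; // outside the triangle
            // Initial color interpolated from the data stored at the vertices.
            Pixel& p = fb[y * width + x];
            p.r = w0 * t[0].r + w1 * t[1].r + w2 * t[2].r;
            p.g = w0 * t[0].g + w1 * t[1].g + w2 * t[2].g;
            p.b = w0 * t[0].b + w1 * t[1].b + w2 * t[2].b;
        }
}
```

In a real renderer this inner loop is what the shading stage described above then refines with lighting and textures; GPUs perform the same coverage test massively in parallel.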
The terminal devices referred to in the embodiments of this application may include various handheld devices, vehicle-mounted devices, wearable devices, computing devices, or other processing devices connected to a wireless modem and having wireless communication capability. A terminal may be a mobile station (MS), a subscriber unit, a cellular phone, a smart phone, a wireless data card, a personal digital assistant (PDA) computer, a tablet computer, a wireless modem, a handset, a laptop computer, a machine type communication (MTC) terminal, or the like.
Graphics rendering is the process of converting the three-dimensional transfer of light energy into a two-dimensional image. Scenes and entities are represented in three-dimensional form, closer to the real world and easy to manipulate and transform, whereas graphic display devices are mostly two-dimensional raster displays and dot-matrix printers. Converting a representation of the three-dimensional scene into a rasterized, lattice representation is graphics rendering, that is, rasterization. A raster display can be seen as a matrix of pixels, and any graphic shown on it is in fact a collection of pixels with one or more colors and gray levels. Referring to fig. 1, a system architecture for graphics rendering is shown; the system includes a terminal device 1 and a server cluster 2.
The terminal device 1 is the device the user uses to view a rendered three-dimensional scene (i.e., a three-dimensional image). The user accesses the server cluster 2 by logging in through a browser or an application, sends a rendering command instructing the server cluster 2 to render the three-dimensional scene, and views the rendered scene through that browser or application.
The server cluster 2 may be one or more cloud servers, or one or more server clusters in any data center. Before graphics rendering, it obtains three-dimensional geometric model information from pre-stored data of the three-dimensional scene through three-dimensional scanning, interactive three-dimensional geometric modeling, and a three-dimensional model library; obtains three-dimensional animation definition information through motion design, motion capture, motion computation, and dynamic deformation; and obtains material information from scanned photographs, computer-generated images, or hand-drawn pictures. Using the obtained material and lighting information, and according to the rendering command from the terminal device 1, it processes the three-dimensional scene to be rendered into a rendered three-dimensional scene through geometric transformation, projection transformation, perspective transformation, and window clipping, and sends the rendered scene to the terminal device 1.
Optionally, the scheme of the present application may also be executed by a rendering system, and the rendering system may execute a rendering process of a three-dimensional scene and display the rendered three-dimensional scene.
At present, a given spatial range or three-dimensional scene is generally rendered and visualized as a whole in a single rendering style, and switching among rendering styles is global, so rendering flexibility is low.
To solve the above problem, based on the above system architecture, the three-dimensional scene rendering method in the embodiment of the present application is described with reference to fig. 2.
Referring to fig. 2, an embodiment of a three-dimensional scene rendering method according to the present application includes:
201. and the terminal equipment sends an operation instruction for rendering the three-dimensional scene to be rendered to the server cluster.
In this embodiment, the three-dimensional scene to be rendered is formed from three-dimensional data imported in advance: the server cluster builds a three-dimensional model from the data imported by the user, and the complete three-dimensional model is the scene to be rendered. Each three-dimensional element in the three-dimensional data is a user interface (UI) element of the finest granularity and contains no business logic. The three-dimensional scene generated by the server cluster consists of multiple components; each component is an independent whole formed by combining one or more three-dimensional elements (hereinafter, elements) with their business logic, and components can be nested. Each three-dimensional scene or feature is an independent component; that is, the three-dimensional data of each element is pluggable, and the server cluster can automatically locate the data of different elements. Through the terminal device, the user issues an operation instruction for rendering the scene to be rendered. The instruction may include at least two rendering styles, and the user may either select several areas of the scene and set a rendering style for each, or set different rendering styles for different target types in the scene.
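For concreteness only, an operation instruction of the kind described above might be modeled by a data structure along the following lines; every type and field name here is a hypothetical choice for illustration, since the application does not prescribe a concrete format:

```cpp
#include <optional>
#include <string>
#include <vector>

// Hypothetical enumeration of the styles named in this application.
enum class RenderingStyle { Photorealistic, InkWash, Sketch, Cartoon, LineDrawing };

// An axis-aligned region of the scene chosen by the user (e.g. by mouse drag).
struct OperationRange { float minX, minY, minZ, maxX, maxY, maxZ; };

// One region and/or target type paired with the style it should be rendered in.
struct StyleAssignment {
    RenderingStyle style;
    std::optional<OperationRange> range;   // set when the user framed an area
    std::optional<std::string> targetType; // e.g. "mountain", "river"
};

// The operation instruction carries at least two style assignments,
// so that the rendered scene contains at least two rendering styles.
struct OperationInstruction {
    std::vector<StyleAssignment> assignments; // size() >= 2
};
```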
The three-dimensional scene to be rendered may be one the user selects from several pre-constructed scenes, or it may be provided directly by the user. The three-dimensional data is a collection of independent three-dimensional elements that can be displayed on a graphical user interface (GUI); the user imports it before rendering so that the server cluster can construct the scene and then render it according to the operation instruction.
Optionally, the server cluster may obtain an operation instruction for rendering the three-dimensional scene generated after the user interacts with the terminal through a mouse, a keyboard, voice, or gestures; that is, the instruction may be generated through any feasible mode of interaction between the user and the terminal device. The user can also determine an operation range and select a rendering style within the scene to be rendered, changing the spatial scale, the area and feature combinations, and the rendering style through rich, intelligent interaction, which gives a good user experience.
For example, the voice interaction logic may be: the user issues a voice command, the terminal device records the user's audio in real time, the terminal's speech recognition service recognizes the incoming audio stream, and the recognized instruction is transmitted to the server cluster in real time.
The gesture interaction logic may be: the user interacts with the terminal device through gestures; a terminal device with hand tracking captures the user's hand state in real time, the terminal's gesture recognition service recognizes the user's gesture instruction from the hand-state stream, and the instruction is then transmitted to the server cluster. The embodiments of this application do not limit the way the user issues the operation instruction.
The rendering styles include photorealistic and non-photorealistic styles, where the non-photorealistic styles may include an ink wash style, a sketch style, a cartoon style, a line-drawing style, or other forms that highlight specific information.
202. And the server cluster renders different areas in the three-dimensional scene according to different rendering styles according to the operation instruction to obtain the rendered three-dimensional scene.
The server cluster determines the area corresponding to each rendering style from the operation instruction, where the area corresponding to a rendering style is the region the user was viewing when setting that style in the three-dimensional scene. The server cluster renders the three-dimensional data within each determined area in the corresponding style, and the result is the rendered three-dimensional scene, that is, the required three-dimensional image.
As to the rendering process, take as an example a stylized visualization scheme based on graphics non-photorealistic rendering: the main procedure computes, per style, the brush-stroke effects of color, transparency, illumination, contour, and model surface, and finally combines the rendering of each part into the final effect. Taking the cartoon rendering style as an example, the server cluster expresses the cartoon style through contour-line rendering and interior shading. First, contour rendering is obtained through contour extraction, brush-stroke texture mapping, shading and blurring, and similar steps; second, the illumination intensity over the cartoon mesh is computed with a cartoon-rendering illumination model, colors are simplified and offset, and the lighting of cool and warm tones is adjusted; finally, the two parts are combined to obtain the cartoon-style drawing effect.
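The interior shading step just described (a quantized illumination model with a cool-to-warm color adjustment, combined with a dark contour where the surface turns away from the viewer) could look roughly like the following sketch; the band count, tone colors, and silhouette threshold are illustrative assumptions, not values from the application:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  mix(Vec3 a, Vec3 b, float t) {
    return { a.x + (b.x-a.x)*t, a.y + (b.y-a.y)*t, a.z + (b.z-a.z)*t };
}

// Toon-shade one surface point: quantize the Lambertian intensity into a few
// flat bands, tint between a cool and a warm tone, and darken pixels whose
// normal is nearly perpendicular to the view direction (the silhouette).
Vec3 toonShade(Vec3 baseColor, Vec3 n, Vec3 lightDir, Vec3 viewDir) {
    const int  bands = 3;                        // assumed band count
    const Vec3 cool  = {0.3f, 0.35f, 0.6f};      // assumed cool tone
    const Vec3 warm  = {0.9f, 0.8f, 0.5f};       // assumed warm tone

    float intensity = std::fmax(0.0f, dot(n, lightDir));
    // Simplify/offset the illumination: snap it to the nearest band.
    float banded = std::floor(intensity * bands) / (bands - 1);
    if (banded > 1.0f) banded = 1.0f;

    Vec3 tone = mix(cool, warm, banded);         // cool-to-warm adjustment
    Vec3 lit  = { baseColor.x * tone.x, baseColor.y * tone.y, baseColor.z * tone.z };

    // Contour: grazing normals get an ink-like outline color.
    if (std::fabs(dot(n, viewDir)) < 0.25f)      // assumed threshold
        return {0.05f, 0.05f, 0.05f};
    return lit;
}
```

Compositing this interior shading with the separately extracted contour lines then yields the cartoon-style drawing effect described above.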
Optionally, the operation instruction further includes at least two operation ranges, each corresponding to one rendering style, and the server cluster renders the areas within the different operation ranges of the three-dimensional scene in the corresponding rendering styles according to the operation instruction.
In this embodiment, the terminal device may directly carry, in the operation instruction, the operation range corresponding to each rendering style. The operation range is determined by the user on the terminal device; for example, it may be selected by dragging the mouse, chosen through voice interaction (the user may say "select the city area"), or dragged and adjusted directly through gesture interaction.
After obtaining the operation ranges, the server cluster automatically locates the target three-dimensional data to be rendered within each operation range of the scene, and then renders the data in each range in that range's rendering style; the styles of different ranges may be the same or different. For example, a user may delimit one operation range, set its rendering style, then delimit another range and set another style accordingly. The server cluster may render and display each range and its style in real time, or render and display only after receiving the operation instruction containing all the ranges and styles; this embodiment does not limit which.
Optionally, the operation instruction further includes at least two target types, each corresponding to one rendering style. The server cluster determines, according to the at least two target types, the rendering area corresponding to each target type in the three-dimensional scene, and renders each such area in the corresponding rendering style.
In this embodiment, the operation instruction may further include target types that indicate elements of the rendered three-dimensional scene, with each target type corresponding to one rendering style; different target types may also share a style. The server cluster can directly locate, from the target type, the rendering area where the element resides and its three-dimensional data, and then render that area in its corresponding style.
When the operation instruction includes an operation range, a target type, and a rendering style at the same time, the elements indicated by the target type may be the elements within the operation range, and the server cluster renders those elements in the style corresponding to the target type.
Optionally, a target type may cover one or more elements to be rendered, with each element having its own rendering style or all elements of the type sharing one. That is, the user may indicate that one or more elements within the operation range need rendering and configure a style for each; the styles may be the same or different. An element may be any constituent of the three-dimensional scene, such as a mountain, a building, a river, or a character, or some combination defined by rules, such as all elements within one geographic area or all mountains within one geographic area. Optionally, one target type may also correspond to only one element, in which case the operation instruction may include multiple target types; a dispatch sketch follows.
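Building on the instruction structure sketched earlier, the combined dispatch (range styles applied area-wide, with type-specific styles taking precedence inside a range) might be organized as below; Element, inside, and renderWithStyle are hypothetical names, and the precedence rule is one plausible reading of the combination described above:

```cpp
#include <string>
#include <vector>

struct Element {                       // one three-dimensional element
    std::string type;                  // e.g. "mountain", "river", "city"
    OperationRange bounds;             // reuses the struct sketched earlier
};

// Hypothetical renderer entry point; the style selects an NPR shader.
void renderWithStyle(const Element& e, RenderingStyle s);

static bool inside(const OperationRange& r, const OperationRange& e) {
    return e.minX >= r.minX && e.maxX <= r.maxX &&
           e.minY >= r.minY && e.maxY <= r.maxY &&
           e.minZ >= r.minZ && e.maxZ <= r.maxZ;
}

void renderScene(const std::vector<Element>& sceneElements,
                 const OperationInstruction& instr) {
    for (const Element& e : sceneElements) {
        const StyleAssignment* chosen = nullptr;
        for (const StyleAssignment& a : instr.assignments) {
            if (a.range && !inside(*a.range, e.bounds)) continue;
            if (a.targetType && *a.targetType != e.type) continue;
            chosen = &a;
            if (a.targetType) break;   // type-specific styles take precedence
        }
        if (chosen) renderWithStyle(e, chosen->style);
    }
}
```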
Optionally, each rendering style may further include at least two priority parameters, set by the user according to distinctions within the three-dimensional scene, with each priority parameter corresponding to a part of the scene's three-dimensional data. By comparing the priority parameters, the rendering degree of the data corresponding to some or all of them is adjusted. In particular, the adjustment may be an enhancement, that is, strengthening the rendering degree of the highest-priority three-dimensional data in the rendered scene. For example, the rendering result of a style may contain clear parts and blurred parts: in an ink wash rendering of a mountain, the emphasis falls on the mountain's boundary or edges, while other elements on the mountain are unimportant data that can be suppressed, so the boundary is the clear part and the rest is blurred. Optionally, the blurred part may itself be divided into several layers with different business priorities, as the specific requirements dictate, to obtain a better rendering effect.
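As a sketch of one way the priority-based adjustment could work, under the assumption that each part of the three-dimensional data is tagged with a priority layer: the highest-priority layer keeps full rendering degree and lower layers are progressively faded (or could equally be blurred). The layer model and the falloff constant are assumptions:

```cpp
#include <algorithm>
#include <vector>

struct DataLayer {
    int priority;        // higher = more important to the business scene
    float detailWeight;  // 1.0 = fully rendered, 0.0 = hidden
};

// Map each layer's priority to a rendering degree: the top layer stays
// sharp, lower-priority layers are progressively faded out.
void applyPriorities(std::vector<DataLayer>& layers) {
    if (layers.empty()) return;
    auto top = std::max_element(layers.begin(), layers.end(),
        [](const DataLayer& a, const DataLayer& b) { return a.priority < b.priority; });
    for (DataLayer& l : layers) {
        int gap = top->priority - l.priority;
        l.detailWeight = std::max(0.0f, 1.0f - 0.4f * gap); // assumed falloff
    }
}
```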
203. And the server cluster sends the rendered three-dimensional scene to the terminal equipment.
After rendering the three-dimensional scene according to the operation instruction, the server cluster may transmit the rendered three-dimensional scene to the terminal device through the network.
204. And the terminal equipment displays the rendered three-dimensional scene.
After the browser or application through which the user sent the operation instruction receives the three-dimensional image from the server cluster, the terminal device can display the image on screen through that browser or application, and the user can check the rendering result in real time. Alternatively, when the user needs to inspect the rendered scene, the user may input a display instruction to the server cluster through the terminal device, whereupon the server cluster displays the rendered three-dimensional data to the user. Specifically, the user may also rotate or zoom the rendered three-dimensional scene through a rotation instruction, a zoom instruction, and the like.
Optionally, the server cluster may further obtain an adjustment parameter, and adjust the rendered three-dimensional data according to the adjustment parameter.
The adjustment parameter is set by the user, based on the user's own needs or experience, to adjust the rendered three-dimensional data; it may be brightness information, color information, texture information, or other empirical parameters related to the form of expression. Specifically, an adjustment parameter issued for a given operation range applies to that range; it may instead target the rendering result of a particular target type, or of a target type within a certain range. The way the user issues the adjustment parameter follows the way the operation instruction is issued in step 201 and is not repeated here.
Optionally, after the server cluster adjusts the rendered three-dimensional data according to the adjustment parameter, it may feed the adjusted scene back to the terminal device in real time, and the terminal device may display it in real time. Based on this real-time visual feedback, the user can keep modifying the adjustment parameter to reach a better rendering effect.
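A minimal sketch of applying adjustment parameters to an already rendered result, assuming the rendered scene is available as an RGB framebuffer and that brightness and color adjustments act as simple per-pixel gains (the application leaves the exact operators open):

```cpp
#include <algorithm>
#include <vector>

struct Rgb { float r, g, b; };

struct AdjustParams {
    float brightness = 1.0f;               // 1.0 = unchanged
    Rgb   colorGain  = {1.0f, 1.0f, 1.0f}; // per-channel tint
};

// Scale every pixel by the brightness and color gains, clamped to [0, 1].
void adjustRendered(std::vector<Rgb>& framebuffer, const AdjustParams& p) {
    for (Rgb& px : framebuffer) {
        px.r = std::min(1.0f, px.r * p.brightness * p.colorGain.r);
        px.g = std::min(1.0f, px.g * p.brightness * p.colorGain.g);
        px.b = std::min(1.0f, px.b * p.brightness * p.colorGain.b);
    }
}
```

Because the operation is a cheap per-pixel pass, the server cluster can rerun it and stream the result back as the user nudges the parameters, which is what enables the real-time feedback loop described above.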
In one example, the rendering process of this embodiment may proceed as follows. The user first issues the voice command "Luohu District, Shenzhen", and the server cluster jumps the map to the destination. The user then drags the map with gestures and, by exploring, settles on an interesting urban area of the map as the operation range; the range contains a city 31, a mountain 32, and a river 33, with the city 31 being an element that does not need rendering. By voice command the user selects the target type "mountain" 32 with the rendering style "ink wash style", and the target type "river" 33 with the rendering style "cartoon style". The server cluster produces a preliminary rendering result according to the operation instruction and feeds it back to the terminal device, which displays the rendered three-dimensional scene for the user to view. The user can bring in his own taste and experience and manually tune parameters such as the overall brightness of the result via the adjustment parameters; the final real-time rendering result may be as shown in fig. 3.
By rendering the three-dimensional data of different areas of the scene in the corresponding rendering styles according to the operation instruction, the embodiments of this application realize at least two rendering styles within one three-dimensional scene and improve rendering flexibility.
The three-dimensional scene rendering method is explained above, and the following describes a three-dimensional scene rendering device.
Referring to fig. 4, as shown in fig. 4, an embodiment of the present application provides an apparatus for rendering a three-dimensional scene, where the apparatus 40 includes:
an obtaining unit 401, configured to obtain an operation instruction for rendering a three-dimensional scene by a user, where the operation instruction includes at least two rendering styles;
the rendering unit 402 is configured to render different regions in the three-dimensional scene according to different rendering styles according to the operation instruction, so as to obtain a rendered three-dimensional scene, where the rendered three-dimensional scene includes at least two rendering styles.
Optionally, the operation instruction further includes at least two operation ranges, each corresponding to one rendering style, and the rendering unit 402 is specifically configured to:
and according to the operation instruction, rendering the areas in different operation ranges in the three-dimensional scene according to the corresponding rendering style.
Optionally, the operation instruction further includes at least two target types, each corresponding to one rendering style, and the rendering unit 402 is specifically configured to:
determining a rendering area corresponding to each target type in the three-dimensional scene according to at least two target types;
and rendering the rendering area corresponding to each target type in the three-dimensional scene according to the corresponding rendering style.
Optionally, the obtaining unit 401 is specifically configured to:
and acquiring an operation instruction for rendering the three-dimensional scene generated after a user interacts with the terminal equipment through a mouse, a keyboard, voice or gestures.
Optionally, the apparatus further includes a first adjusting unit 403, where the first adjusting unit 403 is specifically configured to:
acquiring an adjustment parameter;
and adjusting the rendered three-dimensional scene according to the adjustment parameters, wherein the adjustment parameters comprise brightness information, color information or texture information.
Optionally, each rendering style includes at least two priority parameters, set by the user, each corresponding to a part of the three-dimensional data of the three-dimensional scene; the apparatus further includes a second adjusting unit 404, specifically configured to:
and adjusting the rendering degree of the three-dimensional data corresponding to part or all of the priority parameters according to the at least two priority parameters.
Optionally, the obtaining unit 401 is further configured to:
acquiring three-dimensional data, wherein the three-dimensional data is a collection of a plurality of independent three-dimensional elements available for display on a Graphical User Interface (GUI);
the apparatus further comprises a construction unit 405, the construction unit 405 is specifically configured to:
and constructing a three-dimensional scene according to the three-dimensional data.
Fig. 5 is a schematic diagram of a possible logical structure of a computer device 50 according to an embodiment of this application. The computer device 50 includes a processor 501, a communication interface 502, a storage system 503, and a bus 504, which interconnects the other three. In this embodiment, the processor 501 controls and manages the actions of the computer device 50; for example, it executes the steps performed by the server cluster in the method embodiment of fig. 2. The communication interface 502 supports communication for the computer device 50, and the storage system 503 stores the program code and data of the computer device 50.
The processor 501 may be, for example, a central processing unit, a general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and it may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with this disclosure. The processor 501 may also be a combination of components that implement computing functionality, for example one or more microprocessors, or a digital signal processor together with a microprocessor. The bus 504 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like, and may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 5, but this does not mean there is only one bus or one type of bus.
The obtaining unit 401, the rendering unit 402, the first adjusting unit 403, the second adjusting unit 404 and the constructing unit 405 in the apparatus 40 for three-dimensional scene rendering correspond to the processor 501 in the computer device 50.
When the aforementioned apparatus 40 for three-dimensional scene rendering is a software apparatus, the obtaining unit 401, the rendering unit 402, the first adjusting unit 403, the second adjusting unit 404, and the constructing unit 405 may be program code implementing the different functions; the code may be stored in the storage system 503 of the computer device 50, and the processor 501 of the computer device 50 may execute it to realize the functions of the apparatus 40.
The computer device 50 of this embodiment may correspond to the server cluster in the embodiment of the method in fig. 2, and the processor 501 in the computer device 50 may implement the functions of the server cluster and/or various steps implemented in the embodiment of the method in fig. 2, which are not described herein again for brevity.
The three-dimensional scene rendering process in the embodiments of this application may also be implemented by a rendering system formed from devices that perform both rendering and display. For example, referring to fig. 6, a schematic structural diagram of the rendering system 60 in this embodiment, the system includes an import module 601 and a front-end module 602.
The import module 601 is configured to receive three-dimensional data, obtained by the user from other devices, including basic geometric data (geometry), model types, and custom semantic structure data.
The front-end module 602 includes an interactive response submodule 6021 and a display submodule 6022. The interactive response submodule 6021 constructs a three-dimensional scene from the data imported by the import module 601 and then, according to the operation instruction sent by the user through the terminal device, invokes non-photorealistic rendering shaders of different styles to render the selected operation areas. The display submodule 6022 displays the rendering result of the interactive response submodule 6021. Specifically, the interactive response submodule 6021 may further receive an adjustment parameter sent by the user from the terminal device, adjust the rendered three-dimensional data accordingly, and display the adjusted data in real time through the display submodule 6022.
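As a structural sketch only, the module split of the rendering system 60 could be mirrored in code as below; all interface and type names are invented for illustration and are not defined by the application:

```cpp
#include <string>

struct ThreeDData { /* geometry, model type, custom semantic structure */ };
struct Image      { /* rendered pixels */ };

class ImportModule {                        // module 601
public:
    ThreeDData receive(const std::string& source); // accept user-imported data
};

class InteractiveResponseSubmodule {        // submodule 6021
public:
    void buildScene(const ThreeDData& data);       // construct the 3D scene
    Image render(const std::string& instruction);  // pick NPR shaders per area
    Image adjust(const std::string& params);       // apply adjustment parameters
};

class DisplaySubmodule {                    // submodule 6022
public:
    void show(const Image& img);            // present the rendering result
};
```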
In another embodiment of this application, a computer-readable storage medium is further provided, in which computer-executable instructions are stored; when a processor of a device executes the instructions, the device performs the steps of the three-dimensional scene rendering method performed by the server cluster in fig. 2.
In another embodiment of this application, a computer program product is further provided, including computer-executable instructions stored in a computer-readable storage medium; when a processor of a device executes the instructions, the device performs the steps of the three-dimensional scene rendering method performed by the server cluster in fig. 2.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented as a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of this application, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied as a software product stored in a storage medium, including several instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or some of the steps of the methods described in the embodiments of this application. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Claims (16)

1. A method of rendering a three-dimensional scene, comprising:
acquiring an operation instruction for rendering the three-dimensional scene by a user, wherein the operation instruction comprises at least two rendering styles;
and rendering different areas in the three-dimensional scene according to different rendering styles according to the operation instruction to obtain a rendered three-dimensional scene, wherein the rendered three-dimensional scene comprises the at least two rendering styles.
2. The three-dimensional scene rendering method according to claim 1, wherein the operation instruction further includes at least two operation ranges, each operation range corresponds to one rendering style, and the rendering different regions in the three-dimensional scene according to the operation instruction according to different rendering styles specifically includes:
and rendering areas in different operation ranges in the three-dimensional scene according to the operation instruction and the corresponding rendering style.
3. The three-dimensional scene rendering method according to claim 1 or 2, wherein the operation instruction further includes at least two target types, each target type corresponding to one rendering style, and the rendering of different regions in the three-dimensional scene in different rendering styles according to the operation instruction specifically comprises:
determining a rendering area corresponding to each target type in the three-dimensional scene according to the at least two target types;
and rendering the rendering area corresponding to each target type in the three-dimensional scene according to the corresponding rendering style.
4. The method for rendering the three-dimensional scene according to any one of claims 1 to 3, wherein the obtaining of the operation instruction for rendering the three-dimensional scene by the user specifically comprises:
and acquiring an operation instruction for rendering the three-dimensional scene, which is generated after the user interacts with the terminal equipment through a mouse, a keyboard, voice or gestures.
5. The method for rendering a three-dimensional scene according to any one of claims 1 to 4, wherein after the different regions of the three-dimensional scene are rendered in different rendering styles according to the operation instruction, the method further comprises:
acquiring an adjustment parameter;
and adjusting the rendered three-dimensional scene according to the adjusting parameters, wherein the adjusting parameters comprise brightness information, color information or texture information.
6. The method for rendering a three-dimensional scene according to any one of claims 1 to 5, wherein each rendering style comprises at least two priority parameters, the at least two priority parameters being set by the user, each priority parameter corresponding to a part of the three-dimensional data of the three-dimensional scene, and wherein after the different regions of the three-dimensional scene are rendered in different rendering styles according to the operation instruction, the method further comprises:
and adjusting the rendering degree of the three-dimensional data corresponding to part or all of the priority parameters according to the at least two priority parameters.
7. A method for rendering a three-dimensional scene according to any of claims 1-6, wherein before obtaining the operation instruction of rendering the three-dimensional scene by a user, the method further comprises:
acquiring three-dimensional data, wherein the three-dimensional data is a collection of a plurality of independent three-dimensional elements available for display on a Graphical User Interface (GUI);
and constructing the three-dimensional scene according to the three-dimensional data.
8. An apparatus for three-dimensional scene rendering, comprising:
the three-dimensional scene rendering method comprises an acquisition unit, a processing unit and a display unit, wherein the acquisition unit is used for acquiring an operation instruction for rendering the three-dimensional scene by a user, and the operation instruction comprises at least two rendering styles;
and the rendering unit is used for rendering different areas in the three-dimensional scene according to different rendering styles according to the operation instruction to obtain a rendered three-dimensional scene, wherein the rendered three-dimensional scene comprises the at least two rendering styles.
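The unit decomposition of claim 8 maps naturally onto a small class; the names below mirror the claim, but the interfaces are assumptions:

```python
class ThreeDSceneRenderer:
    """Apparatus of claim 8: an obtaining unit feeding a rendering unit."""

    def __init__(self, obtaining_unit, rendering_unit):
        self.obtaining_unit = obtaining_unit    # () -> operation instruction
        self.rendering_unit = rendering_unit    # (instruction) -> rendered scene

    def render_once(self):
        instruction = self.obtaining_unit()
        return self.rendering_unit(instruction)
```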
9. The apparatus for three-dimensional scene rendering according to claim 8, wherein the operation instruction further includes at least two operation ranges, each operation range corresponding to one rendering style, and the rendering unit is specifically configured to:
render the regions within the different operation ranges in the three-dimensional scene in the corresponding rendering styles according to the operation instruction.
10. The apparatus for three-dimensional scene rendering according to claim 8 or 9, wherein the operation instruction further includes at least two object types, each object type corresponding to one rendering style, and the rendering unit is specifically configured to:
determine, according to the at least two object types, a rendering region corresponding to each object type in the three-dimensional scene; and
render the rendering region corresponding to each object type in the three-dimensional scene in the corresponding rendering style.
11. The apparatus for three-dimensional scene rendering according to any one of claims 8 to 10, wherein the obtaining unit is specifically configured to:
obtain the operation instruction for rendering the three-dimensional scene that is generated after the user interacts with a terminal device by using a mouse, a keyboard, voice, or a gesture.
12. The apparatus for three-dimensional scene rendering according to any one of claims 8 to 11, wherein the apparatus further comprises a first adjustment unit, and the first adjustment unit is configured to:
obtain an adjustment parameter; and
adjust the rendered three-dimensional scene according to the adjustment parameter, wherein the adjustment parameter comprises brightness information, color information, or texture information.
13. The apparatus for three-dimensional scene rendering according to any one of claims 8 to 12, wherein each rendering style comprises at least two priority parameters, the at least two priority parameters are set by the user, each priority parameter corresponds to a part of the three-dimensional data of the three-dimensional scene, and the apparatus further comprises a second adjustment unit configured to:
adjust, according to the at least two priority parameters, the rendering degree of the three-dimensional data corresponding to some or all of the priority parameters.
14. The apparatus for three-dimensional scene rendering according to any one of claims 8 to 13, wherein the obtaining unit is further configured to:
obtain three-dimensional data, wherein the three-dimensional data is a collection of a plurality of independent three-dimensional elements that can be displayed on a graphical user interface (GUI); and
the apparatus further comprises a construction unit, the construction unit being specifically configured to:
construct the three-dimensional scene according to the three-dimensional data.
15. A computer device, comprising a processor and a memory, wherein the processor is connected to the memory, and
the processor is configured to execute the instructions stored in the memory to cause the computer device to perform the method according to any one of claims 1 to 7.
16. A computer program product, wherein when the computer program product runs on a computer, the computer is caused to perform the method according to any one of claims 1 to 7.
CN202110426405.1A 2021-04-20 2021-04-20 Three-dimensional scene rendering method, device and equipment Pending CN115311395A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110426405.1A CN115311395A (en) 2021-04-20 2021-04-20 Three-dimensional scene rendering method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110426405.1A CN115311395A (en) 2021-04-20 2021-04-20 Three-dimensional scene rendering method, device and equipment

Publications (1)

Publication Number Publication Date
CN115311395A (en) 2022-11-08

Family

ID=83853614

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110426405.1A Pending CN115311395A (en) 2021-04-20 2021-04-20 Three-dimensional scene rendering method, device and equipment

Country Status (1)

Country Link
CN (1) CN115311395A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116206006A (en) * 2023-03-02 2023-06-02 达瓦未来(北京)影像科技有限公司 Card style direct illumination effect rendering method based on UE rendering engine


Similar Documents

Publication Publication Date Title
CN110196746B (en) Interactive interface rendering method and device, electronic equipment and storage medium
US8514238B2 (en) System and method for adding vector textures to vector graphics images
KR101145260B1 (en) Apparatus and method for mapping textures to object model
CN111161392B (en) Video generation method and device and computer system
JP3626144B2 (en) Method and program for generating 2D image of cartoon expression from 3D object data
JP7432005B2 (en) Methods, devices, equipment and computer programs for converting two-dimensional images into three-dimensional images
CN113240783B (en) Stylized rendering method and device, readable storage medium and electronic equipment
CN112288665A (en) Image fusion method and device, storage medium and electronic equipment
JP2023029984A (en) Method, device, electronic apparatus, and readable storage medium for generating virtual image
CN106447756B (en) Method and system for generating user-customized computer-generated animations
WO2017123163A1 (en) Improvements in or relating to the generation of three dimensional geometries of an object
US20230230311A1 (en) Rendering Method and Apparatus, and Device
CN111462205B (en) Image data deformation, live broadcast method and device, electronic equipment and storage medium
CN115496845A (en) Image rendering method and device, electronic equipment and storage medium
Sandnes Sketching 3D immersed experiences rapidly by hand through 2D cross sections
JP2017111719A (en) Video processing device, video processing method and video processing program
CN107230249A (en) Shading Rendering method and apparatus
RU2680355C1 (en) Method and system of removing invisible surfaces of a three-dimensional scene
Yan et al. A non-photorealistic rendering method based on Chinese ink and wash painting style for 3D mountain models
CN115311395A (en) Three-dimensional scene rendering method, device and equipment
CN108230430A (en) The processing method and processing device of cloud layer shade figure
JP2003168130A (en) System for previewing photorealistic rendering of synthetic scene in real-time
KR101428577B1 (en) Method of providing a 3d earth globes based on natural user interface using motion-recognition infrared camera
KR102336156B1 (en) Method and system for realizing ultra-high quality images
CN115035231A (en) Shadow baking method, shadow baking device, electronic apparatus, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination