CN114159798A - Scene model generation method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN114159798A
Authority
CN
China
Prior art keywords: model, target, scene, style, sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111404730.4A
Other languages
Chinese (zh)
Inventor
吴宛婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN202111404730.4A
Publication of CN114159798A
Legal status: Pending (current)

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/70: Game security or game management aspects
    • A63F 13/77: Game security or game management aspects involving data related to game devices or game servers, e.g. configuration data, software version or amount of memory
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00: Arrangements for software engineering
    • G06F 8/30: Creation or generation of source code
    • G06F 8/35: Creation or generation of source code model driven
    • G06F 8/70: Software maintenance or management
    • G06F 8/71: Version control; Configuration management

Abstract

The embodiment of the invention discloses a scene model generation method and apparatus, an electronic device, and a computer-readable storage medium. The method can acquire an association model of a target scene, where the association model includes at least one sub-base model of the target scene and the sub-base model carries a style control node; acquire a target style control parameter of the target scene; and, based on the style control node, adjust the control parameters of the sub-base model according to the target style control parameter to obtain a target style model of the target scene, where the control parameters of the sub-base model are used to control the display of the scene style corresponding to the sub-base model. The embodiment of the invention can avoid repetitive work during scene model production and improve scene model production efficiency.

Description

Scene model generation method and device, electronic equipment and storage medium
Technical Field
The invention relates to the field of computer technology, and in particular to a scene model generation method and apparatus, an electronic device, and a computer-readable storage medium.
Background
Against the background of rapidly developing computer technology, building scene models in modeling software has become an essential step in producing many products. For example, when a game is developed, many three-dimensional models must be built in modeling software and combined to form the game's virtual scenes.
Generally, after a scene model is built in modeling software, it is exported as a file and imported into a game engine such as UE4 (Unreal Engine 4) for rendering the game scene. That is, one scene model must be built and imported into the game engine for each scene style; a plurality of scene styles therefore require a plurality of scene models to be built and imported.
However, in some cases two different scene styles share many common elements, and if a separate scene model is created for each style, a great deal of work is duplicated and scene models are produced inefficiently. For example, when two basketball-court floor models of different styles are to be made, the two floors may differ in some patterns, lines, and region color schemes, yet the greater part of the two floors is identical; building two complete models for the two styles would involve a large amount of repeated work, so model production efficiency is low.
Disclosure of Invention
Embodiments of the present invention provide a scene model generation method and apparatus, an electronic device, and a computer-readable storage medium, which can avoid repetitive work during scene model creation and improve the efficiency of scene model production.
In a first aspect, an embodiment of the present application provides a method for generating a scene model, including:
acquiring an association model of a target scene, wherein the association model comprises at least one sub-base model of the target scene, and the sub-base model carries a style control node;
acquiring a target style control parameter of the target scene;
and based on the style control node, adjusting the control parameters of the sub-base model according to the target style control parameters to obtain a target style model of the target scene, wherein the control parameters of the sub-base model are used for controlling the display of the scene style corresponding to the sub-base model.
In a second aspect, an embodiment of the present application further provides a scene model generation apparatus, including:
the apparatus comprises a first obtaining unit, a second obtaining unit, and an adjusting unit, wherein the first obtaining unit is configured to obtain an association model of a target scene, the association model comprises at least one sub-base model of the target scene, and the sub-base model carries a style control node;
a second obtaining unit, configured to obtain a target style control parameter of the target scene;
and the adjusting unit is used for adjusting the control parameters of the sub-base model according to the target style control parameters based on the style control node to obtain a target style model of the target scene, and the control parameters of the sub-base model are used for controlling the display of the scene style corresponding to the sub-base model.
In some embodiments, the sub-base model includes at least one of a line model, a color matching region model, and a pattern model, and the first obtaining unit is specifically configured to:
obtaining at least one line model of the target scene;
obtaining at least one color matching region model of the target scene;
acquiring at least one pattern model of the target scene;
and combining the at least one line model, the at least one color matching region model, and the at least one pattern model through a preset merge control node to obtain the association model.
In some embodiments, the style control node comprises a target display control node of the sub-base model, the target style control parameter comprises at least one of a show/hide parameter and a model color parameter of the sub-base model, and the adjusting unit is specifically configured to:
adjusting, based on the target display control node, the control parameters of the sub-base model according to at least one of the show/hide parameter and the model color parameter to obtain the target style model.
In some embodiments, the sub-base model comprises a target line model of the target scene, the target display control node comprises a first display control node of the target line model, the show/hide parameter comprises a first show/hide parameter of the target line model, and the adjusting unit is specifically configured to:
adjusting, based on the first display control node, the control parameters of the target line model according to the first show/hide parameter to obtain the target style model.
In some embodiments, the model color parameter further comprises a first model color parameter of the target line model, and the adjusting unit is specifically configured to:
adjusting, based on the first display control node, the control parameters of the target line model according to the first model color parameter to obtain the target style model.
In some embodiments, the sub-base model comprises a target color matching region model of the target scene, the target display control node comprises a second display control node of the target color matching region model, the show/hide parameter comprises a second show/hide parameter of the target color matching region model, and the adjusting unit is specifically configured to:
adjusting, based on the second display control node, the control parameters of the target color matching region model according to the second show/hide parameter to obtain the target style model.
In some embodiments, the target style control parameter further comprises a second model color parameter of the target color matching region model, and the adjusting unit is specifically configured to:
adjusting, based on the second display control node, the control parameters of the target color matching region model according to the second model color parameter to obtain the target style model.
In some embodiments, the style control node includes a material control node of the sub-base model, the target style control parameter includes a material parameter of a color matching region corresponding to the sub-base model, and the second obtaining unit is specifically configured to:
extracting, based on a lerp function, the color matching regions corresponding to the sub-base models in the association model;
obtaining material parameters of the color matching regions;
in some embodiments, the adjusting unit is specifically configured to:
and adjusting the control parameters of the color matching region according to the material parameters based on the material control node to obtain the target style model.
In some embodiments, the second obtaining unit is specifically configured to:
acquiring a target material ball for adjusting the target style model;
acquiring material color parameters of the color matching region based on the target material ball;
in some embodiments, the adjusting unit is specifically configured to:
and adjusting the control parameters of the color matching region according to the material color parameters based on the material control node to obtain the target style model.
In some embodiments, the second obtaining unit is specifically configured to:
obtaining a material map parameter of the color matching region;
in some embodiments, the adjusting unit is specifically configured to:
and adjusting the control parameters of the color matching region according to the material map parameter based on the material control node to obtain the target style model.
In some embodiments, the second obtaining unit is specifically configured to:
when the material exposure parameter is 0, acquiring a material map parameter of the color matching region;
and when the material exposure parameter is 1, acquiring the material color parameter of the color matching region.
In some embodiments, the sub-base model comprises a target pattern model of the target scene, the style control node comprises a pattern control node of the target pattern model, the target style control parameter comprises a position parameter and a resource parameter of the target pattern model, and the adjusting unit is specifically configured to:
and adjusting the control parameters of the pattern model according to the position parameters and the resource parameters based on the pattern control nodes to obtain the target style model.
In a third aspect, an embodiment of the present application further provides an electronic device, including a memory storing a plurality of instructions, and a processor that loads the instructions from the memory to perform the steps in any of the scene model generation methods provided in the embodiments of the present application.
In a fourth aspect, an embodiment of the present application further provides a computer-readable storage medium, where a plurality of instructions are stored, and the instructions are adapted to be loaded by a processor to perform steps in any one of the scene model generation methods provided in the embodiments of the present application.
In the embodiments of the application, the association model of the target scene includes at least one sub-base model carrying a style control node. By obtaining the target style control parameter of the target scene and adjusting the control parameters of the sub-base model accordingly, the display of the scene style corresponding to each sub-base model can be controlled, so that the scene style of the association model corresponding to the target scene is adjusted and the target style model of the target scene is obtained. In this way, the modeled material shared by different scene styles can be reused, avoiding the large amount of repeated work caused by building and importing a separate scene model into the game engine for every scene style; repetitive work during scene model production is thus avoided to a certain extent, and scene model production efficiency is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic flowchart of an embodiment of a scene model generation method provided in an embodiment of the present application;
FIG. 2 is a schematic diagram illustrating the construction of an association model provided in an embodiment of the present application;
FIG. 3 is a scene style diagram of a target scene constructed by lines provided in the embodiments of the present application;
FIG. 4 is a schematic diagram of merge control nodes provided in an embodiment of the present application;
FIG. 5 is a scene style diagram of a target scene constructed from color matching regions as provided in an embodiment of the present application;
FIG. 6 is a scene style diagram of a target scene constructed by patterns provided in the embodiments of the present application;
fig. 7 is a schematic structural diagram of a scene model generation apparatus provided in an embodiment of the present application;
fig. 8 is a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention provides a scene model generation method and device, electronic equipment and a computer readable storage medium.
The scene model generation apparatus may be specifically integrated in an electronic device, and the electronic device may be a terminal, a server, or another device. The terminal may be a mobile phone, a tablet computer, a smart Bluetooth device, a notebook computer, or a personal computer (PC); the server may be a single server or a server cluster composed of a plurality of servers.
In some embodiments, the scene model generating apparatus may also be integrated in a plurality of electronic devices, for example, the scene model generating apparatus may be integrated in a plurality of servers, and the plurality of servers implement the scene model generating method of the present invention.
In some embodiments, the server may also be implemented in the form of a terminal, for example, a personal computer may be provided as the server to integrate the scene model generation apparatus.
For example, the electronic device may be a mobile terminal, and the mobile terminal may obtain, through a network, an association model of a target scene, where the association model includes at least one sub-base model of the target scene, and the sub-base model carries a style control node; acquiring a target style control parameter of the target scene; and based on the style control node, adjusting the control parameters of the sub-base model according to the target style control parameters to obtain a target style model of the target scene, wherein the control parameters of the sub-base model are used for controlling the display of the scene style corresponding to the sub-base model.
The following are detailed below. The numbers in the following examples are not intended to limit the order of preference of the examples.
Referring to fig. 1, fig. 1 is a schematic flowchart of a scene model generation method provided in an embodiment of the present application. It should be noted that, although a logical order is shown in the flow chart, in some cases, the steps shown or described may be performed in an order different than that shown or described herein. In this embodiment, the scene model generation method includes steps 101 to 103, where:
101. Acquiring an association model of the target scene.
The association model comprises at least one sub-base model of the target scene, and the sub-base model carries a style control node.
The target scene is the scene for which a model is to be generated, such as a basketball court floor, a garment, or a wall.
A model is a construction built to reflect a scene, such as a basketball court floor model, a garment model, a wall model, or a table-and-chair model.
A sub-base model is a model that reflects a local style element of the target scene; for example, a model of the court lines of a basketball court floor and a model of the floor's patterns can both serve as sub-base models of the basketball court floor. The set of models of the plurality of local styles can reflect the overall style of the target scene.
The association model is the model built to reflect the target scene as a whole, such as a basketball court floor model, a garment model, a wall model, or a table-and-chair model.
The association model may specifically be a set of multiple sub-base models used to construct the target scene. For example, taking the target scene as a basketball court floor, as shown in fig. 2, the basketball court floor is constructed from a line model (sub-model 1 in fig. 2), a color matching region model (sub-model 2 in fig. 2), and a pattern model (sub-model 3 in fig. 2). The line model (sub-model 1), the color matching region model (sub-model 2), and the pattern model (sub-model 3) used to construct the basketball court floor are all sub-base models, and the set of these three sub-models is the association model associated with the basketball court floor.
In some embodiments, the association model may reside in modeling software such as Houdini. Houdini is a multifunctional package capable of modeling, rigging, animation, and visual effects.
In other embodiments, the association model may also reside in a game engine such as UE4. In that case, the association model made in modeling software such as Houdini can be exported as an HDA (Houdini Digital Asset) file and loaded into the UE4 engine. With conventional modeling software (e.g., 3ds Max or Maya), after the model file is finished, a file in fbx format is exported and imported into UE4; if the model then needs to be modified, it is modified in 3ds Max or Maya and a new fbx file is exported again. An HDA file avoids this repeated modify-and-export cycle: content (e.g., a model) produced in Houdini can expose corresponding variation parameters, and once the HDA file is brought into UE4, the model can be varied freely in UE4 within those parameters.
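The following is a minimal sketch of this export step using Houdini's Python API (the hou module); the subnet path, asset name, and file path are hypothetical placeholders, and the exact variation parameters exposed would depend on the asset being built:

```python
import hou

# Locate the subnet that contains the assembled association model
# (the node path is a placeholder for illustration).
subnet = hou.node("/obj/geo1/courtfloor_assoc")

# Wrap the subnet into a Houdini Digital Asset (HDA). Parameters promoted
# onto the HDA interface become the variation parameters that remain
# adjustable after the asset is loaded into UE4 (e.g., via the Houdini
# Engine plug-in).
asset_node = subnet.createDigitalAsset(
    name="court_floor",                                      # internal asset name (hypothetical)
    hda_file_name=hou.expandString("$HIP/court_floor.hda"),  # where the HDA is saved
    description="Basketball court floor association model",
)
```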
In step 101, the association model may be obtained in various ways, illustratively including:
(1) Building it in real time. Illustratively, the association model can be built in real time directly in software that can be used for modeling, such as Houdini. Through practical experience, the inventor found that scene styles mainly differ in lines, region color matching, patterns, and the like, so a plurality of sub-base models can be constructed along these dimensions to build the association model that forms the target scene. The process of obtaining the association model in step 101 is described below, taking as examples an association model that is a plurality of line models, a plurality of color matching region models, or a plurality of pattern models.
The association model is a plurality of line models. In this case, step 101 may specifically include the following steps 1011A to 1012A:
1011A, obtaining a plurality of line models of the target scene.
The line model is a sub-base model for reflecting lines in the target scene, such as the court lines in a basketball court floor.
For example, assume the target scene is a basketball court floor and, as shown in fig. 3, basketball court floors of scene styles 1, 2, and 3 need to be constructed in practical application. The style-1 basketball court floor includes court lines of type A, type B, and type C, as shown in fig. 3 (a); the style-2 basketball court floor includes court lines of type A, type B, and type D, as shown in fig. 3 (b). Court lines of the same type have the same position and shape within the basketball court floor.
It can be seen that, across the scene styles of the basketball court floor, the court lines whose style remains unchanged are the type-A and type-B court lines, while the court lines whose style changes with the floor's scene style are the type-C and type-D court lines.
In Houdini, on the one hand, the court lines that never change style are modeled as one model (denoted model 0), whose style includes the type-A and type-B court lines; on the other hand, the court lines whose style must change are modeled separately (denoted models 1 to n1, where n1 is the number of court line models that need to change). Here, models 1 to n1 include model 1, whose style is the type-C court line, and model 2, whose style is the type-D court line. Model 0, model 1, and model 2 are the plurality of line models of the basketball court floor (i.e., the target scene).
Further, a Color node may be linked to each line model, and colors may be given to the line models through the Color nodes.
1012A, combining the line models through a preset merge control node to obtain the association model.
In step 1012A, the preset merge control node is a control node for merging a plurality of line models, for example a Merge node in Houdini.
Specifically, the plurality of line models obtained in step 1011A are input into the preset merge control node; after the preset merge control node selects and merges the line models, it outputs the association model.
Illustratively, as shown in fig. 4, fig. 4 is a schematic diagram of merge control nodes provided in an embodiment of the present application. The preset merge control nodes may specifically include Merge nodes and Switch nodes in Houdini, where a Merge node merges two models and a Switch node selects one of its multiple inputs for output; together they control whether the line model of each style is used to construct the association model, and thereby determine the scene style of the target scene corresponding to the association model. As shown in fig. 4, assume the outputs of Merge nodes 1, 2, and 3 are merge model 1, merge model 2, and merge model 3, respectively. If Switch nodes 1, 2, and 3 select merge models 1, 2, and 3 as their respective outputs, the final output of Switch node 3 in fig. 4 is merge model 3, that is, the merge of models 1, 2, 3, and 4; the merge model output by Switch node 3 is then the association model output by the preset merge control nodes.
In this way, by adjusting the control parameters of the preset merge control nodes (such as the control parameters of the Merge and Switch nodes), line models can be selected, so the scene style of the association model is adjusted, and hence the scene style of the target scene is adjusted while reducing repeated work. Therefore, through steps 1011A to 1012A, an association model that selects among line models can be built, i.e., an association model whose scene style is adjustable, so the line style of the target scene can be adjusted with little duplicated effort. Analogous to this selection of line models, the show/hide parameter of a sub-base model contained in the target style control parameter is used to select the sub-base model: the show/hide parameter is the control parameter that determines whether the sub-base model is displayed, and may specifically be, for example, the control parameter of a Switch node among the merge control nodes.
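To make the wiring of fig. 4 concrete, here is a minimal sketch of such a Merge/Switch chain written with Houdini's Python API; the geometry container and the stand-in line models are assumptions for illustration (real line models would be modeled by hand):

```python
import hou

geo = hou.node("/obj").createNode("geo", "court_lines")

# Stand-ins for model 0 (unchanging court lines) and models 1..n1
# (style-dependent court lines); boxes substitute for real geometry.
models = [geo.createNode("box", "line_model_%d" % i) for i in range(3)]

# Give each line model a color via a Color SOP, as described above.
colored = []
for i, m in enumerate(models):
    c = geo.createNode("color", "color_%d" % i)
    c.setInput(0, m)
    c.parmTuple("color").set((1.0, 0.5 * i, 0.0))  # placeholder colors
    colored.append(c)

# Merge/Switch chain: each Switch either keeps the running merge result
# (input 0, model excluded) or adopts the new Merge output (input 1,
# model included), mirroring fig. 4.
current = colored[0]
for i, c in enumerate(colored[1:], start=1):
    merge = geo.createNode("merge", "merge_%d" % i)
    merge.setInput(0, current)
    merge.setInput(1, c)
    switch = geo.createNode("switch", "switch_%d" % i)
    switch.setInput(0, current)   # association model without this line model
    switch.setInput(1, merge)     # association model with this line model
    switch.parm("input").set(1)   # the show/hide parameter: 1 = include
    current = switch

current.setDisplayFlag(True)      # final Switch outputs the association model
```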
The association model is a plurality of color matching region models. In this case, step 101 may specifically include the following steps 1011B to 1012B:
1011B, obtaining a plurality of color matching area models of the target scene.
The color matching region model is a sub-base model for reflecting the color of each region in the target scene. For example, the basketball court floor is divided into two color matching regions, namely the region inside the three-point line and the region outside the three-point line.
For example, assume the target scene is a basketball court floor and, as shown in fig. 5, basketball court floors of scene styles 1 and 2 need to be constructed in practical application. The style-1 basketball court floor includes color matching regions of type A and type B, as shown in fig. 5 (a); the style-2 basketball court floor includes color matching regions of type A, type B, and type C, as shown in fig. 5 (b). Color matching regions of the same type have the same position and shape within the basketball court floor.
It can be seen that, across the scene styles of the basketball court floor, the color matching regions whose style remains unchanged are those of type A and type B, while the color matching region whose style changes with the floor's scene style is that of type C.
In Houdini, on the one hand, the court color matching regions that never change style are modeled as one model (denoted model 0), whose style includes the type-A and type-B color matching regions; on the other hand, the color matching regions whose style must change are modeled separately (denoted models 1 to n2, where n2 is the number of color region styles that need to change). Here, models 1 to n2 include model 1, whose style is the type-C color matching region. Model 0 and model 1 are the plurality of color matching region models of the basketball court floor (i.e., the target scene).
Further, a Color node may be linked to each color matching region model, and colors may be assigned to the color matching region models through the Color nodes.
1012B, combining the multiple color matching region models through a preset merge control node to obtain the association model.
In step 1012B, the preset merge control node is a control node for merging a plurality of color matching region models, for example a Merge node in Houdini.
The manner of combining the color matching region models in step 1012B is similar to that in step 1012A; reference may be made to the description of step 1012A, which is not repeated here.
In this way, by adjusting the control parameters of the preset merge control nodes (such as the control parameters of the Merge and Switch nodes), color matching region models can be selected, so the scene style of the association model in region color matching is adjusted, and hence the scene style of the target scene is adjusted while reducing repeated work. Therefore, through steps 1011B to 1012B, an association model that selects among color matching region models can be built, i.e., an association model whose scene style is adjustable.
The association model is a plurality of pattern models. In this case, step 101 may specifically include the following steps 1011C to 1012C:
1011C, obtaining a plurality of pattern models of the target scene.
The pattern model is a sub-base model for reflecting a pattern in the target scene, for example, the icons of team A and team B marked on the basketball court floor.
For example, assume the target scene is a basketball court floor and, as shown in fig. 6, basketball court floors of scene styles 1 and 2 need to be constructed in practical application. The style-1 basketball court floor includes patterns of type A and type C, as shown in fig. 6 (a); the style-2 basketball court floor includes patterns of type A and type D, as shown in fig. 6 (b). Patterns of the same type have the same position and shape within the basketball court floor.
It can be seen that, across the scene styles of the basketball court floor, the pattern whose style remains unchanged is the type-A pattern, while the patterns whose style changes with the floor's scene style are the type-C and type-D patterns.
In Houdini, on the one hand, the court pattern that never changes style is modeled as one model (denoted model 0), whose style is the type-A pattern; on the other hand, the patterns whose style must change are modeled separately (denoted models 1 to n3, where n3 is the number of pattern styles that need to change). Here, models 1 to n3 include model 1, whose style is the type-C pattern, and model 2, whose style is the type-D pattern. Model 0, model 1, and model 2 are the plurality of pattern models of the basketball court floor (i.e., the target scene).
1012C, combining the plurality of pattern models through a preset merge control node to obtain the association model.
In this way, by adjusting the control parameters of the preset merge control nodes (such as the control parameters of the Merge and Switch nodes), pattern models can be selected, so the scene style of the association model in patterns is adjusted, and hence the scene style of the target scene is adjusted while reducing repeated work. Therefore, through steps 1011C to 1012C, an association model that selects among pattern models can be built, i.e., an association model whose scene style is adjustable.
The process of obtaining the association model in step 101 has been described above taking as examples an association model that is a plurality of line models, a plurality of color matching region models, or a plurality of pattern models. It will be appreciated that the association model may also combine at least two of the line models, color matching region models, pattern models, and other sub-base models used to adjust style changes, and the number of each kind may be one or more. For example, as shown in fig. 2, the case where the association model includes at least one line model, at least one color matching region model, and at least one pattern model is described next.
The association model includes at least one line model, at least one color matching region model, and at least one pattern model. In this case, step 101 may specifically include the following steps 1011D to 1014D:
1011D, obtaining at least one line model of the target scene.
Step 1011D is similar to the step 1011A, and specific reference may be made to the related description of step 1011A, which is not repeated herein.
1012D, obtaining at least one color matching region model of the target scene.
Step 1012D is similar to the step 1011B, and specific reference may be made to the related description of the step 1011B, which is not described herein again.
1013D, obtaining at least one pattern model of the target scene.
Step 1013D is similar to the step 1011C described above, and specific reference may be made to the related description of the step 1011C, which is not described herein again.
1014D, combining the at least one line model, the at least one color matching region model, and the at least one pattern model through a preset merge control node to obtain the association model.
Step 1014D is similar to the implementation of step 1012A, and reference may be specifically made to the related description of step 1012A, which is not described herein again.
As shown in fig. 2, for example, if the target scene is a basketball court floor, the basketball court floor is built from a line model (sub-model 1 in fig. 2), a color matching region model (sub-model 2, in a dashed box in fig. 2), and a pattern model (sub-model 3, in a dashed box in fig. 2); the line model (sub-model 1), the color matching region model (sub-model 2), and the pattern model (sub-model 3) used to build the basketball court floor are all sub-base models. The line model (sub-model 1), the color matching region model (sub-model 2), and the pattern model (sub-model 3) are combined through the preset merge control node to obtain the association model of the basketball court floor.
Therefore, through steps 1011D to 1014D, an association model that selects among the line models, color matching region models, and pattern models can be built, i.e., an association model whose scene style is adjustable, so the scene style of the target scene can be adjusted while reducing repeated work.
(2) Reading directly from a preset database. Before step 101, the association models of each type of scene are pre-built, in the real-time manner described in (1) above, and stored in a preset database; step 101 then directly queries and reads the association model associated with the target scene from the preset database.
102. Acquiring a target style control parameter of the target scene.
The target style control parameter is a parameter for controlling the scene style of the association model. Specifically, it acts on the control parameters of the sub-base model and may be, for example, the show/hide parameter, model color parameter, material parameter, resource parameter, or position parameter of the sub-base model.
In step 102, there are various ways to obtain the target style control parameter, which exemplarily includes:
Firstly, the case where the control parameters of the sub-base model are adjusted in modeling software such as Houdini.
Modeling software such as Houdini provides many nodes that can adjust a model's appearance, such as the Switch node and the Color node. Next, taking the case where step 103 adjusts the control parameters of the sub-base model based on the Switch node and the Color node in Houdini as an example, the process of obtaining the target style control parameter of the target scene in step 102 is described.
1. The case where step 103 adjusts the control parameters of the sub-base model based on the Switch node in Houdini.
At this time, the target style control parameter is specifically the show/hide parameter of the sub-base model. For ease of understanding, with continued reference to fig. 4, the show/hide parameter refers to the control parameter of each Switch node among the preset merge control nodes of the association model. The show/hide parameter (i.e., the target style control parameter) is used to select the sub-base model of each style.
For example, obtaining the target style control parameter in step 102 means obtaining the show/hide parameter: the manually entered input-selection parameter of each Switch node among the association model's preset merge control nodes can be obtained in Houdini and used as the show/hide parameter for selecting the sub-base model of each style.
2. The case where step 103 adjusts the control parameters of the sub-base model based on the Color node in Houdini.
At this time, the target style control parameter is specifically the model color parameter of the sub-base model. The model color parameter refers to the control parameter of the Color node of each sub-base model in the association model; the model color parameter (i.e., the target style control parameter) is used to give the sub-base model its color.
Illustratively, obtaining the target style control parameter in step 102 means obtaining the model color parameter: the manually entered control parameter of each sub-base model's Color node in the association model can be obtained in Houdini and used as the model color parameter for coloring the sub-base model.
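As an illustrative sketch (with hypothetical node paths), reading and applying these two kinds of target style control parameters through Houdini's Python API could look like this:

```python
import hou

# Show/hide parameter: the input selection of a Switch node decides
# whether a sub-base model is used in the association model.
switch = hou.node("/obj/court_lines/switch_1")   # hypothetical path
current_choice = switch.parm("input").eval()     # read the current selection
switch.parm("input").set(1)                      # select the branch that includes model 1

# Model color parameter: the Color node of a sub-base model gives it its color.
color = hou.node("/obj/court_lines/color_1")     # hypothetical path
color.parmTuple("color").set((0.9, 0.2, 0.1))    # e.g., recolor the type-C court line
```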
Secondly, the case where the control parameters of the sub-base model are adjusted in an Unreal engine such as UE4.
Modeling software such as Houdini has many nodes that can be exposed; exposing them allows the model to be imported into an Unreal engine such as UE4 while its appearance remains adjustable, for example the Objectmerge node and the Transform node in Houdini.
An Unreal engine such as UE4 likewise has many nodes or parameters that can adjust the model's appearance, such as the VertexColor node and the UseColor parameter in UE4.
Next, taking the case where step 103 adjusts the control parameters of the sub-base model based on the Objectmerge node and Transform node exposed from Houdini and on the VertexColor node and UseColor parameter in UE4 as an example, the process of obtaining the target style control parameter of the target scene in step 102 is described.
1. The case where step 103 adjusts the control parameters of the sub-base model based on the VertexColor node and UseColor parameter in UE4.
At this time, the target style control parameter is specifically the material parameter of the color matching region corresponding to the sub-base model. The material parameter (i.e., the target style control parameter) is used to assign a material to the color matching region corresponding to the sub-base model. Obtaining the target style control parameter in step 102 means obtaining the material parameter, and step 102 specifically includes the following steps 1021 to 1022:
1021. Extracting the color matching regions corresponding to the sub-base models in the association model.
For example, in step 1021, the vertex color information of each sub-base model in the association model may be extracted through a VertexColor node combined with the lerp function in UE4, so as to obtain the color matching region corresponding to each sub-base model. The parameter value of the VertexColor node represents the vertex color, i.e., the color information stored on the model's vertices; in UE4, a sub-base model's vertex colors can be obtained through the VertexColor node. The lerp function returns a blend of its A and B inputs based on its alpha input: when alpha is 0, 100% of A is returned; when alpha is 1, 100% of B is returned.
For example, for each sub-base model in the association model, the VertexColor parameter values are extracted from the three channels R (red), G (green), and B (blue); combined with the lerp function, the ranges of the three regions (255,0,0), (0,255,0), and (0,0,255) are obtained, and these three regions are the color matching regions corresponding to the sub-base models in the association model.
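The following is a minimal Python sketch of the per-point blend that such a VertexColor-plus-lerp material computes; it illustrates the logic only, is not UE4 API code, and the region colors are placeholders:

```python
def lerp(a, b, alpha):
    # Return a blend of a and b: 100% of a at alpha = 0, 100% of b at alpha = 1.
    return tuple(x + (y - x) * alpha for x, y in zip(a, b))

def shade(vertex_color, region_colors):
    # vertex_color is the (R, G, B) color painted on the model's vertices,
    # normalized to 0..1: (1,0,0) marks region 1, (0,1,0) region 2, and
    # (0,0,1) region 3 (the (255,0,0)/(0,255,0)/(0,0,255) regions above).
    r, g, b = vertex_color
    out = lerp(region_colors[0], region_colors[1], g)  # G channel selects region 2
    out = lerp(out, region_colors[2], b)               # B channel selects region 3
    return out                                         # region 1 is the default (g = b = 0)

# A point painted (0,1,0) falls in region 2 and takes region 2's color:
print(shade((0.0, 1.0, 0.0), [(1, 0, 0), (0, 0.5, 1), (0.2, 0.8, 0.2)]))
```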
1022. Acquiring the material parameters of the color matching regions.
For example, in step 1022, the manually entered material parameters (e.g., the color parameter of a material ball, or a map parameter) of the color matching regions corresponding to the sub-base models in the association model may be obtained in UE4, yielding the material parameters used to assign materials to those color matching regions.
The material parameter may be the color parameter of a material ball or a map parameter, but in general the two cannot be given to the material ball at the same time. To allow both the material ball's color parameter and a map parameter to be given to the sub-base model, the UseColor parameter can be used as a switch. UseColor is a parameter of the Unreal material in UE4 (used here as a 0/1 switch): when UseColor is set to 1, the material ball's color parameter can be input; when UseColor is set to 0, a map parameter can be input. In this case, the manner of obtaining the material parameter in step 1022 may specifically include:
when UseColor is 1, obtaining in UE4 the manually entered color parameters of the color matching regions corresponding to the sub-base models in the association model, as the material parameters used to assign materials to those regions; that is, step 1022 may specifically include: when the material exposure parameter is 1, acquiring the material color parameter of the color matching region, where the material exposure parameter is the value of UseColor;
and when UseColor is 0, obtaining in UE4 the manually entered map parameters of the color matching regions corresponding to the sub-base models in the association model, as the material parameters used to assign materials to those regions; that is, step 1022 may specifically include: when the material exposure parameter is 0, acquiring the material map parameter of the color matching region, where the material exposure parameter is the value of UseColor.
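Continuing the sketch above, the UseColor switch itself can be expressed as one more lerp between the sampled map color and the material ball's color parameter; the function and value names are illustrative:

```python
def lerp(a, b, alpha):
    return tuple(x + (y - x) * alpha for x, y in zip(a, b))

def region_material(map_sample, material_color, use_color):
    # use_color plays the role of the material exposure parameter:
    # 0 selects the material map parameter (the sampled map color),
    # 1 selects the material color parameter of the material ball.
    return lerp(map_sample, material_color, use_color)

print(region_material((0.3, 0.3, 0.3), (1.0, 0.6, 0.0), 1))  # UseColor = 1: color parameter wins
print(region_material((0.3, 0.3, 0.3), (1.0, 0.6, 0.0), 0))  # UseColor = 0: map parameter wins
```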
2. The case where step 103 adjusts the control parameters of the sub-base model based on the Objectmerge node and Transform node exposed from Houdini.
At this time, the target style control parameter is specifically the position parameter and the resource parameter of the sub-base model. The position parameter is used to adjust the position of the sub-base model; the resource parameter is used to adjust resources such as the sub-base model's pattern.
Specifically, the sub-base model is an Objectmerge node placed at some position in the association model, and in UE4 any UE4 resource can be assigned to the exposed Objectmerge node, thereby adjusting the pattern. The UE4 resource given to the Objectmerge node is the resource parameter of the sub-base model. Further, a Transform node may be connected below the Objectmerge node in Houdini and exposed, so that the Transform node's transform parameters can be adjusted in UE4 to perform operations such as rotation, scaling, and translation, thereby rotating, scaling, or translating the pattern. The Transform node's transform parameter is the position parameter of the sub-base model.
In step 102, obtaining the target style control parameter means obtaining the position parameter and the resource parameter: the manually entered UE4 resource assigned to each sub-base model's Objectmerge node in the association model can be obtained in UE4 as the sub-base model's resource parameter, and the manually entered transform parameter given to each sub-base model's Transform node in the association model can be obtained in UE4 as the sub-base model's position parameter.
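As a sketch of the Houdini side of this setup (node names and paths hypothetical), an Objectmerge node with a Transform node beneath it can be created and its parameters later promoted onto the HDA interface, so that the pattern's resource and placement remain editable in UE4:

```python
import hou

geo = hou.node("/obj/court_floor")                # hypothetical geometry container

# Object Merge node: the slot that pulls in the pattern geometry. Once this
# node is exposed, a UE4 resource can be substituted here to swap the
# pattern (the resource parameter of the sub-base model).
pattern = geo.createNode("object_merge", "pattern_slot")
pattern.parm("objpath1").set("/obj/team_logo")    # placeholder pattern source

# Transform node connected below it: exposing its translate/rotate/scale
# parameters lets UE4 move, rotate, and scale the pattern (the position
# parameter of the sub-base model).
xform = geo.createNode("xform", "pattern_xform")
xform.setInput(0, pattern)
xform.parmTuple("t").set((2.0, 0.0, 1.5))         # placeholder translation
xform.parmTuple("r").set((0.0, 90.0, 0.0))        # placeholder rotation
```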
103. Adjusting, based on the style control node, the control parameters of the sub-base model according to the target style control parameter to obtain a target style model of the target scene.
The control parameters of the sub-base model are used for controlling the display of the scene style corresponding to the sub-base model.
Corresponding to the target style control parameters obtained in step 102, there are various ways to adjust the control parameters of the sub-base model in step 103, illustratively including:
(I) Adjusting the control parameters of the sub-base model in modeling software such as Houdini.
Next, the process of adjusting the control parameters of the sub-base model in step 103 is described, taking the case where step 103 adjusts those parameters based on the Switch node and the Color node in Houdini as an example.
1) Step 103 adjusts the control parameters of the sub-base model based on the Switch node in Houdini.
That is, the target style control parameter is specifically the show/hide parameter of the sub-base model. Adjusting the control parameters of the sub-base model amounts to setting the sub-base model's show/hide parameter to the show/hide parameter obtained in step 102. In step 103, the show/hide parameter of the sub-base model is adjusted based on the Switch node in Houdini.
For example, when the sub-base model is a line model, the sub-base model includes a target line model of the target scene, the style control node includes a first display control node of the target line model, and the target style control parameter includes a first show/hide parameter of the target line model. In this case, step 103 specifically includes the following step 1031A:
1031A, based on the first display control node, adjusting the control parameters of the target line model according to the first show/hide parameter to obtain the target style model.
In step 1031A, the first display control node specifically refers to the Switch node set for the target line model in Houdini.
The first show/hide parameter specifically refers to the input-selection parameter, acquired in step 102, of the Switch node set for the target line model.
Illustratively, the show/hide parameter of the target line model is set, through the first display control node, to the first show/hide parameter obtained in step 102 to obtain the target style model; in this way, line models of the required style can be selected from the line models of the association model to form the target style model of the target scene, and the line style of the target scene is adjusted while reducing repeated work.
For another example, when the sub-base model is a color matching region model, the sub-base model includes a target color matching region model of the target scene, the style control node includes a second display control node of the target color matching region model, and the target style control parameter includes a second show/hide parameter of the target color matching region model. In this case, step 103 specifically includes the following step 1031B:
1031B, based on the second display control node, adjusting the control parameters of the target color matching region model according to the second show/hide parameter to obtain the target style model.
In step 1031B, the second display control node specifically refers to the Switch node set for the target color matching region model in Houdini.
The second show/hide parameter specifically refers to the input-selection parameter, acquired in step 102, of the Switch node set for the target color matching region model.
Illustratively, the show/hide parameter of the target color matching region model is set, through the second display control node, to the second show/hide parameter obtained in step 102 to obtain the target style model; in this way, color matching region models of the required style can be selected from the color matching region models of the association model to form the target style model of the target scene, and the region color matching style of the target scene is adjusted while reducing repeated work.
2) Step 103 adjusts the control parameters of the sub-base model based on the Color node in Houdini.
That is, the target style control parameter is specifically the model color parameter of the sub-base model. Adjusting the control parameters of the sub-base model amounts to setting the sub-base model's model color parameter to the model color parameter obtained in step 102. In step 103, the model color parameter of the sub-base model is adjusted based on the Color node in Houdini.
For example, when the sub-base model is a line model, the sub-base model comprises a target line model of the target scene, the style control node comprises a first display control node of the target line model, and the target style control parameter comprises a first model color parameter of the target line model. In this case, step 103 specifically includes the following step 1031C:
1031C, based on the first display control node, adjusting the control parameters of the target line model according to the first model color parameter to obtain the target style model.
In step 1031C, the first display control node specifically refers to the Color node set for the target line model in Houdini.
The first model color parameter specifically refers to the color parameter, acquired in step 102, of the Color node set for the target line model.
Illustratively, the model color parameter of the target line model is set, through the first display control node, to the first model color parameter obtained in step 102 to obtain the target style model; in this way, the colors of the association model's line models can be adjusted to those of the required style to form the target style model of the target scene, and the line color style of the target scene is adjusted while reducing repeated work.
For another example, when the sub-base model is a color matching region model, the sub-base model includes a target color matching region model of the target scene, the style control node includes a second display control node of the target color matching region model, and the target style control parameter includes a second model color parameter of the target color matching region model. In this case, step 103 specifically includes the following step 1031D:
1031D, based on the second display control node, adjusting the control parameters of the target color matching region model according to the second model color parameter to obtain the target style model.
In step 1031D, the second display control node specifically refers to the Color node set for the target color matching region model in Houdini.
The second model color parameter specifically refers to the color parameter, acquired in step 102, of the Color node set for the target color matching region model.
The model color parameter of the target color matching region model is set, through the second display control node, to the second model color parameter obtained in step 102 to obtain the target style model; in this way, the colors of the association model's color matching region models can be adjusted to those of the required style to form the target style model of the target scene, and the region color style of the target scene is adjusted while reducing repeated work.
The process of adjusting the control parameters of the sub-base model in step 103 has been described above taking the separate adjustment of the sub-base model's show/hide parameter and model color parameter as examples. It is to be understood that at least two of the first show/hide parameter of the target line model, the second show/hide parameter of the target color matching region model, the first model color parameter of the target line model, the second model color parameter of the target color matching region model, and the show/hide and model color parameters of other sub-base models used to adjust style changes may also be adjusted simultaneously. For example, when the style control node includes a target display control node of the sub-base model and the target style control parameter includes both the show/hide parameter and the model color parameter of the sub-base model, step 103 may specifically include: based on the target display control node, adjusting the control parameters of the sub-base model according to the show/hide parameter and the model color parameter to obtain the target style model.
And (ii) adjusting control parameters of the sub-base model in a ghost engine such as the UE 4.
Next, a process of adjusting the control parameters of the sub-base model in step 103 will be described by taking, as an example, the control parameters of the sub-base model adjusted based on the Objectmerge node and Transform node exposed in Houdini and based on the Vertexcolor node and UseColor parameters in UE4 in step 103.
1) And step 103, adjusting the control parameters of the sub-base model based on Vertexcolor nodes and USeColor parameters in the UE 4.
Specifically, the target style control parameter is a material parameter of the color matching region corresponding to the sub-base model. Adjusting the control parameters of the sub-base model is equivalent to adjusting the material parameters of the color matching region corresponding to the sub-base model to the material parameters of the color matching region corresponding to the sub-base model obtained in step 102. In step 103, the material parameters of the sub-base model are adjusted based on Vertexcolor nodes and USeCoolor parameters in the UE 4. At this time, the style control node includes a material control node of the base model, and the target style control parameter includes a material parameter of a color matching region corresponding to the base model. Step 103 specifically includes the following step 1031E: 1031E, based on the material control nodes, adjusting the control parameters of the color matching regions according to the material parameters to obtain the target style model.
In step 1031E, the material control node is specifically a VertexColor node set for the sub-base model.
For example, when the material exposure parameter UseColor is 1, that is, when the target style control parameter obtained in step 102 is specifically a material color parameter, step 1031E may specifically include: adjusting the control parameters of the color matching region according to the material color parameter based on the material control node to obtain the target style model. The material color parameter specifically refers to the material parameter of the color matching region corresponding to the sub-base model acquired in step 102.
Illustratively, the material color parameters of the color matching regions corresponding to the sub-base models are adjusted, through the material control node, to the material color parameters obtained in step 102 to obtain the target style model. In this way, the materials of the sub-base models in the association model can be adjusted to those of the required style, forming the target style model of the target scene; the scene style of the target scene in terms of material is thus adjusted while reducing repeated work.
For another example, when the material exposure parameter UseColor is 0, that is, when the target style control parameter obtained in step 102 is specifically a material map parameter, step 1031E may specifically include: adjusting the control parameters of the color matching region according to the material map parameter based on the material control node to obtain the target style model. The material map parameter specifically refers to the material parameter of the color matching region corresponding to the sub-base model acquired in step 102; it is a texture map that is mapped onto that color matching region and controls its appearance.
Illustratively, the material parameters of the color matching regions corresponding to the sub-base models are adjusted, through the material control node, to the material map parameters obtained in step 102 to obtain the target style model. In this way, the material of each sub-base model in the association model can be adjusted to the material of the required style, forming the target style model of the target scene; the scene style of the target scene in terms of material is thus adjusted while reducing repeated work.
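For illustration only, the following sketch drives the two UseColor branches above through Unreal's Python editor scripting; the asset paths and the RegionColor/RegionMap parameter names are assumptions, while UseColor follows the description above.

```python
import unreal

mel = unreal.MaterialEditingLibrary

# Material instance applied to the color matching region (path hypothetical).
mi = unreal.EditorAssetLibrary.load_asset("/Game/Scene/MI_ColorRegion")

use_color = 1  # the material exposure parameter obtained in step 102

if use_color == 1:
    # Material color parameter branch: tint the region with a solid color.
    mel.set_material_instance_scalar_parameter_value(mi, "UseColor", 1.0)
    mel.set_material_instance_vector_parameter_value(
        mi, "RegionColor", unreal.LinearColor(0.85, 0.32, 0.10, 1.0))
else:
    # Material map parameter branch: drive the region with a texture map.
    tex = unreal.EditorAssetLibrary.load_asset("/Game/Scene/T_RegionMap")
    mel.set_material_instance_scalar_parameter_value(mi, "UseColor", 0.0)
    mel.set_material_instance_texture_parameter_value(mi, "RegionMap", tex)
```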
When the material map parameter is used to adjust the material parameters of the color matching region corresponding to the sub-base model, texture mapping is employed. The following describes an example of adjusting the material parameters of the color matching region corresponding to the sub-base model by mapping: performing UV unwrapping on the color matching region corresponding to the sub-base model to obtain a UV unwrapping result; and mapping the material map parameter to the corresponding positions according to the UV unwrapping result to obtain the mapped color matching region of the sub-base model, thereby completing the mapping.
In a model, UV coordinates accurately correspond each point on the image (i.e., the texture map) to a position on the model surface, so that the model presents the intended visual effect; UV unwrapping refers to flattening the model surface into a planar representation.
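As a hedged sketch of this step, the following wires a UV Unwrap SOP ahead of the color matching region geometry via Houdini's Python API; all node names are illustrative, not part of the embodiment.

```python
import hou

geo = hou.node("/obj/scene_geo")      # container geometry (hypothetical)
region = geo.node("color_region")     # SOP holding the color matching region

# UV-unwrap the region so the material map lands on the right positions.
unwrap = geo.createNode("uvunwrap", "region_uv_unwrap")
unwrap.setFirstInput(region)

# Display/render the unwrapped result; the map is then projected via these UVs.
unwrap.setDisplayFlag(True)
unwrap.setRenderFlag(True)
```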
For the specific setting of the material parameters, reference may be made to the related description of step 102; for brevity, it is not repeated here.
2) In step 103, the control parameters of the sub-base model are adjusted based on the Objectmerge node and the Transform node exposed in Houdini.
That is, the target style control parameters are specifically the position parameters and resource parameters of the sub-base model. Adjusting the control parameters of the sub-base model is equivalent to adjusting its position parameters and resource parameters to those obtained in step 102. In step 103, the resource parameters and the position parameters of the sub-base model are adjusted based on the Objectmerge node and the Transform node exposed in Houdini, respectively.
At this time, the sub-base model includes a target pattern model of the target scene, the style control node includes a pattern control node of the target pattern model, and the target style control parameters include the position parameter and the resource parameter of the target pattern model. Step 103 may specifically include the following step 1031F:
1031F, based on the pattern control node, adjusting the control parameters of the pattern model according to the position parameters and the resource parameters to obtain the target style model.
In step 1031F, the pattern control node specifically refers to an Objectmerge node and a Transform node set for the pattern model in Houdini.
The position parameter specifically refers to a parameter of the Transform node set for the pattern model.
The resource parameter specifically refers to a parameter of the Objectmerge node set for the pattern model.
For the specific setting of the position parameter and the resource parameter, reference may be made to the related description of step 102; details are not repeated here for brevity.
The position parameters and resource parameters of the pattern model are adjusted, through the pattern control node, to those obtained in step 102 to obtain the target style model. In this way, a pattern model of the required style can be selected from the pattern models of the association model and operated on (e.g., rotated, scaled, or translated) to form the target style model of the target scene; the scene style of the target scene in terms of patterns is thus adjusted while reducing repeated work.
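As a sketch only, the exposed Objectmerge and Transform nodes can be driven from Houdini's Python API as below; the node names, the object path, and the transform values are assumptions for illustration.

```python
import hou

geo = hou.node("/obj/scene_geo")

# Resource parameter: point the exposed Objectmerge node at the pattern model
# chosen for the desired style (object path hypothetical).
merge = geo.node("pattern_merge")
merge.parm("objpath1").set("/obj/pattern_library/flower_pattern")

# Position parameter: place the pattern via the exposed Transform node
# (translate / rotate / scale, i.e. the operations mentioned above).
xform = geo.node("pattern_transform")
xform.parmTuple("t").set((2.0, 0.0, 1.5))   # translation
xform.parmTuple("r").set((0.0, 45.0, 0.0))  # rotation in degrees
xform.parmTuple("s").set((1.2, 1.2, 1.2))   # scaling
```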
Therefore, the association model of the target scene includes at least one sub-base model carrying a style control node, and by obtaining the target style control parameters of the target scene and adjusting the control parameters of the sub-base model, the display of the scene style corresponding to the sub-base model can be controlled; the scene style of the association model is thereby adjusted to obtain the target style model of the target scene. As a result, modeled assets can be reused across different scene styles, avoiding the large amount of repeated work that arises when a scene model must be built and imported into the game engine separately for each scene style; repeated workload during scene model production is thus avoided to a certain extent, and production efficiency is improved.
In order to better implement the method, an embodiment of the present invention further provides a scene model generating apparatus, where the scene model generating apparatus may be specifically integrated in an electronic device, for example, a computer device, and the computer device may be a terminal, a server, and the like.
The terminal can be a mobile phone, a tablet computer, an intelligent Bluetooth device, a notebook computer, a personal computer and other devices; the server may be a single server or a server cluster composed of a plurality of servers.
For example, in this embodiment, the scene model generation apparatus is taken as being specifically integrated in a smartphone, and the apparatus of the embodiment of the present invention is described in detail on that basis.
For example, as shown in fig. 7, the scene model generation apparatus may include:
a first obtaining unit 701, configured to obtain an association model of a target scene, where the association model includes at least one sub-base model of the target scene, and the sub-base model carries a style control node;
a second obtaining unit 702, configured to obtain a target style control parameter of the target scene;
an adjusting unit 703 is configured to adjust, based on the style control node, a control parameter of the sub-base model according to the target style control parameter, to obtain a target style model of the target scene, where the control parameter of the sub-base model is used to control display of a scene style corresponding to the sub-base model.
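To make the division of labor between the three units concrete, here is a minimal, self-contained Python sketch; every class, field, and function name is hypothetical and chosen only to mirror units 701-703, not taken from the embodiment.

```python
from dataclasses import dataclass, field


@dataclass
class SubBaseModel:
    name: str
    control_params: dict = field(default_factory=dict)  # drives style display

    def apply(self, style_params: dict) -> None:
        # The style control node overwrites the control parameters that
        # govern how this sub-base model's scene style is displayed.
        self.control_params.update(style_params.get(self.name, {}))


@dataclass
class AssociationModel:
    sub_base_models: list


def generate_target_style_model(association: AssociationModel,
                                style_params: dict) -> AssociationModel:
    """Mirror of units 701-703: given the association model (first obtaining
    unit) and the target style control parameters (second obtaining unit),
    the adjusting unit rewrites each sub-base model's control parameters."""
    for sub in association.sub_base_models:
        sub.apply(style_params)
    return association  # now the target style model


# Usage: restyle a line model and a color matching region model in one pass.
model = AssociationModel([SubBaseModel("lines"), SubBaseModel("regions")])
night_style = {"lines": {"visible": 0}, "regions": {"color": (0.1, 0.1, 0.3)}}
generate_target_style_model(model, night_style)
```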
In some embodiments, the sub-base model includes at least one of a line model, a color matching region model, and a pattern model, and the first obtaining unit 701 is specifically configured to:
obtaining at least one line model of the target scene;
obtaining at least one color matching region model of the target scene;
acquiring at least one pattern model of the target scene;
and combining the at least one line model, the at least one color matching region model and the at least one pattern model through a preset combination control node to obtain the association model.
In some embodiments, the style control node comprises a target display control node of the sub-base model, the target style control parameter comprises at least one of a target implicit display parameter and a model color parameter of the sub-base model, and the adjusting unit 703 is specifically configured to:
and adjusting the control parameters of the sub-base model according to at least one of the target implicit display parameters and the model color parameters based on the target display control node to obtain the target style model.
In some embodiments, the sub-base model comprises a target line model of the target scene, the target display control node comprises a first display control node of the target line model, the target implicit display parameter comprises a first implicit display parameter of the target line model, and the adjusting unit 703 is specifically configured to:
and adjusting the control parameters of the target line model according to the first implicit display parameters based on the first display control node to obtain the target style model.
In some embodiments, the model color parameters further include a first model color parameter of the target line model, and the adjusting unit 703 is specifically configured to:
and adjusting the control parameters of the target line model according to the color parameters of the first model based on the first display control node to obtain the target style model.
In some embodiments, the sub-base model includes a target color matching region model of the target scene, the target display control node includes a second display control node of the target color matching region model, the target implicit display parameter includes a second implicit display parameter of the target color matching region model, and the adjusting unit 703 is specifically configured to:
and adjusting the control parameters of the target color matching area model according to the second implicit display parameters based on the second display control node to obtain the target style model.
In some embodiments, the target style control parameter further includes a second model color parameter of the target color matching region model, and the adjusting unit 703 is specifically configured to:
and adjusting the control parameters of the target color matching area model according to the second model color parameters based on the second display control node to obtain the target style model.
In some embodiments, the style control node includes a material control node of the sub-base model, the target style control parameter includes a material parameter of a color matching region corresponding to the sub-base model, and the second obtaining unit 702 is specifically configured to:
extracting the color matching areas corresponding to the sub-base models in the association model based on the lerp function;
obtaining material parameters of the color matching area;
in some embodiments, the adjusting unit 703 is specifically configured to:
and adjusting the control parameters of the color matching area according to the material parameters based on the material control nodes to obtain the target style model.
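The lerp-based extraction can be pictured with a short NumPy sketch; the mask, base color, and region color below are invented solely for illustration and are not values from the embodiment.

```python
import numpy as np


def lerp(a: np.ndarray, b: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Linear interpolation: a + (b - a) * t, with t acting as a mask."""
    return a + (b - a) * t


# Hypothetical 2x2 mask marking one sub-base model's color matching region
# (1 inside the region, 0 outside), broadcast over the RGB channels.
mask = np.array([[0.0, 1.0],
                 [1.0, 0.0]])[..., None]
base = np.full((2, 2, 3), 0.5)                 # existing material color
region = np.broadcast_to(np.array([0.85, 0.32, 0.10]), base.shape)

# Only pixels inside the mask take the region's material color, which is how
# a lerp carves a color matching area out of the association model.
adjusted = lerp(base, region, mask)
```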
In some embodiments, the second obtaining unit 702 is specifically configured to:
acquiring a target material ball for adjusting the target style model;
acquiring material color parameters of the color matching area based on the target material ball;
in some embodiments, the adjusting unit 703 is specifically configured to:
and adjusting the control parameters of the color matching area according to the material color parameters based on the material control nodes to obtain the target style model.
In some embodiments, the second obtaining unit 702 is specifically configured to:
obtaining a material map parameter of the color matching area;
in some embodiments, the adjusting unit 703 is specifically configured to:
and adjusting the control parameters of the color matching area according to the material map parameters based on the material control nodes to obtain the target style model.
In some embodiments, the second obtaining unit 702 is specifically configured to:
when the material exposure parameter is 0, acquiring a material map parameter of the color matching area;
and when the material exposure parameter is 1, acquiring the material color parameter of the color matching area.
In some embodiments, the sub-base model comprises a target pattern model of the target scene, the style control node comprises a pattern control node of the target pattern model, the target style control parameters comprise a position parameter and a resource parameter of the target pattern model, and the adjusting unit 703 is specifically configured to:
and adjusting the control parameters of the pattern model according to the position parameters and the resource parameters based on the pattern control nodes to obtain the target style model.
As can be seen from the above, in the scene model generation apparatus of this embodiment, the first obtaining unit 701 obtains the association model of the target scene, where the association model includes at least one sub-base model of the target scene and the sub-base model carries a style control node; the second obtaining unit 702 obtains the target style control parameters of the target scene; and the adjusting unit 703 adjusts, based on the style control node, the control parameters of the sub-base model according to the target style control parameters to obtain the target style model of the target scene, where the control parameters of the sub-base model are used to control the display of the scene style corresponding to the sub-base model. Since the association model includes at least one sub-base model carrying a style control node, adjusting the control parameters of the sub-base model through the target style control parameters controls the display of the corresponding scene style, and the scene style of the association model is thereby adjusted to obtain the target style model of the target scene. Modeled assets can therefore be reused across different scene styles, avoiding the large amount of repeated work caused by building a scene model and importing it into the game engine separately for each scene style; repeated workload during scene model production is avoided to a certain extent, and production efficiency is improved.
Correspondingly, the embodiment of the present application further provides an electronic device, where the electronic device may be a terminal, and the terminal may be a terminal device such as a smartphone, a tablet computer, a notebook computer, a touch screen, a game machine, a Personal Computer (PC), or a Personal Digital Assistant (PDA). As shown in fig. 8, fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device 800 includes a processor 801 with one or more processing cores, a memory 802 with one or more computer-readable storage media, and a computer program stored on the memory 802 and executable on the processor. The processor 801 is electrically connected to the memory 802. Those skilled in the art will appreciate that the electronic device configuration shown in the figure does not constitute a limitation of the electronic device, which may include more or fewer components than shown, combine certain components, or employ a different arrangement of components.
The processor 801 is a control center of the electronic device 800, connects various parts of the entire electronic device 800 using various interfaces and lines, and performs various functions of the electronic device 800 and processes data by running or loading software programs and/or modules stored in the memory 802 and calling data stored in the memory 802, thereby performing overall monitoring of the electronic device 800.
In this embodiment, the processor 801 in the electronic device 800 loads instructions corresponding to processes of one or more application programs into the memory 802, and the processor 801 executes the application programs stored in the memory 802 according to the following steps, so as to implement various functions:
acquiring an association model of a target scene, wherein the association model comprises at least one sub-base model of the target scene, and the sub-base model carries a style control node;
acquiring a target style control parameter of the target scene;
and based on the style control node, adjusting the control parameters of the sub-base model according to the target style control parameters to obtain a target style model of the target scene, wherein the control parameters of the sub-base model are used for controlling the display of the scene style corresponding to the sub-base model.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
Optionally, as shown in fig. 8, the electronic device 800 further includes: a touch display 803, a radio frequency circuit 804, an audio circuit 805, an input unit 806, and a power supply 807. The processor 801 is electrically connected to the touch display screen 803, the radio frequency circuit 804, the audio circuit 805, the input unit 806, and the power supply 807, respectively. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 8 does not constitute a limitation of the electronic device and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The touch display screen 803 can be used for displaying a graphical user interface and receiving operation instructions generated by a user acting on the graphical user interface. The touch display 803 may include a display panel and a touch panel. The display panel may be used to display information entered by or provided to the user and the various graphical user interfaces of the electronic device, which may be made up of graphics, text, icons, video, and any combination thereof. Optionally, the display panel may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. The touch panel may be used to collect touch operations of the user on or near it (for example, operations performed by the user on or near the touch panel using a finger, a stylus, or any other suitable object or accessory) and generate corresponding operation instructions, which in turn execute corresponding programs. Optionally, the touch panel may include two parts: a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 801, and can receive and execute commands sent by the processor 801. The touch panel may overlay the display panel; when the touch panel detects a touch operation on or near it, the touch panel transmits the operation to the processor 801 to determine the type of the touch event, and the processor 801 then provides a corresponding visual output on the display panel according to the type of the touch event. In the embodiment of the present application, the touch panel and the display panel may be integrated into the touch display screen 803 to realize the input and output functions. However, in some embodiments, the touch panel and the display panel may be implemented as two separate components to perform the input and output functions. That is, the touch display 803 may also serve as a part of the input unit 806 to implement an input function.
The radio frequency circuit 804 may be configured to transmit and receive radio frequency signals to establish wireless communication with a network device or other electronic devices through wireless communication, and transmit and receive signals with the network device or other electronic devices.
The audio circuit 805 may be used to provide an audio interface between the user and the electronic device through a speaker, a microphone, and the like. The audio circuit 805 may transmit an electrical signal, converted from received audio data, to the speaker, which converts it into a sound signal for output; conversely, the microphone converts a collected sound signal into an electrical signal, which is received by the audio circuit 805 and converted into audio data; the audio data is processed by the processor 801 and then sent to another electronic device via the radio frequency circuit 804, or output to the memory 802 for further processing. The audio circuit 805 may also include an earphone jack to provide communication between a peripheral headset and the electronic device.
The input unit 806 may be used to receive input numbers, character information, or user characteristic information (e.g., fingerprint, iris, facial information, etc.), and generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
The power supply 807 is used to power the various components of the electronic device 800. Optionally, the power supply 807 may be logically connected to the processor 801 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system. The power supply 807 may also include any component of one or more dc or ac power sources, recharging systems, power failure detection circuitry, power converters or inverters, power status indicators, and the like.
Although not shown in fig. 8, the electronic device 800 may further include a camera, a sensor, a wireless fidelity module, a bluetooth module, etc., which are not described in detail herein.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
As can be seen from the above, since the association model of the target scene includes at least one sub-base model carrying a style control node, the electronic device provided in this embodiment can control the display of the scene style corresponding to the sub-base model by obtaining the target style control parameters of the target scene and adjusting the control parameters of the sub-base model, thereby adjusting the scene style of the association model of the target scene to obtain the target style model. Modeled assets can therefore be reused across different scene styles, avoiding the large amount of repeated work caused by building a scene model and importing it into the game engine separately for each scene style. The electronic device provided in this embodiment thus avoids repeated workload during scene model production to a certain extent and improves production efficiency.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.
To this end, the present application provides a computer-readable storage medium, in which a plurality of computer programs are stored, where the computer programs can be loaded by a processor to execute the steps in any one of the scene model generation methods provided in the present application. For example, the computer program may perform the steps of:
acquiring an association model of a target scene, wherein the association model comprises at least one sub-base model of the target scene, and the sub-base model carries a style control node;
acquiring a target style control parameter of the target scene;
and based on the style control node, adjusting the control parameters of the sub-base model according to the target style control parameters to obtain a target style model of the target scene, wherein the control parameters of the sub-base model are used for controlling the display of the scene style corresponding to the sub-base model.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
Wherein the computer-readable storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Since the computer program stored in the computer-readable storage medium can execute the steps in any scene model generation method provided in the embodiments of the present application, beneficial effects that can be achieved by any scene model generation method provided in the embodiments of the present application can be achieved, for details, see the foregoing embodiments, and are not described herein again.
The method, the apparatus, the electronic device, and the computer-readable storage medium for generating a scene model provided in the embodiments of the present application are described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the embodiments is only intended to help understand the method and its core idea. Meanwhile, those skilled in the art may make changes to the specific implementations and the application scope according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (15)

1. A method for generating a scene model, comprising:
acquiring an association model of a target scene, wherein the association model comprises at least one sub-base model of the target scene, and the sub-base model carries a style control node;
acquiring a target style control parameter of the target scene;
and based on the style control node, adjusting the control parameters of the sub-base model according to the target style control parameters to obtain a target style model of the target scene, wherein the control parameters of the sub-base model are used for controlling the display of the scene style corresponding to the sub-base model.
2. The scene model generation method of claim 1, wherein the sub-base model includes at least one of a line model, a color matching region model, and a pattern model, and the obtaining an association model of a target scene includes:
obtaining at least one line model of the target scene;
obtaining at least one color matching region model of the target scene;
acquiring at least one pattern model of the target scene;
and combining the at least one line model, the at least one color matching region model and the at least one pattern model through a preset combination control node to obtain the association model.
3. The scene model generation method according to claim 1, wherein the style control node includes a target display control node of the sub-base model, and the target style control parameter includes at least one of a target implicit display parameter and a model color parameter of the sub-base model;
the adjusting the control parameters of the sub-base model according to the target style control parameters based on the style control node to obtain the target style model of the target scene includes:
and adjusting the control parameters of the sub-base model according to at least one of the target implicit display parameters and the model color parameters based on the target display control node to obtain the target style model.
4. The scene model generation method of claim 3, wherein the sub-base model comprises a target line model of the target scene, the target display control node comprises a first display control node of the target line model, and the target implicit display parameter comprises a first implicit display parameter of the target line model;
the adjusting, based on the target display control node, the control parameter of the sub-base model according to at least one of the target implicit parameter and the model color parameter to obtain the target style model includes:
and adjusting the control parameters of the target line model according to the first implicit display parameters based on the first display control node to obtain the target style model.
5. The scene model generation method according to claim 4, wherein the model color parameters further include a first model color parameter of the target line model;
the adjusting, based on the target display control node, the control parameter of the sub-base model according to at least one of the target implicit parameter and the model color parameter to obtain the target style model includes:
and adjusting the control parameters of the target line model according to the color parameters of the first model based on the first display control node to obtain the target style model.
6. The scene model generation method of claim 3, wherein the sub-base model comprises a target color matching region model of the target scene, the target display control node comprises a second display control node of the target color matching region model, and the target implicit display parameter comprises a second implicit display parameter of the target color matching region model;
the adjusting, based on the target display control node, the control parameter of the sub-base model according to at least one of the target implicit parameter and the model color parameter to obtain the target style model includes:
and adjusting the control parameters of the target color matching area model according to the second implicit display parameters based on the second display control node to obtain the target style model.
7. The scene model generation method of claim 6, wherein the target style control parameters further include a second model color parameter of the target color matching region model;
the adjusting, based on the target display control node, the control parameter of the sub-base model according to at least one of the target implicit parameter and the model color parameter to obtain the target style model includes:
and adjusting the control parameters of the target color matching area model according to the second model color parameters based on the second display control node to obtain the target style model.
8. The scene model generation method according to claim 1, wherein the style control node includes a material control node of the sub-base model, and the target style control parameter includes a material parameter of a color matching region corresponding to the sub-base model;
the obtaining of the target style control parameter of the target scene includes:
extracting the color matching areas corresponding to the sub-base models in the association model;
obtaining material parameters of the color matching area;
the adjusting the control parameters of the sub-base model according to the target style control parameters based on the style control node to obtain the target style model of the target scene includes:
and adjusting the control parameters of the color matching area according to the material parameters based on the material control nodes to obtain the target style model.
9. The method for generating a scene model according to claim 8, wherein said obtaining material parameters of said color matching regions comprises:
acquiring a target material ball for adjusting the target style model;
acquiring material color parameters of the color matching area based on the target material ball;
the adjusting the control parameters of the color matching region according to the material parameters based on the material control nodes to obtain the target style model comprises:
and adjusting the control parameters of the color matching area according to the material color parameters based on the material control nodes to obtain the target style model.
10. The method for generating a scene model according to claim 8, wherein said obtaining material parameters of said color matching regions comprises:
obtaining a material map parameter of the color matching area;
the adjusting the control parameters of the color matching region according to the material parameters based on the material control nodes to obtain the target style model comprises:
and adjusting the control parameters of the color matching area according to the material map parameters based on the material control nodes to obtain the target style model.
11. The method for generating a scene model according to claim 8, wherein the material control node includes a material exposure parameter, and the obtaining the material parameter of the color matching region includes:
when the material exposure parameter is 0, acquiring a material map parameter of the color matching area;
and when the material exposure parameter is 1, acquiring the material color parameter of the color matching area.
12. The scene model generation method of any one of claims 1 to 11, wherein the sub-base model includes a target pattern model of the target scene, the style control node includes a pattern control node of the target pattern model, and the target style control parameters include a position parameter and a resource parameter of the target pattern model;
the adjusting the control parameters of the sub-base model according to the target style control parameters based on the style control node to obtain the target style model of the target scene includes:
and adjusting the control parameters of the pattern model according to the position parameters and the resource parameters based on the pattern control nodes to obtain the target style model.
13. A scene model generation apparatus, comprising:
a first obtaining unit, configured to obtain an association model of a target scene, wherein the association model comprises at least one sub-base model of the target scene, and the sub-base model carries a style control node;
a second obtaining unit, configured to obtain a target style control parameter of the target scene;
and the adjusting unit is used for adjusting the control parameters of the sub-base model according to the target style control parameters based on the style control node to obtain a target style model of the target scene, and the control parameters of the sub-base model are used for controlling the display of the scene style corresponding to the sub-base model.
14. An electronic device comprising a processor and a memory, the memory storing a plurality of instructions; the processor loads instructions from the memory to perform the steps of the method of generating a scene model according to any one of claims 1 to 12.
15. A computer-readable storage medium storing instructions adapted to be loaded by a processor to perform the steps of the method for generating a scene model according to any one of claims 1 to 12.