CN114332335A - Lamplight processing method, device, equipment and medium of three-dimensional enhanced model

Info

Publication number: CN114332335A
Application number: CN202111633831.9A
Authority: CN (China)
Prior art keywords: light, node, model, dimensional, target
Legal status: Pending (the status listed is an assumption, not a legal conclusion)
Other languages: Chinese (zh)
Inventors: 张腾飞, 邓竹立, 彭飞
Current assignee: Beijing 58 Information Technology Co Ltd
Original assignee: Beijing 58 Information Technology Co Ltd
Application filed by Beijing 58 Information Technology Co Ltd
Priority to CN202111633831.9A
Publication of CN114332335A

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the invention provides a light processing method, device, equipment and medium for a three-dimensional enhanced model. The method includes: in response to a display operation directed at a three-dimensional enhanced model, determining an initial three-dimensional enhanced model corresponding to the display operation and obtaining an external light model file corresponding to the initial three-dimensional enhanced model, the initial three-dimensional enhanced model including a plurality of first light nodes; adding preset light parameters from the light model file to the first light nodes of the initial three-dimensional enhanced model to complete the light parameter configuration and obtain a target three-dimensional enhanced model; and rendering and displaying the target three-dimensional enhanced model. Because the light parameters are added to the light nodes of the three-dimensional enhanced model through the light model file, light parameters no longer need to be configured inside the three-dimensional enhanced model itself, which effectively simplifies the configuration process of the three-dimensional enhanced model, reduces the size of its model resources, and effectively reduces the resource burden on the terminal.

Description

Lamplight processing method, device, equipment and medium of three-dimensional enhanced model
Technical Field
The present invention relates to the field of augmented reality technologies, and in particular, to a light processing method for a three-dimensional augmented model, a light processing apparatus for a three-dimensional augmented model, an electronic device, and a computer-readable storage medium.
Background
Augmented reality technology can "seamlessly" integrate real-world information with virtual-world information, allowing the real world to interact with a three-dimensional augmented model in a terminal. As the computing power of electronic products improves, augmented reality is applied ever more widely. For example, when augmented reality is combined with scenes such as house sources and vehicle sources, a corresponding light effect is usually added to a three-dimensional augmented model (such as a house model or a vehicle model) to enhance its sense of reality in the real-scene picture presented by the terminal, and different light effects further improve the realism of the model display. However, when light effects are added to a model, one model often corresponds to one light effect, and whenever the light effect of the model needs to be adjusted, a new light effect has to be reconfigured, so configuring the light parameters of the model becomes overly cumbersome. Moreover, when the light effects that need to be added to a model are overly complex, the resources corresponding to the light data easily become redundant and place a heavy resource burden on the terminal.
Disclosure of Invention
The embodiment of the invention provides a light processing method and apparatus for a three-dimensional enhancement model, an electronic device, and a computer-readable storage medium, so as to solve or partially solve the problems in the related art that configuring light parameters for a three-dimensional enhancement model is too complicated and easily causes resource redundancy and a resource burden.
The embodiment of the invention discloses a light processing method of a three-dimensional enhanced model, which comprises the following steps:
responding to a display operation of a three-dimensional enhanced model, determining an initial three-dimensional enhanced model corresponding to the display operation, and acquiring a light model file aiming at the initial three-dimensional enhanced model, wherein the initial three-dimensional enhanced model comprises a plurality of first light nodes;
adding preset lighting parameters into a first lighting node of the initial three-dimensional enhancement model according to the lighting model file to generate a target three-dimensional enhancement model;
and displaying the target three-dimensional enhancement model.
Optionally, the light model file at least includes a plurality of second light nodes and preset light parameters of the second light nodes, and the adding, according to the light model file, the preset light parameters to the first light nodes of the initial three-dimensional enhanced model to generate the target three-dimensional enhanced model includes:
acquiring first node information of the first lighting node and second node information of the second lighting node;
matching the first node information with the second node information, taking a first light node corresponding to the successfully matched first node information as a first target light node, and taking a second light node corresponding to the successfully matched second node information as a second target light node;
and adding the preset light parameters of the second target light node into the first target light node to generate a target three-dimensional enhanced model.
Optionally, the first node information includes a first node identifier corresponding to the first light node in the initial three-dimensional enhanced model, the second node information includes a second node identifier corresponding to the second light node in the light model file, and the step of matching the first node information with the second node information, taking the first light node corresponding to the successfully matched first node information as a first target light node, and taking the second light node corresponding to the successfully matched second node information as a second target light node includes:
matching the first node identification with the second node identification, and determining a first target node identification which is successfully matched and a second target node identification which corresponds to the first target node identification;
and taking a first light node corresponding to the first target node identification as a first target light node, and taking a second light node corresponding to the second target node identification as a second target light node.
Optionally, the first node information includes first position information corresponding to the first light node in the initial three-dimensional enhanced model, the second node information includes second position information corresponding to the second light node in the light model file, and the step of matching the first node information with the second node information, taking the first light node corresponding to the successfully matched first node information as a first target light node, and taking the second light node corresponding to the successfully matched second node information as a second target light node includes:
matching the first position information with the second position information, and determining successfully matched first target position information and second target position information corresponding to the first target position information;
and taking a first light node corresponding to the first target position information as a first target light node, and taking a second light node corresponding to the second target position information as a second target light node.
Optionally, the method further comprises:
and if the matching of the first node information and each second node information fails, displaying the initial enhanced three-dimensional model.
Optionally, the obtaining a light model file for the initial three-dimensional enhanced model includes:
and obtaining the model type of the initial three-dimensional enhanced model, and determining a light model file matched with the model type.
Optionally, the preset lighting parameter at least includes one of a lighting color, a lighting brightness, a light source type, a light source reduction distance, and a light source color temperature.
The embodiment of the invention also discloses a light processing device of the three-dimensional enhanced model, which comprises:
the initial three-dimensional enhancement model determining module is used for responding to a display operation of a three-dimensional enhancement model, determining an initial three-dimensional enhancement model corresponding to the display operation, and obtaining a light model file for the initial three-dimensional enhancement model, wherein the initial three-dimensional enhancement model comprises a plurality of first light nodes;
the target three-dimensional enhancement model generation module is used for adding preset lighting parameters into a first lighting node of the initial three-dimensional enhancement model according to the lighting model file to generate a target three-dimensional enhancement model;
and the target three-dimensional enhancement model display module is used for displaying the target three-dimensional enhancement model.
Optionally, the light model file at least includes a plurality of second light nodes and preset light parameters of the second light nodes, and the target three-dimensional enhancement model generation module includes:
the node information acquisition submodule is used for acquiring first node information of the first lighting node and second node information of the second lighting node;
the target lamp light node generation submodule is used for matching the first node information with the second node information, taking a first lamp light node corresponding to the successfully matched first node information as a first target lamp light node, and taking a second lamp light node corresponding to the successfully matched second node information as a second target lamp light node;
and the target three-dimensional enhancement model generation submodule is used for adding the preset light parameters of the second target light node into the first target light node to generate a target three-dimensional enhancement model.
Optionally, the first node information includes a first node identifier corresponding to the first lighting node in the initial three-dimensional enhanced model, the second node information includes a second node identifier corresponding to the second lighting node in the lighting model file, and the target lighting node generation submodule is specifically configured to:
matching the first node identification with the second node identification, and determining a first target node identification which is successfully matched and a second target node identification which corresponds to the first target node identification;
and taking a first light node corresponding to the first target node identification as a first target light node, and taking a second light node corresponding to the second target node identification as a second target light node.
Optionally, the first node information includes first position information corresponding to the first lighting node in the initial three-dimensional enhanced model, the second node information includes second position information corresponding to the second lighting node in the lighting model file, and the target lighting node generation submodule is specifically configured to:
matching the first position information with the second position information, and determining successfully matched first target position information and second target position information corresponding to the first target position information;
and taking a first light node corresponding to the first target position information as a first target light node, and taking a second light node corresponding to the second target position information as a second target light node.
Optionally, the device further comprises:
and the initial enhanced three-dimensional model display module is used for displaying the initial enhanced three-dimensional model if the matching of the first node information and each second node information fails.
Optionally, the initial three-dimensional enhanced model determining module is specifically configured to:
and obtaining the model type of the initial three-dimensional enhanced model, and determining a light model file matched with the model type.
Optionally, the preset lighting parameter at least includes one of a lighting color, a lighting brightness, a light source type, a light source reduction distance, and a light source color temperature.
The embodiment of the invention also discloses an electronic device, which comprises:
one or more processors; and
one or more machine-readable media having instructions stored thereon, which when executed by the one or more processors, cause the electronic device to perform the method as described above.
Embodiments of the invention also disclose one or more machine-readable media having instructions stored thereon, which when executed by one or more processors, cause the processors to perform the methods as described above.
The embodiment of the invention has the following advantages:
in the embodiment of the invention, in response to a display operation directed at a three-dimensional enhanced model, an initial three-dimensional enhanced model corresponding to the display operation is determined, and an external light model file corresponding to the initial three-dimensional enhanced model is obtained, where the initial three-dimensional enhanced model may comprise a plurality of first light nodes. Preset light parameters in the light model file can then be added to the first light nodes of the initial three-dimensional enhanced model to complete the light parameter configuration and obtain a target three-dimensional enhanced model, and the target three-dimensional enhanced model can be rendered and displayed. Because the light parameters are added to the light nodes of the three-dimensional enhanced model through the light model file, the light parameters no longer need to be configured inside the three-dimensional enhanced model itself, which effectively simplifies the configuration process of the three-dimensional enhanced model, reduces the size of its model resources, and effectively reduces the resource burden on the terminal.
Drawings
FIG. 1 is a flowchart illustrating steps of a method for processing light of a three-dimensional augmented model according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating steps of a method for processing light of a three-dimensional augmented model according to an embodiment of the present invention;
FIG. 3 is a block diagram of a light processing device of a three-dimensional enhanced model according to an embodiment of the present invention;
FIG. 4 is a block diagram of an electronic device provided in an embodiment of the invention;
fig. 5 is a schematic diagram of a computer-readable medium provided in an embodiment of the invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
The light processing method of the three-dimensional enhancement model in the embodiment of the invention can run on a terminal device or a server. The terminal device may be a local terminal device. When the light processing method of the three-dimensional enhancement model runs on a server, the three-dimensional enhancement model can be presented by means of cloud display.
In an optional embodiment, cloud display refers to an information presentation manner based on cloud computing. In the cloud display mode of operation, the body that runs the information processing program is separated from the body that presents the information picture: the storage and execution of the light processing method of the three-dimensional enhanced model are completed on a cloud display server, while a cloud display client receives and sends data and presents the information picture. The cloud display client may be, for example, a display device with a data transmission function that is close to the user side, such as a mobile terminal, a television, a computer, or a palmtop computer, but the terminal device that processes the information data is the cloud display server in the cloud. When light processing is performed on the three-dimensional enhanced model, the user operates the cloud display client to send an operation instruction to the cloud display server; the cloud display server performs light processing on the three-dimensional enhanced model according to the operation instruction, encodes and compresses the data, and returns it to the cloud display client through the network; finally, the cloud display client decodes and outputs the three-dimensional enhanced model after the light processing.
In another alternative embodiment, the terminal device may be a local terminal device. The local terminal device stores an application program and is used for presenting an application interface. The local terminal device interacts with the user through a graphical user interface, that is, the application program is downloaded, installed, and run on the electronic device in the usual way. The local terminal device may provide the graphical user interface to the user in a variety of ways; for example, the interface may be rendered and displayed on a display screen of the terminal, or provided to the user by holographic projection. For example, the local terminal device may include a display screen for presenting the graphical user interface, which includes an application picture, and a processor for running the application, generating the graphical user interface, and controlling the display of the graphical user interface on the display screen.
With the continuous development and improvement of augmented reality technology, its fields of application have grown wider and wider, especially in fields such as house sources, vehicle sources, and games, where technicians enhance the realism of three-dimensional augmented models in the real world by continuously optimizing their display forms, display lights, and so on. For example, when configuring a three-dimensional enhancement model, the realism of the model in the real-scene picture can be enhanced by adding a light effect to it, achieving a more lifelike result. A user can then browse different three-dimensional enhancement models through the terminal device and comprehensively and deeply obtain more of the relevant information corresponding to each model, which improves the user experience.
Augmented Reality (AR) is a new technology that can "seamlessly" integrate real-world information with virtual-world information. Physical information (visual information, sound, touch, and the like) that is otherwise difficult to experience within a certain time and space range of the real world is simulated by means of computers and other scientific technologies, and the resulting virtual information is superimposed on the real-scene picture. Using different terminal devices, the user perceives this virtual information intuitively and thereby experiences physical information that is difficult to present in the real world, with the real environment and virtual objects superimposed on the same picture, or coexisting in the same space, in real time. Real-world information is therefore displayed while virtual information is displayed in real time, and the two kinds of information supplement and overlay each other. In visual augmented reality, the user can see the surrounding real world composited with computer graphics by using an associated AR device.
However, in the existing display process of a three-dimensional enhanced model, in order to enhance the realism of the model presented in the real-scene picture, a number of different light nodes are often configured directly in the three-dimensional enhanced model, so that when the model is displayed, suitable light is obtained through these different light nodes to present a rich light effect. In practical applications, a larger three-dimensional enhanced model may be configured with tens or even hundreds of different light nodes, and when a user interacts with several three-dimensional enhanced models, the number of models increases and the number of light nodes that must be configured for them grows rapidly. As a result, in the process of configuring a three-dimensional enhanced model, the cumbersome configuration of light nodes not only causes a heavy workload and a high error rate, but the more light nodes and light parameters are configured, the larger the resource package of the three-dimensional enhanced model easily becomes, which places a heavy burden on the application program when the model is downloaded or configured.
In view of the above, one of the core inventive points of the embodiments of the present invention is as follows: in response to a display operation on a three-dimensional enhanced model, an initial three-dimensional enhanced model corresponding to the display operation can be determined, and an external light model file corresponding to the initial three-dimensional enhanced model can be obtained, where the initial three-dimensional enhanced model may include a plurality of first light nodes. Preset light parameters in the light model file can then be added to the first light nodes of the initial three-dimensional enhanced model to complete the light parameter configuration and obtain a target three-dimensional enhanced model, and the target three-dimensional enhanced model can be rendered and displayed. Because the light parameters are added to the light nodes of the three-dimensional enhanced model through the light model file, light parameters no longer need to be configured inside the three-dimensional enhanced model itself, which effectively simplifies the configuration process of the three-dimensional enhanced model, reduces the size of its model resources, and effectively reduces the resource burden on the terminal.
In order to make the embodiments of the present invention better understood, the data involved in the embodiments of the present invention are further described, and the following detailed description is given:
the light nodes may be nodes for carrying light parameters and include first light nodes and second light nodes. A first light node may be an empty node in the three-dimensional enhanced model whose light parameters are empty, and a second light node may be a node in the light model file that carries a different light parameter.
The node information may be specific information of a light node, such as its name and position, and includes first node information and second node information. The first node information may be specific information of a first light node and at least includes a first node identifier and first position information; the first node identifier may represent the name of the first light node (for example, an initial three-dimensional enhanced model may include a first light node light1 and a first light node light2), and the first position information may represent the specific position of the first light node in the initial three-dimensional enhanced model. The second node information is specific information of a second light node and at least includes a second node identifier and second position information; the second node identifier may represent the name of the second light node, and the second position information may represent the specific position of the second light node in the light model file.
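As a purely illustrative sketch (not part of the disclosure), the node information described above could be modelled with simple Swift value types; every type and field name below is an assumption:

```swift
/// Illustrative only: node information is an identifier (the node name)
/// plus position information, for both first and second light nodes.
struct Position: Equatable {
    var x: Float
    var y: Float
    var z: Float
}

struct LightNodeInfo: Equatable {
    let identifier: String   // e.g. "light1" in the model, "light I" in the file
    let position: Position   // where the node sits in the model or light model file
}

// Example values mirroring the description above: two first light nodes of an
// initial three-dimensional enhanced model, identified by name and position.
let firstNodeInfos = [
    LightNodeInfo(identifier: "light1", position: Position(x: 0.0, y: 2.4, z: 0.0)),
    LightNodeInfo(identifier: "light2", position: Position(x: 1.5, y: 2.4, z: -0.5)),
]
```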
Specifically, referring to fig. 1, a flowchart illustrating steps of a light processing method of a three-dimensional enhanced model provided in an embodiment of the present invention is shown, which may specifically include the following steps:
Step 101, responding to a display operation of a three-dimensional enhanced model, determining an initial three-dimensional enhanced model corresponding to the display operation, and acquiring a light model file for the initial three-dimensional enhanced model, wherein the initial three-dimensional enhanced model comprises a plurality of first light nodes;
in the embodiment of the invention, the three-dimensional enhanced model may be a model that the terminal device fuses into and displays in its graphical user interface, and the user can control the three-dimensional enhanced model with the corresponding terminal device in order to interact with it. The three-dimensional enhanced model may be of different types, such as a house source or a vehicle source. The initial three-dimensional enhanced model may be a three-dimensional enhanced model to which no light effect has been added, and different initial three-dimensional enhanced models of the same model type can correspond to the same light model file, so that these different initial three-dimensional enhanced models can obtain their corresponding light effects from the same light model file, which reduces the number of times light model files need to be configured and improves their utilization.
Optionally, the user may use the corresponding terminal device to display different three-dimensional enhanced models in a real-scene picture, so that specific information of, for example, a house source three-dimensional model can be obtained intuitively in the real-scene picture. In response to a display operation on a three-dimensional enhanced model, the initial three-dimensional enhanced model corresponding to the display operation is determined, and a pre-configured light model file corresponding to the initial three-dimensional enhanced model is then acquired, where the initial three-dimensional enhanced model includes a plurality of first light nodes and the first light nodes are all empty light nodes.
In a specific implementation, before the light model file is obtained, the model type of the initial three-dimensional enhanced model can be obtained, and the light model file matching that model type is then determined. Three-dimensional enhanced models of the same model type can use the same light model file, and the light model file corresponding to a model type is determined by distinguishing the model types of different initial three-dimensional enhanced models. In this way, one light model file can correspond to several three-dimensional enhanced models, which improves the utilization of light model files, reduces the number of times they need to be created, and improves the efficiency of light processing.
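For illustration, and assuming a simple in-memory mapping (the model-type strings and file names below are invented), determining the light model file from the model type could look like this:

```swift
// Hypothetical mapping: models of the same type share one light model file,
// so the model type alone is enough to select the file to load.
let lightModelFileByType: [String: String] = [
    "house": "house_lights.lightmodel",      // invented file names, for illustration only
    "vehicle": "vehicle_lights.lightmodel",
    "stage": "stage_lights.lightmodel",
]

func lightModelFileName(forModelType modelType: String) -> String? {
    lightModelFileByType[modelType]   // nil when no file matches the model type
}
```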
As an example, when a user browses a pre-sale property through a terminal device and wants to understand in depth the light effect of that property in the real world, the user can display the three-dimensional building model corresponding to the pre-sale property. The terminal can acquire the light model file corresponding to the three-dimensional building model, and interactive operations such as sliding and touching can be used to control the display angle of the model, so that by configuring the corresponding light model file for the three-dimensional building model, the user can intuitively see the light effect of the pre-sale property from every angle.
Step 102, adding preset lighting parameters into a first lighting node of the initial three-dimensional enhancement model according to the lighting model file to generate a target three-dimensional enhancement model;
in the embodiment of the invention, after the light model file corresponding to the initial three-dimensional enhancement model is obtained, the light model file can be added into the initial three-dimensional enhancement model, so that the initial three-dimensional enhancement model for display is combined with the light model file for realizing the light effect to generate the target three-dimensional enhancement model corresponding to the initial three-dimensional enhancement model and the light model file.
Optionally, the light model file corresponding to the model type of the initial three-dimensional enhanced model may include a plurality of second light nodes, and a different preset light parameter may be configured in each second light node. In the process of adding the light model file to the initial three-dimensional enhanced model, the preset light parameters in the light model file can therefore be added to the initial three-dimensional enhanced model to present the light effects corresponding to those preset light parameters, so that different light effects are obtained by configuring different light parameters and the target three-dimensional enhanced model configured with the light model file can satisfy user requirements.
In a specific implementation, the preset light parameters include at least one of a light color, a light brightness, a light source type, a light source reduction distance, and a light source color temperature. The preset light parameters are added to the initial three-dimensional enhanced model, and a target three-dimensional enhanced model carrying the preset light parameters is then generated, so that the light of the three-dimensional enhanced model can be customized for different usage scenarios and different user requirements.
The light colors in the preset light parameters include, for example, purple, green, red, and pink. The light brightness can be adjusted to the actual situation, for example 80% brightness or 50% brightness. The light source type can be a D50 light source, a TL84 light source, a U30 light source, a CWF light source, an F light source, an A light source, and so on. For example, to simulate the light falling on a vehicle while driving, a D50 light source can be configured to simulate standard artificial daylight; to show the light on a building at sunset, an F light source or an A light source can be configured to simulate the setting sun; and to simulate shopping-mall lighting, a U30 light source can be configured to simulate commercial fluorescent light. The light source reduction distance may be the illumination range of the light source, for example an illumination range of 0.3 m. The color temperature of a light source is measured on the Kelvin scale (K): in physics, a standard black body is heated from absolute zero (minus 273 °C), and at 0 °C its temperature is 273 K. When the standard black body is heated to about 800 °C it begins to glow, and when it is further heated to about 5727 °C, that is, to about 6000 K, its color is a white light similar to sunlight; if it is heated until its color temperature reaches about 10000 K, its color becomes bluish purple. Therefore, at higher color temperatures a scene appears bluish, and at lower color temperatures it appears reddish. When the color temperature is below 3300 K the light is dominated by red and generally feels warm to the human eye; between 3300 K and 6000 K the color temperature is neutral, red, green, and blue light are present in certain proportions, and the light usually feels comfortable, refreshing, and soft; above 6000 K the blue content is larger and the light usually feels cold and somber.
As an example, a user may preset a light model file A whose light color is light blue, whose light brightness is 40%, and whose light source type is a U30 light source, where the light model file A corresponds to the model type "stage". When the user displays an initial stage three-dimensional model through an app related to a shopping mall, the light model file A corresponding to the initial stage three-dimensional model can be obtained at the same time to generate a target stage three-dimensional model, and when the target stage three-dimensional model is displayed it glows with a soft, light-blue light.
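A minimal sketch of how such a preset could be represented, assuming the five parameter names listed above map onto plain fields; the values not given in the example (reduction distance, color temperature) are placeholders:

```swift
/// Illustrative only: the preset light parameters named above.
struct PresetLightParameters {
    var lightColor: String         // e.g. "light blue"
    var brightness: Double         // fraction of full brightness, 0.4 = 40%
    var sourceType: String         // e.g. "D50", "TL84", "U30", "CWF", "F", "A"
    var reductionDistance: Double  // illumination range of the source, in metres
    var colorTemperature: Double   // in Kelvin
}

// A preset resembling light model file A from the example: soft light-blue
// light at 40% brightness from a U30-type source (remaining values are placeholders).
let stagePreset = PresetLightParameters(
    lightColor: "light blue",
    brightness: 0.4,
    sourceType: "U30",
    reductionDistance: 0.3,
    colorTemperature: 4000
)
```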
In a specific implementation, each light model file includes at least a plurality of second light nodes and the preset light parameter corresponding to each second light node. Different light parameters can therefore be adapted to the individual second light nodes in turn to obtain richer light effects, and the different preset light parameters can then be added to the first light nodes of the initial three-dimensional enhanced model to generate a target three-dimensional enhanced model carrying several preset light parameters. Using different second light nodes to render the target three-dimensional enhanced model from multiple angles greatly improves its sense of reality in the real world.
Specifically, the first node information of the first light nodes in the initial three-dimensional enhanced model and the second node information of the second light nodes in the light model file can be obtained, the first node information can be matched against the second node information, and it can be judged whether the matching succeeds. If the matching succeeds, the first light node corresponding to the successfully matched first node information is taken as a first target light node and the second light node corresponding to the successfully matched second node information is taken as a second target light node; the preset light parameters of the second target light node are then added to the first target light node to generate the target three-dimensional enhanced model. By matching the first node information in the initial three-dimensional enhanced model one by one against the second node information in the light model file, the preset light parameters can be added to the initial three-dimensional enhanced model accurately.
In an optional embodiment, the first node information corresponding to the initial three-dimensional enhanced model may include the first node identifiers of the first light nodes in the initial three-dimensional enhanced model (for example light1, light2, light3, light4, and so on), and the second node information corresponding to the light model file may include the second node identifiers of the second light nodes in the light model file (for example light I, light II, light III, light IV, and so on). The process of matching the first node identifiers with the second node identifiers may then be as follows: first, the second node identifiers in the light model file are obtained; each first light node of the initial three-dimensional enhanced model is traversed, and the first node identifier of each first light node is matched one by one against the second node identifiers. If a first node identifier (light1) is found to be the same as a second node identifier (light I), the first light node corresponding to light1 has been successfully matched with the second light node corresponding to light I: the successfully matched first node identifier (light1) is taken as the first target node identifier, the successfully matched second node identifier (light I) is taken as the second target node identifier, the first light node corresponding to the first target node identifier is taken as the first target light node, and the second light node corresponding to the second target node identifier is taken as the second target light node. Finally, the preset light parameters of the second target light node are added to the first target light node, and the target three-dimensional enhanced model is generated.
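The identifier-based matching described above can be sketched as follows, assuming that "successful matching" simply means the two identifiers are equal (in the example above, light1 and light I are treated as the same identifier); all type and function names are illustrative, not the patent's code:

```swift
struct FirstLightNode {
    let identifier: String   // first node identifier, e.g. "light1"
}

struct SecondLightNode {
    let identifier: String   // second node identifier; matches when equal to a first node identifier
    let presetParameters: [String: String]
}

/// Traverses the first light nodes and pairs each one whose identifier also
/// occurs in the light model file with the corresponding second light node;
/// each returned pair is (first target light node, second target light node).
func targetLightNodes(firstNodes: [FirstLightNode],
                      secondNodes: [SecondLightNode]) -> [(FirstLightNode, SecondLightNode)] {
    let secondByIdentifier = Dictionary(uniqueKeysWithValues:
        secondNodes.map { ($0.identifier, $0) })
    return firstNodes.compactMap { first -> (FirstLightNode, SecondLightNode)? in
        secondByIdentifier[first.identifier].map { (first, $0) }   // nil means matching failed
    }
}
```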
In an example, assume that the three-dimensional enhanced model is a house source three-dimensional model containing three light nodes whose node identifiers are light1, light2, and light3, and that the light model file A' corresponding to the house source type contains three light nodes whose node identifiers are light I, light II, and light III, carrying preset light parameters A, B, and C respectively. When the user displays the house source three-dimensional model with a house source app, the light model file A' corresponding to the house source model is obtained, each light node of the house source three-dimensional model is traversed, and it is judged whether the node identifier of each light node in the model is the same as a node identifier in the light model file. If light1 is the same as light I, light2 is the same as light II, and light3 differs from light III, then the light nodes light1 and light2 of the house source three-dimensional model have been successfully matched with the light nodes light I and light II of the light model file respectively, while the matching of light3 with light III has failed. The preset light parameter A corresponding to node identifier light I is therefore added to the light node corresponding to node identifier light1, and the preset light parameter B corresponding to node identifier light II is added to the light node corresponding to node identifier light2. Finally, the target three-dimensional enhanced model is generated and displayed, in which the light node light1 presents the light effect of preset light parameter A and the light node light2 presents the light effect of preset light parameter B, while the light node light3 presents no light effect (the light effect corresponding to preset light parameter C is not displayed).
In another optional embodiment, the first node information corresponding to the initial three-dimensional enhanced model may include the first position information of the first light nodes in the initial three-dimensional enhanced model, and the second node information corresponding to the light model file may include the second position information of the second light nodes in the light model file. The first position information of a first light node can be matched against the second position information of the second light nodes, and it is judged whether the matching succeeds. If the matching succeeds, the successfully matched first target position information and the second target position information corresponding to it are determined; the first light node corresponding to the first target position information is then taken as the first target light node, and the second light node corresponding to the second target position information as the second target light node; finally, the preset light parameters of the second target light node are added to the first target light node to generate the target three-dimensional enhanced model.
In another example, assume that the three-dimensional enhanced model is a house source three-dimensional model containing four light nodes whose position information is i, ii, iii, and iv respectively, and that the light model file B' corresponding to the house source type contains four light nodes whose position information is a, b, c, and d respectively. When the user displays the house source three-dimensional model with a house source app, the light model file B' corresponding to the house source three-dimensional model is obtained and each light node of the model is traversed. The position information i of the first light node in the house source three-dimensional model is matched one by one against the position information a, b, c, and d of the light nodes in the light model file B', and the position information a that successfully matches position information i is determined. Position information i can then be regarded as the first target position information and position information a as the second target position information; the light node corresponding to the first target position information is taken as the first target light node and the light node corresponding to the second target position information as the second target light node. Finally, the preset light parameters of the second target light node are added to the first target light node, and the light effect of the target three-dimensional enhanced model with the added preset light parameters is generated and displayed.
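Similarly, a sketch of position-based matching, under the assumption that two positions "match" when they coincide within a small tolerance (the disclosure does not state the exact comparison rule), with all names invented:

```swift
struct Position {
    var x: Float, y: Float, z: Float
}

/// Assumed matching rule: positions match when they coincide within a tolerance.
func positionsMatch(_ a: Position, _ b: Position, tolerance: Float = 1e-3) -> Bool {
    abs(a.x - b.x) <= tolerance &&
        abs(a.y - b.y) <= tolerance &&
        abs(a.z - b.z) <= tolerance
}

struct FirstLightNode {
    let position: Position                    // first position information
    var parameters: [String: String] = [:]    // empty light node until configured
}

struct SecondLightNode {
    let position: Position                    // second position information
    let presetParameters: [String: String]
}

/// For each first light node, looks for a second light node at the same
/// position and, if one is found, copies its preset light parameters across.
func applyPresetsByPosition(to firstNodes: [FirstLightNode],
                            from secondNodes: [SecondLightNode]) -> [FirstLightNode] {
    firstNodes.map { node in
        var node = node
        if let match = secondNodes.first(where: { positionsMatch($0.position, node.position) }) {
            node.parameters = match.presetParameters
        }
        return node
    }
}
```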
In another optional embodiment, after the first node information of the first light nodes in the initial three-dimensional enhanced model and the second node information of the second light nodes in the light model file have been obtained, the first node information can be matched against the second node information and it is judged whether the matching succeeds. If the matching fails, it is determined that the light model file does not configure a corresponding light effect for the initial three-dimensional enhanced model, and the initial three-dimensional enhanced model can be displayed directly.
Step 103, displaying the target three-dimensional enhancement model.
In the embodiment of the invention, after the initial three-dimensional enhanced model has been loaded, the light model file corresponding to its model type can be obtained and the first light nodes in the initial three-dimensional enhanced model traversed for matching: the first node information of each first light node is matched one by one against the second node information of each second light node in the light model file, the successfully matched first light nodes are taken as first target light nodes and the successfully matched second light nodes as second target light nodes, and finally the preset light parameters of the second target light nodes are added to the first target light nodes, so that the target three-dimensional enhanced model can be rendered with the added preset light parameters.
It should be noted that the embodiment of the present invention includes but is not limited to the above examples, and it is understood that, under the guidance of the idea of the embodiment of the present invention, a person skilled in the art can set the method according to practical situations, and the present invention is not limited to this.
In the embodiment of the invention, in response to a display operation directed at a three-dimensional enhanced model, the initial three-dimensional enhanced model corresponding to the display operation can be determined and the external light model file corresponding to the initial three-dimensional enhanced model obtained, where the initial three-dimensional enhanced model may include a plurality of first light nodes. The preset light parameters in the light model file can be added to the first light nodes of the initial three-dimensional enhanced model to complete the light parameter configuration and obtain a target three-dimensional enhanced model, and the target three-dimensional enhanced model can then be rendered and displayed. Because the light parameters are added to the light nodes of the three-dimensional enhanced model through the light model file, light parameters no longer need to be configured inside the three-dimensional enhanced model itself, which effectively simplifies the configuration process of the three-dimensional enhanced model, reduces the size of its model resources, and effectively reduces the resource burden on the terminal.
In order to make those skilled in the art better understand the technical solution of the embodiment of the present invention, the embodiment of the present invention is described below by way of an example and with reference to the flowchart shown in fig. 2.
1) Starting a house source app;
2) obtaining a light model file corresponding to the house source type, wherein the light model file includes a plurality of second light nodes, and each second light node corresponds to a preset light parameter, such as light brightness, light color, light source type, and light source reduction distance;
3) displaying the initial house source three-dimensional model, and acquiring each first light node in the initial house source three-dimensional model, wherein the first light nodes are all empty nodes;
4) judging whether each first light node in the initial house source three-dimensional model is matched with each second light node in the light model file;
5) if the first light nodes in the initial house source three-dimensional model match the second light nodes in the light model file, adding the preset light parameters in the second light nodes to the first light nodes, and rendering the target house source three-dimensional model with the preset light parameters;
6) if the first light nodes in the house source three-dimensional model do not match the second light nodes in the light model file, displaying the initial house source three-dimensional model.
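Purely for illustration, the flow of steps 1) to 6) above could be condensed into a sketch like the following, assuming identifier-based matching and in-memory data; none of the names come from the disclosure:

```swift
struct LightNode {
    let identifier: String
    var parameters: [String: String] = [:]   // empty for the model's first light nodes
}

struct LightModelFile {
    let modelType: String        // e.g. "house source"
    let secondNodes: [LightNode] // each second node carries its preset light parameters
}

/// Steps 2) to 6): given the initial model's first light nodes and the light
/// model file for its model type, returns the nodes to render. If no first
/// node matches any second node, the initial model is returned unchanged.
func configureLights(firstNodes: [LightNode], file: LightModelFile) -> [LightNode] {
    let presets = Dictionary(uniqueKeysWithValues:
        file.secondNodes.map { ($0.identifier, $0.parameters) })
    var matchedAny = false
    let configured = firstNodes.map { node -> LightNode in
        var node = node
        if let preset = presets[node.identifier] {
            node.parameters = preset          // 5) add the preset light parameters
            matchedAny = true
        }
        return node
    }
    return matchedAny ? configured : firstNodes  // 6) fall back to the initial model
}

// Example usage mirroring the house source flow above.
let file = LightModelFile(
    modelType: "house source",
    secondNodes: [LightNode(identifier: "light1", parameters: ["brightness": "80%"])])
let initialNodes = [LightNode(identifier: "light1"), LightNode(identifier: "light2")]
let renderedNodes = configureLights(firstNodes: initialNodes, file: file)
```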
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
Referring to fig. 3, a block diagram of a structure of a light processing device of a three-dimensional enhanced model provided in the embodiment of the present invention is shown, and the light processing device may specifically include the following modules:
an initial three-dimensional enhancement model determining module 301, configured to determine, in response to a display operation of a three-dimensional enhancement model, an initial three-dimensional enhancement model corresponding to the display operation, and obtain a light model file for the initial three-dimensional enhancement model, where the initial three-dimensional enhancement model includes a plurality of first light nodes;
a target three-dimensional enhancement model generation module 302, configured to add a preset lighting parameter to a first lighting node of the initial three-dimensional enhancement model according to the lighting model file, and generate a target three-dimensional enhancement model;
and the target three-dimensional enhancement model display module 303 is used for displaying the target three-dimensional enhancement model.
In an optional embodiment, the light model file at least includes a plurality of second light nodes and preset light parameters of the second light nodes, and the target three-dimensional enhanced model generating module 302 includes:
the node information acquisition submodule is used for acquiring first node information of the first lighting node and second node information of the second lighting node;
the target lamp light node generation submodule is used for matching the first node information with the second node information, taking a first lamp light node corresponding to the successfully matched first node information as a first target lamp light node, and taking a second lamp light node corresponding to the successfully matched second node information as a second target lamp light node;
and the target three-dimensional enhancement model generation submodule is used for adding the preset light parameters of the second target light node into the first target light node to generate a target three-dimensional enhancement model.
In an optional embodiment, the first node information includes a first node identifier corresponding to the first light node in the initial three-dimensional enhanced model, the second node information includes a second node identifier corresponding to the second light node in the light model file, and the target light node generation submodule is specifically configured to:
matching the first node identification with the second node identification, and determining a first target node identification which is successfully matched and a second target node identification which corresponds to the first target node identification;
and taking a first light node corresponding to the first target node identification as a first target light node, and taking a second light node corresponding to the second target node identification as a second target light node.
In an optional embodiment, the first node information includes first location information corresponding to the first light node in the initial three-dimensional enhanced model, the second node information includes second location information corresponding to the second light node in the light model file, and the target light node generation submodule is specifically configured to:
matching the first position information with the second position information, and determining successfully matched first target position information and second target position information corresponding to the first target position information;
and taking a first light node corresponding to the first target position information as a first target light node, and taking a second light node corresponding to the second target position information as a second target light node.
In an alternative embodiment, further comprising:
and the initial enhanced three-dimensional model display module is used for displaying the initial enhanced three-dimensional model if the matching of the first node information and each second node information fails.
In an optional embodiment, the initial three-dimensional enhanced model determining module 301 is specifically configured to:
and obtaining the model type of the initial three-dimensional enhanced model, and determining a light model file matched with the model type.
In an optional embodiment, the preset light parameter includes at least one of a light color, a light brightness, a light source type, a light source reduction distance, and a light source color temperature.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
In addition, an electronic device is further provided in the embodiments of the present invention, as shown in fig. 4, and includes a processor 401, a communication interface 402, a memory 403, and a communication bus 404, where the processor 401, the communication interface 402, and the memory 403 complete mutual communication through the communication bus 404,
a memory 403 for storing a computer program;
the processor 401, when executing the program stored in the memory 403, implements the following steps:
responding to a display operation of a three-dimensional enhanced model, determining an initial three-dimensional enhanced model corresponding to the display operation, and acquiring a light model file aiming at the initial three-dimensional enhanced model, wherein the initial three-dimensional enhanced model comprises a plurality of first light nodes;
adding preset lighting parameters into a first lighting node of the initial three-dimensional enhancement model according to the lighting model file to generate a target three-dimensional enhancement model;
and displaying the target three-dimensional enhancement model.
In an optional embodiment, the light model file at least includes a plurality of second lighting nodes and preset lighting parameters of the second lighting nodes, and the adding, according to the lighting model file, the preset lighting parameters to the first lighting node of the initial three-dimensional enhanced model to generate the target three-dimensional enhanced model includes:
acquiring first node information of the first lighting node and second node information of the second lighting node;
matching the first node information with the second node information, taking a first light node corresponding to the successfully matched first node information as a first target light node, and taking a second light node corresponding to the successfully matched second node information as a second target light node;
and adding the preset light parameters of the second target light node into the first target light node to generate a target three-dimensional enhanced model.
In an optional embodiment, the first node information includes a first node identifier corresponding to the first lighting node in the initial three-dimensional enhanced model, the second node information includes a second node identifier corresponding to the second lighting node in the lighting model file, and the matching the first node information and the second node information, taking the first lighting node corresponding to the successfully matched first node information as a first target lighting node, and taking the second lighting node corresponding to the successfully matched second node information as a second target lighting node includes:
matching the first node identification with the second node identification, and determining a first target node identification which is successfully matched and a second target node identification which corresponds to the first target node identification;
and taking a first light node corresponding to the first target node identification as a first target light node, and taking a second light node corresponding to the second target node identification as a second target light node.
In an optional embodiment, the first node information includes first position information corresponding to the first light node in the initial three-dimensional enhanced model, the second node information includes second position information corresponding to the second light node in the light model file, and the matching the first node information with the second node information, taking the first light node corresponding to the successfully matched first node information as a first target light node, and taking the second light node corresponding to the successfully matched second node information as a second target light node, includes:
matching the first position information with the second position information, and determining successfully matched first target position information and second target position information corresponding to the first target position information;
and taking a first light node corresponding to the first target position information as a first target light node, and taking a second light node corresponding to the second target position information as a second target light node.
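Again as a sketch: the embodiment does not state how the positions are compared, so the small tolerance below is an assumption intended to absorb floating-point differences between the model and the light model file.

```swift
import SceneKit

// Sketch of position-based matching: a first light node and a second light node are
// paired when their positions coincide within a tolerance. The tolerance value is an
// assumption; the embodiment only requires that the position information match.
func matchesByPosition(_ firstNode: SCNNode,
                       secondPosition: SCNVector3,
                       tolerance: Float = 0.001) -> Bool {
    let dx = Float(firstNode.position.x) - Float(secondPosition.x)
    let dy = Float(firstNode.position.y) - Float(secondPosition.y)
    let dz = Float(firstNode.position.z) - Float(secondPosition.z)
    return dx * dx + dy * dy + dz * dz <= tolerance * tolerance
}
```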
In an optional embodiment, the steps further include:
and if the matching between the first node information and each second node information fails, displaying the initial three-dimensional enhanced model.
In an optional embodiment, the obtaining the light model file for the initial three-dimensional enhanced model includes:
and obtaining the model type of the initial three-dimensional enhanced model, and determining a light model file matched with the model type.
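For instance, the model-type lookup could be as simple as the sketch below; the model types and file names are hypothetical and only illustrate keeping one light model file per model type.

```swift
// Illustrative only: choosing a light model file by model type. The enum cases and
// file names are assumptions; the embodiment only requires that the light model file
// matched with the model type of the initial model be used.
enum ModelType: String {
    case house
    case vehicle
}

func lightModelFileName(for modelType: ModelType) -> String {
    switch modelType {
    case .house:   return "house_lights.json"    // assumed file name
    case .vehicle: return "vehicle_lights.json"  // assumed file name
    }
}
```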
In an optional embodiment, the preset light parameter includes at least one of a light color, a light brightness, a light source type, a light source reduction distance, and a light source color temperature.
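To make the parameter set concrete, the sketch below maps the five preset light parameters onto a SceneKit light; the struct, the field names, and the reading of "light source reduction distance" as an attenuation end distance are assumptions rather than definitions from this embodiment.

```swift
import SceneKit
import UIKit

// Hypothetical container for the preset light parameters listed above. Each field is
// optional because the embodiment requires only "at least one of" the parameters.
struct PresetLightParameters {
    var color: UIColor? = nil
    var brightness: CGFloat? = nil          // mapped to SceneKit's light intensity
    var sourceType: SCNLight.LightType? = nil
    var reductionDistance: CGFloat? = nil   // read here as the attenuation end distance
    var colorTemperature: CGFloat? = nil    // in kelvin
}

// Builds a SceneKit light from whichever preset parameters are present.
func makeLight(from preset: PresetLightParameters) -> SCNLight {
    let light = SCNLight()
    if let color = preset.color { light.color = color }
    if let brightness = preset.brightness { light.intensity = brightness }
    if let sourceType = preset.sourceType { light.type = sourceType }
    if let distance = preset.reductionDistance { light.attenuationEndDistance = distance }
    if let temperature = preset.colorTemperature { light.temperature = temperature }
    return light
}
```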
The communication bus mentioned in the above terminal may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the terminal and other equipment.
The memory may include a random access memory (RAM) or a non-volatile memory, such as at least one disk memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In yet another embodiment of the present invention, as shown in fig. 5, there is further provided a computer-readable storage medium 501, in which instructions are stored, and when the instructions are executed on a computer, the instructions cause the computer to execute the light processing method of the three-dimensional enhanced model described in the above embodiments.
In yet another embodiment of the present invention, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the light processing method of the three-dimensional augmented model described in the above embodiments.
In the above embodiments, the implementation may be realized in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, the implementation may take the form of a computer program product in whole or in part. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device, such as a server or a data center, that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk (SSD)), among others.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in this specification are described in a related manner, and the same or similar parts among the embodiments may be referred to one another; each embodiment focuses on its differences from the other embodiments. In particular, since the system embodiment is substantially similar to the method embodiment, it is described relatively simply; for relevant details, reference may be made to the corresponding description of the method embodiment.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A light processing method of a three-dimensional enhanced model is characterized by comprising the following steps:
responding to a display operation of a three-dimensional enhanced model, determining an initial three-dimensional enhanced model corresponding to the display operation, and acquiring a light model file aiming at the initial three-dimensional enhanced model, wherein the initial three-dimensional enhanced model comprises a plurality of first light nodes;
adding preset lighting parameters into a first lighting node of the initial three-dimensional enhancement model according to the lighting model file to generate a target three-dimensional enhancement model;
and displaying the target three-dimensional enhancement model.
2. The method according to claim 1, wherein the light model file at least includes a plurality of second light nodes and preset light parameters of the second light nodes, and the generating of the target three-dimensional enhanced model by adding the preset light parameters to the first light nodes of the initial three-dimensional enhanced model according to the light model file comprises:
acquiring first node information of the first lighting node and second node information of the second lighting node;
matching the first node information with the second node information, taking a first light node corresponding to the successfully matched first node information as a first target light node, and taking a second light node corresponding to the successfully matched second node information as a second target light node;
and adding the preset light parameters of the second target light node into the first target light node to generate a target three-dimensional enhanced model.
3. The method according to claim 2, wherein the first node information includes a first node identifier corresponding to the first light node in the initial three-dimensional enhanced model, the second node information includes a second node identifier corresponding to the second light node in the light model file, and the matching the first node information with the second node information, taking the first light node corresponding to the successfully matched first node information as a first target light node, and taking the second light node corresponding to the successfully matched second node information as a second target light node, comprises:
matching the first node identification with the second node identification, and determining a first target node identification which is successfully matched and a second target node identification which corresponds to the first target node identification;
and taking a first light node corresponding to the first target node identification as a first target light node, and taking a second light node corresponding to the second target node identification as a second target light node.
4. The method according to claim 2, wherein the first node information includes first position information corresponding to the first light node in the initial three-dimensional enhanced model, the second node information includes second position information corresponding to the second light node in the light model file, the matching the first node information with the second node information, the first light node corresponding to the successfully matched first node information being a first target light node, and the second light node corresponding to the successfully matched second node information being a second target light node, comprises:
matching the first position information with the second position information, and determining successfully matched first target position information and second target position information corresponding to the first target position information;
and taking a first light node corresponding to the first target position information as a first target light node, and taking a second light node corresponding to the second target position information as a second target light node.
5. The method of claim 2, further comprising:
and if the matching between the first node information and each second node information fails, displaying the initial three-dimensional enhanced model.
6. The method of claim 1, wherein obtaining a light model file for the initial three-dimensional augmented model comprises:
and obtaining the model type of the initial three-dimensional enhanced model, and determining a light model file matched with the model type.
7. The method according to any one of claims 1 to 6, wherein the preset light parameters comprise at least one of a light color, a light brightness, a light source type, a light source reduction distance, and a light source color temperature.
8. A light processing apparatus for a three-dimensional augmented model, comprising:
the system comprises an initial three-dimensional enhancement model determining module, a light model obtaining module and a light model generating module, wherein the initial three-dimensional enhancement model determining module is used for responding to the display operation of a three-dimensional enhancement model, determining an initial three-dimensional enhancement model corresponding to the display operation and obtaining a light model file aiming at the initial three-dimensional enhancement model, and the initial three-dimensional enhancement model comprises a plurality of first light nodes;
the target three-dimensional enhancement model generation module is used for adding preset lighting parameters into a first lighting node of the initial three-dimensional enhancement model according to the lighting model file to generate a target three-dimensional enhancement model;
and the target three-dimensional enhancement model display module is used for displaying the target three-dimensional enhancement model.
9. An electronic device, comprising:
one or more processors; and one or more machine readable media having instructions stored thereon that, when executed by the one or more processors, cause the electronic device to perform the method of any of claims 1-7.
10. A computer-readable storage medium having instructions stored thereon, which when executed by one or more processors, cause the processors to perform the method of any one of claims 1-7.
CN202111633831.9A 2021-12-28 2021-12-28 Lamplight processing method, device, equipment and medium of three-dimensional enhanced model Pending CN114332335A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111633831.9A CN114332335A (en) 2021-12-28 2021-12-28 Lamplight processing method, device, equipment and medium of three-dimensional enhanced model

Publications (1)

Publication Number Publication Date
CN114332335A true CN114332335A (en) 2022-04-12

Family

ID=81016948

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111633831.9A Pending CN114332335A (en) 2021-12-28 2021-12-28 Lamplight processing method, device, equipment and medium of three-dimensional enhanced model

Country Status (1)

Country Link
CN (1) CN114332335A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination