CN116127587B - Rendering method and system in indoor design - Google Patents

Rendering method and system in indoor design

Info

Publication number
CN116127587B
Authority
CN
China
Prior art keywords
objects
rendering
model
color
acquiring
Prior art date
Legal status
Active
Application number
CN202310405498.9A
Other languages
Chinese (zh)
Other versions
CN116127587A
Inventor
蒋志锋 (Jiang Zhifeng)
赵伟峰 (Zhao Weifeng)
Current Assignee
Matrix Design Co ltd
Original Assignee
Matrix Design Co ltd
Priority date
Filing date
Publication date
Application filed by Matrix Design Co ltd filed Critical Matrix Design Co ltd
Priority to CN202310405498.9A
Publication of CN116127587A
Application granted
Publication of CN116127587B
Status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00 Computer-aided design [CAD]
    • G06F 30/10 Geometric CAD
    • G06F 30/13 Architectural design, e.g. computer-aided architectural design [CAAD] related to design of buildings, bridges, landscapes, production plants or roads

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Geometry (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Optimization (AREA)
  • Computational Mathematics (AREA)
  • Architecture (AREA)
  • Mathematical Analysis (AREA)
  • Structural Engineering (AREA)
  • Pure & Applied Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Civil Engineering (AREA)
  • Image Analysis (AREA)
  • Image Generation (AREA)

Abstract

The invention provides a rendering method and system in indoor design, comprising the following steps: acquiring indoor image information; inputting the indoor image information into a preset segmentation model for detection to obtain each object in the room; acquiring characteristic information of each object and screening the objects to obtain target objects; matching a corresponding rendering model parameter set in a preset database based on the characteristic information of the target objects; and acquiring an initial rendering model, updating it based on the rendering model parameters in the rendering model parameter set, and rendering the indoor image information with the updated rendering model. According to the invention, the objects in the room are detected based on the indoor image information, and corresponding rendering model parameters are matched according to the characteristic information of those objects, so that an updated rendering model suited to the indoor objects is obtained, overcoming the defect that existing indoor design renderings cannot be adapted to indoor home furnishings.

Description

Rendering method and system in indoor design
Technical Field
The present invention relates to the field of indoor design technologies, and in particular, to a rendering method and system in indoor design.
Background
At present, when an enterprise relocates its office or an individual moves house, decoration design is usually needed; the user of the premises puts forward design requirements to a design company, and the design company carries out the indoor design according to those requirements.
Users often purchase some home furnishings themselves, and different furnishings suit different design styles. At present, however, a design company can only generate a rendering of the user's premises in its own design style, so the rendering cannot be adapted to the home furnishings the user has purchased.
Disclosure of Invention
The invention mainly aims to provide a rendering method and a rendering system in indoor design, so as to overcome the defect that existing indoor design renderings cannot be adapted to indoor home furnishings.
In order to achieve the above object, the present invention provides a rendering method in indoor design, comprising the steps of:
acquiring indoor image information; inputting the indoor image information into a preset segmentation model for detection to obtain each object in the room; wherein the segmentation model is a pre-trained deep learning model;
acquiring characteristic information of each object, screening the objects according to the characteristic information, and obtaining a target object through screening;
based on the characteristic information of the target object, matching a corresponding rendering model parameter set in a preset database; wherein the rendering model parameter set comprises a plurality of rendering model parameters suitable for the target object;
acquiring an initial rendering model, and updating the initial rendering model based on rendering model parameters in the rendering model parameter set to obtain an updated rendering model; and rendering the indoor image information based on the updated rendering model.
Further, the rendering model parameters include: the height and width of the rendered picture, global illumination switch option parameters, global deterministic Monte Carlo option parameters, a noise threshold parameter of the indoor rendering model, and light cache option parameters.
Further, the global illumination switch option parameters include a gloss effect parameter, a secondary ray bias parameter, an image filter parameter, and an image sampler parameter.
Further, the step of obtaining the characteristic information of each object, screening the object according to the characteristic information, and obtaining a target object through screening includes:
acquiring size and shape information and color information of each object;
predicting the volume of each object according to its size and shape information; selecting, from the objects, first objects whose volumes are larger than a preset value, and judging whether the number of first objects is smaller than two;
if the number of first objects is not smaller than two, sorting the first objects in descending order of volume; selecting the two largest of the sorted first objects as second objects;
acquiring the color composition of each second object based on the color information of each second object;
obtaining the main color of each second object according to the color composition of each second object;
calculating the approximation degree between the main colors of the two second objects according to the main colors of the second objects;
judging whether the approximation degree is larger than a threshold value; if so, taking the larger of the two second objects as the target object; and if not, taking the two second objects together as target objects.
Further, after the step of selecting the first objects with volumes larger than the preset value and judging whether the number of first objects is smaller than two, the method includes:
and if the number of the first objects is less than two, taking the first objects as the target objects.
Further, the step of calculating the approximation degree between the main colors of the two second objects according to the main colors of the respective second objects includes:
respectively obtaining RGB values corresponding to the main colors of the two second objects;
converting RGB values corresponding to the main colors of the two second objects into vectors respectively, wherein the vectors are a first vector and a second vector respectively;
calculating the similarity between the first vector and the second vector based on a cosine function, as the approximation degree between the main colors of the two second objects.
Further, the step of obtaining the main color of each second object according to the color composition of each second object includes:
for each second object, acquiring each color and the duty ratio of each color in each second object;
determining a first duty ratio threshold and a second duty ratio threshold according to the number of each color in the second object; wherein the number of colors in the second object is greater than 1;
judging whether the second object has a color with a color duty ratio larger than the first duty ratio threshold value;
if so, taking the color with the color duty ratio larger than the first duty ratio threshold value as the main color of the second object;
and if no such color is present, acquiring the colors in the second object whose duty ratios are larger than the second duty ratio threshold, and taking those colors as the main color.
Further, the calculation formulas for determining the first duty ratio threshold and the second duty ratio threshold from the number n of colors in the second object are as follows:
first duty ratio threshold = (a × n - 1) / (a × n), wherein a is a constant greater than 1.5;
second duty ratio threshold = 1 / n.
Further, before the step of acquiring the indoor image information, the method further includes:
the indoor design terminal generates a training instruction to the management server; transmitting the training instructions to the associated plurality of enterprise terminals based on the management server; the training instruction carries identification information;
each enterprise terminal verifies whether the identification information is valid; if it is valid, each enterprise terminal acquires the training data adopted in its current project and the history segmentation model used in its history project;
each enterprise terminal iteratively trains a history segmentation model used in a history project based on training data adopted in the current project to obtain model parameters of the history segmentation model as sub-model parameters;
each enterprise terminal encrypts the sub-model parameters through the identification information respectively, and sends the encrypted sub-model parameters to the management server, and the management server decrypts each encrypted sub-model parameter based on the identification information, then performs joint operation on the sub-model parameters to obtain joint model parameters, and encrypts the joint model parameters by adopting the identification information;
the indoor design terminal acquires the encrypted joint model parameters from the management server, and decrypts the encrypted joint model parameters based on the identification information to obtain the joint model parameters;
and the indoor design terminal acquires a historical segmentation model adopted in the historical project, and updates the historical segmentation model based on the joint model parameters to obtain the preset segmentation model.
The invention also provides a rendering system in indoor design, comprising:
the detection unit is used for acquiring indoor image information; inputting the indoor image information into a preset segmentation model for detection to obtain each object in the room; wherein the segmentation model is a pre-trained deep learning model;
the screening unit is used for acquiring the characteristic information of each object, screening the objects according to the characteristic information and obtaining target objects through screening;
the matching unit is used for matching a corresponding rendering model parameter set in a preset database based on the characteristic information of the target object; wherein the rendering model parameter set comprises a plurality of rendering model parameters suitable for the target object;
the rendering unit is used for acquiring an initial rendering model, and updating the initial rendering model based on rendering model parameters in the rendering model parameter set to obtain an updated rendering model; and rendering the indoor image information based on the updated rendering model.
The invention provides a rendering method and a rendering system in indoor design, comprising the following steps: acquiring indoor image information; inputting the indoor image information into a preset segmentation model for detection to obtain each object in the room, wherein the segmentation model is a pre-trained deep learning model; acquiring characteristic information of each object and screening the objects according to the characteristic information to obtain target objects; matching a corresponding rendering model parameter set in a preset database based on the characteristic information of the target objects, wherein the rendering model parameter set comprises a plurality of rendering model parameters suitable for the target objects; acquiring an initial rendering model and updating it based on the rendering model parameters in the rendering model parameter set to obtain an updated rendering model; and rendering the indoor image information based on the updated rendering model. According to the invention, the objects in the room are detected based on the indoor image information, and corresponding rendering model parameters are matched according to the characteristic information of those objects, so that an updated rendering model suited to the indoor objects is obtained and the indoor image information is rendered with it, overcoming the defect that existing indoor design renderings cannot be adapted to indoor home furnishings.
Drawings
FIG. 1 is a schematic diagram illustrating steps of a rendering method in an indoor design according to an embodiment of the present invention;
FIG. 2 is a block diagram of a rendering system in indoor design according to an embodiment of the present invention;
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings and in conjunction with the embodiments.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Referring to fig. 1, in one embodiment of the present invention, there is provided a rendering method in an indoor design, including the steps of:
step S1, acquiring indoor image information; inputting the indoor image information into a preset segmentation model for detection to obtain each object in the room; wherein the segmentation model is a pre-trained deep learning model;
s2, obtaining characteristic information of each object, screening the objects according to the characteristic information, and obtaining target objects through screening;
step S3, matching a corresponding rendering model parameter set in a preset database based on the characteristic information of the target object; wherein the rendering model parameter set comprises a plurality of rendering model parameters suitable for the target object;
step S4, an initial rendering model is obtained, and the initial rendering model is updated based on rendering model parameters in the rendering model parameter set, so that an updated rendering model is obtained; and rendering the indoor image information based on the updated rendering model.
In this embodiment, the above scheme adaptively updates the rendering model to suit the home furnishings in a room, so as to obtain a rendering that better fits the indoor style. As described in the above step S1, image information of the room to be decorated is acquired; it comprises indoor images taken from multiple angles, covering not only the indoor layout but also the various objects in the room, such as furniture and appliances. In this embodiment, a segmentation model is trained in advance; it performs object segmentation on the indoor images, that is, each object is segmented from the indoor image information and labeled.
As described in the above step S2, each object has corresponding characteristic information; a sofa and a tea table, for example, differ in shape, style and color characteristics. It can be appreciated that a particularly small object has no significant impact on the overall design style, whereas a strikingly colored object can easily affect it. Therefore, the objects can be screened according to their characteristic information to obtain target objects; in this embodiment, style matching is performed only for the target objects.
As described in step S3, the database stores the correspondence between the feature information of the object and the rendering model parameter set in advance. According to the corresponding relation, the corresponding rendering model parameter set can be matched in a preset database based on the characteristic information of the target object. The rendering model parameter set includes a plurality of rendering model parameters applicable to the target object, where the rendering model parameters refer to parameters adopted by a rendering model when rendering a design drawing, and when the parameters are different, the obtained rendering effects are also greatly different.
As described in the above step S4, an initial rendering model is obtained; it may be a rendering model used in a historical project. The initial rendering model is then updated based on the rendering model parameters in the rendering model parameter set to obtain the updated rendering model. Finally, the indoor image information is rendered based on the updated rendering model to obtain a rendered indoor design draft; if the draft needs to be modified later, local adjustments can be made, which are not detailed here.
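The four steps S1 to S4 can be sketched as a small pipeline in which every component the description leaves abstract (segmentation model, screening rule, parameter database, rendering model) is injected as a callable. This is an illustrative skeleton under those assumptions, not the patented implementation:

```python
def render_indoor_design(images, segment, screen, match, update_and_render):
    """Steps S1-S4 with each abstract component injected as a callable."""
    objects = segment(images)                  # S1: segment room images into objects
    targets = screen(objects)                  # S2: screen objects into target objects
    params = match(targets)                    # S3: match a rendering parameter set
    return update_and_render(params, images)   # S4: update the model and render

# Toy stand-ins for each stage, purely to show the data flow:
result = render_indoor_design(
    "room_images",
    lambda imgs: ["sofa", "tea_table", "lamp"],
    lambda objs: objs[:2],
    lambda targets: {"noise_threshold": 0.01},
    lambda params, imgs: (params, imgs),
)
```

Here `result` is simply the matched parameter set paired with the input, since the toy renderer just echoes its arguments.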
In an embodiment, the rendering model parameters include: the height and width of the rendered picture, global illumination switch option parameters, global deterministic Monte Carlo option parameters, a noise threshold parameter of the indoor rendering model, and light cache option parameters.
In this embodiment, the global illumination switch option parameters include a gloss effect parameter, a secondary ray bias parameter, an image filter parameter, and an image sampler parameter.
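As a concrete data layout, the parameter set might look like the sketch below. The field names echo common renderer settings (global illumination switches, deterministic Monte Carlo options, a light cache), and every name and default value is an assumption for illustration; the description fixes neither.

```python
from dataclasses import dataclass

@dataclass
class RenderingModelParameters:
    # Rendered picture size
    image_height: int = 1080
    image_width: int = 1920
    # Global illumination switch options
    gloss_effect: float = 1.0
    secondary_ray_bias: float = 0.001
    image_filter: str = "area"
    image_sampler: str = "adaptive"
    # Global deterministic Monte Carlo options
    dmc_adaptive_amount: float = 0.85
    # Noise threshold of the indoor rendering model
    noise_threshold: float = 0.01
    # Light cache options
    light_cache_subdivs: int = 1000
```

Updating the initial rendering model then amounts to overriding fields of such an object with the matched parameter set, e.g. `RenderingModelParameters(noise_threshold=0.005)`.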
In an embodiment, the step of obtaining the feature information of each object, screening the objects according to the feature information, and obtaining target objects by screening includes:
Acquiring the size and shape information and the color information of each object; the feature information may be obtained from the labeling result of the segmentation model, which is not described here again.
Predicting the volume of each object according to its size and shape information; selecting, from the objects, first objects whose volumes are larger than a preset value, and judging whether the number of first objects is smaller than two. In this embodiment, the larger an object's volume, the larger its influence on the overall design style, which is why the volume of each object is predicted from its size and shape information.
If the number of first objects is not smaller than two, sorting the first objects in descending order of volume, and selecting the two largest of the sorted first objects as second objects. In this embodiment, rendering style matching for the indoor design is performed only for the two largest objects.
Acquiring the color composition of each second object based on its color information; an object may be composed of multiple colors or of only one color.
Obtaining the main color of each second object according to its color composition.
Calculating the approximation degree between the main colors of the two second objects according to their main colors.
Judging whether the approximation degree is larger than a threshold value; if so, taking the larger of the two second objects as the target object; if not, taking both second objects together as target objects. If the approximation degree between the main colors of the two second objects is high, their styles are likely similar, and the larger second object then has the greater influence and can serve as the target object alone. If the approximation degree is not larger than the threshold, the style similarity is low and both objects must be considered, so the two second objects are taken together as target objects; a corresponding rendering model parameter set can then be matched for each based on its characteristic information, and the area where each second object is located is rendered with its own parameter set, i.e. the two areas are rendered differentially.
In another embodiment, after the step of selecting the first objects with volumes larger than the preset value and judging whether the number of first objects is smaller than two, the method includes:
And if the number of first objects is smaller than two, taking the first objects as the target objects.
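The screening flow of the two embodiments above can be sketched as follows. The object representation (a volume paired with a main-color RGB tuple), the function names, and the injected similarity function are illustrative assumptions:

```python
def select_target_objects(objects, volume_threshold, approx_threshold, approximation):
    """Screen (volume, main_color_rgb) tuples into target objects.

    `approximation` is the main-color similarity function, e.g. a
    cosine similarity over RGB vectors.
    """
    # Keep only "first objects": those whose volume exceeds the preset value.
    first = [obj for obj in objects if obj[0] > volume_threshold]
    if len(first) < 2:
        return first  # fewer than two first objects: use them directly
    # Sort by volume, descending, and keep the two largest ("second objects").
    second = sorted(first, key=lambda obj: obj[0], reverse=True)[:2]
    if approximation(second[0][1], second[1][1]) > approx_threshold:
        return [second[0]]  # similar main colors: the larger object alone
    return second           # dissimilar main colors: both become targets
```

With a high similarity the larger object is returned alone; with a low similarity both second objects are kept, matching the differential rendering described above.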
In one embodiment, the step of calculating the approximation degree between the main colors of the two second objects according to the main colors of the respective second objects includes:
Respectively obtaining the RGB values corresponding to the main colors of the two second objects;
Converting the RGB values corresponding to the main colors of the two second objects into vectors, namely a first vector and a second vector. For example, if the RGB values corresponding to the main colors of the two second objects are (100, 150, 125) and (150, 90, 200) respectively, the generated first vector and second vector may be expressed as (100, 150, 125) and (150, 90, 200).
Calculating the similarity between the first vector and the second vector based on a cosine function, as the approximation degree between the main colors of the two second objects.
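The cosine-based approximation degree can be sketched as below; the RGB values are those of the example above, while the function name is illustrative:

```python
import math

def color_approximation(rgb_a, rgb_b):
    """Cosine similarity between two main-color RGB vectors.

    For non-negative RGB components the result lies in [0, 1];
    higher values mean the two main colors are more alike.
    """
    dot = sum(x * y for x, y in zip(rgb_a, rgb_b))
    norm_a = math.sqrt(sum(x * x for x in rgb_a))
    norm_b = math.sqrt(sum(x * x for x in rgb_b))
    if norm_a == 0.0 or norm_b == 0.0:
        return 0.0  # degenerate all-zero (pure black) vector
    return dot / (norm_a * norm_b)

# The first and second vectors from the example above:
similarity = color_approximation((100, 150, 125), (150, 90, 200))
```

For these two vectors the similarity comes out to roughly 0.92, so with a threshold of, say, 0.9 the two main colors would count as approximate.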
In an embodiment, a solution for determining the main color of a second object is provided. Specifically, the step of obtaining the main color of each second object according to its color composition includes:
For each second object, acquiring each color and the duty ratio of each color in the second object;
Determining a first duty ratio threshold and a second duty ratio threshold according to the number of colors in the second object, wherein the number of colors in the second object is greater than 1; when the second object contains only one color, that color is directly taken as the main color.
Judging whether the second object has a color whose duty ratio is larger than the first duty ratio threshold;
If so, taking the color whose duty ratio is larger than the first duty ratio threshold as the main color of the second object;
And if no such color is present, acquiring the colors in the second object whose duty ratios are larger than the second duty ratio threshold, and taking those colors as the main color.
In this embodiment, the main color of the second object is determined mainly according to the color ratio in the second object.
In an embodiment, the calculation formulas for determining the first duty ratio threshold and the second duty ratio threshold from the number n of colors in the second object are:
first duty ratio threshold = (a × n - 1) / (a × n), wherein a is a constant greater than 1.5;
second duty ratio threshold = 1 / n. For example, when a is 2 and the number of colors is 3, the first duty ratio threshold is 5/6 and the second duty ratio threshold is 1/3.
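The two threshold formulas and the main-color selection they drive can be sketched together as below, using a = 2 as in the worked example; the function names and the dict-based color representation are illustrative:

```python
def duty_ratio_thresholds(n_colors, a=2.0):
    """First and second duty ratio thresholds for n_colors > 1.

    `a` is the constant from the description (any value > 1.5);
    a = 2 matches the worked example.
    """
    first = (a * n_colors - 1) / (a * n_colors)
    second = 1 / n_colors
    return first, second

def main_colors(color_ratios, a=2.0):
    """Pick the main color(s) of an object from a {color: duty ratio} map."""
    if len(color_ratios) == 1:
        return list(color_ratios)  # a single color is directly the main color
    first, second = duty_ratio_thresholds(len(color_ratios), a)
    dominant = [c for c, r in color_ratios.items() if r > first]
    if dominant:
        return dominant            # a color above the first threshold wins outright
    return [c for c, r in color_ratios.items() if r > second]
```

With three colors this reproduces the worked example: thresholds 5/6 and 1/3, so a color covering 90% of the object is the sole main color, while duty ratios of 0.5 and 0.4 both clear only the second threshold.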
In an embodiment, before the step S1 of acquiring indoor image information, the method further includes:
the indoor design terminal generates a training instruction to the management server; transmitting the training instructions to the associated plurality of enterprise terminals based on the management server; the training instruction carries identification information; the identification information is a feature information which is negotiated in advance by the indoor design terminal, the management server and the plurality of enterprise terminals and is used for encrypting data and verifying validity. The management server is a trusted third party which is negotiated in advance between the indoor design terminal and the enterprise terminals, and the enterprise terminals are terminals of different enterprises.
Each enterprise terminal verifies whether the identification information is valid; if it is valid, each enterprise terminal acquires the training data adopted in its current project and the history segmentation model used in its history project.
each enterprise terminal iteratively trains a history segmentation model used in a history project based on training data adopted in the current project to obtain model parameters of the history segmentation model as sub-model parameters; it can be understood that the training data adopted in the current project is adopted to train the historical segmentation model used in the historical project, so that the training time can be reduced, the training cost can be reduced, and the segmentation model suitable for the current project can be obtained.
Each enterprise terminal encrypts its sub-model parameters with the identification information and sends the encrypted sub-model parameters to the management server; the management server decrypts each set of encrypted sub-model parameters based on the identification information, performs a joint operation on the sub-model parameters to obtain joint model parameters, and encrypts the joint model parameters with the identification information. In this embodiment, the sub-model parameters are encrypted with the identification information to avoid data leakage, while decryption, the joint operation and re-encryption are performed in sequence by the trusted management server; the identification information is used throughout this process, and the joint operation is an aggregation calculation, which is not detailed here.
The indoor design terminal acquires the encrypted joint model parameters from the management server, and decrypts the encrypted joint model parameters based on the identification information to obtain the joint model parameters;
and the indoor design terminal acquires a historical segmentation model adopted in the historical project, and updates the historical segmentation model based on the joint model parameters to obtain the preset segmentation model. In this embodiment, the indoor design terminal does not need to perform model training or make training data, only needs to send a training instruction to the management server, performs joint training through the management server and the plurality of enterprise terminals, does not need to perform exchange of the training data, and does not cause data leakage; and only a very small amount of training data is needed on each enterprise terminal, so that training time is greatly shortened, training difficulty is reduced, and meanwhile, the confidence coefficient of a training model can be improved.
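The description does not specify the joint operation beyond it being an aggregation calculation; element-wise averaging of the decrypted sub-model parameters, as in federated averaging, is one plausible reading, sketched here with the encryption and decryption steps omitted:

```python
def joint_model_parameters(sub_params):
    """Aggregate per-enterprise sub-model parameters into joint parameters.

    `sub_params` is a list of equally shaped parameter lists, one per
    enterprise terminal; the joint parameter is their element-wise mean.
    """
    n = len(sub_params)
    return [sum(values) / n for values in zip(*sub_params)]
```

The indoor design terminal would then apply the decrypted joint parameters to its history segmentation model to obtain the preset segmentation model.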
Referring to fig. 2, in an embodiment of the present invention, there is also provided a rendering system in an indoor design, including:
the detection unit is used for acquiring indoor image information; inputting the indoor image information into a preset segmentation model for detection to obtain each object in the room; wherein the segmentation model is a pre-trained deep learning model;
the screening unit is used for acquiring the characteristic information of each object, screening the objects according to the characteristic information and obtaining target objects through screening;
the matching unit is used for matching a corresponding rendering model parameter set in a preset database based on the characteristic information of the target object; wherein the rendering model parameter set comprises a plurality of rendering model parameters suitable for the target object;
the rendering unit is used for acquiring an initial rendering model, and updating the initial rendering model based on rendering model parameters in the rendering model parameter set to obtain an updated rendering model; and rendering the indoor image information based on the updated rendering model.
In this embodiment, for specific implementation of each unit in the above system embodiment, please refer to the description in the above method embodiment, and no further description is given here.
In summary, the rendering method and system in indoor design provided in the embodiments of the present invention comprise: acquiring indoor image information; inputting the indoor image information into a preset segmentation model for detection to obtain each object in the room, wherein the segmentation model is a pre-trained deep learning model; acquiring characteristic information of each object and screening the objects according to the characteristic information to obtain target objects; matching a corresponding rendering model parameter set in a preset database based on the characteristic information of the target objects, wherein the rendering model parameter set comprises a plurality of rendering model parameters suitable for the target objects; acquiring an initial rendering model and updating it based on the rendering model parameters in the rendering model parameter set to obtain an updated rendering model; and rendering the indoor image information based on the updated rendering model. According to the invention, the objects in the room are detected based on the indoor image information, and corresponding rendering model parameters are matched according to the characteristic information of those objects, so that an updated rendering model suited to the indoor objects is obtained and the indoor image information is rendered with it, overcoming the defect that existing indoor design renderings cannot be adapted to indoor home furnishings.
Those skilled in the art will appreciate that implementing all or part of the above-described methods may be accomplished by a computer program stored on a non-transitory computer-readable storage medium which, when executed, may comprise the steps of the method embodiments described above. Any reference to memory, storage, a database, or other medium used in the embodiments provided by the present invention may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, apparatus, article, or method that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, apparatus, article, or method. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, apparatus, article, or method that comprises that element.
The foregoing description is only of the preferred embodiments of the present invention and is not intended to limit the scope of the invention, and all equivalent structures or equivalent processes using the descriptions and drawings of the present invention or direct or indirect application in other related technical fields are included in the scope of the present invention.

Claims (8)

1. A rendering method in an interior design, comprising the steps of:
acquiring indoor image information; inputting the indoor image information into a preset segmentation model for detection to obtain each object in the room; wherein the segmentation model is a pre-trained deep learning model;
acquiring characteristic information of each object, screening the objects according to the characteristic information, and obtaining a target object through screening;
based on the characteristic information of the target object, matching a corresponding rendering model parameter set in a preset database; wherein the rendering model parameter set comprises a plurality of rendering model parameters suitable for the target object;
acquiring an initial rendering model, and updating the initial rendering model based on rendering model parameters in the rendering model parameter set to obtain an updated rendering model; rendering the indoor image information based on the updated rendering model;
the step of obtaining the characteristic information of each object, screening the objects according to the characteristic information to obtain target objects, comprises the following steps:
acquiring size and shape information and color information of each object;
predicting the volume of each object according to the size and shape information of each object; selecting, from the objects, objects whose volume is larger than a preset value as first objects, and judging whether the number of the first objects is smaller than two;
if the number of the first objects is less than two, taking the first objects as the target objects;
if the number of the first objects is not smaller than two, sorting the first objects in descending order of volume; selecting the two objects with the largest volumes from the sorted first objects as second objects;
acquiring the color composition of each second object based on the color information of each second object;
obtaining the main color of each second object according to the color composition of each second object;
calculating the approximation degree between the main colors of the two second objects according to the main colors of the second objects;
judging whether the approximation degree is larger than a threshold value; if larger, taking the second object with the largest volume as the target object; and if not larger, taking the two second objects together as target objects.
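The screening steps in claim 1 can be condensed into a short function. This is a minimal sketch: the preset volume value, the approximation threshold, and the similarity function are all assumptions supplied by the caller, not values fixed by the patent.

```python
def select_targets(objects, preset_volume, approx_threshold, approximation):
    """Screen objects into target objects per the claimed steps.

    objects: list of (volume, main_color) tuples.
    approximation: function returning the approximation degree of two colors.
    """
    # Step 1: keep objects whose volume exceeds the preset value.
    first = [o for o in objects if o[0] > preset_volume]
    if len(first) < 2:
        return first                      # fewer than two: all are targets

    # Step 2: sort by volume, descending, and take the two largest.
    first.sort(key=lambda o: o[0], reverse=True)
    second = first[:2]

    # Step 3: compare main colors; similar colors keep only the largest.
    if approximation(second[0][1], second[1][1]) > approx_threshold:
        return [second[0]]
    return second
```

When the two largest objects share a similar main color, rendering parameters matched for the larger one are assumed to suit both, so only it is kept.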
2. The rendering method in an indoor design according to claim 1, wherein the rendering model parameters include: the height and width of the rendered picture, global light switch option parameters, global deterministic Monte Carlo option parameters, a noise threshold setting parameter of the indoor rendering model, and light cache option parameters.
3. The rendering method in an indoor design according to claim 2, wherein the global light switch option parameters include a gloss effect parameter, a secondary ray offset setting parameter, an image filter parameter, and an image sampler parameter.
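A parameter set of the kind enumerated in claims 2 and 3 might be stored in the preset database as a simple record. All names and values below are illustrative assumptions and are not tied to any particular renderer:

```python
# Hypothetical rendering-parameter record for one target object type.
# Field names mirror the parameter categories listed in claims 2-3.
sofa_parameters = {
    "width": 1920,                      # rendered picture width
    "height": 1080,                     # rendered picture height
    "global_light_switch": {
        "gloss_effect": True,
        "secondary_ray_offset": 0.001,
        "image_filter": "area",
        "image_sampler": "adaptive",
    },
    "dmc_options": {"adaptive_amount": 0.85},  # deterministic Monte Carlo
    "noise_threshold": 0.01,
    "light_cache": {"subdivs": 1000},
}
```

Updating the initial rendering model then amounts to overwriting its defaults with such a record.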
4. The rendering method in an indoor design according to claim 1, wherein the step of calculating the approximation degree between the body colors of the two second objects from the body colors of the respective second objects includes:
respectively obtaining RGB values corresponding to the main colors of the two second objects;
converting RGB values corresponding to the main colors of the two second objects into vectors respectively, wherein the vectors are a first vector and a second vector respectively;
the similarity between the first vector and the second vector is calculated based on a cosine function as an approximation between the body colors of the two second objects.
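The cosine-based approximation of claim 4 treats each main color's RGB value as a 3-vector and measures the angle between them:

```python
import math

def color_approximation(rgb1, rgb2):
    """Cosine similarity between two RGB colors treated as 3-vectors."""
    dot = sum(a * b for a, b in zip(rgb1, rgb2))
    norm1 = math.sqrt(sum(a * a for a in rgb1))
    norm2 = math.sqrt(sum(b * b for b in rgb2))
    return dot / (norm1 * norm2)
```

Note that cosine similarity is scale-invariant, so it compares hue direction rather than brightness: (128, 0, 0) and (255, 0, 0) score 1.0.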
5. The rendering method in the indoor design according to claim 1, wherein the step of acquiring the main body color of each of the second objects from the color composition of each of the second objects includes:
for each second object, acquiring each color and the duty ratio of each color in each second object;
determining a first duty ratio threshold and a second duty ratio threshold according to the number of colors in the second object; wherein the number of colors in the second object is greater than 1;
judging whether the second object has a color with a color duty ratio larger than the first duty ratio threshold value;
if so, taking the color with the color duty ratio larger than the first duty ratio threshold value as the main color of the second object;
and if the color is not present, acquiring the color with the color ratio larger than the second duty ratio threshold value in the second object, and taking the color with the color ratio larger than the second duty ratio threshold value as the main color.
6. The rendering method according to claim 5, wherein the calculation formula for determining the first duty threshold and the second duty threshold according to the number of the colors in the second object is:
first duty ratio threshold = (a × n - 1) / (a × n), where n is the number of colors in the second object and a is a constant greater than 1.5;
second duty ratio threshold = 1 / n.
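Claims 5 and 6 together define the main-color selection, which can be sketched directly from the two threshold formulas. The value a = 2.0 below is an assumption (the claim only requires a > 1.5):

```python
def main_color(color_ratios, a=2.0):
    """Pick the main color(s) of an object per claims 5-6.

    color_ratios: {color: proportion}, proportions summing to 1,
    with more than one color present.
    """
    n = len(color_ratios)
    first_threshold = (a * n - 1) / (a * n)    # claim 6, first formula
    second_threshold = 1 / n                   # claim 6, second formula

    # Prefer a color dominant enough to clear the first threshold.
    dominant = [c for c, r in color_ratios.items() if r > first_threshold]
    if dominant:
        return dominant
    # Otherwise fall back to colors above the average share 1/n.
    return [c for c, r in color_ratios.items() if r > second_threshold]
```

With two colors and a = 2.0 the first threshold is 3/4, so an 80% color is the sole main color; with three roughly even colors nothing clears 5/6 and the fallback keeps every color above the 1/3 average.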
7. The method of rendering in an indoor design according to claim 1, further comprising, before the step of acquiring indoor image information:
the indoor design terminal generates a training instruction and sends it to the management server; the management server transmits the training instruction to a plurality of associated enterprise terminals; wherein the training instruction carries identification information;
each enterprise terminal verifies whether the identification information is valid; if valid, each enterprise terminal acquires the training data adopted in its current project and the history segmentation model used in its history project;
each enterprise terminal iteratively trains a history segmentation model used in a history project based on training data adopted in the current project to obtain model parameters of the history segmentation model as sub-model parameters;
each enterprise terminal encrypts its sub-model parameters with the identification information and sends the encrypted sub-model parameters to the management server; the management server decrypts each set of encrypted sub-model parameters based on the identification information, performs a joint operation on the sub-model parameters to obtain joint model parameters, and encrypts the joint model parameters with the identification information;
the indoor design terminal acquires the encrypted joint model parameters from the management server, and decrypts the encrypted joint model parameters based on the identification information to obtain the joint model parameters;
and the indoor design terminal acquires a historical segmentation model adopted in the historical project, and updates the historical segmentation model based on the joint model parameters to obtain the preset segmentation model.
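The federated flow of claim 7 can be sketched as follows. Heavy hedging applies: the claim specifies neither the cipher nor the joint operation, so the XOR keystream derived from the identification information and the element-wise averaging below are illustrative assumptions only, not the patented scheme (and the toy cipher is not secure).

```python
import hashlib
import struct

def keystream(identification: bytes, length: int) -> bytes:
    # Derive a repeatable byte stream from the identification info
    # (toy construction for illustration, not a real cipher).
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(identification + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_params(params, identification):
    # Serialize the sub-model parameters and XOR with the keystream.
    raw = struct.pack(f"{len(params)}d", *params)
    key = keystream(identification, len(raw))
    return bytes(a ^ b for a, b in zip(raw, key))

def decrypt_params(blob, identification, count):
    key = keystream(identification, len(blob))
    raw = bytes(a ^ b for a, b in zip(blob, key))
    return list(struct.unpack(f"{count}d", raw))

def joint_parameters(param_sets):
    # Assumed "joint operation": element-wise mean of the sub-model
    # parameters, in the spirit of federated averaging.
    return [sum(values) / len(param_sets) for values in zip(*param_sets)]
```

Each enterprise terminal would call `encrypt_params` on its trained sub-model parameters; the management server decrypts every set, averages them with `joint_parameters`, and re-encrypts the result for the indoor design terminal.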
8. A rendering system in an interior design, comprising:
the detection unit is used for acquiring indoor image information; inputting the indoor image information into a preset segmentation model for detection to obtain each object in the room; wherein the segmentation model is a pre-trained deep learning model;
the screening unit is used for acquiring the characteristic information of each object, screening the objects according to the characteristic information and obtaining target objects through screening;
the matching unit is used for matching a corresponding rendering model parameter set in a preset database based on the characteristic information of the target object; wherein the rendering model parameter set comprises a plurality of rendering model parameters suitable for the target object;
the rendering unit is used for acquiring an initial rendering model, and updating the initial rendering model based on rendering model parameters in the rendering model parameter set to obtain an updated rendering model; rendering the indoor image information based on the updated rendering model;
the screening unit is specifically configured to:
acquiring size and shape information and color information of each object;
predicting the volume of each object according to the size and shape information of each object; selecting, from the objects, objects whose volume is larger than a preset value as first objects, and judging whether the number of the first objects is smaller than two;
if the number of the first objects is less than two, taking the first objects as the target objects;
if the number of the first objects is not smaller than two, sorting the first objects in descending order of volume; selecting the two objects with the largest volumes from the sorted first objects as second objects;
acquiring the color composition of each second object based on the color information of each second object;
obtaining the main color of each second object according to the color composition of each second object;
calculating the approximation degree between the main colors of the two second objects according to the main colors of the second objects;
judging whether the approximation degree is larger than a threshold value; if larger, taking the second object with the largest volume as the target object; and if not larger, taking the two second objects together as target objects.
CN202310405498.9A 2023-04-17 2023-04-17 Rendering method and system in indoor design Active CN116127587B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310405498.9A CN116127587B (en) 2023-04-17 2023-04-17 Rendering method and system in indoor design

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310405498.9A CN116127587B (en) 2023-04-17 2023-04-17 Rendering method and system in indoor design

Publications (2)

Publication Number Publication Date
CN116127587A CN116127587A (en) 2023-05-16
CN116127587B true CN116127587B (en) 2023-06-16

Family

ID=86301301

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310405498.9A Active CN116127587B (en) 2023-04-17 2023-04-17 Rendering method and system in indoor design

Country Status (1)

Country Link
CN (1) CN116127587B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117195378B (en) * 2023-11-02 2024-02-06 北京装库创意科技有限公司 Home layout optimization method and system based on big data

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102018121282A1 (en) * 2017-09-06 2019-03-07 Nvidia Corporation DIFFERENTIAL RENDERING PIPELINE FOR INVERSE GRAPHICS
CN111177622A (en) * 2019-12-23 2020-05-19 深圳壹账通智能科技有限公司 Webpage rendering method and device based on machine learning and computer equipment

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150178786A1 (en) * 2012-12-25 2015-06-25 Catharina A.J. Claessens Pictollage: Image-Based Contextual Advertising Through Programmatically Composed Collages
US10282914B1 (en) * 2015-07-17 2019-05-07 Bao Tran Systems and methods for computer assisted operation
CN107871338B (en) * 2016-09-27 2019-12-03 重庆完美空间科技有限公司 Real-time, interactive rendering method based on scene decoration
US11043026B1 (en) * 2017-01-28 2021-06-22 Pointivo, Inc. Systems and methods for processing 2D/3D data for structures of interest in a scene and wireframes generated therefrom
CN109408954B (en) * 2018-10-23 2023-05-02 美宅科技(北京)有限公司 Indoor design method and device applied to electronic commerce
CA3134424A1 (en) * 2019-03-18 2020-09-24 Geomagical Labs, Inc. Virtual interaction with three-dimensional indoor room imagery
CN112115291B (en) * 2020-08-12 2024-02-27 南京止善智能科技研究院有限公司 Three-dimensional indoor model retrieval method based on deep learning
CN112102462B (en) * 2020-09-27 2023-07-21 北京百度网讯科技有限公司 Image rendering method and device
US20230035601A1 (en) * 2021-07-28 2023-02-02 OPAL AI Inc. Floorplan Generation System And Methods Of Use

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102018121282A1 (en) * 2017-09-06 2019-03-07 Nvidia Corporation DIFFERENTIAL RENDERING PIPELINE FOR INVERSE GRAPHICS
CN111177622A (en) * 2019-12-23 2020-05-19 深圳壹账通智能科技有限公司 Webpage rendering method and device based on machine learning and computer equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Perception-based multi-viewpoint rendering optimization of indoor scenes; Ji Mengyu et al.; Journal of University of Science and Technology of China (Issue 02); pp. 1-7 *

Also Published As

Publication number Publication date
CN116127587A (en) 2023-05-16

Similar Documents

Publication Publication Date Title
CN109284313B (en) Federal modeling method, device and readable storage medium based on semi-supervised learning
CN110633805B (en) Longitudinal federal learning system optimization method, device, equipment and readable storage medium
CN109002861B (en) Federal modeling method, device and storage medium
CN109255444B (en) Federal modeling method and device based on transfer learning and readable storage medium
CN109165725B (en) Neural network federal modeling method, equipment and storage medium based on transfer learning
CN108229296B (en) Face skin attribute identification method and device, electronic equipment and storage medium
CN116127587B (en) Rendering method and system in indoor design
CN112633311A (en) Efficient black-box antagonistic attacks using input data structures
CN111598182B (en) Method, device, equipment and medium for training neural network and image recognition
CN109214374B (en) Video classification method, device, server and computer-readable storage medium
CN110751149B (en) Target object labeling method, device, computer equipment and storage medium
CN107204956B (en) Website identification method and device
CN112116008A (en) Target detection model processing method based on intelligent decision and related equipment thereof
CN112529101B (en) Classification model training method and device, electronic equipment and storage medium
WO2023087656A1 (en) Image generation method and apparatus
CN111651731A (en) Method for converting entity product into digital asset and storing same on block chain
CN111783630B (en) Data processing method, device and equipment
CN110135943B (en) Product recommendation method, device, computer equipment and storage medium
CN108769973A (en) A kind of method for secret protection of bluetooth equipment
CN112529102B (en) Feature expansion method, device, medium and computer program product
CN107948721B (en) Method and device for pushing information
Ślot et al. Autoencoder-based image processing framework for object appearance modifications
CN110717037B (en) Method and device for classifying users
CN111079587B (en) Face recognition method and device, computer equipment and readable storage medium
CN112541556A (en) Model construction optimization method, device, medium, and computer program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant