CN117372598A - Visual rendering method, system and medium based on environmental art design - Google Patents

Visual rendering method, system and medium based on environmental art design

Info

Publication number
CN117372598A
Authority
CN
China
Prior art keywords
scene
rendering
visual
information
layout
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311512263.6A
Other languages
Chinese (zh)
Inventor
王愉贵子
田硕
冯超
陈思
邵帅
梁松山
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Polytechnic College
Original Assignee
Shandong Polytechnic College
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Polytechnic College
Priority to CN202311512263.6A
Publication of CN117372598A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74: Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761: Proximity, similarity or dissimilarity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Graphics (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the application provides a visual rendering method, system and medium based on environmental art design. The method comprises the following steps: acquiring a scene image, extracting image features, analyzing the scene layout according to the image features, and inputting the scene layout into a preset model to generate rendering parameters; analyzing the visual effect according to the rendering parameters, and calculating the similarity between the visual effect and a preset effect; dynamically adjusting the rendering parameters according to the similarity calculation result to obtain correction information, optimizing the scene image according to the correction information, and transmitting the optimization result to the terminal. Targeted scene rendering is performed according to the scene layout, the visual effect after scene rendering is evaluated and analyzed, and the rendering parameters are dynamically adjusted according to the visual effect, so that real-time adjustment of visual rendering is realized and the visual rendering effect is improved.

Description

Visual rendering method, system and medium based on environmental art design
Technical Field
The application relates to the field of visual rendering, in particular to a visual rendering method, a system and a medium based on environmental art design.
Background
Environmental art is a green art and science that seeks to create harmonious and durable environments. Urban planning, urban design, architectural design, interior design, urban sculpture, murals, small architectural products and the like all fall within the scope of environmental art. Environmental art, also called environmental design, is still a developing discipline: a complete theoretical system has not yet been formed, and the theoretical scope and working range of the discipline, including its very definition, are not yet unified. Rendering technology refers to the process of simulating, within a three-dimensional scene, the lighting of the physical environment and the texture of objects in the physical world so as to obtain a relatively realistic image. Rendering is not an independent concept; it is the process of bringing together all the work on the three-dimensional model, textures, lighting, camera and effects to form the final image sequence, in essence creating pixels and giving them different colors to form a complete image. The rendering process requires a large amount of complex calculation and keeps the computer busy. During environmental art design, existing visual rendering methods find it difficult to adjust the environmental art by means of atmosphere lamps and the principles of light reflection and refraction, so the visual rendering effect is poor, and an effective technical solution to these problems is urgently needed.
Disclosure of Invention
An object of the embodiment of the application is to provide a visual rendering method, system and medium based on environmental art design, which performs targeted scene rendering according to the scene layout, evaluates and analyzes the visual effect after scene rendering, and dynamically adjusts the rendering parameters according to the visual effect, thereby realizing real-time adjustment of visual rendering and improving the visual rendering effect.
In a first aspect, the embodiment of the application provides a visual rendering method based on environmental art design, which comprises the following steps:
acquiring a scene image, extracting image features, analyzing scene layout according to the image features, inputting the scene layout into a preset model, and generating rendering parameters;
analyzing the visual effect according to the rendering parameters, and calculating the similarity between the visual effect and a preset effect;
dynamically adjusting rendering parameters according to the similarity calculation result to obtain correction information,
and optimizing the scene image according to the correction information, and transmitting an optimization result to the terminal.
Optionally, in the visual rendering method based on the environmental art design according to the embodiment of the present application, a scene image is acquired, image features are extracted, and a scene layout is analyzed according to the image features, including:
acquiring image characteristics, and comparing the image characteristics with preset characteristics to obtain a characteristic deviation rate;
judging whether the characteristic deviation rate is larger than a preset characteristic deviation rate threshold value or not;
if the feature deviation rate is greater than the threshold, the features are determined to be background features, and the background features are segmented and removed;
and if the feature deviation rate is not greater than the threshold, the features are determined to be layout features, and the layout features are screened and combined to generate scene layout information.
Optionally, in the visual rendering method based on the environmental art design according to the embodiment of the present application, a scene layout is input into a preset model to generate rendering parameters, specifically:
acquiring scene layout information, inputting the scene layout information into a preset model for training, and obtaining a training result;
judging whether the training result is converged or not, if not, continuing training the model until the training result is converged;
and if so, generating rendering parameters according to a preset model.
Optionally, in the visual rendering method based on the environmental art design according to the embodiment of the present application, analyzing a visual effect according to a rendering parameter, and performing similarity calculation on the visual effect and a preset effect, including:
acquiring rendering parameters, performing scene rendering according to the rendering parameters, and analyzing rendering effects according to scene rendering results, wherein the rendering effects comprise scene brightness rendering, scene color rendering and scene ray rendering;
and performing scene division brightness analysis, scene color analysis and scene ray analysis according to the rendering effect to obtain brightness difference, scene chromatic aberration and scene ray refraction information.
Optionally, in the visual rendering method based on the environmental art design according to the embodiment of the present application, a rendering parameter is obtained, scene rendering is performed according to the rendering parameter, and a rendering effect is analyzed according to a scene rendering result, where the rendering effect includes scene brightness rendering, scene color rendering, and scene ray rendering, and then the method further includes:
acquiring scene brightness information, analyzing scene resolution according to the scene brightness information,
calculating scene definition from the scene resolution;
comparing the scene definition with a preset definition;
if the scene definition is smaller than the preset definition, dynamically adjusting the scene brightness;
if the scene definition is larger than the preset definition, analyzing the scene color under the corresponding scene brightness, analyzing the color difference, and adjusting the scene brightness in real time according to the color difference.
Optionally, in the visual rendering method based on the environmental art design according to the embodiment of the present application, the rendering parameters are dynamically adjusted according to the similarity calculation result to obtain the correction information, which specifically includes:
obtaining similarity information, and comparing the similarity information with preset similarity information to obtain a similarity deviation rate;
if the similarity deviation rate is larger than the first deviation rate threshold and smaller than the second deviation rate threshold, generating first correction information, and adjusting the rendering parameters in a first mode according to the first correction information;
if the similarity deviation rate is larger than a second deviation rate threshold value, generating second correction information, and adjusting the rendering parameters in a second mode according to the second correction information;
the first deviation ratio threshold is less than the second deviation ratio threshold.
In a second aspect, embodiments of the present application provide a visual rendering system based on an environmental artistic design, the system comprising: the system comprises a memory and a processor, wherein the memory comprises a program of a visual rendering method based on an environmental art design, and the program of the visual rendering method based on the environmental art design realizes the following steps when being executed by the processor:
acquiring a scene image, extracting image features, analyzing scene layout according to the image features, inputting the scene layout into a preset model, and generating rendering parameters;
analyzing the visual effect according to the rendering parameters, and calculating the similarity between the visual effect and a preset effect;
dynamically adjusting rendering parameters according to the similarity calculation result to obtain correction information,
and optimizing the scene image according to the correction information, and transmitting an optimization result to the terminal.
Optionally, in the visual rendering system based on the environmental art design according to the embodiment of the present application, a scene image is acquired, image features are extracted, and a scene layout is analyzed according to the image features, including:
acquiring image characteristics, and comparing the image characteristics with preset characteristics to obtain a characteristic deviation rate;
judging whether the characteristic deviation rate is larger than a preset characteristic deviation rate threshold value or not;
if the feature deviation rate is greater than the threshold, the features are determined to be background features, and the background features are segmented and removed;
and if the feature deviation rate is not greater than the threshold, the features are determined to be layout features, and the layout features are screened and combined to generate scene layout information.
Optionally, in the visual rendering system based on the environmental art design according to the embodiment of the present application, a scene layout is input into a preset model to generate rendering parameters, specifically:
acquiring scene layout information, inputting the scene layout information into a preset model for training, and obtaining a training result;
judging whether the training result is converged or not, if not, continuing training the model until the training result is converged;
and if so, generating rendering parameters according to a preset model.
In a third aspect, embodiments of the present application further provide a computer readable storage medium, where a visual rendering method program based on an environmental art design is included, where the visual rendering method program based on an environmental art design, when executed by a processor, implements the steps of the visual rendering method based on an environmental art design as described in any one of the above.
As can be seen from the above, the visual rendering method, system and medium based on the environmental art design provided in the embodiments of the present application, by acquiring a scene image, extracting image features, analyzing a scene layout according to the image features, and inputting the scene layout into a preset model to generate rendering parameters; analyzing the visual effect according to the rendering parameters, and calculating the similarity between the visual effect and a preset effect; dynamically adjusting rendering parameters according to the similarity calculation result to obtain correction information, optimizing the scene image according to the correction information, and transmitting the optimization result to the terminal; and performing targeted scene rendering according to the scene layout, judging visual effects after scene rendering, analyzing, and dynamically adjusting rendering parameters according to the visual effects, so that real-time adjustment of visual rendering is realized, and visual rendering effects are improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments of the present application will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a visual rendering method based on an environmental art design provided in an embodiment of the present application;
fig. 2 is a flowchart of obtaining scene layout information of a visual rendering method based on an environmental art design according to an embodiment of the present application;
fig. 3 is a flowchart of a rendering parameter generation method of a visual rendering method based on an environmental art design according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. The components of the embodiments of the present application, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, as provided in the accompanying drawings, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present application without making any inventive effort, are intended to be within the scope of the present application.
It should be noted that like reference numerals and letters refer to like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only to distinguish the description, and are not to be construed as indicating or implying relative importance.
Referring to fig. 1, fig. 1 is a flowchart of a visual rendering method based on an environmental art design according to some embodiments of the present application. The visual rendering method based on the environmental art design is used in terminal equipment and comprises the following steps:
s101, acquiring a scene image, extracting image features, analyzing scene layout according to the image features, inputting the scene layout into a preset model, and generating rendering parameters;
s102, analyzing a visual effect according to rendering parameters, and calculating the similarity between the visual effect and a preset effect;
s103, dynamically adjusting rendering parameters according to the similarity calculation result to obtain correction information,
s104, optimizing the scene image according to the correction information, and transmitting an optimization result to the terminal.
It should be noted that the scene layout is determined by analyzing the scene image, different scene layouts are rendered in different modes, and the visual effect is analyzed so that the rendering parameters can be dynamically adjusted in real time, thereby improving the rendering effect.
Referring to fig. 2, fig. 2 is a scene layout information obtaining flowchart of a visual rendering method based on an environmental art design according to some embodiments of the present application. According to the embodiment of the invention, a scene image is acquired, image features are extracted, and scene layout is analyzed according to the image features, and the method specifically comprises the following steps:
s201, obtaining image features, and comparing the image features with preset features to obtain feature deviation rates;
s202, judging whether the characteristic deviation rate is larger than a preset characteristic deviation rate threshold value;
s203, if the feature deviation rate is greater than the threshold, the features are determined to be background features, and the background features are segmented and removed;
if the feature deviation rate is not greater than the threshold, the features are determined to be layout features, and the layout features are screened and combined to generate scene layout information.
By analyzing the image features, the background features and the layout features are distinguished; all background features are removed and only the layout features are retained. This ensures accurate judgment of the scene layout, realizes accurate analysis of the scene layout, and improves the precision of scene rendering analysis.
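As a minimal illustration of the feature screening described above, the following Python sketch compares extracted feature vectors with preset features and keeps only the layout features; the deviation-rate formula, the threshold value and the way layout features are combined into scene layout information are illustrative assumptions, not details specified by the application.

    import numpy as np

    def split_layout_features(image_features, preset_features, deviation_threshold=0.3):
        # Compare each extracted feature vector with its preset counterpart and
        # compute a relative feature deviation rate (assumed formula).
        image_features = np.asarray(image_features, dtype=float)
        preset_features = np.asarray(preset_features, dtype=float)
        deviation = np.linalg.norm(image_features - preset_features, axis=1) / (
            np.linalg.norm(preset_features, axis=1) + 1e-8)

        background_mask = deviation > deviation_threshold   # above threshold: background features, removed
        layout_features = image_features[~background_mask]  # otherwise: layout features, retained

        # "Screen and combine" the layout features into scene layout information;
        # the mean vector is used here purely for illustration.
        scene_layout_info = layout_features.mean(axis=0) if len(layout_features) else None
        return scene_layout_info, background_mask

    # Example usage with random feature vectors:
    rng = np.random.default_rng(0)
    layout_info, background = split_layout_features(rng.random((10, 8)), rng.random((10, 8)))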
Referring to fig. 3, fig. 3 is a flowchart of a rendering parameter generation method of a visual rendering method based on an environmental art design according to some embodiments of the present application. According to the embodiment of the invention, the scene layout is input into a preset model to generate rendering parameters, specifically:
s301, acquiring scene layout information, inputting the scene layout information into a preset model for training, and obtaining a training result;
s302, judging whether the training result is converged, if not, continuing training the model until the training result is converged;
and S303, if the convergence is performed, generating rendering parameters according to a preset model.
It should be noted that the model is computed iteratively according to the scene layout information, and the number of iterations is set to judge whether the model training result converges; if it does not converge, the number of iterations is adjusted so that the model output approaches the actual result. In this way, corresponding rendering parameters are automatically generated for different scene layouts, which provides a basis for the subsequent judgment of the visual rendering effect.
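The convergence check can be sketched as follows. The application does not disclose what the preset model is, so a simple linear model trained by gradient descent is used here only to show how training continues until the result converges and rendering parameters are then produced; all names and numeric values are assumptions.

    import numpy as np

    def train_until_converged(layout_info, targets, lr=0.01, tol=1e-4, max_iters=10000):
        # Illustrative stand-in for the preset model: fit a linear mapping from
        # scene layout information to rendering parameters by gradient descent,
        # judging convergence by the change in loss between iterations.
        X = np.asarray(layout_info, dtype=float)
        y = np.asarray(targets, dtype=float)
        w = np.zeros(X.shape[1])
        prev_loss = float("inf")
        for _ in range(max_iters):
            pred = X @ w
            loss = float(np.mean((pred - y) ** 2))
            if abs(prev_loss - loss) < tol:                  # training result has converged
                break
            w -= lr * (2.0 * X.T @ (pred - y) / len(y))      # otherwise keep training
            prev_loss = loss
        return w   # weights used to generate rendering parameters from a layout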
According to the embodiment of the invention, the visual effect is analyzed according to the rendering parameters, and the similarity calculation is carried out on the visual effect and the preset effect, which comprises the following steps:
acquiring rendering parameters, performing scene rendering according to the rendering parameters, and analyzing rendering effects according to scene rendering results, wherein the rendering effects comprise scene brightness rendering, scene color rendering and scene ray rendering;
and performing scene division brightness analysis, scene color analysis and scene ray analysis according to the rendering effect to obtain brightness difference, scene chromatic aberration and scene ray refraction information.
It should be noted that whether the resolution is abnormal, that is, whether the image is blurred, is judged by analyzing the brightness of the scene image, and the scene brightness is adjusted in real time to improve the image resolution; in addition, the scene color is used to assist in judging the scene rendering effect, which improves the rendering precision.
Further, the scene ray rendering result is acquired, the image shadows produced during scene rendering are analyzed, and the influence of these shadows on the rendering process is judged; the incidence angle and refraction angle of the light in the image are analyzed, the change of the image shadows is obtained from the incidence angle and refraction angle, and the relationship between the change of the light and the change of the image shadows is analyzed, so that the image lighting can be accurately adjusted.
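Since this step relies on the relation between incidence angle, refraction angle and shadow, a small worked example may help; the refractive indices and the simple shadow-length model below are generic physics defaults (Snell's law), not values taken from the application.

    import math

    def refraction_angle(incidence_deg, n1=1.0, n2=1.33):
        # Snell's law: n1*sin(i) = n2*sin(r); returns None on total internal reflection.
        sin_r = n1 * math.sin(math.radians(incidence_deg)) / n2
        if abs(sin_r) > 1.0:
            return None
        return math.degrees(math.asin(sin_r))

    def shadow_length(object_height, light_elevation_deg):
        # Length of the shadow cast by an object under a light at the given elevation.
        return object_height / math.tan(math.radians(light_elevation_deg))

    print(refraction_angle(45.0))      # about 32.1 degrees for an air-to-water interface
    print(shadow_length(2.0, 60.0))    # about 1.15 length units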
According to the embodiment of the invention, the rendering parameters are obtained, the scene rendering is carried out according to the rendering parameters, the analysis rendering effect is carried out according to the scene rendering result, the rendering effect comprises scene brightness rendering, scene color rendering and scene ray rendering, and the method further comprises the following steps:
acquiring scene brightness information, analyzing scene resolution according to the scene brightness information,
calculating scene definition from the scene resolution;
comparing the scene definition with a preset definition;
if the scene definition is smaller than the preset definition, dynamically adjusting the scene brightness;
if the scene definition is larger than the preset definition, analyzing the scene color under the corresponding scene brightness, analyzing the color difference, and adjusting the scene brightness in real time according to the color difference.
It should be noted that the change in image definition after scene rendering is judged by analyzing the scene definition, and the scene brightness is adjusted according to that change, so as to ensure a high scene definition.
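A minimal sketch of the definition check follows; the variance-of-Laplacian measure is a common sharpness proxy and is only an assumption here, since the application does not state how scene definition is computed from scene resolution, and the brightness gain is likewise a placeholder.

    import numpy as np

    def scene_definition(gray_image):
        # Sharpness (definition) proxy: variance of a discrete Laplacian of the image.
        g = np.asarray(gray_image, dtype=float)
        lap = (np.roll(g, 1, 0) + np.roll(g, -1, 0) +
               np.roll(g, 1, 1) + np.roll(g, -1, 1) - 4.0 * g)
        return float(lap.var())

    def adjust_scene_brightness(gray_image, preset_definition, gain=1.1):
        # Below the preset definition: raise brightness; otherwise leave the image
        # unchanged and defer to the colour-difference analysis described above.
        if scene_definition(gray_image) < preset_definition:
            return np.clip(np.asarray(gray_image, dtype=float) * gain, 0, 255)
        return gray_image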
According to the embodiment of the invention, the rendering parameters are dynamically adjusted according to the similarity calculation result to obtain the correction information, which specifically comprises the following steps:
obtaining similarity information, and comparing the similarity information with preset similarity information to obtain a similarity deviation rate;
if the similarity deviation rate is larger than the first deviation rate threshold and smaller than the second deviation rate threshold, generating first correction information, and adjusting the rendering parameters in a first mode according to the first correction information;
if the similarity deviation rate is larger than a second deviation rate threshold value, generating second correction information, and adjusting the rendering parameters in a second mode according to the second correction information;
the first deviation ratio threshold is less than the second deviation ratio threshold.
It should be noted that the similarity between the scene rendering effect and the preset rendering effect is analyzed, and the rendering parameters are adjusted in different ways for different similarities, so as to ensure that the rendering parameters can accurately render the scene image and improve the rendering effect.
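The two-threshold correction logic can be written compactly as below; only the threshold structure (the first deviation rate threshold being smaller than the second) comes from the text, while the deviation-rate formula and the concrete first/second adjustment modes are illustrative assumptions.

    def correction_from_similarity(similarity, preset_similarity,
                                   first_threshold=0.1, second_threshold=0.3):
        # Similarity deviation rate relative to the preset similarity (assumed formula).
        deviation = abs(similarity - preset_similarity) / max(preset_similarity, 1e-8)
        if deviation > second_threshold:
            return {"mode": "second", "scale": 0.5}   # large deviation: strong correction
        if deviation > first_threshold:
            return {"mode": "first", "scale": 0.9}    # moderate deviation: mild correction
        return {"mode": "none", "scale": 1.0}         # within tolerance: keep parameters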
According to an embodiment of the present invention, the method further comprises: acquiring scene images from different viewing angles, comparing the scene images from different viewing angles to obtain layout state information, and judging the light refraction angles under different viewing angles according to the layout state information;
calculating the difference between the light refraction angles under different viewing angles to obtain the angle difference;
and analyzing the angle deviation according to the angle difference, and adjusting the illumination angle of the atmosphere lamp according to the angle deviation.
It should be noted that the installation position and illumination angle of the atmosphere lamp affect the scene image during acquisition. The illumination condition of the atmosphere lamp is judged by analyzing the light refraction angle of the same object under different viewing angles, and the gray scale and brightness of the scene image are analyzed; the rendering parameters are then adjusted according to the gray scale and brightness of the scene image, so that the scene images acquired from different viewing angles all meet the preset resolution and the acquired scene images remain sharp.
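The angle-difference step for the atmosphere lamp can be illustrated as follows; differencing adjacent viewing angles and rotating the lamp against the mean deviation is an assumed policy, since the application only states that the angle difference drives the lamp adjustment.

    def lamp_angle_adjustment(refraction_angles_by_view):
        # Refraction angles of the same object observed from different viewing angles.
        angles = list(refraction_angles_by_view)
        diffs = [b - a for a, b in zip(angles, angles[1:])]   # angle differences between views
        mean_deviation = sum(diffs) / len(diffs)
        return -mean_deviation    # rotate the atmosphere lamp against the mean deviation

    print(lamp_angle_adjustment([31.5, 33.0, 34.5]))   # -1.5 degrees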
According to an embodiment of the present invention, further comprising: acquiring scene images and rendering image information, analyzing rendering effects according to the rendering image information, and performing rendering scoring according to the rendering effects to obtain scoring information;
performing reverse analysis of the rendering according to the scoring information to determine whether the rendering meets the requirements;
if the requirements are met, the corresponding rendering parameters are retrieved according to the scene image;
and if the requirements are not met, the rendering parameters are adjusted according to the rendering score.
It should be noted that, the difference between the effect after rendering and the preset rendering effect is judged by analyzing the scoring information, and the rendering parameters are dynamically adjusted according to the difference information, so that the rendering precision is ensured, and the resolution of the scene image is improved.
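A possible reading of the scoring branch is sketched below; scaling the parameters in proportion to the score shortfall is an assumption introduced for illustration, as the application only says that the parameters are adjusted according to the rendering score.

    def apply_rendering_score(score, required_score, params, cached_params):
        # Requirement met: reuse the rendering parameters associated with this scene image.
        if score >= required_score:
            return cached_params
        # Requirement not met: adjust each parameter in proportion to the shortfall.
        shortfall = (required_score - score) / required_score
        return {name: value * (1.0 + shortfall) for name, value in params.items()}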
According to an embodiment of the present invention, further comprising:
acquiring weather information, wherein the weather information includes sunny, rainy and cloudy conditions;
acquiring the light brightness and light direction on sunny days, analyzing the image exposure according to the lighting, generating first compensation information according to the image exposure, and compensating the light brightness and light direction in real time according to the first compensation information;
acquiring the hourly rainfall and wind direction on rainy days, judging the rain state, analyzing rain interference information according to the rain state, generating second compensation information according to the interference information, and adjusting the rendering parameters according to the second compensation information;
and acquiring the ambient brightness on cloudy days, comparing the ambient brightness with the current rendering state information to generate third compensation information, and adjusting the rendering parameters according to the third compensation information.
It should be noted that, by analyzing the influence of different weather factors on the rendering effect, different compensation information is generated, so as to dynamically adjust the rendering parameters.
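A minimal dispatch over the three weather cases is sketched below; the numeric compensation factors are placeholders, since the application only states that sunny, rainy and cloudy conditions each yield their own compensation information.

    def weather_compensation(weather, rendering_params):
        # Return a copy of the rendering parameters adjusted by weather-dependent
        # compensation information (first, second or third compensation).
        p = dict(rendering_params)
        if weather == "sunny":       # first compensation: exposure from light brightness/direction
            p["exposure"] = p.get("exposure", 1.0) * 0.9
        elif weather == "rainy":     # second compensation: rain interference from hourly rainfall/wind
            p["contrast"] = p.get("contrast", 1.0) * 1.1
        elif weather == "cloudy":    # third compensation: ambient brightness vs current rendering state
            p["brightness"] = p.get("brightness", 1.0) * 1.2
        return p

    print(weather_compensation("rainy", {"exposure": 1.0, "contrast": 1.0}))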
In a second aspect, embodiments of the present application provide a visual rendering system based on an environmental artistic design, the system comprising: the system comprises a memory and a processor, wherein the memory comprises a program of a visual rendering method based on an environmental art design, and the program of the visual rendering method based on the environmental art design realizes the following steps when being executed by the processor:
acquiring a scene image, extracting image features, analyzing scene layout according to the image features, inputting the scene layout into a preset model, and generating rendering parameters;
analyzing the visual effect according to the rendering parameters, and calculating the similarity between the visual effect and a preset effect;
dynamically adjusting rendering parameters according to the similarity calculation result to obtain correction information,
and optimizing the scene image according to the correction information, and transmitting an optimization result to the terminal.
It should be noted that the scene layout is determined by analyzing the scene image, different scene layouts are rendered in different modes, and the visual effect is analyzed so that the rendering parameters can be dynamically adjusted in real time, thereby improving the rendering effect.
According to the embodiment of the invention, a scene image is acquired, image features are extracted, and scene layout is analyzed according to the image features, and the method specifically comprises the following steps:
acquiring image characteristics, and comparing the image characteristics with preset characteristics to obtain a characteristic deviation rate;
judging whether the characteristic deviation rate is larger than a preset characteristic deviation rate threshold value or not;
if the feature deviation rate is greater than the threshold, the features are determined to be background features, and the background features are segmented and removed;
and if the feature deviation rate is not greater than the threshold, the features are determined to be layout features, and the layout features are screened and combined to generate scene layout information.
By analyzing the image features, the background features and the layout features are distinguished; all background features are removed and only the layout features are retained. This ensures accurate judgment of the scene layout, realizes accurate analysis of the scene layout, and improves the precision of scene rendering analysis.
According to the embodiment of the invention, the scene layout is input into a preset model to generate rendering parameters, specifically:
acquiring scene layout information, inputting the scene layout information into a preset model for training, and obtaining a training result;
judging whether the training result is converged or not, if not, continuing training the model until the training result is converged;
and if so, generating rendering parameters according to a preset model.
It should be noted that the model is computed iteratively according to the scene layout information, and the number of iterations is set to judge whether the model training result converges; if it does not converge, the number of iterations is adjusted so that the model output approaches the actual result. In this way, corresponding rendering parameters are automatically generated for different scene layouts, which provides a basis for the subsequent judgment of the visual rendering effect.
According to the embodiment of the invention, the visual effect is analyzed according to the rendering parameters, and the similarity calculation is carried out on the visual effect and the preset effect, which comprises the following steps:
acquiring rendering parameters, performing scene rendering according to the rendering parameters, and analyzing rendering effects according to scene rendering results, wherein the rendering effects comprise scene brightness rendering, scene color rendering and scene ray rendering;
and performing scene division brightness analysis, scene color analysis and scene ray analysis according to the rendering effect to obtain brightness difference, scene chromatic aberration and scene ray refraction information.
It should be noted that whether the resolution is abnormal, that is, whether the image is blurred, is judged by analyzing the brightness of the scene image, and the scene brightness is adjusted in real time to improve the image resolution; in addition, the scene color is used to assist in judging the scene rendering effect, which improves the rendering precision.
Further, the scene ray rendering result is acquired, the image shadows produced during scene rendering are analyzed, and the influence of these shadows on the rendering process is judged; the incidence angle and refraction angle of the light in the image are analyzed, the change of the image shadows is obtained from the incidence angle and refraction angle, and the relationship between the change of the light and the change of the image shadows is analyzed, so that the image lighting can be accurately adjusted.
According to the embodiment of the invention, the rendering parameters are obtained, the scene rendering is carried out according to the rendering parameters, the analysis rendering effect is carried out according to the scene rendering result, the rendering effect comprises scene brightness rendering, scene color rendering and scene ray rendering, and the method further comprises the following steps:
acquiring scene brightness information, analyzing scene resolution according to the scene brightness information,
calculating scene definition from the scene resolution;
comparing the scene definition with a preset definition;
if the scene definition is smaller than the preset definition, dynamically adjusting the scene brightness;
if the scene definition is larger than the preset definition, analyzing the scene color under the corresponding scene brightness, analyzing the color difference, and adjusting the scene brightness in real time according to the color difference.
It should be noted that the change in image definition after scene rendering is judged by analyzing the scene definition, and the scene brightness is adjusted according to that change, so as to ensure a high scene definition.
According to the embodiment of the invention, the rendering parameters are dynamically adjusted according to the similarity calculation result to obtain the correction information, which specifically comprises the following steps:
obtaining similarity information, and comparing the similarity information with preset similarity information to obtain a similarity deviation rate;
if the similarity deviation rate is larger than the first deviation rate threshold and smaller than the second deviation rate threshold, generating first correction information, and adjusting the rendering parameters in a first mode according to the first correction information;
if the similarity deviation rate is larger than a second deviation rate threshold value, generating second correction information, and adjusting the rendering parameters in a second mode according to the second correction information;
the first deviation ratio threshold is less than the second deviation ratio threshold.
It should be noted that the similarity between the scene rendering effect and the preset rendering effect is analyzed, and the rendering parameters are adjusted in different ways for different similarities, so as to ensure that the rendering parameters can accurately render the scene image and improve the rendering effect.
According to an embodiment of the present invention, the system further performs: acquiring scene images from different viewing angles, comparing the scene images from different viewing angles to obtain layout state information, and judging the light refraction angles under different viewing angles according to the layout state information;
calculating the difference between the light refraction angles under different viewing angles to obtain the angle difference;
and analyzing the angle deviation according to the angle difference, and adjusting the illumination angle of the atmosphere lamp according to the angle deviation.
It should be noted that the installation position and illumination angle of the atmosphere lamp affect the scene image during acquisition. The illumination condition of the atmosphere lamp is judged by analyzing the light refraction angle of the same object under different viewing angles, and the gray scale and brightness of the scene image are analyzed; the rendering parameters are then adjusted according to the gray scale and brightness of the scene image, so that the scene images acquired from different viewing angles all meet the preset resolution and the acquired scene images remain sharp.
According to an embodiment of the present invention, further comprising: acquiring scene images and rendering image information, analyzing rendering effects according to the rendering image information, and performing rendering scoring according to the rendering effects to obtain scoring information;
performing reverse analysis of the rendering according to the scoring information to determine whether the rendering meets the requirements;
if the requirements are met, the corresponding rendering parameters are retrieved according to the scene image;
and if the requirements are not met, the rendering parameters are adjusted according to the rendering score.
It should be noted that, the difference between the effect after rendering and the preset rendering effect is judged by analyzing the scoring information, and the rendering parameters are dynamically adjusted according to the difference information, so that the rendering precision is ensured, and the resolution of the scene image is improved.
According to an embodiment of the present invention, further comprising:
acquiring weather information, wherein the weather information includes sunny, rainy and cloudy conditions;
acquiring the light brightness and light direction on sunny days, analyzing the image exposure according to the lighting, generating first compensation information according to the image exposure, and compensating the light brightness and light direction in real time according to the first compensation information;
acquiring the hourly rainfall and wind direction on rainy days, judging the rain state, analyzing rain interference information according to the rain state, generating second compensation information according to the interference information, and adjusting the rendering parameters according to the second compensation information;
and acquiring the ambient brightness on cloudy days, comparing the ambient brightness with the current rendering state information to generate third compensation information, and adjusting the rendering parameters according to the third compensation information.
It should be noted that, by analyzing the influence of different weather factors on the rendering effect, different compensation information is generated, so as to dynamically adjust the rendering parameters.
A third aspect of the present invention provides a computer-readable storage medium having embodied therein a visual rendering method program based on an environmental art design, which when executed by a processor, implements the steps of the visual rendering method based on an environmental art design as in any one of the above.
The invention discloses a visual rendering method, a system and a medium based on environmental art design, which are characterized in that scene images are obtained, image features are extracted, scene layout is analyzed according to the image features, and the scene layout is input into a preset model to generate rendering parameters; analyzing the visual effect according to the rendering parameters, and calculating the similarity between the visual effect and a preset effect; dynamically adjusting rendering parameters according to the similarity calculation result to obtain correction information, optimizing the scene image according to the correction information, and transmitting the optimization result to the terminal; and performing targeted scene rendering according to the scene layout, judging visual effects after scene rendering, analyzing, and dynamically adjusting rendering parameters according to the visual effects, so that real-time adjustment of visual rendering is realized, and visual rendering effects are improved.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above described device embodiments are only illustrative, e.g. the division of units is only one logical function division, and there may be other divisions in actual implementation, such as: multiple units or components may be combined or may be integrated into another system, or some features may be omitted, or not performed. In addition, the various components shown or discussed may be coupled or directly coupled or communicatively coupled to each other via some interface, whether indirectly coupled or communicatively coupled to devices or units, whether electrically, mechanically, or otherwise.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units; can be located in one place or distributed to a plurality of network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present invention may be integrated in one processing unit, or each unit may be separately used as one unit, or two or more units may be integrated in one unit; the integrated units may be implemented in hardware or in hardware plus software functional units.
Those of ordinary skill in the art will appreciate that: all or part of the steps for implementing the above method embodiments may be implemented by hardware related to program instructions, and the foregoing program may be stored in a readable storage medium, where the program, when executed, performs steps including the above method embodiments; and the aforementioned storage medium includes: a mobile storage device, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk or an optical disk, or the like, which can store program codes.
Alternatively, the above-described integrated units of the present invention may be stored in a readable storage medium if implemented in the form of software functional modules and sold or used as separate products. Based on such understanding, the technical solution of the embodiments of the present invention may be embodied in essence or a part contributing to the prior art in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the methods of the embodiments of the present invention. And the aforementioned storage medium includes: a removable storage device, ROM, RAM, magnetic or optical disk, or other medium capable of storing program code.

Claims (10)

1. A visual rendering method based on an environmental art design, comprising:
acquiring a scene image, extracting image features, analyzing scene layout according to the image features, inputting the scene layout into a preset model, and generating rendering parameters;
analyzing the visual effect according to the rendering parameters, and calculating the similarity between the visual effect and a preset effect;
dynamically adjusting rendering parameters according to the similarity calculation result to obtain correction information,
and optimizing the scene image according to the correction information, and transmitting an optimization result to the terminal.
2. The visual rendering method based on the environmental art design according to claim 1, wherein the steps of obtaining a scene image, extracting image features, and analyzing a scene layout according to the image features comprise:
acquiring image characteristics, and comparing the image characteristics with preset characteristics to obtain a characteristic deviation rate;
judging whether the characteristic deviation rate is larger than a preset characteristic deviation rate threshold value or not;
if the feature deviation rate is greater than the threshold, the features are determined to be background features, and the background features are segmented and removed;
and if the feature deviation rate is not greater than the threshold, the features are determined to be layout features, and the layout features are screened and combined to generate scene layout information.
3. The visual rendering method based on the environmental art design according to claim 2, wherein the scene layout is input into a preset model to generate rendering parameters, specifically:
acquiring scene layout information, inputting the scene layout information into a preset model for training, and obtaining a training result;
judging whether the training result is converged or not, if not, continuing training the model until the training result is converged;
and if so, generating rendering parameters according to a preset model.
4. The visual rendering method based on environmental art design according to claim 3, wherein the visual effect is analyzed according to the rendering parameters, and the similarity calculation is performed between the visual effect and a preset effect, and the method specifically comprises the following steps:
acquiring rendering parameters, performing scene rendering according to the rendering parameters, and analyzing rendering effects according to scene rendering results, wherein the rendering effects comprise scene brightness rendering, scene color rendering and scene ray rendering;
and performing scene division brightness analysis, scene color analysis and scene ray analysis according to the rendering effect to obtain brightness difference, scene chromatic aberration and scene ray refraction information.
5. The visual rendering method of claim 4, wherein the visual rendering method further comprises the steps of obtaining rendering parameters, performing scene rendering according to the rendering parameters, and analyzing rendering effects according to scene rendering results, wherein the rendering effects comprise scene brightness rendering, scene color rendering and scene ray rendering, and further comprising:
acquiring scene brightness information, analyzing scene resolution according to the scene brightness information,
calculating scene definition from the scene resolution;
comparing the scene definition with a preset definition;
if the scene definition is smaller than the preset definition, dynamically adjusting the scene brightness;
if the scene definition is larger than the preset definition, analyzing the scene color under the corresponding scene brightness, analyzing the color difference, and adjusting the scene brightness in real time according to the color difference.
6. The visual rendering method based on the environmental art design according to claim 5, wherein the method for dynamically adjusting the rendering parameters according to the similarity calculation result to obtain the correction information comprises the following steps:
obtaining similarity information, and comparing the similarity information with preset similarity information to obtain a similarity deviation rate;
if the similarity deviation rate is larger than the first deviation rate threshold and smaller than the second deviation rate threshold, generating first correction information, and adjusting the rendering parameters in a first mode according to the first correction information;
if the similarity deviation rate is larger than a second deviation rate threshold value, generating second correction information, and adjusting the rendering parameters in a second mode according to the second correction information;
the first deviation ratio threshold is less than the second deviation ratio threshold.
7. A visual rendering system based on an environmental artistic design, the system comprising: the system comprises a memory and a processor, wherein the memory comprises a program of a visual rendering method based on an environmental art design, and the program of the visual rendering method based on the environmental art design realizes the following steps when being executed by the processor:
acquiring a scene image, extracting image features, analyzing scene layout according to the image features, inputting the scene layout into a preset model, and generating rendering parameters;
analyzing the visual effect according to the rendering parameters, and calculating the similarity between the visual effect and a preset effect;
dynamically adjusting rendering parameters according to the similarity calculation result to obtain correction information,
and optimizing the scene image according to the correction information, and transmitting an optimization result to the terminal.
8. The visual rendering system of claim 7, wherein the scene image is acquired, the image features are extracted, and the scene layout is analyzed based on the image features, and the method specifically comprises:
acquiring image characteristics, and comparing the image characteristics with preset characteristics to obtain a characteristic deviation rate;
judging whether the characteristic deviation rate is larger than a preset characteristic deviation rate threshold value or not;
if the feature deviation rate is greater than the threshold, the features are determined to be background features, and the background features are segmented and removed;
and if the feature deviation rate is not greater than the threshold, the features are determined to be layout features, and the layout features are screened and combined to generate scene layout information.
9. The visual rendering system based on the environmental art design of claim 8, wherein the scene layout is input into a preset model to generate rendering parameters, specifically:
acquiring scene layout information, inputting the scene layout information into a preset model for training, and obtaining a training result;
judging whether the training result is converged or not, if not, continuing training the model until the training result is converged;
and if so, generating rendering parameters according to a preset model.
10. A computer readable storage medium, characterized in that it comprises a visual rendering method program based on an environmental art design, which, when executed by a processor, implements the steps of the visual rendering method based on an environmental art design according to any one of claims 1 to 6.
CN202311512263.6A 2023-11-14 2023-11-14 Visual rendering method, system and medium based on environmental art design Pending CN117372598A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311512263.6A CN117372598A (en) 2023-11-14 2023-11-14 Visual rendering method, system and medium based on environmental art design

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311512263.6A CN117372598A (en) 2023-11-14 2023-11-14 Visual rendering method, system and medium based on environmental art design

Publications (1)

Publication Number Publication Date
CN117372598A true CN117372598A (en) 2024-01-09

Family

ID=89398401

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311512263.6A Pending CN117372598A (en) 2023-11-14 2023-11-14 Visual rendering method, system and medium based on environmental art design

Country Status (1)

Country Link
CN (1) CN117372598A (en)

Similar Documents

Publication Publication Date Title
CN111640125B (en) Aerial photography graph building detection and segmentation method and device based on Mask R-CNN
CN111968216A (en) Volume cloud shadow rendering method and device, electronic equipment and storage medium
CN114758252B (en) Image-based distributed photovoltaic roof resource segmentation and extraction method and system
CN110163213B (en) Remote sensing image segmentation method based on disparity map and multi-scale depth network model
CN110414738B (en) Crop yield prediction method and system
CN110717971B (en) Substation three-dimensional simulation system database modeling system facing power grid training service
US20230281913A1 (en) Radiance Fields for Three-Dimensional Reconstruction and Novel View Synthesis in Large-Scale Environments
CN112164142A (en) Building lighting simulation method based on smart phone
CN104408757A (en) Method and system for adding haze effect to driving scene video
CN112818925A (en) Urban building and crown identification method
CN113160062A (en) Infrared image target detection method, device, equipment and storage medium
CN111862254A (en) Cross-rendering platform based material rendering method and system
JP2024512102A (en) Image generation method, device, equipment and storage medium
Liu et al. Image edge recognition of virtual reality scene based on multi-operator dynamic weight detection
CN111832508A (en) DIE _ GA-based low-illumination target detection method
CN117372598A (en) Visual rendering method, system and medium based on environmental art design
CN111091580A (en) Stumpage image segmentation method based on improved ResNet-UNet network
CN115731560A (en) Slot line identification method and device based on deep learning, storage medium and terminal
CN112002019B (en) Method for simulating character shadow based on MR mixed reality
CN115116052A (en) Orchard litchi identification method, device, equipment and storage medium
TWM625817U (en) Image simulation system with time sequence smoothness
TWI804001B (en) Correction system for broken depth map with time sequence smoothness
WO2024111412A1 (en) Information processing apparatus, information processing method, and storage medium
CN113642395B (en) Building scene structure extraction method for city augmented reality information labeling
CN117994477B (en) Method, device, equipment and storage medium for realizing XR (X-ray) augmented reality scene

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination