CN116152426A - Method and device for drawing an image, electronic device, and storage medium

Method and device for drawing an image, electronic device, and storage medium

Info

Publication number
CN116152426A
Authority
CN
China
Prior art keywords
image
trained
reflection
attribute
processed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111389305.2A
Other languages
Chinese (zh)
Inventor
王光伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd filed Critical Beijing ByteDance Network Technology Co Ltd
Priority to CN202111389305.2A
Publication of CN116152426A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/005: General purpose rendering architectures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/06: Ray-tracing
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/08: Volume rendering

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Image Generation (AREA)

Abstract

Embodiments of the present disclosure provide a method, an apparatus, an electronic device, and a storage medium for drawing an image. The method includes: acquiring a plurality of images to be processed, wherein the illumination angle between the light source and the target object differs in each image to be processed; processing the images to be processed according to a target object attribute determination model to obtain object attribute information of the target object; and determining a target three-dimensional view corresponding to the target object based on the object attribute information. The technical solution provided by the embodiments of the present disclosure not only achieves accurate estimation of object attribute information, but also enables the object to be inversely rendered based on that information.

Description

Method and device for drawing an image, electronic device, and storage medium
Technical Field
Embodiments of the present disclosure relate to the field of computer technology, and in particular to a method, an apparatus, an electronic device, and a storage medium for drawing an image.
Background
Inverse rendering is an important research direction in graphics: using inverse rendering techniques, the geometry, material, and other properties of an object can be recovered from a single picture of the object.
However, because the object in a picture generally has a complex structure or complex characteristics, existing methods encounter many obstacles when constructing an object model through inverse rendering, and the constructed model cannot accurately reflect how the object actually appears in reality.
Disclosure of Invention
Embodiments of the present disclosure provide a method, an apparatus, an electronic device, and a storage medium for drawing an image, which not only achieve accurate estimation of object attribute information but also enable the object to be inversely rendered based on that information.
In a first aspect, embodiments of the present disclosure provide a method of drawing an image, the method comprising:
acquiring a plurality of images to be processed, wherein the illumination angle between the light source and the target object differs in each image to be processed;
processing the images to be processed according to a target object attribute determination model to obtain object attribute information of the target object;
and determining a target three-dimensional view corresponding to the target object based on the object attribute information.
In a second aspect, embodiments of the present disclosure further provide an apparatus for drawing an image, the apparatus including:
a to-be-processed image acquisition module, configured to acquire a plurality of images to be processed, wherein the illumination angle between the light source and the target object differs in each image to be processed;
an object attribute information determination module, configured to process the images to be processed according to a target object attribute determination model to obtain object attribute information of the target object;
and a target three-dimensional view determination module, configured to determine a target three-dimensional view corresponding to the target object based on the object attribute information.
In a third aspect, embodiments of the present disclosure further provide an electronic device, including:
one or more processors;
storage means for storing one or more programs,
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of drawing an image described in any embodiment of the present disclosure.
In a fourth aspect, embodiments of the present disclosure further provide a storage medium containing computer-executable instructions which, when executed by a computer processor, perform the method of drawing an image described in any embodiment of the present disclosure.
In the technical solution of the embodiments of the present disclosure, a plurality of images of the target object to be processed under different illumination angles is acquired; the images to be processed are processed according to the target object attribute determination model to obtain object attribute information of the target object; and finally a target three-dimensional view corresponding to the target object is determined based on the object attribute information. This not only achieves accurate estimation of the object attribute information but also allows the object to be inversely rendered based on that information, while ensuring that the drawn target three-dimensional view comes as close as possible to how the object actually appears in reality. A user can therefore appreciate the object's real appearance from the target three-dimensional view alone, which improves the user experience.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
Fig. 1 is a flowchart of a method for drawing an image according to a first embodiment of the disclosure;
fig. 2 is a flowchart of a method for drawing an image according to a second embodiment of the disclosure;
fig. 3 is a flowchart of a method for drawing an image according to a third embodiment of the disclosure;
fig. 4 is a network structure diagram for a method of drawing an image according to a third embodiment of the present disclosure;
fig. 5 is a block diagram of an apparatus for drawing an image according to a fourth embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of an electronic device according to a fifth embodiment of the disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are intended to be open-ended, i.e., including, but not limited to. The term "based on" is based at least in part on. The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments. Related definitions of other terms will be given in the description below.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "one", "a plurality" and "a plurality" in this disclosure are intended to be illustrative rather than limiting, and those of ordinary skill in the art will appreciate that "one or more" is intended to be understood as "one or more" unless the context clearly indicates otherwise.
Before the present technical solution is introduced, an application scenario may be described. The technical solution can be applied to any situation in which a three-dimensional view of an object needs to be determined. For example, in a live-streaming scene, an art-design scene, or a specific application, when an object made of a translucent material needs to be displayed from all angles, multiple images can be obtained by photographing the object from multiple illumination angles. Based on the present technical solution, the attribute information of the translucent object can then be determined from the captured images, and the three-dimensional view of the object can be inversely rendered.
Embodiment One
Fig. 1 is a flowchart of a method for drawing an image according to the first embodiment of the present disclosure. The method may be performed by an apparatus for drawing an image; the apparatus may be implemented in software and/or hardware, and the hardware may be an electronic device such as a mobile terminal, a PC, or a server.
As shown in fig. 1, the method of the present embodiment includes:
s110, acquiring a plurality of images to be processed.
In this embodiment, each image to be processed contains at least a specific object (i.e., the target object). To construct a three-dimensional view corresponding to the object, multiple images need to be acquired. Those skilled in the art should understand that the multiple images to be processed may reflect the shape and structure of the target object from multiple angles, or reflect them from one or more angles under different environmental conditions.
In this embodiment, the image to be processed may further include a translucent object whose three-dimensional view is to be constructed. A translucent object is one having translucent properties, such as jade, a candle, or plant leaves. In optics, transparency is the physical property of allowing light to pass through a material without scattering; at the macroscopic scale, the transmitted photons follow Snell's law. Translucency is a superset of transparency: it allows light to pass through, but the photons do not necessarily follow Snell's law and may be scattered at either of the two interfaces. Since the interaction between light and material includes both scattering and absorption, the translucent property of an object can be understood as follows: under transparency, light enters the object, is deflected, and exits; under subsurface scattering, light enters the object, bounces around and is partially absorbed inside, and then part of it exits from the surface, producing the translucent appearance.
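For reference, Snell's law mentioned above (standard optics, stated here for clarity rather than quoted from the original text) relates the angle of incidence and the angle of refraction at the interface between two media:

$$ n_1 \sin\theta_1 = n_2 \sin\theta_2 $$

where $n_1$ and $n_2$ are the refractive indices of the two media, and $\theta_1$ and $\theta_2$ are the angles measured from the surface normal. Translucent materials such as jade deviate from this relation because subsurface scattering redirects photons inside the material.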
In this embodiment, in order to accurately represent the object's real appearance when the three-dimensional view is subsequently constructed, the illumination angle between the light source and the target object differs in each image to be processed. It can be appreciated that, while the image capture angle is kept fixed, the target object in each image is illuminated from a different angle and therefore appears differently in each image. For example, aim the capture device at a piece of jade and fix the relative positions of the device and the jade; then place a flash to the left of, to the right of, and behind the jade in turn, and shoot a picture at each position. The three jade images captured under the three lighting conditions are the images to be processed.
In this embodiment, there are various ways to acquire the plurality of images to be processed. For example, specific images may be retrieved as images to be processed from a repository storing multiple images according to a preset rule; those skilled in the art will understand that the retrieved images should present the target object from multiple illumination angles. Alternatively, an image capture device may photograph the target object under the requirements of different illumination angles so that the images to be processed are obtained in real time: for example, after the target object is fixed, several capture configurations are preset at different positions, and once the device has photographed the target object under each configuration, the corresponding images to be processed are obtained. Those skilled in the art will understand that the specific acquisition method should be chosen according to the actual situation, and the embodiments of the present disclosure impose no specific limitation here.
S120, processing the images to be processed according to the target object attribute determination model to obtain object attribute information of the target object.
The target object attribute determination model may be a pre-trained deep learning model that processes at least a plurality of images to be processed and outputs corresponding results. Once the input images to be processed are determined, the model processes them and outputs the attribute information of the target object in the images. It should be noted that since multiple images to be processed are acquired, the target object attribute determination model may take the set of these images as its input and process them jointly to output the object attribute information.
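As an illustration of this joint processing, the sketch below stacks the N images to be processed along the channel dimension so they can be handled in one forward pass. It is a minimal sketch assuming a PyTorch implementation; the framework, function name, and tensor layout are assumptions for illustration, not details given in this disclosure.

```python
# Minimal sketch (assumed PyTorch implementation; names are illustrative).
import numpy as np
import torch

def prepare_model_input(images: list) -> torch.Tensor:
    """Stack N RGB images (each an H x W x 3 uint8 numpy array), captured
    under different illumination angles, into one (1, 3*N, H, W) tensor so
    the attribute determination model can process the whole set jointly."""
    tensors = [torch.from_numpy(img).float().permute(2, 0, 1) / 255.0
               for img in images]
    return torch.cat(tensors, dim=0).unsqueeze(0)  # concat along channels
```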
In this embodiment, the object attribute information comprises parameters that reflect, across multiple dimensions, how the object appears in reality, including the object's material information, shape (geometric) information, color, texture, smoothness, and transparency. Continuing the example above, once the jade images under the three lighting conditions are determined as the images to be processed, they can be input into the target object attribute determination model, which outputs the jade's material information, color, texture, transparency, and so on.
It should be noted that, after the object attribute information of the target object is obtained from the object attribute determination model, the information may be stored in a dedicated repository and labeled accordingly, so that it can be invoked directly in subsequent image processing; this avoids the waste of computing resources caused by repeatedly determining the object attribute information of the same target object.
S130, determining a target three-dimensional view corresponding to the target object based on the object attribute information.
In this embodiment, after the object attribute information of the target object is obtained through the object attribute determination model, the target three-dimensional view corresponding to the target object may be determined. The target three-dimensional view comprises projections of the three-dimensional model corresponding to the target object, observed from different viewpoint directions in three-dimensional space. On the one hand, the target three-dimensional view reflects stereoscopically how the target object appears in reality; on the other hand, based on the target three-dimensional view, the user may observe the target object from multiple directions: for example, by selecting different viewpoints, the user can observe the front view, the side view, and the top view of the target object through the projections of the three-dimensional view.
Specifically, the target three-dimensional view can be constructed with various computer software. For example, a project is created in CAD or Revit software, the determined object attribute information is imported, and the corresponding configuration items in the software are set based on this information; after the parameters are set, a rendering simulation is run on the target object to construct the target three-dimensional view corresponding to it.
In the technical solution of this embodiment, a plurality of images of the target object to be processed under different illumination angles is first acquired; the images are then processed according to the object attribute determination model to obtain the object attribute information of the target object; and finally the target three-dimensional view corresponding to the target object is determined based on that information. This achieves accurate estimation of the object attribute information, allows the object to be inversely rendered based on it, and ensures that the drawn target three-dimensional view comes as close as possible to how the object actually appears in reality, so that a user can appreciate the object's real appearance from the target three-dimensional view alone, improving the user experience.
Embodiment Two
Fig. 2 is a flowchart of a method for drawing an image according to the second embodiment of the present disclosure. On the basis of the foregoing embodiment, the target object is photographed by an image capture device under the illumination angles in a preset illumination angle set, so as to capture the reflection and refraction effects of light on the object under different lighting conditions. The images to be processed are then handled differently by type, based on the reflection attribute determination sub-model and the refraction attribute determination sub-model, to obtain multi-dimensional information on the front and back of the object, so that a target three-dimensional view constructed from this information can more accurately reflect how the object appears under an actual light source. For the specific implementation, refer to the technical solution of this embodiment. Technical terms identical or corresponding to those in the embodiment above are not repeated here.
As shown in fig. 2, the method specifically includes the following steps:
s210, sequentially irradiating the target object based on the illumination angles in the preset illumination angle set, and acquiring a to-be-processed image comprising the target object.
In order to accurately construct a three-dimensional view of a target object with translucent properties, images of the target object under multiple illumination angles are first acquired. In this embodiment, one approach is to aim the image capture device at the target object, fix the relative positions of the device and the target object, preset multiple illumination angles, illuminate the target object from each angle in turn, and collect one or more images to be processed containing the target object at each illumination angle, thereby obtaining images that reflect how the object appears under different lighting conditions. In another approach, after the target object and the light source are fixed, a circle is drawn centered on the target object, an arbitrary radius is taken as the 0-degree reference line, and multiple illumination angles are preset in the clockwise or counterclockwise direction; the image capture device then moves along the circle to each illumination angle and shoots a corresponding image to be processed on arrival. It can be understood that after the capture device travels once around the target object, it has obtained images to be processed at all the preset illumination angles. Those skilled in the art should understand that the set of these illumination angles is the preset illumination angle set. Regardless of which approach is used, after shooting is complete the captured images may be stored on a dedicated server, so that they can be retrieved and used directly when reconstructing the target object's three-dimensional view.
In practical applications, the angles in the preset illumination angle set may be set based on a specific angle interval or angle adjustment step. For example, after the image capture device is aimed at the target object and the positions of both are fixed, a circle of radius 2 m may be drawn centered on the target object, and six points on the circle at 60-degree intervals taken as flash-stand positions; the flashes at these six positions are switched on in turn, and the capture device shoots a corresponding image each time, yielding images to be processed of the target object under different illumination angles. Alternatively, with the target object as the center and a radius of 2 m, one point on the circle is taken as the flash-stand position; the flash is switched on and the capture device shoots the first image to be processed; the flash position is then moved along the arc in the clockwise or counterclockwise direction with an angle step of 20 degrees, and when the flash reaches the first adjusted position it is switched on again and the capture device shoots the second image to be processed; and so on, until all 18 images to be processed have been shot. In this embodiment, to accurately capture the reflection and refraction of light by the translucent object, the flash may face the target object at all times (i.e., the light at the center of the flash beam remains aimed directly at the target object).
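The capture geometry just described can be made concrete with a short sketch. The function below computes the flash positions on the circle for a given angle step; the radius, step values, and names are illustrative assumptions, not values prescribed by this disclosure.

```python
# Minimal sketch of the capture geometry (illustrative values and names).
import math

def flash_positions(radius_m: float = 2.0, step_deg: float = 60.0):
    """Return (angle_deg, x, y) flash positions on a circle centred on the
    target object, with the 0-degree reference line pointing at the camera."""
    count = round(360 / step_deg)  # 6 positions at 60 degrees, 18 at 20
    positions = []
    for k in range(count):
        angle = k * step_deg
        rad = math.radians(angle)
        positions.append((angle, radius_m * math.cos(rad),
                          radius_m * math.sin(rad)))
    return positions

# flash_positions(2.0, 60.0) yields the six 60-degree-spaced points of the
# first example; flash_positions(2.0, 20.0) the eighteen 20-degree-spaced
# points of the second.
```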
S220, processing the images to be processed according to the target object attribute determination model to obtain object attribute information of the target object.
When light strikes a target object made of a translucent material, both reflection and refraction of the light occur. Reflection of light refers to the phenomenon in which light propagating toward a different substance changes direction at the interface between the substances and returns into the original substance; it is divided into specular reflection and diffuse reflection. Refraction of light refers to the phenomenon in which light entering another medium obliquely changes its propagation direction, so that the rays are deflected at the boundary between the two media.
On this basis, in this embodiment, determining the object attribute information of the target object requires making explicit the reflection and refraction effects of light on the object. Correspondingly, the target object attribute determination model includes a reflection attribute determination sub-model and a refraction attribute determination sub-model, both of which may be pre-trained deep learning models. The two sub-models are described in detail below.
Optionally, the reflection-type images to be processed are processed based on the reflection attribute determination sub-model to obtain the reflection attribute associated information corresponding to the target object; the reflection attribute associated information and the refraction-type images to be processed are processed based on the refraction attribute determination sub-model to obtain the refraction attribute associated information corresponding to the target object; and the object attribute information is determined based on the reflection attribute associated information and the refraction attribute associated information.
It should be noted that the images to be processed under different illumination angles can be further divided into a reflection type and a refraction type. It can be understood that from a reflection-type image to be processed, the rays of the flash light reflected by the target object can be determined, and from a refraction-type image to be processed, the rays of the flash light refracted through the target object can be determined. Take the six images collected at 60-degree intervals above as an example. The flash positions for all six images lie on a circle centered on the target object, with an angular interval of 60 degrees between adjacent points (i.e., the multiple images to be processed correspond to different illumination angles). Connect the camera device and the target object and label the connection as line segment 0; connect each flash point and the target object and label the connections as line segments 1-6; then determine the included angle between each of the six line segments and line segment 0. When the included angle lies within [-90°, 90°], the corresponding image is a reflection-type image to be processed: for example, the included angles of line segments 1-3 are 0°, -60°, and 60°, and these images contain light reflected by the object and can at least capture the reflection on the front of the target object. When the included angle lies within [-180°, -90°) or (90°, 180°], the corresponding image is determined to be a refraction-type image to be processed: for example, the included angles of line segments 4-6 are -120°, 180°, and 120°; these images do not contain light reflected from the front of the object, but contain light refracted through it, and can at least capture the refraction on the back of the target object.
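The angle-based split just described can be expressed as a small helper. This is a sketch of the rule as stated above; the boundary handling at exactly plus or minus 90 degrees follows the [-90°, 90°] convention in the text, and the names are illustrative.

```python
# Minimal sketch of the reflection/refraction split by included angle.
def classify_by_illumination(angle_deg: float) -> str:
    """Classify an image by the included angle between the flash-to-object
    line and the camera-to-object line (line segment 0)."""
    a = ((angle_deg + 180.0) % 360.0) - 180.0  # normalise into [-180, 180)
    return "reflection" if -90.0 <= a <= 90.0 else "refraction"

angles = [0, -60, 60, -120, 180, 120]
print({a: classify_by_illumination(a) for a in angles})
# {0: 'reflection', -60: 'reflection', 60: 'reflection',
#  -120: 'refraction', 180: 'refraction', 120: 'refraction'}
```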
In practical applications, following the capture setup in the example above, images captured with the flash in the region between the target object and the camera device may be determined to be reflection-type images to be processed, and images captured with the flash behind the target object to be refraction-type images to be processed.
In this embodiment, the reflection-type images to be processed are determined and used as input; after processing by the reflection attribute determination sub-model, the reflection attribute associated information output by the model is obtained. Specifically, the reflection attribute associated information includes the reflection material, the reflection color, the normal map of the reflection-type images to be processed, and the depth value of each pixel of the target object.
The reflection material information characterizes the material corresponding to the reflected light in the image, such as its texture and smoothness. The reflection color information characterizes the color the target object presents when reflecting light. The normal map of the reflection-type image to be processed is a map in which a normal is computed at each point of the original object's uneven surface and its direction is encoded in the RGB color channels; it can be understood as another surface parallel to the original uneven one. In practice, a normal map lets a low-detail surface display the precise lighting directions and reflection effects of a high-detail one: for example, after a high-detail model is baked into a normal map, even a low-polygon model with that map applied to its normal-map channel shows the light-and-shadow distribution of the high-detail model, while the polygon count and computation required during rendering are reduced, optimizing the rendering pipeline. The depth value of each pixel at least reflects the depth in the image of the pixels that make up the target object (i.e., the distance between the camera lens and the object point corresponding to that pixel during capture); those skilled in the art will understand that these depth values can also determine the distance between the corresponding pixels and the viewpoint when the target three-dimensional view is constructed. The reflection attribute associated information above represents information on the front of the object (i.e., the surface facing the camera device).
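As a concrete illustration of the RGB encoding of normals mentioned above (a standard normal-map convention, not specific to this disclosure): each component of a unit normal in [-1, 1] is remapped to a color channel in [0, 255].

```python
# Standard normal-to-RGB remapping used by normal maps (illustrative).
def normal_to_rgb(nx: float, ny: float, nz: float) -> tuple:
    return tuple(round((c + 1.0) * 0.5 * 255) for c in (nx, ny, nz))

print(normal_to_rgb(0.0, 0.0, 1.0))  # (128, 128, 255): the flat-surface blue
```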
In this embodiment, the processing of reflection-type images to be processed by the reflection attribute determination sub-model may be implemented on an encoder-decoder framework, a deep learning model architecture. In practice, the multiple reflection-type images to be processed are first merged (concatenated) using this framework, and a corresponding latent variable is obtained through the encoding step; a latent variable is an unobservable random variable whose posterior probability can only be inferred from the collected samples. Further, once the latent variable corresponding to the reflection-type images is determined, four decoding steps are performed to obtain, respectively, the reflection material, the reflection color, the normal map of the reflection-type images to be processed, and the depth value of each pixel.
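The sketch below shows one way the described structure could look: the reflection-type images are concatenated channel-wise, encoded once into a latent variable, and decoded by four heads. It assumes PyTorch, and the layer sizes, layer choices, and class name are illustrative assumptions; the disclosure specifies only the encoder-decoder structure and the four outputs.

```python
# Minimal encoder-decoder sketch of the reflection attribute determination
# sub-model (assumed PyTorch; sizes and names are illustrative).
import torch
import torch.nn as nn

class ReflectionAttributeNet(nn.Module):
    def __init__(self, num_images: int = 3):
        super().__init__()
        # Encoder: N reflection-type images, concatenated channel-wise,
        # are encoded into a latent (hidden) variable.
        self.encoder = nn.Sequential(
            nn.Conv2d(3 * num_images, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Four decoder heads: reflection material, reflection color,
        # normal map, and per-pixel depth.
        def head(out_channels: int) -> nn.Sequential:
            return nn.Sequential(
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(64, out_channels, 4, stride=2, padding=1),
            )
        self.material, self.color = head(3), head(3)
        self.normal, self.depth = head(3), head(1)

    def forward(self, x: torch.Tensor):  # x: (B, 3*num_images, H, W)
        z = self.encoder(x)              # the latent variable
        return self.material(z), self.color(z), self.normal(z), self.depth(z)
```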
Correspondingly, after the refraction-type images to be processed are determined, they can be combined with the reflection attribute associated information as joint model input; after processing by the refraction attribute determination sub-model, the refraction attribute associated information output by the model is obtained. The refraction attribute associated information includes the refraction material of the target object, the normal map of the refraction-type images to be processed, and the depth value of each pixel.
The refraction material characterizes the material corresponding to the refracted rays in the image; the normal map of the refraction-type image to be processed and the depth value of each pixel are analogous to those of the reflection type and are not repeated here. The refraction attribute associated information above represents information on the back of the object (i.e., the side facing away from the camera device).
In this embodiment, the processing of the refraction-type images to be processed and the reflection attribute associated information by the refraction attribute determination sub-model can likewise be implemented on the encoder-decoder framework. In practice, the refraction-type images are combined with the predicted front-surface color, normal, and depth of the object, and a corresponding latent variable is obtained through one encoding step. Three decoding steps are then performed on the latent variable to obtain, respectively, the refraction material, the normal map of the refraction-type images to be processed, and the depth value of each pixel.
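A companion sketch for the refraction side, under the same assumptions as above: the refraction-type images are concatenated with the predicted front-surface color, normal, and depth, encoded once, and decoded by three heads.

```python
# Minimal sketch of the refraction attribute determination sub-model
# (assumed PyTorch; sizes and names are illustrative).
import torch
import torch.nn as nn

class RefractionAttributeNet(nn.Module):
    def __init__(self, num_images: int = 3):
        super().__init__()
        in_ch = 3 * num_images + 3 + 3 + 1  # images + front color/normal/depth
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        def head(out_channels: int) -> nn.Sequential:
            return nn.Sequential(
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(64, out_channels, 4, stride=2, padding=1),
            )
        self.material = head(3)  # refraction material
        self.normal = head(3)    # back-surface normal map
        self.depth = head(1)     # back-surface per-pixel depth

    def forward(self, images, front_color, front_normal, front_depth):
        z = self.encoder(torch.cat(
            [images, front_color, front_normal, front_depth], dim=1))
        return self.material(z), self.normal(z), self.depth(z)
```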
By processing the different types of images to be processed differently, the reflection attribute associated information of the front and the refraction attribute associated information of the back of the translucent object can both be obtained, so that the target three-dimensional view constructed subsequently can more accurately reflect how the object appears under an actual light source.
In this embodiment, after the reflection attribute associated information and the refraction attribute associated information of the target object are obtained, they are integrated into the corresponding object attribute information, which includes at least one of material information, shape information, color information, texture information, smoothness information, and transparency information.
S230, drawing the target three-dimensional view in an editor, using the material information, shape information, color information, texture information, smoothness information, and transparency information as drawing parameters.
In this embodiment, after the object attribute information of the target object is determined, it may be imported into three-dimensional animation rendering software by a pre-written script so as to draw the target three-dimensional view corresponding to the target object. Alternatively, the determined object attribute information may be used as drawing reference data from which the relevant staff draw in the software, thereby obtaining the corresponding target three-dimensional view. Those skilled in the art will understand that the specific drawing method may be selected according to the actual situation; the embodiments of the present disclosure impose no specific limitation.
Specifically, after the material information, shape information, color information, texture information, smoothness information, and transparency information of a piece of jade are obtained, a corresponding engineering project can be created in the working interface of three-dimensional animation rendering software (e.g., 3ds Max) by a pre-written script. The data associated with this multi-dimensional information is then extracted, and the corresponding parameters in the software are set from it. In the software's material editor, the material closest to the jade's actual appearance can be determined from the data corresponding to the material information; from the data corresponding to the shape information, the corners of the software-generated model can be outlined (for example, by setting the smoothing parameters of the geometry in 3ds Max) so that the model takes on the shape of the jade; further, from the data corresponding to the color, smoothness, and transparency information, the specific values of parameters such as the model's diffuse color, specular level, glossiness, reflection, refraction, and transparency can be edited in the material editor, and a corresponding texture map can be determined for the model from the data corresponding to the texture information. Finally, after the software parameters are set, the model is rendered to obtain the target three-dimensional view corresponding to the jade.
In the technical solution of this embodiment, the image capture device photographs the target object under the illumination angles in the preset illumination angle set, so as to capture the reflection and refraction effects of light on the object under different lighting conditions; the different types of images to be processed are then handled differently based on the reflection attribute determination sub-model and the refraction attribute determination sub-model, yielding multi-dimensional information on the front and back of the object, so that the target three-dimensional view constructed from this information can more accurately reflect how the object appears under an actual light source.
Embodiment Three
Fig. 3 is a flowchart of a method for drawing an image according to the third embodiment of the present disclosure. On the basis of the foregoing embodiments, a plurality of training samples is acquired, and the to-be-trained reflection attribute determination sub-model and the to-be-trained refraction attribute determination sub-model are trained on these samples, so that after training is complete the object attribute information corresponding to an object can be obtained from the models, and a target three-dimensional view closer to the object's real appearance can then be constructed. For the specific implementation, refer to the technical solution of this embodiment. Technical terms identical or corresponding to those in the embodiments above are not repeated here.
As shown in fig. 3, the method specifically includes the following steps:
s310, acquiring a plurality of training samples; and inputting a plurality of images to be trained in the current training sample into the attribute determining model of the object to be trained aiming at each training sample to obtain the actual associated attribute information corresponding to the object to be processed.
In this embodiment, to obtain the associated attribute information of the target object in the example above, the object attribute determination model needs to be trained in advance. Since the model includes the reflection attribute determination sub-model and the refraction attribute determination sub-model, in practice the two sub-models need to be trained separately. Those skilled in the art should understand that for both models the training process includes constructing a training set, training the model, tuning the model, and similar steps, which are described below.
In this embodiment, each training sample includes multiple images to be trained, which can be obtained as described in the second embodiment: the images to be trained are shot while the object to be processed is illuminated at the illumination angles in the preset illumination angle set. For a given target object, once multiple images to be trained have been acquired, the set constructed from them is the training set corresponding to that object. When the two to-be-trained sub-models are deep learning networks and the object to be processed is a piece of jade, the image capture device can be aimed at the jade and multiple images to be trained shot under the illumination angles in the preset illumination angle set; the set constructed from these images is the training set of the two sub-models.
It should be noted that when the model is later used to determine object attribute information, in order to keep the model input consistent in logic and form, the number of input images to be processed should match the number of images to be trained in each training sample during model training; correspondingly, the number of illumination angles in the preset illumination angle set should match the number of images to be trained.
In this embodiment, the to-be-trained object attribute determination model includes a to-be-trained reflection attribute determination sub-model and a to-be-trained refraction attribute determination sub-model. Those skilled in the art should understand that the to-be-trained reflection attribute determination sub-model becomes the reflection attribute determination sub-model once training is complete, and likewise for the refraction sub-model. Corresponding to the two to-be-trained sub-models, the output actual associated attribute information includes actual training reflection attribute information and actual training refraction attribute information. It should be noted that before the to-be-trained sub-models are trained, the actual training reflection and refraction attribute information they output may not faithfully reflect how the object to be processed reflects and refracts light, and a target three-dimensional view of the object cannot be accurately constructed from that output.
It should further be noted that after the multiple images to be trained in the training set are determined, they may be input together into the to-be-trained object attribute determination model to train it; for example, the six jade images under different illumination angles in the example above are all input into the to-be-trained object attribute determination model. Alternatively, each specific type of image in the training set may be input into the corresponding sub-model to train it separately: the six jade images under different illumination angles are divided into three reflection-type images to be trained and three refraction-type images to be trained; the three reflection-type images are then input into the to-be-trained reflection attribute determination sub-model, and the three refraction-type images into the to-be-trained refraction attribute determination sub-model.
Specifically, the reflection-type images to be trained in the current training sample are processed based on the to-be-trained reflection attribute determination sub-model to obtain the actual training reflection attribute information corresponding to the object to be processed; correspondingly, the refraction-type images to be trained in the current training sample, together with the actual training reflection attribute information, are processed based on the to-be-trained refraction attribute determination sub-model to obtain the actual training refraction attribute information. Continuing the jade example, after the reflection-type images of the jade are input into the to-be-trained reflection attribute determination sub-model, the model outputs the jade's reflection material, reflection color, the normal map of the reflection-type images, and the depth value of each pixel as the actual training reflection attribute information; after the refraction-type images of the jade are input into the to-be-trained refraction attribute determination sub-model, the model outputs the jade's refraction material, the normal map of the refraction-type images, and the depth value of each pixel as the actual training refraction attribute information.
S320, determining a plurality of actual drawing images based on the actual associated attribute information, and processing the actual drawing images and the images to be trained at the same illumination angle to obtain the target object attribute determination model.
In this embodiment, the actual associated attribute information is obtained by integrating the actual training reflection attribute information and the actual training refraction attribute information. It can be appreciated that, in order to correct the model parameters, the number of actual drawing images should equal the number of images to be trained in the current training sample; meanwhile, in practice a controlled-variable approach can be used to adjust the model parameters, so that the illumination angle of the object to be processed in each actual drawing image matches the illumination angle of the corresponding image to be trained.
In this embodiment, the actual drawing images include actual reflection drawing images and actual refraction drawing images. An actual reflection drawing image is drawn from the actual training reflection attribute information, and an actual refraction drawing image is drawn from the actual training refraction attribute information.
Specifically, for each reflection-type image to be trained, the target illumination angle of the light source with respect to the object to be processed in the current image is determined, and an actual reflection drawing image consistent with that target illumination angle is determined based on the actual training reflection attribute information in the actual associated attribute information. For each refraction-type image to be trained, the target illumination angle of the light source with respect to the object to be processed in the current image is determined, and an actual refraction drawing image consistent with that target illumination angle is determined based on the actual training refraction attribute information in the actual associated attribute information.
Continuing the example above, after the reflection-type images among the jade's images to be processed are determined, the target illumination angle at which the light source strikes the object in each image must also be determined; then, after the jade's actual reflection drawing images are determined from the actual training reflection attribute information, each image can be associated with its corresponding target illumination angle, for example by marking each actual reflection drawing image with an illumination angle identifier. The refraction-type jade images are handled analogously to the reflection type and are not repeated here.
Further, the actual reflection drawing image and the image to be trained at the same illumination angle are processed to determine a loss value, and the model parameters in the to-be-trained reflection attribute determination sub-model are adjusted based on the loss value and the first preset loss function corresponding to that sub-model; the actual refraction drawing image and the image to be trained at the same illumination angle are processed to determine a loss value, and the model parameters in the to-be-trained refraction attribute determination sub-model are adjusted based on the loss value and the second preset loss function corresponding to that sub-model; convergence of the first and second preset loss functions is taken as the training target, yielding the target object attribute determination model.
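One possible shape of this two-loss training step is sketched below. The renderer that re-draws images from the predicted attributes, the L1 loss choice, and the batch layout are all assumptions for illustration; the disclosure itself specifies only that re-drawn and captured images at the same illumination angle are compared and that the two preset loss functions must converge.

```python
# Minimal sketch of one training step with the two preset loss functions
# (assumed PyTorch; renderer, loss choice, and batch layout are assumptions).
import torch
import torch.nn.functional as F

def train_step(refl_net, refr_net, renderer, batch, optimizer):
    """batch: dict holding reflection/refraction training images and their
    illumination angles; renderer re-draws images from predicted attributes."""
    mat, col, nrm, dep = refl_net(batch["reflection_images"])
    r_mat, r_nrm, r_dep = refr_net(batch["refraction_images"], col, nrm, dep)

    # Re-draw at the same illumination angles and compare with the captured
    # training images (first and second preset loss functions).
    drawn_refl = renderer(mat, col, nrm, dep, batch["reflection_angles"])
    drawn_refr = renderer(r_mat, col, r_nrm, r_dep, batch["refraction_angles"])
    loss_refl = F.l1_loss(drawn_refl, batch["reflection_images"])
    loss_refr = F.l1_loss(drawn_refr, batch["refraction_images"])

    optimizer.zero_grad()
    (loss_refl + loss_refr).backward()
    optimizer.step()
    return loss_refl.item(), loss_refr.item()  # monitor both for convergence
```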
Continuing the example above, after the labeled actual reflection drawing images of the jade are obtained, the actual images to be trained of the jade under each illumination angle also need to be obtained. With the illumination angle held fixed, the actual reflection drawing image corresponding to that angle is retrieved by its identifier and processed together with the image to be trained at the same angle to obtain a loss value of the model function; the model parameters in the to-be-trained reflection attribute determination sub-model can then be adjusted based on this loss value and the first preset loss function corresponding to the sub-model, producing a corresponding adjustment result. In this embodiment, convergence of the first preset loss function, which is associated with the to-be-trained reflection attribute determination sub-model, may be taken as the training target. On this basis, it can be understood that when the first preset loss function is determined not to have converged, the adjustment result does not yet meet the requirements of the solution, and the model must continue to be trained as above at other illumination angles; when it is determined to have converged, the adjustment result meets the requirements, and the reflection attribute determination sub-model needed for subsequently determining the target three-dimensional view has been obtained.
In this embodiment, the second preset loss function is associated with the to-be-trained refraction attribute determination sub-model; the specific process of obtaining the refraction attribute determination sub-model from the actual refraction drawing images and the images to be trained is analogous to the above and is not repeated here.
The above process is described below with reference to the network structure in fig. 4. After the positions of the camera device and the target object are fixed, a circle is drawn centered on the target object with the camera-to-object distance as its radius, and the flash moves along this circle clockwise or counterclockwise. Meanwhile, the line connecting the camera device and the target object is taken as the 0-degree reference line, and six illumination angles of 0°, 60°, 120°, 180°, -60°, and -120° are preset; whenever the flash reaches one of these illumination angles on the circle, the camera device is triggered to shoot an image to be processed containing the target object, thereby obtaining multiple images to be processed under the different illumination angles corresponding to the target object.
With continued reference to fig. 4, the multiple images to be processed are classified. When the illumination angle is 0°, 60°, or -60°, the captured image reflects only how the front of the object reflects light, so these three images serve as reflection-type images to be processed; when the illumination angle is 120°, 180°, or -120°, the captured image reflects, from the back of the object, both the object's reflection of light and its refraction of light, so these three images can serve as both reflection-type and refraction-type images to be processed.
With continued reference to fig. 4, during model training the reflection-type images to be processed may be input into the to-be-trained reflection attribute determination sub-model, where they are merged based on a specific programming language and a corresponding latent variable is obtained through an encoding step; four decoding steps then yield the object's reflection material, reflection color, the normal map of the reflection-type images, and the depth value of each pixel, and a reflection drawing of the object can be obtained from this reflection attribute associated information. The resulting reflection attribute associated information is then input, together with the refraction-type images to be processed, into the to-be-trained refraction attribute determination sub-model; through a similar process, the object's refraction material, the normal map of the refraction-type images, and the depth value of each pixel are obtained, and a refraction drawing of the object can be obtained from this refraction attribute associated information. Those skilled in the art will understand that in subsequent model application, the reflection-type and refraction-type images to be processed are handled exactly as above, which is not repeated here.
In this embodiment, multiple reflection drawing images may be drawn based on the reflection attribute information and the illumination angles corresponding to the reflection images, and likewise multiple refraction drawing images may be drawn based on the refraction attribute information and the illumination angles corresponding to the refraction images.
With continued reference to fig. 4, during model training the to-be-trained reflection attribute determination sub-model and the to-be-trained refraction attribute determination sub-model can be corrected separately based on the object's reflection drawings, refraction drawings, and images to be trained. Specifically, the actual reflection drawing image or actual refraction drawing image and the image to be trained at the illumination angles of 0°, 60°, 120°, 180°, -60°, and -120° are processed respectively to determine the corresponding loss value at each illumination angle. For example, when the illumination angle is 120°, a loss value for the reflection case and a loss value for the refraction case may be determined; the parameters in the to-be-trained reflection attribute determination sub-model are then adjusted based on the first preset loss function, and the parameters in the to-be-trained refraction attribute determination sub-model based on the second preset loss function. When the loss functions converge, the corresponding to-be-trained model is judged to meet the requirements of practical application. If they do not converge, the images to be trained and the drawing images corresponding to another of the illumination angles can be processed next: for example, at -60°, the first preset loss function can be evaluated from the loss value in the reflection case at that angle together with the to-be-trained reflection attribute determination sub-model, and the second preset loss function from the loss value in the refraction case at that angle together with the to-be-trained refraction attribute determination sub-model; when both loss functions converge, the models are judged to have finished training at that illumination angle.
S330, acquiring a plurality of images to be processed.
S340, processing the image to be processed according to the target object attribute determining model to obtain object attribute information of the target object.
S350, determining a target three-dimensional view corresponding to the target object based on the object attribute information.
According to the technical scheme of this embodiment, a plurality of training samples are obtained, and the to-be-trained reflection attribute determining sub-model and the to-be-trained refraction attribute determining sub-model are trained based on these training samples, so that, after model training is completed, object attribute information corresponding to an object can be obtained based on the trained models, and a target three-dimensional view closer to the actual appearance of the object can further be built.
Example IV
Fig. 5 is a block diagram of an apparatus for drawing an image according to a fourth embodiment of the present disclosure. The apparatus may perform the method for drawing an image according to any embodiment of the present disclosure, and has the functional modules and beneficial effects corresponding to that method. As shown in fig. 5, the apparatus specifically includes: a to-be-processed image acquisition module 410, an object attribute information determination module 420, and a target three-dimensional view determination module 430.
A to-be-processed image acquisition module 410, configured to acquire a plurality of to-be-processed images; wherein the illumination angle between the light source and the target object in each image to be processed is different.
The object attribute information determining module 420 is configured to process the image to be processed according to a target object attribute determining model, so as to obtain object attribute information of the target object.
The target three-dimensional view determining module 430 is configured to determine a target three-dimensional view corresponding to the target object based on the object attribute information.
Optionally, the to-be-processed image acquisition module 410 is further configured to sequentially irradiate the target object based on the illumination angles in the preset illumination angle set, and to acquire the to-be-processed images including the target object.
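A sketch of this acquisition step, assuming hypothetical light-source and camera handles (the disclosure does not name a capture API):

PRESET_ANGLES = [0, 60, 120, 180, -60, -120]  # assumed preset illumination angle set

def acquire_images_to_process(light_source, camera):
    # Sequentially irradiate the target object at each preset angle and
    # capture one to-be-processed image per angle.
    images = []
    for angle in PRESET_ANGLES:
        light_source.set_angle(angle)    # hypothetical device call
        images.append(camera.capture())  # hypothetical device call
    return images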
On the basis of the above aspects, the target object attribute determining model includes a reflection attribute determining sub-model and a refraction attribute determining sub-model, and the object attribute information determining module 420 includes a reflection attribute related information determining unit, a refraction attribute related information determining unit, and an object attribute information determining unit.
And the reflection attribute related information determining unit is used for processing the image to be processed of the reflection type based on the reflection attribute determining sub-model to obtain the reflection attribute related information corresponding to the target object.
And the refraction attribute related information determining unit is used for processing the reflection attribute related information and the refraction type image to be processed based on the refraction attribute determining sub-model to obtain refraction attribute related information corresponding to the target object.
An object attribute information determining unit configured to determine the object attribute information based on the reflection-related attribute information and the refraction-related attribute information.
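The chaining performed by these three units can be sketched as follows; the returned dictionary layout is an assumption rather than the disclosure's data format:

def determine_object_attributes(reflection_model, refraction_model,
                                reflection_images, refraction_images):
    # Reflection type images -> reflection attribute related information.
    reflection_info = reflection_model(reflection_images)
    # The refraction sub-model takes that information plus the refraction
    # type images -> refraction attribute related information.
    refraction_info = refraction_model(refraction_images, reflection_info)
    # The object attribute information combines both kinds of information.
    return {"reflection": reflection_info, "refraction": refraction_info}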
On the basis of the above technical schemes, the reflection associated attribute information comprises the reflection material, the reflection color, the normal map of the reflection type to-be-processed image of the target object, and the depth value of each pixel point; the refraction associated attribute information comprises the refraction material of the target object, the normal map of the refraction type to-be-processed image, and the depth value of each pixel point.
On the basis of the technical schemes, the device for drawing the image further comprises a training sample acquisition module, an actual association attribute information determination module, an actual drawing image determination module and an object attribute determination model determination module.
The training sample acquisition module is used for acquiring a plurality of training samples; each training sample comprises a plurality of images to be trained, and the images to be trained are obtained by shooting an object to be processed under illumination at the illumination angles in a preset illumination angle set.
The actual association attribute information determining module is used for inputting, for each training sample, a plurality of images to be trained in the current training sample into the to-be-trained object attribute determining model, to obtain actual association attribute information corresponding to the object to be processed.
An actual drawing image determining module for determining a plurality of actual drawing images based on the actual association attribute information; the number of the actual drawing images is the same as the number of the images to be trained in the current training sample, and the illumination angle of the object to be processed in the actual drawing images is the same as the illumination angle of the images to be trained.
And the object attribute determination model determination module is used for obtaining the target object attribute determination model by processing the actual drawing image and the image to be trained of the same illumination angle.
On the basis of the technical schemes, the to-be-trained object attribute determining model comprises a to-be-trained reflection attribute determining sub-model and a to-be-trained refraction attribute determining sub-model, the actual association attribute information comprises actual training refraction attribute information and actual training reflection attribute information, and the actual association attribute information determining module comprises an actual training reflection attribute information determining unit and an actual training refraction attribute information determining unit.
The actual training reflection attribute information determining unit is used for processing the image to be trained of the reflection type in the current training sample based on the to-be-trained reflection attribute determining sub-model to obtain actual training reflection attribute information corresponding to the to-be-trained object.
The actual training refraction attribute information determining unit is used for processing the refraction type to-be-trained image in the current training sample and the actual training reflection attribute information based on the to-be-trained refraction attribute determining sub-model to obtain actual training refraction attribute information.
On the basis of the technical schemes, the actual drawing image comprises an actual reflection drawing image and an actual refraction drawing image, and the actual drawing image determining module comprises an actual reflection drawing image determining unit and an actual refraction drawing image determining unit.
The actual reflection drawing image determining unit is used for determining, for each reflection type image to be trained, the target illumination angle at which the light source irradiates the object to be processed in the current image to be trained, and for determining an actual reflection drawing image consistent with the target illumination angle based on the actual training reflection attribute information in the actual associated attribute information.
The actual refraction drawing image determining unit is used for determining, for each refraction type image to be trained, the target illumination angle at which the light source irradiates the object to be processed in the current image to be trained, and for determining an actual refraction drawing image consistent with the target illumination angle based on the actual training refraction attribute information in the actual associated attribute information.
Optionally, the actual drawing image determining module is further configured to process an actual reflection drawing image and an image to be trained of the same illumination angle, determine a loss value, and adjust model parameters in the to-be-trained reflection attribute determination sub-model based on the loss value and a first preset loss function corresponding to the to-be-trained reflection attribute determination sub-model; process an actual refraction drawing image and an image to be trained of the same illumination angle, determine a loss value, and adjust model parameters in the to-be-trained refraction attribute determination sub-model based on the loss value and a second preset loss function corresponding to the to-be-trained refraction attribute determination sub-model; and take convergence of the first preset loss function and the second preset loss function as the training target, so as to obtain the target object attribute determining model.
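One way to read this training target is a simple convergence test on each preset loss function; the window and tolerance below are assumptions:

def has_converged(loss_history, window=10, tol=1e-4):
    # Treat a preset loss function as converged once its value stops
    # changing by more than tol over the last `window` training steps.
    if len(loss_history) < window + 1:
        return False
    return abs(loss_history[-1] - loss_history[-1 - window]) < tol

Training of the two sub-models would stop when has_converged holds for both the first and the second preset loss function.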
On the basis of the above technical solutions, the object attribute information includes at least one of material information, shape information, color information, texture information, smoothness information, and transparency information.
Optionally, the target three-dimensional view determining module 430 is further configured to draw the target three-dimensional view based on the editor, using the material information, the shape information, the color information, the texture information, the smoothness information, and the transparency information as drawing parameters.
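A sketch of how module 430 might hand these attributes to an editor; the editor API below is hypothetical, since the disclosure does not name a concrete editor:

def draw_target_view(editor, attrs):
    # The six kinds of object attribute information become drawing parameters.
    return editor.draw(material=attrs["material"], shape=attrs["shape"],
                       color=attrs["color"], texture=attrs["texture"],
                       smoothness=attrs["smoothness"],
                       transparency=attrs["transparency"])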
According to the technical scheme provided by this embodiment, to-be-processed images of a target object under a plurality of different illumination angles are first obtained; the to-be-processed images are then processed according to the target object attribute determining model to obtain the object attribute information of the target object; and finally the target three-dimensional view corresponding to the target object is determined based on the object attribute information. Accurate estimation of the object attribute information is thus realized, the object can be reversely drawn based on this information, and the drawn target three-dimensional view closely matches the display effect of the object in reality, so that a user can appreciate the effect actually presented by the object from the target three-dimensional view alone, improving user experience.
The image drawing device provided by the embodiment of the disclosure can execute the image drawing method provided by any embodiment of the disclosure, and has the corresponding functional modules and beneficial effects of executing the method.
It should be noted that each unit and module included in the above apparatus are only divided according to the functional logic, but not limited to the above division, so long as the corresponding functions can be implemented; in addition, the specific names of the functional units are also only for convenience of distinguishing from each other, and are not used to limit the protection scope of the embodiments of the present disclosure.
Example five
Fig. 6 is a schematic structural diagram of an electronic device according to a fifth embodiment of the disclosure. Referring now to fig. 6, a schematic diagram of an electronic device (e.g., a terminal device or server in fig. 6) 500 suitable for use in implementing embodiments of the present disclosure is shown. The terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), and the like, and stationary terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 6 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 6, the electronic device 500 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 501, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 502 or a program loaded from a storage means 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data required for the operation of the electronic device 500 are also stored. The processing means 501, the ROM 502, and the RAM 503 are connected to each other via a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
In general, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, and the like; output devices 507 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage means 508 including, for example, a magnetic tape, a hard disk, and the like; and communication means 509. The communication means 509 may allow the electronic device 500 to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 shows an electronic device 500 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided; more or fewer means may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 509, or installed from the storage means 508, or installed from the ROM 502. When the computer program is executed by the processing means 501, the above-described functions defined in the methods of the embodiments of the present disclosure are performed.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
The electronic device provided by the embodiment of the present disclosure belongs to the same inventive concept as the method for drawing an image provided by the above embodiment, and technical details not described in detail in the present embodiment may be referred to the above embodiment, and the present embodiment has the same beneficial effects as the above embodiment.
Example six
The present disclosure provides a computer storage medium having stored thereon a computer program which, when executed by a processor, implements the method of drawing an image provided by the above embodiments.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the clients and servers may communicate using any currently known or future developed network protocol, such as the HyperText Transfer Protocol (HTTP), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to:
acquiring a plurality of images to be processed; wherein, the illumination angle between the light source and the target object in each image to be processed is different;
processing the image to be processed according to a target object attribute determining model to obtain object attribute information of the target object;
and determining a target three-dimensional view corresponding to the target object based on the object attribute information.
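Putting the three stored-program steps together, a hedged end-to-end sketch that reuses the hypothetical helpers from the earlier sketches (split_by_type and attributes_to_drawing_parameters are likewise assumptions):

def run_pipeline(light_source, camera, reflection_model, refraction_model, editor):
    # Step 1: acquire the images to be processed under different illumination angles.
    images = acquire_images_to_process(light_source, camera)
    reflection_imgs, refraction_imgs = split_by_type(images)  # assumed helper
    # Step 2: object attribute information of the target object.
    attrs = determine_object_attributes(reflection_model, refraction_model,
                                        reflection_imgs, refraction_imgs)
    # Assumed mapping from the related information to the six kinds of object
    # attribute information (material, shape, color, texture, smoothness,
    # transparency) expected by the editor.
    params = attributes_to_drawing_parameters(attrs)
    # Step 3: determine the target three-dimensional view.
    return draw_target_view(editor, params)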
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including, but not limited to, object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. The name of the unit does not in any way constitute a limitation of the unit itself, for example the first acquisition unit may also be described as "unit acquiring at least two internet protocol addresses".
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, there is provided a method of rendering an image, the method comprising:
acquiring a plurality of images to be processed; wherein, the illumination angle between the light source and the target object in each image to be processed is different;
processing the image to be processed according to a target object attribute determining model to obtain object attribute information of the target object;
and determining a target three-dimensional view corresponding to the target object based on the object attribute information.
According to one or more embodiments of the present disclosure, there is provided a method of rendering an image [ example two ], further comprising:
sequentially irradiating the target object based on the illumination angles in the preset illumination angle set, and acquiring a to-be-processed image comprising the target object.
According to one or more embodiments of the present disclosure, there is provided a method of rendering an image [ example three ], further comprising:
processing the image to be processed of the reflection type based on the reflection attribute determining sub-model to obtain reflection attribute related information corresponding to the target object;
processing the reflection attribute related information and the refraction type image to be processed based on the refraction attribute determining sub-model to obtain refraction attribute related information corresponding to the target object;
and determining the object attribute information based on the reflection attribute related information and the refraction attribute related information.
According to one or more embodiments of the present disclosure, there is provided a method of rendering an image [ example four ], further comprising:
optionally, the reflection association attribute information includes a reflection material, a reflection color, a normal map of the reflection type image to be processed of the target object, and a depth value of each pixel point; the refraction associated attribute information comprises refraction materials of the target object, a normal map of the refraction type to-be-processed image and depth values of all pixel points.
According to one or more embodiments of the present disclosure, there is provided a method of rendering an image [ example five ], further comprising:
acquiring a plurality of training samples; each training sample comprises a plurality of images to be trained, and the images to be trained are obtained by shooting an object to be processed under illumination at the illumination angles in a preset illumination angle set;
inputting a plurality of images to be trained in the current training sample into an object attribute determining model to be trained aiming at each training sample to obtain actual associated attribute information corresponding to the object to be processed;
determining a plurality of actual drawing images based on the actual association attribute information; the number of the actual drawing images is the same as the number of the images to be trained in the current training sample, and the illumination angle of the object to be processed in the actual drawing images is the same as the illumination angle of the images to be trained;
and processing the actual drawing image and the image to be trained at the same illumination angle to obtain the target object attribute determining model.
According to one or more embodiments of the present disclosure, there is provided a method of rendering an image [ example six ], further comprising:
the to-be-trained object attribute determining model comprises a to-be-trained reflection attribute determining sub-model and a to-be-trained refraction attribute determining sub-model, and the actual association attribute information comprises actual training refraction attribute information and actual training reflection attribute information.
Optionally, processing the to-be-trained image of the reflection type in the current training sample based on the to-be-trained reflection attribute determination sub-model to obtain actual training reflection attribute information corresponding to the to-be-trained object;
and processing the refraction type to-be-trained image in the current training sample and the actual training reflection attribute information based on the to-be-trained refraction attribute determination sub-model to obtain actual training refraction attribute information.
According to one or more embodiments of the present disclosure, there is provided a method of rendering an image [ example seven ], further comprising:
the actual rendering image includes an actual reflection rendering image and an actual refraction rendering image.
Optionally, for each reflection type of image to be trained, determining a target illumination angle of an object to be processed by a light source in the current image to be trained, and determining an actual reflection drawing image consistent with the target illumination angle based on actual training reflection attribute information in the actual associated attribute information;
and, for each refraction type image to be trained, determining the target illumination angle at which the light source irradiates the object to be processed in the current image to be trained, and determining an actual refraction drawing image consistent with the target illumination angle based on the actual training refraction attribute information in the actual associated attribute information.
According to one or more embodiments of the present disclosure, there is provided a method of rendering an image [ example eight ], further comprising:
optionally, processing the actual reflection drawing image and the image to be trained of the same illumination angle, determining a loss value, and adjusting model parameters in the to-be-trained reflection attribute determination sub-model based on the loss value and a first preset loss function corresponding to the to-be-trained reflection attribute determination sub-model; and
processing the actual refraction drawing image and the image to be trained of the same illumination angle, determining a loss value, and adjusting model parameters in the to-be-trained refraction attribute determination sub-model based on the loss value and a second preset loss function corresponding to the to-be-trained refraction attribute determination sub-model;
and taking convergence of the first preset loss function and the second preset loss function as the training target, so as to obtain the target object attribute determining model.
According to one or more embodiments of the present disclosure, there is provided a method of rendering an image [ example nine ], further comprising:
optionally, the object attribute information includes at least one of material information, shape information, color information, texture information, smoothness information, and transparency information.
According to one or more embodiments of the present disclosure, there is provided a method of rendering an image [ example ten ], further comprising:
optionally, drawing the target three-dimensional view based on the editor, with the material information, the shape information, the color information, the texture information, the smoothness information and the transparency information as drawing parameters.
According to one or more embodiments of the present disclosure, there is provided an apparatus for rendering an image, including:
The to-be-processed image acquisition module is used for acquiring a plurality of images to be processed; wherein the illumination angle between the light source and the target object in each image to be processed is different;
the object attribute information determining module is used for processing the image to be processed according to a target object attribute determining model to obtain object attribute information of the target object;
and the target three-dimensional view determining module is used for determining a target three-dimensional view corresponding to the target object based on the object attribute information.
The foregoing description is only of the preferred embodiments of the present disclosure and an explanation of the technical principles employed. It will be appreciated by persons skilled in the art that the scope of the disclosure referred to herein is not limited to technical solutions formed by the specific combinations of the features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, technical solutions formed by mutually substituting the above features with (but not limited to) technical features having similar functions disclosed in the present disclosure.
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.

Claims (13)

1. A method of rendering an image, comprising:
acquiring a plurality of images to be processed; wherein, the illumination angle between the light source and the target object in each image to be processed is different;
processing the image to be processed according to a target object attribute determining model to obtain object attribute information of the target object;
and determining a target three-dimensional view corresponding to the target object based on the object attribute information.
2. The method of claim 1, wherein the acquiring of the to-be-processed images of the target object at different illumination angles comprises:
sequentially irradiating the target object based on the illumination angles in the preset illumination angle set, and acquiring a to-be-processed image comprising the target object.
3. The method according to claim 1, wherein the target object attribute determining model includes a reflection attribute determining sub-model and a refraction attribute determining sub-model, and the processing the image to be processed according to the target object attribute determining model to obtain object attribute information of the target object includes:
processing the image to be processed of the reflection type based on the reflection attribute determining sub-model to obtain reflection attribute related information corresponding to the target object;
processing the reflection attribute related information and the refraction type image to be processed based on the refraction attribute determining sub-model to obtain refraction attribute related information corresponding to the target object;
and determining the object attribute information based on the reflection-associated attribute information and the refraction-associated attribute information.
4. A method according to claim 3, wherein the reflection-related attribute information includes reflection materials, reflection colors, a normal map of a reflection type image to be processed of the target object, and depth values of respective pixels; the refraction associated attribute information comprises refraction materials of the target object, a normal map of the refraction type to-be-processed image and depth values of all pixel points.
5. The method as recited in claim 1, further comprising:
acquiring a plurality of training samples; each training sample comprises a plurality of images to be trained, and the images to be trained are obtained by shooting an object to be processed under illumination at the illumination angles in a preset illumination angle set;
inputting a plurality of images to be trained in the current training sample into an object attribute determining model to be trained aiming at each training sample to obtain actual associated attribute information corresponding to the object to be processed;
determining a plurality of actual drawing images based on the actual association attribute information; the number of the actual drawing images is the same as the number of the images to be trained in the current training sample, and the illumination angle of the object to be processed in the actual drawing images is the same as the illumination angle of the images to be trained;
and processing the actual drawing image and the image to be trained at the same illumination angle to obtain the target object attribute determining model.
6. The method according to claim 5, wherein the to-be-trained object attribute determining model includes a to-be-trained reflection attribute determining sub-model and a to-be-trained refraction attribute determining sub-model, the actual correlation attribute information includes actual training refraction attribute information and actual training reflection attribute information, the inputting the plurality of to-be-trained images in the current training sample into the to-be-trained object attribute determining model, and obtaining the actual correlation attribute information corresponding to the to-be-processed object includes:
processing the to-be-trained image of the reflection type in the current training sample based on the to-be-trained reflection attribute determining sub-model to obtain actual training reflection attribute information corresponding to the to-be-trained object;
and processing the refraction type to-be-trained image in the current training sample and the actual training reflection attribute information based on the to-be-trained refraction attribute determination sub-model to obtain actual training refraction attribute information.
7. The method of claim 6, wherein the actual rendering image comprises an actual reflected rendering image and an actual refracted rendering image, the determining a plurality of actual rendering images based on the actual associated attribute information comprising:
aiming at each reflection type image to be trained, determining the target illumination angle at which the light source irradiates the object to be processed in the current image to be trained, and determining an actual reflection drawing image consistent with the target illumination angle based on the actual training reflection attribute information in the actual associated attribute information;
and aiming at each refraction type image to be trained, determining the target illumination angle at which the light source irradiates the object to be processed in the current image to be trained, and determining an actual refraction drawing image consistent with the target illumination angle based on the actual training refraction attribute information in the actual associated attribute information.
8. The method according to claim 7, wherein the obtaining the target object attribute determining model by processing the actual drawing image and the image to be trained at the same illumination angle includes:
processing an actual reflection drawing image and an image to be trained of the same illumination angle, determining a loss value, and adjusting model parameters in the to-be-trained reflection attribute determination sub-model based on the loss value and a first preset loss function corresponding to the to-be-trained reflection attribute determination sub-model; and
processing an actual refraction drawing image and an image to be trained of the same illumination angle, determining a loss value, and adjusting model parameters in the to-be-trained refraction attribute determination sub-model based on the loss value and a second preset loss function corresponding to the to-be-trained refraction attribute determination sub-model;
and taking convergence of the first preset loss function and the second preset loss function as the training target, so as to obtain the target object attribute determining model.
9. The method of claim 1, wherein the object attribute information comprises at least one of material information, shape information, color information, texture information, smoothness information, and transparency information.
10. The method of claim 9, wherein the determining a target three-dimensional view corresponding to the target object based on the object attribute information comprises:
and drawing the target three-dimensional view based on the editor by taking the material information, the shape information, the color information, the texture information, the smoothness information and the transparency information as drawing parameters.
11. An apparatus for rendering an image, comprising:
the to-be-processed image acquisition module is used for acquiring a plurality of images to be processed; wherein, the illumination angle between the light source and the target object in each image to be processed is different;
the object attribute information determining module is used for processing the image to be processed according to a target object attribute determining model to obtain object attribute information of the target object;
and the target three-dimensional view determining module is used for determining a target three-dimensional view corresponding to the target object based on the object attribute information.
12. An electronic device, the electronic device comprising:
one or more processors;
storage means for storing one or more programs,
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of rendering an image as recited in any of claims 1-10.
13. A storage medium containing computer executable instructions for performing the method of rendering an image as claimed in any one of claims 1 to 10 when executed by a computer processor.
CN202111389305.2A 2021-11-22 2021-11-22 Method and device for drawing image, electronic equipment and storage medium Pending CN116152426A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111389305.2A CN116152426A (en) 2021-11-22 2021-11-22 Method and device for drawing image, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116152426A (en) 2023-05-23

Family

ID=86372300

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111389305.2A Pending CN116152426A (en) 2021-11-22 2021-11-22 Method and device for drawing image, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116152426A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination