WO2019192746A1 - Image rendering from texture-less cad models - Google Patents

Image rendering from texture-less CAD models

Info

Publication number
WO2019192746A1
Authority
WO
WIPO (PCT)
Prior art keywords
normal map
augmentation
texture
computer
pipeline
Application number
PCT/EP2018/082794
Other languages
French (fr)
Inventor
Benjamin PLANCHE
Sergey Zakharov
Andreas Hutter
Slobodan Ilic
Ziyan Wu
Original Assignee
Siemens Aktiengesellschaft
Application filed by Siemens Aktiengesellschaft
Publication of WO2019192746A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/04 Texture mapping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/50 Lighting effects
    • G06T 15/506 Illumination models


Abstract

The invention relates to a method for rendering an augmented image (213) of an object from a texture-less CAD model (211) of the object, the method comprising the steps of: - generating a normal map (212) of the object from the CAD model (211) for a specific viewpoint, and - augmenting the normal map (212) by applying one or more actions defined in an augmentation pipeline (A) to the normal map (212). Furthermore, the invention relates to a renderer for rendering an augmented image (213) of an object from a texture-less CAD model (211), a computer program product and a computer-readable storage medium.

Description

Specification
Image rendering from texture-less CAD models
The invention relates to a concept for rendering an augmented image of an object from a texture-less CAD model of the object. The invention is in particular relevant for the field of object recognition.
Josh Tobin, Rachel Fong, Alex Ray, Jonas Schneider, Wojciech Zaremba and Pieter Abbeel presented the concept of "domain randomization" in their scientific article "Domain Randomization for Transferring Deep Neural Networks from Simulation to the Real World", first published on arXiv with the ID 1703.06907v1. Herein, it is proposed to generate a large amount of synthetic, augmented color images based on texture-less 3D CAD models. The texture and further features of the output image, such as the background or shadows, are chosen entirely at random. Their concept is based on the hypothesis that, if the variability in simulation of the augmented images is large enough, a recognition system trained in simulation would generalize to the real world with no additional training necessary. In the case of texture, one could say that the recognition system becomes "texture-blind" and focuses on other aspects in the image for recognizing the object to be recognized.
According to the cited article of Tobin et al., this concept has been proven to work quite well in certain tests. A drawback of the concept is, however, that the generation of the cluttered color images is carried out by the graphics processing unit (GPU) of the computer. In other words, Tobin et al. use a 3D engine to directly render random color images from the 3D CAD models, especially randomizing the object's texture.
The operation of using a 3D engine to render color images from 3D models is heavy and must run on the GPU for efficiency. As the training of neural networks is also generally done on GPU(s), this means that either the data generation must be done beforehand (which means generating and storing a finite dataset, instead of generating new data every training iteration), or that network training and data generation must compete for the GPU resource, which is far from optimal for limited GPU setups.
Thus, there is a desire for a concept that allows a large number of augmented images to be generated from CAD models without burdening the GPU of the system.
This problem is solved by the subject-matters of the independent claims. Advantageous details or specifications are disclosed in the dependent claims.
Accordingly, there is provided a method for rendering an augmented image of an object from a texture-less CAD model of the object, wherein the method comprises:
- generating a normal map of the object from the CAD model for a specific viewpoint, and
- augmenting the normal map by applying one or more actions defined in an augmentation pipeline to the normal map.
"Rendering", which is also referred to as image synthesis, is the automatic process of generating an image from a two-dimensional (2D) or three-dimensional (3D) model by means of computer programs.
An "augmented" image of an object is understood as an image to which some kind of disturbance, in other words nuisance, has been added. The type of augmentation, in other words the "augmentation modality", comprises, for instance, adding a background behind the object which is depicted in the image, adding noise in the illustration of the object and/or the background, shadows, surface texture, blurring, rotation, translation, flipping or resizing of the object, partial occlusions of the object and changing the color of the object. In general, a normal map or a depth map which is directly obtained from the CAD model and has not been augmented is also referred to as a "clean" normal or depth map, respectively.
A "CAD model" (computer-aided design model), which is sometimes also referred to as a "CADD model" (computer-aided design and drafting model), is understood as a design created by means of computer systems, including workstations. CAD output is often in the form of electronic files for print, machining, or other manufacturing operations. CAD models may be two-dimensional (2D) or three-dimensional (3D).
A "texture-less CAD model" is understood as a CAD model which only contains pure semantic and geometrical information, but no information regarding e.g. its appearance (color, texture, material type), scene (position of light sources, cameras, peripheral objects) or animation (how the model moves, if applicable). It will be one of the tasks of the augmentation pipeline to add random appearance or scene features to the clean normal map of the texture-less CAD model.
A normal map is a representation of the surface normals of a 3D model from a particular viewpoint, stored in a two-dimensional colored image, also referred to as an RGB (i.e. red/green/blue) image. Herein, each color corresponds to the orientation of the surface normal. A "normal map" therefore creates the impression of a three-dimensional image, but only occupies little storage space. Normal mapping, sometimes also referred to as "Dot3 bump mapping", is a known technique from 3D computer graphics. It is primarily used in video games.
"Augmenting" refers to the transformation of a normal map into a color image by means of adding certain types of nuisances (or "clutter") to the clean normal map. Note that by transforming a normal map into a color image, information is lost on the one hand, as the color image no longer contains a precise 3D representation of the object surface. On the other hand, external information is added to the image, due to e.g. lighting conditions and texture information. If, however, the added information is randomly defined, it is more noise than semantic information.
A key aspect of the present invention is to introduce the step of generating normal maps when rendering color images from 3D models. In other words, the process of generating color images is made lightweight by separating the 3D projection step (i.e. computing the projected geometry as seen from target viewpoints), which yields the normal maps, from the augmentation step, namely computing the color representation. The introduction of normal maps as an intermediary dataset has the advantage that the GPU-intensive step of directly rendering augmented images from 3D models is substituted by two separate steps, wherein the second step, i.e. the conversion from normal maps into augmented images, can be carried out by the CPU. In particular, the first step of converting the geometric information of the CAD model into a normal map can in principle be performed beforehand, while the step of augmenting the clean normal map can be done online, for instance in parallel with training a recognition unit which uses the augmented images as input data.
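As the normal maps are stored as ordinary RGB images, the conventional encoding of surface normals into color channels can be sketched in a few lines of Python. This is a minimal illustrative sketch; the function name and the assumption of unit-length normals are not prescribed by the disclosure.

    import numpy as np

    def encode_normals_to_rgb(normals: np.ndarray) -> np.ndarray:
        """Map unit surface normals (H, W, 3) in [-1, 1] to an 8-bit RGB image.

        Each axis of the normal vector is shifted and scaled into one
        color channel, which is the conventional normal-map encoding.
        """
        rgb = (normals + 1.0) * 0.5 * 255.0  # [-1, 1] -> [0, 255]
        return rgb.clip(0, 255).astype(np.uint8)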
Advantageously, normal maps for different perspectives are generated and further processed. This can e.g. be realized by creating a virtual hemisphere around and above the CAD model, thus defining a desired number of viewpoints. For each viewpoint, i.e. for each perspective, a significant number of augmented images may then be created. By this procedure, a large number of images depicting the same object from different viewpoints with different types of augmentation is obtained.
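A minimal Python sketch of such a viewpoint definition is given below. The grid resolution and elevation range are illustrative assumptions; the method only requires a desired number of viewpoints on a virtual hemisphere around and above the model.

    import numpy as np

    def hemisphere_viewpoints(n_azimuth: int = 12, n_elevation: int = 4,
                              radius: float = 1.0) -> np.ndarray:
        """Sample camera positions on a virtual hemisphere above the model.

        Returns an (n_azimuth * n_elevation, 3) array of positions; each
        camera is assumed to look at the origin, where the CAD model sits.
        """
        azimuths = np.linspace(0.0, 2.0 * np.pi, n_azimuth, endpoint=False)
        elevations = np.linspace(np.pi / 8.0, np.pi / 2.0, n_elevation)
        views = [radius * np.array([np.cos(e) * np.cos(a),
                                    np.cos(e) * np.sin(a),
                                    np.sin(e)])
                 for e in elevations for a in azimuths]
        return np.stack(views)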
In an embodiment of the invention, the augmentation pipeline is built in a modular manner such that the actions to be applied to the normal map can be individually chosen. This means that different kinds of augmentation of the normal map can be defined, which will be described in more detail in the following. A specific weight can be attributed to each kind, so that by processing the normal map through the pipeline the normal maps are modified according to the weights given to the individual augmentation steps.
An augmentation step which is advantageously implemented in and comprised by the augmentation pipeline is the shading of the object by varying levels of darkness, in particular by simulating directional and/or ambient light sources.
By multiplying the normal maps with a random directional light vector and adding a random ambient light intensity value to the result, one can easily simulate an infinite variety of simple lighting conditions. For each foreground normal map, this operation yields a lightness map, as used e.g. in the HSL (hue, saturation, lightness) color format.
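This lighting operation can be sketched as follows. The intensity range of the ambient term and the clamping of the dot product to non-negative values are illustrative assumptions.

    import numpy as np

    def random_lightness_map(normals: np.ndarray,
                             rng: np.random.Generator) -> np.ndarray:
        """Simulate a random simple lighting condition on a clean normal map.

        normals: (H, W, 3) unit surface normals of the foreground.
        Returns an (H, W) lightness map in [0, 1], as used in the HSL format.
        """
        light_dir = rng.normal(size=3)
        light_dir /= np.linalg.norm(light_dir)   # random directional light
        ambient = rng.uniform(0.1, 0.4)          # random ambient intensity
        directional = np.clip(normals @ light_dir, 0.0, None)
        return np.clip(ambient + (1.0 - ambient) * directional, 0.0, 1.0)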
Another augmentation step which is advantageously implemented in and comprised by the augmentation pipeline is adding a random texture to the object.
To compensate for the presumed lack of texture information in the CAD models, random object textures can be generated using the same noise-generating functions as those used for the background. Random noise maps are generated for the hue and saturation channels of the foreground. Hue and saturation noise maps may be generated separately, using different parameters, to add further complexity to the resulting textures.
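A possible sketch of this texturing step is shown below. The noise helper is a deliberately simple stand-in (nearest-neighbor upsampling of a coarse random grid), since the disclosure does not prescribe a particular noise function; the cell sizes are illustrative.

    import numpy as np

    def value_noise(shape, cell: int, rng: np.random.Generator) -> np.ndarray:
        """Cheap smooth noise: a coarse random grid upsampled by repetition."""
        h, w = shape
        grid = rng.random((h // cell + 2, w // cell + 2))
        return np.kron(grid, np.ones((cell, cell)))[:h, :w]

    def random_object_texture(mask: np.ndarray, rng: np.random.Generator):
        """Random hue and saturation maps for the foreground, generated
        separately with different parameters, as described above."""
        hue = value_noise(mask.shape, cell=16, rng=rng)
        saturation = value_noise(mask.shape, cell=4, rng=rng)
        return hue * mask, saturation * mask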
Another augmentation step which is advantageously implemented in and comprised by the augmentation pipeline is color domain transformation, in particular from RGB to HSL or vice versa.
This means that a color image which is obtained by the augmentation pipeline can also be converted from one color domain to another. It may, for instance, be beneficial to convert RGB images into HSL images in order to have semantically more meaningful channels and facilitate the augmentation and training. This operation can either be done inline or offline, wherein offline implies that the converted dataset is saved before training.
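For illustration, such a conversion could be performed with Python's standard colorsys module; note that colorsys works per pixel and returns channels in hue/lightness/saturation order, so a vectorized library routine would be preferable for large datasets.

    import colorsys
    import numpy as np

    def rgb_image_to_hls(image: np.ndarray) -> np.ndarray:
        """Convert an (H, W, 3) float RGB image in [0, 1] to HLS."""
        h, w, _ = image.shape
        flat = image.reshape(-1, 3)
        hls = np.array([colorsys.rgb_to_hls(r, g, b) for r, g, b in flat])
        return hls.reshape(h, w, 3)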
Another augmentation step which is advantageously implemented in and comprised by the augmentation pipeline is modifying the size and/or position of the object, in particular by translating, rotating, flipping and/or resizing of the object.
Translation of the object comprises x- and/or y-translations. This augmentation step is useful as in real-life color images the object to be recognized is not always perfectly centered.
Rotation comprises a twist (i.e. turn) of the object about a certain angle.
Flipping of the object comprises mirroring the object, e.g. from left to right or from top to bottom.
Resizing of the object comprises enlarging or shrinking the object while maintaining its relative dimensions.
All these operations may be referred to as linear transformations. Additionally or alternatively, the object may also be distorted, which signifies that the x-extension of the object is modified differently from the y-extension.
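The following sketch combines these transformations on an (H, W, C) image; all probabilities and parameter ranges are illustrative assumptions and not values taken from the disclosure.

    import numpy as np
    from scipy import ndimage

    def random_geometric_augmentation(image: np.ndarray,
                                      rng: np.random.Generator) -> np.ndarray:
        out = image
        if rng.random() < 0.5:                    # x/y translation
            dy, dx = rng.integers(-20, 21, size=2)
            out = np.roll(out, shift=(dy, dx), axis=(0, 1))
        if rng.random() < 0.5:                    # rotation about an angle
            out = ndimage.rotate(out, angle=rng.uniform(-30, 30),
                                 reshape=False, mode="nearest")
        if rng.random() < 0.5:                    # horizontal flip
            out = out[:, ::-1]
        if rng.random() < 0.5:                    # resize (changes patch size)
            z = rng.uniform(0.8, 1.2)
            out = ndimage.zoom(out, zoom=(z, z, 1), order=1)
        return out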
Another important augmentation step which is advantageously implemented in and comprised by the augmentation pipeline is adding a background to the image.
The background may comprise random colors and patterns. The background is also referred to as "noise" or "background noise". It may include fractal Perlin noise, cellular noise and white noise. The noise patterns may be generated using a vast frequency range, further increasing the number of possible background variations.
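One simple way to obtain such multi-frequency noise is sketched below. It is a value-noise stand-in for the Perlin-style noise named above, since the disclosure does not prescribe a particular implementation; the octave count and amplitude falloff are illustrative.

    import numpy as np

    def fractal_noise_background(shape, octaves: int = 5,
                                 rng: np.random.Generator = None) -> np.ndarray:
        """Sum noise layers whose frequency doubles and amplitude halves
        per octave, covering a vast frequency range."""
        rng = rng or np.random.default_rng()
        h, w = shape
        noise = np.zeros(shape)
        amplitude = 1.0
        for octave in range(octaves):
            cells = 2 ** (octave + 2)             # frequency doubles per octave
            grid = rng.random((cells, cells))
            reps = (h // cells + 1, w // cells + 1)
            noise += amplitude * np.kron(grid, np.ones(reps))[:h, :w]
            amplitude *= 0.5                      # amplitude halves per octave
        return noise / noise.max()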
The background may also include real images from the target domain or from any public image dataset, which the recognition system using the augmented images needs to distinguish from the object to be recognized.
Another augmentation step which is advantageously implemented in and comprised by the augmentation pipeline is blurring, in particular uniform, median or Gaussian blurring, of the object.
To reproduce possible motion blur or unfocused images, simple blur operations (e.g. Gaussian, uniform or median blur) can be applied with variable intensity.
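A sketch of this blurring step on a single-channel patch, using standard SciPy filters; the filter choice matches the operations named above, while the intensity ranges are illustrative assumptions.

    import numpy as np
    from scipy import ndimage

    def random_blur(patch: np.ndarray, rng: np.random.Generator) -> np.ndarray:
        kind = rng.choice(["gaussian", "uniform", "median"])
        if kind == "gaussian":
            return ndimage.gaussian_filter(patch, sigma=rng.uniform(0.5, 3.0))
        if kind == "uniform":
            return ndimage.uniform_filter(patch, size=int(rng.integers(2, 8)))
        return ndimage.median_filter(patch, size=int(rng.integers(2, 8)))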
Yet another augmentation step which is advantageously implemented in and comprised by the augmentation pipeline is adding a partial occlusion of the object.
Occlusions are introduced to serve two different purposes: the first is to teach a network to reconstruct the parts of the object that are partially occluded. The second purpose is to enforce invariance to additional objects within the patch, i.e. to ignore them, in other words to treat them as background. Occlusion objects are generated by walking around a circle, taking random angular steps and random radii at each step. Subsequently, the generated polygons are filled with arbitrary depth values and painted on top of the patch.
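The polygon construction described above can be sketched as follows for a single-channel depth patch; the radius and angular-step ranges are illustrative assumptions.

    import numpy as np
    from PIL import Image, ImageDraw

    def add_random_occlusion(patch: np.ndarray,
                             rng: np.random.Generator) -> np.ndarray:
        """Paint one random occluding polygon on top of an (H, W) patch."""
        h, w = patch.shape
        cx, cy = int(rng.integers(0, w)), int(rng.integers(0, h))
        points, angle = [], 0.0
        while angle < 2.0 * np.pi:               # walk around a circle
            radius = rng.uniform(0.05, 0.25) * min(h, w)
            points.append((cx + radius * np.cos(angle),
                           cy + radius * np.sin(angle)))
            angle += rng.uniform(0.3, 1.2)       # random angular step
        mask_img = Image.new("L", (w, h), 0)
        ImageDraw.Draw(mask_img).polygon(points, fill=255)
        mask = np.asarray(mask_img, dtype=bool)
        out = patch.copy()
        out[mask] = rng.uniform(patch.min(), patch.max())  # arbitrary depth
        return out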
The invention is also directed towards a renderer for rendering an augmented image of an object from a texture-less CAD model of the object. The renderer comprises:
- a unit for generating a normal map of the object from the CAD model for a specific viewpoint, and
- an augmentation pipeline for augmenting the normal map by applying one or more predefined actions to the normal map.
The unit for generating the normal maps from the CAD model may be a conventional renderer engine running on the GPU of the computer. The augmentation pipeline may be designed in a modular manner, comprising one or several of the properties as described in the context of the inventive method above.
Furthermore, the invention is directed to a computer program product comprising instructions which, when the program is executed by a computer, cause the computer to carry out the steps of the method described above.
Lastly, the invention is directed to a computer-readable storage medium comprising instructions which, when executed by a computer, cause the computer to carry out the steps of the method as described above.
Embodiments of the invention are now described, by way of example only, with the help of the accompanying drawings, of which:
Figure 1 shows a rendering process according to the prior art;
Figure 2 shows a rendering process using an augmentation pipeline according to the invention; and
Figure 3 shows the rendering process of Figure 2, highlighting the modular character of the augmentation pipeline.
Figure 1 illustrates a method for rendering an augmented color image of an object from a texture-less CAD model as known in the prior art. A texture-less CAD model 111 is directly rendered, i.e. converted, to a color image 112, wherein the color image 112 comprises some kind of nuisance, or "clutter", such as background noise, blurring, etc. The CAD model 111 comprises geometrical 3D (three-dimensional) information about the object, but no information about e.g. the texture or material type of the object.
The rendering process is executed on the graphics processing unit (GPU) of the computer. It consumes a relatively large share of the GPU's capacity, i.e. it is a relatively heavy, GPU-intensive process.
The obtained augmented images need to be stored somewhere before they can be used, for instance, as synthetic training data of a recognition system.
In contrast, the rendering process according to the present invention comprises the generation and use of intermediary normal maps 212. These normal maps 212 are "clean" normal maps obtained from texture-less CAD models 211 of the object. The normal maps 212 are used as input to an augmentation pipeline A, which converts the clean normal maps 212 into augmented images 213.
The advantage of the inventive rendering process is that it can be executed on the central processing unit (CPU) instead of the GPU. It is a lightweight process and can be carried out in parallel to other tasks of the computer. In particular, it can be carried out in parallel to e.g. the training of a recognition unit. The training data of the recognition unit, which are in this case the augmented images obtained through the rendering process, can even be generated "in situ", in other words "online". This means that they need not be generated and stored beforehand. Thus, a quasi-infinite amount of training data can be generated and used for training a recognition unit.
Figure 3 shows an advantageous embodiment of the invention, namely the use of a modular augmentation pipeline A. The modular augmentation pipeline A comprises n pipeline modules A₁, A₂, A₃, ..., Aₙ. Each pipeline module Aᵢ corresponds to one augmentation step, for example A₁ the inclusion of simple directional or ambient lighting, A₂ performing random texturing of the object, A₃ resizing of the object, etc.
A weight is attributed to each pipeline module Aᵢ. If the weight is zero, the augmentation step is not performed; otherwise it is performed with a certain probability and a certain intensity.
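One plausible reading of this weighting scheme, namely interpreting a non-zero weight as the probability of applying the corresponding module, is sketched below; the class name and module list are illustrative, and the handling of the intensity is left to the individual modules.

    import numpy as np

    class AugmentationPipeline:
        """Modular pipeline A: an ordered list of (module, weight) pairs."""

        def __init__(self, modules, seed=None):
            self.modules = modules                 # [(callable, weight), ...]
            self.rng = np.random.default_rng(seed)

        def __call__(self, normal_map: np.ndarray) -> np.ndarray:
            image = normal_map
            for augment, weight in self.modules:
                # a weight of zero disables the module entirely; otherwise
                # the module fires with the given probability
                if weight > 0.0 and self.rng.random() < weight:
                    image = augment(image, self.rng)
            return image

    # e.g. AugmentationPipeline([(add_texture, 0.8), (random_blur, 0.3)])
    # with hypothetical module functions taking (image, rng)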

Claims

Claims
1. Method for rendering an augmented image (213) of an object from a texture-less CAD model (211) of the object, the method comprising the steps of:
- generating a normal map (212) of the object from the CAD model (211) for a specific viewpoint, and
- augmenting the normal map (212) by applying one or more actions defined in an augmentation pipeline (A) to the normal map (212).
2. Method according to claim 1,
wherein the augmentation pipeline (A) is built in a modular manner such that the actions to be applied to the normal map (212) can be individually chosen.
3. Method according to one of the preceding claims,
wherein the method is carried out on a central processing unit (CPU) of a computer.
4. Method according to one of the preceding claims,
wherein the augmentation pipeline (A) comprises shading of the object by varying levels of darkness, in particular by simulating directional and/or ambient light sources.
5. Method according to one of the preceding claims,
wherein the augmentation pipeline (A) comprises adding a random texture to the object.
6. Method according to claim 5,
wherein texturing comprises application of a noise map of the hue and saturation channel to hue and saturation values of the object.
7. Method according to one of the preceding claims,
wherein the augmentation pipeline (A) comprises color domain transformation, in particular from RGB to HSL or vice versa.
8. Method according to one of the preceding claims,
wherein the augmentation pipeline (A) comprises modifying the size and/or position of the object, in particular by translating, rotating, flipping and/or resizing of the object.
9. Method according to one of the preceding claims,
wherein the augmentation pipeline (A) comprises adding a background to the image of the object.
10. Method according to one of the preceding claims,
wherein the augmentation pipeline (A) comprises blurring, in particular uniform, median or Gaussian blurring, of the object.
11. Method according to one of the preceding claims,
wherein the augmentation pipeline (A) comprises the partial occlusion of the object.
12. Renderer for rendering an augmented image (213) of an object from a texture-less CAD model (211) of the object, the renderer comprising:
- a unit for generating a normal map (212) of the object from the CAD model (211) for a specific viewpoint, and
- an augmentation pipeline (A) for augmenting the normal map (212) by applying one or more predefined actions to the normal map (212).
13. A computer program product comprising instructions which, when the program is executed by a computer, cause the computer to carry out the steps of the method according to one of the claims 1 to 11.
14. A computer-readable storage medium comprising instructions which, when executed by a computer, cause the computer to carry out the steps of the method according to one of the claims 1 to 11.
PCT/EP2018/082794 2018-04-06 2018-11-28 Image rendering from texture-less cad models WO2019192746A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201862653735P 2018-04-06 2018-04-06
US62/653,735 2018-04-06

Publications (1)

Publication Number Publication Date
WO2019192746A1 true WO2019192746A1 (en) 2019-10-10

Family

ID=64661298

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2018/082794 WO2019192746A1 (en) 2018-04-06 2018-11-28 Image rendering from texture-less cad models

Country Status (1)

Country Link
WO (1) WO2019192746A1 (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100013842A1 (en) * 2008-07-16 2010-01-21 Google Inc. Web-based graphics rendering system
US20160155261A1 (en) * 2014-11-26 2016-06-02 Bevelity LLC Rendering and Lightmap Calculation Methods
EP3273411A1 (en) * 2016-07-19 2018-01-24 The Boeing Company Synthetic geotagging for computer-generated images

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
DAVID KOLLER ET AL: "Protected interactive 3D graphics via remote rendering", 1 August 2004 (2004-08-01), pages 695-703, XP058318444, DOI: 10.1145/1186562.1015782 *
JOSH TOBIN ET AL: "Domain Randomization for Transferring Deep Neural Networks from Simulation to the Real World", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 20 March 2017 (2017-03-20), XP080758342, DOI: 10.1109/IROS.2017.8202133 *
LARSON LUKE ET AL: "Creating 3D avatars from artistic drawing for VR and games applications", 2016 FUTURE TECHNOLOGIES CONFERENCE (FTC), IEEE, 6 December 2016 (2016-12-06), pages 1094 - 1099, XP033044544, DOI: 10.1109/FTC.2016.7821739 *
MICHAEL GUTHE ET AL: "Efficient NURBS rendering using view-dependent LOD and normal maps", 1 January 2003 (2003-01-01), XP055548614, Retrieved from the Internet <URL:http://wscg.zcu.cz/wscg2003/Papers_2003/B13.pdf> *
TARINI M ET AL: "REAL TIME, ACCURATE, MULTI-FEATURED RENDERING OF BUMP MAPPED SURFACES", COMPUTER GRAPHICS FORUM, WILEY-BLACKWELL PUBLISHING LTD, GB, vol. 19, no. 3, 1 January 2000 (2000-01-01), pages C119 - C130,527, XP000981235, ISSN: 0167-7055, DOI: 10.1111/1467-8659.00404 *


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 18815556; Country of ref document: EP; Kind code of ref document: A1
DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101)
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 18815556; Country of ref document: EP; Kind code of ref document: A1