WO2015102014A1 - Texturing of 3d-models using photographs and/or video for use in user-controlled interactions implementation - Google Patents
Texturing of 3d-models using photographs and/or video for use in user-controlled interactions implementation
- Publication number
- WO2015102014A1 (PCT/IN2014/000177)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- model
- real
- photographs
- texture
- layout
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/04—Texture mapping
Definitions
- the present invention relates to field of texturing in three dimensional (3D) computer graphics, particularly texturing of a 3D-model of a real object using
- photographs and/or video of the real object for use in user-controlled interactions implementation.
- A 3D computer graphics model is a better option for representing a real product; however, existing 3D computer graphics models rendered in real time lack realism and look unreal or artificial due to the artificial-looking texture on the model (hereinafter referred to as a 3D model). Even 3D models generated by non-real-time rendering, such as for animation movies, also lack realism or real texture. Efforts have been made to use a 3D model to represent a real car in some implementations, where electronic systems display 3D models of the car with pre-defined and/or very limited interaction possibilities available to users.
- 3D models in such systems still look cartoonish or artificial due to the use of artificial colours or images as texture.
- when a 3D model of a real car is textured using conventional texturing methods or techniques, the interiors, seats, steering, and other internal and/or external parts look unreal.
- typically, one or more patches are mapped using photographs while other areas of the 3D model are painted by a texture artist using artificial texture.
- texture mapping techniques in computer graphics related to texturing of a 3D model are mostly limited to texturing only the exterior or outside region of 3D-models, primarily using artificial texture such as images other than photographs, or colours, applied via a texture map.
- Unwrapping of the 3D model, and then providing a functional UV layout before applying a texture map, is known.
- texturing of regions hidden due to the fitting of one part with another part is discussed in FIG. 8.
- texturing of internal parts, and texturing a 3D model using numerous real photographs and/or video (say in the hundreds or thousands), is a challenge and a problem unaddressed in the art.
- Photograph-based texturing of a 3D model is a complex problem in real-time rendering (the outcome of user-controlled interactions implementation), as image data is heavy compared to using colours as texture.
- Although non-real-time rendering can handle very heavy texture, still only limited attempts have been made to use real photographs as texture. However, such attempts could not show the real look and feel of the real object; in other words, the results obtained looked cartoonish.
- generated 3D model is a solid body or a shell type single body of exterior of the real object.
- the generated 3D-model will be a single body depicting the outer surface or region of the car with a very high number of polygonal faces.
- sub-parts such as car doors, window, and bonnet cannot be separated in the generated 3D-model.
- scanning interior of car, and further texturing of interior will be very difficult with known systems, and a costly affair.
- a 3D-model is a 3D computer graphics model representing a real or physical object, where the 3D computer graphics model representing a real 3D object is used in user-controlled interactions.
- the 3D-model is either a single body 3D model or multi-part 3D model having external and/or internal parts to form a single 3D-model.
- the user-controlled interactions are interactions performed by a user in real-time with a 3D-model representing a real object, where on providing an input by the user, a corresponding response is seen in the 3D computer model, and where the response is generated by real-time rendering of corresponding view of 3D model as output.
- the response can be a movement of the entire 3D model, or of a part of the 3D model moving to a position different from its initial state or position, or any texture change; each results in a change in the view of the 3D model.
- the user-controlled interactions are performed as per user choice, or in other words controlled by user.
- the user-controlled interactions also include the user-controlled realistic interactions, an advanced form of user-controlled interactions, that are discussed in detail in U.S. Patent application No. 13/946364, Patent Cooperation Treaty (PCT) Application No. PCT/IN2013/000448, and Indian Patent application No. 2253/DEL/2012, all now pending, filed by the same applicants as this application.
- texturing is carried out using colours and/or images.
- the images when used for texturing are either artificially created, or created to resemble texture close to photographs.
- the multi-part 3D-model of mobile includes parts viewed from outside such as display, keys, body, battery cover etc, and internal parts such as battery, interior of mobile, inner side of battery cover, SIM slots etc.
- the difficulty level further increases if texture is to be mapped on internal parts, such as an integrated SIM slot positioned beneath the mobile battery (which in turn is positioned beneath the battery cover) and the inner side of the battery cover, in one example of a 3D-model of a mobile.
- the present invention is also directed to overcoming or at least reducing one or more of the challenges, difficulties and problems set forth above.
- a method for photograph-based texturing of external and/or internal surfaces of a 3D model of a real 3D object.
- the method makes it possible to provide highly realistic texture to the 3D-models by applying numerous detailed photographs of the real 3D objects.
- the method makes it possible to provide extremely vivid appearances on and within the 3D-model, while retaining factual and precise details of the real 3D object, such that the textured 3D model looks real both from the exterior and interior side, and looks real even when individual parts are separated from the 3D model during user-controlled realistic interactions such as intrusive interactions (further described in the detailed description below).
- the method involves capturing HD (high definition) photographs of external and/or internal surfaces in different photograph capturing manners, applying the photographs on each UV layout of the 3D model, and then joining the UVs of different surfaces while applying different calibration techniques on the photographs and UV layouts.
- a texturing method of a three-dimensional (3D) model of a real 3D object using photograph and/or video for displaying real-time change of textures on the 3D model by real-time rendering during user-controlled interactions is provided.
- the method makes possible use of video in texturing, to further enhance view of 3D model's texture, and display realistic texture during user-controlled interactions.
- the method of present invention makes possible displaying realistic texture using video replicating real view of light blinking from a physical light emitting device such as head light or rear light of an automotive vehicle.
- a display method for displaying a 3D model in a virtual three-dimensional space on a Graphical User Interface (GUI) for performing user-controlled interactions.
- the method uses calibrated textures obtained from photographs/video of a real object for implementing user-controlled interactions on the 3D model. This makes it possible to display rendered graphics of the 3D model as output of a performed user-controlled interaction in real time in response to the user input, wherein the texture displayed on external and/or internal surfaces of the 3D model in or during each user-controlled interaction is calibrated texture obtained using photographs and/or video of the real object.
- the calibrated textures on the 3D model make the 3D model look real. The realism is maintained in or during each user-controlled interaction performed and displayed.
- a system for displaying a 3D model in a virtual three-dimensional space on a Graphical User Interface (GUI) and performing user-controlled interactions with the 3D model in real time.
- the present invention makes possible texturing of external and/or internal surfaces of the 3D model using real photographs and/or video, where the view of texture on the 3D-model textured in this way replicates the view of texture on the real 3D object.
- additionally, texture made by photo-editing of real photographs and/or videos of the real 3D object and/or the real 3D object's variants can be used, as can images other than photographs, such as artificially created images or images created to resemble texture close to photographs.
- Artificial colour can be optionally used for texturing only for surfaces which correspond to mono-colour surfaces in the real object to keep file size low without compromising on looks.
- the use of photographic images across all UV layouts for texturing ranges from 10% to 100% of the total number of images used, the remainder being images other than real photographs.
- Although a plurality of real HD photographs, for example hundreds or thousands in the case of complex 3D objects such as automotive vehicles (e.g. a bike), are used for texturing the 3D model in the methods and the system of the present invention, this has no or minimal visible impact on the rendering and displaying time and on real-time viewing of the textured 3D model, even if data is transmitted over a web page via the hypertext transfer protocol (HTTP). Precise detailing and clarity are maintained even on zooming the 3D model, such that even a mark region such as a logo, symbol or written instructions is clearly visible on the textured 3D model.
- FIG. 1 illustrates, through illustrations (a)-(d), different photographs capturing manners of external surfaces, in an example, used in texturing with the help of a front view and a rear view of a real 3D object, here represented by a scooter, and also with the help of an enlarged view of a handle and meter portion of the scooter, according to an embodiment of the present invention
- FIG. 2 illustrates, through illustrations (a)-(h), different photographs, shown here in schematic representation, to depict further photographs capturing manners of the external surfaces of the handle and meter portion of the scooter of FIG.1 in an example used for texturing according to an embodiment of the present invention
- FIG. 3a illustrates, through illustrations (a)-(i), further photographs capturing manner of both external and internal parts for capturing different shades and texture in precise details used in a texturing method according to an embodiment of the present invention
- FIG. 3b illustrates, through illustrations (a)-(e), photographs capturing manner of internal surfaces for capturing different shades and texture in precise details used in a texturing method according to an embodiment of the present invention
- FIG. 3c illustrates a video capturing manner for certain functioning surfaces during their operation, according to an embodiment of the present invention
- FIG. 4 illustrates selecting one or more surfaces of one or more external and/or internal parts of a 3D model in an example
- FIG. 5 through illustrations (a)-(p), illustrates in an example UV unwrapping for generating UV layout of each selected surface/s of FIG. 4, according to an embodiment of the present invention
- FIG. 7a shows front guard part surfaces on the 3D model of FIG. 4;
- FIG. 7b shows a front neck part of the selected surface of the 3D model of FIG. 4
- FIG. 7c shows joining all UVs of related UV layout to form texture for the selected surfaces of the 3D model of FIG.4 in an example
- FIG. 8 illustrates hidden regions of fitted parts of a real 3D object in an example
- FIG. 10a illustrates a calibration technique of photographs and UV layout in an example according to an embodiment of the present invention
- FIG. 10b illustrates a calibration technique of video and UV layout in an example according to an embodiment of the present invention
- FIG. 11 illustrates selecting another surfaces of the 3D model in an example of a 3D model according to an embodiment of the present invention
- FIG. 12 illustrates in an example UV unwrapping of the selected surfaces of FIG.11 for generating UV layout for each selected surfaces, according to an embodiment of the present invention
- FIG. 13 through illustrations (a)-(g), illustrates different schematic views of a textured 3D-model of mobile depicting textured chosen external and internal surfaces of outer and inner parts in one example using photographs of a mobile (a real 3D object), according to an embodiment of the present invention
- FIG. 14 illustrates a flowchart of a method for texturing on external and/or internal surfaces of a three-dimensional (3D) model of a real 3D object using photographs of the real 3D object, according to an embodiment of the present invention
- FIG. 15 illustrates a flowchart of a method for texturing of a 3D-model using photograph and video, according to another embodiment of the present invention
- FIG. 16 illustrates an example of uniform texture pattern of a seat part of 3D- model
- FIG. 17 illustrates an example of having multiple textures for same surface on 3D-model, according to an embodiment of the present invention
- FIG. 18 illustrates, through illustrations (a)-(c), schematic representation of textured 3D model of scooter in an example using a texturing method of the present invention of FIG. 14 or FIG. 15, according to an embodiment of the present invention.
- FIG. 19 illustrates a display method for displaying a 3D model in a virtual three-dimensional space on a Graphical User Interface (GUI) for performing user-controlled interactions in one example
- FIG. 20 illustrates user-controlled interactions in one example.
- Referring to FIG. 1, different photograph capturing manners for external surfaces of a real 3D object, here represented by a scooter, used in a texturing method according to an embodiment of the present invention, are illustrated.
- In FIG. 1, a front view of the scooter is shown, where the external surfaces of the scooter are photographed from various angles and in various ways to capture precise factual details in the photographs used for texturing of 3D models according to an embodiment of the present invention.
- External as well as internal surfaces of the scooter are photographed using a photographing device, preferably a digital camera.
- the camera is preferably a non-fixed and high resolution camera.
- the thick arrow shows different fields of view or angles for capturing the whole surface of the scooter.
- Other captured photographs include photographs of each face or sub-surface of the surface, where each sub-surface or face is captured normal to the face or plane, as shown in FIG. 1 by a thin arrow pointing to a small oval circle placed on a face of the front of the scooter, with an asterisk mark within the circle denoting the normal to that face/sub-surface.
- Such circles with asterisk marks are placed on different faces in illustrations (a) and (d) of FIG. 1 to denote the angle and focus while capturing photographs of each face or sub-surface. Faces are considered different if their normals differ by more than 5 degrees from an adjacent face on the same surface, as shown in illustration (b) of FIG. 1, where a schematic representation of a stretch (s1, s2, s3) of external surface is shown as a curvature.
- An angle (a1) between the normals (shown by dotted arrows) of a first stretch (s1) and a second stretch (s2) is greater than 5 degrees, and thus the stretches (s1, s2) are considered different faces.
- the normals of a third stretch (s3) and the second stretch (s2) of the same surface make an angle (a2) which is also greater than 5 degrees, and therefore the third stretch (s3) is taken as another face eligible for capturing a photograph normal to the stretch (s3).
- an angle of 5 degrees is the preferred criterion for distinguishing between faces during capturing of photographs; however, the present invention should not be deemed limited to a specific embodiment of capturing photographs used for texturing, and a different criterion for distinguishing between faces can be used by a person of ordinary skill in the art, such as an angle within the range of 4-25 degrees, a visual distinction, etc.
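- As a small illustration of this face-splitting rule (a sketch only; the normal vectors and the NumPy-based check are assumptions, not part of the disclosure), two stretches whose normals differ by more than the 5-degree threshold would be treated as separate faces, each photographed normal to its own plane:

```python
# Sketch of the 5-degree rule for deciding whether two adjacent stretches of a
# surface count as separate faces for photograph capture (assumed helper, not
# part of the original disclosure).
import numpy as np

def are_separate_faces(normal_a, normal_b, threshold_deg=5.0):
    """Return True if the angle between two face normals exceeds the threshold."""
    a = np.asarray(normal_a, dtype=float)
    b = np.asarray(normal_b, dtype=float)
    cos_angle = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return angle > threshold_deg

# Example: stretches s1 and s2 whose normals differ by about 10 degrees
s1_normal = [0.0, 0.0, 1.0]
s2_normal = [0.0, np.sin(np.radians(10)), np.cos(np.radians(10))]
print(are_separate_faces(s1_normal, s2_normal))  # True -> photograph each face separately
```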
- where a texture or shade differs within the same surface or between different surfaces, each texture and shade is captured individually for use in detailed and precise texturing of the 3D model.
- a rear view of the scooter is shown in an example to further explain the capturing of photographs of external surfaces in terms of textures, shades and mark regions such as written words and/or instructions on the scooter (the real 3D object).
- In illustration (d) of FIG. 1, different parts (p1-p6) and symbols (y1-y6) are shown using an enlarged view of a handle and meter portion (101) of illustration (c), in an example, to further demonstrate the manner of capturing photographs of external surfaces used in the texturing methods of the present invention. Close-up photographs of mark regions are captured, including the logo, the written instructions (i1), the words (w1, w2), drawings, the symbols (y1-y6) and the marks (m1), to bring out the clarity of the mark regions.
- the external surfaces of the external parts (p1-p6) of the scooter visible from outside are photographed not only when the parts are fitted and all parts are integrated on the scooter, but also by segregating or separating the individual parts (p1, p2), as shown in different illustrations of FIG. 2. It will be appreciated that although all parts or components may be photographed using the different photograph capturing manners to be used for texturing, for practical use only those surfaces of external and/or internal parts which are of interest or selected for texturing in the 3D model are photographed using the different photograph capturing manners described above.
- the surfaces displayed in or during different user-controlled interactions in the 3D model are usually the surfaces selected for texturing using photographs, and photographs of such surfaces are captured.
- the user-controlled interactions include user-controlled realistic interactions selected from extrusive interactions, intrusive interactions, time-bound-changes-based interactions and environment-mapping-based interactions, and also the interactions performed by a user with a 3D model where, on an input being provided by the user, a corresponding response is seen in the 3D model. The response is generated in real time, resulting in a change in the view of the 3D model.
- FIG. 2 shows different photographs, shown here in schematic representation in illustrations (a)-(h), to depict further photograph capturing manners for the external parts, with the help of the handle and meter portion (101) of the scooter of FIG. 1, in an example used for texturing according to an embodiment of the present invention (not all photographs shown).
- Illustration (d) of FIG. 2 is a photograph (shown schematically) of the dismantled and separated rear view mirror part (p2), and illustration (e) of FIG. 2 is a photograph (shown schematically) of the dismantled and separated meter part (p1) of the scooter.
- Photographing segregated parts enables capturing details of the texture of hidden regions or areas of parts, which get covered or masked when the parts are fitted. An example is shown in FIG. 8, where four different parts (801-804) are shown in fitted position and where, due to the fitting or arrangement of the parts, some regions (805) are masked or hidden. The texture of these regions (805) cannot be captured in photographs when the parts are in the fitted position, and therefore the factual details of texture for these regions are captured by separating or segregating the individual parts (801, 802, 803, 804) from each other.
- FIG. 3a shows different photographs, also shown in schematic representation in illustrations (a)-(i), for illustrating further photograph capturing manners of both external and internal surfaces for capturing different shades and texture in precise detail, used in a texturing method according to an embodiment of the present invention.
- An internal part (302) of the real 3D object, here the scooter, not visible from the outer side, is captured and then removed, as seen in illustration (d) of FIG. 3a, to capture a precise top view of a mudguard part (301) with an internal part holder and a mudguard covering (306).
- the mudguard part (301) is completely segregated to capture photographs of the segregated mudguard part (301) as shown in illustration (e) of FIG. 3a.
- Small sub-parts can also be removed for capturing details, such as removal of a sub-part (304) to get an uninterrupted view of the wheel part, and further removal of sub-parts as shown in illustrations (f) and (g) of FIG. 3a, where a sub-part (303) is also removed to capture a photograph of a wheel part (305).
- Photographs of all mark regions, such as inscribed words (w3), embossed text (w4) or marks (m2) on the wheel part (305), as shown in illustrations (e), (f), (g) and (i) of FIG. 3a, are also captured normal to the mark region, by taking a close-up photograph in a zoomed manner.
- The tread (307) of the tire is captured as shown in illustration (h) of FIG. 3a.
- FIG. 3b shows different photographs, also shown in schematic representation in illustrations (a)-(e) of FIG. 3b, for illustrating further photograph capturing manners of internal surfaces and internal parts for capturing different shades and texture in precise detail, used in a texturing method according to an embodiment of the present invention.
- Illustrations (b), (c) and (d) of FIG. 3b show captured photographs, in schematic representations, obtained by focusing on different sub-surfaces of the internal surface having different faces. Illustration (e) of FIG. 3b is the internal surface of the seat part of the scooter.
- Inaccessible surfaces of real 3D object such as interior of fuel tank of an automotive vehicle can also be displayed in some implementation in user-controlled interactions.
- a flexible means such as pipe camera can be used to capture photographs or video to be used in texturing.
- the lighting condition or environment during capturing of photographs or video can be under natural light or artificial light, depending on getting best view of the real 3D object for capturing precise details in photographs or video to be used for texturing.
- FIG. 3c illustrates video capturing manners of external and internal surfaces of real 3D object used in a texturing method according to an embodiment of the present invention.
- Real 3D objects can have some surfaces which are functioning, such as lights, digital displays, etc. For example, to display the blinking of a light or a screen display change of a digital meter, video of these functioning surfaces is captured. The video is captured normal to the surface of the real 3D object while the surface is in an operative state and functioning. Capturing video of such functioning surfaces provides a real visual rather than providing such a visual by animation. The captured video can be used as texture data and applied on the 3D-model on a particular surface which corresponds to the operative state in the real 3D object.
- video is preferably captured for functional surfaces; however, in some implementations, video can be captured for other (non-functioning) surfaces in the same manner as the photograph capturing manner. In such implementations or cases, only images obtained from the videos may be used in the texturing methods of the present invention, instead of capturing photographs directly, as images obtained from video are also factual and retain precise details.
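- As a small aside (a sketch only; OpenCV and the file name are assumptions, not part of the disclosure), still frames can be pulled from a captured video so they can serve in place of directly captured photographs, as described above:

```python
# Sketch: extract every n-th frame of a captured video as an image that can be
# used as photographic texture data (OpenCV is an assumed dependency).
import cv2

def frames_from_video(video_path, every_nth=10):
    """Extract every n-th frame of a video for use as texture images."""
    capture = cv2.VideoCapture(video_path)
    frames, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % every_nth == 0:
            frames.append(frame)  # BGR image array usable like a photograph
        index += 1
    capture.release()
    return frames

# Usage with a hypothetical file name:
# images = frames_from_video("handle_and_meter.mp4")
```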
- FIG. 4 illustrates a portion of a 3D-model depicting selection of one or more surfaces of one or more parts of a 3D-model for carrying out UV unwrapping of the selected surface/s of the 3D model.
- the 3D model is a 3D computer graphics model on which user-controlled interactions are applied or in other words on which the user-controlled interactions can be programmed, so as to make the 3D model interactive to users' input.
- the surfaces of a handle and meter portion (101') of the 3D model (entire 3D model not shown) are selected for further processing.
- UV unwrapping of the selected surfaces of the 3D model can be carried out using standard technique. However the entire 3D model is not unwrapped as a whole, and a single UV layout is drawn for each selected surface or part of the 3D model.
- the selected surfaces in this example contain multiple parts, both external and internal, such as a meter part (p1'), a rear mirror (p2'), brakes, a handle cover part (p3'), a hand brake part (p4'), a meter-case part (p5'), a front guard (p6'), a screw part (p7') and other chosen sub-parts, which are UV unwrapped at a time for generation of UV layouts.
- the illustration (a) shows a UV layout for the meter-case part (p5')
- the illustration (b) shows a UV layout for the meter part (p1')
- illustration (c) shows a UV layout for the front guard (p6')
- illustration (d) shows multiple UV layouts for a part of the front guard (p6'), a rear handlebar cover part, and a portion of covering
- illustration (e) shows a UV layout for the rear mirror (p2')
- illustration (f) shows a UV layout for another portion of the handlebar covering part
- illustration (g) shows a UV layout for the screw part (p7')
- illustration (m) shows a UV layout for the hand brake part (p4').
- illustrations (h)-(l), (n), (o), (p) of FIG. 5 represent UV layouts of sub-surfaces of different parts.
- the different photograph and video capturing manners, in addition to obtaining a single UV layout, make possible easy and precise alignment of the photograph and/or video for the particular face in a distortion-free manner at the first alignment attempt itself, saving texturing time while avoiding any loss of detail during alignment of photographs or video. Further, a check for distortion is also carried out, to see if any photograph captured slightly away from the normal is distorted; where any distortion is found, a calibration technique is used, as discussed in FIG. 9 and FIG. 10, to make the application of photographs and/or video completely distortion free while retaining the exact texture of the photographs and/or video.
- an exterior UV canvas and an interior UV canvas can be drawn or generated, where the exterior UV canvas comprises UV layouts for chosen external surfaces and the interior UV canvas comprises UV layouts for chosen internal surfaces.
- the UV unwrapping may be any type of UV unwrapping known to one having ordinary skill in the art. The present invention should not be deemed as limited to a specific embodiment of UV unwrapping and/or drawing/generation of UV layouts.
- FIG. 6 illustrates in an example, through illustrations (a)-(p), the generation of texture by application of each photograph and/or video on the corresponding UVs of the UV layouts of FIG. 5, and the joining of UVs of different surfaces to make the texture of selected surfaces of different parts of the handle and meter portion (101'), while performing different calibrations of photographs and/or video during application, according to an embodiment of the present invention.
- the photograph/s and/or video of a part or surface to be applied on corresponding UV layout for the part is/are identified among different photographs and/or video.
- the identified photographs for each part are shown in illustrations (a)-(c) and (f)-(h) of FIG. 6.
- a UV layout can have multiple textures for one or more surfaces in the 3D model.
- one or more UV layouts, such as the UV layout (p1'), can have textures obtained from just video.
- one or more UV layouts, such as the UV layout (p1'), can have textures of both photograph and video.
- UVs of adjacent surfaces with calibrated texture are then joined; in other words, the UVs of related UV layouts having calibrated texture are joined, as shown in illustration (a) of FIG. 6, where the UVs of the meter-case part (p5') and the meter part (p1') are joined.
- the UVs of different surfaces of adjacent parts with calibrated texture are joined.
- the UV layout of the rear handlebar cover part (p5') as shown in illustration (d) of FIG. 6, the side face (701) of the front guard part (p6') as shown in FIG. 7a, and the corresponding UV layout of the side face (701) of the front guard part (p6') as shown in illustration (d) of FIG. 6 and FIG. 7a, are joined with calibrated texture as shown in FIG. 7c.
- One or more UV layouts can be drawn for a single part, depending on the number of faces, the angle between adjacent faces, and the number of mark regions on the part.
- the front guard part (p6') of the 3D model has two distinct faces (701, 702) and a mark region of written words (w5).
- UV layouts are drawn to align captured photographs and/or video (not shown) easily and additionally retain factual details of the texture of captured photographs and/or video.
- the application of photographs and/or video becomes quick and distortion free.
- different calibration techniques are used during application of photographs and/or video on a UV layout for rectification of distortion and other artificial artifacts, if any. Another calibration can be done during joining of UVs of related UV layouts.
- Referring to FIG. 9, a calibration technique of photographs with a UV layout, in an example according to an embodiment of the present invention, is illustrated through illustrations (a)-(d).
- Illustration (c) of FIG. 9 shows a UV layout mesh (902) drawn normal to a surface before calibration
- illustration (d) of FIG. 9 shows the UV layout mesh (902) after calibration.
- a single photograph (901) is either an individual photograph or photographs of different faces of the same part merged to become a single photographic image.
- the edges of the photographic image (901) are matched with the edges of the UV layout mesh (902), as shown in illustration (a) of FIG. 9.
- the photograph shown here was captured slightly deviated from the normal to the surface, and thus, during application of the photographic image on the UV layout mesh (902), a perfect alignment may not occur at the first attempt.
- the UV mesh is calibrated automatically to first match the edges of the applied photograph, as shown in illustration (b) of FIG. 9, and then the boundary points of the UV layout mesh (902) on each side of the boundary are made equidistant from the other points on that side, removing distortion easily and quickly without the loss of any detail, as shown in illustration (d) of FIG. 9.
- a UV layout for an individual segregated part may consist of two faces, for which one photograph can be captured per face. The two photographs are then joined into a single photographic image in a joining calibration, where the junction of the two photographs corresponds to the junction of the two faces of the individual segregated part.
- the rear view mirror part (p2') has two faces (603, 604) which are not very distinct.
- One photograph of one face (601) is joined with the photograph of the other face (603) at the face junction (605), and then aligned or applied on the UV layout of the rear view mirror part (p2'), which helps avoid artificial artifacts and retains factual details without any additional calibration.
- Referring to FIG. 10a, a calibration technique of photographs and a UV layout, in an example according to an embodiment of the present invention, is illustrated.
- identification of the photograph(s) related to each UV layout is carried out (1001). If a single photograph is identified for a UV layout, the edges of the single photographic image are matched with the edges of the corresponding UV layout (1003); if two or more photographs are identified for a single UV layout, the photographs are first joined at the junction, particularly the face junction (1002), as described above in an example, before proceeding to matching of edges. The edges usually match because the photographs are captured normal to the faces of the surfaces of the external and internal parts, and the UV layouts are also drawn normal to those surfaces, making application of the photographic image on the UV layout correct and distortion free, automatically reducing human effort.
- UV layout mesh can be adjusted or the photographs can be edited by photo-editing at boundary for aiding in alignment.
- step 1004 makes the application of the photographic image on the UV layout correct and distortion free, marking completion of the first calibration.
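- A minimal sketch of the boundary-equalisation part of this calibration (an assumption for illustration: each side of the UV layout mesh boundary is represented as an ordered list of 2D UV points, and NumPy is used; this is not the actual implementation of the disclosure) is shown below:

```python
# Sketch: redistribute the boundary points of one side of a UV layout mesh so
# that they are equidistant along that side, as in the calibration described
# above (assumed representation, not the original implementation).
import numpy as np

def make_boundary_equidistant(side_points):
    """Resample an ordered polyline of UV boundary points to equal spacing."""
    pts = np.asarray(side_points, dtype=float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)    # segment lengths
    cum = np.concatenate([[0.0], np.cumsum(seg)])         # cumulative arc length
    targets = np.linspace(0.0, cum[-1], len(pts))         # equally spaced targets
    new_x = np.interp(targets, cum, pts[:, 0])
    new_y = np.interp(targets, cum, pts[:, 1])
    return np.column_stack([new_x, new_y])

# Example: unevenly spaced points along one side become evenly spaced
side = [[0.0, 0.0], [0.1, 0.0], [0.15, 0.0], [0.9, 0.0], [1.0, 0.0]]
print(make_boundary_equidistant(side))  # x becomes 0.0, 0.25, 0.5, 0.75, 1.0
```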
- Referring to FIG. 10b, a calibration technique of video and a UV layout, in an example according to an embodiment of the present invention, is illustrated.
- identification of the video(s) related to each UV layout is carried out (1101) for functioning surfaces; that is, video is used as a texture in the 3D model for surfaces corresponding to functioning parts in the real object.
- time synchronization calibration is carried out to generate one or more videos for functioning surfaces (1103).
- Time synchronization involves adjusting time intervals and/or editing of video.
- two or more videos may be captured from different fields of view to cover an entire functioning surface, because a single video cannot be captured of a functioning surface that is either curved or has a surface area beyond the coverage of one field of view of the camera.
- merging the identified videos to obtain a single video for the entire functioning surface is carried out (1102), while performing time synchronization to match or synchronize the image frames of the captured videos by video editing.
- a surface such as rear light of a car is a curved surface (functioning surface) for which it is difficult to take/capture video from one field of view
- two or more videos may be captured from two to three fields of view.
- two or more videos may be merged to single video to apply on a UV layout for the curved surface.
- merging means using time synchronization to match, adjust or synchronize the image frames of the videos captured from different fields of view by photo/video editing, so as to generate one video applicable on the UV layout of the curved surface.
- Another example can be a long surface (functioning surface with large surface area) where it becomes difficult to capture a single video normal to surface or a close-up video without changing the field of view of the camera lens.
- two or more videos captured to cover the entire length of the surface may be merged into a single video, to be applied on a UV layout for the long surface, by video editing while synchronizing the image frames of the videos captured for the long surface.
- Functioning surface means a surface of functioning parts in the real/physical 3D object that are operative, such as automotive vehicle lights (rear lights, head lights), to show blinking, ON-OFF, etc.
- a calibration is carried out on the edges of the UV layout mesh to match the edges of the video, where the points on the outer boundaries of each side of the UV layout mesh are made equidistant from each other (1104), the same as described in FIG. 9 for photographs.
- UV layout mesh can be adjusted or the video can be edited at boundary.
- the application of the video on the UV layout is then denoted as correct and distortion free, and proceeding to the next UV layout is carried out (1105).
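- As a rough sketch of the video-merging step (1102) described above (OpenCV, the frame offset and the file handling are assumptions for illustration; a real workflow would typically rely on video-editing tools):

```python
# Sketch: merge two videos of one functioning surface, captured from different
# fields of view, into a single time-synchronized video (assumed approach).
import cv2

def merge_synchronized(left_path, right_path, out_path, right_offset_frames=0):
    left = cv2.VideoCapture(left_path)
    right = cv2.VideoCapture(right_path)
    right.set(cv2.CAP_PROP_POS_FRAMES, right_offset_frames)  # time synchronization
    fps = left.get(cv2.CAP_PROP_FPS) or 25.0
    writer = None
    while True:
        ok_left, frame_left = left.read()
        ok_right, frame_right = right.read()
        if not (ok_left and ok_right):
            break
        # Match frame sizes, then place the two views side by side as one frame.
        frame_right = cv2.resize(frame_right, (frame_left.shape[1], frame_left.shape[0]))
        merged = cv2.hconcat([frame_left, frame_right])
        if writer is None:
            height, width = merged.shape[:2]
            writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height))
        writer.write(merged)
    left.release()
    right.release()
    if writer is not None:
        writer.release()

# Usage with hypothetical file names:
# merge_synchronized("rear_light_left.mp4", "rear_light_right.mp4", "rear_light_full.mp4", right_offset_frames=3)
```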
- Another calibration is carried out during joining of UVs of related UV layout to form texture.
- visible artifacts are minimal, and joining of UVs of different surfaces is easier due to the previously performed first calibration.
- a check is carried out for any visible artifacts such as seams, and any visible artifacts observed are corrected by further adjustment of the UV layout mesh boundaries and the photographs and/or video.
- An editing of photographs at boundary can also be carried out.
- Clone patching of edges can also be used to remove seams using conventional techniques.
- more pixels are allocated to the mark regions than to other regions or surfaces of comparatively uniform structure, to bring out clarity and vividness and to remove blurring of small marks. As a separate UV layout is drawn for each mark region, the pixel allocation is simplified.
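- A tiny sketch of this weighting idea (the region representation and the weighting factor are assumptions for illustration only, not the disclosed allocation scheme):

```python
# Sketch: give mark regions (logos, words, symbols) a larger share of the
# available texture pixels than uniform surfaces of comparable area.
def allocate_pixels(regions, total_pixels, mark_weight=4.0):
    """regions: list of (name, surface_area, is_mark_region) tuples."""
    weights = [area * (mark_weight if is_mark else 1.0) for _, area, is_mark in regions]
    total_weight = sum(weights)
    return {name: int(total_pixels * w / total_weight)
            for (name, _, _), w in zip(regions, weights)}

# Example: a small logo receives far more pixels per unit area than a plain panel
print(allocate_pixels([("panel", 100.0, False), ("logo", 5.0, True)], 1_000_000))
```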
- FIG. 11 illustrates selecting other surfaces of the 3D model, in an example of a 3D model, for generating a UV layout for each selected surface, according to an embodiment of the present invention.
- surfaces of the front mudguard part (301') are shown selected for disintegration of the part (301') from the 3D model of the scooter.
- Four UV layouts are created for the different textures on the front mudguard part (301'), as shown in FIG. 12.
- the front mudguard part (301') selected in FIG. 11 is unwrapped or flattened, with drawing/generation of UV layouts of each chosen/selected surface.
- Referring to FIG. 13, different schematic views of a 3D-model of a mobile, depicting external and/or internal surfaces textured using real photographs and/or video of the real 3D object (a mobile), according to one embodiment of the present invention, are illustrated through illustrations (a)-(g). Illustration (a) of FIG. 13 shows different operable sub-parts viewed from outside, such as the display, keys and body.
- As mark regions of symbols and alphabets appear on each key of the mobile, photographs or videos and a UV layout are obtained separately for each key.
- the UVs of each UV layout of keys, display and body are joined after application of photographs and/or video to form texture for the front view of 3D model of mobile, as shown in illustration (a) of FIG. 13.
- the creation of separate UV layout for texturing of each key makes the mark region on each key very clear and real, such that on zooming the 3D-model, the symbols and alphabets do not get blurred.
- Illustration (b) shows schematically external surface of an external part that is outer side of battery cover, which is textured by a texturing method of the present invention.
- Illustration (c) shows internal surface that is inner side of battery cover, which is textured by a texturing method of the present invention.
- Illustrations (d)-(g) of FIG. 13 show the interior of the mobile, surfaces of internal parts, and the SIM slot positioned beneath the mobile battery. All the visible surfaces observed during the intrusive interaction of opening the parts of the 3D mobile one by one, in a user-controlled realistic interaction as shown in illustrations (a)-(g) of FIG. 13, are textured by the texturing method of the present invention.
- FIG. 14 illustrates a flowchart, according to an embodiment of the present invention, of a texturing method for external and/or internal surfaces of a three-dimensional (3D) model of a real 3D object using photographs of the real 3D object.
- the texturing method involves obtaining a plurality of photographs of chosen external and/or internal surfaces of the real 3D object (1401) by different photograph capturing manners. The different photograph capturing manners are discussed in FIG. 1, and further in FIG. 2, FIG. 3a and FIG. 3b.
- one or more surfaces of one or more parts of the 3D model is/are selected. The selection of the surfaces is discussed by way of example in FIG. 4.
- UV unwrapping of the selected surface/s of the 3D model for generating UV layout for each selected surface/s takes place.
- the generation or drawing of the UV layouts depends on the angle between adjacent faces and the number of mark regions on each unwrapped part. A separate UV layout is preferred for each mark region such as a logo, words, marks, symbols, etc.
- the drawing or generation of one or more UV layout for each selected surface is explained in an example in FIG. 5, and FIG. 7a.
- In step 1404, the photograph(s) related to each UV layout are identified and applied on each UV layout.
- a calibration is carried out on identified photograph/s and UV layout using a calibration technique of photographs with UV layout as described in FIG. 10a and FIG. 9 to obtain texture for each corresponding UV layout.
- the texture obtained in this step is calibrated texture of real photographs which aligns accurately with the corresponding surfaces of 3D model.
- In step 1405, after calibration is done, joining of all UVs of related UV layouts with calibrated texture to form the texture for the selected surfaces is carried out. Meanwhile, further calibration of photographs with UV layouts is carried out, which includes performing a check for visible artifacts; if visible artifacts are identified, the UV layout mesh boundaries and photographs are adjusted.
- the pixel allocation for the mark region is calibrated separately for resolution such that the texture of the mark region is clear and vivid, and relatively more pixels are assigned to the mark region.
- Photographs can be joined seamlessly during second and third calibration by means of photo-editing using conventional techniques.
- the editing or photo-editing means editing of real photographs to enhance the photograph quality, cropping photographs, obtaining a texture patch from the photographs for clone patching, tiling, etc.
- Step 1406 involves repeating steps 1402-1405 until all chosen external and/or internal surfaces of the 3D model are textured using photographs, while at the joining of surfaces of different sets of the selected surfaces, a third calibration is applied for making seamless texture during each repetition step.
- a check for texture alignment may be carried out after each calibration, by applying calibrated texture on the 3D model for selected surface/s of the 3D-model.
- Each UV point in UV layout corresponds to one x,y,z coordinate in the 3D model.
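- As a minimal sketch of this correspondence (the dictionary representation and the coordinate values are assumptions for illustration, not the storage format of the disclosure):

```python
# Sketch: each UV point of a layout maps to exactly one (x, y, z) vertex of the
# 3D model, so calibrated texture aligns with the model surface (assumed form).
from typing import Dict, Tuple

UV = Tuple[float, float]
XYZ = Tuple[float, float, float]

uv_to_vertex: Dict[UV, XYZ] = {
    (0.25, 0.75): (0.10, 1.20, 0.05),  # a UV point on one layout
    (0.50, 0.75): (0.15, 1.22, 0.05),  # an adjacent UV point on the same face
}

def lookup_vertex(uv: UV) -> XYZ:
    """Return the model-space coordinate textured by this UV point."""
    return uv_to_vertex[uv]

print(lookup_vertex((0.25, 0.75)))
```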
- the final calibrated textures and corresponding 3D-model is stored as texture data and 3D-model data respectively.
- This step provides the 3D model data and the corresponding calibrated texture data obtained in repetition step 1406 for implementing user-controlled interactions, transforming the 3D model data with calibrated texture data into an interactive 3D model for performing user-controlled interactions.
- the calibrated textures and corresponding 3D-model obtained are used for displaying real-like textures on a 3D-model which is used for user-controlled interactions, as discussed in FIG. 19.
- one selected part's surfaces of the 3D-model can be UV unwrapped at a time for creating one or more UV layouts, followed by application of a photograph for each UV layout while performing the first calibration of photographs with the UV layout. This is followed by unwrapping of a second selected part's surfaces for creating one or more UV layouts for the second part's surfaces.
- This embodiment may be employed for 3D models containing a few external and/or internal parts, or when an individual part contains very complex geometry with multiple faces, textures, etc. Certain external and/or internal surfaces in the 3D model, such as surfaces having a single colour or surfaces containing uniform texture, can be textured using colour, or a combination of colour and textures obtained by photo-editing of real photographs.
- FIG. 15 illustrates a flowchart of a texturing method of a three-dimensional (3D) model of a real 3D object using photograph and video, according to an embodiment of the present invention.
- In step 1501, obtaining and using a plurality of photographs and/or video of the real 3D object and/or the real 3D object's variants is carried out.
- the photographs and/or video are used as texture data.
- the real 3D object's variants have the same shape as the real 3D object.
- Each variant of the real 3D object contains at least one texture, pattern or mark region different from the real 3D object, which qualifies it as a variant.
- the different photograph capturing manners are discussed in FIG. 1 , and further in FIG. 2, FIG. 3a and FIG. 3b.
- the video capturing manner is discussed in FIG. 3c.
- the photographs and videos are captured by a photograph and video capturing device, preferably a digital camera configured for capturing high resolution photographs and video.
- Step 1502 involves selecting one or more surfaces of one or more external and/or internal parts of the 3D model.
- the selection of the surfaces is discussed by way of example in FIG. 4.
- the surfaces selected for texturing are usually the external and/or internal surfaces which are to be displayed in or during different user-controlled interactions.
- UV unwrapping of the selected surface(s) of the 3D model for generating a UV layout for each selected surface is carried out. The drawing or generation of one or more UV layouts for each selected surface is explained in an example in FIG. 5 and FIG. 7a.
- Step 1504 involves identifying texture data corresponding to each UV layout, and applying one or more matched photographs and/or video as texture data on each corresponding UV layout.
- Different calibrations on photographs and video are carried out during application, as described in FIG. 10a and FIG. 10b respectively.
- Calibration includes adjusting UV layout mesh to make points on outer boundaries of each side of the UV layout mesh equidistant. Additionally, editing of photograph/s at boundary can be carried out for aiding in alignment.
- Step 1505 involves joining all UVs of related UV layout to form texture for the selected surface/s.
- each UV layout at this stage comprises calibrated texture.
- further calibration is carried out during joining.
- a check is made for any visible artifacts such as seams, and any visible artifacts observed are corrected by further adjustment of the UV layout mesh boundaries and the photographs and/or video.
- Clone patching of edges can also be used to remove seams using conventional techniques.
- more pixels are allocated to the mark regions than to other regions or surfaces of comparatively uniform structure, to bring out clarity and vividness and to remove blurring of small marks. As a separate UV layout is drawn for each mark region, the pixel allocation is simplified.
- Step 1506 involves repeating steps 1502 to 1505 until all selected/chosen external and/or internal surfaces of the 3D model are textured using photographs and/or video, while at the joining of surfaces of different sets of the selected surfaces, the third calibration is applied for making seamless texture during each repetition of step 1506.
- a check for texture alignment is optionally carried out after each calibration, by applying calibrated texture on the 3D model for selected surface/s of the 3D model.
- the view of texture on the textured 3D model replicates view of texture as on the real 3D object for the selected external and/or internal surfaces.
- the final calibrated textures and corresponding 3D model is stored as texture data and 3D model data respectively.
- the calibrated textures and corresponding 3D-model obtained are used in user-controlled interactions implementation.
- the calibrated textures and corresponding 3D-model obtained are used for displaying real-like textures on a 3D-model which is used for user-controlled interactions, as discussed in FIG. 19.
- the texture data optionally comprises texture made by photo-editing of real photographs and/or videos, images other than photographs, or artificial colour. Even when images other than photographs are used, the use of photographic images across all UV layouts for texturing ranges from 10% to 100% of the total number of images used for texturing in all cases. In other words, the method provides flexibility and is capable of using numerous photographic images, up to 100% of all UV layouts, for texturing.
- One or more of the above steps may be performed on a computer.
- FIG. 16 illustrates a seat surface having a uniform pattern in one example. Certain external and/or internal surfaces, such as surfaces having a single colour and surfaces containing uniform texture, can be textured using colour, or a combination of colour and photo-editing of photographs or video. As the seat surface has a uniform pattern, photo-editing measures can also provide realistic textures for such surfaces as an alternative to using real photographs for the entire seat surface. Photo-editing of real photographs and/or videos includes photo-editing to enhance the photograph/video quality, cropping photographs, photo-editing to obtain a texture patch from the photographs and/or videos, and tiling or clone patching using known techniques.
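- As a small sketch of the tiling alternative mentioned above (Pillow, the file names and the output size are assumptions for illustration only):

```python
# Sketch: build a uniform texture, e.g. for the seat surface, by tiling a small
# patch cropped from a real photograph instead of photographing the whole surface.
from PIL import Image

def tile_patch(patch_path, out_size=(1024, 1024)):
    """Tile a cropped photographic patch to fill a texture of the given size."""
    patch = Image.open(patch_path)
    texture = Image.new("RGB", out_size)
    for y in range(0, out_size[1], patch.height):
        for x in range(0, out_size[0], patch.width):
            texture.paste(patch, (x, y))
    return texture

# Usage with hypothetical file names:
# tile_patch("seat_patch.jpg").save("seat_texture.jpg")
```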
- FIG. 17, through illustrations (a)-(b) illustrates different views of a rear light surface section in the 3D model showing use of texture data of both photographs and video for the rear light surface in the 3D model.
- the rear light surface is textured using photographs, producing textures (t6-t9) for the lights in off mode.
- the captured videos can be used as textures (vt6-vt9), applied on the UV layout of the rear light surface one at a time, and then stored as calibrated texture.
- FIG. 18 illustrates, through illustrations (a)-(c), a schematic representation of the 3D model of the scooter textured using real photographs in an example, using the texturing methods of the present invention, according to an embodiment of the present invention.
- the 3D model (shown here in black and white drawings and part images) provides or retains minute details and vivid appearance replicating view of real scooter texture.
- the exterior and interior of the 3D model looks real, and maintains minute details when viewed from different field of view, or even when individual parts are separated from the 3D model during user-controlled realistic interactions such as intrusive interactions.
- Internal surfaces of the textured 3D model look extremely real and vivid. Illustration (c) of FIG. 18 shows schematically the internal surfaces of the seat and the external surfaces of the seat holder.
- Referring to FIG. 19, a display method for displaying a 3D model in a virtual three-dimensional space on a Graphical User Interface (GUI) for performing user-controlled interactions is illustrated in one example.
- the method involves providing 3D model data and corresponding texture data (step 1901).
- the texture data includes calibrated textures obtained using photographs/video of a real object obtained in texturing method of FIG. 15.
- implementing user-controlled interactions on the 3D model is carried out.
- the implementation of user-controlled interactions comprises applying user-controlled interactions logic to prepare for rendering of the 3D model with calibrated textures in real time using the provided texture data and the 3D model data.
- In step 1903, real-time rendering and displaying of the 3D model with the calibrated texture in the virtual three-dimensional space for performing user-controlled interactions takes place.
- Once the 3D model is rendered and displayed in the virtual three-dimensional space on the GUI, all subsequent user-controlled interactions can be performed in continuation of the previous interaction.
- Step 1904 involves receiving user input for performing user-controlled interactions with the displayed 3D-model.
- In step 1905, in response to the user input, the 3D model is rendered in real time according to the user-controlled interaction.
- a separate 3D model is not loaded in response to user input for performing user-controlled interaction.
- In step 1906, the corresponding rendered graphics of the 3D model are displayed as output of the performed user-controlled interaction in real time in response to the user input.
- the 3D model is displayed with a background scene in one embodiment or without a background scene in another embodiment.
- a background scene when present may be still background or movable background scene.
- the last view, position and orientation of the 3D model in each user-controlled interaction is preserved for receiving input for next user-controlled interaction in any position or orientation.
- the texture displayed on external and/or internal surfaces of 3D model in- or -during each user-controlled interaction is calibrated texture obtained using photographs and/or video of the real object providing real-like look and feel on the displayed 3D model.
- the interactive 3D model can be displayed in virtual three-dimensional space on a GUI over a webpage through a network such as the INTERNET, a LAN, a WAN or the like.
- the interactive 3D model in one implementation, can be displayed in virtual three-dimensional space on a GUI in application software over a display.
- the display can be an electronic display, a projection-based display, a wearable near-eye display or a see-through display. Examples of user-controlled interactions are illustrated in FIG. 20, through illustrations (a)-(d), where a 3D model of a bike is rotated to different positions on providing user input, and where, during rotation, real-like texture is displayed using the texturing methods of the present invention.
- a head light part can be put in ON mode in an interaction using texture of video for the functional part.
- Illustration (d) shows opening of seat part in an interaction to show internal surface in zoomed view also textured realistically with calibrated texture of photographs.
- one or more processors;
- at least one non-transitory computer-readable storage medium configured to contain: a database configured to store 3D model data and corresponding texture data, where the texture data includes calibrated textures of real photographs and/or video obtained from the texturing method as discussed in FIG. 14 or FIG. 15;
- the steps are repeated for performing each user-controlled interaction.
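- A schematic sketch of this interaction loop (the class, its data and the print-based stand-in for real-time rendering are assumptions for illustration; they are not the disclosed system) is given below:

```python
# Sketch of the FIG. 19 loop: one 3D model stays loaded, each user input is
# applied to it and re-rendered in real time, and the last view/orientation is
# preserved as the starting point of the next interaction (assumed stand-ins).
class InteractiveModel:
    def __init__(self, model_data, texture_data):
        self.model_data = model_data        # 3D model data (step 1901)
        self.texture_data = texture_data    # calibrated textures from FIG. 14 / FIG. 15
        self.orientation = (0.0, 0.0, 0.0)  # preserved between interactions

    def render(self):
        # Placeholder for real-time rendering of the current view (steps 1903/1905).
        print(f"render {self.model_data} at orientation {self.orientation}")

    def interact(self, rotation):
        # Steps 1904-1906: apply the user input to the already loaded model and
        # re-render; no separate 3D model is loaded for the interaction.
        self.orientation = tuple(o + r for o, r in zip(self.orientation, rotation))
        self.render()

bike = InteractiveModel("bike-3d-model", "calibrated-textures")
bike.render()                    # initial display (step 1903)
bike.interact((0.0, 30.0, 0.0))  # user rotates the model; view continues from the last state
```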
- the user input is a touch input, input through a pointing device or a keyboard, or a gesture input.
- the texture data includes calibrated textures of real photographs and/or videos of the real 3D object and the real 3D object's variants, and texture made by photo-editing of real photographs.
- the GUI can be accessible over a web-page via hypertext transfer protocol.
- the textured 3D-model obtained by the texturing method (FIG. 14, FIG. 15) of the present invention may be used to create rendered images of the textured 3D-model for different surfaces of external and internal parts.
- the rendered images from the textured 3D-model will carry improved looks and texture, and can be used for texturing other similar 3D models using the teachings of this patent application, instead of directly using real photographs.
- this aspect or implementation shall also be considered within the scope of the appended claims.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Graphics (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Generation (AREA)
- Processing Or Creating Images (AREA)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
IN3840/DEL/2013 | 2013-12-31 | ||
IN3840DE2013 IN2013DE03840A | 2013-12-31 | 2014-03-19 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015102014A1 true WO2015102014A1 (en) | 2015-07-09 |
Family
ID=53493379
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/IN2014/000177 WO2015102014A1 (en) | 2013-12-31 | 2014-03-19 | Texturing of 3d-models using photographs and/or video for use in user-controlled interactions implementation |
Country Status (2)
Country | Link |
---|---|
IN (1) | IN2013DE03840A
WO (1) | WO2015102014A1
-
2014
- 2014-03-19 IN IN3840DE2013 patent/IN2013DE03840A/en unknown
- 2014-03-19 WO PCT/IN2014/000177 patent/WO2015102014A1/en active Application Filing
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6100941A (en) * | 1998-07-28 | 2000-08-08 | U.S. Philips Corporation | Apparatus and method for locating a commercial disposed within a video data stream |
US20090153577A1 (en) * | 2007-12-15 | 2009-06-18 | Electronics And Telecommunications Research Institute | Method and system for texturing of 3d model in 2d environment |
US20090219281A1 (en) * | 2008-02-28 | 2009-09-03 | Jerome Maillot | Reducing seam artifacts when applying a texture to a three-dimensional (3d) model |
US20100122286A1 (en) * | 2008-11-07 | 2010-05-13 | At&T Intellectual Property I, L.P. | System and method for dynamically constructing personalized contextual video programs |
US8525846B1 (en) * | 2011-11-11 | 2013-09-03 | Google Inc. | Shader and material layers for rendering three-dimensional (3D) object data models |
WO2013174671A1 (en) * | 2012-05-22 | 2013-11-28 | Telefonica, S.A. | A method and a system for generating a realistic 3d reconstruction model for an object or being |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019096686A3 (de) * | 2017-11-14 | 2019-07-11 | Zimmermann Holding-Ag | Method for presenting a three-dimensional object and associated computer program product, digital storage medium and computer system |
CN111344744A (zh) * | 2017-11-14 | 2020-06-26 | 齐默曼控股公司 | Method for presenting a three-dimensional object and associated computer program product, digital storage medium and computer system |
US11189080B2 (en) | 2017-11-14 | 2021-11-30 | Zimmermann Holding-Aktiengesellschaft | Method for presenting a three-dimensional object and an associated computer program product, digital storage medium and a computer system |
US11132845B2 (en) | 2019-05-22 | 2021-09-28 | Microsoft Technology Licensing, Llc | Real-world object recognition for computing device |
CN113240811A (zh) * | 2021-04-28 | 2021-08-10 | 深圳羽迹科技有限公司 | Three-dimensional face model creation method, system, device and storage medium |
WO2023220778A1 (en) * | 2022-05-17 | 2023-11-23 | Breville Pty Limited | Decorated kitchen appliance |
Also Published As
Publication number | Publication date |
---|---|
IN2013DE03840A | 2015-07-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10748324B2 (en) | Generating stylized-stroke images from source images utilizing style-transfer-neural networks with non-photorealistic-rendering | |
US8947422B2 (en) | Gradient modeling toolkit for sculpting stereoscopic depth models for converting 2-D images into stereoscopic 3-D images | |
US8351689B2 (en) | Apparatus and method for removing ink lines and segmentation of color regions of a 2-D image for converting 2-D images into stereoscopic 3-D images | |
EP2306745B1 (en) | Method and system for creating depth and volume in a 2-D planar image | |
CN102592275B (zh) | 虚拟视点绘制方法 | |
CN103426163B (zh) | 用于渲染受影响的像素的系统和方法 | |
US10122992B2 (en) | Parallax based monoscopic rendering | |
CN104854426B (zh) | 用于为了三维图像生成来标记图像的系统和方法 | |
Rematas et al. | Image-based synthesis and re-synthesis of viewpoints guided by 3d models | |
JP2010154422A (ja) | 画像処理装置 | |
JP2013235537A (ja) | 画像作成装置、画像作成プログラム、及び記録媒体 | |
CN114049464B (zh) | 一种三维模型的重建方法及设备 | |
CN103065360A (zh) | 一种发型效果图的生成方法及系统 | |
KR102000486B1 (ko) | 다중 텍스처를 이용한 3d 프린팅 모델 생성 장치 및 방법 | |
CN106204746B (zh) | 一种可实现3d模型实时上色的增强现实系统 | |
US9956717B2 (en) | Mapping for three dimensional surfaces | |
US10497165B2 (en) | Texturing of 3D-models of real objects using photographs and/or video sequences to facilitate user-controlled interactions with the models | |
WO2015102014A1 (en) | Texturing of 3d-models using photographs and/or video for use in user-controlled interactions implementation | |
Carbon et al. | Da Vinci's Mona Lisa entering the next dimension | |
KR101454780B1 (ko) | 3d 모델의 텍스쳐 생성 방법 및 장치 | |
Seo et al. | Interactive painterly rendering with artistic error correction | |
KR20080041978A (ko) | 화가의 페인팅 절차에 기반한 회화적 렌더링 방법 및 전시시스템 | |
Guggeri et al. | Shape reconstruction from raw point clouds using depth carving | |
CN118735979B (zh) | 一种用于艺术作品的虚拟图像生成方法及系统 | |
Kawai et al. | Automatic generation of photorealistic 3d inner mouth animation only from frontal images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 14876705 Country of ref document: EP Kind code of ref document: A1 |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 14876705 Country of ref document: EP Kind code of ref document: A1 |