US20190080521A1 - Consumer product advertising image generation system and method - Google Patents

Consumer product advertising image generation system and method

Info

Publication number
US20190080521A1
Authority
US
United States
Prior art keywords
dimensional
consumer product
computer
objects
dimensional images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US15/912,397
Inventor
Damian M. Fulmer
Drew Loveridge
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to US15/912,397
Publication of US20190080521A1
Status: Abandoned

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 - Manipulating 3D models or images for computer graphics
    • G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 - 3D [Three Dimensional] image rendering
    • G06T 15/005 - General purpose rendering architectures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 - 3D [Three Dimensional] image rendering
    • G06T 15/10 - Geometric effects
    • G06T 15/20 - Perspective computation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 - 3D [Three Dimensional] image rendering
    • G06T 15/50 - Lighting effects
    • G06T 15/80 - Shading
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2200/00 - Indexing scheme for image data processing or generation, in general
    • G06T 2200/24 - Indexing scheme for image data processing or generation, in general involving graphical user interfaces [GUIs]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2219/00 - Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T 2219/20 - Indexing scheme for editing of 3D models
    • G06T 2219/2024 - Style variation

Definitions

  • Since the advent of photography, consumer product advertising images have been generated through the use of film and camera. Modern times have seen consumer product images advance to being captured with digital cameras. Whether the consumer product advertising images are captured digitally or by film, hours and hours of time are spent positioning a product with just the right lighting to produce an advertising-worthy photo that can be used in magazines or on billboards. In some cases consumer product prototypes are built, at significant expense, to provide a product that is more easily adapted to photography. Each time a variation on the consumer product occurs, new images must be captured at additional expense.
  • the present disclosure is directed to image generation and, more specifically, to consumer product advertising image generation.
  • One aspect of the present disclosure is directed towards a method of generating a three-dimensional consumer product image.
  • the method includes: distilling one or more objects of a three-dimensional model of a consumer product to corresponding three-dimensional geometric representations of the one or more objects; applying a visual attribute to the one or more three-dimensional geometric representations of the objects; rendering a plurality of two dimensional images from the three-dimensional geometric representations of the objects with applied visual attributes; defining one or more stacking orders of at least a portion of the plurality of two dimensional images; and delivering to a user the two-dimensional images and the defined stacking orders in a form that is importable into a computer-executable image editing program, the computer-executable image editing program capable of enabling user selection of one or more of the defined stacking orders and capable of producing a three-dimensional image of at least a portion of the consumer product based on the user-selected one or more defined stacking orders and the imported two-dimensional images.
  • Another aspect of the present disclosure is directed towards a method of generating a three-dimensional consumer product image.
  • the method includes: distilling one or more objects of a three-dimensional model of a consumer product to corresponding three-dimensional geometric representations of the one or more objects; applying original layer assignment, original layer hierarchy and original shading associated with the three-dimensional model of the consumer product to the one or more three-dimensional geometric representations of the objects; rendering a first plurality of two-dimensional images based on the three-dimensional geometric representations of the one or more objects with original layer assignment, hierarchy and shading; applying a visual attribute to the first plurality of two-dimensional images; rendering a second plurality of two-dimensional images from the first plurality of two-dimensional images with applied visual attributes; defining one or more stacking orders of at least a portion of the second plurality of two-dimensional images; and delivering to a user the second plurality of two-dimensional images and the defined stacking orders in a form that is importable into a computer-executable image editing program, the computer-executable image editing program capable of enabling user selection of one or more of the defined stacking orders and capable of producing a three-dimensional image of at least a portion of the consumer product based on the user-selected one or more defined stacking orders and the imported two-dimensional images.
  • Another aspect of the present disclosure is directed towards a system comprising a computing device that includes a processing device and a computer readable storage device storing data instructions that when executed by the processing device cause the computing device to: distill one or more objects of a three-dimensional model of a consumer product to corresponding three-dimensional geometric representations of the one or more objects; apply a visual attribute to the one or more three-dimensional geometric representations of the objects; render a plurality of two dimensional images from the three-dimensional geometric representations of the objects with applied visual attributes; and define one or more stacking orders of at least a portion of the plurality of two dimensional images.
  • the rendered two-dimensional images and the defined one or more stacking orders are rendered and defined, respectively, in a form that is importable into a computer-executable image editing program.
  • FIG. 1 is a schematic flowchart illustrating a system and method for generating a consumer product advertising image.
  • FIG. 2 illustrates an example of a computing device that can be used to implement aspects of the present disclosure.
  • FIG. 3 is a schematic of the computing device and program modules that can be used to implement aspects of the present disclosure.
  • FIG. 4 is a flowchart of a method of generating a consumer product advertising image.
  • FIG. 5 is a schematic of a spreadsheet representation of metadata.
  • FIG. 6 is a flowchart illustrating the operation of a data cleaning module.
  • FIG. 7 is a flowchart illustrating the operation of the scene state module.
  • FIG. 8 is an example of a graphical user interface usable by the rendering module.
  • FIG. 9 is a schematic of a configuration module.
  • FIG. 10 is an example of an “asset creation” graphical user interface.
  • FIG. 11 is an example of an “asset deletion” graphical user interface.
  • FIG. 12 is an example of a “frame stacking order” graphical user interface.
  • FIG. 13 is an example of a “trim builds assignment” graphical user interface.
  • FIG. 14 is an example of an “attribute assignments” graphical user interface.
  • FIG. 15 is an example of a “trim build assignment” graphical user interface.
  • FIG. 16 is an example of a “finished parts assignment” graphical user interface.
  • FIG. 17 is an example of a “trims” graphical user interface.
  • FIG. 18 is an exemplary sample of XML instructions for generating a consumer product advertising image.
  • FIG. 19 is a schematic of the computing device and program modules that can be used to implement aspects of the present disclosure.
  • FIG. 20 is a flowchart of a method of generating a consumer product advertising image.
  • the consumer product advertising image generation system and method of the present disclosure reduces the time and expense associated with physically capturing advertising images of a consumer product by, instead, creating a virtual consumer product in a virtual setting that can be positioned, colorized, texturized, lit, etc., and rendered to a two-dimensional or three-dimensional advertising image, or a sequence of such images that are substantially equivalent to physically captured images.
  • the method 100 is configured to be performed by one or more computing devices 102 executing instructions that are stored in memory.
  • the method 100 in the most general of terms, comprises establishing a computer-aided design (CAD) model (e.g. a computerized two-dimensional or three-dimensional model) of a consumer product, S 104 , such as a vehicle 105 or detergent bottle 107 , etc.
  • the method 100 then comprises deconstructing the model to usable geometric components, S 106 , and applying visual attributes (e.g., texture 109 , color 111 , lens effects and camera angles 113 , lighting 115 , etc.) to those components, S 108 .
  • the method 100 further comprises reconstructing the components and attributes into a virtual consumer product advertising model that can be rendered to a two-dimensional or three-dimensional consumer product advertising image (or sequence of images), S 110 , such as the exemplary consumer product advertising images of vehicle 117 or detergent bottle 119 , as illustrated in FIG. 1 .
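  • By way of illustration only, the overall flow of the method 100 can be sketched in a few lines of Python; the data types and function names below are hypothetical stand-ins, not part of the disclosure.

```python
from dataclasses import dataclass, field

# Hypothetical, simplified data model for the method 100 pipeline (S104-S110).

@dataclass
class Component:
    name: str
    attributes: dict = field(default_factory=dict)

def deconstruct(cad_objects):
    """S106: distill the CAD model to usable geometric components."""
    return [Component(name) for name in cad_objects]

def apply_attributes(component, **attrs):
    """S108: apply visual attributes (texture, color, lighting, etc.)."""
    component.attributes.update(attrs)

def render(components, frames):
    """S110: stand-in for rendering the reconstructed advertising model."""
    return [f"frame {i}: {[c.name for c in components]}" for i in range(frames)]

# S104: the CAD model, reduced here to a few named components.
parts = deconstruct(["body", "wheels", "interior"])
for part in parts:
    apply_attributes(part, color="midnight_blue", texture="gloss", lighting="studio")
print(render(parts, frames=4))
```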
  • FIG. 2 illustrates an exemplary architecture of a computing device 200 that can be used to implement aspects of the present disclosure.
  • the computing device 200 can be in any suitable form including a microcontroller, a microprocessor, a desktop computer, a laptop computer, a tablet computer, a mobile computing device (e.g., smart phone, iPod™, iPad™, or other mobile device), or other devices configured to process digital instructions. It is understood that the exemplary computing device 200 may be configured specific to its intended use, incorporating various peripherals and programming instructions, as described herein, to achieve desired operations.
  • the computing device 200 is an example of programmable electronics, which may include one or more such computing devices, and when multiple computing devices are included, such computing devices can be coupled together with a suitable data communication network so as to collectively perform the various functions, methods and operations disclosed herein.
  • the computing device 200 includes at least one processing device and at least one computer readable storage device.
  • the processing device operates to execute data instructions stored in the computer readable storage device to perform various operations, methods, or functions described herein.
  • the computing device 200 includes at least one processing device 202 , such as a central processing unit (CPU), as well as a system memory 204 and a system bus 206 .
  • the system bus 206 couples various system components including the system memory 204 to the processing device 202 .
  • the system bus 206 is one of any number of types of bus structures including a memory bus, a peripheral bus, and a local bus using any variety of bus architectures.
  • the system memory 204 includes program memory 208 and random access memory (RAM) 210 .
  • the computing device 200 also includes a secondary storage device 214 , such as a hard disk drive or file server, for storing digital data.
  • the secondary storage device 214 is connected to the system bus 206 by a secondary storage interface (INTF) 216 .
  • the secondary storage device 214 and its associated computer readable media provide nonvolatile storage of computer readable instructions (including application programs and program modules), data structures, and other data for the computing device 200 .
  • although the exemplary computing device 200 described herein employs a secondary storage device 214 , in some embodiments the secondary storage device is eliminated or its hard disk drive/file server configuration is replaced with an alternative form of computer readable storage media.
  • Alternative forms of computer readable storage media include, but are not limited to, magnetic cassettes, flash memory cards, digital video disks, Bernoulli cartridges, compact disc memories, digital versatile disk memories, and random access memories.
  • Some embodiments of the secondary storage devices 214 include non-transitory media.
  • the computer readable storage media can include local storage or cloud-based storage.
  • a number of program modules can be stored in the memory 204 , or the secondary storage device 214 . These program modules include an operating system 218 , one or more application programs 220 , other program modules 222 as described herein, and program data 224 .
  • the computing device 200 can utilize any suitable operating system, such as Microsoft Windows™, Google Chrome™, Apple OS, and any other operating system suitable for a computing device.
  • the computing device 200 typically includes at least some form of computer readable media, e.g., computer readable media within the memory 204 or secondary storage device 214 .
  • Computer readable media includes any available media that can be accessed by the computing device 200 .
  • computer readable media includes computer readable storage media and computer readable communication media.
  • Computer readable storage media includes volatile and nonvolatile, removable and non-removable media implemented in any device configured to store information such as computer readable instructions, data structures, program modules or other data.
  • Computer readable storage media includes, but is not limited to, random access memory, read only memory, electrically erasable programmable read only memory, flash memory or other memory technology, compact disc read only memory, digital versatile disks or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by the computing device 200 .
  • a user provides inputs to the computing device 200 through one or more input devices 226 .
  • input devices 226 include a keyboard 228 , a mouse 230 , a scanner 232 , and a touch sensor 234 (such as a touchpad or touch sensitive display).
  • Other embodiments include other input devices 226 necessary for fulfilling the operations described herein.
  • the input devices 226 are incorporated into the computing device 200 itself.
  • the input devices 226 are external to the computing device 200 and are connected to the processing device 202 through an input interface 236 that is coupled to the system bus 206 .
  • the input devices 226 can be connected by any number of input/output interfaces, such as parallel port, serial port, game port, universal serial bus, or a custom interface.
  • Wireless communication between input devices and the input interface 236 is possible as well, and includes infrared, BLUETOOTH® wireless technology, 802.11/a/b/g/n, cellular or other radio frequency communication systems in some possible embodiments.
  • the computing device 200 incorporates within or is operably coupled to a display device 238 .
  • the display device 238 can comprise a monitor, a liquid crystal display device, a projector, or a touch sensitive display device.
  • the display device 238 is also connected to the system bus 206 via an output interface 240 , such as a display controller.
  • the computing device 200 can control via output interface 240 various other peripherals such as a printing device 242 or speaker (not shown).
  • the output interface 240 can comprise any number of input/output interfaces such as those described in the paragraph above.
  • the computing device 200 further includes a network interface 244 that includes a network communication device to communicate digital data across a data communication network 246 .
  • An example of the network interface 244 includes a wireless transceiver for transmitting digital data over a wireless network.
  • the wireless transceiver is configured to work with one or more wireless communication technologies such as cellular communication, Wi-Fi communication (such as that conforming to one of the IEEE 802.11 family of communication protocols), Bluetooth® communication, and the like.
  • the network interface 244 is an Ethernet network interface device having an Ethernet port for receiving an Ethernet cable to transmit and receive digital data across the Ethernet cable to a network 246 such as a local area network or the Internet.
  • FIG. 3 provides an exemplary schematic of the computing device 200 and the program modules, e.g., program modules 222 , which are stored in memory and utilized by the computing device 200 to perform the various functions and operations of the system and method for generating a consumer product advertising image.
  • the computing device 200 generally includes a computer graphics module 300 and a database 350 accessible to the computer graphics module 300 .
  • the computer graphics module 300 further includes a data cleaning module 304 , a scene states module 306 , a rendering module 308 , a compositing module 310 and a configuration module 312 .
  • Each of the modules 304 , 306 , 308 , 310 and 312 will be described in further detail below in relation to the flowchart of FIG. 4 .
  • FIG. 4 is a flowchart illustrating an example embodiment of the method 100 for generating a consumer product advertising image; this embodiment of the method 100 will hereafter be identified as method 400 .
  • the method 400 will be described with reference to a consumer product comprising an automobile. However, it is to be understood that the method may be applied to any consumer product for which two-dimensional or three-dimensional digital drawings, e.g., CAD drawings, exist or may be created.
  • the method 400 begins with an order placed by a client to receive one or more advertising-worthy images of an automobile, S 402 .
  • the client then supplies original input data (e.g., two-dimensional or three-dimensional digital data such as that provided by CAD files) for the automobile from which the advertising images are to be generated and the input data is stored to the database 350 , S 404 .
  • the input data is received into the computing device 200 via the computer graphics module 300 (e.g., Autodesk Maya or other computer graphics program such as 3ds Max, AOI, Blender, Cinema 4D, Clara.io, MODO, Shade 3D, Softimage, ZBrush, etc.) and stored to the database 350 in a manner such that all original history, modifiers, constraints, layers, and connections, etc., relating to the digital data are maintained.
  • the input data generally includes all parts to be manufactured for the automobile as well as assembly data for assembling the components of the automobile into various final builds.
  • the final build of the automobile, or the “end item model” (EIM) of the automobile represents all selected options for the automobile. Such options may include, but are not limited to, the type of engine, the type of tires, the type of stereo, the type of upholstery, the type of interior trim, the type of exterior trim, the color of the automobile, the color of the upholstery, the type of suspension, the type of brakes, etc.
  • the input data additionally includes metadata that associates the various components/parts of the automobile with the associated possible builds; the originally supplied metadata is also stored in the database 350 .
  • FIG. 5 provides a simplified spreadsheet 500 representation of the metadata illustrating the connection between parts and various EIM builds.
  • “Set of Parts A” is to be used in the number one, two, five and eight EIM builds of the automobile
  • “Set of Parts B” is to be used in the number two, three, and seven EIM builds of the automobile, etc.
  • the actual organizational structure is much more complicated as thousands of parts, thousands of parts relating to other parts, and thousands of EIM builds are possible.
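  • A minimal sketch of this metadata, using the FIG. 5 example values (and Python purely for illustration), shows how part sets map to EIM builds and how a build's parts list can be recovered:

```python
# FIG. 5 metadata as a mapping; real data spans thousands of parts and builds.
parts_to_builds = {
    "Set of Parts A": {1, 2, 5, 8},
    "Set of Parts B": {2, 3, 7},
}

def parts_for_build(build_number):
    """Invert the mapping: which part sets does a given EIM build use?"""
    return [p for p, builds in parts_to_builds.items() if build_number in builds]

print(parts_for_build(2))  # ['Set of Parts A', 'Set of Parts B']
```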
  • the data cleaning module 304 is employed to clean the input data and store the cleaned data to the database 350 , S 406 .
  • the data cleaning module 304 generally operates in accordance with the flowchart of FIG. 6 .
  • First, all three-dimensional objects of the original input data are exported to an Alembic™ cache, S 602 , where all history, modifiers, constraints, and connections are removed from the input data, S 604 .
  • Alembic™ is a computer graphics interchange file format that can be used with numerous computer graphics applications. Through removal of the noted items, the exported objects are distilled into a non-procedural, application-independent set of geometric results. The cleaned three-dimensional objects in the Alembic cache are then imported back into Maya, S 606 . Maya has operated to maintain the display layers, shader assignment and hierarchy (e.g., order of assembly) of the original input data, which enables the imported objects to be placed back into their original layers and hierarchy, and to be re-associated with their original shaders, S 608 .
  • the data cleaning module 304 enables the removal of any connections, attributes, or irregular settings that may have been previously set within the starting geometry. Accordingly, instead of attempting to check every possible combination of automobile components that might cause a problem when preparing the advertising images, the data cleaning module 304 essentially forces the connections, attributes or irregular settings to be reset/eliminated, yet allows the cleaned, imported objects to maintain their connections to, for example, their display layers, their place in the geometrical hierarchy, and their shaders.
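  • For readers familiar with Maya's Python interface, the export/import round trip of FIG. 6 might look roughly like the sketch below; the node path, file location, and exact flags are assumptions, as the disclosure does not specify commands.

```python
import maya.cmds as cmds

cmds.loadPlugin("AbcExport", quiet=True)
cmds.loadPlugin("AbcImport", quiet=True)

# S602/S604: exporting to an Alembic cache bakes the geometry into a
# non-procedural, application-independent form, shedding history,
# modifiers, constraints, and connections.
cmds.AbcExport(j="-frameRange 1 1 -root |vehicle_grp -file /tmp/vehicle_clean.abc")

# Remove the original procedural geometry; Maya retains the display-layer
# and shader bookkeeping used for re-association.
cmds.delete("|vehicle_grp")

# S606: import the cleaned geometry back; it can then be placed into its
# original layers and hierarchy and re-associated with its shaders (S608).
cmds.AbcImport("/tmp/vehicle_clean.abc", mode="import")
```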
  • the cleaned three-dimensional objects, which have been re-associated with their layers, hierarchies and shaders, are combined into one or more scene states, via the scene state module 306 , within Maya.
  • the scene state module 306 generally operates in accordance with the flowchart of FIG. 7 . As shown, the scene state module 306 is used to recombine the cleaned, three-dimensional objects into a virtual two-dimensional or three-dimensional model that will be used to produce a two-dimensional image representative of each component or each combination of components necessary to produce the client-requested advertising images.
  • a first attempt at establishing the virtual two- or three-dimensional model utilizes the original layering and hierarchy sequence, as well as shaders, which were re-associated with the objects, S 702 . If the original layering and hierarchy sequence produces an acceptable virtual two- or three-dimensional model for a component or combination of components that will be used in the advertising image, S 704 [YES], the layering and hierarchy sequence, as well as the associated shaders, are related and stored as a scene state within the database 350 , S 708 .
  • the scene state module 306 enables a user to create a scene state of a virtual model wherein the brake is moved apart/separated from, yet still proximate to, the rim.
  • a separate, but similar scene state can be additionally created for each type of brake and each type of rim available on the automobile.
  • the scene state module 306 can be utilized to perform various other functions in relation to the scene states.
  • the scene state module 306 can be used to apply settings, e.g., treating a layer as car paint, to the objects within the scene state, S 710 .
  • the scene state module 306 can also be used to enable the referencing of one scene state into another scene state, S 712 .
  • the scene state module 306 can be used to create a new scene state from existing scene states (e.g., duplicating, deleting or modifying a scene state), S 714 .
  • the scene state module 306 can be used to modify/update a scene state, e.g., change geometry, change shading, etc., whereby all modifications/updates are carried through to all other scene states that reference the modified/updated scene state. Accordingly, the scene state module 306 enables the use of a single file, e.g., a scene state, where changes can be made and carried through to every other file referencing the single file. All referencing between scene states is maintained in the database 350 .
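  • In Maya terms, referencing one scene state into another ( S 712 ) can be done with the file-referencing mechanism sketched below; the path and namespace are placeholders. Because a reference points at the source file, edits saved there flow into every scene state that references it, which is the single-file update behavior described above.

```python
import maya.cmds as cmds

# Reference a saved scene state (e.g., a brake/rim detail) into the current one.
cmds.file("/projects/scene_states/brake_rim_detail.ma",
          reference=True, namespace="brakeRim")

# List the references carried by the current scene state.
for ref_node in cmds.ls(type="reference"):
    if "sharedReferenceNode" in ref_node:   # skip Maya's internal node
        continue
    print(ref_node, cmds.referenceQuery(ref_node, filename=True))
```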
  • Once the desired scene states have been created, the scene states are then rendered in layers, by the rendering module 308 (e.g., the rendering function of Maya), to produce two-dimensional images of the components/objects with their desired layering, hierarchy and shading.
  • the render layers within Maya are then submitted to a graphics processing unit (e.g., a rendering farm) utilizing a render submission tool that is configured to interface with Maya.
  • the render submission tool enables the manual selection of desired layers.
  • FIG. 8 provides an exemplary screen shot of the render submission tool 800 illustrating selected layers 802 for submission.
  • the result of rendering the layers is a preliminary set of layers in two-dimensional image format that can be combined, e.g., stacked, to create component images, S 410 ; the preliminary set of layers is stored to the database 350 .
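  • As a rough local-machine analogue of the farm submission (the render commands here are illustrative, not the patent's tooling), each selected render layer can be made current and rendered to its own two-dimensional image:

```python
import maya.cmds as cmds

# Enumerate render layers, skipping Maya's default layer.
selected_layers = [l for l in cmds.ls(type="renderLayer")
                   if l != "defaultRenderLayer"]

for layer in selected_layers:
    cmds.editRenderLayerGlobals(currentRenderLayer=layer)
    image_path = cmds.render()   # a render-farm submission replaces this local call
    print(layer, "->", image_path)
```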
  • compositing of the preliminary layers is performed, S 412 , using the compositing module 310 of the computer graphics module 300 ; compositing data is stored to the database 350 .
  • the image attributes applied through use of the compositing module 310 by the artist can include but are not limited to lens effects for sharpening or blurring an image, textures (e.g., wood, plastic, leather, etc.), camera angles (e.g., providing a 360 deg. view or less than a 360 deg. view), lighting, contrasting, etc.
  • the compositing module 310 tracks the attributes that are applied to the preliminary layers as additional layers to be rendered.
  • the compositing module 310 tracks these different sets of attributes as rendering layers so that multiple images of a component, each incorporating a different set of attributes, may be generated with a second and final layer rendering, which is described further below.
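  • This bookkeeping can be pictured as one render layer per (component, attribute set) pair, so that the final rendering pass yields multiple images of a component, one per attribute set; the Python below is a hypothetical illustration of the idea, not the module's actual data model.

```python
# Each component may carry several attribute sets applied by the artist.
attribute_sets = {
    "body": [
        {"texture": "gloss", "lighting": "studio", "lens": "sharp"},
        {"texture": "matte", "lighting": "sunset", "lens": "soft_blur"},
    ],
}

# One render layer per (component, attribute set) pair.
render_layers = [(component, i, attrs)
                 for component, sets in attribute_sets.items()
                 for i, attrs in enumerate(sets)]

for component, i, attrs in render_layers:
    print(f"render layer {component}_{i}: {attrs}")
```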
  • the rendering module 308 (e.g., rendering function of Maya) is utilized by the computing device 200 to perform a second and final rendering of the layers, including the attribute layers, in a two-dimensional image format, S 414 ; the final layers are stored to the database 350 .
  • the final layers, along with the configuration module 312 (described further below), can be delivered to the client to construct the three-dimensional advertising images requested.
  • the configuration module 312 is used to generate three-dimensional advertising images from the rendered, two-dimensional image format, final layers, S 416 ; the configurations, e.g., defined stacking orders of the rendered two-dimensional images are stored to the database 350 .
  • a schematic of the configuration module 312 is provided in FIG. 9 . As shown, the configuration module 312 includes an “asset creation” module 902 , an “asset deletion” module 904 , a “config images” module 906 , a “config parts” module 908 , and a “trims” module 910 .
  • the “asset creation” module 902 enables a user to establish a part name for each of the final component images as well as designate whether the named part can inherit attributes from other named parts and/or other named images (named images are described in further detail below).
  • the “asset creation” module 902 also enables a user to filter the type of attributes that the named part can inherit so that the part can inherit only those attributes needed.
  • the “asset creation” module 902 further enables a user to designate whether the named part has a finish (e.g., textured surface, reflective surface, etc.) and/or is painted/colored.
  • An example of an “asset creation” graphical user interface 1000 is illustrated in FIG. 10 .
  • the interface 1000 includes a part name field 1002 , a check box 1004 for designating the ability to inherit attributes from a named part, and an inheritance named part field 1006 for selecting one or a plurality of named parts from which the present named part may inherit attributes.
  • the interface 1000 further includes a check box 1008 for designating the ability to inherit attributes from a named image and an inheritance named image field 1010 for selecting one or a plurality of named images from which the present named part may inherit attributes.
  • a check box 1012 for disabling all inheritance is also provided.
  • Check boxes 1014 and 1016 provide the option for designating whether the currently named part has a finish and/or is painted/colored, respectively.
  • the interface 1000 further includes a filter panel 1018 providing a plurality of options for filtering inheritance attributes including a body filter 1020 , an engine filter 1022 , an axle filter 1024 , a drive filter 1026 , a grade filter 1028 , a transmission filter 1030 , a model filter 1032 , an intake filter 1034 , a zone filter 1036 , an equipment filter 1038 , and various option filters 1040 .
  • the interface 1000 also includes a “create” button 1042 whose selection establishes the named part in the database 350 .
  • the “asset deletion” module 904 enables a user to delete a named part if it is no longer being used.
  • An example of an “asset deletion” graphical user interface 1100 is provided in FIG. 11 . As shown, the interface 1100 provides a listing panel 1102 that lists the named parts held within the database. A user may select one or more of the parts listed and click on the remove selected part button 1104 to remove the named part from the database 350 .
  • the “config images” module 906 enables a user to define an image by naming the image and identifying which of the named parts will be included in the named image. It further allows a user to define in which order the named parts should be stacked or layered to create the named image.
  • An example of a “config images” graphical user interface 1200 is provided in FIG. 12 . As illustrated, the “config images” graphical user interface 1200 is presented in a spreadsheet configuration where each row 1202 represents an image layer (e.g., a final rendered layer) in a configurable stack of images, each column 1204 represents a frame number of the final asset, and each cell 1206 defines the stacking order of the layer image, which varies for each frame, with zero comprising the bottom layer.
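  • The row/column/cell scheme of FIG. 12 can be captured in a small table and applied by stacking the layer images bottom-to-top; the layer names and file paths below are invented for illustration (the sketch uses the Pillow imaging library and assumes matching-size RGBA files exist).

```python
from PIL import Image

# layer -> {frame number: stacking position}, 0 being the bottom layer.
stacking = {
    "body":   {0: 0, 1: 0},
    "wheels": {0: 1, 1: 2},
    "trim":   {0: 2, 1: 1},
}

def compose_frame(frame):
    """Stack the layer images for one frame in the defined order."""
    ordered = sorted(stacking, key=lambda layer: stacking[layer][frame])
    base = Image.open(f"{ordered[0]}_{frame}.png").convert("RGBA")
    for layer in ordered[1:]:
        overlay = Image.open(f"{layer}_{frame}.png").convert("RGBA")
        base = Image.alpha_composite(base, overlay)
    return base

compose_frame(0).save("frame_0.png")
```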
  • the named image can be assigned to different trim builds; the named image can comprise a single frame or a sequence of frames.
  • An example of a “Trim Build Assignment” graphical user interface 1300 is provided in FIG. 13 . More specifically, interface 1300 allows the user to insert any new image layers into existing EIM builds. In the context of the automobile example, the new image layers are generally combinations of parts that cannot exist on their own, such as brakes and rims. Further, the named image can be assigned various attributes.
  • An example of an “Attribute Assignment” graphical user interface 1400 is provided in FIG. 14 . In this example, the named parts are identified as having a paint attribute, see left column 1402 , or not having a paint attribute, see right column 1404 .
  • the “config parts” module 908 enables a user to view the vehicle trims (e.g., base model or higher grade model of the vehicle) to which a named part is assigned and further enables a user to assign a named part to a trim or to remove a named part from a trim.
  • the “config parts” module 908 enables a user to view, add or delete a named part to a painted parts assignment, and/or to a finished part assignment.
  • An example “config parts” graphical user interface 1500 for trim build assignment is illustrated in FIG. 15 . As shown, interface 1500 includes a named parts panel 1502 that lists the various parts available as well as a trim build panel 1504 that lists the various trims to which a named part may be assigned.
  • the assignment or unassignment of a named part occurs by selecting one or more named parts from the panel 1502 , then selecting one or more of the trim builds from panel 1504 and clicking the assign button 1506 or unassign button 1508 , respectively.
  • Similar user interfaces are provided for the painted parts assignment (see FIG. 14 described herein) and the finished (e.g., texture) parts assignment.
  • An example “finished parts assignment” graphical user interface 1600 is illustrated in FIG. 16 and includes columns 1602 , 1604 .
  • the “trims” module 910 provides the user with an overview of the various trim builds including the assigned named parts, colors and trims per build.
  • An example of a “trims” graphical user interface 1700 is provided in FIG. 17 .
  • the interface 1700 includes a trim level list panel 1702 (the trim level list is a list of all the EIMs available for a specific automobile) that provides a filtering option 1704 .
  • the interface 1700 further includes a trim builds panel 1706 that lists the various trim builds along with the option for filtering the trims builds via filter panel 1708 .
  • the filter panel 1708 includes a body filter, an engine filter, an axle filter, a drive filter, a grade filter, a transmission filter, a model filter, an intake filter, a zone filter, an equipment filter, and various option filters.
  • the interface further includes a right panel 1710 that includes: a “config parts” option 1712 to provide a listing of the config images (see description of “config images” module 906 herein); an “exterior color” option 1714 where logic for applying an exterior automobile paint color to various parts can be added to/deleted from the database 350 through “assign” and “unassign” buttons 1716 , 1718 , respectively; and an “interior color option” 1722 where logic for applying interior automobile colors to various parts can be added to/deleted from the database 350 through “assign” and “unassign” buttons 1716 , 1718 , respectively.
  • the various modules described above work in conjunction with each other in the configuration module 312 to establish a framework from which three-dimensional advertising images of a consumer product can be generated.
  • the framework establishes instructions, via the “config images” module 906 (see also FIG. 12 ), that direct which of the named parts are to be selected and used to generate a specific three-dimensional advertising image and further comprises instructions that direct the assembly sequence of the selected named parts (e.g., defined stacking orders of the rendered two-dimensional images).
  • the framework is configured to work in cooperation with the database 350 wherein the rendered, two-dimensional image format, final layers have been stored in a specific directory/sub-directory configuration as directed by the framework.
  • the framework is converted to an XML file that can be used by other programs.
  • An example of a portion of such an XML file is illustrated in FIG. 18 where a specific automobile trim 1800 is identified, the options 1802 to be applied to the two-dimensional image layers that generate the image of the specific automobile trim are identified, and the actual two-dimensional image layers 1804 to generate the three-dimensional advertising image of the specific automobile trim are identified.
  • the framework can be imported into an image-editing program, e.g., Photoshop (or Affinity Photo, GIMP, Sketch, Pixelmator, Pixlr, Acorn, Corel Paintshop Pro, Paint.net, Sumopaint, etc.) along with the two-dimensional images to generate the three-dimensional advertising images with the XML file essentially operating as the recipe and the two-dimensional image layers as the ingredients.
  • a Photoshop assembler tool is built using JavaScript.
  • the Photoshop assembler tool provides the options that are available in the XML file for the user to choose which configurations and which options they want built.
  • the Photoshop assembler tool then builds the image based on the choices of the user.
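  • The actual assembler is a Photoshop JavaScript tool and the precise XML schema of FIG. 18 is not reproduced here, so the element and attribute names below are assumed; this Python stand-in only demonstrates the “XML as recipe, two-dimensional layers as ingredients” mechanics.

```python
import xml.etree.ElementTree as ET
from PIL import Image

tree = ET.parse("framework.xml")             # hypothetical framework file
trim = tree.getroot().find("trim")           # a specific automobile trim
layer_files = [layer.get("file") for layer in trim.iter("layer")]

# Stack the listed two-dimensional layers bottom-to-top per the framework.
image = Image.open(layer_files[0]).convert("RGBA")
for path in layer_files[1:]:
    image = Image.alpha_composite(image, Image.open(path).convert("RGBA"))
image.save("assembled_trim.png")
```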
  • the system and method of generating a consumer product advertising image provides an advertising client with a set of two-dimensional images and a framework (e.g., XML file as the configuration module) for generating a 360 degree three-dimensional advertising image of any and all possible options and trim builds of the automobile.
  • the set of two-dimensional images and framework use significantly less memory than would be required by three-dimensional images that represent all options and trims of an automobile.
  • the set of two-dimensional images and framework provide a fast-acting system enabling a user to select automobile options and to be presented with a corresponding image at almost a real-time speed.
  • Another example embodiment of the method 100 for generating a consumer product advertising image is configured to be implemented with the computing device 200 described above.
  • this method will be described with reference to a consumer product comprising a detergent bottle.
  • the method may be applied to any consumer product for which two-dimensional or three-dimensional digital drawings, e.g., CAD drawings, exist or may be created.
  • the computing device 200 utilizes a database 1950 in combination with a computer graphics module 1900 , which further includes a data cleaning module 1904 , a model generation module 1906 , and a rendering module 1908 .
  • the noted program modules are stored in memory and utilized by the computing device 200 to perform the various functions and operations for generating a consumer product advertising image in accordance with the flowchart of FIG. 20 .
  • the database 1950 comprises a Shotgun™ database available from Shotgun Software; however, other databases may be used.
  • the computer graphics module comprises Autodesk Maya (or just “Maya”) available from Autodesk Inc.; however, other computer graphics programs may be used.
  • a flowchart illustrates the method 2000 for generating a consumer product advertising image.
  • the method 2000 begins with an order for one or more consumer product advertising images from a client, S 2002 .
  • the order is placed by the client by using a consumer product identifier such as a global trade item number (GTIN) or a universal product code (UPC).
  • the consumer product identifier is entered into the database 1950 to determine if the consumer product identifier is linked to any existing CAD models of the consumer product and/or label art that is to be applied to the consumer product and, if so, the linked input data (e.g., the CAD models and label art) is retrieved for use in generating the requested images.
  • the consumer product identifier may be linked to more than one CAD model and/or more than one option for label art.
  • a CAD model may exist for the cap of the bottle, another for the spout of the bottle, and still another for the actual bottle itself while label art may exist for the front of the bottle and for the back of the bottle.
  • If the consumer product identifier is not linked to the needed items, the client is requested to supply the items and/or a digital model of the consumer product is made through traditional modeling techniques.
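  • A toy version of this identifier lookup (identifiers, file names, and schema all invented for the example) might look like:

```python
catalog = {
    "00012345678905": {   # made-up GTIN for a detergent bottle
        "cad_models": ["bottle.stp", "cap.stp", "spout.stp"],
        "label_art": ["front_label.ai", "back_label.ai"],
    },
}

def lookup(identifier):
    """Return linked CAD models and label art for a GTIN/UPC, if any."""
    entry = catalog.get(identifier)
    if entry is None:
        # Not linked: request the items from the client, or build a digital
        # model through traditional modeling techniques (per the text).
        return None
    return entry["cad_models"], entry["label_art"]

print(lookup("00012345678905"))
```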
  • the data cleaning module 1904 is employed to clean the input data and store the cleaned input data to the database 1950 , S 2006 .
  • the data cleaning module 1904 generally operates in accordance with the flowchart of FIG. 6 where the computer graphics module exports the three-dimensional objects within the input data to an Alembic cache, S 602 , where all history, modifiers, constraints, and connections are removed from the input data and the three-dimensional objects within the data are distilled into a non-procedural, application independent set of geometric results, S 604 .
  • the geometric results are imported from the Alembic cache, S 606 and re-associated with original layers, hierarchy and shaders of the input data, S 608 .
  • the data cleaning module 1904 enables the removal of any connections, attributes, or irregular settings that may have been previously set within the starting geometry and ensures that the cleaned model of the consumer product can accept the designated label art.
  • the cleaned and re-associated input data is used by the model generation module 1906 to generate a three-dimensional model of the consumer product, S 2008 .
  • visual attributes are created and applied to the three-dimensional model; the visual attributes are stored to the database in relation to the consumer product identifier, S 2010 .
  • the visual attributes may comprise, but are not limited to, texture, color, lens effects and camera angles, lighting, etc.
  • the two-dimensional image layers can be assembled, or stacked, in the appropriate order, S 2010 (e.g., by using the configuration module 312 to define a stacking order of two-dimensional images) and provided to the rendering module 1908 of the computer graphics module 1900 to render, S 2014 , the desired two-dimensional consumer product advertising images, or sequence of images, that may be provided for use by the client.
  • the two-dimensional images and defined stacking orders can be imported into an image-editing program, such as Photoshop or a similar program, to generate a three-dimensional image.
  • the ability to generate three-dimensional images from two-dimensional images and a framework provides the benefits of reduced memory usage (three-dimensional images taking significantly more space than two-dimensional images) and faster image production (three-dimensional images take longer to load than two-dimensional images). Accordingly, three-dimensional images can be created from two-dimensional images in what essentially amounts to real-time, or on-the-fly, image creation. Furthermore, each three-dimensional consumer product image produced through use of the two-dimensional images and framework is substantially equivalent to a physically captured photograph of the consumer product, and a sequence of three-dimensional images produced through use of the two-dimensional images and framework is substantially equivalent to a physically captured film of the consumer product.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A method of generating a three-dimensional consumer product image includes: distilling one or more objects of a three-dimensional model of a consumer product to corresponding three-dimensional geometric representations; applying a visual attribute to the geometric representations of the objects; rendering a plurality of two dimensional images from the geometric representations of the objects with applied visual attributes; defining one or more stacking orders of at least a portion of the plurality of two dimensional images; and delivering to a user the two-dimensional images and the defined stacking orders in a form that is importable into a computer-executable image editing program, the computer-executable image editing program capable of enabling user selection of one or more of the defined stacking orders and capable of producing a three-dimensional image of at least a portion of the consumer product based on the user-selected one or more defined stacking orders and the imported two-dimensional images.

Description

    CROSS-REFERENCE TO RELATED APPLICATION(S)
  • This application is a continuation of U.S. patent application Ser. No. 15/352,344, filed 15 Nov. 2016, titled CONSUMER PRODUCT ADVERTISING IMAGE GENERATION SYSTEM AND METHOD, the disclosure of which is hereby incorporated by reference in its entirety.
  • BACKGROUND
  • Since the advent of photography, consumer product advertising images have been generated through the use of film and camera. Modern times have seen consumer product images advance to being captured with digital cameras. Whether the consumer product advertising images are captured digitally or by film, hours and hours of time are spent positioning a product with just the right lighting to produce an advertising-worthy photo that can be used in magazines or on billboards. In some cases consumer product prototypes are built, at significant expense, to provide a product that is more easily adapted to photography. Each time a variation on the consumer product occurs, new images must be captured at additional expense.
  • The same holds true for live-action consumer product commercials. For example, significant time and expense is incurred in dressing a set or identifying a remote location for a shoot. Additional time and expense is incurred in setting up cameras and lighting, hiring actors, positioning and/or modifying the consumer product to achieve a perfect image. Accordingly, a path for advertising-worthy photographs without the drawbacks described above would be beneficial to the advertising industry.
  • SUMMARY
  • The present disclosure is directed to image generation and, more specifically, to consumer product advertising image generation.
  • One aspect of the present disclosure is directed towards a method of generating a three-dimensional consumer product image. The method includes: distilling one or more objects of a three-dimensional model of a consumer product to corresponding three-dimensional geometric representations of the one or more objects; applying a visual attribute to the one or more three-dimensional geometric representations of the objects; rendering a plurality of two dimensional images from the three-dimensional geometric representations of the objects with applied visual attributes; defining one or more stacking orders of at least a portion of the plurality of two dimensional images; and delivering to a user the two-dimensional images and the defined stacking orders in a form that is importable into a computer-executable image editing program, the computer-executable image editing program capable of enabling user selection of one or more of the defined stacking orders and capable of producing a three-dimensional image of at least a portion of the consumer product based on the user-selected one or more defined stacking orders and the imported two-dimensional images.
  • Another aspect of the present disclosure is directed towards a method of generating a three-dimensional consumer product image. In this instance the method includes: distilling one or more objects of a three-dimensional model of a consumer product to corresponding three-dimensional geometric representations of the one or more objects; applying original layer assignment, original layer hierarchy and original shading associated with the three-dimensional model of the consumer product to the one or more three-dimensional geometric representations of the objects; rendering a first plurality of two-dimensional images based on the three-dimensional geometric representations of the one or more objects with original layer assignment, hierarchy and shading; applying a visual attribute to the first plurality of two-dimensional images; rendering a second plurality of two dimensional images from the first plurality of two dimensional images with applied visual attributes; defining one or more stacking orders of at least a portion of the second plurality of two dimensional images; and delivering to a user the second plurality of two-dimensional images and the defined stacking orders in a form that is importable into a computer-executable image editing program, the computer-executable image editing program capable of enabling user selection of one or more of the defined stacking orders and capable of producing a three-dimensional image of at least a portion of the consumer product based on the user-selected one or more defined stacking orders and the imported two-dimensional images.
  • Another aspect of the present disclosure is directed towards a system comprising a computing device that includes a processing device and a computer readable storage device storing data instructions that when executed by the processing device cause the computing device to: distill one or more objects of a three-dimensional model of a consumer product to corresponding three-dimensional geometric representations of the one or more objects; apply a visual attribute to the one or more three-dimensional geometric representations of the objects; render a plurality of two dimensional images from the three-dimensional geometric representations of the objects with applied visual attributes; and define one or more stacking orders of at least a portion of the plurality of two dimensional images. The rendered two-dimensional images and the defined one or more stacking orders are rendered and defined, respectively, in a form that is importable into a computer-executable image editing program.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a schematic flowchart illustrating a system and method for generating a consumer product advertising image.
  • FIG. 2 illustrates an example of a computing device that can be used to implement aspects of the present disclosure.
  • FIG. 3 is a schematic of the computing device and program modules that can be used to implement aspects of the present disclosure.
  • FIG. 4 is a flowchart of a method of generating a consumer product advertising image.
  • FIG. 5 is a schematic of a spreadsheet representation of metadata.
  • FIG. 6 is a flowchart illustrating the operation of a data cleaning module.
  • FIG. 7 is a flowchart illustrating the operation of the scene state module.
  • FIG. 8 is an example of a graphical user interface usable by the rendering module.
  • FIG. 9 is a schematic of a configuration module.
  • FIG. 10 is an example of an “asset creation” graphical user interface.
  • FIG. 11 is an example of an “asset deletion” graphical user interface.
  • FIG. 12 is an example of a “frame stacking order” graphical user interface.
  • FIG. 13 is an example of a “trim builds assignment” graphical user interface.
  • FIG. 14 is an example of an “attribute assignments” graphical user interface.
  • FIG. 15 is an example of a “trim build assignment” graphical user interface.
  • FIG. 16 is an example of a “finished parts assignment” graphical user interface.
  • FIG. 17 is an example of a “trims” graphical user interface.
  • FIG. 18 is an exemplary sample of XML instructions for generating a consumer product advertising image.
  • FIG. 19 is a schematic of the computing device and program modules that can be used to implement aspects of the present disclosure.
  • FIG. 20 is a flowchart of a method of generating a consumer product advertising image.
  • DETAILED DESCRIPTION
  • Various embodiments will be described in detail with reference to the drawings, wherein like reference numerals represent like parts and assemblies throughout the several views. Reference to various embodiments does not limit the scope of the claims attached hereto. Additionally, any examples set forth in this specification are not intended to be limiting and merely set forth some of the many possible embodiments for the appended claims.
  • The consumer product advertising image generation system and method of the present disclosure reduces the time and expense associated with physically capturing advertising images of a consumer product by, instead, creating a virtual consumer product in a virtual setting that can be positioned, colorized, texturized, lit, etc., and rendered to a two-dimensional or three-dimensional advertising image, or a sequence of such images that are substantially equivalent to physically captured images.
  • A system and method of generating a consumer product advertising image is depicted in the schematic flowchart of FIG. 1. The method 100 is configured to be performed by one or more computing devices 102 executing instructions that are stored in memory. The method 100, in the most general of terms, comprises establishing a computer-aided design (CAD) model (e.g. a computerized two-dimensional or three-dimensional model) of a consumer product, S104, such as a vehicle 105 or detergent bottle 107, etc. The method 100 then comprises deconstructing the model to usable geometric components, S106, and applying visual attributes (e.g., texture 109, color 111, lens effects and camera angles 113, lighting 115, etc.) to those components, S108. Once visual attributes have been applied, the method 100 further comprises reconstructing the components and attributes into a virtual consumer product advertising model that can be rendered to a two-dimensional or three-dimensional consumer product advertising image (or sequence of images), S110, such as the exemplary consumer product advertising images of vehicle 117 or detergent bottle 119, as illustrated in FIG. 1.
  • FIG. 2 illustrates an exemplary architecture of a computing device 200 that can be used to implement aspects of the present disclosure. The computing device 200 can be in any suitable form including a microcontroller, a microprocessor, a desktop computer, a laptop computer, a tablet computer, a mobile computing device (e.g., smart phone, iPod™, iPad™, or other mobile device), or other devices configured to process digital instructions. It is understood that the exemplary computing device 200 may be configured specific to its intended use, incorporating various peripherals and programming instructions, as described herein, to achieve desired operations. Further, it is understood that the computing device 200 is an example of programmable electronics, which may include one or more such computing devices, and when multiple computing devices are included, such computing devices can be coupled together with a suitable data communication network so as to collectively perform the various functions, methods and operations disclosed herein.
  • In general terms, the computing device 200 includes at least one processing device and at least one computer readable storage device. The processing device operates to execute data instructions stored in the computer readable storage device to perform various operations, methods, or functions described herein.
  • In more particular terms and with reference to FIG. 2, the computing device 200 includes at least one processing device 202, such as a central processing unit (CPU), as well as a system memory 204 and a system bus 206. The system bus 206 couples various system components including the system memory 204 to the processing device 202. The system bus 206 is any of a number of types of bus structures, including a memory bus, a peripheral bus, and a local bus, using any of a variety of bus architectures.
  • The system memory 204 includes program memory 208 and random access memory (RAM) 210. A basic input/output system (BIOS) 212 containing the basic routines that act to transfer information within the computing device 200, such as during start up, is typically stored in the program memory 208. In some embodiments, the computing device 200 also includes a secondary storage device 214, such as a hard disk drive or file server, for storing digital data. The secondary storage device 214 is connected to the system bus 206 by a secondary storage interface (INTF) 216. The secondary storage device 214, and its associated computer readable media, provides nonvolatile storage of computer readable instructions (including application programs and program modules), data structures, and other data for the computing device 200.
  • Although the exemplary computing device 200 described herein employs a secondary storage device 214, in some embodiments the secondary storage device is eliminated or its hard disk drive/file server configuration is replaced with an alternative form of computer readable storage media. Alternative forms of computer readable storage media include, but are not limited to, magnetic cassettes, flash memory cards, digital video disks, Bernoulli cartridges, compact disc memories, digital versatile disk memories, and random access memories. Some embodiments of the secondary storage device 214 include non-transitory media. Further, the computer readable storage media can include local storage or cloud-based storage.
  • A number of program modules can be stored in the memory 204, or the secondary storage device 214. These program modules include an operating system 218, one or more application programs 220, other program modules 222 as described herein, and program data 224. The computing device 200 can utilize any suitable operating system, such as Microsoft Windows™, Google Chrome™, Apple OS, and any other operating system suitable for a computing device.
  • The computing device 200 typically includes at least some form of computer readable media, e.g., computer readable media within the memory 204 or secondary storage device 214. Computer readable media includes any available media that can be accessed by the computing device 200. By way of example, computer readable media includes computer readable storage media and computer readable communication media.
  • Computer readable storage media includes volatile and nonvolatile, removable and non-removable media implemented in any device configured to store information such as computer readable instructions, data structures, program modules or other data. Computer readable storage media includes, but is not limited to, random access memory, read only memory, electrically erasable programmable read only memory, flash memory or other memory technology, compact disc read only memory, digital versatile disks or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by the computing device 200.
  • In some embodiments, a user provides inputs to the computing device 200 through one or more input devices 226. Examples of input devices 226 include a keyboard 228, a mouse 230, a scanner 232, and a touch sensor 234 (such as a touchpad or touch sensitive display). Other embodiments include other input devices 226 necessary for fulfilling the operations described herein. In some embodiments, the input devices 226 are incorporated into the computing device 200 itself. In some embodiments, the input devices 226 are external to the computing device 200 and are connected to the processing device 202 through an input interface 236 that is coupled to the system bus 206. The input devices 226 can be connected by any number of input/output interfaces, such as parallel port, serial port, game port, universal serial bus, or a custom interface. Wireless communication between input devices and the input interface 236 is possible as well, and includes infrared, BLUETOOTH® wireless technology, 802.11/a/b/g/n, cellular or other radio frequency communication systems in some possible embodiments.
  • In the example embodiment of FIG. 2, the computing device 200 incorporates within or is operably coupled to a display device 238. Examples of the display device 238 include a monitor, a liquid crystal display device, a projector, or a touch sensitive display device. The display device 238 is also connected to the system bus 206 via an output interface 240, such as a display controller. In addition to the display device 238, the computing device 200 can control via output interface 240 various other peripherals such as a printing device 242 or speaker (not shown). As with the input interface 236, the output interface 240 can comprise any number of input/output interfaces such as those described in the paragraph above.
  • The computing device 200 further includes a network interface 244 that includes a network communication device to communicate digital data across a data communication network 246. An example of the network interface 244 includes a wireless transceiver for transmitting digital data over a wireless network. The wireless transceiver is configured to work with one or more wireless communication technologies such as cellular communication, Wi-Fi communication (such as that conforming to one of the IEEE 802.11 family of communication protocols), Bluetooth® communication, and the like. In other embodiments, the network interface 244 is an Ethernet network interface device having an Ethernet port for receiving an Ethernet cable to transmit and receive digital data across the Ethernet cable to a network 246 such as a local area network or the Internet.
  • FIG. 3 provides an exemplary schematic of the computing device 200 and the program modules, e.g., program modules 222, which are stored in memory and utilized by the computing device 200 to perform the various functions and operations of the system and method for generating a consumer product advertising image. As shown, the computing device 200 generally includes a computer graphics module 300 and a database 350 accessible to the computer graphics module 300. The computer graphics module 300 further includes a data cleaning module 304, a scene state module 306, a rendering module 308, a compositing module 310, and a configuration module 312. Each of the modules 304, 306, 308, 310 and 312 will be described in further detail below in relation to the flowchart of FIG. 4.
  • FIG. 4 is a flowchart illustrating an example embodiment of the method 100 for generating a consumer product advertising image; this embodiment of the method 100 will hereafter be identified as method 400. For convenience of explanation and understanding, the method 400 will be described with reference to a consumer product comprising an automobile. However, it is to be understood that the method may be applied to any consumer product for which two-dimensional or three-dimensional digital drawings, e.g., CAD drawings, exist or may be created.
  • As shown in FIG. 4, the method 400 begins with an order placed by a client to receive one or more advertising-worthy images of an automobile, S402. The client then supplies original input data (e.g., two-dimensional or three-dimensional digital data such as that provided by CAD files) for the automobile from which the advertising images are to be generated, and the input data is stored to the database 350, S404. The input data is received into the computing device 200 via the computer graphics module 300 (e.g., Autodesk Maya or other computer graphics program such as 3ds Max, AOI, Blender, Cinema 4D, Clara.io, MODO, Shade 3D, Softimage, ZBrush, etc.) and stored to the database 350 in a manner such that all original history, modifiers, constraints, layers, connections, etc., relating to the digital data are maintained.
  • The input data generally includes all parts to be manufactured for the automobile as well as assembly data for assembling the components of the automobile into various final builds. The final build of the automobile, or the “end item model” (EIM) of the automobile, represents all selected options for the automobile. Such options may include, but are not limited to, the type of engine, the type of tires, the type of stereo, the type of upholstery, the type of interior trim, the type of exterior trim, the color of the automobile, the color of the upholstery, the type of suspension, the type of brakes, etc. With thousands to millions of builds possible for the automobile, the input data additionally includes metadata that associates the various components/parts of the automobile with the associated possible builds; the originally supplied metadata is also stored in the database 350.
  • FIG. 5 provides a simplified spreadsheet 500 representation of the metadata illustrating the connection between parts and various EIM builds. As shown, “Set of Parts A” is to be used in the number one, two, five and eight EIM builds of the automobile, while “Set of Parts B” is to be used in the number two, three, and seven EIM builds of the automobile, etc. Of course, the actual organizational structure is much more complicated, as thousands of parts, thousands of parts relating to other parts, and thousands of EIM builds are possible.
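  • The parts-to-builds metadata of FIG. 5 can be pictured as a simple mapping. The Python sketch below uses the two part sets named above (the build numbers match the example, but the encoding itself is invented for illustration):

    # Toy encoding of the FIG. 5 metadata: each set of parts maps to the
    # EIM build numbers in which it is used.
    parts_to_builds = {
        "Set of Parts A": {1, 2, 5, 8},
        "Set of Parts B": {2, 3, 7},
    }

    def parts_for_build(build_number):
        """Invert the mapping: all part sets used by a given EIM build."""
        return [parts for parts, builds in parts_to_builds.items()
                if build_number in builds]

    print(parts_for_build(2))  # ['Set of Parts A', 'Set of Parts B']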
  • Referring back to FIG. 4, with the input data and metadata stored as one or more Maya files within the database 350, the data cleaning module 304 is employed to clean the input data and store the cleaned data to the database 350, S406. The data cleaning module 304 generally operates in accordance with the flowchart of FIG. 6. First, all three-dimensional objects of the original input data are exported to an ALEMBIC™ cache, S602, where all history, modifiers, constraints, and connections are removed from the input data, S604.
  • ALEMBIC™ is a computer graphics interchange file format that can be used with numerous computer graphics applications. Through removal of the noted items, the exported objects are distilled into a non-procedural, application-independent set of geometric results. The cleaned three-dimensional objects in the Alembic cache are then imported back into Maya, S606. Maya has operated to maintain the display layers, shader assignments and hierarchy (e.g., order of assembly) of the original input data, which enables the imported objects to be placed back into their original layers and hierarchy, and to be re-associated with their original shaders, S608.
  • In using the above-described process, the data cleaning module 304 enables the removal of any connections, attributes, or irregular settings that may have been previously set within the starting geometry. Accordingly, instead of attempting to check every possible combination of automobile components that might cause a problem when preparing the advertising images, the data cleaning module 304 essentially forces the connections, attributes or irregular settings to be reset/eliminated, yet allows the cleaned, imported objects to maintain their connections to, for example, their display layers, their place in the geometrical hierarchy, and their shaders.
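  • A minimal sketch of the FIG. 6 round-trip, written against Maya's Python command layer, follows; it assumes the Alembic plug-ins are available, it runs only inside a Maya session, and the root node and file path are hypothetical:

    import maya.cmds as cmds

    # Load the Alembic export/import plug-ins if not already loaded.
    cmds.loadPlugin("AbcExport", quiet=True)
    cmds.loadPlugin("AbcImport", quiet=True)

    # S602/S604: export the scene geometry to an Alembic cache, which bakes
    # the objects to plain geometry (history, modifiers, constraints, and
    # connections are dropped in the process).
    cmds.AbcExport(j="-frameRange 1 1 -root |vehicle_grp -file /tmp/cleaned.abc")

    # S606: re-import the baked geometry; because the Maya session has
    # retained display layers, shader assignments and hierarchy, the cleaned
    # objects can be re-associated with them (S608).
    cmds.AbcImport("/tmp/cleaned.abc", mode="import")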
  • Referring once again to FIG. 4, and as indicated above, once the input data has been cleaned, the cleaned three-dimensional objects, which have been re-associated with their layers, hierarchies and shaders, are combined into one or more scene states, via the scene state module 306, within Maya. The scene state module 306 generally operates in accordance with the flowchart of FIG. 7. As shown, the scene state module 306 is used to recombine the cleaned three-dimensional objects into a virtual two-dimensional or three-dimensional model that will be used to produce a two-dimensional image representative of each component or each combination of components necessary to produce the client-requested advertising images. A first attempt at establishing the virtual two- or three-dimensional model utilizes the original layering and hierarchy sequence, as well as the shaders, which were re-associated with the objects, S702. If the original layering and hierarchy sequence produces an acceptable virtual two- or three-dimensional model for a component or combination of components that will be used in the advertising image, S704 [YES], the layering and hierarchy sequence, as well as the associated shaders, are related and stored as a scene state within the database 350, S708. If the original layering and hierarchy sequence produces an unacceptable virtual two- or three-dimensional model for a component or combination of components that will be used in the advertising image, S704 [NO], a new layering and/or hierarchy sequence (and/or shader association) is established to produce an acceptable virtual two- or three-dimensional model, S706, and the new layering and hierarchy sequence, as well as the associated shaders, are related and stored as a scene state within the database 350, S708.
  • By way of illustrative example, consider a situation where an advertising image of each type of brake available on an automobile is requested in a configuration where each brake is separated from the wheel rims of the automobile. When the automobile is viewed externally, one sees the wheel and the rim, with the brake residing behind the rim. The scene state module 306 enables a user to create a scene state of a virtual model wherein the brake is moved apart/separated from, yet still proximate to, the rim. A separate, but similar, scene state can additionally be created for each type of brake and each type of rim available on the automobile.
  • With the scene states established within the database 350, the scene state module 306 can be utilized to perform various other functions in relation to the scene states. For example, the scene state module 306 can be used to apply settings, e.g., treating a layer as car paint, to the objects within the scene state, S710. The scene state module 306 can also be used to enable the referencing of one scene state into another scene state, S712. Further, the scene state module 306 can be used to create a new scene state from existing scene states (e.g., by duplicating, deleting or modifying a scene state), S714. And the scene state module 306 can be used to modify/update a scene state, e.g., change geometry, change shading, etc., whereby all modifications/updates are carried through to all other scene states that reference the modified/updated scene state. Accordingly, the scene state module 306 enables the use of a single file, e.g., a scene state, where changes can be made and carried through to every other file referencing the single file. All referencing between scene states is maintained in the database 350.
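  • The following toy Python record suggests how scene states and their references (S710-S714) could be organized so that a change to one scene state is carried through to every scene state that references it; the field names are illustrative, not the schema actually used by the scene state module 306:

    from dataclasses import dataclass, field

    @dataclass
    class SceneState:
        name: str
        layering: list
        shaders: dict
        references: list = field(default_factory=list)  # other SceneState objects

        def resolved_layering(self):
            """Referenced scene states are read live, so an update to a
            referenced state is carried through to every referencing state."""
            layers = []
            for ref in self.references:
                layers.extend(ref.resolved_layering())
            return layers + self.layering

    brake = SceneState("brake_type_1", ["brake_disc", "caliper"], {"caliper": "metal"})
    exploded = SceneState("brake_exploded", ["rim_offset"], {}, references=[brake])
    brake.layering.append("brake_line")  # modify the referenced state...
    print(exploded.resolved_layering())  # ...the change appears in the referencing state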
  • Referring once again to FIG. 4, with all desired component, or combination of component, scene states (e.g., two-dimensional images of components/objects with desired layering, hierarchy and shading) established, the scene states are then rendered in layers by the rendering module 308 (e.g., the rendering function of Maya). The render layers within Maya are then submitted to a graphics processing unit (e.g., a rendering farm) utilizing a render submission tool that is configured to interface with Maya. The render submission tool enables the manual selection of desired layers.
  • FIG. 8 provides an exemplary screen shot of the render submission tool 800 illustrating selected layers 802 for submission. The result of rendering the layers is a preliminary set of layers in two-dimensional image format that can be combined, e.g., stacked, to create component images, S410; the preliminary set of layers is stored to the database 350.
  • Referring once again to FIG. 4, upon the completion of the rendering of the preliminary set of layers, compositing of the preliminary layers is performed, S412, using the compositing module 310 of the computer graphics module 300; compositing data is stored to the database 350. The compositing module 310 (e.g., compositing function of Maya) operates to combine layers to display desired component images to an artist wherein the artist can apply image attributes beyond the layering, hierarchy and shaders that were applied in the preliminary layers. The image attributes applied through use of the compositing module 310 by the artist can include but are not limited to lens effects for sharpening or blurring an image, textures (e.g., wood, plastic, leather, etc.), camera angles (e.g., providing a 360 deg. view or less than a 360 deg. view), lighting, contrasting, etc. The compositing module 310 tracks the attributes that are applied to the preliminary layers as additional layers to be rendered.
  • In some instances different sets of attributes need to be applied to the same preliminary layers (e.g., an attribute of a different lens effect needs to be applied to the component image). The compositing module 310 tracks these different sets of attributes as rendering layers so that multiple images of a component, each incorporating a different set of attributes, may be generated with a second and final layer rendering, which is described further below.
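  • As a rough stand-in for this compositing step (the disclosure uses the compositing function of Maya, not Pillow), the Python sketch below applies two different lens-effect attributes to the same preliminary layer and tracks each result as a separate output for the final rendering pass; the image contents and file paths are invented:

    from PIL import Image, ImageFilter

    # Stand-in for one preliminary layer in two-dimensional image format.
    base = Image.new("RGBA", (640, 480), (200, 200, 200, 255))

    # Two different "lens effect" attribute sets applied to the same layer,
    # each tracked as its own variant for the second, final render.
    variants = {
        "sharp": base.filter(ImageFilter.SHARPEN),
        "soft":  base.filter(ImageFilter.GaussianBlur(radius=3)),
    }
    for look, img in variants.items():
        img.save(f"/tmp/component_{look}.png")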
  • Referring once again to FIG. 4, once compositing of the two-dimensional preliminary layers has been performed, the rendering module 308 (e.g., rendering function of Maya) is utilized by the computing device 200 to perform a second and final rendering of the layers, including the attribute layers, in a two-dimensional image format, S414; the final layers are stored to the database 350. The final layers, along with the configuration module 312 (described further below), can be delivered to the client to construct the three-dimensional advertising images requested.
  • Returning once again to FIG. 4, the configuration module 312 is used to generate three-dimensional advertising images from the rendered, two-dimensional image format, final layers, S416; the configurations, e.g., the defined stacking orders of the rendered two-dimensional images, are stored to the database 350. A schematic of the configuration module 312 is provided in FIG. 9. As shown, the configuration module 312 includes an “asset creation” module 902, an “asset deletion” module 904, a “config images” module 906, a “config parts” module 908, and a “trims” module 910.
  • The “asset creation” module 902 enables a user to establish a part name for each of the final component images as well as designate whether the named part can inherit attributes from other named parts and/or other named images (named images are described in further detail below). The “asset creation” module 902 also enables a user to filter the type of attributes that the named part can inherit so that the part can inherit only those attributes needed. The “asset creation” module 902 further enables a user to designate whether the named part has a finish (e.g., textured surface, reflective surface, etc.) and/or is painted/colored.
  • An example of an “asset creation” graphical user interface 1000 is illustrated in FIG. 10. As shown, the interface 1000 includes a part name field 1002, a check box 1004 for designating the ability to inherit attributes from a named part, and an inheritance named part field 1006 for selecting one or a plurality of named parts from which the present named part may inherit attributes. The interface 1000 further includes a check box 1008 for designating the ability to inherit attributes from a named image and an inheritance named image field 1010 for selecting one or a plurality of named images from which the present named part may inherit attributes. A check box 1012 for disabling all inheritance is also provided. Check boxes 1014 and 1016 provide the option for designating whether the currently named part has a finish and/or is painted/colored, respectively. The interface 1000 further includes a filter panel 1018 providing a plurality of options for filtering inheritance attributes including a body filter 1020, an engine filter 1022, an axle filter 1024, a drive filter 1026, a grade filter 1028, a transmission filter 1030, a model filter 1032, an intake filter 1034, a zone filter 1036, an equipment filter 1038, and various option filters 1040. The interface 1000 also includes a “create” button 1042 whose selection establishes the named part in the database 350.
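  • A toy record behind the FIG. 10 form might look like the following Python sketch; the field names and example values are invented for illustration and are not the system's actual schema:

    from dataclasses import dataclass, field

    @dataclass
    class NamedPart:
        name: str
        inherit_from_parts: list = field(default_factory=list)    # field 1006
        inherit_from_images: list = field(default_factory=list)   # field 1010
        inheritance_filters: set = field(default_factory=set)     # panel 1018
        has_finish: bool = False                                   # check box 1014
        is_painted: bool = False                                   # check box 1016

    part = NamedPart("front_caliper",
                     inherit_from_parts=["brake_assembly"],
                     inheritance_filters={"grade", "model"},
                     is_painted=True)
    print(part)  # "create" (button 1042) would persist this record to the database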
  • The “asset deletion” module 904, see FIG. 9, enables a user to delete a named part if it is no longer being used. An example of an “asset deletion” graphical user interface 1100 is provided in FIG. 11. As shown, the interface 1100 provides a listing panel 1102 that lists the named parts held within the database. A user may select one or more of the parts listed and click on the remove selected part button 1104 to remove the named part from the database 350.
  • The “config images” module 906 enables a user to define an image by naming the image and identifying which of the named parts will be included in the named image. It further allows a user to define in which order the named parts should be stacked or layered to create the named image. An example of a “config images” graphical user interface 1200 is provided in FIG. 12. As illustrated, the “config images” graphical user interface 1200 is presented in a spreadsheet configuration where each row 1202 represents an image layer (e.g., a final rendered layer) in a configurable stack of images, each column 1204 represents a frame number of the final asset, and each cell 1206 defines the stacking order of the layer image, which varies for each frame, with zero comprising the bottom layer.
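  • The FIG. 12 spreadsheet reduces to a per-frame stacking-order lookup. In the Python sketch below, stack_order[layer][frame] gives a layer's position in the stack for a frame (zero comprising the bottom layer); the layer names and values are invented:

    stack_order = {
        "body":  {0: 0, 1: 0},
        "rim":   {0: 1, 1: 2},
        "brake": {0: 2, 1: 1},
    }

    def layers_for_frame(frame):
        """Return the layer names for one frame, bottom-to-top."""
        return sorted(stack_order, key=lambda layer: stack_order[layer][frame])

    print(layers_for_frame(1))  # ['body', 'brake', 'rim']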
  • With the named image created and saved to the database 350, the named image can be assigned to different trim builds; the named image can comprise a single frame or a sequence of frames. An example of a “Trim Build Assignment” graphical user interface 1300 is provided in FIG. 13. More specifically, interface 1300 allows the user to insert any new image layers into existing EIM builds. In the context of the automobile example, the new image layers are generally combinations of parts that cannot exist on their own, such as brakes and rims. Further, the named image can be assigned various attributes. An example of an “Attribute Assignment” graphical user interface 1400 is provided in FIG. 14. In this example, the named parts are identified as having a paint attribute, see left column 1402, or not having a paint attribute, see right column 1404.
  • The “config parts” module 908, see FIG. 9, enables a user to view the vehicle trims (e.g., base model or higher grade model of the vehicle) to which a named part is assigned and further enables a user to assign a named part to a trim or to remove a named part from a trim. Similarly, the “config parts” module 908 enables a user to view, add or delete a named part to a painted parts assignment and/or to a finished parts assignment. An example “config parts” graphical user interface 1500 for trim build assignment is illustrated in FIG. 15. As shown, interface 1500 includes a named parts panel 1502 that lists the various parts available as well as a trim build panel 1504 that lists the various trims to which a named part may be assigned. The assignment or unassignment of a named part occurs by selecting one or more named parts from the panel 1502, then selecting one or more of the trim builds from panel 1504 and clicking the assign button 1506 or unassign button 1508, respectively. Similar user interfaces are provided for the painted parts assignment (see FIG. 14 described herein) and the finished (e.g., textured) parts assignment. An example “finished parts assignment” graphical user interface 1600 is illustrated in FIG. 16 and includes columns 1602, 1604.
  • The “trims” module 910, see FIG. 9, provides the user with an overview of the various trim builds including the assigned named parts, colors and trims per build. An example of a “trims” graphical user interface 1700 is provided in FIG. 17. As shown, the interface 1700 includes a trim level list panel 1702 (a trim level list is a list of all the EIMs available for a specific automobile) that provides a filtering option 1704. The interface 1700 further includes a trim builds panel 1706 that lists the various trim builds along with the option for filtering the trim builds via filter panel 1708. The filter panel 1708 includes a body filter, an engine filter, an axle filter, a drive filter, a grade filter, a transmission filter, a model filter, an intake filter, a zone filter, an equipment filter, and various option filters. The interface further includes a right panel 1710 that includes: a “config parts” option 1712 to provide a listing of the config images (see description of the “config images” module 906 herein); an “exterior color” option 1714 where logic for applying an exterior automobile paint color to various parts can be added to/deleted from the database 350 through “assign” and “unassign” buttons 1716, 1718, respectively; and an “interior color” option 1722 where logic for applying interior automobile colors to various parts can be added to/deleted from the database 350 through the “assign” and “unassign” buttons 1716, 1718, respectively.
  • The various modules described above (e.g., the “asset creation” module 902, the “asset deletion” module 904, the “config images” module 906, the “config parts” module 908, and the “trims” module 910) work in conjunction with each other in the configuration module 312 to establish a framework from which three-dimensional advertising images of a consumer product can be generated. The framework establishes instructions, via the “config images” module 906 (see also FIG. 12), that direct which of the named parts are to be selected and used to generate a specific three-dimensional advertising image and further comprises instructions that direct the assembly sequence of the selected named parts (e.g., the defined stacking orders of the rendered two-dimensional images). The framework is configured to work in cooperation with the database 350, wherein the rendered, two-dimensional image format, final layers have been stored in a specific directory/sub-directory configuration as directed by the framework. In one example embodiment, the framework is converted to an XML file that can be used by other programs. An example of a portion of such an XML file is illustrated in FIG. 18, where a specific automobile trim 1800 is identified, the options 1802 to be applied to the two-dimensional image layers that generate the image of the specific automobile trim are identified, and the actual two-dimensional image layers 1804 to generate the three-dimensional advertising image of the specific automobile trim are identified. In the format of an XML file, the framework can be imported into an image-editing program, e.g., Photoshop (or Affinity Photo, GIMP, Sketch, Pixelmator, Pixlr, Acorn, Corel Paintshop Pro, Paint.net, Sumopaint, etc.), along with the two-dimensional images to generate the three-dimensional advertising images, with the XML file essentially operating as the recipe and the two-dimensional image layers as the ingredients. In certain examples, a Photoshop assembler tool is built using JavaScript. The Photoshop assembler tool provides the options that are available in the XML file for the user to choose which configurations and which options they want built. The Photoshop assembler tool then builds the image based on the choices of the user.
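  • To make the recipe-and-ingredients idea concrete, the sketch below parses a miniature XML layout and alpha-composites the referenced two-dimensional layers in stacking order. It is a Python stand-in for the JavaScript Photoshop assembler tool described above; the element names, attribute names, and file paths are invented, and the layer files are assumed to exist at the same resolution with transparency:

    import xml.etree.ElementTree as ET
    from PIL import Image

    # Hypothetical "recipe": which layers make up one trim, and in what order.
    recipe = ET.fromstring("""
    <trim name="example_trim">
      <layer order="0" file="/tmp/body.png"/>
      <layer order="1" file="/tmp/rim.png"/>
    </trim>
    """)

    # "Ingredients": stack the rendered 2D layers bottom-to-top.
    layers = sorted(recipe.findall("layer"), key=lambda e: int(e.get("order")))
    result = Image.open(layers[0].get("file")).convert("RGBA")
    for layer in layers[1:]:
        overlay = Image.open(layer.get("file")).convert("RGBA")
        result = Image.alpha_composite(result, overlay)
    result.save("/tmp/example_trim.png")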
  • In view of the above, the system and method of generating a consumer product advertising image provides an advertising client with a set of two-dimensional images and a framework (e.g., an XML file as the configuration module) for generating a 360 degree three-dimensional advertising image of any and all possible options and trim builds of the automobile. Further, the set of two-dimensional images and framework use significantly less memory than would be required by three-dimensional images that represent all options and trims of an automobile. Moreover, the set of two-dimensional images and framework provide a fast-acting system enabling a user to select automobile options and to be presented with a corresponding image at almost real-time speed.
  • Another example embodiment of the method 100 for generating a consumer product advertising image is configured to be implemented with the computing device 200 described above. For convenience of explanation and understanding, this method will be described with reference to a consumer product comprising a detergent bottle. However, it is to be understood that the method may be applied to any consumer product for which two-dimensional or three-dimensional digital drawings, e.g., CAD drawings, exist or may be created.
  • Referring to FIG. 19, in this embodiment the computing device 200 utilizes a database 1950 in combination with a computer graphics module 1900, which further includes a data cleaning module 1904, a model generation module 1906, and a rendering module 1908. The noted program modules are stored in memory and utilized by the computing device 200 to perform the various functions and operations for generating a consumer product advertising image in accordance with the flowchart of FIG. 20. In one example embodiment, the database 1950 comprises a Shotgun™ database available from Shotgun Software; however, other databases may be used. In another example embodiment, the computer graphics module comprises Autodesk Maya (or just “Maya”) available from Autodesk Inc.; however, other computer graphics programs may be used.
  • Referring now to FIG. 20, wherein this embodiment of the method 100 is hereafter referred to as method 2000, a flowchart illustrates the method 2000 for generating a consumer product advertising image. As shown, the method 2000 begins with an order for one or more consumer product advertising images from a client, S2002. The order is placed by the client using a consumer product identifier such as a global trade item number (GTIN) or a universal product code (UPC). The consumer product identifier is entered into the database 1950 to determine if the consumer product identifier is linked to any existing CAD models of the consumer product and/or label art that is to be applied to the consumer product and, if so, the input data (e.g., digital drawings, metadata and label art) is imported from the database via the computer graphics module 1900, S2004. Note that the consumer product identifier may be linked to more than one CAD model and/or more than one option for label art. For example, in the instance of the detergent bottle, a CAD model may exist for the cap of the bottle, another for the spout of the bottle, and still another for the actual bottle itself, while label art may exist for the front of the bottle and for the back of the bottle.
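  • The identifier lookup of S2002/S2004 amounts to keying the database on the GTIN/UPC. In the toy Python sketch below, the GTIN value and record layout are invented for illustration:

    catalog = {
        "00012345678905": {  # hypothetical GTIN/UPC-style key
            "cad_models": ["bottle.step", "cap.step", "spout.step"],
            "label_art":  ["front_label.pdf", "back_label.pdf"],
        },
    }

    def import_input_data(gtin):
        """Return the CAD models and label art linked to a product identifier."""
        entry = catalog.get(gtin)
        if entry is None:
            # Mirrors the fallback described next: request the items from the
            # client or model the product through traditional techniques.
            raise LookupError("no CAD models/label art linked to this identifier")
        return entry

    print(import_input_data("00012345678905")["cad_models"])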
  • If the CAD models and/or label art for the consumer product are not present in the database 1950, the client is requested to supply the items and/or a digital model of the consumer product is made through traditional modeling techniques.
  • Continuing with the flowchart of FIG. 20, with the drawings, metadata, and label art imported into the computer graphics module 1900, the data cleaning module 1904 is employed to clean the input data and store the cleaned input data to the database 1950, S2006. The data cleaning module 1904 generally operates in accordance with the flowchart of FIG. 6, where the computer graphics module exports the three-dimensional objects within the input data to an Alembic cache, S602, all history, modifiers, constraints, and connections are removed from the input data, and the three-dimensional objects within the data are distilled into a non-procedural, application-independent set of geometric results, S604. The geometric results are imported from the Alembic cache, S606, and re-associated with the original layers, hierarchy and shaders of the input data, S608. In using the above-described process, the data cleaning module 1904 enables the removal of any connections, attributes, or irregular settings that may have been previously set within the starting geometry and ensures that the cleaned model of the consumer product can accept the designated label art.
  • Returning now to the flowchart of FIG. 20, the cleaned and re-associated input data is used by the model generation module 1906 to generate a three-dimensional model of the consumer product, S2008. Subsequently, visual attributes are created and applied to the three-dimensional model; the visual attributes are stored to the database in relation to the consumer product identifier, S2010. The visual attributes may comprise, but are not limited to, texture, color, lens effects and camera angles, lighting, etc.
  • With all elements (e.g., digital drawings for the three-dimensional virtual model of the consumer product, visual attributes for the virtual model, and label art for the virtual model) having been established for generating a virtual model of a desired consumer product advertising image, the two-dimensional image layers can be assembled, or stacked, in the appropriate order, S2012 (e.g., by using the configuration module 312 to define a stacking order of two-dimensional images) and provided to the rendering module 1908 of the computer graphics module 1900 to render, S2014, the desired two-dimensional consumer product advertising images, or sequence of images, that may be provided for use by the client.
  • As before, the two-dimensional images and defined stacking orders can be imported into an image-editing program, such as Photoshop or a similar program, to generate a three-dimensional advertising image.
  • For any consumer product, the ability to generate three-dimensional images from two-dimensional images and a framework (e.g., stacking orders), as opposed to simply providing three-dimensional images, provides the benefits of reduced memory usage (three-dimensional images taking significantly more space than two-dimensional images) and faster image production (three-dimensional images take longer to load than two-dimensional images). Accordingly, three-dimensional images can be created from two-dimensional images in what essentially amounts to real-time, or on-the-fly, image creation. Furthermore, each three-dimensional consumer product image produced through use of the two-dimensional images and framework is substantially equivalent to a physically captured photograph of the consumer product, and a sequence of three-dimensional images produced through use of the two-dimensional images and framework is substantially equivalent to a physically captured film of the consumer product.
  • The various embodiments described above are provided by way of illustration only and should not be construed to limit the claims attached hereto. Those skilled in the art will readily recognize various modifications and changes that may be made without following the example embodiments and applications illustrated and described herein, and without departing from the true spirit and scope of the following claims.

Claims (21)

What is claimed is:
1. A method comprising:
distilling one or more objects of a three-dimensional model of a consumer product to corresponding three-dimensional geometric representations of the one or more objects;
applying a visual attribute to the one or more three-dimensional geometric representations of the objects;
rendering a plurality of two-dimensional images from the three-dimensional geometric representations of the objects with applied visual attributes;
defining one or more stacking orders of at least a portion of the plurality of two-dimensional images; and
delivering to a user the two-dimensional images and the defined stacking orders in a form that is importable into a computer-executable image editing program, the computer-executable image editing program capable of enabling user selection of one or more of the defined stacking orders and capable of producing a three-dimensional image of at least a portion of the consumer product based on the user-selected one or more defined stacking orders and the imported two-dimensional images.
2. The method of claim 1, further comprising applying original layer assignment, original layer hierarchy and original shading associated with the three-dimensional model of the consumer product to the one or more three-dimensional geometric representations of the objects.
3. The method of claim 1, wherein each produced three-dimensional image of the consumer product is substantially equivalent to a physically captured photograph of the consumer product.
4. The method of claim 1, wherein a sequence of produced three-dimensional images of the consumer product is substantially equivalent to a physically captured film of the consumer product.
5. The method of claim 1, wherein the defined stacking orders are in the form of an XML file.
6. The method of claim 1, wherein the visual attribute comprises lighting, a color, a texture, a camera angle, and/or a lens effect.
7. The method of claim 1, wherein the computer-executable image editing program comprises Photoshop or a Photoshop equivalent.
8. The method of claim 1, wherein the steps of distilling, applying and rendering are performed by a computer-executable computer graphics program.
9. The method of claim 8, wherein the computer-executable computer graphics program comprises Maya or a Maya equivalent.
10. A method comprising:
distilling one or more objects of a three-dimensional model of a consumer product to corresponding three-dimensional geometric representations of the one or more objects;
applying original layer assignment, original layer hierarchy and original shading associated with the three-dimensional model of the consumer product to the one or more three-dimensional geometric representations of the objects;
rendering a first plurality of two-dimensional images based on the three-dimensional geometric representations of the one or more objects with original layer assignment, hierarchy and shading;
applying a visual attribute to the first plurality of two-dimensional images;
rendering a second plurality of two-dimensional images from the first plurality of two-dimensional images with applied visual attributes;
defining one or more stacking orders of at least a portion of the second plurality of two-dimensional images; and
delivering to a user the second plurality of two-dimensional images and the defined stacking orders in a form that is importable into a computer-executable image editing program, the computer-executable image editing program capable of enabling user selection of one or more of the defined stacking orders and capable of producing a three-dimensional image of at least a portion of the consumer product based on the user-selected one or more defined stacking orders and the imported two-dimensional images.
11. The method of claim 10, wherein each produced three-dimensional image of the consumer product is substantially equivalent to a physically captured photograph of the consumer product.
12. The method of claim 10, wherein a sequence of produced three-dimensional images of the consumer product is substantially equivalent to physically captured film of the consumer product.
13. The method of claim 10, wherein the defined stacking orders are in the form of an XML file.
14. The method of claim 10, wherein the visual attribute comprises lighting, a color, a texture, a camera angle, and/or a lens effect.
15. The method of claim 10, wherein the computer-executable image editing program comprises Photoshop or a Photoshop equivalent.
16. The method of claim 10, wherein the steps of distilling, applying and rendering are performed by a computer-executable computer graphics program.
17. The method of claim 16, wherein the computer-executable computer graphics program comprises Maya or a Maya equivalent.
18. A system comprising:
a computing device comprising:
a processing device; and
a computer readable storage device storing data instructions that when executed by the processing device cause the computing device to:
distill one or more objects of a three-dimensional model of a consumer product to corresponding three-dimensional geometric representations of the one or more objects;
apply a visual attribute to the one or more three-dimensional geometric representations of the objects;
render a plurality of two-dimensional images from the three-dimensional geometric representations of the objects with applied visual attributes; and
define one or more stacking orders of at least a portion of the plurality of two-dimensional images; and
wherein the rendered two-dimensional images and the defined one or more stacking orders are rendered and defined, respectively, in a form that is importable into a computer-executable image editing program.
19. The system of claim 18, wherein the computer-executable image editing program is capable of enabling user selection of one or more of the defined stacking orders and capable of producing a three-dimensional image of at least a portion of the consumer product based on the user-selected one or more defined stacking orders and the imported two-dimensional images.
20. The system of claim 19, wherein each produced three-dimensional image of the consumer product is substantially equivalent to a physically captured photograph of the consumer product.
21. The system of claim 19, wherein a sequence of produced three-dimensional images of the consumer product is substantially equivalent to a physically captured film of the consumer product.
US15/912,397, filed 2018-03-05 (priority date 2016-11-15): Consumer product advertising image generation system and method. Status: Abandoned. Published as US20190080521A1 (en).

Priority Applications (1)

US15/912,397, filed 2018-03-05, priority date 2016-11-15: Consumer product advertising image generation system and method.

Related Parent Applications (1)

US15/352,344, filed 2016-11-15 (continuation parent): Consumer product advertising image generation system and method, now US9972140B1.

Publications (1)

US20190080521A1, published 2019-03-14.

Family Applications (2) (Family ID=62091394)

US15/352,344, filed 2016-11-15: Expired - Fee Related; granted as US9972140B1 (en).
US15/912,397, filed 2018-03-05: Abandoned; published as US20190080521A1 (en).


Also Published As

WO2018093861A2, 2018-05-24
WO2018093861A3, 2018-07-26
US9972140B1, 2018-05-15
US20180137688A1, 2018-05-17


Legal Events

STPP (Information on status: patent application and granting procedure in general): NON FINAL ACTION MAILED
STCB (Information on status: application discontinuation): ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION