US20200221022A1 - Background separated images for print and on-line use - Google Patents
- Publication number
- US20200221022A1 (U.S. patent application Ser. No. 16/822,876)
- Authority
- US
- United States
- Prior art keywords
- item
- images
- image
- background
- camera
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/64—Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- H04N5/23222—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/06—Buying, selling or leasing transactions
- G06Q30/0601—Electronic shopping [e-shopping]
- G06Q30/0641—Shopping interfaces
- G06Q30/0643—Graphical representation of items or shoppers
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/387—Composing, repositioning or otherwise geometrically modifying originals
- H04N1/3871—Composing, repositioning or otherwise geometrically modifying originals the composed originals being of different kinds, e.g. low- and high-resolution originals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/46—Colour picture communication systems
- H04N1/56—Processing of colour picture signals
- H04N1/60—Colour correction or control
- H04N1/6083—Colour correction or control controlled by factors external to the apparatus
- H04N1/6086—Colour correction or control controlled by factors external to the apparatus by scene illuminant, i.e. conditions at the time of picture capture, e.g. flash, optical filter used, evening, cloud, daylight, artificial lighting, white point measurement, colour temperature
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/56—Cameras or camera modules comprising electronic image sensors; Control thereof provided with illuminating means
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/62—Control of parameters via user interfaces
-
- H04N5/2256—
-
- H04N5/23216—
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/272—Means for inserting a foreground image in a background image, i.e. inlay, outlay
- H04N5/2723—Insertion of virtual advertisement; Replacing advertisements physical present in the scene by virtual advertisement
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
- G06T2207/10012—Stereo images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10141—Special mode during image acquisition
- G06T2207/10152—Varying illumination
Definitions
- The present application (PCCN-0001-P04) is a continuation of U.S. patent application Ser. No. 11/850,083, filed Sep. 5, 2007, which issued as U.S. Pat. No. 7,953,277 on May 31, 2011 (PCCN-0001-P02).
- U.S. patent application Ser. No. 11/850,083 claims the benefit of U.S. Provisional App. No. 60/824,571, filed on Sep. 5, 2006 (PCCN-0001-P60). All of the above applications are hereby incorporated by reference in their entirety.
- This application relates to illumination fixtures and in particular to illuminating an item to facilitate providing background separated images.
- Images of products for advertising or other purposes are prevalent in print and on-line content. Advertisers prefer to use high quality images of products in advertisements to differentiate their products. Additionally, a high quality product image can convey critical information about a product much more quickly than a text description can. Attributes of high quality images for advertising include excellent image detail, clear product edges, the absence of a visible background (thereby conveniently allowing use of the image in various contexts, such as print advertisements, web advertisements, or the like), and an absence of defects such as spots, reflections, or specular highlights. To achieve high quality product images, advertisers conventionally use photography studios that employ specialists who are experts in generating high quality images of products.
- While some functions related to high quality photography may benefit from digital image processing, such as red-eye removal, achieving excellent image detail with clear product edges and no visible background typically requires manual manipulation of the images, including clicking along segments of the edges of an object on a computer screen to define an outline of the object so that it can be separated from its background. Additionally, providing high quality images for processing requires time-intensive product placement, lighting, and photography, with each change in product positioning often requiring adjustments to lighting. Once high quality product images are captured, specialists use manual graphic editing tools to separate the item from its background so that the item can be repurposed for various types of advertisements. As with most highly detailed manual work, the process is often time-consuming and costly, requiring a skilled and well-trained specialist.
- A photograph or digital image as described herein may be taken with lighting configured to deliver a high quality image of the object without consideration for the critical manual operation of separating the object from its background.
- An outline of an object may be inadequate to ensure that the ability to separate the object from its background is of similarly high quality.
- Lighting in a photography studio may be optimized for lighting an object and may be inadequate to illuminate the object for the purpose of efficiently separating it from its background.
- The photograph or digital image may be taken in a way that compromises between obtaining a high quality image of the object and obtaining an image from which the background may be separated. In such a case, neither the quality of the image nor the quality of the separation of object from background is optimized.
- An aspect of the invention herein disclosed may include an image formation and processing system for delivering high quality product images with no visible background. This aspect may be achieved by using automated image processing to generate and repurpose the product images.
- This aspect of the invention may address key drawbacks of known methods and systems, such as per-image cost and processing time.
- The methods and systems herein disclosed may dramatically lower per-image cost while providing greater throughput and higher capacity for high quality images. This may facilitate the development and management of a library of product images that can be used to fulfill a wide variety of product display and advertising needs.
- Another aspect of the invention herein disclosed may include a user display and interface to facilitate a user's manipulation of a product display.
- An advantage of this aspect of the invention may include allowing a user to virtually touch, turn, operate, open, tilt, activate, and observe a product, and to retrieve more information (e.g., detailed images, text, an audio description).
- The interface may also allow use of game controllers or virtual reality interface devices.
- The elements of the invention herein disclosed may address key aspects of a consumer purchasing process.
- The consumer may view a preferred model, rather than the one on display at a local store. Details of a product, both internal and external, may be easily viewed.
- The box contents (e.g., manuals, included accessories, assembly instructions) may also be examined.
- The invention facilitates providing the images and an interactive user display to deliver a rapid, automated response to a customer product query. This may result in a faster sales cycle.
- The invention may be combined with on-line sales support tools that explain product features, virtually demonstrate the product, answer frequently asked questions through images as well as text or audio, and allow a consumer to examine and evaluate a product without a time limit such as may be imposed in a physical store.
- The images and interactive user interface may be beneficial to customer support organizations to guide users through debugging, assembly, accessory installation, and home servicing of products.
- A user may view an interactive display of installing a replacement component rather than viewing a hard-copy printout of the static images typically included in a product manual.
- The invention could be used to provide interactive images quickly in reply to a customer request for support of an aspect not already available.
- A method for producing a background separated image includes acquiring at least two images of an item and a background with a camera, wherein one of the at least two images provides high contrast between the item and the background, providing a contrast image; processing the contrast image to detect a separation between the item and the background, providing an edge mask; aligning the edge mask with the boundary of the item in the other of the at least two images; extracting the other image's content within the aligned edge mask to produce an item-only image; and storing the item-only image in an image library.
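The two-image method above can be sketched in a few lines. This is a minimal illustration, not the patent's actual implementation: the function name, the fixed threshold, and the use of small NumPy arrays as stand-ins for camera frames are all assumptions. The contrast frame is assumed to be a grayscale shot against the backlit background, so the item silhouettes to near-black.

```python
import numpy as np

def item_only_image(contrast_img, detail_img, threshold=200):
    """Sketch of the claimed two-image method (names hypothetical).

    contrast_img: grayscale frame shot against the bright backlit
    background, in which the item renders nearly black.
    detail_img: aligned RGB frame of the same scene shot with
    normal front lighting.
    """
    # Item pixels are much darker than the illuminated background.
    mask = contrast_img < threshold          # True where the item is
    out = np.full_like(detail_img, 255)      # start from a white background
    out[mask] = detail_img[mask]             # keep item content only
    return out, mask

# Toy 4x4 scene: the item occupies the center 2x2 block.
contrast = np.full((4, 4), 255, dtype=np.uint8)
contrast[1:3, 1:3] = 10                      # dark silhouette
detail = np.full((4, 4, 3), 128, dtype=np.uint8)
item, mask = item_only_image(contrast, detail)
assert mask.sum() == 4                       # four item pixels detected
assert (item[0, 0] == 255).all()             # background turned white
assert (item[1, 1] == 128).all()             # item content preserved
```

In this sketch the "edge mask" is a per-pixel boolean; the alignment step is trivial because the two frames are assumed to share a coordinate origin, as the claims describe.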
- The background is illuminated.
- The illumination may be uniform, such as uniform in color or intensity.
- The illumination may be diffuse.
- The background is illuminated from a side opposite the item.
- The background material is textured plastic or a light diffusing film on a glass substrate.
- The camera is a digital camera.
- The digital camera may have a resolution of at least 2000×2000 pixels, or it may be a digital video camera.
- Acquiring further includes configuring a plurality of camera settings for each of the at least two images, wherein at least one camera setting is different for each of the at least two images.
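The differing-settings idea can be illustrated with a simple capture plan. All values here are assumptions, not figures from the patent: the contrast frame is exposed for the bright backlight so the item silhouettes, while the detail frame is exposed for front lighting.

```python
# Illustrative two-shot capture plan for the same scene; field names
# and exposure values are hypothetical.
shots = [
    {"name": "detail",   "backlight": False, "front_light": True,
     "shutter_s": 1 / 60,  "aperture": "f/8"},
    {"name": "contrast", "backlight": True,  "front_light": False,
     "shutter_s": 1 / 250, "aperture": "f/8"},
]

# At least one setting differs between the two acquisitions, as claimed.
differing = {k for k in shots[0] if shots[0][k] != shots[1][k]}
assert "shutter_s" in differing          # exposure changes per frame
assert "aperture" not in differing       # aperture held constant here
```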
- The processing of the contrast image is performed automatically by an image processor.
- The processing is performed by an image processor executing a combination of standard and custom image transfer and processing software.
- The processing may include identifying a set of points of the background that are adjacent to a set of points of the item.
- The background of the contrast image is substantially brighter than the item.
- The item of the contrast image is substantially black.
- The edge mask is an image comprising the item boundary.
- The edge mask may be a list of vectors in a two-dimensional coordinate space or a list of points in a two-dimensional coordinate space.
- Each of the at least two images references a common origin in a two-dimensional space.
- The edge mask is referenced to the common origin so that the edge mask can be automatically aligned to the item in the other image of the at least two images.
- Aligning includes using an edge detection algorithm to identify a plurality of edges of the item in the other image of the at least two images and fitting the edge mask to the detected plurality of edges.
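One way to derive a point-list edge mask of the kind described above is to collect item pixels that touch the background, echoing the earlier notion of background points adjacent to item points. This is a rough sketch under the assumptions of a binary item mask and 4-connectivity; a real system would more likely use a contour-tracing routine.

```python
import numpy as np

def boundary_points(mask):
    """Return item pixels that have at least one 4-connected
    background neighbour: a point-list edge mask in the image's
    two-dimensional coordinate space (origin at top-left)."""
    padded = np.pad(mask, 1, constant_values=False)
    # Shifted views give each pixel's four neighbours.
    up    = ~padded[:-2, 1:-1]
    down  = ~padded[2:, 1:-1]
    left  = ~padded[1:-1, :-2]
    right = ~padded[1:-1, 2:]
    edge = mask & (up | down | left | right)
    return np.argwhere(edge)         # (row, col) pairs

mask = np.zeros((5, 5), dtype=bool)
mask[1:4, 1:4] = True                # 3x3 item region
pts = boundary_points(mask)
assert len(pts) == 8                 # every item pixel except the interior one
```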
- Extracting includes deleting all content of the other image of the at least two images other than the content within the aligned edge mask. Deleting may include changing content to white or marking the content as deleted.
- Extracting includes selecting all image content within the aligned edge mask and copying the selected image content to a new image, providing a new image containing only the selected content.
- The method further comprises removing image defects selected from a set consisting of product surface blemishes, dust, and specular highlights.
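Defect removal is described only in general terms. As one hedged illustration, isolated single-pixel regions (e.g., dust specks on the platform) can be dropped from a binary item mask by requiring every kept pixel to have at least one 4-connected neighbour; a production system would more likely use connected-component size thresholds or inpainting.

```python
import numpy as np

def despeckle(mask):
    """Drop isolated single-pixel regions from a binary item mask.
    Illustrative only: assumes dust appears as lone pixels."""
    padded = np.pad(mask, 1, constant_values=False)
    # Count each pixel's True 4-neighbours via shifted views.
    neighbours = (padded[:-2, 1:-1].astype(int) + padded[2:, 1:-1]
                  + padded[1:-1, :-2] + padded[1:-1, 2:])
    return mask & (neighbours > 0)

mask = np.zeros((5, 5), dtype=bool)
mask[1:3, 1:3] = True    # genuine item region (2x2)
mask[4, 4] = True        # isolated "dust" speck
clean = despeckle(mask)
assert clean.sum() == 4  # item region survives
assert not clean[4, 4]   # speck removed
```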
- The item may be a product, or an accessory such as product packaging, an owner's manual, a warranty certificate, a cable, a controller, or a data storage disk.
- The item is one or more assembly parts, such as screws, nuts, bolts, wire, wire fasteners, a wrench, a screwdriver, or a nut driver.
- The item is a product for sale through, for example, an on-line auction, an on-line purchase, a telephone purchase, an in-person purchase, or a mail-order purchase.
- The item is for rent.
- A system for producing a background separated image includes a camera for acquiring at least two images of an item and a background, wherein one of the at least two images provides high contrast between the item and the background, providing a contrast image; an image processor for processing the contrast image to detect a separation between the item and the background, providing an item edge mask, wherein the edge mask is aligned with the boundary of the item in the other of the at least two images, and the other image's content within the aligned edge mask is extracted, providing an item-only image; and an image library for storing the item-only image.
- The background is illuminated.
- The illumination may be uniform, such as uniform in color or intensity.
- The illumination may be diffuse.
- The background is illuminated from a side opposite the item.
- The background material may be textured plastic or a light diffusing film on a glass substrate.
- The camera is a digital camera.
- The digital camera may have a resolution of at least 2000×2000 pixels or may be a digital video camera.
- The camera includes a plurality of camera settings, wherein at least one of the plurality of camera settings is different for each of the at least two images.
- The image processor executes a combination of standard and custom image transfer and processing software.
- The background of the contrast image is substantially brighter than the item.
- The item of the contrast image is substantially black.
- The edge mask may be an image comprising the item boundary, a list of vectors in a two-dimensional coordinate space, or a list of points in a two-dimensional coordinate space.
- The at least two images include a common origin in a two-dimensional space.
- The edge mask includes the common origin so that the image processor can automatically align the edge mask to the item in the other image of the at least two images.
- An edge detection algorithm is included to identify a plurality of edges of the item in the other image of the at least two images, along with a boundary matching algorithm for fitting the edge mask to the detected plurality of edges.
- Algorithms are further included for one or more of deleting all content in the other of the at least two images that is not within the aligned edge mask, changing to white all content in the other of the at least two images that is not within the aligned edge mask, and marking as deleted all content in the other of the at least two images that is not within the aligned edge mask.
- The item-only image includes only the image content within the aligned edge mask.
- An algorithm is included for correcting image defects such as product surface blemishes, dust, and specular highlights.
- The item may be a product, or an accessory such as product packaging, an owner's manual, a warranty certificate, a cable, a controller, or a data storage disk.
- The item is one or more assembly parts, such as screws, nuts, bolts, wire, wire fasteners, a wrench, a screwdriver, or a nut driver.
- The item is a product for sale through, for example, an on-line auction, an on-line purchase, a telephone purchase, an in-person purchase, or a mail-order purchase.
- The item is for rent.
- A method of accumulating a set of item-only images of an item comprises providing a camera for acquiring images of the item; providing an illumination fixture for illuminating the item; acquiring a plurality of images of the item from different perspectives, wherein the item is illuminated by the fixture; automatically processing each of the plurality of images to generate item-only images; and associating the set of item-only images in an image library for presentation to a user.
- A method of presenting item-only images comprises providing a set of item-only images of an item; selecting at least one item-only image; combining the at least one selected item-only image with a background, providing a repurposed image; and receiving an item request from a user and, in response thereto, presenting the repurposed image to the user.
- Selecting at least one item-only image is based at least in part on the item request.
- The method further comprises receiving a request from a user to adjust a perspective view of the item and, in response thereto: selecting a second item-only image from the set of item-only images; combining the second item-only image with a background, providing a perspective item image; and presenting the perspective item image to the user.
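Serving a perspective change might reduce to picking the stored item-only view whose capture angle is closest to the requested one. The angle-keyed library and file names below are hypothetical, and the patent does not prescribe this selection rule.

```python
# Hypothetical library of item-only views keyed by capture angle (degrees).
views = {0: "front.png", 90: "side.png", 180: "back.png", 270: "other-side.png"}

def nearest_view(requested_deg):
    """Return the stored angle closest to the request, measured on a
    circle so that 350 degrees is near 0 degrees."""
    return min(views, key=lambda a: min(abs(a - requested_deg) % 360,
                                        360 - abs(a - requested_deg) % 360))

assert nearest_view(80) == 90    # 10 degrees away beats 80 degrees
assert nearest_view(350) == 0    # wraps around the circle
```

The selected view would then be composited with a background, as in the repurposed-image step above.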
- The item display image is presented to the user in one or more of an on-line product purchase activity or an on-line product support activity.
- The item-only image includes a representation of an interior of a product.
- The background includes one or more icons that represent aspects of the item. Selecting one of the one or more icons presents the corresponding aspect of the item.
- The aspects selectable through the one or more icons include an internal view of the item, a manual, one or more accessories, assembly instructions, product packaging, service plans, peripherals, and alternate configurations of the item.
- A method of customer support comprises: receiving a support request from a user; acquiring one or more item-only images of an item identified in the support request; repurposing the one or more item-only images for presentation to the user; and providing the repurposed images to the user as a response to the support request.
- The support request is selected from a list consisting of troubleshooting, assembly, accessory installation, home service, upgrading, and returning the item.
- Providing the repurposed images includes one or more of displaying the repurposed images on a computer, rendering a video comprising the repurposed images, printing the repurposed images, sending the repurposed images as an email attachment, posting the repurposed images to a web site, and storing the repurposed images on a server.
- FIG. 1 depicts an image formation and processing system for rendering high quality item images without any background;
- FIG. 2 depicts a method of processing images captured with the image formation system of FIG. 1;
- FIG. 3 depicts an alternate embodiment of the image formation facility;
- FIG. 4 depicts an overhead view of the image formation facility;
- FIG. 5 depicts a section view of the image formation facility;
- FIG. 6 depicts a perspective cut-away view of the image formation facility;
- FIG. 7 depicts a front view of a semicircular camera support ring;
- FIG. 8 depicts a side view of the semicircular camera support ring;
- FIG. 9 depicts a perspective view of the semicircular camera support ring with vertical supports;
- FIG. 10 depicts a view of a user interface display of an image of an item that was photographed with the image formation system.
- FIG. 11 depicts various representations of a hand icon that may be used to manipulate images displayed in the user interface display.
- An aspect of the present invention involves an image formation and processing system 100 for producing high quality images of items so that the images contain no visible background or are readily separated from the background.
- An aspect of the image formation and processing system 100 includes rendering high quality images of items that are free of all background. To render such high quality item-only images, the image formation and processing system 100 may be adapted to eliminate any impact on the uniformity of background illumination such as shadows or reflections cast by the item being photographed.
- Film photography exposes a photographic film to all elements within a camera's field of view, thereby recording the item and any visible background on the film. Even small variations in a nearly uniform background may be detected and recorded by film photography; therefore, post-processing to separate the item from the background is necessary. To achieve a background-less image in film photography would require that only the object be printed on the photographic paper. Because of the film exposure process described above, there is no direct equivalent of a background-less image in film photography.
- Image processing techniques may include automatically identifying the outline of the item so that the image processor can remove image data outside of the item outline (such data being the background). The accuracy of outline identification can significantly affect the quality and usefulness of the resulting product image.
- The image formation and processing system 100 may combine digital image formation, using the image formation facility 102, camera 122, and lights 124, with image processing, using the image processor 130, to identify the outline of an item 128 to a high degree of accuracy. Identification of an outline of an item 128 may be an automatic process. The image formation and processing system 100 may further apply image processing techniques to automatically extract the item 128 data from acquired images using the identified outline.
- Objects often include complex three-dimensional outlines that may not transfer well when captured by a two-dimensional digital camera image sensor. This is particularly apparent when a three-dimensional object is photographed in perspective view. Also, lighting variations or inadequate lighting may further complicate distinguishing a trailing surface of the object from its background.
- the invention provides images that allow these complex three-dimensional outlines to be represented as simple sharp contrast changes in a two-dimensional image.
- the phrase “such as” means “such as and without limitation”.
- the use of "example", such as in "for example" and "in an example", means an example without limitation.
- the image formation and processing system 100 may comprise an image formation facility 102 for uniformly illuminating an item 128 , a camera 122 for capturing and transferring the image, lighting 124 for illuminating visible surfaces of the item 128 , an image processor 130 for processing images, an image library 132 containing images, and a user display 134 configured for interactively displaying the item images.
- the image formation facility 102 may be comprised of a structural support ring 104 into which a transparent platform 108 is slidably inserted.
- the support ring 104 circumferentially captures the platform 108 and may form a substantially horizontal surface for supporting and rotating the item 128 to facilitate acquiring images of the item 128 from a plurality of directions.
- a light reflection diffusion ring 110 may border the structural support ring 104 on the platform 108 to eliminate unwanted reflection of light from the support ring 104 .
- the light reflection diffusion ring 110 may prevent non-uniform illumination by diffusing light reflecting from the support ring 104 .
- Working height support for the structural ring 104 may be provided by a plurality of legs 112 .
- the legs 112 may further support a curved diffusion backdrop 114 through an association with a backdrop frame 118 .
- the diffusion backdrop 114 may be illuminated to facilitate a uniformly illuminated background for the item 128 .
- the legs 112 may also support a diffusion panel 120 below and substantially parallel to the platform 108 for providing uniform light to the platform 108 .
- a plurality of images of the item 128 may be acquired with the camera 122 and transferred to processor 130 for processing and storage in the image library 132 .
- the camera 122 and lights 124 may be integrated with the image formation facility 102 , or they may be separately supported by a stand (not shown), or held by a photographer.
- a platform 108 may be a clear glass plate. Such a plate may provide a stable, strong surface on which an item 128 may be placed for photographing. Additionally, a glass plate may provide excellent light transmission properties to facilitate a uniformly illuminated surface for photographing an item 128 .
- the platform 108 may be constructed of light diffusing plastic.
- the platform 108 may be a glass plate with light diffusing film applied to one or both sides.
- a non-diffuse light may illuminate the platform 108 from below and the platform 108 may diffuse the light.
- the diffuse light from platform 108 may uniformly illuminate the item 128 from below and the platform 108 may appear as a substantially uniform white background when imaged by a digital camera.
- Anti-reflective glass may be used for the platform 108 to reduce reflections.
- the glass of the platform 108 may be low-iron glass to reduce reflections.
- Polarized light may facilitate acquiring images that enable separation of the item 128 from a background.
- Polarized light may be provided by a polarized light source or may be provided by polarizing filters disposed between the light source and the item 128 and camera 122 .
- the backdrop 114 and the glass plate 108 may be wrapped with a polarizing filter so that light passing through the backdrop 114 and plate 108 may be polarized.
- a second polarizing filter may be included in the optical system of the camera. When the filter in the camera optical system is tuned to reject light that is polarized by the backdrop 114 and/or plate 108 , light that passes directly from the backdrop 114 and the plate 108 to the camera 122 will not pass through the camera optical system filter. In this way, the light that makes up the background for the item 128 will not be acquired by the camera 122 .
- Polarized light that impacts the item 128 being photographed may scatter as reflected light thereby changing the polarization of the reflected light. Because the filter in the camera optical system is tuned to reject only light that matches the polarization of the background light, the light reflected off the item 128 may be acquired by the camera 122 because it may pass through the camera optical system filter. The resulting image acquired by the camera 122 may include only the light that is scattered by reflecting off of the item 128 . Thus, a camera filter tuned to filter out light that is polarized by the polarized backdrop 114 and polarized plate 108 may facilitate good separation between the image of the object (composed of scattered light) and the background (composed of polarized light).
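- The crossed-filter behavior described above can be illustrated with Malus's law, under which the transmitted fraction of polarized light falls off as cos²θ of the angle between the polarizer axes. The following is a minimal sketch with illustrative intensity values, not part of the specification:

```python
import math

def transmitted_intensity(i0, angle_deg):
    """Malus's law: fraction of polarized light of intensity i0 passed by
    an analyzer oriented angle_deg away from the light's polarization axis."""
    return i0 * math.cos(math.radians(angle_deg)) ** 2

# Background light polarized by the backdrop/platform filter, viewed
# through a camera filter crossed at 90 degrees: essentially fully blocked.
background = transmitted_intensity(1.0, 90)

# Light scattered by the item is largely depolarized; averaged over all
# polarization angles, roughly half of it passes the camera filter.
scattered = 0.5 * 1.0
```

With the camera filter crossed against the background polarization, the background intensity reaching the sensor approaches zero while scattered item light survives, which is the contrast mechanism the text relies on.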
- the support ring 104 may be formed of steel, aluminum, or other material that is sufficiently strong to stably support the platform 108 , backdrop 114 , backdrop frame 118 , and item 128 .
- the platform 108 is a round disk shaped platform and support ring 104 is a ring with an outer diameter slightly larger than the platform 108 .
- a support ring 104 may have a U-shaped profile with the U channel extending inward to capture a portion of the circumference of the platform 108 , thereby supporting the platform 108 .
- the support ring 104 may be fastened to the legs 112 so that the support ring 104 and the legs 112 do not move when the platform 108 is rotated within the U-shaped channel.
- an intervening material such as plastic or self-lubricating nylon may be disposed between the platform 108 and the support ring 104 .
- the support ring 104 provides a user access region 106 to a portion of the platform 108 circumference to facilitate rotating the platform 108 .
- the support ring 104 may be "L" shaped so that the lower horizontal extension of the "L" supports the platform 108 from below.
- the intervening material may also be used to facilitate smoothly rotating the platform 108 .
- the legs 112 may be constructed of aluminum, steel, or other material that is capable of supporting the image formation facility 102 . In the embodiment depicted in FIG. 1 , three legs are assembled to the support ring 104 . At least one of the legs 112 may include a leveling mechanism to facilitate leveling the platform 108 . In the embodiment depicted in FIG. 1 , each leg may include a leveling mechanism to allow adjustment of the platform 108 to a horizontal position on a variety of uneven surfaces.
- the diffusion panel 120 may be constructed of a light diffusing material such as textured plastic and may be assembled to the legs 112 .
- the diffusion panel 120 may be constructed of glass with a diffusion film applied to one or more surfaces. To achieve a high degree of uniformity of light, the diffusion panel 120 may be constructed to deliver a high degree of haze. Haze is a measurement of wide-angle scattering that causes a lack of contrast across the diffusion panel 120 .
- the diffusion panel 120 may include a light source, such as one or more fluorescent lights below the diffusion panel 120 , that enables the diffusion panel 120 to transmit diffused light to the platform 108 .
- the diffusion panel 120 and accompanying light source may be mounted to the legs 112 at an intermediate position between the platform 108 and the leveling mechanism at the bottom of the legs 112 .
- the diffusion panel 120 may be disposed within the image formation facility 102 so that only the diffusion panel 120 , the item 128 , and the backdrop 114 are imaged by the camera 122 . This may be accomplished by extending the diffusion panel 120 beyond the diameter of the support ring 104 . This extension generally may be beneath the platform 108 and support ring 104 in the direction of the backdrop 114 .
- the diffusion backdrop 114 may be supported by a backdrop frame 118 so that the lower edge of the diffusion backdrop 114 is in close proximity to a top surface of the platform 108 .
- the backdrop frame 118 may mount on top of support ring 104 .
- the backdrop frame 118 construction may include upward vertical extensions of at least two of the legs 112 .
- each of the legs 112 may attach to the support ring 104 through a bracket that secures the support ring 104 to a vertical portion of the leg 112 .
- the backdrop frame 118 may include horizontal extensions 116 to which the diffusion backdrop 114 is attached.
- the horizontal extensions 116 may be slidably attached to the backdrop frame 118 so that they may be adjusted horizontally and vertically, and secured in the adjusted position. Such an embodiment may allow precise positioning of the diffusion backdrop 114 relative to the platform 108 to facilitate a uniformly illuminated background for the item 128 .
- the bottom of the frame 118 may be constructed like a drum.
- An inside ring may be wrapped, such as with a white screen, then pulled tight to form a drum-like structure, then secured by an outside ring.
- the platform 108 is supported by legs, which may need to be reinforced to support heavy objects.
- a motor assembly may be used to rotate the platform 108 .
- bearings, such as air bearings, may be used to support the platform 108 .
- the diffusion backdrop 114 may be curved, similarly to the support ring 104 . However, the curve of the diffusion backdrop 114 may be of a smaller radius than the support ring 104 so that the surface of the backdrop 114 facing the item 128 is within the radius of the support ring 104 .
- the diffusion backdrop 114 may be constructed of a light diffusing material such as textured plastic.
- the diffusion backdrop 114 may be constructed of glass with a diffusion film applied to one or both surfaces. To achieve a high degree of uniformity of light, the diffusion backdrop 114 may be constructed to deliver a high degree of haze. Haze is a measurement of wide-angle scattering that causes a lack of contrast across the diffusion backdrop 114 . In embodiments, the diffusion backdrop 114 may be illuminated.
- a light source for illuminating the diffusion backdrop 114 may be attached to the backdrop frame 118 or the diffusion backdrop 114 .
- a light source for the diffusion backdrop 114 may be one or more fluorescent lights arranged so that when the light source is turned on, the diffusion backdrop 114 generates uniform light intensity and color.
- the light source for diffusion backdrop 114 and the diffusion panel 120 may be controlled individually so that each may provide the same, similar, or different light intensities and colors.
- Light for the diffusion backdrop 114 and for the diffusion panel 120 may be provided by a variety of light technologies including fluorescent, incandescent, LED, electroluminescent, halogen, and other lighting that may provide a range of intensity and a substantially white light.
- an item 128 on the platform 108 may appear in a photograph as having no visible means of support. This may be accomplished by the diffusion panel 120 and the diffusion backdrop 114 providing a uniform, substantially white backdrop for the item 128 .
- the image formation facility 102 may be constructed to support and facilitate photographing a wide variety of items 128 ranging from small items such as a coin, to large items such as a high-definition television.
- the embodiment depicted in FIG. 1 may be suitable for photographing office and personal electronics products, e.g., computers such as desktops, notebooks, tablet PCs, personal digital assistants (PDA), servers, workstations, fax servers, internet-cache servers, barebones systems, POS/kiosk systems; monitors & displays such as CRT monitors, LCD monitors, plasma monitors, projectors; printers such as color laser, mono laser, ink-jet, photo printers, multifunction units, dot-matrix, plotters, label printers, bar code printers, specialty printers, receipt printers, scanners, point-of-sale printers; software such as antivirus software, business software, development tools, education & entertainment, graphics & publishing, internet software, network mgt.
- the image formation facility 102 may also be suitable for facilitating photography of a wide variety of other products such as office products including AV supplies & equipment, basic supplies & labels, binders & accessories, janitorial, business cases, calendars & planners, custom printing, desk accessories, executive gifts, filing & storage, paper, forms, envelopes, pens, pencils & markers, printer & fax supplies, promotional products, school supplies; phones & accessories, or other products found in office, school, or home environments, and the like.
- the image formation facility 102 may also be suitable for facilitating photography of other items such as groceries, produce, cuts of meat, deli products, health and beauty products, clothing, towels, pillows, artwork, models, tableware, collectibles, antiques, potted plants, financial instruments such as bonds, certificates of deposit, currency, and the like.
- the image formation facility 102 may be suitable for facilitating photography of humans, such as models. Background-less images of models may be useful in advertising by allowing an advertiser to “place” a model in an advertisement that includes any background. In an example, an advertiser may use the image formation facility 102 to photograph a model and then place the model into an advertisement of a sunset taken at a different location.
- any item 128 that may remain stable when placed on the platform 108 within the working volume of the image formation facility 102 may take advantage of aspects of the invention that facilitate high resolution photography.
- the working width may be approximately 72 inches, and the working height of the backdrop 114 above the platform 108 may be approximately 36 inches.
- Items 128 may be photographed within their packaging, partially removed, or fully removed from their packaging to record item 128 unpacking or repacking for display as an assembly video. Additionally, the package contents such as documentation, software, warranty cards, cables, batteries, remote controllers, car chargers, wall chargers, assembly tools, assembly fasteners, loose-supplied parts, anti-static packaging and the like may be photographed.
- An item 128 requiring assembly by a user may be photographed at various stages of assembly, or may be continuously photographed during assembly to generate an assembly procedure video. Additionally, battery installation, or product installation (such as connecting a power cord, or attaching speakers) may be photographed.
- with audio recording equipment, such as may be provided by a digital camcorder embodiment of the camera 122 , audio may also be recorded and/or combined with the assembly procedure video to provide an audio description of assembly steps, techniques, benefits, and the like.
- An item 128 may be photographed in various states of initialization and operation. Photographing a user turning on the item 128 , inserting a CD-ROM, changing a channel, selecting an item setting (e.g. for audio equipment, audio settings like bass, treble, balance, and the like), opening a compartment, closing a compartment, turning off the item 128 , and other operation related settings may facilitate a user's understanding of interacting with the product. Such features may include any features that the user can change, control, or modify.
- An item 128 may be photographed from a plurality of angles and positions.
- an item 128 front, back, side, top, or bottom may be photographed.
- an item 128 may be photographed with the focal plane other than parallel to a surface of the item 128 , such as when the item 128 is photographed with the camera 122 directed at a 45 degree angle relative to vertical.
- a perspective view photograph may include more than one surface of the item 128 , such as the front and top of the item or a side and back as if the item were viewed looking toward a rear corner.
- Perspective views may provide visual confirmation of the shape of the item 128 .
- the camera 122 of the image formation and processing system 100 may be a digital still camera.
- the camera 122 may have a resolution of at least 2000×2000 pixels. In other embodiments, the camera 122 may have a resolution lower than about 2000×2000 pixels. It may be understood by one skilled in the art that a camera 122 with any resolution may be used with the image formation and processing system 100 .
- the camera 122 may also provide controls for a variety of settings that impact the acquired image.
- a camera may provide user settings such as resolution, zoom (optical and/or digital), shutter speed, f-stop setting, focus, auto exposure, white balance, macro mode, flash operation, and the like.
- the camera 122 may also include capabilities which affect performance and quality such as lag time, burst mode for rapid succession (such as video), CCD or CMOS sensor, image storage media (e.g. removable), 24 bit or greater color depth, low loss or lossless image compression, image output format support for standards such as JPEG, RAW, and TIFF, and the like.
- Additional camera features such as interface type (RS232, RS422, parallel port, SCSI, USB, IEEE 1394, wireless, and infrared), batteries (Alkaline, NiCad, NiMH, Li-Ion), on-screen display, date/time in image, sound recording, software (download, editing), and the like may also be factored into the selection of a camera 122 .
- Exemplary digital cameras such as Agfa ePhoto, Canon PowerShot, Canon EOS, Casio QV, Concord Eye-Q, Fujifilm FinePix, HP PhotoSmart, Jenoptik Jendigital, Kodak DC, Konica Minolta DiMAGE, Kyocera Samurai, Nikon Coolpix, Olympus E, Pentax EI, Pentax Optio, Polaroid PDC, Premier DC, Pretec DC, Samsung Digimax, Sanyo DSC, Vivitar ViviCam, Yashica Finecam, or a digital video camera such as Canon Elura, Canon Optura, Canon ZR, Hitachi DZ, JVC Everio GZ, JVC GR, Panasonic Proline, Panasonic PV, Panasonic VDR, Samsung SC, Sharp Viewcam, Sony DCR, Sony Handycam, Sony HDR, Toshiba GSC, and the like may meet one or more requirements for use with the image formation and processing system 100 .
- lighting 124 may be used to illuminate the item 128 being photographed. Lighting 124 may be coordinated with photographic settings of the camera 122 to facilitate capture of high quality images of the item 128 .
- increasing the amount of light incident on the item 128 may cause the item 128 to wash out in the image unless the camera 122 shutter speed is increased.
- Faster shutter speeds may reduce the amount of light that reaches the camera 122 image sensor, thereby reducing the potential to wash out the item 128 .
- Faster shutter speeds may also reduce the amount of light reaching the camera 122 from the background. This may facilitate eliminating the background by causing the background to appear black, further enhancing the distinction of the item 128 from its background.
- the lighting 124 source may be one or more strobe lights.
- the strobe actuation phase and duration may be coordinated with the camera 122 shutter speed and timing in order to facilitate high quality image capture.
- the lighting 124 may be a continuous illumination source, such as fluorescent, incandescent, LED, or other continuous illumination lighting technology.
- the lighting 124 may use a plurality of lights at one or more positions facing the item 128 from one or more angles such as from the front, top, and sides.
- the lighting 124 may be used to create lighting effects such as shadow, highlights, depth, contrast and other visual effects. Lighting may be white, although other color lights may be used to create color effects. Any combination of light types, colors, quantities, and positions may be used to illuminate the item 128 .
- the lighting 124 may be used in combination with illumination from the backdrop 114 and the platform 108 to create a high contrast image that includes sharp edge definition of the item 128 .
- the camera 122 may be configured with settings that facilitate acquiring such a high contrast image.
- Camera settings such as relating to shutter speed and aperture may be adjusted, such as to provide a very high quality image of the object in one case, while providing image contrast between the background and the object in the other case.
- Shutter speed and aperture may be adjusted, among other things, to account for different lighting conditions, with the object being front lit to obtain a high quality image that will be used for reproduction, and the object being back lit (typically with less light) to obtain a sharp distinction between the image and its background.
- the main image may be shot with a shutter speed of 1/200th of a second at f22, while the backlit image may be shot with a shutter speed of 1/4th of a second at f14.
- Other combinations may be suitable, depending on the respective lighting conditions for the two shots.
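- The two example exposures can be compared with the standard exposure-value formula, EV = log₂(N²/t); the numbers below simply restate the f22 at 1/200 s and f14 at 1/4 s settings given above:

```python
import math

def exposure_value(f_number, shutter_seconds):
    """Standard exposure value: EV = log2(N^2 / t)."""
    return math.log2(f_number ** 2 / shutter_seconds)

main_ev = exposure_value(22, 1 / 200)   # front-lit main image
backlit_ev = exposure_value(14, 1 / 4)  # backlit silhouette image

# The backlit shot sits roughly 7 stops below the main shot's EV, i.e. it
# admits far more light, driving the background toward a bright, even field
# with the item rendered in silhouette.
stops_difference = main_ev - backlit_ev
```

This kind of calculation could inform the automatic adjustment of shutter speed and aperture between shots that the text describes.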
- the shutter speed and aperture are automatically adjusted between shots, such as under control of a computer facility.
- the lighting for the backlit image or the primary image may be provided by LEDs, such as an array of LEDs for the back light that provide even illumination of the background.
- Aperture settings may also be adjusted to address depth of field issues. With a small aperture, the lens used to take an image may have a wider depth of field.
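- The depth-of-field effect of aperture can be made concrete with the hyperfocal distance formula H = f²/(N·c) + f; the 50 mm focal length and 0.03 mm circle of confusion below are illustrative assumptions, not values from the specification:

```python
def hyperfocal_mm(focal_mm, f_number, coc_mm=0.03):
    """Hyperfocal distance in mm: focusing at this distance keeps
    everything from half this distance to infinity acceptably sharp."""
    return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

# Assumed 50 mm lens: a smaller aperture (larger f-number) yields a much
# shorter hyperfocal distance, and therefore a wider depth of field.
h_f22 = hyperfocal_mm(50, 22)  # roughly 3.8 m
h_f4 = hyperfocal_mm(50, 4)    # roughly 20.9 m
```

At f22 the entire working volume of the facility can plausibly fall within the sharp zone, which is why a small aperture helps with depth-of-field issues.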
- the image processor 130 for processing images of the image formation and processing system 100 may comprise a general purpose computer executing a combination of customized and commercially available software.
- the image processor 130 may be a personal computer executing customized scripts automating Adobe Photoshop image processing software.
- the image processor 130 may be a single computer or a network of computers in a client-server configuration. In embodiments, the image processor may connect through a network to other network devices such as the camera 122 , the image library 132 , the user display 134 , other computers, printers, scanners, data storage devices, network bridges, routers, and other computer network devices.
- the images acquired by the camera 122 may be transferred to the image processor 130 over a network such as an intranet, Ethernet, or other data communication network suitable for image transfer.
- the camera 122 may transfer images to the image processor 130 through a USB port.
- the camera 122 may transfer images to the image processor 130 through an IEEE 1394 port.
- the image may be transferred through a wired connection or a wireless connection between the camera 122 and the image processor 130 .
- the image processor 130 may be integrated into the same enclosure as the camera 122 .
- Images may be transferred from the camera 122 to the image processor 130 during image acquisition by the camera. Images may be transferred individually or in groups such as in batch mode between the camera 122 and the image processor 130 .
- the camera 122 may have storage capacity for a plurality of images that may facilitate batch mode image transfer. Images may be transferred automatically upon acquisition by the camera 122 , by a user directed action, on a predetermined schedule, or as a result of another action such as a new image being acquired by the camera 122 .
- Image transfer may adhere to one or more protocols such as PictBridge, WIA, Picture Transfer Protocol, PTP/IP, Media Transfer Protocol, and other protocols which facilitate transfer of images from one device to another.
- the image processor 130 may be executing custom software that may be coded in a variety of programming languages such as C, C++, Objective C, AppleScript, JAVA, JavaScript, HTML, Visual Basic, or a script language compatible with a commercial image processing software product such as Adobe Photoshop.
- the customized software may be embodied in a program, a macro, a script, or other suitable software embodiment.
- the methods and systems disclosed herein may be accomplished using a computer system that uses an operating system that supports a graphical user interface, such as the Apple Mac OS X operating system, which includes integrated applications such as Finder for interfacing with the storage of files and folders, and Xcode which allows the building of Mac compatible scripts and fully operational applications that may act on their own or interact with existing software.
- Xcode supports developers using C, C++, Objective C, AppleScript, and Java.
- the image processor 130 software may be primarily written in AppleScript and JavaScript. These applications may interface and run commands between Finder and Adobe Photoshop, where the actual image processing may occur.
- the image processor 130 may be executing commercially available software such as Adobe Photoshop, for example. However other commercially available image processing software such as Captiva, Tinderbox, Shake, and similar image processing software capable of automation of at least some functionality may be used. In embodiments, the image processor 130 may execute a combination of commercial software products and custom software programs to perform the necessary functions of interfacing with the camera 122 , automatically processing the transferred images to generate an item-only image, and storing the image in the image library 132 .
- the image formation and processing system 100 may include an image library 132 containing images processed by the image processor 130 .
- the image library 132 may be adapted to facilitate the automated storage and retrieval of item images based on some aspect of the item 128 .
- the image library 132 may organize images according to the item 128 model number, part number, serial number, vendor code, option, stock keeping unit (SKU), or other information that easily associates an image in the image library 132 to an item 128 .
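- One hypothetical way to make retrieval "based on some aspect of the item" concrete is a predictable directory layout keyed by SKU; the layout, directory names, and file format below are illustrative, not from the specification:

```python
from pathlib import Path

def library_path(root, sku, view, processed=True):
    """Build a predictable storage path so an item image can be retrieved
    by SKU alone. Layout (root/sku/stage/view.tif) is hypothetical."""
    stage = "item_only" if processed else "raw"
    return Path(root) / str(sku) / stage / f"{view}.tif"

p = library_path("/image_library", "SKU-48219", "front")
```

With such a scheme, both the raw images acquired by the camera 122 and the processed item-only images can live in the same library and be located automatically by SKU and view name.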
- the image library 132 may include all images acquired by the camera 122 as well as the images processed by the image processor 130 .
- the image library 132 may be configured based on a file system such as NTFS, FAT, HFS+, UNIX, Joliet, and the like.
- the image library 132 may be configured as a database, such as an SQL or Access database.
- images in the image library 132 may be used for display or printing and may be repurposed for a specific use by the image processor 130 .
- one or more images in the image library 132 may be presented on the user display 134 for purposes of facilitating a user viewing and learning about the item 128 depicted in the images.
- one or more of the images in the image library 132 may be provided to a printer for purposes of printing a catalog or other advertisement displaying the item 128 depicted in the images.
- An alternate embodiment of the image formation facility 102 may comprise a motorized means for rotating and tilting the item 128 to automate image acquisition with a plurality of cameras.
- a motor may be disposed in various locations to enable movement of the platform 108 .
- for example, a motor, a pair of motors, or more motors may be located underneath the platform 108 , on top of the platform 108 , or to the side of the platform 108 .
- a belt may be supplied around the platform 108 to reduce friction during movement.
- air bearings or other bearings may be used to support movement.
- the motor may have motor control software, which may be integrated with, and under the same control as, the software that may be used to acquire the images of the object held on the platform 108 , such as to cause the motor to rotate the object into a desired position, and then capture the images of the object.
- automation software may automate image taking and platform movement, such as to take a predetermined sequence of shots of an object.
- a mask may be provided inside the camera viewfinder, to show the area that will be cropped around the object during processing of the image. In such cases, what the viewer sees through the viewfinder may be matched to what will be handled by the image processing software.
- FIG. 2 depicts a process 200 of rendering an item-only image 220 of an item 128 photographed using the image formation and processing system 100 .
- the process 200 gains substantial advantage over manual methods by using aspects of the image formation and processing system 100 that ensure images of items 128 are automatically and quickly produced with consistently high quality at low cost.
- the process 200 may rely on a difference between illumination levels within an image to facilitate automatic processing by an image processor 130 .
- the process 200 may involve acquiring at least two different images from a single perspective. One of the two different images may be acquired with camera settings that facilitate using the image formation facility 102 to present the item 128 substantially in silhouette, thereby acquiring a silhouette image 202 .
- the silhouette image 202 may have very high contrast between a bright background and the dark item 128 .
- the very high contrast of the silhouette image 202 may facilitate the image processor 130 automatically and accurately detecting the edge of the item 128 to generate an edge mask 204 (e.g., outline mask of item from 202 ).
- the edge mask 204 may include a set of image points along the visible perimeter of the item 128 , wherein the points accurately define a separation between the item 128 and the background.
- a high quality image 208 of an item 128 presented by the image formation facility 102 may be acquired by the camera 122 with the assistance of the lights 124 illuminating the item 128 . While the backdrop 114 and the lower diffusion panel 120 may be illuminated moderately to facilitate acquiring the high quality image 208 , camera settings such as f-stop may be different from those used to acquire silhouette image 202 .
- the edge mask 204 may be superimposed on the high quality image 208 so that the edge mask 204 aligns with the item 128 . Aligning the edge mask 204 with the item 128 facilitates automatically separating the item 128 from the background as shown in composite image 210 (e.g., applied outline mask of 204 to 208 ).
- the image processor 130 may automatically extract image content within the aligned edge mask 204 from the high resolution image 208 and automatically generate a new image such as that shown in extracted image 214 (e.g., item image with defects and bright spots).
- the extracted image 214 includes only item 128 image content from the high quality image 208 .
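- the two-image separation described above can be sketched in a few lines of NumPy. The threshold value, array sizes, and pixel values below are illustrative assumptions, not parameters from the disclosure; the sketch only shows the principle of thresholding the silhouette image into a mask and copying the detailed image through it.

```python
import numpy as np

def edge_mask_from_silhouette(silhouette, threshold=128):
    """Return a boolean item mask from a backlit silhouette image.

    In the silhouette image the background is bright and the item is
    dark, so pixels below the threshold are taken to belong to the
    item. (threshold=128 is an illustrative assumption.)
    """
    return silhouette < threshold

def extract_item_only(detail_image, item_mask, background=255):
    """Copy the detailed image inside the mask; blank everything else."""
    out = np.full_like(detail_image, background)
    out[item_mask] = detail_image[item_mask]
    return out

# Tiny 5x5 grayscale stand-ins: a dark 3x3 "item" on a bright backdrop.
silhouette = np.full((5, 5), 250, dtype=np.uint8)
silhouette[1:4, 1:4] = 10                      # item appears near-black
detail = np.full((5, 5), 200, dtype=np.uint8)  # normally lit photograph
detail[1:4, 1:4] = 90                          # item detail

mask = edge_mask_from_silhouette(silhouette)
item_only = extract_item_only(detail, mask)
```

In practice the silhouette and detailed images would come from the camera 122 at full resolution; the 5×5 arrays merely stand in for them.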
- Edge mask 204 may be an image containing image data which is known to the image processor 130 as representing the outline of the item 128 .
- silhouette image 202 , edge mask 204 image, and high resolution image 208 may all be referenced to a common origin point in a two-dimensional image space.
- the image processor 130 may combine the images in the image space to facilitate processing.
- Edge mask 204 may be a set of vectors.
- image processor 130 may apply edge mask 204 vectors to the high resolution image 208 in the two-dimensional image space to separate the item 128 from the background of the high resolution image 208 .
- image processor 130 may apply other image alignment algorithms that do not require the high resolution image 208 and the edge mask 204 to reference a common origin.
- Image alignment algorithms may include techniques such as pattern matching, edge detection, or a combination of these and other techniques that generate an accurate alignment of the edge mask 204 to the item 128 of the high resolution image 208 .
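- as a sketch of this alignment idea, the following NumPy fragment builds a crude edge map of the detailed image and scans a small window of integer offsets for the shift that best overlaps the edge mask with those edges. It is a minimal stand-in for the pattern-matching and edge-detection algorithms mentioned above, not the disclosed implementation; the gradient threshold and search window are illustrative assumptions.

```python
import numpy as np

def binary_edges(img, thresh=40):
    """Crude edge map: mark pixels whose 4-neighbour gradient is large."""
    gy = np.abs(np.diff(img.astype(int), axis=0))
    gx = np.abs(np.diff(img.astype(int), axis=1))
    edges = np.zeros(img.shape, dtype=bool)
    edges[:-1, :] |= gy > thresh
    edges[:, :-1] |= gx > thresh
    return edges

def best_offset(mask_outline, detail_edges, search=3):
    """Return the (dy, dx) shift of the mask outline that overlaps the
    most detail-image edge pixels, scanning a small offset window."""
    best, best_score = (0, 0), -1
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(mask_outline, dy, axis=0), dx, axis=1)
            score = np.count_nonzero(shifted & detail_edges)
            if score > best_score:
                best, best_score = (dy, dx), score
    return best

# Usage: the detail photo's square sits one pixel down-right of where
# the mask was generated; the search recovers that (1, 1) shift.
ref = np.zeros((8, 8), dtype=np.uint8); ref[2:5, 2:5] = 200
detail = np.zeros((8, 8), dtype=np.uint8); detail[3:6, 3:6] = 200
outline = binary_edges(ref)
shift = best_offset(outline, binary_edges(detail))
```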
- before the extracted image 214 is stored in the image library 132, it may be further processed by the image processor 130 to remove image defects, such as dust 212, and image formation defects, such as specular highlights 218, to generate an item-only/product-only image 220.
- the item-only image 220 may be suitable for printing, inclusion in a print catalog, or for display on an electronic display such as a computer monitor.
- Image processing software such as Adobe Photoshop, and many other commercially available titles, may include image enhancement capabilities and/or features that facilitate removal of image defects such as dust 212 and specular highlights 218 .
- the image processor 130 may include automated scripts or programs that facilitate automating image enhancement through commercially available image processing software.
- Each image-processing step may be manually executed within Adobe Photoshop through user input.
- the image processing application 130 automates the execution of the repetitive tasks in the form of script actions to step Photoshop through analyzing each image set.
- the user may set or adjust the threshold and tolerance levels Photoshop uses to distinguish between foreground and background information when generating the outline mask 204 from the high contrast image 202 prior to the batching sequence.
- the image processing application 130 may be programmed to automate several resizing and cropping procedures that fit the user's needs for each Web and print-related image set that is being processed. These tasks may all be performed using image processing software, such as Photoshop.
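- as an illustration of the kind of enhancement that lends itself to scripting, the fragment below implements a small median filter that suppresses single-pixel dust specks. It is a minimal stand-in for a commercial despeckle tool, not the disclosed system's algorithm; the array sizes and values are illustrative assumptions.

```python
import numpy as np

def despeckle(img):
    """Replace each interior pixel with the median of its 3x3
    neighbourhood, suppressing single-pixel dust specks. Border
    pixels are left unchanged for brevity."""
    out = img.copy()
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y, x] = np.median(img[y - 1:y + 2, x - 1:x + 2])
    return out

# A flat grey field with one dark dust speck at the centre.
img = np.full((5, 5), 200, dtype=np.uint8)
img[2, 2] = 0
clean = despeckle(img)
```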
- a collection of item-only images 220 from a variety of perspectives of the item 128 may be combined in an interactive user display 134 with a user interface adapted for facilitating a user viewing the item 128 from the variety of perspectives.
- the perspectives may include the front, sides, back, top as well as interior views, and various views of the item 128 in operation, assembly, and packaging.
- One such perspective may include a user rotating the item 128 .
- Another may include a user opening a door or compartment of the item 128 , or turning on/off the item 128 .
- FIG. 3 depicts an alternate embodiment of the image formation facility 302 wherein the lower diffusion panel 120 shown in the embodiment of FIG. 1 is replaced by a lower diffusion backdrop 320 .
- the lower diffusion backdrop 320 may provide diffuse uniform illumination in a similar way to the diffusion backdrop 114 .
- an item 128 on platform 108 may appear in a photograph as having no visible means of support. This may be accomplished by the lower diffusion backdrop 320 and the diffusion backdrop 114 providing a uniform, substantially white backdrop for the item 128 .
- image formation and processing system 100 includes two backdrop sections, such as a main section 114 above the platform 108 and a lower backdrop 320 under the platform 108
- other techniques may be used to provide smooth illumination at the intersection of the platform 108 with the supporting ring 104 .
- a separate ring of white material may be used to provide increased reflection from the bottom to account for the fact that the platform 108 absorbs some light from the bottom.
- a section of the platform 108 may be painted, such as white or grey, to provide a uniform transition in the background illumination between the upper and lower backdrop portions.
- FIG. 4 depicts an overhead view of the image formation facility 302 for purposes of describing an aspect of the invention related to the relative position of the upper backdrop 114 and the lower backdrop 320 .
- FIG. 4 shows the support ring 104 supporting the frame 118 for the diffusion backdrop 114 . It also shows the lower diffusion backdrop 320 that is supported by legs 112 (not shown).
- the support ring 104 , lower diffusion backdrop 320 , and upper diffusion backdrop 114 are all substantially concentric.
- the upper diffusion backdrop 114 has the smallest effective diameter and the support ring 104 has the largest diameter.
- the resulting positioning of the upper backdrop 114 and the lower backdrop 320 may facilitate providing a uniform background to an item 128 being photographed on the image formation facility 302 by ensuring that a top edge of the lower diffusion backdrop 320 is not photographed. This may be accomplished by the upper diffusion backdrop 114 blocking a line of sight between the camera and the top edge of the lower diffusion backdrop 320 .
- a side view of the relative positioning of the backdrops can be seen in FIG. 5 and is described below.
- the upper diffusion backdrop 114 and the lower diffusion backdrop 320 may alternatively form non-concentric arcs. In addition, neither backdrop need be concentric with support ring 104. The backdrops may form arcs of any shape that still provides the advantage of ensuring the top edge of the lower diffusion backdrop 320 is not photographed. In an example, lower backdrop 320 may form an elliptical arc and upper backdrop 114 may form a circular arc.
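- in a vertical section, the requirement that the top edge of the lower backdrop 320 not be photographed reduces to a line-of-sight test: the ray from the camera to that edge must pass through the upper backdrop 114. The sketch below performs such a test; all coordinates and dimensions are illustrative assumptions, not values from the disclosure.

```python
def edge_hidden(camera, lower_top_edge, upper_x, upper_bottom_y):
    """Return True if the upper backdrop (a vertical plane at x=upper_x
    whose lower edge is at height upper_bottom_y) blocks the line of
    sight from the camera to the top edge of the lower backdrop.
    Points are (x, height) pairs in a vertical section; all dimensions
    are illustrative assumptions."""
    cx, cy = camera
    ex, ey = lower_top_edge
    # Height of the sight line where it crosses the upper backdrop plane.
    t = (upper_x - cx) / (ex - cx)
    y_at_upper = cy + t * (ey - cy)
    # Blocked if the ray passes at or above the upper backdrop's lower edge.
    return 0.0 <= t <= 1.0 and y_at_upper >= upper_bottom_y

# Camera 1.2 m high at x=0; lower backdrop top edge 1.0 m high at x=3.0;
# upper backdrop plane at x=2.5. A lower edge at 0.9 m hides the seam;
# raising it to 1.1 m would leave the seam visible to the camera.
blocked = edge_hidden((0.0, 1.2), (3.0, 1.0), 2.5, 0.9)
unblocked = edge_hidden((0.0, 1.2), (3.0, 1.0), 2.5, 1.1)
```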
- FIG. 5 depicts a section view of the image formation facility 302 showing a backdrop transition diffusion panel 502 with representative lighting 504 that may be used to illuminate the backdrops.
- FIG. 5 further includes an overhead light 510 that may illuminate the item 128 on the platform 108 .
- the lighting 504 in combination with diffusion backdrops 114 and 320 and in combination with diffusion panel 502 may facilitate providing a uniform background to an item 128 being photographed.
- the upper diffusion backdrop 114 is shown mounted offset from the backdrop frame 118 by the frame extensions 116 .
- the frame extensions 116 provide an offset so that the upper backdrop 114 is positioned closer than the lower backdrop 320 to the item 128 being photographed by camera 122.
- a diffusion panel 502 may be included with the image formation facility 302 to further facilitate providing a uniform background for the item 128 .
- the diffusion panel 502 may be positioned so that only diffuse light projects onto a portion of the image formation facility 302 that includes the support ring 104 , top edge of the lower diffusion backdrop 320 and lower edge of the upper diffusion backdrop 114 . This may reduce the amount of reflected light from these elements and from any of the structural elements required to support the backdrops and the platform 108 in this portion of the facility 302 .
- the diffusion panel 502 may be constructed of translucent material, much like white office paper, textured plastic, nylon, and other materials that facilitate diffuse transmission of light.
- the diffusion panel 502 may be mounted to the support ring 104 through a transition bracket or frame similar to the backdrop frame 118 .
- a plurality of lights 504 may be mounted on a light frame 508 behind the image formation facility 302 and directed toward the diffusion backdrops 114 and 320 and the diffusion panel 502 .
- One or more of the lights 504 may be controlled individually or in combination to provide preferred lighting.
- An overhead light 510 may also be included with the image formation facility 302 to illuminate the item 128 from above.
- the overhead light 510 may be mounted to an overhead fixture, the ceiling, or to the backdrop frame 118 .
- the overhead light 510 may be individually controlled or controlled in combination with one or more of the lights 504 or the lights 124 to properly illuminate the item 128 .
- FIG. 6 shows a perspective cut away view of the image formation facility 302 further providing details of the upper backdrop 114 , the lower backdrop 320 and the diffusion panel 502 .
- a plurality of upper backdrop stabilization rods 602 may be used to provide stability to the backdrop frame 118 .
- a stabilization rod 602 may be connected between an upper end of the backdrop frame 118 and the support ring 104 .
- the cutaway perspective view of FIG. 6 shows one of the stabilization rods 602 included with the image formation facility 302 .
- FIGS. 7 through 9 depict a structure and method for maintaining a constant camera-item distance.
- the camera 122 may need to be moved about an item 128 while maintaining a constant distance from the item 128 , to ensure that images taken as the camera is moved about the item 128 have similar levels of illumination. Maintaining a constant distance may also facilitate generating images that can be sequenced to represent rotating or tilting the object in an interactive user interface display.
- a semicircular ring or similar apparatus may be provided that rotates on an axis so that the distance from the camera to the object is maintained, such as when the semicircular ring is rotated to put the camera above the object, alongside the object, or somewhere in between.
- FIG. 7 depicts an embodiment of the semicircular ring 702 .
- the camera 122 may be slidably attached to the semicircular ring 702 so that the camera 122 can move along the surface of the ring 702 to various positions for acquiring images.
- FIG. 8 depicts the semicircular ring 702 and camera 122 mounted to a partial view of an embodiment of the image formation facility 102 .
- the semicircular ring 702 is shown tilting clockwise from an overhead position 802 to a substantially horizontal position 804 .
- the item can be viewed from all positions in the quarter-sphere generated by the combined motion of the ring tilting and the camera traveling along the ring.
- the semicircular ring 702 may be tilted counterclockwise as well as clockwise, thereby facilitating capture of images of the item 128 from all camera perspectives in the hemisphere above the platform 108.
- FIG. 9 depicts a perspective view of the semicircular ring 702 and camera 122 mounted to an embodiment of the image formation facility 102 .
- the semicircular ring 702 is slidably mounted to vertical supports 902 to facilitate placing the camera at a variety of distances from the item 128 .
- the position of the semicircular ring 702 and the position of the camera 122 on the ring 702 may be automatically detected and transmitted to the image processor 130 so that the three-dimensional position of the camera may be associated with each image acquired.
- the image database 132 may receive the camera 122 position information along with each image to maintain the camera-item position association. This may facilitate automatically generating sequences of images to depict types of motion of the item such as rotating, tilting, turning, and the like.
- the three-dimensional camera position may also be used to facilitate identifying the specific image to display on the user interface display 134 in response to a user requesting an alternate view of the item 128 .
- the camera motion along the semicircular ring 702 and the movement of the ring 702 may be automatically controlled by the image processor 130 or other computing facility to automatically generate a collection of images from various three-dimensional camera positions.
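- one plausible parameterization of the camera position detected above is by the ring's tilt angle and the camera's angle along the ring; the fragment below converts such a pair of angles to Cartesian coordinates and illustrates the key property that the camera-item distance stays constant at the ring radius. The axis conventions are illustrative assumptions, not the disclosed encoding.

```python
import math

def camera_position(r, ring_tilt_deg, cam_angle_deg):
    """Cartesian camera position relative to the item (at the origin).

    The semicircular ring of radius r pivots about the x-axis by
    ring_tilt_deg; the camera sits at cam_angle_deg along the ring
    (0 and 180 degrees are the pivot points, 90 is the ring's apex).
    This parameterization is an illustrative assumption; the key
    property is that the camera-item distance is r for every pair
    of angles.
    """
    a = math.radians(ring_tilt_deg)
    b = math.radians(cam_angle_deg)
    x = r * math.cos(b)
    y = r * math.sin(b) * math.sin(a)
    z = r * math.sin(b) * math.cos(a)
    return (x, y, z)

# The camera-item distance is r no matter how the ring is tilted.
p = camera_position(2.0, 35.0, 60.0)
dist = math.sqrt(sum(c * c for c in p))
apex = camera_position(2.0, 0.0, 90.0)  # untilted ring, camera overhead
```

Recording such an (angle, angle) pair with each exposure would let the image database 132 reconstruct the three-dimensional camera position for sequencing images into rotations and tilts.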
- the object described herein could be any kind of object, including a person or other animate object.
- the image formation facility 302 may safely support a person on the platform 108 for purposes of acquiring images of the person.
- acquiring images of inanimate objects does not need to take object movement into consideration.
- the time between the at least two images used to form an item-only image must be short enough that the effect on the images of movement of the person, people, or other animate object may be minimized.
- Cameras and lighting adapted to support very rapid image acquisition may provide the requisite acquisition requirements. If camera or lighting settings must be changed between images, automation of these changes may be required in addition to the automation of acquiring the images.
- aspects of a person that may provide unique requirements for imaging may include the person's hair. Creating an appropriate image boundary or outline that includes elements as small as a human hair may be facilitated by high resolution imaging and lighting that may provide sharp contrast. Since a person's hair also may move between images, the formation and processing system 100 may compensate for this movement by masking hair so that changes in the hair do not impact generating the item-only image. In one embodiment, the hair of the high contrast image may be replaced by the hair of the detailed image during the generation of the item-only image. Other methods such as image processing, lighting adjustments, masking, and the like for minimizing the impact of hair movement on the resulting item-only image may also be applied.
- FIG. 10 depicts an embodiment of the user display 134 that may be included with the image formation and processing system 100 .
- the user display 134 may be used to display images.
- the images may be images stored in the image library 132 , or images acquired by the camera 122 , repurposed images, images combined with other graphics or text, web pages containing the images, videos comprising the images, or any combination thereof.
- the user display 134 may include a user interface 1002 with features adapted to facilitate a user viewing an item 128 from a variety of perspectives, angles, operating modes, stages of unpacking, setup and initialization, and other ways of viewing an item 128 .
- the user display 134 may be used to display images depicting activating item controls, observing item visual display features, reviewing item manuals or assembly instructions, displaying video for assembly or operation, and such other images that may facilitate informing, entertaining, or otherwise satisfying a user preference.
- the user interface 1002 of the user display 134 may include a user controlled icon or functional pointer that may depict a human hand, such as hand icon 1004 .
- the hand icon 1004 may be adapted to perform functions equivalent to touching the displayed item 128 , gripping and turning the item 128 , opening a compartment of the item 128 , depressing a control/button of the item 128 (e.g. with a finger or thumb), and other human-item interactions.
- the hand icon 1004 may change appearance to further emulate a human-item interaction.
- the hand icon 1004 may depict a human hand prepared to depress the button with a finger or thumb.
- when a user positions the hand icon 1004 near a dial of the item 128, the hand icon 1004 may depict a human hand prepared to turn the dial.
- User actions through the user interface 1002 such as clicking a button on a mouse may result in the hand icon 1004 depressing the push button or gripping the dial and subsequent movement of the mouse may turn the dial.
- the hand icon 1004 may be configured to move or rotate an image of an object.
- the image may be placed on a grid, such as with cells numbered or lettered to identify particular portions of the screen.
- a sequence of movements of the hand icon 1004 may be defined using a script, such as indicating where the hand icon 1004 should be placed on the image (e.g., cell A5) and what action should be undertaken by the hand icon 1004 in the interface (e.g., “move to cell A8,” “rotate ninety degrees to the left,” “zoom in to one hundred fifty percent of current image size,” or the like).
- the hand icon 1004 may be used by an editor to associate a series of movements of the image on the grid to provide a sequence of different images, such as for a presentation of a product.
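- a minimal interpreter for the kind of hand-icon script described above might look like the following. The command vocabulary is an illustrative assumption modeled on the examples in the text (“move to cell A8,” “rotate ninety degrees to the left,” and so on), not a disclosed format.

```python
def run_hand_script(commands, state=None):
    """Apply a sequence of hand-icon commands to a display state.

    The starting cell, the state keys, and the command grammar are
    all illustrative assumptions.
    """
    state = state or {"cell": "A5", "rotation": 0, "zoom": 100}
    for cmd in commands:
        parts = cmd.split()
        if parts[0] == "move":              # e.g. "move to cell A8"
            state["cell"] = parts[-1]
        elif parts[0] == "rotate":          # e.g. "rotate 90 left"
            deg = int(parts[1])
            state["rotation"] += deg if parts[2] == "right" else -deg
            state["rotation"] %= 360
        elif parts[0] == "zoom":            # e.g. "zoom to 150 percent"
            state["zoom"] = int(parts[2])
        else:
            raise ValueError("unknown command: " + cmd)
    return state

state = run_hand_script(["move to cell A8",
                         "rotate 90 left",
                         "zoom to 150 percent"])
```

An editor could store such a command list alongside a product's image set to replay a scripted presentation in the user display 134.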
- the image of the item 128 displayed on the user display 134 may change in response to a user action.
- a user turning a channel select dial on an image of a radio may cause a channel display on the item 128 to change value, thereby representing the response of the radio to a user turning a channel select dial.
- the change in value of the channel display may be accomplished through updating the user display 134 with a sequence of new images of the item 128 from the image library 132 .
- the hand icon 1004 may change shape and size in order to mimic the look of a real hand moving a product.
- FIG. 11 depicts various changes of the hand icon 1004 for performing additional human-item interactions.
- the hand icon 1004 may further include images of a human hand for performing at least the following: zoom functions 1102 such as zoom in by pulling 1104 toward the user to magnify the item 128 , and zoom out by pushing 1108 away from the user to de-magnify the item 128 ; rotation 1110 by gripping and turning to rotate the item 128 ; spinning 1112 such as to spin a dial or wheel of the item 128 ; and panning 1114 to move the item 128 horizontally or vertically in the user display 134 .
- the methods or processes described above, and steps thereof, may be realized in hardware, software, or any combination of these suitable for a particular application.
- the hardware may include a general-purpose computer and/or dedicated computing device.
- the processes may be realized in one or more microprocessors, microcontrollers, embedded microcontrollers, programmable digital signal processors or other programmable device, along with internal and/or external memory.
- the processes may also, or instead, be embodied in an application specific integrated circuit, a programmable gate array, programmable array logic, or any other device or combination of devices that may be configured to process electronic signals.
- one or more of the processes may be realized as computer executable code created using a structured programming language such as C, an object oriented programming language such as C++, or any other high-level or low-level programming language (including assembly languages, hardware description languages, and database programming languages and technologies) that may be stored, compiled or interpreted to run on one of the above devices, as well as heterogeneous combinations of processors, processor architectures, or combinations of different hardware and software.
- each method described above and combinations thereof may be embodied in computer executable code that, when executing on one or more computing devices, performs the steps thereof.
- the methods may be embodied in systems that perform the steps thereof, and may be distributed across devices in a number of ways, or all of the functionality may be integrated into a dedicated, standalone device or other hardware.
- means for performing the steps associated with the processes described above may include any of the hardware and/or software described above. All such permutations and combinations are intended to fall within the scope of the present disclosure.
Abstract
In embodiments of the present invention improved capabilities are described for producing background separated product images for print and on-line display. An image formation system provides lighting of a product to facilitate acquiring images that can be automatically processed to generate high resolution item-only images free of quality defects and imaging artifacts. Image processing programs accurately detect an outline of an item in a set of digital images taken using the image formation system and automatically store processed images in an image library. The images in the library may be repurposed for print, sales display, transmission to a user, on-line customer support, and the like. A user display configured with an adaptable user interface facilitates user interaction with images in the library.
Description
- This application is a continuation of U.S. patent application Ser. No. 16/214,366, filed Dec. 10, 2018 (PCCN-0001-P05-001-001-001), which is a continuation of U.S. patent application Ser. No. 15/336,436, filed Oct. 27, 2016, which issued as U.S. Pat. No. 10,194,075 on Jan. 29, 2019 (PCCN-0001-P05-001-001). U.S. patent application Ser. No. 15/336,436 (PCCN-0001-P05-001-001) is a continuation of U.S. patent application Ser. No. 14/690,922, filed Apr. 20, 2015, which issued as U.S. Pat. No. 9,501,836 on Nov. 22, 2016 (PCCN-0001-P05-001). U.S. patent application Ser. No. 14/690,922 (PCCN-0001-P05-001) is a continuation of U.S. patent application Ser. No. 14/106,119, filed Dec. 13, 2013, which issued as U.S. Pat. No. 9,014,476 on Apr. 21, 2015 (PCCN-0001-P05). U.S. patent application Ser. No. 14/106,119 (PCCN-0001-P05) is a continuation of U.S. patent application Ser. No. 13/090,857, filed Apr. 20, 2011, which issued as U.S. Pat. No. 8,611,659 on Dec. 17, 2013 (PCCN-0001-P04). U.S. patent application Ser. No. 13/090,857 (PCCN-0001-P04) is a continuation of U.S. patent application Ser. No. 11/850,083, filed Sep. 5, 2007, which issued as U.S. Pat. No. 7,953,277 on May 31, 2011 (PCCN-0001-P02). U.S. patent application Ser. No. 11/850,083 (PCCN-0001-P02) claims the benefit of U.S. Provisional App. No. 60/824,571, filed on Sep. 5, 2006 (PCCN-0001-P60). All of the above applications are hereby incorporated by reference in their entirety.
- This application relates to illumination fixtures and in particular to illuminating an item to facilitate providing background separated images.
- Images of products for advertising or other purposes are prevalent in print and on-line content. Advertisers prefer to use high quality images of products in advertisements to differentiate their products. Additionally, a high quality product image can convey critical information about a product much more quickly than a text description can. Attributes of high quality images for advertising include excellent image details, clear product edges, the absence of a visible background (thereby conveniently allowing use of the image in various contexts, such as print advertisements, web advertisements, or the like), and an absence of defects such as spots, reflections, or specular highlights. To achieve high quality product images, advertisers conventionally use photography studios that employ specialists who are experts in generating high quality images of products.
- While some functions related to high quality photography may benefit from digital image processing, such as red-eye removal, achieving excellent image details with clear product edges and no visible background typically requires manual manipulation of the images, including clicking along segments of the edges of an object on a computer screen to define an outline of the object so that the object can be separated from its background. Additionally, providing high quality images for processing requires time-intensive product placement, lighting, and photography, with each change in product positioning often requiring adjustments to lighting. Once high quality product images are captured, specialists use manual graphic editing tools to separate the item from its background so that the item can be repurposed for various types of advertisements. As with most high quality detailed manual functions, the process is often time-consuming and costly, requiring a skilled and well trained specialist. Unfortunately, the cost of operating a photography studio of such specialists is passed on to the advertiser through high fees. To achieve improved throughput or increased capacity, additional specialists must be employed and their cost is also passed on to the advertiser. Since each image must be manipulated individually due to any change in lighting or viewing angle, creating a library of images that show multiple features or views of a product can be very expensive.
- In embodiments, a photograph or digital image as described herein may be taken with lighting configured to deliver a high quality image of the object without consideration for the critical manual operation of separating the object from its background. In some cases, when taking a photograph, an outline of an object may be inadequate to ensure that the ability to separate the object from its background is of similarly high quality. Additionally, lighting in a photography studio may be optimized for lighting an object and may be inadequate to illuminate the object for the purpose of efficiently separating it from its background. As a result using a single image of an object illuminated for purposes of a high quality image can compromise a step in delivering an object image suitable for advertisers' purposes. Alternatively, the photograph or digital image may be taken in a way that compromises between obtaining a high quality image of the object and obtaining an image from which the background may be separated. In such a case, neither the quality of the image nor the quality of the separation of object from background is optimized.
- Users of the Internet have come to expect on-line shopping to facilitate a superior understanding of a product before a purchase is made. High quality product images provide a high degree of information that is important to a consumer. While user interface interactive capability and dynamic displays are common aspects of today's personal computers used for on-line shopping, cost and schedule limitations of generating the necessary library of product images severely limit the extent to which on-line advertising can take advantage of these aspects. On-line consumers are often limited to a few views of the exterior of a product and do not get the opportunity to virtually interact with the product prior to purchase. As a result, consumers are either making purchases without sufficient information, or are using up a product supplier's support resources for answers to product-specific questions they cannot get answered through their on-line shopping experience. Nevertheless, on-line advertising, product display, and on-line shopping are especially attractive to sellers due to the low cost of sale (and the resulting reduction in consumer price), and to consumers due to the access to information.
- Therefore, there exists a need to quickly and cost-effectively generate a wide range of product images for each product to facilitate the envisioned on-line shopping experience and the resulting sales cost advantage. There also exists a need to deliver the images to on-line users in an interactive display format that facilitates easy access to the features, capabilities, and other aspects of a product necessary to make a well-informed buying decision.
- An aspect of the invention herein disclosed may include an image formation and processing system for delivering high quality product images with no visible background. This aspect may be achieved by using automated image processing to generate and repurpose the product images.
- An advantage of this aspect of the invention may address key drawbacks of known methods and systems such as per image cost and processing time. By automating key activities while achieving higher quality product images, the methods and systems herein disclosed may dramatically lower per image cost while providing greater throughput and higher capacity of the high quality images. This may result in facilitating the development and management of a library of product images that can be used to fulfill a wide variety of product display and advertising needs.
- Another aspect of the invention herein disclosed may include a user display and interface to facilitate a user manipulating a product display. An advantage of this aspect of the invention may include facilitating a user to virtually touch, turn, operate, open, tilt, activate and observe a product, and retrieve more information (e.g. detailed images, text, audio description). The interface may also allow use of game controllers or virtual reality interface devices.
- The elements of the invention herein disclosed may address key aspects of a consumer purchasing process. With the invention, the consumer may view a preferred model, rather than the one on display at a local store. Details of a product, both internal and external may be easily viewed. In addition to the product, the box contents (e.g. manuals, included accessories, assembly instructions) may be available to be viewed by the consumer. Additionally, the invention facilitates providing the images and an interactive user display to deliver a rapid, automated response to a customer product query. This may result in a faster sales cycle. In embodiments, the invention may be combined with on-line sales support tools that explain product features, virtually demonstrate the product, answer frequently asked questions through images as well as text or audio, and allow a consumer to examine and evaluate a product without a time limit such as may be imposed in a physical store. Similarly, the images and interactive user interface may be beneficial to customer support organizations to guide users through debugging, assembly, accessory installation, and home servicing of products. A user may view an interactive display of installing a replacement component rather than viewing a hard copy printout of static images typically included in a product manual. The invention could be used to provide interactive images quickly in reply to a customer request for support of an aspect not already available.
- In an aspect of the invention, a method for producing a background separated image includes: acquiring at least two images of an item and a background with a camera, wherein one of the at least two images provides high contrast between the item and the background; processing the contrast image to detect a separation between the item and the background, providing an edge mask; aligning the edge mask with the boundary of the item in the other of the at least two images; extracting the other image content within the aligned edge mask to produce an item-only image; and storing the item-only image in an image library.
- In the method, the background is illuminated. The illumination may be uniform such as uniform in color or intensity. The illumination may be diffuse.
- In the method, the background is illuminated from a side opposite the item.
- In the method, the background material is textured plastic or a light diffusing film on a glass substrate.
- In the method, the camera is a digital camera. The digital camera may have a resolution of at least 2000×2000 pixels, or it may be a digital video camera.
- In the method, acquiring further includes configuring a plurality of camera settings for each of the at least two images wherein at least one camera setting is different for each of the at least two images.
- In the method, the processing the contrast image is performed automatically by an image processor.
- In the method, the processing is performed by an image processor executing a combination of standard and custom image transfer and processing software. The processing may include identifying a set of points of the background that are adjacent to a set of points of the item.
- In the method, the background of the contrast image is substantially brighter than the item. The item of the contrast image is substantially black. The edge mask may be an image comprising the item boundary, a list of vectors in a two-dimensional coordinate space, or a list of points in a two-dimensional coordinate space. Each of the at least two images references a common origin in a two-dimensional space. The edge mask is referenced to the common origin so that the edge mask can be automatically aligned to the item in the other image of the at least two images.
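- As one illustrative reading of the point-list representation, a boundary point may be taken to be an item pixel with at least one background neighbor, echoing the adjacency test described above. The function below is a sketch under that assumption; the name and the 4-neighbor rule are not prescribed by the disclosure.

```python
def edge_mask_points(item_mask):
    """Represent the edge mask as a list of (row, col) points, one of the
    claimed representations. item_mask is a 2-D grid of booleans (True for
    item pixels). A pixel is on the boundary when it belongs to the item
    and at least one 4-neighbor belongs to the background; coordinates are
    in the image's own pixel grid, i.e. the common origin of both exposures."""
    rows, cols = len(item_mask), len(item_mask[0])
    pts = []
    for r in range(rows):
        for c in range(cols):
            if not item_mask[r][c]:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                # Off-image neighbors count as background.
                if not (0 <= nr < rows and 0 <= nc < cols) or not item_mask[nr][nc]:
                    pts.append((r, c))
                    break
    return pts
```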
- In the method, aligning includes using an edge detection algorithm to identify a plurality of edges of the item in the other image of the at least two images and fitting the edge mask to the detected plurality of edges.
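- One simple way to realize the fitting step, offered purely as a sketch, is a brute-force search over small translations that maximizes overlap between the mask's boundary points and the edges detected in the other image; the disclosure does not prescribe this particular algorithm, and the names are illustrative.

```python
def fit_mask_offset(mask_points, detected_edges, search=3):
    """Slide the boundary points taken from the contrast exposure over the
    edge pixels detected in the normally lit exposure, and keep the small
    (d_row, d_col) shift with the most overlap. mask_points is a list of
    (row, col) tuples; detected_edges is a set of (row, col) edge pixels."""
    best, best_hits = (0, 0), -1
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            # Count boundary points landing on a detected edge at this shift.
            hits = sum((r + dr, c + dc) in detected_edges for r, c in mask_points)
            if hits > best_hits:
                best, best_hits = (dr, dc), hits
    return best
```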
- In the method, extracting includes deleting all content of the other image of the at least two images other than the content within the aligned edge mask. Deleting may include changing content to white, or marking the content as deleted.
- In the method, extracting includes selecting all image content within the aligned edge mask and copying the selected image content to a new image, providing a new image containing only the selected content.
- In the method, the method further comprises removing image defects selected from a set consisting of product surface blemishes, dust, and specular highlights.
- In the method, the item may be a product, or an accessory such as product packaging, an owner manual, a warranty certificate, a cable, a controller, or a data storage disk.
- In the method, the item is one or more assembly parts such as screws, nuts, bolts, wire, wire fasteners, a wrench, a screwdriver, and a nut driver.
- In the method, the item is a product for sale through a channel such as an on-line auction, an on-line purchase, a telephone purchase, an in-person purchase, or a mail-order purchase.
- In the method, the item is for rent.
- In another aspect of the invention, a system for producing a background separated image, includes a camera for acquiring at least two images of an item and a background, wherein one of the at least two images provides high contrast between the item and the background, providing a contrast image; an image processor for processing the contrast image to detect a separation between the item and the background, providing an item edge mask, wherein the edge mask is aligned with the boundary of the item in the other of the at least two images, and the other image content within the aligned edge mask is extracted, providing an item-only image; and an image library for storing the item-only image.
- In the system, the background is illuminated. The illumination may be uniform, such as uniform in color or intensity. The illumination may be diffuse.
- In the system the background is illuminated from a side opposite the item. The background material may be textured plastic or light diffusing film on a glass substrate.
- In the system, the camera is a digital camera. The digital camera may have a resolution of at least 2000×2000 pixels or may be a digital video camera. The camera includes a plurality of camera settings wherein at least one of the plurality of camera settings is different for each of the at least two images.
- In the system, the image processor is an image processor executing a combination of standard and custom image transfer and processing software.
- In the system, the background of the contrast image is substantially brighter than the item. The item of the contrast image is substantially black.
- In the system, the edge mask may be an image comprising the item boundary, a list of vectors in a two-dimensional coordinate space, or a list of points in a two-dimensional coordinate space.
- In the system, the at least two images include a common origin in a two-dimensional space. The edge mask includes the common origin so that the image processor can automatically align the edge mask to the item in the other image of the at least two images.
- In the system, an edge detection algorithm is included to identify a plurality of edges of the item in the other image of the at least two images, and a boundary matching algorithm is included for fitting the edge mask to the detected plurality of edges.
- In the system, algorithms are further included for one or more of deleting all content in the other of the at least two images that is not within the aligned edge mask, changing to white all content in the other of the at least two images that is not within the aligned edge mask, and marking as deleted all content in the other of the at least two images that is not within the aligned edge mask.
- In the system, the item-only image includes only the image content within the aligned edge mask.
- In the system, an algorithm is included for correcting image defects such as product surface blemishes, dust, and specular highlights.
- In the system, the item may be a product, or an accessory such as product packaging, an owner manual, a warranty certificate, a cable, a controller, or a data storage disk.
- In the system, the item is one or more assembly parts, such as screws, nuts, bolts, wire, wire fasteners, a wrench, a screwdriver, and a nut driver.
- In the system, the item is a product for sale through a channel such as an on-line auction, an on-line purchase, a telephone purchase, an in-person purchase, or a mail-order purchase.
- In the system, the item is for rent.
- In another aspect of the invention, a method of accumulating a set of item-only images of an item comprises providing a camera for acquiring images of an item; providing an illumination fixture for illuminating the item; acquiring a plurality of images of the item from different perspectives, wherein the item is illuminated by the fixture; automatically processing each of the plurality of images to generate item-only images; and associating the set of item-only images in an image library for presentation to a user.
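- The accumulation loop described above might be sketched as follows, where `capture` and `process` are hypothetical stand-ins for camera acquisition and the automatic background-removal step; the structure is an illustrative assumption, not the claimed implementation.

```python
def build_image_set(angles, capture, process):
    """Accumulate item-only images of one item from several perspectives.
    'capture' acquires an image of the illuminated item at a given angle;
    'process' strips the background automatically. The resulting dictionary
    plays the role of the associated set stored in the image library."""
    library = {}
    for angle in angles:
        raw = capture(angle)           # acquire an image at this perspective
        library[angle] = process(raw)  # generate the item-only image
    return library
```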
- In another aspect of the invention, a method of presenting item-only images comprises providing a set of item-only images of an item; selecting at least one item-only image; combining the at least one selected item-only image with a background, providing a repurposed image; and receiving an item request from a user, and in response thereto presenting the repurposed image to the user.
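- The combining step can be sketched as a simple composite: where the item-only image is pure white (the removed background), show the new background instead. The white-threshold heuristic below is an illustrative assumption; keeping the stored edge mask alongside the image would avoid misclassifying genuinely white item surfaces.

```python
import numpy as np

def repurpose(item_only, new_background, white_threshold=250):
    """Combine an item-only image (item on a pure white field) with a new
    background to produce a repurposed image. Pixels whose every channel is
    at or above the threshold are treated as empty and replaced."""
    # Empty where even the darkest channel is near-white.
    empty = item_only.min(axis=2) >= white_threshold
    out = item_only.copy()
    out[empty] = new_background[empty]
    return out
```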
- In the method, selecting at least one item-only image is based at least in part on the item request.
- In the method, the method further comprises receiving a request from a user to adjust a perspective view of the item and in response thereto: selecting a second item-only image from the set of item-only images; combining the second item-only image with a background, providing a perspective item image; and presenting the perspective item image to the user.
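- Selecting the second item-only image in response to a rotate request might amount to a nearest-angle lookup over the stored set, as in this illustrative sketch (the angle-keyed dictionary and wrap-around metric are assumptions, not claimed details):

```python
def nearest_view(image_set, requested_angle):
    """Pick the stored item-only image whose capture angle is closest to the
    requested perspective, wrapping at 360 degrees. image_set maps capture
    angles (degrees) to item-only images."""
    def angular_distance(a, b):
        d = abs(a - b) % 360
        return min(d, 360 - d)
    best = min(image_set, key=lambda a: angular_distance(a, requested_angle))
    return image_set[best]
```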
- In the method, the repurposed image is presented to the user in an on-line product purchase activity, an on-line product support activity, or both.
- In the method, the item-only image includes a representation of an interior of a product.
- In the method, the background includes one or more icons that represent aspects of the item. Selecting one of the one or more icons presents the corresponding aspect of the item. The aspects selectable through the one or more icons include an internal view of the item, a manual, one or more accessories, assembly instructions, product packaging, service plans, peripherals, and alternate configurations of the item.
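- The icon-to-aspect relationship described above is essentially a lookup; the mapping below is a hypothetical sketch (the icon names and fallback are invented for illustration and follow only the aspect list given above).

```python
# Hypothetical mapping from background icons to the item aspects they reveal.
ICON_ASPECTS = {
    "x-ray": "internal view of the item",
    "book": "manual",
    "plug": "accessories",
    "wrench": "assembly instructions",
    "box": "product packaging",
}

def select_icon(icon):
    """Return which aspect of the item a click on the given icon presents."""
    return ICON_ASPECTS.get(icon, "item overview")
```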
- In another aspect of the invention, a method of customer support comprises: receiving a support request from a user; acquiring one or more item-only images of an item identified in the support request; repurposing the one or more item-only images for presentation to the user; and providing the repurposed images to the user as a response to the support request.
- In the method, the support request is selected from a list consisting of troubleshooting, assembly, accessory installation, home service, upgrading, and returning the item.
- In the method, providing the repurposed images includes one or more of displaying the repurposed images on a computer, rendering a video comprising the repurposed images, printing the repurposed images, sending the repurposed images as an email attachment, posting the repurposed images to a web site, and storing the repurposed images on a server.
- These and other systems, methods, objects, features, and advantages of the present invention will be apparent to those skilled in the art from the following detailed description of the preferred embodiment and the drawings. All documents mentioned herein are hereby incorporated in their entirety by reference.
- The invention and the following detailed description of certain embodiments thereof may be understood by reference to the following figures:
- FIG. 1 depicts an image formation and processing system for rendering high quality item images without any background;
- FIG. 2 depicts a method of processing images captured with the image formation system of FIG. 1;
- FIG. 3 depicts an alternate embodiment of the image formation facility;
- FIG. 4 depicts an overhead view of the image formation facility;
- FIG. 5 depicts a section view of the image formation facility;
- FIG. 6 depicts a perspective cut away view of the image formation facility;
- FIG. 7 depicts a front view of a semicircular camera support ring;
- FIG. 8 depicts a side view of the semicircular camera support ring;
- FIG. 9 depicts a perspective view of the semicircular camera support ring with vertical supports;
- FIG. 10 depicts a view of a user interface display of an image of an item that was photographed with the image formation system; and
- FIG. 11 depicts various representations of a hand icon that may be used to manipulate images displayed in the user interface display.
- An aspect of the present invention involves an image formation and
processing system 100 for producing high quality images of items so that the images contain no visible background or are readily separated from the background. An aspect of the image formation and processing system 100 includes rendering high quality images of items that are free of all background. To render such high quality item-only images, the image formation and processing system 100 may be adapted to eliminate any impact on the uniformity of background illumination such as shadows or reflections cast by the item being photographed. - Film photography exposes a photographic film to all elements within a camera's field of view, thereby recording the item and any visible background on the film. Even small variations in a nearly uniform background may be detected and recorded by film photography. Therefore post processing to separate the item from the background is necessary. To achieve a background-less image in film photography would require that only the object be printed on the photograph paper. Because of the film exposure process described above, there is no direct equivalent of a background-less image in film photography.
- While also capturing all elements within a camera's field of view, digital photography creates a digital image that can be processed through an image processor in which image processing techniques may be applied to remove the background. One image processing technique may include automatically identifying the outline of the item so that the image processor can remove image data outside of the item outline (such data being the background). The accuracy of outline identification can significantly affect the quality and usefulness of the resulting product image.
- The image formation and
processing system 100 may combine digital image formation using the image formation facility 102, camera 122, and lights 124, with image processing using the image processor 130 to identify the outline of an item 128 to a high degree of accuracy. Identification of an outline of an item 128 may be an automatic process. The image formation and processing system 100 may further apply image processing techniques to automatically extract the item 128 data from acquired images using the identified outline. - Objects often include complex three-dimensional outlines that may not transfer well when being captured by a two-dimensional digital camera image sensor. This is particularly apparent when a three-dimensional object is photographed in perspective view. Also, variations or inadequate lighting may further complicate the distinction of a trailing surface of the object and its background. By generating consistently uniform background illumination, the invention provides images that allow these complex three-dimensional outlines to be represented as simple sharp contrast changes in a two-dimensional image. Throughout this disclosure the phrase “such as” means “such as and without limitation”. Throughout this disclosure the use of “example”, such as in “for example” and “in an example” means an example and without limitation.
- Referring to
FIG. 1, the image formation and processing system 100 may comprise an image formation facility 102 for uniformly illuminating an item 128, a camera 122 for capturing and transferring the image, lighting 124 for illuminating visible surfaces of the item 128, an image processor 130 for processing images, an image library 132 containing images, and a user display 134 configured for interactively displaying the item images. - The
image formation facility 102 may be comprised of a structural support ring 104 into which a transparent platform 108 is slidably inserted. The support ring 104 circumferentially captures the platform 108 and may form a substantially horizontal surface for supporting and rotating the item 128 to facilitate acquiring images of the item 128 from a plurality of directions. A light reflection diffusion ring 110 may border the structural support ring 104 on the platform 108 to eliminate unwanted reflection of light from the support ring 104. The light reflection diffusion ring 110 may prevent non-uniform illumination by diffusing light reflecting from the support ring 104. Working height support for the structural ring 104 may be provided by a plurality of legs 112. The legs 112 may further support a curved diffusion backdrop 114 through an association with a backdrop frame 118. The diffusion backdrop 114 may be illuminated to facilitate a uniformly illuminated background for the item 128. The legs 112 may also support a diffusion panel 120 below and substantially parallel to the platform 108 for providing uniform light to the platform 108. - When an
item 128 is placed approximately centered on the platform 108, a plurality of images of the item 128 may be acquired with the camera 122 and transferred to processor 130 for processing and storage in the image library 132. The camera 122 and lights 124 may be integrated with the image formation facility 102, or they may be separately supported by a stand (not shown), or held by a photographer. - As shown in
FIG. 1, a platform 108 may be a clear glass plate. Such a plate may provide a stable, strong surface on which an item 128 may be placed for photographing. Additionally, a glass plate may provide excellent light transmission properties to facilitate a uniformly illuminated surface for photographing an item 128. - In embodiments, the
platform 108 may be constructed of light diffusing plastic. In other embodiments, the platform 108 may be a glass plate with light diffusing film applied to one or both sides. In an example of such an embodiment, a non-diffuse light may illuminate the platform 108 from below and the platform 108 may diffuse the light. The diffuse light from platform 108 may uniformly illuminate the item 128 from below and the platform 108 may appear as a substantially uniform white background when imaged by a digital camera. Anti-reflective glass may be used for the platform 108 to reduce reflections. In embodiments, the glass of the platform 108 may be low-iron glass to reduce reflections. - Polarized light may facilitate acquiring images that enable separation of the
item 128 from a background. Polarized light may be provided by a polarized light source or may be provided by polarizing filters disposed between the light source and the item 128 and camera 122. To provide polarized light, the backdrop 114 and the glass plate 108 may be wrapped with a polarizing filter so that light passing through the backdrop 114 and plate 108 may be polarized. A second polarizing filter may be included in the optical system of the camera. When the filter in the camera optical system is tuned to reject light that is polarized by the backdrop 114 and/or plate 108, light that passes directly from the backdrop 114 and the plate 108 to the camera 122 will not pass through the camera optical system filter. In this way, the light that makes up the background for the item 128 will not be acquired by the camera 122. - Polarized light that impacts the
item 128 being photographed may scatter as reflected light thereby changing the polarization of the reflected light. Because the filter in the camera optical system is tuned to reject only light that matches the polarization of the background light, the light reflected off the item 128 may be acquired by the camera 122 because it may pass through the camera optical system filter. The resulting image acquired by the camera 122 may include only the light that is scattered by reflecting off of the item 128. Thus, a camera filter tuned to filter out light that is polarized by the polarized backdrop 114 and polarized plate 108 may facilitate good separation between the image of the object (composed of scattered light) and the background (composed of polarized light). - The
support ring 104 may be formed of steel, aluminum, or other material that is sufficiently strong to stably support the platform 108, backdrop 114, backdrop frame 118, and item 128. In embodiments, the platform 108 is a round, disk-shaped platform and the support ring 104 is a ring with an outer diameter slightly larger than the platform 108. In this example, a support ring 104 may have a U-shaped profile with the U channel extending inward to capture a portion of the circumference of the platform 108, thereby supporting the platform 108. The support ring 104 may be fastened to the legs 112 so that the support ring 104 and the legs 112 do not move when the platform 108 is rotated within the U-shaped channel. - To facilitate smooth rotation of the
platform 108 within the support ring 104, an intervening material, such as plastic or self-lubricating nylon may be disposed between the platform 108 and the support ring 104. In embodiments, the support ring 104 provides a user access region 106 to a portion of the platform 108 circumference to facilitate rotating the platform 108. In another embodiment, the support ring 104 may be “L” shaped so that the lower horizontal extension of the “L” supports the platform 108 from below. In such an embodiment, the intervening material may also be used to facilitate smoothly rotating the platform 108. - The
legs 112 may be constructed of aluminum, steel, or other material that is capable of supporting the image formation facility 102. In the embodiment depicted in FIG. 1, three legs are assembled to the support ring 104. At least one of the legs 112 may include a leveling mechanism to facilitate leveling the platform 108. In the embodiment depicted in FIG. 1, each leg may include a leveling mechanism to allow adjustment of the platform 108 to a horizontal position on a variety of uneven surfaces. - The
diffusion panel 120 may be constructed of a light diffusing material such as textured plastic and may be assembled to the legs 112. The diffusion panel 120 may be constructed of glass with a diffusion film applied to one or more surfaces. To achieve a high degree of uniformity of light, the diffusion panel 120 may be constructed to deliver a high degree of haze. Haze is a measurement of wide-angle scattering that causes a lack of contrast across the diffusion panel 120. The diffusion panel 120 may include a light source, such as one or more fluorescent lights below the diffusion panel 120, that enables the diffusion panel 120 to transmit diffused light to the platform 108. The diffusion panel 120 and accompanying light source may be mounted to the legs 112 at an intermediate position between the platform 108 and the leveling mechanism at the bottom of the legs 112. The diffusion panel 120 may be disposed within the image formation facility 102 so that only the diffusion panel 120, the item 128, and the backdrop 114 are imaged by the camera 122. This may be accomplished by extending the diffusion panel 120 beyond the diameter of the support ring 104. This extension generally may be beneath the platform 108 and support ring 104 in the direction of the backdrop 114. - The
diffusion backdrop 114 may be supported by a backdrop frame 118 so that the lower edge of the diffusion backdrop 114 is in close proximity to a top surface of the platform 108. In embodiments, the backdrop frame 118 may mount on top of the support ring 104. In other embodiments, the backdrop frame 118 construction may include upward vertical extensions of at least two of the legs 112. In such an embodiment, each of the legs 112 may attach to the support ring 104 through a bracket that secures the support ring 104 to a vertical portion of the leg 112. - The
backdrop frame 118 may include horizontal extensions 116 to which the diffusion backdrop 114 is attached. In embodiments, the horizontal extensions 116 may be slidably attached to the backdrop frame 118 so that they may be adjusted horizontally and vertically, and secured in the adjusted position. Such an embodiment may allow precise positioning of the diffusion backdrop 114 relative to the platform 108 to facilitate a uniformly illuminated background for the item 128. - In embodiments, the bottom of the
frame 118 may be constructed like a drum. An inside ring may be wrapped, such as with a white screen, then pulled tight to form a drum-like structure, then secured by an outside ring. - In embodiments, the
platform 108 is supported by legs, which may need to be reinforced to support heavy objects. In embodiments, a motor assembly may be used to rotate the platform 108. In embodiments, bearings, such as air bearings, are used to support the platform 108. - The
diffusion backdrop 114 may be curved, similarly to the support ring 104. However, the curve of the diffusion backdrop 114 may be of a smaller radius than the support ring 104 so that the surface of the backdrop 114 facing the item 128 is within the radius of the support ring 104. The diffusion backdrop 114 may be constructed of a light diffusing material such as textured plastic. The diffusion backdrop 114 may be constructed of glass with a diffusion film applied to one or both surfaces. To achieve a high degree of uniformity of light, the diffusion backdrop 114 may be constructed to deliver a high degree of haze. Haze is a measurement of wide-angle scattering that causes a lack of contrast across the diffusion backdrop 114. In embodiments, the diffusion backdrop 114 may be illuminated. A light source for illuminating the diffusion backdrop 114 may be attached to the backdrop frame 118 or the diffusion backdrop 114. A light source for the diffusion backdrop 114 may be one or more fluorescent lights arranged so that when the light source is turned on, the diffusion backdrop 114 generates uniform light intensity and color. - In embodiments, the light source for
diffusion backdrop 114 and the diffusion panel 120 may be controlled individually so that each may provide the same, similar, or different light intensities and colors. Light for the diffusion backdrop 114 and for the diffusion panel 120 may be provided by a variety of light technologies including fluorescent, incandescent, LED, electroluminescent, halogen, and such lighting that may provide a range of intensity, and may provide a substantially white light. - When the
diffusion panel 120 and the diffusion backdrop 114 are properly illuminated, an item 128 on the platform 108 may appear in a photograph as having no visible means of support. This may be accomplished by the diffusion panel 120 and the diffusion backdrop 114 providing a uniform, substantially white backdrop for the item 128. - Referring further to
FIG. 1, the image formation facility 102 may be constructed to support and facilitate photographing a wide variety of items 128 ranging from small items such as a coin, to large items such as a high-definition television. The embodiment depicted in FIG. 1 may be suitable for photographing office and personal electronics products, e.g. computers such as desktops, notebooks, tablet PCs, personal digital assistants (PDA), servers, workstations, fax servers, internet-cache servers, barebones systems, POS/kiosk systems; monitors & displays such as CRT monitors, LCD monitors, plasma monitors, projectors; printers such as color laser, mono laser, ink-jet, photo printers, multifunction units, dot-matrix, plotters, label printers, bar code printers, specialty printers, receipt printers, scanners, point-of-sale printers; software such as antivirus software, business software, development tools, education & entertainment, graphics & publishing, internet software, network mgt. software, OS & utilities, security; electronics such as digital cameras, film cameras, camcorders, security cameras, games, digital media players, televisions, home audio, home video, home furniture, GPS, telephony, appliances, office equipment; networking such as adapters, client, communications, conferencing, hubs, infrastructure, KVM switches, modems, routers, security, software, switches, test equipment, wireless; storage devices such as CD drives, CD-DVD duplicators, CD-DVD servers, DVD drives, fibre channel switches, flash drives, floppy drives, hard drives, magneto-optical drives, media, network attached storage, removable drives, SAN equipment, storage enclosures, tape automation, tape drives; accessories such as cables, memory, flash memory, power & surge protection, computer components, audio hardware, video hardware, keyboards & mice, batteries, carrying cases, computer accessories, printer supplies, CD-DVD accessories, monitor & display accessories, mounting hardware, camera-camcorder accessories, PDA accessories, network accessories, projector accessories, scanner accessories, computer furniture, phone, cellular accessories, office & cleaning supplies, and the like. - The
image formation facility 102 may also be suitable for facilitating photography of a wide variety of other products such as office products including AV supplies & equipment, basic supplies & labels, binders & accessories, janitorial, business cases, calendars & planners, custom printing, desk accessories, executive gifts, filing & storage, paper, forms, envelopes, pens, pencils & markers, printer & fax supplies, promotional products, school supplies; phones & accessories, or other products found in office, school, or home environments, and the like. - The
image formation facility 102 may also be suitable for facilitating photography of other items such as groceries, produce, cuts of meat, deli products, health and beauty products, clothing, towels, pillows, artwork, models, tableware, collectibles, antiques, potted plants, financial instruments such as bonds, certificates of deposit, currency, and the like. In addition, the image formation facility 102 may be suitable for facilitating photography of humans, such as models. Background-less images of models may be useful in advertising by allowing an advertiser to “place” a model in an advertisement that includes any background. In an example, an advertiser may use the image formation facility 102 to photograph a model and then place the model into an advertisement of a sunset taken at a different location. - In general, any
item 128 that may remain stable when placed on the platform 108 within the working volume of the image formation facility 102 may take advantage of aspects of the invention that facilitate high resolution photography. In an embodiment of the image formation facility 102, the working width may be approximately 72 inches, and the working height of the backdrop 114 above the platform 108 may be approximately 36 inches. -
Items 128 may be photographed within their packaging, partially removed, or fully removed from their packaging to record item 128 unpacking or repacking for display as an assembly video. Additionally, the package contents such as documentation, software, warranty cards, cables, batteries, remote controllers, car chargers, wall chargers, assembly tools, assembly fasteners, loose-supplied parts, anti-static packaging and the like may be photographed. - An
item 128 requiring assembly by a user may be photographed at various stages of assembly, or may be continuously photographed during assembly to generate an assembly procedure video. Additionally, battery installation, or product installation (such as connecting a power cord, or attaching speakers) may be photographed. By using proper audio recording equipment, such as may be provided by a digital camcorder embodiment of the camera 122, audio may also be recorded and/or combined with the assembly procedure video to provide an audio description of assembly steps, techniques, benefits, and the like. - An
item 128 may be photographed in various states of initialization and operation. Photographing a user turning on the item 128, inserting a CD-ROM, changing a channel, selecting an item setting (e.g. for audio equipment, audio settings like bass, treble, balance, and the like), opening a compartment, closing a compartment, turning off the item 128, and other operation related settings may facilitate a user's understanding of interacting with the product. Such features may include any features that the user can change, control, or modify. - An
item 128 may be photographed from a plurality of angles and positions. In an example, an item 128 front, back, side, top, or bottom may be photographed. Additionally, an item 128 may be photographed with the focal plane other than parallel to a surface of the item 128, such as when the item 128 is photographed with the camera 122 directed at a 45 degree angle relative to vertical. A perspective view photograph may include more than one surface of the item 128, such as the front and top of the item or a side and back as if the item were viewed looking toward a rear corner. Perspective views may provide visual confirmation of the shape of the item 128. - Referring to
FIG. 1, the camera 122 of the image formation and processing system 100 may be a digital still camera. In an example, the camera 122 may have a resolution of at least 2000×2000 pixels. In other embodiments, the camera 122 may have a resolution lower than about 2000×2000 pixels. It may be understood by one skilled in the art that a camera 122 with any resolution may be used with the image formation and processing system 100. - The
camera 122 may also provide controls for a variety of settings that impact the acquired image. In embodiments, a camera may provide user settings such as resolution, zoom (optical and/or digital), shutter speed, f-stop setting, focus, auto exposure, white balance, macro mode, flash operation, and the like. The camera 122 may also include capabilities which affect performance and quality such as lag time, burst mode for rapid succession (such as video), CCD or CMOS sensor, image storage media (e.g., removable), 24-bit or greater color depth, low-loss or lossless image compression, image output format support for standards such as JPEG, RAW, and TIFF, and the like. Additional camera features such as interface type (RS232, RS422, parallel port, SCSI, USB, IEEE 1394, wireless, and infrared), batteries (Alkaline, NiCad, NiMH, Li-Ion), on-screen display, date/time in image, sound recording, software (download, editing), and the like may also be factored into the selection of a camera 122. - Exemplary digital cameras such as Agfa ePhoto, Canon PowerShot, Canon EOS, Casio QV, Concord Eye-Q, Fujifilm FinePix, HP PhotoSmart, Jenoptik Jendigital, Kodak DC, Konica Minolta DiMAGE, Kyocera Samurai, Nikon Coolpix, Olympus E, Pentax EI, Pentax Optio, Polaroid PDC, Premier DC, Pretec DC, Samsung Digimax, Sanyo DSC, Vivitar ViviCam, Yashica Finecam, or a digital video camera such as Canon Elura, Canon Optura, Canon ZR, Hitachi DZ, JVC Everio GZ, JVC GR, Panasonic Proline, Panasonic PV, Panasonic VDR, Samsung SC, Sharp Viewcam, Sony DCR, Sony Handycam, Sony HDR, Toshiba GSC, and the like may meet one or more requirements for use with the image formation and processing system 100. - In an aspect of the invention,
lighting 124 may be used to illuminate the item 128 being photographed. Lighting 124 may be coordinated with photographic settings of the camera 122 to facilitate capture of high quality images of the item 128. In an example, increasing the amount of light incident on the item 128 may cause the item 128 to wash out in the image unless the camera 122 shutter speed is increased. Faster shutter speeds may reduce the amount of light that reaches the camera 122 image sensor, thereby reducing the potential to wash out the item 128. Faster shutter speeds may also reduce the amount of light reaching the camera 122 from the background. This may facilitate eliminating the background by causing the background to appear black, further enhancing the distinction of the item 128 from its background. - In embodiments, the
lighting 124 source may be one or more strobe lights. The actuation phase and duration may be coordinated with the camera 122 shutter speed and timing in order to facilitate high quality image capture. In other embodiments, the lighting 124 may be a continuous illumination source, such as fluorescent, incandescent, LED, or other continuous illumination lighting technology. For purposes of illuminating an item 128 to be photographed, the lighting 124 may use a plurality of lights at one or more positions facing the item 128 from one or more angles, such as from the front, top, and sides. The lighting 124 may be used to create lighting effects such as shadow, highlights, depth, contrast, and other visual effects. Lighting may be white, although other color lights may be used to create color effects. Any combination of light types, colors, quantities, and positions may be used to illuminate the item 128. - In embodiments, the
lighting 124 may be used in combination with illumination from the backdrop 114 and the platform 108 to create a high contrast image that includes sharp edge definition of the item 128. The camera 122 may be configured with settings that facilitate acquiring such a high contrast image. - In embodiments, there may be a benefit to changing the camera settings between the camera exposure used to photograph the object and the exposure used to distinguish the object from the background. Camera settings such as shutter speed and aperture may be adjusted, such as to provide a very high quality image of the object in one case, while providing image contrast between the background and the object in the other case. Shutter speed and aperture may be adjusted, among other things, to account for different lighting conditions, with the object being front lit to obtain a high quality image that will be used for reproduction, and the object being back lit (typically with less light) to obtain a sharp distinction between the image and its background. In an example, the main image may be shot with a shutter speed of 1/200th of a second at f/22, while the backlit image may be shot with a shutter speed of 1/4th of a second at f/14. Other combinations may be suitable, depending on the respective lighting conditions for the two shots. In embodiments, the shutter speed and aperture are automatically adjusted between shots, such as under control of a computer facility.
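The shutter/aperture trade-off in the example above can be checked with the standard exposure-value relation EV = log2(N²/t); the helper below is illustrative and not part of this disclosure.

```python
import math

def exposure_value(f_number, shutter_s):
    """EV = log2(N^2 / t); a higher EV means less light reaches the sensor."""
    return math.log2(f_number ** 2 / shutter_s)

front_lit = exposure_value(22, 1 / 200)  # main, front-lit exposure
back_lit = exposure_value(14, 1 / 4)     # backlit silhouette exposure
print(f"front-lit EV {front_lit:.2f}, backlit EV {back_lit:.2f}")
print(f"difference: {front_lit - back_lit:.1f} stops")
```

The roughly seven-stop gap between the two exposures is consistent with the backlit shot being made under a much lower light level than the front-lit shot.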
- In embodiments, the lighting for the backlit image or the primary image may be provided by LEDs, such as an array of LEDs for the back light that provide even illumination of the background.
- Aperture settings may also be adjusted to address depth of field issues. With a small aperture, the lens used to take an image may have a wider depth of field.
- In embodiments, it may be desirable to use the aperture and exposure settings of the camera to eliminate a low light originating behind or visible around the
item 128. This may be accomplished by reducing the camera aperture and exposure time to effectively shut out the lower light level so that only the front illumination of the item is captured by the camera. By selecting an effective combination of aperture and shutter speed, it may be possible to get a high quality front-lit image with good contrast. - Referring to
FIG. 1, the image processor 130 for processing images of the image formation and processing system 100 may comprise a general purpose computer executing a combination of customized and commercially available software. In embodiments, the image processor 130 may be a personal computer executing customized scripts automating Adobe Photoshop image processing software. - The
image processor 130 may be a single computer or a network of computers in a client-server configuration. In embodiments, the image processor may connect through a network to other network devices such as the camera 122, the image library 132, the user display 134, other computers, printers, scanners, data storage devices, network bridges, routers, and other computer network devices. - The images acquired by the
camera 122 may be transferred to the image processor 130 over a network such as an intranet, Ethernet, or other data communication network suitable for image transfer. In an example, the camera 122 may transfer images to the image processor 130 through a USB port. In another example, the camera 122 may transfer images to the image processor 130 through an IEEE 1394 port. The image may be transferred through a wired connection or a wireless connection between the camera 122 and the image processor 130. In embodiments, the image processor 130 may be integrated into the same enclosure as the camera 122. - Images may be transferred from the
camera 122 to the image processor 130 during image acquisition by the camera. Images may be transferred individually or in groups, such as in batch mode, between the camera 122 and the image processor 130. The camera 122 may have storage capacity for a plurality of images that may facilitate batch mode image transfer. Images may be transferred automatically upon acquisition by the camera 122, by a user directed action, on a predetermined schedule, or as a result of another action such as a new image being acquired by the camera 122. Image transfer may adhere to one or more protocols such as PictBridge, WIA, Picture Transfer Protocol, PTP/IP, Media Transfer Protocol, and other protocols which facilitate transfer of images from one device to another. - The
image processor 130 may be executing custom software that may be coded in a variety of programming languages such as C, C++, Objective C, AppleScript, Java, JavaScript, HTML, Visual Basic, or a script language compatible with a commercial image processing software product such as Adobe Photoshop. The customized software may be embodied in a program, a macro, a script, or other suitable software embodiment. - In an embodiment, the methods and systems disclosed herein may be accomplished using a computer system that uses an operating system that supports a graphical user interface, such as the Apple Mac OS X operating system, which includes integrated applications such as Finder for interfacing with the storage of files and folders, and Xcode, which allows the building of Mac compatible scripts and fully operational applications that may act on their own or interact with existing software. In embodiments, Xcode supports developers using C, C++, Objective C, AppleScript, and Java. The
image processor 130 may be primarily written in AppleScript and JavaScript. These applications may interface and run commands between Finder and Adobe Photoshop, where the actual image processing may occur. - The
image processor 130 may be executing commercially available software such as Adobe Photoshop, for example. However, other commercially available image processing software such as Captiva, Tinderbox, Shake, and similar image processing software capable of automation of at least some functionality may be used. In embodiments, the image processor 130 may execute a combination of commercial software products and custom software programs to perform the necessary functions of interfacing with the camera 122, automatically processing the transferred images to generate an item-only image, and storing the image in the image library 132. - The image formation and
processing system 100 may include an image library 132 containing images processed by the image processor 130. The image library 132 may be adapted to facilitate the automated storage and retrieval of item images based on some aspect of the item 128. In an example, the image library 132 may organize images according to the item 128 model number, part number, serial number, vendor code, option, stock keeping unit (SKU), or other information that easily associates an image in the image library 132 to an item 128. The image library 132 may include all images acquired by the camera 122 as well as the images processed by the image processor 130. The image library 132 may be configured based on a file system such as NTFS, FAT, HFS+, UNIX, Joliet, and the like. Alternatively, the image library 132 may be configured as a database, such as a SQL or Access database. - In embodiments, images in the
image library 132 may be used for display or printing and may be repurposed for a specific use by the image processor 130. In an example, one or more images in the image library 132 may be presented on the user display 134 for purposes of facilitating a user viewing and learning about the item 128 depicted in the images. In another example, one or more of the images in the image library 132 may be provided to a printer for purposes of printing a catalog or other advertisement displaying the item 128 depicted in the images. - An alternate embodiment of the
image formation facility 102 may comprise a motorized means for rotating and tilting the item 128 to automate image acquisition with a plurality of cameras. In embodiments, a motor may be disposed in various locations to enable movement of the platform 108. In an example, a motor, a pair of motors, or more motors may be located underneath the platform 108, on top of the platform 108, or to the side of the platform 108. In embodiments, a belt may be supplied around the platform 108 to reduce friction during movement. In embodiments, air bearings or other bearings may be used to support movement. In embodiments, the motor may have motor control software, which may be integrated with, and under the same control as, the software that may be used to acquire the images of the object held on the platform 108, such as to cause the motor to rotate the object into a desired position, and then capture the images of the object. In embodiments, automation software may automate image taking and platform movement, such as to take a predetermined sequence of shots of an object. - In embodiments, a mask may be provided inside the camera viewfinder, to show the area that will be cropped around the object during processing of the image. In such cases, what the viewer sees through the viewfinder may be matched to what will be handled by the image processing software.
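The rotate-then-capture automation described above can be sketched as a short control loop. The `rotate` and `capture` callables below are hypothetical stand-ins for the motor-control and camera interfaces, which this disclosure leaves unspecified.

```python
def capture_sequence(rotate, capture, step_deg=45):
    """Take a predetermined sequence of shots: rotate the platform to
    each position in turn, then trigger the camera. `rotate` and
    `capture` are hypothetical hardware interfaces supplied by the caller."""
    shots = []
    for angle in range(0, 360, step_deg):
        rotate(angle)                 # motor control: turn the platform
        shots.append(capture(angle))  # acquire an image at this angle
    return shots

# Stub interfaces for illustration: record positions, name files by angle.
positions = []
shots = capture_sequence(positions.append, lambda a: f"item_{a:03d}.tif")
print(len(shots), shots[0], shots[-1])
```

With a 45 degree step this yields eight evenly spaced views of the object, one full revolution of the platform.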
-
FIG. 2 depicts a process 200 of rendering an item-only image 220 of an item 128 photographed using the image formation and processing system 100. The process 200 gains substantial advantage over manual methods by using aspects of the image formation and processing system 100 that ensure images of items 128 are automatically and quickly produced with consistently high quality at low cost. - In embodiments, the
process 200 may rely on a difference between illumination levels within an image to facilitate automatic processing by an image processor 130. The process 200 may involve acquiring at least two different images from a single perspective. One of the two different images may be acquired with camera settings that facilitate using the image formation facility 102 to present the item 128 substantially in silhouette, thereby acquiring a silhouette image 202. The silhouette image 202 may have very high contrast between a bright background and the dark item 128. The very high contrast of the silhouette image 202 may facilitate the image processor 130 automatically and accurately detecting the edge of the item 128 to generate an edge mask 204 (e.g., an outline mask of the item from 202). The edge mask 204 may include a set of image points along the visible perimeter of the item 128, wherein the points accurately define a separation between the item 128 and the background. - The other of the at least two different images, a
high quality image 208 of an item 128 presented by the image formation facility 102, may be acquired by the camera 122 with the assistance of the lights 124 illuminating the item 128. While the backdrop 114 and the lower diffusion panel 120 may be illuminated moderately to facilitate acquiring the high quality image 208, camera settings such as f-stop may be different from those used to acquire the silhouette image 202. - The
edge mask 204 may be superimposed on the high quality image 208 so that the edge mask 204 aligns with the item 128. Aligning the edge mask 204 with the item 128 facilitates automatically separating the item 128 from the background, as shown in composite image 210 (e.g., the outline mask of 204 applied to 208). The image processor 130 may automatically extract image content within the aligned edge mask 204 from the high quality image 208 and automatically generate a new image such as that shown in extracted image 214 (e.g., an item image with defects and bright spots). The extracted image 214 includes only item 128 image content from the high quality image 208. -
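The silhouette-to-mask step can be illustrated in a few lines of plain Python; the threshold value, the toy 5x5 image, and the 4-neighbor edge test are illustrative choices, not taken from this disclosure.

```python
def item_mask(silhouette, threshold=128):
    """Backlit silhouette to item mask: pixels darker than the bright
    background are treated as the item."""
    return [[px < threshold for px in row] for row in silhouette]

def outline(mask):
    """Edge points: item pixels with at least one non-item 4-neighbor."""
    h, w = len(mask), len(mask[0])
    def item(i, j):
        return 0 <= i < h and 0 <= j < w and mask[i][j]
    return [(i, j) for i in range(h) for j in range(w)
            if mask[i][j] and not all(item(i + di, j + dj)
                                      for di, dj in ((-1, 0), (1, 0),
                                                     (0, -1), (0, 1)))]

# Toy silhouette: a dark 3x3 item centered on a bright 5x5 backdrop.
sil = [[250] * 5 for _ in range(5)]
for i in range(1, 4):
    for j in range(1, 4):
        sil[i][j] = 10
mask = item_mask(sil)
print(len(outline(mask)))  # 8 perimeter points around the 3x3 item
```

The sharper the backlit contrast, the more reliably a simple threshold like this separates item from background.
-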
Edge mask 204 may be an image containing image data which is known to the image processor 130 as representing the outline of the item 128. In an example, the silhouette image 202, the edge mask 204 image, and the high resolution image 208 may all be referenced to a common origin point in a two-dimensional image space. The image processor 130 may combine the images in the image space to facilitate processing. -
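With the images referenced to a common origin, applying the aligned mask, and then cleaning residual bright specks from the extracted result, can be sketched as follows; the pixel values, flat background fill, and speck threshold are illustrative assumptions.

```python
from statistics import median

def extract_item(detail, mask, background=255):
    """Keep detail-image pixels inside the aligned mask; fill the rest
    with a flat background. Assumes both images share a common origin."""
    return [[px if keep else background for px, keep in zip(drow, mrow)]
            for drow, mrow in zip(detail, mask)]

def remove_bright_specks(image, threshold=60):
    """Replace interior pixels far brighter than their 3x3 neighborhood
    median - a simplified stand-in for dust/specular-highlight cleanup."""
    out = [row[:] for row in image]
    for i in range(1, len(image) - 1):
        for j in range(1, len(image[0]) - 1):
            window = [image[i + di][j + dj]
                      for di in (-1, 0, 1) for dj in (-1, 0, 1)]
            m = median(window)
            if image[i][j] - m > threshold:
                out[i][j] = m
    return out

detail = [[40, 41, 42, 43, 44],
          [45, 46, 47, 48, 49],
          [50, 51, 240, 53, 54],   # 240: a bright spot on the item
          [55, 56, 57, 58, 59],
          [60, 61, 62, 63, 64]]
mask = [[False] * 5,
        [False, True, True, True, False],
        [False, True, True, True, False],
        [False, True, True, True, False],
        [False] * 5]
cut = extract_item(detail, mask)
clean = remove_bright_specks(cut)
print(cut[0][0], cut[2][2], clean[2][2])
```

The masked-out pixels are replaced by the flat background value, and the outlier bright pixel is pulled down to its neighborhood median.
-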
Edge mask 204 may be a set of vectors. In embodiments, the image processor 130 may apply the edge mask 204 vectors to the high resolution image 208 in the two-dimensional image space to separate the item 128 from the background of the high resolution image 208. - In other embodiments,
the image processor 130 may apply other image alignment algorithms that do not require the high resolution image 208 and the edge mask 204 to reference a common origin. Image alignment algorithms may include techniques such as pattern matching, edge detection, or a combination of these and other techniques that generate an accurate alignment of the edge mask 204 to the item 128 of the high resolution image 208. - While the extracted
image 214 may be stored in the image library 132, it may be further processed by the image processor 130 to remove image defects, such as dust 212, and image formation defects, such as specular highlights 218, to generate an item-only/product-only image 220. The item-only image 220 may be suitable for printing, inclusion in a print catalog, or for display on an electronic display such as a computer monitor. - Image processing software, such as Adobe Photoshop, and many other commercially available titles, may include image enhancement capabilities and/or features that facilitate removal of image defects such as
dust 212 and specular highlights 218. In embodiments, the image processor 130 may include automated scripts or programs that facilitate automating image enhancement through commercially available image processing software. - Each image-processing step may be manually executed within Adobe Photoshop through user input. The
image processing application 130 automates the execution of the repetitive tasks in the form of script actions to step Photoshop through analyzing each image set. The user may set or adjust the threshold and tolerance levels Photoshop uses to distinguish between foreground and background information when generating the outline mask 204 from the high contrast image 202 prior to the batch sequence. - The
image processing application 130 may be programmed to automate several resizing and cropping procedures that fit the user's needs for each Web and print-related image set that is being processed. These tasks may all be performed using image processing software, such as Photoshop. - A collection of item-only
images 220 from a variety of perspectives of the item 128 may be combined in an interactive user display 134 with a user interface adapted for facilitating a user viewing the item 128 from the variety of perspectives. The perspectives may include the front, sides, back, and top, as well as interior views, and various views of the item 128 in operation, assembly, and packaging. One such perspective may include a user rotating the item 128. Another may include a user opening a door or compartment of the item 128, or turning the item 128 on or off. -
FIG. 3 depicts an alternate embodiment of the image formation facility 302 wherein the lower diffusion panel 120 shown in the embodiment of FIG. 1 is replaced by a lower diffusion backdrop 320. The lower diffusion backdrop 320 may provide diffuse uniform illumination in a similar way to the diffusion backdrop 114. When the lower diffusion backdrop 320 and the diffusion backdrop 114 are properly illuminated, an item 128 on platform 108 may appear in a photograph as having no visible means of support. This may be accomplished by the lower diffusion backdrop 320 and the diffusion backdrop 114 providing a uniform, substantially white backdrop for the item 128. - In embodiments where image formation and
processing system 100 includes two backdrop sections, such as a main section 114 above the platform 108 and a lower backdrop 320 under the platform 108, other techniques may be used to provide smooth illumination at the intersection of the platform 108 with the supporting ring 104. In an example, a separate ring of white material may be used to provide increased reflection from the bottom to account for the fact that the platform 108 absorbs some light from the bottom. In other embodiments, a section of the platform 108 may be painted, such as white or grey, to provide a uniform transition in the background illumination between the upper and lower backdrop portions. -
FIG. 4 depicts an overhead view of the image formation facility 302 for purposes of describing an aspect of the invention related to the relative position of the upper backdrop 114 and the lower backdrop 320. FIG. 4 shows the support ring 104 supporting the frame 118 for the diffusion backdrop 114. It also shows the lower diffusion backdrop 320 that is supported by legs 112 (not shown). The support ring 104, lower diffusion backdrop 320, and upper diffusion backdrop 114 are all substantially concentric. The upper diffusion backdrop 114 has the smallest effective diameter and the support ring 104 has the largest diameter. The resulting positioning of the upper backdrop 114 and the lower backdrop 320 may facilitate providing a uniform background to an item 128 being photographed on the image formation facility 302 by ensuring that a top edge of the lower diffusion backdrop 320 is not photographed. This may be accomplished by the upper diffusion backdrop 114 blocking a line of sight between the camera and the top edge of the lower diffusion backdrop 320. A side view of the relative positioning of the backdrops can be seen in FIG. 5 and is described below. - The
upper diffusion backdrop 114 and the lower diffusion backdrop 320 may alternatively form non-concentric arcs. In addition, neither backdrop need be concentric with the support ring 104. To provide the advantage of ensuring the top edge of the lower diffusion backdrop 320 is not photographed, the backdrops may form arcs of any shape. In an example, the lower backdrop 320 may form an elliptical arc and the upper backdrop 114 may form a circular arc. -
FIG. 5 depicts a section view of the image formation facility 302 showing a backdrop transition diffusion panel 502 with representative lighting 504 that may be used to illuminate the backdrops. FIG. 5 further includes an overhead light 510 that may illuminate the item 128 on the platform 108. The lighting 504, in combination with the diffusion backdrops 114 and 320 and the diffusion panel 502, may facilitate providing a uniform background to an item 128 being photographed. The upper diffusion backdrop 114 is shown mounted offset from the backdrop frame 118 by the frame extensions 116. The frame extensions 116 provide an offset so that the upper backdrop 114 is positioned closer than the lower backdrop 320 to the item 128 being photographed by the camera 122. - A
diffusion panel 502 may be included with the image formation facility 302 to further facilitate providing a uniform background for the item 128. The diffusion panel 502 may be positioned so that only diffuse light projects onto a portion of the image formation facility 302 that includes the support ring 104, the top edge of the lower diffusion backdrop 320, and the lower edge of the upper diffusion backdrop 114. This may reduce the amount of reflected light from these elements and from any of the structural elements required to support the backdrops and the platform 108 in this portion of the facility 302. The diffusion panel 502 may be constructed of translucent material much like white office paper, textured plastic, nylon, and other materials that facilitate diffuse transmission of light. The diffusion panel 502 may be mounted to the support ring 104 through a transition bracket or frame similar to the backdrop frame 118. - To provide light to the backdrops, a plurality of
lights 504 may be mounted on a light frame 508 behind the image formation facility 302 and directed toward the diffusion backdrops 114 and 320 and the diffusion panel 502. One or more of the lights 504 may be controlled individually or in combination to provide preferred lighting. An overhead light 510 may also be included with the image formation facility 302 to illuminate the item 128 from above. The overhead light 510 may be mounted to an overhead fixture, the ceiling, or to the backdrop frame 118. The overhead light 510 may be individually controlled or controlled in combination with one or more of the lights 504 or the lights 124 to properly illuminate the item 128. -
FIG. 6 shows a perspective cutaway view of the image formation facility 302 further providing details of the upper backdrop 114, the lower backdrop 320, and the diffusion panel 502. A plurality of upper backdrop stabilization rods 602 may be used to provide stability to the backdrop frame 118. A stabilization rod 602 may be connected between an upper end of the backdrop frame 118 and the support ring 104. The cutaway perspective view of FIG. 6 shows one of the stabilization rods 602 included with the image formation facility 302. -
FIGS. 7 through 9 depict a structure and method for maintaining a constant camera-item distance. In embodiments, the camera 122 may need to be moved about an item 128 while maintaining a constant distance from the item 128, to ensure that images taken as the camera is moved about the item 128 have similar levels of illumination. Maintaining a constant distance may also facilitate generating images that can be sequenced to represent rotating or tilting the object in an interactive user interface display. Thus, a semicircular ring or similar apparatus may be provided that rotates on an axis so that the distance from the camera to the object is maintained, such as when the semicircular ring is rotated to put the camera above the object, alongside the object, or somewhere in between. FIG. 7 depicts an embodiment of the semicircular ring 702. The camera 122 may be slidably attached to the semicircular ring 702 so that the camera 122 can move along the surface of the ring 702 to various positions for acquiring images. -
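The geometry can be sketched to confirm that a camera anywhere on the tilting semicircular ring stays at a fixed distance from an item at the ring's center; the angle names and conventions below are assumptions for illustration, not taken from the figures.

```python
import math

def camera_position(radius, ring_tilt_deg, arc_deg):
    """3D camera position on a tilting semicircular ring.

    arc_deg is the camera's angle along the ring (90 = top of the arc);
    ring_tilt_deg tilts the whole ring about the x-axis."""
    a = math.radians(arc_deg)
    t = math.radians(ring_tilt_deg)
    # Point on the untilted ring in the x-z plane, then tilt about x.
    x, z = radius * math.cos(a), radius * math.sin(a)
    return (x, z * math.sin(t), z * math.cos(t))

# Wherever the camera sits on the ring, and however the ring is tilted,
# the camera-item distance is unchanged.
R = 2.0
for tilt in (0, 30, 60, 90):
    for arc in (0, 45, 90, 135, 180):
        x, y, z = camera_position(R, tilt, arc)
        assert abs(math.dist((0, 0, 0), (x, y, z)) - R) < 1e-9
print("distance constant at radius", R)
```

Because tilting and sliding are both rotations about the ring's center, the radius, and hence the illumination distance, never changes.
-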
FIG. 8 depicts the semicircular ring 702 and camera 122 mounted to a partial view of an embodiment of the image formation facility 102. The semicircular ring 702 is shown tilting clockwise from an overhead position 802 to a substantially horizontal position 804. When the motion of the camera 122 as shown in FIG. 7 and the motion of the semicircular ring 702 are combined, the item can be viewed from all positions in the quarter-sphere generated by the combined motion. In embodiments, the semicircular ring 702 may be tilted counterclockwise as well as clockwise, thereby facilitating capture of images of the item 128 from all camera perspectives in the hemisphere above the platform 108. -
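Once each image is stored with the ring tilt and arc position at which it was taken, selecting the stored view closest to a requested perspective, or ordering frames into a rotation sequence, is straightforward. A sketch under assumed record and file names (this disclosure does not specify a record format):

```python
import math

# Hypothetical records: each image tagged with the camera's ring
# tilt and arc angles at capture time.
library = [
    {"file": "item_t00_a00.tif", "tilt": 0,  "arc": 0},
    {"file": "item_t00_a45.tif", "tilt": 0,  "arc": 45},
    {"file": "item_t30_a45.tif", "tilt": 30, "arc": 45},
    {"file": "item_t30_a90.tif", "tilt": 30, "arc": 90},
]

def nearest_view(tilt, arc):
    """Stored image whose capture angles best match the requested view."""
    return min(library,
               key=lambda r: math.hypot(r["tilt"] - tilt,
                                        r["arc"] - arc))["file"]

def rotation_sequence(tilt):
    """Frames at one tilt, ordered by arc angle, to animate a rotation."""
    return [r["file"]
            for r in sorted((r for r in library if r["tilt"] == tilt),
                            key=lambda r: r["arc"])]

print(nearest_view(25, 50))   # item_t30_a45.tif
print(rotation_sequence(30))
```

The same position tags support both uses described below: picking a single alternate view on request, and building ordered sequences that depict rotating or tilting the item.
-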
FIG. 9 depicts a perspective view of the semicircular ring 702 and camera 122 mounted to an embodiment of the image formation facility 102. The semicircular ring 702 is slidably mounted to vertical supports 902 to facilitate placing the camera at a variety of distances from the item 128. - In embodiments, the position of the
semicircular ring 702 and the position of the camera 122 on the ring 702 may be automatically detected and transmitted to the image processor 130 so that the three-dimensional position of the camera may be associated with each image acquired. The image database 132 may receive the camera 122 position information along with each image to maintain the camera-item position association. This may facilitate automatically generating sequences of images to depict types of motion of the item such as rotating, tilting, turning, and the like. The three-dimensional camera position may also be used to facilitate identifying the specific image to display on the user interface display 134 in response to a user requesting an alternate view of the item 128. - In embodiments, the camera motion along the
semicircular ring 702 and the movement of the ring 702 may be automatically controlled by the image processor 130 or other computing facility to automatically generate a collection of images from various three-dimensional camera positions. - It should be noted that the object described herein could be any kind of object, including a person or other animate object. In embodiments, the
image formation facility 302 may safely support a person on the platform 108 for purposes of acquiring images of the person. In general, acquiring images of inanimate objects does not need to take into consideration object movement. However, when acquiring images of a person, people, or other animate objects, such movement must be considered in generating a useful image. In an example, the time between the at least two images used to form an item-only image must be short enough that the effect on the images of movement of the person, people, or other animate object may be minimized. Cameras and lighting adapted to support very rapid image acquisition may provide the requisite acquisition capability. If camera or lighting settings must be changed between images, automation of these changes may be required in addition to the automation of acquiring the images. - Aspects of a person that may provide unique requirements for imaging may include the person's hair. Creating an appropriate image boundary or outline that includes elements as small as a human hair may be facilitated by high resolution imaging and lighting that may provide sharp contrast. Since a person's hair also may move between images, the formation and
processing system 100 may compensate for this movement by masking hair so that changes in the hair do not impact generating the item-only image. In one embodiment, the hair of the high contrast image may be replaced by the hair of the detailed image during the generation of the item-only image. Other methods such as image processing, lighting adjustments, masking, and the like for minimizing the impact of hair movement on the resulting item-only image may also be applied. - In embodiments, it may be desirable to use different kinds of light between images, such as using lights of different color temperature, intensity, color or the like between the two images.
-
FIG. 10 depicts an embodiment of the user display 134 that may be included with the image formation and processing system 100. The user display 134 may be used to display images. In embodiments, the images may be images stored in the image library 132, images acquired by the camera 122, repurposed images, images combined with other graphics or text, web pages containing the images, videos comprising the images, or any combination thereof. The user display 134 may include a user interface 1002 with features adapted to facilitate a user viewing an item 128 from a variety of perspectives, angles, operating modes, stages of unpacking, setup and initialization, and other ways of viewing an item 128. The user display 134 may be used to display images depicting activating item controls, observing item visual display features, reviewing item manuals or assembly instructions, displaying video for assembly or operation, and such other images that may facilitate informing, entertaining, or otherwise satisfying a user preference. - The
user interface 1002 of the user display 134 may include a user controlled icon or functional pointer that may depict a human hand, such as hand icon 1004. The hand icon 1004 may be adapted to perform functions equivalent to touching the displayed item 128, gripping and turning the item 128, opening a compartment of the item 128, depressing a control/button of the item 128 (e.g., with a finger or thumb), and other human-item interactions. In embodiments, the hand icon 1004 may change appearance to further emulate a human-item interaction. In an example, as a user positions the hand icon 1004 over a pushbutton control of the item 128, the hand icon 1004 may depict a human hand prepared to depress the button with a finger or thumb. In another example, when a user positions the hand icon 1004 near a dial of the item 128, the hand icon 1004 may depict a human hand prepared to turn the dial. User actions through the user interface 1002, such as clicking a button on a mouse, may result in the hand icon 1004 depressing the push button or gripping the dial, and subsequent movement of the mouse may turn the dial. - In embodiments, the
hand icon 1004 may be configured to move or rotate an image of an object. The image may be placed on a grid, such as with cells numbered or lettered to identify particular portions of the screen. A sequence of movements of the hand icon 1004 may be defined, such as using a script, indicating where the hand icon 1004 should be placed on the image (e.g., cell A5) and what action should be undertaken by the hand icon 1004 in the interface (e.g., "move to cell A8," "rotate ninety degrees to the left," "zoom in to one hundred fifty percent of current image size," or the like). Thus, the hand icon 1004 may be used by an editor to associate a series of movements of the image on the grid to provide a sequence of different images, such as for a presentation of a product. - In embodiments, the image of the
item 128 displayed on the user display 134 may change in response to a user action. For example, a user turning a channel select dial on an image of a radio may cause a channel display on the item 128 to change value, thereby representing the response of the radio to a user turning a channel select dial. The change in value of the channel display may be accomplished by updating the user display 134 with a sequence of new images of the item 128 from the image library 132. - These and other user-item interactions may be similarly depicted on the
user display 134 so that a user may get a feel for the item 128 as if the user were able to physically interact with the item 128. In embodiments, the hand icon 1004 may change shape and size in order to mimic the look of a real hand moving a product. -
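A minimal sketch of the hover-dependent hand icon behavior described above, assuming hypothetical control types and icon-state labels (none of these identifiers appear in the specification):

```python
# Sketch: select a rendering for the hand icon 1004 based on the control
# under the pointer. Control types and state names are illustrative
# assumptions, not part of the specification.

HAND_STATES = {
    "pushbutton": "finger_press",  # poised to depress with a finger or thumb
    "dial": "grip_turn",           # poised to grip and turn
}

def hand_icon_state(control_under_pointer=None):
    """Return the hand-icon appearance for the hovered control."""
    return HAND_STATES.get(control_under_pointer, "open_hand")
```

Any real implementation would also track press-and-drag state (e.g., gripping a dial, then turning it with mouse movement); this lookup only covers the appearance change on hover.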
FIG. 11 depicts various changes of the hand icon 1004 for performing additional human-item interactions. In addition to the types of hand icon 1004 changes already described herein, the hand icon 1004 may further include images of a human hand for performing at least the following: zoom functions 1102, such as zoom in by pulling 1104 toward the user to magnify the item 128 and zoom out by pushing 1108 away from the user to de-magnify the item 128; rotation 1110 by gripping and turning to rotate the item 128; spinning 1112, such as to spin a dial or wheel of the item 128; and panning 1114 to move the item 128 horizontally or vertically in the user display 134. - The elements depicted in flow charts and block diagrams throughout the figures imply logical boundaries between the elements. However, according to software or hardware engineering practices, the depicted elements and the functions thereof may be implemented as parts of a monolithic software structure, as standalone software modules, or as modules that employ external routines, code, services, and so forth, or any combination of these, and all such implementations are within the scope of the present disclosure. Thus, while the foregoing drawings and description set forth functional aspects of the disclosed systems, no particular arrangement of software for implementing these functional aspects should be inferred from these descriptions unless explicitly stated or otherwise clear from the context.
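The zoom, rotation, spinning, and panning behaviors of FIG. 11 might be modeled as updates to a simple view state; the dictionary fields and gesture names below are assumptions for illustration, not part of the specification:

```python
# Sketch: apply one FIG. 11 gesture (pull/push zoom 1104/1108,
# rotation/spin 1110/1112, pan 1114) to a view state. Field and
# gesture names are illustrative assumptions.

def apply_gesture(view, gesture, amount):
    """Return a new view dict after applying a single gesture."""
    view = dict(view)  # leave the caller's state untouched
    if gesture == "pull":        # zoom in: magnify the item
        view["zoom"] *= amount
    elif gesture == "push":      # zoom out: de-magnify the item
        view["zoom"] /= amount
    elif gesture in ("rotate", "spin"):  # grip-and-turn, or spin a dial/wheel
        view["angle"] = (view["angle"] + amount) % 360
    elif gesture == "pan":       # move horizontally or vertically
        dx, dy = amount
        view["x"] += dx
        view["y"] += dy
    return view
```

Returning a copy rather than mutating in place makes it easy to keep a history of view states, which suits the sequence-of-images presentation described for FIG. 10.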
- Similarly, it will be appreciated that the various steps identified and described above may be varied, and that the order of steps may be adapted to particular applications of the techniques disclosed herein. All such variations and modifications are intended to fall within the scope of this disclosure. As such, the depiction and/or description of an order for various steps should not be understood to require a particular order of execution for those steps, unless required by a particular application, or explicitly stated or otherwise clear from the context.
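As one concrete illustration of such a sequence of steps, the grid-based movement script described above in connection with FIG. 10 might be interpreted as follows; the command vocabulary and state fields are assumptions for illustration, not part of the specification:

```python
# Sketch: interpret a scripted sequence of hand-icon movements on a
# lettered/numbered grid, as described for FIG. 10. Command names and
# state fields are illustrative assumptions.

def run_script(steps):
    """Apply (cell, action, value) steps to a simple image state."""
    state = {"cell": None, "rotation": 0, "zoom": 100}
    for cell, action, value in steps:
        state["cell"] = cell                 # place hand icon on the cell
        if action == "move":                 # e.g. "move to cell A8"
            state["cell"] = value
        elif action == "rotate":             # degrees; negative turns left
            state["rotation"] = (state["rotation"] + value) % 360
        elif action == "zoom":               # percent of current image size
            state["zoom"] = value
    return state
```

For example, the script "place at A5, move to A8, rotate ninety degrees to the left, zoom to one hundred fifty percent" would be the step list `[("A5", "move", "A8"), ("A8", "rotate", -90), ("A8", "zoom", 150)]`.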
- The methods or processes described above, and steps thereof, may be realized in hardware, software, or any combination of these suitable for a particular application. The hardware may include a general-purpose computer and/or dedicated computing device. The processes may be realized in one or more microprocessors, microcontrollers, embedded microcontrollers, programmable digital signal processors or other programmable device, along with internal and/or external memory. The processes may also, or instead, be embodied in an application specific integrated circuit, a programmable gate array, programmable array logic, or any other device or combination of devices that may be configured to process electronic signals. It will further be appreciated that one or more of the processes may be realized as computer executable code created using a structured programming language such as C, an object oriented programming language such as C++, or any other high-level or low-level programming language (including assembly languages, hardware description languages, and database programming languages and technologies) that may be stored, compiled or interpreted to run on one of the above devices, as well as heterogeneous combinations of processors, processor architectures, or combinations of different hardware and software.
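As a hedged illustration of such computer executable code, the channel-dial example of FIG. 10 might be sketched as a routine that steps through stored images of the item; the library keys and file names below are assumptions for illustration only:

```python
# Sketch: turning a channel-select dial advances through a sequence of
# stored images of the item, emulating the radio example of FIG. 10.
# Library keys and file names are illustrative assumptions.

IMAGE_LIBRARY = {
    ("radio", "channel", 1): "radio_ch1.png",
    ("radio", "channel", 2): "radio_ch2.png",
    ("radio", "channel", 3): "radio_ch3.png",
}

def turn_channel_dial(item, current_channel, clicks):
    """Return (new channel, image depicting it) after turning the dial."""
    channels = sorted(ch for (name, aspect, ch) in IMAGE_LIBRARY
                      if name == item and aspect == "channel")
    idx = (channels.index(current_channel) + clicks) % len(channels)
    new_channel = channels[idx]
    return new_channel, IMAGE_LIBRARY[(item, "channel", new_channel)]
```

The modulo wrap-around means turning past the last channel returns to the first, as a physical rotary dial might; a real library would index far more aspects (perspective, door state, control positions) than this sketch.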
- Thus, in one aspect, each method described above and combinations thereof may be embodied in computer executable code that, when executing on one or more computing devices, performs the steps thereof. In another aspect, the methods may be embodied in systems that perform the steps thereof, and may be distributed across devices in a number of ways, or all of the functionality may be integrated into a dedicated, standalone device or other hardware. In another aspect, means for performing the steps associated with the processes described above may include any of the hardware and/or software described above. All such permutations and combinations are intended to fall within the scope of the present disclosure.
- While the invention has been disclosed in connection with the preferred embodiments shown and described in detail, various modifications and improvements thereon will become readily apparent to those skilled in the art. Accordingly, the spirit and scope of the present invention is not to be limited by the foregoing examples, but is to be understood in the broadest sense allowable by law.
- All documents referenced herein are hereby incorporated by reference.
Claims (20)
1. A system comprising:
a digital library of images comprising one or more background-less images generated through processing a single camera image of an item and a background, the processing extracting item content from the single camera image via outline detection that distinguishes the item from the background, the digital library adapted to facilitate retrieving background-less images from the digital library based, at least in part, on at least one aspect of the item; and
an image processor that processes background-less images retrieved, based at least in part on the at least one aspect of the item, for presentation in a user interface display.
2. The system of claim 1, wherein the retrieved background-less images represent a perspective view of the item.
3. The system of claim 2, wherein the retrieved background-less images comprise a collection of images representing two or more perspectives of the item.
4. The system of claim 3, wherein at least two of the retrieved background-less images in the collection comprise images captured from at least two different perspectives.
5. The system of claim 4, wherein at least one perspective of the two different perspectives comprises a rotated perspective of the item.
6. The system of claim 4, wherein at least one perspective of the two different perspectives comprises an open-door perspective of the item.
7. The system of claim 1, further comprising a functional pointer presented in the user interface display, the functional pointer adapted to perform product interaction functions equivalent, in part, to a human hand.
8. The system of claim 7, wherein the functional pointer is rendered in a shape of a human hand.
9. The system of claim 7, wherein the functional pointer facilitates a user performing, within the user interface display, at least one interaction selected from a list of interactions consisting of: touching the item, turning the item, operating the item, opening the item, tilting the item, activating the item, observing the item, and retrieving item information.
10. The system of claim 1, wherein the digital library is adapted to facilitate automated storage and retrieval of background-less images based on at least one aspect of an item captured in a background-less image of the digital library.
11. A method comprising:
determining an aspect of an item from an indication of user interaction in an item display user interface;
retrieving an item-only image of the item, based at least in part on the determined aspect, from a digital library of item-only images formed through processing a single camera image comprising the item, the processing including extracting item content from the single camera image via outline detection that distinguishes the item within the single camera image; and
rendering at least a portion of the retrieved item-only image in the item display user interface.
12. The method of claim 11, wherein the digital library is adapted to facilitate retrieving images from the digital library based, at least in part, on at least one aspect of the item, including the determined aspect of the item.
13. The method of claim 11, wherein the rendered at least a portion of the retrieved item-only image represents a perspective view of the item.
14. The method of claim 11, wherein retrieving an item-only image comprises:
retrieving a collection of images representing two or more perspectives of the item.
15. The method of claim 14, wherein at least two item-only images in the collection comprise images captured from at least two different perspectives.
16. The method of claim 15, wherein at least one perspective of the two different perspectives comprises a rotated perspective of the item.
17. The method of claim 15, wherein the at least one perspective of the two different perspectives comprises an open-door perspective of the item.
18. A system comprising:
a digital library of images comprising one or more item-only images generated through processing a single camera image comprising an item, the processing extracting item content from the single camera image via outline detection that distinguishes the item in the single camera image, the digital library adapted to facilitate retrieving item-only images from the digital library based, at least in part, on at least one aspect of the item; and
an image processor that facilitates a user viewing retrieved item-only images based, at least in part, on the at least one aspect of the item.
19. The system of claim 18, wherein the retrieved item-only images represent a perspective view of the item.
20. The system of claim 18, wherein the retrieved item-only images comprise a collection of images representing at least two perspectives of the item.
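The claims above recite extracting item content from a single camera image via outline detection that distinguishes the item from the background. A deliberately crude sketch of one background-separation idea (not the claimed method; the background color and threshold are assumptions) is:

```python
# Sketch: mark "item" pixels as those differing sufficiently from a known
# background color, a crude stand-in for the outline detection recited in
# the claims. Background color and threshold are illustrative assumptions.

def item_mask(pixels, background=(255, 255, 255), threshold=30):
    """Return a boolean mask over rows of RGB pixels: True where the
    pixel differs from the background enough to count as item content."""
    def differs(px):
        return sum(abs(a - b) for a, b in zip(px, background)) > threshold
    return [[differs(px) for px in row] for row in pixels]
```

The resulting mask could then be traced to an item outline and used to emit an item-only (background-less) image for the digital library; that tracing step is beyond this sketch.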
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/822,876 US20200221022A1 (en) | 2006-09-05 | 2020-03-18 | Background separated images for print and on-line use |
Applications Claiming Priority (8)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US82457106P | 2006-09-05 | 2006-09-05 | |
US11/850,083 US7953277B2 (en) | 2006-09-05 | 2007-09-05 | Background separated images for print and on-line use |
US13/090,857 US8611659B2 (en) | 2006-09-05 | 2011-04-20 | Background separated images for print and on-line use |
US14/106,119 US9014476B2 (en) | 2006-09-05 | 2013-12-13 | Background separated images for print and on-line use |
US14/690,922 US9501836B2 (en) | 2006-09-05 | 2015-04-20 | Background separated images for print and on-line use |
US15/336,436 US10194075B2 (en) | 2006-09-05 | 2016-10-27 | Background separated images for print and on-line use |
US16/214,366 US10616477B2 (en) | 2006-09-05 | 2018-12-10 | Background separated images for print and on-line use |
US16/822,876 US20200221022A1 (en) | 2006-09-05 | 2020-03-18 | Background separated images for print and on-line use |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/214,366 Continuation US10616477B2 (en) | 2006-09-05 | 2018-12-10 | Background separated images for print and on-line use |
Publications (1)
Publication Number | Publication Date |
---|---|
US20200221022A1 true US20200221022A1 (en) | 2020-07-09 |
Family
ID=39151597
Family Applications (7)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/850,083 Active 2030-03-28 US7953277B2 (en) | 2006-09-05 | 2007-09-05 | Background separated images for print and on-line use |
US13/090,857 Active US8611659B2 (en) | 2006-09-05 | 2011-04-20 | Background separated images for print and on-line use |
US14/106,119 Active US9014476B2 (en) | 2006-09-05 | 2013-12-13 | Background separated images for print and on-line use |
US14/690,922 Active US9501836B2 (en) | 2006-09-05 | 2015-04-20 | Background separated images for print and on-line use |
US15/336,436 Active US10194075B2 (en) | 2006-09-05 | 2016-10-27 | Background separated images for print and on-line use |
US16/214,366 Active US10616477B2 (en) | 2006-09-05 | 2018-12-10 | Background separated images for print and on-line use |
US16/822,876 Abandoned US20200221022A1 (en) | 2006-09-05 | 2020-03-18 | Background separated images for print and on-line use |
Family Applications Before (6)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/850,083 Active 2030-03-28 US7953277B2 (en) | 2006-09-05 | 2007-09-05 | Background separated images for print and on-line use |
US13/090,857 Active US8611659B2 (en) | 2006-09-05 | 2011-04-20 | Background separated images for print and on-line use |
US14/106,119 Active US9014476B2 (en) | 2006-09-05 | 2013-12-13 | Background separated images for print and on-line use |
US14/690,922 Active US9501836B2 (en) | 2006-09-05 | 2015-04-20 | Background separated images for print and on-line use |
US15/336,436 Active US10194075B2 (en) | 2006-09-05 | 2016-10-27 | Background separated images for print and on-line use |
US16/214,366 Active US10616477B2 (en) | 2006-09-05 | 2018-12-10 | Background separated images for print and on-line use |
Country Status (1)
Country | Link |
---|---|
US (7) | US7953277B2 (en) |
Families Citing this family (62)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB0428204D0 (en) * | 2004-12-23 | 2005-01-26 | Clinical Designs Ltd | Medicament container |
US7855732B2 (en) * | 2006-09-05 | 2010-12-21 | Pc Connection, Inc. | Hand producer for background separated images |
US7953277B2 (en) | 2006-09-05 | 2011-05-31 | Williams Robert C | Background separated images for print and on-line use |
US7931380B2 (en) | 2006-09-05 | 2011-04-26 | Williams Robert C | Imaging apparatus for providing background separated images |
US10298834B2 (en) | 2006-12-01 | 2019-05-21 | Google Llc | Video refocusing |
GB2464742B (en) * | 2008-10-27 | 2011-07-20 | Lobster Pot Photography Ltd | Method and apparatus for 360 degree product photography |
US9715701B2 (en) | 2008-11-24 | 2017-07-25 | Ebay Inc. | Image-based listing using image of multiple items |
WO2010073859A1 (en) * | 2008-12-26 | 2010-07-01 | Serendipity株式会社 | Lighting device for photographing |
JP2010166256A (en) * | 2009-01-14 | 2010-07-29 | Sony Corp | Information processor, information processing method, and program |
GB0904059D0 (en) | 2009-03-10 | 2009-04-22 | Euro Celtique Sa | Counter |
GB0904040D0 (en) | 2009-03-10 | 2009-04-22 | Euro Celtique Sa | Counter |
US8462206B1 (en) * | 2010-02-25 | 2013-06-11 | Amazon Technologies, Inc. | Image acquisition system |
US8830321B2 (en) * | 2010-03-09 | 2014-09-09 | Stephen Michael Swinford | Producing high-resolution images of the commonly viewed exterior surfaces of vehicles, each with the same background view |
US11570369B1 (en) | 2010-03-09 | 2023-01-31 | Stephen Michael Swinford | Indoor producing of high resolution images of the commonly viewed exterior surfaces of vehicles, each with the same background view |
US8301022B1 (en) * | 2010-12-10 | 2012-10-30 | Amazon Technologies, Inc. | Image capture device with booth |
DE202011108622U1 (en) * | 2011-03-11 | 2012-01-23 | Automate Images Gmbh | Device for the release of an object |
US9858649B2 (en) | 2015-09-30 | 2018-01-02 | Lytro, Inc. | Depth-based image blurring |
US9001226B1 (en) * | 2012-12-04 | 2015-04-07 | Lytro, Inc. | Capturing and relighting images using multiple devices |
US10334151B2 (en) | 2013-04-22 | 2019-06-25 | Google Llc | Phase detection autofocus using subaperture images |
SG2013069893A (en) * | 2013-09-13 | 2015-04-29 | Jcs Echigo Pte Ltd | Material handling system and method |
US20160267576A1 (en) * | 2013-11-04 | 2016-09-15 | Rycross, Llc D/B/A Seeltfit | System and Method for Controlling and Sharing Online Images of Merchandise |
US9902644B2 (en) | 2014-06-19 | 2018-02-27 | Corning Incorporated | Aluminosilicate glasses |
CN105447047B (en) * | 2014-09-02 | 2019-03-15 | 阿里巴巴集团控股有限公司 | It establishes template database of taking pictures, the method and device for recommendation information of taking pictures is provided |
CN105404703A (en) * | 2014-09-10 | 2016-03-16 | 贝里斯商上品开发有限公司 | Compound option-including realistic article replacing method |
US9444991B2 (en) | 2014-11-13 | 2016-09-13 | Lytro, Inc. | Robust layered light-field rendering |
USD777207S1 (en) * | 2015-02-27 | 2017-01-24 | Samsung Electronics Co., Ltd. | Display screen or portion thereof with icon |
US10419737B2 (en) | 2015-04-15 | 2019-09-17 | Google Llc | Data structures and delivery methods for expediting virtual reality playback |
US10469873B2 (en) | 2015-04-15 | 2019-11-05 | Google Llc | Encoding and decoding virtual reality video |
US10440407B2 (en) | 2017-05-09 | 2019-10-08 | Google Llc | Adaptive control for immersive experience delivery |
US10341632B2 (en) | 2015-04-15 | 2019-07-02 | Google Llc. | Spatial random access enabled video system with a three-dimensional viewing volume |
US10540818B2 (en) | 2015-04-15 | 2020-01-21 | Google Llc | Stereo image generation and interactive playback |
US10412373B2 (en) | 2015-04-15 | 2019-09-10 | Google Llc | Image capture for virtual reality displays |
US10546424B2 (en) | 2015-04-15 | 2020-01-28 | Google Llc | Layered content delivery for virtual and augmented reality experiences |
US10275898B1 (en) | 2015-04-15 | 2019-04-30 | Google Llc | Wedge-based light-field video capture |
US10567464B2 (en) | 2015-04-15 | 2020-02-18 | Google Llc | Video compression with adaptive view-dependent lighting removal |
US11328446B2 (en) | 2015-04-15 | 2022-05-10 | Google Llc | Combining light-field data with active depth data for depth map generation |
US10444931B2 (en) | 2017-05-09 | 2019-10-15 | Google Llc | Vantage generation and interactive playback |
US10565734B2 (en) | 2015-04-15 | 2020-02-18 | Google Llc | Video capture, processing, calibration, computational fiber artifact removal, and light-field pipeline |
US10110792B2 (en) | 2015-04-24 | 2018-10-23 | Lifetouch Inc. | Background replacement system and methods involving multiple captured images |
US9979909B2 (en) | 2015-07-24 | 2018-05-22 | Lytro, Inc. | Automatic lens flare detection and correction for light-field images |
WO2017129565A1 (en) * | 2016-01-25 | 2017-08-03 | Slrb Beteiligungs Gmbh | Device and method for capturing images of objects |
JP6837498B2 (en) * | 2016-06-03 | 2021-03-03 | ウトゥク・ビュユクシャヒンUtku BUYUKSAHIN | Systems and methods for capturing and generating 3D images |
US10275892B2 (en) | 2016-06-09 | 2019-04-30 | Google Llc | Multi-view scene segmentation and propagation |
US11099057B2 (en) * | 2016-09-01 | 2021-08-24 | Imam Abdulrahman Bin Faisal University | Grossing workstation with electronic scale |
KR102267397B1 (en) * | 2016-10-25 | 2021-06-21 | 삼성전자주식회사 | Electronic device and Method for controlling the electronic device thereof |
US10679361B2 (en) | 2016-12-05 | 2020-06-09 | Google Llc | Multi-view rotoscope contour propagation |
IT201700022525A1 (en) * | 2017-02-28 | 2018-08-28 | Doma Automation S R L | DEVICE FOR PHOTOGRAPHING OBJECTS |
US10594945B2 (en) | 2017-04-03 | 2020-03-17 | Google Llc | Generating dolly zoom effect using light field image data |
US10474227B2 (en) | 2017-05-09 | 2019-11-12 | Google Llc | Generation of virtual reality with 6 degrees of freedom from limited viewer data |
US10489287B2 (en) | 2017-05-15 | 2019-11-26 | Bank Of America Corporation | Conducting automated software testing using centralized controller and distributed test host servers |
US10223248B2 (en) | 2017-05-15 | 2019-03-05 | Bank Of America Corporation | Conducting automated software testing using centralized controller and distributed test host servers |
US10354399B2 (en) | 2017-05-25 | 2019-07-16 | Google Llc | Multi-view back-projection to a light-field |
US10545215B2 (en) | 2017-09-13 | 2020-01-28 | Google Llc | 4D camera tracking and optical stabilization |
KR102027275B1 (en) | 2017-11-14 | 2019-10-01 | 김대훈 | Management system of cafeteria and operation method thereof |
TWI689868B (en) | 2018-01-03 | 2020-04-01 | 虹光精密工業股份有限公司 | Portable image capture electronic device and image capture system therewith |
US10965862B2 (en) | 2018-01-18 | 2021-03-30 | Google Llc | Multi-camera navigation interface |
CN108647354A (en) * | 2018-05-16 | 2018-10-12 | 广东小天才科技有限公司 | Tutoring learning method and lighting equipment |
JP6780682B2 (en) * | 2018-09-20 | 2020-11-04 | 日本電気株式会社 | Information acquisition system, control device and information acquisition method |
US10516863B1 (en) * | 2018-09-27 | 2019-12-24 | Bradley Baker | Miniature portable projector device |
CN113586892A (en) * | 2021-07-30 | 2021-11-02 | 北京百度网讯科技有限公司 | Three-dimensional information acquisition device |
US11568558B1 (en) | 2021-08-25 | 2023-01-31 | Radwell International, Inc. | Product photo booth |
WO2023027703A1 (en) * | 2021-08-25 | 2023-03-02 | Radwell International Inc. | Product photo booth |
Family Cites Families (55)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US1039222A (en) | 1911-05-24 | 1912-09-24 | Westinghouse Air Brake Co | Brake-valve device. |
US3643085A (en) * | 1969-10-13 | 1972-02-15 | Jacqueline Durand | Photographic light box |
US4292662A (en) | 1979-09-10 | 1981-09-29 | Eugene Gasperini | Photographic exposure chamber |
EP0724229B1 (en) * | 1994-12-28 | 2001-10-10 | Canon Kabushiki Kaisha | Image processing apparatus and method |
US6106124A (en) | 1996-06-20 | 2000-08-22 | Tarsia; Joseph | Self-contained photo studio lighting apparatus |
US5864640A (en) | 1996-10-25 | 1999-01-26 | Wavework, Inc. | Method and apparatus for optically scanning three dimensional objects using color information in trackable patches |
US5778258A (en) * | 1997-03-31 | 1998-07-07 | Zamoyski; Mark | Photography booth for digital image capture |
US6106125A (en) | 1998-09-02 | 2000-08-22 | Finn; Bruce L. | Foldable modular light diffusion box |
US6161940A (en) | 1998-11-05 | 2000-12-19 | Optical Gaging Products, Inc. | Large area collimated substage illuminators for gaging applications |
JP2001285894A (en) | 2000-03-31 | 2001-10-12 | Olympus Optical Co Ltd | Method for running three-dimensional image data |
JP2002027289A (en) | 2000-06-14 | 2002-01-25 | Ureka Networks:Kk | Device and method for photographing three-dimensional image |
WO2002014982A2 (en) | 2000-08-11 | 2002-02-21 | Holomage, Inc. | Method of and system for generating and viewing multi-dimensional images |
US7460130B2 (en) | 2000-09-26 | 2008-12-02 | Advantage 3D Llc | Method and system for generation, storage and distribution of omni-directional object views |
US20020063714A1 (en) | 2000-10-04 | 2002-05-30 | Michael Haas | Interactive, multimedia advertising systems and methods |
US7714301B2 (en) * | 2000-10-27 | 2010-05-11 | Molecular Devices, Inc. | Instrument excitation source and calibration method |
US7253832B2 (en) | 2001-08-13 | 2007-08-07 | Olympus Corporation | Shape extraction system and 3-D (three dimension) information acquisition system using the same |
JP2004007447A (en) * | 2002-03-28 | 2004-01-08 | Olympus Corp | Electronic camera and photographing composition determining apparatus therefor |
TWI225232B (en) | 2002-07-12 | 2004-12-11 | Toshiba Matsushita Display Tec | Display device |
US7039222B2 (en) * | 2003-02-28 | 2006-05-02 | Eastman Kodak Company | Method and system for enhancing portrait images that are processed in a batch mode |
GB2405282A (en) | 2003-08-21 | 2005-02-23 | Canon Res Ct Europ Ltd | Apparatus to aid three-dimensional model creation from photographic images |
US7394977B2 (en) | 2003-10-07 | 2008-07-01 | Openvr Co., Ltd. | Apparatus and method for creating 3-dimensional image |
EP1555804A3 (en) * | 2004-01-19 | 2006-08-16 | Ricoh Company, Ltd. | Image processing apparatus, image processing program and storage medium |
US7616834B2 (en) | 2004-03-03 | 2009-11-10 | Virtual Iris Studios, Inc. | System for delivering and enabling interactivity with images |
US7502036B2 (en) | 2004-03-03 | 2009-03-10 | Virtual Iris Studios, Inc. | System for delivering and enabling interactivity with images |
US7553051B2 (en) | 2004-03-18 | 2009-06-30 | Brasscorp Limited | LED work light |
TWI239209B (en) * | 2004-04-08 | 2005-09-01 | Benq Corp | A specific image extraction method, storage medium and image pickup device using the same |
US7055976B2 (en) | 2004-04-28 | 2006-06-06 | Thomas Charles Blanford | Collapsible tabletop lighting apparatus |
GB2414357A (en) * | 2004-05-18 | 2005-11-23 | Medicsight Plc | Nodule boundary detection |
WO2006011674A1 (en) * | 2004-07-30 | 2006-02-02 | Matsushita Electric Works, Ltd. | Image processing device |
US20060215958A1 (en) | 2004-11-17 | 2006-09-28 | Yeo Terence E | Enhanced electroluminescent sign |
JP4646797B2 (en) * | 2005-02-01 | 2011-03-09 | キヤノン株式会社 | Image processing apparatus, control method therefor, and program |
DE102005005795A1 (en) | 2005-02-09 | 2006-08-17 | Drehmomente.De Gmbh | Computer-aided object e.g. shoe, animation producing device, has computer controlling drive device, where program is provided on computer to produce image sequence that reflects object under different visual angles |
JP2006300531A (en) | 2005-04-15 | 2006-11-02 | Brother Ind Ltd | Three-dimensional shape measuring device |
WO2006121203A1 (en) * | 2005-05-11 | 2006-11-16 | Fujifilm Corporation | Image capturing apparatus, image capturing method, image processing apparatus, image processing method and program |
CN101326545B (en) * | 2005-08-19 | 2012-05-30 | 松下电器产业株式会社 | Image processing method, image processing system |
US7659938B2 (en) | 2006-01-26 | 2010-02-09 | Peng-Cheng Lai | Nonlinear movement and tilt angle control structure of an image capture device inside a light box |
US20070172216A1 (en) | 2006-01-26 | 2007-07-26 | Ortery Technologies, Inc. | Computer controlled system for synchronizing photography implementation between a 3-D turntable and an image capture device with automatic image format conversion |
US7953277B2 (en) | 2006-09-05 | 2011-05-31 | Williams Robert C | Background separated images for print and on-line use |
CA2600139C (en) | 2006-09-05 | 2017-03-07 | Pc Connection, Inc. | Background separated images for print and on-line use |
US7855732B2 (en) * | 2006-09-05 | 2010-12-21 | Pc Connection, Inc. | Hand producer for background separated images |
US7931380B2 (en) | 2006-09-05 | 2011-04-26 | Williams Robert C | Imaging apparatus for providing background separated images |
JP4732488B2 (en) * | 2008-06-24 | 2011-07-27 | シャープ株式会社 | Image processing apparatus, image forming apparatus, image reading apparatus, image processing method, image processing program, and computer-readable recording medium |
US8761491B2 (en) * | 2009-02-06 | 2014-06-24 | Himax Technologies Limited | Stereo-matching processor using belief propagation |
US8675957B2 (en) * | 2010-11-18 | 2014-03-18 | Ebay, Inc. | Image quality assessment to merchandise an item |
JP5594282B2 (en) * | 2011-03-31 | 2014-09-24 | カシオ計算機株式会社 | Image processing device |
KR101303017B1 (en) * | 2011-05-11 | 2013-09-03 | 엘지전자 주식회사 | Methof for resizing an image, method for transmitting an image, and electronic device thereof |
US20130063487A1 (en) * | 2011-09-12 | 2013-03-14 | MyChic Systems Ltd. | Method and system of using augmented reality for applications |
US8718369B1 (en) * | 2011-09-20 | 2014-05-06 | A9.Com, Inc. | Techniques for shape-based search of content |
KR20130094113A (en) * | 2012-02-15 | 2013-08-23 | 삼성전자주식회사 | Apparatus and method for processing a camera data |
JP5923824B2 (en) * | 2012-02-21 | 2016-05-25 | 株式会社ミツトヨ | Image processing device |
US9177360B2 (en) * | 2012-09-11 | 2015-11-03 | Apple Inc. | Automatic image orientation and straightening through image analysis |
US10853407B2 (en) * | 2013-09-05 | 2020-12-01 | Ebay, Inc. | Correlating image annotations with foreground features |
US9204018B2 (en) | 2014-01-21 | 2015-12-01 | Carbon Objects, Inc. | System and method of adjusting the color of image objects based on chained reference points, gradient characterization, and pre-stored indicators of environmental lighting conditions |
JP6382995B2 (en) | 2014-01-23 | 2018-08-29 | コーニンクレッカ フィリップス エヌ ヴェKoninklijke Philips N.V. | Evaluation of carotid plaque using contrast-enhanced ultrasound imaging |
JP6445775B2 (en) * | 2014-04-01 | 2018-12-26 | キヤノン株式会社 | Image processing apparatus and image processing method |
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11232321B2 (en) | 2017-04-27 | 2022-01-25 | Ecosense Lighting Inc. | Methods and systems for an automated design, fulfillment, deployment and operation platform for lighting installations |
US11328500B2 (en) | 2017-04-27 | 2022-05-10 | Korrus, Inc. | Methods and systems for an automated design, fulfillment, deployment and operation platform for lighting installations |
US11386641B2 (en) * | 2017-04-27 | 2022-07-12 | Korrus, Inc. | Methods and systems for an automated design, fulfillment, deployment and operation platform for lighting installations |
US11417084B2 (en) | 2017-04-27 | 2022-08-16 | Korrus, Inc. | Methods and systems for an automated design, fulfillment, deployment and operation platform for lighting installations |
US11423640B2 (en) | 2017-04-27 | 2022-08-23 | Korrus, Inc. | Methods and systems for an automated design, fulfillment, deployment and operation platform for lighting installations |
US11430208B2 (en) | 2017-04-27 | 2022-08-30 | Korrus, Inc. | Methods and systems for an automated design, fulfillment, deployment and operation platform for lighting installations |
US11436821B2 (en) | 2017-04-27 | 2022-09-06 | Korrus, Inc. | Methods and systems for an automated design, fulfillment, deployment and operation platform for lighting installations |
US11436820B2 (en) | 2017-04-27 | 2022-09-06 | Korrus, Inc. | Methods and systems for an automated design, fulfillment, deployment and operation platform for lighting installations |
US11450090B2 (en) | 2017-04-27 | 2022-09-20 | Korrus, Inc. | Methods and systems for an automated design, fulfillment, deployment and operation platform for lighting installations |
US11450089B2 (en) | 2017-04-27 | 2022-09-20 | Korrus, Inc. | Methods and systems for an automated design, fulfillment, deployment and operation platform for lighting installations |
US11468662B2 (en) | 2017-04-27 | 2022-10-11 | Korrus, Inc. | Training a neural network for determining correlations between lighting effects and biological states |
US11514664B2 (en) | 2017-04-27 | 2022-11-29 | Korrus, Inc. | Methods and systems for an automated design, fulfillment, deployment and operation platform for lighting installations |
US11657190B2 (en) | 2017-04-27 | 2023-05-23 | Korrus, Inc. | Methods and systems for an automated design, fulfillment, deployment and operation platform for lighting installations |
US11768973B2 (en) | 2017-04-27 | 2023-09-26 | Korrus, Inc. | Methods and systems for an automated design, fulfillment, deployment and operation platform for lighting installations |
US11803672B2 (en) | 2017-04-27 | 2023-10-31 | Korrus, Inc. | Methods and systems for an automated design, fulfillment, deployment and operation platform for lighting installations |
US11803673B2 (en) | 2017-04-27 | 2023-10-31 | Korrus, Inc. | Methods and systems for an automated design, fulfillment, deployment and operation platform for lighting installations |
US11868683B2 (en) | 2017-04-27 | 2024-01-09 | Korrus, Inc. | Methods and systems for an automated design, fulfillment, deployment and operation platform for lighting installations |
US11880637B2 (en) | 2017-04-27 | 2024-01-23 | Korrus, Inc. | Methods and systems for an automated design, fulfillment, deployment and operation platform for lighting installations |
US11928393B2 (en) | 2017-04-27 | 2024-03-12 | Korrus, Inc. | Methods and systems for an automated design, fulfillment, deployment and operation platform for lighting installations |
US11972175B2 (en) | 2017-04-27 | 2024-04-30 | Korrus, Inc. | Methods and systems for an automated design, fulfillment, deployment and operation platform for lighting installations |
US11989490B2 (en) | 2017-04-27 | 2024-05-21 | Korrus, Inc. | Methods and systems for an automated design, fulfillment, deployment and operation platform for lighting installations |
US12014121B2 (en) | 2017-04-27 | 2024-06-18 | Korrus, Inc. | Methods and systems for an automated design, fulfillment, deployment and operation platform for lighting installations |
US12014122B2 (en) | 2017-04-27 | 2024-06-18 | Korrus, Inc. | Methods and systems for an automated design, fulfillment, deployment and operation platform for lighting installations |
US12026436B2 (en) | 2017-04-27 | 2024-07-02 | Korrus, Inc. | Methods and systems for an automated design, fulfillment, deployment and operation platform for lighting installations |
US12079547B2 (en) | 2017-04-27 | 2024-09-03 | Korrus, Inc. | Methods and systems for an automated design, fulfillment, deployment and operation platform for lighting installations |
Also Published As
Publication number | Publication date |
---|---|
US20190109983A1 (en) | 2019-04-11 |
US10194075B2 (en) | 2019-01-29 |
US10616477B2 (en) | 2020-04-07 |
US20140307160A1 (en) | 2014-10-16 |
US20170111573A1 (en) | 2017-04-20 |
US8611659B2 (en) | 2013-12-17 |
US20110311138A1 (en) | 2011-12-22 |
US9014476B2 (en) | 2015-04-21 |
US20080056569A1 (en) | 2008-03-06 |
US20150228082A1 (en) | 2015-08-13 |
US9501836B2 (en) | 2016-11-22 |
US7953277B2 (en) | 2011-05-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10616477B2 (en) | Background separated images for print and on-line use | |
US7855732B2 (en) | Hand producer for background separated images | |
US7931380B2 (en) | Imaging apparatus for providing background separated images | |
Jacobs et al. | Cosaliency: Where people look when comparing images | |
CA2840294A1 (en) | Imaging apparatus and controller for photographing products | |
CA3083486A1 (en) | Method, medium, and system for live preview via machine learning models | |
US20110010776A1 (en) | Image Management System | |
KR102151964B1 (en) | Product photograph service providing method for product detailed information content | |
CN104427282A (en) | Information processing apparatus, information processing method, and program | |
WO2005076898A3 (en) | Method for organizing photographic images in a computer for locating, viewing and purchase | |
US20080288888A1 (en) | Computer User Interface for a Digital Microform Imaging Apparatus | |
CA2600139C (en) | Background separated images for print and on-line use | |
KR20180070082A (en) | Vr contents generating system | |
KR20230016781A (en) | A method of producing environmental contents using AR/VR technology related to metabuses | |
KR20180113944A (en) | Vr contents generating system | |
Howse | iOS Application Development with OpenCV 3 | |
WO2020202215A1 (en) | A system and method of automated digitization of a product | |
US20070198107A1 (en) | 3D Image Projection System | |
JP2007323353A (en) | Three-dimensional manual and its creation method | |
Courvoisier | Lessons in DSLR Workflow with Lightroom and Photoshop | |
VR | Spherocam HDR | |
Zintel | Tools and products | |
Williams et al. | The use and capture of images for computer-based learning II | |
Elmansy | Developing Professional iPhone Photography: Using Photoshop, Lightroom, and Other iOS and Desktop Apps to Create and Edit Photos | |
CN113297653A (en) | Room model display method, room model display device, room model display equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: PC CONNECTION, INC., NEW HAMPSHIRE Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:WHEELER, MATTHEW J.;WILLIAMS, ROBERT C.;KOONTZ, STEVEN;SIGNING DATES FROM 19970131 TO 20101109;REEL/FRAME:053099/0305 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |