WO2018038756A1 - System and method for representing a field of capture as physical media - Google Patents

System and method for representing a field of capture as physical media

Info

Publication number
WO2018038756A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
metadata
server
sphere
images
Prior art date
Application number
PCT/US2016/061397
Other languages
French (fr)
Inventor
Charles Pierre CARRIERE, IV.
Kaben Gabriel NANLOHY
Harold Cole WILEY
Original Assignee
Scandy, LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Scandy, LLC
Publication of WO2018038756A1

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00Programme-control systems
    • G05B19/02Programme-control systems electric
    • G05B19/18Numerical control [NC], i.e. automatically operating machines, in particular machine tools, e.g. in a manufacturing environment, so as to execute positioning, movement or co-ordinated operations by means of programme data in numerical form
    • G05B19/4097Numerical control [NC], i.e. automatically operating machines, in particular machine tools, e.g. in a manufacturing environment, so as to execute positioning, movement or co-ordinated operations by means of programme data in numerical form characterised by using design data to control NC machines, e.g. CAD/CAM
    • G05B19/4099Surface or curve machining, making 3D objects, e.g. desktop manufacturing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/46Colour picture communication systems
    • H04N1/56Processing of colour picture signals
    • H04N1/60Colour correction or control
    • H04N1/6058Reduction of colour to a range of reproducible colours, e.g. to ink-reproducible colour gamut
    • G06T3/12
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4038Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/66Remote control of cameras or camera parts, e.g. by remote control devices
    • H04N23/661Transmitting camera control signals through networks, e.g. control via the Internet
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/698Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/49Nc machine tool, till multiple
    • G05B2219/49023 3-D printing, layer of powder, add drops of binder in layer, new powder
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/08Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00Indexing scheme for image generation or computer graphics
    • G06T2210/22Cropping

Definitions

  • the present disclosure refers generally to a system and method for representing how a photograph was captured in relation to a field of capture, and mapping this representation onto a shape in a three-dimensional print.
  • a panorama is an unbroken view of the whole region surrounding an observer.
  • Panoramic photography is a technique of photography that attempts to capture images with horizontally elongated fields of capture.
  • panoramic photographs cannot be printed with the same field of view in which they were shot. In other words, they are flat and not concave as when shot from a particular point. Moreover, if attempting to accurately print a user's field of view, there is an additional problem that multiple systems must be utilized since no individual system allows for printing of a panoramic photograph.
  • the present invention provides a method for representing a field of capture in the form of a physical media in accordance with independent claims 1, 7, and 13. Preferred embodiments of the invention are reflected in the dependent claims.
  • the claimed invention can be better understood in view of the embodiments described and illustrated in the present disclosure, viz. in the present specification and drawings. In general, the present disclosure reflects preferred embodiments of the invention. The attentive reader will note, however, that some aspects of the disclosed embodiments extend beyond the scope of the claims. To the extent that the disclosed embodiments indeed extend beyond the scope of the claims, they are to be considered supplementary background information and do not constitute definitions of the invention per se.
  • a system and method for representing how a photograph was captured in relation to the field of capture, and mapping this representation onto a three-dimensional shape in a three-dimensional print is provided.
  • the system may comprise multiple servers, databases, processors, and user interfaces.
  • a system for capturing and analyzing image data comprises a means for using data from individual images to determine a field of capture.
  • the field of capture is generated from a group of images being stitched together to form a panorama.
  • the present disclosure provides an integrated system allowing a user to upload images to a website and print those images as an accurate representation of a user's field of capture.
  • FIG. 1 is a diagram of an example environment in which techniques described herein may be implemented
  • FIG. 2 is an exemplary diagram of a client of FIG. 1 according to an implementation consistent with the principles of the present disclosure
  • FIG. 3 is a diagram of an example of a computing device and a mobile computing device
  • FIG. 4 is a diagram illustrating an example system configuration according to an implementation consistent with the principles of the present disclosure
  • FIG. 5 illustrates an example of an off the shelf camera that may be used according to an implementation consistent with the principles of the present disclosure
  • FIG. 6 illustrates an example of an off the shelf camera that may be used according to an implementation consistent with the principles of the present disclosure.
  • a system "comprising" components A, B, and C can contain only components A, B, and C, or can contain not only components A, B, and C, but also one or more other components.
  • field of capture is used herein to mean the area captured when taking a photograph or group of photographs.
  • a single photograph has a certain field of capture representing the area that was visible when taking the single photograph.
  • a group of individual photographs may collectively make up a field of capture, in which each photograph is a piece of the larger field of capture.
  • a field of capture may be a panorama photograph or group of photographs making up a single panorama.
  • three-dimensional file refers to any computer file capable of being printed on a three-dimensional printer.
  • the defined steps can be carried out in any order or simultaneously (except where the context excludes that possibility), and the method can include one or more other steps which are carried out before any of the defined steps, between two of the defined steps, or after all the defined steps (except where the context excludes that possibility).
  • Systems and methods consistent with the principles of the present disclosure may provide solutions for representing how a photograph was captured in relation to the field of capture, and mapping this representation onto the appropriate shape in a three-dimensional print.
  • the systems and methods may permit a user to capture images making up a user's field of view, upload said images to a website, have those images stitched together, and project the stitched-together image onto a three-dimensional object.
  • the system and method allows for printing the stitched-together image onto a three-dimensional object.
  • FIG. 1 is a diagram of an example environment 100 in which techniques described herein may be implemented.
  • Environment 100 may include multiple clients 105 connected to one or more servers 110-140 via a network 150.
  • Server 110 may be a search server that may implement a search engine;
  • server 120 may be a document indexing server, e.g., a web crawler;
  • servers 130 and 140 may be general web servers, such as servers that provide content to clients 105.
  • Clients 105 and servers 110-140 may be connected to network 150 via wired, wireless, or a combination of wired and wireless connections.
  • Three clients 105 and four servers 110-140 are illustrated as connected to network 150 for simplicity. In practice, there may be additional or fewer clients and servers. Also, a client may perform the functions of a server and a server may perform the functions of a client.
  • Clients 105 may include devices of users that access servers 110-140.
  • client 105 may include, for instance, a personal computer, a wireless telephone, a personal digital assistant (PDA), a laptop, a smart phone, a tablet computer, a camera, or another type of computation or communication device.
  • Servers 110-140 may include devices that access, fetch, aggregate, process, search, provide, and/or maintain documents, files, and/or images. Although shown as single components 110, 120, 130, and 140 in Fig. 1, each server 110-140 may be implemented as multiple computing devices, which potentially may be geographically distributed.
  • Search server 110 may include one or more computing devices designed to implement a search engine, such as a documents/records search engine, general webpage search engine, image search engines, etc.
  • Search server 110 may, for example, include one or more web servers to receive search queries and/or inputs from clients 105, search one or more databases in response to the search queries and/or inputs, and provide documents, files, or images, relevant to the search queries and/or inputs, to clients 105.
  • Search server 110 may include a web search server that may provide webpages to clients 105, where a provided webpage may include a reference to a web server, such as one of web servers 130 or 140, at which the desired information and/or links is located.
  • the references to the web server at which the desired information is located may be included in a frame and/or text box, or as a link to the desired information/document.
  • Document indexing server 120 may include one or more computing devices designed to index files and images available through network 150.
  • Document indexing server 120 may access other servers, such as web servers that host content, to index the content.
  • Document indexing server 120 may index files/images stored by other servers, such as web servers 130 and 140, and connect to network 150.
  • Document indexing server 120 may, for example, store and index content, information, and documents relating to three-dimensional images and field of view images and prints.
  • Web servers 130 and 140 may each include web servers that provide webpages to clients 105. The provided webpages may be, for example, HTML-based webpages.
  • a web server 130/140 may host one or more websites.
  • a website as the term is used herein, may refer to a collection of related webpages. Frequently, a website may be associated with a single domain name, although some websites may potentially encompass more than one domain name.
  • the concepts described herein may be applied on a per-website basis. Alternatively, the concepts described herein may be applied on a per-webpage basis.
  • Network 150 may include one or more networks of any kind, including, but not limited to, a local area network (LAN), a wide area network (WAN), a telephone network, such as the Public Switched Telephone Network (PSTN), an intranet, the Internet, a memory device, another type of network, or a combination of networks.
  • FIG. 1 shows example components of environment 100
  • environment 100 may contain fewer components, different components, differently arranged components, and/or additional components than those depicted in Fig. 1. Alternatively, or additionally, one or more components of environment 100 may perform one or more other tasks described as being performed by one or more other components of environment 100.
  • FIG. 2 is an exemplary diagram of a user/client 105 or server entity
  • the client/server entity 105 may include a bus 210, a processor 220, a main memory 230, a read only memory (ROM) 240, a storage device 250, one or more input devices 260, one or more output devices 270, and a communication interface 280.
  • Bus 210 may include one or more conductors that permit communication among the components of the client/server entity 105.
  • Processor 220 may include any type of conventional processor or microprocessor that interprets and executes instructions.
  • Main memory 230 may include a random access memory (RAM) or another type of dynamic storage device that stores information and instructions for execution by processor 220.
  • ROM 240 may include a conventional ROM device or another type of static storage device that stores static information and instructions for use by processor 220.
  • Storage device 250 may include a magnetic and/or optical recording medium and its corresponding drive. Storage device 250 may also include flash storage and its corresponding hardware.
  • Input device(s) 260 may include one or more conventional mechanisms that permit an operator to input information to the client/server entity 105, such as a camera, keyboard, a mouse, a pen, voice recognition and/or biometric mechanisms, etc.
  • Output device(s) 270 may include one or more conventional mechanisms that output information to the operator, including a display, a printer, a speaker, etc.
  • Communication interface 280 may include any transceiver-like mechanism that enables the client/server entity 105 to communicate with other devices 105 and/or systems.
  • communication interface 280 may include mechanisms for communicating with another device 105 or system via a network, such as network 150.
  • the client/server entity 105 performs certain image recording and printing operations.
  • the client/server entity 105 may perform these operations in response to processor 220 executing software instructions contained in a computer-readable medium, such as memory 230.
  • a computer-readable medium may be defined as one or more physical or logical memory devices and/or carrier waves.
  • the software instructions may be read into memory 230 from another computer-readable medium, such as storage device 250, or from another device via communication interface 280.
  • Fig. 3 is a diagram of an example of a computing device 300 and a mobile computing device 350, which may be used with the techniques described herein.
  • Computing device 300 or mobile computing device 350 may correspond to, for example, a client 105 and/or a server 110-140.
  • Computing device 300 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, mainframes, and other appropriate computers.
  • Mobile computing device 350 is intended to represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smart phones, tablet computers, and other similar computing devices.
  • the components shown in Fig. 3, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations described herein.
  • Computing device 300 may include a processor 302, a memory 304, a storage device 306, a high-speed interface 308 connecting to a memory 304 and high-speed expansion ports 310, and a low-speed interface 312 connecting to a low-speed expansion port 314 and a storage device 306.
  • processor 302 can process instructions for execution within computing device 300, including instructions stored in memory 304 or on storage device 306 to display graphical information for a graphical user interface (GUI) on an external input/output device, such as display 316 coupled to high-speed interface 308.
  • processors and/or multiple buses may be used, as appropriate, along with multiple memories and types of memory.
  • multiple computing devices 300 may be connected, with each device providing portions of the necessary operations, as a server bank, a group of blade servers, or a multi-processor system, etc.
  • Memory 304 stores information within computing device 300.
  • Memory 304 may include a volatile memory unit or units or, alternatively, may include a nonvolatile memory unit or units.
  • Memory 304 may also be another form of computer- readable medium, such as a magnetic or optical disk.
  • a computer-readable medium may refer to a non-transitory memory device.
  • a memory device may refer to storage space within a single storage device or spread across multiple storage devices.
  • Storage device 306 is capable of providing mass storage for computing device 300.
  • Storage device 306 may be or may contain a computer-readable medium, such as a hard disk device, an optical disk device, or a tape device, a flash memory or other similar solid state memory device, or an array of devices, including devices in a storage area network or other configurations.
  • a computer program product can be tangibly embodied in an information carrier.
  • the computer program product may also contain instructions that, when executed, perform one or more methods, such as those described herein.
  • the information carrier is a computer or machine-readable medium, such as memory 304, storage device 306, or a memory on processor 302.
  • High-speed interface 308 manages bandwidth-intensive operations for computing device 300, while low-speed interface 312 manages lower bandwidth- intensive operations. Such allocation of functions is an example only.
  • High-speed interface 308 may be coupled to memory 304, display 316, such as through a graphics processor or accelerator, and to high-speed expansion ports 310, which may accept various expansion cards.
  • Low-speed interface 312 may be coupled to storage device 306 and low-speed expansion port 314.
  • Low-speed expansion port 314, which may include various communication ports, such as USB, Bluetooth, Ethernet, wireless Ethernet, etc., may be coupled to one or more input/output devices, such as a keyboard, a pointing device, a scanner, or a networking device such as a switch or router, e.g., through a network adapter.
  • Computing device 300 may be implemented in a number of different forms, as shown in the figures.
  • computing device 300 may be implemented as a standard server 320, or in a group of such servers.
  • Computing device 300 may also be implemented as part of a rack server system 324.
  • computing device 300 may be implemented in a personal computer, such as a laptop computer 322.
  • components from computing device 300 may be combined with other components in a mobile device, such as mobile computing device 350.
  • Each of such devices may contain one or more computing devices 300, 350, and an entire system may be made up of multiple computing devices 300, 350 communicating with each other.
  • Mobile computing device 350 may include a processor 352, a memory 364, an input/output ("I/O") device, such as a display 354, a communication interface 366, and a transceiver 368, among other components.
  • Mobile computing device 350 may also be provided with a storage device, such as a micro-drive or other device, to provide additional storage.
  • Each of the components 352, 364, 354, 366, and 368 may be interconnected using various buses, and several of the components may be mounted on a common motherboard or in other manners as appropriate.
  • Processor 352 can execute instructions within mobile computing device 350, including instructions stored in memory 364.
  • Processor 352 may be implemented as a chipset of chips that include separate and multiple analog and digital processors.
  • Processor 352 may provide, for example, for coordination of the other components of mobile computing device 350, such as control of user interfaces, applications run by mobile computing device 350, and wireless communication by mobile computing device 350.
  • Processor 352 may communicate with a user through control interface 358 and display interface 356 coupled to a display 354.
  • Display 354 may be, for example, a TFT LCD (Thin-Film-Transistor Liquid Crystal Display) or an OLED (Organic Light Emitting Diode) display or other appropriate display technology.
  • Display interface 356 may include appropriate circuitry for driving display 354 to present graphical and other information to a user.
  • Control interface 358 may receive commands from a user and convert the commands for submission to processor 352.
  • an external interface 362 may be provided in communication with processor 352, so as to enable near area communication of mobile computing device 350 with other devices. External interface 362 may provide, for example, for wired communications, or for wireless communication, and multiple interfaces may also be used.
  • Memory 364 stores information within mobile computing device 350.
  • Memory 364 can be implemented as one or more of a computer-readable medium or media, a volatile memory unit or units, or a non-volatile memory unit or units.
  • Expansion memory 374 may also be provided and connected to mobile computing device 350 through expansion interface 372, which may include, for example, a SIMM (Single In Line Memory Module) card interface. Such expansion memory 374 may provide extra storage space for device 350, or may also store applications or other information for mobile computing device 350. Specifically, expansion memory 374 may include instructions to carry out or supplement the processes described herein, and may include secure information also. Thus, for example, expansion memory 374 may be provided as a security module for mobile computing device 350, and may be programmed with instructions that permit secure use of mobile computing device 350. In addition, secure applications may be provided via the SIMM cards, along with additional information, such as placing identifying information on the SIMM card in a non-hackable manner.
  • Expansion memory 374 may include, for example, flash memory and/or NVRAM memory.
  • a computer program product can be tangibly embodied in an information carrier.
  • the computer program product may contain instructions that, when executed, perform one or more methods, such as those described herein.
  • the information carrier may be a computer-or machine readable-medium, such as memory 364, expansion memory 374, or a memory on processor 352, that may be received, for example, over transceiver 368 or external interface 362.
  • Mobile computing device 350 may communicate wirelessly through communication interface 366, which may include digital signal processing circuitry where necessary. Communication interface 366 may provide for communications under various modes or protocols, such as GSM voice calls, SMS, EMS or MMS messaging, CDMA, TDMA, PDC, WCDMA, CDMA2000, or GPRS, among others. Such communication may occur, for example, through transceiver 368. In addition, short-range communication may occur, such as using a Bluetooth, WiFi, or other such transceiver. In addition, a GPS (Global Positioning System) receiver module 370 may provide additional navigation-related and location-related wireless data to mobile computing device 350, which may be used as appropriate by applications running on mobile computing device 350.
  • Mobile computing device 350 may be implemented in a number of different forms, as shown in the figures.
  • mobile computing device 350 may be implemented as a cellular telephone 380.
  • Mobile computing device 350 may also be implemented as part of a smart phone 382, personal digital assistant, or other similar mobile device.
  • Various implementations of the systems and techniques described herein can be realized in digital electronic circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof.
  • These various implementations may include implementations in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.
  • applications include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language.
  • the terms "machine-readable medium" and "computer-readable medium" refer to any apparatus and/or device, such as magnetic discs, optical disks, memory, or Programmable Logic Devices, used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal.
  • machine-readable signal refers to any signal used to provide machine instructions and/or data to a programmable processor.
  • Computer-readable medium may physically reside in one or more memory devices accessible by a server.
  • Computer-readable medium may include a database of entries corresponding to field of view photographs and files. Each of the entries may include, but are not limited to, a plurality of images collectively making up a user's field of capture when taking the plurality of images, metadata relating to those images, GPS information, and other like data.
  • the techniques described herein may be implemented on a computer having a display device, such as a CRT (cathode ray tube), LCD (liquid crystal display), or LED (Light Emitting Diode) monitor, for displaying information to the user and a keyboard and a pointing device by which the user can provide input to the computer.
  • Other kinds of devices may be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, such as visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input.
  • the techniques described herein can be implemented in a computing system that includes a back end component, such as a data server, or that includes a middleware component, such as an application server, or that includes a front end component, such as a client computer having a graphical user interface or Web browser through which a user can interact with an implementation of the techniques described herein, or any combination of such back end, middleware, or front end components.
  • the components of the system may be interconnected by any form of medium of digital communication.
  • a system and method are provided for representing how a photograph was captured in relation to the field of capture and mapping this field of capture onto an appropriate three-dimensional object.
  • the system and method may provide for a three-dimensional print having the same field of capture as when originally taken. Additionally, the present disclosure may provide a complete system for capturing, uploading, and printing three-dimensional photographs.
  • the system may comprise a user management system allowing a user to
  • the system may further comprise a file creation system that compiles a plurality of images collectively making up a field of capture into a single stitched image.
  • the system may further comprise altering the stitched image based on metadata or the images' specific parameters to form a three-dimensional printed object.
  • the system may further comprise a secure database for storing the files for later use.
  • the system may further comprise print management allowing a user to print various fields of capture in a tangible medium.
  • the system may comprise a suite of web services that powers all of the applications and tools making up the system.
  • This suite of servers allows a user to capture images, upload those images to a website, preview a stitched-together panorama photo, and send the approved panorama to a printer.
  • the system may comprise at least one database 410 where all user photos, data, and files are stored. It is understood that the system may comprise more than one database for storing information. It is additionally understood that the database may be cloud storage.
  • Each user 420 is unique and has a unique identifier, such as a unique identification number, when each user 420 is added to the system. Each user 420 may use a unique ten-digit phone number as an identification number when each customer account is created. Alternatively, each user 420 may use a unique email address or alternative unique identifier. Additionally, the system may provide a unique identifier. Additional customer information, such as physical address, may also be stored at the time said customer account is created.
  • Fig. 4 shows a simple representation of an exemplary architecture for a system in accordance with the present disclosure.
  • the system may utilize a database management system 410 such as Microsoft SQL, PostgreSQL, or similar system. It is understood that various servers may be used to access stored data. Users 420 may connect through a service or server before accessing stored data. This ensures all access to the database 410 has been authenticated.
  • the present disclosure provides a method for representing a field of capture in the form of physical media, such as a three-dimensional object.
  • a user 420 captures a plurality of photographs before uploading the photographs into the system.
  • the user 420 may use a mobile phone to take the photographs.
  • the user 420 may use a camera to take the photographs.
  • the user may use a three hundred and sixty degree camera, such as the Ricoh Theta, as illustrated in Fig. 5, or Samsung Gear 360.
  • the user may utilize a camera having a "fish-eye" lens, such as a GoPro, as illustrated in Fig. 6.
  • the user may also utilize the "panorama" mode of a phone or camera that stitches the panorama on the device and stores metadata information in the image.
  • these cameras will store the information in the metadata of the captured image.
  • These cameras are provided only as examples and it is understood that any camera, both currently available and available in the future, capable of taking panoramic or three hundred and sixty degree photos may be utilized with the methods and system disclosed herein. If using a camera, it is preferred that a fish-eye lens is used.
  • Fig. 5 and Fig. 6 show examples of currently available cameras that may optionally be used in accordance with the present disclosure.
  • a user 420 may open a system webpage or mobile application in order to receive prompts for taking individual photographs.
  • the system application may present the user with pre-set areas in a field of view for aligning the phone's camera. These pre-set areas may be represented by dots on a smart-phone screen. Once aligned, a user will capture a photograph. The user may rotate the phone about the user's body while capturing photos corresponding to each pre-set area in the system application's user interface.
  • a user 420 communicates with a web server 411 and gives the group of photos a name. This action causes the group of images to be associated with a single project. This may be done over an HTTP application program interface (API).
  • the user may give a name to the collective group of images for identification.
  • the individual images are stored in a database 410.
  • the images may be uploaded to the database as a zipped file.
  • a web server may associate the user with the new named images and generate a new asset path for the images.
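  • By way of illustration, the upload step described above might look like the following sketch. The endpoint, field names, and token scheme here are hypothetical assumptions, not the system's actual API:

```python
# Hypothetical sketch of the upload step: zip the group of images,
# name the project, and POST it to the system's HTTP API. The endpoint,
# field names, and token scheme are illustrative assumptions.
import zipfile

import requests

def upload_capture(image_paths, project_name, user_token,
                   api_base="https://example.com/api"):
    archive = "capture.zip"
    with zipfile.ZipFile(archive, "w") as zf:
        for path in image_paths:
            zf.write(path)  # collect the individual images into one file

    with open(archive, "rb") as f:
        response = requests.post(
            f"{api_base}/projects",
            data={"name": project_name},  # the user-chosen group name
            files={"images": f},          # the zipped group of images
            headers={"Authorization": f"Bearer {user_token}"},
        )
    response.raise_for_status()
    return response.json()  # e.g. the asset path generated by the server
```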
  • a system and method detects the field of capture of the images and projects the images to be printed as originally taken.
  • the files from storage 410 are downloaded to a local server 412.
  • the files may be downloaded from cloud storage.
  • the local server 412 may unzip the images onto a local hard drive and feed the unzipped images into stitching software.
  • the images are stitched together and the field of capture is calculated based on analyzing metadata of the various images. Alternatively, if metadata is not available, the system calculates the most likely field of capture based on the image dimensions.
  • the available metadata may relate only to the focal length of the lens and the width and height of the image.
  • the focal length of a lens is an inherent property of the lens and is the distance from the center of the lens to the point at which objects at infinity focus.
  • the system may utilize the focal length and height of the image to calculate a field of capture for projecting an image or images onto a sphere or other three-dimensional object.
  • the horizontal field of view may equal 2 × atan(0.5 × width / focal length).
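  • As a worked sketch of this pinhole-model calculation, assuming width, height, and focal length share the same units (e.g., pixels):

```python
# Field-of-view estimate from focal length and image dimensions, per
# the relation fov = 2 * atan(0.5 * size / focal_length).
import math

def field_of_view_deg(width, height, focal_length):
    """Width, height, and focal_length must share units (e.g. pixels)."""
    h_fov = 2 * math.atan(0.5 * width / focal_length)
    v_fov = 2 * math.atan(0.5 * height / focal_length)
    return math.degrees(h_fov), math.degrees(v_fov)

# e.g. a 4000 x 3000 image with a 2300-pixel focal length:
# field_of_view_deg(4000, 3000, 2300) -> (~82.0, ~66.2) degrees
```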
  • the images may be stitched together, and any lens distortion may be corrected.
  • Images may be stitched together using any means known in the art. For example, AUTOPANO SERVER or PHOTOSHOP may be utilized to stitch together the various photos. Once stitched together, the images are manipulated for creating the three-dimensional printable object.
  • the system may parse image metadata to determine what information is available about the panoramic image. If metadata is available, that information may be used to calculate the projection onto either a sphere or cylinder.
  • the image may be cropped and scaled to map onto a physical object 414.
  • the physical object 414 may be a sphere or cylinder.
  • the term "sphere” may refer to any three- dimensional object having a spherical shape or partially spherical shape, such as a hemisphere, or to any three-dimensional object generally having a rounded shape, such as an ellipsoid.
  • a three-dimensional file that is print ready is created based on the cropped image data.
  • Additional data may include the diameter, wall thickness, and hole size of the object 414.
  • the three-dimensional print-ready file may be exported to the database 410.
  • a user 420 may preview an image or video of the three-dimensional file.
  • the three-dimensional file may be sent to a three-dimensional printer 413.
  • the resulting product 414 is a printed three-dimensional object having a representation of a field of capture printed thereon.
  • the system may utilize an equirectangular projection represented by three hundred sixty degrees of longitude by one hundred eighty degrees of latitude. As such, the system may utilize an aspect ratio of 2.0 (360°/180°). Because spherical panoramas are often large and/or incomplete, panoramas may be cropped to exclude data that was not captured.
  • the images forming the panorama may be uncropped.
  • the system generally requires the following to uncrop an image: (a) the width and height of the cropped image; (b) the width and height of the uncropped image; and (c) the horizontal and vertical offset of the cropped image within the uncropped image.
  • This information may come in various forms of metadata and it is understood that said forms of metadata fall within the scope of the present disclosure.
  • the metadata used may be GPano metadata, otherwise known as Photo Sphere XMP Metadata.
  • the system may compute GPano metadata as follows:
  • GPano:FullPanoWidthPixels = GPano:CroppedAreaImageWidthPixels / (horizontal field of view / 360°)
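  • A minimal sketch of how these GPano fields could drive the uncropping step described above. The field names follow the published Photo Sphere XMP schema; the surrounding pipeline is an illustrative assumption:

```python
# Recover the full equirectangular canvas from GPano (Photo Sphere XMP)
# metadata and place the cropped panorama at its recorded offset. The
# field names follow the Photo Sphere XMP schema; the surrounding
# pipeline is an illustrative assumption.
from PIL import Image

def uncrop_panorama(cropped_path, gpano):
    cropped = Image.open(cropped_path)
    full = Image.new("RGB", (gpano["FullPanoWidthPixels"],
                             gpano["FullPanoHeightPixels"]))
    full.paste(cropped, (gpano["CroppedAreaLeftPixels"],
                         gpano["CroppedAreaTopPixels"]))
    return full

# Example: a capture spanning half of the 360-degree canvas.
gpano = {
    "FullPanoWidthPixels": 8000,   # CroppedAreaImageWidthPixels / (180/360)
    "FullPanoHeightPixels": 4000,
    "CroppedAreaImageWidthPixels": 4000,
    "CroppedAreaImageHeightPixels": 2000,
    "CroppedAreaLeftPixels": 2000,
    "CroppedAreaTopPixels": 1000,
}
```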
  • Metadata may be embedded in image files using Adobe's Extensible Metadata Platform format.
  • the system may utilize photo sphere properties to create the media to be printed.
  • Photo sphere metadata properties are dependent on the images taken and it is understood that the system may utilize different photo sphere metadata parameters.
  • the system may utilize Euler angles to provide a mapping from the points in the various photos that are stitched together. Further, artificial intelligence may also be utilized to learn from the user input on whether mapping is accurate.
  • the panorama may be scaled to dimensions that disagree with the embedded dimensions.
  • a spherical panorama can be produced by many different camera systems, some of which may embed incorrect metadata in the image. Even if originally correct, the GPano metadata can be corrupted, or simply not updated, when a panorama is edited. As such, the system may optionally validate the metadata.
  • panoramas created using the system may include correct metadata.
  • the system may use two tactics to validate and correct bad or missing GPano metadata for an equirectangular projection. If GPano metadata is present, but the actual dimensions disagree with the embedded dimensions, the system may check whether the original and actual aspect ratios agree to within around 1%. If so, the system scales the GPano data to match the actual image dimensions. If no GPano metadata is present, but the panorama has aspect ratio within 1% of 2.0, then the system may assume the panorama is an uncropped equirectangular projection and may add GPano metadata describing this assumption.
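  • These two tactics might be sketched as follows. The roughly 1% tolerance comes from the text above; the helper's structure is an assumption:

```python
# Validate or repair GPano metadata for an equirectangular panorama,
# following the two tactics described above. The ~1% tolerance comes
# from the text; the helper structure is an illustrative assumption.
def validate_gpano(width, height, gpano=None, tol=0.01):
    actual_ratio = width / height
    if gpano is not None:
        embedded_ratio = (gpano["CroppedAreaImageWidthPixels"] /
                          gpano["CroppedAreaImageHeightPixels"])
        if abs(actual_ratio / embedded_ratio - 1.0) <= tol:
            # Tactic 1: dimensions disagree but aspect ratios agree, so
            # scale every pixel-valued GPano field to the actual size.
            scale = width / gpano["CroppedAreaImageWidthPixels"]
            return {key: round(value * scale) for key, value in gpano.items()}
        return None  # inconsistent beyond automatic repair
    if abs(actual_ratio / 2.0 - 1.0) <= tol:
        # Tactic 2: no metadata, but a ~2:1 aspect ratio is assumed to be
        # an uncropped equirectangular projection; synthesize metadata.
        return {
            "FullPanoWidthPixels": width,
            "FullPanoHeightPixels": height,
            "CroppedAreaImageWidthPixels": width,
            "CroppedAreaImageHeightPixels": height,
            "CroppedAreaLeftPixels": 0,
            "CroppedAreaTopPixels": 0,
        }
    return None
```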
  • the system may determine it has too little data to place the panorama onto a sphere, in which case it may create a cylindrical projection. If the system determines it cannot place the panorama onto a cylinder, it may create a simple planar projection. Additionally, the system may receive a preassembled panorama that it determines is not spherical. In this case, the system may handle the panorama as a cylinder and the measurements may be approximated for printing.
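  • That fallback chain could be expressed as a simple decision function; the specific angular thresholds below are illustrative assumptions, as the source does not state them:

```python
# Illustrative fallback chain: sphere when the metadata and coverage
# support it, otherwise cylinder, otherwise a simple planar projection.
# The angular thresholds are assumptions; the source does not state them.
def choose_projection(has_valid_metadata, h_fov_deg, v_fov_deg):
    if has_valid_metadata and h_fov_deg >= 180 and v_fov_deg >= 90:
        return "sphere"    # enough data to place the panorama on a sphere
    if h_fov_deg >= 120:
        return "cylinder"  # wide capture, vertically limited
    return "plane"         # too little data; fall back to planar
```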
  • the process of mapping the image onto a sphere takes into account the ideal image to wrap onto the object, preferably an image that is not substantially skewed, and then modifies the image by scaling and cropping it so that it fits the shape without distortion.
  • the system generates a three-dimensional file that may be loaded into three-dimensional printing software with no modifications and be printed in full color.
  • the file may include diameter, wall thickness, and hole sizes specified for the three-dimensional object to be printed.
  • the three-dimensional object may be cropped so that the image projected onto it is the only portion of the sphere remaining.
  • the three-dimensional file to be printed may be exported as a Virtual Reality Modeling Language (VRML) file.
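  • A toy example of such an export, assuming the stitched panorama is wrapped onto a sphere; this is a minimal sketch, not the actual exporter, and the texture filename is hypothetical:

```python
# Minimal sketch of writing a VRML97 file that wraps a panorama texture
# around a sphere. A real export would also carve the sphere to the
# captured field and set diameter, wall thickness, and hole size.
def write_vrml(texture_url, radius_m, out_path="pano_sphere.wrl"):
    vrml = f"""#VRML V2.0 utf8
Shape {{
  appearance Appearance {{
    texture ImageTexture {{ url "{texture_url}" }}
  }}
  geometry Sphere {{ radius {radius_m} }}
}}
"""
    with open(out_path, "w") as f:
        f.write(vrml)

write_vrml("stitched_pano.png", 0.05)  # 5 cm sphere, hypothetical texture
```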
  • the file may then be uploaded to be printed.
  • the system may utilize three-dimensional print software by 3D Systems.
  • the user can print the file without further manipulation.
  • the system may use any computer graphics software to create this file.
  • BLENDER may be utilized.
  • the three-dimensional file may then be synced to cloud storage in a location specified by the web server. This three-dimensional file is then ready to be downloaded by the three-dimensional printer operator and loaded directly into the three-dimensional printing software.
  • a scaling and cropping code may be applied to the sphere itself so that the system provides partial spheres that represent the area that has been captured by the photographs.
  • the system may selectively cut out the region of the sphere that physically represents the shape of the panoramic photograph that was actually captured in the field of capture. For example, if the stitched image is not a full three hundred sixty degree by one hundred eighty degree (true spherical), only that portion of the field of capture which was originally represented will be projected onto the sphere.
  • the system may project a whole three hundred sixty degree by one hundred eighty degree panorama onto a sphere and then crop the sphere such that the only remaining portions of the sphere are the areas that were captured in the original field of capture.
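  • One way to sketch that cropping: keep only the sphere surface inside the captured angular extent. The code below is a simplified point-cloud illustration of the idea, not the system's actual mesh processing:

```python
# Keep only the portion of a unit sphere that falls inside the captured
# angular extent; a full 360 x 180 capture keeps the entire sphere. This
# is a simplified point-cloud illustration of the cropping idea.
import math

def cropped_sphere_points(h_fov_deg, v_fov_deg, step_deg=5):
    points = []
    for lon in range(-180, 181, step_deg):
        for lat in range(-90, 91, step_deg):
            if abs(lon) <= h_fov_deg / 2 and abs(lat) <= v_fov_deg / 2:
                lon_r, lat_r = math.radians(lon), math.radians(lat)
                points.append((
                    math.cos(lat_r) * math.cos(lon_r),  # x
                    math.cos(lat_r) * math.sin(lon_r),  # y
                    math.sin(lat_r),                    # z
                ))
    return points
```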
  • the server may then export the three-dimensional file such that a preview image or video of the three-dimensional file is generated and presented to a user via a user interface. Additionally, the three-dimensional file and stitched panorama may be uploaded and stored to cloud storage.
  • a user may use a smart phone or similar computing device to open a mobile application of the system.
  • the user may be presented with a user interface displaying a plurality of dots.
  • the user may move the phone until the field of capture is aligned with one of the plurality of dots.
  • the dots may emphasize or change colors when the field of capture is properly aligned.
  • the user captures the image.
  • the user may move the smart phone about her body in order to capture each field of capture corresponding to each one of the plurality of dots on the user interface.
  • the user may upload the images to a system website.
  • the user's images may be automatically loaded to the system website from the mobile application.
  • a user may use a web browser to navigate to the system website.
  • the system website may comprise a plurality of web pages. Each webpage may be accessed via tabs on the system website homepage or other system webpage. Links or tabs may allow a user to navigate from one page to another.
  • the system may further comprise web-based forms with text fields therein. The text fields may auto populate predetermined forms, webpages, databases, servers, or other targeted destinations. A user may access a webpage that accesses a database via a server.
  • User images and data may be tied to a specific user such that the actions and data relating to the user are grouped with that user, separately from other users on the system.
  • a user requests, via a user interface, that a server provide access to user images, files, data, or other user information.
  • the server verifies the user's permission level to access said user information, including images, files, or data, or a combination thereof. It is understood that various forms of verification may be used, including login names and passwords, device recognition, or other known verification methods.
  • the server may access user images, files, data, or other user information based on the requesting user's permission level.
  • the server may then communicate, based on the received user request, the requested user images, files, data, or other user information, which may include a combination of images, files, or data.
  • the server then generates output information based on the user information.
  • the output information may include the requested user images, files, or data, or a combination thereof.
  • the server may then provide the output information to a user interface relating to the user's request and permission levels. For instance, the output information may include a preview of a three-dimensional file to be printed. The preview may be presented to the user in the user interface.
  • user information may include a variety of information or data, including, but not limited to, user images, metadata, three-dimensional files, panorama images, field of capture data, and image quality data.
  • the system may generate output information based on the user information.
  • the output information may include a three-dimensional file ready for printing on a three-dimensional printer.
  • a method for representing a field of capture in the form of a physical media comprising:
  • g. Determining, by the server, the orientation of the stitched image on a sphere or cylinder, wherein the orientation of the image is based on a user's field of capture when capturing each of the plurality of images; h. Generating, by the server, a three-dimensional file, wherein the three-dimensional file represents the stitched image as the field of capture; i. Storing the three-dimensional file in a database; j. Providing, by the server to a printer, the three-dimensional file; and k. Printing, by the printer, the three-dimensional file.
  • the server utilizes metadata to determine the orientation of the stitched image on a sphere or cylinder, wherein the server abstracts the metadata from each of the plurality of images making up the stitched image to be projected onto a sphere or cylinder.
  • the server utilizes metadata to determine the orientation of the stitched image on a sphere or cylinder, wherein the metadata is GPano metadata.
  • a method for representing a field of capture in the form of a physical media comprising:
  • c. Generating, by the computing device, a three-dimensional file for printing, wherein the file includes a stitched image representing the field of capture; d. Receiving, by the computing device and from a user, a user request to print the field of capture;
  • g. Determining, by the computing device and based on the user request, at least one supporting application to use for processing the user request; h. Communicating, by the computing device, with the at least one supporting application;
  • a method for representing a field of capture in the form of a physical media comprising:
  • panorama image having metadata relating to a field of capture
  • c. Providing, by a server, the panorama image and metadata to a local server; d. Analyzing, by the local server, the metadata of the panorama image and cropping and scaling said panorama image to portray the field of capture; e. Storing the cropped and scaled panorama image and the metadata as a three-dimensional file;
  • physical media is a sphere or cylinder, and wherein the printed media represents the field of capture.
  • the server utilizes the metadata to determine the orientation of the panorama image on a sphere or cylinder, wherein the server abstracts the metadata from a plurality of images collectively making up the panorama image to be projected onto a sphere or cylinder.
  • the server utilizes the metadata to determine the orientation of the panorama image on a sphere or cylinder, wherein the server uses the following metadata: a. the width and height of each of the plurality of images collectively making up the panorama image within an uncropped image;

Abstract

The invention is directed to a system and method for representing how a photograph was captured in relation to the field of capture, and mapping this representation onto a shape in a three-dimensional print. More specifically, a group of images, or a single image captured through a lens with field of view distortion, is captured and stored together as a group. The images may be stitched together to form a single image. Once stitched together, a three-dimensional file is created and stored to the system. A server then provides the three-dimensional file to a three-dimensional printer for printing. Once printed, the three-dimensional object is packaged and mailed to the sender.

Description

SYSTEM AND METHOD FOR REPRESENTING A FIELD OF CAPTURE AS
PHYSICAL MEDIA
by
Charles Pierre Carriere IV
Kaben Gabriel Nanlohy
Harold Cole Wiley
CROSS REFERENCES
[0001] This application claims the benefit of U.S. Application No. 15/243,555, filed on August 22, 2016, which application is incorporated herein by reference.
FIELD OF THE INVENTION
[0002] The present disclosure refers generally to a system and method for
representing how a photograph was captured in relation to a field of capture, and mapping this representation onto a shape in a three-dimensional print.
BACKGROUND
[0003] A panorama is an unbroken view of the whole region surrounding an observer. This view is often considered the observer's field of view. Panoramic photography is a technique of photography that attempts to capture images with horizontally elongated fields of capture.
[0004] There are many cameras and other devices capable of capturing panoramic images. However, there is no mechanism currently available to physically represent the field of view captured in a panoramic image. This is because panoramic photography is printed on flat media.
[0005] One problem with printing on flat media is that panoramic photographs cannot be printed with the same field of view in which they were shot. In other words, they are flat and not concave as when shot from a particular point. Moreover, if attempting to accurately print a user's field of view, there is an additional problem that multiple systems must be utilized since no individual system allows for printing of a panoramic photograph.
[0006] Accordingly, there is a need for a single system capable of representing how a photograph was captured in relation to the field of capture, and mapping this representation onto a three-dimensional print. Additionally, there is a need for a system capable of allowing individuals to take three-dimensional photographs and print representations of those photographs as taken.
SUMMARY
[0007] The present invention provides a method for representing a field of capture in the form of a physical media in accordance with independent claims 1, 7, and 13. Preferred embodiments of the invention are reflected in the dependent claims. The claimed invention can be better understood in view of the embodiments described and illustrated in the present disclosure, viz. in the present specification and drawings. In general, the present disclosure reflects preferred embodiments of the invention. The attentive reader will note, however, that some aspects of the disclosed embodiments extend beyond the scope of the claims. To the extent that the disclosed embodiments indeed extend beyond the scope of the claims, they are to be considered supplementary background information and do not constitute definitions of the invention per se.
[0008] In accordance with one aspect of the present disclosure, a system and method for representing how a photograph was captured in relation to the field of capture, and mapping this representation onto a three-dimensional shape in a three-dimensional print, is provided. The system may comprise multiple servers, databases, processors, and user interfaces.
[0009] According to another aspect, a system for capturing and analyzing image data is provided. As set forth herein, the system comprises a means for using data from individual images to determine a field of capture. The field of capture is generated from a group of images being stitched together to form a panorama. In one aspect, the present disclosure provides an integrated system allowing a user to upload images to a website and print those images as an accurate representation of a user's field of capture.
[00010] Additional features and advantages of the invention as claimed will be set forth in the description which follows, and will be apparent from the description, or may be learned by practice of the invention as claimed. The foregoing general description and the following detailed description are exemplary and explanatory and are intended to provide further explanation of the invention as claimed.
DESCRIPTION OF THE DRAWINGS
These and other features, aspects, and advantages of the present disclosure will become better understood with regard to the following description, appended claims, and accompanying drawings where:
FIG. 1 is a diagram of an example environment in which techniques described herein may be implemented;
FIG. 2 is an exemplary diagram of a client of FIG. 1 according to an implementation consistent with the principles of the present disclosure;
FIG. 3 is a diagram of an example of a computing device and a mobile computing device;
FIG. 4 is a diagram illustrating an example system configuration according to an implementation consistent with the principles of the present disclosure;
FIG. 5 illustrates an example of an off the shelf camera that may be used according to an implementation consistent with the principles of the present disclosure; and
FIG. 6 illustrates an example of an off the shelf camera that may be used according to an implementation consistent with the principles of the present disclosure.
DETAILED DESCRIPTION
[00011] In the Summary above, in this Detailed Description, in the claims below, and in the accompanying drawings, reference is made to particular features, including method steps, of the invention as claimed. In the present disclosure, many features are described as being optional, e.g. through the use of the verb "may" or the use of parentheses. For the sake of brevity and legibility, the present disclosure does not explicitly recite each and every permutation that may be obtained by choosing from the set of optional features. However, the present disclosure is to be interpreted as explicitly disclosing all such permutations. For example, a system described as having three optional features may be embodied in seven different ways, namely with just one of the three possible features, with any two of the three possible features, or with all three of the three possible features. It is to be understood that the disclosure in this specification includes all possible combinations of such particular features. For example, where a particular feature is disclosed in the context of a particular aspect or embodiment, or a particular claim, that feature can also be used, to the extent possible, in combination with, or in the context of, other particular aspects or embodiments, and generally in the invention as claimed.
[00012] The term "comprises" and grammatical equivalents thereof are used herein to mean that other components, steps, etc. are optionally present. For example, a system "comprising" components A, B, and C can contain only components A, B, and C, or can contain not only components A, B, and C, but also one or more other components.
[00013] The term "field of capture" is used herein to mean the area captured when taking a photograph or group of photographs. For example, a single photograph has a certain field of capture representing the area that was visible when taking the single photograph. Similarly, a group of individual photographs may collectively make up a field of capture, in which each photograph is a piece of the larger field of capture. A field of capture may be a panorama photograph or group of photographs making up a single panorama. As used herein, the term "three-dimensional file" refers to any computer file capable of being printed on a three-dimensional printer.
[00014] Where reference is made herein to a method comprising two or more defined steps, the defined steps can be carried out in any order or simultaneously (except where the context excludes that possibility), and the method can include one or more other steps which are carried out before any of the defined steps, between two of the defined steps, or after all the defined steps (except where the context excludes that possibility).
[00015] Systems and methods consistent with the principles of the present disclosure may provide solutions for representing how a photograph was captured in relation to the field of capture, and mapping this representation onto the appropriate shape in a three-dimensional print. For example, the systems and methods may permit a user to capture images making up a user's field of view, upload said images to a website, have those images stitched together, and project the stitched-together image onto a three-dimensional object. Additionally, the system and method allow for printing the stitched-together image onto a three-dimensional object.
[00016] Fig. 1 is a diagram of an example environment 100 in which techniques described herein may be implemented. Environment 100 may include multiple clients 105 connected to one or more servers 110-140 via a network 150. Server 110 may be a search server that may implement a search engine; server 120 may be a document indexing server, e.g., a web crawler; and servers 130 and 140 may be general web servers, such as servers that provide content to clients 105. Clients 105 and servers 110-140 may be connected to network 150 via wired, wireless, or a combination of wired and wireless connections.

[00017] Three clients 105 and four servers 110-140 are illustrated as connected to network 150 for simplicity. In practice, there may be additional or fewer clients and servers. Also, a client may perform the functions of a server, and a server may perform the functions of a client.
[00018] Clients 105 may include devices of users that access servers 110-140. A client 105 may include, for instance, a personal computer, a wireless telephone, a personal digital assistant (PDA), a laptop, a smart phone, a tablet computer, a camera, or another type of computation or communication device. Servers 110-140 may include devices that access, fetch, aggregate, process, search, provide, and/or maintain documents, files, and/or images. Although shown as single components 110, 120, 130, and 140 in Fig. 1, each server 110-140 may be implemented as multiple computing devices, which potentially may be geographically distributed.
[00019] Search server 110 may include one or more computing devices designed to implement a search engine, such as a documents/records search engine, a general webpage search engine, an image search engine, etc. Search server 110 may, for example, include one or more web servers to receive search queries and/or inputs from clients 105, search one or more databases in response to the search queries and/or inputs, and provide documents, files, or images, relevant to the search queries and/or inputs, to clients 105. Search server 110 may include a web search server that may provide webpages to clients 105, where a provided webpage may include a reference to a web server, such as one of web servers 130 or 140, at which the desired information and/or links are located. The references, to the web server at which the desired information is located, may be included in a frame and/or text box, or as a link to the desired information/document.
[00020] Document indexing server 120 may include one or more computing devices designed to index files and images available through network 150. Document indexing server 120 may access other servers, such as web servers that host content, to index the content. Document indexing server 120 may index files/images stored by other servers, such as web servers 130 and 140, connected to network 150. Document indexing server 120 may, for example, store and index content, information, and documents relating to three-dimensional images and field of view images and prints.
[00021] Web servers 130 and 140 may each include web servers that provide webpages to clients. The webpages may be, for example, HTML-based webpages. A web server 130/140 may host one or more websites. A website, as the term is used herein, may refer to a collection of related webpages. Frequently, a website may be associated with a single domain name, although some websites may potentially encompass more than one domain name. The concepts described herein may be applied on a per-website basis. Alternatively, the concepts described herein may be applied on a per-webpage basis.
[00022] While servers 110-140 are shown as separate entities, it may be possible for one or more of servers 110-140 to perform one or more of the functions of another one or more of servers 110-140. For example, it may be possible that two or more of servers 110-140 are implemented as a single server. It may also be possible for one of servers 110-140 to be implemented as multiple, possibly distributed, computing devices.

[00023] Network 150 may include one or more networks of any kind, including, but not limited to, a local area network (LAN), a wide area network (WAN), a telephone network, such as the Public Switched Telephone Network (PSTN), an intranet, the Internet, a memory device, another type of network, or a combination of networks.
[00024] Although Fig. 1 shows example components of environment 100, environment 100 may contain fewer components, different components, differently arranged components, and/or additional components than those depicted in Fig. 1. Alternatively, or additionally, one or more components of environment 100 may perform one or more other tasks described as being performed by one or more other components of environment 100.
[00025] Fig. 2 is an exemplary diagram of a user/client 105 or server entity (hereinafter called "client/server entity"), which may correspond to one or more of the clients and servers. The client/server entity 105 may include a bus 210, a processor 220, a main memory 230, a read only memory (ROM) 240, a storage device 250, one or more input devices 260, one or more output devices 270, and a communication interface 280. Bus 210 may include one or more conductors that permit communication among the components of the client/server entity 105.
[00026] Processor 220 may include any type of conventional processor or microprocessor that interprets and executes instructions. Main memory 230 may include a random access memory (RAM) or another type of dynamic storage device that stores information and instructions for execution by processor 220. ROM 240 may include a conventional ROM device or another type of static storage device that stores static information and instructions for use by processor 220. Storage device 250 may include a magnetic and/or optical recording medium and its corresponding drive. Storage device 250 may also include flash storage and its corresponding hardware.
[00027] Input device(s) 260 may include one or more conventional mechanisms that permit an operator to input information to the client/server entity 105, such as a camera, a keyboard, a mouse, a pen, voice recognition and/or biometric mechanisms, etc. Output device(s) 270 may include one or more conventional mechanisms that output information to the operator, including a display, a printer, a speaker, etc. Communication interface 280 may include any transceiver-like mechanism that enables the client/server entity 105 to communicate with other devices 105 and/or systems. For example, communication interface 280 may include mechanisms for communicating with another device 105 or system via a network, such as network 150.
[00028] As will be described in detail below, the client/server entity 105 performs certain image recording and printing operations. The client/server entity 105 may perform these operations in response to processor 220 executing software instructions contained in a computer-readable medium, such as memory 230. A computer-readable medium may be defined as one or more physical or logical memory devices and/or carrier waves.
[00029] The software instructions may be read into memory 230 from another computer-readable medium, such as data storage device 250, or from another device via communication interface 280. The software instructions contained in memory 230 cause processor 220 to perform processes that will be described in greater detail below. Alternatively, hardwired circuitry may be used in place of or in combination with software instructions to implement processes consistent with the principles of the present disclosure. Thus, implementations consistent with the principles of the present disclosure are not limited to any specific combination of hardware circuitry and software.
[00030] Fig. 3 is a diagram of an example of a computing device 300 and a mobile computing device 350, which may be used with the techniques described herein. Computing device 300 or mobile computing device 350 may correspond to, for example, a client 105 and/or a server 110-140. Computing device 300 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, mainframes, and other appropriate computers. Mobile computing device 350 is intended to represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smart phones, tablet computers, and other similar computing devices. The components shown in Fig. 3, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations described herein.
[00031] Computing device 300 may include a processor 302, a memory 304, a storage device 306, a high-speed interface 308 connecting to a memory 304 and high-speed expansion ports 310, and a low-speed interface 312 connecting to a low-speed expansion port 314 and a storage device 306. Each of components 302, 304, 306, 308, 310, 312, and 314 is interconnected using various buses, and may be mounted on a common motherboard or in other manners as appropriate. Processor 302 can process instructions for execution within computing device 300, including instructions stored in memory 304 or on storage device 306 to display graphical information for a graphical user interface (GUI) on an external input/output device, such as display 316 coupled to high-speed interface 308. Multiple processors and/or multiple buses may be used, as appropriate, along with multiple memories and types of memory. Also, multiple computing devices 300 may be connected, with each device providing portions of the necessary operations, as a server bank, a group of blade servers, or a multi-processor system, etc.
[00032] Memory 304 stores information within computing device 300. Memory 304 may include a volatile memory unit or units or, alternatively, may include a non-volatile memory unit or units. Memory 304 may also be another form of computer-readable medium, such as a magnetic or optical disk. A computer-readable medium may refer to a non-transitory memory device. A memory device may refer to storage space within a single storage device or spread across multiple storage devices.
[00033] Storage device 306 is capable of providing mass storage for computing device 300. Storage device 306 may be or may contain a computer-readable medium, such as a hard disk device, an optical disk device, or a tape device, a flash memory or other similar solid state memory device, or an array of devices, including devices in a storage area network or other configurations. A computer program product can be tangibly embodied in an information carrier. The computer program product may also contain instructions that, when executed, perform one or more methods, such as those described herein. The information carrier is a computer or machine-readable medium, such as memory 304, storage device 306, or a memory on processor 302.
[00034] High-speed interface 308 manages bandwidth-intensive operations for computing device 300, while low-speed interface 312 manages lower bandwidth-intensive operations. Such allocation of functions is an example only. High-speed interface 308 may be coupled to memory 304, display 316, such as through a graphics processor or accelerator, and to high-speed expansion ports 310, which may accept various expansion cards. Low-speed interface 312 may be coupled to storage device 306 and low-speed expansion port 314. Low-speed expansion port 314, which may include various communication ports, such as USB, Bluetooth, Ethernet, wireless Ethernet, etc., may be coupled to one or more input/output devices, such as a keyboard, a pointing device, a scanner, or a networking device such as a switch or router, e.g., through a network adapter.
[00035] Computing device 300 may be implemented in a number of different forms, as shown in the figures. For example, computing device 300 may be implemented as a standard server 320, or in a group of such servers. Computing device 300 may also be implemented as part of a rack server system 324. In addition, computing device 300 may be implemented in a personal computer, such as a laptop computer 322. Alternatively, components from computing device 300 may be combined with other components in a mobile device, such as mobile computing device 350. Each of such devices may contain one or more computing devices 300, 350, and an entire system may be made up of multiple computing devices 300, 350 communicating with each other.
[00036] Mobile computing device 350 may include a processor 352, a memory 364, an input/output ("I/O") device, such as a display 354, a communication interface 366, and a transceiver 368, among other components. Mobile computing device 350 may also be provided with a storage device, such as a micro-drive or other device, to provide additional storage. Each of the components 352, 364, 354, 366, and 368 may be interconnected using various buses, and several of the components may be mounted on a common motherboard or in other manners as appropriate.
[00037] Processor 352 can execute instructions within mobile computing device 350, including instructions stored in memory 364. Processor 352 may be implemented as a chipset of chips that include separate and multiple analog and digital processors. Processor 352 may provide, for example, for coordination of the other components of mobile computing device 350, such as control of user interfaces, applications run by mobile computing device 350, and wireless communication by mobile computing device 350.
[00038] Processor 352 may communicate with a user through control interface 358 and display interface 356 coupled to a display 354. Display 354 may be, for example, a TFT LCD (Thin-Film-Transistor Liquid Crystal Display) or an OLED (Organic Light Emitting Diode) display, or other appropriate display technology. Display interface 356 may include appropriate circuitry for driving display 354 to present graphical and other information to a user. Control interface 358 may receive commands from a user and convert the commands for submission to processor 352. In addition, an external interface 362 may be provided in communication with processor 352, so as to enable near area communication of mobile computing device 350 with other devices. External interface 362 may provide, for example, for wired or wireless communications, and multiple interfaces may also be used.

[00039] Memory 364 stores information within mobile computing device 350. Memory 364 can be implemented as one or more of a computer-readable medium or media, a volatile memory unit or units, or a non-volatile memory unit or units. Expansion memory 374 may also be provided and connected to mobile computing device 350 through expansion interface 372, which may include, for example, a SIMM (Single In Line Memory Module) card interface. Such expansion memory 374 may provide extra storage space for device 350, or may also store applications or other information for mobile computing device 350. Specifically, expansion memory 374 may include instructions to carry out or supplement the processes described herein, and may include secure information also. Thus, for example, expansion memory 374 may be provided as a security module for mobile computing device 350, and may be programmed with instructions that permit secure use of mobile computing device 350. In addition, secure applications may be provided via the SIMM cards, along with additional information, such as placing identifying information on the SIMM card in a non-hackable manner.
[00040] Expansion memory 374 may include, for example, flash memory and/or NVRAM memory. A computer program product can be tangibly embodied in an information carrier. The computer program product may contain instructions that, when executed, perform one or more methods, such as those described herein. The information carrier may be a computer- or machine-readable medium, such as memory 364, expansion memory 374, or a memory on processor 352, and may be received, for example, over transceiver 368 or external interface 362.
[00041] Mobile computing device 350 may communicate wirelessly through communication interface 366, which may include digital signal processing circuitry where necessary. Communication interface 366 may provide for communications under various modes or protocols, such as GSM voice calls, SMS, EMS or MMS messaging, CDMA, TDMA, PDC, WCDMA, CDMA2000, or GPRS, among others. Such communication may occur, for example, through transceiver 368. In addition, short-range communication may occur, such as using a Bluetooth, WiFi, or other such transceiver. In addition, a GPS (Global Positioning System) receiver module 370 may provide additional navigation-related and location-related wireless data to mobile computing device 350, which may be used as appropriate by applications running on mobile computing device 350.
[00042] Mobile computing device 350 may be implemented in a number of different forms, as shown in the figures. For example, mobile computing device 350 may be implemented as a cellular telephone 380. Mobile computing device 350 may also be implemented as part of a smart phone 382, personal digital assistant, or other similar mobile device.
[00043] Various implementations described herein may be realized in digital electronic circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various implementations may include implementations in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.
[00044] These computer programs, also known as programs, software, software applications, or code, include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any apparatus and/or device, such as magnetic discs, optical disks, memory, or Programmable Logic Devices ("PLDs"), used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
[00045] The contents of a computer-readable medium may physically reside in one or more memory devices accessible by a server. The computer-readable medium may include a database of entries corresponding to field of view photographs and files. Each of the entries may include, but is not limited to, a plurality of images collectively making up a user's field of capture when taking the plurality of images, metadata relating to those images, GPS information, and other like data.
[00046] To provide for interaction with a user, the techniques described herein may be implemented on a computer having a display device, such as a CRT (cathode ray tube), LCD (liquid crystal display), or LED (Light Emitting Diode) monitor, for displaying information to the user and a keyboard and a pointing device by which the user can provide input to the computer. Other kinds of devices may be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, such as visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input.
[00047] The techniques described herein can be implemented in a computing system that includes a back end component, such as a data server, or that includes a middleware component, such as an application server, or that includes a front end component, such as a client computer having a graphical user interface or Web browser through which a user can interact with an implementation of the techniques described herein, or any combination of such back end, middleware, or front end components. The components of the system may be interconnected by any form or medium of digital communication.
[00048] In accordance with one aspect of the present disclosure, a system and method are provided for representing how a photograph was captured in relation to the field of capture and mapping this field of capture onto an appropriate three-dimensional object. The system and method may provide for a three-dimensional print having the same field of capture as when originally taken. Additionally, the present disclosure may provide a complete system for capturing, uploading, and printing three- dimensional photographs.
[00049] The system may comprise a user management system allowing a user to upload images to a system website. The system may further comprise a file creation system that compiles a plurality of images collectively making up a field of capture into a single stitched image. The system may further comprise altering the stitched image based on metadata or the images' specific parameters to form a three-dimensional printed object. The system may further comprise a secure database for storing the files for later use. The system may further comprise print management allowing a user to print various fields of capture in a tangible medium.
[00050] It is understood that the systems and methods disclosed herein may be carried out utilizing any language using any framework. The system may comprise a suite of web services that powers all of the applications and tools making up the system. This suite of services allows a user to capture images, upload those images to a website, preview a stitched-together panorama photo, and send the approved panorama to a printer.
[00051] As illustrated in Fig. 4, the system may comprise at least one database 410 where all user photos, data, and files are stored. It is understood that the system may comprise more than one database for storing information. It is additionally understood that the database may be cloud storage. Each user 420 is unique and has a unique identifier, such as a unique identification number, when each user 420 is added to the system. Each user 420 may use a unique ten-digit phone number as an identification number when each customer account is created. Alternatively, each user 420 may use a unique email address or alternative unique identifier. Additionally, the system may provide a unique identifier. Additional customer information, such as physical address, may also be stored at the time said customer account is created.
[00052] Fig. 4 shows a simple representation of an exemplary architecture for a system in accordance with the present disclosure. The system may utilize a database management system 410 such as Microsoft SQL, PostgreSQL, or similar system. It is understood that various servers may be used to access stored data. Users 420 may connect through a service or server before accessing stored data. This ensures all access to the database 410 has been authenticated.
[00053] The present disclosure provides a method for representing a field of capture in the form of physical media, such as a three-dimensional object. As illustrated in Fig. 4, in accordance with the present disclosure, a user 420 captures a plurality of photographs before uploading the photographs into the system. The user 420 may use a mobile phone to take the photographs. Alternatively, the user 420 may use a camera to take the photographs. The user may use a three hundred and sixty degree camera, such as the Ricoh Theta, as illustrated in Fig. 5, or Samsung Gear 360. The user may utilize a camera having a "fish-eye" lens, such as a GoPro, as illustrated in Fig. 6. The user may also utilize the "panorama" mode of a phone or camera that stitches the panorama on the device and stores metadata information in the image. Optionally, these cameras will store the information in the metadata of the captured image. These cameras are provided only as examples and it is understood that any camera, both currently available and available in the future, capable of taking panoramic or three hundred and sixty degree photos may be utilized with the methods and system disclosed herein. If using a camera, it is preferred that a fish-eye lens is used. Fig. 5 and Fig. 6 show examples of currently available cameras that may optionally be used in accordance with the present disclosure.
[00054] A user 420 may open a system webpage or mobile application in order to receive prompts for taking individual photographs. Generally, the system application may present the user with pre-set areas in a field of view for aligning the phone's camera. These pre-set areas may be represented by dots on a smart-phone screen. Once aligned, a user will capture a photograph. The user may rotate the phone about the user's body while capturing photos corresponding to each pre-set area in the system application's user interface.
[00055] As illustrated in Fig. 4, once all the photos are captured, a user 420 communicates with a web server 411 and gives the group of photos a name. This action causes the group of images to be associated with a single project. This may be done over an HTTP application program interface (API). The user may give a name to the collective group of images for identification. The individual images are stored in a database 410. The images may be uploaded to the database as a zipped file. A web server may associate the user with the new named images and generate a new asset path for the images.
[00056] As set forth herein, a system and method detects the field of capture of the images and projects the images to be printed as originally taken. To create a file for printing, the files from storage 410 are downloaded to a local server 412. The files may be downloaded from cloud storage. The local server 412 may unzip the images onto a local hard drive and feed the unzipped images into stitching software. The images are stitched together and the field of capture is calculated based on analyzing metadata of the various images. Alternatively, if metadata is not available, the system calculates the most likely field of capture based on the image dimensions.
[00057] In some situations, the available metadata may relate only to the focal length of the lens and the width and height of the image. The focal length of a lens is an inherent property of the lens and is the distance from the center of the lens to the point at which objects at infinity focus. As such, the system may utilize the focal length and the width and height of the image to calculate a field of capture for projecting an image or images onto a sphere or other three-dimensional object. Using this information, the horizontal field of view may equal [2 atan(0.5 width / focallength)], and the vertical field of view may equal [2 atan(0.5 height / focallength)].
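By way of illustration only, the following is a minimal sketch of this calculation in Python, assuming the focal length has already been converted to the same units as the image dimensions (for example, pixels derived from EXIF metadata); the function name and example values are hypothetical:

    import math

    def field_of_capture(width, height, focal_length):
        """Estimate horizontal and vertical fields of view, in degrees,
        from an image's dimensions and the lens focal length, per the
        relations above. All three arguments must share one unit
        (e.g., pixels, with the focal length converted from EXIF data)."""
        horizontal = 2 * math.atan(0.5 * width / focal_length)
        vertical = 2 * math.atan(0.5 * height / focal_length)
        return math.degrees(horizontal), math.degrees(vertical)

    # Hypothetical example: a 4000 x 3000 pixel image with a focal length
    # of 2800 pixels yields roughly a 71 x 56 degree field of capture.
    print(field_of_capture(4000, 3000, 2800))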
[00058] The images may be stitched together, and any lens distortion may be corrected. Images may be stitched together using any means known in the art. For example, AUTOPANO SERVER or PHOTOSHOP may be utilized to stitch together the various photos. Once stitched together, the images are manipulated for creating the three-dimensional printable object. The system may parse image metadata to determine what information is available about the panoramic image. If metadata is available, that information may be used to calculate the projection onto either a sphere or cylinder.
[00059] As further illustrated in Fig. 4, once the stitched image has been successfully analyzed for metadata, the image may be cropped and scaled to map onto a physical object 414. The physical object 414 may be a sphere or cylinder. However, it is understood that other objects may optionally be utilized and still fall within the scope of the present disclosure. As used herein, the term "sphere" may refer to any three-dimensional object having a spherical shape or partially spherical shape, such as a hemisphere, or to any three-dimensional object generally having a rounded shape, such as an ellipsoid. A three-dimensional file that is print ready is created based on the cropped image data. Additional data may include the diameter, wall thickness, and hole size of the object 414. The three-dimensional print-ready file may be exported to the database 410. A user 420 may preview an image or video of the three-dimensional file. Once approved, the three-dimensional file may be sent to a three-dimensional printer 413. The resulting product 414 is a printed three-dimensional object having a representation of a field of capture printed thereon.

[00060] The system may utilize an equirectangular projection represented by three hundred sixty degrees of longitude by one hundred eighty degrees of latitude. As such, the system may utilize an aspect ratio of 2.0 (360°/180°). Because spherical panoramas are often large and/or incomplete, panoramas may be cropped to exclude data that was not captured. In order to correctly display a cropped spherical panorama, the images forming the panorama may be uncropped. The system generally requires the following to uncrop an image: (a) the width and height of the cropped image; (b) the width and height of the uncropped image; and (c) the horizontal and vertical offset of the cropped image within the uncropped image. This information may come in various forms of metadata, and it is understood that said forms of metadata fall within the scope of the present disclosure. The metadata used may be GPano metadata, otherwise known as Photo Sphere XMP Metadata. The system may compute GPano metadata as follows:

(a) GPano:CroppedAreaImageWidthPixels = finalRender.nCols;
(b) GPano:CroppedAreaImageHeightPixels = finalRender.nRows;
(c) GPano:FullPanoWidthPixels = GPano:CroppedAreaImageWidthPixels / (projection.xnMax - projection.xnMin);
(d) GPano:FullPanoHeightPixels = GPano:CroppedAreaImageHeightPixels / (projection.ynMax - projection.ynMin);
(e) GPano:CroppedAreaLeftPixels = GPano:FullPanoWidthPixels * projection.xnMin;
(f) GPano:CroppedAreaTopPixels = GPano:FullPanoHeightPixels * projection.ynMin.

Note that xnMin, xnMax, ynMin, and ynMax are normalized to the interval [0, 1]; that xnMin < xnMax and ynMin < ynMax; and that, for an uncropped panorama, xnMin = 0, xnMax = 1, ynMin = 0, and ynMax = 1.
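As an illustration only, relations (a)-(f) may be sketched in Python as follows, assuming the rendered crop dimensions and normalized projection bounds are known; the function name and the example values are hypothetical:

    def compute_gpano(n_cols, n_rows, xn_min, xn_max, yn_min, yn_max):
        """Derive GPano (Photo Sphere XMP) fields from a rendered crop
        and its normalized projection bounds, mirroring relations
        (a)-(f) above. Bounds lie in [0, 1] with xn_min < xn_max and
        yn_min < yn_max; an uncropped panorama has bounds 0 and 1."""
        full_width = n_cols / (xn_max - xn_min)
        full_height = n_rows / (yn_max - yn_min)
        return {
            "GPano:CroppedAreaImageWidthPixels": n_cols,
            "GPano:CroppedAreaImageHeightPixels": n_rows,
            "GPano:FullPanoWidthPixels": round(full_width),
            "GPano:FullPanoHeightPixels": round(full_height),
            "GPano:CroppedAreaLeftPixels": round(full_width * xn_min),
            "GPano:CroppedAreaTopPixels": round(full_height * yn_min),
        }

    # Hypothetical example: a 2000 x 500 crop spanning the full longitude
    # but only the middle half of the latitude of a 2000 x 1000 panorama.
    print(compute_gpano(2000, 500, 0.0, 1.0, 0.25, 0.75))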
[00061] Generally, this metadata is embedded in each image's files. The GPano metadata may be embedded in image files using Adobe's Extensible Metadata Platform (XMP) format.
[00062] The system may utilize photo sphere properties to create the media to be printed. Photo sphere metadata properties are dependent on the images taken, and it is understood that the system may utilize different photo sphere metadata parameters. The system may utilize Euler angles to provide a mapping from the points in the various photos that are stitched together. Further, artificial intelligence may also be utilized to learn from user input on whether a mapping is accurate.
[00063] The correctness of the image metadata is not guaranteed. For example, the panorama may be scaled to dimensions that disagree with the embedded dimensions. A spherical panorama can be produced by many different camera systems, some of which may embed incorrect metadata in the image. Even if the metadata was originally correct, an editor can corrupt, or fail to update, the GPano metadata when the panorama is edited. As such, the system may optionally validate the metadata.
[00064] Panoramas created using the system may include correct metadata. For panoramas created outside the system, the system may use two tactics to validate and correct bad or missing GPano metadata for an equirectangular projection. If GPano metadata is present, but the actual dimensions disagree with the embedded dimensions, the system may check whether the original and actual aspect ratios agree to within around 1%. If so, the system scales the GPano data to match the actual image dimensions. If no GPano metadata is present, but the panorama has an aspect ratio within 1% of 2.0, then the system may assume the panorama is an uncropped equirectangular projection and may add GPano metadata describing this assumption.
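The sketch below illustrates these two tactics under stated assumptions: GPano fields are carried in a Python dict, and the exact tolerance comparisons and rounding rules are illustrative choices, not values taken from this disclosure.

    def validate_gpano(actual_w, actual_h, gpano=None, tol=0.01):
        """Validate or repair GPano metadata for an equirectangular
        panorama using the two tactics described above. `gpano` is a
        dict of GPano fields, or None when no metadata is embedded.
        Returns a usable dict, or None if the metadata cannot be trusted."""
        pixel_fields = (
            "GPano:FullPanoWidthPixels", "GPano:FullPanoHeightPixels",
            "GPano:CroppedAreaImageWidthPixels",
            "GPano:CroppedAreaImageHeightPixels",
            "GPano:CroppedAreaLeftPixels", "GPano:CroppedAreaTopPixels",
        )
        if gpano is None:
            # Tactic 2: no metadata, but an aspect ratio within ~1% of
            # 2.0 suggests an uncropped equirectangular projection.
            if abs(actual_w / actual_h - 2.0) <= 2.0 * tol:
                return {
                    "GPano:FullPanoWidthPixels": actual_w,
                    "GPano:FullPanoHeightPixels": actual_h,
                    "GPano:CroppedAreaImageWidthPixels": actual_w,
                    "GPano:CroppedAreaImageHeightPixels": actual_h,
                    "GPano:CroppedAreaLeftPixels": 0,
                    "GPano:CroppedAreaTopPixels": 0,
                }
            return None
        emb_w = gpano["GPano:CroppedAreaImageWidthPixels"]
        emb_h = gpano["GPano:CroppedAreaImageHeightPixels"]
        if (emb_w, emb_h) == (actual_w, actual_h):
            return gpano
        # Tactic 1: dimensions disagree, but aspect ratios agreeing to
        # within ~1% imply a plain resize; scale every pixel field.
        if abs((emb_w / emb_h) / (actual_w / actual_h) - 1.0) <= tol:
            scale = actual_w / emb_w
            return {k: round(v * scale) if k in pixel_fields else v
                    for k, v in gpano.items()}
        return None  # metadata appears corrupt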
[00065] When stitching images into a panorama, the system may determine it has too little data to place the panorama onto a sphere, in which case it may create a cylindrical projection. If the system determines it cannot place the panorama onto a cylinder, it may create a simple planar projection. Additionally, the system may receive a preassembled panorama that it determines is not spherical. In this case, the system may handle the panorama as a cylinder and the measurements may be approximated for printing.
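As a sketch only, this fallback order might be expressed as follows; the 120-degree threshold is an illustrative assumption, not a value given in this disclosure:

    def choose_projection(has_spherical_placement, h_fov_deg=None):
        """Pick a projection target following the fallback order
        described above: sphere, then cylinder, then plane."""
        if has_spherical_placement:
            return "sphere"  # enough data to place the panorama on a sphere
        if h_fov_deg is not None and h_fov_deg >= 120:
            return "cylinder"  # wide panorama lacking spherical placement
        return "plane"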
[00066] The process of mapping the image onto a sphere starts from the ideal image to wrap onto the object, which is preferably an image that is not substantially skewed, and then scales and crops the image to fit it onto the shape without distorting it.
[00067] The system generates a three-dimensional file that may be loaded into three- dimensional printing software with no modifications and be printed in full color. The file may include diameter, wall thickness, and hole sizes specified for the three- dimensional object to be printed. The three-dimensional object may be cropped so that the image projected onto it is the only portion of the sphere remaining.
[00068] The three-dimensional file to be printed may be exported as a Virtual Reality Modeling Language (VRML) file. However, it is understood that any file format may be used and still fall within the scope of the present disclosure. The file may then be uploaded to be printed. For example, the system may utilize three-dimensional print software by 3D Systems. The user can print the file without further manipulation. The system may use any computer graphics software to create this file. For example, BLENDER may be utilized. The three-dimensional file may then be synced to cloud storage in a location specified by the web server. This three-dimensional file is then ready to be downloaded by the three-dimensional printer operator and loaded directly into the three-dimensional printing software.
[00069] If the image is not a full three hundred sixty degree by one hundred eighty degree (true spherical) panorama, scaling and cropping may be applied to the sphere itself so that the system provides a partial sphere representing the area that was captured by the photographs. As such, the system may selectively cut out the region of the sphere that physically represents the shape of the panoramic photograph that was actually captured in the field of capture. For example, if the stitched image is not true spherical, only that portion of the field of capture which was originally represented will be projected onto the sphere. In a preferred embodiment, the system may project a whole three hundred sixty degree by one hundred eighty degree panorama onto a sphere and then crop the sphere such that the only remaining portions of the sphere are the areas that were captured in the original field of capture.
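As an illustration, the sketch below converts GPano crop fields, as computed above, into the angular region of the sphere to retain; the function name and the longitude/latitude conventions chosen here are assumptions:

    def sphere_crop_extents(gpano):
        """Map GPano crop fields to the angular region of the sphere to
        keep, in degrees, so a partial panorama prints as a partial
        sphere. Assumes the usual equirectangular convention: 360
        degrees of longitude across the full width, 180 degrees of
        latitude down the full height."""
        fw = gpano["GPano:FullPanoWidthPixels"]
        fh = gpano["GPano:FullPanoHeightPixels"]
        left = gpano["GPano:CroppedAreaLeftPixels"]
        top = gpano["GPano:CroppedAreaTopPixels"]
        cw = gpano["GPano:CroppedAreaImageWidthPixels"]
        ch = gpano["GPano:CroppedAreaImageHeightPixels"]
        lon_min = 360.0 * left / fw - 180.0
        lon_max = 360.0 * (left + cw) / fw - 180.0
        lat_max = 90.0 - 180.0 * top / fh
        lat_min = 90.0 - 180.0 * (top + ch) / fh
        return (lon_min, lon_max), (lat_min, lat_max)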
[00070] The server may then export the three-dimensional file such that a preview image or video of the three-dimensional file is generated and presented to a user via a user interface. Additionally, the three-dimensional file and the stitched panorama may be uploaded and stored to cloud storage.
[00071] A user may use a smart phone or similar computing device to open a mobile application of the system. The user may be presented with a user interface displaying a plurality of dots. The user may move the phone until the field of capture is aligned with one of the plurality of dots. The dots may emphasize or change colors when the field of capture is properly aligned. Once aligned, the user captures the image. The user may move the smart phone about her body in order to capture each field of capture corresponding to each one of the plurality of dots on the user interface. Once all of the images are captured, the user may upload the images to a system website. The user's images may be automatically loaded to the system website from the mobile application. Alternatively, a user may use a web browser to navigate to the system website. The system website may comprise a plurality of web pages. Each webpage may be accessed via tabs on the system website homepage or other system webpage. Links or tabs may allow a user to navigate from one page to another. The system may further comprise web-based forms with text fields therein. The text fields may auto populate predetermined forms, webpages, databases, servers, or other targeted destinations. A user may access a webpage that accesses a database via a server.
[00072] User images and data may be tied to a specific user such that the actions and data relating to the user are grouped with that user, separately from other users on the system. Generally, a user requests, via a user interface, that a server access user images, files, data, or other user information. The server verifies the user's permission level to access said user information, including images, files, or data, or a combination thereof. It is understood that various forms of verification may be used, including login names and passwords, device recognition, or other known authentication means.
[00073] The server may access user images, files, data, or other user information based on the requesting user's permission level. The server may then communicate, based on the received user request, the requested user images, files, data, or other user information, which may include a combination of images, files, or data. The server then generates output information based on the user information. The output information may include the requested user images, files, or data, or a combination thereof. The server may then provide the output information to a user interface relating to the user's request and permission levels. For instance, the output information may include a preview of a three-dimensional file to be printed. The preview may be presented to the user in the user interface. As used herein, user information may include a variety of information or data, including, but not limited to, user images, metadata, three-dimensional files, panorama images, field of capture data, and image quality data. The system may generate output information based on the user information. The output information may include a three-dimensional file ready for printing on a three-dimensional printer.
[00074] It will also be apparent to one of ordinary skill in the art that aspects of the invention as claimed may be implemented in many different forms of software, firmware, and hardware in the implementations illustrated in the figures. The actual software code or specialized control hardware used to implement aspects consistent with the present disclosure is not limiting. Thus, the operation and behavior of the aspects were described without reference to the specific software code, it being understood that one of ordinary skill in the art would be able to design software and control hardware to implement the aspects based on the description herein.

Claims

What is claimed is:
1) A method for representing a field of capture in the form of a physical media, said method comprising:
a. Capturing, by a user, a plurality of images, wherein the plurality of images forms a field of capture;
b. Storing the plurality of images in a database;
c. Requesting, by a user via a user interface, a server to access the images from the database;
d. Communicating, by the server, requested images from the database based on the user's request;
e. Generating, by the server, output information that includes the requested images being stitched together as a stitched image in a file;
f. Storing the stitched image in the database;
g. Determining, by the server, the orientation of the stitched image on a sphere or cylinder, wherein the orientation of the image is based on a user's field of capture when capturing each of the plurality of images;
h. Generating, by the server, a three-dimensional file, wherein the three-dimensional file represents the stitched image as the field of capture;
i. Storing the three-dimensional file in a database;
j. Providing, by the server to a printer, the three-dimensional file; and
k. Printing, by the printer, the three-dimensional file.
2) The method of claim 1, wherein the server utilizes metadata to determine the orientation of the stitched image on a sphere or cylinder.
3) The method of any one of claims 1-2, wherein the server utilizes metadata to determine the orientation of the stitched image on a sphere or cylinder, wherein the server abstracts the metadata from each of the plurality of images making up the stitched image to be projected onto a sphere or cylinder.
4) The method of any one of claims 1-3, wherein the server utilizes metadata to determine the orientation of the stitched image on a sphere or cylinder, wherein the server uses the following metadata:
a. the width and height of a cropped image within an uncropped image;
b. the width and height of the uncropped image; and
c. the horizontal and vertical offset of the cropped image within the uncropped image.
5) The method of any one of claims 1-4, wherein the server utilizes metadata to determine the orientation of the stitched image on a sphere or cylinder, wherein the metadata is GPano metadata.
6) The method of claim 1, further comprising, where no metadata is available, utilizing a focal length of the lens that captured each image and a height of each image to determine a field of capture for projecting an image or images onto a sphere or cylinder.
7) A method for representing a field of capture in the form of a physical media, said method comprising:
a. Receiving, by a computing device, a plurality of images, wherein the plurality of images forms a field of capture;
b. Receiving, by the computing device, metadata relating to each of the plurality of images;
c. Generating, by the computing device, a three-dimensional file for printing, wherein the file includes a stitched image representing the field of capture;
d. Receiving, by the computing device and from a user, a user request to print the field of capture;
e. Receiving, by the computing device, user information relating to the user's request;
f. Providing, by the computing device and for presentation in a user interface, the user information;
g. Determining, by the computing device and based on the user request, at least one supporting application to use for processing the user request;
h. Communicating, by the computing device, with the at least one supporting application;
i. Generating, by the computing device and based on communicating with the at least one supporting application, output information for presentation in the user interface;
j. Providing, by the computing device to a printer, said output information; and
k. Printing, by the printer, said output information received from the computing device.
8) The method of claim 7, wherein the computing device utilizes the metadata to determine the orientation of the stitched image on a sphere or cylinder.
9) The method of any one of claims 7-8, wherein the computing device utilizes the metadata to determine the orientation of the stitched image on a sphere or cylinder, wherein the computing device abstracts the metadata from each individual image making up the stitched image to be projected onto a sphere or cylinder.
10) The method of any one of claims 7-9, wherein the computing device utilizes the metadata to determine the orientation of the stitched image on a sphere or cylinder, wherein the computing device uses the following metadata:
a. the width and height of a cropped image within an uncropped image;
b. the width and height of the uncropped image; and
c. the horizontal and vertical offset of the cropped image within the uncropped image.
11) The method of any one of claims 7-10, wherein the computing device utilizes the metadata to determine the orientation of the stitched image on a sphere or cylinder, wherein the metadata is GPano metadata.
12) The method of claim 7, further comprising, where no metadata is available, utilizing a focal length of the lens that captured each image and a height of each image to determine a field of capture for projecting an image or images onto a sphere or cylinder.
13) A method for representing a field of capture in the form of a physical media, said method comprising:
a. Receiving, from an image capturing device, a panorama image, said panorama image having metadata relating to a field of capture;
b. Storing the panorama image and metadata in a database;
c. Providing, by a server, the panorama image and metadata to a local server;
d. Analyzing, by the local server, the metadata of the panorama image and cropping and scaling said panorama image to portray the field of capture;
e. Storing the cropped and scaled panorama image and the metadata as a three-dimensional file;
f. Exporting, by a server, the three-dimensional file to a printer; and
g. Printing the three-dimensional file onto physical media, wherein the physical media is a sphere or cylinder, and wherein the printed media represents the field of capture.
14) The method of claim 13, wherein the server utilizes the metadata to determine the orientation of the panorama image on a sphere or cylinder.
15) The method of any one of claims 13-14, wherein the server utilizes the metadata to determine the orientation of the panorama image on a sphere or cylinder, wherein the server abstracts the metadata from a plurality of images collectively making up the panorama image to be projected onto a sphere or cylinder.
16) The method of any one of claims 13-15, wherein the server utilizes the metadata to determine the orientation of the panorama image on a sphere or cylinder, wherein the server uses the following metadata:
a. the width and height of each of the plurality of images collectively making up the panorama image within an uncropped image;
b. the width and height of the uncropped image; and
c. the horizontal and vertical offset of a cropped image within the uncropped image.
17) The method of any one of claims 13-16, wherein the server utilizes the metadata to determine the orientation of the panorama image on a sphere or cylinder, wherein the metadata is GPano metadata.
18) The method of claim 13, further comprising, where no metadata is available, utilizing a focal length of the lens that captured the image and a height of the image to determine a field of capture for projecting the image onto a sphere or cylinder.
PCT/US2016/061397 2016-08-22 2016-11-10 System and method for representing a field of capture as physical media WO2018038756A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US15/243,555 2016-08-22
US15/243,555 US9983569B2 (en) 2016-08-22 2016-08-22 System and method for representing a field of capture as physical media

Publications (1)

Publication Number Publication Date
WO2018038756A1 true WO2018038756A1 (en) 2018-03-01

Family

ID=61190709

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2016/061397 WO2018038756A1 (en) 2016-08-22 2016-11-10 System and method for representing a field of capture as physical media

Country Status (2)

Country Link
US (1) US9983569B2 (en)
WO (1) WO2018038756A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10489607B2 (en) * 2017-04-28 2019-11-26 Innovative Lending Solutions, LLC Apparatus and method for a document management information system
US11226937B2 (en) * 2019-03-21 2022-01-18 Faro Technologies, Inc. Distributed measurement system for scanning projects

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7224387B2 (en) * 2002-01-09 2007-05-29 Hewlett-Packard Development Company, L.P. Method and apparatus for correcting camera tilt distortion in panoramic images
EP1931945B1 (en) * 2005-09-12 2011-04-06 Trimble Jena GmbH Surveying instrument and method of providing survey data using a surveying instrument
US8310556B2 (en) * 2008-04-22 2012-11-13 Sony Corporation Offloading processing of images from a portable digital camera
US8724007B2 (en) * 2008-08-29 2014-05-13 Adobe Systems Incorporated Metadata-driven method and apparatus for multi-image processing
JPWO2011024313A1 (en) * 2009-08-31 2013-01-24 株式会社ミマキエンジニアリング 3D inkjet printer
US20130293671A1 (en) * 2012-05-01 2013-11-07 Tourwrist, Inc. Systems and methods for stitching and sharing panoramas
US20140152806A1 (en) * 2012-11-30 2014-06-05 Sharp Cars Detailing & More, LLC Automated studio
JP6620740B2 (en) * 2014-05-09 2019-12-18 ソニー株式会社 Information processing apparatus, information processing method, and program
US20160065842A1 (en) * 2014-09-02 2016-03-03 Honeywell International Inc. Visual data capture feedback

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7778485B2 (en) * 2004-08-31 2010-08-17 Carl Zeiss Microimaging Gmbh Systems and methods for stitching image blocks to create seamless magnified images of a microscope slide
US20070081081A1 (en) * 2005-10-07 2007-04-12 Cheng Brett A Automated multi-frame image capture for panorama stitching using motion sensor
US20070273767A1 (en) * 2006-04-13 2007-11-29 Samsung Electronics Co., Ltd. Method and apparatus for requesting printing of panoramic image in mobile device
US20160078904A1 (en) * 2014-09-12 2016-03-17 Fujifilm Corporation Content management system, management content generating method, management content play back method, and recording medium
US20160203644A1 (en) * 2015-01-14 2016-07-14 Ricoh Company, Ltd. Information processing apparatus, information processing method, and computer-readable recording medium

Also Published As

Publication number Publication date
US9983569B2 (en) 2018-05-29
US20180052446A1 (en) 2018-02-22

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 16914371; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 16914371; Country of ref document: EP; Kind code of ref document: A1)