US20200257266A1 - Generating of 3d-printed custom wearables - Google Patents
- Publication number
- US20200257266A1 (U.S. application Ser. No. 16/784,713)
- Authority
- US
- United States
- Prior art keywords
- mobile device
- wearable
- body part
- images
- image data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B29—WORKING OF PLASTICS; WORKING OF SUBSTANCES IN A PLASTIC STATE IN GENERAL
- B29C—SHAPING OR JOINING OF PLASTICS; SHAPING OF MATERIAL IN A PLASTIC STATE, NOT OTHERWISE PROVIDED FOR; AFTER-TREATMENT OF THE SHAPED PRODUCTS, e.g. REPAIRING
- B29C64/00—Additive manufacturing, i.e. manufacturing of three-dimensional [3D] objects by additive deposition, additive agglomeration or additive layering, e.g. by 3D printing, stereolithography or selective laser sintering
- B29C64/30—Auxiliary operations or equipment
- B29C64/386—Data acquisition or data processing for additive manufacturing
-
- A—HUMAN NECESSITIES
- A43—FOOTWEAR
- A43B—CHARACTERISTIC FEATURES OF FOOTWEAR; PARTS OF FOOTWEAR
- A43B17/00—Insoles for insertion, e.g. footbeds or inlays, for attachment to the shoe after the upper has been joined
-
- A—HUMAN NECESSITIES
- A43—FOOTWEAR
- A43B—CHARACTERISTIC FEATURES OF FOOTWEAR; PARTS OF FOOTWEAR
- A43B17/00—Insoles for insertion, e.g. footbeds or inlays, for attachment to the shoe after the upper has been joined
- A43B17/003—Insoles for insertion, e.g. footbeds or inlays, for attachment to the shoe after the upper has been joined characterised by the material
-
- A—HUMAN NECESSITIES
- A43—FOOTWEAR
- A43D—MACHINES, TOOLS, EQUIPMENT OR METHODS FOR MANUFACTURING OR REPAIRING FOOTWEAR
- A43D1/00—Foot or last measuring devices; Measuring devices for shoe parts
- A43D1/02—Foot-measuring devices
- A43D1/025—Foot-measuring devices comprising optical means, e.g. mirrors, photo-electric cells, for measuring or inspecting feet
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B29—WORKING OF PLASTICS; WORKING OF SUBSTANCES IN A PLASTIC STATE IN GENERAL
- B29C—SHAPING OR JOINING OF PLASTICS; SHAPING OF MATERIAL IN A PLASTIC STATE, NOT OTHERWISE PROVIDED FOR; AFTER-TREATMENT OF THE SHAPED PRODUCTS, e.g. REPAIRING
- B29C64/00—Additive manufacturing, i.e. manufacturing of three-dimensional [3D] objects by additive deposition, additive agglomeration or additive layering, e.g. by 3D printing, stereolithography or selective laser sintering
- B29C64/30—Auxiliary operations or equipment
- B29C64/386—Data acquisition or data processing for additive manufacturing
- B29C64/393—Data acquisition or data processing for additive manufacturing for controlling or regulating additive manufacturing processes
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B29—WORKING OF PLASTICS; WORKING OF SUBSTANCES IN A PLASTIC STATE IN GENERAL
- B29D—PRODUCING PARTICULAR ARTICLES FROM PLASTICS OR FROM SUBSTANCES IN A PLASTIC STATE
- B29D35/00—Producing footwear
- B29D35/12—Producing parts thereof, e.g. soles, heels, uppers, by a moulding technique
- B29D35/122—Soles
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B33—ADDITIVE MANUFACTURING TECHNOLOGY
- B33Y—ADDITIVE MANUFACTURING, i.e. MANUFACTURING OF THREE-DIMENSIONAL [3-D] OBJECTS BY ADDITIVE DEPOSITION, ADDITIVE AGGLOMERATION OR ADDITIVE LAYERING, e.g. BY 3-D PRINTING, STEREOLITHOGRAPHY OR SELECTIVE LASER SINTERING
- B33Y10/00—Processes of additive manufacturing
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B33—ADDITIVE MANUFACTURING TECHNOLOGY
- B33Y—ADDITIVE MANUFACTURING, i.e. MANUFACTURING OF THREE-DIMENSIONAL [3-D] OBJECTS BY ADDITIVE DEPOSITION, ADDITIVE AGGLOMERATION OR ADDITIVE LAYERING, e.g. BY 3-D PRINTING, STEREOLITHOGRAPHY OR SELECTIVE LASER SINTERING
- B33Y30/00—Apparatus for additive manufacturing; Details thereof or accessories therefor
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B33—ADDITIVE MANUFACTURING TECHNOLOGY
- B33Y—ADDITIVE MANUFACTURING, i.e. MANUFACTURING OF THREE-DIMENSIONAL [3-D] OBJECTS BY ADDITIVE DEPOSITION, ADDITIVE AGGLOMERATION OR ADDITIVE LAYERING, e.g. BY 3-D PRINTING, STEREOLITHOGRAPHY OR SELECTIVE LASER SINTERING
- B33Y50/00—Data acquisition or data processing for additive manufacturing
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B33—ADDITIVE MANUFACTURING TECHNOLOGY
- B33Y—ADDITIVE MANUFACTURING, i.e. MANUFACTURING OF THREE-DIMENSIONAL [3-D] OBJECTS BY ADDITIVE DEPOSITION, ADDITIVE AGGLOMERATION OR ADDITIVE LAYERING, e.g. BY 3-D PRINTING, STEREOLITHOGRAPHY OR SELECTIVE LASER SINTERING
- B33Y50/00—Data acquisition or data processing for additive manufacturing
- B33Y50/02—Data acquisition or data processing for additive manufacturing for controlling or regulating additive manufacturing processes
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B33—ADDITIVE MANUFACTURING TECHNOLOGY
- B33Y—ADDITIVE MANUFACTURING, i.e. MANUFACTURING OF THREE-DIMENSIONAL [3-D] OBJECTS BY ADDITIVE DEPOSITION, ADDITIVE AGGLOMERATION OR ADDITIVE LAYERING, e.g. BY 3-D PRINTING, STEREOLITHOGRAPHY OR SELECTIVE LASER SINTERING
- B33Y80/00—Products made by additive manufacturing
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B19/00—Programme-control systems
- G05B19/02—Programme-control systems electric
- G05B19/18—Numerical control [NC], i.e. automatically operating machines, in particular machine tools, e.g. in a manufacturing environment, so as to execute positioning, movement or co-ordinated operations by means of programme data in numerical form
- G05B19/4097—Numerical control [NC], i.e. automatically operating machines, in particular machine tools, e.g. in a manufacturing environment, so as to execute positioning, movement or co-ordinated operations by means of programme data in numerical form characterised by using design data to control NC machines, e.g. CAD/CAM
- G05B19/4099—Surface or curve machining, making 3D objects, e.g. desktop manufacturing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
- G06T15/20—Perspective computation
- G06T15/205—Image-based rendering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B29—WORKING OF PLASTICS; WORKING OF SUBSTANCES IN A PLASTIC STATE IN GENERAL
- B29L—INDEXING SCHEME ASSOCIATED WITH SUBCLASS B29C, RELATING TO PARTICULAR ARTICLES
- B29L2031/00—Other particular articles
- B29L2031/48—Wearing apparel
- B29L2031/50—Footwear, e.g. shoes or parts thereof
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/30—Nc systems
- G05B2219/35—Nc in input of data, input till input file format
- G05B2219/35134—3-D cad-cam
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/30—Nc systems
- G05B2219/49—Nc machine tool, till multiple
- G05B2219/49007—Making, forming 3-D object, model, surface
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/04—Indexing scheme for image data processing or generation, in general involving 3D image data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/08—Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2215/00—Indexing scheme for image rendering
- G06T2215/16—Using real world measurements to influence rendering
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/02—Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]
Definitions
- This disclosure relates to 3-D digital modeling and subsequent 3-D printing. This disclosure more particularly relates to input and output handling of image data to generate 3-D models for printing.
- 3-D printing can be used to create customized items.
- The present cost of ownership of a 3-D printer, and the requisite skill required for proficient use, are prohibitive for most people.
- Generation of 3-D models for printing that fit a particular wearer, as opposed to standardized sizing schemes, adds a further complexity.
- FIG. 1 is a block diagram illustrating a system for the generation of customized 3D printed wearables
- FIG. 2 is a flowchart illustrating a process for generating custom 3D printed wearables
- FIG. 3 is a flowchart illustrating a process for acquiring images of the user and other data to be used in creating a customized 3D printed wearable
- FIG. 4 is a flowchart illustrating a process by which the mobile device interacts with the user to acquire images of the user
- FIG. 5 is a flowchart illustrating a process for performing computer vision on collected images of a user
- FIG. 6 is an illustration of a coordinate graph including a collection of X,Y locations along a body curve
- FIG. 7 is a flowchart illustrating a process for customizing tessellation models
- FIG. 8 is an illustration of a negative normal technique to create complex tessellation files
- FIG. 9 is a block diagram illustrating a distributed system for the generation of customized 3D printed wearables
- FIG. 10 is a flowchart illustrating API access at a number of steps in wearable generation
- FIG. 11 is a flowchart illustrating an embodiment for handling a number of input body data types
- FIG. 12 is a flowchart illustrating wearable generation including simultaneous computer vision and machine learning processes
- FIG. 13 is a flowchart illustrating distance measurement in images taken from single lens 2-D cameras
- FIG. 14 is a graphic illustration of a customized computer mouse generated through body scanning
- FIG. 15 is a graphic illustration of a customized ear headphone generated through body scanning
- FIG. 16 is a graphic illustration of an assortment of wearables that are generated via body imaging followed by 3-D printing
- FIG. 17 is a graphic illustration of customized eyeglasses generated through body scanning
- FIG. 18 is a graphic illustration of a customized brace generated through body scanning
- To generate 3D printed wearable objects with simple instructions and minimal processing, the technique introduced here enables commonly available mobile devices to be used to image the prospective wearer, and enables a processing server or distributed set of servers to use the resulting images to generate a tessellation model. The tessellation model can then be used to generate a 3D printed wearable object (hereinafter simply a “wearable”).
- The term “wearable” refers to articles, adornments, or items designed to be worn by a user, to be incorporated into another item worn by a user, to act as an orthosis for the user, or to interface with the contours of a user's body.
- An example of a wearable used throughout this disclosure to facilitate description is a shoe insole.
- A shoe insole is illustrative in that the shape and style one person would want tend to vary greatly from the shape and style another person would want, so customization is an important detail. Nonetheless, the teachings in this disclosure apply similarly to other types of wearables, such as bracelets, rings, bras, helmets, earphones, goggles, support braces (e.g., knee, wrist), gauge earrings, and body-contoured peripherals.
- FIG. 1 is a block diagram illustrating a system for the generation of customized 3D printed wearables 20. Included in the system is the capability for providing body part input data.
- The system includes a mobile processing device 22 that includes a digital camera and is equipped to communicate over a wireless network, such as a smartphone, tablet computer, networked digital camera, or other suitable mobile device known in the art (hereafter, “mobile device”), along with a processing server 24 and a 3D printer 26.
- The system can further include a manual inspection computer 28.
- The mobile device 22 is a device capable of capturing and transmitting images over a wireless network, such as the Internet 30.
- In some embodiments, mobile device 22 is a handheld device.
- Examples of mobile device 22 include a smartphone (e.g., Apple iPhone, Samsung Galaxy), a confocal microscopy body scanner, an infrared camera, an ultrasound camera, a digital camera, and a tablet computer (e.g., Apple iPad or Dell Venue 8 7000).
- The mobile device 22 is a processor-enabled device including a camera 34, a network transceiver 36A, a user interface 38A, and digital storage and memory 40A containing client application software 42.
- The camera 34 on the mobile device may be a simple digital camera or a more complex 3D camera, scanning device, or video capture device.
- Examples of 3D cameras include Intel RealSense cameras and Lytro light field cameras.
- More complex cameras may include scanners developed by TOM-CAT Solutions, LLC (the TOM-CAT or iTOM-CAT), adapted versions of infrared cameras, ultrasound cameras, or adapted versions of intra-oral scanners by 3Shape.
- Simple digital cameras (including no sensors beyond 2-D optical) use reference objects of known size to calculate distances within images.
- Use of a 3D camera may reduce or eliminate the need for a reference object because 3D cameras are capable of calculating distances within a given image without any predetermined sizes/distances in the images.
- The mobile device also provides a user interface 38A that is used in connection with the client application software 42.
- The client application software 42 provides the user with the ability to select various 3D printed wearable products. The selection of products corresponds with camera instructions for images that the user is to capture. Captured images are delivered over the Internet 30 to the processing server 24.
- Processor 32B operates processing server 24.
- The processing server 24 receives image data from the mobile device 22.
- Server application software 44 uses the image data to perform image processing, machine learning, and computer vision operations that populate characteristics of the user.
- The server application software 44 includes computer vision tools 46 to aid in the performance of computer vision operations. Examples of computer vision tools 46 include OpenCV and SimpleCV, though other suitable tools are known in the art and may be programmed to identify pixel variations in digital images. Pixel variation data is used as taught herein to produce the desired results.
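As a toy illustration of the pixel-variation detection such tools perform (a pure-Python sketch, not OpenCV; the image values and threshold below are made up), a crude detector can flag horizontally adjacent grayscale pixels whose intensities differ sharply:

```python
def horizontal_edges(image, threshold):
    """Return (row, col) positions where horizontally adjacent pixel
    intensities differ by more than `threshold` -- a crude stand-in for
    the pixel-variation detection a computer vision library performs."""
    edges = []
    for r, row in enumerate(image):
        for c in range(len(row) - 1):
            if abs(row[c] - row[c + 1]) > threshold:
                edges.append((r, c))
    return edges

# Toy 4x4 grayscale image: dark background (20) with a bright patch (200).
img = [
    [20, 20, 20, 20],
    [20, 200, 200, 20],
    [20, 200, 200, 20],
    [20, 20, 20, 20],
]
print(horizontal_edges(img, 100))  # → [(1, 0), (1, 2), (2, 0), (2, 2)]
```

Production tools work on the same principle but add smoothing, gradient operators, and contour tracing on top of this raw difference signal.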
- A user or administrative user may perform manual checks and/or edits on the results of the computer vision operations.
- The manual checks are performed on the manual inspection computer 28 or at a terminal that accesses the processing server 24's resources.
- The processing server 24 includes a number of premade tessellation model kits 48 corresponding to products that the user selects from the client application software 42. Edits may affect both functional and cosmetic details of the wearable; examples include looseness/tightness and high-rise/low-rise fit. Edits are further stored by the processing server 24 as observations to improve machine learning algorithms.
- The tessellation model kits 48 are used as a starting point from which the processing server 24 applies customizations.
- Tessellation model kits 48 are collections of data files that can be used to digitally render an object for 3D printing and to print the object using the 3D printer 26.
- Common file types of tessellation model kits 48 include .3dm, .3ds, .blend, .bvh, .c4d, .dae, .dds, .dxf, .fbx, .lwo, .lws, .max, .mtl, .obj, .skp, .stl, .tga, and other suitable file types known in the art.
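Of the listed formats, .stl is among the simplest: an 80-byte header, a 32-bit facet count, and 50 bytes per triangular facet. As an illustrative sketch (not the patent's implementation), a minimal binary STL can be written with nothing but the standard library:

```python
import struct

def write_binary_stl(path, triangles):
    """Write a minimal binary .stl file: an 80-byte header, a uint32
    facet count, then 50 bytes per facet (normal + 3 vertices + a
    2-byte attribute field)."""
    with open(path, "wb") as f:
        f.write(b"\0" * 80)                         # header (unused)
        f.write(struct.pack("<I", len(triangles)))  # facet count
        for normal, v1, v2, v3 in triangles:
            for vec in (normal, v1, v2, v3):
                f.write(struct.pack("<3f", *vec))   # little-endian floats
            f.write(struct.pack("<H", 0))           # attribute byte count

# A single right-triangle facet in the XY plane, normal pointing up +Z.
facet = ((0.0, 0.0, 1.0),
         (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
write_binary_stl("facet.stl", [facet])
```

A real tessellation model would carry thousands of facets; the point is only that the container itself is simple enough to generate programmatically.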
- The customizations generate a file for use with a 3D printer.
- The processing server 24 is in communication with the 3D printer 26 in order to print out the user's desired 3D wearable.
- In some embodiments, tessellation files 48 are generated on the fly from the input provided to the system.
- In such embodiments, the tessellation file 48 is generated without premade input, through an image processing, computer vision, and machine learning process.
- Numerous models of 3D printer 26 may be used by the invented system. 3D printers 26 vary in the size of the printed article. Based on the type of 3D wearable users select, varying sizes of 3D printer 26 are appropriate. Where the 3D wearable is a bracelet, for example, a 6 cubic inch printer may be sufficient. When printing shoe insoles, however, a larger printer may be required. A majority of insoles can be printed with a 1 cubic foot printer, though some larger insoles require a larger 3-D printer.
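The sizing question above amounts to a build-volume check. A small sketch, assuming the “6 cubic inch” and “1 cubic foot” printers refer to 6-inch and 12-inch cubes (an interpretation, not stated in the source):

```python
def fits_build_volume(part_dims, printer_dims):
    """True if a part (dimensions in inches) fits the printer's build
    volume under some 90-degree reorientation of its axes.  Sorting
    both dimension triples makes the comparison orientation-free."""
    return all(p <= b for p, b in zip(sorted(part_dims), sorted(printer_dims)))

# A ~12-inch insole overflows a 6-inch cube but fits a 12-inch cube.
assert not fits_build_volume((12.0, 4.0, 1.5), (6.0, 6.0, 6.0))
assert fits_build_volume((12.0, 4.0, 1.5), (12.0, 12.0, 12.0))
# A small bracelet fits the smaller printer.
assert fits_build_volume((3.0, 3.0, 1.0), (6.0, 6.0, 6.0))
```

Routing logic could use such a check to pick the smallest adequate printer for each order.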
- Users of the system may take a number of roles. Some users may be administrators, some may be the intended wearer of an end, 3-D printed product, some may facilitate obtaining input data for the system, and some may be agents working on behalf of any of the user types previously mentioned.
- FIG. 2 is a flowchart illustrating a process for generating custom 3D printed wearables.
- The mobile device accepts input from a user through the user interface concerning the selection of the type of wearable the user wants to purchase.
- The mobile device uses a mobile application, or an application program interface (“API”), that includes an appropriate user interface and enables communication between the mobile device and external web servers.
- Wearable examples previously mentioned include shoe insoles, bracelets, rings, bras, and gauge earrings.
- The product selection may be drilled down further into subclasses of wearables.
- For shoe insoles, for example, there can be dress shoe, athletic shoe, walking shoe, and other suitable insoles known in the art. Each subclass of wearable can have construction variations.
- Some embodiments include social features by which the user is enabled to post the results of a 3D wearable customization process to a social network.
- The mobile device provides instructions to the user to operate the camera to capture images of the user (or, more precisely, a body part of the user) in a manner that collects the data necessary to provide a customized wearable for that user.
- The mobile device transmits the collected image data to the processing server 24.
- The processing server performs computer vision operations on the image data to determine the size, shape, and curvature of the user (or body part of the user), where applicable to the chosen product type.
- The server application software obtains size and curvature specifications from the computer vision process, applies those specifications, and completes a tessellation model.
- The process of applying specifications from computer vision involves the server application software altering a predetermined set of vertices on the premade tessellation model kit that most closely resembles the wearable the user wants to purchase.
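As an illustrative sketch of altering a template's vertices to match measured dimensions (not the patent's actual implementation; the template dimensions and vertex layout below are hypothetical), the simplest form is anisotropic scaling:

```python
# Hypothetical sketch: scale a premade insole template's (x, y, z)
# vertices so its length and width match the measured foot.

def scale_template(vertices, template_len, template_wid, foot_len, foot_wid):
    sx = foot_len / template_len    # length scale factor
    sy = foot_wid / template_wid    # width scale factor
    # z (arch height) is left untouched here; arch curvature would be
    # applied separately from the plotted curve points.
    return [(x * sx, y * sy, z) for x, y, z in vertices]

# A 10-inch-long, 3.5-inch-wide template morphed to an 11 x 3.85 inch foot.
template = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (10.0, 3.5, 0.2)]
scaled = scale_template(template, 10.0, 3.5, 11.0, 3.85)
```

A production system would instead move only the predetermined set of vertices (e.g., along the arch) while leaving the rest of the kit's geometry intact.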
- Details relating to the particular chosen wearable type may be applied to the tessellation model kit as applicable (e.g., textures or shape modifications pertaining to a subclass of wearable).
- In some embodiments, the tessellation file is generated on the fly from the input provided to the system.
- In such embodiments, the tessellation file is generated without premade input, through an image processing, computer vision, and machine learning process.
- In step 212, the processing server forwards the customized tessellation model to the 3D printer.
- In step 214, the 3D printer prints the customized wearable based on the customized tessellation model.
- In step 216, the wearable is delivered to the user. Delivery to the user depends on a number of factors, such as the source of the order and the destination of the order. The system can route orders to in-house or third-party manufacturers located closer to the user (geographically) in order to facilitate the delivery process.
- FIG. 3 is a flowchart illustrating a process for acquiring images of the user and other data to be used in creating a customized 3D printed wearable.
- Upon initiation of the client application software in the mobile device, and in a manner similar to step 202 of FIG. 2, the mobile device prompts for and receives a user selection of wearable type and subclass.
- The client application software loads (e.g., from local or remote memory) the instructions for the selected wearable type and subclass, which are used to properly direct the user to capture image data of the user's body part or parts as applicable to the wearable type.
- The loaded instructions are provided to the user via the mobile device's user interface (e.g., via a touchscreen and/or audio speaker) to facilitate image data capture.
- The body part imaging may involve acquiring multiple images. The images required may depend on the kind of mobile device used and/or the wearable type selected.
- The instructions provided to the user can include details such as the orientation of the camera and the objects/body parts that should be in the frame of the camera's viewfinder.
- The mobile device camera captures body part image data.
- The body part image data may come in any of various formats, such as 2D images, 3D images, video clips, and body scanner imaging data.
- The body part image data may also come from a third-party apparatus (such as a body scanner at a doctor's office) or from a user uploading images via a website.
- The client application software performs validation on the captured image.
- In some embodiments, the validation is performed by the server application software.
- The validation can use object and pixel recognition software incorporated into either the client or server application software to determine whether or not a given image is acceptable. For example, if the expected image is of the user's foot in a particular orientation, an image of a dog is recognized as unacceptable based on expected variations in pixel color.
- If the image is unacceptable, the current image data is deleted or archived, and the user is instructed again to capture the correct image data.
- Where the image data is acceptable, the process moves on to step 314.
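One crude way to implement such a plausibility check, sketched here with a broad skin-tone heuristic rather than trained object recognition (the RGB thresholds and minimum fraction are illustrative assumptions, not from the patent):

```python
def plausibly_skin(image_rgb, min_fraction=0.2):
    """Crude acceptability check: accept an image only if a minimum
    fraction of its pixels falls in a broad skin-tone RGB range.
    A real validator would use trained object/pixel recognition."""
    def is_skin(r, g, b):
        # A common coarse skin-tone heuristic: warm, red-dominant pixels.
        return r > 95 and g > 40 and b > 20 and r > g and r > b

    pixels = [p for row in image_rgb for p in row]
    hits = sum(1 for p in pixels if is_skin(*p))
    return hits / len(pixels) >= min_fraction

foot_like = [[(200, 150, 120)] * 4 for _ in range(4)]  # skin-toned patch
grey_like = [[(90, 90, 90)] * 4 for _ in range(4)]     # no skin tones
print(plausibly_skin(foot_like), plausibly_skin(grey_like))  # → True False
```

In practice this would only be a first-pass filter ahead of shape and orientation checks.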
- In step 314, the application software determines whether more images are required. If so, the instructions for the next image are presented, and the user is once again expected to orient the mobile device correctly to capture an acceptable image. Where the application has all the image data required, the process continues to step 316.
- The client application software accepts other user customizations that are unconnected to the size, curvature, or shape of the wearable. Examples of these other customizations include color, material, and branding.
- The collected wearable data is transmitted by the mobile device to the processing server.
- FIG. 4 is a flowchart illustrating a process by which the mobile device interacts with the user to acquire images of the user.
- Particular image data is used, and multiple images may be requested.
- For example, five photos of image data may be requested for each pair of insoles to be fabricated (e.g., two images of the top of each of the user's feet, two of the inner side of each foot, and a background image of the space behind the side images, without the user's foot).
- In some cases, only three images are used: the system does not take the images of a foot that will not serve to model a custom insole.
- The mobile device provides the user with instructions for the top-down views.
- The instructions include a reference object.
- In some embodiments, the reference object is a piece of standard-sized paper, such as letter size (e.g., 8.5 × 11 inch) or A4 size. Because such paper has well-known dimensions and is commonly available in almost every home, it can be used as a convenient reference object.
- The application software can determine automatically whether the paper is letter size, legal size, A4 size, or another suitable standardized size known in the art. Based on the style of the paper, the application software has dimensions of known size within the frame of the image. In other embodiments, the user may be asked to indicate the chosen paper size via the user interface (e.g., letter size or A4).
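Determining the paper size automatically can be done from the detected sheet's aspect ratio, after which the sheet yields a pixels-per-inch scale. A hedged sketch (it assumes the sheet's pixel bounding box has already been detected, and that the sheet is viewed roughly head-on):

```python
# Standard sheet dimensions in inches (short side, long side).
PAPER_SIZES_IN = {"letter": (8.5, 11.0), "legal": (8.5, 14.0),
                  "a4": (8.27, 11.69)}

def classify_paper(width_px, height_px):
    """Return (size name, pixels per inch) by matching the sheet's
    measured aspect ratio to the closest standard paper size."""
    ratio = max(width_px, height_px) / min(width_px, height_px)
    name = min(PAPER_SIZES_IN,
               key=lambda k: abs(PAPER_SIZES_IN[k][1] / PAPER_SIZES_IN[k][0]
                                 - ratio))
    short_side_in = PAPER_SIZES_IN[name][0]
    return name, min(width_px, height_px) / short_side_in

# A letter sheet imaged at 100 pixels per inch.
name, ppi = classify_paper(850, 1100)
print(name, ppi)  # → letter 100.0
```

With the pixels-per-inch scale in hand, any pixel distance in the same image plane converts directly to inches.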
- The instructions for the top-down image direct the user to find open floor space on a hard, flat surface (such as wood or tile) in order to avoid warping the paper and thereby causing errors in the predetermined sizes.
- The user is instructed to place the paper flush against a wall, stand on the paper, and aim the mobile device downward toward the top of the user's foot.
- The mobile device user interface includes a level or orientation instruction driven by an accelerometer or gyroscope onboard the mobile device. The level shows the user the acceptable angle at which image data is captured.
- In some embodiments, no reference object is necessary.
- Parallax distance measurement between two photographs may be used to determine a known distance and therefore calculate sizes of the body part.
- Other sizes within the image, such as the shapes of body parts, may be calculated with mathematical techniques known in the art.
- In some embodiments, the method is performed with a video clip instead. While the video clip is captured, an inertial measurement unit (“IMU”) onboard the mobile device tracks the movement of the device relative to a first location. Time stamps between the video clip and the IMU tracking are matched up to identify single frames as static images; the parallax angle between the frames is then solvable, and the distances to objects in the images are identifiable.
- In some embodiments, the video clip is an uninterrupted clip. The uninterrupted clip may pass around the body part, capturing image data.
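The parallax computation described above reduces to triangulation: a known baseline between two camera positions (from the IMU) and a bearing to the same point from each position. A minimal sketch, assuming the bearings are measured from the baseline (this framing is an illustrative simplification of the method):

```python
import math

def distance_from_parallax(baseline, angle1_deg, angle2_deg):
    """Triangulate the distance from the first camera position to a
    point, given the baseline between two camera positions (e.g., from
    IMU tracking) and the bearing to the point from each position,
    measured from the baseline.  Units follow the baseline's units."""
    a1 = math.radians(angle1_deg)
    a2 = math.radians(angle2_deg)
    parallax = math.pi - a1 - a2        # third angle of the triangle
    # Law of sines: the side opposite a2 (camera1-to-point) relates to
    # the baseline, which is opposite the parallax angle.
    return baseline * math.sin(a2) / math.sin(parallax)

# Equilateral check: 60-degree bearings from a 10-inch baseline put the
# point 10 inches from either camera position.
d = distance_from_parallax(10.0, 60.0, 60.0)
```

A real pipeline would derive the bearings from pixel coordinates and the camera's focal length rather than receive them directly.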
- The mobile device captures images of the user's foot from the top down. Reference is made to the use of a single foot; in practice, this process is repeated for each foot for which the user wishes to purchase an insole. Later, during image processing, computer vision, and machine learning operations on the processing server, the top-down images are used to determine the length and width of the foot (at more than one location). Example locations for determining length and width include heel to big toe, heel to little toe, joint of the big toe horizontally across, and the distance from either side of the first to fifth metatarsal bones. An additional detail collected from the top-down view is the skin tone of the user's foot.
- The mobile application or API provides the user with directions to collect image data for the inner sides of the user's foot. This image is later used to process the curvature of the foot arch.
- The mobile application or API instructs the user to place the mobile device up against a wall and then place a foot into a shaded region of the viewfinder.
- Based upon predetermined specifications of the model of mobile device being used, and the orientation of the mobile device (indicated by onboard sensors), the application knows the height of the camera from the floor. Using a known model of mobile device provides a known or expected height for the camera lens.
- The mobile device captures images of the inner side of the user's foot. Later, during computer vision operations on the processing server, the inner-side images are mapped for the curvature of the user's foot arch. Using pixel color differences between the background and the foot, the computer vision process identifies a number of points (e.g., 100) from the beginning of the arch to the end. In some embodiments, the server application software uses the skin tone captured from the top-down images to aid the computer vision process in identifying the foot against the background.
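Once foot pixels are distinguished from the background, plotting points along the arch can be as simple as taking the topmost foot pixel in each image column. A toy sketch on a binary mask (a real image would first be segmented, e.g., by skin tone, before this step):

```python
def arch_curve(mask):
    """Given a binary foot mask (list of rows, 1 = foot pixel, row 0 at
    the top of the image), return one (x, y) point per column: the
    topmost foot pixel, tracing the upper contour (the arch line)."""
    points = []
    for x in range(len(mask[0])):
        column = [mask[y][x] for y in range(len(mask))]
        if 1 in column:
            points.append((x, column.index(1)))
    return points

# Toy mask of a foot's side profile; the dip in the middle is the arch.
mask = [
    [0, 0, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [1, 1, 1, 1, 1],
]
print(arch_curve(mask))  # → [(0, 2), (1, 1), (2, 0), (3, 1), (4, 2)]
```

Sampling every column of a full-resolution image easily yields the roughly 100 contour points mentioned above.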
- In some embodiments, additional images of the base of the user's foot are also taken.
- The server application software uses these photos to determine the depth of the user's foot arch. Without the base image data, the depth of the foot arch is estimated based on the height of the arch as derived from the inner-side photos.
- The mobile device provides instructions to take an image matching the inner-side foot photos, only without a foot.
- This image aids the computer vision process in differentiating between the background of the image and the foot in the prior images.
- The difference between the inner-side images and the background image should be only the presence of the foot; thus, anything appearing in both images is not the user's foot.
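This is classic background differencing; a toy grayscale sketch (the threshold value is an illustrative assumption):

```python
def foreground_mask(frame, background, threshold=30):
    """Binary mask of pixels that differ from the empty-background shot
    by more than `threshold` (grayscale values); those pixels are taken
    to be the foot, and anything shared with the background is not."""
    return [[1 if abs(f - b) > threshold else 0
             for f, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

background = [[50, 50, 50], [50, 50, 50]]   # empty scene
frame      = [[50, 200, 50], [50, 200, 200]]  # same scene with the foot
print(foreground_mask(frame, background))  # → [[0, 1, 0], [0, 1, 1]]
```

The resulting mask is exactly the kind of input the arch-tracing step consumes.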
- The description of FIG. 4 concerning a user's foot is intended to be illustrative. Many different body parts can be imaged with similar sets of photographs/media, which may vary in angle and/or number based on the body part.
- FIG. 5 is a flowchart illustrating a process for performing computer vision on collected user images in order to generate size and curvature specifications.
- FIG. 5 is directed to the example of a foot, though other body parts work similarly. The curves of each body part vary; the foot in this example represents a complex, curved body structure.
- The steps of FIG. 5 are performed by the server application software.
- The processing server receives image data from the mobile device. Once the data is received, in steps 504 and 506, the processing server performs computer vision operations on the acquired image data to determine size and curvature specifications for the user's applicable body part.
- the server application software analyzes the image data to determine distances between known points or objects on the subject's body part.
- Example distances include heel to big toe, heel to little toe, joint of big toe horizontally across, and distance from either side of the first to fifth metatarsal bones.
- This process entails using predetermined or calculable distances based on a reference object or calculated distances with knowledge of camera movement to provide a known distance (and angle).
- the reference object can be a piece of standard size paper (such as 8.5′′ ⁇ 11′′), as mentioned above.
- the application software uses known distances to calculate unknown distances associated with the user's body part based on the image.
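- The known-distance calculation above can be sketched as follows. This is a minimal illustration rather than the disclosed implementation; the function names and sample pixel values are hypothetical, and the 8.5-inch figure is the letter-paper width mentioned above.

```python
def pixels_per_inch(ref_pixel_width, ref_real_width_in=8.5):
    """Scale factor derived from a reference object of known size,
    e.g. the 8.5-inch edge of a sheet of letter paper in the image."""
    return ref_pixel_width / ref_real_width_in

def real_distance(p1, p2, scale):
    """Real-world distance (inches) between two pixel coordinates."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    return ((dx * dx + dy * dy) ** 0.5) / scale

# If the paper edge spans 1700 px, the scale is 200 px per inch, so a
# heel-to-big-toe span of 2000 px corresponds to a 10-inch foot length.
scale = pixels_per_inch(1700.0)
print(real_distance((100, 120), (100, 2120), scale))  # 10.0
```

Once the scale factor is known, any pixel distance in the same image plane can be converted to a real-world measurement the same way.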
- the processing server analyzes the image data for body part curvature.
- the computer vision process seeks an expected curve associated with the body part and with the type of wearable selected. Once the curve is found, in step 508 , points are plotted along the curve in a coordinate graph (see FIG. 6 ). As shown in FIG. 6 , the coordinate graph 50 includes X,Y locations along the curve in a collection of points 52 . Taken together, the collection of points 52 models the curvature of the body part (here, the arch of a foot).
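- The collection of points 52 can be turned into a usable curvature model by fitting a smooth curve through the plotted X,Y locations. The sketch below uses hypothetical arch samples in inches; a least-squares polynomial fit is one plausible approach, not necessarily the one used by the server application software.

```python
import numpy as np

# Hypothetical points sampled along the detected arch curve (inches).
points = np.array([(0.0, 0.0), (1.0, 0.45), (2.0, 0.72),
                   (3.0, 0.75), (4.0, 0.5), (5.0, 0.0)])

# Fit a smooth polynomial through the plotted points so the curvature
# can be evaluated anywhere along the arch, not just at the samples.
coeffs = np.polyfit(points[:, 0], points[:, 1], deg=2)
curve = np.poly1d(coeffs)

# Arch height estimate: the fitted curve's maximum over the span.
xs = np.linspace(0.0, 5.0, 101)
arch_height = float(curve(xs).max())
print(round(arch_height, 2))
```

A higher-degree fit or a spline could capture asymmetric arches; the principle of converting discrete plotted points into a continuous curve is the same.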
- the processing server packages the data collected by the computer vision process into one or more files for inspection.
- an administrative user conducts an inspection of the output data from the computer vision process in relation to the acquired images. If there are obvious errors in the data (e.g., the curvature graph is of a shape clearly inconsistent with the curvature of the applicable body part), the generated data is deemed to have failed inspection and can be rejected.
- a user or an administrator may perform a manual edit of the output data from computer vision reviewed image data.
- the system transmits a copy of the original images to the user/administrator for editing.
- the user edits the points and then transmits the edited images.
- the user only provides a reduced selection of points rather than an entire curvature.
- the image data is processed through an enhancement process to further improve the distinction between foot and background.
- the enhancement process refers to image retouching to improve line clarity by editing sharpness, resolution, or selective color, using individual-pixel, pixel-group, or vector image edits.
- the current computer vision reviewed images are discarded, and the computer vision process is run again.
- the processing server receives updated curvature and/or size specifications.
- step 520 final copies of the size and curvature specifications of the user's subject body part are forwarded to a customization engine of the server application software on the processing server.
- FIG. 7 is a flowchart illustrating a process performed by the customization engine for customizing tessellation models.
- the user's selected wearable type and subclass are used to narrow the selection of tessellation model kits from a large number of provided options to a select group. For example, if the wearable type is a shoe insole, all other wearable type tessellation model kits are eliminated from the given printing task process. Then, if the shoe insole is further of subclass “running shoe insole,” other shoe insole tessellation model kits are eliminated.
- the remaining set of premade tessellation model kits may be those associated with different sizes of shoe, men's shoes or women's shoes, or a limited variation of generic foot types (e.g., normal, narrow, flat, high arch, etc.).
- the narrowing of tessellation model kits reduces the amount of processing that needs to occur when customization is applied.
- the computer vision data including the size and curvature specifications, is imported into the customization engine.
- the vision data is roughly categorized, thereby eliminating irrelevant tessellation model kits. Remaining is a single, determined model kit, which most closely resembles the size and curvature specifications.
- the tessellation model is built from the ground up on the fly based on the observations in image processing, computer vision and machine learning.
- step 706 the size and curvature specifications are applied to the determined model kit. In doing so, predetermined vertices of the determined model kit are altered using graph coordinates obtained from the computer vision operations.
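- Altering the predetermined vertices of a determined model kit with graph coordinates can be sketched as follows. The kit layout, coordinate units, and nearest-point matching rule are assumptions for illustration only.

```python
def apply_curvature(kit_vertices, curve_points):
    """Raise the height (z) of predetermined model-kit vertices to match
    the arch curve measured by computer vision. Each kit vertex is
    (x, y, z); each curve point is (x, measured height)."""
    adjusted = []
    for x, y, z in kit_vertices:
        # Use the nearest measured curve point along the foot's length.
        cx, ch = min(curve_points, key=lambda p: abs(p[0] - x))
        adjusted.append((x, y, z + ch))
    return adjusted

# A flat three-vertex strip of a hypothetical insole kit ...
kit = [(0.0, 0.0, 0.25), (2.5, 0.0, 0.25), (5.0, 0.0, 0.25)]
# ... and the measured arch heights from the coordinate graph.
curve = [(0.0, 0.0), (2.5, 0.5), (5.0, 0.0)]
print(apply_curvature(kit, curve))
# [(0.0, 0.0, 0.25), (2.5, 0.0, 0.75), (5.0, 0.0, 0.25)]
```

A production system would interpolate between curve points rather than snap to the nearest one, but the idea of shifting predetermined vertices by measured amounts is the same.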
- step 708 other adjustments can be made to prepare the tessellation model for printing.
- the other adjustments can be either ornamental and/or functional but are generally unconnected to the measurements of the user obtained through computer vision operations.
- One possible technique for adjusting a tessellation model uses so-called “negative normals.”
- FIG. 8 illustrates a technique to create or adjust a complex tessellation file that can be used in generating 3D printed wearables as described herein.
- the technique illustrated in FIG. 8 is referred to as a “negative normal” technique.
- a 3-D object, model, or lattice is a hollow digital object.
- the object is constructed from a lattice of planar polygons, e.g., triangles or quadrilaterals.
- Each polygon 802 that makes up the lattice of the 3-D object has an outer surface and an inner surface.
- normal in this context refers to a normal vector, i.e., a vector that is perpendicular to the surface of a polygon.
- positive normals point outward from the surface polygons and negative normals point inward from the surface polygons; in a negative object, this orientation is reversed.
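- The orientation of a polygon's normal follows from its vertex winding order, which can be sketched with a simple cross product. This is standard mesh geometry offered for illustration, not code from the disclosed system.

```python
def sub(p, q):
    """Component-wise vector difference p - q."""
    return (p[0] - q[0], p[1] - q[1], p[2] - q[2])

def cross(u, v):
    """Cross product of two 3-D vectors."""
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def triangle_normal(a, b, c):
    """Normal of a lattice polygon via the right-hand rule: with the
    vertices wound counter-clockwise (seen from outside), the normal
    points outward; reversing the winding flips it inward."""
    return cross(sub(b, a), sub(c, a))

a, b, c = (0, 0, 0), (1, 0, 0), (0, 1, 0)
print(triangle_normal(a, b, c))  # (0, 0, 1): outward (positive) normal
print(triangle_normal(a, c, b))  # (0, 0, -1): inward (negative) normal
```

Flipping the winding order of every polygon in an object is thus enough to turn a positive object into a negative one.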
- a negative object is created.
- Where a negative object intersects with and extends from a positive object, a negative normal is created.
- a negative normal is treated as an error by conventional 3D printer software, which results in the printer not printing the portions of the positive object that have negative normals.
- negative normals can be used advantageously to create a 3D printable object with complex geometry.
- FIG. 8 illustrates progressions of a positive object (shoe insole) with a negative object applied as a negative normal.
- the top-most 3-D object 54 is a simple shoe insole positive object, in which the polygons 55 that make up the positive shoe insole are visible.
- the second object from the top 56 is a smoothed version of the positive insole object.
- the third object from the top 58 illustrates the use of a negative normal.
- the positive shoe insole 60 intersects with a negative ellipsoid prism 62 .
- the negative ellipsoid 62 includes a horizontal upper surface 64 and an angled lower surface 66 which contacts the upper surface of the positive shoe insole 60 .
- the negative ellipsoid prism 62 further includes a number of vertically oriented negative cylindrical tubes 68 . In the empty space where the tubes 68 are not present, the positive shoe insole is unaffected. Where the negative ellipsoid prism intersects the positive shoe insole, the top surface of the positive shoe insole 60 is reduced in height.
- the result of the negative normal technique is shown.
- the positive shoe insole 60 has a number of bumps 72 on the topmost surface caused by the removal of the surrounding polygon lattice.
- the negative normal technique does not alter the digital rendering, but only the 3-D printed object. This is because the digital rendering is a hollow object, whereas the 3-D printed object is solid. Hence, stating that the negative normal “removes” material from the positive object is a misnomer, strictly speaking, though it may be a convenient way of thinking of the technique. More precisely, the material is never applied to the 3D printed object in the first place.
- the bottom-most object 74 in FIG. 8 illustrates the results of the negative normal technique applied a number of times.
- the many bumps 72 of the final shoe insole would otherwise have to be generated individually.
- the amount of processing resources used to calculate a bump constructed of many positive polygonal planes of varying size is substantially greater than that required to create a simple negative shape, which has comparatively fewer polygonal planes.
- FIG. 9 is a block diagram illustrating a distributed system for the generation of customized 3D printed wearables according to the technique introduced here.
- Embodiments of the system and method herein may be distributed.
- the hardware involved may communicate over a network operated by different parties, be directly connected operating by the same party, or any combination thereof.
- the 3D printer 26 is at a network location and outsourced to a contractor for 3D printing. This contrasts with FIG. 1 , in which the 3D printer 26 is directly connected with the backend processing server 24 . Accordingly, instructions are sent to the 3D printer 26 over a network.
- the manual inspection computer 28 may be separate from the backend processing server 24 , in both a physical sense and as an acting entity.
- the manual inspection computer 28 may be operated by a doctor of a patient who owns the mobile device 22 .
- the manual inspection computer 28 may be operated by the same corporate entity that operates the processing server 24 .
- both the mobile device 22 and the manual inspection computer 28 are operated by a doctor of a patient for whom body images/videos are taken.
- embodiments of the system introduced here include a software interface, such as an application program interface (“API”) 54 , which is used across the distributed hardware to coordinate inputs and outputs in order to arrive at a 3D-printed and delivered wearable.
- the API 54 is instantiated on a number of the hardware objects of the system, and ultimately references databases 56 on the processing server 24 .
- the database 56 stores body images/videos, associated 3D models of body parts, and 3D models of wearables which match the 3D models of the body parts. Each of these images/models is indexed by a user or order number. Devices which instantiate the API 54 may call up images/videos/materials at various points in the wearable generation process, provide input, and observe the status of said materials. The API 54 is able to provide query-able status updates for a given wearable order. In this way, the wearable generation has a great degree of transparency and modularity.
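- A query-able status update of the kind the API 54 provides might look like the following sketch. The order numbers, field names, and storage layout are purely hypothetical and stand in for the database 56 lookup by order number.

```python
import json

# Hypothetical order records keyed by order number, standing in for
# the database indexed by user or order number.
ORDERS = {
    "A1001": {"stage": "computer_vision", "body_model": None},
    "A1002": {"stage": "printing", "body_model": "foot_v3.stl"},
}

def order_status(order_number):
    """Return the stored generation status for a wearable order."""
    record = ORDERS.get(order_number)
    if record is None:
        return json.dumps({"error": "unknown order"})
    return json.dumps({"order": order_number, **record})

print(order_status("A1002"))
```

Any device instantiating the interface could issue such a query to observe where a given wearable sits in the generation pipeline.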
- FIG. 10 is a flowchart illustrating a process for API access at a number of steps in wearable generation. The process does not necessarily have to be performed by a single entity. Rather, there are a number of starting points and ending points where different entities may participate in the process.
- the flow chart of FIG. 10 shows inputs on the left, outputs on the right, and processes down the middle.
- Each input (“i” steps) may be performed by one of a number of relevant actors. Alternatively, each process step has a result which is passed to the next step. Depending on the step, process steps may take external input, a previous process result as input, or both. Outputs (“o” steps) provide visibility into the process and help provide human actors information to make decisions in future steps, or to provide entertainment value.
- step 1002 the system obtains image or video data of a part of a living body.
- Step 1002 is technically an input as well.
- step 1002 refers to obtaining the video or images through mobile application software 42 , making use of a device camera.
- the mobile application software 42 has an internal (non-API based) connection to the processing server 24 .
- the process begins at step 1004 i .
- the API 54 receives body data, which is subsequently provided to the processing server 24 .
- the API 54 is installed as an extension within other non-operating system software.
- the API 54 is installed on an imaging device or on a computer which contains data collected from an imaging device. This data may vary in format, number of dimensions, and content.
- the processing server 24 uses the input (for 1002 , 1004 i , or both) to generate a digital body model.
- the digital body part model may come in a number of formats.
- the digital body model is a 3D model; in others the digital model is a series of figures which model size, shape, and character; in still others the digital model is a number of 2D images including properties and metadata.
- the creation of the digital body model occurs first through format recognition and then computer vision.
- the processing server 24 first identifies the format of the input received in order to determine how said input is processed. Once the format is determined, the method of computer vision is determined. The result of the computer vision process is the digital body model. In some embodiments, such as those already including a 3D body model, the computer vision process is skipped.
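- The format-recognition dispatch described above can be sketched as follows. The file extensions and branch names are illustrative assumptions; the actual range of handled formats is broader, as described elsewhere in this disclosure.

```python
def recognize_format(body_data):
    """Identify the input type so the correct processing path is chosen,
    mirroring the format-recognition step. The detection rules here are
    illustrative only (real inputs include infrared, ultrasound, etc.)."""
    name = body_data["filename"].lower()
    if name.endswith((".mp4", ".mov")):
        return "video"
    if name.endswith((".stl", ".obj")):
        return "scanner_model"  # already a 3D model: skip computer vision
    return "photograph"

def build_body_model(body_data):
    """Dispatch on recognized format to produce a digital body model."""
    fmt = recognize_format(body_data)
    if fmt == "scanner_model":
        return body_data           # pass an existing 3D model through
    # otherwise run the applicable computer vision method ...
    return {"model_from": fmt}

print(recognize_format({"filename": "scan.STL"}))  # scanner_model
```

The point of the dispatch is that every branch ends in the same artifact, a digital body model, regardless of how the input arrived.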
- step 1004 o the API 54 exposes the digital body model. Accordingly, users, doctors of users, or other suitable persons relevant to the creation of a wearable are able to view the data on a number of hardware platforms.
- the digital body model is indexed by the user or order number, and thus searching the database 56 via the API 54 may return the digital body model output.
- step 1006 i relevant persons are enabled either to provide their own body models from 3D imaging devices and/or previously measured figures through the API 54 , or to cause the API 54 to provide messaging to the processing server 24 accepting the body model input of step 1004 .
- the body models would come from other existing scanners, or models that were generated by the user.
- the API 54 accepts the pre-generated models as input and moves that input forward in the process as if the user-submitted body model were the result of step 1004 .
- step 1006 the processing server 24 generates a digital wearable 3D model based on the digital body model.
- the generation of the 3D model of the wearable proceeds as methods taught herein have disclosed, or by another suitable method known in the art.
- step 1006 o the API 54 exposes the 3D wearable model. Accordingly, users, doctors of users, or other suitable persons relevant to the creation of a wearable are able to view the 3D wearable model on a number of hardware platforms.
- the 3D wearable model is indexed by the user or order number, and thus searching the database 56 via the API 54 may return the 3D wearable model output.
- step 1008 i relevant persons are enabled through a user interface to cause the API 54 to provide messaging to the processing server 24 accepting the 3D wearable model input of step 1006 .
- the output of step 1006 o provides the basis for the user's decision.
- the 3D wearable model is transmitted to a 3D printer 26 for printing.
- 3D printing occurs through an additive process, though one skilled in the art would appreciate that 3D printing may also occur through subtractive processes.
- step 1010 where there are a number of separately printed components of the wearable, these components are assembled.
- the 3D printer 26 is operated by an external printing entity, multiple components are sent to a central location/entity for assembly.
- the system routes printing and/or assembly to remote locations based upon delivery address. These steps may be performed close to the delivery location where assets are available to provide the service.
- the processing server 24 uploads an assembly process video or time lapse to a host location.
- regular image captures or videos are taken and indexed by user or order number. These image captures or videos may be assembled into a wearable generation video and posted on a web site hosted by the processing server 24 , or into an external video hosting service (such as YouTube).
- step 1012 i if an external printer still holds the printed wearable, the wearable is shipped to the central customer management location.
- the central customer management location ships the printed wearable to a user and/or doctor.
- elements of the order are provided to a local asset for delivery to the user.
- FIG. 11 is a flowchart illustrating an embodiment for handling a number of input body data types. While many of these steps refer to the processing server 24 as the actor, multiple sources may actually perform these steps as illustrated by FIG. 10 .
- body data is obtained. As described above, the character of the body data may vary. Examples include: 2D/3D images/video with various levels of penetration and light exposure/usage (e.g. conventional photography/video; infrared imaging/video data; confocal microscopy data; lightfield imaging data; or ultrasound imaging data). This data may come from a number of sources.
- the body data is transmitted to the processing server 24 .
- the processing server 24 categorizes the data as video or still. Where the data is a still, in step 1108 the data is categorized as photograph(s) or scanner models.
- a scanner model refers to a 3D model generated by a body scanner.
- step 1110 where the body data is a video, the processing server 24 parses frames (stills) of the video and matches to model still images. The matching is performed with pixel and object comparisons. Objects are identified in the frames based on pixels. Pixels or objects are matched to pixels or objects in the model images.
- step 1112 the processing server 24 extracts the frames that best match the model images from the video as stills.
- step 1114 the processing server 24 converts the frames into image files.
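- The frame-to-model matching of steps 1110 - 1112 can be sketched with a simple pixel comparison. Mean absolute pixel difference is used here as a stand-in for the pixel and object comparisons described above; the tiny 4×4 grayscale frames are hypothetical.

```python
import numpy as np

def best_matching_frame(frames, model_image):
    """Pick the video frame closest to a model still image by mean
    absolute pixel difference -- a simplified stand-in for the pixel
    and object matching described in the text."""
    scores = [np.abs(f.astype(int) - model_image.astype(int)).mean()
              for f in frames]
    return int(np.argmin(scores))

# A mid-gray model still and three candidate frames from the video.
model = np.full((4, 4), 128, dtype=np.uint8)
frames = [np.zeros((4, 4), np.uint8),        # all black
          np.full((4, 4), 130, np.uint8),    # nearly matching
          np.full((4, 4), 255, np.uint8)]    # all white
print(best_matching_frame(frames, model))    # 1
```

The selected frame index is what would then be extracted and converted into an image file for the computer vision stage.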
- step 1116 where a still photograph is detected, the photograph is converted into an image format that computer vision may be performed on. This is relevant in the case of, for example, images that are not taken in the visible spectrum (such as infrared or ultrasound images).
- objects are identified from the pixels in the image file.
- images are matched to model images similarly to in step 1110 . However, depending on where the body data photographs originated from, the photographs may already match model images (e.g., where photo capture instructions are followed prior to transmitting the body data).
- step 1120 regardless of the original body data input, the current product of the process will be in the same image data format after the conversions in steps 1114 and 1118 . Accordingly, the processing server 24 runs the computer vision process on the images.
- the computer vision process identifies characteristics of the photographed/recorded body part.
- step 1122 the processing server 24 generates a digital body model.
- a digital model may take a number of forms, one of which originates from a body scanner in step 1108 without additional modification. Other digital body models are descriptive numbers in particular parameter fields.
- the processing server 24 generates a wearable tessellation file that corresponds to the digital body model.
- step 1126 a 3D printer prints that tessellation file.
- FIG. 12 is a flowchart illustrating wearable generation including concurrent computer vision and machine learning processes.
- FIG. 12 is a detailed look at step 1004 of FIG. 10 and some surrounding steps.
- the steps of FIG. 12 are generally performed by the processing power available within the entire system. However, the processing power may be distributed across a number of devices and servers. For example, some steps may be performed by a mobile device such as a smart phone while others are performed by a cloud server.
- step 1200 input body part image data is provided to the system.
- the input data may be provided in any of various ways (e.g. through direct upload from smartphone applications, web uploads, API uploads, partner application uploads, etc.).
- step 1202 the uploaded body part image data is processed to obtain a uniform image format for further processing.
- FIG. 11 details portions of the image pre-processing.
- the method proceeds to steps 1204 and 1206 .
- the system attempts to detect a body part in the subject images. This is performed both through computer vision and machine learning. Prior observations and models (e.g. hidden Markov model) influence the machine learning operation. The detection of a particular body part enables the system to determine the type of product that is most relevant. In some embodiments, the system performs the body part identification initially to enable the user to select a product type choice (e.g. footwear insole, bra, earphones, gloves, etc.).
- step 1208 the system checks whether a body part was, in fact, detected. Where a body part was detected, the method proceeds, whereas where a body part was not detected, the method skips to observational steps to update the machine learning models.
- the user interface will additionally signal the user, and the user may initiate the method again from the beginning.
- steps 1204 - 1208 are processed by local mobile devices owned by a user. Because these steps are often performed before the user selects a product type (the body part is identified first, and the product is chosen after body part identification), some efficiency may be gained by using local processing power as opposed to transmitting the data prior to making the body part identification.
- the product type selection user experience enables the system to “hide” the data transmission from the mobile device to the cloud server. Once a body part is identified, the user begins selecting a product style, types and sub-types. While the user is engaged in these operations, the body part image data is uploaded from the mobile device to the cloud server, and the user experience is not stalled.
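- The “hidden” transmission can be sketched as a background thread started as soon as the body part is identified, so the upload overlaps the user's product selection. The sleep duration and function names are illustrative assumptions, not the disclosed implementation.

```python
import queue
import threading
import time

def upload_images(image_data, done: queue.Queue):
    """Simulated upload from the mobile device to the cloud server."""
    time.sleep(0.1)  # stands in for network transfer time
    done.put(f"uploaded {len(image_data)} images")

done = queue.Queue()
t = threading.Thread(target=upload_images, args=([b"img1", b"img2"], done))
t.start()                             # upload runs in the background ...
user_choice = "running shoe insole"   # ... while the user picks a product
t.join()
print(done.get())                     # uploaded 2 images
```

Because the transfer and the selection run concurrently, the user experience is not stalled waiting on the network.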
- the system performs image segmentation using computer vision and machine learning.
- Image segmentation is used to identify particular elements in an image such as differentiating the body part from the background, as well as differentiating different curves and surfaces of the body part.
- the system identifies regions of the body part which are relevant for curvature of 3-D models (such as tessellation files).
- the system performs point extraction using computer vision and machine learning. Prior observations and models (e.g. hidden Markov model) influence the machine learning operation.
- the points extraction process generates a number of points along curves identified through segmentation. Each of these points is assigned a point in either 2-D or 3-D space. Measurements for point coordinates are provided based on distances solved using reference objects, or solved through image analysis.
- step 1220 the extracted data points are assembled into usable data for 3-D model generation or tessellation file generation.
- step 1222 the data points and body part images undergo post-processing.
- step 1224 The system adds the data points and body part image data to the total observations.
- step 1226 the system enables users, or administrators to do an audit review. This step is detailed in above paragraphs. At this point the data points are delivered to model generation and the rest of the 3-D printing process continues separately.
- step 1228 the system reviews and performs a performance assessment of the process.
- step 1230 the machine learning engine of the system updates the observations from the database and the performance assessment. If the process continues, in step 1234 , the machine learning models are updated. The updated machine learning models are recycled into use in steps 1204 , 1210 , and 1216 for subsequent users.
- FIG. 13 is a flowchart illustrating distance measurement in images taken from single lens 2-D cameras.
- Where the mobile device includes a single camera and an inertial measurement unit (IMU), parallax distance measurement between two photographs may be used to determine a known distance and therefore calculate sizes of the body part.
- step 1302 a first image is taken in a first position.
- step 1304 the camera is moved, and the IMU tracks the relative movement from the first position to a second position.
- step 1306 The camera then takes a second image.
- the method may be performed with a video clip as well. While the video clip is captured, the IMU tracks the movement of the mobile device relative to a first location. Time stamps between the video clip and the IMU tracking are matched up to identify single frames as static images. In step 1308 , the system identifies regions of interest in the images. This is explained in FIG. 12 at step 1214 .
- step 1310 given information from the IMU the system calculates the parallax angle between where the first image was captured and the second image.
- step 1312 the system calculates distance to the region or point of interest based on the parallax angles and distances between the first and second position.
- step 1314 the system is able to use geometric math to solve for a number of distances within each image. These distances are used to provide coordinates to a number of points in images, and then later used to develop 3-D models of objects within the images.
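- The parallax calculation of steps 1310 - 1312 can be sketched by triangulation with the law of sines, given the IMU-reported baseline and the two sight-line angles. This is textbook geometry offered for illustration, not the system's actual solver; the sample numbers are hypothetical.

```python
import math

def distance_from_parallax(baseline, angle1, angle2):
    """Range to a point of interest from the first camera position,
    by the law of sines. angle1 and angle2 are the angles (radians)
    between the baseline and the sight lines from each position; the
    remaining angle of the triangle is the parallax angle."""
    parallax = math.pi - angle1 - angle2
    return baseline * math.sin(angle2) / math.sin(parallax)

# Camera moved 0.2 m (reported by the IMU); the point of interest is
# seen at 60 degrees from each end of the baseline, so the parallax
# angle is also 60 degrees (an equilateral triangle).
d = distance_from_parallax(0.2, math.radians(60), math.radians(60))
print(round(d, 3))  # 0.2 -- equilateral, so the range equals the baseline
```

With the range to a point known, pixel offsets in the image can be converted into real-world coordinates for the 3-D model.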
- FIG. 14 is a graphic illustration of a customized computer mouse generated through body scanning.
- the processes taught in this disclosure may be used to generate items interfacing with the contours of a user's body as well.
- One example of such an item is a computer mouse 76 .
- Mouse peripherals are designed to have a significant interface with the human body, specifically, a human hand and fingertips. Where body image data is collected on a user's fingers, the system may generate a computer mouse 76 with matching finger indentations 78 custom printed for a particular user.
- FIG. 15 is a graphic illustration of a customized ear headphone generated through body scanning.
- Another wearable that can be custom designed in the disclosed system is a headphone 80 . The headphone has a speaker enclosure 82 which is custom formed to an ear cavity.
- Using body image data of a person's ear enables custom generation of fitted ear cavity speakers 82 .
- FIG. 16 is a graphic illustration of an assortment of wearables that are generated via body imaging followed by 3-D printing.
- the illustrations in FIG. 16 are intended to be illustrative of 3D-printable wearables that conform to a body part of the wearer. Examples include a bra 84 , a helmet 86 , a brace 88 , or goggles 90 . There are many possible wearables suited for many purposes.
- FIG. 17 is a graphic illustration of customized eye glasses generated through body scanning.
- Another example of a wearable that can be custom designed in the disclosed system is eye glasses.
- a number of segments may be customized to a user's body.
- FIG. 18 is a graphic illustration of a customized brace generated through body scanning.
- Another example of a wearable that can be custom designed in the disclosed system is a brace type orthotic. A number of segments may be customized to a user's body.
Abstract
Disclosed is a platform for generating and delivering 3-D printed wearables. The platform includes scanning, image processing, machine learning, computer vision, and user input to generate a printed wearable. Scanning occurs in a number of ways across a number of devices. The variability of scanning generates a number of scanning output types. Outputs from the scanning process are normalized into a single type during image processing. The computer vision and machine learning portions of the platform use the normalized body scan to develop models that may be used by a 3D printer to generate a wearable customized to the user. The platform further provides opportunities for the user to check the work of the scanning, image processing, computer vision, and machine learning. The user input enables the platform to improve and inform the machine learning aspects.
Description
- This application is a continuation of U.S. patent application Ser. No. 15/390,406, entitled, “Generation of 3D-Printed Custom Wearables,” filed on Dec. 23, 2016, which claims benefit to U.S. Provisional Patent Application No. 62/275,242, entitled “Generation of 3D-Printed Custom Wearables,” filed on Jan. 6, 2016. The contents of the above-identified applications are incorporated by reference in their entirety.
- This disclosure relates to 3-D digital modeling and subsequent 3-D printing. This disclosure more particularly relates to input and output handling of image data to generate 3-D models for printing.
- 3-D printing can be used to create customized items. However, the present cost of ownership of a 3-D printer and the requisite skill required for proficient use is prohibitive for most people. Generation of 3-D models for printing that fit a particular wearer, as opposed to standardized sizing schemes, adds a further complexity. There are a number of ways to capture body data; however, going from a number of disparate data types to a usable 3-D model for 3-D printing in an accessible fashion is also a difficult problem.
-
FIG. 1 is a block diagram illustrating a system for the generation of customized 3D printed wearables; -
FIG. 2 is a flowchart illustrating a process for generating custom 3D printed wearables; -
FIG. 3 is a flowchart illustrating a process for acquiring images of the user and other data to be used in creating a customized 3D printed wearable; -
FIG. 4 is a flowchart illustrating a process by which the mobile device interacts with the user to acquire images of the user; -
FIG. 5 is a flowchart illustrating a process for performing computer vision on collected images of a user; -
FIG. 6 is an illustration of a coordinate graph including a collection of X,Y locations along a body curve; -
FIG. 7 is a flowchart illustrating a process for customizing tessellation models; -
FIG. 8 is an illustration of a negative normal technique to create complex tessellation files; -
FIG. 9 is a block diagram illustrating a distributed system for the generation of customized 3D printed wearables; -
FIG. 10 is a flowchart illustrating API access at a number of steps in wearable generation; -
FIG. 11 is a flowchart illustrating an embodiment for handling a number of input body data types; -
FIG. 12 is a flowchart illustrating wearable generation including simultaneous computer vision and machine learning processes; -
FIG. 13 is a flowchart illustrating distance measurement in images taken from single lens 2-D cameras; -
FIG. 14 is a graphic illustration of a customized computer mouse generated through body scanning; -
FIG. 15 is a graphic illustration of a customized ear headphone generated through body scanning; -
FIG. 16 is a graphic illustration of an assortment of wearables that are generated via body imaging followed by 3-D printing; -
FIG. 17 is a graphic illustration of customized eye glasses generated through body scanning; and -
FIG. 18 is a graphic illustration of a customized brace generated through body scanning. - To generate 3D printed wearable objects (“wearables”) with simple instructions and minimal processing, the technique introduced here enables commonly available mobile devices to be used to image the prospective wearer and enables a processing server or distributed set of servers to use the resulting images to generate a tessellation model. The tessellation model can then be used to generate a 3D printed wearable object (hereinafter simply “wearable”). The term “wearable” refers to articles, adornments, or items designed to be worn by a user, to be incorporated into another item worn by a user, to act as an orthosis for the user, or to interface with the contours of a user's body. An example of a wearable used throughout this disclosure to facilitate description is a shoe insole. A shoe insole is illustrative in that the shape and style one person wants tend to differ greatly from the shape and style another person wants, and customization is an important detail. Nonetheless, the teachings in this disclosure apply similarly to other types of wearables, such as bracelets, rings, bras, helmets, earphones, goggles, support braces (e.g., knee, wrist), gauge earrings, and body-contoured peripherals.
-
FIG. 1 is a block diagram illustrating a system for the generation of customized 3D printed wearables 20. Included in the system is the capability for providing body part input data. Provided as a first example of such a capability in FIG. 1 is a mobile processing device that includes a digital camera and is equipped to communicate over a wireless network, such as a smartphone, tablet computer, networked digital camera, or other suitable known mobile device in the art (hereafter, “mobile device”) 22; a processing server 24; and a 3D printer 26. The system further can include a manual inspection computer 28. - The
mobile device 22 is a device that is capable of capturing and transmitting images over a wireless network, such as the Internet 30. In practice, a number of mobile devices 22 can be used. In some embodiments, mobile device 22 is a handheld device. Examples of mobile device 22 include a smart phone (e.g., Apple iPhone, Samsung Galaxy), a confocal microscopy body scanner, an infrared camera, an ultrasound camera, a digital camera, and a tablet computer (e.g., Apple iPad or Dell Venue 8 7000). The mobile device 22 is a processor enabled device including a camera 34, a network transceiver 36A, a user interface 38A, and digital storage and memory 40A containing client application software 42. - The
camera 34 on the mobile device may be a simple digital camera or a more complex 3D camera, scanning device, or video capture device. Examples of 3D cameras include Intel RealSense cameras or Lytro light field cameras. Further examples of complex cameras may include scanners developed by TOM-CAT Solutions, LLC (the TOM-CAT, or iTOM-CAT), adapted versions of infrared cameras, ultrasound cameras, or adapted versions of intra-oral scanners by 3Shape. - Simple digital cameras (including no sensors beyond 2-D optical) use reference objects of known size to calculate distances within images. Use of a 3D camera may reduce or eliminate the need for a reference object because 3D cameras are capable of calculating distances within a given image without any predetermined sizes/distances in the images.
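The reference-object calculation described above can be sketched as a simple proportion: once a span of known real-world size is located in the image, its pixel length gives a units-per-pixel scale that converts any other pixel measurement. The specific pixel counts below are illustrative assumptions, not values from the disclosure.

```python
def units_per_pixel(ref_span_px: float, ref_span_real: float) -> float:
    """Real-world units per pixel, derived from a reference object of known size."""
    return ref_span_real / ref_span_px

def measure_span(span_px: float, ref_span_px: float, ref_span_real: float) -> float:
    """Estimate the real-world length of a span measured in pixels."""
    return span_px * units_per_pixel(ref_span_px, ref_span_real)

# If the 11-inch edge of a letter-size sheet spans 880 pixels in the image,
# a foot spanning 800 pixels is estimated at 800 * (11 / 880) = 10 inches.
foot_length_in = measure_span(800, ref_span_px=880, ref_span_real=11.0)
```

A 3D camera makes this step unnecessary because it reports distances directly.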
- The mobile device also provides a
user interface 38A that is used in connection with the client application software 42. The client application software 42 provides the user with the ability to select various 3D printed wearable products. The selection of products corresponds with camera instructions for images that the user is to capture. Captured images are delivered over the Internet 30 to the processing server 24. -
Processor 32B operates the processing server 24. The processing server 24 receives image data from the mobile device 22. Using the image data, server application software 44 performs image processing, machine learning and computer vision operations that populate characteristics of the user. The server application software 44 includes computer vision tools 46 to aid in the performance of computer vision operations. Examples of computer vision tools 46 include OpenCV or SimpleCV, though other suitable examples are known in the art and may be programmed to identify pixel variations in digital images. Pixel variation data is implemented as taught herein to produce desired results. - In some embodiments, a user or administrative user may perform manual checks and/or edits to the results of the computer vision operations. The manual checks are performed on the
manual inspection computer 28 or at a terminal that accesses the resources of processing server 24. The processing server 24 includes a number of premade tessellation model kits 48 corresponding to products that the user selects from the client application software 42. Edits may affect both functional and cosmetic details of the wearable; examples include looseness/tightness and high-rise/low-rise fit. Edits are further stored by the processing server 24 as observations to improve machine learning algorithms. - In some embodiments, the
tessellation model kits 48 are used as a starting point from which the processing server 24 applies customizations. Tessellation model kits 48 are a collection of data files that can be used to digitally render an object for 3D printing and to print the object using the 3D printer 26. Common file types of tessellation model kits 48 include .3dm, .3ds, .blend, .bvh, .c4d, .dae, .dds, .dxf, .fbx, .lwo, .lws, .max, .mtl, .obj, .skp, .stl, .tga, or other suitable file types known in the art. The customizations generate a file for use with a 3D printer. The processing server 24 is in communication with the 3D printer 26 in order to print out the user's desired 3D wearable. In some embodiments, tessellation files 48 are generated on the fly from the input provided to the system; in that case, the tessellation file 48 is generated without premade input, through an image processing, computer vision, and machine learning process. - Numerous models of
3D printer 26 may be used by the invented system. 3D printers 26 vary in the size of article they can print. Based on the type of 3D wearable users are selecting, varying sizes of 3D printer 26 are appropriate. In the case where the 3D wearable is a bracelet, for example, a 6 cubic inch printer may be sufficient. When printing shoe insoles, however, a larger printer may be required. A majority of insoles can be printed with a 1 cubic foot printer, though some larger insoles require a larger 3-D printer. - Users of the system may take a number of roles. Some users may be administrators, some may be the intended wearer of an end, 3-D printed product, some may facilitate obtaining input data for the system, and some may be agents working on behalf of any of the user types previously mentioned.
-
FIG. 2 is a flowchart illustrating a process for generating custom 3D printed wearables. In step 202, the mobile device accepts input from a user through the user interface concerning the selection of the type of wearable the user wants to purchase. In some embodiments, the mobile device uses a mobile application or an application program interface (“API”) that includes an appropriate user interface and enables communication between the mobile device and external web servers. Wearable examples previously mentioned include shoe insoles, bracelets, rings, bras, and gauge earrings. In addition to these examples, the product may be drilled down further into subclasses of wearables. Among shoe insoles, for example, there can be dress shoe, athletic shoe, walking shoe, and other suitable insoles known in the art. Each subclass of wearable can have construction variations. - In addition to choosing the wearable type and subclass, the user enters account information such as payment and delivery address information. Some embodiments include social features by which the user is enabled to post the results of a 3D wearable customization process to a social network. - In
step 204, the mobile device provides instructions to the user to operate the camera to capture images of the user (or more precisely, a body part of the user) in a manner which collects the data necessary to provide a customized wearable for that user. In step 206, the mobile device transmits the collected image data to the processing server 24. In step 208, the processing server performs computer vision operations on the image data in order to determine the size, shape, and curvature of the user (or body part of the user), where applicable to the chosen product type. - In
step 210, the server application software obtains size and curvature specifications from the computer vision process, applies those specifications, and completes a tessellation model. In some embodiments, the process of applying specifications from computer vision involves the server application software altering a predetermined set of vertices on a premade tessellation model kit that most closely resembles the wearable the user wants to purchase. In addition, details relating to the particular chosen wearable type may be applied to the tessellation model kit as applicable (e.g., textures or shape modifications pertaining to a subclass of wearable). - In an alternative embodiment, the tessellation file is generated on the fly from the input provided to the system; that is, the tessellation file is generated without premade input, through an image processing, computer vision, and machine learning process. - In
step 212, the processing server forwards the customized tessellation model to the 3D printer. In step 214, the 3D printer prints the customized wearable based on the customized tessellation model. In step 216, the wearable is delivered to the user. Delivery to the user depends on a number of factors such as the source of the order and the destination of the order. The system can route orders to in-house or third-party manufacturers that are located closer to the user (geographic location) in order to facilitate the delivery process. -
FIG. 3 is a flowchart illustrating a process for acquiring images of the user and other data to be used in creating a customized 3D printed wearable. In step 302, upon initiation of the client application software in the mobile device, and in a manner similar to step 202 of FIG. 2, the mobile device prompts for and receives a user selection for wearable type and subclass. In step 304, the client application software loads (e.g., from local or remote memory) the instructions, for the selected wearable type and subclass, that are used to properly direct a user to capture image data of the user's body part or parts as applicable to the wearable type. - In
step 306, the loaded instructions are provided to the user via the mobile device's user interface (e.g., via a touchscreen and/or audio speaker) to facilitate image data capture. The body part imaging may involve acquiring multiple images. The images required may depend on the kind of mobile device used and/or the wearable type selected. The instructions provided to the user can include details such as the orientation of the camera and the objects/body parts that should be in the frame of the camera's viewfinder. - In
step 308, the mobile device camera captures body part image data. The body part image data may come in any of various formats, such as 2D images, 3D images, video clips, and body scanner imaging data. In some embodiments, the body part image data may come from a third-party apparatus (such as a body scanner at a doctor's office) or from a user uploading images via a website. - In
step 310, the client application software performs validation on the captured image. In some embodiments, the validation is performed by the server application software. The validation can use object and pixel recognition software incorporated into either the client or server application software to determine whether or not a given image is acceptable. For example, if the image expected is of the user's foot in a particular orientation, an image of a dog is recognized as unacceptable based on expected variations in pixel color. In the case of an unacceptable image, in step 312, the current image data is deleted or archived, and the user is instructed again to capture the correct image data. When image data is acceptable, the process moves on to step 314. - In
step 314, the application software determines whether more images are required. If so, the instructions for the next image are presented and the user is once again expected to orient the mobile device correctly to capture an acceptable image. When the application has all the image data required, the process continues to step 316. - In
step 316, the client application software accepts other user customizations that are unconnected to the size, curvature, or shape of the wearable. Examples of these other user customizations include color, material, and branding. In step 318, the wearable data collected is transmitted by the mobile device to the processing server. -
FIG. 4 is a flowchart illustrating a process by which the mobile device interacts with the user to acquire images of the user. In some embodiments, particular image data is used, and multiple images may be requested. In the shoe insole example, where the user wishes to purchase insoles for both feet, five photos of image data per pair of insoles to be fabricated may be requested (e.g., an image of the top of each of the user's feet, an image of the inner side of each foot, and a background image of the space behind the side images, without the user's foot). Where the user wishes to obtain only a single insole, three images are used; the system does not take images of the foot that will not serve to model a custom insole. - In
step 402, the mobile device provides the user with instructions for the top down views. In some embodiments, where a non-3D camera is used, the instructions include a reference object. In some embodiments, the reference object is a piece of standard-sized paper, such as letter size (e.g., 8.5×11 inch) or A4 size. Because such paper has well-known dimensions and is commonly available in almost every home, it can be used as a convenient reference object. Based on the length versus width proportions of the paper, the application software can determine automatically whether the paper is letter size, legal size, A4 size, or another suitable standardized size known in the art. Based on the detected size of the paper, the application software has dimensions of known size within the frame of the image. In other embodiments, the user may be asked to indicate the paper size chosen via the user interface (e.g., letter size or A4). - The instructions for the top down image direct the user to find an open floor space on a hard flat surface (such as wood or tile) in order to avoid warping the paper, thereby causing errors in the predetermined sizes. The user is instructed to place the paper flush against a wall, stand on the paper, and aim the mobile device downward towards the top of the user's foot.
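The proportion check described above can be sketched by comparing the observed long-edge to short-edge pixel ratio against the ratios of standard paper sizes. The nearest-ratio rule and candidate set below are illustrative assumptions.

```python
# Long-edge to short-edge ratios of common standardized paper sizes.
PAPER_RATIOS = {
    "letter": 11.0 / 8.5,    # ~1.294
    "a4": 297.0 / 210.0,     # ~1.414
    "legal": 14.0 / 8.5,     # ~1.647
}

def classify_paper(long_px: float, short_px: float) -> str:
    """Guess which standard paper size best matches the length-versus-width
    proportions of the sheet as detected in the image."""
    observed = long_px / short_px
    return min(PAPER_RATIOS, key=lambda name: abs(PAPER_RATIOS[name] - observed))
```

Because the three ratios are well separated, a modest amount of perspective error in the detected edges still yields the correct classification.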
- In some embodiments, there is an additional instruction to put a fold in the paper so that, when placed flush with the wall, the paper does not slide under molding or other wall adornments. Additionally, the mobile device user interface includes a level or orientation instruction, which is driven by an accelerometer or gyroscope onboard the mobile device. The level shows the user the acceptable angle at which image data is captured.
- In some embodiments, no reference object is necessary. Where the mobile device includes two cameras, parallax distance measurement between two photographs may be used to determine a known distance and therefore calculate sizes of the body part. In some cases it is preferable to perform a number of parallax distance measurements to different points between the two photographs in order to find comparative distances between those points, enabling derivation of additional angular data between the two photographs. As with the reference object, once the image has a first known distance, other sizes within the image (such as the shape of body parts) may be calculated with mathematical techniques known in the art.
- Where a single camera is used, additional sensors are utilized to provide data as necessary. The camera, used in conjunction with an accelerometer, gyroscope, or inertial measurement unit (“IMU”), enables an effect similar to that of two cameras. After the first image is taken, the camera is moved, and the relative movement from the first position is tracked by the IMU. The camera then takes a second image. Given the information from the IMU, the parallax angle between where the first image was captured and where the second image was captured can be calculated.
- The method may be performed with a video clip instead. While the video clip is captured, the IMU tracks the movement of the mobile device relative to a first location. Time stamps between the video clip and the IMU tracking are matched up to identify single frames as static images; the parallax angle between each pair of frames is then solvable, and the distances to objects in the images are identifiable. In some embodiments, the video clip is an uninterrupted clip. The uninterrupted clip may pass around the body part capturing image data.
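A minimal sketch of the two ingredients described above: triangulating distance from the parallax between two capture positions, and pairing video frames with IMU samples by timestamp. The pinhole-camera disparity formula and the nearest-timestamp rule are standard techniques assumed here for illustration, not details specified in the disclosure.

```python
def depth_from_parallax(focal_px: float, baseline: float, disparity_px: float) -> float:
    """Distance to a point, given the camera focal length in pixels, the
    baseline between the two capture positions (from a second lens or from
    IMU-tracked movement), and the pixel disparity of the point."""
    return focal_px * baseline / disparity_px

def pose_for_frame(frame_ts: float, imu_samples: list):
    """Pair a video frame with the IMU pose whose timestamp is closest.
    imu_samples is a list of (timestamp, pose) tuples."""
    return min(imu_samples, key=lambda sample: abs(sample[0] - frame_ts))[1]

# With a 1000 px focal length, a 0.10 m baseline, and 50 px of disparity,
# the imaged point is estimated to be 2.0 m away.
distance_m = depth_from_parallax(1000.0, 0.10, 50.0)
```

Once one such distance is known, the remaining sizes in the frame follow from the scale-factor arithmetic used with the reference object.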
- In
step 404, the mobile device captures images of the user's foot from the top down. Reference is made to the use of a single foot; this process is repeated, however, for each foot for which the user wishes to purchase an insole. Later, during image processing, computer vision, and machine learning operations on the processing server, the top down images are used to determine the length and width of the foot (at more than one location). Example locations for determining length and width include heel to big toe, heel to little toe, the joint of the big toe horizontally across, and the distance from either side of the first to fifth metatarsal bones. An additional detail collected from the top down view is the skin tone of the user's foot. - In
step 406, the mobile application or API provides the user with directions to collect image data for the inner sides of the user's foot. This image is later used to process the curvature of the foot arch. The mobile application or API instructs the user to place the mobile device up against a wall and then place a foot into a shaded region of the viewfinder. Based upon predetermined specifications of the model of mobile device being used, and the orientation of the mobile device (indicated by onboard sensors), the application knows the height of the camera from the floor; using a known model of mobile device provides a known or expected height for the camera lens. - In
step 408, the mobile device captures images of the inner side of the user's foot. Later, during computer vision operations on the processing server, the inner side images are mapped for the curvature of the user's foot arch. Using pixel color differences between the background and the foot, the computer vision process identifies a number of points (e.g., 100) from the beginning of the arch to the end. In some embodiments, the server application software uses the skin tone captured from the top down images to aid the computer vision process in identifying the foot from the background.
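The point-identification step described above can be sketched as a column scan: for each sampled column, the first pixel whose value is within a tolerance of the expected skin tone is taken as a point on the arch silhouette. The grayscale grid and tolerance below are toy assumptions standing in for real image data.

```python
def arch_silhouette(image, skin_value, tol=30):
    """image: rows of grayscale values (row 0 at top). For each column,
    record the topmost pixel matching the skin tone within `tol`; together
    the (x, y) points trace the arch curve against the background."""
    points = []
    for x in range(len(image[0])):
        for y, row in enumerate(image):
            if abs(row[x] - skin_value) <= tol:
                points.append((x, y))
                break
    return points
```

In a production pipeline a computer vision library would supply the thresholding and contour extraction; the principle of separating foot from background by pixel value is the same.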
- In
step 410, the mobile device provides instructions to take an image matching the inner side foot photos, only without a foot. This image aids the computer vision process in differentiating between the background of the image and the foot in prior images. Within a predetermined degree of error tolerance, the difference between the inner side images and the background images should only be the lack of a foot; thus anything appearing in both images would not be the user's foot. - The example in
FIG. 4 concerning a user's foot is intended to be illustrative. Many different body parts can be imaged with similar sets of photographs/media, which may vary in angle and/or number based on the body part. -
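The background comparison described in step 410 can be sketched as a per-pixel difference: pixels that match (within a tolerance) in both the with-foot image and the empty background image are ruled out as not being the foot. The tolerance and the toy pixel grids are illustrative assumptions.

```python
def foot_mask(side_image, background_image, tol=25):
    """True where the with-foot image differs from the empty background
    beyond `tol`; matching pixels appear in both images and so cannot be
    the user's foot."""
    return [
        [abs(a - b) > tol for a, b in zip(row_side, row_bg)]
        for row_side, row_bg in zip(side_image, background_image)
    ]
```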
FIG. 5 is a flowchart illustrating a process for performing computer vision on collected user images in order to generate size and curvature specifications. FIG. 5 is directed to the example of a foot, though other body parts work similarly. The curves of each body part vary; the foot is used in this example because it represents a complex, curved body structure. The steps of FIG. 5 are performed by the server application software. In step 502, the processing server receives image data from the mobile device. Once received, in step 504, the server application software analyzes the image data to determine distances between known points or objects on the subject's body part. Example distances include heel to big toe, heel to little toe, the joint of the big toe horizontally across, and the distance from either side of the first to fifth metatarsal bones. This process entails using predetermined or calculable distances based on a reference object, or calculated distances with knowledge of camera movement, to provide a known distance (and angle). In some embodiments, the reference object can be a piece of standard size paper (such as 8.5″×11″), as mentioned above. The application software then uses known distances to calculate unknown distances associated with the user's body part based on the image. - In
step 506, the processing server analyzes the image data for body part curvature. The computer vision process seeks an expected curve associated with the body part and with the type of wearable selected. Once the curve is found, in step 508, points are plotted along the curve in a coordinate graph (see FIG. 6). Shown in FIG. 6, the coordinate graph 50 includes an X,Y location along the curve in a collection of points 52. Taken together, the collection of points 52 model the curvature of the body part (here, the arch of a foot). - Returning to
FIG. 5, in step 510, the processing server packages the data collected by the computer vision process into one or more files for inspection. In some embodiments, in step 512, an administrative user conducts an inspection of the output data from the computer vision process in relation to the acquired images. If there are obvious errors in the data (e.g., the curvature graph is of a shape clearly inconsistent with the curvature of the applicable body part), the generated data is deemed to have failed inspection and can be rejected. - In
step 514, a user or an administrator may perform a manual edit of the computer-vision-reviewed image data. The system transmits a copy of the original images to the user/administrator for editing. The user edits the points and then transmits the edited images; the user need only provide a reduced selection of points rather than an entire curvature. If no manual edit occurs, in step 516, the image data is processed through an enhancement process to further improve the distinction between foot and background. The enhancement process refers to image retouching that improves line clarity by editing sharpness, resolution, or selective color using individual pixel, group pixel, or vector image edits. The current computer-vision-reviewed images are discarded, and the computer vision process is run again. If a manual edit occurs, in step 518, the processing server receives updated curvature and/or size specifications. - In
step 520, final copies of the size and curvature specifications of the user's subject body part are forwarded to a customization engine of the server application software on the processing server. -
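One simple figure that can be derived from the curve points plotted in steps 506-508 is an arch height, taken here as the maximum vertical deviation of the plotted curve above the straight chord joining its endpoints. This particular summary statistic is an illustrative assumption, not a measurement the disclosure names.

```python
def arch_height(points):
    """points: (x, y) samples along the arch, ordered from heel to toe.
    Returns the maximum rise of the curve above the straight chord
    joining its endpoints."""
    (x0, y0), (x1, y1) = points[0], points[-1]

    def chord(x):
        t = (x - x0) / (x1 - x0)
        return y0 + t * (y1 - y0)

    return max(y - chord(x) for x, y in points)
```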
FIG. 7 is a flowchart illustrating a process performed by the customization engine for customizing tessellation models. In step 702, the user's selected wearable type and subclass are used to narrow the selection of tessellation model kits from a large number of provided options to a select group. For example, if the wearable type is a shoe insole, all other wearable type tessellation model kits are eliminated from the given printing task. Further, if the shoe insole is of subclass “running shoe insole,” other shoe insole tessellation model kits are eliminated. The remaining set of premade tessellation model kits may be, for example, those associated with different sizes of shoe, men's or women's shoes, or a limited variation of generic foot types (e.g., normal, narrow, flat, high arch, etc.). The narrowing of tessellation model kits reduces the amount of processing that needs to occur when customization is applied. - In
step 704, the computer vision data, including the size and curvature specifications, is imported into the customization engine. In some embodiments, the vision data is roughly categorized, thereby eliminating irrelevant tessellation model kits. What remains is a single, determined model kit, which most closely resembles the size and curvature specifications. In some embodiments, the tessellation model is instead built from the ground up, on the fly, based on the observations from image processing, computer vision, and machine learning. - In
step 706, the size and curvature specifications are applied to the determined model kit. In doing so, predetermined vertices of the determined model kit are altered using graph coordinates obtained from the computer vision operations. - In
step 708, other adjustments can be made to prepare the tessellation model for printing. The other adjustments can be ornamental and/or functional but are generally unconnected to the measurements of the user obtained through computer vision operations. One possible technique for adjusting a tessellation model uses so-called “negative normals.” -
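Steps 702-706 can be sketched as a filter, a nearest-match selection, and a vertex update. The kit fields and the squared-difference matching metric below are illustrative assumptions about the data model, not the disclosure's actual file formats.

```python
def customize(kits, wearable_type, subclass, specs, curve):
    """Narrow premade kits by type and subclass, pick the kit whose stored
    measurements best match the computer-vision specs, then move its
    predetermined vertices to the measured curve coordinates."""
    candidates = [k for k in kits
                  if k["type"] == wearable_type and k["subclass"] == subclass]
    kit = min(candidates,
              key=lambda k: sum((k["specs"][s] - specs[s]) ** 2 for s in specs))
    # Replace the z coordinate of any vertex the curvature graph covers.
    vertices = {vid: (x, y, curve.get(vid, z))
                for vid, (x, y, z) in kit["vertices"].items()}
    return {**kit, "vertices": vertices}
```

Filtering before matching is what keeps the per-order processing small: the distance computation runs only over the handful of kits in the selected subclass.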
FIG. 8 illustrates a technique to create or adjust a complex tessellation file that can be used in generating 3D printed wearables as described herein. The technique illustrated inFIG. 8 is referred to as a “negative normal” technique. A 3-D object, model, or lattice is a hollow digital object. The object is constructed from a lattice of planar polygons, e.g., triangles or quadrilaterals. Each polygon 802 that makes up the lattice of the 3-D object has an outer surface and an inner surface. - The term “normal” in this context refers to a normal vector, i.e., a vector that is perpendicular to the surface of a polygon. In a positive object, positive normals point outward from the surface polygons and negative normals point inward from the surface polygons; a negative object is the reverse.
- Where a 3D object is generated with the inner side and outer surface reversed such that the inner, negative surface is on the exterior of the 3D object, a negative object is created. Where a negative object intersects with and extends from a positive object, a negative normal is created.
- A negative normal is treated as an error by conventional 3D printer software, which results in the printer not printing the portions of the positive object that have negative normals. However, negative normals can be used advantageously to create a 3D printable object with complex geometry.
-
FIG. 8 illustrates progressions of a positive object (shoe insole) with a negative object applied as a negative normal. The top-most 3-D object 54 is a simple shoe insole positive object, in which the polygons 55 that make up the positive shoe insole are visible. The second object from the top 56 is a smoothed version of the positive insole object. The third object from the top 58 illustrates the use of a negative normal. - In the third object from the top 58, the
positive shoe insole 60 intersects with a negative ellipsoid prism 62. The negative ellipsoid 62 includes a horizontal upper surface 64 and an angled lower surface 66, which contacts the upper surface of the positive shoe insole 60. The negative ellipsoid prism 62 further includes a number of vertically oriented negative cylindrical tubes 68. In the empty space where the tubes 68 are not present, the positive shoe insole is unaffected. Where the negative ellipsoid prism intersects the positive shoe insole, the top surface of the positive shoe insole 60 is reduced in height. - In the fourth object from the top 70, the result of the negative normal technique is shown. The
positive shoe insole 60 has a number of bumps 72 on the topmost surface, caused by the removal of the surrounding polygon lattice. The negative normal technique does not alter the digital rendering, but only the 3-D printed object. This is because the digital rendering is a hollow object, whereas the 3-D printed object is solid. Hence, stating that the negative normal “removes” material from the positive object is, strictly speaking, a misnomer, though it may be a convenient way of thinking of the technique. More precisely, the material is never applied to the 3D printed object in the first place. The bottom-most object 74 in FIG. 8 illustrates the results of the negative normal technique used a number of times. - If the negative normal technique were not used in the example of
FIG. 8, the many bumps 72 of the final shoe insole would have to be generated individually. The amount of processing resources used to calculate a bump constructed of many positive polygonal planes of varying size is substantially more than the processing power needed to create a simple negative shape, which has comparatively fewer polygonal planes. -
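The notion of a normal and its sign can be made concrete with a cross product: a triangle's normal follows its vertex winding order, so listing the same vertices in reverse order flips the normal, which is what turns a positive surface into a negative one. This sketch assumes plain tuple vertices rather than any particular mesh file format.

```python
def triangle_normal(a, b, c):
    """Normal vector of triangle (a, b, c) via the cross product of two
    edge vectors; the winding order of the vertices fixes its direction."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

# Reversing the winding (a, c, b) negates every component of the normal.
```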
FIG. 9 is a block diagram illustrating a distributed system for the generation of customized 3D printed wearables according to the technique introduced here. Embodiments of the system and method herein may be distributed. The hardware involved may communicate over a network operated by different parties, be directly connected and operated by the same party, or any combination thereof. - For example, in some embodiments, the
3D printer 26 is at a network location and outsourced to a contractor for 3D printing. This contrasts with FIG. 1, in which the 3D printer 26 is directly connected to the backend processing server 24. Accordingly, instructions are sent to the 3D printer 26 over a network. Additionally, the manual inspection computer 28 may be separate from the backend processing server 24, in both a physical sense and as an acting entity. For example, the manual inspection computer 28 may be operated by a doctor of a patient who owns the mobile device 22. In another configuration, the manual inspection computer 28 may be operated by the same corporate entity that operates the processing server 24. In yet another configuration, both the mobile device 22 and the manual inspection computer 28 are operated by a doctor of a patient for whom body images/videos are taken. - The above are merely examples; there are multiple combinations and distributions of actors and hardware. In order to achieve this distribution, embodiments of the system introduced here include a software interface, such as an application program interface (“API”) 54, which is used across the distributed hardware to coordinate inputs and outputs in order to reach a 3D-printed and delivered wearable. The
API 54 is instantiated on a number of the hardware objects of the system, and ultimately references databases 56 on the processing server 24. - The
database 56 stores body images/videos, associated 3D models of body parts, and 3D models of wearables which match the 3D models of the body parts. Each of these images/models is indexed by a user or order number. Devices which instantiate the API 54 may call up images/videos/materials at various points in the wearable generation process, provide input, and observe the status of said materials. The API 54 is able to provide queryable status updates for a given wearable order. In this way, the wearable generation process has a great degree of transparency and modularity. -
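The indexing and status lookup described above can be sketched as a keyed store; the field names and status values are illustrative assumptions about the database 56, not its actual schema.

```python
orders = {}  # order number -> record of materials and status

def record_order(order_no, images=None, body_model=None, wearable_model=None,
                 status="received"):
    """Index all materials for a wearable order under one order number."""
    orders[order_no] = {"images": images or [], "body_model": body_model,
                        "wearable_model": wearable_model, "status": status}

def query_status(order_no):
    """Queryable status update for a given wearable order."""
    return orders[order_no]["status"]

record_order(1001, images=["top.jpg", "side.jpg"], status="vision_complete")
```

Any device instantiating the API could issue such a lookup by order number, which is what gives the pipeline its transparency.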
FIG. 10 is a flowchart illustrating a process for API access at a number of steps in wearable generation. The process does not necessarily have to be performed by a single entity. Rather, there are a number of starting points and ending points where different entities may participate in the process. The flow chart ofFIG. 10 shows inputs on the left, outputs on the right, and processes down the middle. - Each input (“i” steps) may be performed by one of a number of relevant actors. Alternatively, each process step has a result which is passed to the next step. Depending on the steps, process steps may include external input, as previous process result as input, or both. Outputs (“o” steps) provide visibility into the process and help provide human actors information to make decisions in future steps, or to provide entertainment value.
- In
step 1002, the system obtains image or video data of a part of a living body. Step 1002 is technically an input as well. For the purposes of FIG. 10, step 1002 refers to obtaining the video or images through mobile application software 42, making use of a device camera. The mobile application software 42 has an internal (non-API based) connection to the processing server 24. - Alternatively, or in conjunction, the process begins at
step 1004 i. In step 1004 i, the API 54 receives body data, which is subsequently provided to the processing server 24. In this case, the API 54 is installed as an extension within other non-operating-system software. The API 54 is installed on an imaging device or on a computer which contains data collected from an imaging device. This data may vary in format, number of dimensions, and content. - In
step 1004, the processing server 24 uses the input (from 1002, 1004 i, or both) to generate a digital body model. The digital body part model may come in a number of formats. In some embodiments, the digital body model is a 3D model; in others, the digital model is a series of figures which model size, shape, and character; in still others, the digital model is a number of 2D images including properties and metadata. - The creation of the digital body model occurs first through format recognition and then through computer vision. The
processing server 24 first identifies the format of the input received in order to determine how that input is processed. Once the format is determined, the method of computer vision is selected. The result of the computer vision process is the digital body model. In some embodiments, such as those already including a 3D body model, the computer vision process is skipped. - In step 1004 o, the
API 54 exposes the digital body model. Accordingly, users, doctors of users, or other suitable persons relevant to the creation of a wearable are able to view the data on a number of hardware platforms. The digital body model is indexed by the user or order number, and thus searching the database 56 via the API 54 may return the digital body model output. - In
step 1006 i, relevant persons are enabled either to provide their own body models from 3D imaging devices and/or previously measured figures through the API 54, or to cause the API 54 to provide messaging to the processing server 24 accepting the body model input of step 1004. The body models would come from other existing scanners, or be models that were generated by the user. The API 54 accepts the pre-generated models as input and moves that input forward in the process as if the user-submitted body model were the result of step 1004. - In
step 1006, the processing server 24 generates a digital wearable 3D model based on the digital body model. The generation of the 3D model of the wearable proceeds as methods taught herein have disclosed, or by another suitable method known in the art. In step 1006 o, the API 54 exposes the 3D wearable model. Accordingly, users, doctors of users, or other suitable persons relevant to the creation of a wearable are able to view the 3D wearable model on a number of hardware platforms. The 3D wearable model is indexed by the user or order number, and thus searching the database 56 via the API 54 may return the 3D wearable model output. - In
step 1008 i, relevant persons are enabled through a user interface to cause the API 54 to provide messaging to the processing server 24 accepting the 3D wearable model input of step 1006. The output of step 1006 o provides the basis for the user's decision. In step 1008, the 3D wearable model is transmitted to a 3D printer 26 for printing. In some embodiments, 3D printing occurs through an additive process, though one skilled in the art would appreciate 3D printing through a subtractive (reduction) process. - In
step 1010, where there are a number of separately printed components of the wearable, these components are assembled. In some embodiments, where the 3D printer 26 is operated by an external printing entity, multiple components are sent to a central location/entity for assembly. In some embodiments, the system routes printing and/or assembly to remote locations based upon the delivery address. These steps may be performed close to the delivery location where assets are available to provide the service. - At step 1010 o, the
processing server 24 uploads an assembly-process video or time lapse to a host location. Where cameras are available at the 3D printer 26 or during the assembly process, regular image captures or videos are taken and indexed by user or order number. These image captures or videos may be assembled into a wearable generation video and posted on a web site hosted by the processing server 24, or on an external video hosting service (such as YouTube). - In
step 1012 i, if an external printer still holds the printed wearable, the wearable is shipped to the central customer management location. In step 1012, central customer management ships the printed wearable to a user and/or doctor. In some embodiments, elements of the order are provided to a local asset for delivery to the user. -
FIG. 11 is a flowchart illustrating an embodiment for handling a number of input body data types. While many of these steps refer to the processing server 24 as the actor, multiple sources may actually perform these steps, as illustrated by FIG. 10. In step 1102, body data is obtained. As described above, the character of the body data may vary. Examples include 2D/3D images/video with various levels of penetration and light exposure/usage (e.g., conventional photography/video, infrared imaging/video data, confocal microscopy data, lightfield imaging data, or ultrasound imaging data). This data may come from a number of sources. In step 1104, the body data is transmitted to the processing server 24. - Once transmitted, the format of the data is determined. In
step 1106, the processing server 24 categorizes the data as video or still. Where the data is a still, in step 1108 the data is categorized as photograph(s) or scanner models. A scanner model refers to a 3D model generated by a body scanner. - In
step 1110, where the body data is a video, the processing server 24 parses frames (stills) of the video and matches them to model still images. The matching is performed with pixel and object comparisons. Objects are identified in the frames based on pixels. Pixels or objects are matched to pixels or objects in the model images. In step 1112, the processing server 24 extracts from the video, as stills, the frames that best match the model images. In step 1114, the processing server 24 converts the frames into image files. - In
step 1116, where a still photograph is detected, the photograph is converted into an image format on which computer vision may be performed. This is relevant in the case of, for example, images that are not taken in the visible spectrum (such as infrared or ultrasound images). In this step, objects are identified from the pixels in the image file. In step 1118, images are matched to model images similarly to step 1110. However, depending on where the body data photographs originated, the photographs may already match model images (e.g., where photo capture instructions are followed prior to transmitting the body data). - In
step 1120, regardless of the original body data input, the current product of the process will be in the same image data format after the conversions in steps 1114 and 1116, and the processing server 24 runs the computer vision process on the images. The computer vision process identifies characteristics of the photographed/recorded body part. In step 1122, the processing server 24 generates a digital body model. A digital model may take a number of forms, one of which originates from a body scanner in step 1108 without additional modification. Other digital body models are descriptive numbers in particular parameter fields. Regardless of the type of digital body model, in step 1124, the processing server 24 generates a wearable tessellation file that corresponds to the digital body model. In step 1126, a 3D printer prints that tessellation file. -
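The FIG. 11 dispatch above can be sketched as a routine that categorizes the incoming body data and reduces each category to a common image set. The frame "scores", category tags, and function names below are illustrative stand-ins; the patent's actual matching compares pixels and objects rather than scalar scores.

```python
# Stand-in similarity scores take the place of real frames and model stills;
# a real system would perform the pixel/object comparison described in the text.
def best_frames(frames, model_stills):
    """For each model still, keep the closest-scoring frame (cf. step 1112)."""
    return [min(frames, key=lambda f: abs(f - m)) for m in model_stills]

def normalize_body_data(kind, data, model_stills=None):
    """Reduce each input category to a uniform structure (cf. steps 1106-1120)."""
    if kind == "video":       # video branch: extract best-matching stills
        return {"images": best_frames(data, model_stills)}
    if kind == "photo":       # photo branch: pass images through conversion
        return {"images": list(data)}
    if kind == "scan":        # scanner model: already a 3D body model
        return {"model": data}
    raise ValueError(f"unrecognized body data format: {kind}")

print(normalize_body_data("video", [0.1, 0.4, 0.9], model_stills=[0.0, 1.0]))
# -> {'images': [0.1, 0.9]}
```

The scanner-model branch mirrors the text's observation that a body-scanner input can flow to step 1122 without additional modification.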
FIG. 12 is a flowchart illustrating wearable generation including concurrent computer vision and machine learning processes. FIG. 12 is a detailed look at step 1004 of FIG. 10 and some surrounding steps. The steps of FIG. 12 are generally performed by the processing power available within the entire system. However, that processing power may be distributed across a number of devices and servers. For example, some steps may be performed by a mobile device such as a smartphone while others are performed by a cloud server. - In
step 1200, input body part image data is provided to the system. The input data may be provided in any of various ways (e.g., through direct uploads from smartphone applications, web uploads, API uploads, partner application uploads, etc.). In step 1202, the uploaded body part image data is processed to obtain a uniform image format for further processing. FIG. 11 details portions of the image pre-processing. - Once the body image input data is in a uniform data format, the method proceeds to
steps 1204 and 1206. In steps 1204 and 1206, concurrent computer vision and machine learning processes analyze the image data to detect the body part. - In
step 1208, the system checks whether a body part was, in fact, detected. Where a body part was detected, the method proceeds; where a body part was not detected, the method skips to the observational steps to update the machine learning models. The user interface additionally signals the user, and the user may initiate the method again from the beginning. - In some embodiments, steps 1204-1208 are processed by local mobile devices owned by a user. Because these steps are often performed before the user selects a product type (the body part is identified first, and the product is chosen after body part identification), some efficiency may be gained by using local processing power as opposed to transmitting the data prior to making the body part identification.
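A minimal sketch of the step-1208 gate follows, assuming a hypothetical on-device detector that returns a confidence score; the threshold and names are illustrative, not from the disclosure.

```python
def gate_body_part(images, detector, threshold=0.5):
    """Proceed only when some image appears to contain a body part;
    otherwise signal the user to retry (the miss would still be recorded
    as an observation for the machine learning update steps)."""
    scores = [detector(img) for img in images]
    return "proceed" if max(scores) >= threshold else "retry"

# Stand-in detector: "sees" a foot only in frames labeled as such.
fake_detector = lambda img: 0.9 if img == "foot" else 0.1
print(gate_body_part(["wall", "foot"], fake_detector))  # proceed
print(gate_body_part(["wall", "wall"], fake_detector))  # retry
```

Running this gate on the device, as the text suggests, avoids uploading frames that would fail the check anyway.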
- Further, the product type selection user experience enables the system to “hide” the data transmission from the mobile device to the cloud server. Once a body part is identified, the user begins selecting a product style, type, and sub-type. While the user is engaged in these operations, the body part image data is uploaded from the mobile device to the cloud server, and the user experience is not stalled.
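The overlap described above can be sketched with a worker thread: the transfer starts as the selection UI opens and completes in the background. The upload function here is a stand-in for the real mobile-to-cloud transfer.

```python
import threading
import time

def upload(images, uploaded):
    """Stand-in for the mobile-to-cloud transfer of body part image data."""
    time.sleep(0.05)  # simulated network latency
    uploaded.extend(images)

uploaded = []
worker = threading.Thread(target=upload, args=(["img1", "img2"], uploaded))
worker.start()                 # transfer begins as product selection opens
selection = {"style": "insole", "sub_type": "sport"}  # user picks meanwhile
worker.join()                  # transfer finishes before the order is submitted
print(len(uploaded))
```

The join before order submission is the key design point: the user never waits on the upload unless it outlasts the entire selection flow.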
- In
steps 1210 and 1212, the system identifies the specific body part and the user selects a product type. - In
step 1214, the system identifies regions of the body part which are relevant to the curvature of 3-D models (such as tessellation files). In steps 1216 and 1218, the system extracts data points from the identified regions of the body part images. - In
step 1220, the extracted data points are assembled into usable data for 3-D model generation or tessellation file generation. In step 1222, the data points and body part images undergo post-processing. In step 1224, the system adds the data points and body part image data to the total observations. In step 1226, the system enables users or administrators to perform an audit review. This step is detailed in the paragraphs above. At this point the data points are delivered to model generation, and the rest of the 3-D printing process continues separately. - After steps 1220-1226, the data is added to a database. The process of
FIG. 12 continues with an assessment and learning phase. In step 1228, the system reviews the process and performs a performance assessment. In step 1230, the machine learning engine of the system updates the observations from the database and the performance assessment. If the process continues, in step 1234, the machine learning models are updated. The updated machine learning models are recycled into use in subsequent iterations of the detection and extraction steps. -
FIG. 13 is a flowchart illustrating distance measurement in images taken from single-lens 2-D cameras. Where the mobile device includes a single camera and an inertial measurement unit (IMU), parallax distance measurement between two photographs may be used to determine a known distance and therefore calculate sizes of the body part. As with the reference object, once the image has a first known distance, other sizes within the image (such as the shape of body parts) may be calculated with mathematical techniques known in the art. - In
step 1302, a first image is taken in a first position. In step 1304, the camera is moved, and the IMU tracks the relative movement from the first position to a second position. In step 1306, the camera takes a second image. - The method may be performed with a video clip as well. While the video clip is captured, the IMU tracks the movement of the mobile device relative to a first location. Time stamps between the video clip and the IMU tracking are matched up to identify single frames as static images. In
step 1308, the system identifies regions of interest in the images. This is explained in FIG. 12 at step 1214. - In
step 1310, given information from the IMU, the system calculates the parallax angle between the positions where the first and second images were captured. In step 1312, the system calculates the distance to the region or point of interest based on the parallax angle and the distance between the first and second positions. In step 1314, the system uses geometric math to solve for a number of distances within each image. These distances are used to assign coordinates to a number of points in the images, and are later used to develop 3-D models of objects within the images. -
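The steps above reduce to simple triangle geometry. In the symmetric case (notation mine, not the patent's), a baseline B reported by the IMU and a parallax angle p subtended at the point of interest give distance Z = (B/2) / tan(p/2):

```python
import math

def parallax_angle(baseline_m, distance_m):
    """Angle subtended at the point of interest by the camera baseline
    (symmetric two-view case)."""
    return 2 * math.atan((baseline_m / 2) / distance_m)

def distance_from_parallax(baseline_m, parallax_rad):
    """Invert the relation: recover distance from baseline and parallax
    (cf. step 1312)."""
    return (baseline_m / 2) / math.tan(parallax_rad / 2)

p = parallax_angle(0.10, 0.40)  # 10 cm of camera travel, point 40 cm away
print(round(distance_from_parallax(0.10, p), 3))  # 0.4
```

A production system would additionally correct for camera intrinsics and non-perpendicular motion, but the triangulation at the core is this relation.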
FIG. 14 is a graphic illustration of a customized computer mouse generated through body scanning. In addition to wearables, the processes taught in this disclosure may be used to generate other items interfacing with the contours of a user's body. One example of such an item is a computer mouse 76. Mouse peripherals are designed to have a significant interface with the human body, specifically a human hand and fingertips. Where body image data is collected on a user's fingers, the system may generate a computer mouse 76 with matching finger indentations 78 custom printed for a particular user. -
FIG. 15 is a graphic illustration of a customized ear headphone generated through body scanning. Another example of a wearable that can be custom designed in the disclosed system is a headphone 80; the headphone has a speaker enclosure 82 which is custom formed to an ear cavity. Using body image data of a person's ear enables custom generation of fitted ear cavity speakers 82. -
FIG. 16 is a graphic illustration of an assortment of wearables that are generated via body imaging followed by 3-D printing. The illustrations in FIG. 16 are intended to be illustrative of 3D-printable wearables that conform to a body part of the wearer. Examples include a bra 84, a helmet 86, a brace 88, or goggles 90. There are many possible wearables suited for many purposes. -
FIG. 17 is a graphic illustration of customized eyeglasses generated through body scanning. Another example of a wearable that can be custom designed in the disclosed system is eyeglasses. A number of segments may be customized to a user's body. -
FIG. 18 is a graphic illustration of a customized brace generated through body scanning. Another example of a wearable that can be custom designed in the disclosed system is a brace-type orthotic. A number of segments may be customized to a user's body. - Although the invention is described herein with reference to the preferred embodiment, one skilled in the art will readily appreciate that other applications may be substituted for those set forth herein without departing from the spirit and scope of the present invention. Accordingly, the invention should only be limited by the claims included below.
Claims (20)
1. A method comprising:
receiving, through an application program interface (API), body part image data of a part of a living body;
determining a set of physical dimensions of the body part by using a plurality of images of the body part image data as parallax viewpoints, wherein a position of each parallax viewpoint and a set of points on the part of the living body identified via pixel recognition are used to identify a depth, and wherein the depth is used to derive the set of physical dimensions;
generating a digital model of the body part based on the set of physical dimensions;
generating a 3D model of a wearable corresponding to the body part based on the digital model; and
sending print instructions to a 3D printer including information associated with printing the 3D model of the wearable to a physical wearable.
2. The method of claim 1, further comprising:
receiving a message, by the API, from a user, the message accepting the 3D model of the wearable; and
printing, by a 3D printer, a wearable corresponding to the 3D model of the wearable.
3. The method of claim 1, further comprising:
receiving, by a mobile device application, body part images of the living body from a mobile device camera;
filtering, by the mobile device application, the body part images into accepted images and rejected images based upon a computer vision comparison to a set of expected images; and
communicating, by the mobile device application, accepted images to the API as the body part image data.
4. The method of claim 1, further comprising:
receiving, by a mobile device application, images of the living body from a mobile device camera, the images including a predetermined reference object having known dimensions; and
wherein determining the set of physical dimensions further includes a comparison of the body part to the reference object.
5. The method of claim 1, wherein the part of the living body is a foot, and the 3D model of a wearable is a footwear insert.
6. The method of claim 5, further comprising:
receiving, by a mobile device application, foot images of the living body from a mobile device camera, the foot images including a predetermined reference object having known dimensions; and
wherein determining the set of physical dimensions further includes a comparison of the body part to the reference object.
7. The method of claim 6, wherein the reference object is a sheet of standardized sized paper.
8. The method of claim 3, further comprising:
issuing instructions, by the mobile device application, to a user of an associated mobile device depicting expected positioning of the body part.
9. The method of claim 1, wherein the body part image data comprises at least one of:
infrared imaging data;
confocal microscopy data;
lightfield imaging data; or
ultrasound imaging data.
10. The method of claim 1, further comprising:
collecting, by a mobile application of a mobile device, video data of a part of a living body, the video data including a video clip wherein a film perspective rotates about the part of the living body; and
extracting image frames from the video data, the image frames including the part of the living body captured at a number of perspectives, wherein the image frames are determined by comparing and matching each frame of the video data to reference frames, wherein the reference frames preexist the video data.
11. The method of claim 1, further comprising:
printing, by a 3D printer, a wearable corresponding to the 3D model of the wearable;
generating a time-lapse video depicting said printing step; and
transmitting the time-lapse video to a web host for hosting.
12. The method of claim 1, wherein the received body image data is of a first format, the first format is computer readable and one of a number of predetermined formats, and wherein the generating a digital model of the body part step further comprises:
identifying the first format from the number of predetermined formats;
determining a subset of data to extract from the body image data of the first format to use in converting at least a portion of the body image data to a result format;
extracting the subset of the body image data from the body image data; and
converting the subset of the body image data to the result format.
13. A system comprising:
a network-connected server configured to process body part image data of a part of a living body, and to generate a 3D model of a wearable corresponding to the body part based on the body part image data, wherein a set of physical dimensions of the body part is calculated using a plurality of images of the body part image data as parallax viewpoints, wherein a position of each parallax viewpoint and a set of points on the part of the living body identified via pixel recognition are used to identify a depth, and wherein the depth is used to derive the set of physical dimensions; and
an API instantiated on a number of devices external to the server and configured to communicate with the server, the API configured to receive the body part image data on a first device of the external devices and to expose the corresponding 3D model of the wearable to at least a subset of the number of external devices.
14. The system of claim 13, further comprising:
a communication interface configured to output the 3D model of the wearable to a 3D printer to print the wearable from the 3D model of the wearable.
15. The system of claim 13, further comprising:
a mobile device application instanced on a mobile device, the mobile device application in communication with the server via a mobile device network communicator and integrated with the API, the mobile device application configured to interface with a mobile device camera and capture body part image data.
16. The system of claim 14, further comprising:
a video server that hosts time-lapse videos that are accessible from the Internet.
17. A method comprising:
receiving, through an application program interface (API), body part image data of a part of a living body;
determining a set of physical dimensions of the body part without use of a reference object in the body part image data, wherein the body part image data is derived from a single-lens visible light photodetector;
generating a digital model of the body part based on the set of physical dimensions;
generating a 3D model of a wearable corresponding to the body part based on the digital model; and
sending print instructions to a 3D printer including information associated with printing the 3D model of the wearable to a physical wearable.
18. The method of claim 17, further comprising:
receiving, by a mobile device application, body part images of the living body from a mobile device camera;
filtering, by the mobile device application, the body part images into accepted images and rejected images based upon a computer vision comparison to a set of expected images; and
communicating, by the mobile device application, accepted images to the API as the body part image data.
19. The method of claim 18, further comprising:
issuing instructions, by the mobile device application, to a user of an associated mobile device depicting expected positioning of the body part.
20. The method of claim 17, further comprising:
collecting, by a mobile application of a mobile device, video data of a part of a living body, the video data including a video clip wherein a film perspective rotates about the part of the living body; and
extracting image frames from the video data, the image frames including the part of the living body captured at a number of perspectives, wherein the image frames are determined by comparing and matching each frame of the video data to reference frames, wherein the reference frames preexist the video data.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/784,713 US20200257266A1 (en) | 2016-01-06 | 2020-02-07 | Generating of 3d-printed custom wearables |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201662275242P | 2016-01-06 | 2016-01-06 | |
US15/390,406 US10564628B2 (en) | 2016-01-06 | 2016-12-23 | Generating of 3D-printed custom wearables |
US16/784,713 US20200257266A1 (en) | 2016-01-06 | 2020-02-07 | Generating of 3d-printed custom wearables |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/390,406 Continuation US10564628B2 (en) | 2016-01-06 | 2016-12-23 | Generating of 3D-printed custom wearables |
Publications (1)
Publication Number | Publication Date |
---|---|
US20200257266A1 (en) | 2020-08-13
Family
ID=59235283
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/390,406 Expired - Fee Related US10564628B2 (en) | 2016-01-06 | 2016-12-23 | Generating of 3D-printed custom wearables |
US15/634,161 Active US10067500B2 (en) | 2016-01-06 | 2017-06-27 | Generating of 3D-printed custom wearables |
US16/784,713 Abandoned US20200257266A1 (en) | 2016-01-06 | 2020-02-07 | Generating of 3d-printed custom wearables |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/390,406 Expired - Fee Related US10564628B2 (en) | 2016-01-06 | 2016-12-23 | Generating of 3D-printed custom wearables |
US15/634,161 Active US10067500B2 (en) | 2016-01-06 | 2017-06-27 | Generating of 3D-printed custom wearables |
Country Status (6)
Country | Link |
---|---|
US (3) | US10564628B2 (en) |
EP (1) | EP3400548A4 (en) |
JP (2) | JP2019503906A (en) |
CN (1) | CN109219835A (en) |
AU (2) | AU2016384433B9 (en) |
WO (1) | WO2017120121A1 (en) |
Families Citing this family (57)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104217350B (en) * | 2014-06-17 | 2017-03-22 | 北京京东尚科信息技术有限公司 | Virtual try-on realization method and device |
US9817439B2 (en) * | 2016-02-29 | 2017-11-14 | JumpStartCSR, Inc. | System, method and device for designing, manufacturing, and monitoring custom human-interfacing devices |
US9460557B1 (en) * | 2016-03-07 | 2016-10-04 | Bao Tran | Systems and methods for footwear fitting |
US11040491B2 (en) | 2016-10-19 | 2021-06-22 | Shapeways, Inc. | Systems and methods for identifying three-dimensional printed objects |
US11439508B2 (en) * | 2016-11-30 | 2022-09-13 | Fited, Inc. | 3D modeling systems and methods |
US20180160777A1 (en) | 2016-12-14 | 2018-06-14 | Black Brass, Inc. | Foot measuring and sizing application |
JP7225099B2 (en) | 2017-01-06 | 2023-02-20 | ナイキ イノベイト シーブイ | Systems, Platforms and Methods for Personalized Shopping Using Automated Shopping Assistants |
KR20230031996A (en) | 2017-06-27 | 2023-03-07 | 나이키 이노베이트 씨.브이. | System, platform and method for personalized shopping using an automated shopping assistant |
US10239259B2 (en) * | 2017-07-18 | 2019-03-26 | Ivan Ordaz | Custom insole |
EP3514757A1 (en) * | 2018-01-18 | 2019-07-24 | Koninklijke Philips N.V. | Spectral matching for assessing image segmentation |
GB2573496A (en) * | 2018-03-08 | 2019-11-13 | C & J Clark International Ltd | Article of footwear and method of assembling the same |
CN108665493B (en) * | 2018-04-17 | 2020-08-04 | 湖南华曙高科技有限责任公司 | Three-dimensional printing and scanning method, readable storage medium and three-dimensional printing and scanning control equipment |
CN108748987A (en) * | 2018-04-27 | 2018-11-06 | 合肥海闻自动化设备有限公司 | A kind of platform mechanism for shoes Portable industrial figure punch |
CN108995220B (en) * | 2018-07-17 | 2020-04-28 | 大连理工大学 | 3D printing path planning method for complex thin-wall structure object based on reinforcement learning |
US11054808B2 (en) | 2018-09-27 | 2021-07-06 | Intrepid Automation | Management platform for additive manufacturing production line |
CN113164265A (en) | 2018-11-12 | 2021-07-23 | 奥索冰岛有限公司 | Medical devices comprising filament-based structures |
US20200275862A1 (en) * | 2019-03-01 | 2020-09-03 | Wiivv Wearables Inc. | Multiple physical conditions embodied in body part images to generate an orthotic |
DE102019108820A1 (en) * | 2019-04-04 | 2020-10-08 | Onefid Gmbh | Device for manufacturing an individually configured insole for a shoe |
US10805709B1 (en) | 2019-04-10 | 2020-10-13 | Staton Techiya, Llc | Multi-mic earphone design and assembly |
IT201900006076A1 (en) * | 2019-04-18 | 2020-10-18 | Medere S R L | PROCESS FOR THE PRODUCTION OF CUSTOMIZED CUSTOMIZED INSOLES, WITH REMOTE ACQUISITION AND THREE-DIMENSIONAL PRINTING |
CN110070077B (en) * | 2019-05-09 | 2022-02-01 | 瑞昌芯迈科技有限公司 | Arch type identification method |
CN110051078A (en) * | 2019-05-09 | 2019-07-26 | 瑞昌芯迈科技有限公司 | A kind of customized insole design method and its manufacturing method |
US11176738B2 (en) * | 2019-05-15 | 2021-11-16 | Fittin, Llc | Method for calculating the comfort level of footwear |
WO2020257242A1 (en) | 2019-06-17 | 2020-12-24 | The Regents Of The University Of California | Systems and methods for fabricating conformal magnetic resonance imaging (mri) receive coils |
US11576794B2 (en) * | 2019-07-02 | 2023-02-14 | Wuhan United Imaging Healthcare Co., Ltd. | Systems and methods for orthosis design |
US11537203B2 (en) | 2019-07-23 | 2022-12-27 | BlueOwl, LLC | Projection system for smart ring visual output |
US11551644B1 (en) | 2019-07-23 | 2023-01-10 | BlueOwl, LLC | Electronic ink display for smart ring |
US11984742B2 (en) | 2019-07-23 | 2024-05-14 | BlueOwl, LLC | Smart ring power and charging |
US11853030B2 (en) * | 2019-07-23 | 2023-12-26 | BlueOwl, LLC | Soft smart ring and method of manufacture |
US11537917B1 (en) | 2019-07-23 | 2022-12-27 | BlueOwl, LLC | Smart ring system for measuring driver impairment levels and using machine learning techniques to predict high risk driving behavior |
US11462107B1 (en) | 2019-07-23 | 2022-10-04 | BlueOwl, LLC | Light emitting diodes and diode arrays for smart ring visual output |
US11637511B2 (en) | 2019-07-23 | 2023-04-25 | BlueOwl, LLC | Harvesting energy for a smart ring via piezoelectric charging |
US11909238B1 (en) | 2019-07-23 | 2024-02-20 | BlueOwl, LLC | Environment-integrated smart ring charger |
US11949673B1 (en) | 2019-07-23 | 2024-04-02 | BlueOwl, LLC | Gesture authentication using a smart ring |
US11594128B2 (en) | 2019-07-23 | 2023-02-28 | BlueOwl, LLC | Non-visual outputs for a smart ring |
CN110693132B (en) * | 2019-10-24 | 2021-03-23 | 瑞昌芯迈科技有限公司 | Customized insole design method based on pressure acquisition |
US11883306B2 (en) | 2019-11-12 | 2024-01-30 | Ossur Iceland Ehf | Ventilated prosthetic liner |
CA3170985A1 (en) * | 2020-02-14 | 2021-08-19 | Mytox Ink, LLC | Systems and methods for botulinum toxin or other drug injections for medical treatment |
WO2021168179A1 (en) | 2020-02-19 | 2021-08-26 | Auburn University | Methods for manufacturing individualized protective gear from body scan and resulting products |
US11238188B2 (en) | 2020-04-01 | 2022-02-01 | X Development Llc | Generating personalized exosuit designs |
CN111523970B (en) * | 2020-04-15 | 2021-03-26 | 南京市职业病防治院 | Personalized full-face mask customizing system based on 3D printing technology and customizing method thereof |
DE102020205563A1 (en) | 2020-04-30 | 2021-11-04 | Footprint Technologies GmbH | Recording of anatomical dimensions and determination of suitable garments |
US11853034B2 (en) | 2020-05-08 | 2023-12-26 | Skip Innovations, Inc. | Exosuit activity transition control |
EP3916346B1 (en) | 2020-05-27 | 2023-01-18 | Medere Srl | Method for the production of customised orthotics |
JP2023528376A (en) | 2020-05-29 | 2023-07-04 | ナイキ イノベイト シーブイ | Captured image processing system and method |
WO2022024195A1 (en) * | 2020-07-27 | 2022-02-03 | 株式会社Vrc | Server and information processing method |
US20220043940A1 (en) * | 2020-08-05 | 2022-02-10 | X Development Llc | 3d printed exosuit interface |
CN112192845A (en) * | 2020-09-29 | 2021-01-08 | 马鞍山实嘉信息科技有限公司 | Three-dimensional printing control system and three-dimensional printing method |
US11903896B2 (en) | 2020-10-26 | 2024-02-20 | Skip Innovations, Inc. | Flexible exosuit for assistive mobility |
WO2022093207A1 (en) * | 2020-10-28 | 2022-05-05 | Hewlett-Packard Development Company, L.P. | Computer vision model generation |
CN112971265B (en) * | 2021-02-05 | 2022-05-03 | 重庆小爱科技有限公司 | Customized multifunctional shoe and manufacturing method thereof |
DE102021110751A1 (en) | 2021-04-27 | 2022-10-27 | Personomic - Paul Eichinger, Christian Renninger, Andreas Schulz GbR (vertretungsberechtigte Gesellschafter: Paul Eichinger, 71254 Ditzingennger; Christian Renninger, 70197 Stuttgart; Andreas Schulz, 70197 Stuttgart) | Method and computer program for producing an individualized handle |
WO2023276868A1 (en) * | 2021-06-28 | 2023-01-05 | Dic株式会社 | Data providing device, method for providing foot care product, and foot care product |
KR102338339B1 (en) * | 2021-07-19 | 2021-12-10 | Choi Hyuk | Method for constructing custom insoles |
CN114612606A (en) * | 2022-02-11 | 2022-06-10 | Guangdong Shidi Intelligent Technology Co., Ltd. | Shoe body exclusive customization method and device based on graphic elements and color matching data |
CN114670451B (en) * | 2022-03-16 | 2024-05-10 | Yisheng Jiangxin (Shanxi) Technology Co., Ltd. | Manufacturing method of 3D (three-dimensional) systematic customized earphone |
CN114834041B (en) * | 2022-04-12 | 2023-10-27 | Shenzhen Guangde Education Technology Co., Ltd. | Model manufacturing method based on customer human body parameters |
Family Cites Families (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040133431A1 (en) * | 2001-01-31 | 2004-07-08 | Toma Udiljak | Integrated internet-based orthotic shoe insole marketing and production system |
WO2006059246A2 (en) * | 2004-11-08 | 2006-06-08 | Dspv, Ltd. | System and method of enabling a cellular/wireless device with imaging capabilities to decode printed alphanumeric characters |
WO2007041345A2 (en) | 2005-09-30 | 2007-04-12 | Aetrex Worldwide, Inc. | Equilateral foot bed and systems having same |
US7493230B2 (en) | 2006-06-06 | 2009-02-17 | Aetrex Worldwide, Inc. | Method and apparatus for customizing insoles for footwear |
US7656402B2 (en) * | 2006-11-15 | 2010-02-02 | Tahg, Llc | Method for creating, manufacturing, and distributing three-dimensional models |
JP2010533008A (en) * | 2007-06-29 | 2010-10-21 | 3M Innovative Properties Company | Synchronous view of video data and 3D model data |
ES2724115T3 (en) * | 2007-06-29 | 2019-09-06 | Midmark Corp | Graphical user interface for computer-assisted margin marking on dentures |
EP2200463A1 (en) | 2007-09-25 | 2010-06-30 | Aetrex Worldwide, Inc. | Articles prepared using recycled materials and methods of preparation thereof |
IL188645A (en) | 2008-01-07 | 2011-12-29 | Eliaho Gerby | Foot measuring device |
US20120023776A1 (en) | 2009-03-09 | 2012-02-02 | Aetrex Worldwide, Inc. | Shoe sole inserts for pressure distribution |
CN103025192A (en) | 2010-06-25 | 2013-04-03 | Antai International Company | Shoe with conforming upper |
US9448636B2 (en) | 2012-04-18 | 2016-09-20 | Arb Labs Inc. | Identifying gestures using gesture data compressed by PCA, principal joint variable analysis, and compressed feature matrices |
US8934675B2 (en) | 2012-06-25 | 2015-01-13 | Aquifi, Inc. | Systems and methods for tracking human hands by performing parts based template matching using images from multiple viewpoints |
US20150165690A1 (en) * | 2012-07-18 | 2015-06-18 | Adam P. Tow | Systems and methods for manufacturing of multi-property anatomically customized devices |
US9760674B2 (en) | 2013-07-26 | 2017-09-12 | Aetrex Worldwide, Inc. | Systems and methods for generating orthotic device models from user-based data capture |
JP6099232B2 (en) | 2013-08-22 | 2017-03-22 | Bespoke, Inc. | Method and system for creating custom products |
US20150190970A1 (en) * | 2014-01-03 | 2015-07-09 | Michael Itagaki | Texturing of 3d medical images |
US20150382123A1 (en) | 2014-01-16 | 2015-12-31 | Itamar Jobani | System and method for producing a personalized earphone |
US10528032B2 (en) | 2014-10-08 | 2020-01-07 | Aetrex Worldwide, Inc. | Systems and methods for generating a patterned orthotic device |
US9984409B2 (en) | 2014-12-22 | 2018-05-29 | Ebay Inc. | Systems and methods for generating virtual contexts |
WO2016183582A1 (en) | 2015-05-14 | 2016-11-17 | Foot Innovations, Llc | Systems and methods for making custom orthotics |
- 2016
  - 2016-12-23 US US15/390,406 patent/US10564628B2/en not_active Expired - Fee Related
  - 2016-12-30 EP EP16884238.3A patent/EP3400548A4/en not_active Withdrawn
  - 2016-12-30 WO PCT/US2016/069603 patent/WO2017120121A1/en active Application Filing
  - 2016-12-30 AU AU2016384433A patent/AU2016384433B9/en not_active Ceased
  - 2016-12-30 CN CN201680083237.8A patent/CN109219835A/en active Pending
  - 2016-12-30 JP JP2018535879A patent/JP2019503906A/en active Pending
- 2017
  - 2017-06-27 US US15/634,161 patent/US10067500B2/en active Active
- 2020
  - 2020-02-07 US US16/784,713 patent/US20200257266A1/en not_active Abandoned
  - 2020-06-11 AU AU2020203848A patent/AU2020203848A1/en not_active Abandoned
  - 2020-10-15 JP JP2020173890A patent/JP2021008126A/en active Pending
Also Published As
Publication number | Publication date |
---|---|
CN109219835A (en) | 2019-01-15 |
WO2017120121A1 (en) | 2017-07-13 |
AU2016384433A1 (en) | 2018-07-26 |
AU2016384433B2 (en) | 2020-03-19 |
US20170293286A1 (en) | 2017-10-12 |
JP2021008126A (en) | 2021-01-28 |
JP2019503906A (en) | 2019-02-14 |
AU2020203848A1 (en) | 2020-07-02 |
US20170190121A1 (en) | 2017-07-06 |
AU2016384433B9 (en) | 2020-03-26 |
EP3400548A1 (en) | 2018-11-14 |
EP3400548A4 (en) | 2019-10-09 |
US10067500B2 (en) | 2018-09-04 |
US10564628B2 (en) | 2020-02-18 |
Similar Documents
Publication | Title |
---|---|
US20200257266A1 (en) | Generating of 3d-printed custom wearables |
CN110662484B (en) | System and method for whole body measurement extraction | |
US10380794B2 (en) | Method and system for generating garment model data | |
JP2017531950A (en) | Method and apparatus for constructing a shooting template database and providing shooting recommendation information | |
US10593104B2 (en) | Systems and methods for generating time discrete 3D scenes | |
JP2021192250A (en) | Real time 3d capture using monocular camera and method and system for live feedback | |
KR102097016B1 (en) | Apparatus and methdo for analayzing motion | |
CN105373929B (en) | Method and device for providing photographing recommendation information | |
KR20190007535A (en) | Imaging a body | |
JP2009020761A (en) | Image processing apparatus and method thereof | |
JP2010128742A (en) | Three-dimensional data creation device | |
US20200211170A1 (en) | Live viewfinder verification of image viability | |
Lussu et al. | Ultra close-range digital photogrammetry in skeletal anthropology: A systematic review | |
WO2016184285A1 (en) | Article image processing method, apparatus and system | |
CN115803783A (en) | Reconstruction of 3D object models from 2D images | |
TW201241781A (en) | Interactive service methods and systems for virtual glasses wearing | |
WO2018045766A1 (en) | Method and device for photographing, mobile terminal, computer storage medium | |
JP2019139356A (en) | Build-to-order system | |
KR100952382B1 (en) | Animation automatic generating apparatus of user-based and its method | |
Phan et al. | Create 3D Models from Photos Captured by Sony Alpha 7 Mark 2 Digital Camera | |
JP2021140405A (en) | Image search device, image search method and image search program | |
KR20080073982A (en) | Apparatus and method of updating feature information of object, and object recognition system and method employing the same | |
JP2008282282A (en) | Nostalgic image display device, nostalgic image display method and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
| STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |