US20220215598A1 - Infinitely layered camouflage - Google Patents
Infinitely layered camouflage
- Publication number
- US20220215598A1 (application US 17/405,499)
- Authority
- US
- United States
- Prior art keywords
- camouflage pattern
- background
- scene
- model
- depth
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F41—WEAPONS
- F41H—ARMOUR; ARMOURED TURRETS; ARMOURED OR ARMED VEHICLES; MEANS OF ATTACK OR DEFENCE, e.g. CAMOUFLAGE, IN GENERAL
- F41H3/00—Camouflage, i.e. means or methods for concealment or disguise
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/001—Texturing; Colouring; Generation of texture or colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/32—Indexing scheme for image data processing or generation, in general involving image mosaicing
Abstract
A camouflage pattern is provided that appears to have infinite focus and depth of field even at 100 percent size for the elements in the camouflage pattern. Generally, three-dimensional (3D) models of elements to be used in the camouflage pattern are captured or generated. The models are then arranged in a scene with a background (e.g., an infinite background) via 3D graphics editing programs such as are used to render computer-generated graphics in video games and movies. A two-dimensional (2D) capture of the scene thus shows all visible surfaces of the elements in the scene in focus at all depths of field. The elements may or may not be shaded by one another from the perspective of the image capture location in the 3D environment.
Description
- A portion of the disclosure of this patent document contains material that is subject to copyright protection. The copyright owner has no objection to the reproduction of the patent document or the patent disclosure, as it appears in the U.S. Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
- This application claims priority to U.S. patent application Ser. No. 16/450,642 entitled “INFINITELY LAYERED CAMOUFLAGE” filed on Jun. 24, 2019.
- The present invention relates generally to camouflaging objects by applying a camouflage pattern to them via printing, wrapping, or covering. More particularly, this invention pertains to generating camouflage patterns.
- Current camouflage patterns are generated by capturing images of an environment from a given perspective (e.g., over reed grasses in a duck pond) and repeating that image to generate a never-ending camouflage pattern. The captured images may be edited, such as by making them irregular or stitching them one to the next to generate the repeating image, so that breaks between repeating edges are not obvious to an ordinary observer of the pattern (or to wildlife viewing the pattern). Alternatively, the repeating image may be synthesized. That is, images of elements (e.g., a group of reeds, a clump of grasses, a cattail, etc.) in the environment may be taken, stitched together, and overlaid onto a background to generate the image used as the repeating image in the never-ending camouflage pattern. Examples of varying versions of these techniques may be found in U.S. Pat. Nos. 9,208,398; 9,322,620; 9,631,900; 9,746,288; 9,835,415; 9,920,464; and 9,952,020.
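The requirement that repeating edges not be obvious can be made concrete with a small sketch: if the tile's left and right boundaries follow the same irregular profile, offset by the repeat period, adjacent copies interlock with no gaps or overlaps. The profile values below are invented for illustration, not taken from any actual pattern.

```python
# Sketch of an irregular, interlocking tile outline (illustrative only):
# the left edge of each tile copy follows a wavy profile, and because
# every copy uses the same profile shifted by PERIOD, protrusions of one
# copy fill the recesses of the next exactly.

PERIOD = 8   # horizontal repeat distance of the tile
HEIGHT = 6   # rows in the example edge profile

def edge_profile(y):
    """Horizontal offset of the tile's left edge at row y (example values)."""
    return [0, 2, 3, 1, 2, 0][y % HEIGHT]

def owning_copy(y, x):
    """Which tile copy owns cell (y, x) when the tile repeats horizontally."""
    # Copy k occupies columns [k*PERIOD + edge_profile(y), (k+1)*PERIOD + edge_profile(y)).
    return (x - edge_profile(y)) // PERIOD

# Every cell belongs to exactly one copy, and shifting by one period moves
# ownership to the next copy, i.e., the irregular edges interlock seamlessly.
seamless = all(
    owning_copy(y, x + PERIOD) == owning_copy(y, x) + 1
    for y in range(HEIGHT)
    for x in range(3 * PERIOD)
)
```

Because ownership is defined by the shared profile, the seam never produces a straight repeating line for an observer to pick out.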
- These image-based techniques appear flat. That is, overhead images of a duck blind look like overhead images of a duck blind that lack depth of field, because the camera's depth of field cannot extend through the full depth of the captured scene due to physical limitations of camera sensor size and aperture size. Piecing elements onto a background overcomes some of the depth of field issues because multiple images of an element taken at different focal points may be stitched together in two-dimensional (2D) image editing software, using the in-focus portion of each image representing the element or object. For example, 5-10 close-up images of a stick at varying focal lengths or points may be stitched together to represent a 6 inch diameter stick about a foot long. Thus, only 5-10 layers, points, or surfaces on the stick are actually in focus, and there may be even fewer layers (e.g., 1 or 2) in focus for smaller elements such as leaves. The composite image of the element is generally shrunk by about 50 percent before being added to the camouflage pattern's repeating image, which also forces the element to appear more universally focused (i.e., having more layers than it actually does) to an observer. However, at 100 percent size, the lack of focus throughout the object and pattern, particularly at different depths of field in the image, becomes evident to many observers and may begin to be noticed by wildlife.
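The stitching of in-focus portions described above is essentially focus stacking: per pixel, keep the value from the focal slice with the highest local contrast. This is a toy grayscale sketch with invented data; production tools also align the slices and smooth the selection map.

```python
# Toy focus stacking: for each pixel, pick the value from the focal slice
# whose local contrast (a crude proxy for "in focus") is highest.
# The two 3x6 grayscale slices below are invented example data.

def sharpness(img, y, x):
    """Absolute discrete Laplacian at (y, x), used as a focus measure."""
    h, w = len(img), len(img[0])
    center = img[y][x]
    neighbors = [img[(y - 1) % h][x], img[(y + 1) % h][x],
                 img[y][(x - 1) % w], img[y][(x + 1) % w]]
    return abs(4 * center - sum(neighbors))

def focus_stack(stack):
    """Merge several focal slices, keeping the sharpest pixel of each."""
    h, w = len(stack[0]), len(stack[0][0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            best = max(stack, key=lambda img: sharpness(img, y, x))
            out[y][x] = best[y][x]
    return out

# Slice a is sharp (high contrast) on the left half, slice b on the right.
a = [[0, 255, 0, 10, 12, 11],
     [255, 0, 255, 11, 10, 12],
     [0, 255, 0, 12, 11, 10]]
b = [[10, 12, 11, 0, 255, 0],
     [11, 10, 12, 255, 0, 255],
     [12, 11, 10, 0, 255, 0]]
merged = focus_stack([a, b])
```

The merged image takes its left half from slice `a` and its right half from slice `b`, mirroring how only the photographed-in-focus surfaces of an element survive into the composite.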
- Aspects of the present invention provide a camouflage pattern appearing to have infinite focus and depth of field even at 100 percent size for the elements in the camouflage pattern. Generally, three-dimensional (3D) models of elements to be used in the camouflage pattern are captured or generated. The models are then arranged in a scene with a background (e.g., an infinite background) via 3D graphics editing programs such as are used to render computer-generated graphics in video games and movies. A 2D capture of the scene thus shows all visible surfaces of the elements in the scene in focus at all depths of field. The elements may or may not be shaded by one another from the perspective of the captured image in the 3D environment.
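A minimal way to see why a synthetic 2D capture is sharp at every depth: a z-buffer projection keeps the nearest surface per pixel and applies no lens blur, so a distant element renders exactly as crisply as a near one. The quad scene below is invented for the sketch and is not the patent's rendering pipeline.

```python
# Toy orthographic z-buffer render: each pixel takes the color of the
# nearest surface covering it, with no aperture and hence no defocus,
# so every visible surface is "in focus" regardless of its depth.

def render(scene, w, h):
    """scene: list of (x0, y0, x1, y1, depth, color) axis-aligned quads.
    Returns an h x w grid of colors (0 = empty background)."""
    image = [[0] * w for _ in range(h)]
    zbuf = [[float("inf")] * w for _ in range(h)]
    for x0, y0, x1, y1, depth, color in scene:
        for y in range(y0, y1):
            for x in range(x0, x1):
                if depth < zbuf[y][x]:  # nearest surface wins this pixel
                    zbuf[y][x] = depth
                    image[y][x] = color
    return image

# Two overlapping "elements" at very different depths.
scene = [
    (1, 1, 5, 5, 2.0, 1),    # near element
    (3, 3, 8, 8, 50.0, 2),   # far element, rendered just as sharply
]
img = render(scene, 10, 10)
```

Where the quads overlap, the nearer element occludes the farther one, but depth never degrades sharpness; this is the property the rendered camouflage pattern inherits.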
- In one aspect, a method of making a camouflage pattern includes receiving a three-dimensional model of a first element. The three-dimensional model of the first element is combined with a background to create a scene. A view of the scene is rendered in a two-dimensional output format. The rendered view of the scene and the two-dimensional output format is the camouflage pattern.
- In another aspect, an object has a camouflage pattern on a surface thereof. The camouflage pattern includes a background and a first element. All visible surfaces at every depth of the first element are in focus.
- In another aspect, a non-transitory computer readable medium has computer executable instructions stored thereon representative of an image file. The image file is representative of a camouflage pattern including a background and a first element. All visible surfaces at every depth of the first element are in focus when the camouflage pattern is rendered.
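The "all visible surfaces at every depth are in focus" property in the aspects above is exactly what a physical lens cannot deliver: the standard thin-lens depth-of-field formulas bound the sharp zone, and only in the pinhole limit (vanishing aperture), which a blur-free synthetic render effectively emulates, does that zone become unbounded. The numeric settings below are ordinary camera examples, not parameters from this disclosure.

```python
# Thin-lens depth-of-field limits: the near and far distances that stay
# acceptably sharp for focal length f, f-number N, and circle of
# confusion c (all in mm), when focused at subject distance s (mm).

def dof_limits(f, N, c, s):
    H = f * f / (N * c) + f               # hyperfocal distance
    near = H * s / (H + (s - f))
    far = H * s / (H - (s - f)) if (s - f) < H else float("inf")
    return near, far

# A 50 mm lens at f/8 (c = 0.03 mm) focused 1 m away keeps well under
# 20 cm sharp -- too shallow for a deep element photographed up close.
near8, far8 = dof_limits(50.0, 8.0, 0.03, 1000.0)

# Stopping down toward a pinhole (larger N) widens the zone; as N grows
# without bound, the depth of field diverges.
near64, far64 = dof_limits(50.0, 64.0, 0.03, 1000.0)
```

This is why the composite-photo approach of the background section needs shrinking tricks, while the rendered scene needs none.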
- FIG. 1 is a plan view of a segment of a camouflage pattern (irregular edges or outline of the pattern not shown).
- FIG. 2 is an isometric view of an object having the camouflage pattern of FIG. 1 thereon.
- FIG. 3 is a flow chart of a method of creating an infinitely layered camouflage pattern.
- Reference will now be made in detail to optional embodiments of the invention, examples of which are illustrated in accompanying drawings. Whenever possible, the same reference numbers are used in the drawing and in the description referring to the same or like parts.
- While the making and using of various embodiments of the present invention are discussed in detail below, it should be appreciated that the present invention provides many applicable inventive concepts that can be embodied in a wide variety of specific contexts. The specific embodiments discussed herein are merely illustrative of specific ways to make and use the invention and do not delimit the scope of the invention.
- To facilitate the understanding of the embodiments described herein, a number of terms are defined below. The terms defined herein have meanings as commonly understood by a person of ordinary skill in the areas relevant to the present invention. Terms such as “a,” “an,” and “the” are not intended to refer to only a singular entity, but rather include the general class of which a specific example may be used for illustration. The terminology herein is used to describe specific embodiments of the invention, but their usage does not delimit the invention, except as set forth in the claims.
- As described herein, an upright position is considered to be the position of apparatus components while in proper operation or in a natural resting position as described herein. Vertical, horizontal, above, below, side, top, bottom and other orientation terms are described with respect to this upright position during operation unless otherwise specified. The term “when” is used to specify orientation for relative positions of components, not as a temporal limitation of the claims or apparatus described and claimed herein unless otherwise specified. The terms “above”, “below”, “over”, and “under” mean “having an elevation or vertical height greater or lesser than” and are not intended to imply that one object or component is directly over or under another object or component.
- The phrase “in one embodiment,” as used herein does not necessarily refer to the same embodiment, although it may. Conditional language used herein, such as, among others, “can,” “might,” “may,” “e.g.,” and the like, unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or states. Thus, such conditional language is not generally intended to imply that features, elements and/or states are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without operator input or prompting, whether these features, elements and/or states are included or are to be performed in any particular embodiment.
- Referring to FIG. 1, a pattern 100 includes a plurality of major elements. The pattern 100 may be, for example, a camouflage pattern such as Mossy Oak Breakup. The pattern 100 includes a first major element 102, a second major element 104, and a third major element 106. The pattern 100 may include a background 108 within which the major elements 102, 104, and 106 reside and blend into. The major elements may include, for example, leaves, branches, acorns, sticks, reeds, grass, or dirt. The background 108 may be, for example, tree bark or a leaf covered ground, or may be an artificially created three-dimensional (3D) infinite background (typically including dirt, tree bark, leaves, snow, moss, etc. and/or patches of similar colors).
- Referring to FIG. 2, an object 200 has a surface 202 to which the pattern 100 has been applied via dip transfer printing, vinyl wrap, or direct printing. Application of the camouflage pattern 100 to the surface 202 of the object 200 is shown leaving a void 204, which sometimes occurs with the dip transfer process. Various methods of repairing voids are known in the art. In one embodiment, the camouflage pattern 100 is repeated on the object 200, and the camouflage pattern 100 has an irregular outline or perimeter. That is, the outline of the camouflage pattern 100 is not rectangular, but instead includes protrusions and recesses configured to interlock with one another when the image forming the camouflage pattern 100 is repeated. In one embodiment, the camouflage pattern 100 includes a plurality of elements (102, 104, and 106), and every visible surface of each of the plurality of elements at every depth of each element is in focus.
- Referring to FIG. 3, a method of making a camouflage pattern 300 begins with receiving a 3D model of a first element 102 at 302. Optionally, a 3D model of a second element 104 is received at 304. In one embodiment, receiving the 3D model of the first element 102 includes capturing a point cloud representative of the first element 102 and calculating a wire mesh model of the first element 102 from the captured point cloud. This may be accomplished via a 3D scanner such as the SMARTTECH 3D Micron3D color 24 Mpix scanner. In one embodiment, capturing the point cloud representative of the first element 102 includes using a shadowless or shadeless capture system.
- The 3D model of the first element 102 and the 3D model of the second element 104 are combined with a background 108 at 306 to create a scene.
- At 308, a view of the scene is rendered in a two-dimensional output format to generate the camouflage pattern 100. In one embodiment, rendering the view of the scene includes rendering all visible surfaces of the 3D model of the first element 102 in focus. That is, all visible surfaces of the 3D model of the first element 102 are rendered with a depth of field exceeding the depth of the 3D model of the first element 102. In one embodiment, receiving the 3D model of the first element 102 includes capturing surfaces of the first element 102 that do not appear in the camouflage pattern 100 (because they are not visible from the viewing perspective of the 3D scene rendered in the 2D output format). In one embodiment, the method 300 further includes printing the rendered view of the scene on a dip transfer film, a fabric, or a vinyl wrap.
- Utilizing a 3D scanner to generate a wire mesh model of the first element 102 (and other elements 104, 106) results in a 3D model of the first element wherein every point on the 3D model is in focus. Therefore, a camouflage pattern 100 generated based on the 3D model of the first element 102 and an infinite background 108 has an unlimited or infinite number of layers. Practically speaking, each pixel of the camouflage pattern 100 is its own layer because every point within the 3D model may be a different distance from the point of capture (i.e., camera perspective or view), but every point is kept in focus.
- Although shown herein (e.g., at FIG. 1) with elements (102, 104, and 106) separated from one another, it is contemplated within the scope of the claims that the elements (102, 104, and 106) may overlap one another, repeat within the two-dimensional image forming the camouflage pattern 100, and appear at different distances from the point of capture or viewpoint of the three-dimensional scene upon which the two-dimensional view forming the camouflage pattern 100 is based. Additionally, shadowing within the three-dimensional scene may be eliminated, provided from the point of view of capture of the three-dimensional scene, or determined from a light source location different from the point of capture (i.e., viewpoint) of the three-dimensional scene.
- In one embodiment, a non-transitory computer readable medium has computer executable instructions stored thereon representative of an image file. The image file is representative of the camouflage pattern 100. The camouflage pattern 100 includes a background and a first element. All visible surfaces at every depth of the first element are in focus when the camouflage pattern is rendered from the image file.
- It will be understood by those of skill in the art that information and signals may be represented using any of a variety of different technologies and techniques (e.g., data, instructions, commands, information, signals, bits, symbols, and chips may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof). Likewise, the various illustrative logical blocks, modules, circuits, and algorithm steps described herein may be implemented as electronic hardware, computer software, or combinations of both, depending on the application and functionality. Moreover, the various logical blocks, modules, and circuits described herein may be implemented or performed with a general purpose processor (e.g., microprocessor, conventional processor, controller, microcontroller, state machine or combination of computing devices), a digital signal processor (“DSP”), an application specific integrated circuit (“ASIC”), a field programmable gate array (“FPGA”) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. Similarly, steps of a method or process described herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
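The method's wire-mesh step (a scanned point cloud in, triangle indices out) can be sketched for the easiest case, a structured scan grid. Real scanner output is an unordered cloud that needs full surface reconstruction (normal estimation, then Poisson or ball-pivoting meshing), so this is a data-flow illustration only, with an invented height-field "scan".

```python
# Triangulating a structured (rows x cols, row-major) point cloud into a
# wire mesh: each grid cell of four neighboring points becomes two triangles.

def grid_to_mesh(points, rows, cols):
    """Return triangle index triples into `points` for a grid-ordered cloud."""
    assert len(points) == rows * cols
    tris = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            i = r * cols + c                              # top-left corner of cell
            tris.append((i, i + 1, i + cols))             # upper-left triangle
            tris.append((i + 1, i + cols + 1, i + cols))  # lower-right triangle
    return tris

# A 3 x 3 "scan" (z used as a height field) yields 2 * 2 * 2 = 8 triangles.
cloud = [(x, y, (x * y) % 3) for y in range(3) for x in range(3)]
mesh = grid_to_mesh(cloud, 3, 3)
```

Every vertex of the resulting mesh carries exact 3D coordinates, which is why a render of the mesh, unlike a photograph, has no out-of-focus points to begin with.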
- Although embodiments of the present invention have been described in detail, it will be understood by those skilled in the art that various modifications can be made therein without departing from the spirit and scope of the invention as set forth in the appended claims.
- A controller, processor, computing device, client computing device or computer, such as described herein, includes at least one or more processors or processing units and a system memory. The controller may also include at least some form of computer readable media. By way of example and not limitation, computer readable media may include computer storage media and communication media. Computer readable storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology that enables storage of information, such as computer readable instructions, data structures, program modules, or other data. Communication media may embody computer readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism and include any information delivery media. Those skilled in the art should be familiar with the modulated data signal, which has one or more of its characteristics set or changed in such a manner as to encode information in the signal. Combinations of any of the above are also included within the scope of computer readable media. As used herein, server is not intended to refer to a single computer or computing device. In implementation, a server will generally include an edge server, a plurality of data servers, a storage database (e.g., a large scale RAID array), and various networking components. It is contemplated that these devices or functions may also be implemented in virtual machines and spread across multiple physical computing devices.
- This written description uses examples to disclose the invention and also to enable any person skilled in the art to practice the invention, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the invention is defined by the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal language of the claims.
- It will be understood that the particular embodiments described herein are shown by way of illustration and not as limitations of the invention. The principal features of this invention may be employed in various embodiments without departing from the scope of the invention. Those of ordinary skill in the art will recognize numerous equivalents to the specific procedures described herein. Such equivalents are considered to be within the scope of this invention and are covered by the claims.
- All of the compositions and/or methods disclosed and claimed herein may be made and/or executed without undue experimentation in light of the present disclosure. While the compositions and methods of this invention have been described in terms of the embodiments included herein, it will be apparent to those of ordinary skill in the art that variations may be applied to the compositions and/or methods and in the steps or in the sequence of steps of the method described herein without departing from the concept, spirit, and scope of the invention. All such similar substitutes and modifications apparent to those skilled in the art are deemed to be within the spirit, scope, and concept of the invention as defined by the appended claims.
- Thus, although there have been described particular embodiments of the present invention of a new and useful INFINITELY LAYERED CAMOUFLAGE, it is not intended that such references be construed as limitations upon the scope of this invention except as set forth in the following claims.
Claims (18)
1. A method of making a camouflage pattern, said method comprising:
receiving a three-dimensional (3D) model of a first element;
combining the 3D model of the first element with a background to create a scene; and
rendering a view of the scene in a two-dimensional (2D) output format, wherein the rendered view of the scene in the 2D output format is the camouflage pattern.
2. The method of claim 1, further comprising printing the rendered view of the scene on a dip transfer film, a fabric, or a vinyl wrap.
3. The method of claim 1, wherein rendering the view of the scene comprises rendering all visible surfaces of the 3D model of the first element in focus.
4. The method of claim 1, wherein rendering the view of the scene comprises rendering the visible surfaces of the 3D model of the first element with a depth of field exceeding a depth of the 3D model of the first element.
5. The method of claim 1, further comprising:
receiving a 3D model of a second element, wherein:
the background is a 3D background;
said combining comprises placing the 3D model of the first element and the 3D model of the second element within the 3D background to create the scene; and
said rendering a view of the scene comprises rendering the view of the scene in a two-dimensional (2D) output format, wherein the rendered view of the scene in the 2D output format is the camouflage pattern.
6. The method of claim 1, wherein receiving the 3D model of the first element comprises:
capturing a point cloud representative of the first element; and
calculating a wire mesh of the first element from the captured point cloud.
7. The method of claim 1, wherein receiving the 3D model of the first element comprises:
capturing a point cloud representative of the first element using a shadowless capture system; and
calculating a wire mesh of the first element from the captured point cloud.
8. The method of claim 1, wherein receiving the 3D model of the first element comprises capturing surfaces of the first element that do not appear in the camouflage pattern.
9. The method of claim 1, wherein the background is an infinite background.
10. The method of claim 1, wherein the background is a 3D background having infinite depth.
11. An object having a camouflage pattern thereon, said camouflage pattern comprising:
a background; and
a first element, wherein all visible surfaces at every depth of the first element are in focus.
12. The object of claim 11, wherein the background is an infinite background.
13. The object of claim 11, wherein the object is a dip transfer film, fabric, or vinyl wrap.
14. The object of claim 11, wherein the camouflage pattern is repeated on the object, and the camouflage pattern has an irregular outline.
15. The object of claim 11, wherein the camouflage pattern further comprises a plurality of elements and every visible surface of each of the plurality of elements at every depth of each element is in focus.
16. A non-transitory computer readable medium having computer executable instructions stored thereon representative of an image file, said image file representative of a camouflage pattern, said camouflage pattern comprising:
a background; and
a first element, wherein all visible surfaces at every depth of the first element are in focus when the camouflage pattern is rendered.
17. The non-transitory computer readable medium of claim 16, wherein the background is an infinite background.
18. The non-transitory computer readable medium of claim 16, wherein the camouflage pattern further comprises a plurality of elements and every visible surface of each of the plurality of elements at every depth of each element is in focus when the camouflage pattern is repeated.
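As a non-authoritative illustration of the pipeline recited in claims 1 and 3, the sketch below combines a toy 3D element with a background into a scene and renders a 2D view orthographically; an orthographic projection discards depth entirely, so every visible surface renders equally sharp at every depth. All names, types, and data here are hypothetical stand-ins, not the patented implementation.

```python
# Minimal sketch of the claimed steps: receive a 3D model, combine it with a
# background to create a scene, and render a 2D view of that scene. Dropping
# the z coordinate means all visible surfaces stay "in focus" at every depth.
from dataclasses import dataclass, field

@dataclass
class Scene:
    background: str                                # stand-in for a 3D or infinite background
    elements: list = field(default_factory=list)   # each element: list of (x, y, z) vertices

def combine(model, background):
    """Place the 3D model of the first element within the background to create a scene."""
    scene = Scene(background=background)
    scene.elements.append(model)
    return scene

def render_orthographic(scene):
    """Render a 2D view of the scene: discard z so every depth renders sharp."""
    return [[(x, y) for (x, y, _z) in element] for element in scene.elements]

reed = [(0.0, 0.0, 1.0), (0.1, 2.0, 5.0), (0.2, 0.0, 9.0)]  # toy 3D element
pattern_2d = render_orthographic(combine(reed, background="marsh"))
```

A real renderer would of course rasterize textured surfaces rather than project bare vertices; the point is only that the 2D output is produced from the full 3D scene, not assembled from pre-flattened layers.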
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/405,499 US20220215598A1 (en) | 2019-06-24 | 2021-08-18 | Infinitely layered camouflage |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/450,642 US11127172B2 (en) | 2019-06-24 | 2019-06-24 | Infinitely layered camouflage |
US17/405,499 US20220215598A1 (en) | 2019-06-24 | 2021-08-18 | Infinitely layered camouflage |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/450,642 Continuation US11127172B2 (en) | 2019-06-24 | 2019-06-24 | Infinitely layered camouflage |
Publications (1)
Publication Number | Publication Date |
---|---|
US20220215598A1 (en) | 2022-07-07 |
Family
ID=74038963
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/450,642 Active US11127172B2 (en) | 2019-06-24 | 2019-06-24 | Infinitely layered camouflage |
US17/405,499 Pending US20220215598A1 (en) | 2019-06-24 | 2021-08-18 | Infinitely layered camouflage |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/450,642 Active US11127172B2 (en) | 2019-06-24 | 2019-06-24 | Infinitely layered camouflage |
Country Status (1)
Country | Link |
---|---|
US (2) | US11127172B2 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11127172B2 (en) * | 2019-06-24 | 2021-09-21 | J. Patrick Epling | Infinitely layered camouflage |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5652963A (en) * | 1995-10-02 | 1997-08-05 | Davison; George M. | Camouflage and protective headgear |
US20040017385A1 (en) * | 2002-07-19 | 2004-01-29 | Cosman Michael A. | System and method for combining independent scene layers to form computer generated environments |
US20060244907A1 (en) * | 2004-12-06 | 2006-11-02 | Simmons John C | Specially coherent optics |
US20080079719A1 (en) * | 2006-09-29 | 2008-04-03 | Samsung Electronics Co., Ltd. | Method, medium, and system rendering 3D graphic objects |
US20110316978A1 (en) * | 2009-02-25 | 2011-12-29 | Dimensional Photonics International, Inc. | Intensity and color display for a three-dimensional metrology system |
US20120069197A1 (en) * | 2010-09-16 | 2012-03-22 | Stephen Michael Maloney | Method and process of making camouflage patterns |
US20150116321A1 (en) * | 2013-10-29 | 2015-04-30 | Travis Christopher Fortner | Camouflage and Similar Patterns Method and Technique of Creating Such Patterns |
US20150154745A1 (en) * | 2011-03-07 | 2015-06-04 | Stéphane Lafon | 3D Object Positioning in Street View |
US20200170760A1 (en) * | 2017-05-27 | 2020-06-04 | Medicim Nv | Method for intraoral scanning directed to a method of processing and filtering scan data gathered from an intraoral scanner |
US20200402269A1 (en) * | 2019-06-24 | 2020-12-24 | J. Patrick Epling | Infinitely layered camouflage |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CH707045A2 (en) | 2012-09-17 | 2014-03-31 | Ssz Camouflage Technology Ag | Adaptive visual camouflage. |
US9208398B2 (en) | 2012-11-19 | 2015-12-08 | Nta Enterprise, Inc. | Image processing for forming realistic stratum detritus detail in a camouflage pattern and a camouflage pattern formed thereby |
US9062938B1 (en) | 2014-12-12 | 2015-06-23 | The United States Of America As Represented By The Secretary Of The Army | Camouflage patterns |
US9074849B1 (en) | 2014-12-12 | 2015-07-07 | The United States Of America As Represented By The Secretary Of The Army | Camouflage for garment assembly |
KR101677929B1 (en) | 2016-06-20 | 2016-11-21 | 주식회사 동아티오엘 | Camouflaging fabrics by jacquard loom and its weaving method |
CN106052816A (en) * | 2016-06-22 | 2016-10-26 | 安庆海纳信息技术有限公司 | Thousand-seed weighing instrument based on machine vision |
- 2019-06-24: US application 16/450,642 filed (patent US11127172B2/en, status Active)
- 2021-08-18: US application 17/405,499 filed (publication US20220215598A1/en, status Pending)
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5652963A (en) * | 1995-10-02 | 1997-08-05 | Davison; George M. | Camouflage and protective headgear |
US20040017385A1 (en) * | 2002-07-19 | 2004-01-29 | Cosman Michael A. | System and method for combining independent scene layers to form computer generated environments |
US20060244907A1 (en) * | 2004-12-06 | 2006-11-02 | Simmons John C | Specially coherent optics |
US20080079719A1 (en) * | 2006-09-29 | 2008-04-03 | Samsung Electronics Co., Ltd. | Method, medium, and system rendering 3D graphic objects |
US20110316978A1 (en) * | 2009-02-25 | 2011-12-29 | Dimensional Photonics International, Inc. | Intensity and color display for a three-dimensional metrology system |
US20120069197A1 (en) * | 2010-09-16 | 2012-03-22 | Stephen Michael Maloney | Method and process of making camouflage patterns |
US20150154745A1 (en) * | 2011-03-07 | 2015-06-04 | Stéphane Lafon | 3D Object Positioning in Street View |
US20150116321A1 (en) * | 2013-10-29 | 2015-04-30 | Travis Christopher Fortner | Camouflage and Similar Patterns Method and Technique of Creating Such Patterns |
US20200170760A1 (en) * | 2017-05-27 | 2020-06-04 | Medicim Nv | Method for intraoral scanning directed to a method of processing and filtering scan data gathered from an intraoral scanner |
US20200402269A1 (en) * | 2019-06-24 | 2020-12-24 | J. Patrick Epling | Infinitely layered camouflage |
US11127172B2 (en) * | 2019-06-24 | 2021-09-21 | J. Patrick Epling | Infinitely layered camouflage |
Also Published As
Publication number | Publication date |
---|---|
US11127172B2 (en) | 2021-09-21 |
US20200402269A1 (en) | 2020-12-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11055827B2 (en) | Image processing apparatus and method | |
CN106462944B (en) | High-resolution panorama VR generator and method | |
DE112020003794T5 (en) | Depth-aware photo editing | |
US8831273B2 (en) | Methods and systems for pre-processing two-dimensional image files to be converted to three-dimensional image files | |
CN106327454B (en) | The method and apparatus of composograph | |
US20230186550A1 (en) | Optimizing generation of a virtual scene for use in a virtual display environment | |
CA2662355A1 (en) | Mosaic oblique images and methods of making and using same | |
KR102380862B1 (en) | Method and apparatus for image processing | |
CN103177432B (en) | A kind of by coded aperture camera acquisition panorama sketch method | |
JP2013542505A (en) | Method and apparatus for censoring content in an image | |
US20210217225A1 (en) | Arbitrary view generation | |
US20210125305A1 (en) | Video generation device, video generation method, program, and data structure | |
CN104867105B (en) | Picture processing method and device | |
Xu et al. | A general texture mapping framework for image-based 3D modeling | |
US20220215598A1 (en) | Infinitely layered camouflage | |
Griffiths et al. | OutCast: Outdoor Single‐image Relighting with Cast Shadows | |
US11341611B2 (en) | Automatic generation of perceived real depth animation | |
CN112511815B (en) | Image or video generation method and device | |
CN110717980A (en) | Regional three-dimensional reconstruction method and device and computer readable storage medium | |
Sun et al. | Seamless view synthesis through texture optimization | |
Hanika et al. | Camera space volumetric shadows | |
US10078905B2 (en) | Processing of digital motion images | |
JP6123341B2 (en) | Image processing apparatus, imaging apparatus, image processing method, and program | |
Birsak et al. | Seamless texturing of archaeological data | |
CN110363842A (en) | Based on 3 D stereo reconstructing method, system and the storage medium from zero filling Branch cut |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION
 | STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED
 | STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION
 | STCC | Information on status: application revival | Free format text: WITHDRAWN ABANDONMENT, AWAITING EXAMINER ACTION
 | STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED
 | STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER
 | STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED