US10275929B2 - Apparatus and method for applying a two-dimensional image on a three-dimensional model - Google Patents


Info

Publication number
US10275929B2
Authority
US
United States
Prior art keywords
point
triangle
dimensional image
dimensional model
membrane
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
US15/645,935
Other versions
US20170309058A1 (en)
Inventor
Amy Jennings
Stuart Jennings
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hybrid Software Development NV
Original Assignee
Creative Edge Software LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Creative Edge Software LLC filed Critical Creative Edge Software LLC
Priority to US15/645,935
Assigned to CREATIVE EDGE SOFTWARE LLC. Assignment of assignors interest (see document for details). Assignors: JENNINGS, Amy; JENNINGS, Stuart
Publication of US20170309058A1
Application granted
Publication of US10275929B2
Assigned to HYBRID SOFTWARE DEVELOPMENT NV. Assignment of assignors interest (see document for details). Assignors: CREATIVE EDGE SOFTWARE LLC
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/04 Texture mapping
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/001 Texturing; Colouring; Generation of texture or colour
    • G06T11/10
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/10 Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T17/205 Re-meshing
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/21 Indexing scheme for image data processing or generation, in general involving computational photography
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00 Indexing scheme for image generation or computer graphics
    • G06T2210/52 Parallel processing
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/016 Exploded view

Definitions

  • In the embodiments above, a label is applied to a model of a bottle; however, the method of the present invention is suitable for applying any two-dimensional image to any three-dimensional model. For example, the method would be suitable for applying graphics to the bodywork of a vehicle in a virtual environment.
  • The method of the present invention described above may be implemented on a computing apparatus 1, such as a personal computer or mobile computing device (e.g. a tablet). The computing apparatus 1 includes a processor 3 configured for implementing the method of the present invention, and a display device 5 configured for displaying the three-dimensional model including the applied two-dimensional image.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Processing Or Creating Images (AREA)
  • Image Generation (AREA)

Abstract

A method and apparatus for applying a two-dimensional image on a three-dimensional model composed of a polygonal mesh. The method comprises: generating an adjacency structure for all triangles within the mesh; identifying the triangle within the membrane containing the desired center point; calculating the spatial distances between its three vertices and the desired center point; checking each triangle edge to see whether the calculated distances indicate an intersection and, if a collision is detected, adding the neighbouring triangle to a list; iteratively processing the triangles in the list by calculating the spatial data of the single unknown vertex, checking the two remaining edges of the triangle for intersections and, where an intersection occurs, adding the new neighbouring triangle to the list; transforming the resulting points and spatial data into UV-coordinates; and applying the two-dimensional image to the three-dimensional model using the UV-coordinates.

Description

CROSS-REFERENCE TO RELATED APPLICATIONS
The present application claims priority to the following applications and is a continuation application of U.S. patent application Ser. No. 14/771,553 filed Aug. 31, 2015, which claims priority to PCT International Application No. PCT/GB2014/050694 filed on Oct. 3, 2014, which claims priority to British Patent Application No. GB1304321.1 filed Mar. 11, 2013, the entirety of the disclosures of which are expressly incorporated herein by reference.
STATEMENT RE: FEDERALLY SPONSORED RESEARCH/DEVELOPMENT
Not Applicable
BACKGROUND
This invention relates to an apparatus and method for applying a two-dimensional image on a three-dimensional model. More specifically, but not exclusively, this invention relates to applying a label on a three-dimensional model.
In product development, the visual appearance of the product is an important part of the marketing strategy. Accordingly, developers typically use computer editing suites to design their packaging. In some industries, containers for products are manufactured separately to the brand material, and the brand materials are subsequently fixed to the container. This is common practice in, for example, the drinks industry, in which bottles are manufactured to contain the product, and a label is produced separately and applied to the bottle.
Separate computer editing suites exist for creating the label and creating a model of the bottle. To visualize the label on the bottle, a UV map of the model is created and the label is applied as a texture. This is computationally expensive and also requires a lot of user intervention to ensure the label is applied correctly. This leads to a significant delay for the computer to render the label onto the model. Accordingly, the label cannot be placed on the bottle and moved in real-time, as the processing requirements outweigh the processing power available.
It is therefore desirable to alleviate some or all of the above problems.
BRIEF SUMMARY
According to a first aspect of the invention, there is provided a method of applying a two-dimensional image to a three-dimensional model, the three-dimensional model composed of a polygonal mesh having a plurality of vertices, the method comprising the steps of: identifying a first point corresponding to a vertex of the plurality of vertices; identifying a second point corresponding to a vertex of the plurality of vertices and proximal to the first point; calculating spatial data between the second point and the first point; iteratively identifying successive points, wherein each successive point corresponds to a vertex of the plurality of vertices and is proximal to a previously identified point, and calculating spatial data between each successive point and the previously identified point, until a stop point is identified, the stop point being a point outside the boundary of the two dimensional image; transforming the points and spatial data into UV-coordinates; and applying the two-dimensional image to the three-dimensional model using the UV-coordinates.
The present invention provides a method which may extract UV-coordinates for a particular area of the three-dimensional model, corresponding to the area on which the two-dimensional image is to be applied. Accordingly, the computational power required to apply the two-dimensional image is significantly reduced, as the conventional step of UV-mapping the entire model is not carried out.
On a computing apparatus adapted to carry out the method of the present invention, the user may therefore select a first point in an area on the three-dimensional model to apply the two-dimensional image, and the processor extracts spatial data from that area until it identifies a point on the model being a greater distance from the first point than the distance from the centre to the boundary of the two-dimensional image. As the processing power required has been reduced significantly, the two-dimensional image may therefore be applied to the three-dimensional model and displayed on the computing apparatus' display device in real time. This also allows the user to ‘drag’ the two-dimensional image across the three-dimensional model, i.e. selecting successive first points on the model, and the processor may apply the two-dimensional image to the model on the fly.
The method may further comprise the step of generating a single three-dimensional model from a plurality of three-dimensional models to form a membrane, wherein the first point is on the membrane. The present invention may therefore apply the two-dimensional image to the membrane enveloping the models. The membrane may contain most or all the vertices of the three-dimensional model, or in the case of a plurality of three-dimensional models, each representing various parts of a complex product, the membrane may contain most or all the vertices of each three-dimensional model.
The method may further comprise the step of applying a smoothing technique to the membrane. Thus, any sudden changes in curvature on the membrane (e.g. due to a gap in the membrane) may be smoothed over. Accordingly, the two-dimensional image may then be applied in a real-life manner. For example, in the case of a label being applied to a three-dimensional model, the label may be applied to the model such that it passes over the gap, in a similar manner to real-life.
A computer program product comprising computer executable code which when executed on a computer may cause the computer to perform the method of the first aspect of the invention.
According to a second aspect of the invention, there is provided a computing apparatus comprising a processor arranged to apply a two-dimensional image to a three-dimensional model, the three-dimensional model composed of a polygonal mesh having a plurality of vertices, the processor configured for: identifying a first point corresponding to a vertex of the plurality of vertices; identifying a second point corresponding to a vertex of the plurality of vertices and proximal to the first point; calculating spatial data between the second point and the first point; iteratively identifying successive points, wherein each successive point corresponds to a vertex of the plurality of vertices and is proximal to a previously identified point, and calculating spatial data between each successive point and the previously identified point, until a stop point is identified, the stop point being a greater distance from the first point than the distance from an outer edge to the boundary of the two-dimensional image; transforming the points and spatial data into UV-coordinates; and applying the two-dimensional image to the three-dimensional model using the UV-coordinates.
The computing apparatus may further comprise a display device, wherein the processor is configured to cause the display device to display the three-dimensional model including the applied two-dimensional image.
BRIEF DESCRIPTION OF THE DRAWINGS
These and other features and advantages of the various embodiments disclosed herein will be better understood with respect to the following description and drawings, in which like numbers refer to like parts throughout, and in which:
FIG. 1 is a flow diagram representing an embodiment of a method of the present invention; and
FIG. 2 is a schematic diagram illustrating a computing apparatus configured to carry out the steps of the method of FIG. 1.
DETAILED DESCRIPTION
The above description is given by way of example, and not limitation. Given the above disclosure, one skilled in the art could devise variations that are within the scope and spirit of the invention disclosed herein. Further, the various features of the embodiments disclosed herein can be used alone, or in varying combinations with each other and are not intended to be limited to the specific combination described herein. Thus, the scope of the claims is not to be limited by the illustrated embodiments.
An embodiment of a method of applying a two-dimensional image to a three-dimensional model in a virtual environment will now be described with reference to FIG. 1. The embodiment will detail an example of placing a label (the two-dimensional image) onto a bottle (the three-dimensional model), but, on reviewing the following description, the skilled person will understand that the method may be applied to any type of two-dimensional image being placed on a three-dimensional model.
As an initial step, a first image file representing a label is created. The first image file may be created in a suitable graphics editing computer program, such as Adobe Illustrator, and consists of a two-dimensional image representing a label (and shall hereinafter be referred to as the “label”). In this embodiment, the label is rectangular and includes a graphical portion enveloped by cutting lines.
As a further initial step, a first model file representing a bottle is created. The first model file may be created in any suitable three-dimensional modelling software, such as 3DS Max or AutoCAD. The first model file consists of a three-dimensional representation of a bottle (and shall hereinafter be referred to as the “bottle”), defined by a triangular mesh. The skilled person will understand that the triangular mesh represents an outer surface of the bottle, and is a non-UV mapped model.
The triangular mesh is used to create a data set which records the adjacency information of all the triangles within the mesh; in one embodiment, a structure such as a half-edge data structure may be used. This allows traversal of the mesh starting from a single point on the mesh. The resulting set of triangles with adjacency information is labelled the membrane.
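The patent does not prescribe a particular adjacency structure beyond suggesting half-edge as one option. As an illustrative sketch, the same traversal can be supported by a simple edge-to-triangle table; the function and data layout below are hypothetical, not taken from the patent:

```python
from collections import defaultdict

def build_adjacency(triangles):
    """Build a triangle-adjacency table for a mesh.

    `triangles` is a list of (i, j, k) vertex-index triples.  Two triangles
    are neighbours when they share an edge.  This is a simplified stand-in
    for the half-edge structure mentioned in the text: it supports the same
    traversal (stepping from a triangle to its edge-neighbours) without the
    per-edge records a real half-edge structure keeps."""
    edge_to_tris = defaultdict(list)
    for t, (i, j, k) in enumerate(triangles):
        for edge in ((i, j), (j, k), (k, i)):
            # frozenset makes the edge orientation-independent
            edge_to_tris[frozenset(edge)].append(t)
    neighbours = defaultdict(set)
    for tris in edge_to_tris.values():
        for a in tris:
            for b in tris:
                if a != b:
                    neighbours[a].add(b)
    return neighbours

# Two triangles sharing edge (1, 2):
adj = build_adjacency([(0, 1, 2), (1, 3, 2)])
print(adj[0])  # {1}
```

A real implementation would also store per-edge records so that the shared edge between two triangles can be recovered during traversal; the table above keeps only triangle-to-triangle links.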
As explained in more detail below, a plurality of triangular meshes may be merged together (such as one representing a bottle and another representing a bottle cap).
In this embodiment, a single membrane which includes the form of both the cap and bottle is created. This is achieved by starting with a simple shape which encompasses both cap and bottle, which is then shrunk to the combined form of the bottle and cap. This single mesh is then used to generate a membrane, which is used to place the 2D label on both the bottle and the cap.
In this embodiment, a ‘smoothing’ operation is applied to the membrane. Various smoothing techniques may be used, such as Laplacian polygon smoothing, although the skilled person will understand that other smoothing techniques are equally applicable. This step ensures that the membrane closely follows the real-life surface that the label will be applied to. For example, if the bottle contained a small gap, the membrane may be smoothed such that it passes over the gap without any change in curvature. This closely represents how the real-life label would be applied to the real-life bottle, as the real-life label would simply be pasted over the gap without any change in curvature.
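As an illustration of the Laplacian smoothing mentioned above, a minimal sketch on a plain vertex list is given below; the helper name and the `alpha` step parameter are assumptions, not details from the patent:

```python
def laplacian_smooth(vertices, neighbours, iterations=1, alpha=0.5):
    """One common form of Laplacian smoothing: move each vertex a fraction
    `alpha` of the way towards the average of its neighbours.

    `vertices` is a list of (x, y, z) tuples; `neighbours[i]` is the set of
    vertex indices adjacent to vertex i."""
    verts = [list(v) for v in vertices]
    for _ in range(iterations):
        new_verts = []
        for i, v in enumerate(verts):
            nbrs = neighbours[i]
            if not nbrs:
                new_verts.append(v)  # isolated vertex: leave unchanged
                continue
            # Average position of the neighbouring vertices
            avg = [sum(verts[n][c] for n in nbrs) / len(nbrs) for c in range(3)]
            new_verts.append([v[c] + alpha * (avg[c] - v[c]) for c in range(3)])
        verts = new_verts  # all vertices updated from the previous iteration
    return [tuple(v) for v in verts]
```

With `alpha=1.0` each vertex snaps to its neighbour average, which flattens a membrane over small gaps; smaller values smooth more gently over several iterations.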
The label is then applied to the membrane using the following algorithm. As a first step, an initial point of reference is taken as the centre of the label. A collision routine is used to correlate the centre of the label with an initial hit point on the triangle mesh. The initial hit point may be any arbitrary point on the triangle mesh.
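The patent does not specify the collision routine. One common choice for correlating a ray with a hit point on a triangle mesh is the Möller–Trumbore ray/triangle intersection test, sketched here with illustrative names:

```python
def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-9):
    """Möller–Trumbore ray/triangle intersection.

    Returns the distance t along the ray to the hit point, or None if the
    ray misses the triangle.  All inputs are 3-tuples; `direction` need not
    be normalized (t is then in units of its length)."""
    def sub(a, b):
        return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
    def cross(a, b):
        return (a[1]*b[2] - a[2]*b[1],
                a[2]*b[0] - a[0]*b[2],
                a[0]*b[1] - a[1]*b[0])
    def dot(a, b):
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

    edge1, edge2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(direction, edge2)
    det = dot(edge1, pvec)
    if abs(det) < eps:
        return None          # ray parallel to triangle plane
    inv_det = 1.0 / det
    tvec = sub(origin, v0)
    u = dot(tvec, pvec) * inv_det       # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    qvec = cross(tvec, edge1)
    v = dot(direction, qvec) * inv_det  # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(edge2, qvec) * inv_det
    return t if t > eps else None
```

A collision routine would run this test against candidate triangles (typically via a spatial index) and keep the smallest positive t as the initial hit point.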
In a next step of the algorithm (as shown in FIG. 1), the triangle containing the starting point on the membrane is chosen. The starting point is determined as the point on the membrane which is closest to the initial hit point on the triangular mesh. The distance (horizontal and vertical) between this point and each of the three vertices of the starting triangle is calculated and stored, and each of the three neighbouring triangles is added to a set of triangles to be processed. An iterative process is then employed in the following manner. For each triangle in the triangle list, calculate the distance between the vertex for which the distance to the starting point is known and the vertex whose distance is still to be calculated, and store this distance. This distance is calculated using geodesic techniques. Check the remaining two edges of the triangle to see if the 2D coordinates of the triangle intersect the boundary rectangle of the two-dimensional label. If an edge does intersect, add the triangle neighbour which uses that edge to the list of triangles to process. The process is run iteratively until there are no more triangles in the list to process.
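The iterative process above can be sketched as a breadth-first walk over the triangle list. The geodesic distance calculation and the edge/rectangle intersection test are abstracted behind a hypothetical predicate, so this is a sketch of the control flow only:

```python
from collections import deque

def collect_label_triangles(start_tri, neighbours, intersects_label):
    """Breadth-first traversal of the triangle list described in the text.

    Starting from the triangle containing the label centre, each triangle is
    processed at most once; a neighbour is queued only if the shared edge
    still intersects the label's boundary rectangle, so the walk stops
    naturally at the label's extent.  `intersects_label(tri, nbr)` stands in
    for the 2D edge/rectangle intersection test; the per-vertex geodesic
    distance update would happen where the comment indicates."""
    to_process = deque([start_tri])
    done = set()
    while to_process:
        tri = to_process.popleft()
        if tri in done:
            continue
        done.add(tri)
        # ...calculate and store the unknown vertex's geodesic distance here...
        for nbr in neighbours[tri]:
            if nbr not in done and intersects_label(tri, nbr):
                to_process.append(nbr)
    return done
```

Because only triangles whose edges still fall inside the label rectangle are visited, the traversal touches a small neighbourhood of the mesh rather than the whole model, which is the source of the computational saving the patent describes.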
If the 2D label is likely to overlap itself, the process can be split so that the label is processed as two separate labels; the processing order defines the overlap direction.
In one embodiment, an additional 2D rotation can be applied to all the calculated distances, which would have the effect of rotating the label around the centre point.
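This optional rotation amounts to a standard 2D rotation applied to each stored (horizontal, vertical) distance vector; the function name below is illustrative:

```python
import math

def rotate_distances(distances, angle_degrees):
    """Apply a 2D rotation to each (horizontal, vertical) distance vector.

    Because every distance is measured from the label centre, rotating the
    vectors is equivalent to rotating the label about its centre point."""
    theta = math.radians(angle_degrees)
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    return [(dx * cos_t - dy * sin_t, dx * sin_t + dy * cos_t)
            for dx, dy in distances]
```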
The skilled person will understand that this process results in a data set containing a set of points on the membrane and a set of distances (horizontal and vertical) between each point in the set and the starting point. The data set may then be converted into normalized UV coordinates using a transform. For example, the distances may be stored as vectors from the starting point to each point, and converted to UV coordinates by translating by 0.5 and scaling to the particular height and width of the label.
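A minimal sketch of one plausible reading of this transform, assuming each stored distance is a (horizontal, vertical) vector in the same units as the label's width and height; the function name and exact scaling convention are assumptions, not taken from the patent:

```python
def distances_to_uv(distances, label_width, label_height):
    """Convert (horizontal, vertical) distance vectors from the label centre
    into normalized UV coordinates.

    The centre of the label (zero distance) maps to (0.5, 0.5); a point half
    a label-width to the right maps to u = 1.0, i.e. the label's edge."""
    uvs = []
    for dx, dy in distances:
        u = 0.5 + dx / label_width   # scale to label width, shift centre to 0.5
        v = 0.5 + dy / label_height  # scale to label height, shift centre to 0.5
        uvs.append((u, v))
    return uvs

# The starting point itself maps to the centre of the label:
print(distances_to_uv([(0.0, 0.0)], 10.0, 4.0))  # [(0.5, 0.5)]
```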
The normalized UV coordinates may then be used to apply the label to the bottle. That is, the label may be treated as a texture which may then be correlated with the UV coordinates. Accordingly, a plurality of points within the label may be correlated with particular UV coordinates, such that the label may then be placed on the bottle in the virtual environment.
The skilled person will understand that the process is suitable for applying a two-dimensional image onto any three-dimensional model. The method extracts UV coordinates for only the part of the three-dimensional model to which the label is to be applied, and thus requires less computational power than prior art techniques which involve a fully mapped UV model. The two-dimensional image may be applied to any portion of the three-dimensional model; the algorithm detailed above applies the centre of the label to a starting point on the membrane in that portion and extracts the UV coordinates from the membrane of the surrounding area.
The skilled person will also realise that by reducing the computational power required to apply the two-dimensional image to the three-dimensional model, the image may be applied in real-time. That is, the user may slide the image across the membrane of the three-dimensional model and the computational power required is reduced to the point that the image may be applied to the three-dimensional model “on-the-fly”.
As noted above, a plurality of triangular meshes may be merged together. In this manner, a label may be applied to a complex product (i.e. an object having multiple constituent parts, such as a bottle and bottle cap). A smoothing operation may then be applied to the membrane to smooth any gaps or grooves between the parts of the complex product, ensuring the label is applied appropriately.
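One common way to realise such a smoothing operation is Laplacian smoothing of the membrane points. The sketch below is an assumption about the implementation, not taken from the patent: each point is moved a fraction alpha towards the average of its neighbours, which tends to close small gaps and grooves between constituent parts.

```python
def laplacian_smooth(points, neighbours, alpha=0.5, iterations=10):
    """Laplacian smoothing of a set of 3D membrane points. `neighbours`
    maps each point index to the indices it is averaged against; a point
    whose only neighbour is itself stays fixed. Each iteration moves
    every point a fraction `alpha` towards its neighbour average."""
    pts = [tuple(p) for p in points]
    for _ in range(iterations):
        new = []
        for i, p in enumerate(pts):
            ns = neighbours[i]
            # Average position of this point's neighbours, per coordinate.
            avg = tuple(sum(pts[j][k] for j in ns) / len(ns) for k in range(3))
            new.append(tuple(p[k] + alpha * (avg[k] - p[k]) for k in range(3)))
        pts = new
    return pts
```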
In the above embodiments, the three-dimensional model consists of one or more triangular meshes for which a single membrane is generated. The skilled person will understand that any form of polygonal meshed model would be appropriate for the present invention. Furthermore, the skilled person will understand that the application of a membrane is preferable as it allows the user to apply a smoothing technique such that the two-dimensional image is applied in a realistic manner. However, the smoothing step is non-essential. For example, the algorithm to extract the UV coordinates may be based on the vertices of the triangular mesh rather than points of a smoothed membrane.
In the above embodiment, a label is applied to a model of a bottle. However, the skilled person will understand that the method of the present invention is suitable for applying any two-dimensional image to any three-dimensional model. For example, the method would be suitable for applying graphics to the bodywork of a vehicle in a virtual environment.
The method of the present invention described above may be implemented on a computing apparatus 1, such as a personal computer or mobile computing device (e.g. a tablet). As shown in FIG. 2, the computing device 1 includes a processor 3 configured for implementing the method of the present invention, and a display device 5 configured for displaying the three-dimensional model including the applied two-dimensional image.
The skilled person will understand that any combination of elements is possible within the scope of the invention, as claimed.

Claims (14)

The invention claimed is:
1. A method of applying a two-dimensional image to a three-dimensional model in a virtual environment, the three-dimensional model composed of a polygon mesh having a plurality of vertices, the method comprising the steps of:
creating a membrane by starting with a simple shape for the three-dimensional model then shrinking the shape to the form of the three-dimensional model;
generating an initial adjacency structure for all triangles within the polygon mesh;
identifying a first triangle within the membrane which contains a desired centre point of the two-dimensional image, the starting point being a point on the membrane which is closest to the initial hit point on the triangular mesh;
calculating horizontal and vertical spatial distances between the three vertices of the first triangle and the desired centre point of the two-dimensional image;
checking each edge of the first triangle to see if the calculated distances show an intersection on the two-dimensional image, and if a collision is detected, adding the triangle which neighbours the first triangle to a list of triangles to process;
iteratively processing each of the triangles in the list until the list is empty by:
(a) calculating the spatial data of the single unknown vertex within the triangle being processed;
(b) checking the two edges of the triangle connected to the said single unknown vertex to see if the calculated distances show an intersection with the two-dimensional image; and
(c) if an intersection occurs, adding this new triangle neighbour to the triangle list; once the list is empty, transforming the vertices of the triangles and spatial data into UV-coordinates; and
applying the two-dimensional image to the three-dimensional model using the UV-coordinates.
2. A method as claimed in claim 1, further comprising the step of applying a smoothing technique to the membrane.
3. A method as claimed in claim 1, wherein the three-dimensional model is composed of a plurality of polygonal meshes and the membrane is applied to the plurality of polygonal meshes.
4. A method as claimed in claim 1, further comprising providing a display device, and causing the display device to display the three-dimensional model including the applied two-dimensional image.
5. A computer program product comprising computer executable code which when executed on a computer causes the computer to apply a two-dimensional image to a three-dimensional model in a virtual environment, the three-dimensional model composed of a polygon mesh having a plurality of vertices, the computer program product being configured for:
creating a membrane by generating an initial adjacency structure for all triangles within the polygon mesh;
identifying a first triangle within the membrane which contains a desired centre point of the two-dimensional image;
calculating horizontal and vertical spatial distances between the three vertices of the first triangle and the desired centre point of the two-dimensional image;
checking each edge of the first triangle to see if the calculated distances show an intersection on the two-dimensional image, and if a collision is detected, adding the triangle which neighbours the first triangle to a list of triangles to process;
iteratively processing each triangle in the list until the list is empty by:
(a) calculating the spatial data of the single unknown vertex within the triangle being processed;
(b) checking the two edges of the triangle connected to the said single unknown vertex to see if the calculated distances show an intersection with the two-dimensional image; and
(c) if an intersection occurs, adding this new triangle neighbour to the triangle list; once the list is empty, transforming the points and spatial data into UV-coordinates; and
applying the two-dimensional image to the three-dimensional model using the UV-coordinates.
6. A computer program product as claimed in claim 5, further configured for applying a smoothing technique to the membrane.
7. A computer program product as claimed in claim 5, wherein the three-dimensional model is composed of a plurality of polygonal meshes to form the membrane.
8. A computer program product as claimed in claim 5, further configured for causing a display device to display the three-dimensional model including the applied two-dimensional image.
9. A method of applying a two-dimensional image to a three-dimensional model in a virtual environment, the three-dimensional model composed of a polygonal mesh having a plurality of vertices, the method comprising the steps of:
identifying a first point corresponding to a vertex of the plurality of vertices;
identifying a second point corresponding to a vertex of the plurality of vertices and proximal to the first point;
calculating horizontal and vertical spatial data between the second point and the first point;
iteratively identifying successive points, wherein each successive point corresponds to a vertex of the plurality of vertices and is proximal to a previously identified point, and calculating spatial data between each successive point and the previously identified point, until a stop point is identified, the stop point being a point outside the boundary of the two-dimensional image; and
transforming the first point, second point, successive points, stop point and the respective spatial data of these points into UV-coordinates; and applying the two-dimensional image to the three-dimensional model using the UV-coordinates.
10. A method as claimed in claim 9, further comprising the step of applying an isosurface to the polygonal mesh to form a membrane, wherein the first point is on the membrane.
11. A method as claimed in claim 10, further comprising the step of applying a smoothing technique to the membrane.
12. A method as claimed in claim 10, wherein the three-dimensional model is composed of a plurality of polygonal meshes and the isosurface is applied to the plurality of polygonal meshes.
13. A method as claimed in claim 9, wherein the stop point is a greater distance from the first point than a radius of a bounding circle for the two-dimensional image.
14. A method as claimed in claim 9, further comprising providing a display device, and causing the display device to display the three-dimensional model including the applied two-dimensional image.
US15/645,935 2013-03-11 2017-07-10 Apparatus and method for applying a two-dimensional image on a three-dimensional model Active US10275929B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/645,935 US10275929B2 (en) 2013-03-11 2017-07-10 Apparatus and method for applying a two-dimensional image on a three-dimensional model

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
GB1304321.1 2013-03-11
GBGB1304321.1A GB201304321D0 (en) 2013-03-11 2013-03-11 Apparatus and method for applying a two-dimensional image on a three-dimensional model
PCT/GB2014/050694 WO2014140540A1 (en) 2013-03-11 2014-03-10 Apparatus and method for applying a two-dimensional image on a three-dimensional model
US14/771,553 US20160012629A1 (en) 2013-03-11 2014-10-03 Apparatus and method for applying a two-dimensional image on a three-dimensional model
US15/645,935 US10275929B2 (en) 2013-03-11 2017-07-10 Apparatus and method for applying a two-dimensional image on a three-dimensional model

Related Parent Applications (2)

Application Number Title Priority Date Filing Date
PCT/GB2014/050694 Continuation WO2014140540A1 (en) 2013-03-11 2014-03-10 Apparatus and method for applying a two-dimensional image on a three-dimensional model
US14/771,553 Continuation US20160012629A1 (en) 2013-03-11 2014-10-03 Apparatus and method for applying a two-dimensional image on a three-dimensional model

Publications (2)

Publication Number Publication Date
US20170309058A1 US20170309058A1 (en) 2017-10-26
US10275929B2 true US10275929B2 (en) 2019-04-30

Family

ID=48189693

Family Applications (2)

Application Number Title Priority Date Filing Date
US14/771,553 Abandoned US20160012629A1 (en) 2013-03-11 2014-10-03 Apparatus and method for applying a two-dimensional image on a three-dimensional model
US15/645,935 Active US10275929B2 (en) 2013-03-11 2017-07-10 Apparatus and method for applying a two-dimensional image on a three-dimensional model

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US14/771,553 Abandoned US20160012629A1 (en) 2013-03-11 2014-10-03 Apparatus and method for applying a two-dimensional image on a three-dimensional model

Country Status (5)

Country Link
US (2) US20160012629A1 (en)
EP (1) EP2973421B1 (en)
CN (1) CN105190702B (en)
GB (1) GB201304321D0 (en)
WO (1) WO2014140540A1 (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB201304321D0 (en) 2013-03-11 2013-04-24 Creative Edge Software Llc Apparatus and method for applying a two-dimensional image on a three-dimensional model
EP3408641B1 (en) * 2016-01-28 2021-05-26 Siemens Healthcare Diagnostics Inc. Methods and apparatus for multi-view characterization
KR102787157B1 (en) * 2016-12-07 2025-03-26 삼성전자주식회사 Methods and devices of reducing structure noises through self-structure analysis
GB201715952D0 (en) 2017-10-02 2017-11-15 Creative Edge Software Llc 3D computer modelling method
US10521970B2 (en) * 2018-02-21 2019-12-31 Adobe Inc. Refining local parameterizations for applying two-dimensional images to three-dimensional models
EP3667623A1 (en) * 2018-12-12 2020-06-17 Twikit NV A system for optimizing a 3d mesh
WO2021054976A1 (en) * 2019-09-20 2021-03-25 Hewlett-Packard Development Company, L.P. Media modification marks based on image content
CN112652033B (en) * 2019-10-10 2025-02-14 中科星图股份有限公司 A two-dimensional and three-dimensional integrated polygonal graphics generation method, device and storage medium
US12204828B2 (en) * 2020-07-29 2025-01-21 The Procter & Gamble Company Three-dimensional (3D) modeling systems and methods for automatically generating photorealistic, virtual 3D package and product models from 3D and two-dimensional (2D) imaging assets
CN112614046B (en) * 2020-12-17 2024-02-23 武汉达梦数据技术有限公司 Method and device for drawing three-dimensional model on two-dimensional plane


Patent Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0889437A2 (en) 1997-04-14 1999-01-07 Adobe Systems, Inc. Raster image mapping
US6124858A (en) 1997-04-14 2000-09-26 Adobe Systems Incorporated Raster image mapping
US5903270A (en) 1997-04-15 1999-05-11 Modacad, Inc. Method and apparatus for mapping a two-dimensional texture onto a three-dimensional surface
US7027050B1 (en) 1999-02-04 2006-04-11 Canon Kabushiki Kaisha 3D computer graphics processing apparatus and method
US20010056308A1 (en) * 2000-03-28 2001-12-27 Michael Petrov Tools for 3D mesh and texture manipulation
US20020050988A1 (en) * 2000-03-28 2002-05-02 Michael Petrov System and method of three-dimensional image capture and modeling
US20060227133A1 (en) * 2000-03-28 2006-10-12 Michael Petrov System and method of three-dimensional image capture and modeling
US20060232583A1 (en) * 2000-03-28 2006-10-19 Michael Petrov System and method of three-dimensional image capture and modeling
US20060290695A1 (en) * 2001-01-05 2006-12-28 Salomie Ioan A System and method to obtain surface structures of multi-dimensional objects, and to represent those surface structures for animation, transmission and display
US7280106B2 (en) 2002-10-21 2007-10-09 Canon Europa N.V. Apparatus and method for generating texture maps for use in 3D computer graphics
US7511718B2 (en) 2003-10-23 2009-03-31 Microsoft Corporation Media integration layer
US20080225044A1 (en) * 2005-02-17 2008-09-18 Agency For Science, Technology And Research Method and Apparatus for Editing Three-Dimensional Images
US20090040224A1 (en) * 2007-08-06 2009-02-12 The University Of Tokyo Three-dimensional shape conversion system, three-dimensional shape conversion method, and program for conversion of three-dimensional shape
US20110298800A1 (en) 2009-02-24 2011-12-08 Schlichte David R System and Method for Mapping Two-Dimensional Image Data to a Three-Dimensional Faceted Model
US8654121B1 (en) 2009-10-02 2014-02-18 Pixar Structured polygonal mesh retesselation
US20130300740A1 (en) * 2010-09-13 2013-11-14 Alt Software (Us) Llc System and Method for Displaying Data Having Spatial Coordinates
CN102521852A (en) 2011-11-24 2012-06-27 中国船舶重工集团公司第七0九研究所 Showing method for target label independent of three-dimensional scene space
WO2014140540A1 (en) 2013-03-11 2014-09-18 Creative Edge Software Llc Apparatus and method for applying a two-dimensional image on a three-dimensional model
US20160012629A1 (en) 2013-03-11 2016-01-14 Creative Edge Software Llc Apparatus and method for applying a two-dimensional image on a three-dimensional model

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Igarashi, T. et al., Adaptive Unwrapping for Interactive Texture Painting, Proceedings of the 2001 Symposium on Interactive 3D Graphics, Mar. 19-21, 2001, pp. 209-216, New York, New York.
Litwinowicz, Peter, Efficient Techniques for Interactive Texture Placement, Proceedings of the 21st Annual Conference on Computer Graphics and Interactive Techniques, 1994, pp. 119-122, New York, New York.
The State Intellectual Property Office of the People's Republic of China, Notice of First Office Action, May 27, 2017, 6 pages, Beijing, China.

Also Published As

Publication number Publication date
WO2014140540A1 (en) 2014-09-18
EP2973421B1 (en) 2021-02-24
GB201304321D0 (en) 2013-04-24
CN105190702A (en) 2015-12-23
CN105190702B (en) 2018-06-26
US20170309058A1 (en) 2017-10-26
EP2973421A1 (en) 2016-01-20
US20160012629A1 (en) 2016-01-14


Legal Events

Date Code Title Description
AS Assignment

Owner name: CREATIVE EDGE SOFTWARE LLC, DELAWARE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:JENNINGS, AMY;JENNINGS, STUART;REEL/FRAME:043151/0324

Effective date: 20160127

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: HYBRID SOFTWARE DEVELOPMENT NV, BELGIUM

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CREATIVE EDGE SOFTWARE LLC;REEL/FRAME:059417/0968

Effective date: 20220328

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2551); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

Year of fee payment: 4