KR101653802B1 - 3 dimensional graphic data capturing system and method thereof - Google Patents


Info

Publication number
KR101653802B1
Authority
KR
South Korea
Prior art keywords
3d
object
capture area
graphic data
objects
Prior art date
Application number
KR1020150039943A
Other languages
Korean (ko)
Inventor
김시진
Original Assignee
주식회사 엔씨소프트 (NCSOFT Corporation)
Priority date
Filing date
Publication date
Application filed by 주식회사 엔씨소프트 (NCSOFT Corporation)
Priority to KR1020150039943A priority Critical patent/KR101653802B1/en
Application granted granted Critical
Publication of KR101653802B1 publication Critical patent/KR101653802B1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06QDATA PROCESSING SYSTEMS OR METHODS, SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • G06T13/203D [Three Dimensional] animation
    • G06T13/403D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/20Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Abstract

A 3D capture system and method are disclosed. The method comprises: displaying a user interface for setting a capture area on an execution screen of 3D software; determining a 3D object belonging to the set capture area and obtaining 3D modeling data of the determined 3D object; and exporting the acquired 3D modeling data as graphic data for 3D printer output. The invention may be implemented as a computer system, as a method executed on a computer, or as a computer program.

Description

[0001] The present invention relates to a 3D capturing system and method, and more particularly, to a method for generating data for a 3D printer within software that processes 3D graphics.

3D printer technology has been recognized as one of the top 10 technologies in the World Economic Forum (WEF) in 2012 and is being considered as one of the technologies that will change human civilization in the future.

Currently, commercially available 3D printers use either a rapid prototyping method, in which a three-dimensional shape is formed by stacking many layers of hardened powder or liquid, or a subtractive method, in which a product is made by cutting a block of synthetic resin with a rotary blade.

The range of applications for 3D printer technology is virtually unlimited, and there is great interest in the technology in the plastic model and figure industries.

In the past, it was common for an artist to sculpt an original work, reproduce it as a resin kit using resin or similar materials, or mass-produce it as a plastic model by making a mold.

However, when a 3D printer is used, it is possible to easily produce various products using 3D graphic data generated in a predetermined format.

Using 3D printer technology, which has the advantage of producing small quantities of diverse products, it becomes possible to produce figures and the like in response to the varied demands of users.

In particular, if various contents using 3D graphic data are converted into data of a type recognizable by the 3D printer, various demands of users who want to have figures related to specific contents can be satisfied.

However, since expert knowledge is required to modify or convert 3D graphic data, users who want to print their own figures on a 3D printer have little choice but to purchase ready-made articles, at far greater difficulty and cost.

1. Korean Patent Laid-Open Publication No. 10-2014-0061340, "Game screen shot management device using EXIF metadata and method thereof"
2. Korean Patent Registration No. 10-0771839, "Online game screen capture and character position confirmation providing system and method"
3. Korean Patent Registration No. 10-0682455, "Game scrap system, game scrap method, and computer readable recording medium recording a program for executing the method"

The present invention proposes a method for exporting a specific scene as 3D printer output data directly from software, such as a game, that renders and displays 3D graphic data in real time.

In particular, it proposes a system and method that connect objects not in contact with the background or the main object along an optimal path, and that fill the cavities created when the 3D graphic data is cropped.

In order to achieve the above object, a 3D capturing method according to the present invention comprises: a computer system providing a user interface for setting a capture area on an execution screen of 3D software;

determining the 3D objects belonging to the set capture area and acquiring 3D modeling data of the determined 3D objects; and

exporting the obtained 3D modeling data as graphic data for 3D printer output.

In step S110, the computer system may set the capture area by expanding a rectangular area selected on the execution window of the 3D software into a hexahedral area, based on a predetermined center point on the coordinate system of the 3D software.

In step S120, the computer system selects, from among the 3D objects belonging to the set capture area, the objects to be included in the graphic data for 3D printer output.

If there are two or more objects not in contact with each other in the set capture area, the objects other than the one having the largest volume may be excluded from the graphic data for 3D printer output.

If two or more objects that are not in contact with each other are included by the user's selection, an extension line connecting the objects is created, and the generated extension line is converted into a three-dimensional shape having a predetermined volume and included in 3D printer output graphic data.

Alternatively, if at least two objects that are not in contact with each other are included in step S120, the computer system may move any one of them into contact with the other, after changing the angle of its axis, and then include them in the graphic data for 3D printer output.

Specifically, a first axis connecting the two objects along their shortest distance is set, together with a second axis connecting the point where the first axis meets the object to be moved to that object's center of gravity.

The angle of the second axis is then changed so that the first axis and the second axis lie on the same line, after which the object is moved until the two objects come into contact.

If, while one object is moving toward the other, a part of it other than the contact point on the second axis touches first, the moving object is rotated about the second axis by a predetermined angle and its movement along the first axis is resumed.

Meanwhile, when only a part of a 3D object is included in the set capture area, the polygons constituting the 3D object are cropped and included in the graphic data for 3D printer output.

For example, in the case of a background object, the background object can be cropped according to a set capture area or a predetermined size and shape.

When the inside of the cropped object is empty and the hollow portion is exposed to the outside, a virtual surface connecting the outline of the cropped region is created and added to the graphic data for 3D printer output, so that the 3D printer produces a properly three-dimensional result.

According to another aspect of the present invention, there is provided a computer program for executing the 3D capture method, recorded on a recording medium and comprising instructions for executing on a computer the steps of: displaying a user interface for setting a capture area on an execution screen of 3D software;

determining a 3D object belonging to the set capture area and obtaining 3D modeling data of the determined 3D object; and

exporting the obtained 3D modeling data as graphic data for 3D printer output.

In order to achieve the above object, the 3D capture system according to the present invention can be implemented in the form of a computer system including a display and a processor.

At this time, the display displays a user interface for setting a capture area on the execution screen of the 3D software according to the processing of the processor,

The processor determines the 3D objects belonging to the capture area set by the user, acquires the 3D modeling data of the determined 3D objects, and then exports the 3D modeling data as graphic data for 3D printer output.

At this time, the processor selects, by a predetermined algorithm, the objects to be included in the graphic data for 3D printer output from among the 3D objects belonging to the set capture area, merges the 3D modeling data of the selected objects into a single object, and then exports the result as graphic data for 3D printer output.

If there are at least two objects that do not touch each other among the 3D objects belonging to the set capture area, the processor either creates an extension line connecting them, sets its thickness in proportion to the volume of the supported object, and includes it in the graphic data for 3D printer output, or moves one of the objects so that they come into contact, and then exports the result.

If the capture area includes only a part of a 3D object, the processor crops the polygons constituting the 3D object to the part belonging to the capture area and includes them in the graphic data for 3D printer output.

If the inside of the cropped 3D object is empty, a virtual surface connecting the outline of the cropped region may be generated and included in the graphic data for 3D printer output.

According to the present invention, a user sets an area to be captured in software, such as a game, that renders and displays 3D graphic data in real time, and can easily obtain graphic data for 3D printer output with only a few simple selections.

For example, in the case of an online game, each user can obtain a personal figure by printing his or her own player character on a 3D printer, and a specific scene of the game can easily be produced in the form of a diorama.

In particular, graphic data for 3D printer output that includes objects not in contact with the background or the main object can be obtained by finding an optimal path and connecting those objects along it.

In addition, a 3D printer output capable of standing stably can be obtained by filling the voids created when part of the 3D graphic data is cropped for printing.

FIG. 1 is a block diagram illustrating a computer system in which the present invention is implemented;
FIG. 2 is a flowchart illustrating a 3D capturing method according to the present invention;
FIG. 3 is a conceptual diagram illustrating the process of capturing graphic data for 3D printer output on a software execution screen;
FIG. 4 is a view for explaining the process of moving objects that do not touch each other;
FIG. 5 is a view for explaining the process of filling a cavity created when graphic data is cropped;
FIG. 6 is a diagram illustrating the output process of the 3D printer; and
FIG. 7 is a diagram illustrating an output result of the 3D printer.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

Hereinafter, the present invention will be described in detail with reference to preferred embodiments and the accompanying drawings, wherein like reference numerals refer to like elements.

In the description of the invention or in the claims, when an element is referred to as "comprising" another element, it is not to be construed as being limited to only that element but may further include other elements.

Also, in the description of the invention or the claims, components named "means", "parts", "modules", or "blocks" refer to units that process at least one function or operation, each of which may be implemented in software or hardware, or a combination thereof.

Hereinafter, 3D software refers collectively to software in which 3D graphic data, including polygons and mapping data, is rendered in real time and displayed on a screen; examples include a 3D game program and graphic software for 3D modeling.

Hereinafter, a 3D object refers to an object rendered in real time by 3D software, such as a character, a background, or an attachment to a character or background. A 3D object occupies a certain region on the coordinate system of the 3D software, and may be composed of a plurality of polygons or be a vector object. Each 3D object may also include texture mapping data that is mapped onto its exterior.

Hereinafter, a 3D printer refers to a printer that outputs a three-dimensional shape from prepared material according to 3D graphic data supplied in a predetermined format. Any device satisfying this definition should be construed as a 3D printer, including well-known methods such as rapid prototyping, in which a powder or liquid plastic is cured layer by layer, and machining of a block of synthetic resin.

Hereinafter, graphic data for 3D printer output refers to data describing a three-dimensional shape in a format recognizable by a 3D printer, that is, by the driver that drives the 3D printer; it means data that, when input to the 3D printer, enables the printer to output the three-dimensional shape.

The present invention relates to a 3D capture system and method, and can be implemented in the form of a computer system, a method executed on a computer, or a computer program.

FIG. 1 is a block diagram illustrating a computer system in which the present invention is implemented.

Referring to FIG. 1, the computer system 100 includes a processor 101, a display 102, a memory 103, and a data storage device 104.

Processor 101 is a means for executing instructions and may take the form of a chipset, such as a CPU.

The display 102 may be in the form of an LCD monitor or the like as a means for visually displaying information.

The memory 103 is a means for temporarily storing information and may take the form of RAM.

The data storage device 104 is a means for storing information persistently and may take the form of a hard disk drive or a solid state drive (SSD).

The 3D software is stored in the data storage device 104; the processor 101 loads it into the memory 103 and then executes its instructions, displaying it on the display 102.

The processor 101 provides a user interface so that the user can set the capture area on the 3D software.

As the user selects a capture area corresponding to a certain three-dimensional space within the coordinate system of the 3D software, the processor 101 selects, from among the 3D objects included in the set capture area, those to be included in the 3D printer output data, and obtains the modeling data of the 3D objects to be finally exported through post-processing.

These are then merged and converted into a single file format to generate the data for 3D printer output, and the converted data is stored back in the data storage device 104.

FIG. 2 is a flowchart illustrating a 3D capturing method according to the present invention.

The 3D capture method according to the present invention can be executed on the computer system 100 as described above.

Viewed at a lower level, the method can be understood as being executed by the processor 101 displaying the user interface on the display 102, loading and executing instructions in the memory 103, and storing data in the data storage device 104.

Referring to FIG. 2, the computer system 100 first provides a user interface so that a user can set a capture area on an execution screen of 3D software.

As the user selects an area using the user interface, a capture area is set (S110).

At this time, it is preferable that execution of the 3D software be suspended from the time the capture area is set until the 3D printer output data is exported.

For example, if the 3D software is game software, game play is temporarily stopped and the user is allowed to select a part of the screen.

That is, data for 3D printer output is exported for a partial area of the game coordinate system at the instant game play is stopped.

FIG. 3 shows a state in which graphic data for 3D printer output is captured on a software execution screen.

As shown in the left side of FIG. 3, as the user executes the 3D capture function, a user interface for setting the capture area on the execution screen of the 3D software is displayed.

For example, the user can set an area to be captured by dragging a mouse or the like using the user interface.

A user skilled in selecting a three-dimensional region can set a capture area corresponding to a hexahedral region by specifying the Z-axis coordinate range as well as the X- and Y-axis coordinate ranges on the coordinate system of the 3D software.

However, if the object to be 3D captured (for example, a player character) is displayed at the center of the coordinate system, the capture area can be selected by simply dragging a rectangle on the screen.

When the user selects a rectangular area (i.e., a two-dimensional area specified by X- and Y-axis coordinate ranges) by dragging the mouse, the computer system 100 sets the capture area by expanding the selected rectangular area into a hexahedral area based on a predetermined center point on the coordinate system of the 3D software.

For example, the 3D software may be game software, and the center point may be the coordinates of the center of the user's player character displayed at the center of the screen. The capture area can then be set easily by moving the center of the selected rectangular area to this center point and expanding the area along the Z axis until it becomes a cube.
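The expansion described above can be sketched in a few lines. This is only an illustration of the idea, not the patent's implementation; the function name and the convention that the Z extent equals the larger of the rectangle's two screen dimensions are assumptions.

```python
def expand_to_hexahedron(rect_min, rect_max, center):
    """Expand a dragged 2D rectangle (X/Y ranges) into a hexahedral
    capture volume centred on a given point, as in step S110.

    rect_min, rect_max: (x, y) corners of the dragged rectangle.
    center: (cx, cy, cz) centre point on the 3D coordinate system,
            e.g. the player character's centre.
    Returns the (min, max) corners of the axis-aligned hexahedron.
    """
    width = rect_max[0] - rect_min[0]
    height = rect_max[1] - rect_min[1]
    # Assumed convention: the Z extent is the larger screen dimension,
    # so the result is a cube when width == height.
    depth = max(width, height)
    cx, cy, cz = center
    half = (width / 2.0, height / 2.0, depth / 2.0)
    box_min = (cx - half[0], cy - half[1], cz - half[2])
    box_max = (cx + half[0], cy + half[1], cz + half[2])
    return box_min, box_max
```

For a 4x2 rectangle re-centred on the character at (10, 10, 10), this yields a 4x2x4 box around that point.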

The left side of FIG. 3 shows that the set capture area is hatched.

Meanwhile, as the capture area is set as described above, the computer system 100 determines 3D objects belonging to the set capture area and acquires 3D modeling data of the determined 3D object (S120).

However, if a predetermined condition is satisfied during execution of the 3D software, the computer system 100 may automatically set a predetermined area on the coordinate system of the 3D software as the capture area in step S110 and proceed to step S120.

For example, when the 3D software is game software, it may be configured to capture automatically on a predetermined event such as "passing the finish point" or "landing a skill". In this case, although a user interface for setting the capture area may still be provided separately, the computer system 100 may set the capture area and perform the capture automatically.

At this time, a plurality of 3D objects may exist in the set capture area. In addition, in 3D software, each object has a sense of reality through texture mapping, as well as various graphic effects.

The set capture area therefore cannot be converted directly into graphic data for 3D printer output, and the following processing is performed.

1. Selection of objects

In step S120, the computer system 100 first selects an object to be included in the 3D printer output graphic data among 3D objects belonging to the set capture area.

For example, the background object 2 and the character object 1 can be selected by default. In addition, other objects 3 in contact with the background object 2 or the character object 1 are included as objects in the graphic data for 3D printer output.

At this time, the character object is a type of the 3D object, which means that the character is rendered in 3D form.

In the case of a game program, a character object usually corresponds to an object operated by a user directly in the game, and can be changed in such a manner as to directly set its appearance or to wear or replace the equipment.

On the other hand, the background object corresponds to the background in which the character object is displayed. Preferably, the character object may be displayed above the background object while being in contact with the background object.

On the other hand, other objects 3 not in contact with the background object 2 or the character object 1 are excluded from the objects included in the graphic data for 3D printer output.

The 3D printer output is preferably formed as a single connected body. Therefore, objects located away from the background object 2 or the character object 1, such as objects floating in the air on the game coordinate system, are excluded.

In addition, it excludes all particles due to graphic effects.

Stated more generally, when there are two or more objects not in contact with each other in the set capture area, the objects other than the one having the largest volume may be excluded from the graphic data for 3D printer output.
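The selection rule above can be sketched as follows. This is a simplification under stated assumptions: objects are represented by hypothetical `box`/`volume` fields, and "contact" is approximated by axis-aligned bounding boxes touching, which is cruder than a true polygon-level contact test.

```python
def boxes_touch(a, b, tol=1e-6):
    """True if two axis-aligned boxes ((min, max) corners) overlap or touch."""
    return all(a[0][i] <= b[1][i] + tol and b[0][i] <= a[1][i] + tol
               for i in range(3))

def select_objects(objects):
    """Keep the largest-volume object and everything transitively in
    contact with it; drop the rest, per the rule in the passage above.
    Each object is a dict with 'box' and 'volume' (assumed fields)."""
    if not objects:
        return []
    seed = max(objects, key=lambda o: o["volume"])
    kept = [seed]
    frontier = [seed]
    while frontier:
        cur = frontier.pop()
        for o in objects:
            if o not in kept and boxes_touch(cur["box"], o["box"]):
                kept.append(o)
                frontier.append(o)
    return kept
```

With a large base object, a small object touching it, and a small isolated object, only the first two survive selection.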

2. Creation of extension line

However, if it is desired to include two or more objects that are not in contact with each other, the following processing is performed.

In the capture area shown on the left side of FIG. 3, the player character stands on the background, and a bird is displayed above. According to the algorithm described above, the bird is excluded from the graphic data for 3D printer output by default.

However, if the user separately selects the bird to include it in the graphic data for 3D printer output, the computer system 100 generates extension lines connecting the selected bird (the other object 3) with the included background object 2 or character object 1.

The extension line can be created either by finding the line corresponding to the shortest distance between the selected objects or by extending a line from the other object 3 vertically down to the background object 2.

When generating an extension line corresponding to the shortest distance, the shortest path from the center of gravity of at least one of the two objects to the other object can be found and used as the extension line.

In the example of FIG. 3, the extension line can be generated by finding the shortest distance from the center of gravity of the other object 3, which is not in contact with the background object 2 or the character object 1, to the background object 2 or the character object 1.

A known algorithm can be used as an algorithm for finding the path of the shortest distance.

Once the extension line 4 is obtained, it is converted into a 3D object having a predetermined thickness and included in the graphic data for 3D printer output.

Meanwhile, the thickness and number of the extension lines 4 can be determined dynamically according to the volume of the other object 3 remote from the background object 2 or the character object 1.

The thickness of the extension line 4 may be determined so as to have a sufficient supporting force depending on the material used in the 3D printer.
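A minimal sketch of turning the extension line 4 into a printable solid follows. The patent only says the thickness is proportional to the supported object's volume; modeling the line as a cylinder, scaling the radius with the cube root of the volume, and the `strength` constant are all assumptions made for illustration.

```python
import math

def extension_line(p_from, p_to, attached_volume, strength=0.05):
    """Model the extension line between the two closest points as a
    cylinder whose radius grows with the supported object's volume.

    p_from, p_to: the two closest points on the objects to connect.
    attached_volume: volume of the object the line must support.
    strength: assumed proportionality constant (material-dependent).
    """
    length = math.dist(p_from, p_to)
    # Cube root, so doubling every linear dimension of the supported
    # object doubles the radius (an assumed scaling rule).
    radius = strength * attached_volume ** (1.0 / 3.0)
    return {"start": p_from, "end": p_to, "length": length, "radius": radius}
```

A slicer-ready mesh for the cylinder would then be generated from `start`, `end`, and `radius`.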

When the volume of the other object 3 is equal to or larger than a predetermined value, it is determined that a single extension line 4 cannot support it, and a plurality of extension lines 4 leading to the background object 2 or the character object 1 are generated.

On the other hand, when the other object 3 is to be connected to the character object 1, if the ratio of the volume of the other object 3 to that of the character object 1 exceeds a threshold, it is determined that the character object 1 cannot support the other object 3, and an extension line 4 connecting the other object 3 to the background object 2 is generated instead.

If the volume of the other object 3 also exceeds that of the background object 2 by a certain ratio, the object cannot be supported by an extension line 4 at all and is excluded from the graphic data for 3D printer output; in this case, needless to say, no extension line 4 is generated.

The right side of FIG. 3 illustrates the 3D objects belonging to the set capture area merged into graphic data for a single 3D printer output. Here, an extension line 4 from the character object 1 to the other object 3 is displayed as a three-dimensional shape having a predetermined thickness.

In this case, it is possible to export graphic data for 3D printer output including a plurality of 3D objects that do not touch each other.

3. Linking objects through coordinate movement

In the above description, a method of exporting graphic data for 3D printer output including objects that do not touch each other by generating the extension line 4 has been described.

However, instead of generating the extension line 4, at least one of the objects that do not touch each other may be moved along the shortest path so as to be in contact with each other.

For example, in the example of FIG. 3, the coordinates of the other object 3 can be moved along the shortest path toward the background object 2 or the character object 1 until the objects come into contact.

At this time, the movement of the other object 3 may be stopped when it comes into contact with the background object 2 or the character object 1; however, it may be moved a predetermined further distance in the same direction until the contact area is large enough to support the other object 3.

Meanwhile, in addition to simply moving the other object 3 along the shortest path, the distribution of weight in the printed result can be optimized by reorienting the axis of the other object 3 through the following process.

FIG. 4 illustrates this process.

First, a first axis 5 connecting the shortest distance of at least two objects that are not in contact with each other is set.

As illustrated in FIG. 4(a), the shortest path connecting the character object 1 and the other object 3 is found, and this path is set as the first axis 5.

Next, a second axis 6 is set connecting the end point of the first axis 5 on the object to be moved (the other object 3 in the example of FIG. 4(a)) to that object's center of gravity.

In FIG. 4(a), it can be seen that the first axis 5 and the second axis 6 do not lie on a straight line but are bent at a certain angle.

The angle of the second axis 6 is therefore changed so that it is aligned with the first axis 5.

Next, as illustrated in FIG. 4A, the other objects 3 are moved to bring them into contact with the character object 1.

At this time, it is preferable that the contact point, where the second axis 6 meets the first axis 5, touches the character object 1 first. If another part of the other object 3 touches the character object 1 beforehand, the other object 3 is rotated clockwise or counterclockwise about the second axis 6 by a predetermined angle so that the prematurely touching part moves away.

Then, the movement of the other object (3) along the first axis (5) is resumed.

When the contact point where the second axis 6 meets the first axis 5 finally touches the character object 1, the movement of the other object 3 ends.
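The axis alignment and movement above can be sketched as follows. This is an illustrative geometric core only (assumed function names; the collision-driven stop-and-rotate loop is omitted): the first axis runs from the moving object's contact point to the target, the second axis from that contact point to the object's center of gravity, and the object is rotated so the two axes become collinear before being translated into contact.

```python
import numpy as np

def rotation_between(a, b):
    """Rotation matrix R with R @ unit(a) == unit(b) (Rodrigues' formula)."""
    a = np.asarray(a, float); b = np.asarray(b, float)
    a = a / np.linalg.norm(a); b = b / np.linalg.norm(b)
    v = np.cross(a, b)
    c = float(np.dot(a, b))
    if np.allclose(v, 0.0):
        if c > 0:
            return np.eye(3)                      # already aligned
        # Antiparallel: rotate 180 degrees about any axis perpendicular to a.
        helper = np.array([1.0, 0.0, 0.0]) if abs(a[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
        axis = np.cross(a, helper)
        axis /= np.linalg.norm(axis)
        return 2.0 * np.outer(axis, axis) - np.eye(3)
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    return np.eye(3) + vx + vx @ vx * ((1.0 - c) / float(np.dot(v, v)))

def align_and_move(points, contact_pt, centroid, target_pt):
    """Rotate the object's vertices about its contact point so the second
    axis (contact point -> center of gravity) points away from the target
    along the first axis, then translate the contact point onto the target.
    `points` is an (N, 3) array of the object's vertices."""
    points = np.asarray(points, float)
    contact = np.asarray(contact_pt, float)
    first = np.asarray(target_pt, float) - contact   # first axis direction
    second = np.asarray(centroid, float) - contact   # second axis direction
    R = rotation_between(second, -first)             # make the axes collinear
    rotated = (points - contact) @ R.T + contact     # rotate about the contact point
    return rotated + (np.asarray(target_pt, float) - contact)
```

The center of gravity ends up trailing behind the contact point along the movement axis, which is what keeps the weight distributed along the join.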

Through the above process, the computer system 100 can process the axis transformation and movement of the other object 3 automatically, but the user may also rotate or move the other object 3 as desired.

4. Crop processing

Depending on the type of 3D software or the capture area selected by the user, a 3D object may lie entirely inside the capture area, but in some cases only part of a 3D object falls within it.

For example, it is necessary to handle cases where part of another user's player character accidentally enters the capture area, or where only part of the background is included in the capture area, as illustrated on the left side of FIG. 3.

At this time, the attribute values of the 3D objects in the capture area can be used to decide whether each object is excluded from the graphic data for 3D printer output, included in its entirety, or cropped and included.

For example, if the 3D object is the background object 2, it is cropped to the capture area and included in the graphic data for 3D printer output; if it is not a background object, it is either included in its entirety or excluded.

Traditionally, modeling hobbyists decorate dioramas by arranging one or more plastic models or figures against a background; a similar output can be obtained by cropping the background object 2 to the set capture area.

Generalizing the algorithm, whether a 3D object is included in the graphic data for 3D printer output may be determined according to whether it has a specific attribute value.

On the other hand, if only part of a 3D object is included in the set capture area, the crop can be processed by removing all of the polygons constituting the 3D object that do not belong to the capture area.
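The polygon removal described above can be sketched as a filter over the object's triangles. This is a crude illustration: it simply discards any triangle with a vertex outside the box, whereas a production implementation would clip boundary triangles against the box planes instead of dropping them.

```python
def crop_to_box(triangles, box_min, box_max):
    """Keep only the triangles whose vertices all lie inside the
    axis-aligned capture box. Each triangle is a tuple of three
    (x, y, z) vertices."""
    def inside(v):
        return all(box_min[i] <= v[i] <= box_max[i] for i in range(3))
    return [t for t in triangles if all(inside(v) for v in t)]
```

Triangles straddling the box boundary are where the exposed cavity discussed below comes from, since their removal opens holes in the surface.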

5. Background processing

When the user does not want the background object 2 to take the form of a diorama-like set, the background can be processed to a preset size and shape instead of being cropped to the capture area.

For example, the background object 2 can be cropped into an elliptical shape that highlights the character object 1, like the background object 2 shown on the right side of FIG. 3.

Alternatively, the user may be allowed to select from several predefined shapes.

6. Treatment of cavities

When a 3D object included in a capture area is cropped, since the polygons forming the object are concentrated on its outer surface, a hollow cavity may be exposed inside the cropped object, and the output result may not have a proper three-dimensional shape.

This happens in particular when the cropped 3D object is the background object 2.

FIG. 5 conceptually illustrates a process of filling a void generated in a graphic data crop.

The left side of FIG. 5 shows a cross section of the background object 2 cropped along an elliptical outline. The bottom is open and the object consists only of a surface curving upward; in other words, a cavity is formed at the bottom of the background object 2.

In this case, the computer system 100 creates a virtual surface connecting the cropped outline of the 3D object.

This surface is then included in the graphic data for 3D printer output, either on its own or with the interior space between it and the cropped background object 2 filled in.

The right side of FIG. 5 shows a background object 2 having a three-dimensional shape filled with an inner cavity.
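One simple way to build the virtual surface above is a triangle fan around the outline's centroid. This is an assumed construction for illustration (it works for roughly convex, planar-ish outlines; the patent does not specify a triangulation method).

```python
def close_outline(outline):
    """Close the hole left by cropping with a triangle fan: one
    triangle per outline edge, all sharing the outline's centroid.
    `outline` is an ordered loop of (x, y, z) boundary vertices."""
    n = len(outline)
    cx = tuple(sum(v[i] for v in outline) / n for i in range(3))
    return [(cx, outline[i], outline[(i + 1) % n]) for i in range(n)]
```

The resulting triangles are appended to the cropped mesh so that the printed object has a closed, watertight bottom.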

Through the above process, the objects to be included in the graphic data for 3D printer output are selected from among the 3D objects belonging to the set capture area, and when post-processing of the selected objects is complete, the computer system 100 exports their 3D modeling data as graphic data for 3D printer output (S130).

To this end, the computer system 100 merges the polygons constituting these objects into a single object.

If any of the objects is a vector object, it is first converted into polygons and then merged.
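The description does not state how the merge is performed. A minimal sketch of the usual approach (hypothetical names; a real system would also weld duplicate vertices) is to concatenate vertex lists and offset each mesh's face indices by the number of vertices already merged:

```python
def merge_meshes(meshes):
    """Merge several (vertices, faces) meshes into a single object.

    Face indices of each later mesh are shifted by the number of vertices
    already merged, so they keep pointing at the correct vertices.
    """
    all_verts, all_faces = [], []
    for verts, faces in meshes:
        offset = len(all_verts)
        all_verts.extend(verts)
        all_faces.extend(tuple(i + offset for i in f) for f in faces)
    return all_verts, all_faces

tri = ([(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(0, 1, 2)])
merged_v, merged_f = merge_meshes([tri, tri])
print(len(merged_v), merged_f)  # 6 [(0, 1, 2), (3, 4, 5)]
```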

The 3D modeling data of each object may further include texture mapping data in addition to the polygons. The computer system 100 may use the texture mapping data to generate color information for each region of the merged object.
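How the color information is derived is not specified; a simple possibility, sketched here with hypothetical names, is a nearest-neighbour texture lookup at each vertex's UV coordinate:

```python
def vertex_colors(uvs, texture, width, height):
    """Sample a texture (row-major list of RGB tuples) at each UV coordinate.

    Nearest-neighbour lookup is enough to attach per-region color to the
    merged object for a color-capable 3D printer.
    """
    colors = []
    for u, v in uvs:
        px = min(int(u * width), width - 1)
        py = min(int(v * height), height - 1)
        colors.append(texture[py * width + px])
    return colors

# 2x2 texture: red, green / blue, white
tex = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (255, 255, 255)]
print(vertex_colors([(0.0, 0.0), (0.9, 0.9)], tex, 2, 2))
```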

Thereafter, the computer system 100 exports the graphic data for 3D printer output using a converting tool; that is, the data is converted into a file format that the 3D printer can recognize.
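The patent names no specific file format. As one illustrative example (not from the source), the merged triangles could be serialized as ASCII STL, a format most slicers and printers recognize:

```python
def export_ascii_stl(triangles, name="capture"):
    """Serialize triangles ((v0, v1, v2) tuples of xyz points) as ASCII STL.

    Normals are written as zero vectors here; most slicers recompute them
    from the vertex winding order anyway.
    """
    lines = [f"solid {name}"]
    for v0, v1, v2 in triangles:
        lines.append("  facet normal 0 0 0")
        lines.append("    outer loop")
        for x, y, z in (v0, v1, v2):
            lines.append(f"      vertex {x} {y} {z}")
        lines.append("    endloop")
        lines.append("  endfacet")
    lines.append(f"endsolid {name}")
    return "\n".join(lines)

stl = export_ascii_stl([((0, 0, 0), (1, 0, 0), (0, 1, 0))])
print(stl.splitlines()[0])  # solid capture
```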

The user can save the converted 3D printer output graphic data to a portable storage device or the like, or transmit it to the 3D printer 10 to have it output in stereoscopic form.

FIG. 6 illustrates an output process by the 3D printer 10.

On receiving the 3D printer output graphic data, the 3D printer 10 outputs a result that includes the background object 2 and the character object 1 in three-dimensional form.

FIG. 7 is a diagram illustrating an output result of the 3D printer.

It can be confirmed that the objects included in the capture area set by the user were transformed through the predetermined post-processing so that they could be output as a single body, and were then output in a three-dimensional shape.

Meanwhile, the 3D capturing method according to an embodiment of the present invention may be implemented in the form of program instructions that can be executed through various computer means and recorded in a computer-readable medium. The computer-readable medium may include program instructions, data files, data structures, and the like, alone or in combination. The program instructions recorded on the medium may be those specially designed and constructed for the present invention, or may be known and available to those skilled in the art of computer software. Examples of computer-readable media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and DVDs; magneto-optical media; and hardware devices specifically configured to store and execute program instructions, such as ROM, RAM, and flash memory. Examples of program instructions include machine language code such as that produced by a compiler, as well as high-level language code that can be executed by a computer using an interpreter or the like.

While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, the invention is not limited to the disclosed embodiments; modifications and variations that do not depart from the gist of the invention also belong to the scope of its rights.

The present invention can be applied to the fields of 3D graphics software technology and 3D printer application technology.

1: Character object
2: Background object
3: Other objects
4: Extension line
5: First axis
6: Second axis
10: 3D printer
100: Computer system
101: Processor
102: Display
103: Memory
104: Data storage device

Claims (26)

  1. A 3D capturing method performed by a computer system, comprising:
    setting a capture area on 3D software (S110);
    determining 3D objects belonging to the set capture area and acquiring 3D modeling data of the determined 3D objects (S120); and
    exporting the acquired 3D modeling data as graphic data for 3D printer output (S130),
    wherein, in operation S120, objects to be included in the 3D printer output graphic data are selected from among the 3D objects belonging to the set capture area, and
    wherein, when the objects selected by the user include at least two objects that are not in contact with each other, an extension line connecting the two objects is generated and the generated extension line is included in the 3D printer output graphic data.
  2. The method according to claim 1,
    wherein the extension line is generated along a path corresponding to the shortest distance from the center of gravity of at least one of the two objects to the other.
  3. The method according to claim 1,
    wherein the thickness or the number of the extension lines is determined according to the volume of the objects.
  4. A 3D capturing method performed by a computer system, comprising:
    setting a capture area on 3D software (S110);
    determining 3D objects belonging to the set capture area and acquiring 3D modeling data of the determined 3D objects (S120); and
    exporting the acquired 3D modeling data as graphic data for 3D printer output (S130),
    wherein, in operation S120, objects to be included in the 3D printer output graphic data are selected from among the 3D objects belonging to the set capture area, and
    wherein, when the objects selected by the user include at least two objects that are not in contact with each other, at least one of the selected objects is moved so that the objects come into contact with each other.
  5. The method of claim 4,
    wherein a first axis connects, over the shortest distance, the at least two objects that are not in contact with each other, and a second axis connects the point where the moved object meets the first axis to the center of gravity of the moved object, and
    wherein the angle of the second axis is changed so that the first axis and the second axis coincide with each other.
  6. The method of claim 5, comprising:
    determining, when any one of the objects is moved and touches another object, whether the contact point is located on the second axis; and
    rotating the second axis by a predetermined angle and resuming the movement of the object if the contact point is not located on the second axis.
  7. A 3D capturing method performed by a computer system, comprising:
    setting a capture area on 3D software (S110);
    determining 3D objects belonging to the set capture area and acquiring 3D modeling data of the determined 3D objects (S120); and
    exporting the acquired 3D modeling data as graphic data for 3D printer output (S130),
    wherein, in operation S120, objects to be included in the 3D printer output graphic data are selected from among the 3D objects belonging to the set capture area, and
    wherein a background object and a character object are determined, and the background object and other objects in contact with the character object are incorporated into the 3D printer output graphic data.
  8. A 3D capturing method performed by a computer system, comprising:
    setting a capture area on 3D software (S110);
    determining 3D objects belonging to the set capture area and acquiring 3D modeling data of the determined 3D objects (S120); and
    exporting the acquired 3D modeling data as graphic data for 3D printer output (S130),
    wherein, in operation S120, objects to be included in the 3D printer output graphic data are selected from among the 3D objects belonging to the set capture area, and
    wherein, in operation S120, when only a part of a 3D object is included in the set capture area, the polygons forming the 3D object are cropped and included in the 3D printer output graphic data.
  9. The method of claim 8,
    further comprising creating a virtual face connecting the cropped-region outline of the cropped 3D object and including the virtual face in the 3D printer output graphic data when the inside of the cropped 3D object is empty.
  10. A 3D capturing method performed by a computer system, comprising:
    setting a capture area on 3D software (S110);
    determining 3D objects belonging to the set capture area and acquiring 3D modeling data of the determined 3D objects (S120); and
    exporting the acquired 3D modeling data as graphic data for 3D printer output (S130),
    wherein, in operation S120, objects to be included in the 3D printer output graphic data are selected from among the 3D objects belonging to the set capture area, and
    wherein, in operation S120, when only a part of a background object is included in the set capture area, the background object is cropped into a predetermined shape and included in the 3D printer output graphic data.
  11. The method of claim 10,
    further comprising creating a virtual face connecting the outline of the background object and including the virtual face in the 3D printer output graphic data when the bottom face of the background object is empty.
  12. The method according to any one of claims 1 to 11,
    wherein, in operation S110, a rectangular region is selected by a user on an execution screen of the 3D software, and the capture area is set by expanding the rectangular region into a hexahedral region based on a predetermined center point on the coordinate system of the 3D software.
  13. The method according to any one of claims 1 to 11,
    wherein, in operation S110, when a predetermined condition is satisfied during the execution of the 3D software, a predetermined area on the coordinate system of the 3D software is set as the capture area and the process proceeds to the next step.
  14. The method according to any one of claims 1 to 11,
    wherein, in operation S130, the acquired 3D modeling data is merged and converted into a single object, and then exported as graphic data for 3D printer output.
  15. The method according to any one of claims 1 to 11,
    wherein, in operation S120, 3D modeling data including the polygons and mapping data of each object is acquired.
  16. The method of claim 15,
    wherein color information is generated using the mapping data, and the generated color information is included in the 3D printer output graphic data when exporting.
  17. A computer system comprising a display and a processor,
    wherein the display displays a user interface for setting a capture area on the execution screen of 3D software according to the processing of the processor,
    wherein the processor determines 3D objects belonging to the capture area set by the user, acquires 3D modeling data of the determined 3D objects, and then exports the 3D modeling data as graphic data for 3D printer output,
    wherein the processor selects objects to be included in the 3D printer output graphic data from among the 3D objects belonging to the set capture area and then exports the graphic data using the 3D modeling data of the selected objects, and
    wherein, when at least two of the 3D objects belonging to the set capture area are not in contact with each other, the processor generates an extension line connecting the two objects and includes it in the graphic data, or moves at least one of the objects so that they come into contact with each other, and then exports the graphic data for 3D printer output.
  18. A computer system comprising a display and a processor,
    wherein the display displays a user interface for setting a capture area on the execution screen of 3D software according to the processing of the processor,
    wherein the processor determines 3D objects belonging to the capture area set by the user, acquires 3D modeling data of the determined 3D objects, and then exports the 3D modeling data as graphic data for 3D printer output,
    wherein the processor selects objects to be included in the 3D printer output graphic data from among the 3D objects belonging to the set capture area and then exports the graphic data using the 3D modeling data of the selected objects,
    wherein, when only a part of a 3D object is included in the set capture area, the processor crops the polygons constituting the 3D object to the portion belonging to the capture area, and
    wherein, when the inside of the cropped 3D object is empty, the processor creates a virtual face connecting the cropped-region outline of the cropped 3D object and includes the virtual face in the 3D printer output graphic data.
  19. The computer system according to claim 17 or 18,
    wherein the processor merges the 3D modeling data of the selected objects into a single object and then exports it as 3D printer output graphic data.
  20. delete
  21. delete
  22. delete
  23. delete
  24. delete
  25. delete
  26. delete
KR1020150039943A 2015-03-23 2015-03-23 3 dimenstional graphic data capturing system and method thereof KR101653802B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150039943A KR101653802B1 (en) 2015-03-23 2015-03-23 3 dimenstional graphic data capturing system and method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020150039943A KR101653802B1 (en) 2015-03-23 2015-03-23 3 dimenstional graphic data capturing system and method thereof

Publications (1)

Publication Number Publication Date
KR101653802B1 true KR101653802B1 (en) 2016-09-05

Family

ID=56939050

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150039943A KR101653802B1 (en) 2015-03-23 2015-03-23 3 dimenstional graphic data capturing system and method thereof

Country Status (1)

Country Link
KR (1) KR101653802B1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100682455B1 (en) 2005-03-17 2007-02-15 엔에이치엔(주) Game scrap system, game scrap method, and computer readable recording medium recording program for implementing the method
KR100771839B1 (en) 2007-02-06 2007-10-30 여호진 Online game picture capturing and character position system and method
JP2011044882A (en) * 2009-08-20 2011-03-03 Nikon Corp Image processing apparatus, and image processing program
KR20140061340A (en) 2014-04-25 2014-05-21 주식회사 엔씨소프트 Apparatus and method of managing game screenshot based on exif meta-data
KR20140062831A (en) * 2012-11-15 2014-05-26 (주)다인디지컬처 Method for acquiring and processing a variety of three-dimensional data to product a precise wide-area scale model
KR20140124427A (en) * 2012-03-19 2014-10-24 가부시키가이샤 리코 Image processing apparatus, image processing method, and computer-readable recording medium
KR20150012180A (en) * 2013-07-24 2015-02-03 한국전자통신연구원 3D object printing support device, 3D object printing support method, and 3D object printing service apparatus

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100682455B1 (en) 2005-03-17 2007-02-15 엔에이치엔(주) Game scrap system, game scrap method, and computer readable recording medium recording program for implementing the method
KR100771839B1 (en) 2007-02-06 2007-10-30 여호진 Online game picture capturing and character position system and method
JP2011044882A (en) * 2009-08-20 2011-03-03 Nikon Corp Image processing apparatus, and image processing program
KR20140124427A (en) * 2012-03-19 2014-10-24 가부시키가이샤 리코 Image processing apparatus, image processing method, and computer-readable recording medium
KR20140062831A (en) * 2012-11-15 2014-05-26 (주)다인디지컬처 Method for acquiring and processing a variety of three-dimensional data to product a precise wide-area scale model
KR20150012180A (en) * 2013-07-24 2015-02-03 한국전자통신연구원 3D object printing support device, 3D object printing support method, and 3D object printing service apparatus
KR20140061340A (en) 2014-04-25 2014-05-21 주식회사 엔씨소프트 Apparatus and method of managing game screenshot based on exif meta-data

Similar Documents

Publication Publication Date Title
Wang et al. Feature based 3D garment design through 2D sketches
US8874248B2 (en) Image processing method and method of three-dimensional printing incorporating the same
Cutler et al. A procedural approach to authoring solid models
JP3654616B2 (en) Hierarchical polygon data generation apparatus and method, and three-dimensional real-time video generation apparatus and method using the hierarchical polygon data
US6549201B1 (en) Method for constructing a 3D polygonal surface from a 2D silhouette by using computer, apparatus thereof and storage medium
US7979251B2 (en) Automatic generation of building instructions for building element models
KR20120114253A (en) Improvements relating to user interfaces for designing objects
EP2729225B1 (en) Method and system for designing and producing a user-defined toy construction element
Mitra et al. Shadow art
EP2187355B1 (en) System and method for dependency graph evaluation for animation
WO2006122212A2 (en) Statistical rendering acceleration
JPH05210745A (en) Method and device for processing three-dimensional graphics
US20030191554A1 (en) Method and system for the generation of a computer model
KR101842106B1 (en) Generating augmented reality content for unknown objects
US8629871B2 (en) Systems and methods for rendering three-dimensional objects
US20090319892A1 (en) Controlling the Motion of Virtual Objects in a Virtual Space
Dewaele et al. Interactive global and local deformations for virtual clay
US20100241403A1 (en) Automatic generation of building instructions for building element models
Turner et al. Sketching space
JPH08194840A (en) Graphic input and output device
US8731876B2 (en) Creating editable feature curves for a multi-dimensional model
US9984409B2 (en) Systems and methods for generating virtual contexts
US20120075297A1 (en) System and method for smoothing three dimensional images
JP5665872B2 (en) Shape optimization based on connectivity for real-time rendering
CN105121134B (en) The formation section printd used in three-dimensional and/or the method for texture

Legal Events

Date Code Title Description
E701 Decision to grant or registration of patent right
GRNT Written decision to grant