KR20170090016A - System and method for converting 2-dimensional image to 3-dimensional image - Google Patents

System and method for converting 2-dimensional image to 3-dimensional image

Info

Publication number
KR20170090016A
Authority
KR
South Korea
Prior art keywords
dimensional image
dimensional
contour
pixel
value
Prior art date
Application number
KR1020160010193A
Other languages
Korean (ko)
Other versions
KR101779532B1 (en)
Inventor
권혁홍
이수현
Original Assignee
대진대학교 산학협력단 (Daejin University Industry-Academic Cooperation Foundation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 대진대학교 산학협력단 (Daejin University Industry-Academic Cooperation Foundation)
Priority to KR1020160010193A
Publication of KR20170090016A
Application granted
Publication of KR101779532B1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/10 Geometric effects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/10 Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Image Generation (AREA)

Abstract

The present invention discloses a method and system for restoring a two-dimensional image into a three-dimensional image, so that a user's two-dimensional image can be restored into a three-dimensional image in order to produce a three-dimensional shape based on a photograph of the user.
The disclosed method of restoring a two-dimensional image into a three-dimensional image includes the steps of: (a) loading, by an image loading unit, a two-dimensional image and dividing it by a predetermined pixel unit; (b) detecting, by a feature detecting unit, a predetermined number or more of contour points in the two-dimensional image; (c) setting, by a control unit, a depth value for three-dimensional conversion for every pixel centered on the detected contour points; (d) selecting, by the control unit, a ruled surface S including the predetermined number or more of contour points, dividing each selected ruled surface into an upper curve and a lower curve, and calculating three-dimensional machining path coordinates based on the depth values of the pixels on the ruled surface forming the upper and lower curves; and (e) rendering, by a rendering unit, the two-dimensional image as a three-dimensional image according to the calculated three-dimensional machining path coordinates.
According to the present invention, a three-dimensional shape can be restored based on a photograph of a user.

Description

BACKGROUND OF THE INVENTION 1. Field of the Invention The present invention relates to a method and system for converting a two-dimensional image into a three-dimensional image.

The present invention relates to a method and system for restoring a two-dimensional image into a three-dimensional image, and more particularly, to a method and system for restoring a user's two-dimensional image into a three-dimensional image in order to produce a stereoscopic shape based on a photograph of the user.

In recent years, as interest in three-dimensional stereoscopic images has increased, methods for generating them have been actively studied. Since the beginning of research on three-dimensional (3D) graphics, the ultimate goal of researchers has been to create graphic scenes as realistic as real images.

Therefore, in the field of traditional modeling technology, studies using polygonal models have been conducted, and as a result, modeling and rendering techniques have been developed to a sufficient degree to provide a highly realistic three-dimensional environment.

However, the process for creating a complex model requires a lot of expert effort and time. In addition, realistic and complex environments require enormous amounts of information, resulting in low efficiency in storage and transmission.

Korean Published Patent Application No. 2014-0002151 (Published on Jan. 08, 2014)

It is an object of the present invention to solve the above-mentioned problems by providing a method and system for restoring a user's two-dimensional image into a three-dimensional image in order to produce a three-dimensional shape based on a photograph of the user.

According to an aspect of the present invention, there is provided a system for restoring a two-dimensional image into a three-dimensional image, the system including: an image loading unit for loading a two-dimensional image and dividing it by a predetermined pixel unit; a feature detecting unit for detecting a predetermined number or more of contour points in the two-dimensional image; a control unit for setting a depth value for three-dimensional conversion for every pixel centered on the detected contour points, selecting a ruled surface S including the predetermined number or more of contour points, dividing the ruled surface into an upper curve and a lower curve, and calculating three-dimensional machining path coordinates based on the depth values of the pixels on the ruled surface forming the upper and lower curves; and a rendering unit for rendering the two-dimensional image into a three-dimensional image according to the calculated three-dimensional machining path coordinates.

The feature detecting unit may designate, as a contour point, a region of one or more pixels whose contrast value differs from that of neighboring pixels in the two-dimensional image by at least a predetermined reference difference, and may detect a predetermined number or more of such contour points over the entire two-dimensional image.

In addition, the feature detecting unit may determine a predetermined number or more of contour points by adding a specific number of detailed contour points between the detected contour points.

In addition, the control unit may compare the contrast values of the pixels at the detected contour points with one another and set the depth values according to the magnitude of the contrast values; for the pixels surrounding each contour point, it may set a depth value greater than the depth value of the contour-point pixel when a surrounding pixel's contrast value is higher than that of the contour-point pixel, and a depth value lower than the depth value of the contour-point pixel when it is lower.

If an inclined surface exists between the predetermined number or more of contour points, based on the depth values of the pixels on the ruled surface forming the upper and lower curves, the control unit may compare the contrast values of the pixels at each contour point and calculate the three-dimensional machining path coordinates so that the depth value of the pixel at the contour point with the larger contrast value, and the slope of the straight lines forming the inclined surface on that contour point's side, have larger values than those on the compared contour point's side.

According to another aspect of the present invention, there is provided a method of restoring a two-dimensional image into a three-dimensional image, the method comprising the steps of: (a) loading, by an image loading unit, a two-dimensional image and dividing it by a predetermined pixel unit; (b) detecting, by a feature detecting unit, a predetermined number or more of contour points in the two-dimensional image; (c) setting, by a control unit, a depth value for three-dimensional conversion for every pixel centered on the detected contour points; (d) selecting, by the control unit, a ruled surface S including the predetermined number or more of contour points, dividing each selected ruled surface into an upper curve and a lower curve, and calculating three-dimensional machining path coordinates based on the depth values of the pixels on the ruled surface forming the upper and lower curves; and (e) rendering, by a rendering unit, the two-dimensional image as a three-dimensional image according to the calculated three-dimensional machining path coordinates.

In the step (b), the feature detecting unit may designate, as a contour point, a region of one or more pixels whose contrast value differs from that of neighboring pixels in the two-dimensional image by at least a predetermined reference difference, and may detect a predetermined number or more of such contour points over the entire two-dimensional image.

Also, in the step (b), the feature detector may determine a certain number or more of contour points by adding a specific number of detailed contour points between the detected contour points.

In addition, in the step (c), the control unit may compare the contrast values of the pixels at the detected contour points with one another and set the depth values according to the magnitude of the contrast values; for the pixels surrounding each contour point, it may set a depth value greater than that of the contour-point pixel when a surrounding pixel's contrast value is higher than the contrast value of the contour-point pixel, and a depth value lower than that of the contour-point pixel when it is lower.

In the step (d), when an inclined surface exists between the predetermined number or more of contour points, based on the depth values of the pixels on the ruled surface forming the upper and lower curves, the control unit may compare the contrast values of the pixels at each contour point and calculate the three-dimensional machining path coordinates so that the depth value of the pixel at the contour point with the larger contrast value, and the slope of the straight lines forming the inclined surface on that contour point's side, have larger values than those on the compared contour point's side.

According to the present invention, a three-dimensional image for producing a three-dimensional shape can be restored based on a photograph of a user's image.

FIG. 1 is a schematic functional block diagram of a system for restoring a two-dimensional image into a three-dimensional image according to an embodiment of the present invention.
FIG. 2 is a flowchart illustrating a method of restoring a two-dimensional image into a three-dimensional image according to an exemplary embodiment of the present invention.
FIG. 3 is a diagram illustrating an example of a process of generating three-dimensional machining path coordinates according to an embodiment of the present invention.
FIG. 4 is a diagram illustrating an example of comparing contrast values of pixels at contour points set according to an embodiment of the present invention.
FIG. 5 is a diagram illustrating an example of setting depth values of contour points according to the magnitude of the contrast value according to an embodiment of the present invention.

Hereinafter, exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings so that those skilled in the art to which the present invention pertains can readily practice it. The present invention may be embodied in many different forms and is not limited to the embodiments described herein.

In order to clearly illustrate the present invention, parts not related to the description are omitted, and the same or similar components are denoted by the same reference numerals throughout the specification.

Throughout the specification, when a part is referred to as being "connected" to another part, this includes not only being "directly connected" but also being "electrically connected" with another part in between. Also, when a part is said to "comprise" an element, this means that it may include other elements as well, rather than excluding them, unless specifically stated otherwise.

When any part is referred to as being "on" another part, it may be directly on the other part, or another part may be interposed between them. In contrast, when a part is referred to as being "directly on" another part, no other part is interposed between them.

The terms first, second, third, and the like are used to describe various parts, components, regions, layers and/or sections, but are not limited thereto. These terms are used only to distinguish one part, component, region, layer or section from another. Thus, a first part, component, region, layer or section described below may be referred to as a second part, component, region, layer or section without departing from the scope of the present invention.

The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the invention. The singular forms used herein include the plural forms unless the context clearly indicates otherwise. The term "comprising" as used in the specification specifies a particular feature, region, integer, step, operation, element and/or component, and does not exclude the presence or addition of other features, regions, integers, steps, operations, elements and/or components.

Terms indicating relative space, such as "below" and "above", may be used to more easily describe the relationship of one part shown in the drawings to another. Such terms are intended to include, in addition to the meaning intended in the drawings, other meanings or operations of the device in use. For example, if the device in the drawings is inverted, certain parts described as being "below" other parts would then be described as being "above" them. Thus, the exemplary term "below" includes both upward and downward directions. The device may be rotated by 90 degrees or by other angles, and the terms indicating relative space are interpreted accordingly.

Unless otherwise defined, all terms used herein, including technical and scientific terms, have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. Terms defined in commonly used dictionaries are further interpreted as having a meaning consistent with the related technical literature and the present disclosure, and are not to be construed as having ideal or excessively formal meanings unless otherwise defined.

Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings so that those skilled in the art can easily carry out the present invention. The present invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein.

FIG. 1 is a schematic functional block diagram of a system for restoring a two-dimensional image into a three-dimensional image according to an embodiment of the present invention.

Referring to FIG. 1, a system 100 for restoring a two-dimensional image into a three-dimensional image according to the present invention includes an image loading unit 110, a feature detecting unit 120, a control unit 130, a rendering unit 140, a storage unit 150, and a display unit 160.

The image loading unit 110 loads a two-dimensional image and divides it by a predetermined pixel unit.

The feature detecting unit 120 detects a predetermined number or more of contour points in the two-dimensional image.

That is, the feature detecting unit 120 may designate, as a contour point, a region of one or more pixels whose contrast value differs from that of neighboring pixels in the two-dimensional image by at least a predetermined reference value, and may detect a predetermined number or more of such contour points over the entire two-dimensional image.

In addition, the feature detecting unit 120 may determine a predetermined number or more of contour points by adding a specific number of detailed contour points between the detected contour points.
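For illustration only, the following is a minimal Python sketch of this detection step; the function name detect_contour_points, the 4-neighbourhood contrast comparison, and the thresholds ref_diff and min_points are assumptions for the sketch, not values fixed by the patent.

    # Hypothetical sketch of the contour-point detection described above;
    # names and thresholds are illustrative, not from the patent.
    import numpy as np

    def detect_contour_points(gray, ref_diff=30, min_points=32):
        """Collect pixels whose contrast (gray value) differs from a
        neighbour by at least ref_diff; if too few are found, add
        "detailed" contour points between consecutive detections."""
        g = gray.astype(np.int32)
        diff = np.zeros_like(g)
        diff[1:, :] = np.maximum(diff[1:, :], np.abs(g[1:, :] - g[:-1, :]))
        diff[:, 1:] = np.maximum(diff[:, 1:], np.abs(g[:, 1:] - g[:, :-1]))
        ys, xs = np.nonzero(diff >= ref_diff)
        points = sorted(zip(xs.tolist(), ys.tolist()))
        while 0 < len(points) < min_points:
            # Detailed contour points: midpoints between neighbours.
            extra = [((x0 + x1) // 2, (y0 + y1) // 2)
                     for (x0, y0), (x1, y1) in zip(points, points[1:])]
            merged = sorted(set(points) | set(extra))
            if len(merged) == len(points):
                break  # no new midpoints; stop rather than loop forever
            points = merged
        return points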

The control unit 130 sets a depth value for three-dimensional conversion for every pixel centered on the detected contour points in the two-dimensional image, selects a ruled surface S including the predetermined number or more of contour points, divides it into an upper curve and a lower curve, calculates the three-dimensional machining path coordinates on the basis of the depth values of the linearly arranged pixels constituting the upper and lower curves, and controls the rendering unit 140 to render the result as a three-dimensional image.

That is, the control unit 130 compares the contrast values of the pixels at the detected contour points with one another and sets depth values according to the magnitude of the contrast values; for the pixels surrounding each contour point, it sets a depth value greater than that of the contour-point pixel when the surrounding pixel's contrast value is higher, and a depth value lower than that of the contour-point pixel when it is lower.

If an inclined surface exists between the contour points, based on the depth values of the pixels on the ruled surface forming the upper and lower curves, the control unit 130 compares the contrast values of the pixels at each contour point and calculates the three-dimensional machining path coordinates so that the depth value of the pixel at the contour point with the larger contrast value, and the corresponding slope, have larger values.

The rendering unit 140 renders a two-dimensional image as a three-dimensional image according to the calculated three-dimensional processing path coordinates.

The storage unit 150 stores at least one two-dimensional image as a source image and stores three-dimensional images rendered by the rendering unit 140.

The display unit 160 displays on the screen a two-dimensional image loaded by the image loading unit 110 or a three-dimensional image rendered by the rendering unit 140.

FIG. 2 is a flowchart illustrating a method of restoring a two-dimensional image into a three-dimensional image according to an exemplary embodiment of the present invention.

Referring to FIG. 2, in the system 100 for restoring a two-dimensional image into a three-dimensional image according to an embodiment of the present invention, the image loading unit 110 loads a two-dimensional image and divides it by a predetermined pixel unit (S210).

Next, the feature detecting unit 120 detects a predetermined number or more of contour points in the two-dimensional image (S220).

That is, the feature detecting unit 120 designates, as contour points (m_{i-2}, m_{i-1}, m_i, m_{i+1}, m_{i+2}), regions of one or more pixels whose contrast value differs from that of neighboring pixels by at least a predetermined reference value, and detects a predetermined number or more of these contour points over the entire region of the two-dimensional image.

In addition, the feature detecting unit 120 may determine a predetermined number or more of contour points by adding a specific number of detailed contour points between the detected contour points.

Next, the control unit 130 sets a depth value for three-dimensional conversion for every pixel centered on the detected contour points in the two-dimensional image (S230).

That is, the control unit 130 compares the contrast values of the pixels at the contour points (m_{i-2}, m_{i-1}, m_i, m_{i+1}, m_{i+2}) with one another and sets depth values (…, d_{i-1}, d_i, d_{i+1}, …) according to the magnitude of the contrast values. FIG. 4 is a diagram illustrating an example of comparing contrast values for the pixels of the set contour points, and FIG. 5 is a diagram illustrating an example of setting the depth values of the contour points according to the magnitude of the contrast value. In the same manner, the control unit 130 compares the contrast values of the surrounding pixels with the contrast value of each contour-point pixel: a surrounding pixel with a higher contrast value is given a depth value greater than that of the contour-point pixel, and one with a lower contrast value is given a depth value lower than that of the contour-point pixel. That is, as shown in FIG. 5, the higher a pixel's contrast value, the larger its depth value: the control unit 130 sets the depth value of contour point m_{i-1} to be the largest, followed by m_i and then m_{i+1}, in the order d_{i-1} > d_i > d_{i+1}.
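A minimal sketch of this depth-assignment step (S230) follows, assuming the simplest monotone mapping from contrast value to depth; the scale factor max_depth and the linear normalisation are assumptions, since the text only requires that a higher contrast value yield a larger depth value (d_{i-1} > d_i > d_{i+1} in FIG. 5).

    # Sketch under stated assumptions: depth grows linearly with the
    # pixel's contrast value; max_depth is an illustrative scale.
    import numpy as np

    def assign_depths(gray, contour_points, max_depth=10.0):
        values = np.array([gray[y, x] for (x, y) in contour_points], float)
        span = values.max() - values.min()
        if span == 0:
            return np.zeros(len(values))
        # Higher contrast value -> larger depth, as in FIG. 5.
        return (values - values.min()) / span * max_depth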

Next, the control unit 130 selects a ruled surface S including the predetermined number or more of contour points (m_{i-2}, m_{i-1}, m_i, m_{i+1}, m_{i+2}), divides the ruled surface into an upper curve and a lower curve, and calculates the three-dimensional machining path coordinates on the basis of the depth values of the linearly arranged pixels constituting the upper and lower curves (S240).

Here, the three-dimensional machining path coordinates are calculated in order to generate a three-dimensional stereoscopic image from the user's two-dimensional image and to apply it to a material such as bronze or iron, so that a three-dimensional likeness of the user can be cut or shaped.

When the three-dimensional machining path coordinates have been calculated, a three-dimensional shape can be manufactured either by melting a material such as iron or bronze with a 3D printer and laminating it point by point along the coordinates, or by cutting a hexahedral block of iron or bronze along the machining path coordinates.

Since lamination along three-dimensional machining path coordinates using a 3D printer is the general approach, the embodiment of the present invention instead takes as an example the calculation of three-dimensional machining path coordinates for producing a three-dimensional shape by cutting a hexahedral material with a tool.

First, the control unit 130 selects the ruled surface S and calculates the machining path, using the upper curve and the lower curve on the selected ruled surface, as a set of tool-posture coordinates [T(i) = (p(i), a(i))].

As shown in FIG. 3, the control unit 130 calculates the error e_{ki} between the tool and the i-th section of the inscribed curve m_i (the curve internally dividing the upper and lower curves), and the error e_k of the tool with respect to the entire inscribed curve. FIG. 3 is a diagram illustrating an example of a process of generating three-dimensional machining path coordinates according to an embodiment of the present invention. Referring to FIG. 3, assuming that the upper curve [U_i]_{i=1...N} and the lower curve [V_i]_{i=1...N} of the ruled surface S are given, the upper and lower curves can each be approximated by a polyline of N points, and the line segment connecting the i-th points of the upper and lower curves always lies on S.

Thus, each point of any inscribed curve [m_i(λ)]_{i=1...N} generated by the following Equation 1 always lies on S:

    m_i(λ) = (1 − λ)·U_i + λ·V_i,  0 ≤ λ ≤ 1    (1)
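The relationship in Equation 1 is simple enough to state directly in code. The sketch below assumes the upper and lower curves are given as (N, 3) point arrays, as in the polyline approximation above; the function name is illustrative.

    # Equation (1): inscribed curve internally dividing the upper and
    # lower polylines of the ruled surface S.
    import numpy as np

    def inscribed_curve(U, V, lam):
        """lam = 0 gives the upper curve, 1 the lower, 0.5 the middle."""
        U = np.asarray(U, dtype=float)
        V = np.asarray(V, dtype=float)
        return (1.0 - lam) * U + lam * V

With λ = {0, 0.5, 1} this reproduces the upper, intermediate, and lower curves used later in the text.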

As described above, the machining path is defined as a set of tool-posture coordinates T(k) = (p(k), a(k)), where p(k) is the coordinate of the tool end point at step k and a(k) is the direction of the tool axis, so the tool axis lies on the straight line p(k) + h·a(k). If the taper angle of the tool is φ and the tool radius at height h_0 is r_0, the tool radius r(h) at height h can be expressed by the following Equation 2:

    r(h) = r_0 + (h − h_0)·tan φ    (2)

In FIG. 3, the error between the i-th section m_i of the inscribed curve and the tool posture T(k) is obtained from the shortest distance d_{ki} between the tool axis line p(k) + h·a(k) and the segment m_i. If h_{ki} is the height on the tool axis at which this shortest distance is attained, the tool radius at that position is r(h_{ki}). Both d_{ki} and h_{ki} can easily be obtained by computing the shortest distance between two straight lines. Thus, according to the following Equation 3, the error between section m_i and the tool is e_{ki}, and the error of the tool with respect to the entire inscribed curve [m_i(λ)] is e_k:

    e_{ki} = d_{ki} − r(h_{ki}),   e_k = min_i e_{ki}    (3)

At this time, when e_k is greater than 0, the tool does not cut into the inscribed curve.
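For illustration, a Python sketch of the error computation in Equations 2 and 3 follows; it treats each section of the inscribed polyline as an infinite line, consistent with the text's "shortest distance between two straight lines", and the tool parameters r0, h0, and phi are illustrative assumptions.

    # Sketch of Equations (2)-(3): error between a tool posture
    # T(k) = (p, a) and the inscribed polyline M (an (N, 3) array).
    import numpy as np

    def line_line_closest(p, a, q, b):
        """Closest approach of lines p + s*a and q + t*b; returns the
        distance and the parameter s (height along the tool axis)."""
        p, a, q, b = (np.asarray(v, float) for v in (p, a, q, b))
        w = p - q
        A, B, C = a @ a, a @ b, b @ b
        D, E = a @ w, b @ w
        den = A * C - B * B
        if abs(den) < 1e-12:                 # parallel lines
            s, t = 0.0, E / C
        else:
            s = (B * E - C * D) / den
            t = (A * E - B * D) / den
        return np.linalg.norm(w + s * a - t * b), s

    def tool_error(p, a, M, r0=1.0, h0=0.0, phi=np.radians(5.0)):
        """e_k = min_i e_ki with e_ki = d_ki - r(h_ki), where
        r(h) = r0 + (h - h0) * tan(phi) is Equation (2)."""
        M = np.asarray(M, float)
        errors = []
        for m0, m1 in zip(M[:-1], M[1:]):
            d, h = line_line_closest(p, a, m0, m1 - m0)
            errors.append(d - (r0 + (h - h0) * np.tan(phi)))
        return min(errors)

A positive return value means the tool clears every section of the inscribed curve; a negative one signals an overcut, matching the role of e_k described above.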

In summary, the upper and lower curves are approximated by polylines according to the required accuracy, inscribed curves are generated with appropriate values of λ, and the error of the machining path is estimated by obtaining the error between each tool posture in the machining path and the inscribed curves. In general, it is appropriate to use λ = {0, 0.5, 1}, that is, the given upper and lower curves and the intermediate curve.

The control unit 130 uses the gradient descent (GD) technique to generate the machining path coordinates. GD is one of the most general nonlinear optimization techniques: it changes the learning variable in the steepest direction of a given objective function, searching for the learning-variable value at which the objective function attains its maximum or minimum. The GD method is applied as follows. 1) Define the objective function E(θ), where θ is the learning variable. 2) Derive the gradient ∂E(θ)/∂θ. 3) Evaluate ∂E(θ)/∂θ at the current learning-variable value θ_t. 4) Update the learning variable according to the following Equation 4:

    θ_{t+1} = θ_t − η·(∂E(θ)/∂θ)|_{θ=θ_t}    (4)

Here, η denotes the learning rate, which adjusts the learning speed and precision.

Repeat 3) and 4) until the stop condition is satisfied. The stop condition is usually determined by the maximum number of iterations or whether the error has reached the target value.
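As a sketch of steps 1) through 4), the loop below implements the generic GD update with both stop conditions from the text; the central-difference numerical gradient is an assumption, since no closed form for ∂E/∂θ is given here.

    # Generic GD loop for steps 1)-4); eta is the learning rate of
    # Equation (4).
    import numpy as np

    def numerical_gradient(E, theta, eps=1e-6):
        grad = np.zeros_like(theta)
        for i in range(theta.size):
            d = np.zeros_like(theta)
            d[i] = eps
            grad[i] = (E(theta + d) - E(theta - d)) / (2.0 * eps)
        return grad

    def gradient_descent(E, theta0, eta=0.01, max_iters=1000, tol=1e-6):
        theta = np.asarray(theta0, float).copy()
        for _ in range(max_iters):                 # stop: iteration cap
            grad = numerical_gradient(E, theta)    # step 3
            theta -= eta * grad                    # step 4, Equation (4)
            if np.linalg.norm(grad) < tol:         # stop: target reached
                break
        return theta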

The GD technique can be applied as long as the objective function is first-order differentiable; it therefore has a wide range of application, is easy to implement, and allows the learning direction to be adjusted by including various constraints in the objective function.

However, since its learning speed is slow, an appropriate learning-rate value must be set. If the learning rate is too small, the learning time becomes unnecessarily long; if it is too large, the method diverges without converging to the optimum. For this reason, Newton's method, which uses second-derivative information, and quasi-Newton methods, which approximate the second derivative, are sometimes applied instead of GD, which uses only first-order derivative information.

Nevertheless, in spite of these drawbacks, GD is applied in many fields and techniques, among which Deep Belief Networks (DBN) are one of the most popular machine learning techniques in recent years. The DBN, a type of neural network model, is attracting attention for its powerful performance and scalability; because of its internal randomness, GD-family optimization techniques are mainly used for it.

In order to machine the user's three-dimensional shape along the three-dimensional machining path coordinates, the side surface of the tool should be brought as close as possible to the ruled surface so as to minimize the error at an arbitrary step k, and the tool should advance at a constant speed. To satisfy these conditions, the optimization problem is simplified as follows.

That is, the control unit 130 generates the intermediate curve [m_i(λ=0.5) = (U_i + V_i)/2]_{i=1...N} from the upper curve [U_i]_{i=1...N} and the lower curve [V_i]_{i=1...N} of the ruled surface. Instead of keeping the tool in contact with the ruled surface itself, the error with respect to these upper, middle, and lower polyline curves is minimized.

A simple way to minimize the error between a polyline curve [m_i] and the tool posture T(k) is first to find a feature point c_k that represents [m_i], and then to repeat the process of reducing the error between the tool posture and the feature point. In this case, the feature point is the point at which the error e_k becomes minimum.

The control unit 130 calculates the feature point c_k that minimizes the tool error e_k between the tool posture and the entire inscribed curve:

    c_k = m_{i*},  i* = argmin_i e_{ki}

In order to obtain a machining path in which the tool advances at a constant speed, the machining path is defined as a set of N tool-posture coordinates T_i = (p_i, a_i), and the i-th tool posture T_i is made to minimize the error with respect to the i-th point m_i. To do this, one of the upper, middle, and lower curves is chosen and the feature point is set as c_k = m_i.

The control unit 130 sets the objective function E_i(θ = (p_i, a_i)), in which the factors (p_i, a_i) of the tool-posture coordinates are the learning variable θ, and updates the learning variable until the stop condition is satisfied, thereby generating the machining path.
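Putting the pieces together, the sketch below fits a single tool posture T_i = (p_i, a_i) by minimising an objective E_i with the gradient_descent loop sketched earlier; the squared-error objective and the initialisation are assumptions, since the text states only that (p_i, a_i) is the learning variable and that the feature point is c_k = m_i.

    # Hypothetical objective E_i for one tool posture; reuses
    # gradient_descent() from the sketch above.
    import numpy as np

    def fit_tool_posture(m_i, p_init, a_init,
                         r0=1.0, h0=0.0, phi=np.radians(5.0)):
        c_k = np.asarray(m_i, float)         # feature point c_k = m_i

        def E(theta):
            p, a = theta[:3], theta[3:]
            a = a / np.linalg.norm(a)        # keep the tool axis unit length
            h = (c_k - p) @ a                # height of closest axis point
            d = np.linalg.norm(c_k - (p + h * a))
            r = r0 + (h - h0) * np.tan(phi)  # Equation (2)
            return (d - r) ** 2              # drive the error e_ki to zero

        theta0 = np.concatenate([np.asarray(p_init, float),
                                 np.asarray(a_init, float)])
        theta = gradient_descent(E, theta0, eta=0.05, max_iters=500)
        return theta[:3], theta[3:] / np.linalg.norm(theta[3:])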

In addition, when an inclined surface exists between the contour points, based on the depth values of the pixels on the ruled surface forming the upper and lower curves, the control unit 130 compares the contrast values of the pixels at each contour point and calculates the three-dimensional machining path coordinates so that the depth value of the pixel at the contour point with the larger contrast value, and the slope of the straight lines forming the inclined surface on that contour point's side, are larger than those on the compared contour point's side.

Conversely, the control unit 130 calculates the three-dimensional machining path coordinates so that the depth value of the pixel at the contour point with the smaller contrast value, and the slope of the straight lines forming the inclined surface of the ruled surface S on that side, are smaller than those on the compared contour point's side.

When the three-dimensional machining path coordinates have been calculated, the control unit 130 transfers them to the rendering unit 140 and controls it to render them as a three-dimensional image.

Then, the rendering unit 140 renders the two-dimensional image as a three-dimensional image according to the calculated three-dimensional processing path coordinates (S250).

Accordingly, the user can have his or her own photograph (a two-dimensional image) converted into a three-dimensional image and can view the displayed three-dimensional image.

As described above, according to the present invention, a method and system can be realized that restore a user's two-dimensional image into a three-dimensional image in order to produce a three-dimensional shape based on a photograph of the user.

It will be understood by those skilled in the art that various changes in form and detail may be made without departing from the spirit and scope of the present invention as defined by the following claims and their equivalents. It is intended that the present invention cover the modifications and variations of this invention provided they come within the scope of the appended claims and their equivalents.

100: system for restoring a two-dimensional image into a three-dimensional image
110: image loading unit
120: feature detecting unit
130: control unit
140: rendering unit
150: storage unit
160: display unit

Claims (10)

An image loading unit for loading a two-dimensional image and dividing the two-dimensional image by a predetermined pixel unit;
A feature detector for detecting a predetermined number or more of contour points in the two-dimensional image;
A control unit for setting a depth value for three-dimensional conversion for every pixel centered on the detected contour points in the two-dimensional image, selecting a ruled surface S including the predetermined number or more of contour points, dividing the ruled surface into an upper curve and a lower curve, and calculating three-dimensional machining path coordinates based on the depth values of the pixels on the ruled surface forming the upper and lower curves; and
A rendering unit for rendering the two-dimensional image into a three-dimensional image according to the calculated three-dimensional machining path coordinates,
Whereby the system restores a two-dimensional image into a three-dimensional image.
The system according to claim 1,
Wherein the feature detecting unit designates, as a contour point, a region of one or more pixels whose contrast value differs from that of neighboring pixels in the two-dimensional image by at least a predetermined reference difference, and detects a predetermined number or more of such contour points over the entire two-dimensional image.
The system according to claim 2,
Wherein the feature detecting unit determines the predetermined number or more of contour points by adding a specific number of detailed contour points between the detected contour points.
The system according to claim 1,
Wherein the control unit compares the contrast values of the pixels at the detected contour points with one another and sets the depth values according to the magnitude of the contrast values, and, for the pixels surrounding each contour point, sets a depth value greater than the depth value of the contour-point pixel when a surrounding pixel's contrast value is higher than that of the contour-point pixel, and a depth value lower than the depth value of the contour-point pixel when it is lower.
The system according to claim 1,
Wherein, when an inclined surface exists between the predetermined number or more of contour points, based on the depth values of the pixels on the ruled surface forming the upper and lower curves, the control unit compares the contrast values of the pixels at each contour point and calculates the three-dimensional machining path coordinates so that the depth value of the pixel at the contour point with the larger contrast value, and the slope of the straight lines forming the inclined surface on that contour point's side, are larger than those on the compared contour point's side.
(a) loading, by an image loading unit, a two-dimensional image and dividing it by a predetermined pixel unit;
(b) a feature detecting unit detecting a predetermined number or more of contour points in the two-dimensional image;
(c) setting, by a control unit, a depth value for three-dimensional conversion for every pixel centered on the detected contour points in the two-dimensional image;
(d) selecting, by the control unit, a ruled surface S including the predetermined number or more of contour points, dividing each selected ruled surface into an upper curve and a lower curve, and calculating three-dimensional machining path coordinates based on the depth values of the pixels on the ruled surface forming the upper and lower curves; and
(e) rendering, by a rendering unit, the two-dimensional image as a three-dimensional image according to the calculated three-dimensional machining path coordinates,
Whereby the method restores a two-dimensional image into a three-dimensional image.
The method according to claim 6,
Wherein, in the step (b), the feature detecting unit designates, as a contour point, a region of one or more pixels whose contrast value differs from that of neighboring pixels in the two-dimensional image by at least a predetermined reference difference, and detects a predetermined number or more of such contour points over the entire two-dimensional image.
The method according to claim 7,
Wherein, in the step (b), the feature detecting unit determines the predetermined number or more of contour points by adding a specific number of detailed contour points between the detected contour points.
The method according to claim 6,
Wherein, in the step (c), the control unit compares the contrast values of the pixels at the detected contour points with one another and sets the depth values according to the magnitude of the contrast values, and, for the pixels surrounding each contour point, sets a depth value greater than the depth value of the contour-point pixel when a surrounding pixel's contrast value is higher than that of the contour-point pixel, and a depth value lower than the depth value of the contour-point pixel when it is lower.
The method according to claim 6,
Wherein, in the step (d), when an inclined surface exists between the predetermined number or more of contour points, based on the depth values of the pixels on the ruled surface forming the upper and lower curves, the control unit compares the contrast values of the pixels at each contour point and calculates the three-dimensional machining path coordinates so that the depth value of the pixel at the contour point with the larger contrast value, and the slope of the straight lines forming the inclined surface on that contour point's side, are larger than those on the compared contour point's side.
KR1020160010193A 2016-01-27 2016-01-27 System and method for converting 2-dimensional image to 3-dimensional image KR101779532B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020160010193A KR101779532B1 (en) 2016-01-27 2016-01-27 System and method for converting 2-dimensional image to 3-dimensional image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020160010193A KR101779532B1 (en) 2016-01-27 2016-01-27 System and method for converting 2-dimensional image to 3-dimensional image

Publications (2)

Publication Number Publication Date
KR20170090016A true KR20170090016A (en) 2017-08-07
KR101779532B1 KR101779532B1 (en) 2017-09-19

Family

ID=59653662

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020160010193A KR101779532B1 (en) 2016-01-27 2016-01-27 System and method for converting 2-dimensional image to 3-dimensional image

Country Status (1)

Country Link
KR (1) KR101779532B1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109785427A (en) * 2018-12-26 2019-05-21 长江勘测规划设计研究有限责任公司 The method of three-dimensional modeling is quickly carried out using X-Y scheme
KR20190070623A (en) * 2017-12-13 2019-06-21 중앙대학교 산학협력단 Apparatus and method for bas-relief modeling
KR20200048237A (en) 2018-10-29 2020-05-08 방성환 Server for generatig 3d image and method for generating stereoscopic image
CN116756836A (en) * 2023-08-16 2023-09-15 中南大学 Tunnel super-undermining volume calculation method, electronic equipment and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100414629B1 (en) 1995-03-29 2004-05-03 산요덴키가부시키가이샤 3D display image generation method, image processing method using depth information, depth information generation method
KR101570359B1 (en) * 2014-11-28 2015-11-19 한국델켐 (주) system and method for generating flank milling tool path

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20190070623A (en) * 2017-12-13 2019-06-21 중앙대학교 산학협력단 Apparatus and method for bas-relief modeling
KR20200048237A (en) 2018-10-29 2020-05-08 방성환 Server for generatig 3d image and method for generating stereoscopic image
CN109785427A (en) * 2018-12-26 2019-05-21 长江勘测规划设计研究有限责任公司 The method of three-dimensional modeling is quickly carried out using X-Y scheme
CN116756836A (en) * 2023-08-16 2023-09-15 中南大学 Tunnel super-undermining volume calculation method, electronic equipment and storage medium
CN116756836B (en) * 2023-08-16 2023-11-14 中南大学 Tunnel super-undermining volume calculation method, electronic equipment and storage medium

Also Published As

Publication number Publication date
KR101779532B1 (en) 2017-09-19

Similar Documents

Publication Publication Date Title
US10510180B2 (en) Learning to reconstruct 3D shapes by rendering many 3D views
US9978177B2 (en) Reconstructing a 3D modeled object
EP3032495B1 (en) Texturing a 3d modeled object
KR101779532B1 (en) System and method for converting 2-dimensional image to 3-dimensional image
CN110288695B (en) Single-frame image three-dimensional model surface reconstruction method based on deep learning
JP7403528B2 (en) Method and system for reconstructing color and depth information of a scene
KR102096673B1 (en) Backfilling points in a point cloud
JP4677536B1 (en) 3D object recognition apparatus and 3D object recognition method
KR100634537B1 (en) Apparatus and method for processing triangulation of 3-D image, computer-readable storing medium storing a computer program for controlling the apparatus
CN102184540B (en) Sub-pixel level stereo matching method based on scale space
CN103123727A (en) Method and device for simultaneous positioning and map building
KR101867991B1 (en) Motion edit method and apparatus for articulated object
US20140300941A1 (en) Method and apparatus for generating hologram based on multi-view image
CN106408596B (en) Sectional perspective matching process based on edge
CN104715504A (en) Robust large-scene dense three-dimensional reconstruction method
EP2372652A1 (en) Method for estimating a plane in a range image and range image camera
Natali et al. Rapid visualization of geological concepts
WO2020230214A1 (en) Depth estimation device, depth estimation model learning device, depth estimation method, depth estimation model learning method, and depth estimation program
Sreeni et al. Haptic rendering of dense 3D point cloud data
Shivakumar et al. Real time dense depth estimation by fusing stereo with sparse depth measurements
CN104796624A (en) Method for editing and propagating light fields
US8837815B2 (en) Method of filtering a disparity mesh obtained from pixel images
US10783707B2 (en) Determining a set of facets that represents a skin of a real object
Leeper et al. Constraint based 3-dof haptic rendering of arbitrary point cloud data
Aydar et al. A low-cost laser scanning system design

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E701 Decision to grant or registration of patent right
GRNT Written decision to grant