KR20160088814A - Conversion Method For A 2D Image to 3D Graphic Models - Google Patents
- Publication number
- KR20160088814A (application number KR1020160004611A)
- Authority
- KR
- South Korea
- Prior art keywords
- image
- dimensional
- model
- vertex
- present
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformation in the plane of the image
-
- G06T7/0081—
-
- G06T7/0085—
-
- G06T2207/20144—
Abstract
The present invention relates to a method of converting a 2D image into a 3D model, and more particularly to a method of processing a 2D image and automatically converting the extracted image information into a three-dimensional model.
The present invention provides a method of converting a two-dimensional image into a three-dimensional graphic model, comprising the steps of: receiving an image (101); pre-processing the image (102); segmenting the image (103); applying vertices to the region in the image (104); calculating three-dimensional vertex positions (105); generating the rear model (106); generating the front model (107); adjusting the front vertices (108); and rendering (109).
Further, in the present invention, the three-dimensional vertex position calculation step (105) comprises: calculating the center coordinates (centerX, centerY) of the image foreground region by averaging; and calculating the vertex coordinates (V_x, V_y, V_z) of the three-dimensional model assigned to the coordinates (x, y) on the image space by the following equations:
V_x = S_x (x - centerX)
V_y = S_y (centerY - y)
V_z = S_z r(y) cos θ
(where θ = π(centerX - x) / 2r(y), S_x, S_y, and S_z are scaling factors, and r(y) is the horizontal length of the foreground at height y on the image plane)
Description
The present invention relates to a method of converting a 2D image into a 3D model, and more particularly to a method of processing a 2D image and automatically converting the extracted image information into a three-dimensional model.
In general, there are many ways to model a specific object (including a person's face) in three dimensions. One is to scan the object in three dimensions and model it directly from the scanned data; the other is to photograph the object from various angles and model it by deforming a pre-made general three-dimensional model to match the photographs.
In the former case, information on the curvature and actual color of a specific object can be obtained directly using specialized equipment such as a three-dimensional scanner, so highly precise three-dimensional modeling of the object is possible. However, this method requires expensive equipment, and modeling cannot be performed unless one travels to the specific place where the three-dimensional scanner is located.
In the latter case, to overcome the problem of the former, three-dimensional object modeling is performed using flat photographs taken from various angles together with a pre-made general three-dimensional model. More specifically, the object to be modeled is first photographed from various angles. A three-dimensional model is then created by deforming the general model to fit the photographed data, and the captured texture of the object is mapped onto the deformed model.
However, the method using photographs taken from several angles requires pictures from various viewpoints, and the viewing angles and distances between the photographs must be matched, which is inconvenient. To use this method, one must either go to a place where several cameras are installed or, with a single camera, photograph the object several times while adjusting the exact angle and distance for each shot.
To solve the above problems, a method of generating a three-dimensional face model from a two-dimensional frontal face image has been proposed, comprising the steps of setting control points on the two-dimensional image, deforming a pre-made three-dimensional base model to coincide with the two-dimensional image using the control points, and mapping the texture of the two-dimensional image onto the deformed model.
However, 3D modeling of the kind described in the prior art above is a field that requires long, specialized training, and it has been difficult for the general public, especially young students, to produce 3D models.
In addition, with the prospect that 3D printers will become widespread in the home, the present invention aims to provide a method of converting a two-dimensional image into a three-dimensional graphic model that children and the general public can use easily.
In order to solve the above problems and needs, the present invention provides a method of converting a two-dimensional image into a three-dimensional graphic model, comprising the steps of: receiving an image (101); pre-processing the image (102); segmenting the image (103); applying vertices to the region in the image (104); calculating three-dimensional vertex positions (105); generating the rear model (106); generating the front model (107); adjusting the front vertices (108); and rendering (109).
Further, in the present invention, the three-dimensional vertex position calculation step (105) comprises: calculating the center coordinates (centerX, centerY) of the image foreground region by averaging; and calculating the vertex coordinates (V_x, V_y, V_z) of the three-dimensional model assigned to the coordinates (x, y) on the image space by the following equations:
V_x = S_x (x - centerX)
V_y = S_y (centerY - y)
V_z = S_z r(y) cos θ
(where θ = π(centerX - x) / 2r(y), S_x, S_y, and S_z are scaling factors, and r(y) is the horizontal length of the foreground at height y on the image plane)
The method of converting a two-dimensional image into a three-dimensional graphic model according to the present invention has the effect that non-professionals, including children and the general public, can easily perform 3D modeling.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a flowchart illustrating a method of converting a two-dimensional image into a three-dimensional graphic model according to the present invention.
FIG. 2 shows an embodiment of the image-region vertex distribution according to the present invention.
FIG. 3 is a photograph of a 3D product manufactured by the method of converting a two-dimensional image into a three-dimensional graphic model according to the present invention.
FIG. 4 shows an edge-extraction image-processing result according to an embodiment of the present invention.
FIG. 6 illustrates the principle of arranging vertices at regular intervals on the foreground-separated image and generating faces from the vertices.
The present invention relates to an image processing method comprising the step (101) of receiving an image, followed by the image pre-processing step (102).
In the pre-processing step of the present invention, edges may be extracted from the input image as follows.
Before extracting edges, the input image may be converted into a gray image.
Regardless of whether the input image is a color image or a black-and-white image, the converted gray image can be low-pass filtered, used for feature detection, or used for edge extraction.
In converting to the gray image, any conventional method of converting a color image into a monochrome image may be used.
The present invention then extracts edges along the lines of light-and-dark contours in the gray image.
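As a concrete sketch of this pre-processing, the gray conversion and edge extraction might look as follows. The BT.601 luminance weights, the Sobel operator, and the threshold value are common choices for illustration, not ones the patent specifies:

```python
import numpy as np

def to_gray(rgb):
    """Convert an H x W x 3 color image to grayscale (ITU-R BT.601 weights)."""
    return rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114

def sobel_edges(gray, threshold=50.0):
    """Extract edges as a binary map by thresholding the Sobel gradient
    magnitude; the patent only says edges follow light/dark contours,
    so Sobel here is one possible realization."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = gray.shape
    padded = np.pad(gray, 1, mode="edge")
    gx = np.zeros((h, w), dtype=float)
    gy = np.zeros((h, w), dtype=float)
    for i in range(3):          # accumulate the 3x3 correlation
        for j in range(3):
            window = padded[i:i + h, j:j + w]
            gx += kx[i, j] * window
            gy += ky[i, j] * window
    return np.hypot(gx, gy) > threshold
```

A black-and-white input can skip `to_gray` and go straight to `sobel_edges`, matching the text's note that the gray conversion makes the later steps independent of the input color format.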
The segmentation step (103) of the present invention is a segmentation operation that separates foreground and background in the pre-processed image.
That is, the region-separation step separates the foreground and the background in the edge-extracted image produced by the pre-processing.
This separation of foreground and background can be performed by an ordinary image-processing method.
In the present invention, the position of a vertex of a model space is set in order to generate a three-dimensional model.
FIG. 5 illustrates the principle of arranging vertices at regular intervals on the foreground-separated image and generating faces from the vertices.
The vertex positions are calculated by taking the x and y axes of the image coordinates as the x and y axes of the model space, and deriving the z axis from the relative position along the x and y axes.
As described above, in order to detect only the required face part, the present invention needs both a segmentation of the image and the setting of vertex positions.
The region-separation step after edge extraction, that is, the separation of foreground and background, may be performed by assuming a hypothetical horizontal line across the face image: scanning the line from one edge of the image to the other, the span between the first and the last detected edge position is regarded as the foreground.
In the present invention, edge detection is performed as shown in FIG. 4 on the assumption that the background of the image is simple, as in an ID photograph, thereby separating the foreground and the background.
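The row-scanning separation just described can be sketched as follows; `foreground_mask` is a hypothetical helper name, it operates on the binary edge map from pre-processing, and it assumes the simple-background case where each foreground row is bounded by at least two edge pixels:

```python
import numpy as np

def foreground_mask(edges):
    """Per-row foreground extraction: scan each row of the edge map and mark
    everything between the first and the last edge pixel as foreground,
    following the hypothetical-horizontal-line idea in the text."""
    h, w = edges.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(h):
        cols = np.flatnonzero(edges[y])   # column indices of edge pixels
        if cols.size >= 2:                # rows without a bounded span stay background
            mask[y, cols[0]:cols[-1] + 1] = True
    return mask
```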
The image-region vertex distribution step (104) of the present invention sets horizontal lines across the foreground image area at regular intervals and places a predetermined number of vertices on each horizontal line.
The resolution of the 3D model is therefore determined by the number of vertices.
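A minimal sketch of this vertex-distribution step under the mask representation above; the parameter names `row_step` and `verts_per_row` are mine, and together they set the model resolution as the text notes:

```python
import numpy as np

def distribute_vertices(mask, row_step=4, verts_per_row=8):
    """Place a fixed number of evenly spaced vertex sample points on every
    row_step-th foreground row of a boolean mask, returning (x, y) pairs
    in image coordinates."""
    points = []
    for y in range(0, mask.shape[0], row_step):
        cols = np.flatnonzero(mask[y])
        if cols.size < 2:
            continue                      # skip rows with no foreground span
        xs = np.linspace(cols[0], cols[-1], verts_per_row)
        points.extend((float(x), float(y)) for x in xs)
    return points
```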
The three-dimensional vertex position calculation step (105) of the present invention proceeds as follows.
First, the center coordinates (centerX, centerY) of the image foreground region are calculated by averaging.
Then, the vertex coordinates (V_x, V_y, V_z) of the three-dimensional model assigned to the coordinates (x, y) on the image space are calculated as follows:
V_x = S_x (x - centerX)
V_y = S_y (centerY - y)
V_z = S_z r(y) cos θ
where θ = π(centerX - x) / 2r(y), S_x, S_y, and S_z are scaling factors, and r(y) is the horizontal length of the foreground at height y on the image plane.
The vertex coordinates of the front part of the model are (V_x, V_y, V_z) and those of the rear part are (V_x, V_y, -V_z).
The normal vectors of the front and back sides are opposite to each other.
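The equations of step (105) can be implemented directly. The sketch below treats r(y) as the half-width of the foreground row at height y, an assumption on my part, but the reading under which V_z peaks at the row center and falls to zero at x = centerX ± r(y):

```python
import math

def vertex_position(x, y, center_x, center_y, r_y, sx=1.0, sy=1.0, sz=1.0):
    """Map an image-space point (x, y) to a (front, back) vertex pair using
    the patent's formulas; r_y = r(y) must be nonzero, and the scale
    factors default to 1 purely for illustration."""
    theta = math.pi * (center_x - x) / (2.0 * r_y)
    vx = sx * (x - center_x)
    vy = sy * (center_y - y)          # image y grows downward, model y upward
    vz = sz * r_y * math.cos(theta)
    return (vx, vy, vz), (vx, vy, -vz)  # back vertex mirrors the depth
```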
Upon completion of the vertex coordinate calculation, a rear model generation step (106) and a front model generation step (107) are performed to generate front or rear faces.
The rear model generation step 106 and the front model generation step 107 may be performed in a different order.
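The front and rear face generation over a regular vertex grid might be sketched like this; `grid_faces` is a hypothetical helper, and the reversed winding for the rear side mirrors the statement that the front and back normal vectors point in opposite directions:

```python
def grid_faces(rows, cols, flip=False):
    """Triangulate a rows x cols vertex grid into index triples, splitting
    each quad into two triangles; flip reverses the winding order so rear
    faces get normals opposite to the front faces."""
    faces = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            i = r * cols + c
            a, b, c2, d = i, i + 1, i + cols + 1, i + cols  # quad corners
            tris = [(a, b, c2), (a, c2, d)]
            if flip:
                tris = [(t[0], t[2], t[1]) for t in tris]
            faces.extend(tris)
    return faces
```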
Next, the front-vertex adjustment step (108), which shapes the front side according to the image, is performed using the grayscale information of the image or its edge information, so that the front side is set differently from the rear side.
The front-vertex adjustment step is performed according to the following equation:
V_z = V_z + α f(x, y) / G_max
where α is a scaling factor, f(x, y) is the grayscale or edge value at (x, y) on the image plane, and G_max is the maximum grayscale value, which may be preset for each specific object to be generated in 3D.
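The adjustment formula is a one-liner in code; `f_xy` stands for the grayscale or edge value f(x, y) described in the text, and the default `g_max` of 255 assumes 8-bit images:

```python
def adjust_front_vertex_z(vz, f_xy, alpha, g_max=255.0):
    """Displace a front-face vertex depth by the image intensity at that
    pixel, per the relief-adjustment equation V_z = V_z + alpha*f(x,y)/G_max."""
    return vz + alpha * f_xy / g_max
```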
After the above process, a rendering step (109) is performed to display the 3D image or to output a physical object on a 3D printer.
The present invention provides a method for transforming a two-dimensional image into a three-dimensional graphic model.
INDUSTRIAL APPLICABILITY The present invention is a very useful invention for an industry that produces, manufactures, and distributes software or hardware for modeling in three dimensions.
In particular, as 3D printers become widespread in the home, the present invention is especially useful for a modeling industry aimed at the general public, and at young students in particular.
Receiving an image (101), pre-processing the image (102), segmenting the image (103), applying vertices to the region in the image (104), calculating the three-dimensional vertex positions (105), generating the rear model (106), generating the front model (107), adjusting the front vertices (108), and rendering (109).
Claims (2)
A method of converting a two-dimensional image into a three-dimensional graphic model, comprising: pre-processing an image (102); segmenting the image (103); applying vertices to the region in the image (104); calculating three-dimensional vertex positions (105); generating a rear model (106); generating a front model (107); adjusting the front vertices (108); and rendering (109).
The method of claim 1, wherein the three-dimensional vertex position calculation step (105) comprises: calculating the center coordinates (centerX, centerY) of the image foreground region by averaging; and calculating the vertex coordinates (V_x, V_y, V_z) of the three-dimensional model assigned to the coordinates (x, y) on the image space by the following equations:
V_x = S_x (x - centerX)
V_y = S_y (centerY - y)
V_z = S_z r(y) cos θ
(where θ = π(centerX - x) / 2r(y), S_x, S_y, and S_z are scaling factors, and r(y) is the horizontal length of the foreground at height y on the image plane)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR20150007742 | 2015-01-16 | ||
KR1020150007742 | 2015-01-16 |
Publications (2)
Publication Number | Publication Date |
---|---|
KR20160088814A true KR20160088814A (en) | 2016-07-26 |
KR101829733B1 KR101829733B1 (en) | 2018-03-29 |
Family
ID=56680978
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020160004611A KR101829733B1 (en) | 2015-01-16 | 2016-01-14 | Conversion Method For A 2D Image to 3D Graphic Models |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR101829733B1 (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021195873A1 (en) * | 2020-03-30 | 2021-10-07 | 南昌欧菲光电技术有限公司 | Method and device for identifying region of interest in sfr test chart image, and medium |
KR20220162930A (en) * | 2021-06-01 | 2022-12-09 | (주)이머시브캐스트 | Cloud vr-based 3d image provision method |
KR20230060575A (en) * | 2021-10-27 | 2023-05-08 | 삼덕통상 주식회사 | An automated manufacturing of a shoe upper part |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102145220B1 (en) | 2019-02-14 | 2020-08-18 | 엔에이치엔 주식회사 | Method and apparatus for convert two-dimensional image to three-dimensional image utilizing deep learning |
KR20210099750A (en) | 2020-02-05 | 2021-08-13 | 씨오지 주식회사 | Apparatus and method for providing image |
KR20230083348A (en) | 2021-12-02 | 2023-06-12 | (주)셀빅 | The platform and Method for generating the contents |
KR102627659B1 (en) | 2021-12-02 | 2024-01-24 | (주)셀빅 | The Apparatus and method for generating the Back side image |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008310724A (en) | 2007-06-18 | 2008-12-25 | Nippon Telegr & Teleph Corp <Ntt> | Three-dimensional shape restoration device, three-dimensional shape restoration method, three-dimensional shape restoration program and recording medium with its program stored |
-
2016
- 2016-01-14 KR KR1020160004611A patent/KR101829733B1/en active IP Right Grant
Also Published As
Publication number | Publication date |
---|---|
KR101829733B1 (en) | 2018-03-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR101829733B1 (en) | Conversion Method For A 2D Image to 3D Graphic Models | |
CN107993216B (en) | Image fusion method and equipment, storage medium and terminal thereof | |
EP3323249B1 (en) | Three dimensional content generating apparatus and three dimensional content generating method thereof | |
US9609307B1 (en) | Method of converting 2D video to 3D video using machine learning | |
KR100682889B1 (en) | Method and Apparatus for image-based photorealistic 3D face modeling | |
CN107484428B (en) | Method for displaying objects | |
JP4677536B1 (en) | 3D object recognition apparatus and 3D object recognition method | |
US10176564B1 (en) | Collaborative disparity decomposition | |
KR101759188B1 (en) | the automatic 3D modeliing method using 2D facial image | |
CN109816784B (en) | Method and system for three-dimensional reconstruction of human body and medium | |
KR102152436B1 (en) | A skeleton processing system for dynamic 3D model based on 3D point cloud and the method thereof | |
EP3905195A1 (en) | Image depth determining method and living body identification method, circuit, device, and medium | |
JP5068732B2 (en) | 3D shape generator | |
CN112651881B (en) | Image synthesizing method, apparatus, device, storage medium, and program product | |
US11232315B2 (en) | Image depth determining method and living body identification method, circuit, device, and medium | |
KR101125061B1 (en) | A Method For Transforming 2D Video To 3D Video By Using LDI Method | |
CN109448093B (en) | Method and device for generating style image | |
CN107203961B (en) | Expression migration method and electronic equipment | |
CN117115358A (en) | Automatic digital person modeling method and device | |
KR20140004604A (en) | Apparatus and method for generating 3 dimension face | |
KR101351745B1 (en) | Apparatus and method for generating 3 dimension face | |
CN111611997B (en) | Cartoon customized image motion video generation method based on human body action migration | |
KR20160049639A (en) | Stereoscopic image registration method based on a partial linear method | |
JP7298687B2 (en) | Object recognition device and object recognition method | |
JPH0273471A (en) | Estimating method for three-dimensional form |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
A201 | Request for examination | ||
N231 | Notification of change of applicant | ||
E902 | Notification of reason for refusal | ||
E701 | Decision to grant or registration of patent right | ||
GRNT | Written decision to grant |