WO2022045470A1 - Method for generating three-dimensional modeling of two-dimensional image through virtual grid network - Google Patents

Method for generating three-dimensional modeling of two-dimensional image through virtual grid network

Info

Publication number
WO2022045470A1
Authority
WO
WIPO (PCT)
Prior art keywords
dimensional
image
grid network
images
generating
Prior art date
Application number
PCT/KR2020/017622
Other languages
French (fr)
Korean (ko)
Inventor
윤기식
Original Assignee
윤기식
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 윤기식 filed Critical 윤기식
Publication of WO2022045470A1 publication Critical patent/WO2022045470A1/en

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 — Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/20 — Finite element generation, e.g. wire-frame surface description, tessellation
    • G06T 3/00 — Geometric image transformation in the plane of the image
    • G06T 3/40 — Scaling the whole image or part thereof
    • G06T 7/00 — Image analysis
    • G06T 7/10 — Segmentation; Edge detection
    • G06T 7/11 — Region-based segmentation
    • G06T 7/194 — Segmentation; Edge detection involving foreground-background segmentation

Definitions

  • The present invention relates to a method for generating a three-dimensional model of a two-dimensional image through a virtual grid network, in which curve interpolation is performed using a grid network to correct disturbances that may occur when a plurality of two-dimensional images overlap.
  • An object can be rendered as a three-dimensional image by photographing it from various angles and modeling the photographed two-dimensional images in three dimensions.
  • Conventionally, an object was modeled as a three-dimensional image by, for example, photographing it from various directions at different focal lengths to create various two-dimensional images, or by setting reference points on each part of the object and connecting the reference points with lines to form an image of the object.
  • To model a two-dimensional image of an object in three dimensions, the present invention seeks to provide a method for generating a three-dimensional model of a two-dimensional image through a virtual grid network, capable of generating a three-dimensional object image by bonding a temporary grid network and a virtual grid network to the two-dimensional image.
  • The present invention also seeks to provide a three-dimensional modeling generation method in which, to remove disturbances that may occur between images where the photographed two-dimensional images of the target overlap, curve interpolation is performed that matches the corner surfaces of the unit grids or matches the contrast inside the unit grids.
  • According to an embodiment, a temporary grid network is bonded to the photographed image, and the background image is separated based on the boundary between the non-curved and curved regions formed in the temporary grid network; a method for generating a three-dimensional model of a two-dimensional image through a virtual grid network that separates the background in this way may be provided.
  • According to an embodiment, the virtual grid network may be formed of a plurality of black first unit grids and white second unit grids arranged alternately.
  • According to an embodiment, the three-dimensional object image may be formed by overlapping the first unit grids so that the curved edge surfaces of the plurality of first unit grids are connected, and overlapping the second unit grids so that the curved edge surfaces of the plurality of second unit grids are connected, thereby interpolating the curves.
  • According to an embodiment, the three-dimensional object image may be formed by overlapping the plurality of second unit grids, on whose inner surfaces light and dark contrast is generated, so that the contrasts match, thereby interpolating the curves.
  • The present invention removes the background image of an object by bonding a temporary grid network to a two-dimensional image of the object, and bonds a virtual grid network formed of first unit grids and second unit grids to the two-dimensional object image, so that the object can easily be modeled as a three-dimensional image.
  • In addition, the present invention can effectively remove disturbances between two-dimensional images by curve-interpolating the first and second unit grids in the portions where the two-dimensional images of the object overlap.
  • FIG. 1 is a flowchart illustrating a method for generating a three-dimensional model of a two-dimensional image through a virtual grid network according to an embodiment of the present invention.
  • FIG. 2 is a diagram illustrating a virtual grid network of a method for generating a three-dimensional model of a two-dimensional image through a virtual grid network according to an embodiment of the present invention.
  • FIG. 3 is a view for explaining a process of generating a three-dimensional model of a two-dimensional image through a virtual grid network according to an embodiment of the present invention.
  • A two-dimensional image 120 extraction step may be performed, in which the photographing target is photographed and the background image 110 is removed from the photographed image 100 (step 210).
  • That is, when the photographing target 10 is photographed, the photographed image 100 including the background image 110 in which the target is located is output. Since the background image 110 is unnecessary data in the process of modeling the three-dimensional object image 160, a process of removing the background image 110 from the photographed image 100 may be performed first.
  • First, the front, rear, left, right, and upper surfaces of the photographing target 10 may each be photographed multiple times, so that a plurality of two-dimensional images corresponding to each surface are output.
  • Next, a process of bonding the temporary grid network 140 to the photographed image 100 may be performed.
  • The temporary grid network 140 consists of a plurality of black and white quadrangles (rectangles, squares, etc.) alternately arranged in a grid shape. The quadrangles are not limited to black and white; any set of colors may be used, provided the quadrangles are arranged alternately so that adjacent quadrangles can be distinguished from one another by contrast.
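The alternating arrangement described above is essentially a checkerboard mask. As a minimal illustrative sketch (the cell colours and grid size here are assumptions, not specified by the method), such a grid can be generated so that every pair of adjacent cells differs in contrast:

```python
def make_checkerboard(rows, cols, colors=("black", "white")):
    """Return a rows x cols grid whose cells alternate between two colours,
    so that any two horizontally or vertically adjacent cells differ."""
    return [[colors[(r + c) % 2] for c in range(cols)] for r in range(rows)]

grid = make_checkerboard(4, 4)
# Adjacent cells always contrast, which is what lets each quadrangle be
# distinguished when the grid is bonded onto the photographed image.
assert all(grid[r][c] != grid[r][c + 1] for r in range(4) for c in range(3))
```

Any other contrasting colour pair could be substituted for `colors` without changing the alternation logic, matching the description's note that the grid is not limited to black and white.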
  • the shape of the temporary grid network 140 may vary according to the shape of the object 10 to be photographed.
  • Where the temporary grid network 140 is bonded over a concave portion of the photographing target 10, the grid network appears in the photographed image 100 deformed into a concave shape. That is, concave curves form in the temporary grid network 140 following the concave shape of the target 10, so the shape of the photographing target 10 can be determined through the temporary grid network 140.
  • For example, when the temporary grid network 140 is attached to a photographed image 100 in which parts of a foot are captured, the temporary grid network 140 is deformed and exposed according to the shape of the foot.
  • For each view of the foot (front, rear, left, right, etc.), the temporary grid network 140 is bonded to the photographed image 100, which contains both the foot itself (the instep, etc.) and the background image 110.
  • When the temporary grid network 140 is attached to the photographed image 100 in this way, the portion of the grid network located over the foot is bent and exposed according to the shape of the foot, while the portion located over the background is exposed as undeformed rectangles, squares, or the like, with no curves.
  • By tracing the boundary between the curved and non-curved portions of the temporary grid network 140, only the portion of the grid network expressing the shape of the foot can be separated; the background image 110 is thereby easily separated and removed from the photographed image 100, and the two-dimensional image 120 is derived.
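The background-separation rule above (curved grid cells belong to the object, undeformed cells to the background) can be sketched as a simple classification over cell corners. The cell representation and deformation tolerance below are illustrative assumptions; the description does not specify an implementation.

```python
def is_deformed(corners, ideal, tol=0.5):
    """A cell counts as deformed when any of its corners has moved more than
    tol away from its ideal (undeformed, square) position."""
    return any(abs(cx - ix) > tol or abs(cy - iy) > tol
               for (cx, cy), (ix, iy) in zip(corners, ideal))

def separate_foreground(cells):
    """cells: list of (observed_corners, ideal_corners) pairs.
    Keep only the deformed cells, i.e. those lying over the object."""
    return [cell for cell in cells if is_deformed(*cell)]

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
flat = (square, square)                                            # background cell
bent = ([(0.0, 0.0), (1.0, 0.9), (1.0, 1.0), (0.0, 1.0)], square)  # object cell
assert separate_foreground([flat, bent]) == [bent]
```

In practice the corner displacements would come from detecting the grid lines in the photographed image; here they are supplied directly to keep the sketch self-contained.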
  • Next, a step of generating the two-dimensional object image 130 may be performed by superimposing a plurality of two-dimensional images 120 obtained by photographing the same surface of the target and extracting a median value from the error values formed across the overlapping two-dimensional images 120 (step 220).
  • The two-dimensional images 120, in which the background image 110 has been removed from the photographed image 100 so that only the shape of the photographing target 10 is exposed, are obtained by photographing the target 10 multiple times from various sides. Depending on shooting conditions such as vibration of the imaging device and the light intensity at the time of shooting, the two-dimensional images 120 may not be output identically, so errors may occur between them.
  • Accordingly, a process of correcting the errors between the plurality of two-dimensional images 120 may be performed.
  • Through this correction, the two-dimensional images 120 can be output without interference between the plurality of overlapping images.
  • The plurality of two-dimensional images 120 output in this way are positioned on coordinates formed along the X-Y axes, and a plurality of reference positions of the photographing target 10 exposed in each two-dimensional image 120 are expressed as X-Y coordinates.
  • For each reference position, whose X-Y position value may differ from one two-dimensional image 120 to another, a median value is derived; by taking the positions at the X-Y coordinates of the derived median values as references for the shape of the photographing target 10, the two-dimensional object image 130, in which the error values between the plurality of two-dimensional images 120 are corrected, may be generated.
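The median-based correction above can be sketched directly with the standard library. Representing each image's reference points as a name-to-coordinate mapping is a hypothetical convenience; only the per-coordinate median step comes from the description.

```python
from statistics import median

def correct_reference_points(images):
    """images: list of dicts mapping a reference-point name to its (x, y)
    position in one two-dimensional image. For every reference point, take
    the median x and median y over all images, suppressing per-shot errors
    (camera vibration, lighting differences, etc.)."""
    return {point: (median(img[point][0] for img in images),
                    median(img[point][1] for img in images))
            for point in images[0]}

# Three shots of the same surface; the third is an outlier the median rejects.
shots = [{"heel": (10.0, 20.0)}, {"heel": (10.2, 20.1)}, {"heel": (13.0, 25.0)}]
assert correct_reference_points(shots) == {"heel": (10.2, 20.1)}
```

The median, unlike the mean, is unaffected by a single badly deviating shot, which fits the stated goal of correcting occasional shooting errors.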
  • Next, a process of bonding the virtual grid network 150 to the two-dimensional object image 130 is performed (step 230).
  • Similarly to the temporary grid network 140, the virtual grid network 150 is provided with a plurality of black first unit grids 151 and white second unit grids 152 formed as rectangles, squares, etc.
  • The plurality of first unit grids 151 and second unit grids 152 are not limited to black and white; the colors of the unit grids may be chosen so that, when arranged alternately, the first unit grids 151 and second unit grids 152 can be distinguished from each other.
  • When the virtual grid network 150 is bonded to the two-dimensional object image 130, the plurality of first unit grids 151 and second unit grids 152 of the virtual grid network 150 are deformed according to the shape of the photographing target 10, so that the shape of each part of the target 10 is exposed in the virtual grid network 150.
  • Then, a three-dimensional object image 160 is generated (step 240).
  • Each two-dimensional object image 130 to which the virtual grid network 150 is attached is an image output by photographing the target 10 from one side; to generate the three-dimensional object image 160, a process of correcting the two-dimensional object images 130 obtained by photographing each part of the target 10 and forming them into a single image is necessary.
  • First, the first unit grids 151 are overlapped so that the curved edge surfaces of the plurality of first unit grids 151 are connected, and the second unit grids 152 are overlapped so that the curved edge surfaces of the plurality of second unit grids 152 are connected, thereby interpolating the curves (step 241).
  • For example, the edge surfaces of the first unit grids 151 or second unit grids 152 formed at the right end of the two-dimensional object image 130 photographed from the upper surface of the left foot of a human body may be formed as curves running from the instep toward the right (or left) side of the left foot.
  • Likewise, the edge surfaces of the first unit grids 151 or second unit grids 152 formed at the left end of the two-dimensional object image 130 photographed from the right side of the left foot may be formed as curves running from the right (or left) side of the left foot toward the instep.
  • Where these two-dimensional object images 130 overlap one another, disturbances occur between them, which may make it difficult to output an accurate three-dimensional object image 160.
  • Accordingly, the edge surfaces of the first unit grids 151 (or second unit grids 152) formed at the right end of the two-dimensional object image 130 photographed from the upper surface of the left foot are made to match the edge surfaces of the first unit grids 151 (or second unit grids 152) formed at the left end of the two-dimensional object image 130 photographed from the right side of the left foot. That is, first unit grids 151 are matched with first unit grids 151, and second unit grids 152 with second unit grids 152, at the edge surfaces of each of the four corners.
  • By interpolating the curves that arise at the boundaries of the two-dimensional object images 130 in this way, the three-dimensional object image 160 may be generated.
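One way to read the edge-matching step is that the same physical edge, sampled in two overlapping views, is merged into a single curve. The pointwise averaging below is an assumption for illustration; the description only requires that the matched edge surfaces be made to coincide.

```python
def blend_edges(edge_a, edge_b):
    """edge_a, edge_b: equal-length lists of (x, y) samples of the same
    unit-grid edge as seen in two overlapping views; return their pointwise
    mean, yielding a single interpolated curve for the shared boundary."""
    return [((xa + xb) / 2, (ya + yb) / 2)
            for (xa, ya), (xb, yb) in zip(edge_a, edge_b)]

# The same edge of a unit grid, as seen from the top view and the side view:
top_view = [(0.0, 0.0), (1.0, 0.25), (2.0, 0.5)]
side_view = [(0.0, 0.5), (1.0, 0.75), (2.0, 1.0)]
assert blend_edges(top_view, side_view) == [(0.0, 0.25), (1.0, 0.5), (2.0, 0.75)]
```

A smoother scheme (e.g. spline fitting through both sets of samples) could replace the pointwise mean; the key point is that the two views contribute a single shared boundary curve.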
  • In addition, the two-dimensional object images 130 may be interpolated by overlapping the plurality of second unit grids 152, on whose inner surfaces light and dark contrast is generated, so that the contrasts match (step 242).
  • Although the second unit grids 152 may be formed white, light and dark contrast is generated on the second unit grids 152 depending on the angle at which the photographing target 10 is photographed, the illuminance at the time of photographing, and so on, and this contrast may be exposed in the form of a gradation.
  • For example, the inner surfaces of the second unit grids 152 formed at the right end of the two-dimensional object image 130 photographed from the upper surface of the left foot may be formed with a gradation that darkens from the top of the left foot toward the right (or left) side of the foot.
  • Likewise, the inner surfaces of the second unit grids 152 formed at the left end of the two-dimensional object image 130 photographed from the right side of the left foot may be formed with a gradation that darkens from the right (or left) side of the left foot toward the instep.
  • Although these second unit grids 152 differ in the direction in which they darken, the second unit grids 152 of the two-dimensional object images 130 obtained by photographing the same part of the left foot have coincident points, so they can be matched.
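The contrast-matching step can be sketched by comparing the brightness profiles of a shared unit grid in two views. Representing a cell as a list of brightness samples, and allowing the profile to be reversed (since the darkening direction can differ between views, as noted above), are illustrative assumptions:

```python
def same_cell(profile_a, profile_b):
    """profile_a, profile_b: brightness samples across one second unit grid in
    two different views. The cells depict the same part of the object when the
    gradations coincide point for point, possibly with one profile reversed
    (the darkening direction may differ between views)."""
    return profile_a == profile_b or profile_a == profile_b[::-1]

# Gradation darkening left-to-right in one view, right-to-left in the other:
assert same_cell([90, 60, 30], [30, 60, 90])
# Different gradations do not match:
assert not same_cell([90, 60, 30], [90, 50, 30])
```

A real pipeline would compare profiles within a tolerance rather than exactly; exact equality keeps the sketch minimal.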
  • By interpolating, through the curve interpolation described above, the overlaps generated at the boundary surfaces of the two-dimensional object images 130 photographed from the front, rear, left, right, and upper surfaces of the left foot, the three-dimensional object image 160 can be generated from the two-dimensional object images 130.
  • Meanwhile, for the part of the photographing target 10 in contact with the floor, such as the sole of the foot, the three-dimensional object image 160 can be modeled using an actual grid network with a standardized scale, such as graph paper.
  • Since the front, rear, left, right, and upper surfaces of the foot are formed as curves, the shape of the sole can easily be grasped based on the outer boundary of the sole surface.
  • With the virtual grid network 150, the shape of the foot is easily grasped through the curved edges of the first unit grids 151 and second unit grids 152 or through the contrast formed on their inner surfaces, while the actual grid network, such as graph paper, has a standardized interval, so the shape of the sole surface can easily be modeled directly.
  • For example, water or ink may be applied to the sole and the sole brought into contact with the graph paper, or the sole may be pressed onto graph paper coated with ink; the shape of the sole can then be modeled through its outer boundary exposed on the graph paper.
  • As described above, the present invention can be used in the field of methods for generating a three-dimensional model of a two-dimensional image through a virtual grid network, in which curve interpolation is performed using a grid network to correct disturbances that may occur when a plurality of two-dimensional images overlap.

Abstract

The present invention relates to a method for generating three-dimensional modeling of a two-dimensional image through a virtual grid network. The method comprises: a two-dimensional image extraction step of photographing a plurality of surfaces of a target a plurality of times and removing the background image from each photographed image; a step of generating two-dimensional object images by superimposing a plurality of two-dimensional images obtained by photographing the same surface and extracting a median value of the error values that occur across the superimposed two-dimensional images; a step of generating a virtual grid network and attaching it to the two-dimensional object images; and a step of generating a three-dimensional object image by performing curve interpolation on the first unit grids and second unit grids that are superimposed at the boundaries of the plurality of two-dimensional object images to which the virtual grid network has been attached. In this way, two-dimensional images of a photographed object may be easily and conveniently modeled into a three-dimensional image.

Description

Method for generating three-dimensional modeling of a two-dimensional image through a virtual grid network
The present invention relates to a method for generating a three-dimensional model of a two-dimensional image through a virtual grid network, in which, when modeling a two-dimensional image into a three-dimensional image, curve interpolation is performed using a grid network to correct disturbances that may occur when a plurality of two-dimensional images overlap.
As is well known, the number of fields in which an object is presented as a three-dimensional image so that a user perceives it like a real object is increasing, including virtual reality and augmented reality as well as online exhibitions and e-commerce.
To present an object as a three-dimensional image in this way, the object can be photographed from various angles and the photographed two-dimensional images modeled in three dimensions, generating a three-dimensional image of the object.
Conventionally, an object was modeled as a three-dimensional image by, for example, photographing it from various directions at different focal lengths to create various two-dimensional images, or by setting reference points on each part of the object and connecting the reference points with lines to form an image of the object.
However, in such cases, when a plurality of two-dimensional images of parts of an object taken from various angles overlap, there is a problem in that the two-dimensional images disturb one another.
In addition, since photographing must be performed with multiple cameras or similar equipment, generating the three-dimensional modeling data requires excessive cost, making it difficult for a user to perform the work easily.
To model a two-dimensional image of an object in three dimensions, the present invention seeks to provide a method for generating a three-dimensional model of a two-dimensional image through a virtual grid network, capable of generating a three-dimensional object image by bonding a temporary grid network and a virtual grid network to the two-dimensional image.
The present invention also seeks to provide a method for generating a three-dimensional model of a two-dimensional image through a virtual grid network in which, to remove disturbances that may occur between images where the photographed two-dimensional images of the target overlap, curve interpolation is performed that matches the corner surfaces of the unit grids or matches the contrast inside the unit grids.
The objects of the embodiments of the present invention are not limited to those mentioned above; other objects not mentioned will be clearly understood from the description below by those of ordinary skill in the art to which the present invention belongs.
According to an embodiment of the present invention, there may be provided a method for generating a three-dimensional model of a two-dimensional image through a virtual grid network, comprising: a two-dimensional image extraction step of photographing a plurality of surfaces of a photographing target a plurality of times and removing a background image from each photographed image; a step of generating a two-dimensional object image by superimposing a plurality of the two-dimensional images obtained by photographing the same surface and extracting a median value from the error values formed across the plurality of overlapping two-dimensional images; a step of generating a virtual grid network and bonding it to the two-dimensional object image; and a step of generating a three-dimensional object image by curve-interpolating the first unit grids and second unit grids that overlap at the boundaries of the plurality of two-dimensional object images to which the virtual grid network is bonded.
According to an embodiment of the present invention, in the two-dimensional image extraction step, a temporary grid network may be bonded to the photographed image, and the background image may be separated based on the boundary between the non-curved and curved regions formed in the temporary grid network.
According to an embodiment of the present invention, the virtual grid network may be formed of a plurality of black first unit grids and white second unit grids arranged alternately.
According to an embodiment of the present invention, the three-dimensional object image generation step may include forming the three-dimensional object image by overlapping the first unit grids so that the curved edge surfaces of the plurality of first unit grids are connected, and overlapping the second unit grids so that the curved edge surfaces of the plurality of second unit grids are connected, thereby interpolating the curves.
According to an embodiment of the present invention, the three-dimensional object image generation step may include forming the three-dimensional object image by overlapping the plurality of second unit grids, on whose inner surfaces light and dark contrast is generated, so that the contrasts match, thereby interpolating the curves.
The present invention removes the background image of an object by bonding a temporary grid network to a two-dimensional image of the object, and bonds a virtual grid network formed of first unit grids and second unit grids to the two-dimensional object image, so that the object can easily be modeled as a three-dimensional image.
In addition, the present invention can effectively remove disturbances between two-dimensional images by curve-interpolating the first and second unit grids in the portions where the two-dimensional images of the object overlap.
FIG. 1 is a flowchart illustrating a method for generating a three-dimensional model of a two-dimensional image through a virtual grid network according to an embodiment of the present invention.
FIG. 2 is a diagram illustrating the virtual grid network of the method according to an embodiment of the present invention.
FIG. 3 is a diagram explaining the process of generating a three-dimensional model of a two-dimensional image through a virtual grid network according to an embodiment of the present invention.
The advantages and features of embodiments of the present invention, and the methods of achieving them, will become apparent with reference to the embodiments described in detail below in conjunction with the accompanying drawings. The present invention is not, however, limited to the embodiments disclosed below and may be implemented in various different forms; these embodiments are provided only so that the disclosure of the present invention is complete and so that the scope of the invention is fully conveyed to those of ordinary skill in the art to which the present invention belongs. The present invention is defined only by the scope of the claims. Like reference numerals refer to like elements throughout.
In describing the embodiments of the present invention, detailed descriptions of well-known functions or configurations will be omitted where they would unnecessarily obscure the gist of the present invention. The terms used below are defined in consideration of their functions in the embodiments of the present invention and may vary according to the intentions or customs of users and operators; their definitions should therefore be made based on the content of this entire specification.
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
FIG. 1 is a flowchart illustrating a method for generating a three-dimensional model of a two-dimensional image through a virtual grid network according to an embodiment of the present invention, FIG. 2 is a diagram illustrating the virtual grid network of that method, and FIG. 3 is a diagram explaining the process of generating the three-dimensional model.
Referring to FIGS. 1 to 3, the process of generating a three-dimensional image from two-dimensional images using a virtual grid network is as follows. First, a two-dimensional image extraction step may be performed in which a plurality of faces of a photographing target 10 are each photographed a plurality of times and the background image 110 is removed from each captured image 100 to obtain a two-dimensional image 120 (step 210).
That is, when the photographing target 10 is photographed, the captured image 100 includes the background image 110 against which the target is positioned. Because the background image 110 is unnecessary data in the process of modeling the three-dimensional object image 160, the process of removing the background image 110 from the captured image 100 is performed first.
First, the photographing target 10 may be photographed a plurality of times from its front, rear, left side, right side, top, and other faces, so that a plurality of two-dimensional images corresponding to each face are output.
For example, as shown in FIG. 3, when the left foot of a human body is photographed, a front shot toward the toes, left- and right-side shots toward the ankle bones, a rear shot toward the heel, and a top shot of the instep are taken, so that a plurality of captured images 100 of the respective parts are output.
In particular, to remove the background image 110 from the captured image 100 of the foot, a temporary grid network 140 may be overlaid on the captured image 100.
Here, the temporary grid network 140 is a lattice in which a plurality of rectangular or square cells, such as black and white cells, are arranged alternately. The cells are not limited to black and white; any colors may be used, provided the alternating cells contrast with one another so that each cell can be distinguished.
When the temporary grid network 140 is overlaid on the captured image 100, its form deforms according to the shape of the photographing target 10. Where the temporary grid network 140 covers a concave portion of the target 10, it appears deformed into a concave shape; that is, concave curvature is formed in the temporary grid network 140 following the concave shape of the target, so that the shape of the target 10 can be determined from the temporary grid network 140.
For example, when the temporary grid network 140 is overlaid on captured images 100 of the various parts of the foot, it is deformed according to the shape of the foot. To generate the two-dimensional image 120 of the foot, temporary grid networks 140 overlaid on the front, rear, and left- and right-side images 100 may all be used; however, for separating and removing the background image 110 from the captured image 100, the image photographed from above the foot in particular allows the shape of the foot to be determined effectively.
Specifically, when the foot is photographed from above, the captured image 100 includes the image of the foot, including the instep, together with the background image 110. When the temporary grid network 140 is overlaid on such an image, the portion 141 of the grid lying over the foot appears curved according to the shape of the foot, while the portion lying over the background, outside the foot, appears as undistorted rectangles or squares.
Accordingly, by continuously tracing the boundary between the curved portion of the temporary grid network 140 and the uncurved portion, only the region of the grid that expresses the shape of the foot can be separated, so that the background image 110 is easily separated and removed from the captured image 100 to yield the two-dimensional image 120.
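The patent does not specify how the curved and uncurved grid regions are told apart; as an illustrative sketch only, foreground cells could be found by thresholding each cell's measured deviation from an undistorted rectangle. The helper name `separate_background`, the threshold, and the toy deviation values below are assumptions for illustration, not part of the disclosure.

```python
import numpy as np

def separate_background(cell_deviation, threshold=1.0):
    """Classify grid cells as foreground (deformed) or background (flat).

    cell_deviation: 2-D array where each entry is the measured deviation
    of one grid cell from an undistorted rectangle (e.g. mean corner
    displacement in pixels). Cells whose deviation exceeds `threshold`
    are assumed to lie on the photographed object; the rest are
    background. Returns a boolean mask over the cells.
    """
    return np.asarray(cell_deviation) > threshold

# A toy 4x5 grid: only the centre cells are bent by the object.
deviation = np.array([
    [0.1, 0.2, 0.1, 0.2, 0.1],
    [0.2, 3.5, 4.1, 3.8, 0.1],
    [0.1, 3.9, 4.6, 3.2, 0.2],
    [0.2, 0.1, 0.2, 0.1, 0.1],
])
mask = separate_background(deviation)
print(mask.sum())  # 6 foreground cells
```

The two-dimensional image 120 would then be cut out along the outer edge of the foreground cells.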
Next, a step of generating a two-dimensional object image 130 may be performed, in which a plurality of two-dimensional images 120 obtained by photographing the same face of the target are superimposed and the median of the error values formed among the superimposed two-dimensional images 120 is extracted (step 220).
The two-dimensional images 120, from which the background image 110 has been removed so that only the shape of the target 10 remains, are obtained by photographing the target 10 from various sides a plurality of times. Even when the same part of the target 10 is photographed in the same way, the two-dimensional images 120 may not come out identical owing to photographing conditions such as camera vibration and the amount or intensity of light at the time of capture.
To prevent such errors, a process of correcting the discrepancies among the plurality of two-dimensional images 120 may be performed.
For example, the plurality of two-dimensional images 120 of one part of the target 10 are first superimposed so that they can be viewed without interfering with one another. The superimposed images 120 are then placed on an X-Y coordinate plane, and a plurality of reference positions of the target 10 visible in each image 120 are expressed as XY coordinates.
Next, for each reference position, whose XY value may differ from image to image, the median is derived, and the positions of the derived median XY coordinates are connected with reference to the shape of the target 10, thereby generating a two-dimensional object image 130 in which the error among the plurality of two-dimensional images 120 has been corrected.
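The median step above can be sketched directly: taking the per-point median of corresponding reference positions suppresses outliers caused by camera vibration or lighting changes. The function name and the sample coordinates below are illustrative assumptions.

```python
import numpy as np

def correct_with_median(reference_sets):
    """Median-combine reference positions measured in repeated shots.

    reference_sets: array of shape (n_images, n_points, 2) holding the XY
    coordinates of the same reference positions in each repeated image of
    one face. The per-point median is robust to a single outlying shot.
    Returns an (n_points, 2) array of corrected positions.
    """
    pts = np.asarray(reference_sets, dtype=float)
    return np.median(pts, axis=0)

# Three shots of the same face; the third is shifted by noise.
shots = [
    [[10.0, 20.0], [30.0, 40.0]],
    [[10.2, 19.8], [30.1, 40.2]],
    [[12.0, 22.0], [29.0, 39.0]],
]
corrected = correct_with_median(shots)
print(corrected)  # per-point medians: [[10.2, 20.0], [30.0, 40.0]]
```

Connecting the corrected positions along the target's outline would then give the two-dimensional object image 130.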
Next, a process of attaching a virtual grid network 150 to the two-dimensional object image 130 is performed (step 230).
Here, like the temporary grid network 140, the virtual grid network 150 consists of a plurality of rectangular or square black first unit grids 151 and white second unit grids 152 arranged alternately in a lattice. The unit grids are not limited to black and white; any colors may be used, provided the alternating first unit grids 151 and second unit grids 152 contrast so that they can be distinguished.
When the virtual grid network 150 is attached to the two-dimensional object image 130, its first unit grids 151 and second unit grids 152 deform according to the shape of the photographing target 10, so that the shape of each part of the target 10 is expressed in the virtual grid network 150.
Next, a process of generating a three-dimensional object image 160 is performed by curve-interpolating the first unit grids 151 and second unit grids 152 that overlap at the boundaries of the plurality of two-dimensional object images 130 to which the virtual grid network 150 has been attached (step 240).
Each two-dimensional object image 130 with the virtual grid network 150 attached is an image of the target 10 photographed from one side; to generate the three-dimensional object image 160, the two-dimensional object images 130 of the individual parts must be corrected and merged into a single image.
To this end, when the two-dimensional object images 130 of the individual parts are assembled, the regions where the images 130 overlap are corrected or interpolated.
That is, the first unit grids 151 whose edge surfaces are curved are superimposed so that their edge surfaces connect, and likewise the second unit grids 152 whose edge surfaces are curved are superimposed so that their edge surfaces connect, thereby interpolating the curves (step 241).
Specifically, the edge surface of a first unit grid 151 or second unit grid 152 formed at the right end of the two-dimensional object image 130 photographed from above the left foot is curved from the instep toward the right (or left) side of the foot.
Likewise, the edge surface of a first unit grid 151 or second unit grid 152 formed at the left end of the two-dimensional object image 130 photographed from the right side of the left foot is curved from the right (or left) side of the foot toward the instep.
Thus, when the two-dimensional object image 130 photographed from above and the two-dimensional object image 130 photographed from the right (or left) are combined, the images overlap where they meet and interfere with each other, which makes it difficult to output an accurate three-dimensional object image 160.
To eliminate this problem, the edge surface of the first unit grid 151 (or second unit grid 152) formed at the right end of the top-view image 130 of the left foot is matched to the edge surface of the first unit grid 151 (or second unit grid 152) formed at the left end of the right-side-view image 130. That is, curve interpolation is performed so that the four curved edges of each cell (top, bottom, left, and right) are matched first unit grid 151 to first unit grid 151 and second unit grid 152 to second unit grid 152, thereby correcting the region where the image 130 photographed from the instep and the image 130 photographed from the right side of the foot overlap and interfere.
By the curve interpolation described above, the overlaps occurring at the boundaries of the respective two-dimensional object images 130 photographed from the front, rear, left side, right side, and top of the left foot are interpolated, so that the three-dimensional object image 160 can be generated from the two-dimensional object images 130.
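The seam matching above can be illustrated with a minimal sketch: where two views measure the same boundary curve, their corresponding points are averaged and the result is resampled to give one smooth seam. The function name, the 3-D point lists, and the use of point-wise averaging followed by linear resampling are all assumptions; the patent only states that matching cell edges are curve-interpolated.

```python
import numpy as np

def blend_boundary(curve_a, curve_b, n_samples=50):
    """Merge two measurements of the same seam curve from adjacent views.

    curve_a, curve_b: (n, 3) arrays of corresponding 3-D points along the
    boundary where two views overlap. The curves are averaged point-wise
    and then resampled by linear interpolation along the curve parameter,
    giving one seam in place of the two conflicting ones.
    """
    a = np.asarray(curve_a, float)
    b = np.asarray(curve_b, float)
    mid = 0.5 * (a + b)                       # point-wise average of the seam
    t = np.linspace(0.0, 1.0, len(mid))       # original parameterisation
    t_new = np.linspace(0.0, 1.0, n_samples)  # denser parameterisation
    return np.column_stack(
        [np.interp(t_new, t, mid[:, k]) for k in range(3)])

top = [[0, 0, 1.0], [1, 0, 1.2], [2, 0, 1.1]]   # seam seen from the top view
side = [[0, 0, 1.2], [1, 0, 1.0], [2, 0, 1.3]]  # same seam, side view
seam = blend_boundary(top, side, n_samples=5)
print(seam.shape)  # (5, 3)
```

A spline rather than linear resampling would give a smoother seam; the linear version keeps the sketch dependency-free.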
In addition, the plurality of second unit grids 152 whose inner surfaces carry shading may be superimposed so that their shading matches, thereby interpolating the respective two-dimensional object images 130 (step 242).
Specifically, since the second unit grid 152 may be white or a similar color, shading arises on the second unit grid 152 depending on the angle from which the target 10 is photographed and the illumination at the time of capture, and this shading may appear as a gradation.
That is, the inner surface of the second unit grid 152 formed at the right end of the top-view image 130 of the left foot may carry a gradation that darkens from the instep toward the right (or left) side of the foot.
Likewise, the inner surface of the second unit grid 152 formed at the left end of the right-side-view image 130 of the left foot may carry a gradation that darkens from the right (or left) side of the foot toward the instep.
Although the shading on the inner surfaces of these second unit grids 152 darkens in different directions, the two are second unit grids 152 of two-dimensional object images 130 photographing the same part of the left foot, and thus correspond to one another.
Accordingly, the second unit grids 152 may be matched either by removing the shading from both the second unit grid 152 at the right end of the top-view image 130 and the second unit grid 152 at the left end of the right-side-view image 130, or by transforming the shading of one second unit grid 152 to match that of another taken as the reference; the region where the image 130 photographed from the instep and the image 130 photographed from the right side of the foot overlap can then be curve-interpolated.
By the curve interpolation described above, the overlaps occurring at the boundaries of the respective two-dimensional object images 130 photographed from the front, rear, left side, right side, and top of the left foot are interpolated, and the three-dimensional object image 160 is generated from the two-dimensional object images 130.
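The shading-removal variant described above can be sketched as follows: fitting and subtracting a linear intensity ramp from each cell leaves only the residual texture, after which cells of the same surface patch, shaded in different directions in different views, become directly comparable. The plane-fitting approach and all names below are illustrative assumptions.

```python
import numpy as np

def remove_gradation(cell):
    """Strip a linear shading gradient from a white unit-grid cell.

    Fits a plane a*x + b*y + c to the cell's intensities by least squares
    and subtracts it, leaving only the residual texture.
    """
    cell = np.asarray(cell, float)
    h, w = cell.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    coef, *_ = np.linalg.lstsq(A, cell.ravel(), rcond=None)
    return cell - (A @ coef).reshape(h, w)

# Two views of the same flat cell, shaded along different axes.
ys, xs = np.mgrid[0:8, 0:8]
view_top = 200.0 - 5.0 * xs    # darkens left-to-right
view_side = 200.0 - 5.0 * ys   # darkens top-to-bottom
flat_a, flat_b = remove_gradation(view_top), remove_gradation(view_side)
print(np.abs(flat_a - flat_b).max() < 1e-6)  # True: shading removed
```

Transforming one cell's shading into the other's frame, rather than removing both, would correspond to the second option mentioned in the text.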
Meanwhile, for the part of the target 10 in contact with the ground, such as the sole of the foot, the three-dimensional object image 160 may be modeled using an actual grid of fixed dimensions, such as graph paper.
Unlike the front, rear, left and right sides, and top of the foot, which are formed as curved surfaces, the sole can readily be characterized by its outer boundary.
Here, whereas the virtual grid network 150 makes it easy to determine the shape of the foot through the curved edge surfaces of the first unit grids 151 and second unit grids 152 and the shading formed on their inner surfaces, an actual grid such as graph paper has standardized spacing and therefore makes it easy to model the shape of the sole at true scale.
Accordingly, the shape of the sole can be modeled from its outer boundary as exposed on the graph paper, either by applying water or ink to the sole and pressing it onto graph paper, or by pressing the sole onto graph paper to which ink has been applied.
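Reading the footprint off the graph paper reduces to extracting the outer boundary of a binary imprint; a minimal sketch under that assumption marks each inked grid square that has at least one un-inked 4-neighbour. The function name and the toy imprint are assumptions, not part of the disclosure.

```python
import numpy as np

def footprint_outline(mask):
    """Extract the outer boundary cells of a binary footprint.

    mask: 2-D boolean array, True where the inked sole touched a square
    of the graph paper. A cell is a boundary cell if it is inked but has
    at least one 4-neighbour that is not, which traces the sole's outline
    at the paper's known grid spacing.
    """
    m = np.asarray(mask, bool)
    padded = np.pad(m, 1, constant_values=False)
    # A cell is interior only if all four 4-neighbours are also inked.
    interior = (padded[1:-1, :-2] & padded[1:-1, 2:] &
                padded[:-2, 1:-1] & padded[2:, 1:-1])
    return m & ~interior

imprint = np.zeros((5, 5), bool)
imprint[1:4, 1:4] = True          # a 3x3 inked blob
outline = footprint_outline(imprint)
print(outline.sum())  # 8: the ring around the single interior cell
```

Because the paper's squares have known physical size, counting cells along the outline directly yields real-world sole dimensions.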
Although various embodiments of the present invention have been presented and described above, the invention is not necessarily limited thereto, and those of ordinary skill in the art to which the invention pertains will readily appreciate that various substitutions, modifications, and alterations are possible without departing from the technical spirit of the invention.
In modeling a two-dimensional image into a three-dimensional image, the present invention is applicable in the field of methods for generating a three-dimensional model of a two-dimensional image through a virtual grid network, in which curve interpolation is performed using the grid network to correct the interference that can arise where a plurality of two-dimensional images overlap.

Claims (5)

  1. A method for generating a three-dimensional model of a two-dimensional image through a virtual grid network, the method comprising:
    a two-dimensional image extraction step of photographing a plurality of faces of a photographing target a plurality of times and removing a background image from the captured images;
    generating a two-dimensional object image by superimposing a plurality of the two-dimensional images photographing a same face and extracting a median of error values formed among the superimposed two-dimensional images;
    generating a virtual grid network and attaching it to the two-dimensional object image; and
    generating a three-dimensional object image by curve-interpolating a first unit grid and a second unit grid that overlap at boundaries of the plurality of two-dimensional object images to which the virtual grid network is attached.
  2. The method of claim 1, wherein the two-dimensional image extraction step attaches a temporary grid network to the captured image and separates the background image on the basis of a boundary between non-curved and curved surfaces formed in the temporary grid network.
  3. The method of claim 2, wherein the virtual grid network is formed by alternately arranging a plurality of the first unit grids, which are black, and a plurality of the second unit grids, which are white.
  4. The method of claim 3, wherein the step of generating the three-dimensional object image comprises forming the three-dimensional object image by superimposing the plurality of first unit grids whose edge surfaces are curved so that their edge surfaces connect, and superimposing the plurality of second unit grids whose edge surfaces are curved so that their edge surfaces connect, thereby interpolating the curves.
  5. The method of claim 3 or 4, wherein the step of generating the three-dimensional object image comprises forming the three-dimensional object image by superimposing the plurality of second unit grids, on whose inner surfaces shading is generated, so that the shading matches, thereby interpolating a curve.
PCT/KR2020/017622 2020-08-25 2020-12-04 Method for generating three-dimensional modelding of two-dimensional image through virtual grid network WO2022045470A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020200106931A KR102303566B1 (en) 2020-08-25 2020-08-25 Method for generating 3D modeling of 2D images using virual grid networks
KR10-2020-0106931 2020-08-25

Publications (1)

Publication Number Publication Date
WO2022045470A1 true WO2022045470A1 (en) 2022-03-03

Family

ID=77923959

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2020/017622 WO2022045470A1 (en) 2020-08-25 2020-12-04 Method for generating three-dimensional modelding of two-dimensional image through virtual grid network

Country Status (2)

Country Link
KR (1) KR102303566B1 (en)
WO (1) WO2022045470A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4934789B2 (en) * 2006-01-23 2012-05-16 国立大学法人横浜国立大学 Interpolation processing method and interpolation processing apparatus
KR20150031085A (en) * 2013-09-13 2015-03-23 인하대학교 산학협력단 3D face-modeling device, system and method using Multiple cameras
JP2016119077A (en) * 2014-12-23 2016-06-30 ダッソー システムズDassault Systemes 3d modeled object defined by grid of control points
KR101851303B1 (en) * 2016-10-27 2018-04-23 주식회사 맥스트 Apparatus and method for reconstructing 3d space
KR20200032664A (en) * 2018-09-18 2020-03-26 서울대학교산학협력단 Device for 3D image reconstruction using rectangular grid projection

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101693259B1 (en) 2015-06-17 2017-01-10 (주)유니드픽쳐 3D modeling and 3D geometry production techniques using 2D image
KR101904842B1 (en) 2018-01-30 2018-11-21 강석주 Three-dimensional modeling method of two-dimensional image and recording medium storing program for executing the same, and recording medium storing program for executing the same


Also Published As

Publication number Publication date
KR102303566B1 (en) 2021-09-17


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20951700

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20951700

Country of ref document: EP

Kind code of ref document: A1