WO2022045470A1 - Method for generating three-dimensional modeling of a two-dimensional image through a virtual grid network - Google Patents

Method for generating three-dimensional modeling of a two-dimensional image through a virtual grid network

Info

Publication number
WO2022045470A1
WO2022045470A1 (PCT/KR2020/017622)
Authority
WO
WIPO (PCT)
Prior art keywords
dimensional
image
grid network
images
generating
Prior art date
Application number
PCT/KR2020/017622
Other languages
English (en)
Korean (ko)
Inventor
윤기식
Original Assignee
윤기식
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 윤기식 filed Critical 윤기식
Publication of WO2022045470A1

Classifications

    • G: PHYSICS
      • G06: COMPUTING; CALCULATING OR COUNTING
        • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
            • G06T17/20: Finite element generation, e.g. wire-frame surface description, tessellation
          • G06T3/00: Geometric image transformations in the plane of the image
            • G06T3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
          • G06T7/00: Image analysis
            • G06T7/10: Segmentation; Edge detection
              • G06T7/11: Region-based segmentation
              • G06T7/194: Segmentation; Edge detection involving foreground-background segmentation

Definitions

  • The present invention relates to a method for generating a three-dimensional model of a two-dimensional image through a virtual grid network, in which curve interpolation is performed using the grid network to correct disturbances that may occur where a plurality of two-dimensional images overlap.
  • An object can be rendered as a three-dimensional image by photographing it from various angles and modeling the photographed two-dimensional images in three dimensions.
  • Conventionally, a variety of two-dimensional images were created by photographing an object at different focal lengths from various directions, or by setting reference points on each part of the object and extending lines between the reference points to form an image of the object; these two-dimensional images were then modeled into a three-dimensional image.
  • The present invention relates to a method for generating a three-dimensional model of a two-dimensional image through a virtual grid network, capable of generating a three-dimensional object image by attaching a temporary grid network and a virtual grid network to the two-dimensional image.
  • The present invention performs curve interpolation that matches the corner surfaces of the unit grids, or matches the contrast inside the unit grids, in order to remove the disturbance that may occur between images where the photographed two-dimensional images of the object overlap; a three-dimensional modeling generation method performing such interpolation may thereby be provided.
  • A temporary grid network is bonded to the photographed image, and the background image is separated based on the boundary between the non-curved and curved regions formed in the temporary grid network; a method for generating a three-dimensional model of a two-dimensional image through a virtual grid network that separates the background in this way may be provided.
  • The virtual grid network includes a plurality of black first unit grids and white second unit grids arranged alternately; a method for generating a three-dimensional model of a two-dimensional image through such a virtual grid network may be provided.
  • The first unit grids are overlapped so that the curved edge surfaces of the plurality of first unit grids are connected to each other, and the second unit grids are overlapped so that the curved edge surfaces of the plurality of second unit grids are connected, thereby forming the three-dimensional object image by curve interpolation; a method for generating a three-dimensional model of a two-dimensional image through a virtual grid network including this step may be provided.
  • The second unit grids, on whose inner surfaces light and dark contrast is generated, are superimposed so that their contrasts coincide, forming the three-dimensional object image with the curves interpolated; a method for generating a three-dimensional model of a two-dimensional image through a virtual grid network including this step may be provided.
  • The present invention removes the background image by attaching a temporary grid network to the two-dimensional image of the object and, by attaching the virtual grid network formed of the first and second unit grids to the two-dimensional object image, allows the object to be modeled easily as a 3D image.
  • The present invention can effectively remove the disturbance between the two-dimensional images by interpolating each of the first and second unit grids with curves in the portions where the two-dimensional images of the object overlap.
  • FIG. 1 is a flowchart illustrating a method for generating a three-dimensional model of a two-dimensional image through a virtual grid network according to an embodiment of the present invention.
  • FIG. 2 is a diagram illustrating a virtual grid network of a method for generating a three-dimensional model of a two-dimensional image through a virtual grid network according to an embodiment of the present invention.
  • FIG. 3 is a view for explaining a process of generating a three-dimensional model of a two-dimensional image through a virtual grid network according to an embodiment of the present invention.
  • First, a two-dimensional image 120 extraction step may be performed, in which the object is photographed and the background image 110 is removed from the resulting photographed image 100 (step 210).
  • The photographed image 100, including the background image 110 against which the photographing object 10 is located, may be captured and output; since this background image 110 corresponds to unnecessary data in the process of modeling the three-dimensional object image 160, the process of removing the background image 110 from the photographed image 100 is performed first.
  • A plurality of two-dimensional images, one corresponding to each surface, may be output by photographing the object 10 a plurality of times from the front, rear, left, right, and upper surfaces.
  • Next, the process of bonding the temporary grid network 140 to the photographed image 100 may be performed.
  • The temporary grid network 140 is a grid of black and white quadrangles (rectangles, squares, etc.) arranged alternately. The quadrangles are not limited to black and white; a plurality of colors may be used, provided the quadrangles are arranged alternately so that adjacent quadrangles contrast and can be distinguished from each other.
  • The shape of the temporary grid network 140 may vary according to the shape of the object 10 to be photographed.
  • Where the temporary grid network 140 is attached over a concave portion of the photographing target 10, it is exposed deformed into that concave shape. That is, concave curves are formed in the temporary grid network 140 following the concave shape of the object 10, so that the shape of the object 10 can be determined through the temporary grid network 140.
  • For example, when the temporary grid network 140 is attached to a photographed image 100 in which parts of a foot are captured, the temporary grid network 140 is deformed and exposed according to the shape of the foot.
  • A temporary grid network 140 bonded to photographed images 100 of the front, rear, left, and right sides of the foot may be used to remove the background image from the photographed image 100 of the foot.
  • The photographed image 100 may include an image of the foot, including the instep, together with the background image 110.
  • When the temporary grid network 140 is attached to the photographed image 100 as described above, the portion of the temporary grid network 140 located over the foot is bent and exposed according to the shape of the foot, while the portion located over the background is exposed as undeformed rectangles, squares, etc., in which no curves are formed.
  • By continuously tracing the boundary between the curved and non-curved portions of the temporary grid network 140, only the portion in which the shape of the foot is expressed can be separated; by thus easily separating and removing the background image 110 from the photographed image 100, the two-dimensional image 120 can be derived.
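The boundary-based separation described above can be sketched as follows. This is a minimal illustration, assuming each cell of the temporary grid network 140 has already been given a deformation (curvature) score; the score array and the threshold are hypothetical quantities, since the patent only states that curved and non-curved cells are separated along their boundary.

```python
import numpy as np

def separate_foreground(cell_curvature, threshold=0.1):
    """Label grid cells as foreground where the temporary grid is
    visibly deformed (curvature above threshold); cells whose squares
    stay straight-edged are treated as background."""
    return cell_curvature > threshold

# 4x4 grid of cells: the centre 2x2 block is deformed (the "foot"),
# the border cells stay flat (the background).
curvature = np.zeros((4, 4))
curvature[1:3, 1:3] = 0.5
mask = separate_foreground(curvature)
print(mask.sum())  # 4 foreground cells
```

Removing every cell outside the mask then corresponds to deriving the two-dimensional image 120 with the background image 110 stripped away.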
  • A two-dimensional object image 130 may be generated by superimposing a plurality of two-dimensional images 120 obtained by photographing the same surface of the object and extracting the median of the error values formed among the plurality of overlapping two-dimensional images 120 (step 220).
  • Because the photographing object 10 is photographed multiple times from various sides, the two-dimensional images 120, in which the background image 110 has been removed and only the shape of the photographing target 10 is exposed, may not be output identically: errors can arise from shooting conditions such as vibration of the imaging device and the light intensity at the time of shooting.
  • Therefore, a process of correcting the errors between the plurality of two-dimensional images 120 may be performed.
  • In this way, the two-dimensional images 120 can be output so that they are exposed without interference among the plurality of overlapping two-dimensional images 120.
  • The plurality of two-dimensional images 120 output in this way are positioned on coordinates formed along the X-Y axes, and the plurality of reference positions of the photographing target 10 exposed in each two-dimensional image 120 are expressed as X-Y coordinates.
  • For each reference position, whose X-Y position value may differ from one two-dimensional image 120 to another, a median value is derived, and the X-Y coordinate of the derived median is taken as the reference position of the shape of the imaging target 10.
  • In this way, the two-dimensional object image 130, in which the error values between the plurality of two-dimensional images 120 are corrected, may be generated.
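The median-based correction of step 220 can be sketched as below, assuming the reference positions have already been located in each two-dimensional image 120; the array shape and the point values are illustrative, not taken from the patent.

```python
import numpy as np

def median_reference_points(images_points):
    """Combine reference-point coordinates measured in several 2-D
    images of the same surface by taking the per-point median, which
    suppresses outliers caused by camera shake or lighting changes.

    `images_points` has shape (n_images, n_points, 2), holding the
    (x, y) coordinate of each reference position in each image."""
    return np.median(images_points, axis=0)

# Three shots of the same surface; the third one is disturbed.
pts = np.array([
    [[10.0, 20.0], [30.0, 40.0]],
    [[10.2, 19.8], [30.1, 40.2]],
    [[12.0, 22.0], [28.0, 38.0]],   # shot with vibration error
])
corrected = median_reference_points(pts)
print(corrected)  # stays close to the two consistent shots
```

The corrected coordinates then serve as the reference positions of the two-dimensional object image 130.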
  • In step 230, a process of bonding the virtual grid network 150 to the two-dimensional object image 130 is performed.
  • Like the temporary grid network 140, the virtual grid network 150 is provided with a plurality of black first unit grids 151 and white second unit grids 152 formed as rectangles, squares, etc. The first unit grids 151 and second unit grids 152 are not limited to black and white; the unit grids may be given any colors, provided they are arranged alternately so that the first unit grids 151 and second unit grids 152 can be distinguished from each other.
  • When the virtual grid network 150 is bonded to the two-dimensional object image 130, the plurality of first unit grids 151 and second unit grids 152 are deformed according to the shape of the object 10, so that the shape of each part of the object 10 is exposed in the virtual grid network 150.
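The alternating arrangement of first and second unit grids can be sketched as a simple checkerboard; encoding black first unit grids as 0 and white second unit grids as 1 is an assumption made for illustration only.

```python
import numpy as np

def make_virtual_grid(rows, cols):
    """Build the virtual grid network: first (black, 0) and second
    (white, 1) unit grids alternate like a checkerboard, so any two
    neighbouring cells can be told apart by contrast."""
    r, c = np.indices((rows, cols))
    return (r + c) % 2  # 0 = first unit grid, 1 = second unit grid

grid = make_virtual_grid(4, 4)
print(grid)
```

In a full implementation this flat grid would then be warped onto the two-dimensional object image 130 so that each cell follows the local shape of the object.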
  • In step 240, a three-dimensional object image 160 is generated.
  • Each two-dimensional object image 130 to which the virtual grid network 150 is attached is an image output by photographing the object 10 from one side; to generate the three-dimensional object image 160, a process of forming a single image by combining the two-dimensional object images 130 obtained by photographing each part of the object 10 is necessary.
  • The first unit grids 151 are overlapped so that the curved edge surfaces of the plurality of first unit grids 151 are connected, and the second unit grids 152 are overlapped so that the curved edge surfaces of the plurality of second unit grids 152 are connected, thereby interpolating the curves (step 241).
  • For example, the edge surface of the first unit grid 151 (or second unit grid 152) formed at the right end of the two-dimensional object image 130 photographed from the upper surface of the left foot of a human body may be formed as a curve running from the instep of the left foot toward its right (or left) side.
  • Conversely, the edge surface of the first unit grid 151 (or second unit grid 152) formed at the left end of the two-dimensional object image 130 photographed from the right side of the left foot may be formed as a curve running from the right (or left) side of the left foot toward the instep.
  • The two-dimensional object images 130 overlap at their adjoining portions, and disturbances occur between them in the overlapping region, which may make it difficult to output an accurate three-dimensional object image 160.
  • Therefore, the edge surface of the first unit grid 151 (or second unit grid 152) formed at the right end of the two-dimensional object image 130 photographed from the upper surface of the left foot is matched with the edge surface of the first unit grid 151 (or second unit grid 152) formed at the left end of the two-dimensional object image 130 photographed from the right side of the left foot. That is, first unit grids 151 are matched with first unit grids 151, and second unit grids 152 with second unit grids 152, along the curved edge surfaces at each of their four corners.
  • The overlap between the two-dimensional object images 130 is thereby interpolated, and through this the three-dimensional object image 160 may be generated.
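The edge matching of step 241 can be sketched as below, modeling each curved edge surface as a sampled 1-D profile; this representation and the tolerance are illustrative assumptions, as the patent describes the matching only qualitatively.

```python
import numpy as np

def match_edge_curves(edge_a, edge_b, atol=0.05):
    """Check whether the curved edge of a unit grid in one view
    matches the edge of the corresponding unit grid in the adjacent
    view; matching edges are overlapped so the two 2-D object images
    join without disturbance. `atol` is a hypothetical tolerance."""
    return np.allclose(edge_a, edge_b, atol=atol)

# Edge of a unit grid at the right end of the top view of the foot ...
top_view_edge = np.sin(np.linspace(0.0, 1.0, 8))
# ... and the matching grid edge at the left end of the side view.
side_view_edge = np.sin(np.linspace(0.0, 1.0, 8))
print(match_edge_curves(top_view_edge, side_view_edge))  # True
```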
  • Each of the two-dimensional object images 130 may also be interpolated by overlapping the plurality of second unit grids 152, on whose inner surfaces light and dark are generated, so that their contrasts coincide (step 242).
  • Although the second unit grid 152 may be formed in white, etc., light and dark are generated on it depending on the angle at which the photographing target 10 is photographed or the illuminance at the time of photographing, and this contrast may be exposed in the form of a gradation.
  • For example, the inner surface of the second unit grid 152 formed at the right end of the two-dimensional object image 130 photographed from the upper surface of the left foot may carry a gradation of light and dark that darkens from the instep toward the right (or left) side of the left foot.
  • Conversely, the inner surface of the second unit grid 152 formed at the left end of the two-dimensional object image 130 photographed from the right side of the left foot may carry a gradation that darkens from the right (or left) side of the left foot toward the instep.
  • Although the darkening direction thus differs between these second unit grids 152, the second unit grids 152 of the two-dimensional object images 130 obtained by photographing the same part of the left foot coincide in their gradation.
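The contrast matching of step 242 can be sketched similarly; representing a unit grid's gradation as an intensity array and comparing sorted intensities, so that opposite darkening directions still coincide, is an illustrative choice not spelled out in the patent.

```python
import numpy as np

def match_contrast(cell_a, cell_b):
    """Compare the light/dark gradation inside two second unit grids.
    The darkening direction may differ between views of the same spot,
    so the intensities are sorted before comparison: cells photographed
    at the same part of the object then coincide even when the gradient
    runs the other way."""
    return np.allclose(np.sort(cell_a.ravel()), np.sort(cell_b.ravel()))

# Same gradation, opposite darkening directions.
a = np.linspace(1.0, 0.0, 16).reshape(4, 4)  # darkens to the right
b = np.linspace(0.0, 1.0, 16).reshape(4, 4)  # darkens to the left
print(match_contrast(a, b))  # True
```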
  • By interpolating, with the curve interpolation described above, the overlaps generated at the boundary surfaces of the two-dimensional object images 130 photographed from the front, rear, left, right, and upper surfaces of the left foot, the three-dimensional object image 160 can be generated through the two-dimensional object images 130.
  • Meanwhile, the modeling of the three-dimensional object image 160 for a part of the photographing target 10 in contact with the floor, such as the sole of the foot, can be generated using an actual grid network of fixed dimensions, such as graph paper.
  • Since the front, rear, left, right, and upper surfaces of the foot are formed as curves, their shape is easy to grasp through the virtual grid network 150, that is, through the curved edges of the first unit grids 151 and second unit grids 152 or the contrast formed on their inner surfaces, and the shape of the sole can be grasped from its outer boundary.
  • An actual grid network such as graph paper, whose intervals are standardized, makes it easy to model the shape of the sole directly.
  • For example, water or ink may be applied to the sole, which is then brought into contact with the graph paper, or the sole may be pressed onto graph paper to which ink has been applied; the shape of the sole can then be modeled through its outer boundary exposed on the graph paper.
  • As described above, the present invention performs curve interpolation using a grid network to correct disturbances that may occur when a plurality of two-dimensional images are overlapped, and can be used in the field of methods for generating a three-dimensional model of a two-dimensional image through a virtual grid network.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention relates to a method for generating three-dimensional modeling of a two-dimensional image through a virtual grid network. The method comprises: a two-dimensional image extraction step of photographing a plurality of surfaces of an object to be photographed a plurality of times and removing background images from the captured images; a step of superimposing a plurality of two-dimensional images obtained by photographing the same surface and extracting a median of the error values occurring among the plurality of superimposed two-dimensional images, so as to generate two-dimensional object images; a step of generating a virtual grid network and attaching the virtual grid network to the two-dimensional object images; and a step of performing curve interpolation on first unit grids and second unit grids that overlap at the boundaries of the plurality of two-dimensional object images to which the virtual grid network has been attached, so as to generate a three-dimensional object image, whereby two-dimensional images obtained by photographing the object can be modeled easily and conveniently into a three-dimensional image.
PCT/KR2020/017622 2020-08-25 2020-12-04 Method for generating three-dimensional modeling of a two-dimensional image through a virtual grid network WO2022045470A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020200106931A KR102303566B1 (ko) 2020-08-25 2020-08-25 Method for generating three-dimensional modeling of a two-dimensional image through a virtual grid network
KR10-2020-0106931 2020-08-25

Publications (1)

Publication Number Publication Date
WO2022045470A1 (fr) 2022-03-03

Family

ID=77923959

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2020/017622 WO2022045470A1 (fr) 2020-08-25 2020-12-04 Method for generating three-dimensional modeling of a two-dimensional image through a virtual grid network

Country Status (2)

Country Link
KR (1) KR102303566B1 (fr)
WO (1) WO2022045470A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4934789B2 * 2006-01-23 2012-05-16 国立大学法人横浜国立大学 (Yokohama National University) Interpolation processing method and interpolation processing apparatus
KR20150031085A * 2013-09-13 2015-03-23 인하대학교 산학협력단 (Inha University) Apparatus, system and method for 3D face modeling using a plurality of cameras
JP2016119077A * 2014-12-23 2016-06-30 Dassault Systemes 3D modeled object defined by a grid of control points
KR101851303B1 * 2016-10-27 2018-04-23 주식회사 맥스트 (Maxst Co., Ltd.) Apparatus and method for reconstructing three-dimensional space
KR20200032664A * 2018-09-18 2020-03-26 서울대학교산학협력단 (Seoul National University) Apparatus for three-dimensional image reconstruction using rectangular grid projection

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101693259B1 2015-06-17 2017-01-10 (주)유니드픽쳐 3D modeling and three-dimensional shape production technique using 2D images
KR101904842B1 2018-01-30 2018-11-21 강석주 Method for three-dimensional modeling of a two-dimensional image, recording medium storing a program for implementing the same, and computer program stored in a medium for implementing the same


Also Published As

Publication number Publication date
KR102303566B1 (ko) 2021-09-17

Similar Documents

Publication Publication Date Title
  • WO2022164126A1 - Device and method for automatically matching oral scan data and computed tomography images by means of crown segmentation of oral scan data
  • WO2017026839A1 - Method and device for obtaining a 3D face model using a portable camera
  • WO2012176945A1 - Apparatus for synthesizing three-dimensional images for visualizing vehicle surroundings, and associated method
  • EP3308323B1 - Method for reconstructing a 3D scene as a 3D model
  • WO2014073818A1 - Implant image creation method and implant image creation system
  • WO2017204571A1 - Camera sensing apparatus for obtaining three-dimensional information of an object, and virtual golf simulation apparatus using the same
  • WO2020045946A1 - Image processing device and image processing method
  • WO2015165222A1 - Method and device for acquiring a panoramic image
  • US10268108B2 - Function enhancement device, attaching/detaching structure for function enhancement device, and function enhancement system
  • WO2017195984A1 - 3D scanning device and method
  • JP2017523491A - System, method, apparatus, and computer-readable storage medium for collecting color information about a 3D-scanned object
  • CN108280807A - Monocular depth image acquisition device and system, and image processing method therefor
  • WO2022045470A1 - Method for generating three-dimensional modeling of a two-dimensional image through a virtual grid network
  • WO2018110978A1 - Image synthesis device and image synthesis method
  • WO2022177095A1 - Artificial intelligence-based method and application for manufacturing a 3D prosthesis for dental restoration
  • WO2022154523A1 - Method and device for matching three-dimensional oral scan data via deep learning-based 3D feature detection
  • CN106471804A - Method and apparatus for image capture and simultaneous depth extraction
  • WO2012148025A1 - Device and method for detecting a three-dimensional object using a plurality of cameras
  • EP3066508A1 - Method and system for creating a camera refocus effect
  • WO2018101746A2 - Apparatus and method for reconstructing a blocked area of a road surface
  • WO2017086522A1 - Method for synthesizing a chroma-key image without a background screen
  • WO2017213335A1 - Method for combining images in real time
  • WO2010087587A2 - Method for obtaining image data and apparatus therefor
  • WO2021054756A1 - Front image generation device for heavy equipment
  • WO2023182755A1 - Method for producing three-dimensional facial scan data

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 20951700

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 EP: PCT application non-entry into the European phase

Ref document number: 20951700

Country of ref document: EP

Kind code of ref document: A1