KR20170090016A - System and method for converting 2-dimensional image to 3-dimensional image - Google Patents
- Publication number
- KR20170090016A (application number KR1020160010193A)
- Authority
- KR
- South Korea
- Prior art keywords
- dimensional image
- dimensional
- contour
- pixel
- value
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/005—General purpose rendering architectures
- G06T15/10—Geometric effects
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/10—Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
Abstract
The present invention discloses a method and system for restoring a two-dimensional image into a three-dimensional image, which enable a user's two-dimensional image to be restored into a three-dimensional image in order to produce a three-dimensional shape based on a photograph of the user.
The disclosed method of restoring a two-dimensional image into a three-dimensional image includes the steps of: (a) loading a two-dimensional image by an image loading unit and dividing the two-dimensional image into predetermined pixel units; (b) detecting, by a feature detecting unit, a predetermined number or more of contour points in the two-dimensional image; (c) setting, by a control unit, a depth value for three-dimensional transformation for every pixel around the detected contour points of the two-dimensional image; (d) selecting, by the control unit, straight lines S including the predetermined number or more of contour points, dividing each selected straight line into an upper curve and a lower curve, and calculating three-dimensional machining path coordinates based on the depth values of the pixels on the straight lines forming the upper and lower curves; and (e) rendering, by a rendering unit, the two-dimensional image into a three-dimensional image according to the calculated three-dimensional machining path coordinates.
According to the present invention, a three-dimensional shape can be restored based on a photograph of a user.
Description
The present invention relates to a method and system for restoring a two-dimensional image into a three-dimensional image, and more particularly, to a method and system for restoring a user's two-dimensional image into a three-dimensional image in order to produce a stereoscopic shape based on a photograph of the user.
In recent years, as interest in three-dimensional stereoscopic images has increased, research has been actively conducted on methods for generating three-dimensional stereoscopic images. From the beginning of the study of 3-Dimensional (3D) graphics, the ultimate goal of the researchers is to create a realistic graphic screen like real images.
Therefore, in the field of traditional modeling technology, studies using polygonal models have been conducted, and as a result, modeling and rendering techniques have been developed to a sufficient degree to provide a highly realistic three-dimensional environment.
However, the process for creating a complex model requires a lot of expert effort and time. In addition, realistic and complex environments require enormous amounts of information, resulting in low efficiency in storage and transmission.
It is an object of the present invention to solve the above-mentioned problems and to provide a method and system for restoring a two-dimensional image into a three-dimensional image, capable of restoring a user's two-dimensional image into a three-dimensional image in order to produce a three-dimensional shape based on a photograph of the user.
According to an aspect of the present invention, there is provided a system for restoring a two-dimensional image into a three-dimensional image, the system including: an image loading unit for loading a two-dimensional image and dividing the two-dimensional image into predetermined pixel units; a feature detecting unit for detecting a predetermined number or more of contour points in the two-dimensional image; a control unit which, for the two-dimensional image, sets a depth value for three-dimensional transformation for every pixel around the detected contour points, selects straight lines S including the predetermined number or more of contour points, divides each selected straight line into an upper curve and a lower curve, and calculates three-dimensional machining path coordinates based on the depth values of the pixels on the straight lines forming the upper and lower curves; and a rendering unit for rendering the two-dimensional image into a three-dimensional image according to the calculated three-dimensional machining path coordinates.
The feature detecting unit may designate, as a contour point, an area of one or more pixels whose lightness difference from the surrounding pixels in the two-dimensional image is equal to or greater than a predetermined reference difference, and may detect a predetermined number or more of such contour points over the entire image.
In addition, the feature detecting unit may determine a predetermined number or more of contour points by adding a specific number of detailed contour points between the detected contour points.
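As an illustration, the brightness-difference contour detection described above might be sketched as follows. The 4-neighbour comparison, the grayscale-grid input, and the threshold `ref_diff` are illustrative assumptions, not specifics taken from this disclosure.

```python
def detect_contour_points(gray, ref_diff=30):
    """Flag pixels whose lightness differs from any 4-neighbour by at
    least ref_diff (illustrative threshold), i.e. candidate contour points."""
    h, w = len(gray), len(gray[0])
    points = []
    for y in range(h):
        for x in range(w):
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and \
                        abs(gray[y][x] - gray[ny][nx]) >= ref_diff:
                    points.append((x, y))
                    break  # one qualifying neighbour is enough
    return points
```

On a tiny grid with one bright pixel, the bright pixel and its four neighbours are flagged while the corners (surrounded only by equal values) are not.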
In addition, the control unit may compare the lightness values of the pixels of the detected contour points with one another and set the depth values according to the magnitude of the lightness values. For the pixels surrounding each contour point, the control unit may compare their lightness values with the lightness value of the contour-point pixel, setting a depth value greater than that of the contour-point pixel when a surrounding pixel's lightness value is higher, and a depth value lower than that of the contour-point pixel when it is lower.
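A minimal sketch of this depth-assignment rule, assuming a simple linear lightness-to-depth mapping; the scale factor `base_scale`, the 8-neighbour window, and the dictionary representation are hypothetical choices, not the patent's exact scheme.

```python
def assign_depths(gray, contour_points, base_scale=0.1):
    """Assign depths around contour points: a contour-point pixel's depth
    is proportional to its lightness; a brighter neighbour gets a larger
    depth, a darker neighbour a smaller one (illustrative linear rule)."""
    depth = {}
    for (x, y) in contour_points:
        depth[(x, y)] = gray[y][x] * base_scale
    for (x, y) in contour_points:
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                nx, ny = x + dx, y + dy
                if (nx, ny) in depth:
                    continue  # contour points keep their own depth
                if not (0 <= ny < len(gray) and 0 <= nx < len(gray[0])):
                    continue
                # offset by the lightness difference from the contour pixel
                diff = gray[ny][nx] - gray[y][x]
                depth[(nx, ny)] = depth[(x, y)] + diff * base_scale
    return depth
```

A brighter neighbour thus ends up in front of (deeper than) its contour point, and an equally bright one at the same depth.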
When an inclined surface exists between the predetermined number or more of contour points, the control unit may compare the lightness values of the pixels at each contour point, based on the depth values of the pixels on the straight lines forming the upper and lower curves, and may calculate the three-dimensional machining path coordinates such that, on the side of the contour point with the larger lightness value, both the depth value of the contour-point pixel and the slope of the straight lines forming the inclined surface are larger than on the side of the compared contour point.
According to another aspect of the present invention, there is provided a method of restoring a two-dimensional image into a three-dimensional image, the method comprising the steps of: (a) loading a two-dimensional image by an image loading unit and dividing it into predetermined pixel units; (b) detecting, by a feature detecting unit, a predetermined number or more of contour points in the two-dimensional image; (c) setting, by a control unit, a depth value for three-dimensional transformation for every pixel around the detected contour points; (d) selecting, by the control unit, straight lines S including the predetermined number or more of contour points, dividing each selected straight line into an upper curve and a lower curve, and calculating three-dimensional machining path coordinates based on the depth values of the pixels on the straight lines forming the upper and lower curves; and (e) rendering, by a rendering unit, the two-dimensional image into a three-dimensional image according to the calculated three-dimensional machining path coordinates.
In the step (b), the feature detecting unit may designate, as a contour point, an area of one or more pixels whose contrast difference from the surrounding pixels in the two-dimensional image is equal to or greater than a predetermined reference difference, and may detect a certain number or more of such contour points over the entire area of the image.
Also, in the step (b), the feature detector may determine a certain number or more of contour points by adding a specific number of detailed contour points between the detected contour points.
In addition, in the step (c), the control unit may compare the lightness values of the pixels of the detected contour points with one another and set the depth values according to the magnitude of the lightness values; for the pixels surrounding each contour point, it may compare their lightness values with the lightness value of the contour-point pixel, setting a depth value higher than that of the contour-point pixel when a surrounding pixel's lightness value is higher, and a depth value lower than that of the contour-point pixel when it is lower.
In the step (d), when an inclined surface exists between the predetermined number or more of contour points, the control unit may compare the lightness values of the pixels at each contour point, based on the depth values of the pixels on the straight lines forming the upper and lower curves, and may calculate the three-dimensional machining path coordinates such that, on the side of the contour point with the larger lightness value, both the depth value of the contour-point pixel and the slope of the straight lines forming the inclined surface are larger than on the side of the compared contour point.
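The incline rule above can be reduced to a small helper: the brighter contour point is assigned the larger depth, so the incline rises toward it. The function name, the `span` parameter (pixel distance between the two contour points), and the linear slope are illustrative assumptions, not the patent's exact computation.

```python
def incline_slope(depth_a, lum_a, depth_b, lum_b, span):
    """Slope of the straight line forming the incline between two contour
    points, rising toward the one with the larger lightness value
    (illustrative sketch)."""
    hi, lo = (depth_a, depth_b) if lum_a >= lum_b else (depth_b, depth_a)
    return (hi - lo) / span
```

The result is independent of argument order: the brighter point always supplies the high end of the incline.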
According to the present invention, a three-dimensional image for producing a three-dimensional shape can be restored based on a photograph of a user's image.
FIG. 1 is a schematic functional block diagram of a system for restoring a two-dimensional image into a three-dimensional image according to an embodiment of the present invention.
FIG. 2 is a flowchart illustrating a method of restoring a two-dimensional image into a three-dimensional image according to an exemplary embodiment of the present invention.
FIG. 3 is a diagram illustrating an example of a process of generating three-dimensional machining path coordinates according to an embodiment of the present invention.
FIG. 4 is a diagram illustrating an example of comparing brightness values of pixels of contour points set according to an embodiment of the present invention.
FIG. 5 is a diagram illustrating an example of setting depth values of contour points according to the magnitude of the brightness value according to an embodiment of the present invention.
Hereinafter, exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings so that those skilled in the art to which the present invention pertains may easily practice it. The present invention may be embodied in many different forms and is not limited to the embodiments described herein.
In order to clearly illustrate the present invention, parts not related to the description are omitted, and the same or similar components are denoted by the same reference numerals throughout the specification.
Throughout the specification, when a part is referred to as being "connected" to another part, this includes not only being "directly connected" but also being "electrically connected" with another part in between. Also, when a part is referred to as "comprising" an element, this means that it may further include other elements, rather than excluding them, unless specifically stated otherwise.
If any part is referred to as being "on" another part, it may be directly on the other part, or another part may be interposed therebetween. In contrast, when a part is referred to as being "directly above" another part, no other part is interposed therebetween.
The terms first, second, third, and the like are used to describe various portions, components, regions, layers and/or sections, but these are not limited thereto. These terms are used only to distinguish one portion, component, region, layer or section from another. Thus, a first portion, component, region, layer or section described below may be referred to as a second portion, component, region, layer or section without departing from the scope of the present invention.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the invention. The singular forms used herein include the plural forms unless the phrases expressly state otherwise. The term "comprising" specifies a particular feature, region, integer, step, operation, element and/or component, and does not exclude the presence or addition of other features, regions, integers, steps, operations, elements and/or components.
Terms indicating relative space, such as "below" and "above", may be used to more easily describe the relationship of one portion shown in the figures to other portions. These terms are intended to include, in addition to the meanings intended in the drawings, other meanings or operations of the apparatus in use. For example, when the device in the figures is inverted, portions described as being "below" other portions would then be described as being "above" them. Thus, the exemplary term "below" includes both upward and downward directions. The device can be rotated by 90 degrees or at other angles, and the terms indicating relative space are interpreted accordingly.
Unless otherwise defined, all terms including technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. Commonly used predefined terms are further interpreted as having a meaning consistent with the relevant technical literature and the present disclosure, and are not to be construed as ideal or very formal meanings unless defined otherwise.
FIG. 1 is a schematic view of a functional block of a system for restoring a two-dimensional image into a three-dimensional image according to an embodiment of the present invention.
Referring to FIG. 1, a
The
The
That is, the
In addition, the
The
That is, the
If there is an inclined plane between a certain number of contour points or more on the basis of the depth value of pixels on the straight line forming the upper and lower curves, the
The
The
The
FIG. 2 is a flowchart illustrating a method of restoring a two-dimensional image into a three-dimensional image according to an exemplary embodiment of the present invention.
Referring to FIG. 2, a
Next, the
That is, the
In addition, the
Next, the
That is, the
Next, the
Here, the three-dimensional machining path coordinates are calculated in order to generate a three-dimensional stereoscopic image from the user's two-dimensional image and to apply it to a material such as bronze or iron so as to cut or shape a three-dimensional figure of the user.
When the three-dimensional machining path coordinates are calculated, a material such as iron or bronze can be melted using a 3D printer and laminated point by point along the three-dimensional machining path coordinates, or a hexahedral block of iron or bronze can be cut along the machining path coordinates.
Since laminating along the three-dimensional machining path coordinates using a 3D printer is commonplace, the embodiment of the present invention takes as its example the calculation of three-dimensional machining path coordinates for producing a three-dimensional shape by cutting a hexahedral material with a tool.
First, the
The
Thus, each point of an arbitrary internally dividing curve [m_i(λ)], i = 1 … N, generated by the
As described above, the machining path is defined as a set of tool-position coordinates T(k) = (p(k), a(k)), where p(k) is the coordinate of the tool end point at step k and a(k) is the direction of the tool axis, so that the tool axis lies on the straight line p(k) + h·a(k). If the taper angle of the tool is φ and the tool radius at position h₀ is r₀, the tool radius r(h) at position h can be expressed by the following equation (2): r(h) = r₀ + (h − h₀)·tan(φ).
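Equation (2) for a tapered tool is linear in h and reduces to a one-line function; the default values of r0, h0, and φ below are illustrative, not taken from the disclosure.

```python
import math

def tool_radius(h, r0=3.0, h0=0.0, phi_deg=5.0):
    """Radius of a tapered tool at height h along its axis:
    r(h) = r0 + (h - h0) * tan(phi).  Defaults are illustrative."""
    return r0 + (h - h0) * math.tan(math.radians(phi_deg))
```

For a cylindrical tool (φ = 0) the radius is constant, and r(h₀) = r₀ by construction.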
In FIG. 3, the i-th section of the tool-position coordinates
Let d_i denote the shortest distance between the tool axis and the i-th section, and let r(h_i) denote the tool radius at the corresponding position h_i; d_i and h_i can be easily obtained by calculating the shortest distance between two straight lines. Thus, according to the following equation (3), the error between the i-th section and the tool is e_i = d_i − r(h_i), and the error of the tool with respect to the entire internally dividing curve [m_i(λ)] is e_k = min_i e_i.
At this time, when e_k is greater than 0,
In summary, the upper and lower curves are approximated by polylines according to the required accuracy, and internally dividing curves are generated with appropriate values of λ; the error of the machining path is then estimated by obtaining, for each tool position along the path, the error with respect to the internally dividing curves. In general, it is appropriate to use λ = {0, 0.5, 1}, i.e., the given upper and lower curves and the intermediate curve.
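The error estimate just described — for each tool position, the shortest distance to each polyline segment minus the tool radius, minimised over segments — can be sketched as follows. The real computation is a 3D line-to-line distance between the tool axis and each segment, so this planar point-to-segment version is a simplifying assumption.

```python
def seg_point_dist(p, a, b):
    """Shortest distance from point p to segment ab (2D, illustrative)."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    vx, vy = bx - ax, by - ay
    wx, wy = px - ax, py - ay
    denom = vx * vx + vy * vy or 1.0  # guard zero-length segment
    t = max(0.0, min(1.0, (wx * vx + wy * vy) / denom))
    cx, cy = ax + t * vx, ay + t * vy  # closest point on the segment
    return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

def path_error(tool_pt, tool_r, polyline):
    """e_k = min_i (d_i - r): smallest gap between the tool and the
    polyline approximating an internally dividing curve."""
    return min(seg_point_dist(tool_pt, polyline[i], polyline[i + 1]) - tool_r
               for i in range(len(polyline) - 1))
```

A positive result means the tool clears the curve by that margin; a negative one indicates an overcut.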
The
Here, the coefficient multiplying the gradient represents the learning rate and adjusts the learning speed and precision. Steps 3) and 4) are repeated until the stop condition is satisfied; the stop condition is usually the maximum number of iterations or whether the error has reached the target value.
Such a gradient descent (GD) technique can be applied whenever the objective function has a first derivative; it therefore has a broad range of application, is easy to implement, and allows the learning direction to be adjusted by including various constraints in the objective function.
However, since its learning speed is slow, an appropriate learning-rate value must be set: if the learning rate is too small, the learning time becomes unnecessarily long, and if it is too large, the iteration diverges without converging to the optimum. For this reason, the Newton method, which utilizes second-derivative information rather than only the first-derivative information used by GD, and the quasi-Newton method, which approximates the second derivative, are also applied.
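The learning-rate trade-off described above (too small: slow; too large: divergence) is easy to see in a minimal gradient-descent loop for f(x) = (x − 3)², whose gradient is 2(x − 3). All names and constants here are illustrative, not from the disclosure.

```python
def gradient_descent(grad, x0, lr=0.1, max_iter=1000, tol=1e-8):
    """Plain GD: x <- x - lr * grad(x); stops at the iteration cap or
    when the step size drops below tol (the usual stop conditions)."""
    x = x0
    for _ in range(max_iter):
        step = lr * grad(x)
        x -= step
        if abs(step) < tol:
            break
    return x
```

With lr = 0.1 the iterate contracts toward the minimum at x = 3 by a factor of 0.8 per step; for this quadratic, any lr > 1.0 makes the same recurrence diverge, matching the divergence phenomenon noted above.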
Nevertheless, in spite of these drawbacks, GD is applied in many fields and techniques; among them, the Deep Belief Network (DBN) is one of the most popular machine-learning techniques in recent years. The DBN, a type of neural network model, is attracting attention due to its powerful performance and scalability, and GD-family optimization techniques are mainly used for DBNs because of their internal randomness.
In order to machine the user's three-dimensional shape along the three-dimensional machining path coordinates, the side surface of the tool is brought as close as possible to the target surface so as to minimize the error at an arbitrary time k, and the tool proceeds at a constant speed. To satisfy these conditions, the optimization problem is simplified as follows.
That is, the
A simple way to minimize the error between the polyline curve [m_i] and the tool positions T(k) is to first find a feature point c_k that replaces [m_i] and then repeat the process of reducing the error between the tool positions and the feature points. In this case, the feature point is the point at which the error e_k becomes minimal.
The
In order to obtain a machining path in which the tool advances at a constant speed, the machining path is defined as a set of N tool-position coordinates T_i = (p_i, a_i), and the i-th tool position T_i is determined so as to minimize the error with respect to the i-th point m_i. To do this, one of the top, middle, and bottom curves is selected and the feature point is set as c_k = m_i.
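A constant-speed path amounts to sampling the curve at equal arc-length intervals. A polyline version (positions p_i only, ignoring the tool-axis directions a_i) might look like this; the equal-arc-length resampling is a simplifying assumption, not the patent's exact optimization.

```python
import math

def resample_equal_speed(points, n):
    """Resample a polyline into n points at equal arc-length spacing,
    so a tool visiting them at a fixed step rate moves at constant speed."""
    # cumulative arc length at each vertex
    d = [0.0]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        d.append(d[-1] + math.hypot(x1 - x0, y1 - y0))
    total, out = d[-1], []
    for k in range(n):
        t = total * k / (n - 1)
        # first segment whose end covers arc length t
        i = next(j for j in range(len(d) - 1) if d[j + 1] >= t - 1e-12)
        u = (t - d[i]) / ((d[i + 1] - d[i]) or 1.0)
        (x0, y0), (x1, y1) = points[i], points[i + 1]
        out.append((x0 + u * (x1 - x0), y0 + u * (y1 - y0)))
    return out
```

Resampling a straight segment simply interpolates it; on a multi-segment polyline the samples are equally spaced along the path, not along any single segment.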
The
In addition, when there is an inclined plane between a certain number of contour points or more on the basis of the depth value of pixels on the straight line forming the upper and lower curves, the
The
When the three-dimensional machining path coordinates are calculated, the
Then, the
Accordingly, the user can have his or her own photograph (a two-dimensional image) converted into a three-dimensional image and confirm the displayed three-dimensional image.
As described above, according to the present invention, a method and system can be realized that restore a user's two-dimensional image into a three-dimensional image in order to produce a three-dimensional shape based on a photograph of the user's image.
It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present invention as defined by the following claims and their equivalents. It is intended that the present invention cover such modifications and variations provided they come within the scope of the appended claims and their equivalents.
100: System for restoring a two-dimensional image to a three-dimensional image
110: image loading section
120:
130:
140:
150:
Claims (10)
An image loading unit for loading a two-dimensional image and dividing the two-dimensional image into predetermined pixel units;
A feature detector for detecting a predetermined number or more of contour points in the two-dimensional image;
A control unit which, for the two-dimensional image, sets a depth value for three-dimensional transformation for every pixel around the detected contour points, selects straight lines S including the predetermined number or more of contour points, divides each selected straight line into an upper curve and a lower curve, and calculates three-dimensional machining path coordinates based on the depth values of the pixels on the straight lines forming the upper and lower curves; And
A rendering unit for rendering the two-dimensional image into a three-dimensional image according to the calculated three-dimensional processing path coordinates;
A system for restoring a two-dimensional image into a three-dimensional image.
Wherein the feature detecting unit designates, as a contour point, an area of one or more pixels whose lightness difference from the surrounding pixels in the two-dimensional image is equal to or greater than a predetermined reference difference, and detects a predetermined number or more of such contour points over the entire image.
Wherein the feature detection unit determines a predetermined number or more of contour points by adding a specific number of detailed contour points between the detected contour points.
Wherein the control unit compares the lightness values of the pixels of the detected contour points with one another and sets the depth values according to the magnitude of the lightness values, and, for the pixels surrounding each contour point, compares their lightness values with the lightness value of the contour-point pixel, setting a depth value greater than that of the contour-point pixel when a surrounding pixel's lightness value is higher, and a depth value lower than that of the contour-point pixel when it is lower.
Wherein, when an inclined surface exists between the predetermined number or more of contour points, the control unit compares the lightness values of the pixels at each contour point, based on the depth values of the pixels on the straight lines forming the upper and lower curves, and calculates the three-dimensional machining path coordinates such that, on the side of the contour point with the larger lightness value, both the depth value of the contour-point pixel and the slope of the straight lines forming the inclined surface are larger than on the side of the compared contour point.
(a) loading, by an image loading unit, a two-dimensional image and dividing it into predetermined pixel units;
(b) detecting, by a feature detecting unit, a predetermined number or more of contour points in the two-dimensional image;
(c) setting, by a control unit, a depth value for three-dimensional transformation for every pixel around the detected predetermined number or more of contour points in the two-dimensional image;
(d) selecting, by the control unit, straight lines S including the predetermined number or more of contour points, dividing each selected straight line into an upper curve and a lower curve, and calculating three-dimensional machining path coordinates based on the depth values of the pixels on the straight lines forming the upper and lower curves; And
(e) rendering the two-dimensional image as a three-dimensional image according to the calculated three-dimensional processing path coordinates;
A method of restoring a two-dimensional image into a three-dimensional image.
Wherein, in the step (b), the feature detecting unit designates, as a contour point, an area of one or more pixels whose contrast difference from the surrounding pixels in the two-dimensional image is equal to or greater than a predetermined reference difference, and detects a certain number or more of such contour points over the entire area of the image.
Wherein the step (b) further comprises the step of the feature detector adding a specific number of detailed contour points between the detected contour points to determine a certain number or more of contour points.
Wherein the control unit compares the lightness values of the pixels of the detected predetermined number or more of contour points with one another and sets the depth values according to the magnitude of the lightness values, and, for the pixels surrounding each contour point, compares their lightness values with the lightness value of the contour-point pixel, setting a depth value greater than that of the contour-point pixel when a surrounding pixel's lightness value is higher, and a depth value lower than that of the contour-point pixel when it is lower.
Wherein, in the step (d), when an inclined surface exists between the predetermined number or more of contour points, the control unit compares the contrast values of the pixels at each contour point, based on the depth values of the pixels on the straight lines forming the upper and lower curves, and calculates the three-dimensional machining path coordinates such that, on the side of the contour point with the larger lightness value, both the depth value of the contour-point pixel and the slope of the straight lines forming the inclined surface are larger than on the side of the compared contour point.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020160010193A KR101779532B1 (en) | 2016-01-27 | 2016-01-27 | System and method for converting 2-dimensional image to 3-dimensional image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020160010193A KR101779532B1 (en) | 2016-01-27 | 2016-01-27 | System and method for converting 2-dimensional image to 3-dimensional image |
Publications (2)
Publication Number | Publication Date |
---|---|
KR20170090016A true KR20170090016A (en) | 2017-08-07 |
KR101779532B1 KR101779532B1 (en) | 2017-09-19 |
Family
ID=59653662
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020160010193A KR101779532B1 (en) | 2016-01-27 | 2016-01-27 | System and method for converting 2-dimensional image to 3-dimensional image |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR101779532B1 (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109785427A (en) * | 2018-12-26 | 2019-05-21 | 长江勘测规划设计研究有限责任公司 | The method of three-dimensional modeling is quickly carried out using X-Y scheme |
KR20190070623A (en) * | 2017-12-13 | 2019-06-21 | 중앙대학교 산학협력단 | Apparatus and method for bas-relief modeling |
KR20200048237A (en) | 2018-10-29 | 2020-05-08 | 방성환 | Server for generatig 3d image and method for generating stereoscopic image |
CN116756836A (en) * | 2023-08-16 | 2023-09-15 | 中南大学 | Tunnel super-undermining volume calculation method, electronic equipment and storage medium |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100414629B1 (en) | 1995-03-29 | 2004-05-03 | 산요덴키가부시키가이샤 | 3D display image generation method, image processing method using depth information, depth information generation method |
KR101570359B1 (en) * | 2014-11-28 | 2015-11-19 | 한국델켐 (주) | system and method for generating flank milling tool path |
- 2016-01-27: Application KR1020160010193A filed in KR; patent KR101779532B1 — active, IP Right Grant
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20190070623A (en) * | 2017-12-13 | 2019-06-21 | 중앙대학교 산학협력단 | Apparatus and method for bas-relief modeling |
KR20200048237A (en) | 2018-10-29 | 2020-05-08 | 방성환 | Server for generatig 3d image and method for generating stereoscopic image |
CN109785427A (en) * | 2018-12-26 | 2019-05-21 | 长江勘测规划设计研究有限责任公司 | The method of three-dimensional modeling is quickly carried out using X-Y scheme |
CN116756836A (en) * | 2023-08-16 | 2023-09-15 | 中南大学 | Tunnel super-undermining volume calculation method, electronic equipment and storage medium |
CN116756836B (en) * | 2023-08-16 | 2023-11-14 | 中南大学 | Tunnel super-undermining volume calculation method, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
KR101779532B1 (en) | 2017-09-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10510180B2 (en) | Learning to reconstruct 3D shapes by rendering many 3D views | |
US9978177B2 (en) | Reconstructing a 3D modeled object | |
EP3032495B1 (en) | Texturing a 3d modeled object | |
KR101779532B1 (en) | System and method for converting 2-dimensional image to 3-dimensional image | |
CN110288695B (en) | Single-frame image three-dimensional model surface reconstruction method based on deep learning | |
JP7403528B2 (en) | Method and system for reconstructing color and depth information of a scene | |
KR102096673B1 (en) | Backfilling points in a point cloud | |
JP4677536B1 (en) | 3D object recognition apparatus and 3D object recognition method | |
KR100634537B1 (en) | Apparatus and method for processing triangulation of 3-D image, computer-readable storing medium storing a computer program for controlling the apparatus | |
CN102184540B (en) | Sub-pixel level stereo matching method based on scale space | |
CN103123727A (en) | Method and device for simultaneous positioning and map building | |
KR101867991B1 (en) | Motion edit method and apparatus for articulated object | |
US20140300941A1 (en) | Method and apparatus for generating hologram based on multi-view image | |
CN106408596B (en) | Sectional perspective matching process based on edge | |
CN104715504A (en) | Robust large-scene dense three-dimensional reconstruction method | |
EP2372652A1 (en) | Method for estimating a plane in a range image and range image camera | |
Natali et al. | Rapid visualization of geological concepts | |
WO2020230214A1 (en) | Depth estimation device, depth estimation model learning device, depth estimation method, depth estimation model learning method, and depth estimation program | |
Sreeni et al. | Haptic rendering of dense 3D point cloud data | |
Shivakumar et al. | Real time dense depth estimation by fusing stereo with sparse depth measurements | |
CN104796624A (en) | Method for editing and propagating light fields | |
US8837815B2 (en) | Method of filtering a disparity mesh obtained from pixel images | |
US10783707B2 (en) | Determining a set of facets that represents a skin of a real object | |
Leeper et al. | Constraint based 3-dof haptic rendering of arbitrary point cloud data | |
Aydar et al. | A low-cost laser scanning system design |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
A201 | Request for examination | ||
E902 | Notification of reason for refusal | ||
E701 | Decision to grant or registration of patent right | ||
GRNT | Written decision to grant |