KR101275749B1 - Method for acquiring three dimensional depth information and apparatus thereof - Google Patents

Method for acquiring three dimensional depth information and apparatus thereof

Info

Publication number
KR101275749B1
Authority
KR
South Korea
Prior art keywords
depth information
captured image
image
bright
pixel
Prior art date
Application number
KR1020120140507A
Other languages
Korean (ko)
Inventor
최상복
Original Assignee
최상복
박영만
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 최상복, 박영만 filed Critical 최상복
Priority to KR1020120140507A priority Critical patent/KR101275749B1/en
Application granted granted Critical
Publication of KR101275749B1 publication Critical patent/KR101275749B1/en


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106 Processing image signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20 Image signal generators
    • H04N 13/204 Image signal generators using stereoscopic image cameras
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20 Image signal generators
    • H04N 13/257 Colour aspects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20 Image signal generators
    • H04N 13/271 Image signal generators wherein the generated image signals comprise depth maps or disparity maps

Abstract

According to an aspect of the present invention, there is provided a method comprising: acquiring, by camera means, a first captured image of an interference fringe generated on an object by slit means for passing a monochromatic light source through at least one slit; image-processing a plurality of bright fringes generated intermittently among the interference fringes so as to convert them into solid lines; and obtaining, using the order of interference of each solid-lined bright fringe and the coordinates of the pixels where the bright fringes are located, depth information for the pixel points at which the bright fringes are located among all pixels of the first captured image.

Description

Method for acquiring three dimensional depth information and apparatus thereof

The present invention relates to a method and apparatus for acquiring 3D depth information, and more particularly, to a method and apparatus for acquiring 3D depth information that can generate 3D image information by fusing depth information of an object with a 2D image of the object acquired by a 2D camera.

In general, 3D image information is applied in a variety of fields such as motion-detection games, robot vision, and precision measurement equipment. The most widely used method of acquiring 3D image information is to calculate the difference between two images on the basis of the physical distance and angle between two 2D cameras using trigonometric functions. This method has the disadvantages that it is difficult to apply to real-time inspection in mass-production systems because of the large amount of computation, and that maintenance costs are high because the cameras must be calibrated periodically. The background technology of the present invention is disclosed in Korean Patent Laid-Open Publication No. 2012-0073178.

In addition, among methods of acquiring 3D image information using an infrared depth sensor, there are a method of recognizing depth information by measuring the time taken for infrared light to be reflected back from the object, and a method of recognizing depth information by tiling infrared dots and using the size and spacing of the dots; however, both methods require a separate infrared camera and have the disadvantage of being expensive.

An object of the present invention is to provide a method and apparatus for acquiring 3D depth information which can easily generate 3D image information by fusing depth information of an object with a 2D image of the object obtained by a 2D camera.

According to an aspect of the present invention, there is provided a method comprising: acquiring, by camera means, a first captured image of an interference fringe generated on an object by slit means for passing a monochromatic light source through at least one slit; image-processing a plurality of bright fringes generated intermittently among the interference fringes so as to convert them into solid lines; and obtaining, using the order of interference of each solid-lined bright fringe and the coordinates of the pixels where the bright fringes are located, depth information for the pixel points at which the bright fringes are located among all pixels of the first captured image.

In the obtaining of the depth information, actual depth information for the pixel points may be obtained by the equation below, using the order of interference of each bright fringe, the coordinates of the pixel where the bright fringe is located, the actual size represented by one pixel, the wavelength of the monochromatic light source, and the slit specification.

L_real = (d · y) / (m · λ)

Here, L_real is the actual depth information of the pixel point included in the bright fringe of interference order m, d is the size of the slit in the case of a single slit or the spacing between the two slits in the case of a double slit, y is the actual distance from the center of the bright fringe of order 0 to the pixel point included in the bright fringe of order m, and λ is the wavelength of the monochromatic light source.
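
For illustration only (this sketch is not part of the original disclosure, and identifiers such as depth_from_fringe are assumptions), the relation L_real = (d · y) / (m · λ) can be evaluated per pixel point as follows, with all lengths in metres:

    # Minimal Python sketch of L_real = (d * y) / (m * lambda); not from the patent.
    def depth_from_fringe(m, center_px, point_px, pixel_size_m, d_m, wavelength_m):
        """Actual depth for a pixel point lying on the bright fringe of order m.

        m            -- interference order of the bright fringe (1, 2, ...)
        center_px    -- (row, col) of the m = 0 bright-fringe centre
        point_px     -- (row, col) of the pixel point on the order-m fringe
        pixel_size_m -- actual size represented by one pixel (e.g. 0.1 mm = 1e-4)
        d_m          -- slit size (single slit) or slit spacing (double slit)
        wavelength_m -- wavelength of the monochromatic light source
        """
        dy = point_px[0] - center_px[0]
        dx = point_px[1] - center_px[1]
        y_m = (dx * dx + dy * dy) ** 0.5 * pixel_size_m  # actual distance y
        return (d_m * y_m) / (m * wavelength_m)          # L_real

    # Checks against the worked example later in the description: d = 1 mm,
    # 0.1 mm per pixel, 800 nm light, a point 19 pixels from the centre on the
    # m = 2 fringe gives about 1.1875 m.
    # depth_from_fringe(2, (0, 0), (0, 19), 1e-4, 1e-3, 800e-9)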

In the obtaining of the depth information, depth information on intersection points may be acquired using the coordinates of pixels corresponding to the intersection points between the solid-lined bright fringes.

The method for acquiring 3D depth information may further include: acquiring a second captured image of the object by using the camera means; dividing the second captured image into a plurality of divided regions based on color information in the second captured image; and generating 3D image information by mapping, onto each divided region of the second captured image, the depth information of the pixel points at which the bright fringes obtained from the first captured image are located, wherein, for a pixel point having no depth information within a divided region, depth information is generated and mapped using depth information of at least one pixel point mapped to the corresponding divided region.

In addition, in the generating of the 3D image information, if the depth information of the at least one pixel point mapped to the corresponding divided region is all the same, the same depth information is generated for the pixel points having no depth information in that divided region; and if the depth information of a plurality of pixel points mapped to the corresponding divided region differs, depth information is generated linearly for pixel points positioned between two pixel points having different depth information, by applying a linear function based on the depth information of those two pixel points.

In addition, the present invention provides an apparatus including: a first image acquisition unit for acquiring, by camera means, a first captured image of the interference fringe generated on an object by slit means that passes a monochromatic light source through at least one slit; an image processing unit which processes a plurality of bright fringes generated intermittently among the interference fringes in the first captured image so as to convert them into solid lines; and a depth information acquisition unit configured to obtain, using the order of interference of each solid-lined bright fringe and the coordinates of the pixels where the bright fringes are located, depth information for the pixel points at which the bright fringes are located among all pixels of the first captured image.

The apparatus for acquiring 3D depth information may further include: a second image acquisition unit for acquiring a second captured image of the object by using the camera means; a region grouping unit for dividing the second captured image into a plurality of divided regions based on color information in the second captured image; and a 3D image generation unit which generates 3D image information by mapping, onto each divided region of the second captured image, the depth information of the pixel points at which the bright fringes obtained from the first captured image are located, and which, for a pixel point having no depth information within a divided region, generates and maps depth information using depth information of at least one pixel point mapped to the corresponding divided region.

According to the method and apparatus for acquiring 3D depth information of the present invention, 3D image information can easily be generated by fusing depth information of an object with a 2D image of the object obtained by a 2D camera.

FIG. 1 is a conceptual diagram of a 3D depth information acquisition method according to an embodiment of the present invention.
FIG. 2 is an exemplary view of the slit means of FIG. 1.
FIG. 3 is a conceptual diagram illustrating an interference fringe generated on an object when the double-slit structure of FIG. 2 is used.
FIG. 4 illustrates examples of interference fringes according to the number of slits of FIG. 2.
FIG. 5 is a block diagram of an apparatus for acquiring 3D depth information according to an embodiment of the present invention.
FIG. 6 is a flowchart of a 3D depth information acquisition method using the apparatus of FIG. 5.
FIG. 7 is a conceptual diagram of a bar-shaped slit applicable to this embodiment.
FIG. 8 is a schematic layout view of slit means using the vertical bar-type slit and the horizontal bar-type slit of FIG. 7.
FIG. 9 is a diagram for explaining step S660 of FIG. 6.

DETAILED DESCRIPTION Hereinafter, exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings so that those skilled in the art may easily implement the present invention.

FIG. 1 is a conceptual diagram of a 3D depth information acquisition method according to an embodiment of the present invention. The present invention generates 3D image information by fusing depth information of an object with a 2D image of the object obtained by a 2D camera. To this end, the present invention includes slit means 200, camera means 100, and illumination means 300.

First, the camera means 100 is implemented as a general 2D camera and obtains a two-dimensional captured image of the object 10. Here, the two-dimensional captured image is obtained while general illumination is irradiated onto the object 10 by the illumination means 300.

The slit means 200 passes a monochromatic light source through at least one slit (e.g., a single slit or a double slit) to generate an interference fringe on the surface of the object 10. It is apparent that the interference fringe includes the concept of a diffraction fringe. This slit means 200 corresponds to a known device based on Thomas Young's principle of light interference.

FIG. 2 is an exemplary view of the slit means of FIG. 1. FIG. 2(a) corresponds to a single-slit structure, and FIG. 2(b) corresponds to a double-slit structure. The structures of single and double slits are known in the art. The monochromatic light source may be selected from infrared, ultraviolet, and visible light. However, when the object 10 is a living thing, the use of ultraviolet light is excluded.

FIG. 3 is a conceptual diagram illustrating an interference fringe generated on an object when the double-slit structure of FIG. 2 is used. For convenience of description, FIG. 3 represents the interference fringe for a cross section of the object.

As shown in the right part of FIG. 3(a), the interference fringe includes dark fringes generated by destructive interference and bright fringes generated by constructive interference. Dark and bright fringes are formed alternately outward from the center of the object. The spacing between the interference fringes is determined by the wavelength of the light source used and the distance between the inspection object and the slit means.

In FIG. 3, m corresponds to the order of interference of each bright fringe. The part where m = 0 corresponds to the bright fringe formed at the center of the object. The bright fringes outward from the center are indicated by m = 1, 2, 3, and 4, respectively.

FIG. 3(b) is a simplified representation of FIG. 3(a): the distance from the bright fringe of m = 0 to the bright fringe of any nonzero order m is denoted y, the distance between the two slits is denoted d, and the distance between the slit means and the object 10 is denoted L.

Here, as is well known, the wavelength λ of the monochromatic light source satisfies λ = (d · y) / (m · L). The distance L between the slit means and the object 10 can be regarded as substantially corresponding to the depth information of the object 10. FIG. 3 illustrates a planar object 10, so L is a fixed value over the entire surface of the object 10. However, if the surface of the object 10 is three-dimensional, the value of L will not be constant over the entire surface of the object 10.

In the present embodiment, the wavelength of the monochromatic light source and the specification of the slit are known in advance, while L, corresponding to the depth information of the object 10, is unknown. Therefore, rearranging the above equation for L gives L = (d · y) / (m · λ). Here, m and y are values that can be obtained from the captured image of the interference fringe generated on the object 10, as will be described in detail later. Therefore, using the captured image of the interference fringe, depth information of the object may be obtained for each coordinate point in the image.

Here, the interference fringe generated by the slit means 200 may be photographed with a general 2D camera such as the camera means 100 of FIG. 1. In addition, it is preferable to photograph the interference fringe in a dark room without lighting so as not to be influenced by external factors. When measured in a dark room, it is possible to obtain an image of only the interference fringe produced by a single wavelength of infrared or ultraviolet light.

In general, a 2D camera can observe infrared or ultraviolet rays in addition to visible light through internal gain control. Therefore, even when an infrared or ultraviolet light source is used as the monochromatic light source, the camera means 100, that is, the 2D camera, may be used in the present embodiment without separately providing an infrared or ultraviolet camera. In this case, the camera means 100 is used both for acquiring the 2D captured image and for acquiring the interference fringe, and the two acquisitions may be performed sequentially.

Of course, when an infrared or ultraviolet light source is used as the monochromatic light source, a dedicated infrared or ultraviolet camera can obtain a better-quality image. Therefore, in addition to the camera means 100 for capturing the 2D image of FIG. 1, an infrared camera or an ultraviolet camera may be further provided. In that case, the camera means 100 for photographing the 2D image of the object 10 is provided with a filter that passes only visible light so as not to be affected by infrared or ultraviolet rays. Alternatively, the camera means 100 may be configured to include all functions of a 2D camera, an infrared camera, and an ultraviolet camera.

FIG. 4 illustrates examples of interference fringes according to the number of slits of FIG. 2. FIGS. 4(a) to 4(d) schematically illustrate the interference fringes occurring on the plane of the object 10 when the number of slits is one to four, respectively.

In fact, the interference fringes are formed as alternating bright and dark fringes as shown in FIG. 3. Viewed in plan, the interference fringe of FIG. 3 is circular as shown in FIG. 4. The innermost small circle corresponds to the bright fringe of m = 0. When the captured image of the circular interference fringe is image-processed and the bright-fringe portions are converted into solid lines, the fringe may be expressed as several concentric circles as shown in FIG. 4. The greater the number of slits, the more points at which the bright fringes intersect; these intersections are where strong constructive interference occurs.

For convenience of description, FIG. 4 shows the bright fringes at regular intervals; in reality, the spacing increases and the brightness of the fringes decreases toward the outside. In addition, when the object is not planar but three-dimensional, a distorted or bent bright fringe may appear at points corresponding to the three-dimensional portions. Since the configuration of the slit means according to Thomas Young and the principle and examples of interference-fringe generation are known in the art, a more detailed description thereof will be omitted.

Next, an apparatus and method for acquiring 3D depth information according to the present embodiment will be described based on the above. FIG. 5 is a block diagram of an apparatus for acquiring 3D depth information according to an embodiment of the present invention. The apparatus 400 may include a second image acquisition unit 410, a region grouping unit 420, a first image acquisition unit 430, an image processing unit 440, a depth information acquisition unit 450, and a 3D image generation unit 460.

FIG. 6 is a flowchart of a 3D depth information acquisition method using the apparatus of FIG. 5. Hereinafter, the three-dimensional depth information acquisition method according to an embodiment of the present invention will be described in detail.

First, the second image acquisition unit 410 acquires a second captured image of the object 10 by the camera means 100 (S610). Here, the second captured image corresponds to a general 2D image.

Next, the area grouping unit 420 divides the second captured image into a plurality of divided regions based on the color information in the second captured image (S620).

For example, the second captured image is first divided into several regions based on color information such as RGB values, grouping pixels of similar color. Next, each region is further divided according to its brightness information, grouping pixels of similar brightness. Then, when edge information is detected, the regions are reclassified according to the edge information. Finally, using symmetry information (e.g., symmetry information from the NTGST algorithm), simple edges are discarded and only edges with symmetry are used to reclassify the regions.
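
As a rough sketch only (not part of the original disclosure; the binning scheme and function names are assumptions, and the edge and symmetry refinements described above are omitted), the color-then-brightness grouping of step S620 could look like this in Python:

    import numpy as np
    from scipy import ndimage

    def group_regions(rgb, color_bins=4, brightness_bins=4):
        """Label each pixel of a 2D captured image with a divided-region index,
        grouping by coarsely quantised color and brightness."""
        rgb = rgb.astype(np.float32) / 255.0
        brightness = rgb.mean(axis=2)

        # Quantise color and brightness so that similar values share a bin.
        color_code = np.round(rgb * (color_bins - 1)).astype(np.int32)
        bright_code = np.round(brightness * (brightness_bins - 1)).astype(np.int32)

        # One combined key per pixel (three color bins plus a brightness bin).
        key = (color_code[..., 0] * color_bins + color_code[..., 1]) * color_bins
        key = (key + color_code[..., 2]) * brightness_bins + bright_code

        # Connected pixels sharing the same key form one divided region.
        labels = np.zeros(key.shape, dtype=np.int32)
        next_label = 0
        for k in np.unique(key):
            comp, n = ndimage.label(key == k)
            labels[comp > 0] = comp[comp > 0] + next_label
            next_label += n
        return labels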

Of course, these are only examples and the present invention is not necessarily limited thereto. That is, in step S620, various known color grouping methods or color classification methods may be applied.

In step S620, pixel points expected to have different depth information are grouped into regions on the second captured image of the object 10, because depths are likely to differ between regions divided according to color, brightness, edges, and the like. Dividing regions by color information is particularly useful when inspecting or obtaining depth information for an object whose depth varies simply with the color or brightness of its surface. In that case, if the depth information of only one pixel point in a region is known, the same depth information may be assigned to the other pixels constituting that region.

Next, the first image acquisition unit 430 obtains from the camera means 100 a first captured image, which is a captured image of the interference fringe generated on the object 10 by the slit means 200 (S630).

Thereafter, the image processing unit 440 converts a plurality of bright fringes among the interference fringes in the first captured image into solid lines through image processing (S640). Various known image processing methods may be applied for this solid-line processing, an example of which has been described above.

The depth information acquisition unit 450 obtains depth information for the pixel points at which the solid-lined bright fringes are located (S650). Since calculating depth information for every pixel constituting a bright fringe may take a long time, the depth information may be calculated only for some pixel points instead of all pixels. For pixel points whose depth information has not been calculated, the depth information of adjacent pixel points for which it has been obtained may be used.

In step S650, the actual depth information for a pixel point where a bright fringe is located is obtained as L_real = (d · y) / (m · λ), using the order of interference of each bright fringe, the coordinates of the pixel where the bright fringe is located, the actual size represented by one pixel, the wavelength of the monochromatic light source, and the slit specification.

Here, L_real is the actual depth information of the pixel point included in the bright fringe of interference order m, and d is the size of the slit in the case of a single slit or the spacing between the two slits in the case of a double slit. Further, y is the actual distance from the center of the bright fringe of order 0 to the pixel point included in the bright fringe of order m, and λ is the wavelength of the monochromatic light source.

A case in which a single slit is used in step S650 will be described with reference to FIG. 4(a). If there is only one slit, the value of d is the size of the slit hole; in this embodiment, it is assumed that d = 1 mm. It is also assumed that the actual size per pixel photographed by the camera means 100 is 0.1 mm and that the wavelength of the light source used is 800 nm (infrared).

Referring to FIG. 4(a), if the pixel distance from the center of the image to an arbitrary point on the circle of m = 2 is determined to be 19 pixels, the depth information L_real of that point is (d · y) / (m · λ) = (1 mm × 1.9 mm) / (2 × 800 nm) = 1.1875 m. Here, y reflects the size of 0.1 mm per pixel over 19 pixels.

Further, if the pixel distance from the center of the image to an arbitrary point on the circle of m = 3 is 29 pixels, the depth information L_real of that point is (d · y) / (m · λ) = (1 mm × 2.9 mm) / (3 × 800 nm) = 1.2083 m.

In this way, depth information can be obtained for pixel points constituting the circle over a full 360° around its center, for example at intervals of 1°. The intervals need not be 1°; as m increases, the intervals can be made finer. Such an operation may be performed for each order to obtain depth information over the entire image. When the object 10 has little variation in depth (e.g., only two depth levels), it is not necessary to acquire depth information for every point.
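
As an illustration under the same assumptions as the earlier sketch (identifiers are not from the patent), sampling an order-m circular bright fringe at fixed angular steps around the m = 0 centre could be written as:

    import numpy as np

    def sample_circle_depths(fringe_mask, m, center_px, pixel_size_m,
                             d_m, wavelength_m, step_deg=1.0):
        """Depth at points of the solid-lined order-m fringe, sampled every step_deg
        degrees around the centre. fringe_mask is a boolean image of that fringe.
        (Angular wrap-around near 0/360 degrees is ignored for brevity.)"""
        rows, cols = np.nonzero(fringe_mask)
        dy, dx = rows - center_px[0], cols - center_px[1]
        angles = np.degrees(np.arctan2(dy, dx)) % 360.0
        radii = np.hypot(dy, dx)

        depths = {}
        for a in np.arange(0.0, 360.0, step_deg):
            i = np.argmin(np.abs(angles - a))   # nearest fringe pixel to this direction
            y_m = radii[i] * pixel_size_m       # actual distance y
            depths[a] = (d_m * y_m) / (m * wavelength_m)
        return depths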

In the case of a double slit, the same method as described above may be applied, except that the spacing between the two slits is used as the value d. When the double slit is used, intersection points between the circles produced by the two slits occur, and only the depth information at these intersection points may be used.

That is, when there are two slits, depth information on the intersection points may be acquired using the coordinates of the pixels corresponding to the intersection points between the solid-lined bright fringes. Referring to FIG. 4(b), for an arbitrary intersection point, depth information may be obtained through the left family of circles and also through the right family of circles. Normally the two depth values will agree within the margin of error; when they differ, the average of the two values may be used as the depth information for the intersection point.
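
A trivial sketch of this averaging rule (the function name and the tolerance value are assumptions, not from the patent):

    def intersection_depth(depth_left_circle, depth_right_circle, tolerance_m=1e-3):
        """Combine the two depth estimates obtained for one intersection point
        through the left and right circle families of a double slit."""
        if abs(depth_left_circle - depth_right_circle) <= tolerance_m:
            return depth_left_circle                            # agreement within margin of error
        return 0.5 * (depth_left_circle + depth_right_circle)   # otherwise use the average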

The previous examples concern the case in which the slit is circular. However, the slit may also be configured in a bar shape; this bar-shaped slit also corresponds to a known structure.

FIG. 7 is a conceptual diagram of bar-shaped slits applicable to this embodiment. The left part of FIG. 7 shows an interference fringe in the form of horizontal stripes produced by two bar-shaped slits formed along the horizontal axis, and the right part shows an interference fringe in the form of vertical stripes produced by two bar-shaped slits formed along the vertical axis.

FIG. 8 is a schematic layout view of slit means using the vertical bar-type slit and the horizontal bar-type slit of FIG. 7. When the two bar-type slits are used together, a check-shaped interference pattern in which vertical and horizontal stripes overlap can be observed, so that the image as a whole takes the form of a matrix.

In the case of using the bar-shaped slit, depth information can be obtained by the same principle as described above. Here, suppose d = 2 mm as the distance between two bar-shaped slits. In addition, it is assumed that the actual size per pixel is 0.1 mm and the wavelength of the light source used is 400 nm (ultraviolet).

If the pixel distance from the center of the image to an arbitrary point on the horizontal stripe of m = 1 is 9 pixels, the depth information L_real of that point is (d · y) / (m · λ) = (2 mm × 0.9 mm) / (1 × 400 nm) = 4.5 m. Further, if the pixel distance from the center of the image to an arbitrary point on the horizontal stripe of m = 2 is determined to be 19 pixels, the depth information L_real of that point is (d · y) / (m · λ) = (2 mm × 1.9 mm) / (2 × 400 nm) = 4.75 m. From this, it can be seen that the point on the m = 1 stripe and the point on the m = 2 stripe have different depth information. In this way, depth information can be calculated for each point.

In the above embodiment, the depth information is obtained in actual units of length. However, in some cases it is not necessary to obtain actual depth information; only the relative depth between points on the object's surface is required, since a 3D image can be generated from the relative depth rather than the actual depth of the surface corresponding to each pixel point.

In this case, in step S650, the relative depth information L_relative = y / m between pixel points is obtained using the order of interference of each bright fringe and the coordinates of the pixel where the bright fringe is located.

Here, L_relative is the relative depth information of the pixel point included in the bright fringe of interference order m, and y is the distance in pixels from the center of the bright fringe of order 0 to the pixel point included in the bright fringe of order m. That is, in all the above examples, if only the ratio of y to m is known for each pixel point, the relative depth between pixels can be determined and a 3D image can be realized from it. This case is applicable, for example, to inspection of depth formation on the surface of an object or to motion-detection games.
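
For illustration (identifiers are assumptions, not from the patent), the relative-depth variant only needs the pixel distance y and the order m:

    def relative_depth(m, center_px, point_px):
        """L_relative = y / m, with y measured in pixels from the m = 0 centre."""
        dy = point_px[0] - center_px[0]
        dx = point_px[1] - center_px[1]
        y_px = (dx * dx + dy * dy) ** 0.5
        return y_px / m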

After step S650, the 3D image generation unit 460 generates 3D image information by mapping the depth information of the pixel points where the bright fringes are located, obtained from the first captured image, onto each divided region of the second captured image (S660).

For a pixel point having no depth information within a divided region, depth information is generated and mapped using the depth information of at least one pixel point mapped to that divided region. Specific examples follow.

FIG. 9 is a diagram for explaining step S660 of FIG. 6. FIG. 9 illustrates an example of mapping the depth information of pixel points acquired from the first captured image onto the divided regions of the second captured image obtained in step S620. The points to which depth information is mapped are shaded.

FIG. 9 illustrates an example in which the second captured image is divided into three divided regions, represented by Group 1, Group 2, and Group 3. Small squares denote individual pixels. The oblique boundary lines in FIG. 9 correspond to boundaries obtained by dividing the groups through image processing according to color, brightness, edge, or symmetry information. As described above, the divided regions of the second captured image correspond to pixel groups that are expected to have different depths.

In addition, FIG. 9 is obtained by mapping depth information onto the pixels at the intersection points between the vertical and horizontal stripes formed on the object by the slit means of FIG. 8; the pixels for which depth information was obtained are shaded in yellow.

First, looking at Group 1, the first divided region, the depth information of the three pixel points mapped to Group 1 is all equal to 100 mm. In this case, depth information of 100 mm is generated for all pixel points in Group 1 that have no depth information; that is, the depth of every pixel included in Group 1 is regarded as 100 mm. Accordingly, when 3D information is desired for a target object whose depth varies with color or brightness, the depth information of a group may be determined from the depth information of only a few pixels in that group.

As another example, looking at Group 2, the depth information of the plurality of pixel points mapped to Group 2 differs. Here, the depth information of pixels ① and ② of Group 2 is the same, 200 mm, while that of pixel ③ is different, 210 mm. A depth of 200 mm is assigned from pixel ① to pixel ②; for example, in Group 2 a depth of 200 mm is assigned to the pixel points from the left end, across the upper and lower pixel rows, up to pixel ②. For pixel points located between pixels ② and ③, depth information is generated linearly by applying a linear function, so that the depth gradually increases from the 200 mm of pixel ② to the 210 mm of pixel ③. In this way, depth information is obtained for all pixels in Group 2. The pixels to the right of pixel ③ may be assigned the same depth as pixel ③, or may be assigned incrementally increasing depth information following the linear function applied previously.

In the case of Group 3, the depth information of the three mapped points also differs, so a first linear function between 110 mm and 120 mm is applied between the first two of the three points, and a second linear function between 120 mm and 150 mm is applied between the latter two points, assigning depth information to each pixel.
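
A simplified sketch of this fill rule (the 1-D layout and names are illustrative assumptions; the embodiment applies the rule per divided region of the 2D image):

    import numpy as np

    def fill_group_depths(length, mapped):
        """Depth for every position along one divided region (1-D for clarity).
        mapped is {position: depth_mm} for the pixel points that received depth
        information from the first captured image."""
        positions = sorted(mapped)
        depths = np.empty(length, dtype=np.float64)

        if len(set(mapped.values())) == 1:
            depths[:] = mapped[positions[0]]   # Group 1 case: one common depth everywhere
            return depths

        # Group 2/3 case: piecewise-linear between mapped points; np.interp holds
        # the first/last mapped value constant before/after the end points.
        xs = np.array(positions, dtype=np.float64)
        ys = np.array([mapped[p] for p in positions], dtype=np.float64)
        depths[:] = np.interp(np.arange(length), xs, ys)
        return depths

    # Group 2 example: positions 0 and 10 mapped to 200 mm, position 20 to 210 mm.
    # fill_group_depths(25, {0: 200.0, 10: 200.0, 20: 210.0})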

In the embodiment of the present invention described above, steps S610 and S620 may instead be performed between steps S650 and S660. In addition, although the embodiment generates three-dimensional image information, the present invention is not necessarily limited thereto; it is also possible to obtain only the depth information of the object using steps S630 to S650. This may be useful when a user wants to know only the depth information of an object rather than a three-dimensional image of it.

In the embodiment of the present invention, the shape and number of slits may be chosen according to the shape of the object, its distance, the inspection conditions, and so on. A single circular slit can be used to obtain simple depth information around the center of the image, with a simple hardware configuration that is easy to manufacture. A single bar-shaped slit is advantageous when only horizontal or vertical information is needed, but is more difficult to manufacture than a circular slit. Two circular slits make it easier to acquire depth information over a larger area, but are more difficult to manufacture than one slit and may increase the amount of calculation. Two vertical/horizontal bar-shaped slits give high accuracy and easy calculation, but increase the complexity of the hardware configuration.

According to the method and apparatus for acquiring three-dimensional depth information of the present invention as described above, the depth information of an object, obtained using Thomas Young's principle of light interference, is fused with the two-dimensional image of the object obtained by a 2D camera, so that 3D image information can easily be generated.

While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed embodiments, but, on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims. Accordingly, the true scope of the present invention should be determined by the technical idea of the appended claims.

100: camera means 200: slit means
300: lighting means 400: three-dimensional depth information acquisition device
410: second image acquisition unit 420: region grouping unit
430: First image acquisition unit 440: Image processing unit
450: depth information acquisition unit 460: 3D image generation unit

Claims (7)

A method of acquiring 3D depth information using a 3D depth information acquisition device, the method comprising:
acquiring, by camera means, a first captured image of interference fringes generated on an object by slit means for passing a monochromatic light source through at least one slit;
image-processing a plurality of bright fringes generated intermittently among the interference fringes in the first captured image so as to convert them into solid lines; and
obtaining, using the order of interference of each solid-lined bright fringe and the coordinates of the pixels where the bright fringes are located, depth information for the pixel points at which the bright fringes are located among all pixels of the first captured image.
The method according to claim 1, wherein, in the obtaining of the depth information, the actual depth information for the pixel points is obtained by the following equation, using the order of interference of each bright fringe, the coordinates of the pixel where the bright fringe is located, the actual size per pixel, the wavelength of the monochromatic light source, and the slit specification:
L_real = (d · y) / (m · λ)
where L_real is the actual depth information of the pixel point included in the bright fringe of interference order m, d is the size of the slit in the case of a single slit or the spacing between the two slits in the case of a double slit, y is the actual distance from the center of the bright fringe of order 0 to the pixel point included in the bright fringe of order m, calculated by reflecting the actual size per pixel, and λ is the wavelength of the monochromatic light source.
The method according to claim 2, wherein, in the obtaining of the depth information, depth information about intersection points is obtained using the coordinates of pixels corresponding to the intersection points between the solid-lined bright fringes.
The method according to any one of claims 1 to 3, further comprising:
acquiring a second captured image of the object using the camera means;
dividing the second captured image into a plurality of divided regions based on color information in the second captured image; and
generating 3D image information by mapping, onto each divided region of the second captured image, the depth information of the pixel points at which the bright fringes obtained from the first captured image are located, wherein, for a pixel point having no depth information within a divided region, depth information is generated and mapped using depth information of at least one pixel point mapped to the corresponding divided region.
The method of claim 4, wherein, in the generating of the 3D image information,
if the depth information of the at least one pixel point mapped to the corresponding divided region is all the same, the same depth information is generated for the pixel points having no depth information in that divided region, and
if the depth information of a plurality of pixel points mapped to the corresponding divided region differs, depth information is generated linearly for pixel points positioned between two pixel points having different depth information, by applying a linear function based on the depth information of those two pixel points.
An apparatus for acquiring 3D depth information, comprising:
a first image acquisition unit for acquiring, by camera means, a first captured image of interference fringes generated on an object by slit means for passing a monochromatic light source through at least one slit;
an image processing unit configured to process a plurality of bright fringes generated intermittently among the interference fringes in the first captured image so as to convert them into solid lines; and
a depth information acquisition unit configured to obtain, using the order of interference of each solid-lined bright fringe and the coordinates of the pixels where the bright fringes are located, depth information for the pixel points at which the bright fringes are located among all pixels of the first captured image.
The apparatus of claim 6, further comprising:
a second image acquisition unit which acquires a second captured image of the object using the camera means;
a region grouping unit for dividing the second captured image into a plurality of divided regions based on color information in the second captured image; and
a 3D image generation unit which generates 3D image information by mapping, onto each divided region of the second captured image, the depth information of the pixel points at which the bright fringes obtained from the first captured image are located, and which, for a pixel point having no depth information within a divided region, generates and maps depth information using depth information of at least one pixel point mapped to the corresponding divided region.
KR1020120140507A 2012-12-05 2012-12-05 Method for acquiring three dimensional depth information and apparatus thereof KR101275749B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020120140507A KR101275749B1 (en) 2012-12-05 2012-12-05 Method for acquiring three dimensional depth information and apparatus thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020120140507A KR101275749B1 (en) 2012-12-05 2012-12-05 Method for acquiring three dimensional depth information and apparatus thereof

Publications (1)

Publication Number Publication Date
KR101275749B1 true KR101275749B1 (en) 2013-06-19

Family

ID=48867170

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020120140507A KR101275749B1 (en) 2012-12-05 2012-12-05 Method for acquiring three dimensional depth information and apparatus thereof

Country Status (1)

Country Link
KR (1) KR101275749B1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104822060A (en) * 2015-05-05 2015-08-05 联想(北京)有限公司 Information processing method, information processing device and electronic equipment
CN111183457A (en) * 2017-09-27 2020-05-19 夏普株式会社 Video image generation device, video image capturing system, video image generation method, control program, and recording medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20040001098A (en) * 2002-06-27 2004-01-07 한국과학기술원 Phase shifted diffraction grating interferometer and measuring method
KR20100126017A (en) * 2009-05-22 2010-12-01 (주) 인텍플러스 Apparatus for measurment of three-dimensional shape
KR20110084029A (en) * 2010-01-15 2011-07-21 삼성전자주식회사 Apparatus and method for obtaining 3d image


Similar Documents

Publication Publication Date Title
US10902668B2 (en) 3D geometric modeling and 3D video content creation
US10347031B2 (en) Apparatus and method of texture mapping for dental 3D scanner
JP4290733B2 (en) Three-dimensional shape measuring method and apparatus
US8160334B2 (en) Method for optical measurement of objects using a triangulation method
JP5623347B2 (en) Method and system for measuring shape of reflecting surface
US8339616B2 (en) Method and apparatus for high-speed unconstrained three-dimensional digitalization
JP6270157B2 (en) Image processing system and image processing method
KR101906780B1 (en) Measurement system of a light source in space
CN104197861B (en) Three-dimension digital imaging method based on structure light gray scale vector
JP5633058B1 (en) 3D measuring apparatus and 3D measuring method
CN104596439A (en) Speckle matching and three-dimensional measuring method based on phase information aiding
JP4670341B2 (en) Three-dimensional shape measurement method, three-dimensional shape measurement device, and three-dimensional shape measurement program
CN106871815A (en) A kind of class minute surface three dimension profile measurement method that Kinect is combined with streak reflex method
JP2012504771A (en) Method and system for providing three-dimensional and distance inter-surface estimation
CN107860337A (en) Structural light three-dimensional method for reconstructing and device based on array camera
CN106767526A (en) A kind of colored multi-thread 3-d laser measurement method based on the projection of laser MEMS galvanometers
CN110278431A (en) Phase-detection focuses 3-D image acquisition system automatically
CN101482398A (en) Fast three-dimensional appearance measuring method and device
JP6035031B2 (en) Three-dimensional shape measuring device using multiple grids
KR20170045232A (en) 3-d intraoral measurements using optical multiline method
KR101275749B1 (en) Method for acquiring three dimensional depth information and apparatus thereof
CN102750698B (en) Texture camera calibration device, texture camera calibration method and geometry correction method of texture image of texture camera
CN103591906A (en) A method for carrying out three dimensional tracking measurement on a moving object through utilizing two dimensional coding
RU2573767C1 (en) Three-dimensional scene scanning device with non-lambert lighting effects
JP4382430B2 (en) Head three-dimensional shape measurement system

Legal Events

Date Code Title Description
A201 Request for examination
A302 Request for accelerated examination
E902 Notification of reason for refusal
E701 Decision to grant or registration of patent right
N231 Notification of change of applicant
GRNT Written decision to grant
FPAY Annual fee payment

Payment date: 20160610

Year of fee payment: 4

FPAY Annual fee payment

Payment date: 20170602

Year of fee payment: 5

FPAY Annual fee payment

Payment date: 20180531

Year of fee payment: 6

FPAY Annual fee payment

Payment date: 20190328

Year of fee payment: 7