IL224345A - System and method for spatial channel separation - Google Patents

System and method for spatial channel separation

Info

Publication number
IL224345A
Authority
IL
Israel
Prior art keywords
image
pairs
sequence
lens assembly
stereoscopic
Prior art date
Application number
IL224345A
Other languages
Hebrew (he)
Inventor
Appel Uzi
Eckhoize Vardit
Goldwaser Amit
Neta Uri
Original Assignee
Appel Uzi
C3D Ltd
Eckhoize Vardit
Goldwaser Amit
Neta Uri
Priority date
Filing date
Publication date
Application filed by Appel Uzi, C3D Ltd, Eckhoize Vardit, Goldwaser Amit and Neta Uri
Priority to IL224345A
Publication of IL224345A

Description

APPLICATION FOR PATENT

Inventors: Uri Neta, Vardit Eckhoize, Uzi Appel and Amit Goldwaser

Title: SYSTEM AND METHOD FOR SPATIAL CHANNEL SEPARATION

FIELD AND BACKGROUND OF THE INVENTION

The present invention relates to a system for capturing 3D information of objects and, more particularly, to a system and method for capturing 3D information of teeth and using such information for dental applications.
A three-dimensional object or scene includes depth information; however, when such an object or scene is captured by a camera, the light information is mapped onto a two-dimensional (2D) image plane, making 3D data difficult to extract.
In order to preserve such depth information, an object or scene must be captured from several angles and the captured information combined to form an image that includes depth information (also referred to herein as a 3D image).
Stereoscopic optical systems are capable of providing depth information by producing separate images from differing perspective viewpoints. These separate images can be presented to respective left and right eyes of a user to enable the user to perceive an image having some depth or they can be used to reconstruct a 3D virtual image or model of the object or scene captured.
Use of stereoscopic optical systems as intraoral digital scanners is well known in the art. Three-dimensional modeling of the oral cavity and teeth is required in a large number of procedures in dentistry, such as restorative dentistry and orthodontics. At present, most modeling is effected using the traditional impression fabrication process; however, three-dimensional virtual images of tooth preparations generated by intraoral scanners can be used for direct fabrication using CAD/CAM systems or to create accurate master models for restorations, and as such these systems are slowly replacing traditional techniques (see Logozzo et al., "A Comparative Analysis of Intraoral 3D Digital Scanners for Restorative Dentistry", Internet Journal of Medical Technology, 2011, Vol. 5, No. 1).
Stereoscopic optical systems which utilize telecentric lenses for capturing stereoscopic images are known in the art and are discussed hereinbelow. Telecentric lenses provide purely orthographic projections of object/scene points by locating an aperture stop at the focal point of a lens. This property makes such lenses particularly suitable for stereoscopic optical systems (see "Multi-Aperture Telecentric Lens for 3D Reconstruction", Optics Letters, Vol. 36, No. 7, pp. 1050-1052, April 1, 2011). However, since various light channels representing different angles of view are captured simultaneously using such a system, such channels must be separated in order to obtain separate images representing the different angles of view. Such separation of the different light channels requires a shutter system (e.g. a pinhole) for temporally separating the various channels, thus enabling sequential collection of images.
While reducing the present invention to practice, the present inventors have uncovered that images representing different angles of view of an object can be collected simultaneously and effectively by using various spatial separation approaches, and that a system utilizing such spatial separation approaches would be particularly suitable for use as an intraoral digital scanner.
SUMMARY OF THE INVENTION

According to one aspect of the present invention there is provided a system for obtaining 3D information from an object comprising: (a) a front telecentric lens assembly capable of capturing light beams from the object, the light beams including two light channels each representing a unique view of the object; (b) a rear lens assembly for splitting the light beams into two (or more) discrete light channels; and (c) an image capturing device for simultaneously capturing the two or more discrete light channels and generating image information from each of the two or more discrete light channels thereby providing a pair of images each representing the unique view of at least a portion of the object.
According to further features in preferred embodiments of the invention described below, the image capturing device includes two or more spatially separated image sensors.
According to still further features in the described preferred embodiments the image capturing device includes a single image sensor having two or more spatially separated sensor regions.
According to still further features in the described preferred embodiments the front telecentric lens assembly includes at least one lens about 15-300 mm in diameter with a focal length of about 50-80 mm.
According to still further features in the described preferred embodiments the front telecentric lens assembly and the rear lens assembly are spaced apart by 60-120 mm.
According to still further features in the described preferred embodiments the rear lens assembly includes at least one lens about 5-20 mm in diameter with a focal length of about 15-30 mm.
According to still further features in the described preferred embodiments, the imaging planes of the two spatially separated image sensors are co-planar.
According to still further features in the described preferred embodiments, the imaging planes of the two spatially separated image sensors are angled with respect to each other at an angle of 60-90 degrees.
According to still further features in the described preferred embodiments the two or more discrete light channels represent two different angular views of the object.
According to still further features in the described preferred embodiments the system further comprises a processing unit for processing the image information from each of the two or more discrete light channels and generating a pair of stereoscopic images.
According to still further features in the described preferred embodiments the processing unit is configured for processing a sequence of several pairs of stereoscopic images representing overlapping portions of the object to thereby generate a sequence of stereoscopic image pairs.
According to still further features in the described preferred embodiments the processing unit is configured for selecting specific pairs of stereoscopic images from the sequence of stereoscopic image pairs according to a predetermined image information overlap and a displacement between the specific pairs in the sequence.
According to still further features in the described preferred embodiments the specific pairs exhibit a maximum separation in the sequence with the image information overlap of at least 50%.
According to still further features in the described preferred embodiments the processing unit is configured for generating a cloud of 3D points from each specific pair of stereoscopic images.
According to still further features in the described preferred embodiments the processing unit is configured for matching the cloud of 3D points between two adjacent specific pairs in the sequence to thereby generate a transformation matrix including information on motion of specific points between the two adjacent specific pairs.
According to still further features in the described preferred embodiments the processing unit is configured for registering the transformation matrix with a reference coordinate system and combining registered transformation matrices of several specific pairs of stereoscopic images to thereby align 3D points to the reference coordinate system.
According to still further features in the described preferred embodiments the front lens assembly, the rear lens assembly and the image capturing device are positioned within a housing.
According to still further features in the described preferred embodiments the system further includes a mirror assembly attachable to the housing in front of the front lens assembly, the mirror assembly being positioned so as to reflect light from a surface of the object to the front lens assembly.
According to still further features in the described preferred embodiments the housing includes a processing unit including a six axis accelerometer or gyroscope.
According to still further features in the described preferred embodiments the processing unit includes a processor for processing data from the accelerometer or gyroscope and appending the data to the pair of images.
According to still further features in the described preferred embodiments the system further comprises an illumination source.
According to still further features in the described preferred embodiments the illumination source is an IR illumination source.
According to another aspect of the present invention there is provided a method of generating 3D information of an object comprising: (a) providing a sequence of several pairs of stereoscopic images representing overlapping portions of the object; (b) selecting specific pairs of stereoscopic images from the sequence of stereoscopic image pairs having image information overlap of at least 50% and being maximally displaced from each other in the sequence; and (c) using the specific pairs of stereoscopic images to derive 3D information of the object.
The present invention successfully addresses the shortcomings of the presently known configurations by providing a system for simultaneous acquisition of a pair of stereo images and using one or more pairs of the stereo images for reconstructing 3D information of an object such as a tooth.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. Although methods and materials similar or equivalent to those described herein can be used in the practice or testing of the present invention, suitable methods and materials are described below. In case of conflict, the patent specification, including definitions, will control. In addition, the materials, methods, and examples are illustrative only and not intended to be limiting.
Implementation of the method and system of the present invention involves performing or completing selected tasks or steps manually, automatically, or a combination thereof. Moreover, according to actual instrumentation and equipment of preferred embodiments of the method and system of the present invention, several selected steps could be implemented by hardware or by software on any operating system of any firmware or a combination thereof. For example, as hardware, selected steps of the invention could be implemented as a chip or a circuit. As software, selected steps of the invention could be implemented as a plurality of software instructions being executed by a computer using any suitable operating system. In any case, selected steps of the method and system of the invention could be described as being performed by a data processor, such as a computing platform for executing a plurality of instructions.
BRIEF DESCRIPTION OF THE DRAWINGS

The invention is herein described, by way of example only, with reference to the accompanying drawings. With specific reference now to the drawings in detail, it is stressed that the particulars shown are by way of example and for purposes of illustrative discussion of the preferred embodiments of the present invention only, and are presented in the cause of providing what is believed to be the most useful and readily understood description of the principles and conceptual aspects of the invention. In this regard, no attempt is made to show structural details of the invention in more detail than is necessary for a fundamental understanding of the invention, the description taken with the drawings making apparent to those skilled in the art how the several forms of the invention may be embodied in practice.
In the drawings: FIG. 1 schematically illustrates a prior art system for generating stereoscopic images.
FIG. 2 schematically illustrates another prior art system for generating stereoscopic images.
FIGs. 3a-b illustrate one embodiment of the present system in which two optical channels representing two different viewing angles of the object are collected by two spatially separated regions of a single image sensor.
FIG. 4 illustrates another embodiment of the present system in which two optical channels representing two different viewing angles of the object are collected by two spatially separated image sensors.
FIG. 5a schematically illustrates the components of the present system.
FIG. 5b illustrates the component layout within the imaging head of the present system.
FIG. 6 is a flow chart diagram of the process of obtaining 3D information from imaged teeth.
FIGs. 7a-b illustrate a setup used for calibration of the present system (Figure 7a) and a flowchart showing use of calibration in 3D information reconstruction (Figure 7b).
FIGs. 8a-b illustrate use of the present system in scanning (Figure 8a) a model of the lower jaw and producing a 3D computer model (Figure 8b) therefrom.
DESCRIPTION OF THE PREFERRED EMBODIMENTS

The present invention is of a system which can be used to capture 3D information of an object. Specifically, the present invention can be used to capture 3D information of a tooth and generate 3D models suitable for restorative dentistry and orthodontics.

The principles and operation of the present invention may be better understood with reference to the drawings and accompanying descriptions.
Before explaining at least one embodiment of the invention in detail, it is to be understood that the invention is not limited in its application to the details of construction and the arrangement of the components set forth in the following description or illustrated in the drawings. The invention is capable of other embodiments or of being practiced or carried out in various ways. Also, it is to be understood that the phraseology and terminology employed herein is for the purpose of description and should not be regarded as limiting.
Several prior art stereoscopic optical systems utilize telecentric lenses for obtaining image information. However, since such systems utilize a shutter system, the image separation obtained thereby can only be sequential and therefore requires a stationary object for acquisition.
The top half of Figure 1 (derived from Optics Letters, Vol. 36, No. 7, pp. 1050-1052, April 1, 2011) illustrates a 4f telecentric system with lenses 10 and 12 having a coinciding focal plane 14 (also referred to herein as the optical axis). The focal plane runs through shutter 16 (a pinhole). A light beam originating from the object and running parallel to the optical axis is focused through lens 12 into shutter 16 and lens 10, where it exits as optical axis 18. In the bottom half of Figure 1, the pinhole of shutter 16 is shifted off-axis (down). An incoming light beam not parallel to optical axis 14 is focused through lens 12 into shutter 16 and lens 10 to exit as optical axis 19, which is angled with respect to optical axis 14.
Figure 2 illustrates another prior art system (ISee3D Inc.). This stereoscopic optical system also utilizes a shutter (S) which blocks part of the beam at the plane of the iris. Images from different viewpoints are captured sequentially on the image plane by alternating the shutter position over time.
Although sequential collection of stereoscopic images enables reconstruction of 3D data of an object, it requires more time for collection of paired images and thus limits use of a system employing sequential collection in cases where scanning of an object is required (e.g. scanning a full jaw in dentistry).
In order to overcome the abovementioned limitation, the present inventors have devised a stereoscopic optical system which is capable of simultaneously collecting two or more discrete light channels, each representing a different view of an object or scene.
Thus, according to one aspect of the present invention, there is provided a system for obtaining 3D information from an object and/or scene. As used herein, the phrase "3D information" refers to information which can be used to map the 3D structure of the object, and/or map the object in 3D space (scene). Such information can include, for example, XYZ coordinates of an object point. As is further described hereinunder, such 3D information can be used to generate virtual and/or 3D models of the object and/or scene.
The system of the present invention includes a front telecentric lens assembly capable of capturing light beams from the object and a rear lens assembly for splitting the light beams into at least two discrete light channels, each representing a unique view of the object. The system further includes one or more image capturing devices for simultaneously capturing the discrete light channels and generating image information therefrom, thereby providing a pair of images each representing a unique view of at least a portion of the object.
In order to enable simultaneous collection of two or more discrete light channels, the present system employs a unique layout of lenses and two or more optically aligned and spatially separated light collection sensor regions.
The front lens or lenses used by the present system can be 15-300 mm in diameter with a focal length of 50-80 mm, while the back lenses can be 5-20 mm in diameter with a focal length of 15-30 mm. The image sensors can be CCD arrays with an overall pixel count of 4-20 megapixels (MP). An optional multi-pinhole filter can be positioned between the front and back lenses. Such a filter can have two or more apertures spaced apart by 4-10 mm, with each being 0.5-1.5 mm in diameter.
Figures 3a-b and 4 illustrate the optical layout of an imaging head in two embodiments of the present system, which is referred to herein as system 20.
Figure 3a shows a telecentric system composed of front lens assembly 22 (which can include one or more lenses), back lenses 24 and 26 and image sensor 28. System 20 can also include an optional filter on pinhole plane 30 for further separating the light channels originating from different views of object 32 while filtering out light which does not originate from object 32 and would otherwise increase noise at image sensor 28.
Light rays 34 and 36 scattered from object 32 pass through telecentric lens 22 to create light channels 38 and 40 representing two different views of object 32. Light channels 38 and 40 pass through pinhole 30 and are directed to lenses 24 and 26. Lenses 24 and 26 focus light channels 38 and 40 into collimated light channels 42 and 44, which are simultaneously projected onto sensor 28 at regions 46 and 48 (respectively), forming two separate images, each representing a different view of object 32. Regions 46 and 48 of sensor 28 are separated by spacer 50 to avoid cross-talk between collimated light channels 42 and 44 (Figure 3b).
Figure 4 illustrates another optical layout embodiment in which sensor 28 includes two separate sensors (cameras) 46 and 48 having the same image plane. In this embodiment of system 20, light channels 42 and 44 are not collimated and in fact are projected outwardly increasing separation and further minimizing any cross talk between light channels 42 and 44. Since angling light channels 38 and 40 outward can distort the image formed on sensors 46 and 48, system 20 applies a correction algorithm to the formed images (as discussed hereinbelow) to correct any distortion.
Figure 5a is a block diagram illustrating the components of system 20.
System 20 includes imaging head 50, which contains the optical components described above, as well as an optical processing unit 52. Optical processing unit 52 includes the image sensors (cameras 1 and 2 in this configuration), an illumination source, a six axis sensor (accelerometer, gyroscope) and ports (wired, e.g. USB, or wireless, e.g. Bluetooth, WiFi) for communicating with a computing platform (PC).
The six axis sensor collects spatial orientation data (x, y, z, α, β, γ) of imaging head 50 and the processor of processing unit 52 assigns such data to each stereoscopic image pair captured by cameras 1 and 2. The spatially-tagged stereoscopic image pairs are then communicated to the PC for further processing as described below.
Figure 5b illustrates imaging head 50 and its internal components. Imaging head 50 includes a housing 60 containing the optical components described above, including front lens 22, pinhole filter 30, back lenses 24 and 26 and image sensors 46 and 48 (cameras 1 and 2 in Figure 5a). Processing unit 52 is positioned behind image sensors 46 and 48.
Imaging head 50 also includes an illumination source 23 (e.g. white or IR LEDs) within housing 60 for lighting the object imaged and a removable front assembly 62 for carrying a mirror 64 and spacer 66 for spacing front assembly 62 from teeth 80 (and providing a set imaging distance). Mirror 64 enables collection of light scattered from a tooth surface opposite front lens 22 and as such, front assembly 62 is mounted on housing 60 when such a view is desired.
Figure 6 illustrates the steps of creating a dental surface model from a sequence of stereoscopic images using the computing platform of the present system.
A stereoscopic image sequence is captured by imaging head 50 of system 20 as described above while moving imaging head 50 along the jaw (scanning the jaw), with the imaging head being displaced about 2-5 mm from the teeth under illumination provided by the imaging head. A six axis accelerometer within the imaging head senses the position of the imaging head and tags each captured image pair with spatial position information.
The image capture rate and the rate of motion of the imaging head along the jaw are such that adjacent image pairs exhibit a large degree of overlap, between 50% and 90%.
The relationship between the imaging speed, the frame rate of the image sensor (camera) and the frame size can be expressed as:

OL = (1 - V / (f · FOV)) · 100

where f is the frame rate in frames per second, FOV is the field of view along the motion direction, V is the velocity of motion and OL is the overlap of successive images in percent. A normal FOV is 10-15 mm and a normal motion velocity is 50 mm/sec at 20 FPS, giving an overlap (OL) of 75%.
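By way of illustration only, the following minimal Python sketch evaluates this relationship; the function names are assumptions introduced here, and the numeric values are the ones quoted above:

def overlap_percent(fps, fov_mm, velocity_mm_s):
    # Overlap of successive frames: OL = (1 - V / (f * FOV)) * 100.
    return (1.0 - velocity_mm_s / (fps * fov_mm)) * 100.0

def max_velocity(fps, fov_mm, target_overlap_pct):
    # Largest scan velocity that still achieves the target overlap.
    return fps * fov_mm * (1.0 - target_overlap_pct / 100.0)

print(overlap_percent(20, 10, 50))   # 75.0, matching the quoted figures
print(max_velocity(20, 10, 50.0))    # 100.0 mm/sec for at least 50% overlap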
To scan an entire jaw (e.g. 18 teeth), the imaging head can be operated for less than 3 seconds for each pass. Preferably, three consecutive passes are employed in order to ensure overlap between the outer (using mirror 64 described above with respect to Figure 5b), inner and top sides of the teeth and in order to generate a sequence of image pairs having a certain degree of redundancy.
Following scanning, a sub-sequence 100 of image pairs having an acceptable image quality is selected for further processing by the computing platform.
In step 110, a quality measure is assigned to each image pair of sub-sequence 100. The quality measure may include an image sharpness measure, an image highlight measure, an image segmentation quality measure, and the like.
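The text does not fix the exact form of these metrics; by way of example only, the following Python/OpenCV sketch shows one plausible scoring scheme, in which the variance-of-Laplacian sharpness measure, the highlight threshold and the combination rule are all assumptions:

import cv2
import numpy as np

def sharpness_score(image_bgr):
    # Variance of the Laplacian: higher values indicate sharper images.
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    return float(cv2.Laplacian(gray, cv2.CV_64F).var())

def highlight_fraction(image_bgr, thresh=250):
    # Fraction of near-saturated pixels (specular highlights on wet teeth).
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    return float(np.mean(gray >= thresh))

def pair_quality(left, right):
    # Score a stereoscopic pair by its least-sharp member, penalizing highlights.
    sharp = min(sharpness_score(left), sharpness_score(right))
    hl = max(highlight_fraction(left), highlight_fraction(right))
    return sharp * (1.0 - hl)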
In step 120, a motion estimation algorithm computes the 2D motion between adjacent image pairs, thus determining the level of overlap. Motion estimation can be effected using optical flow, block-matching, correlation, feature point matching and the like. Once each stereoscopic image pair is tagged with an image quality measure and a motion vector with respect to the preceding image pair, a Pairs Selection step 130 selects high-quality image pairs that are maximally displaced (in sub-sequence 100), but exhibit an above-threshold overlap ratio (e.g. at least 50%).
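By way of example only, the following Python/OpenCV sketch combines one of the named motion-estimation methods (phase correlation) with a greedy selection that keeps maximally displaced frames subject to the at-least-50% overlap constraint; the data layout and function names are assumptions:

import cv2
import numpy as np

def shift_between(prev_gray, curr_gray):
    # 2D displacement magnitude between adjacent frames via phase correlation.
    (dx, dy), _ = cv2.phaseCorrelate(np.float32(prev_gray), np.float32(curr_gray))
    return float(np.hypot(dx, dy))

def select_pairs(frames, fov_px, min_overlap=0.5):
    # Greedily keep frames maximally displaced from the last kept frame
    # while still guaranteeing at least min_overlap overlap with it.
    budget = fov_px * (1.0 - min_overlap)    # allowed shift between kept frames
    kept, acc = [0], 0.0
    for i in range(1, len(frames)):
        step = shift_between(frames[i - 1], frames[i])
        if acc + step > budget:              # frame i would overlap below threshold
            if kept[-1] != i - 1:
                kept.append(i - 1)           # so keep its predecessor instead
            acc = step
        else:
            acc += step
    return kept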
Steps 140-170 are then performed on each image pair as follows.
A Distortion Correction step 140 applies a geometric transformation to each image pair in order to reduce distortions (e.g. lens-induced distortions). Distortion correction may be performed by providing correction equations that transform distorted image coordinates to undistorted coordinates. Such equations are usually based on the focal lengths (x, y) and principal point, followed by skew, radial distortion and tangential distortion coefficients. Alternatively, coordinate correction tables are provided and the correction values for each distorted image pixel are obtained from the table via interpolation. The distortion parameters/table can be obtained experimentally, by measuring reference objects/patterns, or from the lens design software [Z. Zhang, "Flexible camera calibration by viewing a plane from unknown orientations," ICCV, pp. 666-673, 1999]. Alternatively, a U-shaped metal spring calibration target which includes an accurate small-hole pattern (Figure 7a) can be used to establish jaw image perspective and provide a correction for processing of the stereoscopic data, as is shown in Figure 7b.
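By way of example only, the equation-based correction described above can be sketched as follows in Python/OpenCV; the intrinsic matrix and distortion coefficients are hypothetical placeholders standing in for values obtained by a Zhang-style calibration:

import cv2
import numpy as np

# Hypothetical intrinsics (focal lengths fx, fy and principal point cx, cy);
# real values must come from calibrating the actual optics.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
# Distortion coefficients in OpenCV order: k1, k2 (radial), p1, p2 (tangential), k3.
dist = np.array([-0.12, 0.03, 0.001, -0.0005, 0.0])

def correct_distortion(image):
    # Warp a distorted capture into undistorted image coordinates.
    return cv2.undistort(image, K, dist)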
The calibration target is utilized as follows: The calibration target is imaged from an angle to provide a perspective image of the calibration holes with the distances between the calibration holes altered from a normal top view. Since the distances between the holes are known, the image can be rectified to the original top view by using the correct distances between the holes.
The calibration target is then positioned on top of the last tooth of an arch and a perspective image of the full arch with the calibration target is captured. The image is then corrected to a top view as described above.
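By way of example only, this rectification to a top view can be sketched as a homography estimated from the known hole spacing (Python/OpenCV); the hole-detection input and the pixels-per-millimeter scale are assumptions:

import cv2
import numpy as np

def rectify_to_top_view(image, detected_holes_px, hole_grid_mm, px_per_mm=20.0):
    # detected_holes_px: N x 2 hole centers found in the perspective image.
    # hole_grid_mm:      N x 2 known hole positions on the target, in mm.
    dst = np.float32(hole_grid_mm) * px_per_mm     # known geometry mapped to pixels
    H, _ = cv2.findHomography(np.float32(detected_holes_px), dst, cv2.RANSAC)
    width = int(dst[:, 0].max()) + 50              # crude output-size estimate
    height = int(dst[:, 1].max()) + 50
    return cv2.warpPerspective(image, H, (width, height))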
A stereo image is then captured and processed as described herein and a cloud of points is generated.
A 2D top view of the 3D cloud of points that matches the corresponding tooth on the corrected full-arch image is identified, and the cloud of points is positioned accordingly.
The process is repeated for each cloud of points; this ensures that errors that might arise from matching and combining each 3D cloud with a single image do not accumulate. If a series of 3D clouds of points were combined without such stepwise calibration, errors would accumulate and the resulting 3D information for a series of teeth would be less accurate.
The Pair Rectification step 150 aligns the image pair such that left and right image points depicting the same world point both lie on the same image row/scanline. Image rectification may be done in a calibration pre-process, or dynamically on captured image pairs. Assuming a rigid camera setup and no zoom during the scanning, a pre-process approach may be adopted.
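By way of example only, such a pre-process can be sketched as follows in Python/OpenCV; the stereo intrinsics and extrinsics (K1, d1, K2, d2, R, T) are assumed to come from a prior calibration of the rigid two-camera setup:

import cv2

def build_rectify_maps(K1, d1, K2, d2, R, T, size):
    # Precompute row-aligning rectification maps for a rigid stereo rig.
    # size is (width, height); R, T give the right camera's pose relative to the left.
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, d1, K2, d2, size, R, T)
    map1 = cv2.initUndistortRectifyMap(K1, d1, R1, P1, size, cv2.CV_32FC1)
    map2 = cv2.initUndistortRectifyMap(K2, d2, R2, P2, size, cv2.CV_32FC1)
    return map1, map2, Q    # Q reprojects disparities to 3D later on

def rectify_pair(left, right, map1, map2):
    # After remapping, matching points lie on the same image row (scanline).
    l = cv2.remap(left, map1[0], map1[1], cv2.INTER_LINEAR)
    r = cv2.remap(right, map2[0], map2[1], cv2.INTER_LINEAR)
    return l, r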
The Stereoscopic Matching step 160 generates a disparity map depicting the image displacements between matching object points in the rectified, distortion-corrected images. Such displacement values can be obtained for a grid of points, or for all image points with the exception of image points outside the overlap area between the stereoscopic image pair, for image points where no visual information is available or for image points where a consistent stereoscopic matching cannot be performed (e.g. highlight areas).
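By way of example only, a dense disparity map can be sketched with OpenCV's semi-global block matcher; all matcher parameters below are illustrative assumptions, and unmatched pixels come back as negative disparities:

import cv2
import numpy as np

def disparity_map(rect_left_gray, rect_right_gray):
    # Dense disparity between a rectified, distortion-corrected pair.
    sgbm = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=64,        # must be a multiple of 16
        blockSize=5,
        P1=8 * 5 * 5,             # smoothness penalty for small disparity jumps
        P2=32 * 5 * 5,            # smoothness penalty for large disparity jumps
        uniquenessRatio=10,       # rejects ambiguous matches (e.g. highlight areas)
    )
    disp = sgbm.compute(rect_left_gray, rect_right_gray)
    return disp.astype(np.float32) / 16.0    # SGBM returns fixed-point disparities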
Based on camera parameters, step 170 converts the disparity map into a cloud of points representing X, Y, Z coordinates relative to the focal point of the left camera.
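By way of example only, under the standard rectified stereo model this conversion can be sketched as follows (Python); the focal length in pixels, the baseline and the principal point are hypothetical calibration values:

import numpy as np

def disparity_to_cloud(disp, f_px, baseline_mm, cx, cy):
    # Triangulate a disparity map into X, Y, Z points in the left-camera frame.
    v, u = np.indices(disp.shape)          # pixel row (v) and column (u) grids
    valid = disp > 0                       # skip unmatched / occluded pixels
    z = f_px * baseline_mm / disp[valid]   # depth from the standard stereo model
    x = (u[valid] - cx) * z / f_px
    y = (v[valid] - cy) * z / f_px
    return np.column_stack([x, y, z])      # N x 3 cloud of points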
Since each cloud of points obtained from a specific image pair is defined relative to the left camera position, it is necessary to unify all point clouds under a single frame of reference. Such a frame of reference can be represented by the coordinate system of the left camera in the first image pair in the sequence.
Cloud Registration step 180 computes the transformation between each cloud of points and the frame of reference. The registration step is performed by matching a set of 3D points between consecutive image pairs, solving for the rigid motion (rotation and translation) transformation between pairs, and concatenating the obtained transformation matrix with matrices derived from previous image pairs, to align the current cloud of points with the frame of reference.
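By way of example only, the rigid-motion solve and the matrix concatenation can be sketched as follows (Python; the Kabsch/SVD solution is used for the rigid fit, and point correspondences are assumed to be supplied by the matching step):

import numpy as np

def rigid_transform(src, dst):
    # Best-fit rotation R and translation t such that dst ~= src @ R.T + t,
    # for N x 3 matched point sets (Kabsch algorithm via SVD).
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:    # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    M = np.eye(4)               # pack into a 4x4 transformation matrix
    M[:3, :3], M[:3, 3] = R, t
    return M

def register_to_reference(pairwise_matrices):
    # Concatenate pair-to-pair matrices so each cloud maps to the frame of
    # reference (the left camera of the first image pair in the sequence).
    to_ref, registered = np.eye(4), []
    for M in pairwise_matrices:
        to_ref = to_ref @ M
        registered.append(to_ref.copy())
    return registered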
Registering each cloud of points with the frame of reference and joining the transformed cloud with the previously registered clouds, yields a large cloud of points with significant overlap.
A cloud post-processing step 190 reduces the union cloud by identifying overlap and other types of redundancy (e.g. over-dense sampling relative to the local geometric variation). The reduced cloud of points is then converted into a mesh/polygonal representation that can be stored in a file format such as the Standard Tessellation Language (STL) format. The STL file describes a raw unstructured triangulated surface by the unit normal and vertices (ordered by the right-hand rule) of the triangles, using a three-dimensional Cartesian coordinate system.
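By way of example only, a minimal ASCII STL writer matching that description follows (Python); the surface is assumed to arrive as indexed triangles already ordered by the right-hand rule:

import numpy as np

def write_ascii_stl(path, vertices, triangles, name="tooth_surface"):
    # vertices: N x 3 array; triangles: M x 3 vertex-index triples.
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for i, j, k in triangles:
            a, b, c = vertices[i], vertices[j], vertices[k]
            n = np.cross(b - a, c - a)            # right-hand-rule normal
            n = n / (np.linalg.norm(n) or 1.0)    # unit normal (guard degenerate)
            f.write(f"  facet normal {n[0]} {n[1]} {n[2]}\n")
            f.write("    outer loop\n")
            for v in (a, b, c):
                f.write(f"      vertex {v[0]} {v[1]} {v[2]}\n")
            f.write("    endloop\n  endfacet\n")
        f.write(f"endsolid {name}\n")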
As used herein the term “about” refers to ± 10 %.
Additional objects, advantages, and novel features of the present invention will become apparent to one ordinarily skilled in the art upon examination of the following examples, which are not intended to be limiting.
EXAMPLES

Reference is now made to the following example, which, together with the above descriptions, illustrates the invention in a non-limiting fashion.
Scanning a model of a lower jaw and reproducing a 3D computer model therefrom

A prototype system constructed in accordance with the teachings of the present invention was used to scan a model of lower jaw teeth (Figure 8a). The hand-held scanner portion was positioned at tooth 1 (at the back of the jaw) and the mirror was

Claims (15)

WHAT IS CLAIMED IS:
1. A system for obtaining 3D information from an object comprising: (a) a front telecentric lens assembly capable of capturing light beams from the object, said light beams including two light channels each representing a unique view of the object; (b) a rear lens assembly for splitting said light beams into two (or more) discrete light channels; and (c) an image capturing device for simultaneously capturing said two or more discrete light channels and generating image information from each of said two or more discrete light channels thereby providing a pair of images each representing said unique view of at least a portion of the object.
2. The system of claim 1, wherein said image capturing device includes two or more spatially separated image sensors.
3. The system of claim 1, wherein said image capturing device includes a single image sensor having two or more spatially separated sensor regions.
4. The system of claim 1, wherein said front telecentric lens assembly includes at least one lens about 15-300 mm in diameter with a focal length of about 50-80 mm.
5. The system of claim 1, wherein said front telecentric lens assembly and said rear lens assembly are spaced apart by 60-120 mm.
6. The system of claim 1, wherein said rear lens assembly includes at least one lens about 5-20 mm in diameter with a focal length of about 15-30 mm.
7. The system of claim 2, wherein imaging planes of said two spatially separated image sensors are co-planar.
8. The system of claim 2, wherein imaging planes of said two spatially separated image sensors are angled with respect to each other at an angle of 60-90 degrees.
9. The system of claim 1, wherein said two or more discrete light channels represent two different angular views of the object.
10. The system of claim 1, further comprising a processing unit for processing said image information from each of said two or more discrete light channels and generating a pair of stereoscopic images.
11. The system of claim 10, wherein said processing unit is configured for processing a sequence of several pairs of stereoscopic images representing overlapping portions of the object to thereby generate a sequence of stereoscopic image pairs.
12. The system of claim 11, wherein said processing unit is configured for selecting specific pairs of stereoscopic images from said sequence of stereoscopic image pairs according to a predetermined image information overlap and a displacement between said specific pairs in said sequence.
13. The system of claim 11, wherein said specific pairs exhibit a maximum separation in said sequence with said image information overlap of at least 50%.
14. The system of claim 11, wherein said processing unit is configured for generating a cloud of 3D points from each specific pair of stereoscopic images.
15. The system of claim 14, wherein said processing unit is configured for matching said cloud of 3D points between two adjacent specific pairs in said sequence to thereby generate a transformation matrix including information on motion of specific points between said two adjacent specific pairs.
IL224345A (filed 2013-01-21, priority 2013-01-21): System and method for spatial channel separation

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
IL224345A | 2013-01-21 | 2013-01-21 | System and method for spatial channel separation

Publications (1)

Publication Number | Publication Date
IL224345A (en) | 2017-07-31

Family

ID=62454817

Family Applications (1)

Application Number | Priority Date | Filing Date | Title
IL224345A (en) | 2013-01-21 | 2013-01-21 | System and method for spatial channel separation

Country Status (1)

Country | Link
IL | IL224345A (en)


Legal Events

Date Code Title Description
FF Patent granted
KB Patent renewed
KB Patent renewed