GB2370443A - Method of prompting a user in recording a 3-D image
- Publication number
- GB2370443A (application GB0205005A)
- Authority
- GB
- United Kingdom
- Prior art keywords
- image
- camera
- subject
- recording
- user
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/296—Synchronisation thereof; Control thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/64—Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/207—Image signal generators using stereoscopic image cameras using a single 2D image sensor
- H04N13/221—Image signal generators using stereoscopic image cameras using a single 2D image sensor using the relative movement between cameras and objects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/189—Recording image signals; Reproducing recorded image signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/207—Image signal generators using stereoscopic image cameras using a single 2D image sensor
- H04N13/218—Image signal generators using stereoscopic image cameras using a single 2D image sensor using spatial multiplexing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/239—Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N2013/0074—Stereoscopic image analysis
- H04N2013/0081—Depth or disparity estimation from stereoscopic image signals
Abstract
A method used in recording a three-dimensional image comprises measuring a distance from a camera 104 to a subject 108, recording a first image from a first position 112, prompting the user to move to a second position 116, and recording a second image from that position. The prompts may be verbal or visual, e.g. arrows in the camera viewfinder or a sound or voice signal, directing the camera user to the optimum position for 3-D imaging (see Figure 4). A processor uses the first and second images, together with the changes in camera position and/or rotation between them, to generate a three-dimensional image.
Description
METHOD OF PROMPTING A USER IN RECORDING A 3-D VISUAL IMAGE
BACKGROUND OF THE INVENTION
(1) Field of the Invention
The present invention relates generally to generating three-dimensional images. More particularly, the present invention relates to prompting a user in recording a 3-D image.
(2) Related Art
Photographic and imaging systems today are primarily designed for recreating two-dimensional images. In a two-dimensional image, only one perspective is needed. This perspective may be generated by positioning a camera at a fixed position and recording the image on photographic film or on electronic sensors. Human vision, however, is stereoscopic: each eye receives its own image, and the brain combines the two to add a dimension of depth, creating a three-dimensional (3-D) image. In recent years, cameras and electronic sensors have been designed to take two images and recombine them to reproduce a three-dimensional image with a depth component.
Traditional three-dimensional imaging systems utilized two cameras, preferably in a fixed relationship to each other. Thus, when the two perspectives (one from each camera) were recombined, the information relating them was known because of the fixed relationship between the cameras. The problem with using two cameras is that it is more expensive than a single-camera arrangement: two cameras typically require two lenses, two camera bodies, and two sets of film.
Alternative systems for generating 3-D images have been implemented using two lenses in one camera body. These systems are still more expensive than standard two-dimensional cameras because multiple lens systems are needed to create multiple images. Each lens system generates an image corresponding to a different perspective view of the subject being photographed. Furthermore, placing two lens systems in a single camera body requires that the lens systems be placed in close proximity to each other. The close proximity of the two lens systems results in less depth perception than would be available if they could be placed further apart.
Alternate embodiments for generating a 3-D image are possible using mirrors and prisms. However, such systems are bulky and complicated.
Thus, it is desirable to design a system which can quickly and easily generate multiple images for combination into a single 3-D image. Such a system will be described in the following application.
BRIEF SUMMARY OF THE INVENTION
According to a first aspect of this invention there is provided a method as claimed in claim 1 herein.
According to a second aspect of this invention there is provided a system as claimed in claim 5 herein.
According to a third aspect of this invention there is provided a software program as claimed in claim 12 herein.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 illustrates one embodiment of the invention for generating a three-dimensional image.
Figure 2 is a flow diagram describing the process of generating two images.
Figure 3 is a flow diagram describing the process of combining two images to form a single three-dimensional image.
Figure 4 is a flow diagram illustrating a system to assist a user in improving the data gathering capacity of a camera.
DETAILED DESCRIPTION OF THE INVENTION
In the following description, a system for using a single camera with a single lens system to generate three-dimensional images will be described.
The camera will include a set of motion sensors, preferably micro-machined silicon ("MEMS") sensors, which detect linear and rotational acceleration or movement of the camera, and thereby allow displacement of the camera to be computed to determine the positions at which the camera is located. Alternatively, a global positioning system ("GPS") may be used to determine location. Other types of motion sensors which may be used are vibrating MEMS sensors or commercially available laser gyros. Using the position information and the at least two images taken by the single lens system, the camera or an external processor can recreate a three-dimensional image representing a subject.
In the accompanying description, certain details will be provided to facilitate understanding of the invention. For example, the specification will describe the invention using certain MEMS sensor types such as micro-machined accelerometers. However, it is recognized that other position sensors or motion detectors may be used; in particular, GPS systems and other types of MEMS may be appropriate. The actual sensor used will depend on its cost, whether it can provide data with sufficient accuracy, its power consumption, and its size. The included details are provided to facilitate understanding of the invention, and should not be interpreted to limit the scope of the invention.
An operation of the overall system is illustrated in Figure 1. In Figure 1, a camera 104 is used to generate a three-dimensional image of a subject 108. The camera is in an initial position 112 when the first image is taken. After the first image is taken, the camera is moved to a second position 116. The movement includes a lateral translation, illustrated by arrow 120, and may include a rotational motion, illustrated by arrow 124. In one embodiment, a motion sensor 128 within the camera detects the lateral translation and the rotation of the camera. In one embodiment, motion sensor 128 includes two MEMS sensors, one which detects lateral acceleration 120 and a second which detects rotation. In one alternative embodiment, GPS sensors may be used to determine the position of the camera.
In a preferred embodiment, MEMS sensor 128 is an inertial sensor. Such sensors are based on the comb drive actuator technology developed by Howe and described in "Laterally Driven Polysilicon Resonant Microstructures" by W. C. Tang, T. C. Nguyen and R. T. Howe, Proceedings IEEE Micro Electro Mechanical Systems Workshop, Salt Lake City, Utah, U.S.A., February 1989, pages 53-59. An example of an appropriate accelerometer is available from Analog Devices, which also produces integrated BiCMOS devices merged with a micro-machined sensor for determining device rotation. These sensors are being used in advanced automotive braking systems and are being commercialized by General Motors; they are described in "Overview of MEMS Activities in the US" by C. H. Mastrangelo of the Center for Integrated Sensors and Circuits, Department of Electrical Engineering, University of Michigan, Ann Arbor, Michigan 48109. The article by Mastrangelo also describes alternative embodiments of motion sensors, including optical actuators, which may be used to determine the motion of a camera. By integrating the acceleration of the camera, a velocity can be developed; a second integration of the velocity generates a displacement of the camera. This displacement information may be used to determine the second position 116 of the camera 104 when the second image is taken, with respect to the first position 112 and orientation of the camera 104.
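As an illustrative sketch of this double integration, assuming ideal, bias-free accelerometer samples at a fixed rate (which the patent does not guarantee):

```python
# Minimal sketch: double-integrate accelerometer samples to displacement.
# Assumes ideal, bias-free samples at a fixed rate (hypothetical values);
# real MEMS data would need bias and drift compensation.
def displacement_from_acceleration(accel_m_s2, dt_s):
    """Trapezoidal double integration: acceleration -> velocity -> position."""
    velocity, position = 0.0, 0.0
    prev_a, prev_v = accel_m_s2[0], 0.0
    for a in accel_m_s2[1:]:
        velocity += 0.5 * (prev_a + a) * dt_s         # integrate acceleration
        position += 0.5 * (prev_v + velocity) * dt_s  # integrate velocity
        prev_a, prev_v = a, velocity
    return position

# Example: 1 m/s^2 constant lateral acceleration for 1 s -> ~0.5 m moved.
samples = [1.0] * 101  # 100 intervals at 10 ms each
print(displacement_from_acceleration(samples, 0.01))
```

In practice, accelerometer bias makes raw double integration drift quickly, which is one reason the specification also contemplates GPS positioning as an alternative.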
The relative orientations and positions of the camera, including both the first position 112 and the second position 116, are recorded either in a memory device 132 in the camera or, in an alternative embodiment, in an external memory coupled to the camera. Some motion sensors, such as sensors which measure acceleration, may not directly produce position data. In these embodiments, data describing the motion of the camera, such as acceleration data, may be recorded in memory; at a later time, a processor uses the motion data to compute position data. The respective motion or position and orientation data are organized to allow correlation of each recorded image with a corresponding position, such as first position 112 or second position 116.
Each image may be recorded on photographic film or, more preferably, using electronic sensors 134. In one embodiment, the electronic sensors are Complementary Metal Oxide Semiconductor (CMOS) sensors. In alternate embodiments, photo-sensing charge-coupled device ("CCD") arrays or photo-diodes may be used. The electronic image output by the electronic sensors is stored in a second memory device 136. If the image was recorded on photographic film, it is converted to an electronic form for further processing. The conversion may be accomplished using a scanner or other methods of converting chemical or light data to electronic data. Such scanners are commercially available from several vendors, including Hewlett Packard of Palo Alto, California. The digital image is stored in memory device 136.
In order to create a single three-dimensional image from the two images of one subject, a processing unit 140 retrieves the images and the corresponding position and orientation information and recombines them into a single three-dimensional image. The processing unit 140 may be implemented in a graphics processor card. In another embodiment, the processor is a general microprocessor executing a program to handle graphics processing functions. Various methods of processing two images to generate one 3-D image are described in Masatoshi Okutomi and Takeo Kanade, "A Multiple-Baseline Stereo," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 15, No. 4, April 1993.
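For orientation, a minimal sketch of the textbook two-view relation that multiple-baseline stereo generalizes: depth equals focal length times baseline divided by disparity. The focal length, baseline and disparity values below are illustrative assumptions, not parameters from the patent.

```python
# Standard rectified two-view stereo relation: z = f * B / d.
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point from its disparity between two views."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example: 800 px focal length, 10 cm baseline, 16 px disparity -> 5 m.
print(depth_from_disparity(800.0, 0.10, 16.0))
```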
Figure 2 illustrates a flow chart describing the steps used to create a 3-D image with the camera of Figure 1. In step 204, before the first image is taken, the user resets a camera position indicator. The resetting of the camera position indicator preferably clears the memory 132 storing the output of the motion sensor, such that the first image recorded in a sequence is preferably at a zero-point reference frame. The camera records an image of a subject at the zero-point reference frame in step 208. At approximately the same time as the recording of the first image, a corresponding position and orientation of the camera is recorded in memory 132. In the camera of Figure 1, a CCD array generates the image stored in the second memory 136 of Figure 1.
After the camera has recorded the first image and the corresponding position and orientation information, the camera is relocated in step 212. The relocation may involve both a lateral translation 120 and a rotational movement 124. Either a person or a motor-driven apparatus may move the camera. In one embodiment, the camera is moved along a track, minimizing rotational movements and allowing embodiments of the camera which do not measure camera orientation. During the relocation, a sensor, preferably a MEMS sensor, records the motion and rotation of the device in step 216. In one embodiment of the invention, the MEMS sensor records acceleration and integrates the acceleration to generate a displacement. The recorded acceleration, rotation or displacement information is stored in a memory device.
When the camera has reached a second position, the camera records a second image of the subject. As the second image is recorded, the camera uses information from the motion sensor and records a camera position and orientation corresponding to the second image. The position and orientation information is stored in a position and orientation memory device 132. The second image and the first image must have a sufficient amount of subject matter overlap so that the processor will be able to reconstruct the overlapping regions and generate a three dimensional image.
The prior sequence of steps 204 through 220 describes the system as used in a still camera. It is contemplated that a MEMS or motion sensor may be installed in a video camera and many images taken as the camera moves. Each image corresponds to a set of position data generated from information recorded by the motion sensors. These images may then be reconstructed with neighboring images to allow the recreation of a moving 3-D graphic. Such a moving image is reconstructed by repeated iterations of steps 204 through 220 and a series of reconstruction steps executed by the processor. In step 224, the position and orientation information generated by the motion sensor, along with the corresponding recorded images, are transferred to a processor.
Figure 3 is a flow diagram describing the steps taken by the processor or processing device 140 to reconstruct a three dimensional image from the two dimensional images and corresponding position and orientation data.
In step 304, the processor receives the position and orientation information and the corresponding image data from the camera. The processor then determines corresponding points in the first and second image in step 308.
Corresponding points are points in different images or perspectives which correspond to the same point on a subject. Thus, a corresponding point is a point or pixel in the first image which corresponds to a point on the subject, paired with a second point or pixel in the second image which corresponds to the same point on the subject. For example, the tip of a person's nose may be a corresponding point in both a first and a second image. In one embodiment of the invention, pattern recognition software is used to determine corresponding points. A second, simpler method of determining corresponding points involves an end user who selects or "clicks" on a point in the first image using a mouse or other pointing device, and then selects or "clicks" on the corresponding point in the second image. In step 312, these selected corresponding points and their x, y coordinates are recorded in a memory device. The record is typically a two-dimensional record because an x and y coordinate must be recorded for each point.
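A minimal sketch of the automatic approach, assuming OpenCV's ORB features and brute-force matching as the "pattern recognition software"; the patent does not prescribe any particular algorithm, and the filenames are hypothetical:

```python
# Automatic corresponding-point detection via feature matching.
# ORB + brute-force matching is one possible implementation choice.
import cv2

img1 = cv2.imread("first_image.png", cv2.IMREAD_GRAYSCALE)   # hypothetical files
img2 = cv2.imread("second_image.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Cross-checked Hamming matching keeps only mutually best matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# Each match yields an (x, y) pair in each image: the corresponding
# points whose coordinates are recorded in step 312.
pairs = [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in matches[:50]]
```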
In step 316, the distance from the camera to an identified point on the subject is determined. The distance may be determined because the first camera position, the second camera position, and the identified image point form a triangle. One side of the triangle represents the distance the camera moved between the first and second camera positions, which is known; thus the dimensions of one side of the triangle are known. In addition, the micro sensor sensed the rotation of the camera as the camera moved from the first position to the second position, so the angular displacement of the camera with respect to the subject point is also known. Using trigonometry, the distance between each camera position and the identified image point can be determined to generate a z dimension.
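A minimal sketch of this triangulation, assuming the two viewing angles toward the subject point have been recovered from the orientation data (the patent leaves the trigonometric details open):

```python
# Range to a subject point from two camera positions, by the law of sines.
import math

def range_to_point(baseline_m: float, angle1_rad: float, angle2_rad: float) -> float:
    """Distance from the first camera position to the subject point.

    baseline_m -- distance the camera moved between the two positions
    angle1_rad -- angle at position 1 between the baseline and the ray to the point
    angle2_rad -- the same angle measured at position 2
    """
    # The side opposite angle2 is the range from position 1.
    return baseline_m * math.sin(angle2_rad) / math.sin(angle1_rad + angle2_rad)

# Example: 0.5 m baseline, 80 and 85 degree viewing angles -> ~1.9 m range.
print(range_to_point(0.5, math.radians(80), math.radians(85)))
```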
In step 320, the processor generates the third, z dimension. The z dimension is related to the coordinate frame chosen. Once an x, y Cartesian coordinate system is chosen, the z dimension is specified and is typically normal to the plane of the x, y coordinate system. The z coordinate may be determined using the distance from the image point to the camera, the camera orientation, and the coordinate system chosen. The new x, y, z coordinates for each pair of corresponding points are stored in a field in the memory device associated with the corresponding points.
Steps 316 and 320 are repeated for each pair of corresponding points in the first and second image. After each pair is processed, the system determines whether it was the last set of corresponding points in the image. If there are more corresponding points to process, the system returns to step 316. When the system has completed a full set of x, y, z coordinates for every corresponding point in an image, the processor may build a mesh of triangles which do not overlap and which connect the points of all 3-D records into a two-dimensional surface in 3-D space. The construction of the triangle mesh may be done using a structured approach of laying out points on a regular grid. The Delaunay algorithm may also be used to construct the mesh; it is well known in the art, and thus will not be described in detail in this application.
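A minimal sketch of the Delaunay mesh construction, using scipy as one possible implementation (a choice the patent does not specify); the point records are illustrative:

```python
# Build a non-overlapping triangle mesh over the x, y, z records.
# Triangulation runs on the 2-D (x, y) coordinates; the z values
# then lift the mesh into 3-D space.
import numpy as np
from scipy.spatial import Delaunay

points_3d = np.array([  # illustrative x, y, z records from step 320
    [0.0, 0.0, 1.2], [1.0, 0.0, 1.1], [0.0, 1.0, 1.3],
    [1.0, 1.0, 1.0], [0.5, 0.5, 1.5],
])

tri = Delaunay(points_3d[:, :2])  # non-overlapping triangles in the x, y plane
mesh_triangles = tri.simplices    # indices into points_3d, three per triangle
print(mesh_triangles)
```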
In step 332, the processor constructs a database from the x, y, z records and the triangle mesh to describe the 3-D image. The computer may further add descriptions of the surface so that a program can simulate how the subject looks from different angles. In one embodiment of the invention, the processor outputs a description such as a virtual world ("WRL") file, which is a common format for 3-D data on the Internet. 3-D files for use in graphics viewing programs by third-party vendors may also be used to display the images. A typical 3-D viewing program is "trueSpace" by the Caligari Corporation.
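A minimal sketch of exporting such a database as a VRML 2.0 ("WRL") file; the field names follow the VRML97 standard, while the sample points and triangle are illustrative:

```python
# Write points and a triangle mesh as a minimal VRML 2.0 (.wrl) file.
def write_wrl(path, points_3d, triangles):
    with open(path, "w") as f:
        f.write("#VRML V2.0 utf8\n")
        f.write("Shape {\n  geometry IndexedFaceSet {\n")
        f.write("    coord Coordinate { point [\n")
        for x, y, z in points_3d:
            f.write(f"      {x} {y} {z},\n")
        f.write("    ] }\n    coordIndex [\n")
        for a, b, c in triangles:  # -1 terminates each face, per VRML97
            f.write(f"      {a}, {b}, {c}, -1,\n")
        f.write("    ]\n  }\n}\n")

write_wrl("subject.wrl",
          [(0.0, 0.0, 1.2), (1.0, 0.0, 1.1), (0.0, 1.0, 1.3)],
          [(0, 1, 2)])
```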
The process described in the flowchart of Figure 3 allows a three-dimensional image to be generated from two sets of two-dimensional data. The two-dimensional data was generated by a camera which took a first image at a first position. Motion sensors detected the movement and rotation or position of the camera as it was moved to a second position, where a second image was taken. By processing the image and position information, a processor generated a 3-D image.
The described system reduces costs because less equipment is needed. Specifically, only one lens system is needed. A single lens system makes the system less bulky than prior art 3-D imaging systems. Finally, the system described is suitable for video cameras, in which multiple images are taken. These multiple images may be combined to generate a 3-D image database.
The described system may be further refined to improve the ease of use and the data gathering capacity of the camera 104. Figure 4 is a flow chart describing a method to assist a user in improving the data gathering capacity of camera 104. In step 404, the camera takes a first image of a subject from a first location. The camera also measures and records the distance from the subject to the camera "viewfinder" in step 408. The camera may then proceed to compute optimum camera positions from which to take subsequent images. It is preferable that subsequent images for generation of a 3-D image be taken from approximately the same distance around a subject. Thus, the camera would preferably maintain an approximately constant distance from the subject, moving in a circle around it.
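A minimal sketch of computing such positions on a circle around the subject; the step angle between views is an assumed parameter, since the patent only requires an approximately constant subject distance:

```python
# Next "optimum" camera position on a circle around the subject.
import math

def next_camera_position(subject_xy, radius_m, current_angle_rad, step_rad):
    """Return the next (x, y) position on the circle around the subject."""
    a = current_angle_rad + step_rad
    return (subject_xy[0] + radius_m * math.cos(a),
            subject_xy[1] + radius_m * math.sin(a))

# Example: subject at the origin, 2 m range, a view every 15 degrees.
print(next_camera_position((0.0, 0.0), 2.0, 0.0, math.radians(15)))
```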
A number of techniques may be used to determine the distance from the camera to the subject, including using an auto-ranging device. Such an auto-ranging device may use image sensor data, infrared signals or sound signals to determine the distance from the camera to the subject. In an auto-ranging sound signal system, a transmitter in the camera emits a sound pulse. A receiver in the camera detects the reflection of the sound pulse from the subject. The time difference from the emitting of the pulse to the receipt of the sound pulse reflection is used to determine the distance from the camera to the subject.
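A minimal sketch of the sound-pulse range computation, assuming a speed of sound of 343 m/s (dry air at roughly 20 degrees C):

```python
# Camera-to-subject distance from a sound pulse's round-trip time:
# the pulse travels to the subject and back, so distance is half
# the round-trip time multiplied by the speed of sound.
SPEED_OF_SOUND_M_S = 343.0

def range_from_echo(round_trip_s: float) -> float:
    """Distance to the subject from the echo's round-trip time."""
    return SPEED_OF_SOUND_M_S * round_trip_s / 2.0

print(range_from_echo(0.0292))  # ~5 m subject distance
```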
In step 412, the camera begins an image capture sequence.
Orientation measurements are taken to determine the orientation of the camera. Preferably, the orientation of the camera is maintained such that the lens always faces the object.
The camera prompts the user to move in step 416. In one embodiment, the prompts are given in the form of arrows displayed in the camera viewfinder. The arrows prompt the user to move the camera in a particular direction, or rotate it to a particular orientation. The arrows may be displayed using a liquid crystal display (LCD). An auto-ranging device may be used to provide signals to a processor, which controls the display to output signals prompting the user to maintain an approximately constant distance from the subject being imaged. In an alternate embodiment, a sound or voice signal may be used to tell the user to move the camera in a particular direction or rotate it to a particular orientation.
In step 420, the camera determines whether the camera is within a tolerance distance from an optimum position. If the camera is not within the tolerance distance to the optimum position, the camera returns to step 416 prompting the user to further adjust the camera position. If in step 420, it is determined that the camera is within the tolerance distance, the camera records a second or subsequent image of the subject in step 424.
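A minimal sketch of this tolerance check; the 5 cm tolerance and prompt strings are assumptions for illustration, and a real camera would drive the LCD arrows or voice prompts described above:

```python
# Tolerance check of steps 416-420: prompt until the measured range
# is close enough to the target range, then record the image.
TOLERANCE_M = 0.05

def prompt_user(target_range_m, measured_range_m):
    """Return a movement prompt, or None once within tolerance (step 424)."""
    error = measured_range_m - target_range_m
    if abs(error) <= TOLERANCE_M:
        return None  # within tolerance: record the second image
    return "move closer" if error > 0 else "move back"

print(prompt_user(2.0, 2.12))  # too far  -> "move closer"
print(prompt_user(2.0, 1.83))  # too near -> "move back"
print(prompt_user(2.0, 2.03))  # in range -> None
```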
In step 428, the camera determines whether all images necessary for a database have been recorded. The number of images needed is determined by the number of perspectives of the subject desired and the detail desired in the final 3-D image. If additional images are needed, the camera returns to step 416, prompting the user to move to a subsequent position for the recording of a subsequent image. When it is determined in step 428 that a sufficient number of images have been recorded, the camera is done and a 3-D image may be reconstructed.
While certain exemplary embodiments have been described in detail and shown in the accompanying drawings, it is to be understood that such embodiments are merely illustrative of and not restrictive on the broad invention, and that this invention is not to be limited to the specific arrangements and construction shown and described, since various other modifications may occur to those with ordinary skill in the art.
Claims (12)
- CLAIMS: 1. A method of prompting a user in recording a three-dimensional image comprising the steps of: measuring a distance from a camera to a subject; recording a first image from a first position; prompting a user to move to a second position, said second position suitable for taking a second image for use in reconstructing a three-dimensional image of the subject; and recording the second image.
- 2. The method of claim 1 wherein the second position is approximately equidistant from the subject as the first position.
- 3. The method of claim 1 or 2 further comprising the step of: reconstructing a three-dimensional image using the first image and the second image.
- 4. The method of any preceding claim wherein the recording of the first image and the recording of the second image occurs at two different points in time.
- 5. A system of prompting a user in recording a three-dimensional image comprising: means for measuring a distance to a subject; means for recording a first image from a first position; means for prompting a user to move to a second position suitable for taking a second image for use in reconstructing a three-dimensional image of the subject; and means for recording the second image.
- 6. The system of claim 5 wherein movement of the system from the first position to the second position is measured by at least one of rotation and two-dimensional (x, y) displacement.
- 7. The system of claim 6, wherein the measured movement further comprises acceleration.
- 8. The system of any of claims 5-7 wherein the second position is approximately equidistant from the subject as the first position.
- 9. The system of any of claims 5-8 further comprising: means for reconstructing a three-dimensional image using the first image and the second image.
- 10. The system of any of claims 5-9 being a still camera.
- 11. The system of any of claims 5-10 being a video camera.
- 12. A software program comprising software program code means adapted to perform all the steps of claim 1 when the software program is being executed by a processor associated with a camera.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US09/003,389 US6094215A (en) | 1998-01-06 | 1998-01-06 | Method of determining relative camera orientation position to create 3-D visual images |
GB0014678A GB2348561B (en) | 1998-01-06 | 1998-12-29 | Method of determining relative camera orientation position to create 3-D visual images |
Publications (3)
Publication Number | Publication Date |
---|---|
GB0205005D0 (en) | 2002-04-17 |
GB2370443A (en) | 2002-06-26 |
GB2370443B (en) | 2002-08-07 |
Family
ID=26244495
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB0205005A Expired - Fee Related GB2370443B (en) | 1998-01-06 | 1998-12-29 | Method of prompting a user in recording a 3-D visual image |
Country Status (1)
Country | Link |
---|---|
GB (1) | GB2370443B (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2005025239A1 (en) * | 2003-09-04 | 2005-03-17 | Sharp Kabushiki Kaisha | Method of and apparatus for selecting a stereoscopic pair of images |
US8026950B2 (en) | 2003-09-04 | 2011-09-27 | Sharp Kabushiki Kaisha | Method of and apparatus for selecting a stereoscopic pair of images |
EP2302941A3 (en) * | 2009-09-28 | 2013-09-11 | Samsung Electronics Co., Ltd. | System and method for creating 3D video |
US9083956B2 (en) | 2009-09-28 | 2015-07-14 | Samsung Electronics Co., Ltd. | System and method for creating 3D video |
EP2884746A1 (en) * | 2013-12-16 | 2015-06-17 | Robert Bosch Gmbh | Monitoring camera device with depth information determination |
US9967525B2 (en) | 2013-12-16 | 2018-05-08 | Robert Bosch Gmbh | Monitoring camera apparatus with depth information determination |
Also Published As
Publication number | Publication date |
---|---|
GB0205005D0 (en) | 2002-04-17 |
GB2370443B (en) | 2002-08-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6094215A (en) | Method of determining relative camera orientation position to create 3-D visual images | |
US6304284B1 (en) | Method of and apparatus for creating panoramic or surround images using a motion sensor equipped camera | |
CN111462213B (en) | Equipment and method for acquiring 3D coordinates and dimensions of object in motion process | |
JP6483075B2 (en) | Method of 3D panoramic mosaicing of scenes | |
JP4010753B2 (en) | Shape measuring system, imaging device, shape measuring method, and recording medium | |
JP6974873B2 (en) | Devices and methods for retrieving depth information from the scene | |
CN111442721B (en) | Calibration equipment and method based on multi-laser ranging and angle measurement | |
US6839081B1 (en) | Virtual image sensing and generating method and apparatus | |
CN111429523B (en) | Remote calibration method in 3D modeling | |
CN110419208B (en) | Imaging system, imaging control method, image processing apparatus, and computer readable medium | |
WO2014100950A1 (en) | Three-dimensional imaging system and handheld scanning device for three-dimensional imaging | |
CN111445529B (en) | Calibration equipment and method based on multi-laser ranging | |
JP2019118090A (en) | Imaging apparatus and control method of imaging apparatus | |
GB2370443A (en) | Method of prompting a user in recording a 3-D image. | |
JPH09138850A (en) | Surface shape reconstitution device | |
El-Hakim et al. | Two 3D Sensors for Environment Modeling and Virtual Reality: Calibration and Multi-View Registration | |
JP3655065B2 (en) | Position / attitude detection device, position / attitude detection method, three-dimensional shape restoration device, and three-dimensional shape restoration method | |
CN114549750A (en) | Multi-modal scene information acquisition and reconstruction method and system | |
JP3060218B1 (en) | Photogrammetry method and apparatus | |
JPH11120361A (en) | Three-dimensional shape restoring device and restoring method | |
Klarquist et al. | The Texas active vision testbed | |
Singamsetty et al. | An Integrated Geospatial Data Acquisition System for Reconstructing 3D Environments | |
JP2005078554A (en) | Restoration method and device for fish-eye camera motion and three-dimensional information, and recording medium with program for implementing it recorded | |
Nedevschi et al. | 3D ENVIRONMENT RECONSTRUCTION USING MULTIPLE MOVING STEREOVISION SENSORS |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
2017-12-29 | PCNP | Patent ceased through non-payment of renewal fee | Effective date: 20171229 |