WO2017077277A1 - Systems and methods for imaging three-dimensional objects - Google Patents

Systems and methods for imaging three-dimensional objects

Info

Publication number
WO2017077277A1
WO2017077277A1 (application PCT/GB2016/053368)
Authority
WO
WIPO (PCT)
Prior art keywords
images
captured
features
feature selection
candidate
Prior art date
Application number
PCT/GB2016/053368
Other languages
English (en)
Inventor
Leonardo Rubio NAVARRO
Original Assignee
Fuel 3D Technologies Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fuel 3D Technologies Limited filed Critical Fuel 3D Technologies Limited
Priority to EP16788779.3A (published as EP3371780A1)
Publication of WO2017077277A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T7/586 Depth or shape recovery from multiple images from multiple light sources, e.g. photometric stereo
    • G06T7/593 Depth or shape recovery from multiple images from stereo images
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/08 Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation

Definitions

  • the present invention relates to an imaging system for generating three-dimensional (3D) images of a 3D object, and a method performed by the imaging system.
  • the object and the imaging system may move relative to each other during a period in which the imaging system is capturing multiple images of the object.
  • the 3D surface is illuminated by light (or other electromagnetic radiation), and the two-dimensional images are created using the light reflected from it.
  • Most real objects exhibit two forms of reflectance: specular reflection (particularly exhibited by glass or polished metal) in which, if incident light (visible light or other electromagnetic radiation) strikes the surface of the object in a single direction, the reflected radiation propagates in a very narrow range of angles; and Lambertian reflection (exhibited by diffuse surfaces, such as matte white paint) in which the reflected radiation is isotropic with an intensity according to Lambert's cosine law (an intensity directly proportional to the cosine of the angle between the direction of the incident light and the surface normal).
  • Most real objects have some mixture of Lambertian and specular reflective properties.
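For reference, the Lambertian model referred to above can be written compactly (a standard statement of Lambert's cosine law; the notation is chosen here, not taken from the patent): with unit surface normal n, unit direction l from the surface point towards the light, incident intensity I_0 and diffuse albedo rho,

```latex
I = \rho \, I_0 \cos\theta = \rho \, I_0 \, (\mathbf{n} \cdot \mathbf{l}), \qquad \mathbf{n} \cdot \mathbf{l} \ge 0
```

The photometric processing discussed below exploits exactly this relation: with three or more known lighting directions, the per-pixel normal can be recovered.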
  • WO 2009/122200 "3D Imaging System” describes a system in which, in preferred embodiments, the object is successively illuminated by at least three directional light sources, and multiple cameras at spatially separated spatial positions capture images of the object.
  • a localization template, fixed to the object, is provided in the optical fields of all the light sensors to allow the images to be registered with each other in a frame of reference in which the object is unmoving.
  • the object will have a number of "landmarks" which, when imaged, produce features which can be easily recognized in each of the images.
  • for two of the images (a "stereo pair" of images), the system determines the corresponding positions in the stereo pair of the corresponding features.
  • an initial 3D model of the object is created stereoscopically (i.e. by optical triangulation).
  • Photometric data is generated from images captured at different times when successive ones of the directional light sources are activated. If the object is moving relative to the cameras during this period, the images are registered using the localization template (i.e. in the frame of reference in which the object is unmoving).
  • the photometric data makes it possible to obtain an estimate of the normal direction to the surface of the object with a resolution comparable to individual pixels of the image.
  • the normal directions are then used to refine the initial model of the 3D object.
  • the user does not know whether the errors caused by movement of the camera were too large to permit high-quality photometric processing until the photometric processing has been completed. Since the processing is computationally intense, this can lead to a significant delay before the user is warned that the process must be performed again.
  • the problem could be solved by arranging for the images to be captured more rapidly, so that there is less time for relative motion of the imaging system and object between the images being captured.
  • an imaging rate of 60 images per second is not sufficient to solve the problem, and capturing images more quickly than this without dramatically increasing the cost of the imaging system is a significant engineering challenge.
  • the present invention aims to provide new and useful methods and systems for obtaining three- dimensional (3D) models of a 3D object, and optionally displaying images of the models.
  • the invention proposes that, in a 3D imaging system in which an object is illuminated (preferably successively) from at least three directions (by energy generated by at least one energy source) and at least three respective images of the object are captured, corresponding features in different ones of the images are identified, and the positions of the features in the images are used to estimate motion of the object relative to the energy sensors.
  • the estimated motion is used to register the images in a common coordinate system in which the object is stationary (i.e. the respective positions and directions of the respective viewpoints from which the images were captured are found in the common reference frame), and thereby correct for the relative motion of the object and imaging system between different times at which the images were captured.
  • features are selected in one or more of the images (the "feature selection image(s)") which are likely to be associated with landmarks on the imaged object, rather than the background. Only these features ("reference features"), and the corresponding features in others of the images, are used in the algorithm for determining the motion between the images.
  • a first way of selecting the reference features is based on knowledge of the scene. For example, if the background has a predetermined color or pattern, areas of the images which show the background may be identified using the color or pattern, and the offset calculation would not use features in those areas.
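A minimal sketch of this first selection strategy, assuming a Python/OpenCV implementation and a hypothetical known HSV colour range for the background (the patent does not prescribe a library or colour space):

```python
import cv2
import numpy as np

def reject_background_features(image_bgr, keypoints,
                               bg_hsv_lo=(35, 40, 40), bg_hsv_hi=(85, 255, 255)):
    """Keep only keypoints (cv2.KeyPoint objects) that do not lie on pixels
    matching the assumed background colour range."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    bg_mask = cv2.inRange(hsv, np.array(bg_hsv_lo), np.array(bg_hsv_hi))
    # Dilate so features on the fringe of a background region are also rejected.
    bg_mask = cv2.dilate(bg_mask, np.ones((5, 5), np.uint8))
    return [kp for kp in keypoints
            if bg_mask[int(kp.pt[1]), int(kp.pt[0])] == 0]
```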
  • a second way of selecting the reference features is based on the intensities in the images. This is motivated by the observation that objects distant from the energy source(s) and the energy sensors are more likely to appear dark. First, because of the "near/far" effect, surfaces near an energy source receive (and therefore reflect) more energy than distant ones. Secondly, the user of the system will often have been careful to ensure that the object is illuminated by the directional energy sources. Thus, light generated by the directional energy source(s) (which may be in flashes) typically dominates ambient light falling onto the background. Furthermore, the user may be instructed to ensure that there are no bright light sources in the background.
  • the reference features may be selected as features which have an intensity above a threshold.
  • the reference features may be selected by identifying areas of the images (e.g. areas which are at least 5x5 pixels in size, or at least 10x10 pixels in size, or at least 20x20 pixels in size) with a relatively high average intensity, and selecting features which are located within those areas (that is, not considering features in areas identified as having a low average intensity).
  • the "average” may be the mean intensity, the median intensity, or any other average value.
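As an illustration of this second strategy (a sketch only; the block size, threshold and choice of mean versus median are illustrative assumptions within the ranges given above):

```python
import numpy as np

def select_bright_area_features(gray, features, block=20, thresh=60, use_median=False):
    """gray: HxW intensity image; features: iterable of (x, y) positions.
    Keep only features lying in blocks whose average intensity is high."""
    h, w = gray.shape
    keep = []
    for (x, y) in features:
        x0, y0 = (int(x) // block) * block, (int(y) // block) * block
        patch = gray[y0:min(y0 + block, h), x0:min(x0 + block, w)]
        avg = np.median(patch) if use_median else patch.mean()
        if avg > thresh:          # bright area: likely the illuminated object
            keep.append((x, y))
    return keep
```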
  • a third way of selecting reference features is by estimating the distance of the corresponding landmarks from the imaging system, and selecting from those landmarks ones within a predetermined distance range from the imaging system, e.g. less than a certain distance from a certain point on the imaging system.
  • the corresponding features are then used as the reference features.
  • Distance may be calculated in several ways. One way would be using a depth camera, e.g. one using sheet-of-light triangulation, structured light (that is, light having a specially designed light pattern), time-of-flight or interferometry.
  • Another way of finding the distances of landmarks is stereoscopically, using a stereo pair of images captured at the same time by different respective energy sensors from different respective viewpoints having a known positional relationship. Since the stereo pair was captured at the same time, any motion of the imaging system relative to the object affects them both equally. An approximate distance can be obtained for any landmarks which cause features in both images of the stereo pair. The distance can be used to select corresponding reference features. Corresponding features can then be identified in other of the images captured by the imaging system at other times, to register those images with the stereo pair of images.
  • a computational algorithm is used to estimate the motion relative to the object of the energy sensors which captured those images.
  • the algorithm may be a known homography algorithm.
  • the algorithm may incorporate any prior knowledge of motion between the object and the imaging system. For example, if it is known that the object is moving past the imaging system on a conveyor belt at a certain speed, that information may be used to give an initial estimate of the relative motion between times at which two images were captured.
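The following sketch shows how such an estimation might look with OpenCV (an assumed implementation; the patent does not name a specific homography routine). A prior motion estimate, if available, is applied before the robust fit and composed back afterwards:

```python
import cv2
import numpy as np

def estimate_motion(pts_other, pts_ref, prior_H=None):
    """pts_other/pts_ref: matched reference-feature coordinates (Nx2) in two
    images; prior_H: optional 3x3 homography encoding prior knowledge of the
    motion (e.g. a known conveyor-belt displacement)."""
    src = np.asarray(pts_other, np.float64).reshape(-1, 1, 2)
    dst = np.asarray(pts_ref, np.float64).reshape(-1, 1, 2)
    if prior_H is not None:
        src = cv2.perspectiveTransform(src, prior_H)       # apply the prior first
    H_res, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    H = H_res if prior_H is None else H_res @ prior_H      # total other -> ref mapping
    return H, inliers
```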
  • the object is preferably illuminated successively in individual ones of the at least three directions. If this is done, the energy sources may emit light of the same frequency spectrum (e.g. if the energy is visible light, the directional light sources may each emit white light, and the captured images may be color images). However, in principle, the object could alternatively be illuminated in at least three directions by energy sources which emit energy with different respective frequency spectra (e.g. the directional light sources may respectively emit red, green and blue light).
  • the directional energy sources could be activated simultaneously, if the energy sensors are able to distinguish the energy spectra.
  • the energy sensors might be adapted to record received red, green and blue light separately. That is, the red, green and blue light channels of the captured images would be captured simultaneously, and would respectively constitute the images in which the object is illuminated in a single direction.
  • this second possibility is not preferred, because coloration of the object may lead to incorrect photometric imaging.
  • the algorithm may include calculating a quality control index, determining whether the quality control index is above or below a threshold, and if the quality control index is below (or in other embodiments, above) the threshold issuing a warning to a user of the imaging system.
  • the quality control index may simply be a measure of the offset between two or more of the images.
  • the threshold may be set to warn the user when the offset is sufficiently great that the 3D imaging process may be unreliable. In this way, an embodiment of the invention may be able to issue a warning to the user before the computationally-complex formation of the 3D model is carried out. In this case, the imaging system may not form the 3D model.
  • the algorithm may assume that the relative motion of the object and energy sensor was uniform over the period in which the at least three images were captured, and use this assumption to improve the estimation of the respective viewpoints from which the images were captured. For example, if one of the three images is darker than the others, such that the landmarks cannot be identified in that image, the algorithm may use a relative motion of the object and energy sensor calculated from the other two images to estimate the viewpoint from which the dark image was captured. This possibility is particularly useful if the three images include an image which is darker because it was captured at a time when none of the energy sources was illuminating the object (e.g. because it was desired to measure how much ambient light the object reflects).
  • the 3D model of the object may be reconstructed from some or all of the images, such as using the methods explained in WO 2009/122200.
  • an initial model of the 3D object may be formed stereoscopically from two or more of the images ("stereo pairs") which were captured, preferably simultaneously, by energy sensors at different spatial locations, and this initial model may be refined using photometric data from at least three of the images which were captured (preferably successively, with the object illuminated from different respective directions). One of the latter images may be one of the stereo pair of images.
  • if the stereo pair of images is captured simultaneously by energy sensors with a fixed, known geometrical relationship, then those images may be registered with each other by knowledge of the geometrical relationship, rather than using the features according to the present inventive concept.
  • the present inventive concept is used to register ones of the images which were not captured at the same time into a common coordinate system in which the object is stationary.
  • the present inventive concept may be used, for each of the energy sensors, to mutually register the set of images captured at different times by that energy sensor.
  • the inventive concept may also be used to register a set of images taken by one of the cameras with the respective set(s) of images taken by the other camera(s).
  • where each set of images includes a respective image taken simultaneously with an image of another set, each set of images may be registered with the other set(s) of images using the known geometrical relationship between the respective viewpoints of the simultaneously taken pair of images.
  • the energy used is electromagnetic radiation, i.e. light.
  • the term "light” is used in this document to include electromagnetic radiation which is not in the visible spectrum.
  • Various forms of directional energy source may be used in embodiments of the invention: for example, a standard photographic flash, a high-brightness LED cluster, a Xenon flash bulb, or a 'ring flash' of small diameter (if the diameter is too large, it will not be a directional light source, though the source may still be useful for the stereoscopy). It will be appreciated that the energy need not be in the visible light spectrum.
  • One or more of the energy sources may be configured to generate light in the infrared (IR) spectrum (wavelengths from 700 nm to 1 mm) or part of the near infrared spectrum (wavelengths from 700 nm to 1100 nm).
  • the energy may be polarized.
  • the energy sensors may be two or more standard digital cameras, or video cameras, or CMOS sensors and lenses appropriately mounted. In the case of other types of directional energy, sensors appropriate for the directional energy used are adopted. A discrete energy sensor may be placed at each viewpoint, or in another alternative a single sensor may be located behind a split lens or in combination with a mirror arrangement.
  • the energy sources and viewpoints preferably have a known positional relationship, which is typically fixed.
  • the energy sensor(s) and energy sources may be incorporated in a portable apparatus, such as a hand-held instrument.
  • the energy sensor(s) and energy sources may be incorporated in an apparatus which is mounted in a building.
  • although at least three illumination directions are required for photometric imaging, the number of illumination directions may be higher than this.
  • the timing may be controlled by a processor, such as the one which calculates the relative motion of the object and energy sensor(s).
  • the energy to illuminate the object could be provided by a single energy source which moves between successive positions in which it illuminates the object in corresponding ones of the directions.
  • At least three energy sources are provided. It would be possible for these sources to be provided as at least three energy outlets from an illumination system in which there are fewer than three elements which generate the energy.
  • a single energy generation unit (light generating unit) and a switching unit which successively transmits energy generated by the single energy generation unit to respective input ends of at least three energy transmission channels (e.g. optical fibers).
  • the energy would be output at the other ends of the energy transmission channels, which would be at three respective spatially separate locations.
  • the output ends of the energy transmission channels would constitute respective energy sources.
  • the light would propagate from the energy sources in different respective directions.
  • the invention may be expressed as an apparatus for capturing images, including a processor for analyzing the images according to program instructions (which may be stored in non- transitory form on a tangible data storage device). Alternatively, it may be expressed as the method carried out by the apparatus.
  • Fig. 1 shows a first schematic view of an imaging assembly for use in an embodiment of the present invention to form a 3D model of an object
  • Fig. 2 is a flow diagram of a method performed by an embodiment of Fig. 1;
  • Fig. 3 shows, as Figs. 3(a) and 3(b), two images successively captured by one of the cameras of the embodiment of Fig. 1 ;
  • Fig. 4 illustrates sub-steps of a first possible implementation of a step of the method of Fig. 2;
  • Fig. 5 illustrates sub-steps of a second possible implementation of a step of the method of Fig. 2;
  • Fig. 6 is composed of Figs. 6(a) which shows reference features identified in the image of Fig. 3(a), and Fig. 6(b) which shows corresponding features in the image of Fig. 3(b); and
  • Fig. 7 illustrates an embodiment of the invention incorporating the imaging assembly of Fig. 1 and a processor.
  • the imaging assembly includes an energy source 1. It further includes units 2, 3, which each include a respective energy sensor 2a, 3a in the form of an image capturing device, and a respective energy source 2b, 3b (note that in variations of the embodiment, the energy sensors 2a, 3a are not part of the same units as the energy sources 2b, 3b).
  • the units 2, 3 are fixedly mounted to each other by a strut 6, and both are fixedly mounted to the energy source 1 by struts 4, 5.
  • the exact form of the mechanical connection between the units 2, 3 and the energy source 1 is different in other forms of the invention, but it is preferable if it maintains the energy source 1 and the units 2, 3 at fixed distances from each other and at fixed relative orientations.
  • the relative positions of the energy sources 1, 2b, 3b and sensors 2a, 3a are pre-known.
  • the energy sources 1, 2b, 3b and image capturing devices 2a, 3a may be incorporated in a portable, hand-held instrument.
  • the embodiment includes a processor which is in electronic communication with the energy sources 1, 2b, 3b and image capturing devices 2a, 3a. This is described below in detail with reference to Fig. 7.
  • the energy sources 1, 2b, 3b are each adapted to generate electromagnetic radiation, such as visible light or infra-red radiation.
  • the energy sources 1, 2b, 3b are all controlled by the processor.
  • the output of the image capturing devices 2a, 3a is transmitted to the processor.
  • Each of the image capturing devices 2a, 3a is arranged to capture an image of an object 7 (in Fig. 1, a dodecahedron) positioned in both of the respective fields of view of the image capturing devices 2a, 3a.
  • the image capturing devices 2a, 3a are spatially separated, and preferably also arranged with converging fields of view, so the apparatus is capable of providing two separated viewpoints of the object 7, so that stereoscopic imaging of the object 7 is possible.
  • The case of two viewpoints is often referred to as a "stereo pair", although it will be appreciated that in variations of the embodiment more than two spatially-separated image capturing devices may be provided, so that the object 7 is imaged from more than two viewpoints. This may increase the precision and/or visible range of the apparatus.
  • Suitable image capture devices for use in the invention include the 1/3-inch CMOS Digital Image Sensor (AR0330) provided by ON Semiconductor of Arizona, US.
  • Referring to Fig. 2, a method 100 according to the invention is shown.
  • the image capturing devices 2a, 3a take a plurality of images of the object 7 over a certain time period, and transmit them to the processor.
  • the plurality of images preferably comprises at least two images taken respectively by the image capturing devices 2a, 3a at the same time (that is a stereo pair of simultaneously taken images); as discussed below this stereo pair can be used for generating an initial model of the object 7 stereoscopically, and optionally in other ways.
  • the plurality of images comprises, for each of the energy sources 1, 2b, 3b, at least one image taken by one of the image capturing devices 2a, 3a at a time when the processor controls that energy source to illuminate the object 7, and controls the other energy sources not to illuminate the object 7.
  • one or more of the images used for the photometric modeling may be one of the stereo pairs used for stereoscopic modelling.
  • the stereo pair of images may be captured at a time when all three of the energy sources 1, 2b, 3b are illuminating the object 7 (and/or the object 7 is being illuminated by other energy sources (not shown) which need not be directional), and the images for the photometric modelling may be captured at times when only one of the energy sources 1, 2b, 3b is illuminating the object 7.
  • the set of images may include at least one "dark image" which is captured by one of the image capture devices 2a, 3a at a time when none of the energy sources is illuminating the object, so that the object is just reflecting ambient light. Such an image may be useful to measure how much ambient light the object reflects. For example, the ambient light reflected from each pixel may optionally be subtracted from the images used to perform photometry.
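A sketch of the optional ambient-light correction (assuming the registered images are held as numpy arrays; the subtraction itself is the only step the text prescribes):

```python
import numpy as np

def remove_ambient(lit_images, dark_image):
    """Subtract the ambient-only "dark image" from each directionally lit
    image, clamping at zero, before photometric processing."""
    dark = dark_image.astype(np.float32)
    return [np.clip(img.astype(np.float32) - dark, 0.0, None) for img in lit_images]
```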
  • Fig. 3 shows two of the images 9, 10 of the object 7 captured by one of image capture devices 2a, 3a at different respective times. Each image shows in the foreground a view of the object 7 from a different respective viewpoint. Each image also includes a portion showing whatever lies behind the object 7, in the background.
  • in step 102, the processor seeks features of one or more of the feature selection images which are likely to correspond to (i.e. be images of) landmarks on the object 7.
  • A first way in which step 102 may be carried out is shown in Fig. 4.
  • in sub-step 201, the method identifies bright regions in a feature selection image. These regions are likely to correspond to areas of the object 7 rather than the background. Alternatively, if the background is known to have a certain color and/or pattern, sub-step 201 may include rejecting all regions of the image which have this color or pattern.
  • in sub-step 202, the processor identifies features in the bright regions of the feature selection image. This may be done using any standard algorithm for identifying features; such algorithms are used in the methods for performing stereoscopic modeling disclosed in WO 2009/122200.
  • the identified set of features may be at the locations shown in Fig. 6(a). All of these are vertices of the dodecahedron 7. They are used as "reference features".
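A compact sketch of this Fig. 4 route (sub-steps 201-202), under the assumption of an OpenCV implementation; the patent leaves the feature detector open, and a corner detector is used here purely for illustration:

```python
import cv2
import numpy as np

def detect_reference_features(gray, intensity_thresh=60, max_feats=200):
    # Sub-step 201: mask of bright regions, likely belonging to the object.
    _, mask = cv2.threshold(gray, intensity_thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    # Sub-step 202: detect features only inside the bright mask.
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=max_feats,
                                      qualityLevel=0.01, minDistance=5, mask=mask)
    return [] if corners is None else corners.reshape(-1, 2)
```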
  • in a second possible implementation, shown in Fig. 5, step 102 is performed using two feature selection images, which are a stereo pair of images simultaneously captured by the respective image capturing devices 2a, 3a.
  • step 102 includes a sub-step 301 of the processor identifying features in the stereo pair of feature selection images.
  • the algorithm for identifying features may be the same as explained above in relation to sub-step 202, but unlike in sub-step 202, each of the features of one of the feature selection images is matched with a corresponding feature in the other of the feature selection images. In other words, a number of pairs of features are identified, with the features of each pair being in different respective feature selection images.
  • each pair of features corresponds to (i.e. the two features are images of) the same landmark on the object 7 or on the background.
  • the system of WO 2009/122200 makes use of well-known algorithms for matching features in this way.
  • the processor then uses stereoscopy, and the known positional relationship of the image capturing devices 2a, 3a, to determine, for each of the feature pairs, the position of the corresponding landmark in a three-dimensional space defined based on the imaging system. From this, the distance of each landmark from a position in the imaging system is obtained.
  • the processor rejects all landmarks which are found to be outside a certain distance range from the position in the imaging system, e.g. landmarks with a distance from the position in the imaging system which is outside a certain distance range defined by one or more distance parameters (e.g. greater than 2 meters or less than 50 mm; or greater than 550 mm or less than 350 mm), and the features corresponding to the remaining landmarks are used as the reference features.
  • the distance range may be chosen to reflect the distance of the object 7 from the imaging system.
  • the process may be repeated using a narrower distance range.
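A sketch of this distance-based selection (the Fig. 5 route), assuming calibrated 3x4 projection matrices P1, P2 for the two image capturing devices and millimetre units; the specific range is one of the examples given above:

```python
import cv2
import numpy as np

def filter_features_by_distance(pts1, pts2, P1, P2, d_min=350.0, d_max=550.0):
    """pts1/pts2: matched Nx2 feature positions in the stereo pair."""
    a = np.asarray(pts1, np.float32).T              # 2xN
    b = np.asarray(pts2, np.float32).T
    Xh = cv2.triangulatePoints(P1, P2, a, b)        # 4xN homogeneous landmarks
    X = (Xh[:3] / Xh[3]).T                          # Nx3 in calibration units (mm assumed)
    dist = np.linalg.norm(X, axis=1)                # distance from the reference origin
    keep = (dist > d_min) & (dist < d_max)
    return keep, X[keep]
```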
  • in step 103, the processor finds features in all the other images (i.e. the images other than the feature selection image(s)) which correspond to the features of the feature selection image(s) identified in step 102.
  • the features in the image 10 of Fig. 3(b) which correspond to the reference features shown in Fig. 6(a) are at the locations shown by dots in Fig. 6(b).
  • in step 104, the system uses a known homography algorithm, using the features identified in steps 102 and 103, to determine the viewpoints of all the images relative to the object 7 in a common coordinate system. In this way, all the images are registered with each other accurately in the common coordinate system.
  • the images captured by one of the image capturing devices 2a include at least one image captured at the same time as an image captured by the second image capturing device 3a (i.e. the at least one stereo pair of images).
  • the fixed geometrical relationship of the positions and imaging directions of the image capturing devices 2a, 3a may be used to register the respective viewpoints of the two simultaneously captured images.
  • all the mutually registered images captured by one of the image capturing devices 2a are registered with all the mutually registered images captured by the other image capturing device 3a, so that all the images captured by both the image capturing devices 2a, 3a have known viewpoints in a common reference frame in which the object is stationary.
  • the homography algorithm may incorporate any prior knowledge of motion between the object and the imaging system. For example, if it is known that the object is moving past the imaging system on a conveyor belt at a certain speed, that information may be used to give an initial estimate of the relative motion between times at which two images were captured. The initial estimate may be refined using the identified features.
  • the homography algorithm includes calculating a quality control index, determining whether the quality control index is above or below a threshold, and if the quality control index is below the threshold issuing a warning to a user of the imaging system.
  • the quality control index may simply be a measure of the offset between two or more of the images.
  • the threshold may be set to warn the user when the offset is sufficiently great that the 3D imaging process may be unreliable. In this way, the embodiment may be able to issue a warning to the user before the computationally-complex formation of the 3D computer model is carried out (see step 105 below).
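Illustratively, the quality-control check could be as simple as the following (the threshold value is an assumption; the patent only requires that the check flag offsets large enough to make reconstruction unreliable):

```python
import numpy as np

def motion_quality_ok(pts_a, pts_b, max_mean_offset_px=8.0):
    """pts_a/pts_b: corresponding reference-feature positions in two images.
    Returns False (and warns) if the mean offset exceeds the threshold."""
    offsets = np.linalg.norm(np.asarray(pts_a) - np.asarray(pts_b), axis=1)
    index = float(offsets.mean())                 # the quality control index
    if index > max_mean_offset_px:
        print(f"Warning: mean feature offset {index:.1f} px exceeds "
              f"{max_mean_offset_px} px; consider re-capturing.")
        return False
    return True
```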
  • the algorithm may assume that the relative motion of the object and energy sensor was uniform over the period in which the at least three images were captured, and use this assumption to improve the estimation of the motion. For example, if one of the images is darker than the others, such that the reference features cannot be identified in that image, the algorithm may use a relative motion calculated from the other images to estimate the viewpoint from which the dark image was captured. This possibility is particularly useful if the images include a "dark image", in which the reference features of the object 7 are hard to identify: the viewpoint from which the dark image was captured may be inferred from the respective determined viewpoints of two images captured at neighboring times.
  • the viewpoint of a dark image taken by one of the image capture devices 2a, 3a may be assumed to be an average of the viewpoints of an immediately preceding image and an immediately succeeding image taken by the same image capture device (i.e. by interpolation).
  • the viewpoint of a dark image may be inferred by extrapolation from the viewpoints of two or more preceding images, or two or more succeeding images. Since interpolation is usually more accurate than extrapolation, the former possibility is preferred, which suggests that the dark image captured by a certain image capture device should be captured in the middle of the sequence of images captured by that image capture device.
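A sketch of the interpolation, with viewpoints represented as a rotation plus a translation (SciPy is an assumed dependency; alpha = 0.5 encodes the "midway" capture time):

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def interpolate_viewpoint(R_prev, t_prev, R_next, t_next, alpha=0.5):
    """R_*: 3x3 rotation matrices; t_*: 3-vectors. Returns the inferred
    viewpoint of the dark image under the uniform-motion assumption."""
    slerp = Slerp([0.0, 1.0], Rotation.from_matrix([R_prev, R_next]))
    R_mid = slerp([alpha]).as_matrix()[0]                  # rotation: spherical interpolation
    t_mid = (1.0 - alpha) * np.asarray(t_prev) + alpha * np.asarray(t_next)
    return R_mid, t_mid
```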
  • in step 105, the method uses the images to form a 3D model of the object.
  • This can be done by the method described in WO 2009/122200.
  • two acquisition techniques are used to construct the 3D model.
  • One technique of acquisition is passive stereoscopic reconstruction, which calculates surface depth based on optical triangulation. This is based around known principles of optical parallax. This technique generally provides good unbiased low-frequency information (the coarse underlying shape of the surface of the object), but is noisy or lacks high frequency detail.
  • the other technique is photometric reconstruction, in which surface orientation is calculated from the observed variation in reflected energy against the known angle of incidence of the directional source.
  • the model may be formed by forming an initial model of the shape of the object 7 using stereoscopic reconstruction, and then refining the model using the photometric data.
  • the stereoscopic reconstruction uses optical triangulation, by geometrically correlating pairs of features in the respective stereo pair of images captured by the image capture devices 2a, 3a to give the positions of each of the corresponding landmarks in a three-dimensional space defined based on the imaging system. If step 102 was performed in the way shown in Fig. 5, then these steps have already been performed, and need not be repeated. The positions of the landmarks are then used to form the initial model of the object 7. Note that in variations of the embodiment the initial model of the object 7 may be formed in other ways, such as using a depth camera.
  • the photometric reconstruction requires an approximating model of the surface material reflectivity properties. In the general case this may be modelled (at a single point on the surface) by the Bidirectional Reflectance Distribution Function (BRDF).
  • a simplified model is typically used in order to render the problem tractable.
  • One example is the Lambertian Cosine Law model. In this simple model the intensity of the surface as observed by the camera depends only on the quantity of incoming irradiant energy from the energy source and foreshortening effects due to surface geometry on the object.
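Under that Lambertian assumption, the normals follow from a per-pixel least-squares solve; the sketch below is the textbook photometric-stereo computation (assumed here for illustration, not code from the patent):

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """images: list of k (>= 3) registered HxW float arrays; light_dirs: k x 3
    unit vectors towards the lights. Returns per-pixel unit normals and albedo."""
    h, w = images[0].shape
    I = np.stack([im.reshape(-1) for im in images])        # k x (H*W) intensities
    L = np.asarray(light_dirs, np.float64)                 # k x 3
    G, *_ = np.linalg.lstsq(L, I, rcond=None)              # solve L @ g = I per pixel
    albedo = np.linalg.norm(G, axis=0)                     # rho = |g|
    normals = G / np.maximum(albedo, 1e-8)                 # n = g / rho
    return normals.T.reshape(h, w, 3), albedo.reshape(h, w)
```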
  • the data obtained by the photometric and stereoscopic reconstructions is fused by treating the stereoscopic reconstruction as a low-resolution skeleton providing a gross-scale shape of the object, and using the photometric data to provide high-frequency geometric detail and material reflectance characteristics.
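One common way to realise this fusion (an assumed approach consistent with the description, not the patent's specified algorithm) is to integrate the photometric normals into a fine-detail height map and splice its high frequencies onto the low frequencies of the stereo depth:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def integrate_normals(normals):
    """Frankot-Chellappa-style integration of an HxWx3 normal map to a height map."""
    p = -normals[..., 0] / np.clip(normals[..., 2], 1e-3, None)   # dz/dx
    q = -normals[..., 1] / np.clip(normals[..., 2], 1e-3, None)   # dz/dy
    h, w = p.shape
    wx = np.fft.fftfreq(w) * 2.0 * np.pi
    wy = np.fft.fftfreq(h) * 2.0 * np.pi
    WX, WY = np.meshgrid(wx, wy)
    denom = WX**2 + WY**2
    denom[0, 0] = 1.0                                             # avoid divide-by-zero at DC
    Z = np.fft.ifft2((-1j * WX * np.fft.fft2(p) - 1j * WY * np.fft.fft2(q)) / denom)
    return np.real(Z)

def fuse_depth(stereo_depth, normals, sigma=10.0):
    """Low frequencies (gross shape) from stereo; high frequencies from photometry."""
    fine = integrate_normals(normals)
    return gaussian_filter(stereo_depth, sigma) + (fine - gaussian_filter(fine, sigma))
```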
  • Fig. 7 is a block diagram showing a technical architecture of the overall system 200 for performing the method.
  • the technical architecture includes a processor 322 (which may be referred to as a central processor unit or CPU) that is in communication with the cameras 2a, 3a, for controlling when they capture images and receiving the images.
  • the processor 322 is further in communication with, and able to control the energy sources 1 , 2b, 3b.
  • the processor 322 is also in communication with memory devices including secondary storage 324 (such as disk drives or memory cards), read-only memory (ROM) 326, and random access memory (RAM) 328.
  • the processor 322 may be implemented as one or more CPU chips.
  • the system 200 includes a user interface (UI) 330 for controlling the processor 322.
  • the UI 330 may comprise a touch screen, keyboard, keypad or other known input device. If the UI 330 comprises a touch screen, the processor 322 is operative to generate an image on the touch screen. Alternatively, the system may include a separate screen (not shown) for displaying images under the control of the processor 322.
  • the system 200 optionally further includes a unit 332 for forming 3D objects designed by the processor 322; for example the unit 332 may take the form of a 3D printer.
  • the system 200 may include a network interface for transmitting instructions for production of the objects to an external production device.
  • the secondary storage 324 typically comprises a memory card or other storage device and is used for non-volatile storage of data and as an overflow data storage device if RAM 328 is not large enough to hold all working data. Secondary storage 324 may be used to store programs which are loaded into RAM 328 when such programs are selected for execution.
  • the secondary storage 324 has an order generation component 324a, comprising non-transitory instructions operative by the processor 322 to perform various operations of the method of the present disclosure.
  • the ROM 326 is used to store instructions and perhaps data which are read during program execution.
  • the secondary storage 324, the RAM 328, and/or the ROM 326 may be referred to in some contexts as computer readable storage media and/or non-transitory computer readable media.
  • the processor 322 executes instructions, codes, computer programs, and scripts which it accesses from hard disk, floppy disk, optical disk (these various disk-based systems may all be considered secondary storage 324), flash drive, ROM 326, RAM 328, or a network connectivity device. While only one processor 322 is shown, multiple processors may be present. Thus, while instructions may be discussed as executed by a processor, the instructions may be executed simultaneously, serially, or otherwise executed by one or multiple processors.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention concerns a 3D imaging system in which an object to be imaged is illuminated successively in at least three directions, and at least three images of the object are captured by one or more energy sensors. Corresponding features in different ones of the images are identified, and the positions of the features in the images are used to estimate the motion of the object relative to the energy sensors. The estimated motion is used to register the images in a common coordinate system, making it possible to correct for the relative motion of the object and the imaging system between the different times at which the images were captured. The features may be selected so as to correspond to landmarks on the object rather than on a background behind the object.
PCT/GB2016/053368 2015-11-03 2016-10-31 Systems and methods for imaging three-dimensional objects WO2017077277A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP16788779.3A EP3371780A1 (fr) 2015-11-03 2016-10-31 Systems and methods for imaging three-dimensional objects

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB1519398.0 2015-11-03
GB1519398.0A GB2544263A (en) 2015-11-03 2015-11-03 Systems and methods for imaging three-dimensional objects

Publications (1)

Publication Number Publication Date
WO2017077277A1 (fr)

Family

ID=55130599

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2016/053368 WO2017077277A1 (fr) Systems and methods for imaging three-dimensional objects

Country Status (3)

Country Link
EP (1) EP3371780A1 (fr)
GB (1) GB2544263A (fr)
WO (1) WO2017077277A1 (fr)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20200122967A (ko) * 2019-04-19 2020-10-28 주식회사 스트리스 System and method for constructing road spatial information by linking image information acquired from multiple image sensors with position information
US11116407B2 (en) 2016-11-17 2021-09-14 Aranz Healthcare Limited Anatomical surface assessment methods, devices and systems
US11250945B2 (en) 2016-05-02 2022-02-15 Aranz Healthcare Limited Automatically assessing an anatomical surface feature and securely managing information related to the same
US11850025B2 (en) 2011-11-28 2023-12-26 Aranz Healthcare Limited Handheld skin measuring or monitoring device
US11903723B2 (en) 2017-04-04 2024-02-20 Aranz Healthcare Limited Anatomical surface assessment methods, devices and systems

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009122200A1 (fr) * 2008-04-02 2009-10-08 Eykona Technologies Ltd 3D imaging system

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7027642B2 (en) * 2000-04-28 2006-04-11 Orametrix, Inc. Methods for registration of three-dimensional frames to create three-dimensional virtual models of objects
US7289662B2 (en) * 2002-12-07 2007-10-30 Hrl Laboratories, Llc Method and apparatus for generating three-dimensional models from uncalibrated views
US20110007072A1 (en) * 2009-07-09 2011-01-13 University Of Central Florida Research Foundation, Inc. Systems and methods for three-dimensionally modeling moving objects
US8754887B2 (en) * 2012-07-20 2014-06-17 Google Inc. Determining three-dimensional (3D) object data models based on object movement

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009122200A1 (fr) * 2008-04-02 2009-10-08 Eykona Technologies Ltd 3D imaging system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHENGLEI WU ET AL: "Fusing Multiview and Photometric Stereo for 3D Reconstruction under Uncalibrated Illumination", IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, IEEE SERVICE CENTER, LOS ALAMITOS, CA, US, vol. 17, no. 8, 1 August 2011 (2011-08-01), pages 1082 - 1095, XP011373286, ISSN: 1077-2626, DOI: 10.1109/TVCG.2010.224 *
JONGWOO LIM ET AL: "Passive Photometric Stereo from Motion", COMPUTER VISION, 2005. ICCV 2005. TENTH IEEE INTERNATIONAL CONFERENCE ON BEIJING, CHINA 17-20 OCT. 2005, PISCATAWAY, NJ, USA,IEEE, LOS ALAMITOS, CA, USA, vol. 2, 17 October 2005 (2005-10-17), pages 1635 - 1642, XP010857009, ISBN: 978-0-7695-2334-7, DOI: 10.1109/ICCV.2005.185 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11850025B2 (en) 2011-11-28 2023-12-26 Aranz Healthcare Limited Handheld skin measuring or monitoring device
US11250945B2 (en) 2016-05-02 2022-02-15 Aranz Healthcare Limited Automatically assessing an anatomical surface feature and securely managing information related to the same
US11923073B2 (en) 2016-05-02 2024-03-05 Aranz Healthcare Limited Automatically assessing an anatomical surface feature and securely managing information related to the same
US11116407B2 (en) 2016-11-17 2021-09-14 Aranz Healthcare Limited Anatomical surface assessment methods, devices and systems
US11903723B2 (en) 2017-04-04 2024-02-20 Aranz Healthcare Limited Anatomical surface assessment methods, devices and systems
KR20200122967A (ko) * 2019-04-19 2020-10-28 주식회사 스트리스 System and method for constructing road spatial information by linking image information acquired from multiple image sensors with position information
KR102225321B1 (ko) 2019-04-19 2021-03-09 주식회사 스트리스 System and method for constructing road spatial information by linking image information acquired from multiple image sensors with position information

Also Published As

Publication number Publication date
GB201519398D0 (en) 2015-12-16
EP3371780A1 (fr) 2018-09-12
GB2544263A (en) 2017-05-17

Similar Documents

Publication Publication Date Title
US9367952B2 (en) 3D geometric modeling and 3D video content creation
EP3371779B1 (fr) Systems and methods for forming models of three-dimensional objects
JP6456156B2 (ja) Normal information generating device, imaging device, normal information generating method, and normal information generating program
US10832429B2 (en) Device and method for obtaining distance information from views
CN106643699B (zh) Spatial positioning device and positioning method in a virtual reality system
US8090194B2 (en) 3D geometric modeling and motion capture using both single and dual imaging
CN104335005B (zh) 3D scanning and positioning system
WO2017077277A1 (fr) Systems and methods for imaging three-dimensional objects
US20100245851A1 (en) Method and apparatus for high-speed unconstrained three-dimensional digitalization
CN104634276A (zh) Three-dimensional measurement system, image capture device and method, and depth computation method and device
EP3381015B1 (fr) Systems and methods for forming models of objects in three dimensions
EP3069100B1 (fr) 3D mapping device
JP2009288235A (ja) Method and apparatus for determining the attitude of an object
JP6556013B2 (ja) Processing device, processing system, imaging device, processing method, program, and recording medium
EP3382645A2 (fr) Method for generating a 3D model from structure from motion and photometric stereo of sparse 2D images
JP6282377B2 (ja) Three-dimensional shape measurement system and measurement method
EP4189650A2 (fr) Systems, methods and media for directly recovering planar surfaces in a scene using structured light
JP4193342B2 (ja) Three-dimensional data generation device
EP3232153B1 (fr) Precision portable scanning device
CN107392955B (zh) Brightness-based depth-of-field estimation device and method
RU2685761C1 (ru) Photogrammetric method of measuring distances by rotating a digital camera
Xu et al. High-resolution modeling of moving and deforming objects using sparse geometric and dense photometric measurements
KR20130019080A (ko) Imaging device combining a plenoptic camera and a depth camera, and image processing method
Rüther et al. μNect: On using a gaming RGBD camera in micro-metrology applications
CN110021044A (zh) Method for calculating the coordinates of a photographed object using dual fisheye images, and image acquisition device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16788779

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2016788779

Country of ref document: EP