WO2012061945A1 - System and method for object searching using spatial data - Google Patents

System and method for object searching using spatial data

Info

Publication number
WO2012061945A1
Authority
WO
WIPO (PCT)
Prior art keywords
target object
search
model
database
image
Prior art date
Application number
PCT/CA2011/050700
Other languages
French (fr)
Inventor
Miriam Tuerk
Mark Joseph Fasciano
James Andrew Estill
Edmund Cochrane Reeler
Dmitry Kulakov
Original Assignee
Ambercore Software Inc.
Priority date
Filing date
Publication date
Application filed by Ambercore Software Inc. filed Critical Ambercore Software Inc.
Publication of WO2012061945A1


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 - Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 - Computer-aided design [CAD]

Definitions

  • the following relates generally to searching for objects, or identifying objects, or both, using data representing spatial coordinates.
  • In order to investigate an object or structure, it is known to interrogate the object or structure and collect data resulting from the interrogation.
  • the nature of the interrogation will depend on the characteristics of the object or structure.
  • the interrogation will typically be a scan by a beam of energy propagated under controlled conditions.
  • the results of the scan are stored as a collection of data points, and the position of the data points in an arbitrary frame of reference is encoded as a set of spatial-coordinates. In this way, the relative positioning of the data points can be determined and the required information extracted from them.
  • Data having spatial coordinates may include data collected by electromagnetic sensors of remote sensing devices, which may be of either the active or the passive types.
  • Non-limiting examples include LiDAR (Light Detection and Ranging), RADAR, SAR (Synthetic Aperture RADAR), IFSAR (Interferometric Synthetic Aperture Radar) and Satellite Imagery.
  • Other examples include various types of 3D scanners and may include sonar and ultrasound scanners.
  • Data having spatial coordinates may also include 2D images collected from camera or photographic devices.
  • LiDAR refers to a laser scanning process which is usually performed by a laser scanning device from the air, from a moving vehicle or from a stationary tripod.
  • the process typically generates spatial data encoded with three dimensional spatial data coordinates having XYZ values and which together represent a virtual cloud of 3D point data in space or a "point cloud".
  • Each data element or 3D point may also include an attribute of intensity, which is a measure of the level of reflectance at that spatial data coordinate, and often includes attributes of RGB, which are the red, green and blue color values associated with that spatial data coordinate.
  • Other attributes such as first and last return and waveform data may also be associated with each spatial data coordinate. These attributes are useful both when extracting information from the point cloud data and for visualizing the point cloud data. It can be appreciated that data from other types of sensing devices may also have similar or other attributes.
  • Point cloud data, or spatial data in general, can reveal to the human eye a great deal of information about the various objects which have been scanned or imaged.
  • Information can also be manually extracted from the point cloud data and represented in other forms such as 3D vector points, lines and polygons, or as 3D wire frames, shells and surfaces.
  • a common approach for extracting these types of information from 3D point cloud data or 2D images involves subjective manual pointing at points representing a particular feature within the point cloud data or the 2D image data either in a virtual 3D view or on 2D plans, cross sections and profiles. The collection of selected points is then used as a representation of an object.
  • Some semi-automated software and CAD tools exist to streamline the manual process including snapping to improve pointing accuracy and spline fitting of curves and surfaces. Such a process is tedious and time consuming. Accordingly, methods and systems that better semi-automate and automate the extraction of these geometric features from the point cloud data are highly desirable.
  • Figure 1 is a schematic diagram to illustrate an example of an aircraft and a ground vehicle using sensors to collect data points of a landscape.
  • Figure 2 is a block diagram of an example embodiment of a computing device and example software components.
  • Figure 3 is a block diagram showing example components of a 3D objects database.
  • Figure 4 is a flow diagram illustrating example computer executable instructions for searching for similar 3D objects.
  • Figure 5 is a flow diagram illustrating example computer executable instructions for obtaining a 3D model of an object through laser scanning.
  • Figure 6 is a flow diagram illustrating example computer executable instructions for obtaining a 3D model of an object through photogrammetry.
  • Figure 7 is a flow diagram illustrating example computer executable instructions for obtaining a 3D model of an object through silhouette imaging.
  • Figure 8 is a flow diagram illustrating example computer executable instructions for obtaining a 3D model of an object through edge detection.
  • Figure 9 is a flow diagram illustrating example computer executable instructions for scaling a 3D model to correspond with dimensions of a target object.
  • Figure 10 is a flow diagram illustrating example computer executable instructions for receiving or obtaining one or more spatial criteria to perform an object search of the target object.
  • Figure 11 is a flow diagram illustrating example computer executable instructions for generating one or more adjusted 3D models, or search shells.
  • Figure 12 is a flow diagram illustrating example computer executable instructions for searching for 3D objects by comparing the search shells with objects stored in the 3D objects database.
  • Figure 13 is a flow diagram illustrating example computer executable instructions for returning the results of the search.
  • Figure 14 is a schematic diagram for a camera device obtaining images of a target object from different angles and constructing an enclosed space from the images.
  • Figure 15 is a schematic diagram for constructing points of an example 3D object using 2D images.
  • Figure 16 is a schematic diagram illustrating search shells of a target object, the target object positioned beside a standard reference object.
  • Figure 17 is a schematic diagram illustrating the comparison of an example search shell and a 3D CAD model of another object.
  • Figure 18 is another schematic diagram illustrating the comparison of the example search shell and a 3D CAD model of yet another object.
  • Figure 19 is another schematic diagram illustrating the comparison of the example search shell and a 3D CAD model of yet another object.
  • Figure 20 is another schematic diagram illustrating the comparison of the example search shell and a 3D CAD model of yet another object.
  • Figure 21 is a schematic diagram illustrating the cross-section along A-A of Figure 17.
  • Figure 22 is a schematic diagram illustrating the cross-section along B-B of Figure 18.
  • Figure 23 is a schematic diagram illustrating the cross-section along C-C of Figure 19.
  • Figure 24 is a schematic diagram illustrating the cross-section along D-D of Figure 20.
  • Figure 25 is a block diagram showing example components of a 2D image and 3D objects database.
  • Figure 26 is a flow diagram illustrating example computer executable instructions for searching for 2D images and 3D CAD models of similar objects.
  • Figure 27 is a flow diagram illustrating example computer executable instructions for isolating a sub-image using edge detection.
  • Figure 28 is a flow diagram illustrating example computer executable instructions for scaling a 2D sub-image using a standard reference object.
  • Figure 29 is a flow diagram illustrating example computer executable instructions for searching for similar objects in the 2D images and 3D objects database using spatial criteria.
  • Figure 30 is a flow diagram illustrating example computer executable instructions for determining the probability that the search results are correct.
  • Figure 31, Figure 32 and Figure 33 are schematic diagrams illustrating example stages of searching for similar objects in the 2D images and 3D objects database.
  • Figure 34 is a block diagram showing example components of a 2D image and 3D objects database, as well as attributes associated with the objects.
  • Figure 35 is a flow diagram illustrating example computer executable instructions for identifying an irregular or unknown object through object matching in the 2D images and 3D objects database.
  • Figure 36 is a flow diagram illustrating example computer executable instructions for isolating and categorizing sub-images.
  • Figure 37 illustrates example computer executable instructions for receiving spatial criteria.
  • Figure 38 is a flow diagram illustrating example computer executable instructions for conducting a combined 2D and 3D search comparison.
  • Figure 39 is a flow diagram illustrating example computer executable instructions for conducting another example embodiment of a combined 2D and 3D search comparison.
  • Figure 40 is a flow diagram illustrating example computer executable instructions for conducting another example embodiment of a combined 2D and 3D search comparison.
  • Figure 41 is a schematic diagram illustrating the projection of a 3D CAD model onto a 2D plane, and the projection of a 2D image onto a 3D CAD model.
  • Figure 42 is a flow diagram illustrating example computer executable instructions for determining a related attribute of an identified target object.
  • the proposed systems and methods extract various features from data having 2D and 3D spatial coordinates (e.g. images and 3D point clouds), and search for similar objects in a database.
  • searching for objects or identifying objects can be difficult, for example, when based on text since text may not be sufficiently descriptive.
  • searching for objects based on 2D images and 3D images can be more accurate and allow for similar-type objects to be identified.
  • the objects to be searched are various and non-limiting examples include: cars, trains, telephones, vases, chairs, cups, clothing, shoes, cutlery, dishes, street signs, food items, staplers, pens, hair accessories, tables, lamps, bicycles, tools, etc.
  • the search or identification is based on inputting or providing at least one of: a 2D image of an object of interest, a 3D model of an object of interest (e.g. from laser scanning or a 3D computer aided design (CAD) model), multiples thereof, and combinations thereof.
  • the proposed systems and methods search for similar or identical objects in a database containing 2D images, or 3D models, or both.
  • the object of interest is herein referred to as a "target object", since it is the target or reference used to search for similar or identical objects.
  • the data may be collected from various types of sensors.
  • a non-limiting example of such a sensor is the LiDAR system built by Ambercore Software Inc. and available under the trade-mark TITAN.
  • data is collected using one or more sensors 10.
  • the sensors 10 may be mounted to an aircraft 2 or to a ground vehicle 12.
  • the aircraft 2 may fly over a landscape 6 (e.g. an urban landscape, a suburban landscape, a rural or isolated landscape) while a sensor collects data points about the landscape 6.
  • the LiDAR sensor 10 would emit lasers 4 and collect the laser reflection.
  • Similar principles apply when an electromagnetic sensor 10 is mounted to a ground vehicle 12.
  • a LiDAR system may emit lasers 8 to collect data.
  • the collected data may be stored onto a memory device. Data points that have been collected from various sensors (e.g. airborne sensors, ground vehicle sensors, stationary sensors) can be merged together to form a point cloud.
  • Each of the collected data points is associated with respective spatial coordinates which may be in the form of three dimensional spatial data coordinates, such as XYZ Cartesian coordinates (or alternatively a radius and two angles representing Polar coordinates).
  • Each of the data points also has numeric attributes indicative of a particular characteristic, such as intensity values, RGB values, first and last return values and waveform data, which may be used as part of the filtering process.
  • the RGB values may be measured from an imaging camera and matched to a data point sharing the same coordinates.
  • the determination of the coordinates for each point is performed using known algorithms to combine location data, e.g. GPS data, of the sensor with the sensor readings to obtain a location of each point with an arbitrary frame of reference.
  • data of a target object may also be collected from LiDAR devices suitable for scanning smaller objects, such as terrestrial laser scanning, industrial laser scanning and handheld 3D laser scanning devices.
  • Terrestrial laser scanners and industrial laser scanners include those manufactured by companies such as RIEGL and LEICA.
  • handheld 3D laser scanners include those manufactured by companies such as NIKON.
  • Data of a target object may also be collected from camera or photographic devices. These include the camera devices on mobile or cellular phones, such as those provided by, for example, Research in Motion Limited and Apple Inc. In general, various 2D and 3D imaging devices are applicable to the principles described herein.
  • a computing device 20 includes a processor 22 and memory 24.
  • the memory 24 communicates with the processor 22 to process data.
  • It can be appreciated that various types of computer configurations (e.g. networked servers, standalone computers, mobile devices, cloud computing, etc.) are applicable to the principles described herein.
  • the data having spatial coordinates 26 and various software 28 reside in the memory 24.
  • the data having spatial coordinates 26 may refer to 3D data (e.g. 3D CAD models, laser scanned points, etc.).
  • Although data having spatial coordinates includes photos, video data, and 2D images, for further clarity, these are simply referred to as 2D images 30.
  • the 2D images 30 are also stored in memory 24.
  • a 3D objects database 32 stores 3D CAD models of objects. These objects are named and may be associated with additional information.
  • a 2D and 3D objects database 34 stores 2D images of objects and 3D CAD models of objects. For example, in the database 34, there may be associated with a certain object, such as a chair, 2D images of the chair and a 3D CAD model of the chair. There may also be attributes 36 associated with the objects in either, or both, of the databases 32, 34. It can be appreciated that the databases 32, 34, 36, the 2D images 30, the data having spatial coordinates 26, and the software 28 may all interact with each other.
  • An example approach for populating the databases 32, 34 with 3D CAD models of objects would be to import design drawings (e.g. 3D CAD models) of various objects.
  • objects may be scanned by LiDAR, cameras, etc. from various angles to generate 3D CAD models of objects. It can be appreciated that there are various ways of generating or obtaining 3D CAD models to populate the databases 32, 34.
  • 3D models also called 3D CAD models, herein refer to mathematical representations that are suitable for processing by computing devices.
  • Non- limiting examples of 3D models include 3D block models, 3D wire frame models, 3D shell models, 3D solid models, etc.
  • a display device 18 may also be in communication with the processor 22 to display 2D or 3D images.
  • the data 26, 30 may be processed according to various computer executable operations or instructions stored in the software 28. In this way, the features may be extracted from the data 26, 30, and similar or matching objects can be identified in the databases 32, 34, 36.
  • the software 28 may include a number of different modules for searching for similar or matching objects in the databases 32, 34, 36.
  • a similar 3D object searching module 38 searches for 3D CAD models of objects that are similar to one or more 2D images of a target object and/or a 3D model of a target object. For example, by taking several photographs of a chair (e.g. the target object) from different angles, the photographs can be used to create a 3D enclosed volume of space which is then used to search for 3D CAD models of similar-looking chairs in the 3D objects database 32.
  • Module 40 is for searching for similar 2D images and 3D objects (e.g. in database 34) using 2D images and/or 3D CAD models of a target object.
  • a search can be performed using module 40 to return 2D images and their associated 3D CAD models of similar-looking chairs and then validate these 2D images and 3D CAD models against the original photograph.
  • Module 42 is for analysing irregular objects using any one of databases 32, 34 in combination with database 36, which specifies attributes associated with the objects. For example, based on a photograph and/or laser scan of a plate or dish of food, the unknown food objects (e.g. a chicken leg, a serving of rice, corn on the cob) on the plate can be identified by comparing the 2D images (and/or 3D models) of the unknown food objects with known 2D images of objects and/or their associated 3D CAD models of objects in the databases 32, 34. Any candidate 2D images and their associated 3D models can then be validated against the original photograph using their inherent 2D and 3D geometric properties with scaling to determine matches ranging from exact fits to rough approximations with estimated error. Moreover, by comparing the unknown object with similar or identical known objects in the databases 32, 34, attributes 36, such as the weight of the known object, can be scaled, estimated and associated with the unknown object.
  • any module or component exemplified herein that executes instructions or operations may include or otherwise have access to computer readable media such as storage media, computer storage media, or data storage devices (removable and/or non-removable) such as, for example, magnetic disks, optical disks, or tape.
  • Computer storage media may include volatile and non-volatile, removable and nonremovable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data, except transitory propagating signals per se.
  • Examples of computer storage media include RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by an application, module, or both. Any such computer storage media may be part of the computing device 20 or accessible or connectable thereto. Any application or module herein described may be implemented using computer readable/executable instructions or operations that may be stored or otherwise held by such computer readable media.
  • example data components of the 3D objects database 32 are shown.
  • 3D CAD model 60 may be associated with data tags 66
  • 3D CAD model 62 may be associated with data tags 68
  • 3D CAD model 64 may be associated with data tags 70.
  • Each data tag can include one or more of the following: an object type, a manufacturer make, a model, etc.
  • a 3D CAD model of a certain car may belong under the object type "car”, have a manufacturer make “Toyota”, and have model "Prius 2010".
  • the type of identifying information may vary with the different types of objects. It can be appreciated that the geometric or spatial information inherent of the 3D CAD model, or the data tags, or both, may be used as searching parameters.
  • the 3D objects database also includes 3D CAD models of standard reference objects.
  • Standard reference objects refer to well known 3D objects. These include, for example, a pop can, stop sign, pen, dollar bill, coins, sheet of paper, fire hydrant, paper clip, etc.
  • Standard reference objects are consistent in size and shape as they are manufactured in large quantities. For example, all twenty dollar bills are the same within a country; all stop signs are the same; all pop cans are the same; etc. It can be appreciated that standard reference objects may be customized depending on the application or industry (e.g. a hard hat may be a standard reference object in the construction industry). The dimensions of these standard reference objects are known through their 3D CAD models. The standard reference objects are used to scale 3D CAD models of objects, as will be discussed later.
  • example computer executable instructions are provided for searching for similar objects in the database 32. These operations are implemented by module 38.
  • the computing device obtains or receives a 3D model of an object of interest (e.g. the target object).
  • the 3D model is scaled to correspond with the actual dimensions of the target object, for example, if the 3D model is of different dimensions of the target object.
  • one or more search criteria such as spatial criteria, are received or obtained to perform a search for objects in the database 32 that are similar to the target object.
  • At block 78, using the spatial criteria, the computing device 20 generates one or more adjusted 3D models (e.g. "search shells").
  • the adjusted 3D models, or search shells reflect the spatial tolerances. For example, if the obtained 3D model of the target object is 30 cm tall, and the height tolerance is +/- 5 cm, then one adjusted 3D model of the target object may be 35 cm tall, while another may be 25 cm tall.
  • The computing device 20 searches for one or more similar objects in a 3D objects database 32 by comparing the one or more adjusted 3D models, or search shells, with the same spatial criteria of the stored 3D models. For example, the search may determine whether an object in the 3D objects database 32 has a height between 25 cm and 35 cm. If so, the object may be considered to be similar to the target object.
  • Figures 5, 6, 7, and 8 provide example computer executable instructions for obtaining a 3D model of the target object, as per block 72. It can be appreciated that there are different approaches to obtaining a 3D model, as discussed below.
  • Figure 5 relates to obtaining a 3D model from laser or LiDAR scanning.
  • The target object is laser or LiDAR scanned to generate a point cloud of 3D points covering the surface of the target object.
  • a closed triangulated irregular network (TIN) is generated representing the target object's surface in the form of a wire frame.
  • TIN can be formed using Delaunay's triangulation algorithm, for example.
  • various triangulation algorithms for generating a TIN are applicable to the principles described herein.
  • A 3D shell or 3D CAD model of the target object is generated. In this approach, the 3D CAD model should be accurately sized to match the dimensions of the target object, due to the accuracy of the laser scanning method.
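As a rough illustration of this step, the sketch below builds a closed triangulated surface from a 3D point cloud. It is a minimal example, assuming SciPy is available and using a convex hull as a stand-in for the closed TIN described above (a Delaunay-based or other surface-reconstruction method would be needed for non-convex objects); the synthetic point cloud and the helper name are illustrative only.

```python
import numpy as np
from scipy.spatial import ConvexHull

def point_cloud_to_shell(points):
    """Build a closed triangulated surface ("shell") from a 3D point cloud.

    A convex hull is used here as a simple stand-in for the closed TIN the
    text describes; it yields a closed, triangulated outer surface for
    convex-ish scanned objects.
    """
    hull = ConvexHull(points)            # triangulated outer surface
    triangles = points[hull.simplices]   # (n_triangles, 3, 3) vertex coordinates
    return triangles, hull.volume, hull.area

# Example: a synthetic "scan" of 2000 random points filling a unit cube.
rng = np.random.default_rng(0)
scan = rng.random((2000, 3))
tris, volume, area = point_cloud_to_shell(scan)
print(f"{len(tris)} triangles, volume ~ {volume:.2f}, surface area ~ {area:.2f}")
```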
  • Figure 6 provides example computer executable instructions for generating a 3D CAD model of the target object using photogrammetry.
  • the computing device receives at least two images of the target object, the images photographed from different angles or positions.
  • stereophotogrammetry is applied to the images to estimate the coordinates of the 3D points on the target object.
  • the 2D points of the images are correlated to determine the 3D coordinates of points on the target object.
  • a closed TIN wire frame of the target object is generated (block 98).
  • block 100 from the TIN, a 3D shell or 3D CAD model of the object is generated.
  • the dimensions of the 3D CAD model could be accurately dimensioned if various photogrammetric parameters are known, such as the focal length, distance between the camera and the target object, and number of pixels in length and width of the target object. Using such parameters, known methods may be applied to determine the actual dimensions of the target object, which may be applied to the 3D CAD model of the target object.
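A minimal sketch of the stereophotogrammetry step described above, assuming OpenCV is available: two calibrated camera views are used to triangulate 3D points on the target object from corresponding 2D image points. The camera matrices and point correspondences below are made-up values for illustration; a real pipeline would obtain them from camera calibration and feature matching.

```python
import numpy as np
import cv2

# Hypothetical calibrated cameras: 3x4 projection matrices P = K [R | t].
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                  # first camera at the origin
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])  # second camera shifted 0.5 m

# Corresponding 2D points (2xN: x row, y row) of the same target-object
# features as seen in each image; values are made up for illustration.
pts1 = np.array([[310.0, 352.0],
                 [241.0, 270.0]])
pts2 = np.array([[275.0, 318.0],
                 [240.0, 269.0]])

# Triangulate: returns 4xN homogeneous coordinates of the 3D points.
X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)
X = (X_h[:3] / X_h[3]).T   # Nx3 Euclidean coordinates on the target object
print(X)
```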
  • Figure 7 provides another approach using silhouette imaging.
  • the computing device 20 receives at least two images of the target object.
  • the target object is photographed from different positions or angles (e.g. for stereoscopic effect) and the target object is photographed against a uniform background. In this way, the outline of the target object image from the images can be more easily identified.
  • The computing device 20 receives the distance between the target object and the camera device that captured the images. The distance can be measured manually, or automatically, such as by a range finder. Examples of range finder devices include ultrasonic transmitters and receivers, infrared light beams, camera devices, or combinations thereof.
  • the angle or position of the camera device is also received.
  • all the images are converted into silhouettes by marking the background pixels, which are known to have approximately uniform RGB values, as white and all other pixels as black. Therefore, the target object image should be black.
  • the silhouettes of the target objects are mathematically projected to form a 3D CAD model of the enclosing volume in space.
  • the distance from the target object, the angle of the image plane, and the position of the camera device capturing the images are used to determine the locations of points and lines of the target object. It can be appreciated that the actual dimensions of the target object may be accurately represented using the photogrammetric parameters, as discussed above with respect to Figure 6.
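The silhouette conversion described above (marking near-uniform background pixels white and everything else black) can be sketched as a simple thresholding step. The background colour and tolerance values below are assumptions; in practice the background colour might be sampled from the image borders.

```python
import numpy as np

def to_silhouette(image, background_rgb, tol=20):
    """Convert an image into a black/white silhouette.

    Pixels close to the (approximately uniform) background colour become
    white (255); everything else, assumed to belong to the target object,
    becomes black (0), as described in the text.
    """
    diff = np.abs(image.astype(int) - np.asarray(background_rgb)).max(axis=-1)
    return np.where(diff <= tol, 255, 0).astype(np.uint8)

# Example: a tiny "photograph" with a light background and a dark object in the middle.
img = np.full((4, 4, 3), 250, dtype=np.uint8)
img[1:3, 1:3] = (40, 60, 30)
print(to_silhouette(img, background_rgb=(250, 250, 250)))
```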
  • In Figure 14, a schematic diagram shows a target object, being a car 172, that is being imaged from different positions.
  • The image planes 174, 176, 178, 180, 182, 184, 186 and 188 are taken from different angles and distances relative to the target object.
  • the images of the car in the image planes are used to form silhouettes of the car, which can then be used to determine or define an enclosed volume of space in the approximate shape of the car.
  • the enclosed volume of space is then used to generate a 3D CAD model representing the car.
  • In Figure 8, another approach for generating a 3D CAD model of a target object is provided and is based on edge detection.
  • At block 110, at least two images of the target object are received. The images are photographed from different positions (e.g. for stereoscopic effect) and are preferably, although not necessarily, photographed against a uniform background.
  • the distances between the target object and the camera device are received or obtained.
  • edge detection algorithms are applied to separate the image of the target object from the background, as well as to separate the image of the target object into distinct polygonal components.
  • the edges of the polygonal components may be based on discontinuities in depth, discontinuities in surface orientation, discontinuities in color, discontinuities in material properties, etc.
  • edge detection algorithms include differential edge detection, Sobel edge detection, Laplace edge detection, and Canny edge detection. Other edge detection algorithms may be used to identify polygonal components of the target object. Threshold or gradient filters may also be used to pre-process the images of the target object, so that edges of the target object can be more easily detected.
  • the component polygons from the different images are mathematically projected to determine where their projection intersects in 3D space.
  • The 3D point intersections are used to define corners or profile lines of the target object, and to generate a closed TIN wire frame of the target object.
  • From the TIN, a 3D shell or 3D CAD model of the object is generated (block 120). If the photogrammetric parameters are available, as described earlier, it is possible to scale the 3D CAD model to correspond to the actual dimensions of the target object.
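A minimal sketch of the edge detection step, assuming OpenCV: the image is smoothed, Canny edge detection is applied, and contours of the edge map are approximated as candidate polygonal components of the target object. The synthetic test image and the Canny thresholds are assumptions for illustration; the Sobel, Laplace or differential detectors mentioned above could be substituted.

```python
import numpy as np
import cv2

# Synthetic stand-in for a photograph of the target object against a plain
# background: a dark rectangle (the "object") on a light background.
image = np.full((240, 320), 230, dtype=np.uint8)
cv2.rectangle(image, (100, 80), (220, 180), 60, -1)

# Optional pre-processing (the text mentions threshold or gradient filters).
blurred = cv2.GaussianBlur(image, (5, 5), 0)

# Canny edge detection; the 50/150 hysteresis thresholds would be tuned for
# the lighting and contrast of the real images.
edges = cv2.Canny(blurred, 50, 150)

# Contours of the edge map give candidate polygonal components of the target object.
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
polygons = [cv2.approxPolyDP(c, 2.0, True) for c in contours]
print(f"{len(polygons)} polygonal component(s) detected")
```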
  • In Figure 15, a schematic diagram illustrates an example of the edge detection approach described in Figure 8.
  • Images or photos 192 and 194 are taken of an object from different angles.
  • a polygonal component of the object is identified using edge detection.
  • the image of the polygonal component 200 is from one angle in image 192, and the image of the polygonal component 196 is from another angle in image 194.
  • the image rays 198 of the polygonal component 196 can be determined.
  • the image rays 198 are then projected outward.
  • the image rays 202 of the polygonal component 200 are determined and are projected outward from the position of the camera device that captured image 192.
  • the intersection of rays 202 and 198 define the three-dimensional coordinates or lines of the polygonal component of the target object. In this example, the intersection of the image rays defines the square surface p-q-r-s.
  • example computer executable instructions are provided for scaling the 3D CAD model to correspond with the dimensions of the target object, if necessary.
  • It is first determined whether the 3D CAD model of the target object is already scaled to the dimensions corresponding to the target object. If so, at block 124, no action is taken. If not, at block 126, it is then determined if a standard reference object has been laser scanned or imaged/photographed along with the target object. Standard reference objects refer to well known objects, and these objects have corresponding 3D CAD models 58 in the database 32. Therefore, if the standard reference object is a pop can, it is determined whether a pop can was imaged or scanned when imaging or scanning the target object, respectively. If so, then at block 128, the standard reference object is identified in the laser scan or the image, and the dimensions of the scanned or imaged standard reference object are determined. For example, if the target object was imaged, the image of the pop can has dimensions of 2.4 inches tall by 1.25 inches wide.
  • an accurate 3D CAD model 58 of the standard reference object is retrieved from the database 32. For example, the accurate 3D CAD model of the pop can is retrieved, the 3D CAD model having a height of 4.8125 inches and diameter of 2.5 inches.
  • the dimension of the laser scanned or photographed standard reference object is compared with the dimensions of the accurate 3D CAD model. Such comparison is used to generate or determine one or more scale factors between the obtained image or CAD model and the accurate 3D CAD model. It can be appreciated that there may be multiple scale factors, for example, when the scale factor along the horizontal axis may be different from the scale factor along the vertical axis.
  • the scale factor or scale factors are applied to the 3D model of the target object.
  • the process continues to block 136.
  • the computing device 20 obtains or retrieves the distance at which the target object was laser scanned or imaged.
  • A laser scan or photograph of a standard reference object is captured from the same distance as the distance obtained or retrieved in block 136. In other words, if, for example, the image of the target object was captured by a camera located 5 feet away, then a subsequent image of a standard reference object will be taken from the camera at 5 feet away. The dimensions of the imaged standard reference object are then determined from the image.
  • the process continues with blocks 130, 132, and 134, the details of which are described above.
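The scale-factor steps described above (retrieving the accurate reference model, comparing its dimensions with the measured reference object, and applying the factors to the target-object model) can be sketched as below, using the pop can example from the text. The helper names and the assumption that depth scales like height are illustrative only.

```python
def scale_factors(reference_measured, reference_actual):
    """Scale factors from a standard reference object seen in the scan or image.

    Both arguments are (width, height) pairs, e.g. the pop can measured at
    1.25 x 2.4 inches in the image versus its known 2.5 x 4.8125 inch
    dimensions from the 3D CAD model in the database. Separate horizontal
    and vertical factors are kept, as the text allows.
    """
    sx = reference_actual[0] / reference_measured[0]
    sy = reference_actual[1] / reference_measured[1]
    return sx, sy

def apply_scale(points, sx, sy, sz=None):
    """Apply the scale factors to the (x, y, z) points of the target-object model."""
    sz = sy if sz is None else sz   # assumption: depth scales like height
    return [(x * sx, y * sy, z * sz) for x, y, z in points]

sx, sy = scale_factors((1.25, 2.4), (2.5, 4.8125))
print(sx, sy)   # ~2.0 horizontal, ~2.0 vertical
```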
  • example computer executable instructions are provided for receiving or obtaining one or more search criteria to perform an object search for the target object, as per block 76.
  • A graphical user interface (GUI) may be used to receive the search criteria from a user.
  • search criteria may be predetermined or preset.
  • the search criteria include spatial criteria, including for example height tolerance, width tolerance, length tolerance, volume tolerance, area tolerance (including cross-sectional areas), and shell surface tolerance.
  • the shell surface of a 3D CAD model refers to the surface (e.g. the outer surface) of the 3D CAD model.
  • the shell surface tolerance refers to the distance away from the shell surface.
  • a shell surface tolerance of +10 inches increases the width by 20 inches, the height by 20 inches, etc.
  • the shell surface tolerance takes into account the dimensions of complex shapes which are not considered by length, width and height criteria.
  • Examples of complex shapes include vases with tapered or curved profiles and chairs with legs and a back.
  • Other search criteria include texture and color. For example, although a target object is blue, similar objects having the same shape but having a red color may be desired.
  • the search criteria can also include the object type, object grouping, object make, object model, etc. Such information is preferably available in the database 32 as shown by data tags 66, 68, 70.
  • the search criteria is further defined by narrowing the search to a portion or part of the target object.
  • The computing device 20 determines or receives the parts or portion of the target object to be searched and compared against the 3D CAD model objects in the database 32.
  • Parts of an object can include the upper portion (e.g. upper 30% of the target object), the lower portion (e.g. bottom 25% of the target object), the left portion, the right portion, etc.
  • In Figure 11, examples are provided for generating one or more adjusted 3D models, also called search shells, as per block 78.
  • the spatial criteria and tolerances are used to generate an adjusted 3D model. For example, if the height tolerance is +/- 10%, then an adjusted 3D CAD model (e.g. the first search shell) is created that is 10% taller than the 3D CAD model of the target object, and another 3D CAD model (e.g. the second search shell) is created that is 10% shorter than the 3D CAD model of the target object.
  • The search will be performed so that objects taller than the second search shell and shorter than the first search shell are considered as candidate similar objects to the target object.
  • The shell surface tolerance may be +/- 2 inches. Therefore, an adjusted 3D model may be generated by adding a 2 inch thick skin over the entire surface of the 3D CAD model of the target object. Another adjusted 3D model may be generated by subtracting a 2 inch thick skin from the entire surface of the 3D CAD model of the target object. Turning to Figure 16 briefly, the concepts of the search shells and the standard reference objects are further explained through example illustration.
  • The 3D CAD model 206 is shown of the target object, which is a vase. A scan or image of a pop can 204 was captured alongside the vase. Since the pop can 204 is a standard reference object, its dimensions are known.
  • the 3D CAD model of the vase 206 can then be scaled in proportion to the known dimensions of the 3D CAD model of the pop can 204.
  • a search shell 208 can be created by adding a one quarter inch thick skin onto the 3D CAD model of the vase 206. This search shell 208 can be used for a more refined search.
  • Another search shell 210 can be created by adding a one half inch thick skin onto the 3D CAD model of the vase 206. This search shell 210 can be used for a more coarse or approximate search. It can be appreciated that the larger the search shells are relative to the 3D CAD model of the target object, the more coarse or the more approximate the results become.
  • the search criteria may include pre-defined settings for the search shells, such as a coarse search, a more refined search, and a very exact search.
  • a coarse search may use a search shell that is 20% larger than the 3D CAD model of the target object.
  • a more refined search may use a search shell that is 10% larger than the 3D CAD model of the target object.
  • a more exact search may use a search shell that is 2% larger than the 3D CAD model of the target object.
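A minimal sketch of generating height-based search shells, under the assumption that the target-object model is available as an array of vertices; the function name and the placeholder model are illustrative. A skin-type (shell surface) tolerance would instead offset each vertex along its surface normal, which is not shown here.

```python
import numpy as np

def height_search_shells(vertices, height_tolerance=0.10):
    """Generate two 'search shells' from a target-object 3D model.

    `vertices` is an (N, 3) array of the model's vertices (x, y, z). With a
    +/- 10% height tolerance, one adjusted model is 10% taller and the other
    10% shorter than the original, mirroring the example in the text.
    """
    base_z = vertices[:, 2].min()
    taller, shorter = vertices.copy(), vertices.copy()
    taller[:, 2] = base_z + (vertices[:, 2] - base_z) * (1.0 + height_tolerance)
    shorter[:, 2] = base_z + (vertices[:, 2] - base_z) * (1.0 - height_tolerance)
    return taller, shorter

model = np.random.default_rng(1).random((500, 3))   # placeholder target-object model
upper_shell, lower_shell = height_search_shells(model, 0.10)
```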
  • example computer executable instructions are provided for searching for one or more similar objects in a 3D objects database, the 3D objects database storing 3D models of objects, by comparing the search criteria of the one or more adjusted 3D models with the same search criteria of the stored 3D models, as per block 80.
  • The search is narrowed to the specified object type or group. For example, it may have been specified that the target object belongs to the category of vases. Therefore, the search in the 3D objects database 32 will be narrowed down to vases using the data tags.
  • 3D CAD objects are retrieved from the database 32 if they are within any one of the height, width or length tolerances, if such tolerances have been specified.
  • the computing device 20 superimposes each 3D search shell with the 3D CAD objects retrieved from the 3D objects database.
  • the other parts that are considered irrelevant are eliminated or truncated from the 3D CAD objects retrieved from the database 32. For example, only the top portions of the 3D models of the vases from the database 32 are considered and, thus, the bottom portions of the vases are ignored.
  • The computing device 20 determines the percentage volume of the 3D CAD object that lies within the 3D search shell (called "overlap parameter 1").
  • It is then determined what percentage of the volume of the 3D search shell lies within the 3D CAD object (called "overlap parameter 2"). Both overlap parameter 1 and overlap parameter 2 are preferably considered. For example, it may be that the candidate 3D CAD object lies 100% within the search shell (e.g. overlap parameter 1), since it is much smaller than the search shell.
  • However, the percentage of the search shell that lies or overlaps with the volume of the 3D CAD object may only be 40% (e.g. overlap parameter 2). Therefore, it can be understood that the 3D CAD object is much smaller compared to the search shell and, thus, may be considered not similar to the target object.
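One way to approximate overlap parameters 1 and 2 is to voxelize both the search shell and the candidate 3D CAD object onto a common occupancy grid and compare the filled voxels, as in the sketch below. The voxelization and the example grids are assumptions of this sketch; the patent compares the 3D models directly.

```python
import numpy as np

def volume_overlap(shell_mask, candidate_mask):
    """Overlap parameters 1 and 2 from boolean occupancy (voxel) grids.

    `shell_mask` and `candidate_mask` are same-shaped boolean arrays marking
    which voxels lie inside the search shell and inside the candidate 3D CAD
    object, after both have been aligned on a common grid.
    """
    intersection = np.logical_and(shell_mask, candidate_mask).sum()
    overlap1 = intersection / candidate_mask.sum()   # % of candidate inside the shell
    overlap2 = intersection / shell_mask.sum()       # % of the shell filled by the candidate
    return overlap1, overlap2

# Example: a small candidate sitting entirely inside a much larger shell.
shell = np.zeros((20, 20, 20), bool); shell[2:18, 2:18, 2:18] = True
candidate = np.zeros_like(shell);     candidate[6:14, 6:14, 6:14] = True
print(volume_overlap(shell, candidate))   # (1.0, 0.125): fully inside, but much smaller
```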
  • Examples of overlapping search shells are shown in Figures 17, 18, 19 and 20.
  • Figure 17 shows a search shell 210 of a vase. It is being compared with a 3D CAD object of another vase 212, which resembles a cylinder.
  • Figure 18 shows the search shell 210 being compared with a 3D CAD object of another vase 214, which is thinner around the base and wider at the top.
  • Figure 19 shows the search shell 210 being compared with a 3D CAD object of another vase 216, which is wider around the midsection.
  • Figure 20 shows the search shell 210 being compared with a 3D CAD object of another vase 218, which resembles a laboratory flask.
  • vases 212, 214, and 218 overlap 100% in volume with the search shell 210.
  • The search shell 210 does not 100% overlap in volume with the vases 212, 214, and 218. It can also be appreciated that, if only the upper portion, such as the upper 30%, of the search shell 210 was being used as a search parameter, then vase 214 would be the most similar to the specified portion of the target object.
  • the computing device 20 determines the percentage area (e.g. surface area, cross sectional area, profile area, etc.) of the 3D CAD object that lies within the corresponding area of the search shell or target object (called “overlap parameter 3").
  • the computing device 20 determines the corresponding percentage area between the search shell or target object overlapping the 3D CAD object (called "overlap parameter 4").
  • In addition, the database can contain many pre-computed geometric properties about the 3D shells which it contains. Examples include a series of parallel horizontal cross sections whose areas quantify the changing shape of a 3D object as you move vertically. Similarly, a series of vertical cross sections in different vertical planes reveal and quantify the changing shape based on direction. These may be used to hasten the 3D search through the database in order to rapidly find similarly shaped 3D objects to compare to the target 3D object.
  • Figures 21, 22, 23, and 24 show cross-sectional areas corresponding to Figures 17, 18, 19, and 20, respectively.
  • the dotted circle 210' outlines the cross-sectional area of the 3D CAD model of the target object, and is shown in comparison with the cross-sections of the vases 212, 214, 216, and 218.
  • The cross-sectional area is taken at a lower height of the vases. Therefore, as shown in Figure 23, the vase 216 has the most similar cross-sectional area compared to the 3D CAD model of the target object, for the specified cross-sectional height.
  • At block 164, the alignment of the geometric centers and the rotational angles is adjusted or fine tuned.
  • the candidate 3D CAD object from the database 32 and the 3D CAD model of the target object are re-oriented and re-positioned relative to one another to determine if the percentage volume of overlap can be increased.
  • the rotations and alignment adjustments are made in small increments. In some cases, larger rotations may be used where there are no obvious "front", “back”, “top”, “bottom”, etc. orientation identifiers of the objects.
  • Blocks 154 to 164 are repeated (block 166). However, if after a certain number of re-alignment iterations have been performed the candidate 3D CAD object from the database 32 still does not meet the conditions of block 162, then the re-alignment operations are stopped.
  • example computer executable instructions are provided for returning the stored 3D CAD object (from the database 32) as a similar object to the target object.
  • the computing device 20 orders or organizes the similar 3D CAD objects according to degree of similarity to the target object.
  • the degree of similarity is measured, for example, by the percentage overlap in volume, or area, or both.
  • a high percentage of overlap means a high degree of similarity.
  • the similar 3D CAD objects are displayed on the display 18 in an order based on the degree of similarity.
  • the proposed systems and methods provide for 2D and 3D object searching.
  • both 2D images and 3D CAD objects can be returned as results when providing a 2D image or 3D CAD model as an input search parameter.
  • Such a search may investigate the data components of the 2D and 3D objects database 34.
  • the database 34 can include similar or the same 3D object components 220 (e.g. comprising 3D CAD models 60, 62, 64) that were described with respect to the 3D objects database 32.
  • the 2D and 3D objects database 34 includes 2D images of objects 222.
  • 2D images, or images, refer to photos, drawings, pictures, etc.
  • the 2D images 222 may include multiple images of the same object, although the images may be from different perspectives. For example, there is a 2D image of object A 224 from a first perspective, and another 2D image of object A 226 from a second perspective. Images 224 and 226 are both associated with the object A data tag 66.
  • the data tag 66 is also associated with the 3D CAD model of object A 60.
  • 2D image 228 of object B is associated with the object B data tag 68
  • 2D image of object C is associated with object C data tag 70.
  • the data tags can be used to narrow search results.
  • the database 34 may also include statistics of objects 232, which include where an object was made, how many of such objects were made, when an object was made, where the object is sold or is available. These statistics 232 associated with each object can be advantageously used to identify the probability that a search result matches the target object. Further details in this regard are discussed below.
  • example computer executable instructions are provided for searching for similar 2D and 3D objects from the 2D and 3D objects database 34. Such instructions may be performed by module 40.
  • the computing device 20 obtains or receives one or more of the following: a 2D image of a target object, multiple 2D images of the target object (e.g. image frames from a video), and a 3D model of the target object (e.g. by laser scanning). Based on the obtained or received data, the process continues with blocks 236, 238, 240, 242, 244. In parallel or in series to these blocks, a search for similar 3D objects is performed according to blocks 72, 74, 76, 78, and 80, the operations of which have been described above with respect to Figure 4.
  • the computing device 20 analyzes each image using edge detection, for example, to separate each image into individual sub-images of the target object. This involves identifying or outlining the perimeter of the target object.
  • Sub-images herein refer to a set of pixels within an image that represent a portion of, or an entire object. For example, there may be an image or photo of a plate of food, whereby shown on the plate is a cob of corn, a serving of rice, and a pork chop.
  • the extracted sub-images are the isolated image of a cob of corn, the isolated image of the serving of rice, and the isolated image of a pork chop.
  • The sub-images of the cob of corn, or rice, or pork chop are used to search for similar 2D and 3D objects. It can be appreciated that known edge-detection techniques for detecting and generating sub-images are applicable to the principles described herein.
  • the 2D sub-images are scaled to correspond with dimensions of the target object. Scaling of the 2D sub-images may be performed by using known photogrammetry techniques and devices. Another approach of scaling images or sub-images includes the use of a standard reference object, as described earlier with respect to Figure 9.
  • one or more 2D search criteria including spatial criteria, are received or obtained to perform an object search for the target object.
  • The spatial criteria, similar to the 3D searching, relate to spatial tolerances of the sub-images.
  • a user may provide spatial criteria, or the computing device 20 can automatically obtain spatial criteria.
  • the spatial criteria of a 2D image may include width and length tolerances, perimeter tolerances, area tolerances, etc.
  • one or more adjusted 2D images (also referred herein as 2D search stencils), are generated. Each adjusted 2D image has one or more of the spatial criteria adjusted based on the one or more defined tolerances.
  • a search is performed for one or more similar 2D objects in the 2D and 3D objects database 34.
  • the search may be conducted by comparing the spatial criteria of the one or more adjusted 2D images (e.g. the 2D search stencils) with same spatial criteria of the stored 2D images. For example, it may be investigated if a 2D image in the database 34 fits within the 2D search stencil.
  • the search may also be conducted based on attributes of the image pixels such as color and texture. If only part of an object of interest has been captured then the specifics will help to narrow down the search (e.g. left front car headlight, image captured from the front).
  • the 2D images returned from the 2D search and the 3D models returned from the 3D search are compared to refine the search results. This involves combining the returned 2D images with at least one of a 3D CAD model of the target object, or with a returned 3D object (a result from the 3D search). It is then determined if the 2D images or 3D models, or both, match one another when projected onto each other from different perspectives. If so, the returned results are rated as being close or very similar matches to the 2D image or 3D model initially obtained at block 234. At block 248, the results are then ordered or organized, for example, according to probability analysis.
  • example computer executable instructions are provided for implementing the operation of block 236, regarding edge detection.
  • The image of the target object is displayed to the user on a display device 18.
  • the computing device 20 receives a selection input identifying at least a point on the target object.
  • edge detection is performed for the target object.
  • an option is provided to either receive more selection input information by going back to block 252 or to proceed to block 253 to save the extracted sub-image.
  • the perimeter of the target object is outlined to isolate the sub-image of the target object. In other words, the identification of one or more target objects in an image may be assisted or confirmed, or both, by a user selection.
  • the computing device 20 and display device 18 may be an integrated device, such as a mobile hand-held device (e.g. including devices under the trade-marks iPad, iPhone, BlackBerry Torch, and BlackBerry PlayBook), and that user selection of the target object may be made through any known user interface devices (e.g. cursor, touch screen, scroll wheel, track pad, etc.).
  • example computer executable instructions are provided for implementing the operation of block 238, regarding the scaling of a 2D image, or sub-image.
  • The example of Figure 28 includes the use of a standard reference object.
  • At block 256, if it is determined that the 2D sub-image of the target object has already been scaled, then no action is taken (block 258).
  • Otherwise, it is determined if a standard reference object has been imaged or photographed along with the target object. If so, at block 262, the standard reference object is identified in the image. At block 264, an accurate 3D model or 2D image of the standard reference object is retrieved. It can be appreciated that there may be a database of both 2D images and 3D models of standard reference objects. At block 266, the dimensions of the photographed reference object are compared with the accurate 3D model or 2D image of the standard reference object. One or more scale factors are then generated between the photographed reference object and the accurate reference object. The scale factors may be along the horizontal axis, vertical axis, or both. At block 268, the scale factor or factors are applied to the 2D sub-image of the target object.
  • Figure 29 provides an example implementation of block 244, for carrying out a 2D search.
  • the search can be narrowed to the specified object type or group.
  • the computing device 20 retrieves 2D images of the objects within any one or more of the geometric tolerances such as height, width, or length, if specified, and within any one or more of the image attribute tolerances such as color or texture, if specified.
  • each 2D search stencil is superimposed with the 2D images of the objects retrieved from the 2D and 3D objects database 34.
  • the computing device 20 eliminates or truncates the irrelevant parts or portions of the retrieved 2D images of the objects (e.g. parts or portions ancillary to the specified certain part or portion of the target object). For example, it may be that the user would only like to search the upper portion of the image of the target object, and any object having such similarly shaped upper portion is of interest to the user.
  • a number of operations are performed (blocks 282, 284, 286, 288, 290, 292).
  • Overlap parameter 1' is computed by determining the percentage area of the retrieved 2D image that lies within the 2D image of the target object.
  • Overlap parameter 2' is computed by determining the percentage area of the 2D image of the target object that lies within the retrieved 2D image.
  • The perimeter parameter 1' is computed by determining the difference between the perimeters of the 2D image of the target object and the retrieved 2D image.
  • the parameters are compared against specified ranges to determine if the retrieved 2D object is sufficiently, or is not sufficiently similar to the image of the target object.
  • If overlap parameter 1' is within a first specified range, or if overlap parameter 2' is within a second specified range, or if perimeter parameter 1' is within a third specified range (that is, if any of the above conditions are true), then the retrieved 2D object is returned as being a similar object to the target object.
  • the alignments of geometric centers and rotational angles of the images may be fine tuned (e.g. by making slight changes) to increase the percentage area of overlap, if possible. In some cases, larger rotations may be used where there is no obvious front or back, or top or bottom, or left or right.
  • Blocks 282 to 290 are repeated accordingly, as per block 292. Re-alignment or fine-tuning is stopped after a certain number of re-alignment iterations have been performed.
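The 2D comparison above can be sketched with boolean pixel masks of the superimposed images: overlap parameters 1' and 2' become area ratios, and the perimeter parameter becomes a difference of boundary-pixel counts. The mask representation and the simple 4-connected perimeter measure are assumptions of this sketch, not the patent's exact computation.

```python
import numpy as np

def perimeter(mask):
    """Count boundary pixels of a mask (4-connected) as a simple perimeter measure."""
    padded = np.pad(mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    return int((mask & ~interior).sum())

def overlap_parameters_2d(target_mask, retrieved_mask):
    """2D analogues of the overlap and perimeter parameters described above.

    Both inputs are boolean masks of the same shape: the scaled sub-image of
    the target object and the retrieved database image, already superimposed
    (alignment itself is outside this sketch).
    """
    inter = np.logical_and(target_mask, retrieved_mask).sum()
    overlap1 = inter / retrieved_mask.sum()   # % of retrieved image inside the target
    overlap2 = inter / target_mask.sum()      # % of the target inside the retrieved image
    perimeter_diff = abs(perimeter(target_mask) - perimeter(retrieved_mask))
    return overlap1, overlap2, perimeter_diff

# Example: a slightly smaller retrieved object inside the target outline.
target = np.zeros((50, 50), bool);    target[10:40, 10:40] = True
retrieved = np.zeros((50, 50), bool); retrieved[12:38, 12:38] = True
print(overlap_parameters_2d(target, retrieved))
```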
  • example computer executable instructions are provided for implementing probability analysis according to block 248, as described in Figure 26.
  • the computing device 20 orders or organizes the similar 3D CAD objects according to degree of similarity to the target object.
  • the degree of similarity can be measured according to percentage of overlap, color similarity, shape similarity, etc.
  • the computing device 20 retrieves information related to the target object.
  • The information includes: the make of the target object and the model of the target object; and when or where, or both, the 2D image or 3D laser scan of the target object was taken.
  • the computing device 20 compares the information of the target object with statistical data of the similar 2D or 3D object, retrieved from the database 34.
  • statistical data from data storage 232 can reveal where the object was made, how many of such objects were made, when the object was made, and where it is sold or is available.
  • For example, the object returned from the database 34, according to the statistics, is made and sold exclusively in the United States.
  • However, the image of the target object was captured in India. Therefore, there is a low probability that the identified object from the database is a correct match for the target object, despite the degree of similarity.
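A minimal sketch of how the statistics 232 might down-weight a geometric match, following the US/India example above; the regions_sold field name and the discount factor are assumptions for illustration.

```python
def adjust_probability(similarity, target_region, candidate_stats):
    """Weight the geometric similarity by the object's availability statistics.

    `candidate_stats` is assumed to hold a 'regions_sold' list drawn from the
    statistics store 232; if the target image was captured outside every
    region where the candidate object is made or sold, the match probability
    is heavily discounted, as in the US/India example.
    """
    if target_region in candidate_stats.get("regions_sold", []):
        return similarity
    return similarity * 0.1   # assumed discount factor for an unlikely region

print(adjust_probability(0.9, "India", {"regions_sold": ["United States"]}))  # 0.09
```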
  • Turning to Figures 31, 32 and 33, a schematic diagram shows an example of performing a 2D and 3D search, whereby the results from both searches can be combined to refine the results.
  • A 2D image or photograph 300 includes the images of a car 302 and a stop sign 303. The target object in this example is the car 302, and the stop sign 303 is a standard reference object that can be used for scaling the image.
  • a user selects the image of the car 302 within the image 300. Based on the selection, the sub-image of the car 302 is extracted and isolated. The sub-image of the car 302 is then scaled using a scale factor determined between the dimensions of the image of stop sign 303 and the known actual dimensions of the stop sign. The scaled sub-image of the car 304 is then used to generate a 2D search stencil.
  • a spatial criteria GUI 306 is presented to the user to obtain length tolerances 308, height tolerances 310, etc.
  • the outline or perimeter of the target object is expanded or contracted, based on the tolerances, to create the search stencil 312. It can be seen in Figure 31 that the search stencil 312 is taller and longer than the scaled sub-image of the car 304 (a sketch of the scaling and stencil computation follows the search description below).
  • the search stencil 312 is then used to search for similar 2D images of objects.
  • in Figure 32, such a search is shown by the search stencil 312 being compared with the images of three other cars 314, 316, 318 that are retrieved from the 2D and 3D objects database 34. It can be appreciated that the image of car 314 and the search stencil 312 most closely overlap one another.
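A minimal sketch of the scaling and stencil steps used in this 2D search is given below. The stop sign dimensions, pixel measurements and tolerance values are hypothetical; only the general approach (derive a scale factor from the standard reference object, then widen the target dimensions by the user's tolerances) follows the description above.

```python
def scale_factor(ref_pixel_size: float, ref_actual_size: float) -> float:
    """Actual units per image pixel, derived from the standard reference object."""
    return ref_actual_size / ref_pixel_size

def stencil_ranges(length: float, height: float,
                   length_tol: float, height_tol: float):
    """Widen the scaled target dimensions by the user-supplied tolerances to get
    the acceptance ranges represented by the search stencil."""
    return ((length - length_tol, length + length_tol),
            (height - height_tol, height + height_tol))

# Hypothetical numbers: a stop sign face is roughly 0.75 m across and spans 150 px.
metres_per_px = scale_factor(150.0, 0.75)
car_length = 620 * metres_per_px      # scaled sub-image length of the car 304
car_height = 290 * metres_per_px      # scaled sub-image height of the car 304
length_range, height_range = stencil_ranges(car_length, car_height, 0.2, 0.1)
```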
  • a 3D CAD model of the target object is also generated from the initially obtained 2D images or 3D laser scanning.
  • the 3D CAD model of the target object 320 is combined with 2D images of the target object, for example, by projecting the 2D images onto the 3D CAD model. This colorizes and textures the 3D CAD model of the target object 320.
  • where a portion of the 3D CAD model is not covered, or "wall-papered", by a projection of a 2D image, estimation, inference, and interpolation methods can be used to determine the coloring and texturing of the uncovered surfaces. For example, if an image of a car 302 shows only one side, it can be inferred that the opposite side of the car has the same colors and textures.
  • one or more 2D images of the 3D CAD model can be generated from different perspectives.
  • the 2D images of the different perspectives include a rear view perspective image 330, a top-down front and side view perspective image 332, and a top-down front view perspective image 334. It can be fully appreciated that by combining the 2D image and 3D CAD model (either as initially obtained, or retrieved from the database 34), this advantageously gives the effect of being able to generate different 2D images of perspective views of the target object.
  • the different perspective views are different than the perspective view of the initial image 300, and thus the different perspective views can be used to broaden the 2D search for images of objects (related to the different perspectives).
  • the 2D images of the different perspective views are used to find other images of similar objects in the 2D and 3D objects database 34. Therefore, based on the perspective image 330, the image 338 of a similar car is returned as a result. Similarly, based on the perspective image 334, the image 336 of the car is returned as a result.
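Generating new perspective images of the target object can be approximated by rotating the 3D CAD model and projecting it onto an image plane. The sketch below uses a plain orthographic projection of the model's vertices as a stand-in for rendering the coloured and textured model; the vertex array and view angles are assumptions for illustration.

```python
import numpy as np

def rotation_y(angle: float) -> np.ndarray:
    """Rotation matrix about the vertical (Y) axis."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def silhouette_from_view(vertices: np.ndarray, angle: float,
                         image_size=(256, 256)) -> np.ndarray:
    """Rotate the model's vertices and project them orthographically onto the
    XY plane to get a coarse silhouette image from that viewpoint."""
    pts = vertices @ rotation_y(angle).T       # rotated copy of the model
    xy = pts[:, :2]                            # drop depth (orthographic view)
    span = np.maximum(xy.max(axis=0) - xy.min(axis=0), 1e-9)
    px = ((xy - xy.min(axis=0)) / span * (np.array(image_size) - 1)).astype(int)
    img = np.zeros(image_size, dtype=bool)
    img[px[:, 1], px[:, 0]] = True             # mark projected vertex pixels
    return img

# Placeholder vertices standing in for the textured 3D CAD model of the target.
model_vertices = np.random.rand(500, 3)
rear, front, three_quarter = (silhouette_from_view(model_vertices, a)
                              for a in (np.pi, 0.0, np.pi / 4))
```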
  • systems and methods may be provided for identifying an image or 3D CAD model of an unknown object, such as an irregularly shaped object.
  • attributes or characteristics of the object can be identified.
  • the input images could be: a photograph, taken from a commonly known camera such as an iPhone, taken at an angle close to the horizontal, from approximately 20 feet away from a pile of building sand, beside which is a builder's shovel of known length which will act as a scaling object (e.g. standard reference object).
  • although the pile of building sand may have an irregular shape, geometric comparisons and scaling of the 2D images against database 2D images and 3D objects of several known quantities of building sand can determine matches ranging from exact fits to rough approximations of characteristics such as area, volume, weight and cost, with estimated error.
  • the input images could be: a photograph, taken from a commonly known camera such as an iPhone, taken at a downward angle of 45 degrees from the horizontal, from approximately 2 feet away from a plate of food, beside which is a BIC pen which will act as a scaling object (e.g. standard reference object).
  • this known information could be used in the search to help retrieve relevant data from the 2D and 3D database.
  • although some objects may be irregularly shaped servings of food, color comparisons can identify them as items on the menu, and geometric comparisons (e.g. in 2D and 3D) to known quantified servings can determine matches ranging from exact fits to rough approximations of characteristics such as volume, weight and calorie count, with estimated error. The application could further approximate calories consumed if an image of the leftover servings were provided.
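For food servings, one plausible realization is to combine a colour histogram comparison with an area ratio against known quantified servings. The following sketch is illustrative only; the histogram-intersection measure, the serving records and the calorie scaling are assumptions, not details taken from the disclosure.

```python
import numpy as np

def color_similarity(rgb_a: np.ndarray, rgb_b: np.ndarray, bins: int = 8) -> float:
    """Histogram intersection over the RGB values of two sub-images ((N, 3) arrays, 0-255)."""
    ha, _ = np.histogramdd(rgb_a, bins=bins, range=[(0, 255)] * 3)
    hb, _ = np.histogramdd(rgb_b, bins=bins, range=[(0, 255)] * 3)
    ha, hb = ha / ha.sum(), hb / hb.sum()
    return float(np.minimum(ha, hb).sum())

def rank_servings(target_rgb, target_area, known_servings):
    """known_servings: list of dicts with 'name', 'rgb', 'area', 'calories'.
    Returns servings ranked by colour similarity, with calories scaled by area ratio."""
    scored = []
    for s in known_servings:
        sim = color_similarity(target_rgb, s["rgb"])
        est_calories = s["calories"] * (target_area / s["area"])
        scored.append((sim, s["name"], est_calories))
    return sorted(scored, reverse=True)
```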
  • Figure 34 provides example components of the 2D and 3D objects database 34, whereby 2D images or 3D CAD models of objects in the database 34, or both, may be associated with various attributes stored in the attributes database 36.
  • the 2D images 222 in the database 34 include: a 2D image of object A from a first perspective, which may be of a given version marked 'v1' (340); a 2D image of object A from a second perspective, of a second version marked 'v2' (342); a 2D image of object B of a first version (344); a 2D image of object C of a first version (346); and a 2D image of object C of a second version (348).
  • the different versions of the same object can be due to the different amounts or volumes of the same object.
  • a pork chop is an irregularly shaped object, and some pork chops may be larger or smaller than others, although they are considered the same object.
  • the 3D CAD models 220 include: a 3D CAD model of the first version of object A (350); a 3D CAD model of the second version of object A (352); a 3D CAD model of the third version of object A (354); a 3D CAD model of the first version of object B (356).
  • Images and 3D CAD models of object A are related to object A data tags 66.
  • Images and 3D CAD models of object B are related to object B data tags 68.
  • images and 3D CAD models of object C are related to object C data tags 70. It can be seen that for an object, there may be different images and 3D CAD models, each having different versions and perspectives.
  • these attributes include the portion or serving (e.g. for food), the weight 360, the volume 362, the cost 364, etc.
  • the values associated with the attributes may be arranged in a table format, correlating the images or 3D CAD models, or both with the attributes.
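The correlation between object versions, their 2D images and 3D CAD models, and their attributes could be represented with a simple record structure such as the hypothetical one below; the field names and values are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class ObjectVersion:
    """One version (e.g. serving size) of an object in the 2D and 3D objects database."""
    object_id: str                                       # e.g. "object_A"
    version: int                                         # version 1, 2, 3 ...
    image_ids: list = field(default_factory=list)        # 2D images, by perspective
    model_id: str = ""                                   # associated 3D CAD model, if any
    attributes: dict = field(default_factory=dict)       # weight, volume, cost, serving ...

catalog = [
    ObjectVersion("object_A", 1, ["A_v1_front"], "A_v1_model",
                  {"weight_kg": 0.2, "volume_l": 0.25, "cost": 3.50}),
    ObjectVersion("object_A", 2, ["A_v2_side"], "A_v2_model",
                  {"weight_kg": 0.3, "volume_l": 0.38, "cost": 5.00}),
]
```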
  • in Figure 35, example computer executable instructions are provided for identifying an unknown or irregular object through comparison with known objects in the 2D and 3D objects database 34, and for estimating attributes associated with the unknown or irregular object.
  • the operations of Figure 35 may be implemented by module 42. It can be appreciated that many of the operations may be similar to those described earlier with respect to Figure 26, and thus, to be concise, the reference numerals for such operations are repeated below.
  • at block 234, an image or images, or a 3D model, is obtained.
  • blocks 72, 74, 76, 78 and 80 are then performed to search for similar 3D objects in the 2D and 3D objects database 34.
  • a 2D search is performed.
  • the computing device 20 analyzes each image using edge detection to separate each object image into individual sub-images of the target object, for example, by identifying or outlining the perimeter of the target object.
  • the 2D sub-images are scaled to correspond with dimensions of the target object.
  • one or more 2D spatial criteria are obtained to perform an object search for the target object.
  • one or more 2D search stencils are created based on the spatial tolerances.
  • a 2D search is conducted.
  • the results are combined to further refine the results.
  • the search results of the 2D and 3D search are used to identify the unknown target object.
  • the identity of the similar objects as returned through images or 3D CAD models from the search in the database 34, are used to infer the identity of the unknown target object. Therefore, the unknown target object assumes the identity of the similar object or objects.
  • the computing device 20 determines a related attribute of one or more of the target objects (e.g. servings, portions, costs, weight, volume, number of pieces, etc.).
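One way the identity and attributes of the unknown target object could be inferred from the combined search results is a vote among the top-ranked matches, with the winning object's attributes carried over. The sketch below is an assumption about how this inference might be implemented; the result format and the choice of the top five matches are illustrative.

```python
from collections import Counter

def infer_identity(search_results):
    """search_results: (similarity, object_name, attributes) tuples from the
    combined 2D and 3D search. The unknown target object assumes the most
    common identity among the top matches and inherits that object's attributes."""
    top = sorted(search_results, key=lambda r: r[0], reverse=True)[:5]
    winner, _ = Counter(name for _, name, _ in top).most_common(1)[0]
    attributes = next(attrs for _, name, attrs in top if name == winner)
    return winner, attributes

identity, attrs = infer_identity([
    (0.91, "scrambled eggs", {"calories": 170, "serving_g": 120}),
    (0.88, "scrambled eggs", {"calories": 210, "serving_g": 150}),
    (0.74, "mashed potato", {"calories": 110, "serving_g": 100}),
])  # identity == "scrambled eggs"
```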
  • the 3D search process or 2D search process continually updates the 3D database 32 or 2D and 3D database 34.
  • the 3D models in the 3D and 2D spatial data object and image database 34 are continuously improved and updated by adding the target models, especially those of higher quality.
  • the following operations are performed.
  • the target object 3D model is added to either the 3D objects database 32 or the 2D images and 3D objects database 34, or both.
  • the target object 3D model may be added as either an additional 3D model or as a replacement 3D model.
  • example computer executable instructions are provided for isolating a sub-image from the initially obtained image, as per block 237.
  • edge detection is applied to identify one or more object images (e.g. sub-images) in the obtained images.
  • for each object image, its attributes (e.g. texture, color, area, perimeter, height, etc.) are determined.
  • the computing device 20 retrieves or obtains the object type, category, or grouping if available. Such type, category, or grouping is used to further help identify and characterize the object images.
  • if the sub-image is of a food product, for example, in the trade-marked Big Breakfast meal at McDonald's, and it is known that the Big Breakfast meal includes sausage, scrambled eggs and English muffins, then based on, for example, the yellow color of the imaged food product, it can be identified that the imaged food product is a serving of scrambled eggs.
  • a search is performed for one or more similar 2D objects in the 2D and 3D objects database 34, which stores 2D images of objects.
  • the searching process includes comparing the spatial criteria of the one or more adjusted 2D images with the same spatial criteria of the stored 2D images.
  • in Figure 37, an example of retrieving 2D spatial criteria is provided, as per block 241.
  • any one or more of the following are obtained in relation to the sub-image: height tolerance; width tolerance; perimeter length tolerance; area tolerance; and texture or color tolerance, or both.
  • Figures 38, 39, and 40 show different approaches for implementing the operation of combining the 2D and 3D search results, as per block 247.
  • a set of example computer executable instructions 378 are provided for combining a 2D image or sub-image(s) of the target object 380 and a candidate 3D model 382 retrieved from the 2D and 3D database 34.
  • the 2D sub-image 380 and the retrieved 3D CAD model 382 are inputs.
  • the 3D CAD model 382 is rotated and shifted until its 2D projection is closely aligned with the 2D sub-image 380 when projected onto its plane.
  • the 3D CAD model 382 is then mathematically projected onto the 2D plane of the object image to create a 2D projection of the 3D CAD model.
  • the 2D sub-image 380 can be mathematically projected onto the surface of the 3D CAD model 382, thereby creating a 3D projection of the 2D image.
  • at least one of the following is determined: if there is sufficient similarity between the 2D projection (of the 3D CAD model) and the sub-image; and if there is sufficient similarity between the 3D projection (of the sub-image) and the 3D CAD model.
  • Such similarity can be measured by determining if the overlapping area is above a certain percentage, if the difference in the perimeters is below a certain value, or if the texture or color matches.
  • the computing device 20 confirms that the estimated or retrieved 3D CAD model 382 is sufficiently similar to the sub- image 380 of the object.
  • the inputs to the set of example computer executable instructions 392 include 2D images 394 retrieved from the 2D and 3D database 34, the image 394 being of a candidate object that is similar to the target object.
  • the 2D image 394 is a search result.
  • the inputs also include a known 3D CAD model 396 of the target object, that was initially obtained either from images, from an input 3D CAD model, or from a 3D CAD model created by laser scanning.
  • the 3D CAD model 396 is rotated and shifted until it is closely aligned with the retrieved 2D images when projected onto their planes.
  • the 3D CAD model 396 is projected onto the 2D plane of the image 394 retrieved from the 2D and 3D database 34 to create a 2D projection, or the 2D image 394 is projected onto the 3D surface of the 3D CAD model 396, or both. Then, it is determined if there is sufficient similarity between the 2D projection and the 2D image 394, or the 3D projection and the 3D CAD model 396.
  • the computing device 20 confirms that the retrieved 2D image 394 is sufficiently similar to the 3D CAD model 396.
  • the inputs include either the 2D images of the target object or 2D images retrieved from the 2D and 3D database 34 of a candidate object similar to the target object (e.g. a search result from database 34).
  • the 2D images are referenced 402.
  • the inputs also include a 3D CAD model 404 which is either a candidate 3D model retrieved from the 2D and 3D database 34 or a 3D model of the target object, which has been obtained either through estimation using 2D images, directly as an input 3D CAD model, or as a 3D CAD model derived from a point cloud (e.g. generated by laser scanning).
  • the computing device 20 overlays the 2D images 402 over the 3D CAD model 404 to color or texture the 3D CAD model 404.
  • new or different 2D snapshots or images are taken of the coloured or textured 3D CAD model 404 from different perspectives and view points. This generates 2D model images.
  • the generated 2D model images are used to search for other similar 2D images in the database 34. It can be appreciated that this advantageously allows for other 2D images to be found, which may be of different perspectives than the originally provided images at block 234. In other words, combining the 3D CAD model allows different or new 2D images to be generated.
  • the generated 2D model images are also used to search for other 3D model objects in the database 34. This may be done by comparing 2D projections of 3D model objects in the database 34 with the generated 2D model images.
  • in Figure 41, an example schematic diagram illustrates how a 3D CAD model can be projected onto a 2D image plane, and how a 2D image can be projected onto a 3D CAD model.
  • a 3D CAD model 414 of a chicken drumstick is shown.
  • an image 416 or image plane of a plate holding food is shown.
  • the food includes a sub-image 418 of a drumstick from a side perspective.
  • after the 3D CAD model 414 of the drumstick has been shifted and rotated, it is projected onto the 2D image plane 416 to determine how closely its 2D projection resembles the sub-image 418 of the chicken drumstick.
  • the 2D sub-image 418 of the chicken drumstick is projected onto the surface of the 3D CAD model 414 of the chicken drumstick. Based on this "3D projection", it is determined how closely the sub-image 418 and the 3D CAD model 414 match. As described above, projecting the sub-image 418 onto the 3D CAD model 414 can also be used to color and texture the 3D CAD model 414 (e.g. with the color and texture of a chicken drumstick).
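The projection of the (already shifted and rotated) 3D CAD model onto the 2D image plane can be expressed with a standard pinhole model; projecting the sub-image onto the model surface follows the same geometry run backwards along the viewing rays. The focal length and principal point below are assumptions for illustration.

```python
import numpy as np

def project_to_image_plane(vertices: np.ndarray, focal_px: float,
                           principal_point=(0.0, 0.0)) -> np.ndarray:
    """Pinhole projection of 3D CAD model vertices (an (N, 3) array already
    expressed in the camera frame, with Z pointing away from the camera) onto
    the 2D image plane, giving pixel coordinates as an (N, 2) array."""
    x, y, z = vertices[:, 0], vertices[:, 1], vertices[:, 2]
    u = focal_px * x / z + principal_point[0]
    v = focal_px * y / z + principal_point[1]
    return np.stack([u, v], axis=1)

# The resulting 2D projection of model 414 can then be compared with sub-image 418
# using the overlap and perimeter measures described earlier.
```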
  • example computer executable instructions are provided for implementing block 366.
  • attributes of the target object may be inferred or estimated based on the attributes of the similar objects.
  • the computing device 20 determines a proportionality relationship between the target object's image or 3D model and the identified 2D image or 3D CAD model of the similar object. For example, it is identified that an image in the database of a symmetric pile of fine sand is similar to the image of a target object pile of sand. However, the image of the target object pile of sand is 1.3 times the size (e.g. in both height and in width) of the image in the database of a pile of sand. This proportionality relationship or scaling factor will be used to calculate the adjusted attributes of the target object.
  • attributes of the identified image or 3D CAD model from the database 34 are obtained. Such attributes describe the object's characteristics as associated with the image or 3D CAD model. Attributes can be retrieved from the attributes database 36, and can include weight, portions, servings, cost, volume, etc. It can be appreciated that various types of attributes can be associated with an object. For example, for the image of the pile of sand, it may be known that the pile of sand has a volume of 10 cubic meters, a cost of $100, and a weight of 16,000 kg.
  • attributes of the identified image or 3D CAD model are applied to the target image.
  • the attribute values may be scaled based on the computed proportionality relationship. For example, if the proportionality relationship or linear scaling factor is 1.3, then the estimated attributes of the image of the target object, which is identified as sand, including volume, cost and weight, could be proportionally multiplied by 1.3 to the power of 3.
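Following the sand example, a minimal sketch of the attribute scaling might look as follows, where linear attributes scale with the factor itself and volume-like attributes (volume, weight, cost of material) scale with its cube; which attributes are treated as volume-like is an assumption for illustration.

```python
def scale_attributes(db_attributes: dict, linear_factor: float) -> dict:
    """Scale a matched database object's attributes to the target object.
    Volume-like attributes grow with the cube of the linear scale factor."""
    volume_like = {"volume_m3", "weight_kg", "cost"}
    return {k: v * linear_factor ** 3 if k in volume_like else v * linear_factor
            for k, v in db_attributes.items()}

# Database pile of sand: 10 m^3, $100, 16,000 kg; the target pile is 1.3x larger linearly.
estimated = scale_attributes(
    {"volume_m3": 10.0, "cost": 100.0, "weight_kg": 16000.0}, 1.3)
# volume ~ 22 m^3, cost ~ $220, weight ~ 35,152 kg
```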
  • the target object of interest is automatically identified in an image, or is semi-automatically identified in the image, or is manually identified in the image.
  • an operator or user can: point at the centre of the target object; draw a bounding rectangle or bounding circle around the target object; draw a bounding polygon perimeter around the target object; or include or exclude areas using one or more of the above techniques.
  • the operator may be interested, not in a window, but in the frame around the window.
  • the operator could include the frame but exclude the window within the frame.
  • the operator may be interested in the wing mirror of a car and not the whole car itself.
  • an automatic search can be conducted for ideal fitting shoes.
  • a pair of well worn and very likely disfigured shoes is laser scanned on the inside and on the outside to create 3D point surfaces from which are derived TIN networks. From these TIN networks, 3D CAD surface models are derived of the inside and the outside surfaces of the shoes.
  • a 2D and 3D object database 34 is populated by the following: 3D models of the inside and outside surfaces of new shoes of many and various known sizes; 3D models of the inside and outside surfaces of old worn shoes of many and various known sizes; 3D models of the feet or space immediately enclosing the feet of the wearer of the worn shoes; 2D images of the above where possible; and information about the wearers of the worn shoes. Such information includes, for example, the wearer's height, weight, age, occupation and medical problems like suffering from back problems or being overweight, etc.
  • geometric properties about the shoes and feet in the database are pre-calculated from the 3D models and stored to assist in searches and estimations. Such geometric properties can include, for example, foot width and arch height in several places along the length of the shoe. Notably, the 3D models themselves have many geometric properties inherently contained in their 3D shape.
  • similarly, 3D geometric properties are calculated for the inside and the outside 3D models of the laser scanned or photographed pair of worn shoes.
  • Searches in the database 34 are performed based on these geometric properties by comparing them to the pre-calculated geometric properties in the database 34.
  • Such a system and database may provide a mechanism for statistical research and the development of innovative ways to offer recommendations and diagnoses. For example: a certain type of uneven wear on the shoe heels or soles may suggest by correlation that the wearer may likely have hip or back problems and recommendations for several types of a new shoe of a particular size and having a healthier fit can be suggested.
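A search over the pre-calculated geometric properties could be as simple as a per-station tolerance check between the worn shoe's inside geometry and each candidate shoe in the database. The property names, units and tolerance below are hypothetical.

```python
def fits(candidate: dict, worn: dict, tol_mm: float = 3.0) -> bool:
    """candidate, worn: pre-calculated geometric properties sampled at the same
    stations along the shoe, e.g. {"width_mm": [95, 98, 88], "arch_mm": [12, 18, 9]}.
    True if every station agrees within the tolerance."""
    return all(abs(c - w) <= tol_mm
               for key in ("width_mm", "arch_mm")
               for c, w in zip(candidate[key], worn[key]))

def search_shoes(database: list, worn_inside_geometry: dict) -> list:
    """Return new-shoe records whose inside-surface geometry fits the geometry
    derived from the wearer's worn shoes."""
    return [shoe for shoe in database
            if fits(shoe["inside_geometry"], worn_inside_geometry)]
```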
  • an iterative 3D search is performed over a period of time.
  • the iterative 3D searching process matches a target object with candidate objects in the 2D images and 3D objects database 34 by conducting several iterations or searches over time as more and more information becomes available, and until a more closely matching candidate is found. This advantageously helps the searching process return a more accurate or similar result to the target object.
  • a search is performed using a single picture of a couch taken using a mobile device, such as a device under the trade-mark iPhone.
  • the result of the 3D and 2D search is, for example, one-hundred matching similar candidates.
  • a second picture is taken from a different angle and another search is performed on only the one-hundred matching candidates.
  • the result of the second iteration is fifteen remaining candidate objects.
  • a third picture is taken from yet another angle and another search (e.g. the third iteration) is performed on only the fifteen remaining candidates rendering only one candidate as the best match.
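The iterative narrowing described above reduces, with each new photograph, the candidate set that the next search is run against. A minimal sketch, assuming a `search_similar` function that stands in for the 2D and 3D comparison described earlier:

```python
def iterative_search(photos, database, search_similar, stop_at: int = 1):
    """photos: images of the target object taken from different angles over time.
    search_similar(photo, candidates) -> the subset of candidates matching that photo.
    Each iteration searches only the candidates that survived the previous one."""
    candidates = database
    for photo in photos:
        candidates = search_similar(photo, candidates)
        if len(candidates) <= stop_at:
            break
    return candidates

# e.g. 100 candidates after the first photo, 15 after the second, 1 after the third.
```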
  • Another suggested process that uses the above principles of combined 2D and 3D searching is described according to the below stages to, for example, search for a couch.
  • Stage 1: Use a mobile device including a camera, for example those provided under the trade-marks iPhone or BlackBerry, to take a photograph of the front of a couch from about ten feet away and five feet above the floor.
  • the computing device 20 described above may be the mobile device.
  • the computing device 20 may be separate from, and in communication with, the mobile device.
  • the image is analyzed using edge detection to generate a sub-image of the couch.
  • a search is performed using the sub- image of the couch in the database 34.
  • if the database 34 does not contain the camera parameters of that model of the mobile device, then take a similar image of a large rectangular box of known measurements from the same height and distance away, and use edge detection to generate sub-images of the sides of the box. Using either the camera parameters or the box image, scale the photograph to correctly represent the couch measurements.
  • Stage 2: For each couch 3D model in the 3D database, the computing device 20 performs the following: retrieve the 3D couch model from the 3D database 34;
  • Stage 3: Repeat Stage 1 using the mobile device to take a photograph of the side of the couch from about ten feet away and five feet above the floor. Repeat Stage 2 for each of the remaining selected candidate models.
  • Stage 4: Repeat Stage 3 if necessary, to capture photographs from other angles around the couch to narrow down the list of candidates. This process could also be done using a video camera which steadily moves in a ten foot circle around the couch using multiple images to narrow down the candidate models until only one or a few candidate models remain. These are then presented as similar matches along with their estimated probabilities, in this case calculated purely on geometric similarity.
  • Stage 5: Any 2D images and 3D models collected or generated in the process can be added to the 2D and 3D database 34 with meta data about the object of interest.
  • the scaled video and photo images and sub images and information about the couch could be added together with associated image directions, distances and accuracies and camera information for use in future 2D and 3D searches.
  • the 3D models of the remaining matching candidates can be retrieved from the database as well as any associated images. These can present to the observer the possibilities of what the other side of the photographed object may look like, and probabilities can also be associated with the remaining candidates based on attribute information in the database. For example, a search using the photograph of a car headlight might render two matching types of car. Retrieving the 3D models from the database shows detailed information about the two possibilities of what the rest of the car looks like, as well as the associated probabilities based on the availability of the types of car.
  • the consecutive images of a video are considered.
  • the video progresses and more and more video images are taken or captured from different angles around the target object. Meanwhile searches in 2D and 3D are being performed in real time on selected video images, thereby reducing the number of similar candidates until only a single candidate remains. The identification of an exact match is then reported and the video data collection is automatically stopped.
  • any 3D objects created and their associated 2D images can be added to the 3D and 2D Spatial Data Object and Image Database 34 along with the associated accuracy and resolution. In this way an ever increasing library of objects is built up and added to with ever increasing accuracies over time.

Abstract

Systems and methods are provided for searching for objects in a 3D objects database, as well as in a 2D image and 3D objects database. In one aspect, a 3D object search includes obtaining a 3D model of a target object, generating a search shell from the 3D model, and comparing the search shell with other 3D models in the 3D objects database. Similarly shaped objects in the 3D objects database are returned as search results. In another aspect, a 2D and 3D object search includes obtaining a 2D image of a target object, and comparing the 2D image with 2D images and 3D models in the 2D image and 3D objects database. Images or models that are similarly shaped to the 2D images are returned as search results. Further, such search results can be used to identify an unknown target object.

Description

SYSTEM AND METHOD FOR OBJECT SEARCHING USING SPATIAL DATA
CROSS-REFERENCE TO RELATED APPLICATIONS:
[0001] The present application claims priority from United States Provisional Application No. 61/412,112 filed on November 10, 2010, the entire contents of which are hereby incorporated by reference.
TECHNICAL FIELD:
[0002] The following relates generally to searching for objects, or identifying objects, or both, using data representing spatial coordinates.
DESCRIPTION OF THE RELATED ART
[0003] In order to investigate an object or structure, it is known to interrogate the object or structure and collect data resulting from the interrogation. The nature of the interrogation will depend on the characteristics of the object or structure. The interrogation will typically be a scan by a beam of energy propagated under controlled conditions. The results of the scan are stored as a collection of data points, and the position of the data points in an arbitrary frame of reference is encoded as a set of spatial-coordinates. In this way, the relative positioning of the data points can be determined and the required information extracted from them.
[0004] Data having spatial coordinates may include data collected by electromagnetic sensors of remote sensing devices, which may be of either the active or the passive types. Non-limiting examples include LiDAR (Light Detection and Ranging), RADAR, SAR
(Synthetic-aperture RADAR), IFSAR (Interferometric Synthetic Aperture Radar) and Satellite Imagery. Other examples include various types of 3D scanners and may include sonar and ultrasound scanners.
[0005] Data having spatial coordinates may also include 2D images collected from camera or photographic devices.
[0006] LiDAR refers to a laser scanning process which is usually performed by a laser scanning device from the air, from a moving vehicle or from a stationary tripod. The process typically generates spatial data encoded with three dimensional spatial data coordinates having XYZ values and which together represent a virtual cloud of 3D point data in space or a "point cloud". Each data element or 3D point may also include an attribute of intensity, which is a measure of the level of reflectance at that spatial data coordinate, and often includes attributes of RGB, which are the red, green and blue color values associated with that spatial data coordinate. Other attributes such as first and last return and waveform data may also be associated with each spatial data coordinate. These attributes are useful both when extracting information from the point cloud data and for visualizing the point cloud data. It can be appreciated that data from other types of sensing devices may also have similar or other attributes.
[0007] The visualization of point cloud data, or spatial data in general, can reveal to the human eye a great deal of information about the various objects which have been scanned or imaged. Information can also be manually extracted from the point cloud data and represented in other forms such as 3D vector points, lines and polygons, or as 3D wire frames, shells and surfaces. These forms of data can then be input into many existing systems and workflows for use in many different industries including for example, engineering, architecture, construction and product design.
[0008] A common approach for extracting these types of information from 3D point cloud data or 2D images involves subjective manual pointing at points representing a particular feature within the point cloud data or the 2D image data either in a virtual 3D view or on 2D plans, cross sections and profiles. The collection of selected points is then used as a representation of an object. Some semi-automated software and CAD tools exist to streamline the manual process including snapping to improve pointing accuracy and spline fitting of curves and surfaces. Such a process is tedious and time consuming. Accordingly, methods and systems that better semi-automate and automate the extraction of these geometric features from the point cloud data are highly desirable.
[0009] Automation of the process is, however, difficult as it is necessary to recognize which data points form a certain type of object. For example, in an urban setting, some data points may represent a building, some data points may represent a tree, and some data points may represent the ground. These points coexist within the point cloud or image and their segregation is not trivial. Moreover, the identification of the object within the point cloud or image can be challenging as well.
[0010] From the above it can be understood that efficient and automated methods and systems for identifying and extracting features from 2D and 3D spatial coordinate data are highly desirable.
BRIEF DESCRIPTION OF THE DRAWINGS
[0011] Embodiments of the invention or inventions will now be described by way of example only with reference to the appended drawings wherein:
[0012] Figure 1 is a schematic diagram to illustrate an example of an aircraft and a ground vehicle using sensors to collect data points of a landscape.
[0013] Figure 2 is a block diagram of an example embodiment of a computing device and example software components.
[0014] Figure 3 is a block diagram showing example components of a 3D objects database.
[0015] Figure 4 is a flow diagram illustrating example computer executable instructions for searching for similar 3D objects.
[0016] Figure 5 is a flow diagram illustrating example computer executable instructions for obtaining a 3D model of an object through laser scanning.
[0017] Figure 6 is a flow diagram illustrating example computer executable instructions for obtaining a 3D model of an object through photogrammetry.
[0018] Figure 7 is a flow diagram illustrating example computer executable instructions for obtaining a 3D model of an object through silhouette imaging.
[0019] Figure 8 is a flow diagram illustrating example computer executable instructions for obtaining a 3D model of an object through edge detection.
[0020] Figure 9 is a flow diagram illustrating example computer executable instructions for scaling a 3D model to correspond with dimensions of a target object.
[0021] Figure 10 is a flow diagram illustrating example computer executable instructions for receiving or obtaining one or more spatial criteria to perform an object search of the target object.
[0022] Figure 11 is a flow diagram illustrating example computer executable instructions for generating one or more adjusted 3D models, or search shells.
[0023] Figure 12 is a flow diagram illustrating example computer executable instructions for searching for 3D objects by comparing the search shells with objects stored in the 3D objects database.
[0024] Figure 13 is a flow diagram illustrating example computer executable instructions for returning the results of the search.
[0025] Figure 14 is a schematic diagram for a camera device obtaining images of a target object from different angles and constructing an enclosed space from the images.
[0026] Figure 15 is a schematic diagram for constructing points of an example 3D object using 2D images.
[0027] Figure 16 is a schematic diagram illustrating search shells of a target object, the target object positioned beside a standard reference object.
[0028] Figure 17 is a schematic diagram illustrating the comparison of an example search shell and a 3D CAD model of another object.
[0029] Figure 18 is another schematic diagram illustrating the comparison of the example search shell and a 3D CAD model of yet another object.
[0030] Figure 19 is another schematic diagram illustrating the comparison of the example search shell and a 3D CAD model of yet another object.
[0031] Figure 20 is another schematic diagram illustrating the comparison of the example search shell and a 3D CAD model of yet another object.
[0032] Figure 21 is a schematic diagram illustrating the cross-section along A-A of Figure 17.
[0033] Figure 22 is a schematic diagram illustrating the cross-section along B-B of Figure 18.
[0034] Figure 23 is a schematic diagram illustrating the cross-section along C-C of Figure 19.
[0035] Figure 24 is a schematic diagram illustrating the cross-section along D-D of Figure 20.
[0036] Figure 25 is a block diagram showing example components of a 2D image and 3D objects database.
[0037] Figure 26 is a flow diagram illustrating example computer executable instructions for searching for 2D images and 3D CAD models of similar objects.
[0038] Figure 27 is a flow diagram illustrating example computer executable instructions for isolating a sub-image using edge detection.
[0039] Figure 28 is a flow diagram illustrating example computer executable instructions for scaling a 2D sub-image using a standard reference object.
[0040] Figure 29 is a flow diagram illustrating example computer executable instructions for searching for similar objects in the 2D images and 3D objects database using spatial criteria.
[0041] Figure 30 is a flow diagram illustrating example computer executable instructions for determining the probability that the search results are correct.
[0042] Figure 31, Figure 32 and Figure 33 are schematic diagrams illustrating example stages of searching for similar objects in the 2D images and 3D objects database.
[0043] Figure 34 is a block diagram showing example components of a 2D image and 3D objects database, as well as attributes associated with the objects.
[0044] Figure 35 is a flow diagram illustrating example computer executable instructions for identifying an irregular or unknown object through object matching in the 2D images and 3D objects database.
[0045] Figure 36 is a flow diagram illustrating example computer executable instructions for isolating and categorizing sub-images.
[0046] Figure 37 illustrates example computer executable instructions for receiving spatial criteria.
[0047] Figure 38 is a flow diagram illustrating example computer executable instructions for conducting a combined 2D and 3D search comparison.
[0048] Figure 39 is a flow diagram illustrating example computer executable instructions for conducting another example embodiment of a combined 2D and 3D search comparison.
[0049] Figure 40 is a flow diagram illustrating example computer executable instructions for conducting another example embodiment of a combined 2D and 3D search comparison.
[0050] Figure 41 is a schematic diagram illustrating the projection of a 3D CAD model onto a 2D plane, and the projection of a 2D image onto a 3D CAD model.
[0051] Figure 42 is a flow diagram illustrating example computer executable instructions for determining a related attribute of an identified target object.
DETAILED DESCRIPTION
[0052] It will be appreciated that for simplicity and clarity of illustration, where considered appropriate, reference numerals may be repeated among the figures to indicate
corresponding or analogous elements. In addition, numerous specific details are set forth in order to provide a thorough understanding of the embodiments described herein. However, it will be understood by those of ordinary skill in the art that the embodiments described herein may be practiced without these specific details. In other instances, well-known methods, procedures and components have not been described in detail so as not to obscure the embodiments described herein. Also, the description is not to be considered as limiting the scope of the embodiments described herein.
[0053] The proposed systems and methods extract various features from data having 2D and 3D spatial coordinates (e.g. images and 3D point clouds), and search for similar objects in a database. In particular, it is appreciated that searching for objects or identifying objects can be difficult, for example, when based on text since text may not be sufficiently descriptive. It is recognized that searching for objects based on 2D images and 3D images can be more accurate and allow for similar-type objects to be identified. The objects to be searched are various and non-limiting examples include: cars, trains, telephones, vases, chairs, cups, clothing, shoes, cutlery, dishes, street signs, food items, staplers, pens, hair accessories, tables, lamps, bicycles, tools, etc. The search or identification is based on inputting or providing at least one of: a 2D image of an object of interest, a 3D model of an object of interest (e.g. from laser scanning or a 3D computer aided design (CAD) model), multiples thereof, and combinations thereof. Based on at least the inputted 2D image or 3D model, or both, of the object of interest, the proposed systems and methods search for similar or identical objects in a database containing 2D images, or 3D models, or both. The object of interest is herein referred to as a "target object", since it is the target or reference used to search for similar or identical objects.
[0054] As discussed above, the data may be collected from various types of sensors. A non-limiting example of such a sensor is the LiDAR system built by Ambercore Software Inc. and available under the trade-mark TITAN.
[0055] Turning to Figure 1 , data is collected using one or more sensors 10. The sensors 10, for example, may be mounted to an aircraft 2 or to a ground vehicle 12. The aircraft 2 may fly over a landscape 6 (e.g. an urban landscape, a suburban landscape, a rural or isolated landscape) while a sensor collects data points about the landscape 6. For example, if a LiDAR system is used, the LiDAR sensor 10 would emit lasers 4 and collect the laser reflection. Similar principles apply when an electromagnetic sensor 10 is mounted to a ground vehicle 12. For example, when the ground vehicle 12 drives through the landscape 6, a LiDAR system may emit lasers 8 to collect data. It can be readily understood that the collected data may be stored onto a memory device. Data points that have been collected from various sensors (e.g. airborne sensors, ground vehicle sensors, stationary sensors) can be merged together to form a point cloud.
[0056] Each of the collected data points is associated with respective spatial coordinates which may be in the form of three dimensional spatial data coordinates, such as XYZ Cartesian coordinates (or alternatively a radius and two angles representing Polar coordinates). Each of the data points also has numeric attributes indicative of a particular characteristic, such as intensity values, RGB values, first and last return values and waveform data, which may be used as part of the filtering process. In one example embodiment, the RGB values may be measured from an imaging camera and matched to a data point sharing the same coordinates.
[0057] The determination of the coordinates for each point is performed using known algorithms to combine location data, e.g. GPS data, of the sensor with the sensor readings to obtain a location of each point with an arbitrary frame of reference.
[0058] Although not shown, data of a target object may also be collected from LiDAR devices suitable for scanning smaller objects, such as terrestrial laser scanning, industrial laser scanning and handheld 3D laser scanning devices. Examples of terrestrial laser scanners and industrial laser scanners include those manufactured by companies such as RIEGL and LEICA. Examples of handheld 3D laser scanners include those manufactured by companies such as NIKON. Data of a target object may also be collected from camera or photographic devices. These include the camera devices on mobile or cellular phones, such as those provided by, for example, Research in Motion Limited and Apple Inc. In general, various 2D and 3D imaging devices are applicable to the principles described herein.
[0059] Turning to Figure 2, a computing device 20 includes a processor 22 and memory 24. The memory 24 communicates with the processor 22 to process data. It can be appreciated that various types of computer configurations (e.g. networked servers, standalone computers, mobile devices, cloud computing, etc.) are applicable to the principles described herein. The data having spatial coordinates 26 and various software 28 reside in the memory 24. The data having spatial coordinates 26 may refer to 3D data (e.g. 3D CAD models, laser scanned points, etc.). In general, although data having spatial coordinates includes photos, video data, and 2D images, for further clarity, these are simply referred to as 2D images 30. The 2D images 30 are also stored in memory 24.
[0060] Several databases 32, 34, 36 are also stored in memory 24. A 3D objects database 32 stores 3D CAD models of objects. These objects are named and may be associated with additional information. A 2D and 3D objects database 34 stores 2D images of objects and 3D CAD models of objects. For example, in the database 34, there may be associated with a certain object, such as a chair, 2D images of the chair and a 3D CAD model of the chair. There may also be attributes 36 associated with the objects in either, or both, of the databases 32, 34. It can be appreciated that the databases 32, 34, 36, the 2D images 30, the data having spatial coordinates 26, and the software 28 may all interact with each other.
[0061] An example approach for populating the databases 32, 34 with 3D CAD models of objects would be to import design drawings (e.g. 3D CAD models) of various objects. In another example approach, objects may be scanned by LiDAR, cameras, etc. from various angles to generate 3D CAD models of objects. It can be appreciated that there are various ways of generating or obtaining 3D CAD models to populate the databases 32, 34.
[0062] It can be appreciated that 3D models, also called 3D CAD models, herein refer to mathematical representations that are suitable for processing by computing devices. Non-limiting examples of 3D models include 3D block models, 3D wire frame models, 3D shell models, 3D solid models, etc.
[0063] A display device 18 may also be in communication with the processor 22 to display 2D or 3D images. It can be appreciated that the data 26, 30 may be processed according to various computer executable operations or instructions stored in the software 28. In this way, the features may be extracted from the data 26, 30, and similar or matching objects can be identified in the databases 32, 34, 36.
[0064] Continuing with Figure 2, the software 28 may include a number of different modules for searching for similar or matching objects in the databases 32, 34, 36. For example, a similar 3D object searching module 38 searches for 3D CAD models of objects that are similar to one or more 2D images of a target object and/or a 3D model of a target object. For example, by taking several photographs of a chair (e.g. the target object) from different angles, the photographs can be used to create a 3D enclosed volume of space which is then used to search for 3D CAD models of similar-looking chairs in the 3D objects database 32. Module 40 is for searching for similar 2D images and 3D objects (e.g. in database 34) using 2D images and/or 3D CAD models of a target object. For example, based on a photograph of a chair (e.g. the target object), a search can be performed using module 40 to return 2D images and their associated 3D CAD models of similar-looking chairs and then validate these 2D images and 3D CAD models against the original photograph. Module 42 is for analysing irregular objects using any one of databases 32, 34 in combination with database 36, which specifies attributes associated with the objects. For example, based on a photograph and/or laser scan of a plate or dish of food, the unknown food objects (e.g. chicken leg, serving of rice, corn on the cob) on the plate can be identified by comparing the 2D images (and/or 3D models) of the unknown food objects with known 2D images of objects and/or their associated 3D CAD models of objects in the databases 32, 34. Any candidate 2D images and their associated 3D models can then be validated against the original photograph using their inherent 2D and 3D geometric properties with scaling to determine matches ranging from exact fits to rough approximations with estimated error. Moreover, by comparing the unknown object with similar or identical known objects in the databases 32, 34, then attributes 36, such as the weight of the known object, can be scaled, estimated and associated with the unknown object.
[0065] It can be appreciated that there may be many other different modules for searching for similar objects using data having spatial coordinates 26 or 2D images 30. It can also be understood that many of the modules described herein can be combined with one another. [0066] It will be appreciated that any module or component exemplified herein that executes instructions or operations may include or otherwise have access to computer readable media such as storage media, computer storage media, or data storage devices (removable and/or non-removable) such as, for example, magnetic disks, optical disks, or tape. Computer storage media may include volatile and non-volatile, removable and nonremovable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data, except transitory propagating signals per se. Examples of computer storage media include RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by an application, module, or both. Any such computer storage media may be part of the computing device 20 or accessible or connectable thereto. Any application or module herein described may be implemented using computer readable/executable instructions or operations that may be stored or otherwise held by such computer readable media.
[0067] Details regarding the different feature extraction systems and methods, that may be associated with the various modules in the software 28, will now be discussed.
[0068] Turning to Figure 3, example data components of the 3D objects database 32 are shown. There may be 3D CAD models 60, 62, 64 of different objects. Associated with each 3D CAD model may be data tags (e.g. information) that identify the objects. For example, 3D CAD model 60 may be associated with data tags 66; 3D CAD model 62 may be associated with data tags 68; and 3D CAD model 64 may be associated with data tags 70. Each data tag can include one or more of the following: an object type, a manufacturer make, a model, etc. There may be other types of identifying information, such as a name, group, class, etc. For example, for a 3D CAD model of a certain car, it may belong under the object type "car", have a manufacturer make "Toyota", and have model "Prius 2010". The type of identifying information may vary with the different types of objects. It can be appreciated that the geometric or spatial information inherent of the 3D CAD model, or the data tags, or both, may be used as searching parameters.
[0069] The 3D objects database also includes 3D CAD models of standard reference objects. Standard reference objects refer to well known 3D objects. These include, for example, a pop can, stop sign, pen, dollar bill, coins, sheet of paper, fire hydrant, paper clip, etc. Standard reference objects are consistent in size and shape as they are manufactured in large quantities. For example, all twenty dollar bills are the same within a country; all stop signs are the same; all pop cans are the same; etc. It can be appreciated that standard reference objects may be customized depending on the application or industry (e.g. a hard hat may be a standard reference object in the construction industry). The dimensions of these standard reference objects are known through their 3D CAD models. The standard reference objects are used to scale 3D CAD models of objects, as will be discussed later.
[0070] Turning to Figure 4, example computer executable instructions are provided for searching for similar objects in the database 32. These operations are implemented by module 38. At block 72, the computing device obtains or receives a 3D model of an object of interest (e.g. the target object). At block 74, the 3D model is scaled to correspond with the actual dimensions of the target object, for example, if the 3D model is of different dimensions than the target object. At block 76, one or more search criteria, such as spatial criteria, are received or obtained to perform a search for objects in the database 32 that are similar to the target object. At block 78, using the spatial criteria, the computing device 20 generates one or more adjusted 3D models (e.g. "search shells"). The adjusted 3D models, or search shells, reflect the spatial tolerances. For example, if the obtained 3D model of the target object is 30 cm tall, and the height tolerance is +/- 5 cm, then one adjusted 3D model of the target object may be 35 cm tall, while another may be 25 cm tall. At block 80, the computing device 20 searches for one or more similar objects in a 3D objects database 32 by comparing the one or more adjusted 3D models, or search shells, with the same spatial criteria of the stored 3D models. For example, the search may determine whether an object in the 3D objects database 32 has a height between 25 cm and 35 cm. If so, the object may be considered to be similar to the target object. At block 82, for each compared 3D model from the database 32, it is determined if the spatial criteria of a stored 3D model sufficiently matches the same spatial criteria of the adjusted 3D model. If so, at block 86, the stored 3D model of the database 32 is returned since it is considered to be a similar object to the target object. If not, no action is taken (block 84).
[0071] Figures 5, 6, 7, and 8 provide example computer executable instructions for obtaining a 3D model of the target object, as per block 72. It can be appreciated that there are different approaches to obtaining a 3D model, as discussed below.
[0072] Figure 5 relates to obtaining a 3D model from laser or LiDAR scanning. At block 88, the target object is laser or LiDAR scanned to generate a point cloud of 3D points covering the surface of the target object. At block 90, from the point cloud of 3D points, a closed triangulated irregular network (TIN) is generated representing the target object's surface in the form of a wire frame. This TIN can be formed using Delaunay's triangulation algorithm, for example. In general, various triangulation algorithms for generating a TIN are applicable to the principles described herein. At block 92, from the TIN, a 3D shell or 3D CAD model is generated of the target object. In this approach, the 3D CAD model should be accurately sized to match the dimensions of the target object, due to the accuracy of the laser scanning method.
[0073] In another approach, Figure 6 provides example computer executable instructions for generating a 3D CAD model of the target object using photogrammetry. At block 94, the computing device receives at least two images of the target object, the images photographed from different angles or positions. At block 96, stereophotogrammetry is applied to the images to estimate the coordinates of the 3D points on the target object. In stereophotogrammetry, the 2D points of the images are correlated to determine the 3D coordinates of points on the target object. Based on the 3D points, a closed TIN wire frame of the target object is generated (block 98). At block 100, from the TIN, a 3D shell or 3D CAD model of the object is generated. Although not shown in Figure 6, the 3D CAD model could be accurately dimensioned if various photogrammetric parameters are known, such as the focal length, distance between the camera and the target object, and number of pixels in length and width of the target object. Using such parameters, known methods may be applied to determine the actual dimensions of the target object, which may be applied to the 3D CAD model of the target object.
[0074] Figure 7 provides another approach using silhouette imaging. At block 102, the computing device 20 receives at least two images of the target object. In particular, the target object is photographed from different positions or angles (e.g. for stereoscopic effect) and the target object is photographed against a uniform background. In this way, the outline of the target object image from the images can be more easily identified. At block 104, the computing device 20 receives the distance between the target object and the camera device that captured the images. The distance can be measured manually, or automatically, such as by a range finder. Examples of range finder devices include ultrasonic transmitters and receivers, infrared light beams, camera devices, or combinations thereof. It can also be appreciated that the angle or position of the camera device, for example, relative to the target object, is also received. At block 106, all the images are converted into silhouettes by marking the background pixels, which are known to have approximately uniform RGB values, as white and all other pixels as black. Therefore, the target object image should be black. At block 108, the silhouettes of the target object are mathematically projected to form a 3D CAD model of the enclosing volume in space. Typically, to correlate the different silhouette images of the target object to determine an enclosing space, the distance from the target object, the angle of the image plane, and the position of the camera device capturing the images, are used to determine the locations of points and lines of the target object. It can be appreciated that the actual dimensions of the target object may be accurately represented using the photogrammetric parameters, as discussed above with respect to Figure 6.
[0075] Turning briefly to Figure 14, a schematic diagram shows a target object being a car 172 that is being imaged from different positions. In particular, the image planes 174, 176, 178, 180, 182, 184, 186 and 188 are taken from different angles and distances relative to the target object. The images of the car in the image planes are used to form silhouettes of the car, which can then be used to determine or define an enclosed volume of space in the approximate shape of the car. The enclosed volume of space is then used to generate a 3D CAD model representing the car.
[0076] Turning to Figure 8, another approach for generating a 3D CAD model of a target object is provided and is based on edge detection. At block 110, at least two images of the target object are received. The images are photographed from different positions (e.g. for stereoscopic effect) and are preferably, although not necessarily, photographed against a uniform background. At block 112, the distances between the target object and the camera device are received or obtained. At block 114, edge detection algorithms are applied to separate the image of the target object from the background, as well as to separate the image of the target object into distinct polygonal components. The edges of the polygonal components may be based on discontinuities in depth, discontinuities in surface orientation, discontinuities in color, discontinuities in material properties, etc. These discontinuities are identified by geometric relationships and color (e.g. RGB value) relationships between the pixels. Examples of applicable edge detection algorithms include differential edge detection, Sobel edge detection, Laplace edge detection, and Canny edge detection. Other edge detection algorithms may be used to identify polygonal components of the target object. Threshold or gradient filters may also be used to pre-process the images of the target object, so that edges of the target object can be more easily detected. At block 116, the component polygons from the different images are mathematically projected to determine where their projections intersect in 3D space. At block 118, the 3D point intersections are used to define corners or profile lines of the target object. The 3D point intersections are used to generate a closed TIN wire frame of the target object, e.g. using Delaunay's triangulation algorithm. From the TIN, a 3D shell or 3D CAD model of the object is generated (block 120). If the photogrammetric parameters are available, as described earlier, it is possible to scale the 3D CAD model to correspond to the actual dimensions of the target object.
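As one possible realization of block 114, the Canny detector named above can be applied to each image and the resulting contours treated as candidate polygonal components. The sketch below uses OpenCV; the Gaussian pre-filter and the threshold values are illustrative assumptions.

```python
import cv2

def detect_polygonal_components(image_bgr, low_threshold=50, high_threshold=150):
    """Separate an image of the target object into candidate polygonal components."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)        # pre-processing filter
    edges = cv2.Canny(blurred, low_threshold, high_threshold)
    # Each external contour approximates the outline of one polygonal component.
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return edges, contours
```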
[0077] Turning to Figure 15 briefly, a schematic diagram illustrates an example of the edge detection approach described in Figure 8. Images or photos 192 and 194 are taken of an object from different angles. A polygonal component of the object is identified using edge detection. The image of the polygonal component 200 is from one angle in image 192, and the image of the polygonal component 196 is from another angle in image 194. Based on the camera's properties, such as field of view and focal length, the image rays 198 of the polygonal component 196 can be determined. The image rays 198 are then projected outward. Similarly, the image rays 202 of the polygonal component 200 are determined and are projected outward from the position of the camera device that captured image 192. The intersection of rays 202 and 198 define the three-dimensional coordinates or lines of the polygonal component of the target object. In this example, the intersection of the image rays defines the square surface p-q-r-s.
[0078] In view of the above, it can therefore be appreciated that there are various approaches to obtaining a 3D model, for example, from both 2D images and 3D data. Other methods of generating or obtaining a 3D model are applicable to the principles described herein.
[0079] Turning to Figure 9, example computer executable instructions are provided for scaling the 3D CAD model to correspond with the dimensions of the target object, if necessary. In particular, at block 122 it is determined if the 3D CAD model of the target object is already scaled to the dimensions corresponding to the target object. If so, at block 124, no action is taken. If not, at block 126, it is then determined if a standard reference object has been laser scanned or imaged/photographed along with the target object. Standard reference objects refer to well known objects, and these objects have corresponding 3D CAD models 58 in the database 32. Therefore, if a standard reference model is a pop can (e.g. having a height of 4.8125 inches and a diameter of 2.5 inches), at block 126, it is determined if a pop can was imaged or scanned when imaging or scanning the target object, respectively. If so, then at block 128, the standard reference object is identified in the laser scan or the image, and the dimensions of the scanned or imaged standard reference object are determined. For example, if the target object was imaged, the image of the pop can has dimensions of 2.4 inches tall by 1.25 inches wide. At block 130, an accurate 3D CAD model 58 of the standard reference object is retrieved from the database 32. For example, the accurate 3D CAD model of the pop can is retrieved, the 3D CAD model having a height of 4.8125 inches and a diameter of 2.5 inches. At block 132, the dimensions of the laser scanned or photographed standard reference object are compared with the dimensions of the accurate 3D CAD model. Such comparison is used to generate or determine one or more scale factors between the obtained image or CAD model and the accurate 3D CAD model. It can be appreciated that there may be multiple scale factors, for example, where the scale factor along the horizontal axis differs from the scale factor along the vertical axis. At block 134, the scale factor or scale factors are applied to the 3D model of the target object.
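A minimal sketch of blocks 128 to 134, using the pop can figures from the example above, is shown below; the per-axis dictionary representation and the function name are assumptions made for illustration only.

```python
def compute_scale_factors(imaged_ref_dims, true_ref_dims):
    """Per-axis scale factors between the imaged and the accurate reference object.

    Both arguments are dicts of dimension name -> value in consistent units.
    """
    return {name: true_ref_dims[name] / imaged_ref_dims[name]
            for name in imaged_ref_dims}

# Example from the text: pop can imaged at 2.4 x 1.25 inches,
# accurate 3D CAD model at 4.8125 x 2.5 inches.
scale = compute_scale_factors({"height": 2.4, "diameter": 1.25},
                              {"height": 4.8125, "diameter": 2.5})
# scale["height"] is about 2.005 and scale["diameter"] is 2.0; these factors
# are then applied to the corresponding axes of the target object's 3D model
# (block 134).
```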
[0080] If, however, at block 126 it is determined that the standard reference object has not been laser scanned or imaged with the target object, then the process continues to block 136. At block 136, the computing device 20 obtains or retrieves the distance at which the target object was laser scanned or imaged. At block 138, a laser scan or photograph of a standard reference object is captured from the same distance as the distance obtained or retrieved in block 136. In other words, if, for example, the image of the target object was captured by a camera located 5 feet away, then a subsequent image of a standard reference object will be taken from the camera at 5 feet away. The dimensions of the imaged standard reference object are then determined from the image. The process continues with blocks 130, 132, and 134, the details of which are described above.
[0081] Turning to Figure 10, example computer executable instructions are provided for receiving or obtaining one or more search criteria to perform an object search for the target object, as per block 76. At block 140, a graphical user interface (GUI) is displayed to allow a user to select search criteria. Alternatively, or in addition, search criteria may be predetermined or preset. The search criteria include spatial criteria, including for example height tolerance, width tolerance, length tolerance, volume tolerance, area tolerance (including cross-sectional areas), and shell surface tolerance. The shell surface of a 3D CAD model refers to the surface (e.g. the outer surface) of the 3D CAD model. The shell surface tolerance refers to the distance away from the shell surface. For example, a shell surface tolerance of +10 inches increases the width by 20 inches, the height by 20 inches, etc. The shell surface tolerance takes into account the dimensions of complex shapes which are not captured by length, width and height criteria. Examples of complex shapes include vases with tapered or curved profiles and chairs with legs and a back. Other search criteria include texture and color. For example, although a target object is blue, similar objects having the same shape but having a red color may be desired. The search criteria can also include the object type, object grouping, object make, object model, etc. Such information is preferably available in the database 32 as shown by data tags 66, 68, 70.
[0082] Continuing with Figure 10, at block 142, the search criteria are further defined by narrowing the search to a portion or part of the target object. In other words, the computing device 20 determines or receives the parts or portion of the target object to be searched and compared against the 3D CAD models of objects in the database 32. Parts of an object can include the upper portion (e.g. upper 30% of the target object), the lower portion (e.g. bottom 25% of the target object), the left portion, the right portion, etc. This advantageously allows a 3D object search to consider only the part of the target object that is of interest. For example, a user may take an image of an office chair. The user, however, is only interested in the chair's back. The user may then specify that the object search be performed based on the upper 60% of the chair (e.g. including the back, but not the chair's legs). Therefore, the search results would contain chairs having similar chair backs, but possibly very different chair legs.
[0083] Turning to Figure 11, examples are provided for generating one or more adjusted 3D models, also called search shells, as per block 78. In particular, at block 144, the spatial criteria and tolerances are used to generate an adjusted 3D model. For example, if the height tolerance is +/- 10%, then an adjusted 3D CAD model (e.g. the first search shell) is created that is 10% taller than the 3D CAD model of the target object, and another 3D CAD model (e.g. the second search shell) is created that is 10% shorter than the 3D CAD model of the target object. As will be discussed later, the search will be performed so that objects taller than the second search shell and objects shorter than the first search shell are considered as candidate similar objects to the target object.
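A hedged sketch of block 144 for the height tolerance example is given below. The 3D CAD model is taken to be a simple array of vertices with Z as the vertical axis; this representation, and the choice to scale about the model's base, are assumptions rather than requirements of the approach.

```python
import numpy as np

def height_search_shells(vertices, tolerance=0.10):
    """Create the first (taller) and second (shorter) search shells.

    vertices: (N, 3) array of 3D model vertices, with Z as the vertical axis.
    Returns (first_shell, second_shell) vertex arrays, 10% taller and 10%
    shorter than the target object's model by default.
    """
    v = np.asarray(vertices, dtype=float)
    z_min = v[:, 2].min()
    first_shell = v.copy()
    first_shell[:, 2] = z_min + (v[:, 2] - z_min) * (1.0 + tolerance)
    second_shell = v.copy()
    second_shell[:, 2] = z_min + (v[:, 2] - z_min) * (1.0 - tolerance)
    return first_shell, second_shell
```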
[0084] In another example, the shell surface tolerance may be +/- 2 inches. Therefore, an adjusted 3D model may be generated by adding a 2 inch thick skin over the entire surface of the 3D CAD model of the target object. Another adjusted 3D model may be generated by subtracting a 2 inch thick skin from the entire surface of the 3D CAD model of the target object. [0085] Turning to Figure 16 briefly, the concepts of the search shells and the standard reference objects are further explained through an example illustration. The 3D CAD model 206 of the target object, which is a vase, is shown. A scan or image of a pop can 204 was captured alongside the vase. Since the pop can 204 is a standard reference object, its dimensions are known. The 3D CAD model of the vase 206 can then be scaled in proportion to the known dimensions of the 3D CAD model of the pop can 204. Based on the 3D CAD model of the vase 206, a search shell 208 can be created by adding a one quarter inch thick skin onto the 3D CAD model of the vase 206. This search shell 208 can be used for a more refined search. Another search shell 210 can be created by adding a one half inch thick skin onto the 3D CAD model of the vase 206. This search shell 210 can be used for a more coarse or approximate search. It can be appreciated that the larger the search shells are relative to the 3D CAD model of the target object, the more coarse or approximate the results become.
[0086] The search criteria may include pre-defined settings for the search shells, such as a coarse search, a more refined search, and a very exact search. For example, a coarse search may use a search shell that is 20% larger than the 3D CAD model of the target object. A more refined search may use a search shell that is 10% larger than the 3D CAD model of the target object. A more exact search may use a search shell that is 2% larger than the 3D CAD model of the target object.
[0087] Turning to Figure 12, example computer executable instructions are provided for searching for one or more similar objects in a 3D objects database, the 3D objects database storing 3D models of objects, by comparing the search criteria of the one or more adjusted 3D models with the same search criteria of the stored 3D models, as per block 80.
[0088] At block 146, if an object type or group criterion, or some other identification, is provided or obtained, then the search is narrowed to the specified object type or group. For example, it may have been specified that the target object belongs to the category of vases. Therefore, the search in the 3D objects database 32 will be narrowed down to vases using the data tags. At block 148, 3D CAD objects are retrieved from the database 32 if they are within any one of the height, width or length tolerances, if such tolerances have been specified. At block 150, the computing device 20 superimposes each 3D search shell with the 3D CAD objects retrieved from the 3D objects database. If only a certain portion or part of the 3D CAD model of the target object is to be searched, for example as specified in block 142, then the other parts that are considered irrelevant are eliminated or truncated from the 3D CAD objects retrieved from the database 32. For example, only the top portions of the 3D models of the vases from the database 32 are considered and, thus, the bottom portions of the vases are ignored.
[0089] At this stage, there may be one or more 3D CAD objects from the database 32 that are considered candidates for being similar to the target object. For each of the candidate 3D CAD objects, further spatial comparison is performed, as follows. Continuing with Figure 12, at block 154, the computing device 20 determines the percentage volume of the 3D CAD object that lies within the 3D search shell (called "overlap parameter 1"). At block 156, it is then determined what percentage volume of the 3D search shell lies within the 3D CAD object (called "overlap parameter 2"). Both overlap parameter 1 and overlap parameter 2 are preferably considered. For example, it may be that the candidate 3D CAD object lies 100% within the search shell (e.g. overlap parameter 1), since it is much smaller than the search shell. However, this alone does not indicate a strong similarity. The percentage of the search shell that lies or overlaps with the volume of the 3D CAD object may only be 40% (e.g. overlap parameter 2). Therefore, it can be understood that the 3D CAD object is much smaller than the search shell and thus may be considered not similar to the target object.
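By way of illustration, blocks 154 and 156 could be computed on voxelized (occupancy-grid) versions of the candidate object and the search shell; the voxel representation is an assumption made to keep the sketch short, not a requirement of the method.

```python
import numpy as np

def volume_overlap_parameters(candidate_voxels, shell_voxels):
    """Compute overlap parameter 1 and overlap parameter 2.

    Both inputs are boolean occupancy grids of identical shape and resolution.
    """
    intersection = np.logical_and(candidate_voxels, shell_voxels).sum()
    # Percentage of the candidate object's volume lying inside the search shell.
    overlap_1 = 100.0 * intersection / candidate_voxels.sum()
    # Percentage of the search shell's volume lying inside the candidate object.
    overlap_2 = 100.0 * intersection / shell_voxels.sum()
    return overlap_1, overlap_2
```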
[0090] Examples of overlapping search shells are shown in Figures 17, 18, 19 and 20. Figure 17 shows a search shell 210 of a vase. It is being compared with a 3D CAD object of another vase 212, which resembles a cylinder. Figure 18 shows the search shell 210 being compared with a 3D CAD object of another vase 214, which is thinner around the base and wider at the top. Figure 19 shows the search shell 210 being compared with a 3D CAD object of another vase 216, which is wider around the midsection. Figure 20 shows the search shell 210 being compared with a 3D CAD object of another vase 218, which resembles a laboratory flask. As can be seen, vases 212, 214, and 218 overlap 100% in volume with the search shell 210. However, the search shell 210 does not 100% overlap in volume with the vases 212, 214, and 218. It can also be appreciated that, if only the upper portion, such as the upper 30%, of the search shell 210 were being used as a search parameter, then vase 214 would be the most similar to the specified portion of the target object.
[0091] Turning back to Figure 12, similar logic is applied in blocks 158 and 160, which refer to computing the overlapping areas. In particular, at block 158, the computing device 20 determines the percentage area (e.g. surface area, cross sectional area, profile area, etc.) of the 3D CAD object that lies within the corresponding area of the search shell or target object (called "overlap parameter 3"). At block 160, the computing device 20 determines the corresponding percentage area of the search shell or target object overlapping the 3D CAD object (called "overlap parameter 4"). It should be noted that the database can contain many pre-computed geometric properties about the 3D shells which it contains. Examples include a series of parallel horizontal cross sections whose areas quantify the changing shape of a 3D object as one moves vertically. Similarly, a series of vertical cross sections in different vertical planes reveals and quantifies the changing shape based on direction. These pre-computed properties may be used to hasten the 3D search through the database in order to rapidly find similarly shaped 3D objects to compare to the target 3D object.
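A small sketch of the pre-computed geometric properties mentioned above is given below: the areas of a series of parallel horizontal cross sections are derived from a voxelized 3D shell and could be stored with each database object. The voxel grid and the unit voxel area are illustrative assumptions.

```python
import numpy as np

def horizontal_section_areas(voxels, voxel_area=1.0):
    """Pre-compute the area of each horizontal cross section of a 3D shell.

    voxels: boolean occupancy grid indexed (x, y, z), with z vertical.
    Returns a list with one cross-sectional area per horizontal slice; storing
    these profiles in the database allows coarse shape comparison before any
    full 3D overlap is computed.
    """
    return [float(voxels[:, :, z].sum()) * voxel_area
            for z in range(voxels.shape[2])]
```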
[0092] Figures 21, 22, 23, and 24 show cross-sectional areas corresponding to Figures 17, 18, 19, and 20, respectively. The dotted circle 210' outlines the cross-sectional area of the 3D CAD model of the target object, and is shown in comparison with the cross-sections of the vases 212, 214, 216, and 218. In the example, the cross-sectional area is taken at a lower height of the vases. Therefore, as shown in Figure 23, the vase 216 has the most similar cross-sectional area compared to the 3D CAD model of the target object, for the specified cross-sectional height.
[0093] Returning to Figure 12, at block 162, it is then determined if one or more of the following conditions are true: if overlap parameter 1 is within a first specified range; if overlap parameter 2 is within a second specified range; if overlap parameter 3 is within a third specified range; if overlap parameter 4 is within a fourth specified range; or if any combination of the above is true. If one of the conditions is true, or if certain specified combinations are true, then the candidate 3D CAD object is returned as a similar object to the target object.
[0094] However, if no similar object is returned, then at block 164 the alignment of the geometric centers and the rotational angles are adjusted or fine tuned. In other words, the candidate 3D CAD object from the database 32 and the 3D CAD model of the target object are re-oriented and re-positioned relative to one another to determine if the percentage volume of overlap can be increased. Preferably, the rotations and alignment adjustments are made in small increments. In some cases, larger rotations may be used where there are no obvious "front", "back", "top", "bottom", etc. orientation identifiers of the objects. [0095] Upon re-adjusting the orientation or alignment, or both, blocks 154 to 164 are repeated (block 166). However, if after a certain number of re-alignment iterations the candidate 3D CAD object from the database 32 still does not meet the conditions of block 162, then the re-alignment operations are stopped.
[0096] Turning to Figure 13, example computer executable instructions are provided for returning the stored 3D CAD object (from the database 32) as a similar object to the target object. In particular, at block 168, the computing device 20 orders or organizes the similar 3D CAD objects according to degree of similarity to the target object. The degree of similarity is measured, for example, by the percentage overlap in volume, or area, or both. A high percentage of overlap means a high degree of similarity. At block 170, the similar 3D CAD objects are displayed on the display 18 in an order based on the degree of similarity.
[0097] In another aspect, the proposed systems and methods provide for 2D and 3D object searching. In other words, both 2D images and 3D CAD objects can be returned as results when providing a 2D image or 3D CAD model as an input search parameter. Such a search may investigate the data components of the 2D and 3D objects database 34.
[0098] Turning to Figure 25, an example block diagram of the 2D and 3D objects database 34 is provided. The database 34 can include similar or the same 3D object components 220 (e.g. comprising 3D CAD models 60, 62, 64) that were described with respect to the 3D objects database 32. In addition, the 2D and 3D objects database 34 includes 2D images of objects 222. 2D images, or images, refer to photos, drawings, pictures, etc. The 2D images 222 may include multiple images of the same object, although the images may be from different perspectives. For example, there is a 2D image of object A 224 from a first perspective, and another 2D image of object A 226 from a second perspective. Images 224 and 226 are both associated with the object A data tag 66.
Notably, the data tag 66 is also associated with the 3D CAD model of object A 60.
[0099] 2D image 228 of object B is associated with the object B data tag 68, and 2D image of object C is associated with object C data tag 70. As described earlier, the data tags can be used to narrow search results.
[00100] The database 34 may also include statistics of objects 232, which include where an object was made, how many such objects were made, when an object was made, and where the object is sold or available. These statistics 232 associated with each object can be advantageously used to identify the probability that a search result matches the target object. Further details in this regard are discussed below.
[00101] Turning to Figure 26, example computer executable instructions are provided for searching for similar 2D and 3D objects from the 2D and 3D objects database 34. Such instructions may be performed by module 40. At block 234, the computing device 20 obtains or receives one or more of the following: a 2D image of a target object, multiple 2D images of the target object (e.g. image frames from a video), and a 3D model of the target object (e.g. by laser scanning). Based on the obtained or received data, the process continues with blocks 236, 238, 240, 242, 244. In parallel or in series with these blocks, a search for similar 3D objects is performed according to blocks 72, 74, 76, 78, and 80, the operations of which have been described above with respect to Figure 4. In other words, both a 2D and a 3D search are performed, the search results of each complementing the other. At block 80, however, it can be appreciated that the search for similar 3D objects can be conducted using database 34. At block 236, for the one or more 2D images, the computing device 20 analyzes each image using edge detection, for example, to separate each image into individual sub-images of the target object. This involves identifying or outlining the perimeter of the target object. Sub-images herein refer to a set of pixels within an image that represent a portion of, or an entire, object. For example, there may be an image or photo of a plate of food, whereby shown on the plate is a cob of corn, a serving of rice, and a pork chop. The extracted sub-images are the isolated image of the cob of corn, the isolated image of the serving of rice, and the isolated image of the pork chop. As will be discussed further, the sub-images of the cob of corn, or rice, or pork chop are used to search for similar 2D and 3D objects. It can be appreciated that known edge-detection techniques for detecting and generating sub-images are applicable to the principles described herein. At block 238, the 2D sub-images are scaled to correspond with the dimensions of the target object. Scaling of the 2D sub-images may be performed by using known photogrammetry techniques and devices. Another approach to scaling images or sub-images includes the use of a standard reference object, as described earlier with respect to Figure 9.
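One possible realization of block 236, using contour extraction to isolate sub-images such as the corn, rice and pork chop from a plate-of-food photograph, is sketched below. OpenCV is assumed, and the edge thresholds and minimum component size are illustrative values only.

```python
import cv2
import numpy as np

def extract_sub_images(image_bgr, min_area=500):
    """Isolate sub-images of individual objects found in a photograph."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    sub_images = []
    for contour in contours:
        if cv2.contourArea(contour) < min_area:   # skip small noise components
            continue
        mask = np.zeros(gray.shape, dtype=np.uint8)
        cv2.drawContours(mask, [contour], -1, 255, thickness=cv2.FILLED)
        isolated = cv2.bitwise_and(image_bgr, image_bgr, mask=mask)
        x, y, w, h = cv2.boundingRect(contour)
        sub_images.append(isolated[y:y + h, x:x + w])  # cropped sub-image
    return sub_images
```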
[00102] Continuing with Figure 26, at block 240, one or more 2D search criteria, including spatial criteria, are received or obtained to perform an object search for the target object. The spatial criteria, similar to the 3D searching, relate to spatial tolerances of the sub-images. For example, a user may provide spatial criteria, or the computing device 20 can automatically obtain spatial criteria. The spatial criteria of a 2D image may include width and length tolerances, perimeter tolerances, area tolerances, etc. At block 242, one or more adjusted 2D images (also referred to herein as 2D search stencils) are generated. Each adjusted 2D image has one or more of the spatial criteria adjusted based on the one or more defined tolerances. For example, if both the width and height tolerances are +5 inches, then the 2D search stencil of an image or sub-image will be 5 inches taller and 5 inches wider than the scaled sub-image. At block 244, a search is performed for one or more similar 2D objects in the 2D and 3D objects database 34. The search may be conducted by comparing the spatial criteria of the one or more adjusted 2D images (e.g. the 2D search stencils) with the same spatial criteria of the stored 2D images. For example, it may be investigated whether a 2D image in the database 34 fits within the 2D search stencil. The search may also be conducted based on attributes of the image pixels such as color and texture. If only part of an object of interest has been captured, then the specifics will help to narrow down the search (e.g. left front car headlight, image captured from the front).
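A minimal sketch of block 242 follows: the scaled sub-image outline is enlarged by the spatial tolerance to form a 2D search stencil. Expressing the tolerance in pixels through a known inches-per-pixel scale, and using morphological dilation, are assumptions for illustration.

```python
import cv2

def make_search_stencil(object_mask, tolerance_inches, inches_per_pixel):
    """Create a 2D search stencil that is larger than the scaled sub-image.

    object_mask: uint8 binary mask of the scaled sub-image (255 = object).
    The mask is dilated by the tolerance, expressed in pixels, so the stencil
    extends beyond the object outline on all sides.
    """
    pad = max(1, int(round(tolerance_inches / inches_per_pixel)))
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,
                                       (2 * pad + 1, 2 * pad + 1))
    return cv2.dilate(object_mask, kernel)
```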
[00103] At block 246, the 2D images returned from the 2D search and the 3D models returned from the 3D search are compared to refine the search results. This involves combining the returned 2D images with at least one of a 3D CAD model of the target object or a returned 3D object (a result from the 3D search). It is then determined if the 2D images or 3D models, or both, match one another when projected onto each other from different perspectives. If so, the returned results are rated as being close or very similar matches to the 2D image or 3D model initially obtained at block 234. At block 248, the results are then ordered or organized, for example, according to probability analysis.
[00104] Turning to Figure 27, example computer executable instructions are provided for implementing the operation of block 236, regarding edge detection. At block 250, the image of the target object is displayed to the user on a display device 18. At block 252, the computing device 20 receives a selection input identifying at least a point on the target object. At block 254, edge detection is performed for the target object. At block 255, an option is provided to either receive more selection input information by going back to block 252 or to proceed to block 253 to save the extracted sub-image. Thus, the perimeter of the target object is outlined to isolate the sub-image of the target object. In other words, the identification of one or more target objects in an image may be assisted or confirmed, or both, by a user selection. It can also be appreciated that the computing device 20 and display device 18 may be an integrated device, such as a mobile hand-held device (e.g. including devices under the trade-marks iPad, iPhone, BlackBerry Torch, and BlackBerry PlayBook), and that user selection of the target object may be made through any known user interface devices (e.g. cursor, touch screen, scroll wheel, track pad, etc.). [00105] Turning to Figure 28, example computer executable instructions are provided for implementing the operation of block 238, regarding the scaling of a 2D image or sub-image. The example of Figure 28 includes the use of a standard reference object. In particular, at block 256, if it is determined that the 2D sub-image of the target object has already been scaled, then no action is taken (block 258). Otherwise, at block 260, it is determined if a standard reference object has been imaged or photographed along with the target object. If so, at block 262, the standard reference object is identified in the image. At block 264, an accurate 3D model or 2D image of the standard reference object is retrieved. It can be appreciated that there may be a database of both 2D images and 3D models of standard reference objects. At block 266, the dimensions of the photographed reference object are compared with the accurate 3D model or 2D image of the standard reference object. One or more scale factors are then generated between the photographed reference object and the accurate reference object. The scale factors may be along the horizontal axis, the vertical axis, or both. At block 268, the scale factor or factors are applied to the 2D sub-image of the target object.
[00106] Continuing with Figure 28, if at block 260 no standard reference object has been imaged along with the target object, then at block 270, the distance at which the target object was photographed is obtained. At block 272, the standard reference object is photographed from the same distance. The process then continues from block 272 to block 264, as described above.
[00107] Turning to Figure 29, example computer executable instructions are provided for searching for one or more similar objects in a 2D and 3D objects database 34. In other words, Figure 29 provides an example implementation of block 244, for carrying out a 2D search. In particular, at block 274, if an object type or group is specified, the search can be narrowed to the specified object type or group. At block 276, the computing device 20 retrieves 2D images of the objects within any one or more of the geometric tolerances such as height, width, or length, if specified, and within any one or more of the image attribute tolerances such as color or texture, if specified. At block 278, each 2D search stencil is superimposed with the 2D images of the objects retrieved from the 2D and 3D objects database 34. If a certain portion of the target object is to be searched, then the computing device 20 eliminates or truncates the irrelevant parts or portions of the retrieved 2D images of the objects (e.g. parts or portions ancillary to the specified part or portion of the target object). For example, it may be that the user would only like to search the upper portion of the image of the target object, and any object having a similarly shaped upper portion is of interest to the user. [00108] For each retrieved 2D image of the object (e.g. retrieved from database 34), or for the selected parts or portions of the retrieved image, a number of operations are performed (blocks 282, 284, 286, 288, 290, 292). At block 282, 'overlap parameter 1' is computed by determining the percentage area of the retrieved 2D image that lies within the 2D image of the target object. At block 284, 'overlap parameter 2' is computed by determining the percentage area of the 2D image of the target object that lies within the retrieved 2D image. At block 286, the 'perimeter parameter 1' is computed by determining the difference between the perimeters of the 2D image of the target object and the retrieved 2D image. At block 288, the parameters are compared against specified ranges to determine if the retrieved 2D object is, or is not, sufficiently similar to the image of the target object. In a non-limiting example, if 'overlap parameter 1' is within a first specified range; or if 'overlap parameter 2' is within a second specified range; or if 'perimeter parameter 1' is within a third specified range; or if any combination of the above conditions is true, then the retrieved 2D object is returned as being a similar object to the target object. At block 290, however, if no similar object is returned, then the alignments of the geometric centers and rotational angles of the images may be fine tuned (e.g. by making slight changes) to increase the percentage area of overlap, if possible. In some cases, larger rotations may be used where there is no obvious front or back, top or bottom, or left or right. Blocks 282 to 290 are repeated accordingly, as per block 292. Re-alignment or fine-tuning is stopped after a certain number of re-alignment iterations have been performed.
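Illustrative versions of 'overlap parameter 1', 'overlap parameter 2' and 'perimeter parameter 1' from blocks 282 to 286 are sketched below, computed on binary masks of the retrieved image and of the target-object image; the mask representation is an assumption made for brevity.

```python
import cv2
import numpy as np

def _perimeter(mask):
    """Perimeter of the largest object outline in a binary mask (255 = object)."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return cv2.arcLength(max(contours, key=cv2.contourArea), True)

def overlap_parameters_2d(retrieved_mask, target_mask):
    intersection = np.logical_and(retrieved_mask > 0, target_mask > 0).sum()
    # Percentage area of the retrieved 2D image lying within the target image.
    overlap_1 = 100.0 * intersection / (retrieved_mask > 0).sum()
    # Percentage area of the target image lying within the retrieved 2D image.
    overlap_2 = 100.0 * intersection / (target_mask > 0).sum()
    # Difference between the two perimeters.
    perimeter_1 = abs(_perimeter(retrieved_mask) - _perimeter(target_mask))
    return overlap_1, overlap_2, perimeter_1
```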
[00109] Turning to Figure 30, example computer executable instructions are provided for implementing probability analysis according to block 248, as described in Figure 26. In particular, with respect to Figure 30, at block 294, the computing device 20 orders or organizes the similar 3D CAD objects according to degree of similarity to the target object. The degree of similarity can be measured according to percentage of overlap, color similarity, shape similarity, etc. At block 296, the computing device 20 retrieves information related to the target object. For example, the information includes: the make and model of the target object; and when or where, or both, the 2D image or 3D laser scan of the target object was taken. At block 298, the computing device 20 then compares the information of the target object with statistical data of the similar 2D or 3D object, retrieved from the database 34. In particular, statistical data from data storage 232 can reveal where the object was made, how many such objects were made, when the object was made, and where it is sold or available. [00110] In an example of the comparison at block 298, the object returned from the database 34, according to the statistics, is made and sold exclusively in the United States. However, the image of the target object was captured in India. Therefore, it is of low probability that the degree of similarity between the target object and the identified object from the database is accurate.
[00111] Turning to Figures 31, 32 and 33, a schematic diagram shows an example of performing a 2D and 3D search, whereby the results from both searches can be combined to refine the results.
[00112] Turning to Figure 31, a 2D image or photograph 300 includes the images of a car 302 and a stop sign 303. The target object in this example is the car 302, and the stop sign 303 is a standard reference object that can be used for scaling the image. A user selects the image of the car 302 within the image 300. Based on the selection, the sub-image of the car 302 is extracted and isolated. The sub-image of the car 302 is then scaled using a scale factor determined between the dimensions of the image of the stop sign 303 and the known actual dimensions of the stop sign. The scaled sub-image of the car 304 is then used to generate a 2D search stencil. In particular, a spatial criteria GUI 306 is presented to the user to obtain length tolerances 308, height tolerances 310, etc. The outline or perimeter of the target object is expanded or decreased, based on the tolerances, to create search stencil 312. It can be seen in Figure 31 that the search stencil 312 is taller and longer than the scaled sub-image of the car 304. The search stencil 312 is then used to search for similar 2D images of objects.
[00113] Turning to Figure 32, such a search is shown by the search stencil 312 being compared with the images of three other cars 314, 316, 318 that are retrieved from the 2D and 3D objects database 34. It can be appreciated that the image of car 314 and the search stencil 312 most closely overlap one another. As discussed with respect to Figure 26, a 3D CAD model of the target object is also generated from the initially obtained 2D images or 3D laser scanning. In Figure 32, the 3D CAD model of the target object 320 is combined with 2D images of the target object, for example, by projecting the 2D images onto the 3D CAD model. This colorizes and textures the 3D CAD model of the target object 320. Where the 3D CAD model is not covered, or "wall-papered", by a projection of a 2D image, it can be appreciated that estimation, inference, and interpolation methods can be used to determine the coloring and texturing of the uncovered surfaces. For example, if an image of a car 302 shows only one side, it can be inferred that the opposite side of the car has the same colors and textures.
[00114] Turning to Figure 33, from the coloured or textured 3D CAD model of the target object (e.g. the car), one or more 2D images of the 3D CAD model can be generated from different perspectives. The 2D images of the different perspectives include a rear view perspective image 330, a top-down front and side view perspective image 332, and a top-down front view perspective image 334. It can be appreciated that combining the 2D image and the 3D CAD model (either as initially obtained, or retrieved from the database 34) advantageously makes it possible to generate 2D images of different perspective views of the target object. These perspective views are different from the perspective view of the initial image 300, and thus can be used to broaden the 2D search for images of objects (related to the different perspectives). In other words, the 2D images of the different perspective views are used to find other images of similar objects in the 2D and 3D objects database 34. Therefore, based on the perspective image 330, the image 338 of a similar car is returned as a result. Similarly, based on the perspective image 334, the image 336 of the car is returned as a result.
[00115] In another aspect, systems and methods may be provided for identifying an image or 3D CAD model of an unknown object, such as irregularly shaped objects. In a further aspect, once the unknown object is identified, attributes or characteristics of the object can be identified.
[00116] For example, in a building construction consumer application, the input images could be: a photograph, taken from a commonly known camera such as an iPhone, taken at an angle close to the horizontal, from approximately 20 feet away from a pile of building sand, beside which is a builder's shovel of known length which will act as a scaling object (e.g. standard reference object). Although the pile of building sand may have an irregular shape, geometric comparisons and scaling of the 2D images against database 2D images and 3D objects of several known quantities of building sand can determine matches ranging from exact fits to rough approximations of characteristics such as area, volume, weight and cost, with estimated error.
[00117] Similarly, for an example in a restaurant consumer application, the input images could be: a photograph, taken from a commonly known camera such as an iPhone, taken at a downward angle of 45 degrees from the horizontal, from approximately 2 feet away from a plate of food, beside which is a BIC pen which will act as a scaling object (e.g. standard reference object). If the restaurant is of a known franchise or if the item is a known item on the menu, then this known information could be used in the search to help retrieve relevant data from the 2D and 3D database. Although some objects may be irregularly shaped servings of food, color comparisons can identify them as items on the menu, and geometric comparisons (e.g. in 2D and 3D) to known quantified servings can determine matches ranging from exact fits to rough approximations of characteristics such as volume, weight and calorie count, with estimated error. The application could further approximate calories consumed if an image of the leftover servings was provided.
[00118] The above are non-limiting examples. Further details of identifying an unknown object, such as an irregularly shaped object, and determining its related attributes are provided below.
[00119] Figure 34 provides example components of the 2D and 3D objects database 34, whereby 2D images or 3D CAD models of objects in the database 34, or both, may be associated with various attributes stored in the attributes database 36. 2D images 222 in the database 34 include: a 2D image of object A from a first perspective, and of a given version marked 'v1' (340); a 2D image of object A from a second perspective, and of a second version marked 'v2' (342); a 2D image of object B of a first version (344); a 2D image of object C of a first version (346); and a 2D image of object C of a second version (348). The different versions of the same object can be due to different amounts or volumes of the same object. For example, a pork chop is an irregularly shaped object, and some pork chops may be larger or smaller than others, although they are considered the same object. Examples of the 3D CAD models 220 include: a 3D CAD model of the first version of object A (350); a 3D CAD model of the second version of object A (352); a 3D CAD model of the third version of object A (354); and a 3D CAD model of the first version of object B (356). Images and 3D CAD models of object A are related to object A data tags 66. Images and 3D CAD models of object B are related to object B data tags 68. Similarly, images and 3D CAD models of object C are related to object C data tags 70. It can be seen that for an object, there may be different images and 3D CAD models, each having different versions and perspectives.
[00120] Continuing with Figure 34, examples of entries in the attributes database 36 are shown; these attributes include the portion/serving (e.g. for food), the weight 360, the volume 362, the cost 364, etc. The values associated with the attributes may be arranged in a table format, correlating the images or 3D CAD models, or both, with the attributes.
[00121] Turning to Figure 35, example computer executable instructions are provided for identifying an unknown or irregular object through comparison with known objects in the 2D and 3D objects database 34, and estimating attributes associated with the unknown or irregular object. The operations of Figure 35 may be implemented by module 42. It can be appreciated that many of the operations may be similar to those described earlier with respect to Figure 26, and thus, to be concise, the reference numerals for such operations are repeated below.
[00122] In particular, at block 234 of Figure 35, an image or images, or a 3D model, is obtained. Blocks 72, 74, 76, 78 and 80 are then performed to search for similar 3D objects in the 2D and 3D objects database 34. Subsequently, or simultaneously, a 2D search is performed. At block 237, for the one or more 2D images, the computing device 20 analyzes each image using edge detection to separate each object image into individual sub-images of the target object, for example, by identifying or outlining the perimeter of the target object. At block 238, the 2D sub-images are scaled to correspond with the dimensions of the target object. At block 241, one or more 2D spatial criteria are obtained to perform an object search for the target object. At block 242, one or more 2D search stencils are created based on the spatial tolerances. At block 244, a 2D search is conducted.
[00123] Continuing with Figure 35, at block 247, the results of the 2D image search and the 3D CAD model search are combined to further refine the results. The search results of the 2D and 3D searches are used to identify the unknown target object. Particularly, the identities of the similar objects, as returned through images or 3D CAD models from the search in the database 34, are used to infer the identity of the unknown target object. Therefore, the unknown target object assumes the identity of the similar object or objects. There are several approaches 378, 392, 400 to combining and comparing the results of the 2D search and the 3D search. Further details of these approaches are described with respect to Figures 38, 39 and 40.
[00124] Continuing with Figure 35, at block 366, based on the identified 2D image or 3D object, the computing device 20 determines a related attribute of one or more of the target objects (e.g. servings, portions, costs, weight, volume, number of pieces, etc.). [00125] Although not shown in Figure 35, or in Figure 4 and Figure 26, it can be appreciated that the 3D search process or 2D search process continually updates the 3D database 32 or the 2D and 3D database 34. As part of the search process, the 3D models in the 2D and 3D spatial data object and image database 34 are continuously improved and updated by adding the target models, especially those of higher quality. As an optional step near the end of the search process, the following operations are performed. It is first determined if a 2D and 3D search has resulted in a well matching candidate, and if the quality of the target object 3D model exceeds a certain quality level. Higher quality, in one aspect, is defined as a finer resolution which shows clearer details. Quality levels or thresholds may also be parameters manually inputted by an operator. If such conditions are satisfied, then the target object 3D model is added to either the 3D objects database 32 or the 2D images and 3D objects database 34, or both. The target object 3D model may be added as either an additional 3D model or as a replacement 3D model.
[00126] Certain of the above operations are now explained in further detail. Turning to Figure 36, example computer executable instructions are provided for isolating a sub-image from the initially obtained image, as per block 237. In particular, at block 368, edge detection is applied to identify one or more object images (e.g. sub-images) in the obtained images. At block 370, for each object image or sub-image, its attributes (e.g. texture, color, area, perimeter, height, etc.) are determined. At block 372, the computing device 20 retrieves or obtains the object type, category, or grouping, if available. Such type, category, or grouping is used to further help identify and characterize the object images. For example, if it is known that the sub-image is of a food product, for example from the trade-marked Big Breakfast meal at McDonald's, and it is known that the Big Breakfast meal includes sausage, scrambled eggs and English muffins, then based on, for example, the yellow color of the imaged food product, it can be identified that the imaged food product is a serving of scrambled eggs. At block 374, a search is performed for one or more similar 2D objects in the 2D and 3D objects database 34, which stores 2D images of objects. The searching process includes comparing the spatial criteria of the one or more adjusted 2D images with the same spatial criteria of the stored 2D images.
[00127] Turning to Figure 37, an example of retrieving 2D spatial criteria is provided, as per block 241. In particular, at block 376, it can be appreciated that for each sub-image, any one or more of the following are obtained in relation to the sub-image: height tolerance; width tolerance; perimeter length tolerance; area tolerance; and texture or color tolerance, or both. [00128] Figures 38, 39, and 40 show different approaches for implementing the operation of combining the 2D and 3D search results, as per block 247.
[00129] Turning to Figure 38, a set of example computer executable instructions 378 is provided for combining a 2D image or sub-image(s) of the target object 380 and a candidate 3D model 382 retrieved from the 2D and 3D database 34. The 2D sub-image 380 and the retrieved 3D CAD model 382 are inputs. At block 384, the 3D CAD model 382 is rotated and shifted until its 2D projection is closely aligned with the 2D sub-image 380 when projected onto its plane. At block 386, the 3D CAD model 382 is then mathematically projected onto the 2D plane of the object image to create a 2D projection of the 3D CAD model. In addition, or in the alternative, the 2D sub-image 380 can be mathematically projected onto the surface of the 3D CAD model 382, thereby creating a 3D projection of the 2D image. At block 388, at least one of the following is determined: if there is sufficient similarity between the 2D projection (of the 3D CAD model) and the sub-image; and if there is sufficient similarity between the 3D projection (of the sub-image) and the 3D CAD model. Such similarity can be measured by determining if the overlapping area is above a certain percentage, if the difference in the perimeters is below a certain value, or if the texture or color matches. At block 390, if it is determined that there is sufficient similarity, then the computing device 20 confirms that the estimated or retrieved 3D CAD model 382 is sufficiently similar to the sub-image 380 of the object.
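A hedged sketch of blocks 384 to 388 is given below: the candidate 3D model's vertices are projected onto the image plane with a pinhole camera model and the projected outline is compared against the sub-image. The camera intrinsics, the use of a convex-hull outline, and the intersection-over-union similarity measure are simplifying assumptions, not requirements of the approach.

```python
import cv2
import numpy as np

def projection_similarity(vertices, rvec, tvec, camera_matrix, sub_image_mask):
    """Project a rotated/shifted 3D model onto the image plane and compare it
    with the 2D sub-image of the target object.

    vertices: (N, 3) model points; rvec, tvec: rotation and translation from
    block 384; sub_image_mask: uint8 mask of the sub-image (255 = object).
    """
    pts_2d, _ = cv2.projectPoints(vertices.astype(np.float32), rvec, tvec,
                                  camera_matrix, None)
    hull = cv2.convexHull(pts_2d.reshape(-1, 2).astype(np.int32))
    projection_mask = np.zeros(sub_image_mask.shape, dtype=np.uint8)
    cv2.fillConvexPoly(projection_mask, hull, 255)
    intersection = np.logical_and(projection_mask > 0, sub_image_mask > 0).sum()
    union = np.logical_or(projection_mask > 0, sub_image_mask > 0).sum()
    return intersection / max(union, 1)   # higher value means greater similarity
```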
[00130] Turning to Figure 39, another approach is provided for comparing and combining the 2D and 3D search results, which is an example embodiment of block 247, described earlier. The inputs to the set of example computer executable instructions 392 include 2D images 394 retrieved from the 2D and 3D database 34, the image 394 being of a candidate object that is similar to the target object. In other words, the 2D image 394 is a search result. The inputs also include a known 3D CAD model 396 of the target object, that was initially obtained either from images, from an input 3D CAD model, or from a 3D CAD model created by laser scanning. At block 384, the 3D CAD model 396 is rotated and shifted until it is closely aligned with the retrieved 2D images when projected onto their planes. At block 386 and block 388, as similarly described above with respect to Figure 38, the 3D CAD model 396 is projected onto the 2D plane of the image 394 retrieved from the 2D and 3D database 34 to create a 2D projection, or the 2D image 394 is projected onto the 3D surface of the 3D CAD model 396, or both. Then, it is determined if there is sufficient similarity between the 2D projection and the 2D image 394, or the 3D projection and the 3D CAD model 396. At block 398, if there is sufficient similarity, the computing device 20 confirms that the retrieved 2D image 394 is sufficiently similar to the 3D CAD model 396.
[00131] Turning to Figure 40, another example approach 400 is provided, according to a further aspect of block 247. The inputs include either the 2D images of the target object or 2D images retrieved from the 2D and 3D database 34 of a candidate object similar to the target object (e.g. a search result from database 34). The 2D images are referenced 402. The inputs also include a 3D CAD model 404 which is either a candidate 3D model retrieved from the 2D and 3D database 34 or a 3D model of the target object, which has been obtained either through estimation using 2D images, or directly as an input 3D CAD model, or as a 3D CAD model derived from a point cloud (e.g. generated by laser scanning). At block 406, the computing device 20 overlays the 2D images 402 over the 3D CAD model 404 to color or texture the 3D CAD model 404. At block 408, new or different 2D snapshots or images are taken of the coloured or textured 3D CAD model 404 from different perspectives and view points. This generates 2D model images. At block 410, the generated 2D model images are used to search for other similar 2D images in the database 34. It can be appreciated that this advantageously allows for other 2D images to be found, which may be of different perspectives than the images originally provided at block 234. In other words, combining with the 3D CAD model allows different or new 2D images to be generated. At block 412, the generated 2D model images are also used to search for other 3D model objects in the database 34. This may be done by comparing 2D projections of the 3D model objects in the database 34 with the generated 2D model images.
[00132] Turning to Figure 41, an example schematic diagram illustrates how a 3D CAD model can be projected onto a 2D image plane, and how a 2D image can be projected onto a 3D CAD model. A 3D CAD model 414 of a chicken drumstick is shown. Also shown is an image 416 or image plane of a plate holding food. The food includes a sub-image 418 of a drumstick from a side perspective. In one approach, after the 3D CAD model 414 of the drumstick has been shifted and rotated, it is then projected onto the 2D image plane 416, to determine how closely its 2D projection is similar to the sub-image 418 of the chicken drumstick. In another approach, the 2D sub-image 418 of the chicken drumstick is projected onto the surface of the 3D CAD model 414 of the chicken drumstick. Based on this "3D projection", it is determined how closely the sub-image 418 and the 3D CAD model 414 match. As described above, projecting the sub-image 418 onto the 3D CAD model 414 can also be used to color and texture the 3D CAD model 414 (e.g. with the color and texture of a chicken drumstick). [00133] Turning to Figure 42, example computer executable instructions are provided for implementing block 366. In particular, attributes of the target object may be inferred or estimated based on the attributes of the similar objects. At block 430, upon identifying a 2D image of an object similar to the target object, or a 3D CAD model of an object similar to the target object, the computing device 20 determines a proportionality relationship between the target object's image or 3D model and the identified 2D image or 3D CAD model of the similar object. For example, it is identified that an image in the database of a symmetric pile of fine sand is similar to the image of a target object pile of sand. However, the image of the target object pile of sand is 1.3 times the size (e.g. in both height and in width) of the image in the database of a pile of sand. This proportionality relationship or scaling factor will be used to calculate the adjusted attributes of the target object.
[00134] At block 432, attributes of the identified image or 3D CAD model from the database 34 are obtained. Such attributes describe the object's characteristics as associated with the image or 3D CAD model. Attributes can be retrieved from the attributes database 36, and can include weight, portions, servings, cost, volume, etc. It can be appreciated that various types of attributes can be associated with an object. For example, for the image of the pile of sand, it may be known that the pile of sand has a volume of 10 cubic meters, a cost of $100, and a weight of 16,000 kg.
[00135] At block 434, attributes of the identified image or 3D CAD model are applied to the target image. The attribute values may be scaled based on the computed
proportionality relationship between the target object and the similar object. For example, since the proportionality relationship or linear scaling factor is 1.3, then the estimated attributes of the image of the target object, which is identified as sand, including volume, cost and weight, could be proportionally multiplied by 1.3 to the power of 3.
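A small worked sketch of blocks 430 to 434 follows, using the sand example above: bulk attributes such as volume, weight and cost are scaled by the cube of the linear proportionality factor. Treating cost as scaling with volume, and the dictionary representation, are assumptions made for illustration.

```python
def scale_attributes(similar_object_attributes, linear_factor):
    """Estimate the target object's attributes from those of the similar object.

    Bulk attributes (e.g. volume, weight and cost of a pile of sand) are scaled
    by the cube of the linear proportionality factor between the two objects.
    """
    volumetric_factor = linear_factor ** 3
    return {name: value * volumetric_factor
            for name, value in similar_object_attributes.items()}

# Example: the database pile of sand has a volume of 10 cubic meters, a cost
# of $100 and a weight of 16,000 kg; the target pile is 1.3 times larger.
estimated = scale_attributes(
    {"volume_m3": 10, "cost_usd": 100, "weight_kg": 16000}, 1.3)
# -> roughly 21.97 cubic meters, $219.70 and 35,152 kg (estimates with error)
```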
[00136] It can therefore be appreciated that the systems and methods of Figure 35 allow for an unidentified target object, as embodied in one or more images or a 3D CAD model, to be identified and its attributes to be determined.
[00137] In each of the above systems and methods with respect to Figures 4, 26 and 35, it can also be appreciated that the target object of interest is automatically identified in an image, or is semi-automatically identified in the image, or is manually identified in the image. Below are some example methods for identifying the target object in an image, involving manual inputs. In the context of a display 18 or screen displaying an image of the target object, an operator or user can: point at the centre of the target object; draw a bounding rectangle or bounding circle around the target object; draw a bounding polygon perimeter around the target object; or include or exclude areas using one or more of the above techniques. For example the operator may be interested, not in a window, but in the frame around the window. The operator could include the frame but exclude the window within the frame. The operator may be interested in the wing mirror of a car and not the whole car itself.
[00138] It can also be appreciated that the above principles for searching and identifying similar objects in a 3D objects database 32, or a 2D image and 3D objects database 34, or both, can be applied in various commercial environments.
[00139] In a consumer application, an automatic search can be conducted for ideal fitting shoes. A pair of well worn and very likely disfigured shoes is laser scanned on the inside and on the outside to create 3D point surfaces from which are derived TIN networks. From these TIN networks, 3D CAD surface models are derived of the inside and the outside surfaces of the shoes.
[00140] Alternatively, multiple photographs or a video, or both, are taken of the outside of a pair of shoes or of someone's feet. The images are then analyzed to create 3D CAD models (e.g. using edge detection). In the case of feet, a tightly fitting model of the space immediately enclosing each foot would likely serve the purpose of adequately describing the basic shape of the feet.
[00141] A 2D and 3D object database 34 is populated by the following: 3D models of the inside and outside surfaces of new shoes of many and various known sizes; 3D models of the inside and outside surfaces of old worn shoes of many and various known sizes; 3D models of the feet or space immediately enclosing the feet of the wearer of the worn shoes; 2D images of the above where possible; and information about the wearers of the worn shoes. Such information includes, for example, the wearer's height, weight, age, occupation and medical problems like suffering from back problems or being overweight, etc.
[00142] Geometric properties of the shoes and feet in the database are pre-calculated from the 3D models and stored to assist in searches and estimations. Such geometric properties can include, for example, foot width and arch height at several places along the length of the shoe. Notably, the 3D models themselves have many geometric properties inherently contained in their 3D shape.
[00143] Similarly, 3D geometric properties are calculated for the inside and outside 3D models of the laser scanned or photographed pair of worn shoes, or of the scanned or photographed feet.
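A minimal sketch of pre-calculating such properties from a 3D point model is shown below; the axis convention (length along X), the station count and the example dimensions are assumptions for illustration only.

```python
import numpy as np

def width_and_height_profile(points, n_stations=5):
    """Sample width (Y extent) and height (Z extent) at stations along the length (X).

    `points` is an (N, 3) array of model surface points in the model's own units.
    """
    x = points[:, 0]
    edges = np.linspace(x.min(), x.max(), n_stations + 1)
    profile = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        slab = points[(x >= lo) & (x <= hi)]  # boundary points may land in two slabs; negligible here
        if len(slab) == 0:
            continue
        width = slab[:, 1].max() - slab[:, 1].min()
        height = slab[:, 2].max() - slab[:, 2].min()
        profile.append((0.5 * (lo + hi), width, height))
    return profile

# Hypothetical foot model roughly 28 cm long, 10 cm wide, 6 cm high.
foot = np.random.default_rng(1).random((1000, 3)) * [28.0, 10.0, 6.0]
print(width_and_height_profile(foot))
```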
[00144] Searches in the database 34 are performed based on these geometric properties by comparing them to the pre-calculated geometric properties in the database 34.
[00145] By comparing the inside of the laser-scanned pair of old shoes, or the 2D images and 3D models of the scanned or photographed feet, to the insides of the many new shoes in the database 34, several possibilities can be retrieved to suggest a more suitably fitting pair of new shoes.
[00146] By comparing the outside of the laser-scanned or photographed pair of old shoes in 3D and 2D to the outsides of the many old shoes in the database 34, several possibilities of similarly worn shoes can be retrieved, along with the information about their wearers.
[00147] Such a system and database may provide a mechanism for statistical research and the development of innovative ways to offer recommendations and diagnoses. For example, a certain type of uneven wear on the shoe heels or soles may suggest, by correlation, that the wearer likely has hip or back problems, and several types of new shoe of a particular size and healthier fit can then be recommended.
[00148] In a further embodiment of the above 3D searching principles, an iterative 3D search is performed over a period of time. The iterative 3D searching process matches a target object with candidate objects in the 2D images and 3D objects database 34 by conducting several iterations or searches over time, as more and more information becomes available, until a more closely matching candidate is found. This advantageously helps the searching process return a result that more accurately or closely matches the target object.
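The iterative narrowing can be summarised by the short sketch below; `match_fn` is a hypothetical predicate standing in for the 2D/3D comparison described elsewhere in this document, not a defined part of the system.

```python
def iterative_search(images, candidates, match_fn):
    """Narrow the candidate objects as each new image of the target becomes available.

    `match_fn(image, candidate)` returns True if the candidate is still consistent
    with the new image; candidates failing any image are dropped.
    """
    for image in images:
        candidates = [c for c in candidates if match_fn(image, c)]
        if len(candidates) <= 1:
            break  # a single best match (or none) remains, so stop searching
    return candidates
```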
[00149] For example, a search is performed using a single picture of a couch taken using a mobile device, such as a device under the trade-mark iPhone. The result of the 3D and 2D search is, for example, one hundred similar matching candidates. A second picture is taken from a different angle and another search is performed on only the one hundred matching candidates. The result of the second iteration is fifteen remaining candidate objects. A third picture is taken from yet another angle and another search (e.g. the third iteration) is performed on only the fifteen remaining candidates, rendering only one candidate as the best match.
[00150] Another suggested process that uses the above principles of combined 2D and 3D searching is described according to the below stages to, for example, search for a couch.
[00151] Stage 1: Use a mobile device including a camera, for example those provided under the trade-marks iPhone or BlackBerry, to take a photograph of the front of a couch from about ten feet away and five feet above the floor. It can be appreciated that the computing device 20 described above may be the mobile device. Alternatively, the computing device 20 may be separate from, and in communication with, the mobile device. Using the above-described computer executable instructions, the image is analyzed using edge detection to generate a sub-image of the couch. A search is performed using the sub-image of the couch in the database 34. If the database 34 does not contain the camera parameters of that model of the mobile device, then take a similar image of a large rectangular box of known measurements from the same height and distance away, and use edge detection to generate sub-images of the sides of the box. Using either the camera parameters or the box image, scale the photograph to correctly represent the couch measurements.
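A minimal sketch of the reference-box scaling in Stage 1 follows. It assumes, purely for illustration, that the reference box and the couch lie at roughly the same distance from the camera so that a single pixels-per-metre scale applies; the numbers are hypothetical.

```python
def pixels_per_metre_from_reference(box_pixel_width, box_real_width_m):
    """Derive an image scale from a reference box of known size photographed
    from the same height and distance as the target object."""
    return box_pixel_width / box_real_width_m

def real_width_metres(object_pixel_width, pixels_per_metre):
    """Convert a measured pixel width in the photograph to real-world metres."""
    return object_pixel_width / pixels_per_metre

scale = pixels_per_metre_from_reference(box_pixel_width=480, box_real_width_m=1.2)
print(real_width_metres(object_pixel_width=880, pixels_per_metre=scale))  # 2.2 m couch width
```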
[00152] Stage 2: For each couch 3D model in the 3D database the computing device 20 performs the following: retrieve the 3D couch model from the 3D database 34; mathematically position and orient the 3D model at the same angle as the photograph was taken; mathematically project the oriented 3D model onto the image plane of the photo; reject the 3D model if its projection is more than three inches taller or wider than the photo suggests; reject the 3D model if its projection is more than three inches shorter or narrower than the photo suggests; reject the 3D model if its 2D projection area differs by more than one square foot from the area the photo image suggests; reject the 3D model if its projection area overlaps the photo image area by less than 95 percent; reject the 3D model if the photo image area overlaps the 3D model projection area by less than 95 percent; and, finally, add the 3D model as a candidate if it has not yet been rejected.
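A minimal sketch of these Stage 2 rejection tests is shown below. The record fields and the `overlap_fn` helper (returning the fraction of one silhouette covered by another) are hypothetical stand-ins for the projection machinery described above.

```python
from dataclasses import dataclass

@dataclass
class Silhouette2D:
    height_in: float   # inches
    width_in: float    # inches
    area_ft2: float    # square feet
    mask: object       # e.g. a boolean pixel mask of the projected outline

def accept_candidate(model_proj: Silhouette2D, photo: Silhouette2D, overlap_fn) -> bool:
    """Apply the Stage 2 rejection tests to one projected 3D model."""
    if abs(model_proj.height_in - photo.height_in) > 3.0:  # taller/shorter by more than 3 in
        return False
    if abs(model_proj.width_in - photo.width_in) > 3.0:    # wider/narrower by more than 3 in
        return False
    if abs(model_proj.area_ft2 - photo.area_ft2) > 1.0:    # areas differ by more than 1 sq ft
        return False
    if overlap_fn(model_proj.mask, photo.mask) < 0.95:     # model projection poorly covered by photo
        return False
    if overlap_fn(photo.mask, model_proj.mask) < 0.95:     # photo poorly covered by model projection
        return False
    return True
```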
[00153] Stage 3: Repeat Stage 1 using the mobile device to take a photograph of the side of the couch from about ten feet away and five feet above the floor. Repeat Stage 2 for each of the remaining selected candidate models.
[00154] Stage 4: Repeat Stage 3 as necessary, capturing photographs from other angles around the couch to narrow down the list of candidates. This process could also be done using a video camera that steadily moves in a ten-foot circle around the couch, using multiple images to narrow down the candidate models until only one or a few candidate models remain. These are then presented as similar matches along with their estimated probabilities, in this case calculated purely on geometric similarity.
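One simple way to express such probabilities, assumed here only as an illustration, is to normalise non-negative geometric similarity scores over the remaining candidates:

```python
def geometric_probabilities(candidates, target, similarity_fn):
    """Turn geometric similarity scores into probabilities by normalisation.

    `similarity_fn(target, candidate)` is a hypothetical non-negative score,
    e.g. derived from the overlap tests above; the returned probabilities
    sum to 1 over the remaining candidates.
    """
    scores = {c: similarity_fn(target, c) for c in candidates}
    total = sum(scores.values()) or 1.0  # avoid division by zero if all scores are 0
    return {c: s / total for c, s in scores.items()}
```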
[00155] Stage 5: Any 2D images and 3D models collected or generated in the process can be added to the 2D and 3D database 34 with metadata about the object of interest. In this case, the scaled video and photo images and sub-images, and information about the couch, could be added together with the associated image directions, distances, accuracies and camera information for use in future 2D and 3D searches.
[00156] At each stage the 3D models of the remaining matching candidates can be retrieved from the database, as well as any associated images. These can present to the observer the possibilities of what the other side of the photographed object may look like, and probabilities can also be associated with the remaining candidates based on attribute information in the database. For example, a search using the photograph of a car headlight might return two matching types of car. Retrieving the 3D models from the database shows detailed information about the two possibilities of what the rest of the car looks like, as well as the associated probabilities based on the availability of the types of car.
[00157] In another example, the consecutive images of a video are considered. The video progresses and more and more video images are taken or captured from different angles around the target object. Meanwhile searches in 2D and 3D are being performed in real time on selected video images, thereby reducing the number of similar candidates until only a single candidate remains. The identification of an exact match is then reported and the video data collection is automatically stopped.
[00158] In both of these examples, any 3D objects created and their associated 2D images can be added to the 3D and 2D Spatial Data Object and Image Database 34 along with the associated accuracy and resolution. In this way an ever-increasing library of objects is built up, with ever-increasing accuracies over time.
[00159] The steps or operations in the flow charts described herein are provided by way of example only. There may be many variations to these steps or operations without departing from the spirit of the invention or inventions. For instance, the steps may be performed in a differing order, or steps may be added, deleted, or modified.
[00160] While the basic principles of this invention or these inventions have been herein illustrated along with the embodiments shown, it will be appreciated by those skilled in the art that variations in the disclosed arrangement, both as to its details and the organization of such details, may be made without departing from the spirit and scope thereof. Accordingly, it is intended that the foregoing disclosure and the showings made in the drawings will be considered only as illustrative of the principles of the invention or inventions, and not construed in a limiting sense.

Claims

Claims:
1. A method for object searching, the method comprising:
-obtaining a 3D model of a target object;
-obtaining one or more spatial criteria related to the 3D model to perform a search;
-conducting a search in a 3D objects database by comparing the one or more spatial criteria with at least one 3D model stored in the database; and,
-after determining a certain 3D model stored in the database satisfies the one or more spatial criteria, returning the certain 3D model as being similar to the target object.
2. The method of claim 1 further comprising scaling the 3D model of the target object to correspond with actual dimensions of the target object.
3. The method of claim 2 wherein scaling the 3D model of the target object comprises:
-comparing dimensions of the 3D model of the target object to dimensions of a standard reference object to generate one or more scaling factors; and
-applying the one or more scaling factors to the 3D model of the target object.
4. The method of claim 1 wherein the one or more spatial criteria comprises one or more spatial tolerances, and the method further comprising:
-generating a 3D search shell by adjusting the 3D model of the target object based on the one or more spatial tolerances; and
-comparing the 3D search shell with the at least one 3D model stored in the database to conduct the search.
5. The method of claim 4 wherein comparing the 3D search shell with the at least one 3D model stored in the database comprises:
-superimposing the 3D search shell with the at least one 3D model stored in the database;
-computing one or more overlapping parameters between the 3D search shell and the at least one 3D model stored in the database;
-determining if the one or more overlapping parameters are within a specified range; and
-if so, determining the at least one 3D model stored in the database is the certain 3D model that satisfies the one or more spatial criteria.
6. The method of claim 5 wherein the one or more overlapping parameters comprises computing a percentage volume of the at least one 3D model stored in the database that lies within the 3D search shell.
7. The method of claim 5 wherein the one or more overlapping parameters comprises computing a percentage volume of the 3D search shell that lies within the at least one 3D model stored in the database.
8. The method of claim 5 wherein the one or more overlapping parameters comprises computing a percentage area of the at least one 3D model stored in the database that lies within the 3D search shell.
9. The method of claim 5 wherein the one or more overlapping parameters comprises computing a percentage area of the 3D search shell that lies within the at least one 3D model stored in the database.
10. The method of claim 1 wherein the 3D model of the target object is obtained by using 2D images of the target object.
11. The method of claim 10 further comprising:
-receiving the 2D images of the target object photographed against a uniform background;
-receiving distances between the target object and the camera device that captured the 2D images;
-generating silhouettes of the target object from the 2D images; and
-projecting the silhouettes using the distances between the target object and the camera device to form a volume of space representing the 3D model of the target object.
12. A computer readable medium comprising computer executable instructions for object searching, the computer executable instructions comprising:
-obtaining a 3D model of a target object;
-obtaining one or more spatial criteria related to the 3D model to perform a search;
-conducting a search in a 3D objects database by comparing the one or more spatial criteria with at least one 3D model stored in the database; and,
-after determining a certain 3D model stored in the database satisfies the one or more spatial criteria, returning the certain 3D model as being similar to the target object.
13. A method for object searching, the method comprising:
-obtaining 2D images of a target object;
-obtaining a 3D model of the target object;
-obtaining one or more spatial criteria related to the 2D images and 3D model to perform a search;
-conducting a 2D search for 2D images of similar objects in a 2D image database, the 2D search based on the 2D images of the target object;
-conducting a 3D search for 3D models of similar objects in a 3D objects database, the 3D search based on the 3D model of the target object;
-determining a certain 2D image stored in the 2D image database, or a certain 3D model stored in the 3D objects database, or both, satisfies the one or more spatial criteria;
-returning at least one of the certain 2D image and the certain 3D model as being similar to the target object; and
-combining results of the 2D search and the 3D search to decrease or increase the number of search results.
14. The method of claim 13 wherein the 3D model of the target object is obtained from the 2D images of the target object.
15. The method of claim 13 wherein the 3D model of the target object is obtained from a laser scan of the target object.
16. The method of claim 13 wherein the one or more spatial criteria comprises one or more spatial tolerances, and the method further comprising:
-generating a 2D search stencil by adjusting at least one of the 2D images of the target object based on the one or more spatial tolerances; and
-comparing the 2D search stencil with the 2D images of similar objects in the 2D image database to conduct the 2D search.
17. The method of claim 16 wherein comparing the 2D search stencil with the 2D images of similar objects in the 2D image database comprises:
-superimposing the 2D search stencil with at least one of the 2D images stored in the 2D image database;
-computing one or more overlapping parameters between the 2D search stencil and the at least one of the 2D images stored in the 2D image database;
-determining if the one or more overlapping parameters are within a specified range; and
-if so, determining the at least one of the 2D images stored in the 2D image database is the certain 2D image that satisfies the one or more spatial criteria.
18. The method of claim 17 wherein the one or more overlapping parameters comprises computing a percentage area of the at least one of the 2D images stored in the 2D image database that lies within the 2D search stencil.
19. The method of claim 17 wherein the one or more overlapping parameters comprises computing a percentage area of the 2D search stencil that lies within the at least one of the 2D images stored in the 2D image database.
20. The method of claim 17 wherein the one or more overlapping parameters comprises computing a difference between a perimeter of the at least one of the 2D images stored in the 2D image database and a perimeter of the 2D search stencil.
21. The method of claim 13 wherein the one or more spatial criteria comprises one or more spatial tolerances, and the method further comprising:
-generating a 3D search shell by adjusting the 3D model of the target object based on the one or more spatial tolerances; and
-comparing the 3D search shell with at least one of the 3D models stored in the 3D objects database to conduct the 3D search.
22. The method of claim 21 wherein comparing the 3D search shell with the at least one of the 3D models stored in the 3D objects database comprises:
-superimposing the 3D search shell with the at least one of the 3D models stored in the 3D objects database;
-computing one or more overlapping parameters between the 3D search shell and the at least one of the 3D models stored in the 3D objects database;
-determining if the one or more overlapping parameters are within a specified range; and
-if so, determining the at least one of the 3D models stored in the 3D objects database is the certain 3D model that satisfies the one or more spatial criteria.
23. The method of claim 22 wherein the one or more overlapping parameters comprises computing a percentage volume of the at least one of the 3D models stored in the 3D objects database that lies within the 3D search shell.
24. The method of claim 22 wherein the one or more overlapping parameters comprises computing a percentage volume of the 3D search shell that lies within the at least one of the 3D models stored in the 3D objects database.
25. The method of claim 22 wherein the one or more overlapping parameters comprises computing a percentage area of the at least one of the 3D models stored in the 3D objects database that lies within the 3D search shell.
26. The method of claim 22 wherein the one or more overlapping parameters comprises computing a percentage area of the 3D search shell that lies within the at least one of the 3D models stored in the 3D objects database.
27. A computer readable medium comprising computer executable instructions for object searching, the computer executable instructions comprising:
-obtaining 2D images of a target object;
-obtaining a 3D model of the target object;
-obtaining one or more spatial criteria related to the 2D images and 3D model to perform a search;
-conducting a 2D search for 2D images of similar objects in a 2D image database, the 2D search based on the 2D images of the target object;
-conducting a 3D search for 3D models of similar objects in a 3D objects database, the 3D search based on the 3D model of the target object;
-determining a certain 2D image stored in the 2D image database, or a certain 3D model stored in the 3D objects database, or both, satisfies the one or more spatial criteria;
-returning at least one of the certain 2D image and the certain 3D model as being similar to the target object; and
-combining results of the 2D search and the 3D search to decrease or increase the number of search results.
28. A method for object identification, the method comprising:
-obtaining a 2D image of a target object and a 3D model of the target object;
-obtaining one or more spatial criteria related to the 2D image of the target object and the 3D model of the target object;
-performing a combined 2D and 3D search for similar objects in a 2D image and 3D models database, the combined 2D and 3D search comprising comparing the one or more spatial criteria related to the 2D image and the 3D model of the target object with one or more 2D images or 3D models in the database;
-determining a certain 2D image and a certain 3D model stored in the database that satisfies the one or more spatial criteria;
-returning the certain 2D image and the certain 3D model as being similar to the target object;
-obtaining an identity of the object from the database and associating the identity and its associated attributes with the target object; and
-storing the 2D images and 3D models of the target object to the 2D and 3D database in association with the certain 2D image and the certain 3D model.
29. The method of claim 28 wherein the combined 2D and 3D search further comprises geometrically comparing 2D images to 3D models from different perspectives by mathematically projecting the certain 3D model onto the 2D image of the target object, or projecting the 3D model of the target object onto the certain 2D image, or both.
30. The method of claim 28 wherein the combined 2D and 3D search further comprises geometrically comparing 2D images to 3D models from different perspectives by mathematically projecting the 2D image of the target object onto the certain 3D model, or projecting the certain 2D image onto the 3D model of the target object, or both.
31. The method of claim 28 wherein the method is iterated as more 2D images and 3D models become available.
32. The method of claim 28 further comprising:
-determining a proportionality relationship between the certain 2D image and at least one of the 2D image of the target object and the 3D model of the target object;
-obtaining attributes of the certain 2D image from the database;
-scaling the attributes based on the proportionality relationship;
-applying the scaled attributes to the at least one of the 2D image of the target object and the 3D model of the target object.
33. The method of claim 32 wherein the proportionality relationship is a proportion of spatial dimensions.
34. The method of claim 28 further comprising:
-determining a proportionality relationship between the certain 3D model and at least one of the 2D image of the target object and the 3D model of the target object;
-obtaining attributes of the certain 3D model from the database;
-scaling the attributes based on the proportionality relationship;
-applying the scaled attributes to the at least one of the 2D image of the target object and the 3D model of the target object.
35. The method of claim 34 wherein the proportionality relationship is a proportion of spatial dimensions.
36. A computer readable medium comprising computer executable instructions for object identification, the computer executable instructions comprising:
-obtaining a 2D image of a target object and a 3D model of the target object;
-obtaining one or more spatial criteria related to the 2D image of the target object and the 3D model of the target object;
-performing a combined 2D and 3D search for similar objects in a 2D image and 3D models database, the combined 2D and 3D search comprising comparing the one or more spatial criteria related to the 2D image and the 3D model of the target object with one or more 2D images or 3D models in the database;
-determining a certain 2D image and a certain 3D model stored in the database that satisfies the one or more spatial criteria;
-returning the certain 2D image and the certain 3D model as being similar to the target object;
-obtaining an identity of the object from the database and associating the identity and its associated attributes with the target object; and
-storing the 2D images and 3D models of the target object to the 2D and 3D database in association with the certain 2D image and the certain 3D model.
PCT/CA2011/050700 2010-11-10 2011-11-10 System and method for object searching using spatial data WO2012061945A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US41211210P 2010-11-10 2010-11-10
US61/412,112 2010-11-10

Publications (1)

Publication Number Publication Date
WO2012061945A1 true WO2012061945A1 (en) 2012-05-18

Family

ID=46050283

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CA2011/050700 WO2012061945A1 (en) 2010-11-10 2011-11-10 System and method for object searching using spatial data

Country Status (1)

Country Link
WO (1) WO2012061945A1 (en)

Cited By (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9789649B2 (en) 2012-07-31 2017-10-17 Makerbot Industries, Llc Printer with laser scanner and tool-mounted camera
WO2014022134A3 (en) * 2012-07-31 2014-05-01 Makerbot Industries, Llc Three-dimensional printer with laser line scanner
US9172829B2 (en) 2012-07-31 2015-10-27 Makerbot Industries, Llc Three-dimensional printer with laser line scanner
WO2014022134A2 (en) * 2012-07-31 2014-02-06 Makerbot Industries, Llc Three-dimensional printer with laser line scanner
WO2014035844A3 (en) * 2012-08-28 2014-05-01 Digital Signal Corporation System and method for refining coordinate-based three-dimensional images obtained from a three-dimensional measurement system
US9836483B1 (en) * 2012-08-29 2017-12-05 Google Llc Using a mobile device for coarse shape matching against cloud-based 3D model database
WO2014151746A2 (en) 2013-03-15 2014-09-25 Urc Ventures Inc. Determining object volume from mobile device images
WO2014151746A3 (en) * 2013-03-15 2014-11-13 Urc Ventures Inc. Determining object volume from mobile device images
US9196084B2 (en) 2013-03-15 2015-11-24 Urc Ventures Inc. Determining object volume from mobile device images
US9367921B2 (en) 2013-03-15 2016-06-14 URC Ventures, Inc. Determining object volume from mobile device images
AU2014236959B2 (en) * 2013-03-15 2017-05-04 Everypoint, Inc. Determining object volume from mobile device images
US20150169723A1 (en) * 2013-12-12 2015-06-18 Xyzprinting, Inc. Three-dimensional image file searching method and three-dimensional image file searching system
US9817845B2 (en) * 2013-12-12 2017-11-14 Xyzprinting, Inc. Three-dimensional image file searching method and three-dimensional image file searching system
US11679560B2 (en) 2014-01-16 2023-06-20 Hewlett-Packard Development Company, L.P. Generating a three-dimensional object
US11673314B2 (en) 2014-01-16 2023-06-13 Hewlett-Packard Development Company, L.P. Generating three-dimensional objects
US11618217B2 (en) 2014-01-16 2023-04-04 Hewlett-Packard Development Company, L.P. Generating three-dimensional objects
US10201961B2 (en) 2014-08-29 2019-02-12 Hewlett-Packard Development Company, L.P. Generation of three-dimensional objects
US10800155B2 (en) 2014-08-29 2020-10-13 Hewlett-Packard Development Company, L.P. Generation of three-dimensional objects
US9947126B2 (en) 2015-09-30 2018-04-17 International Business Machines Corporation Storing and comparing three-dimensional objects in three-dimensional storage
US11846733B2 (en) * 2015-10-30 2023-12-19 Coda Octopus Group Inc. Method of stabilizing sonar images
WO2017131771A1 (en) * 2016-01-29 2017-08-03 Hewlett-Packard Development Company, L.P. Identify a model that matches a 3d object
US10656624B2 (en) 2016-01-29 2020-05-19 Hewlett-Packard Development Company, L.P. Identify a model that matches a 3D object
US9495764B1 (en) 2016-03-21 2016-11-15 URC Ventures, Inc. Verifying object measurements determined from mobile device images
US10403037B1 (en) 2016-03-21 2019-09-03 URC Ventures, Inc. Verifying object measurements determined from mobile device images
US20180082414A1 (en) * 2016-09-21 2018-03-22 Astralink Ltd. Methods Circuits Assemblies Devices Systems Platforms and Functionally Associated Machine Executable Code for Computer Vision Assisted Construction Site Inspection
US20180089524A1 (en) * 2016-09-29 2018-03-29 Fanuc Corporation Object recognition device and object recognition method
US10482341B2 (en) * 2016-09-29 2019-11-19 Fanuc Corporation Object recognition device and object recognition method
US10186049B1 (en) 2017-03-06 2019-01-22 URC Ventures, Inc. Determining changes in object structure over time using mobile device images
WO2019099167A1 (en) * 2017-11-17 2019-05-23 Kodak Alaris Inc. Automated in-line object inspection
US20190251744A1 (en) * 2018-02-12 2019-08-15 Express Search, Inc. System and method for searching 3d models using 2d images
WO2019157515A1 (en) * 2018-02-12 2019-08-15 Express Search, Inc. System and method for searching 3d models using 2d images
CN108776342A (en) * 2018-07-13 2018-11-09 电子科技大学 A kind of high speed platform SAR moving-target detection and speed estimation method at a slow speed
CN110378953A (en) * 2019-07-17 2019-10-25 重庆市畜牧科学院 A kind of method of spatial distribution behavior in intelligent recognition swinery circle
US11282291B1 (en) 2021-02-09 2022-03-22 URC Ventures, Inc. Determining object structure using fixed-location cameras with only partial view of object
US11741618B2 (en) 2021-03-22 2023-08-29 Everypoint, Inc. Performing object modeling by combining visual data from images with motion data of the image acquisition device
WO2023119293A1 (en) * 2021-12-22 2023-06-29 Beegris Ltd. 3d model search
US11687687B1 (en) 2022-03-18 2023-06-27 Protolabs, Inc. Apparatuses and methods for superimposition of a cross-sectional drawing over a three-dimensional model

Similar Documents

Publication Publication Date Title
WO2012061945A1 (en) System and method for object searching using spatial data
US8396284B2 (en) Smart picking in 3D point clouds
JP6810247B2 (en) Systems and methods to automatically generate metadata for media documents
CN107111833B (en) Fast 3D model adaptation and anthropometry
US20190279420A1 (en) Automated roof surface measurement from combined aerial lidar data and imagery
Bernardini et al. The 3D model acquisition pipeline
US10803292B2 (en) Separation of objects in images from three-dimensional cameras
Monnier et al. Trees detection from laser point clouds acquired in dense urban areas by a mobile mapping system
CN113498530A (en) Object size marking system and method based on local visual information
US20130202197A1 (en) System and Method for Manipulating Data Having Spatial Co-ordinates
Rašković et al. Clean construction and demolition waste material cycles through optimised pre-demolition waste audit documentation: A review on building material assessment tools
Demir et al. Automated modeling of 3D building roofs using image and LiDAR data
WO2012034236A1 (en) System and method for detailed automated feature extraction from data having spatial coordinates
Pound et al. A patch-based approach to 3D plant shoot phenotyping
Remondino et al. Design and implement a reality-based 3D digitisation and modelling project
US11869256B2 (en) Separation of objects in images from three-dimensional cameras
Kim et al. Block world reconstruction from spherical stereo image pairs
Guan et al. Partially supervised hierarchical classification for urban features from lidar data with aerial imagery
CN112149348A (en) Simulation space model training data generation method based on unmanned container scene
Galantucci et al. Coded targets and hybrid grids for photogrammetric 3D digitisation of human faces
Kampel et al. Profile-based pottery reconstruction
CN111753112A (en) Information generation method and device and storage medium
CN104680520B (en) It is a kind of scene three-dimensional information investigate method and system on the spot
Ward et al. A model-based approach to recovering the structure of a plant from images
Abdelhafiz et al. Automatic texture mapping mega-projects

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11840126

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHT PURSUANT TO RULE 112(1) EPC DATED 21.08.13

122 Ep: pct application non-entry in european phase

Ref document number: 11840126

Country of ref document: EP

Kind code of ref document: A1