US20210272316A1 - Method, System and Apparatus for Object Detection in Point Clouds
- Publication number: US20210272316A1 (application US 17/322,545)
- Authority: United States
- Legal status: Pending
Classifications
- G06Q10/087—Inventory or stock management, e.g. order filling, procurement or balancing against orders
- G06K9/2054
- G06T7/0008—Industrial image inspection checking presence/absence
- G06T7/11—Region-based segmentation
- G06T7/13—Edge detection
- G06T7/50—Depth or shape recovery
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06V10/255—Detecting or recognising potential candidate objects based on visual cues, e.g. shapes
- G06V20/10—Terrestrial scenes
- G06V20/64—Three-dimensional objects
- G06T2207/10028—Range image; Depth image; 3D point clouds
- G06T2207/20021—Dividing image into blocks, subimages or windows
- G06T2207/20221—Image fusion; Image merging
Description
- Environments in which inventories of objects are managed, such as products for purchase in a retail environment, may be complex and fluid. For example, a given environment may contain a wide variety of objects with different attributes (size, shape, price and the like). Further, the placement and quantity of the objects in the environment may change frequently. Still further, imaging conditions such as lighting may be variable both over time and at different locations in the environment. These factors may reduce the accuracy with which such objects can be detected in data captured within the environment.
- The accompanying figures, where like reference numerals refer to identical or functionally similar elements throughout the separate views, together with the detailed description below, are incorporated in and form part of the specification, and serve to further illustrate embodiments of concepts that include the claimed invention, and explain various principles and advantages of those embodiments.
- FIG. 1 is a schematic of a mobile automation system.
- FIG. 2A depicts a mobile automation apparatus in the system of FIG. 1 .
- FIG. 2B is a block diagram of certain internal hardware components of the mobile automation apparatus in the system of FIG. 1 .
- FIG. 3 is a flowchart of a method for detecting objects in a point cloud.
- FIG. 4A is a diagram illustrating data obtained at block 305 of the method of FIG. 3 .
- FIG. 4B is a diagram illustrating the transformation of the data obtained at block 305 of the method of FIG. 3 to a secondary frame of reference.
- FIG. 5A is a diagram illustrating the removal of a portion of the point cloud at block 315 of the method of FIG. 3 .
- FIGS. 5B and 6A-6B are diagrams illustrating the generation of an occupancy grid at block 320 of the method of FIG. 3 .
- FIG. 7 is a diagram illustrating contiguous sub-regions identified in the occupancy grid at block 330 of the method of FIG. 3 .
- FIGS. 8 and 9A are diagrams illustrating the performance of blocks 340 - 350 of the method of FIG. 3 .
- FIG. 9B is a diagram illustrating detected object positions generated at block 355 of the method of FIG. 3 .
- FIG. 10 is a diagram illustrating the segmentation of a point cloud prior to object detection.
- Skilled artisans will appreciate that elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help to improve understanding of embodiments of the present invention.
- The apparatus and method components have been represented where appropriate by conventional symbols in the drawings, showing only those specific details that are pertinent to understanding the embodiments of the present invention so as not to obscure the disclosure with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein.
- Examples disclosed herein are directed to a method of detecting objects on a support structure, the method comprising: obtaining a point cloud of the support structure supporting an object; obtaining a position of a back of the support structure in the point cloud; discarding a portion of the point cloud based on the position of the back of the support structure; generating a three-dimensional occupancy grid from the point cloud, the occupancy grid having a plurality of cells each indicating whether the cell is one of occupied and unoccupied; identifying a contiguous region of occupied cells in the occupancy grid corresponding to the object; and generating a detected object position based on the contiguous region.
- Additional examples disclosed herein are directed to a computing device for detecting objects on a support structure, comprising: a memory storing (i) a point cloud of the support structure supporting an object, and (ii) a position of a back of the support structure in the point cloud; an imaging controller connected to the memory, the imaging controller configured to: retrieve, from the memory, the point cloud and the position of the back of the support structure in the point cloud; discard a portion of the point cloud based on the position of the back of the support structure; generate a three-dimensional occupancy grid from the point cloud, the occupancy grid having a plurality of cells each indicating whether the cell is one of occupied and unoccupied; identify a contiguous region of occupied cells in the occupancy grid corresponding to the object; and generate a detected object position based on the contiguous region.
- Further examples disclosed herein are directed to a non-transitory computer-readable medium storing a plurality of computer-readable instructions executable by a processor of a computing device, wherein execution of the instructions configures the computing device to: obtain a point cloud of a support structure supporting an object; obtain a position of a back of the support structure in the point cloud; discard a portion of the point cloud based on the position of the back of the support structure; generate a three-dimensional occupancy grid from the point cloud, the occupancy grid having a plurality of cells each indicating whether the cell is one of occupied and unoccupied; identify a contiguous region of occupied cells in the occupancy grid corresponding to the object; and generate a detected object position based on the contiguous region.
- FIG. 1 depicts a mobile automation system 100 in accordance with the teachings of this disclosure.
- the system 100 is illustrated as being deployed in a retail environment, but in other embodiments can be deployed in a variety of other environments, including warehouses, hospitals, and the like.
- the system 100 includes a server 101 in communication with at least one mobile automation apparatus 103 (also referred to herein simply as the apparatus 103 ) and at least one client computing device 105 via communication links 107 , illustrated in the present example as including wireless links.
- the links 107 are provided by a wireless local area network (WLAN) deployed within the retail environment by one or more access points (not shown).
- the server 101 , the client device 105 , or both, are located outside the retail environment, and the links 107 therefore include wide-area networks such as the Internet, mobile networks, and the like.
- the system 100 also includes a dock 108 for the apparatus 103 in the present example.
- the dock 108 is in communication with the server 101 via a link 109 that in the present example is a wired link. In other examples, however, the link 109 is a wireless link.
- the client computing device 105 is illustrated in FIG. 1 as a mobile computing device, such as a tablet, smart phone or the like. In other examples, the client device 105 is implemented as another type of computing device, such as a desktop computer, a laptop computer, another server, a kiosk, a monitor, and the like.
- the system 100 can include a plurality of client devices 105 in communication with the server 101 via respective links 107 .
- the system 100 is deployed, in the illustrated example, in a retail environment including a plurality of support structures such as shelf modules 110 - 1 , 110 - 2 , 110 - 3 and so on (collectively referred to as shelves 110 , and generically referred to as a shelf 110 —this nomenclature is also employed for other elements discussed herein). In other examples, additional types of support structures may also be present, such as pegboards.
- Each shelf module 110 supports a plurality of products 112 .
- Each shelf module 110 includes a shelf back 116 - 1 , 116 - 2 , 116 - 3 and a support surface (e.g. support surface 117 - 3 as illustrated in FIG. 1 ) extending from the shelf back 116 to a shelf edge 118 - 1 , 118 - 2 , 118 - 3 .
- shelf modules 110 are typically arranged in a plurality of aisles, each of which includes a plurality of modules 110 aligned end-to-end.
- the shelf edges 118 face into the aisles, through which customers in the retail environment as well as the apparatus 103 may travel.
- The term “shelf edge” as employed herein, which may also be referred to as the edge of a support surface (e.g., the support surfaces 117 ), refers to a surface bounded by adjacent surfaces having different angles of inclination. In the example illustrated in FIG. 1 , the shelf edge 118 - 3 is at an angle of about ninety degrees relative to each of the support surface 117 - 3 and the underside (not shown) of the support surface 117 - 3 . In other examples, the angles between the shelf edge 118 - 3 and the adjacent surfaces, such as the support surface 117 - 3 , are more or less than ninety degrees.
- the shelf edges 118 define a front of the shelves 110 , separated from the shelf backs 116 by a shelf depth.
- a common frame of reference 102 is illustrated in FIG. 1 . In the present example, the shelf depth is defined in the Y dimension of the frame of reference 102 , while the shelf backs 116 and shelf edges 118 are shown as being parallel to the XZ plane.
- the apparatus 103 is deployed within the retail environment, and communicates with the server 101 (e.g. via the link 107 ) to navigate, autonomously or partially autonomously, along a length 119 (illustrated in FIG. 1 as being parallel to the X axis of the frame of reference 102 ) of at least a portion of the shelves 110 .
- the apparatus 103 autonomously or in conjunction with the server 101 , is configured to continuously determine its location within the environment, for example with respect to a map of the environment.
- the apparatus 103 may also be configured to update the map (e.g. via a simultaneous mapping and localization, or SLAM, process).
- the apparatus 103 is equipped with a plurality of navigation and data capture sensors 104 , such as image sensors (e.g. one or more digital cameras) and depth sensors (e.g. one or more Light Detection and Ranging (LIDAR) sensors, one or more depth cameras employing structured light patterns, such as infrared light, or the like).
- the apparatus 103 can be configured to employ the sensors 104 to both navigate among the shelves 110 (e.g. according to the paths mentioned above) and to capture shelf data, such as point cloud and image data, during such navigation.
- the server 101 includes a special purpose imaging controller, such as a processor 120 , specifically designed to control and/or assist the mobile automation apparatus 103 to navigate the environment and to capture data.
- the processor 120 can be further configured to obtain the captured data via a communications interface 124 for storage in a repository 132 and subsequent processing (e.g. to detect objects such as shelved products 112 in the captured data, and detect status information corresponding to the objects).
- the server 101 may also be configured to transmit status notifications (e.g. notifications indicating that products are out-of-stock, low stock or misplaced) to the client device 105 responsive to the determination of product status data.
- the client device 105 includes one or more controllers (e.g. central processing units (CPUs) and/or field-programmable gate arrays (FPGAs) and the like) configured to process (e.g. to display) notifications received from the server 101 .
- the processor 120 is interconnected with a non-transitory computer readable storage medium, such as the above-mentioned memory 122 , having stored thereon computer readable instructions for performing various functionality, including control of the apparatus 103 to capture shelf data, post-processing of the shelf data, and generating and providing certain navigational data to the apparatus 103 , such as target locations at which to capture shelf data.
- the memory 122 includes a combination of volatile (e.g. Random Access Memory or RAM) and non-volatile memory (e.g. read only memory or ROM, Electrically Erasable Programmable Read Only Memory or EEPROM, flash memory).
- the processor 120 and the memory 122 each comprise one or more integrated circuits.
- the processor 120 is implemented as one or more central processing units (CPUs) and/or graphics processing units (GPUs).
- the server 101 also includes the above-mentioned communications interface 124 interconnected with the processor 120 .
- the communications interface 124 includes suitable hardware (e.g. transmitters, receivers, network interface controllers and the like) allowing the server 101 to communicate with other computing devices—particularly the apparatus 103 , the client device 105 and the dock 108 —via the links 107 and 109 .
- the links 107 and 109 may be direct links, or links that traverse one or more networks, including both local and wide-area networks.
- the specific components of the communications interface 124 are selected based on the type of network or other links that the server 101 is required to communicate over.
- a wireless local-area network is implemented within the retail environment via the deployment of one or more wireless access points.
- the links 107 therefore include either or both wireless links between the apparatus 103 and the mobile device 105 and the above-mentioned access points, and a wired link (e.g. an Ethernet-based link) between the server 101 and the access point.
- the memory 122 stores a plurality of applications, each including a plurality of computer readable instructions executable by the processor 120 .
- the execution of the above-mentioned instructions by the processor 120 configures the server 101 to perform various actions discussed herein.
- the applications stored in the memory 122 include an object detection application 128 (also referred to herein as the application 128 ), which may also be implemented as a suite of logically distinct applications.
- In general, via execution of the application 128 or subcomponents thereof and in conjunction with the other components of the server 101 , the processor 120 is configured to implement various functionality related to obtaining captured data from the apparatus 103 and performing various post-processing operations on the captured data.
- execution of the application 128 configures the server 101 to detect objects (e.g. the products 112 ) on the shelves 110 from point cloud data, such as a point cloud generated from data captured by the apparatus 103 .
- the processor 120 as configured via the execution of the control application 128 , is also referred to herein as the above-mentioned imaging controller 120 .
- Some or all of the functionality implemented by the controller 120 described below may also be performed by preconfigured special purpose hardware controllers (e.g. one or more FPGAs and/or Application-Specific Integrated Circuits (ASICs) having logic circuit arrangements configured to enhance the processing speed of imaging computations) rather than by execution of the application 128 by the processor 120 .
- the apparatus 103 includes a chassis 201 containing a locomotive mechanism 203 (e.g. one or more electrical motors driving wheels, tracks or the like).
- the apparatus 103 further includes a sensor mast 205 supported on the chassis 201 and, in the present example, extending upwards (e.g., substantially vertically) from the chassis 201 .
- the mast 205 supports the sensors 104 mentioned earlier.
- the sensors 104 include at least one imaging sensor 207 , such as a digital camera, as well as at least one depth sensor 209 , such as a 3D digital camera.
- the apparatus 103 also includes additional depth sensors, such as LIDAR sensors 211 .
- the apparatus 103 includes additional sensors, such as one or more RFID readers, temperature sensors, and the like.
- the mast 205 supports seven digital cameras 207 - 1 through 207 - 7 , and two LIDAR sensors 211 - 1 and 211 - 2 .
- the mast 205 also supports a plurality of illumination assemblies 213 , configured to illuminate the fields of view of the respective cameras 207 . That is, the illumination assembly 213 - 1 illuminates the field of view of the camera 207 - 1 , and so on.
- the sensors 207 and 211 are oriented on the mast 205 such that the fields of view of each sensor face a shelf 110 along the length 119 of which the apparatus 103 is travelling.
- the apparatus 103 is configured to track a location of the apparatus 103 (e.g. a location of the center of the chassis 201 ) in the common frame of reference 102 previously established in the retail facility, permitting data captured by the mobile automation apparatus 103 to be registered to the common frame of reference.
- the mobile automation apparatus 103 includes a special-purpose controller, such as a processor 220 , as shown in FIG. 2B , interconnected with a non-transitory computer readable storage medium, such as a memory 222 .
- the memory 222 includes a combination of volatile (e.g. Random Access Memory or RAM) and non-volatile memory (e.g. read only memory or ROM, Electrically Erasable Programmable Read Only Memory or EEPROM, flash memory).
- the processor 220 and the memory 222 each comprise one or more integrated circuits.
- the memory 222 stores computer readable instructions for execution by the processor 220 .
- the memory 222 stores a control application 228 which, when executed by the processor 220 , configures the processor 220 to perform various functions related to the navigation of the apparatus 103 and capture of data for subsequent processing, e.g. by the server 101 . In some embodiments, such subsequent processing can be performed by the apparatus 103 itself via execution of the application 228 .
- the application 228 may also be implemented as a suite of distinct applications in other examples.
- the processor 220 when so configured by the execution of the application 228 , may also be referred to as an imaging controller 220 .
- Those skilled in the art will appreciate that the functionality implemented by the processor 220 via the execution of the application 228 may also be implemented by one or more specially designed hardware and firmware components, such as FPGAs, ASICs and the like having logic circuit arrangements configured to enhance the processing speed of navigational and/or imaging computations in other embodiments.
- the memory 222 may also store a repository 232 containing, for example, one or more maps representing the environment in which the apparatus 103 operates, for use during the execution of the application 228 .
- the apparatus 103 may communicate with the server 101 , for example to receive instructions to navigate to specified locations and initiate data capture operations, via a communications interface 224 over the link 107 shown in FIG. 1 .
- the communications interface 224 also enables the apparatus 103 to communicate with the server 101 via the dock 108 and the link 109 .
- some or all of the processing performed by the server 101 may be performed by the apparatus 103 , and some or all of the processing performed by the apparatus 103 may be performed by the server 101 . That is, although in the illustrated example the application 128 resides in the server 101 , in other embodiments some or all of the actions described below to detect objects on the shelves 110 from captured data may be performed by the processor 220 of the apparatus 103 , either in conjunction with or independently from the processor 120 of the server 101 .
- distribution of such computations between the server 101 and the mobile automation apparatus 103 may depend upon respective processing speeds of the processors 120 and 220 , the quality and bandwidth of the link 107 , as well as criticality level of the underlying instruction(s).
- Turning to FIG. 3 , a method 300 of detecting objects is shown. The method 300 will be described in conjunction with its performance by the server 101 , with reference to the components illustrated in FIG. 1 .
- At block 305 , the server 101 is configured to obtain a point cloud of the support structure, as well as a plane definition corresponding to the front of the support structure.
- the point cloud obtained at block 305 therefore represents at least a portion of a shelf module 110 (and may represent a plurality of shelf modules 110 ), and the plane definition corresponds to a shelf plane that corresponds to the front of the shelf modules 110 .
- the plane definition defines a plane that contains the shelf edges 118 .
- the point cloud and plane definition obtained at block 305 can be retrieved from the repository 132 .
- the server 101 may have previously received captured data from the apparatus 103 including a plurality of lidar scans of the shelf modules 110 , and generated a point cloud from the lidar scans.
- Each point in the point cloud represents a point on a surface of the shelves 110 , products 112 , and the like (e.g. a point that the scan line of a lidar sensor 211 impacted), and is defined by a set of coordinates (X, Y and Z) in the frame of reference 102 .
- the plane definition may also be previously generated by the server 101 and stored in the repository 132 , for example from the above-mentioned point cloud.
- the server 101 can be configured to process the point cloud, the raw lidar data, image data captured by the cameras 207 , or a combination thereof, to identify shelf edges 118 according to predefined characteristics of the shelf edges 118 .
- Such characteristics include that the shelf edges 118 are likely to be substantially planar, and are also likely to be closer to the apparatus 103 (as the apparatus 103 travels the length 119 of a shelf module 110 ) than other objects (such as the shelf backs 116 and products 112 ).
- the plane definition can be obtained in a variety of suitable formats, such as a suitable set of parameters defining the plane.
- An example of such parameters includes a normal vector (i.e. a vector defined according to the frame of reference 102 that is perpendicular to the plane) and a displacement (indicating the distance along the normal vector from the origin of the frame of reference 102 to the plane).
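As a concrete illustration of that parameterization, the following sketch (not from the patent; the function names are hypothetical) stores a plane as a unit normal plus a displacement and computes signed point-to-plane distances:

```python
# Sketch only: a (normal, displacement) plane definition and signed
# point-to-plane distances. Assumes an Nx3 numpy array of points expressed
# in the frame of reference 102.
import numpy as np

def normalize_plane(normal, displacement):
    """Rescale a (normal, displacement) pair so the normal has unit length."""
    n = np.asarray(normal, dtype=float)
    scale = np.linalg.norm(n)
    return n / scale, float(displacement) / scale

def signed_distance(points, normal, displacement):
    """Signed distance of each point to the plane n . x = d."""
    return points @ normal - displacement
```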
- the server 101 is also configured to obtain a depth of the back 116 of the shelf 110 , also referred to herein as the shelf depth.
- the shelf depth may be determined previously at the server 101 and therefore retrieved from the repository 132 .
- the shelf depth can be determined, for example, by processing the point cloud, images of the shelf 110 , or a combination thereof, to identify portions of the point cloud that are likely to correspond to the shelf back 116 .
- An example of such processing includes decomposing an image of the shelf 110 into patches, and classifying each patch as depicting the shelf back 116 or not according to a similarity between the patch and a reference image of the shelf back 116 .
- the server 101 can then be configured to identify points in the point cloud that correspond to the patches classified as depicting the shelf back 116 , and to average the depth of such points to determine the shelf depth.
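A minimal sketch of that averaging step, assuming the patch classification has already produced a boolean mask over the points (the classification itself is outside the scope of this example):

```python
def estimate_shelf_depth(points, is_shelf_back):
    """Mean depth (Y coordinate, per the frame of reference 102) of the
    points classified as belonging to the shelf back 116.

    points: Nx3 numpy array; is_shelf_back: boolean mask of length N.
    """
    return float(points[is_shelf_back, 1].mean())
```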
- the server 101 is configured to obtain shelf edge positions.
- the shelf edge positions can be determined previously by the server (e.g. based on the characteristics noted above), and retrieved from the repository 132 at block 305 . Shelf edge positions can be defined as bounding boxes in the frame of reference 102 , relative to the plane definition, or the like.
- In FIG. 4A , a point cloud 400 is illustrated, depicting the shelf module 110 - 3 .
- the shelf back 116 - 3 , the shelf 117 - 3 and the shelf edge 118 - 3 are therefore represented in the point cloud 400 , as are the products 112 .
- Also illustrated is a plane definition 404 corresponding to the front of the shelf module 110 - 3 (that is, the plane definition 404 contains the shelf edges 118 - 3 ).
- FIG. 4A also illustrates the remaining inputs obtained at block 305 , including a shelf depth 408 and shelf edge positions 412 - 1 and 412 - 2 (shown as bounding boxes overlaid on the portions of the point cloud 400 representing shelf edges 118 ).
- the point cloud 400 , plane definition 404 , shelf depth 408 and shelf edge positions 412 need not be obtained in the graphical forms shown in FIG. 4A .
- the point cloud may be obtained as a list of coordinates.
- the plane definition 404 can be obtained as the above-mentioned parameters defining a normal vector and displacement.
- the shelf depth 408 can be obtained as a scalar quantity, a vector, or the like, and the shelf edge positions 412 can be obtained as sets of coordinates (e.g. in the frame of reference 102 ) defining the corners of the bounding boxes shown in FIG. 4A .
- At block 310 , the server 101 can be configured to transform the point cloud 400 to a secondary frame of reference based on the shelf plane 404 .
- In FIG. 4B , a transformed point cloud 400 ′ is shown, in which the coordinates of each point of the point cloud 400 ′ are expressed in a secondary frame of reference 416 .
- the secondary frame of reference 416 has an origin on the plane 404 and thus, for each point in the point cloud, defines a planar position (in the X and Z dimensions, in the illustrated example) on the shelf plane 404 as well as a depth (in the Y dimension as illustrated) orthogonal to the shelf plane 404 .
- Block 310 may reduce the computational load imposed by the remaining blocks of the method 300 . However, in other embodiments, block 310 can be omitted.
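One plausible way to build the frame of reference 416 is to place the origin on the plane 404 and align the Y axis with the plane normal. The basis construction below, including the choice of vertical reference, is an assumption; the patent does not prescribe a particular construction:

```python
# Sketch only: express points in a frame whose Y axis is orthogonal to the
# shelf plane (depth) and whose origin lies on the plane.
import numpy as np

def to_shelf_frame(points, normal, displacement, vertical=(0.0, 0.0, 1.0)):
    """points: Nx3; the plane is n . x = d; 'vertical' seeds the Z axis."""
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)
    v = np.asarray(vertical, dtype=float)
    z = v - (v @ n) * n          # project the vertical reference off the normal
    z /= np.linalg.norm(z)
    x = np.cross(n, z)           # completes a right-handed basis (X, Y=n, Z)
    origin = n * displacement    # a point lying on the plane
    basis = np.stack([x, n, z], axis=1)
    return (points - origin) @ basis
```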
- At block 315 , the server 101 is configured to discard a portion of the point cloud based on the position of the shelf back 116 , as defined by the shelf depth 408 .
- the server 101 can be configured to discard any points in the point cloud with depths equal to or greater than the shelf depth 408 .
- the server 101 is configured to discard any points in the point cloud with depths that are within a threshold (e.g. 10% below or above the shelf depth 408 ) of the shelf depth 408 .
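A sketch of block 315 covering both variants; the 10% band follows the example threshold above, and depth is taken to be the Y coordinate in the frame of reference 416:

```python
def discard_behind_back(points, shelf_depth, tolerance=0.0):
    """Drop points at or beyond the shelf back. With tolerance=0.1 the cut
    moves 10% in front of the shelf depth, per the example threshold."""
    return points[points[:, 1] < shelf_depth * (1.0 - tolerance)]
```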
- At block 320 , the server 101 is configured to generate a three-dimensional occupancy grid from the point cloud as modified at blocks 310 and 315 (i.e. the point cloud 500 , in the present example).
- the occupancy grid defines a plurality of cells, arranged according to the frame of reference 416 .
- An example grid 502 is shown in FIG. 5B .
- the cells 504 of the grid 502 are arranged in depthwise layers or slices, as will be discussed below in greater detail.
- the cells 504 have a lower resolution than the point cloud 500 . That is, each cell 504 represents a larger portion of the shelf module 110 - 3 than each point in the point cloud.
- the point cloud may include points spaced apart by about 2 mm, while each cell 504 may have dimensions of about 2 cm ⁇ 2 cm ⁇ 2 cm. As will be apparent to those skilled in the art, a wide variety of other dimensions may also be employed for the point cloud 500 and the cells 504 .
- the occupancy grid 502 is generated by assigning each point of the point cloud to one of the cells 504 (specifically, to the cell encompassing a volume on the shelf module 110 that contains that point). Each cell 504 is then assigned a value indicating that the cell is either occupied (if any points were assigned to the cell 504 ) or unoccupied (if no points were assigned to the cell 504 ).
- the server 101 can be configured to store the assignment of points to cells 504 , for example in the form of a list of points with a cell identifier corresponding to each point. The generation of the occupancy grid will be described below, for a portion 508 of the point cloud 500 , as indicated in FIG. 5B .
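The grid generation and the stored point-to-cell assignment might look like the following sketch; the 2 cm cell follows the example above (coordinates assumed to be in metres), and deriving the grid bounds from the data is an assumption:

```python
import numpy as np

def build_occupancy_grid(points, cell=0.02):
    """Quantize points (frame of reference 416) into a boolean 3D grid.

    Returns the grid plus each point's cell index, which serves as the
    stored assignment of points to cells.
    """
    mins = points.min(axis=0)
    idx = np.floor((points - mins) / cell).astype(int)
    grid = np.zeros(idx.max(axis=0) + 1, dtype=bool)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True  # occupied where any point falls
    return grid, idx
```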
- the server 101 is configured to determine, for each cell 504 in the layer 600 - 1 , whether the cell 504 contains any points from the point cloud 500 .
- an example cell 504 a is assigned an occupied value (e.g. a value of one) because the cell 504 a contains points corresponding to a product 112 .
- Another example cell 504 b is assigned an unoccupied value because the cell 504 b does not contain any points in the point cloud 500 (that is, the volume contained within the cell 504 b is empty).
- FIG. 6B illustrates, in two dimensions, each layer 600 mentioned above in the grid 502 .
- cells assigned an occupied value are illustrated in white, while cells assigned an unoccupied value are illustrated in black.
- other values may also be selected to indicate that a cell is occupied or unoccupied.
- the cell 504 a mentioned in connection with FIG. 6A is occupied, while the cell 504 b is unoccupied.
- The server 101 is configured, upon setting the value of a cell to “occupied”, to automatically set the value of every cell with the same planar position (i.e. in the X and Z dimensions) but a greater depth (in the Y dimension) to unoccupied, whether or not those cells contain points of the point cloud 500 .
- the cells at the same planar position as the cell 504 a but at greater depths are assigned unoccupied values, even though they may contain points corresponding to a product 112 .
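That front-most-cell rule behaves like a depth buffer over the grid. A sketch, assuming grid axes ordered (X, Y=depth, Z) with depth index 0 nearest the shelf plane:

```python
import numpy as np

def keep_frontmost(grid):
    """For each planar (X, Z) position, keep only the shallowest occupied
    cell and mark every deeper cell unoccupied."""
    out = np.zeros_like(grid)
    first = grid.argmax(axis=1)             # first occupied depth per (x, z)
    x, z = np.nonzero(grid.any(axis=1))     # planar positions with any point
    out[x, first[x, z], z] = True
    return out
```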
- the layers 600 - 1 , 600 - 2 and 600 - 3 each contain occupied cells corresponding to different portions of the cylindrical product 112 .
- the layer 600 - 1 contains occupied cells that correspond to the shelf edge 118 - 3 .
- the server 101 is configured to discard, e.g. by setting cell values to unoccupied, any cells corresponding to the shelf edge positions 412 - 1 and 412 - 2 .
- the server 101 can be configured to identify any cells (e.g. at any depth) having the same planar positions (i.e. in the XZ plane) as the shelf edge positions 412 , and to update the values of such cells to unoccupied.
- The server 101 is configured to update the layer 600 - 1 of the grid to generate a layer 600 - 1 ′ in which the cells coinciding with the shelf edge position 412 - 2 are set to unoccupied.
- Updated versions of the layers 600 - 2 and 600 - 3 may also be generated, but their content is identical to the layers 600 - 2 and 600 - 3 as shown in FIG. 6B .
- the performance of block 325 may be delayed until later in the method 300 , as will be discussed below.
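Block 325 can be sketched as zeroing every depth layer over the planar footprints of the shelf edges; converting the edge positions 412 into grid index ranges is assumed to have been done beforehand:

```python
def clear_shelf_edges(grid, edge_boxes):
    """edge_boxes: iterable of (x0, x1, z0, z1) planar index ranges derived
    from the shelf edge positions 412."""
    for x0, x1, z0, z1 in edge_boxes:
        grid[x0:x1, :, z0:z1] = False  # clear all depths at these positions
    return grid
```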
- the server 101 is configured to detect contiguous regions in the occupancy grid 502 . Each contiguous region so detected corresponds to an object, such as a product 112 .
- the server 101 is configured to detect contiguous regions beginning at block 330 .
- the server 101 is configured to select a layer of the grid 502 (e.g. the layer closest to the shelf plane 404 ), and to detect contiguous sub-regions in the selected layer.
- the server 101 is configured to determine whether any layers remain to be processed. When the determination at block 335 is affirmative, the next layer 600 is selected and contiguous sub-regions detected, at block 330 . When the determination at block 335 is negative, the performance of the method 300 proceeds to block 340 .
- In FIG. 7 , three sets of contiguous sub-regions are illustrated, arising from three performances of block 330 (for each of the layers 600 - 1 , 600 - 2 and 600 - 3 ).
- A first set of contiguous sub-regions 700 - 1 and 700 - 2 is identified in the layer 600 - 1 .
- A second set of contiguous sub-regions 704 - 1 and 704 - 2 is identified in the layer 600 - 2 .
- A third set of contiguous sub-regions 708 - 1 and 708 - 2 is identified in the layer 600 - 3 .
- Identification of contiguous sub-regions 700 , 704 , 708 and the like can be implemented via a suitable blob extraction (also referred to as connected-component analysis) algorithm.
- The blob extraction algorithm is configured to detect regions of cells in each layer 600 with the same value. More specifically, in the present example the server 101 is configured to identify regions of cells in each layer 600 with “occupied” values.
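Any connected-component labeller will do for block 330. The sketch below uses scipy.ndimage.label with its default 4-connectivity on each depthwise layer, which is one choice among several; the patent does not mandate a particular algorithm or connectivity:

```python
from scipy import ndimage

def label_layers(grid):
    """Run blob extraction independently on each depthwise layer.

    grid axes are assumed (X, Y=depth, Z); returns one 2D label image per
    layer, with 0 marking unoccupied cells.
    """
    return [ndimage.label(grid[:, y, :])[0] for y in range(grid.shape[1])]
```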
- the server 101 is configured to continue the identification of contiguous regions by determining, for each sub-region detected at block 330 , whether any adjacent layers 600 (to the layer containing the current sub-region) contain abutting sub-regions.
- Certain objects (e.g. the cylindrical product 112 shown in FIG. 6A ) may be represented by sub-regions in more than one layer 600 . At block 340 , the server 101 is therefore configured to determine whether any sub-regions detected through one or more performances of blocks 330 - 335 correspond to a single object.
- the determination at block 340 for a selected sub-region includes determining whether the planar position of the selected sub-region and the planar position of another sub-region in an adjacent layer 600 of the grid 502 abut each other.
- the server 101 is configured to determine whether the sub-region 700 - 2 shares a boundary in the XZ plane of the frame of reference 416 with a boundary of any sub-region in the layer 600 - 2 (which is adjacent in depth to the layer 600 - 1 ).
- the determination is affirmative for both the sub-regions 704 - 1 and 704 - 2 in the layer 600 - 2 .
- the server 101 is therefore configured, at block 345 , to merge the sub-regions 700 - 2 , 704 - 1 and 704 - 2 , e.g. by assigning a common region identifier to all three sub-regions.
- the server 101 is configured to determine whether any sub-regions remain to be assessed via a further performance of block 340 .
- the determination is affirmative, and block 340 is repeated, for example by selecting the sub-region 700 - 1 .
- the determination at block 340 is negative.
- the server 101 may be configured to select the sub-region 704 - 1 of the layer 600 - 2 .
- the boundary of the sub-region 708 - 1 coincides with the boundary of the sub-region 704 - 1 in the XZ plane.
- the determination at block 340 is therefore affirmative, and the sub-regions 704 - 1 and 708 - 1 are merged (i.e. assigned the same region identifier).
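Blocks 340-350 amount to merging sub-region identifiers whenever sub-regions in adjacent layers share planar cells. The union-find sketch below treats planar overlap as "abutting", which is an interpretation of the text rather than the patent's literal test:

```python
def merge_across_layers(layer_labels):
    """Assign a common region identifier to sub-regions that overlap in the
    planar (X, Z) dimensions across adjacent depth layers."""
    parent = {}

    def find(key):
        parent.setdefault(key, key)
        while parent[key] != key:
            key = parent[key]
        return key

    for y in range(len(layer_labels) - 1):
        near, far = layer_labels[y], layer_labels[y + 1]
        mask = (near > 0) & (far > 0)  # cells occupied in both layers
        for a, b in set(zip(near[mask].tolist(), far[mask].tolist())):
            parent[find((y, a))] = find((y + 1, b))
    # resolve every known (layer, label) pair to its final identifier
    return {key: find(key) for key in list(parent)}
```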
- FIG. 9A illustrates the contiguous regions 800 and 804 in an overhead view.
- the server 101 is configured to generate detected object positions based on the contiguous regions detected at blocks 330 - 350 . For each detected contiguous region, the server 101 is configured to generate one detected object position. Various forms of object position are contemplated. In the present example, as illustrated in FIG. 9B , the detected object positions are generated as bounding boxes containing the volumes encompassed by the cells of the corresponding contiguous regions. Thus, a first bounding box 900 is generated from the contiguous region 800 , and a second bounding box 904 is generated from the contiguous region 804 .
- the detected object positions can be generated as the centroids of each contiguous region (e.g. a single point in the frame of reference 416 ).
- the above-mentioned bounding boxes can be generated based on the point cloud 500 rather than based directly on the contiguous regions 800 and 804 .
- the allocation of points to the cells of the occupancy grid 502 can be stored in the memory 122 .
- the points associated with the cells of that contiguous region are retrieved from the memory 122 and a bounding box is fitted to the retrieved points.
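For the cell-based variant of block 355, an axis-aligned box per merged region can be fitted as in the sketch below; sub-regions never merged with a neighbouring layer keep their own (layer, label) identifier:

```python
import numpy as np

def region_bounding_boxes(layer_labels, merged):
    """Collect each region's cell indices and fit an axis-aligned box.

    'merged' maps (layer, label) to a region identifier, as produced by the
    merge sketch above; boxes are (min_index, max_index_exclusive) pairs.
    """
    cells = {}
    for y, lab in enumerate(layer_labels):
        for x, z in zip(*np.nonzero(lab)):
            key = (y, int(lab[x, z]))
            region = merged.get(key, key)  # unmerged sub-regions stand alone
            cells.setdefault(region, []).append((x, y, z))
    return {region: (np.min(pts, axis=0), np.max(pts, axis=0) + 1)
            for region, pts in ((r, np.array(p)) for r, p in cells.items())}
```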
- the detected object positions generated at block 355 can be stored in the memory 122 (e.g. in the repository 132 ), and can also be transmitted to a further computing device such as the client device 105 , e.g. for presentation on a display thereof.
- the detected object positions generated at block 355 may also be employed by the server 101 itself or by another computing device for the detection of gaps between products 112 .
- the server 101 can be configured to retrieve label positions on the shelf edges 118 , indicating the expected position for products 112 , and to determine whether a detected object position was generated in association with each label position (e.g. above each label position, indicating the presence of a product 112 above the corresponding label). Any label positions without corresponding detected object positions may be detected as gaps (e.g. out of stock products 112 ) by the server 101 .
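As a sketch of that gap check (the geometry is an assumption: a label is treated as covered when some detected box overlaps it in X and sits at or above it in Z):

```python
def find_gaps(label_positions, boxes):
    """label_positions: (x, z) grid indices of labels on the shelf edge;
    boxes: {region: (min_idx, max_idx)} as fitted above. Returns label
    positions with no detected object above them (candidate gaps)."""
    gaps = []
    for lx, lz in label_positions:
        covered = any(lo[0] <= lx < hi[0] and lo[2] >= lz
                      for lo, hi in boxes.values())
        if not covered:
            gaps.append((lx, lz))
    return gaps
```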
- block 325 may be performed following a negative determination at block 335 (i.e. between blocks 335 and 340 ).
- the presence of occupied cells corresponding to shelf edges may lead to the detection of a single contiguous sub-region that in fact corresponds to distinct objects as a result of the shelf edge extending between the portions of the sub-region corresponding to each object. Responsive to discarding cells corresponding to the shelf edges 118 , the server 101 may therefore be configured to relabel remaining sub-regions where such sub-regions have been separated (i.e. are no longer contiguous).
- the server 101 can be configured, prior to the performance of block 310 , to segment the point cloud obtained at block 305 .
- In FIG. 10 , an example point cloud 1000 obtained at block 305 is shown.
- the point cloud 1000 represents two distinct shelf modules, separated by a module boundary 1004 .
- the server 101 can be configured, in such embodiments, to retrieve module boundary positions (e.g. in the frame of reference 102 ) from the repository 132 , or to detect the module boundary 1004 , for example via image gradients or the like, and to segment the point cloud 1000 into first and second segments 1008 - 1 and 1008 - 2 .
- the server 101 can then be configured to perform the remainder of the method 300 separately for each segment 1008 .
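In the simplest case the per-module segmentation reduces to splitting the cloud at the boundary coordinates; a sketch, assuming each boundary 1004 is expressed as an X coordinate in the frame of reference 102:

```python
def segment_at_boundaries(points, boundaries_x):
    """Split an Nx3 point cloud into per-module segments at the given,
    sorted X coordinates of module boundaries."""
    segments, lo = [], float("-inf")
    for bx in list(boundaries_x) + [float("inf")]:
        segments.append(points[(points[:, 0] >= lo) & (points[:, 0] < bx)])
        lo = bx
    return segments
```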
- An element preceded by “comprises . . . a”, “has . . . a”, “includes . . . a”, or “contains . . . a” does not, without more constraints, preclude the existence of additional identical elements in the process, method, article, or apparatus that comprises, has, includes, or contains the element.
- the terms “a” and “an” are defined as one or more unless explicitly stated otherwise herein.
- the terms “substantially”, “essentially”, “approximately”, “about” or any other version thereof, are defined as being close to as understood by one of ordinary skill in the art, and in one non-limiting embodiment the term is defined to be within 10%, in another embodiment within 5%, in another embodiment within 1% and in another embodiment within 0.5%.
- the term “coupled” as used herein is defined as connected, although not necessarily directly and not necessarily mechanically.
- a device or structure that is “configured” in a certain way is configured in at least that way, but may also be configured in ways that are not listed.
- Some embodiments may be comprised of one or more generic or specialized processors (or “processing devices”) such as microprocessors, digital signal processors, customized processors and field programmable gate arrays (FPGAs), and unique stored program instructions (including both software and firmware) that control the one or more processors to implement, in conjunction with certain non-processor circuits, some, most, or all of the functions of the method and/or apparatus described herein.
- Alternatively, some or all functions could be implemented by a state machine that has no stored program instructions, or in one or more application specific integrated circuits (ASICs), in which each function or some combinations of certain of the functions are implemented as custom logic.
- an embodiment can be implemented as a computer-readable storage medium having computer readable code stored thereon for programming a computer (e.g., comprising a processor) to perform a method as described and claimed herein.
- Examples of such computer-readable storage mediums include, but are not limited to, a hard disk, a CD-ROM, an optical storage device, a magnetic storage device, a ROM (Read Only Memory), a PROM (Programmable Read Only Memory), an EPROM (Erasable Programmable Read Only Memory), an EEPROM (Electrically Erasable Programmable Read Only Memory) and a Flash memory.
Abstract
Description
- Environments in which inventories of objects are managed, such as products for purchase in a retail environment, may be complex and fluid. For example, a given environment may contain a wide variety of objects with different attributes (size, shape, price and the like). Further, the placement and quantity of the objects in the environment may change frequently. Still further, imaging conditions such as lighting may be variable both over time and at different locations in the environment. These factors may reduce the accuracy with which such objects can be detected in data captured within the environment.
- The accompanying figures, where like reference numerals refer to identical or functionally similar elements throughout the separate views, together with the detailed description below, are incorporated in and form part of the specification, and serve to further illustrate embodiments of concepts that include the claimed invention, and explain various principles and advantages of those embodiments.
-
FIG. 1 is a schematic of a mobile automation system. -
FIG. 2A depicts a mobile automation apparatus in the system ofFIG. 1 . -
FIG. 2B is a block diagram of certain internal hardware components of the mobile automation apparatus in the system ofFIG. 1 . -
FIG. 3 is a flowchart of a method for detecting objects in a point cloud. -
FIG. 4A is a diagram illustrating data obtained atblock 305 of the method ofFIG. 3 . -
FIG. 4B is a diagram illustrating the transformation of the data obtained atblock 305 of the method ofFIG. 3 to a secondary frame of reference. -
FIG. 5A is a diagram illustrating the removal of a portion of the point cloud atblock 315 of the method ofFIG. 3 -
FIGS. 5B and 6A-6B are diagrams illustrating the generation of an occupancy grid atblock 320 of the method ofFIG. 3 . -
FIG. 7 is a diagram illustrating contiguous sub-regions identified in the occupancy grid atblock 330 of the method ofFIG. 3 . -
FIGS. 8 and 9A are diagrams illustrating the performance of blocks 340-350 of the method ofFIG. 3 . -
FIG. 9B is a diagram illustrating detected object positions generated atblock 355 of the method ofFIG. 3 . -
FIG. 10 is a diagram illustrating the segmentation of a point cloud prior to object detection. - Skilled artisans will appreciate that elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help to improve understanding of embodiments of the present invention.
- The apparatus and method components have been represented where appropriate by conventional symbols in the drawings, showing only those specific details that are pertinent to understanding the embodiments of the present invention so as not to obscure the disclosure with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein.
- Examples disclosed herein are directed to a method of detecting objects on a support structure, the method comprising: obtaining a point cloud of the support structure supporting an object; obtaining a position of a back of the support structure in the point cloud; discarding a portion of the point cloud based on the position of the back of the support structure; generating a three-dimensional occupancy grid from the point cloud, the occupancy grid having a plurality of cells each indicating whether the cell is one of occupied and unoccupied; identifying a contiguous region of occupied cells in the occupancy grid corresponding to the object; and generating a detected object position based on the contiguous region.
- Additional examples disclosed herein are directed to a computing device for detecting objects on a support structure, comprising: a memory storing (i) a point cloud of the support structure supporting an object, and (ii) a position of a back of the support structure in the point cloud; an imaging controller connected to the memory, the imaging controller configured to: retrieve, from the memory, the point cloud and the position of the back of the support structure in the point cloud; discard a portion of the point cloud based on the position of the back of the support structure; generate a three-dimensional occupancy grid from the point cloud, the occupancy grid having a plurality of cells each indicating whether the cell is one of occupied and unoccupied; identify a contiguous region of occupied cells in the occupancy grid corresponding to the object; and generate a detected object position based on the contiguous region.
- Further examples disclosed herein are directed to a non-transitory computer-readable medium storing a plurality of computer-readable instructions executable by a processor of a computing device, wherein execution of the instructions configures the computing device to: obtain a point cloud of a support structure supporting an object; obtain a position of a back of the support structure in the point cloud; discard a portion of the point cloud based on the position of the back of the support structure; generate a three-dimensional occupancy grid from the point cloud, the occupancy grid having a plurality of cells each indicating whether the cell is one of occupied and unoccupied; identify a contiguous region of occupied cells in the occupancy grid corresponding to the object; and generate a detected object position based on the contiguous region.
-
FIG. 1 depicts amobile automation system 100 in accordance with the teachings of this disclosure. Thesystem 100 is illustrated as being deployed in a retail environment, but in other embodiments can be deployed in a variety of other environments, including warehouses, hospitals, and the like. Thesystem 100 includes aserver 101 in communication with at least one mobile automation apparatus 103 (also referred to herein simply as the apparatus 103) and at least oneclient computing device 105 viacommunication links 107, illustrated in the present example as including wireless links. In the present example, thelinks 107 are provided by a wireless local area network (WLAN) deployed within the retail environment by one or more access points (not shown). In other examples, theserver 101, theclient device 105, or both, are located outside the retail environment, and thelinks 107 therefore include wide-area networks such as the Internet, mobile networks, and the like. Thesystem 100 also includes adock 108 for theapparatus 103 in the present example. Thedock 108 is in communication with theserver 101 via alink 109 that in the present example is a wired link. In other examples, however, thelink 109 is a wireless link. - The
client computing device 105 is illustrated inFIG. 1 as a mobile computing device, such as a tablet, smart phone or the like. In other examples, theclient device 105 is implemented as another type of computing device, such as a desktop computer, a laptop computer, another server, a kiosk, a monitor, and the like. Thesystem 100 can include a plurality ofclient devices 105 in communication with theserver 101 viarespective links 107. - The
system 100 is deployed, in the illustrated example, in a retail environment including a plurality of support structures such as shelf modules 110-1, 110-2, 110-3 and so on (collectively referred to as shelves 110, and generically referred to as a shelf 110—this nomenclature is also employed for other elements discussed herein). In other examples, additional types of support structures may also be present, such as pegboards. Each shelf module 110 supports a plurality ofproducts 112. Each shelf module 110 includes a shelf back 116-1, 116-2, 116-3 and a support surface (e.g. support surface 117-3 as illustrated inFIG. 1 ) extending from the shelf back 116 to a shelf edge 118-1, 118-2, 118-3. - The shelf modules 110 are typically arranged in a plurality of aisles, each of which includes a plurality of modules 110 aligned end-to-end. In such arrangements, the shelf edges 118 face into the aisles, through which customers in the retail environment as well as the
apparatus 103 may travel. As will be apparent fromFIG. 1 , the term “shelf edge” 118 as employed herein, which may also be referred to as the edge of a support surface (e.g., the support surfaces 117) refers to a surface bounded by adjacent surfaces having different angles of inclination. In the example illustrated inFIG. 1 , the shelf edge 118-3 is at an angle of about ninety degrees relative to each of the support surface 117-3 and the underside (not shown) of the support surface 117-3. In other examples, the angles between the shelf edge 118-3 and the adjacent surfaces, such as the support surface 117-3, is more or less than ninety degrees. The shelf edges 118 define a front of the shelves 110, separated from the shelf backs 116 by a shelf depth. A common frame ofreference 102 is illustrated inFIG. 1 . In the present example, the shelf depth is defined in the Y dimension of the frame ofreference 102, while the shelf backs 116 and shelf edges 118 are shown as being parallel to the XZ plane. - The
apparatus 103 is deployed within the retail environment, and communicates with the server 101 (e.g. via the link 107) to navigate, autonomously or partially autonomously, along a length 119 (illustrated inFIG. 1 as being parallel to the X axis of the frame of reference 102) of at least a portion of the shelves 110. Theapparatus 103, autonomously or in conjunction with theserver 101, is configured to continuously determine its location within the environment, for example with respect to a map of the environment. Theapparatus 103 may also be configured to update the map (e.g. via a simultaneous mapping and localization, or SLAM, process). - The
apparatus 103 is equipped with a plurality of navigation anddata capture sensors 104, such as image sensors (e.g. one or more digital cameras) and depth sensors (e.g. one or more Light Detection and Ranging (LIDAR) sensors, one or more depth cameras employing structured light patterns, such as infrared light, or the like). Theapparatus 103 can be configured to employ thesensors 104 to both navigate among the shelves 110 (e.g. according to the paths mentioned above) and to capture shelf data, such as point cloud and image data, during such navigation. - The
server 101 includes a special purpose imaging controller, such as aprocessor 120, specifically designed to control and/or assist themobile automation apparatus 103 to navigate the environment and to capture data. Theprocessor 120 can be further configured to obtain the captured data via acommunications interface 124 for storage in arepository 132 and subsequent processing (e.g. to detect objects such as shelvedproducts 112 in the captured data, and detect status information corresponding to the objects). Theserver 101 may also be configured to transmit status notifications (e.g. notifications indicating that products are out-of-stock, low stock or misplaced) to theclient device 105 responsive to the determination of product status data. Theclient device 105 includes one or more controllers (e.g. central processing units (CPUs) and/or field-programmable gate arrays (FPGAs) and the like) configured to process (e.g. to display) notifications received from theserver 101. - The
processor 120 is interconnected with a non-transitory computer readable storage medium, such as the above-mentionedmemory 122, having stored thereon computer readable instructions for performing various functionality, including control of theapparatus 103 to capture shelf data, post-processing of the shelf data, and generating and providing certain navigational data to theapparatus 103, such as target locations at which to capture shelf data. Thememory 122 includes a combination of volatile (e.g. Random Access Memory or RAM) and non-volatile memory (e.g. read only memory or ROM, Electrically Erasable Programmable Read Only Memory or EEPROM, flash memory). Theprocessor 120 and thememory 122 each comprise one or more integrated circuits. In some embodiments, theprocessor 120 is implemented as one or more central processing units (CPUs) and/or graphics processing units (GPUs). - The
- The server 101 also includes the above-mentioned communications interface 124 interconnected with the processor 120. The communications interface 124 includes suitable hardware (e.g. transmitters, receivers, network interface controllers and the like) allowing the server 101 to communicate with other computing devices—particularly the apparatus 103, the client device 105 and the dock 108—via the links 107 and 109. The specific components of the communications interface 124 are selected based on the type of network or other links that the server 101 is required to communicate over. In the present example, as noted earlier, a wireless local-area network is implemented within the retail environment via the deployment of one or more wireless access points. The links 107 therefore include either or both wireless links between the apparatus 103 and the mobile device 105 and the above-mentioned access points, and a wired link (e.g. an Ethernet-based link) between the server 101 and the access point.
- The memory 122 stores a plurality of applications, each including a plurality of computer readable instructions executable by the processor 120. The execution of the above-mentioned instructions by the processor 120 configures the server 101 to perform various actions discussed herein. The applications stored in the memory 122 include an object detection application 128 (also referred to herein as the application 128), which may also be implemented as a suite of logically distinct applications. In general, via execution of the application 128 or subcomponents thereof and in conjunction with the other components of the server 101, the processor 120 is configured to implement various functionality related to obtaining captured data from the apparatus 103 and performing various post-processing operations on the captured data. In the present example, as discussed below in greater detail, execution of the application 128 configures the server 101 to detect objects (e.g. the products 112) on the shelves 110 from point cloud data, such as a point cloud generated from data captured by the apparatus 103.
- The processor 120, as configured via the execution of the control application 128, is also referred to herein as the above-mentioned imaging controller 120. As will now be apparent, some or all of the functionality implemented by the controller 120 described below may also be performed by preconfigured special purpose hardware controllers (e.g. one or more FPGAs and/or Application-Specific Integrated Circuits (ASICs) having logic circuit arrangements configured to enhance the processing speed of imaging computations) rather than by execution of the application 128 by the processor 120.
- Turning now to FIGS. 2A and 2B, the mobile automation apparatus 103 is shown in greater detail. The apparatus 103 includes a chassis 201 containing a locomotive mechanism 203 (e.g. one or more electrical motors driving wheels, tracks or the like). The apparatus 103 further includes a sensor mast 205 supported on the chassis 201 and, in the present example, extending upwards (e.g., substantially vertically) from the chassis 201. The mast 205 supports the sensors 104 mentioned earlier. In particular, the sensors 104 include at least one imaging sensor 207, such as a digital camera, as well as at least one depth sensor 209, such as a 3D digital camera. The apparatus 103 also includes additional depth sensors, such as LIDAR sensors 211. In other examples, the apparatus 103 includes additional sensors, such as one or more RFID readers, temperature sensors, and the like.
- In the present example, the mast 205 supports seven digital cameras 207-1 through 207-7, and two LIDAR sensors 211-1 and 211-2. The mast 205 also supports a plurality of illumination assemblies 213, configured to illuminate the fields of view of the respective cameras 207. That is, the illumination assembly 213-1 illuminates the field of view of the camera 207-1, and so on. The sensors 207 and 211 are oriented on the mast 205 such that the fields of view of each sensor face a shelf 110 along the length 119 of which the apparatus 103 is travelling. The apparatus 103 is configured to track a location of the apparatus 103 (e.g. a location of the center of the chassis 201) in the common frame of reference 102 previously established in the retail facility, permitting data captured by the mobile automation apparatus 103 to be registered to the common frame of reference.
- The mobile automation apparatus 103 includes a special-purpose controller, such as a processor 220, as shown in FIG. 2B, interconnected with a non-transitory computer readable storage medium, such as a memory 222. The memory 222 includes a combination of volatile (e.g. Random Access Memory or RAM) and non-volatile memory (e.g. read only memory or ROM, Electrically Erasable Programmable Read Only Memory or EEPROM, flash memory). The processor 220 and the memory 222 each comprise one or more integrated circuits. The memory 222 stores computer readable instructions for execution by the processor 220. In particular, the memory 222 stores a control application 228 which, when executed by the processor 220, configures the processor 220 to perform various functions related to the navigation of the apparatus 103 and capture of data for subsequent processing, e.g. by the server 101. In some embodiments, such subsequent processing can be performed by the apparatus 103 itself via execution of the application 228. The application 228 may also be implemented as a suite of distinct applications in other examples.
- The processor 220, when so configured by the execution of the application 228, may also be referred to as an imaging controller 220. Those skilled in the art will appreciate that the functionality implemented by the processor 220 via the execution of the application 228 may also be implemented by one or more specially designed hardware and firmware components, such as FPGAs, ASICs and the like having logic circuit arrangements configured to enhance the processing speed of navigational and/or imaging computations, in other embodiments.
- The memory 222 may also store a repository 232 containing, for example, one or more maps representing the environment in which the apparatus 103 operates, for use during the execution of the application 228. The apparatus 103 may communicate with the server 101, for example to receive instructions to navigate to specified locations and initiate data capture operations, via a communications interface 224 over the link 107 shown in FIG. 1. The communications interface 224 also enables the apparatus 103 to communicate with the server 101 via the dock 108 and the link 109.
- As will be apparent in the discussion below, in other examples some or all of the processing performed by the server 101 may be performed by the apparatus 103, and some or all of the processing performed by the apparatus 103 may be performed by the server 101. That is, although in the illustrated example the application 128 resides in the server 101, in other embodiments some or all of the actions described below to detect objects on the shelves 110 from captured data may be performed by the processor 220 of the apparatus 103, either in conjunction with or independently from the processor 120 of the server 101. As those of skill in the art will realize, distribution of such computations between the server 101 and the mobile automation apparatus 103 may depend upon the respective processing speeds of the processors 120 and 220, the quality and bandwidth of the link 107, as well as the criticality level of the underlying instruction(s).
- The functionality of the application 128 will now be described in greater detail. In particular, the detection of objects on the shelves 110 (or other suitable support structures) will be described as performed by the server 101. Turning to FIG. 3, a method 300 of detecting objects is shown. The method 300 will be described in conjunction with its performance by the server 101, with reference to the components illustrated in FIG. 1.
- At block 305, the server 101 is configured to obtain a point cloud of the support structure, as well as a plane definition corresponding to the front of the support structure. In the present example, in which the support structures are shelves such as the shelves 110 shown in FIG. 1, the point cloud obtained at block 305 therefore represents at least a portion of a shelf module 110 (and may represent a plurality of shelf modules 110), and the plane definition corresponds to a shelf plane that corresponds to the front of the shelf modules 110. In other words, the plane definition defines a plane that contains the shelf edges 118.
- The point cloud and plane definition obtained at block 305 can be retrieved from the repository 132. For example, the server 101 may have previously received captured data from the apparatus 103 including a plurality of lidar scans of the shelf modules 110, and generated a point cloud from the lidar scans. Each point in the point cloud represents a point on a surface of the shelves 110, products 112, and the like (e.g. a point that the scan line of a lidar sensor 211 impacted), and is defined by a set of coordinates (X, Y and Z) in the frame of reference 102. The plane definition may also be previously generated by the server 101 and stored in the repository 132, for example from the above-mentioned point cloud. For example, the server 101 can be configured to process the point cloud, the raw lidar data, image data captured by the cameras 207, or a combination thereof, to identify shelf edges 118 according to predefined characteristics of the shelf edges 118. Examples of such characteristics include that the shelf edges 118 are likely to be substantially planar, and are also likely to be closer to the apparatus 103 (as the apparatus 103 travels the length 119 of a shelf module 110) than other objects (such as the shelf backs 116 and products 112). The plane definition can be obtained in a variety of suitable formats, such as a suitable set of parameters defining the plane. An example of such parameters includes a normal vector (i.e. a vector defined according to the frame of reference 102 that is perpendicular to the plane) and a displacement (indicating the distance along the normal vector from the origin of the frame of reference 102 to the plane).
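- For illustration, the normal-vector-and-displacement representation lends itself to a short sketch. The Python fragment below is illustrative only and not part of the disclosure; the function name, the use of NumPy, and the sample values are assumptions. It computes the signed depth of each point relative to such a plane:

```python
import numpy as np

def point_plane_depth(points, normal, displacement):
    """Signed distance of each point from a plane defined by a unit normal
    vector and a displacement along that normal from the origin."""
    normal = normal / np.linalg.norm(normal)  # guard against a non-unit normal
    return points @ normal - displacement

# Example: a shelf plane parallel to the XZ plane, one metre from the origin.
plane_normal = np.array([0.0, 1.0, 0.0])
plane_displacement = 1.0
points = np.array([[0.2, 1.3, 0.5],
                   [0.4, 1.0, 0.1]])
depths = point_plane_depth(points, plane_normal, plane_displacement)  # [0.3, 0.0]
```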
- At block 305 the server 101 is also configured to obtain a depth of the back 116 of the shelf 110, also referred to herein as the shelf depth. The shelf depth may be determined previously at the server 101 and therefore retrieved from the repository 132. The shelf depth can be determined, for example, by processing the point cloud, images of the shelf 110, or a combination thereof, to identify portions of the point cloud that are likely to correspond to the shelf back 116. An example of such processing includes decomposing an image of the shelf 110 into patches, and classifying each patch as depicting the shelf back 116 or not according to a similarity between the patch and a reference image of the shelf back 116. The server 101 can then be configured to identify points in the point cloud that correspond to the patches classified as depicting the shelf back 116, and to average the depth of such points to determine the shelf depth.
- Further, at block 305 the server 101 is configured to obtain shelf edge positions. The shelf edge positions can be determined previously by the server (e.g. based on the characteristics noted above), and retrieved from the repository 132 at block 305. Shelf edge positions can be defined as bounding boxes in the frame of reference 102, relative to the plane definition, or the like.
- Referring to FIG. 4A, a point cloud 400 is illustrated, depicting the shelf module 110-3. The shelf back 116-3, the shelf 117-3 and the shelf edge 118-3 are therefore represented in the point cloud 400, as are the products 112. Also shown in FIG. 4A is a plane definition 404 corresponding to the front of the shelf module 110-3 (that is, the plane definition 404 contains the shelf edges 118-3). FIG. 4A also illustrates the remaining inputs obtained at block 305, including a shelf depth 408 and shelf edge positions 412-1 and 412-2 (shown as bounding boxes overlaid on the portions of the point cloud 400 representing shelf edges 118).
- The point cloud 400, plane definition 404, shelf depth 408 and shelf edge positions 412 need not be obtained in the graphical forms shown in FIG. 4A. As will be apparent to those skilled in the art, the point cloud may be obtained as a list of coordinates. The plane definition 404 can be obtained as the above-mentioned parameters defining a normal vector and displacement. The shelf depth 408 can be obtained as a scalar quantity, a vector, or the like, and the shelf edge positions 412 can be obtained as sets of coordinates (e.g. in the frame of reference 102) defining the corners of the bounding boxes shown in FIG. 4A.
- Returning to FIG. 3, at block 310 the server 101 can be configured to transform the point cloud 400 to a secondary frame of reference based on the shelf plane 404. As shown in FIG. 4B, a transformed point cloud 400′ is shown, in which the coordinates of each point of the point cloud 400′ are expressed in a secondary frame of reference 416. The secondary frame of reference 416 has an origin on the plane 404 and thus, for each point in the point cloud, defines a planar position (in the X and Z dimensions, in the illustrated example) on the shelf plane 404 as well as a depth (in the Y dimension as illustrated) orthogonal to the shelf plane 404. Block 310 may reduce the computational load imposed by the remaining blocks of the method 300. However, in other embodiments, block 310 can be omitted.
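- One plausible realisation of block 310, sketched below under assumed names, is to build an orthonormal basis whose Y axis is the plane normal and re-express each point in that basis. The basis construction shown is a standard technique, not necessarily the specific transformation used in the disclosure:

```python
import numpy as np

def to_plane_frame(points, normal, origin):
    """Express points in a secondary frame whose origin lies on the shelf
    plane and whose Y axis is the plane normal, so that X and Z give planar
    position on the plane and Y gives depth orthogonal to it."""
    y_axis = normal / np.linalg.norm(normal)
    # Seed the in-plane axes with any vector not parallel to the normal.
    seed = np.array([1.0, 0.0, 0.0])
    if abs(seed @ y_axis) > 0.9:
        seed = np.array([0.0, 0.0, 1.0])
    x_axis = np.cross(seed, y_axis)
    x_axis /= np.linalg.norm(x_axis)
    z_axis = np.cross(x_axis, y_axis)
    basis = np.stack([x_axis, y_axis, z_axis])  # rows: new X, Y, Z axes
    return (points - origin) @ basis.T
```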
- Referring again to FIG. 3, at block 315, the server 101 is configured to discard a portion of the point cloud based on the position of the shelf back 116, as defined by the shelf depth 408. For example, the server 101 can be configured to discard any points in the point cloud with depths equal to or greater than the shelf depth 408. In other examples the server 101 is configured to discard any points in the point cloud with depths that are within a threshold (e.g. 10% below or above the shelf depth 408) of the shelf depth 408. Turning briefly to FIG. 5A, a further modified point cloud 500 is illustrated following the performance of block 315, at which the points corresponding to the shelf back 116-3 were discarded.
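- In the secondary frame of reference, block 315 reduces in the simplest case to a depth threshold. A minimal sketch, assuming the 10% band mentioned above (the function name and signature are assumptions):

```python
import numpy as np

def drop_shelf_back(points, shelf_depth, band=0.10):
    """Discard points at or beyond the shelf depth, together with points
    within a fractional band of it (e.g. 10% below or above)."""
    keep = points[:, 1] < shelf_depth * (1.0 - band)
    return points[keep]
```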
- At block 320, the server 101 is configured to generate a three-dimensional occupancy grid from the point cloud as modified at blocks 310 and 315 (i.e. the point cloud 500, in the present example). The occupancy grid defines a plurality of cells, arranged according to the frame of reference 416. An example grid 502 is shown in FIG. 5B. In particular, the cells 504 of the grid 502 are arranged in depthwise layers or slices, as will be discussed below in greater detail. As also seen in FIG. 5B, in the present example, the cells 504 have a lower resolution than the point cloud 500. That is, each cell 504 represents a larger portion of the shelf module 110-3 than each point in the point cloud. For example, the point cloud may include points spaced apart by about 2 mm, while each cell 504 may have dimensions of about 2 cm×2 cm×2 cm. As will be apparent to those skilled in the art, a wide variety of other dimensions may also be employed for the point cloud 500 and the cells 504.
- The occupancy grid 502 is generated by assigning each point of the point cloud to one of the cells 504 (specifically, to the cell encompassing a volume on the shelf module 110 that contains that point). Each cell 504 is then assigned a value indicating that the cell is either occupied (if any points were assigned to the cell 504) or unoccupied (if no points were assigned to the cell 504). In addition, the server 101 can be configured to store the assignment of points to cells 504, for example in the form of a list of points with a cell identifier corresponding to each point. The generation of the occupancy grid will be described below, for a portion 508 of the point cloud 500, as indicated in FIG. 5B.
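- A hedged sketch of block 320 follows: the voxelisation below assigns each point to a cell by integer division and also returns the point-to-cell assignment, so that points can later be recovered per cell (as used at block 355). The 2 cm cell size echoes the example dimensions given above; everything else is an assumption:

```python
import numpy as np

def build_occupancy_grid(points, cell_size=0.02):
    """Mark a cell occupied when at least one point falls inside it.
    Returns the boolean grid and, per point, its (x, y, z) cell indices."""
    mins = points.min(axis=0)
    cell_idx = np.floor((points - mins) / cell_size).astype(int)
    grid = np.zeros(tuple(cell_idx.max(axis=0) + 1), dtype=bool)
    grid[cell_idx[:, 0], cell_idx[:, 1], cell_idx[:, 2]] = True
    return grid, cell_idx
```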
- Turning to FIG. 6A, the portion 508 of the point cloud 500 is shown in isolation, along with three layers 600-1, 600-2 and 600-3 of the grid 502. To generate the first layer 600-1, the server 101 is configured to determine, for each cell 504 in the layer 600-1, whether the cell 504 contains any points from the point cloud 500. Thus, an example cell 504 a is assigned an occupied value (e.g. a value of one) because the cell 504 a contains points corresponding to a product 112. Another example cell 504 b, on the other hand, is assigned an unoccupied value because the cell 504 b does not contain any points in the point cloud 500 (that is, the volume contained within the cell 504 b is empty).
- FIG. 6B illustrates, in two dimensions, each layer 600 mentioned above in the grid 502. In particular, in the illustrated example, cells assigned an occupied value are illustrated in white, while cells assigned an unoccupied value are illustrated in black. As will be apparent, other values may also be selected to indicate that a cell is occupied or unoccupied. The cell 504 a mentioned in connection with FIG. 6A is occupied, while the cell 504 b is unoccupied.
- In the present example, the server 101 is configured, upon setting the value of a cell to “occupied”, to automatically set the value of every cell with the same planar position (i.e. in the X and Z dimensions) but a greater depth (in the Y dimension) to unoccupied, whether or not those cells contain points of the point cloud 500. Thus, in the layers 600-2 and 600-3, the cells at the same planar position as the cell 504 a but at greater depths are assigned unoccupied values, even though they may contain points corresponding to a product 112. As also seen in FIG. 6B, the layers 600-1, 600-2 and 600-3 each contain occupied cells corresponding to different portions of the cylindrical product 112. Further, the layer 600-1 contains occupied cells that correspond to the shelf edge 118-3.
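- This "clear everything behind an occupied cell" rule can be expressed compactly on such a grid. In the sketch below (the data layout, with axis 1 of the grid as depth, is an assumption), only the occupied cell nearest the shelf plane survives at each planar position:

```python
import numpy as np

def keep_front_cells_only(grid):
    """For each planar (X, Z) position, keep the occupied cell with the
    smallest depth index and set every deeper cell to unoccupied."""
    occupied_any = grid.any(axis=1)    # (X, Z): column contains any point
    nearest = grid.argmax(axis=1)      # (X, Z): first occupied depth index
    depth_idx = np.arange(grid.shape[1])[None, :, None]
    deeper = depth_idx > nearest[:, None, :]
    return grid & ~(deeper & occupied_any[:, None, :])
```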
- Returning to FIG. 3, at block 325 the server 101 is configured to discard, e.g. by setting cell values to unoccupied, any cells corresponding to the shelf edge positions 412-1 and 412-2. For example, the server 101 can be configured to identify any cells (e.g. at any depth) having the same planar positions (i.e. in the XZ plane) as the shelf edge positions 412, and to update the values of such cells to unoccupied. Thus, returning to FIG. 6B, the server 101 is configured to update the layer 600-1 of the grid to generate a layer 600-1′ in which the cells coinciding with the shelf edge position 412-2 are set to unoccupied. Updated versions of the layers 600-2 and 600-3 may also be generated, but their content is identical to the layers 600-2 and 600-3 as shown in FIG. 6B. In other embodiments, the performance of block 325 may be delayed until later in the method 300, as will be discussed below.
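- Block 325 can then be sketched as a planar mask applied at every depth. Constructing the mask from the shelf edge positions 412 is assumed to have been done already; the broadcasting below is simply one way to zero whole columns of the grid:

```python
import numpy as np

def clear_shelf_edge_cells(grid, edge_mask_xz):
    """Set cells at every depth to unoccupied wherever the planar (X, Z)
    position falls inside a shelf-edge bounding box."""
    return grid & ~edge_mask_xz[:, None, :]  # broadcast the mask over depth
```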
- Following the performance of block 325, the server 101 is configured to detect contiguous regions in the occupancy grid 502. Each contiguous region so detected corresponds to an object, such as a product 112. In the present example, the server 101 is configured to detect contiguous regions beginning at block 330. At block 330, the server 101 is configured to select a layer of the grid 502 (e.g. the layer closest to the shelf plane 404), and to detect contiguous sub-regions in the selected layer. At block 335, the server 101 is configured to determine whether any layers remain to be processed. When the determination at block 335 is affirmative, the next layer 600 is selected and contiguous sub-regions are detected at block 330. When the determination at block 335 is negative, the performance of the method 300 proceeds to block 340.
- Referring to FIG. 7, three sets of contiguous sub-regions are illustrated, arising from three performances of block 330 (one for each of the layers 600-1, 600-2 and 600-3). In particular, a first set of contiguous sub-regions 700-1 and 700-2 is identified in the layer 600-1. A second set of contiguous sub-regions 704-1 and 704-2 is identified in the layer 600-2, and a third set of contiguous sub-regions 708-1 and 708-2 is identified in the layer 600-3. Identification of the contiguous sub-regions 700, 704, 708 and the like can be implemented via a suitable blob extraction (also referred to as connected-component analysis) algorithm. In general, the detection of contiguous sub-regions is configured to detect regions of cells in each layer 600 with the same value. More specifically, in the present example the server 101 is configured to identify regions of cells in each layer 600 with “occupied” values.
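- Blocks 330 and 335 amount to running a connected-component (blob) labelling pass over each depthwise layer. A sketch using SciPy's labelling routine follows; the choice of SciPy, and its default 4-connectivity, are assumptions, as any blob-extraction algorithm would serve:

```python
from scipy import ndimage

def label_layers(grid):
    """Return, for each depthwise layer (axis 1), a 2D array of sub-region
    labels (0 = background) produced by connected-component analysis."""
    layer_labels = []
    for depth in range(grid.shape[1]):
        labels, _count = ndimage.label(grid[:, depth, :])
        layer_labels.append(labels)
    return layer_labels
```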
- At block 340, following a negative determination at block 335, the server 101 is configured to continue the identification of contiguous regions by determining, for each sub-region detected at block 330, whether any adjacent layers 600 (i.e. layers adjacent to the layer containing the current sub-region) contain abutting sub-regions. As noted in connection with FIG. 6B and as seen in FIG. 7, certain objects (e.g. the cylindrical product 112 shown in FIG. 6A), particularly those with surfaces that are not parallel to the shelf plane 404, appear segmented between layers 600 of the grid 502, and may therefore be represented by a plurality of sub-regions. In other words, at block 340, the server 101 is configured to determine whether any sub-regions detected through one or more performances of blocks 330-335 correspond to a single object.
- Turning to FIG. 8, the determination at block 340 for a selected sub-region includes determining whether the planar position of the selected sub-region and the planar position of another sub-region in an adjacent layer 600 of the grid 502 abut each other. For example, beginning with the sub-region 700-2, which resides in the layer 600-1, the server 101 is configured to determine whether the sub-region 700-2 shares a boundary in the XZ plane of the frame of reference 416 with a boundary of any sub-region in the layer 600-2 (which is adjacent in depth to the layer 600-1). In the present example, the determination is affirmative for both the sub-regions 704-1 and 704-2 in the layer 600-2. The server 101 is therefore configured, at block 345, to merge the sub-regions 700-2, 704-1 and 704-2, e.g. by assigning a common region identifier to all three sub-regions.
- At block 350, the server 101 is configured to determine whether any sub-regions remain to be assessed via a further performance of block 340. In the present example, the determination is affirmative, and block 340 is repeated, for example by selecting the sub-region 700-1. As there are no sub-regions in the layer 600-2 with planar positions abutting the planar position of the sub-region 700-1, the determination at block 340 is negative.
- In a further example performance of block 340, the server 101 may be configured to select the sub-region 704-1 of the layer 600-2. As is evident in FIG. 8, the boundary of the sub-region 708-1 (in the layer 600-3) coincides with the boundary of the sub-region 704-1 in the XZ plane. The determination at block 340 is therefore affirmative, and the sub-regions 704-1 and 708-1 are merged (i.e. assigned the same region identifier). As will now be apparent, repeated performances of blocks 340, 345 and 350 result in the detection of contiguous regions 800 and 804 in the grid 502. FIG. 9A illustrates the contiguous regions 800 and 804.
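- Blocks 340 through 350 can be realised as a union-find pass over the per-layer labels: sub-regions in depth-adjacent layers whose planar footprints touch receive a common region identifier. The union-find structure, and the one-cell dilation used so that abutting (rather than overlapping) footprints count as touching, are implementation choices rather than details of the disclosure:

```python
import numpy as np
from scipy import ndimage

def merge_across_layers(layer_labels):
    """Merge sub-regions in adjacent layers whose XZ footprints overlap or
    abut; returns a mapping from (layer, label) to a canonical region id."""
    parent = {}

    def find(node):
        parent.setdefault(node, node)
        while parent[node] != node:
            parent[node] = parent[parent[node]]  # path halving
            node = parent[node]
        return node

    def union(a, b):
        parent[find(a)] = find(b)

    for depth in range(len(layer_labels) - 1):
        current, deeper = layer_labels[depth], layer_labels[depth + 1]
        for label in range(1, current.max() + 1):
            # Grow the footprint by one cell so that sharing a boundary
            # still counts as touching.
            footprint = ndimage.binary_dilation(current == label)
            for other in np.unique(deeper[footprint]):
                if other:  # label 0 is background
                    union((depth, label), (depth + 1, other))
    return {node: find(node) for node in parent}
```

Sub-regions that never merge do not appear in the returned mapping and simply keep their own (layer, label) identifier.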
- At block 355, the server 101 is configured to generate detected object positions based on the contiguous regions detected at blocks 330-350. For each detected contiguous region, the server 101 is configured to generate one detected object position. Various forms of object position are contemplated. In the present example, as illustrated in FIG. 9B, the detected object positions are generated as bounding boxes containing the volumes encompassed by the cells of the corresponding contiguous regions. Thus, a first bounding box 900 is generated from the contiguous region 800, and a second bounding box 904 is generated from the contiguous region 804.
- In other examples, the detected object positions can be generated as the centroids of each contiguous region (e.g. a single point in the frame of reference 416). In further examples, the above-mentioned bounding boxes can be generated based on the point cloud 500 rather than based directly on the contiguous regions 800 and 804. As noted above, the assignment of points to the cells of the occupancy grid 502 can be stored in the memory 122. At block 355, for each contiguous region the points associated with the cells of that contiguous region are retrieved from the memory 122 and a bounding box is fitted to the retrieved points.
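- Given the stored point-to-cell assignment, fitting a box to the retrieved points is a pair of reductions. A minimal sketch, assuming axis-aligned boxes consistent with those shown in FIG. 9B (the function names are assumptions):

```python
import numpy as np

def fit_bounding_box(region_points):
    """Axis-aligned bounding box of the points retrieved for one contiguous
    region, returned as its two opposite corners."""
    return region_points.min(axis=0), region_points.max(axis=0)

def fit_centroid(region_points):
    """Alternative single-point object position: the region centroid."""
    return region_points.mean(axis=0)
```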
- The detected object positions generated at block 355 can be stored in the memory 122 (e.g. in the repository 132), and can also be transmitted to a further computing device such as the client device 105, e.g. for presentation on a display thereof. The detected object positions generated at block 355 may also be employed by the server 101 itself or by another computing device for the detection of gaps between products 112. For example, the server 101 can be configured to retrieve label positions on the shelf edges 118, indicating the expected position for products 112, and to determine whether a detected object position was generated in association with each label position (e.g. above each label position, indicating the presence of a product 112 above the corresponding label). Any label positions without corresponding detected object positions may be detected as gaps (e.g. out-of-stock products 112) by the server 101.
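- The gap check described above might be sketched as a scan over label positions, reporting a label as a gap when no detected bounding box spans it horizontally and extends above it. All names, the assumed layout (X horizontal, Z vertical) and the tolerance are assumptions, not details of the disclosure:

```python
def detect_gaps(label_positions, object_boxes, x_tolerance=0.05):
    """label_positions: iterable of (x, z) label centres on the shelf edge.
    object_boxes: iterable of (min_corner, max_corner) pairs, each corner an
    (x, y, z) triple. Returns labels with no detected object above them."""
    gaps = []
    for x, z in label_positions:
        covered = any(
            lo[0] - x_tolerance <= x <= hi[0] + x_tolerance and hi[2] > z
            for lo, hi in object_boxes
        )
        if not covered:
            gaps.append((x, z))
    return gaps
```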
- Variations to the above systems and methods are contemplated. For example, as noted earlier, in some embodiments block 325 may be performed following a negative determination at block 335 (i.e. between blocks 335 and 340). In such embodiments, the presence of occupied cells corresponding to shelf edges may lead to the detection of a single contiguous sub-region that in fact corresponds to distinct objects, as a result of the shelf edge extending between the portions of the sub-region corresponding to each object. Responsive to discarding cells corresponding to the shelf edges 118, the server 101 may therefore be configured to relabel remaining sub-regions where such sub-regions have been separated (i.e. are no longer contiguous).
- In further embodiments, the server 101 can be configured, prior to the performance of block 310, to segment the point cloud obtained at block 305. Specifically, referring to FIG. 10, an example point cloud 1000 obtained at block 305 is shown. The point cloud 1000 represents two distinct shelf modules, separated by a module boundary 1004. The server 101 can be configured, in such embodiments, to retrieve module boundary positions (e.g. in the frame of reference 102) from the repository 132, or to detect the module boundary 1004, for example via image gradients or the like, and to segment the point cloud 1000 into first and second segments 1008-1 and 1008-2. The server 101 can then be configured to perform the remainder of the method 300 separately for each segment 1008.
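- Finally, the per-module segmentation can be sketched as a split of the cloud at a module boundary. For simplicity the boundary is assumed here to be a plane of constant X, which need not hold in general:

```python
def split_at_module_boundary(points, boundary_x):
    """Split a point cloud covering two shelf modules into two segments at
    an X-coordinate boundary; each segment is then processed separately."""
    left = points[points[:, 0] < boundary_x]
    right = points[points[:, 0] >= boundary_x]
    return left, right
```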
- In the foregoing specification, specific embodiments have been described. However, one of ordinary skill in the art appreciates that various modifications and changes can be made without departing from the scope of the invention as set forth in the claims below. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of present teachings.
- The benefits, advantages, solutions to problems, and any element(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as critical, required, or essential features or elements of any or all the claims. The invention is defined solely by the appended claims, including any amendments made during the pendency of this application and all equivalents of those claims as issued.
- Moreover, in this document, relational terms such as first and second, top and bottom, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms “comprises,” “comprising,” “has”, “having,” “includes”, “including,” “contains”, “containing” or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises, has, includes, contains a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. An element preceded by “comprises . . . a”, “has . . . a”, “includes . . . a”, “contains . . . a” does not, without more constraints, preclude the existence of additional identical elements in the process, method, article, or apparatus that comprises, has, includes, contains the element. The terms “a” and “an” are defined as one or more unless explicitly stated otherwise herein. The terms “substantially”, “essentially”, “approximately”, “about” or any other version thereof, are defined as being close to as understood by one of ordinary skill in the art, and in one non-limiting embodiment the term is defined to be within 10%, in another embodiment within 5%, in another embodiment within 1% and in another embodiment within 0.5%. The term “coupled” as used herein is defined as connected, although not necessarily directly and not necessarily mechanically. A device or structure that is “configured” in a certain way is configured in at least that way, but may also be configured in ways that are not listed.
- It will be appreciated that some embodiments may be comprised of one or more specialized processors (or “processing devices”) such as microprocessors, digital signal processors, customized processors and field programmable gate arrays (FPGAs) and unique stored program instructions (including both software and firmware) that control the one or more processors to implement, in conjunction with certain non-processor circuits, some, most, or all of the functions of the method and/or apparatus described herein. Alternatively, some or all functions could be implemented by a state machine that has no stored program instructions, or in one or more application specific integrated circuits (ASICs), in which each function or some combinations of certain of the functions are implemented as custom logic. Of course, a combination of the two approaches could be used.
- Moreover, an embodiment can be implemented as a computer-readable storage medium having computer readable code stored thereon for programming a computer (e.g., comprising a processor) to perform a method as described and claimed herein. Examples of such computer-readable storage mediums include, but are not limited to, a hard disk, a CD-ROM, an optical storage device, a magnetic storage device, a ROM (Read Only Memory), a PROM (Programmable Read Only Memory), an EPROM (Erasable Programmable Read Only Memory), an EEPROM (Electrically Erasable Programmable Read Only Memory) and a Flash memory. Further, it is expected that one of ordinary skill, notwithstanding possibly significant effort and many design choices motivated by, for example, available time, current technology, and economic considerations, when guided by the concepts and principles disclosed herein will be readily capable of generating such software instructions and programs and ICs with minimal experimentation.
- The Abstract of the Disclosure is provided to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in various embodiments for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separately claimed subject matter.
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100017407A1 (en) * | 2008-07-16 | 2010-01-21 | Hitachi, Ltd. | Three-dimensional object recognition system and inventory system using the same |
US20140232826A1 (en) * | 2013-02-15 | 2014-08-21 | Jungheinrich Aktiengesellschaft | Method for detecting objects in a warehouse and/or for spatial orientation in a warehouse |
US20150063707A1 (en) * | 2010-06-10 | 2015-03-05 | Autodesk, Inc. | Outline approximation for point cloud of building |
WO2017175312A1 (en) * | 2016-04-05 | 2017-10-12 | 株式会社日立物流 | Measurement system and measurement method |
US20180108134A1 (en) * | 2016-10-17 | 2018-04-19 | Conduent Business Services, Llc | Store shelf imaging system and method using a vertical lidar |
US9996818B1 (en) * | 2014-12-19 | 2018-06-12 | Amazon Technologies, Inc. | Counting inventory items using image analysis and depth information |
US20190197728A1 (en) * | 2017-12-25 | 2019-06-27 | Fujitsu Limited | Object recognition apparatus, method for recognizing object, and non-transitory computer-readable storage medium for storing program |
Family Cites Families (396)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5214615A (en) | 1990-02-26 | 1993-05-25 | Will Bauer | Three-dimensional displacement of a body with computer interface |
US5209712A (en) | 1991-06-24 | 1993-05-11 | Frederic Ferri | Proprioceptive exercise, training and therapy apparatus |
US5408322A (en) | 1993-04-26 | 1995-04-18 | Materials Research Corporation | Self aligning in-situ ellipsometer and method of using for process monitoring |
JP3311830B2 (en) | 1993-09-20 | 2002-08-05 | 株式会社東芝 | 3D video creation device |
KR100197676B1 (en) | 1993-09-27 | 1999-06-15 | 윤종용 | Robot cleaner |
AU1333895A (en) | 1993-11-30 | 1995-06-19 | Raymond R. Burke | Computer system for allowing a consumer to purchase packaged goods at home |
US5414268A (en) | 1994-02-01 | 1995-05-09 | The Coe Manufacturing Company | Light scanner with interlaced camera fields and parallel light beams |
JPH0996672A (en) | 1995-09-29 | 1997-04-08 | Sukuuea:Kk | Method and system for generating three-dimensional positional data |
US20020014533A1 (en) | 1995-12-18 | 2002-02-07 | Xiaxun Zhu | Automated object dimensioning system employing contour tracing, vertice detection, and forner point detection and reduction methods on 2-d range data maps |
US6034379A (en) | 1996-03-01 | 2000-03-07 | Intermec Ip Corp. | Code reader having replaceable optics assemblies supporting multiple illuminators |
US5831719A (en) | 1996-04-12 | 1998-11-03 | Holometrics, Inc. | Laser scanning system |
US5988862A (en) | 1996-04-24 | 1999-11-23 | Cyra Technologies, Inc. | Integrated system for quickly and accurately imaging and modeling three dimensional objects |
US6075905A (en) | 1996-07-17 | 2000-06-13 | Sarnoff Corporation | Method and apparatus for mosaic image construction |
US5953055A (en) | 1996-08-08 | 1999-09-14 | Ncr Corporation | System and method for detecting and analyzing a queue |
JP3371279B2 (en) | 1997-03-10 | 2003-01-27 | ペンタックス プレシジョン株式会社 | Method and apparatus for selecting lens for TV camera |
US6026376A (en) | 1997-04-15 | 2000-02-15 | Kenney; John A. | Interactive electronic shopping system and method |
GB2330265A (en) | 1997-10-10 | 1999-04-14 | Harlequin Group Limited The | Image compositing using camera data |
IL122079A (en) | 1997-10-30 | 2002-02-10 | Netmor Ltd | Ultrasonic positioning and tracking system |
WO1999023600A1 (en) | 1997-11-04 | 1999-05-14 | The Trustees Of Columbia University In The City Of New York | Video signal face region detection |
US6975764B1 (en) | 1997-11-26 | 2005-12-13 | Cognex Technology And Investment Corporation | Fast high-accuracy multi-dimensional pattern inspection |
US7016539B1 (en) | 1998-07-13 | 2006-03-21 | Cognex Corporation | Method for fast, robust, multi-dimensional pattern recognition |
US6332098B2 (en) | 1998-08-07 | 2001-12-18 | Fedex Corporation | Methods for shipping freight |
US6820895B2 (en) | 1998-09-23 | 2004-11-23 | Vehicle Safety Systems, Inc. | Vehicle air bag minimum distance enforcement apparatus, method and system |
US6442507B1 (en) | 1998-12-29 | 2002-08-27 | Wireless Communications, Inc. | System for creating a computer model and measurement database of a wireless communication network |
US6711293B1 (en) | 1999-03-08 | 2004-03-23 | The University Of British Columbia | Method and apparatus for identifying scale invariant features in an image and use of same for locating an object in an image |
US6388688B1 (en) | 1999-04-06 | 2002-05-14 | Vergics Corporation | Graph-based visual navigation through spatial environments |
US6850946B1 (en) | 1999-05-26 | 2005-02-01 | Wireless Valley Communications, Inc. | Method and system for a building database manipulator |
US6687388B2 (en) | 2000-01-28 | 2004-02-03 | Sony Corporation | Picture processing apparatus |
US6711283B1 (en) | 2000-05-03 | 2004-03-23 | Aperio Technologies, Inc. | Fully automatic rapid microscope slide scanner |
WO2002005562A2 (en) | 2000-07-11 | 2002-01-17 | Mediaflow, Llc | Video compression using adaptive selection of groups of frames, adaptive bit allocation, and adaptive replenishment |
GB0020850D0 (en) | 2000-08-23 | 2000-10-11 | Univ London | A system and method for intelligent modelling of public spaces |
TW512478B (en) | 2000-09-14 | 2002-12-01 | Olympus Optical Co | Alignment apparatus |
US7054509B2 (en) | 2000-10-21 | 2006-05-30 | Cardiff Software, Inc. | Determining form identification through the spatial relationship of input data |
BR0115469A (en) | 2000-11-21 | 2003-08-19 | Michael Stuart Gardner | Tag Marking |
US7068852B2 (en) | 2001-01-23 | 2006-06-27 | Zoran Corporation | Edge detection and sharpening process for an image |
JP2002321698A (en) | 2001-04-27 | 2002-11-05 | Mitsubishi Heavy Ind Ltd | Boarding bridge for carrying air cargo |
US7277187B2 (en) | 2001-06-29 | 2007-10-02 | Quantronix, Inc. | Overhead dimensioning system and method |
US7046273B2 (en) | 2001-07-02 | 2006-05-16 | Fuji Photo Film Co., Ltd | System and method for collecting image information |
US6995762B1 (en) | 2001-09-13 | 2006-02-07 | Symbol Technologies, Inc. | Measurement of dimensions of solid objects from two-dimensional image(s) |
CA2460892A1 (en) | 2001-09-18 | 2003-03-27 | Pro-Corp Holdings International Limited | Image recognition inventory management system |
US6722568B2 (en) | 2001-11-20 | 2004-04-20 | Ncr Corporation | Methods and apparatus for detection and processing of supplemental bar code labels |
US7233699B2 (en) | 2002-03-18 | 2007-06-19 | National Instruments Corporation | Pattern matching using multiple techniques |
US20060106742A1 (en) | 2002-04-29 | 2006-05-18 | Speed Trac Technologies, Inc. | System and method for weighing and tracking freight |
US7149749B2 (en) | 2002-06-03 | 2006-12-12 | International Business Machines Corporation | Method of inserting and deleting leaves in tree table structures |
US6928194B2 (en) | 2002-09-19 | 2005-08-09 | M7 Visual Intelligence, Lp | System for mosaicing digital ortho-images |
EP1434170A3 (en) | 2002-11-07 | 2006-04-05 | Matsushita Electric Industrial Co., Ltd. | Method and apparatus for adding ornaments to an image of a person |
JP3862652B2 (en) | 2002-12-10 | 2006-12-27 | キヤノン株式会社 | Printing control method and information processing apparatus |
US7248754B2 (en) | 2003-05-05 | 2007-07-24 | International Business Machines Corporation | Apparatus and method for determining whether machine readable information on an item matches the item |
US7137207B2 (en) | 2003-06-23 | 2006-11-21 | Armstrong Timothy D | Measuring arrangement to determine location of corners for a building foundation and a wooden base frame, and the use thereof |
US7090135B2 (en) | 2003-07-07 | 2006-08-15 | Symbol Technologies, Inc. | Imaging arrangement and barcode imager for imaging an optical code or target at a plurality of focal planes |
US7493336B2 (en) | 2003-07-22 | 2009-02-17 | International Business Machines Corporation | System and method of updating planogram information using RFID tags and personal shopping device |
DE10336638A1 (en) | 2003-07-25 | 2005-02-10 | Robert Bosch Gmbh | Apparatus for classifying at least one object in a vehicle environment |
TWI266035B (en) | 2003-11-11 | 2006-11-11 | Hon Hai Prec Ind Co Ltd | A system and method for measuring point-cloud |
SE0400556D0 (en) | 2004-03-05 | 2004-03-05 | Pricer Ab | Electronic shelf labeling system, electronic label, handheld device and method in an electronic labeling system |
JP2007530978A (en) | 2004-03-29 | 2007-11-01 | エヴォリューション ロボティクス インコーポレイテッド | Position estimation method and apparatus using reflected light source |
WO2005098475A1 (en) | 2004-03-29 | 2005-10-20 | Evolution Robotics, Inc. | Sensing device and method for measuring position and orientation relative to multiple light sources |
US7885865B2 (en) | 2004-05-11 | 2011-02-08 | The Kroger Co. | System and method for mapping of planograms |
US7245558B2 (en) | 2004-06-18 | 2007-07-17 | Symbol Technologies, Inc. | System and method for detection using ultrasonic waves |
US7168618B2 (en) | 2004-08-12 | 2007-01-30 | International Business Machines Corporation | Retail store method and system |
US8207964B1 (en) | 2008-02-22 | 2012-06-26 | Meadow William D | Methods and apparatus for generating three-dimensional image data models |
US7643665B2 (en) | 2004-08-31 | 2010-01-05 | Semiconductor Insights Inc. | Method of design analysis of existing integrated circuits |
WO2006065563A2 (en) | 2004-12-14 | 2006-06-22 | Sky-Trax Incorporated | Method and apparatus for determining position and rotational orientation of an object |
US7783383B2 (en) | 2004-12-22 | 2010-08-24 | Intelligent Hospital Systems Ltd. | Automated pharmacy admixture system (APAS) |
US20060164682A1 (en) | 2005-01-25 | 2006-07-27 | Dspv, Ltd. | System and method of improving the legibility and applicability of document pictures using form based image enhancement |
US7440903B2 (en) | 2005-01-28 | 2008-10-21 | Target Brands, Inc. | System and method for evaluating and recommending planograms |
DE102005007536A1 (en) | 2005-02-17 | 2007-01-04 | Isra Vision Systems Ag | Method for calibrating a measuring system |
US7751928B1 (en) | 2005-03-11 | 2010-07-06 | Amazon Technologies, Inc. | Method and system for agent exchange-based materials handling |
AU2006236789A1 (en) | 2005-04-13 | 2006-10-26 | Store Eyes, Inc. | System and method for measuring display compliance |
US20080175513A1 (en) | 2005-04-19 | 2008-07-24 | Ming-Jun Lai | Image Edge Detection Systems and Methods |
US8294809B2 (en) | 2005-05-10 | 2012-10-23 | Advanced Scientific Concepts, Inc. | Dimensioning system |
US7590053B2 (en) | 2005-06-21 | 2009-09-15 | Alcatel Lucent | Multiple endpoint protection using SPVCs |
CN101248330B (en) | 2005-06-28 | 2015-06-17 | 斯甘拉伊斯股份有限公司 | A system and method for measuring and mapping a surface relative to a reference |
US7817826B2 (en) | 2005-08-12 | 2010-10-19 | Intelitrac Inc. | Apparatus and method for partial component facial recognition |
US8625854B2 (en) | 2005-09-09 | 2014-01-07 | Industrial Research Limited | 3D scene scanner and a position and orientation system |
WO2007042251A2 (en) | 2005-10-10 | 2007-04-19 | Nordic Bioscience A/S | A method of segmenting an image |
US7605817B2 (en) | 2005-11-09 | 2009-10-20 | 3M Innovative Properties Company | Determining camera motion |
US7508794B2 (en) | 2005-11-29 | 2009-03-24 | Cisco Technology, Inc. | Authorizing an endpoint node for a communication service |
US8577538B2 (en) | 2006-07-14 | 2013-11-05 | Irobot Corporation | Method and system for controlling a remote vehicle |
JP4730121B2 (en) | 2006-02-07 | 2011-07-20 | ソニー株式会社 | Image processing apparatus and method, recording medium, and program |
US20070197895A1 (en) | 2006-02-17 | 2007-08-23 | Sdgi Holdings, Inc. | Surgical instrument to assess tissue characteristics |
US8157205B2 (en) | 2006-03-04 | 2012-04-17 | Mcwhirk Bruce Kimberly | Multibody aircrane |
US20100171826A1 (en) | 2006-04-12 | 2010-07-08 | Store Eyes, Inc. | Method for measuring retail display and compliance |
EP1850270B1 (en) | 2006-04-28 | 2010-06-09 | Toyota Motor Europe NV | Robust interest point detector and descriptor |
CA2545118C (en) | 2006-04-28 | 2011-07-05 | Global Sensor Systems Inc. | Device for measuring package size |
US20070272732A1 (en) | 2006-05-26 | 2007-11-29 | Mettler-Toledo, Inc. | Weighing and dimensioning system and method for weighing and dimensioning |
EP2041516A2 (en) | 2006-06-22 | 2009-04-01 | Roy Sandberg | Method and apparatus for robotic path planning, selection, and visualization |
JP4910507B2 (en) | 2006-06-29 | 2012-04-04 | コニカミノルタホールディングス株式会社 | Face authentication system and face authentication method |
US7647752B2 (en) | 2006-07-12 | 2010-01-19 | Greg Magnell | System and method for making custom boxes for objects of random size or shape |
US7940955B2 (en) | 2006-07-26 | 2011-05-10 | Delphi Technologies, Inc. | Vision-based method of determining cargo status by boundary detection |
US7693757B2 (en) | 2006-09-21 | 2010-04-06 | International Business Machines Corporation | System and method for performing inventory using a mobile inventory robot |
CA2668364C (en) | 2006-11-02 | 2016-06-14 | Queen's University At Kingston | Method and apparatus for assessing proprioceptive function |
WO2008057504A2 (en) | 2006-11-06 | 2008-05-15 | Aman James A | Load tracking system based on self- tracking forklift |
US8531457B2 (en) | 2006-11-29 | 2013-09-10 | Technion Research And Development Foundation Ltd. | Apparatus and method for finding visible points in a cloud point |
US7474389B2 (en) | 2006-12-19 | 2009-01-06 | Dean Greenberg | Cargo dimensional and weight analyzing system |
US8189926B2 (en) | 2006-12-30 | 2012-05-29 | Videomining Corporation | Method and system for automatically analyzing categories in a physical space based on the visual characterization of people |
US20080164310A1 (en) | 2007-01-09 | 2008-07-10 | Dupuy Charles G | Labeling system |
JP4878644B2 (en) | 2007-03-15 | 2012-02-15 | 学校法人 関西大学 | Moving object noise removal processing apparatus and moving object noise removal processing program |
US7940279B2 (en) | 2007-03-27 | 2011-05-10 | Utah State University | System and method for rendering of texel imagery |
US8132728B2 (en) | 2007-04-04 | 2012-03-13 | Sick, Inc. | Parcel dimensioning measurement system and method |
US8094937B2 (en) | 2007-04-17 | 2012-01-10 | Avago Technologies Ecbu Ip (Singapore) Pte. Ltd. | System and method for labeling feature clusters in frames of image data for optical navigation |
WO2008134562A2 (en) | 2007-04-27 | 2008-11-06 | Nielsen Media Research, Inc. | Methods and apparatus to monitor in-store media and consumer traffic related to retail environments |
JP4561769B2 (en) | 2007-04-27 | 2010-10-13 | アイシン・エィ・ダブリュ株式会社 | Route guidance system and route guidance method |
EP2158576A1 (en) | 2007-06-08 | 2010-03-03 | Tele Atlas B.V. | Method of and apparatus for producing a multi-viewpoint panorama |
EP2165289A4 (en) | 2007-06-11 | 2012-07-04 | Hand Held Prod Inc | Optical reader system for extracting information in a digital image |
US7982423B2 (en) | 2007-07-04 | 2011-07-19 | Bossa Nova Concepts, Llc | Statically stable biped robotic mechanism and method of actuating |
JP4661838B2 (en) | 2007-07-18 | 2011-03-30 | トヨタ自動車株式会社 | Route planning apparatus and method, cost evaluation apparatus, and moving body |
KR100922494B1 (en) | 2007-07-19 | 2009-10-20 | 삼성전자주식회사 | Method for measuring pose of a mobile robot and method and apparatus for measuring position of the mobile robot using the method |
US7726575B2 (en) | 2007-08-10 | 2010-06-01 | Hand Held Products, Inc. | Indicia reading terminal having spatial measurement functionality |
US8950673B2 (en) | 2007-08-30 | 2015-02-10 | Symbol Technologies, Inc. | Imaging system for reading target with multiple symbols |
US7949568B2 (en) | 2007-08-31 | 2011-05-24 | Accenture Global Services Limited | Determination of product display parameters based on image processing |
US8630924B2 (en) | 2007-08-31 | 2014-01-14 | Accenture Global Services Limited | Detection of stock out conditions based on image processing |
US9135491B2 (en) | 2007-08-31 | 2015-09-15 | Accenture Global Services Limited | Digital point-of-sale analyzer |
US8009864B2 (en) | 2007-08-31 | 2011-08-30 | Accenture Global Services Limited | Determination of inventory conditions based on image processing |
US8189855B2 (en) | 2007-08-31 | 2012-05-29 | Accenture Global Services Limited | Planogram extraction based on image processing |
US8295590B2 (en) | 2007-09-14 | 2012-10-23 | Abbyy Software Ltd. | Method and system for creating a form template for a form |
JP4466705B2 (en) | 2007-09-21 | 2010-05-26 | ヤマハ株式会社 | Navigation device |
US8396284B2 (en) | 2007-10-23 | 2013-03-12 | Leica Geosystems Ag | Smart picking in 3D point clouds |
US8091782B2 (en) | 2007-11-08 | 2012-01-10 | International Business Machines Corporation | Using cameras to monitor actual inventory |
US20090125350A1 (en) | 2007-11-14 | 2009-05-14 | Pieter Lessing | System and method for capturing and storing supply chain and logistics support information in a relational database system |
US20090160975A1 (en) | 2007-12-19 | 2009-06-25 | Ncr Corporation | Methods and Apparatus for Improved Image Processing to Provide Retroactive Image Focusing and Improved Depth of Field in Retail Imaging Systems |
US8423431B1 (en) | 2007-12-20 | 2013-04-16 | Amazon Technologies, Inc. | Light emission guidance |
US20090192921A1 (en) | 2008-01-24 | 2009-07-30 | Michael Alan Hicks | Methods and apparatus to survey a retail environment |
US8353457B2 (en) | 2008-02-12 | 2013-01-15 | Datalogic ADC, Inc. | Systems and methods for forming a composite image of multiple portions of an object from multiple perspectives |
US7971664B2 (en) | 2008-03-18 | 2011-07-05 | Bossa Nova Robotics Ip, Inc. | Efficient actuation and selective engaging and locking clutch mechanisms for reconfiguration and multiple-behavior locomotion of an at least two-appendage robot |
US9766074B2 (en) | 2008-03-28 | 2017-09-19 | Regents Of The University Of Minnesota | Vision-aided inertial navigation |
US8064729B2 (en) | 2008-04-03 | 2011-11-22 | Seiko Epson Corporation | Image skew detection apparatus and methods |
US7707073B2 (en) | 2008-05-15 | 2010-04-27 | Sony Ericsson Mobile Communications, Ab | Systems methods and computer program products for providing augmented shopping information |
US20150170256A1 (en) | 2008-06-05 | 2015-06-18 | Aisle411, Inc. | Systems and Methods for Presenting Information Associated With a Three-Dimensional Location on a Two-Dimensional Display |
JP4720859B2 (en) | 2008-07-09 | 2011-07-13 | カシオ計算機株式会社 | Image processing apparatus, image processing method, and program |
US8184196B2 (en) | 2008-08-05 | 2012-05-22 | Qualcomm Incorporated | System and method to generate depth data using edge detection |
US9841314B2 (en) | 2008-08-29 | 2017-12-12 | United Parcel Service Of America, Inc. | Systems and methods for freight tracking and monitoring |
US9824495B2 (en) | 2008-09-11 | 2017-11-21 | Apple Inc. | Method and system for compositing an augmented reality scene |
US20100070365A1 (en) | 2008-09-12 | 2010-03-18 | At&T Intellectual Property I, L.P. | Planogram guided shopping |
EP2733640B1 (en) | 2008-09-14 | 2018-11-07 | Eliezer Magal | Automatic identification system for randomly oriented objects |
US20100091094A1 (en) | 2008-10-14 | 2010-04-15 | Marek Sekowski | Mechanism for Directing a Three-Dimensional Camera System |
US8737168B2 (en) | 2008-10-20 | 2014-05-27 | Siva Somasundaram | System and method for automatic determination of the physical location of data center equipment |
US8479996B2 (en) | 2008-11-07 | 2013-07-09 | Symbol Technologies, Inc. | Identification of non-barcoded products |
KR101234798B1 (en) | 2008-11-24 | 2013-02-20 | 삼성전자주식회사 | Method and apparatus for measuring position of the mobile robot |
US8463079B2 (en) | 2008-12-16 | 2013-06-11 | Intermec Ip Corp. | Method and apparatus for geometrical measurement using an optical device such as a barcode and/or RFID scanner |
US8812226B2 (en) | 2009-01-26 | 2014-08-19 | GM Global Technology Operations LLC | Multiobject fusion module for collision preparation system |
US8265895B2 (en) | 2009-03-27 | 2012-09-11 | Symbol Technologies, Inc. | Interactive sensor systems and methods for dimensioning |
US8284988B2 (en) | 2009-05-13 | 2012-10-09 | Applied Vision Corporation | System and method for dimensioning objects using stereoscopic imaging |
US8743176B2 (en) | 2009-05-20 | 2014-06-03 | Advanced Scientific Concepts, Inc. | 3-dimensional hybrid camera and production system |
US8049621B1 (en) | 2009-05-28 | 2011-11-01 | Walgreen Co. | Method and apparatus for remote merchandise planogram auditing and reporting |
US8542252B2 (en) | 2009-05-29 | 2013-09-24 | Microsoft Corporation | Target digitization, extraction, and tracking |
US8933925B2 (en) | 2009-06-15 | 2015-01-13 | Microsoft Corporation | Piecewise planar reconstruction of three-dimensional scenes |
US7997430B2 (en) | 2009-06-30 | 2011-08-16 | Target Brands, Inc. | Display apparatus and method |
US20120019393A1 (en) | 2009-07-31 | 2012-01-26 | Robert Wolinsky | System and method for tracking carts in a retail environment |
CA2712576C (en) | 2009-08-11 | 2012-04-10 | Certusview Technologies, Llc | Systems and methods for complex event processing of vehicle-related information |
WO2011022716A1 (en) | 2009-08-21 | 2011-02-24 | Syngenta Participations Ag | Crop automated relative maturity system |
KR101619076B1 (en) | 2009-08-25 | 2016-05-10 | 삼성전자 주식회사 | Method of detecting and tracking moving object for mobile platform |
US8942884B2 (en) | 2010-01-14 | 2015-01-27 | Innovative Transport Solutions, Llc | Transport system |
US20130278631A1 (en) | 2010-02-28 | 2013-10-24 | Osterhout Group, Inc. | 3d positioning of augmented reality information |
US20110216063A1 (en) | 2010-03-08 | 2011-09-08 | Celartem, Inc. | Lidar triangular network compression |
US8456518B2 (en) | 2010-03-31 | 2013-06-04 | James Cameron & Vincent Pace | Stereoscopic camera with automatic obstruction removal |
US8570343B2 (en) | 2010-04-20 | 2013-10-29 | Dassault Systemes | Automatic generation of 3D models from packaged goods product images |
US8619265B2 (en) | 2011-03-14 | 2013-12-31 | Faro Technologies, Inc. | Automatic measurement of dimensional data with a laser tracker |
US9400170B2 (en) | 2010-04-21 | 2016-07-26 | Faro Technologies, Inc. | Automatic measurement of dimensional data within an acceptance region by a laser tracker |
US8199977B2 (en) | 2010-05-07 | 2012-06-12 | Honeywell International Inc. | System and method for extraction of features from a 3-D point cloud |
US8134717B2 (en) | 2010-05-21 | 2012-03-13 | LTS Scale Company | Dimensional detection system and associated method |
US9109877B2 (en) | 2010-05-21 | 2015-08-18 | Jonathan S. Thierman | Method and apparatus for dimensional measurement |
US20110310088A1 (en) | 2010-06-17 | 2011-12-22 | Microsoft Corporation | Personalized navigation through virtual 3d environments |
US20120022913A1 (en) | 2010-07-20 | 2012-01-26 | Target Brands, Inc. | Planogram Generation for Peg and Shelf Items |
JP4914528B1 (en) | 2010-08-31 | 2012-04-11 | NS Solutions Corporation | Augmented reality providing system, information processing terminal, information processing apparatus, augmented reality providing method, information processing method, and program |
US9398205B2 (en) | 2010-09-01 | 2016-07-19 | Apple Inc. | Auto-focus control using image statistics data with coarse and fine auto-focus scores |
US8571314B2 (en) | 2010-09-02 | 2013-10-29 | Samsung Electronics Co., Ltd. | Three-dimensional display system with depth map mechanism and method of operation thereof |
DK2612297T3 (en) | 2010-09-03 | 2017-11-13 | California Institute of Technology | Three-dimensional image system |
US8872851B2 (en) | 2010-09-24 | 2014-10-28 | Intel Corporation | Augmenting image data based on related 3D point cloud data |
EP2439487B1 (en) | 2010-10-06 | 2012-08-22 | Sick Ag | Volume measuring device for mobile objects |
US8174931B2 (en) | 2010-10-08 | 2012-05-08 | HJ Laboratories, LLC | Apparatus and method for providing indoor location, position, or tracking of a mobile computer using building information |
US9171442B2 (en) | 2010-11-19 | 2015-10-27 | Tyco Fire & Security Gmbh | Item identification using video recognition to supplement bar code or RFID information |
US20120133639A1 (en) | 2010-11-30 | 2012-05-31 | Microsoft Corporation | Strip panorama |
US20120250984A1 (en) | 2010-12-01 | 2012-10-04 | The Trustees Of The University Of Pennsylvania | Image segmentation for distributed target tracking and scene analysis |
US9317761B2 (en) | 2010-12-09 | 2016-04-19 | Nanyang Technological University | Method and an apparatus for determining vein patterns from a colour image |
US8773946B2 (en) | 2010-12-30 | 2014-07-08 | Honeywell International Inc. | Portable housings for generation of building maps |
US8744644B2 (en) | 2011-01-19 | 2014-06-03 | Electronics And Telecommunications Research Institute | Apparatus and method for detecting location of vehicle |
KR101758058B1 (en) | 2011-01-20 | 2017-07-17 | Samsung Electronics Co., Ltd. | Apparatus and method for estimating camera motion using depth information, augmented reality system |
US8939369B2 (en) | 2011-01-24 | 2015-01-27 | Datalogic ADC, Inc. | Exception detection and handling in automated optical code reading systems |
US20120190453A1 (en) | 2011-01-25 | 2012-07-26 | Bossa Nova Robotics Ip, Inc. | System and method for online-offline interactive experience |
US20120191880A1 (en) | 2011-01-26 | 2012-07-26 | Bossa Nova Robotics IP, Inc | System and method for identifying accessories connected to apparatus |
KR20140040094 (en) | 2011-01-28 | 2014-04-02 | InTouch Technologies, Inc. | Interfacing with a mobile telepresence robot |
US9207302B2 (en) | 2011-01-30 | 2015-12-08 | Xueming Jiang | Fully-automatic verification system for intelligent electric energy meters |
US8711206B2 (en) | 2011-01-31 | 2014-04-29 | Microsoft Corporation | Mobile camera localization using depth maps |
US8447549B2 (en) | 2011-02-11 | 2013-05-21 | Quality Vision International, Inc. | Tolerance evaluation with reduced measured points |
US8660338B2 (en) | 2011-03-22 | 2014-02-25 | Honeywell International Inc. | Wide baseline feature matching using collaborative navigation and digital terrain elevation data constraints |
US20140019311A1 (en) | 2011-03-31 | 2014-01-16 | Nec Corporation | Store system, control method thereof, and non-transitory computer-readable medium storing a control program thereof |
US8693725B2 (en) | 2011-04-19 | 2014-04-08 | International Business Machines Corporation | Reliability in detecting rail crossing events |
US9854209B2 (en) | 2011-04-19 | 2017-12-26 | Ford Global Technologies, Llc | Display system utilizing vehicle and trailer dynamics |
WO2012155104A1 (en) | 2011-05-11 | 2012-11-15 | Proiam, Llc | Enrollment apparatus, system, and method featuring three dimensional camera |
US20120287249A1 (en) | 2011-05-12 | 2012-11-15 | Electronics And Telecommunications Research Institute | Method for obtaining depth information and apparatus using the same |
US8902353B2 (en) | 2011-05-12 | 2014-12-02 | Symbol Technologies, Inc. | Imaging reader with independently controlled illumination rate |
US9785898B2 (en) | 2011-06-20 | 2017-10-10 | Hi-Tech Solutions Ltd. | System and method for identifying retail products and determining retail product arrangements |
US9064394B1 (en) | 2011-06-22 | 2015-06-23 | Alarm.Com Incorporated | Virtual sensors |
US9070285B1 (en) | 2011-07-25 | 2015-06-30 | UtopiaCompression Corporation | Passive camera based cloud detection and avoidance for aircraft systems |
WO2013016518A1 (en) | 2011-07-27 | 2013-01-31 | Mine Safety Appliances Company | Navigational deployment and initialization systems and methods |
KR101907081B1 (en) | 2011-08-22 | 2018-10-11 | Samsung Electronics Co., Ltd. | Method for separating objects in three-dimensional point clouds |
US9129277B2 (en) | 2011-08-30 | 2015-09-08 | Digimarc Corporation | Methods and arrangements for identifying objects |
US9367770B2 (en) | 2011-08-30 | 2016-06-14 | Digimarc Corporation | Methods and arrangements for identifying objects |
TWI622540B (en) | 2011-09-09 | 2018-05-01 | Symbotic LLC | Automated storage and retrieval system |
US9002099B2 (en) | 2011-09-11 | 2015-04-07 | Apple Inc. | Learning-based estimation of hand and finger pose |
US11074495B2 (en) | 2013-02-28 | 2021-07-27 | Z Advanced Computing, Inc. (Zac) | System and method for extremely efficient image and pattern recognition and artificial intelligence platform |
US10330491B2 (en) | 2011-10-10 | 2019-06-25 | Texas Instruments Incorporated | Robust step detection using low cost MEMS accelerometer in mobile applications, and processing methods, apparatus and systems |
US9033239B2 (en) | 2011-11-11 | 2015-05-19 | James T. Winkel | Projected image planogram system |
EP2776216B1 (en) | 2011-11-11 | 2022-08-31 | iRobot Corporation | Robot apparatus and control method for resuming operation following a pause |
US9159047B2 (en) | 2011-11-11 | 2015-10-13 | James T. Winkel | Projected image planogram system |
US8726200B2 (en) | 2011-11-23 | 2014-05-13 | Taiwan Semiconductor Manufacturing Co., Ltd. | Recognition of template patterns with mask information |
US8706293B2 (en) | 2011-11-29 | 2014-04-22 | Cereson Co., Ltd. | Vending machine with automated detection of product position |
US8793107B2 (en) | 2011-12-01 | 2014-07-29 | Harris Corporation | Accuracy-based significant point derivation from dense 3D point clouds for terrain modeling |
CN103164842A (en) | 2011-12-14 | 2013-06-19 | Hon Hai Precision Industry (Shenzhen) Co., Ltd. | Point cloud extraction system and method |
US20130154802A1 (en) | 2011-12-19 | 2013-06-20 | Symbol Technologies, Inc. | Method and apparatus for updating a central plan for an area based on a location of a plurality of radio frequency identification readers |
US20130162806A1 (en) | 2011-12-23 | 2013-06-27 | Mitutoyo Corporation | Enhanced edge focus tool |
CN104135898B (en) | 2012-01-06 | 2017-04-05 | Sunrise R&D Holdings, LLC | Display frame module and sectional display stand system |
EP2615580B1 (en) | 2012-01-13 | 2016-08-17 | Softkinetic Software | Automatic scene calibration |
US9740937B2 (en) | 2012-01-17 | 2017-08-22 | Avigilon Fortress Corporation | System and method for monitoring a retail environment using video content analysis with depth sensing |
US9037287B1 (en) | 2012-02-17 | 2015-05-19 | National Presort, Inc. | System and method for optimizing a mail document sorting machine |
US8958911B2 (en) | 2012-02-29 | 2015-02-17 | Irobot Corporation | Mobile robot |
US8668136B2 (en) | 2012-03-01 | 2014-03-11 | Trimble Navigation Limited | Method and system for RFID-assisted imaging |
ES2535122T3 (en) | 2012-03-01 | 2015-05-05 | Caljan Rite-Hite Aps | Extensible conveyor with light |
US9329269B2 (en) | 2012-03-15 | 2016-05-03 | GM Global Technology Operations LLC | Method for registration of range images from multiple LiDARS |
US8989342B2 (en) | 2012-04-18 | 2015-03-24 | The Boeing Company | Methods and systems for volumetric reconstruction using radiography |
US9153061B2 (en) | 2012-05-04 | 2015-10-06 | Qualcomm Incorporated | Segmentation of 3D point clouds for dense 3D modeling |
US9525976B2 (en) | 2012-05-10 | 2016-12-20 | Honeywell International Inc. | BIM-aware location based application |
WO2013170260A1 (en) | 2012-05-11 | 2013-11-14 | Proiam, Llc | Hand held dimension capture apparatus, system, and method |
US8941645B2 (en) | 2012-05-11 | 2015-01-27 | Dassault Systemes | Comparing virtual and real images in a shopping experience |
US9846960B2 (en) | 2012-05-31 | 2017-12-19 | Microsoft Technology Licensing, Llc | Automated camera array calibration |
US9135543B2 (en) | 2012-06-20 | 2015-09-15 | Apple Inc. | Compression and obfuscation of three-dimensional coding |
US9418352B2 (en) | 2012-06-29 | 2016-08-16 | Intel Corporation | Image-augmented inventory management and wayfinding |
US9420265B2 (en) | 2012-06-29 | 2016-08-16 | Mitsubishi Electric Research Laboratories, Inc. | Tracking poses of 3D camera using points and planes |
US20140003655A1 (en) | 2012-06-29 | 2014-01-02 | Praveen Gopalakrishnan | Method, apparatus and system for providing image data to represent inventory |
US8971637B1 (en) | 2012-07-16 | 2015-03-03 | Matrox Electronic Systems Ltd. | Method and system for identifying an edge in an image |
KR101441187B1 (en) | 2012-07-19 | 2014-09-18 | Korea University Research And Business Foundation | Method for planning a path for a humanoid robot |
US9651363B2 (en) | 2012-07-24 | 2017-05-16 | Datalogic Usa, Inc. | Systems and methods of object measurement in an automated data reader |
US8757479B2 (en) | 2012-07-31 | 2014-06-24 | Xerox Corporation | Method and system for creating personalized packaging |
EP2693362B1 (en) | 2012-07-31 | 2015-06-17 | Sick Ag | Detection system for mounting on a conveyor belt |
US8923893B2 (en) | 2012-08-07 | 2014-12-30 | Symbol Technologies, Inc. | Real-time planogram generation and maintenance |
US20140047342A1 (en) | 2012-08-07 | 2014-02-13 | Advanced Micro Devices, Inc. | System and method for allocating a cluster of nodes for a cloud computing system based on hardware characteristics |
CN103679164A (en) | 2012-09-21 | 2014-03-26 | Alibaba Group Holding Limited | Method and system for identifying and processing a mark based on a mobile terminal |
US9939259B2 (en) | 2012-10-04 | 2018-04-10 | Hand Held Products, Inc. | Measuring object dimensions using mobile computer |
FR2996512B1 (en) | 2012-10-05 | 2014-11-21 | Renault Sa | Method for evaluating the risk of collision at an intersection |
US20140192050A1 (en) | 2012-10-05 | 2014-07-10 | University Of Southern California | Three-dimensional point processing and model generation |
US9472022B2 (en) | 2012-10-05 | 2016-10-18 | University Of Southern California | Three-dimensional point processing and model generation |
US9841311B2 (en) | 2012-10-16 | 2017-12-12 | Hand Held Products, Inc. | Dimensioning system |
WO2014066422A2 (en) | 2012-10-22 | 2014-05-01 | Bossa Nova Robotics Ip, Inc. | Self-deploying support member, and methods and apparatus using same |
US9635606B2 (en) | 2012-11-04 | 2017-04-25 | Kt Corporation | Access point selection and management |
EP2915563B1 (en) | 2012-11-05 | 2018-04-18 | Mitsubishi Electric Corporation | Three-dimensional image capture system, and particle beam therapy device |
ITVI20120303A1 (en) | 2012-11-09 | 2014-05-10 | St Microelectronics Srl | Method to detect a straight line in a digital image |
US9562971B2 (en) | 2012-11-22 | 2017-02-07 | Geosim Systems Ltd. | Point-cloud fusion |
US8825258B2 (en) | 2012-11-30 | 2014-09-02 | Google Inc. | Engaging and disengaging for autonomous driving |
US9380222B2 (en) | 2012-12-04 | 2016-06-28 | Symbol Technologies, Llc | Transmission of images for inventory monitoring |
US10701149B2 (en) | 2012-12-13 | 2020-06-30 | Level 3 Communications, Llc | Content delivery framework having origin services |
MY172143A (en) | 2012-12-13 | 2019-11-14 | Mimos Berhad | Method for non-static foreground feature extraction and classification |
US20140195373A1 (en) | 2013-01-10 | 2014-07-10 | International Business Machines Corporation | Systems and methods for managing inventory in a shopping store |
US20140214547A1 (en) | 2013-01-25 | 2014-07-31 | R4 Technologies, Llc | Systems and methods for augmented retail reality |
US20140214600A1 (en) | 2013-01-31 | 2014-07-31 | Wal-Mart Stores, Inc. | Assisting A Consumer In Locating A Product Within A Retail Store |
US9154773B2 (en) | 2013-03-15 | 2015-10-06 | Seiko Epson Corporation | 2D/3D localization and pose estimation of harness cables using a configurable structure representation for robot operations |
US8965561B2 (en) | 2013-03-15 | 2015-02-24 | Cybernet Systems Corporation | Automated warehousing using robotic forklifts |
TWI594933B (en) | 2013-03-15 | 2017-08-11 | Symbotic LLC | Automated storage and retrieval system |
US9558559B2 (en) | 2013-04-05 | 2017-01-31 | Nokia Technologies Oy | Method and apparatus for determining camera location information and/or camera pose information according to a global coordinate system |
WO2014181323A1 (en) | 2013-05-05 | 2014-11-13 | Trax Technology Solutions Pte Ltd. | System and method of retail image analysis |
US9037396B2 (en) | 2013-05-23 | 2015-05-19 | Irobot Corporation | Simultaneous localization and mapping for a mobile robot |
CN105246744A (en) | 2013-05-29 | 2016-01-13 | Toyota Motor Corporation | Parking assistance device |
US9158988B2 (en) | 2013-06-12 | 2015-10-13 | Symbol Technologies, LLC | Method for detecting a plurality of instances of an object |
US10268983B2 (en) | 2013-06-26 | 2019-04-23 | Amazon Technologies, Inc. | Detecting item interaction and movement |
US9443297B2 (en) | 2013-07-10 | 2016-09-13 | Cognex Corporation | System and method for selective determination of point clouds |
US10290031B2 (en) | 2013-07-24 | 2019-05-14 | Gregorio Reid | Method and system for automated retail checkout using context recognition |
US9473747B2 (en) | 2013-07-25 | 2016-10-18 | Ncr Corporation | Whole store scanner |
WO2015017242A1 (en) | 2013-07-28 | 2015-02-05 | Deluca Michael J | Augmented reality based user interfacing |
US20150088618A1 (en) | 2013-08-26 | 2015-03-26 | Ims Solutions, Inc. | Road tolling |
US9886678B2 (en) | 2013-09-25 | 2018-02-06 | Sap Se | Graphic representations of planograms |
US9615012B2 (en) | 2013-09-30 | 2017-04-04 | Google Inc. | Using a second camera to adjust settings of first camera |
US9248611B2 (en) | 2013-10-07 | 2016-02-02 | David A. Divine | 3-D printed packaging |
US20150106403A1 (en) | 2013-10-15 | 2015-04-16 | Indooratlas Oy | Generating search database based on sensor measurements |
US9412040B2 (en) | 2013-12-04 | 2016-08-09 | Mitsubishi Electric Research Laboratories, Inc. | Method for extracting planes from 3D point cloud sensor data |
US9565400B1 (en) | 2013-12-20 | 2017-02-07 | Amazon Technologies, Inc. | Automatic imaging device selection for video analytics |
US9349076B1 (en) | 2013-12-20 | 2016-05-24 | Amazon Technologies, Inc. | Template-based target object detection in an image |
EP3108686B1 (en) | 2014-02-21 | 2019-06-19 | Telefonaktiebolaget LM Ericsson (publ) | Wlan throughput prediction |
MY177646A (en) | 2014-02-28 | 2020-09-23 | Icm Airport Technics Australia Pty Ltd | Luggage processing station and system thereof |
US20150310601A1 (en) | 2014-03-07 | 2015-10-29 | Digimarc Corporation | Methods and arrangements for identifying objects |
US10203762B2 (en) | 2014-03-11 | 2019-02-12 | Magic Leap, Inc. | Methods and systems for creating virtual and augmented reality |
US20150262116A1 (en) | 2014-03-16 | 2015-09-17 | International Business Machines Corporation | Machine vision technology for shelf inventory management |
US9953420B2 (en) | 2014-03-25 | 2018-04-24 | Ford Global Technologies, Llc | Camera calibration |
CN103945208B (en) | 2014-04-24 | 2015-10-28 | Xi'an Jiaotong University | Parallel synchronous zooming engine and method for multi-view naked-eye 3D displays |
CA2951151A1 (en) | 2014-06-04 | 2015-12-10 | Intelligrated Headquarters Llc | Truck unloader visualization |
CN104023249B (en) | 2014-06-12 | 2015-10-21 | Tencent Technology (Shenzhen) Co., Ltd. | Television channel recognition method and device |
US9659204B2 (en) | 2014-06-13 | 2017-05-23 | Conduent Business Services, Llc | Image processing methods and systems for barcode and/or product label recognition |
US10453046B2 (en) | 2014-06-13 | 2019-10-22 | Conduent Business Services, Llc | Store shelf imaging system |
US9542746B2 (en) | 2014-06-13 | 2017-01-10 | Xerox Corporation | Method and system for spatial characterization of an imaging system |
US10176452B2 (en) | 2014-06-13 | 2019-01-08 | Conduent Business Services Llc | Store shelf imaging system and method |
CN106662451B (en) | 2014-06-27 | 2018-04-24 | Crown Equipment Corporation | Lost vehicle recovery using associated feature pairs |
US11051000B2 (en) | 2014-07-14 | 2021-06-29 | Mitsubishi Electric Research Laboratories, Inc. | Method for calibrating cameras with non-overlapping views |
DE102014011821A1 (en) | 2014-08-08 | 2016-02-11 | Cargometer Gmbh | Device and method for determining the volume of an object moved by an industrial truck |
US20160044862A1 (en) | 2014-08-14 | 2016-02-18 | Raven Industries, Inc. | Site specific product application device and method |
CN104200086B (en) | 2014-08-25 | 2017-02-22 | Northwestern Polytechnical University | Wide-baseline visible light camera pose estimation method |
US20160061591A1 (en) | 2014-08-28 | 2016-03-03 | Lts Metrology, Llc | Stationary Dimensioning Apparatus |
JP2016057108A (en) | 2014-09-08 | 2016-04-21 | Topcon Corporation | Arithmetic device, arithmetic system, arithmetic method and program |
US10296950B2 (en) | 2014-09-30 | 2019-05-21 | Apple Inc. | Beacon triggered processes |
US10365110B2 (en) | 2014-09-30 | 2019-07-30 | Nec Corporation | Method and system for determining a path of an object for moving from a starting state to an end state set avoiding one or more obstacles |
US9576194B2 (en) | 2014-10-13 | 2017-02-21 | Klink Technologies | Method and system for identity and age verification |
US9706105B2 (en) | 2014-10-20 | 2017-07-11 | Symbol Technologies, Llc | Apparatus and method for specifying and aiming cameras at shelves |
US10373116B2 (en) | 2014-10-24 | 2019-08-06 | Fellow, Inc. | Intelligent inventory management and related systems and methods |
US9796093B2 (en) | 2014-10-24 | 2017-10-24 | Fellow, Inc. | Customer service robot and related systems and methods |
US9600892B2 (en) | 2014-11-06 | 2017-03-21 | Symbol Technologies, Llc | Non-parametric method of and system for estimating dimensions of objects of arbitrary shape |
JP5946073B2 (en) | 2014-11-07 | 2016-07-05 | International Business Machines Corporation | Estimation method, estimation system, computer system, and program |
US10022867B2 (en) | 2014-11-11 | 2018-07-17 | X Development Llc | Dynamically maintaining a map of a fleet of robotic devices in an environment to facilitate robotic action |
US9916002B2 (en) | 2014-11-16 | 2018-03-13 | Eonite Perception Inc. | Social applications for augmented reality technologies |
WO2016081722A1 (en) | 2014-11-20 | 2016-05-26 | Cappasity Inc. | Systems and methods for 3d capture of objects using multiple range cameras and multiple rgb cameras |
US10248653B2 (en) | 2014-11-25 | 2019-04-02 | Lionbridge Technologies, Inc. | Information technology platform for language translation and task management |
US9396554B2 (en) | 2014-12-05 | 2016-07-19 | Symbol Technologies, Llc | Apparatus for and method of estimating dimensions of an object associated with a code in automatic response to reading the code |
US9483704B2 (en) | 2014-12-10 | 2016-11-01 | Ricoh Co., Ltd. | Realogram scene analysis of images: superpixel scene analysis |
US9928708B2 (en) | 2014-12-12 | 2018-03-27 | Hawxeye, Inc. | Real-time video analysis for security surveillance |
US9628695B2 (en) | 2014-12-29 | 2017-04-18 | Intel Corporation | Method and system of lens shift correction for a camera array |
US20160253735A1 (en) | 2014-12-30 | 2016-09-01 | Shelfscreen, Llc | Closed-Loop Dynamic Content Display System Utilizing Shopper Proximity and Shopper Context Generated in Response to Wireless Data Triggers |
EP3054404A1 (en) | 2015-02-04 | 2016-08-10 | Hexagon Technology Center GmbH | Work information modelling |
CN107535087B (en) | 2015-02-19 | 2021-01-01 | Mycronic AB | Method, system and device for changing display information by activating an input device |
US9801517B2 (en) | 2015-03-06 | 2017-10-31 | Wal-Mart Stores, Inc. | Shopping facility assistance object detection systems, devices and methods |
US9367831B1 (en) | 2015-03-16 | 2016-06-14 | The Nielsen Company (Us), Llc | Methods and apparatus for inventory determinations using portable devices |
US9630319B2 (en) | 2015-03-18 | 2017-04-25 | Irobot Corporation | Localization and mapping using physical features |
US9600731B2 (en) | 2015-04-08 | 2017-03-21 | Toshiba Tec Kabushiki Kaisha | Image processing apparatus, image processing method and computer-readable storage medium |
US9868443B2 (en) | 2015-04-27 | 2018-01-16 | GM Global Technology Operations LLC | Reactive path planning for autonomous driving |
US10455226B2 (en) | 2015-05-26 | 2019-10-22 | Crown Equipment Corporation | Systems and methods for image capture device calibration for a materials handling vehicle |
US9646410B2 (en) | 2015-06-30 | 2017-05-09 | Microsoft Technology Licensing, Llc | Mixed three dimensional scene reconstruction from plural surface models |
US10410096B2 (en) | 2015-07-09 | 2019-09-10 | Qualcomm Incorporated | Context-based priors for object detection in images |
US20170011308A1 (en) | 2015-07-09 | 2017-01-12 | SunView Software, Inc. | Methods and Systems for Applying Machine Learning to Automatically Solve Problems |
CN111556253B (en) | 2015-07-10 | 2022-06-14 | SZ DJI Technology Co., Ltd. | Method and system for generating a combined image, and method and system for displaying an image |
US10308410B2 (en) | 2015-07-17 | 2019-06-04 | Nestec S.A. | Multiple-container composite package |
US10455216B2 (en) | 2015-08-19 | 2019-10-22 | Faro Technologies, Inc. | Three-dimensional imager |
US9549125B1 (en) | 2015-09-01 | 2017-01-17 | Amazon Technologies, Inc. | Focus specification and focus stabilization |
GB2542115B (en) | 2015-09-03 | 2017-11-15 | Rail Vision Europe Ltd | Rail track asset survey system |
KR20180049024A (en) | 2015-09-04 | 2018-05-10 | 크라운 이큅먼트 코포레이션 | FEATURES Industrial vehicles using part-based positioning and navigation |
US9684081B2 (en) | 2015-09-16 | 2017-06-20 | Here Global B.V. | Method and apparatus for providing a location data error map |
US10262466B2 (en) | 2015-10-14 | 2019-04-16 | Qualcomm Incorporated | Systems and methods for adjusting a combined image visualization based on depth information |
US9630619B1 (en) | 2015-11-04 | 2017-04-25 | Zoox, Inc. | Robotic vehicle active safety systems and methods |
US9517767B1 (en) | 2015-11-04 | 2016-12-13 | Zoox, Inc. | Internal safety systems for robotic vehicles |
US10607182B2 (en) | 2015-11-09 | 2020-03-31 | Simbe Robotics, Inc. | Method for tracking stock level within a store |
US20170150129A1 (en) | 2015-11-23 | 2017-05-25 | Chicago Measurement, L.L.C. | Dimensioning Apparatus and Method |
US10592854B2 (en) | 2015-12-18 | 2020-03-17 | Ricoh Co., Ltd. | Planogram matching |
US10336543B1 (en) | 2016-01-21 | 2019-07-02 | Wing Aviation Llc | Selective encoding of packages |
US10352689B2 (en) | 2016-01-28 | 2019-07-16 | Symbol Technologies, Llc | Methods and systems for high precision locationing with depth values |
US10145955B2 (en) | 2016-02-04 | 2018-12-04 | Symbol Technologies, Llc | Methods and systems for processing point-cloud data with a line scanner |
KR102373926B1 (en) | 2016-02-05 | 2022-03-14 | Samsung Electronics Co., Ltd. | Vehicle and method of recognizing the vehicle's position based on a map |
US10197400B2 (en) | 2016-02-25 | 2019-02-05 | Sharp Laboratories Of America, Inc. | Calibration methods and systems for an autonomous navigation vehicle |
US10229386B2 (en) | 2016-03-03 | 2019-03-12 | Ebay Inc. | Product tags, systems, and methods for crowdsourcing and electronic article surveillance in retail inventory management |
US20170261993A1 (en) | 2016-03-10 | 2017-09-14 | Xerox Corporation | Systems and methods for robot motion control and improved positional accuracy |
US9928438B2 (en) | 2016-03-10 | 2018-03-27 | Conduent Business Services, Llc | High accuracy localization system and method for retail store profiling via product image recognition and its corresponding dimension database |
US10721451B2 (en) | 2016-03-23 | 2020-07-21 | Symbol Technologies, Llc | Arrangement for, and method of, loading freight into a shipping container |
EP3437031A4 (en) | 2016-03-29 | 2019-11-27 | Bossa Nova Robotics IP, Inc. | System and method for locating, identifying and counting items |
US9805240B1 (en) | 2016-04-18 | 2017-10-31 | Symbol Technologies, Llc | Barcode scanning and dimensioning |
US9791862B1 (en) | 2016-04-25 | 2017-10-17 | Thayermahan, Inc. | Systems and method for unmanned undersea sensor position, orientation, and depth keeping |
GB2569698B (en) | 2016-05-04 | 2021-04-07 | Walmart Apollo Llc | Distributed autonomous robot systems and methods |
EP3454698B1 (en) | 2016-05-09 | 2024-04-17 | Grabango Co. | System and method for computer vision driven applications within an environment |
US10625426B2 (en) | 2016-05-19 | 2020-04-21 | Simbe Robotics, Inc. | Method for automatically generating planograms of shelving structures within a store |
JP6728404B2 (en) | 2016-05-19 | 2020-07-22 | Simbe Robotics, Inc. | Method for tracking placement of products on store shelves |
US9639935B1 (en) | 2016-05-25 | 2017-05-02 | Gopro, Inc. | Apparatus and methods for camera alignment model calibration |
US10394244B2 (en) | 2016-05-26 | 2019-08-27 | Korea University Research And Business Foundation | Method for controlling mobile robot based on Bayesian network learning |
JP6339633B2 (en) | 2016-06-28 | 2018-06-06 | NS Solutions Corporation | System, information processing apparatus, information processing method, and program |
CA3028156A1 (en) | 2016-06-30 | 2018-01-04 | Bossa Nova Robotics Ip, Inc. | Multiple camera system for inventory tracking |
US10785418B2 (en) | 2016-07-12 | 2020-09-22 | Bossa Nova Robotics Ip, Inc. | Glare reduction method and system |
US20180025412A1 (en) | 2016-07-22 | 2018-01-25 | Focal Systems, Inc. | Determining in-store location based on images |
US10071856B2 (en) | 2016-07-28 | 2018-09-11 | X Development Llc | Inventory management |
US9827683B1 (en) | 2016-07-28 | 2017-11-28 | X Development Llc | Collaborative inventory monitoring |
US10054447B2 (en) | 2016-08-17 | 2018-08-21 | Sharp Laboratories Of America, Inc. | Lazier graph-based path planning for autonomous navigation |
US20180053091A1 (en) | 2016-08-17 | 2018-02-22 | Hawxeye, Inc. | System and method for model compression of neural networks for use in embedded platforms |
US10776661B2 (en) | 2016-08-19 | 2020-09-15 | Symbol Technologies, Llc | Methods, systems and apparatus for segmenting and dimensioning objects |
US20180101813A1 (en) | 2016-10-12 | 2018-04-12 | Bossa Nova Robotics Ip, Inc. | Method and System for Product Data Review |
US10289990B2 (en) | 2016-10-17 | 2019-05-14 | Conduent Business Services, Llc | Store shelf imaging system and method |
US10210603B2 (en) | 2016-10-17 | 2019-02-19 | Conduent Business Services Llc | Store shelf imaging system and method |
US20180114183A1 (en) | 2016-10-25 | 2018-04-26 | Wal-Mart Stores, Inc. | Stock Level Determination |
US10451405B2 (en) | 2016-11-22 | 2019-10-22 | Symbol Technologies, Llc | Dimensioning system for, and method of, dimensioning freight in motion along an unconstrained path in a venue |
US10354411B2 (en) | 2016-12-20 | 2019-07-16 | Symbol Technologies, Llc | Methods, systems and apparatus for segmenting objects |
US9778388B1 (en) | 2016-12-22 | 2017-10-03 | Thayermahan, Inc. | Systems and methods for autonomous towing of an underwater sensor array |
US10121072B1 (en) | 2016-12-30 | 2018-11-06 | Intuit Inc. | Unsupervised removal of text from form images |
JP6938169B2 (en) | 2017-03-01 | 2021-09-22 | Toshiba Tec Corporation | Label generator and program |
US10293485B2 (en) | 2017-03-30 | 2019-05-21 | Brain Corporation | Systems and methods for robotic path planning |
US10229322B2 (en) | 2017-04-06 | 2019-03-12 | Ants Technology (Hk) Limited | Apparatus, methods and computer products for video analytics |
US10726273B2 (en) | 2017-05-01 | 2020-07-28 | Symbol Technologies, Llc | Method and apparatus for shelf feature and object placement detection from shelf images |
US10591918B2 (en) | 2017-05-01 | 2020-03-17 | Symbol Technologies, Llc | Fixed segmented lattice planning for a mobile automation apparatus |
US10949798B2 (en) | 2017-05-01 | 2021-03-16 | Symbol Technologies, Llc | Multimodal localization and mapping for a mobile automation apparatus |
US10505057B2 (en) | 2017-05-01 | 2019-12-10 | Symbol Technologies, Llc | Device and method for operating cameras and light sources wherein parasitic reflections from a paired light source are not reflected into the paired camera |
EP3619600A4 (en) | 2017-05-01 | 2020-10-21 | Symbol Technologies, LLC | Method and apparatus for object status detection |
US20180314908A1 (en) | 2017-05-01 | 2018-11-01 | Symbol Technologies, Llc | Method and apparatus for label detection |
US11093896B2 (en) | 2017-05-01 | 2021-08-17 | Symbol Technologies, Llc | Product status detection system |
US10663590B2 (en) | 2017-05-01 | 2020-05-26 | Symbol Technologies, Llc | Device and method for merging lidar data |
US11367092B2 (en) | 2017-05-01 | 2022-06-21 | Symbol Technologies, Llc | Method and apparatus for extracting and processing price text from an image set |
CN107067382A (en) | 2017-05-11 | 2017-08-18 | Nanning Zhengxiang Technology Co., Ltd. | Improved method for detecting image edges |
WO2019023249A1 (en) | 2017-07-25 | 2019-01-31 | Bossa Nova Robotics Ip, Inc. | Data reduction in a bar code reading robot shelf monitoring system |
US10127438B1 (en) | 2017-08-07 | 2018-11-13 | Standard Cognition, Corp | Predicting inventory events using semantic diffing |
US10861302B2 (en) | 2017-08-17 | 2020-12-08 | Bossa Nova Robotics Ip, Inc. | Robust motion filtering for real-time video surveillance |
WO2019040659A1 (en) | 2017-08-23 | 2019-02-28 | Bossa Nova Robotics Ip, Inc. | Method for new package detection |
US10489677B2 (en) | 2017-09-07 | 2019-11-26 | Symbol Technologies, Llc | Method and apparatus for shelf edge detection |
US10572763B2 (en) | 2017-09-07 | 2020-02-25 | Symbol Technologies, Llc | Method and apparatus for support surface edge detection |
JP6608890B2 (en) | 2017-09-12 | 2019-11-20 | FANUC Corporation | Machine learning apparatus, robot system, and machine learning method |
JP7019357B2 (en) | 2017-09-19 | 2022-02-15 | Toshiba Tec Corporation | Shelf information estimation device and information processing program |
US20190180150A1 (en) | 2017-12-13 | 2019-06-13 | Bossa Nova Robotics Ip, Inc. | Color Haar Classifier for Retail Shelf Label Detection |
WO2019152279A1 (en) | 2018-01-31 | 2019-08-08 | Walmart Apollo, Llc | Product inventorying using image differences |
US11049279B2 (en) | 2018-03-27 | 2021-06-29 | Denso Wave Incorporated | Device for detecting positional relationship among objects |
US10726264B2 (en) | 2018-06-25 | 2020-07-28 | Microsoft Technology Licensing, Llc | Object-based localization |
2018
- 2018-10-05 US US16/153,064 patent/US11010920B2/en active Active

2019
- 2019-09-10 DE DE112019004976.3T patent/DE112019004976T5/en active Pending
- 2019-09-10 GB GB2104805.3A patent/GB2591940B/en active Active
- 2019-09-10 WO PCT/US2019/050370 patent/WO2020072178A1/en active Application Filing
- 2019-09-10 AU AU2019351689A patent/AU2019351689B2/en active Active

2021
- 2021-05-17 US US17/322,545 patent/US20210272316A1/en active Pending
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100017407A1 (en) * | 2008-07-16 | 2010-01-21 | Hitachi, Ltd. | Three-dimensional object recognition system and inventory system using the same |
US20150063707A1 (en) * | 2010-06-10 | 2015-03-05 | Autodesk, Inc. | Outline approximation for point cloud of building |
US20140232826A1 (en) * | 2013-02-15 | 2014-08-21 | Jungheinrich Aktiengesellschaft | Method for detecting objects in a warehouse and/or for spatial orientation in a warehouse |
US9996818B1 (en) * | 2014-12-19 | 2018-06-12 | Amazon Technologies, Inc. | Counting inventory items using image analysis and depth information |
WO2017175312A1 (en) * | 2016-04-05 | 2017-10-12 | Hitachi Transport System, Ltd. | Measurement system and measurement method |
US20180108134A1 (en) * | 2016-10-17 | 2018-04-19 | Conduent Business Services, Llc | Store shelf imaging system and method using a vertical lidar |
US20190197728A1 (en) * | 2017-12-25 | 2019-06-27 | Fujitsu Limited | Object recognition apparatus, method for recognizing object, and non-transitory computer-readable storage medium for storing program |
Non-Patent Citations (2)
Title |
---|
Cleveland, Jonas, et al. "Automated system for semantic object labeling with soft-object recognition and dynamic programming segmentation." IEEE Transactions on Automation Science and Engineering 14.2 (2016): 820-833. (Year: 2016) * |
Hornung, Armin, et al. "OctoMap: An efficient probabilistic 3D mapping framework based on octrees." Autonomous Robots 34.3 (2013): 189-206. (Year: 2013) *
Also Published As
Publication number | Publication date |
---|---|
DE112019004976T5 (en) | 2021-06-24 |
GB2591940A (en) | 2021-08-11 |
WO2020072178A1 (en) | 2020-04-09 |
AU2019351689A1 (en) | 2021-04-29 |
GB202104805D0 (en) | 2021-05-19 |
US20200111228A1 (en) | 2020-04-09 |
US11010920B2 (en) | 2021-05-18 |
GB2591940B (en) | 2022-10-19 |
AU2019351689B2 (en) | 2022-03-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11354886B2 (en) | Method and apparatus for shelf edge detection | |
US10769794B2 (en) | Multi-sensor object recognition system and method | |
US10740911B2 (en) | Method, system and apparatus for correcting translucency artifacts in data representing a support structure | |
US10832436B2 (en) | Method, system and apparatus for recovering label positions | |
US20220414926A1 (en) | Mixed Depth Object Detection | |
AU2019396253B2 (en) | Method, system and apparatus for auxiliary label detection and association | |
US11416000B2 (en) | Method and apparatus for navigational ray tracing | |
US10731970B2 (en) | Method, system and apparatus for support structure detection | |
US20210272316A1 (en) | Method, System and Apparatus for Object Detection in Point Clouds | |
US11200677B2 (en) | Method, system and apparatus for shelf edge detection | |
US11341663B2 (en) | Method, system and apparatus for detecting support structure obstructions | |
US11107238B2 (en) | Method, system and apparatus for detecting item facings | |
US11506483B2 (en) | Method, system and apparatus for support structure depth determination | |
US11151743B2 (en) | Method, system and apparatus for end of aisle detection | |
US20200182623A1 (en) | Method, system and apparatus for dynamic target feature mapping | |
US11080566B2 (en) | Method, system and apparatus for gap detection in support structures with peg regions |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
AS | Assignment |
Owner name: ZEBRA TECHNOLOGIES CORPORATION, ILLINOIS
Free format text: MERGER;ASSIGNOR:ZIH CORP.;REEL/FRAME:059513/0713
Effective date: 20181220
Owner name: ZIH CORP., ILLINOIS
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:YU, YUANHAO;PHAN, RAYMOND;REEL/FRAME:059513/0639
Effective date: 20181004
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |