WO2024124421A1 - Robot rotation matrix estimation using a Manhattan world assumption - Google Patents
Robot rotation matrix estimation using a Manhattan world assumption
- Publication number
- WO2024124421A1 (application PCT/CN2022/138892)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- robotic device
- processor
- frame
- linear patterns
- manhattan
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/75—Determining position or orientation of objects or cameras using feature-based methods involving models
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30244—Camera pose
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
Definitions
- Robotic devices may be equipped with sensors for detecting aspects of the environment.
- For example, some robotic devices may be equipped with cameras capable of capturing images, image sequences, or video, and may use the image data to perform robotic operations, such as navigation or guiding an actuator.
- Similarly, some robotic devices may be equipped with lidar, radar, or other sensors configured to detect objects, elements, and/or features of the surrounding environment.
- Localization in robotics involves calculating the robot’s position in its environment, as well as the relative orientation of its own frame of reference with respect to that of its environment.
- Robots often use one or more sensors to perform localization, which may be used by autonomous or semi-autonomous robotic devices to determine the robot's own position and orientation (i.e., rotation) .
- Such localization calculations may be precursors for calculations supporting future movements and actions of the robot, such as steering.
- Inaccuracies in the localization of a robotic device may lead to errors or inefficiencies in the movements and actions of the robot.
- While determining the robotic device’s position relative to objects in the environment may be done with reasonable accuracy, calculating a rotation between frames of reference is more challenging and often prone to errors as the robotic device moves through the environment.
- Various aspects include methods that may be implemented on a processor of a robotic device for application of a rotation matrix estimation.
- Aspect methods may include receiving, by the processor of the robotic device, spatial data captured by a sensor of the robotic device, identifying linear patterns in the received spatial data from forms or indicia visible within a field of view of the sensor, generating line descriptors for the identified linear patterns in which each of the generated line descriptors may define a magnitude and direction of the identified linear patterns, identifying selected linear patterns from among the identified linear patterns based on the generated line descriptors indicating that the selected linear patterns extend parallel to one of three orthogonal axes, defining a Manhattan frame from the generated line descriptors of the selected linear patterns in which the origin of the Manhattan frame may be located at a convergence point of the three orthogonal axes, and applying a rotation matrix estimation for spatial calibration of the robotic device that translates the defined Manhattan frame relative to a global frame of reference of the robotic device.
- Some aspects may include comparing the defined Manhattan frame to localization and mapping data determined for the robotic device.
- The rotation matrix estimation may be derived from the comparison of the defined Manhattan frame to the localization and mapping data.
- Some aspects may include grouping one or more linear patterns extending parallel to one another relative to the three orthogonal axes for defining the Manhattan frame.
- The origin of the Manhattan frame may be disposed beyond vanishing points of the selected linear patterns within the field of view of the sensor.
- The spatial data captured by the sensor may include an image within the field of view.
- The spatial data captured by the sensor may include measured ranges between the sensor and the forms or indicia visible within the field of view.
- Some aspects may include matching generated line descriptors associated with spatial data captured by the sensor from two distinct positions or orientations of the robotic device.
- The rotation matrix estimation may be updated in response to identifying a localization error from matched generated line descriptors.
- Further aspects may include a processor for use in a robotic device configured to perform operations of any of the methods summarized above. Further aspects may include a robotic device configured with processor-executable instructions to perform operations of any of the methods summarized above. Further aspects may include a robotic device including means for performing functions of any of the methods summarized above.
- FIG. 1A is a system block diagram of a robotic device operating within an environment according to various embodiments.
- FIG. 1B is a diagram of a line segment with a line descriptor that may be used for determining a vanishing point according to various embodiments.
- FIGS. 2A-2C each include three stages of image analysis for identifying linear patterns and grouping parallel lines according to various embodiments.
- FIG. 2D is a diagram of an image frame including linear patterns converging on vanishing points according to various embodiments.
- FIG. 2E is a diagram of an environment with a fixed Manhattan frame and a moving camera frame according to various embodiments.
- FIG. 2F is a process flow diagram illustrating operations that may be performed by a processor of a robotic device as part of the methods according to various embodiments.
- FIG. 2G is a process flow diagram illustrating operations that may be performed by a processor of a robotic device as part of the methods according to various embodiments.
- FIG. 3 is a component block diagram illustrating a robotic device and a control unit thereof, suitable for implementing various embodiments.
- FIG. 4 is a component block diagram illustrating a processor suitable for use in robotic devices implementing various embodiments.
- FIG. 5 is a component block diagram illustrating a processor of a robotic device configured with various modules in accordance with various embodiments.
- FIG. 6A is a process flow diagram illustrating an example method performed by a processor of a robotic device for application of a rotation matrix estimation according to various embodiments.
- FIG. 6B is a process flow diagram illustrating operations that may be performed by a processor of a robotic device as part of the method for application of a rotation matrix estimation according to various embodiments.
- FIG. 6C is a process flow diagram illustrating operations that may be performed by a processor of a robotic device as part of the method for application of a rotation matrix estimation according to various embodiments.
- FIG. 6D is a process flow diagram illustrating operations that may be performed by a processor of a robotic device as part of the method for application of a rotation matrix estimation according to various embodiments.
- Various embodiments include methods that may be implemented by a processor of a robotic device for rotation matrix estimation. Various embodiments improve the operation of robotic devices by increasing the efficiency of navigation processing and decreasing localization errors.
- As used herein, the term “robotic device” refers to one of various types of vehicles, automated and self-propelled machines, and other forms of robots including one or more sensors, such as a camera, lidar, radar, etc., and an onboard processor configured to provide some autonomous or semi-autonomous capabilities.
- Robotic devices include, but are not limited to, factory robotic devices, autonomous robots, aerial vehicles such as an unmanned aerial vehicle (UAV); ground vehicles (e.g., an autonomous or semi-autonomous car, a vacuum robot, etc.); water-based vehicles (i.e., vehicles configured for operation on the surface of the water or under water); space-based vehicles (e.g., a spacecraft or space probe); and/or some combination thereof.
- the robotic device may be manned. In other embodiments, the robotic device may be unmanned. In embodiments in which the robotic device is autonomous, the robotic device may include an onboard computing device configured to maneuver and/or navigate the robotic device without remote operating instructions (i.e., autonomously) , such as from a human operator (e.g., via a remote computing device) . In embodiments in which the robotic device is semi-autonomous, the robotic device may include an onboard computing device configured to receive some information or instructions, such as from a human operator (e.g., via a remote computing device) , and autonomously maneuver and/or navigate the robotic device consistent with the received information or instructions.
- a robotic device may include a variety of components and payloads that may perform a variety of functions.
- In order to navigate, path plan, and perform tasks, a robotic device typically needs to localize itself in its environment. Localization involves both determining a position in that environment and determining a rotation (or rotational orientation) relative to the environment.
- A simple form of localization uses a wheel odometry technique, such as measuring the amount of rotation of the robotic device’s wheels, e.g., using wheel encoders or other suitable devices.
- However, error sources, including wheel slippage on uneven terrain or slippery floors, make such techniques relatively unreliable.
- Other localization techniques include Simultaneous Localization and Mapping (SLAM) and visual Simultaneous Localization and Mapping (vSLAM).
- Various embodiments make use of an environment’s effective features in order to accurately localize the robotic device.
- Interior environments, or environments with elements that generally follow a Cartesian grid, may be suited for using the Manhattan World Assumption to determine a rotation matrix for frames of reference.
- For example, roadways tend to include extended linear features in edges, edge lines, centerlines, sidewalks, etc.
- Buildings along a roadway typically exhibit vertically oriented linear features, such as corners, window edges, doorways, etc.
- The interiors of buildings include horizontal and vertical linear features in floor tiles, cabinets, furniture, etc.
- Many urban scenes or indoor scenes contain sufficient structure in the distribution of edges to provide a natural Cartesian reference frame for the viewer.
- Various embodiments take advantage of observable linear features in environments as a Manhattan frame for use as a reference frame to estimate the rotation of a global frame of reference for the robotic device.
- Various embodiments enable a robotic device to optimize rotation matrix estimation to improve efficiencies and reduce errors in robotic operation, such as for navigation, guiding an actuator, and otherwise interacting with its environment.
- Various embodiments use a rotation matrix estimated from a frame of reference of the robotic device relative to a frame of reference for the environment based on a Manhattan World Assumption.
- As used herein, “Manhattan World Assumption” refers to the use of a Euclidean coordinate system to define a framework for an environment that contains sufficient structures that may be approximated by linear features parallel to one of three orthogonal axes of a common orthogonal coordinate system.
- The term “Manhattan Frame” is used herein to refer to an orthogonal coordinate system used as a framework relative to a robotic device derived using the Manhattan World Assumption. Environments suitable for the Manhattan World Assumption are often found in urban centers, such as in Manhattan, New York, which includes a grid-like configuration of streets and avenues. However, other environments are similarly suitable, such as inside buildings and in other man-made environments or environments altered by humans, which tend to include linear patterns from forms (e.g., objects, surfaces, etc.) or indicia visible within a field of view.
- the environment 100 may include a robotic device 102, a base station 104, an access point 106, a communication network 108, and a network element 110.
- the robotic device 102 may be equipped with a camera 103 and/or one or more other sensors.
- the base station 104 and the access point 106 may provide wireless communications to access the communication network 108 over a wired and/or wireless communication backhaul 116 and 118, respectively.
- the base station 104 may include base stations configured to provide wireless communications over a wide area (e.g., macro cells) , as well as small cells, which may include a micro cell, a femto cell, a pico cell, and other similar network access points.
- the access point 106 may include access points configured to provide wireless communications over a relatively smaller area. Other examples of base stations and access points are also possible.
- the robotic device 102 may communicate with the base station 104 over a wireless communication link 112, and with the access point 106 over a wireless communication link 114.
- the wireless communication links 112 and 114 may include a plurality of carrier signals, frequencies, or frequency bands, each of which may include a plurality of logical channels.
- the wireless communication links 112 and 114 may utilize one or more radio access technologies (RATs) .
- Examples of RATs that may be used in a wireless communication link in various embodiments include medium range protocols such as Wi-Fi, LTE-U, LTE-Direct, LAA, MuLTEfire, and relatively short range RATs such as ZigBee, Bluetooth, and Bluetooth Low Energy (LE) .
- A wireless communication link may use other RATs, such as 3GPP Long Term Evolution (LTE), 3G, 4G, 5G, Global System for Mobility (GSM), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Worldwide Interoperability for Microwave Access (WiMAX), Time Division Multiple Access (TDMA), and other mobile telephony communication technologies (cellular RATs).
- the network element 110 may include a network server or another similar network element.
- the network element 110 may communicate with the communication network 108 over a communication link 122.
- the robotic device 102 and the network element 110 may communicate via the communication network 108.
- the network element 110 may provide the robotic device 102 with a variety of information, such as navigation information, weather information, information about local air, ground, and/or sea traffic, movement control instructions, and other information, instructions, or commands relevant to operations of the robotic device 102.
- the robotic device 102 may move in the environment 100 and more particularly move within an interior space 120 thereof.
- the robotic device 102 may be configured to perform operations to localize the robotic device 102 within the interior space 120 to enable the robotic device 102 to maneuver in and interact with the interior space 120.
- the robotic device may use received spatial data from sensors, such as a camera 103 to map the interior space 120.
- the camera 103 may be configured to capture one or more images of the interior space 120.
- the one or more images may include spatial data, which is used herein to refer to any type of data that measures, characterizes, identifies, and/or describes physical or observable features of an area. Spatial data can also numerically represent a physical object, structure, form, or other visible element that may be located within the area.
- the robotic device 102 may include one or more different or additional sensors configured to capture spatial data.
- the robotic device 102 may include one or more other sensors such as lidar, radar, etc.
- A processor of the robotic device 102 will be programmed and/or configured to have a global frame of reference originating somewhere on the robotic device 102, such as at a lens of the camera 103.
- The robotic device 102 includes a camera frame F_C, which is a global frame of reference for the robotic device 102 with three orthogonal axes X_C, Y_C, Z_C.
- the interior space 120 may include elements that define forms or indicia that are visible within a field of view of the camera 103 or another sensor of the robotic device 102.
- The expression “forms or indicia” refers to visible shapes, edges, outlines, contrasts, demarcations, markings, labels, or other visible characteristics.
- the interior space 120 includes two walls 130, 140, and a door 150.
- the walls 130, 140 and the door 150 include lines formed by linear patterns 132, 134, 142, 144, 146, 152, 154, 156 at their edges or borders with neighboring elements.
- the left wall (LW) 130 includes a first linear LW pattern 132 along its bottom edge where the LW 130 meets the floor.
- the LW 130 includes a second LW linear pattern 134 along its top edge where the LW 130 meets the ceiling or ends.
- the right wall (RW) 140 includes a first RW linear pattern 142 along its bottom edge where the RW 140 meets the floor.
- the RW 140 includes a second RW linear pattern 144 along its top edge where the RW 140 meets the ceiling or ends.
- the door 150 includes a third RW linear pattern 146 that extends horizontally along the top trim of the door 150 on the RW 140.
- a corner between the LW 130 and the RW 140 forms a first vertical pattern 152.
- Opposed sides of the door 150 form a second vertical pattern 154 and a third vertical pattern 156. Any or all of the linear patterns 132, 134, 142, 144, 146, 152, 154, 156 that lie within a field of view of the camera 103 or other sensor of the robotic device 102 may be identified as such.
- Various embodiments may analyze spatial data captured by the camera 103 and/or other sensor (s) to identify the linear patterns 132, 134, 142, 144, 146, 152, 154, 156.
- A processor of the robotic device 102 may identify selected linear patterns that extend parallel to one of three orthogonal axes that may serve to define a Manhattan frame of the environment. In FIG. 1A, all the linear patterns 132, 134, 142, 144, 146, 152, 154, 156 extend parallel to one of three orthogonal axes X_M, Y_M, Z_M, which together form a Manhattan frame F_M having an origin 135 at an intersection of the three orthogonal axes X_M, Y_M, Z_M.
- Parallel lines when viewed by an observer or visible in a 2D image of 3D space, as illustrated in FIG. 1A, will appear to converge upon one another as the lines recede into the background.
- a point at which receding parallel lines viewed in perspective appear to converge is referred to as a “vanishing point. ”
- FIG. 1B illustrates a line 161 in 3D space, which may be parametrized by a point A on the line 161 and its direction vector d.
- A projection of the line may be defined as in Eq. (1), in which:
- K is the intrinsic matrix (global frame of reference) of the camera and/or robotic device (C),
- v is the vanishing point, and
- α is a scalar.
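- In standard pinhole-camera notation, a projection relation consistent with these definitions is $\alpha \, v = K \, d$ (Eq. (1)): the vanishing point $v$ is the image of the line’s direction vector $d$ under the intrinsic matrix $K$, up to the scale factor $\alpha$.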
- Eq. (1) illustrates that a vanishing point v of a 3D line may be defined by the projection point of its direction vector d in an image and parallel lines in 3D space may have the same vanishing point v in the image frame. In this way, the direction d of a line may be calculated using its vanishing point v in the image.
- Parallel lines in 3D space may have the same vanishing point v in the image, which is the intersection point of the corresponding lines in the image.
- A processor may calculate a direction vector for a set of parallel lines, which may be used to define the x, y, or z axis of a Manhattan frame (e.g., F_M).
- A processor may identify linear patterns that either converge or nearly converge on a common vanishing point. Once identified, the processor may select some linear patterns for defining one of three orthogonal axes of a Manhattan frame.
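- As a concrete illustration, the following sketch (using NumPy; the intrinsic values and function names are illustrative assumptions, not taken from this disclosure) back-projects a vanishing point through the camera intrinsics to recover the common direction of the corresponding parallel lines:

```python
import numpy as np

# Illustrative camera intrinsic matrix K (focal lengths and principal point are example values).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def direction_from_vanishing_point(v_pixel, K):
    """Back-project a vanishing point (pixel coordinates) into a unit 3D
    direction vector d in the camera frame, using the relation v ~ K d."""
    v_h = np.array([v_pixel[0], v_pixel[1], 1.0])  # homogeneous image point
    d = np.linalg.solve(K, v_h)                    # d ~ K^-1 v
    return d / np.linalg.norm(d)

# Parallel lines share a vanishing point, so one call yields their common direction.
d_x = direction_from_vanishing_point((950.0, 255.0), K)
```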
- A processor of the robotic device 102 may attribute a quantifiable mathematical value or set of values to each of the selected linear patterns in order to calculate a magnitude and direction of a line that corresponds to each linear pattern. Such a quantifiable mathematical value or set of values may be a vector quantity and is referred to herein as a “line descriptor.”
- Each line descriptor for the selected linear patterns may be defined as a vector quantity relative to each of the three orthogonal axes X_M, Y_M, Z_M.
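- As a minimal sketch (assuming, for illustration only, that a line descriptor is simply the detected segment’s unit direction and length; the disclosure does not prescribe a specific encoding):

```python
import numpy as np

def line_descriptor(p0, p1):
    """Return (unit direction, length) for a detected 2D line segment,
    i.e. a vector quantity capturing the segment's magnitude and direction."""
    p0, p1 = np.asarray(p0, dtype=float), np.asarray(p1, dtype=float)
    delta = p1 - p0
    length = float(np.linalg.norm(delta))
    direction = delta / length if length > 0 else delta
    return direction, length

direction, length = line_descriptor((100, 200), (400, 220))
```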
- A rotation matrix may then be determined that represents an estimation of the rotation between the Manhattan frame and the camera frame, as well as the orientation of the Manhattan frame relative to the camera frame.
- FIGS. 2A-2C illustrate three stages of image analysis for three different raw images captured by a camera in different environments, in accordance with various embodiments.
- The images at the top (a) of each of the series in FIGS. 2A-2C represent a raw image.
- The images in the middle (b) of each of the series in FIGS. 2A-2C represent the linear patterns (i.e., lines) identified in the respective raw images.
- The images at the bottom (c) of each of the series in FIGS. 2A-2C represent the groupings of parallel lines, wherein each group is illustrated with a different color.
- Manhattan Frames may be studied together with vanishing points, because images often include a large number of parallel lines distributed in three orthogonal directions, which may be used to form Manhattan Frames.
- The vanishing points of each set of parallel lines may be obtained by estimating the intersection points of image lines. With the vanishing points estimated, the rotation between the Manhattan Frames and the camera frame may be determined.
- FIG. 2D illustrates an image frame 201 including a plurality of linear patterns 231, 233, 235, 241, 243, 245, 251, 253, 255 that may be identified and used to determine optimal vanishing points in accordance with various embodiments.
- A processor of the robotic device (e.g., 102) may detect the linear patterns 231, 233, 235, 241, 243, 245, 251, 253, 255 therein.
- For example, the linear patterns may be detected using a line segment detection (LSD) technique.
- Two lines l_i, l_j may be selected (e.g., randomly) from the sequence of detected linear patterns and an intersection point v_ij may be calculated as a vanishing hypothesis.
- Using a random sample consensus (RANSAC) technique, the outliers in the data set may be identified and a desired model of vanishing points may be estimated using data that does not contain outliers. For example, the following weighting functions may be applied.
- ω_p is the length of line segment l_p.
- A processor may re-execute the procedure to calculate the second and third vanishing points v_y, v_z with their inliers.
- A best estimate of the vanishing points v_x, v_y, v_z may result in their being located in intersection areas of grouped sets of the linear patterns, such as a first set of linear patterns 231, 233, 235 attributable to the x-axis vanishing point v_x; a second set of linear patterns 241, 243, 245 attributable to the y-axis vanishing point v_y; and a third set of linear patterns 251, 253, 255 attributable to the z-axis vanishing point v_z.
- A processor may determine the vanishing point v_x.
- The processor may perform a similar procedure to obtain the optimized vanishing points v_y, v_z along the other orthogonal axes.
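- The following sketch illustrates such a RANSAC-style estimate in NumPy (the segment representation, distance threshold, and length-weighted score shown here are illustrative assumptions, not formulas taken from this disclosure):

```python
import numpy as np

def line_homog(seg):
    """Homogeneous line coefficients for the infinite line through a segment's endpoints."""
    p0 = np.array([seg[0][0], seg[0][1], 1.0])
    p1 = np.array([seg[1][0], seg[1][1], 1.0])
    return np.cross(p0, p1)

def point_line_distance(v, line):
    """Perpendicular distance from homogeneous point v to homogeneous line."""
    v = v / v[2]
    return abs(line @ v) / np.hypot(line[0], line[1])

def estimate_vanishing_point(segments, iters=500, tol=3.0, seed=0):
    """RANSAC: hypothesize a vanishing point as the intersection of two randomly
    chosen segments; score each hypothesis by the total length of inlier segments."""
    rng = np.random.default_rng(seed)
    lines = [line_homog(s) for s in segments]
    lengths = [np.linalg.norm(np.subtract(s[1], s[0])) for s in segments]
    best_v, best_score, best_inliers = None, -1.0, []
    for _ in range(iters):
        i, j = rng.choice(len(segments), size=2, replace=False)
        v = np.cross(lines[i], lines[j])          # intersection = vanishing hypothesis
        if abs(v[2]) < 1e-9:                      # image lines are (nearly) parallel
            continue
        inliers = [k for k, ln in enumerate(lines) if point_line_distance(v, ln) < tol]
        score = sum(lengths[k] for k in inliers)  # weight inliers by segment length
        if score > best_score:
            best_v, best_score, best_inliers = v / v[2], score, inliers
    return best_v, best_inliers
```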
- An orientation of the Manhattan frame F_M may be determined with respect to the camera frame F_C.
- The three directions given by Eq. (5) are three orthogonal axes that define a Manhattan frame. Based on this, the vanishing points may define a rotation matrix as follows:
- where the subscript F represents a Frobenius norm.
- The solution may be obtained using singular value decomposition (SVD).
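- As an illustrative sketch (standard practice offered as an assumption, not the disclosure’s exact formulation): back-project the three vanishing points into unit directions, stack them as columns, and project the result onto the nearest rotation matrix in the Frobenius norm using SVD.

```python
import numpy as np

def rotation_from_vanishing_points(K, v_x, v_y, v_z):
    """Estimate the Manhattan-frame-to-camera rotation from three orthogonal
    vanishing points: each column is a normalized K^-1 v_i, and the stacked
    matrix is projected onto SO(3) (nearest rotation in the Frobenius norm)."""
    cols = []
    for v in (v_x, v_y, v_z):
        d = np.linalg.solve(K, np.array([v[0], v[1], 1.0]))
        cols.append(d / np.linalg.norm(d))
    M = np.stack(cols, axis=1)        # approximately, but not exactly, a rotation
    U, _, Vt = np.linalg.svd(M)
    R = U @ Vt
    if np.linalg.det(R) < 0:          # enforce a proper rotation (det = +1)
        U[:, -1] *= -1
        R = U @ Vt
    return R
```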
- FIG. 2E illustrates an environment 200 in which a Manhattan frame F_M has been defined and the camera frame F_C moves relative thereto with the movement of the robotic device.
- The sensor 103 represents the robotic device, since the sensor 103 may form the point of origin of the camera frame F_C.
- The robotic device moves relative to a stationary background, which means the Manhattan frame F_M of the environment 200 may remain fixed, while the camera frame F_C changes position and orientation.
- In a first position, the sensor has a first camera frame F_C-1.
- In a second position, after a first movement M_1 by the robotic device, the sensor has a second camera frame F_C-2.
- In a third position, the sensor has a third camera frame F_C-3.
- A processor of the robotic device may define the Manhattan frame F_M based upon line descriptors calculated from select linear patterns observed in the environment 200. Using the first camera frame F_C-1, the processor of the robotic device may determine a rotation matrix estimation for the Manhattan frame F_M relative to the first camera frame F_C-1 based upon line descriptors used to define the Manhattan frame F_M.
- From the second position, a processor of the robotic device may re-identify linear patterns in the sensor’s field of view and generate line descriptors for the re-identified linear patterns.
- A processor may match the line descriptors for the re-identified linear patterns seen from the second position to the line descriptors identified from the first position in order to determine a new rotation matrix estimation for the second position.
- Similarly, from the third position, a processor of the robotic device may re-identify linear patterns in the sensor’s field of view and generate line descriptors for the newly re-identified linear patterns.
- A processor may match line descriptors for the newly re-identified linear patterns seen from the third position to the line descriptors identified from the first and/or second position(s) in order to determine a revised rotation matrix estimation for the third position.
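- As an illustration (a sketch under the assumption that each position yields its own Manhattan-to-camera rotation estimate as above, and that descriptors are matched by direction similarity, which is a simplification), the fixed Manhattan frame ties the successive camera frames together:

```python
import numpy as np

def match_descriptors(dirs_prev, dirs_curr, cos_tol=0.99):
    """Greedily match line descriptors (unit direction vectors) between two
    positions by direction similarity; a real matcher would also use
    appearance and position cues.  Returns (index_prev, index_curr) pairs."""
    matches = []
    for i, d_prev in enumerate(dirs_prev):
        sims = [abs(np.dot(d_prev, d_curr)) for d_curr in dirs_curr]
        j = int(np.argmax(sims))
        if sims[j] >= cos_tol:
            matches.append((i, j))
    return matches

def relative_camera_rotation(R_mc_prev, R_mc_curr):
    """Because the Manhattan frame is fixed, the camera rotation between two
    positions follows directly from the two Manhattan-to-camera estimates."""
    return R_mc_curr @ R_mc_prev.T
```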
- FIG. 2F is a process flow diagram illustrating operations 210 for determining a vSLAM Frame F_S to an initial Manhattan Frame F_Mi rotation matrix as a robotic device starts to move in accordance with various embodiments.
- The robotic device (e.g., 102) may start to move (i.e., “Robot start to run”), which may or may not include rotation.
- A processor of the robotic device may begin a vSLAM initialization to obtain localization and mapping data, and generate an initial vSLAM Frame F_V to Camera Frame F_C initialization rotation matrix.
- The processor of the robotic device may begin to calculate a Manhattan Frame descriptor (i.e., “calc Manhattan descriptor”) and generate a Manhattan Frame F_M to Camera Frame F_C initialization rotation matrix.
- The processor of the robotic device may combine and transform the two initialization rotation matrices to generate the vSLAM Frame F_V to Manhattan Frame F_M rotation matrix in block 267.
- The vSLAM Frame F_V to Manhattan Frame F_M rotation matrix generated in block 267 may be applied as the robotic device continues to move, as described with reference to FIG. 2G.
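- Under the assumption that both initialization matrices map into the shared camera frame (the notation below is illustrative, not the disclosure’s), the combination in block 267 can be sketched as a simple composition:

```python
import numpy as np

def vslam_to_manhattan(R_vslam_to_cam, R_manhattan_to_cam):
    """Combine the two initialization rotations through the shared camera frame:
    F_V -> F_C followed by F_C -> F_M yields the vSLAM-to-Manhattan rotation."""
    return R_manhattan_to_cam.T @ R_vslam_to_cam
```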
- FIG. 2G is a process flow diagram illustrating operations 220 for determining rotation matrix rotational errors as a robotic device continues to move in accordance with various embodiments.
- The robotic device (e.g., 102) may continue moving.
- The processor may perform vSLAM tracking, identifying any incremental changes, to determine an incrementally updated vSLAM Frame F_V to Camera Frame F_C rotation matrix.
- The processor may calculate incrementally updated Manhattan Frame descriptors (i.e., “calc Manhattan descriptor”), match line descriptors for the re-identified linear patterns, and generate an incrementally updated Manhattan Frame F_M to Camera Frame F_C rotation matrix.
- The multiple updated Manhattan Frame F_M to Camera Frame F_C rotation matrices may be compared and combined with the generated vSLAM Frame F_V to Manhattan Frame F_M rotation matrix from block 267 (FIG. 2F).
- In block 275, the processor may compare the updated rotation matrix from block 272 with the updated rotation matrix from block 274.
- The processor may correct any vSLAM rotation errors noted from the comparison in block 275 if the compared difference exceeds a predetermined threshold.
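- A sketch of this comparison and threshold check (the angle of the residual rotation is used here as the error measure, which is an assumption; the disclosure does not specify a metric):

```python
import numpy as np

def rotation_angle_deg(R):
    """Rotation angle (degrees) of a rotation matrix."""
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_theta)))

def check_and_correct(R_vc_tracked, R_mc_updated, R_vm, threshold_deg=2.0):
    """Compare the vSLAM-tracked F_V -> F_C rotation against the one predicted
    from the Manhattan estimate (F_V -> F_M composed with F_M -> F_C); if the
    residual rotation exceeds the threshold, return the corrected estimate."""
    R_vc_predicted = R_mc_updated @ R_vm        # F_V -> F_M, then F_M -> F_C
    residual = R_vc_tracked @ R_vc_predicted.T
    if rotation_angle_deg(residual) > threshold_deg:
        return R_vc_predicted                   # correct the vSLAM rotation estimate
    return R_vc_tracked
```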
- FIG. 3 is a component block diagram illustrating components of an example robotic device 300 according to various embodiments.
- Robotic devices may include winged or rotorcraft varieties.
- An example robotic device 300 is illustrated as a ground vehicle design that utilizes one or more wheels 302 driven by corresponding motors to provide locomotion to the robotic device 300.
- the illustration of robotic device 300 is not intended to imply or require that various embodiments are limited to ground robotic devices.
- various embodiments may be used with rotorcraft or winged robotic devices, water-borne robotic devices, and space-based robotic devices.
- the robotic device 300 may be the same or similar to the robotic device (e.g., 102) described with reference to FIG. 1A.
- the robotic device 300 may include a number of wheels 302, a body 304, and a sensor 306.
- the sensor 306 may be the same or similar to the camera (e.g., 103) described with reference to FIG. 1A.
- the frame 304 may provide structural support for the motors and their associated wheels 302 as well as for the sensor 306.
- the frame 304 may also support an arm 308 or another suitable extension, which may in turn support the sensor 306.
- the arm 308, or segments of the arm 308, may be configured to articulate or move by one or more joints, bending elements, or rotating elements.
- the sensor 306 may be moveably attached to the arm 308 by a joint element that enables the sensor 306 to move relative to the arm 308.
- some detailed aspects of the robotic device 300 are omitted such as wiring, motors, frame structure interconnects, or other features that would be known to one of skill in the art.
- Although the illustrated robotic device 300 has wheels 302, this is merely exemplary, and various embodiments may include any variety of components to provide propulsion and maneuvering capabilities, such as treads, paddles, skids, or any combination thereof or of other components.
- the robotic device 300 may further include a control unit 310 that may house various circuits and devices used to power and control the operation of the robotic device 300.
- the control unit 310 may include a processor 320, a power module 330, sensors 340, one or more payload securing units 344, one or more image sensors 345 (e.g., cameras) , an output module 350, an input module 360, and a radio module 370.
- the processor 320 may be configured with processor-executable instructions to control travel and other operations of the robotic device 300, including operations of various embodiments.
- the processor 320 may include or be coupled to a navigation unit 322, a memory 324, a gyro/accelerometer unit 326, and a maneuvering data module 328.
- the processor 320 and/or the navigation unit 322 may be configured to communicate with a server through a wireless connection (e.g., a wireless wide area network, a cellular data network, etc. ) to receive data useful in navigation, provide real-time position reports, and assess data.
- the maneuvering data module 328 may be coupled to the processor 320 and/or the navigation unit 322 and may be configured to provide travel control-related information such as orientation, attitude, speed, heading, and similar information that the navigation unit 322 may use for navigation purposes, such as dead reckoning between Global Navigation Satellite System (GNSS) position updates.
- the gyro/accelerometer unit 326 may include an accelerometer, a gyroscope, an inertial sensor, an inertial measurement unit (IMU) , or other similar sensors.
- the maneuvering data module 328 may include or receive data from the gyro/accelerometer unit 326 that provides data regarding the orientation and accelerations of the robotic device 300 that may be used in navigation and positioning calculations, as well as providing data used in various embodiments for processing images.
- the processor 320 may receive additional information from one or more image sensors 345 and/or other sensors 340.
- the camera (s) 345 may include an optical sensor capable of infrared, ultraviolet, and/or other wavelengths of light.
- the sensors 340 may also include a wheel rotation counter or sensor, a radio frequency (RF) sensor, a barometer, a sonar emitter/detector, a radar emitter/detector, a microphone or another acoustic sensor, or another sensor that may provide information usable by the processor 320 for movement operations as well as navigation and positioning calculations.
- the sensors 340 may include contact or pressure sensors that may provide a signal that indicates when the robotic device 300 has contacted a surface.
- the payload-securing units 344 may include an actuator motor that drives a gripping and release mechanism and related controls that are responsive to the control unit 310 to grip and release a payload in response to commands from the control unit 310.
- the power module 330 may include one or more batteries that may provide power to various components, including the processor 320, the sensors 340, the payload-securing unit (s) 344, the camera (s) 345, the output module 350, the input module 360, and the radio module 370.
- the power module 330 may include energy storage components, such as rechargeable batteries.
- the processor 320 may be configured with processor-executable instructions to control the charging of the power module 330 (i.e., the storage of harvested energy) , such as by executing a charging control algorithm using a charge control circuit.
- the power module 330 may be configured to manage its own charging.
- The processor 320 may be coupled to the output module 350, which may output control signals for managing the motors that drive the wheels 302 and other components.
- The robotic device 300 may be controlled through control of the individual motors of the wheels 302 as the robotic device 300 progresses toward a destination.
- the processor 320 may receive data from the navigation unit 322 and use such data in order to determine the present position and orientation of the robotic device 300, as well as the appropriate course towards the destination or intermediate sites.
- the navigation unit 322 may include a GNSS receiver system (e.g., one or more global positioning system (GPS) receivers) enabling the robotic device 300 to navigate using GNSS signals.
- the navigation unit 322 may be equipped with radio navigation receivers for receiving navigation beacons or other signals from radio nodes, such as navigation beacons (e.g., very high frequency (VHF) omni-directional range (VOR) beacons) , Wi-Fi access points, cellular network sites, radio station, remote computing devices, other robotic devices, etc.
- the radio module 370 may be configured to receive navigation signals, such as signals from aviation navigation facilities, etc., and provide such signals to the processor 320 and/or the navigation unit 322 to assist in robotic device navigation.
- the navigation unit 322 may use signals received from recognizable RF emitters (e.g., AM/FM radio stations, Wi-Fi access points, and cellular network base stations) on the ground.
- the radio module 370 may include a modem 374 and a transmit/receive antenna 372.
- the radio module 370 may be configured to conduct wireless communications with a variety of wireless communication devices (e.g., a wireless communication device (WCD) 390) , examples of which include a wireless telephony base station or cell tower (e.g., the base station 104) , a network access point (e.g., the access point 106) , a beacon, a smartphone, a tablet, or another computing device with which the robotic device 300 may communicate (such as the network element 110) .
- the processor 320 may establish a bi-directional wireless communication link 394 via the modem 374 and the antenna 372 of the radio module 370 and the wireless communication device 390 via a transmit/receive antenna 392.
- the radio module 370 may be configured to support multiple connections with different wireless communication devices using different radio access technologies.
- the wireless communication device 390 may be connected to a server through intermediate access points.
- the wireless communication device 390 may be a server of a robotic device operator, a third party service (e.g., package delivery, billing, etc. ) , or a site communication access point.
- the robotic device 300 may communicate with a server through one or more intermediate communication links, such as a wireless telephony network that is coupled to a wide area network (e.g., the Internet) or other communication devices.
- the robotic device 300 may include and employ other forms of radio communication, such as mesh connections with other robotic devices or connections to other information sources (e.g., balloons or other stations for collecting and/or distributing weather or other data harvesting information) .
- control unit 310 may be equipped with an input module 360, which may be used for a variety of applications.
- the input module 360 may receive images or data from the sensor 306, or may receive electronic signals from other components (e.g., a payload) .
- FIG. 4 is a component block diagram illustrating a processor 410 suitable for use in robotic devices implementing various embodiments.
- the processor 410 may be configured to be used in a robotic device and may be configured as or including a system-on-chip (SoC) 412.
- the SoC 412 may include (but is not limited to) a processor 414, a memory 416, a communication interface 418, and a storage memory interface 420.
- the processor 410 or the SoC 412 may further include a communication component 422, such as a wired or wireless modem, a storage memory 424, an antenna 426 for establishing a wireless communication link, and/or the like.
- the processor 410 or the SoC 412 may further include a hardware interface 428 configured to enable the processor 414 to communicate with and control various components of a robotic device.
- the processor 414 may include any of a variety of processors, for example any number of processor cores.
- the SoC 412 may include a variety of different types of processors 414 and processor cores, such as a general purpose processor, a central processing unit (CPU) , a digital signal processor (DSP) , a graphics processing unit (GPU) , an accelerated processing unit (APU) , a subsystem processor of specific components of the processor, such as an image processor for a camera subsystem or a display processor for a display, an auxiliary processor, a single-core processor, and a multicore processor.
- the SoC 412 may further embody other hardware and hardware combinations, such as a field programmable gate array (FPGA) , an application-specific integrated circuit (ASIC) , other programmable logic device, discrete gate logic, transistor logic, performance monitoring hardware, watchdog hardware, and time references.
- Integrated circuits may be configured such that the components of the integrated circuit reside on a single piece of semiconductor material, such as silicon.
- the SoC 412 may include one or more processors 414.
- the processor 410 may include more than one SoC 412, thereby increasing the number of processors 414 and processor cores.
- the processor 410 may also include processors 414 that are not associated with an SoC 412 (i.e., external to the SoC 412) .
- Individual processors 414 may be multicore processors.
- the processors 414 may each be configured for specific purposes that may be the same as or different from other processors 414 of the processor 410 or SoC 412.
- One or more of the processors 414 and processor cores of the same or different configurations may be grouped together.
- a group of processors 414 or processor cores may be referred to as a multi-processor cluster.
- the memory 416 of the SoC 412 may be a volatile or non-volatile memory configured for storing data and processor-executable instructions for access by the processor 414.
- the processor 410 and/or SoC 412 may include one or more memories 416 configured for various purposes.
- One or more memories 416 may include volatile memories such as random access memory (RAM) or main memory, or cache memory.
- processor 410 and the SoC 412 may be arranged differently and/or combined while still serving the functions of the various aspects.
- the processor 410 and the SoC 412 may not be limited to one of each of the components, and multiple instances of each component may be included in various configurations of the processor 410.
- FIG. 5 is a software module diagram illustrating various processing modules that may be implemented in a processing system 500 of a robotic device in accordance with various embodiments.
- the processing system 500 may be implemented in or as a part of a processor (e.g., 320, 414) or SoC (e.g., 412) of a robotic device (e.g., 102, 300) .
- The processing system 500 processing modules may be implemented as machine-readable or processor-executable instructions that may be configured as instruction modules that are stored in electronic storage 530 (e.g., memory XXX) prior to execution by a processor (e.g., 320, 414).
- the instruction modules may include one or more of a spatial data receiving module 502, a linear pattern identification module 504, a line descriptor generation module 506, a linear pattern selection/grouping module 508, a vanishing point estimation module 510, a frame determination module 512, a rotation matrix estimation determination module 514, a rotation matrix application module 516, a line descriptor matching module 518, and a Manhattan frame comparison module 520, as well as other instruction modules.
- not every instruction module shown in FIG. 5 may be implemented.
- the spatial data receiving module 502 may be configured to receive and process spatial data captured by a sensor (e.g., 103) of the robotic device (e.g., 102) .
- the linear pattern identification module 504 may be configured to identify linear patterns in the received spatial data from forms or indicia visible within a field of view of the sensor.
- the line descriptor generation module 506 may be configured to generate line descriptors for the linear patterns identified by the linear pattern identification module 504. Each of the generated line descriptors may define a magnitude and direction of the identified linear patterns.
- the linear pattern selection/grouping module 508 may be configured to group the one or more linear patterns that extend parallel to one another relative to three orthogonal axes for defining the Manhattan frame.
- the vanishing point estimation module 510 may be configured to calculate one or more vanishing points based on a convergence of the selected linear patterns.
- the frame determination module 512 may be configured to define a frame of reference (e.g., a Manhattan Frame) from the line descriptors of the selected linear patterns.
- An origin of the frame of reference (e.g., the Manhattan frame) may be located at a convergence point of the three orthogonal axes.
- The rotation matrix estimation determination module 514 may be configured to determine a rotation matrix estimation based upon line descriptors defining the frame of reference (e.g., the Manhattan frame) as determined by the frame determination module 512 relative to a global frame of reference of the sensor (e.g., a camera frame F_C).
- The rotation matrix estimation may be determined as described above with regard to equations (1)-(7).
- the rotation matrix application module 516 may be configured to apply the rotation matrix estimation, determined by the rotation matrix estimation determination module 514, for spatial calibration of the robotic device that translates the defined Manhattan frame relative to a global frame of reference of the robotic device. For example, the rotation matrix application module 516 may use the rotation matrix estimated by rotation matrix estimation determination module 514 in order to correct a vSLAM rotation error.
- the line descriptor matching module 518 may be configured to match line descriptors associated with spatial data captured by the sensor from two distinct positions or orientations of the robotic device.
- The frame comparison module 520 may be configured to compare at least two distinct frames of reference, such as two determined Manhattan frames, for updating the rotation matrix estimation, such as in response to identifying a localization error from the matched line descriptors.
- As used herein, “module” may refer to any component or set of components that perform the functionality attributed to the module. This may include one or more physical processors during execution of processor-readable instructions, the processor-readable instructions, circuitry, hardware, storage media, or any other components.
- FIG. 6A is a process flow diagram illustrating an example method 600 performed by a processor of a robotic device for application of a rotation matrix estimation according to various embodiments.
- means for performing each of the operations of the method 600 may be a processor of the robotic device, such as the processor 320, the processor 410, the SoC 412, and/or the like.
- the processor may receive spatial data captured by a sensor of the robotic device.
- For example, the processor may receive one or more images from a camera, which images include spatial data.
- the processor may receive spatial data in the form of lidar measurements.
- the processor may receive other forms of spatial data, such as from radar, motion detectors, etc.
- The origin of the Manhattan frame may be disposed beyond vanishing points of the selected linear patterns within the field of view of the sensor.
- the spatial data captured by the sensor may include an image within the field of view, such as from a camera.
- the spatial data captured by the sensor may include measured ranges between the sensor and the forms or indicia visible within the field of view, such as from lidar.
- the processor may identify linear patterns in the received spatial data from forms or indicia visible within a field of view of the sensor. For example, edges between surfaces, a contrast visible between structures or colors, or lines or markings on objects or structures in the environment may appear as forms or indicia.
- the processor may generate line descriptors for the identified linear patterns.
- Each of the generated line descriptors may define a magnitude and direction of the identified linear patterns.
- For example, the generated line descriptors may be defined as vectors for distinct line segments.
- the processor may identify selected linear patterns from among the identified linear patterns based on the generated line descriptors indicating that the selected linear patterns extend parallel to one of three orthogonal axes. For example, only select linear patterns may converge to a vanishing point, thus the processor may only select identified linear patterns determined to converge to a vanishing point.
- the processor may define a Manhattan frame from the line descriptors of the selected linear patterns.
- An origin of the Manhattan frame may be located at a convergence point of the three orthogonal axes.
- the Manhattan frame may be a frame of reference of the environment based on structures therein and defined by using a Manhattan World Assumption.
- the processor may apply a rotation matrix estimation for spatial calibration of the robotic device that translates the defined Manhattan frame relative to a global frame of reference of the robotic device.
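- Putting the operations of the method 600 together, the following sketch shows one plausible end-to-end realization (the pre-grouped input, the least-squares vanishing-point estimate, and all names below are illustrative assumptions, not the disclosure’s implementation):

```python
import numpy as np

def apply_rotation_estimation(segments_by_axis, K):
    """From line segments already grouped per Manhattan axis, estimate each
    group's vanishing point, back-project to axis directions, and build the
    Manhattan-frame-to-camera rotation used for spatial calibration."""
    axes = []
    for segs in segments_by_axis:                 # one group per orthogonal axis
        # Least-squares intersection of the homogeneous image lines = vanishing point.
        L = np.array([np.cross([s[0][0], s[0][1], 1.0], [s[1][0], s[1][1], 1.0])
                      for s in segs])
        _, _, Vt = np.linalg.svd(L)
        v = Vt[-1]                                # (approximate) common intersection
        d = np.linalg.solve(K, v)                 # back-project: d ~ K^-1 v
        axes.append(d / np.linalg.norm(d))
    M = np.stack(axes, axis=1)
    U, _, Vt = np.linalg.svd(M)                   # nearest rotation in Frobenius norm
    R = U @ np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))]) @ Vt
    return R                                      # Manhattan frame relative to camera frame
```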
- FIG. 6B is a process flow diagram illustrating operations 602 that may be performed by a processor of a robotic device as part of the method 600 for application of a rotation matrix estimation according to various embodiments.
- means for performing the operations of the method 602 may be a processor of the robotic device, such as the processor 320, the processor 410, the SoC 412, and/or the like.
- the processor may compare the defined Manhattan frame to localization and mapping data determined for the robotic device in block 624.
- The rotation matrix estimation may be derived from the comparison of the defined Manhattan frame to the localization and mapping data. For example, a Manhattan frame generated based on SLAM or vSLAM data may be compared to a Manhattan frame generated from line descriptors of selected linear patterns. This comparison may be helpful to refine or revise the rotation matrix estimation previously based on only the SLAM or vSLAM data. As a further example, the comparison may be between localization and mapping data collected from two distinct positions or orientations within an environment. An example of the operations in block 602 is described in operations 210 with reference to FIG. 2F.
- the processor may apply the rotation matrix estimation in block 622 of the method 600 as described.
- FIG. 6C is a process flow diagram illustrating operations 604 that may be performed by a processor of a robotic device as part of the method 600 for application of a rotation matrix estimation according to various embodiments.
- means for performing the operations of the method 604 may be a processor of the robotic device, such as the processor 320, the processor 410, the SoC 412, and/or the like.
- the processor may group the one or more linear patterns extending parallel to one another relative to the three orthogonal axes for defining the Manhattan frame in block 626. For example, the processor may use the line descriptors to determine which linear patterns converge to the same vanishing point or near the same vanishing point in order to group those linear patterns together.
- the processor may identify selected linear patterns from among the identified linear patterns in block 618 of the method 600 as described.
- FIG. 6D is a process flow diagram illustrating operations 606 that may be performed by a processor of a robotic device as part of the method 600 for application of a rotation matrix estimation according to various embodiments.
- means for performing the operations of the method 606 may be a processor of the robotic device, such as the processor 320, the processor 410, the SoC 412, and/or the like.
- the processor may match line descriptors associated with spatial data captured by the sensor from two distinct positions or orientations of the robotic device in block 628.
- the matching line descriptors may correspond to the same linear patterns but from a different perspective, such as after the robotic device moves to a new position.
- the processor may update the rotation matrix estimation in response to identifying a localization error from the matched line descriptors. For example, after the robotic device moves from one position to another, errors may appear in the rotation matrix estimation. Such errors may be identified by periodically regenerating line descriptors to define updates to the Manhattan frame and comparing those results with older data.
- An example of the operations 606 is described with reference to the operations 220 in FIG. 2G.
- the processor may apply the rotation matrix estimation in block 622 of the method 600 as described.
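- A minimal sketch of the matching and error check discussed for the operations 606 follows; the descriptor representation (unit direction vectors), the least-squares alignment, and the use of an odometry-predicted rotation are all illustrative assumptions, not the described line descriptors or update rule.

```python
# Sketch only: match line directions seen from two poses and flag a localization error
# if the rotation relating them disagrees with the odometry-predicted rotation.
import numpy as np

def best_rotation(dirs_a, dirs_b):
    """Least-squares rotation aligning paired unit directions (Kabsch/Procrustes)."""
    H = np.asarray(dirs_a, float).T @ np.asarray(dirs_b, float)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    return Vt.T @ D @ U.T  # maps frame-a directions into frame-b directions

def localization_error_deg(dirs_a, dirs_b, R_predicted):
    """Angle between the observed relative rotation and the predicted one."""
    R_observed = best_rotation(dirs_a, dirs_b)
    R_delta = R_predicted.T @ R_observed
    return np.degrees(np.arccos(np.clip((np.trace(R_delta) - 1.0) / 2.0, -1.0, 1.0)))

# Toy usage: directions actually rotated 10 degrees about z, odometry predicted 12 degrees.
def rot_z(deg):
    a = np.radians(deg)
    return np.array([[np.cos(a), -np.sin(a), 0.0], [np.sin(a), np.cos(a), 0.0], [0.0, 0.0, 1.0]])

dirs_a = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
dirs_b = (rot_z(10.0) @ dirs_a.T).T
err = localization_error_deg(dirs_a, dirs_b, R_predicted=rot_z(12.0))
print(round(err, 2), "degrees; update the rotation matrix estimation if this exceeds a threshold")
```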
- Implementation examples are described in the following paragraphs. While some of the following implementation examples are described in terms of example methods, further example implementations may include: the example methods discussed in the following paragraphs implemented by a robotic device including a processor configured to perform operations of the example methods; the example methods discussed in the following paragraphs implemented by a robotic device including means for performing functions of the example methods; and the example methods discussed in the following paragraphs implemented as a non-transitory processor-readable storage medium having stored thereon processor-executable instructions configured to cause a processor of a robotic device to perform the operations of the example methods.
- Example 1 A method performed by a processor of a robotic device for application of a rotation matrix estimation, including receiving, by the processor of the robotic device, spatial data captured by a sensor of the robotic device; identifying linear patterns in the received spatial data from forms or indicia visible within a field of view of the sensor; generating line descriptors for the identified linear patterns, wherein each of the generated line descriptors defines a magnitude and direction of the identified linear patterns; identifying selected linear patterns from among the identified linear patterns based on the generated line descriptors indicating that the selected linear patterns extend parallel to one of three orthogonal axes; defining a Manhattan frame from the generated line descriptors of the selected linear patterns, wherein an origin of the Manhattan frame is located at a convergence point of the three orthogonal axes; and applying a rotation matrix estimation for spatial calibration of the robotic device that translates the defined Manhattan frame relative to a global frame of reference of the robotic device.
- Example 2 The method of example 1, further including comparing the defined Manhattan frame to localization and mapping data determined for the robotic device, in which the rotation matrix estimation is derived from the comparison of the defined Manhattan frame to the localization and mapping data.
- Example 3 The method of either of examples 1 or 2, further including grouping one or more linear patterns extending parallel to one another relative to the three orthogonal axes for defining the Manhattan frame.
- Example 4 The method of any of examples 1-3, in which the origin of the Manhattan frame is disposed beyond vanishing points of the selected linear patterns within the field of view of the sensor.
- Example 5 The method of any of examples 1–4, in which the spatial data captured by the sensor includes an image within the field of view.
- Example 6 The method of any of examples 1–5, in which the spatial data captured by the sensor includes measured ranges between the sensor and the forms or indicia visible within the field of view.
- Example 7 The method of any of examples 1–6, further including matching generated line descriptors associated with spatial data captured by the sensor from two distinct positions or orientations of the robotic device; and updating the rotation matrix estimation in response to identifying a localization error from matched generated line descriptors.
- DSP: digital signal processor
- ASIC: application specific integrated circuit
- FPGA: field programmable gate array
- a general-purpose processor may be a microprocessor, but, in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine.
- a processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Alternatively, some operations or methods may be performed by circuitry that is specific to a given function.
- the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions or code on a non-transitory computer-readable storage medium or non-transitory processor-readable storage medium.
- the operations of a method or algorithm disclosed herein may be embodied in a processor-executable software module or processor-executable instructions, which may reside on a non-transitory computer-readable or processor-readable storage medium.
- Non-transitory computer-readable or processor-readable storage media may be any storage media that may be accessed by a computer or a processor.
- non-transitory computer-readable or processor-readable storage media may include RAM, ROM, EEPROM, FLASH memory, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to store desired program code in the form of instructions or data structures and that may be accessed by a computer.
- Disk and disc include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of non-transitory computer-readable and processor-readable media.
- the operations of a method or algorithm may reside as one or any combination or set of codes and/or instructions on a non-transitory processor-readable storage medium and/or computer-readable storage medium, which may be incorporated into a computer program product.
Abstract
Rotation matrix estimation methods implemented by a processor of a robotic device may include receiving spatial data captured by a sensor of the robotic device. Linear patterns may be identified in the received spatial data. Line descriptors may be generated that may define a magnitude and a direction of the identified linear patterns. Selected linear patterns from among the identified linear patterns may be identified based on the generated line descriptors indicating that the selected linear patterns extend parallel to one of three orthogonal axes. A Manhattan frame may be defined from the line descriptors of the selected linear patterns. An origin of the Manhattan frame may be located at a convergence point of the three orthogonal axes. A rotation matrix estimation may be applied for spatial calibration of the robotic device that translates the defined Manhattan frame relative to a global frame of reference of the robotic device.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2022/138892 WO2024124421A1 (fr) | 2022-12-14 | 2022-12-14 | Estimation de matrice de rotation de robot à l'aide d'une hypothèse de monde de manhattan |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2024124421A1 true WO2024124421A1 (fr) | 2024-06-20 |
Family
ID=91484241
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22968132; Country of ref document: EP; Kind code of ref document: A1 |