CN112338910A - Space map determination method, robot, storage medium and system - Google Patents
Space map determination method, robot, storage medium and system
- Publication number
- CN112338910A (publication number), CN202011003650.3A (application number)
- Authority
- CN
- China
- Prior art keywords
- space
- feature information
- objects
- coordinate
- coordinate values
- Prior art date
- Legal status
- Pending
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1656—Programme controls characterised by programming, planning systems for manipulators
- B25J9/1664—Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1602—Programme controls characterised by the control system, structure, architecture
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
Landscapes
- Engineering & Computer Science (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Automation & Control Theory (AREA)
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
- Image Analysis (AREA)
Abstract
The embodiments of the present application provide a space map determination method, a robot, a storage medium and a system. The method comprises: acquiring a plurality of pieces of feature information of a space and a first coordinate value corresponding to each piece of feature information; acquiring second coordinate values of a plurality of objects in the space and first feature information corresponding to each object; and determining a map of the space according to the plurality of pieces of feature information and their corresponding first coordinate values, together with the second coordinate values of the plurality of objects and the first feature information corresponding to each object. In some embodiments of the present application, when the map is created, the second coordinate value of an object is corrected with the first coordinate value obtained beforehand before the standard coordinate value of the object in the map is determined, which effectively improves the accuracy of the created map and of the object coordinate values in it.
Description
Technical Field
The application relates to the technical field of artificial intelligence, in particular to a space map determining method, a robot, a storage medium and a system.
Background
With the continuous development of augmented reality technology, it is being applied in more and more scenarios. Beyond entertainment, augmented reality gives handheld devices spatial positioning capability and can therefore be applied in other scenarios as well. For example, in some map creation processes, augmented reality techniques allow maps to be created better and faster.
In the prior art, a handheld device (such as a mobile phone or a tablet computer) or a mobile robot platform can locate itself relative to the measured space based on the rich set of sensors it carries, such as an inertial navigation sensor, a camera and a laser radar. When a map is created, however, the systematic deviation of the inertial navigation sensor causes the coordinates of some target objects in the map to drift, which degrades the accuracy of the created map.
Disclosure of Invention
Aspects of the present disclosure provide a method, an apparatus, and a storage medium for determining a spatial map, so as to implement accurate creation of a map including an object.
The embodiment of the application provides a method for determining a space map, which comprises the following steps:
acquiring a plurality of pieces of characteristic information of a space and a first coordinate value corresponding to each piece of characteristic information;
acquiring second coordinate values of a plurality of objects in the space and first characteristic information corresponding to each object;
and determining the map of the space according to the plurality of feature information and the first coordinate values corresponding to the feature information, the second coordinate values of the plurality of objects and the first feature information corresponding to the objects.
Embodiments of the present application provide a computer-readable storage medium storing a computer program that, when executed by one or more processors, causes the one or more processors to perform actions comprising:
acquiring a plurality of pieces of characteristic information of a space and a first coordinate value corresponding to each piece of characteristic information;
acquiring second coordinate values of a plurality of objects in the space and first characteristic information corresponding to each object;
and determining the map of the space according to the plurality of feature information and the first coordinate values corresponding to the feature information, the second coordinate values of the plurality of objects and the first feature information corresponding to the objects.
An embodiment of the present application provides a self-moving robot, comprising: a machine body provided with one or more processors, one or more memories storing a computer program, and a first sensor;
the one or more processors to execute the computer program to:
acquiring a plurality of pieces of characteristic information of a space and a first coordinate value corresponding to each piece of characteristic information;
acquiring second coordinate values of a plurality of objects in the space and first characteristic information corresponding to each object;
and determining the map of the space according to the plurality of feature information and the first coordinate values corresponding to the feature information, the second coordinate values of the plurality of objects and the first feature information corresponding to the objects.
An embodiment of the present application provides a space map determination system, including:
a space in which a plurality of shelves are disposed, the shelves having labels disposed thereon;
a robot, comprising a mobile chassis on which an image acquisition device is disposed;
the robot acquires a plurality of pieces of characteristic information of a space and a first coordinate value corresponding to each piece of characteristic information;
acquiring second coordinate values of a plurality of objects in the space and first characteristic information corresponding to each object;
and determining the map of the space according to the plurality of feature information and the first coordinate values corresponding to the feature information, the second coordinate values of the plurality of objects and the first feature information corresponding to the objects.
In some embodiments of the present application, before the map is created, a plurality of pieces of feature information of the current space and the first coordinate value corresponding to each piece of feature information are obtained. After this collection is completed, the second coordinate values of the objects in the space and the first feature information corresponding to each object are obtained. The map of the space is then determined according to the plurality of pieces of feature information and their corresponding first coordinate values, together with the second coordinate values of the plurality of objects and the first feature information corresponding to each object. With this technical solution, when the map is created, the second coordinate value is corrected using the first coordinate value obtained beforehand before the standard coordinate value of the object in the map is determined, which effectively improves the accuracy of the created map and of the object coordinate values in it.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a schematic flowchart of a method for determining a space map according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of a method for acquiring multiple pieces of feature information according to an embodiment of the present disclosure;
fig. 3 is a schematic flowchart of a method for determining a map according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram illustrating a method for determining a map containing an object according to an embodiment of the present disclosure;
fig. 5 is a schematic flowchart of a method for creating a base coordinate system according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a space map determination apparatus according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a self-moving robot according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of a space map determination system according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the examples of the present invention and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, and "a" and "an" generally include at least two, but do not exclude at least one, unless the context clearly dictates otherwise.
The words "if", as used herein, may be interpreted as "at … …" or "at … …" or "in response to a determination" or "in response to a detection", depending on the context. Similarly, the phrases "if determined" or "if detected (a stated condition or event)" may be interpreted as "when determined" or "in response to a determination" or "when detected (a stated condition or event)" or "in response to a detection (a stated condition or event)", depending on the context.
It is also noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a commodity or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such commodity or system. Without further limitation, an element preceded by "comprises a ..." does not exclude the presence of additional identical elements in the commodity or system that comprises the element.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
In existing warehouse item management or shelf-based commodity management in shopping malls, a large amount of manpower and material resources are often needed to complete item management work such as stocktaking. Understandably, a warehouse space is large and holds many types of goods: a number of shelves are placed in the space according to certain rules, and different goods are placed at each position of every shelf. To facilitate the management of goods and commodities, a map can be created for the warehouse in which the positions of the shelves are accurately marked, so that staff or robots can carry out management operations such as stocktaking according to this accurately marked map. For example, the current space may be mapped with a mobile phone, tablet computer or robot that supports an ARKit (understood here as a generic term for any third-party AR distance and position calculation program). However, when a map is created for specified objects in the current space, the systematic deviation of the inertial navigation sensor gives the platform a gradually growing accumulated error in its own positioning, which in turn causes serious errors in positioning the spatial tag objects. Although inertial navigation errors can be corrected by adding visual sensors, such correction is limited in unknown spatial environments because of the lack of prior knowledge, and accurate coordinates of the positions of items in the space (or of the shelves within the space) cannot be obtained. The technique of the present application therefore proposes a scheme that can accurately determine the position of each object of the space in a map, where the map is a POI map formed by taking each piece of feature information as the relative or absolute coordinates of a Point of Interest (POI).
Fig. 1 is a schematic flowchart of a space map determination method according to an embodiment of the present application. The method may be executed by a mobile phone or camera equipped with an image acquisition device, or by a movable robot equipped with an image acquisition device, where the mobile phone, camera, robot and so on all support simultaneous localization and mapping (SLAM): the device (mobile phone or robot) is placed at an unknown position in an unknown space and creates the map while moving and scanning feature information. In practical applications, the SLAM positioning correction strategy may be ORB-SLAM (ORB: Oriented FAST and Rotated BRIEF), a complete SLAM system including a visual odometer, tracking and loop closure detection; it is a monocular SLAM system based on sparse feature points and also provides monocular, binocular and RGB-D camera interfaces.
Specifically, the method for determining a space map corresponding to fig. 1 includes the following steps:
101: the method comprises the steps of obtaining a plurality of pieces of characteristic information of a space and first coordinate values corresponding to the characteristic information.
102: and acquiring second coordinate values of a plurality of objects in the space and first characteristic information corresponding to each object.
103: and determining the map of the space according to the plurality of feature information and the first coordinate values corresponding to the feature information, the second coordinate values of the plurality of objects and the first feature information corresponding to the objects.
For ease of understanding, an example is given below. Assume the device collects information in an unknown target space with its acquisition device while moving along the passages of that space. During the movement, a plurality of pieces of feature information in the space and the first coordinate value corresponding to each piece can be acquired; in addition, the trajectory of the image acquisition device can be obtained through the relevant sensors. Because the image acquisition device keeps moving while acquiring feature information and the corresponding first coordinate values, all feature information in the current space and the first coordinate value of each piece can be collected continuously. The feature information is feature information of the current spatial environment; it may include some target objects or simply background information. To ease later processing, scenes of pure colour lacking texture should be avoided as much as possible during acquisition. In this acquisition pass, no feature collection is carried out for any specified object.
Next, related information is collected for the objects in the current space. Specifically, the second coordinate values of a plurality of objects in the current space and the first feature information corresponding to each object are acquired through the image acquisition device. In this pass, the objects, the background around them and other related content in the space are collected; to gather more complete information, continuous acquisition may be used. Alternatively, the related information of the objects may be collected discontinuously, and there is no need to collect much other information in the space that is unrelated to the objects.
Through these two acquisition passes, related information about the same space is obtained. It should be noted that the plurality of pieces of feature information and the first feature information acquired in the two passes share at least some feature information. Further, based on a SLAM positioning correction policy (for example, ORB-SLAM), the coordinate values of the objects are corrected using the plurality of pieces of feature information acquired in the pre-scan and the first feature information acquired in the re-scan. The first coordinate value and the second coordinate value may lie in the same coordinate system or in different ones; when they are compared, however, it must be ensured that they belong to the same coordinate system (for example, both are coordinate values in a coordinate system established from the real space).
In one or more embodiments of the present application, fig. 2 is a schematic flowchart of a method for acquiring a plurality of pieces of feature information. Step 101 of acquiring a plurality of pieces of feature information of a space and the first coordinate value corresponding to each piece of feature information comprises the following steps:
201: and acquiring a plurality of characteristic information continuously acquired by the image acquisition equipment in the moving process in space.
202: and recording a first coordinate value corresponding to the image acquisition equipment when the image acquisition equipment acquires the plurality of characteristic information.
203: associating the plurality of feature information with the first coordinate value.
When the image acquisition device actually collects the related feature information, it moves continuously in the current space, generally along the passages on the ground of the space, and a plurality of pieces of feature information can be collected continuously during the movement. During collection, the first coordinate value corresponding to each piece of feature information must also be recorded, i.e. every piece of feature information has a corresponding first coordinate value. After the plurality of pieces of feature information and the first coordinate values are obtained, an association between them needs to be established; in other words, for any piece of feature information the corresponding first coordinate value can be found, and a certain correspondence (for example, one-to-one or many-to-one) exists between any piece of feature information and a first coordinate value. As an optional embodiment, in this acquisition pass only the SLAM model is used to collect feature information of the space, without targeted acquisition of the objects in the space, so that the first feature information and second coordinate values acquired subsequently for the objects can be corrected based on the plurality of pieces of feature information and first coordinate values acquired in this pass.
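As a minimal sketch of the association described above (the names `PreScanMap` and `feature_id` and the coordinate layout are illustrative assumptions, not terms from the patent), the pre-scan result can be held as a mapping from a feature identifier to the first coordinate value recorded when it was captured:

```python
from dataclasses import dataclass, field
from typing import Dict, Optional, Tuple

Coordinate = Tuple[float, float, float]   # (x, y, z) first coordinate value

@dataclass
class PreScanMap:
    """Feature information collected during the pre-scan, each entry associated
    with the first coordinate value recorded when it was captured."""
    entries: Dict[str, Coordinate] = field(default_factory=dict)

    def add(self, feature_id: str, first_coordinate: Coordinate) -> None:
        # One-to-one (or many-to-one) correspondence between a piece of
        # feature information and a first coordinate value.
        self.entries[feature_id] = first_coordinate

    def lookup(self, feature_id: str) -> Optional[Coordinate]:
        return self.entries.get(feature_id)

# Usage: record feature information as the device moves along the passage.
pre_scan = PreScanMap()
pre_scan.add("feat_0001", (1.20, 0.35, 0.00))
pre_scan.add("feat_0002", (2.05, 0.40, 0.00))
```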
As can be seen from the foregoing, the present solution can be implemented by a robot having an image acquisition device. Specifically, the acquiring second coordinate values of a plurality of objects in the space and first feature information corresponding to each object includes: identifying the object within the space; and when the object is identified, acquiring first characteristic information corresponding to the object, and acquiring a second coordinate value of the object.
For example, when the robot moves automatically through the current space or is controlled from the background, it recognises the objects contained in the current space through the image acquisition device. An object may be, for example, a two-dimensional code, a barcode, a three-dimensional object with distinctive features, or an identification picture; anything whose feature information can be extracted by the image acquisition device can serve as an object. The recognition method may differ for different types of object: a two-dimensional code can be recognised by a code reader, while a picture made up of character marks can be recognised by OCR, and so on. These are merely examples; in practical applications, the recognition method can be chosen according to the actual needs of the user. When an object is recognised, the first feature information corresponding to the object and the second coordinate value of the object are acquired. The first feature information of an object is understood to be feature information of the object itself, feature information of the environment around the object, or both.
In one or more embodiments of the present application, if the object is a two-dimensional code, the robot may capture the two-dimensional code in the space through the image acquisition device and recognise it to obtain the identification of the object. The resulting identification is unique; in other words, each object has a unique corresponding identification in the current space. If several similar objects exist in the current space, they can be distinguished by their identifications and marked accurately in the map. In practical applications, the two-dimensional code itself may serve as the object, or the object may be an item carrying the two-dimensional code, for example a toy or a shelf bearing the code.
As can be seen from the foregoing, the present solution can also be implemented by a mobile phone with an image capturing device, a camera, and other related electronic devices.
Obtaining the second coordinate values of the plurality of objects in the space and the first feature information corresponding to each object includes: in response to an acquisition instruction triggered by a user, controlling an image acquisition device to acquire an image of the object and obtaining a second coordinate value of the object; and determining the first feature information corresponding to the object based on the image.
For example, the user may hold a mobile phone, camera or other terminal device to collect the related information. After the user triggers the acquisition instruction on the terminal device, the terminal controls the image acquisition device to capture an image of the object. This image contains the object and the background around it. Therefore, when features are extracted from the image, the feature information of the object, of its surrounding background, or of both may be extracted as the first feature information.
When the image of the object is acquired by the image acquisition device, the second coordinate value of the object is acquired at the same time. The second coordinate value is a coordinate value corresponding to the object in the map.
In one or more embodiments of the present application, the image includes a two-dimensional code; and the determining of the first feature information corresponding to the object based on the image comprises: recognizing the two-dimensional code to determine an identity of the object; identifying the image to obtain image feature points; wherein the first feature information includes: the identification of the object and the image feature points.
For example, assume the object is a shelf bearing a two-dimensional code. The user holds the terminal device and starts the image acquisition function; the captured image contains the two-dimensional code, part of the shelf and other background. After the image is acquired, the content of the two-dimensional code in it is recognised to obtain the identification carried in the code, which is used to distinguish the different shelves in the space. The acquired image is also processed to obtain image feature points, which may be feature points of the two-dimensional code, of the shelf or the background apart from the code, or of the entire image content (code, shelf and background together). The first feature information may then include the identification of the shelf and the image feature points.
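A hedged sketch of assembling the first feature information (identification plus image feature points) might look as follows; `decode_qr` is a hypothetical callable standing in for whatever code reader is used, and ORB via OpenCV is only one possible feature extractor, not one mandated by the patent:

```python
import cv2                      # OpenCV, used here only as one possible feature extractor
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class FirstFeatureInfo:
    object_id: str              # identification decoded from the two-dimensional code
    keypoints: list             # image feature points
    descriptors: Any            # descriptors used later for matching against the pre-scan

def extract_first_feature_info(image, decode_qr) -> Optional[FirstFeatureInfo]:
    """decode_qr is any callable returning the payload of the two-dimensional
    code in the image, or None; it stands in for whatever code reader is used."""
    object_id = decode_qr(image)
    if object_id is None:
        return None
    # Feature points may come from the code, the shelf, the background, or the
    # whole image; here the whole image is used.
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create()
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    return FirstFeatureInfo(object_id, list(keypoints), descriptors)
```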
In one or more embodiments of the present application, the obtaining a second coordinate value of the object includes: determining an equipment coordinate value of the image acquisition equipment when the first characteristic information is acquired; determining the relative position relation between the image acquisition equipment and the object according to the depth information of the image acquired by the image acquisition equipment; and determining a second coordinate value corresponding to the object based on the relative position relation and the equipment coordinate value.
For example, assuming the terminal device supports an ARKit (again understood as a generic term for any third-party AR distance and position calculation program), the terminal generates a coordinate system based on its current position when it is started (generally following the right-hand rule); the coordinate system depends on the position and orientation of the terminal device at start-up. In this coordinate system, the position of the terminal device, i.e. its three-dimensional coordinate value, can be determined. Further, the relative positional relationship between the image acquisition device and the object can be calculated using the depth information, feature information and so on of the images captured by the device, and the second coordinate value of the object is then obtained from this relative positional relationship and the device coordinate value. The relative positional relationship may be determined, for example, with a simultaneous localization and mapping (SLAM) algorithm.
In practical applications, assume the object is an electronic tag bearing a two-dimensional code. When the code is recognised, its position relative to the image acquisition device can be obtained from its position on the screen combined with the SLAM algorithm, and hence its position in the spatial coordinate system. This position information is represented as a 4 x 4 world-transform matrix. In one application scenario, two-dimensional codes can be placed wherever they are needed on a shelf (on its various levels and at different positions on each level); the position of each code relative to the image acquisition device is then obtained with the SLAM algorithm, which determines the position of the code in the space, expressible as a three-dimensional coordinate value. Connecting the spatial positions of the codes yields a two-dimensional or three-dimensional profile of the shelf.
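The composition of the device pose and the relative position into the object's second coordinate value can be sketched as below, assuming both poses are available as the 4 x 4 world-transform matrices mentioned above (the function name and the numeric values are illustrative only):

```python
import numpy as np

def object_second_coordinate(device_pose_world: np.ndarray,
                             tag_pose_in_camera: np.ndarray) -> np.ndarray:
    """Both arguments are 4 x 4 homogeneous transforms: the pose of the image
    acquisition device in the base coordinate system, and the pose of the tag
    (two-dimensional code) relative to the device. Returns the tag's (x, y, z)
    in the base coordinate system, i.e. the second coordinate value."""
    tag_pose_world = device_pose_world @ tag_pose_in_camera
    return tag_pose_world[:3, 3]

# Usage with illustrative values (not taken from the patent):
device_pose = np.eye(4)
device_pose[:3, 3] = [2.0, 1.0, 1.4]     # device coordinate value
relative = np.eye(4)
relative[:3, 3] = [0.0, 0.0, 1.5]        # tag 1.5 m along the camera's z axis
print(object_second_coordinate(device_pose, relative))   # -> [2.  1.  2.9]
```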
With the above scheme, based on the plurality of pieces of feature information and their first coordinate values acquired in the first, pre-scan pass through the current space, the current position (second coordinate value) of a specified object scanned in the second pass through the same space can be corrected at a certain frequency, or whenever the object is found, so that accumulated error is avoided.
In one or more embodiments of the present application, fig. 3 is a schematic flowchart of a method for determining a map. As shown in fig. 3, step 103 of determining the map of the space according to the plurality of pieces of feature information and their corresponding first coordinate values, together with the second coordinate values of the plurality of objects and the first feature information corresponding to each object, comprises the following steps:
301: and determining a plurality of target feature information which has a matching relation with first feature information corresponding to the plurality of objects in the plurality of feature information.
302: and acquiring first coordinate values corresponding to the target characteristic information.
303: and determining calibration coordinate values of the plurality of objects based on first coordinate values corresponding to the plurality of target characteristic information and second coordinate values of the plurality of objects.
304: determining a map of the space based on the calibration coordinate values of the plurality of objects.
In practical applications, during the first acquisition pass the plurality of pieces of feature information in the current space can be collected as comprehensively as possible. When the first feature information is obtained, the related feature information of the objects is collected in a targeted manner; at least part of the first feature information matches the plurality of pieces of feature information (which may be regarded as a set), and the matching part is defined as the plurality of pieces of target feature information. When the pieces of target feature information with a matching relationship are found among the plurality of pieces of feature information, their corresponding first coordinate values are determined. The first coordinate value is then compared with the second coordinate value: if they are the same, or the error is within a reasonable range (i.e. smaller than a certain threshold), either the first or the second coordinate value is taken as the standard coordinate value of the object. A map corresponding to the space, in which the positions of the objects can be displayed accurately, is then determined based on the standard coordinate values of the plurality of objects.
If the first coordinate value deviates from the second coordinate value, the second coordinate value is taken as the calibration coordinate value of the object; here the target feature information has a matching relationship with the object.
In practical applications, when the first coordinate value is compared with the second coordinate value, the coordinate values corresponding to feature information at adjacent positions or in adjacent images are used as references for mutual calibration. Since the second coordinate value is measured directly from the acquisition result of the image acquisition device while the first coordinate value is calculated, the second coordinate value of the object is more precise than the first coordinate value. Therefore, when the first coordinate value deviates substantially from the second, the second coordinate value is taken as the calibration coordinate value. Further, to be safe, when such a deviation occurs the second coordinate value may be corrected based on the plurality of pieces of feature information adjacent to the first feature information, and the corrected second coordinate value compared with the value before correction. If a large deviation remains between the values before and after correction, the first feature information and the corresponding second coordinate value of the object can be acquired again; if there is no deviation, the current second coordinate value is sufficiently accurate and can be used as the calibration coordinate value.
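A minimal sketch of the calibration rule described above is given below; the deviation threshold, the dictionary layout and the use of the object identification as the matching key are assumptions for illustration, not values or structures fixed by the patent:

```python
import numpy as np

DEVIATION_THRESHOLD = 0.10   # metres; an assumed tolerance, not specified in the patent

def calibrate_coordinate(first_coordinate, second_coordinate,
                         threshold=DEVIATION_THRESHOLD):
    """If the pre-scanned first coordinate value and the measured second
    coordinate value agree within the threshold, either may serve as the
    calibration coordinate value (the first is returned here); if they deviate,
    the directly measured second coordinate value is used."""
    first = np.asarray(first_coordinate, dtype=float)
    second = np.asarray(second_coordinate, dtype=float)
    if np.linalg.norm(first - second) <= threshold:
        return first
    return second

def calibrate_all(pre_scan_entries, detected_objects):
    """pre_scan_entries: {feature_id: first_coordinate} from the pre-scan;
    detected_objects: iterable of (feature_id, second_coordinate). The shared
    feature_id acts as the matching key for the target feature information."""
    calibrated = {}
    for feature_id, second in detected_objects:
        first = pre_scan_entries.get(feature_id)
        if first is None:
            calibrated[feature_id] = tuple(second)   # no matching target feature info
        else:
            calibrated[feature_id] = tuple(calibrate_coordinate(first, second))
    return calibrated
```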
In one or more embodiments of the present application, determining the map of the space according to the calibration coordinate values of the plurality of objects comprises: based on the calibration coordinate values of the plurality of objects, acquiring the moving trajectory of the image acquisition device in the space, where the plurality of pieces of feature information are collected by the image acquisition device as it moves; and determining the contours of the plurality of objects in the map based on the moving trajectory in the space and the calibration coordinate values of the plurality of objects.
Fig. 4 is a schematic diagram of determining a map containing objects according to an embodiment of the present application. As can be seen from fig. 4, when the image acquisition device collects the related feature information it generally moves along the passages of the current space; in other words, the places the device passes through can be regarded as a route, and it is this route that separates the individual objects. Connecting the points of the moving trajectory yields the route, and connecting the calibration coordinate values yields an object. It should be noted that no intersection may occur when the route and the object outlines are connected. As fig. 4 shows, the shelves are surrounded by route segments, and the contours of the objects in the map can be determined accurately from the calibration coordinate values and the route (see the sketch below).
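The connection step can be sketched as follows, assuming the trajectory and the calibration coordinates are available as ordered 2-D points; the use of shapely for the no-intersection check is an implementation choice, not part of the patent:

```python
from shapely.geometry import LineString   # used only for the intersection check

def build_map_layers(trajectory_points, shelf_calibration_points):
    """trajectory_points: ordered (x, y) positions of the acquisition device;
    shelf_calibration_points: ordered (x, y) calibration coordinate values of
    one shelf. Connecting the trajectory gives the route, connecting the
    calibration coordinates gives the shelf outline; the two must not intersect."""
    route = LineString(trajectory_points)
    outline = LineString(list(shelf_calibration_points) + [shelf_calibration_points[0]])
    if route.intersects(outline):
        raise ValueError("route crosses a shelf outline; re-check the calibration values")
    return route, outline

# Usage with illustrative values: a route passing beside a rectangular shelf.
route, outline = build_map_layers(
    [(0.0, 0.0), (6.0, 0.0)],                                  # moving trajectory
    [(1.0, 1.0), (5.0, 1.0), (5.0, 2.0), (1.0, 2.0)],          # shelf corners
)
```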
As described above, when a map is created with a terminal device supporting an ARKit (understood as a generic term for any third-party AR distance and position calculation program), the terminal establishes a three-dimensional coordinate system based on the position of the device at start-up. As an optional embodiment, before the map of the space is determined, the method further comprises creating a base coordinate system, in which the first coordinate values corresponding to the pieces of feature information and the second coordinate values of the objects are expressed. Fig. 5 is a schematic flowchart of a method for creating the base coordinate system according to an embodiment of the present application. As shown in fig. 5, creating the base coordinate system comprises the following steps:
501: and determining an origin anchor point and a direction anchor point based on at least one image in the space acquired by the image acquisition equipment.
502: and determining a coordinate origin and any coordinate axis according to the origin anchor point and the direction anchor point.
503: and obtaining the basic coordinate system based on the coordinate origin and any coordinate axis.
For ease of understanding, a specific example of the process of creating the base coordinate system is given below. In the space where a coordinate system needs to be established, two anchor points are designated: an origin anchor point and a direction anchor point. Let the origin anchor point coordinates be (x0, y0, z0) (note that the coordinate axis in the vertical direction is the z axis). Then, for the tag coordinates (x, y, z) in any terminal device coordinate system, the following translation is applied:

(x′, y′, z′) = (x − x0, y − y0, z − z0)

To determine the direction of the x axis, a direction anchor point is defined, which corresponds to an arbitrary point on the x axis of the real-world coordinate system. Its measured coordinates are (x1, y1, z1). Since the z axes of the two coordinate systems coincide, z1 is not needed; the rotation angle α is calculated from x1 and y1 as α = −Im(log(x1 + y1·i)). (A simple inverse trigonometric function such as tan⁻¹ could be used instead, but the complex expression covers directions over the full range 0 to 2π, whereas the inverse trigonometric function covers only half of that range.)
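A small sketch of the base-coordinate-system transform under the formulas above (the function name and sample anchor values are assumptions; the direction anchor is translated by the origin anchor before computing the angle, consistent with the translation step above):

```python
import cmath
import math

def base_coordinate_transform(origin_anchor, direction_anchor):
    """origin_anchor = (x0, y0, z0) and direction_anchor = (x1, y1, z1), both
    measured in the terminal device's own coordinate system (z vertical).
    Returns a function mapping device-frame coordinates into the base
    coordinate system: translate by the origin anchor, then rotate about z by
    alpha = -Im(log(dx + dy*i)), where (dx, dy) is the translated direction anchor."""
    x0, y0, z0 = origin_anchor
    x1, y1, _ = direction_anchor
    # The complex argument covers the full 0..2*pi range, unlike a bare arctangent.
    alpha = -cmath.log(complex(x1 - x0, y1 - y0)).imag
    cos_a, sin_a = math.cos(alpha), math.sin(alpha)

    def to_base(point):
        x, y, z = point
        xp, yp, zp = x - x0, y - y0, z - z0          # translation step
        return (xp * cos_a - yp * sin_a,             # rotation about the z axis
                xp * sin_a + yp * cos_a,
                zp)

    return to_base

# Usage: the direction anchor itself maps onto the positive x axis of the base system.
to_base = base_coordinate_transform((1.0, 1.0, 0.0), (1.0, 2.0, 0.0))
print(to_base((1.0, 2.0, 0.0)))    # approximately (1.0, 0.0, 0.0)
```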
Based on the above embodiments, before the map is created, a plurality of pieces of feature information of the current space and the first coordinate value corresponding to each piece are acquired. After this collection is completed, the second coordinate values of the objects in the space and the first feature information corresponding to each object are obtained. The map of the space is then determined according to the plurality of pieces of feature information and their corresponding first coordinate values, together with the second coordinate values of the plurality of objects and the first feature information corresponding to each object. With this technical solution, when the map is created, the second coordinate value is corrected using the first coordinate value obtained beforehand before the standard coordinate value of the object in the map is determined, which effectively improves the accuracy of the created map and of the object coordinate values in it.
Based on the same idea, the embodiment of the application further provides a space map determination device. Fig. 6 is a schematic structural diagram of a space map determination apparatus according to an embodiment of the present application. As can be seen from fig. 6, the apparatus comprises:
the first obtaining module 61 is configured to obtain a plurality of pieces of feature information of a space and a first coordinate value corresponding to each piece of feature information.
The second obtaining module 62 is configured to obtain second coordinate values of the plurality of objects in the space and first feature information corresponding to each object.
The determining module 63 is configured to determine the map of the space according to the plurality of feature information and the first coordinate values corresponding to the feature information, and the second coordinate values of the plurality of objects and the first feature information corresponding to each object.
Optionally, the first obtaining module 61 is further configured to obtain a plurality of feature information continuously collected during the movement of the image collecting apparatus in the space; recording a first coordinate value corresponding to the image acquisition equipment when the image acquisition equipment acquires the plurality of feature information; associating the plurality of feature information with the first coordinate value.
Optionally, the second obtaining module 62 is further configured to identify the object in the space; and when the object is identified, acquiring first characteristic information corresponding to the object, and acquiring a second coordinate value of the object.
Optionally, the second obtaining module 62 is further configured to collect a two-dimensional code in the space; and identifying the two-dimensional code to obtain the identification of the object.
Optionally, the second obtaining module 62 is further configured to, in response to a user triggering a collecting instruction, control the image collecting device to collect an image of the object, and obtain a second coordinate value of the object; and determining first characteristic information corresponding to the object based on the image.
Optionally, the image contains a two-dimensional code; the second obtaining module 62 is further configured to identify the two-dimensional code to determine an identifier of the object; identifying the image to obtain image feature points; wherein the first feature information includes: the identification of the object and the image feature points.
Optionally, the second obtaining module 62 is further configured to determine a device coordinate value of the image capturing device when obtaining the first feature information;
determining the relative position relation between the image acquisition equipment and the object according to the depth information of the image acquired by the image acquisition equipment;
and determining a second coordinate value corresponding to the object based on the relative position relation and the equipment coordinate value.
Optionally, the determining module 63 is configured to determine a plurality of target feature information in the plurality of feature information, where the plurality of target feature information has a matching relationship with first feature information corresponding to the plurality of objects;
acquiring first coordinate values corresponding to a plurality of target characteristic information;
determining calibration coordinate values of the plurality of objects based on first coordinate values corresponding to the plurality of target characteristic information and second coordinate values of the plurality of objects;
determining a map of the space based on the calibration coordinate values of the plurality of objects.
Optionally, the determining module 63 is further configured to use the second coordinate value as the calibration coordinate value of the object when there is a deviation between the first coordinate value and the second coordinate value;
wherein the target characteristic information has a matching relationship with the object.
Optionally, the determining module 63 is further configured to, based on the calibration coordinate values of the plurality of objects, acquire the moving trajectory of the image acquisition device in the space, where the plurality of pieces of feature information are collected by the image acquisition device as it moves; and determine the contours of the plurality of objects in the map based on the moving trajectory in the space and the calibration coordinate values of the plurality of objects.
Optionally, a creating module 64 is further included, configured to create the base coordinate system; and the first coordinate values corresponding to the characteristic information and the second coordinate values of the objects are coordinate values in the base coordinate system.
Optionally, the creating module 64 is further configured to determine an origin anchor point and a direction anchor point based on at least one image in the space acquired by the image acquisition device; determining a coordinate origin and any coordinate axis according to the origin anchor point and the direction anchor point; and obtaining the basic coordinate system based on the coordinate origin and any coordinate axis.
Fig. 7 is a schematic structural diagram of a self-moving robot according to an embodiment of the present application. The self-moving robot comprises a machine body, one or more processors 702, one or more memories 703 storing a computer program, and an image acquisition device 701; at least one image acquisition device 701 is disposed on the self-moving robot, together with other necessary components installed on the machine body, such as power components 704, for maintaining the basic functions of the self-moving device.
The at least one image acquisition device 701 is configured to acquire preset signals within its signal sensing range;
one or more processors 702 to execute computer programs to:
acquiring a plurality of pieces of characteristic information of a space and a first coordinate value corresponding to each piece of characteristic information;
acquiring second coordinate values of a plurality of objects in the space and first characteristic information corresponding to each object;
and determining the map of the space according to the plurality of feature information and the first coordinate values corresponding to the feature information, the second coordinate values of the plurality of objects and the first feature information corresponding to the objects.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program which, when executed by one or more processors, causes the one or more processors to perform the steps in the method embodiments corresponding to fig. 1-5.
Based on the same idea, the embodiment of the application further provides a space map determination system. Fig. 8 is a schematic structural diagram of a space map determination system according to an embodiment of the present application. As can be seen from fig. 8, the system comprises:
a space 81 in which a plurality of shelves 82 are disposed, the shelves 82 having labels 83 disposed thereon;
the robot acquires a plurality of pieces of characteristic information of a space and a first coordinate value corresponding to each piece of characteristic information;
acquiring second coordinate values of a plurality of objects in the space and first characteristic information corresponding to each object;
and determining the map of the space according to the plurality of feature information and the first coordinate values corresponding to the feature information, the second coordinate values of the plurality of objects and the first feature information corresponding to the objects.
For ease of understanding, the operation of the system is illustrated below in conjunction with FIG. 8.
For example, suppose a map needs to be built for the shelves in a shopping mall or supermarket; to determine the positions of the shelves in the current space accurately, the system corresponding to fig. 8 can be adopted. It should be noted that in actual work the robot may operate autonomously, be controlled by background staff, or follow movement rules planned for it in advance.
After entering the current space, the robot moves and scans along the passages of the space. This may be called the pre-scanning process: there is no need to scan and recognise designated objects (e.g. two-dimensional codes) in the space or to collect anchor points; only the plurality of pieces of feature information in the current space and the first coordinate value corresponding to each piece are collected through the SLAM model. The object feature information and coordinate values are later corrected according to the plurality of pieces of feature information and coordinate values obtained by this pre-scan.
After the pre-scan is completed, the robot scans the objects in the current space to obtain their feature information (each object may be regarded as an information point, POI). During this scan, a base coordinate system matching the real space needs to be established from the designated origin anchor point and direction anchor point, and the coordinate system established by the robot during the pre-scan needs to be converted into one consistent with the base coordinate system. The first feature information of each object and the corresponding second coordinate value are then determined in the newly established base coordinate system.
Next, the pieces of target feature information whose feature information matches the first feature information are searched for, the first coordinate values corresponding to the target feature information are determined, and the second coordinate values are corrected with the first coordinate values to obtain the calibration coordinate values. The specific coordinate information of each shelf in the current space, which may be three-dimensional or two-dimensional coordinate values, is then determined from the calibration coordinate values.
After the coordinate values of the objects are obtained, they are connected to obtain the two-dimensional or three-dimensional outline of each shelf, and the moving trajectory of the robot is connected to obtain the passages in the map. It should be noted that during this connection the trajectory lines and the shelf outlines must not intersect. The positions of the shelves in the map obtained with the above embodiments are more accurate.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by "comprises a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.
Claims (15)
1. A method for determining a spatial map, the method comprising:
acquiring a plurality of pieces of feature information of a space and a first coordinate value corresponding to each piece of feature information;
acquiring second coordinate values of a plurality of objects in the space and first feature information corresponding to each object; and
determining a map of the space according to the plurality of pieces of feature information and the first coordinate values corresponding to the respective pieces of feature information, and the second coordinate values of the plurality of objects and the first feature information corresponding to the respective objects.
2. The method according to claim 1, wherein acquiring the plurality of pieces of feature information of the space and the first coordinate value corresponding to each piece of feature information comprises:
acquiring a plurality of pieces of feature information continuously collected by an image acquisition device while it moves in the space;
recording the first coordinate value of the image acquisition device at the time each piece of feature information is collected; and
associating the plurality of pieces of feature information with the corresponding first coordinate values.
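To make the association step of claim 2 concrete, the following is a minimal, non-authoritative Python sketch. The record type, the `extract` callable, and the pose representation are illustrative assumptions; the claim does not prescribe any particular data structure or feature type.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, List, Tuple

Pose = Tuple[float, float, float]  # assumed device coordinate: (x, y, yaw) in the base frame


@dataclass
class FeatureRecord:
    """One piece of feature information paired with the first coordinate value
    recorded for the image acquisition device at capture time."""
    feature_info: object   # whatever the extractor returns for a single frame
    first_coordinate: Pose


def collect_feature_records(frames: Iterable, poses: Iterable[Pose],
                            extract: Callable) -> List[FeatureRecord]:
    """Continuously pair each frame's feature information with the device
    coordinate recorded at the same moment, then keep the association."""
    return [FeatureRecord(extract(frame), pose) for frame, pose in zip(frames, poses)]
```

In practice `extract` could be any keypoint detector; the point of the sketch is only that each piece of feature information is stored together with the first coordinate value recorded when it was collected.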
3. The method according to claim 1, wherein acquiring the second coordinate values of the plurality of objects in the space and the first feature information corresponding to each object comprises:
identifying an object within the space; and
when the object is identified, acquiring the first feature information corresponding to the object and acquiring a second coordinate value of the object.
4. The method of claim 3, wherein identifying the object within the space comprises:
collecting a two-dimensional code in the space; and
identifying the two-dimensional code to obtain an identifier of the object.
5. The method according to claim 1, wherein acquiring the second coordinate values of the plurality of objects in the space and the first feature information corresponding to each object comprises:
in response to an acquisition instruction triggered by a user, controlling an image acquisition device to collect an image of the object and acquiring a second coordinate value of the object; and
determining first feature information corresponding to the object based on the image.
6. The method of claim 5, wherein the image contains a two-dimensional code, and determining the first feature information corresponding to the object based on the image comprises:
identifying the two-dimensional code to determine an identifier of the object; and
identifying the image to obtain image feature points,
wherein the first feature information comprises the identifier of the object and the image feature points.
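As one possible reading of claim 6, the sketch below derives the first feature information of an object from a single image: an identifier decoded from the two-dimensional code plus image feature points. The use of pyzbar for QR decoding and OpenCV ORB for feature points is an assumption; the claims name no library or feature type.

```python
import cv2
import numpy as np
from pyzbar.pyzbar import decode  # assumed QR decoder; any decoder would do


def build_first_feature_info(image: np.ndarray) -> dict:
    """Return the object identifier read from the two-dimensional code together
    with the image feature points of the same image."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Identify the two-dimensional code to determine the identifier of the object.
    codes = decode(gray)
    object_id = codes[0].data.decode("utf-8") if codes else None

    # Identify the image to obtain image feature points (ORB is an arbitrary choice).
    orb = cv2.ORB_create(nfeatures=500)
    keypoints, descriptors = orb.detectAndCompute(gray, None)

    return {
        "object_id": object_id,
        "keypoints": [kp.pt for kp in keypoints],
        "descriptors": descriptors,  # later matched against the pre-collected feature map
    }
```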
7. The method according to claim 3 or 5, wherein acquiring the second coordinate value of the object comprises:
determining a device coordinate value of the image acquisition device at the time the first feature information is collected;
determining a relative positional relationship between the image acquisition device and the object according to depth information of the image collected by the image acquisition device; and
determining the second coordinate value of the object based on the relative positional relationship and the device coordinate value.
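A minimal sketch of the coordinate composition in claim 7, assuming a planar case in which the depth image has already been reduced to a range and a bearing of the object relative to the camera; these simplifications are not taken from the claims themselves.

```python
import math
from typing import Tuple

Pose = Tuple[float, float, float]  # device coordinate (x, y, yaw) in the base frame


def object_second_coordinate(device_pose: Pose, depth_m: float,
                             bearing_rad: float) -> Tuple[float, float]:
    """Combine the device coordinate value with the object's position relative
    to the camera (range from the depth image, bearing in the camera frame)
    to obtain the object's second coordinate value in the base frame."""
    x, y, yaw = device_pose
    dx = depth_m * math.cos(yaw + bearing_rad)
    dy = depth_m * math.sin(yaw + bearing_rad)
    return (x + dx, y + dy)
```

For example, a device at (2.0, 3.0) facing +y (yaw = π/2) that sees an object 1.5 m straight ahead places it at roughly (2.0, 4.5).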
8. The method of claim 1, wherein determining the map of the space based on the plurality of pieces of feature information and the first coordinate values corresponding to the respective pieces of feature information, and the second coordinate values of the plurality of objects and the first feature information corresponding to the respective objects comprises:
determining, from the plurality of pieces of feature information, a plurality of pieces of target feature information that have a matching relationship with the first feature information corresponding to the plurality of objects;
acquiring the first coordinate values corresponding to the plurality of pieces of target feature information;
determining calibration coordinate values of the plurality of objects based on the first coordinate values corresponding to the plurality of pieces of target feature information and the second coordinate values of the plurality of objects; and
determining the map of the space based on the calibration coordinate values of the plurality of objects.
9. The method of claim 8, wherein determining the calibration coordinate values of the plurality of objects based on the first coordinate values corresponding to the plurality of pieces of target feature information and the second coordinate values of the plurality of objects comprises:
when the first coordinate value deviates from the second coordinate value, taking the second coordinate value as the calibration coordinate value of the object,
wherein the target feature information has a matching relationship with the object.
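The matching and calibration steps of claims 8 and 9 might look like the sketch below. The feature records are assumed to be the `FeatureRecord` instances from the claim-2 sketch above, with descriptors stored in `feature_info`; the descriptor matcher (something behaving like OpenCV's `cv2.BFMatcher(cv2.NORM_HAMMING)`), the thresholds, and the behaviour when the two coordinate values agree (simple averaging) are all assumptions. Claim 9 only states that the second coordinate value is kept when the first deviates from it.

```python
import math
from typing import List, Tuple

Coord = Tuple[float, float]


def find_target_feature(object_descriptors, feature_records: List,
                        matcher, min_matches: int = 20):
    """Claim 8, first step: pick the pre-collected feature record whose
    descriptors best match the object's first feature information."""
    best, best_count = None, 0
    for record in feature_records:
        if record.feature_info is None:       # frame with no descriptors
            continue
        matches = matcher.match(object_descriptors, record.feature_info)
        good = [m for m in matches if m.distance < 40]
        if len(good) > best_count:
            best, best_count = record, len(good)
    return best if best_count >= min_matches else None


def calibrate_coordinate(first: Coord, second: Coord,
                         tolerance: float = 0.10) -> Coord:
    """Claim 9: when the first coordinate value deviates from the second,
    keep the second as the calibration coordinate value; otherwise (an
    assumption) average the two."""
    if math.dist(first, second) > tolerance:
        return second
    return ((first[0] + second[0]) / 2.0, (first[1] + second[1]) / 2.0)
```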
10. The method of claim 8, wherein determining the map of the space based on the calibration coordinate values of the plurality of objects comprises:
acquiring the calibration coordinate values of a plurality of target objects;
acquiring a moving trajectory of the image acquisition device in the space, wherein the plurality of pieces of feature information are collected by the image acquisition device during the movement; and
determining contours of the plurality of target objects in the map based on the moving trajectory in the space and the calibration coordinate values of the plurality of target objects.
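One toy interpretation of claim 10 is to rasterize both inputs onto an occupancy grid: cells along the moving trajectory are marked free, and a fixed rectangular footprint is stamped at each calibrated object coordinate, whose boundary then serves as the object contour. The footprint size, grid resolution, and grid extent below are invented purely for illustration.

```python
import numpy as np


def rasterize_map(trajectory, calibrated_coords, footprint_m=(1.0, 0.5),
                  resolution=0.05, size_m=(20.0, 20.0)):
    """Return an occupancy grid: -1 unknown, 0 free (traversed), 1 occupied
    (object footprint centred on its calibration coordinate value)."""
    h, w = int(size_m[1] / resolution), int(size_m[0] / resolution)
    grid = np.full((h, w), -1, dtype=np.int8)

    def to_cell(x, y):
        r = min(h - 1, max(0, int(y / resolution)))
        c = min(w - 1, max(0, int(x / resolution)))
        return r, c

    for x, y in trajectory:                   # free space along the moving trajectory
        r, c = to_cell(x, y)
        grid[r, c] = 0

    half_h = int(footprint_m[1] / (2 * resolution))
    half_w = int(footprint_m[0] / (2 * resolution))
    for x, y in calibrated_coords:            # stamp each object's assumed footprint
        r, c = to_cell(x, y)
        grid[max(0, r - half_h):r + half_h + 1,
             max(0, c - half_w):c + half_w + 1] = 1
    return grid
```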
11. The method of claim 1, further comprising, prior to determining the map of the space:
creating a base coordinate system,
wherein the first coordinate values corresponding to the feature information and the second coordinate values of the objects are coordinate values in the base coordinate system.
12. The method of claim 11, wherein creating the base coordinate system comprises:
determining an origin anchor point and a direction anchor point based on at least one image of the space collected by an image acquisition device;
determining a coordinate origin and one coordinate axis according to the origin anchor point and the direction anchor point; and
obtaining the base coordinate system based on the coordinate origin and the coordinate axis.
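For claim 12, a planar base coordinate system can be fixed by two anchor points: the origin anchor becomes the coordinate origin, and the direction from it to the direction anchor becomes one coordinate axis (taken as +x below), with the second axis as its perpendicular. Treating the axis as +x and restricting to 2-D are assumptions, not requirements of the claim.

```python
import numpy as np


def build_base_frame(origin_anchor, direction_anchor):
    """Return (origin, rotation) such that p_base = rotation @ (p_world - origin)."""
    origin = np.asarray(origin_anchor, dtype=float)
    direction = np.asarray(direction_anchor, dtype=float) - origin
    x_axis = direction / np.linalg.norm(direction)
    y_axis = np.array([-x_axis[1], x_axis[0]])   # +x rotated 90 degrees counter-clockwise
    rotation = np.stack([x_axis, y_axis])        # rows are the base-frame axes
    return origin, rotation


def to_base(point, origin, rotation):
    """Express a world-frame point in the base coordinate system."""
    return rotation @ (np.asarray(point, dtype=float) - origin)
```

With the origin anchor at (1, 1) and the direction anchor at (2, 1), the world point (3, 2) becomes (2, 1) in the base frame.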
13. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by one or more processors, causes the one or more processors to perform acts comprising:
acquiring a plurality of pieces of feature information of a space and a first coordinate value corresponding to each piece of feature information;
acquiring second coordinate values of a plurality of objects in the space and first feature information corresponding to each object; and
determining a map of the space according to the plurality of pieces of feature information and the first coordinate values corresponding to the respective pieces of feature information, and the second coordinate values of the plurality of objects and the first feature information corresponding to the respective objects.
14. A self-moving robot, comprising a machine body provided with one or more processors, one or more memories storing a computer program, and a first sensor,
wherein the one or more processors execute the computer program to:
acquire a plurality of pieces of feature information of a space and a first coordinate value corresponding to each piece of feature information;
acquire second coordinate values of a plurality of objects in the space and first feature information corresponding to each object; and
determine a map of the space according to the plurality of pieces of feature information and the first coordinate values corresponding to the respective pieces of feature information, and the second coordinate values of the plurality of objects and the first feature information corresponding to the respective objects.
15. A spatial map determination system, the system comprising:
a space in which a plurality of shelves are disposed, the shelves having labels disposed thereon;
a robot comprising a mobile chassis provided with an image acquisition device,
wherein the robot acquires a plurality of pieces of feature information of the space and a first coordinate value corresponding to each piece of feature information;
acquires second coordinate values of a plurality of objects in the space and first feature information corresponding to each object; and
determines a map of the space according to the plurality of pieces of feature information and the first coordinate values corresponding to the respective pieces of feature information, and the second coordinate values of the plurality of objects and the first feature information corresponding to the respective objects.
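Tying the pieces together, a highly simplified pipeline corresponding to the system claim might chain the earlier sketches: collect feature/coordinate records while the robot moves, match and calibrate each observed object, then rasterize the map. Every helper name below comes from the illustrative sketches above, not from the patent.

```python
def determine_space_map(frames, poses, object_observations, extract, matcher):
    """object_observations: iterable of (descriptors, second_coordinate) pairs,
    one per object observed in the space (e.g. a labelled shelf)."""
    records = collect_feature_records(frames, poses, extract)
    calibrated = []
    for descriptors, second_coordinate in object_observations:
        target = find_target_feature(descriptors, records, matcher)
        if target is None:
            calibrated.append(second_coordinate)     # no match: keep the second value
        else:
            first = target.first_coordinate[:2]      # (x, y) of the matched record
            calibrated.append(calibrate_coordinate(first, second_coordinate))
    trajectory = [(x, y) for x, y, _ in poses]
    return rasterize_map(trajectory, calibrated)
```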
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011003650.3A CN112338910A (en) | 2020-09-22 | 2020-09-22 | Space map determination method, robot, storage medium and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011003650.3A CN112338910A (en) | 2020-09-22 | 2020-09-22 | Space map determination method, robot, storage medium and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112338910A (en) | 2021-02-09 |
Family
ID=74358087
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011003650.3A Pending CN112338910A (en) | 2020-09-22 | 2020-09-22 | Space map determination method, robot, storage medium and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112338910A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113742439A (en) * | 2021-08-27 | 2021-12-03 | 深圳Tcl新技术有限公司 | Space labeling method and device, electronic equipment and storage medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100094460A1 (en) * | 2008-10-09 | 2010-04-15 | Samsung Electronics Co., Ltd. | Method and apparatus for simultaneous localization and mapping of robot |
CN107328420A (en) * | 2017-08-18 | 2017-11-07 | 上海木爷机器人技术有限公司 | Localization method and device |
US20170329343A1 (en) * | 2015-01-22 | 2017-11-16 | Guangzhou Airob Robot Technology Co., Ltd. | Method and apparatus for localization and mapping based on color block tags |
CN107727104A (en) * | 2017-08-16 | 2018-02-23 | 北京极智嘉科技有限公司 | Positioning and map building air navigation aid, apparatus and system while with reference to mark |
CN107907131A (en) * | 2017-11-10 | 2018-04-13 | 珊口(上海)智能科技有限公司 | Alignment system, method and the robot being applicable in |
CN108225303A (en) * | 2018-01-18 | 2018-06-29 | 水岩智能科技(宁波)有限公司 | Two-dimensional code positioning label, and positioning navigation system and method based on two-dimensional code |
US20190096091A1 (en) * | 2017-09-28 | 2019-03-28 | Baidu Usa Llc | Systems and methods to improve camera intrinsic parameter calibration |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11127203B2 (en) | Leveraging crowdsourced data for localization and mapping within an environment | |
CN111199564B (en) | Indoor positioning method and device of intelligent mobile terminal and electronic equipment | |
US9625908B2 (en) | Methods and systems for mobile-agent navigation | |
US9625912B2 (en) | Methods and systems for mobile-agent navigation | |
CN109186606B (en) | Robot composition and navigation method based on SLAM and image information | |
KR20210020945A (en) | Vehicle tracking in warehouse environments | |
CN110967711A (en) | Data acquisition method and system | |
WO2022052660A1 (en) | Warehousing robot localization and mapping methods, robot, and storage medium | |
US10949803B2 (en) | RFID inventory and mapping system | |
KR102075844B1 (en) | Localization system merging results of multi-modal sensor based positioning and method thereof | |
CN108303094A (en) | The Position Fixing Navigation System and its positioning navigation method of array are merged based on multiple vision sensor | |
CN110000793A (en) | A kind of motion planning and robot control method, apparatus, storage medium and robot | |
US20200241551A1 (en) | System and Method for Semantically Identifying One or More of an Object and a Location in a Robotic Environment | |
CN115307641A (en) | Robot positioning method, device, robot and storage medium | |
CN117011457A (en) | Three-dimensional drawing construction method and device, electronic equipment and storage medium | |
CN112338910A (en) | Space map determination method, robot, storage medium and system | |
CN111739088B (en) | Positioning method and device based on visual label | |
Shi et al. | Large-scale three-dimensional measurement based on LED marker tracking | |
US20220405958A1 (en) | Feature-based georegistration for mobile computing devices | |
Hasler et al. | Implementation and first evaluation of an indoor mapping application using smartphones and AR frameworks | |
Dutta | Mobile robot applied to QR landmark localization based on the keystone effect | |
Pirahansiah et al. | Camera Calibration and Video Stabilization Framework for Robot Localization | |
KR102131493B1 (en) | Indoor Positioning Method using Smartphone with QR code | |
Zhang et al. | ARCargo: Multi-Device Integrated Cargo Loading Management System with Augmented Reality | |
Hořejší et al. | Reliability and Accuracy of Indoor Warehouse Navigation Using Augmented Reality |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20210209