CN109637339B - Map generation method, map generation device, computer-readable storage medium and computer equipment - Google Patents

Publication number
CN109637339B
CN109637339B (application CN201811378230.6A)
Authority
CN
China
Prior art keywords
type, indoor space, coordinate, sensor, position point
Legal status
Active
Application number
CN201811378230.6A
Other languages
Chinese (zh)
Other versions
CN109637339A (en
Inventor
郑睿群
Current Assignee
Hai Robotics Co Ltd
Original Assignee
Hai Robotics Co Ltd
Application filed by Hai Robotics Co Ltd
Priority to CN201811378230.6A (granted as CN109637339B)
Priority to CN202210794996.2A (published as CN114999308A)
Publication of CN109637339A
Application granted
Publication of CN109637339B

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B29/00 Maps; Plans; Charts; Diagrams, e.g. route diagram
    • G09B29/003 Maps
    • G09B29/005 Map projections or methods associated specifically therewith
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/10 Image acquisition
    • G06V10/40 Extraction of image or video features

Abstract

The application relates to a map generation method and apparatus, a computer-readable storage medium, and a computer device. The method comprises the following steps: scanning an indoor space starting from any first-type position point of the indoor space; when passing a first-type position point in the indoor space, acquiring the absolute coordinates corresponding to that first-type position point; acquiring, in real time, sensor data obtained by scanning the first-type position points and second-type position points in the indoor space; determining a first estimated coordinate of each first-type position point and a second estimated coordinate of each second-type position point according to the sensor data; calculating the deviation between the first estimated coordinates and the absolute coordinates; and, when the deviation is smaller than a preset threshold, generating a map according to the second estimated coordinates. The provided scheme reduces the manpower and material costs of generating an indoor map.

Description

Map generation method, map generation device, computer-readable storage medium and computer equipment
Technical Field
The present application relates to the field of computer technologies, and in particular to a map generation method and apparatus, a computer-readable storage medium, and a computer device.
Background
In automated warehouse management, transfer robots are becoming increasingly common. As an important tool for automated transportation, the transfer robot is gradually replacing traditional manual carrying and sorting, and can greatly improve the efficiency of cargo transshipment and the automatic loading and unloading of shelves.
Like a human worker, a transfer robot needs to know its own location and its destination at all times when transporting a load, so a map of the indoor space where it operates, for example a warehouse map, must be provided to it. At present, maps of such indoor spaces are mostly constructed by pasting a large number of two-dimensional codes on the ground, which consumes a great deal of manpower and material resources.
Disclosure of Invention
Based on this, it is necessary to provide a map generation method, a map generation apparatus, a computer-readable storage medium, and a computer device that solve the technical problem that constructing a map of an indoor space by pasting a large number of two-dimensional codes consumes a large amount of manpower and material resources.
A map generation method applied to a robot includes:
starting from any first type of position point of an indoor space, scanning the indoor space;
when passing through a first type of position point in the indoor space, acquiring an absolute coordinate corresponding to the first type of position point;
acquiring sensor data obtained by scanning a first type position point and a second type position point in the indoor space in real time;
determining a first estimated coordinate of each first-class position point and a second estimated coordinate of each second-class position point according to the sensor data;
calculating a deviation between the first estimated coordinates and the absolute coordinates;
and when the deviation is smaller than a preset threshold value, generating a map according to the second estimated coordinates.
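The steps above can be sketched in Python as follows. This is an illustration only: the names `scan`, `generate_map`, and the data shapes are hypothetical and not part of the patent.

```python
import math

def generate_map(scan, absolute_coords, threshold):
    """Sketch of the claimed method: collect estimated coordinates during a
    scan, compare the first-type estimates against pre-assigned absolute
    coordinates, and accept the map only when the mean deviation is below
    `threshold`. `scan` yields (point_id, point_type, estimated_xy);
    `absolute_coords` maps first-type point ids to absolute coordinates."""
    first_est, second_est = {}, {}
    for point_id, point_type, est_xy in scan:
        if point_type == "first":      # known point, e.g. a roadway entrance
            first_est[point_id] = est_xy
        else:                          # second-type point, e.g. a storage location
            second_est[point_id] = est_xy

    # Deviation between the first estimated coordinates and the absolute ones
    devs = [math.dist(first_est[k], absolute_coords[k]) for k in absolute_coords]
    total_dev = sum(devs) / len(devs)

    if total_dev < threshold:
        return second_est              # the map: storage locations -> coordinates
    return None                        # otherwise rescan / adjust sensor trust
```

A map is only emitted when the closed-loop check at the known points passes; otherwise the caller rescans, as the later embodiments describe.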
In one embodiment, the acquiring, in real time, sensor data obtained by scanning the first type of location point and the second type of location point in the indoor space includes:
acquiring image data acquired by a visual sensor in the roadway in real time;
judging whether a second graphic code for positioning the second type of position points exists in the image or not according to the image data;
if the second graphic code exists, acquiring first offset data of the current robot relative to the first type of position point;
determining second offset data of the second graphic code relative to the current robot according to the image;
the determining the first estimated coordinates of each of the first type location points and the second estimated coordinates of each of the second type location points from the sensor data includes:
and calculating the estimated coordinate of the second graphic code according to the first offset data, the absolute coordinate and the second offset data, and taking the estimated coordinate of the second graphic code as the second estimated coordinate.
In one embodiment, the determining whether a second graphic code for locating the second type of location point exists in the image according to the image data includes:
acquiring image attributes corresponding to the second graphic code;
extracting image characteristics of an image according to currently acquired image data;
and when the image characteristics are matched with the image attributes, determining that the second graphic code exists in the image.
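The matching step above can be sketched as follows, under the assumption that both the stored image attributes and the extracted image features are simple key-value descriptors; the representation and the attribute names are hypothetical, since the patent does not specify one.

```python
def second_code_present(extracted_features, code_attributes, tol=0.1):
    """Return True when every attribute expected of the second graphic code
    is matched by the features extracted from the current image. Numeric
    attributes match within a relative tolerance; other attributes must be
    equal. All feature names are illustrative only."""
    for name, expected in code_attributes.items():
        actual = extracted_features.get(name)
        if actual is None:
            return False
        if isinstance(expected, (int, float)):
            if abs(actual - expected) > tol * abs(expected):
                return False
        elif actual != expected:
            return False
    return True
```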
In one embodiment, the method further comprises:
when a blind area that has not been scanned exists in the indoor space, returning to the step of scanning the indoor space starting from any first-type position point of the indoor space; and
when second-type position points that have not been scanned exist in the roadway, likewise returning to the step of scanning the indoor space starting from any first-type position point of the indoor space.
In one embodiment, the sensor data includes data collected by a plurality of sensors; the method further comprises the following steps:
when the deviation is larger than a preset threshold value, adjusting the corresponding trust degree of each sensor;
starting from any first type of position point of the indoor space, and scanning the indoor space;
processing the data collected by the corresponding sensor according to the adjusted trust;
and determining the first estimated coordinates of the first type position points and the second estimated coordinates of the second type position points based on the processed data.
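One way to realize the trust adjustment described above is a weighted fusion of per-sensor coordinate estimates, in which sensors whose estimates disagree most with the absolute coordinates are down-weighted. The weighting rule below is an illustrative assumption, not specified by the patent.

```python
def fuse(estimates, trust):
    """Weighted average of per-sensor coordinate estimates.
    `estimates` maps sensor name -> (x, y); `trust` maps sensor name -> weight."""
    total = sum(trust.values())
    x = sum(trust[s] * estimates[s][0] for s in estimates) / total
    y = sum(trust[s] * estimates[s][1] for s in estimates) / total
    return (x, y)

def adjust_trust(trust, errors):
    """Lower each sensor's trust in proportion to its error at a known
    point, then renormalize so the weights sum to 1 (illustrative rule)."""
    new = {s: trust[s] / (1.0 + errors[s]) for s in trust}
    norm = sum(new.values())
    return {s: w / norm for s, w in new.items()}
```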
In one embodiment, after generating the map according to each second estimated coordinate when the deviation is smaller than the preset threshold, the method further includes:
receiving a target coordinate of a target position to be traveled to;
during movement, acquiring real-time data obtained by scanning the current position;
determining the coordinate of the current position according to the real-time data;
and traveling to the target position according to the coordinate of the current position and the target coordinate.
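The navigation steps above amount to repeatedly computing the heading and distance from the current estimated position to the target coordinate. A minimal sketch, with the function name and angle convention assumed for illustration:

```python
import math

def next_move(current_xy, target_xy):
    """Return (distance, heading in degrees) from the current position to
    the target coordinate; heading is measured counter-clockwise from +X."""
    dx = target_xy[0] - current_xy[0]
    dy = target_xy[1] - current_xy[1]
    return math.hypot(dx, dy), math.degrees(math.atan2(dy, dx))
```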
In one embodiment, the obtaining the absolute coordinates corresponding to the first kind of location point when passing through the first kind of location point in the indoor space includes:
collecting images corresponding to the first type of position points;
and analyzing the first graphic code in the image to obtain the absolute coordinate of the roadway entrance.
In one embodiment, the first type of location points comprise known location points at a roadway entrance in an indoor space; the second type of position points comprise position points corresponding to all storage positions on a shelf in a roadway in the indoor space and/or position points corresponding to a public area.
A map generation apparatus, the apparatus comprising:
the scanning module is used for scanning the indoor space from any first type of position point of the indoor space;
the absolute coordinate acquisition module is used for acquiring the absolute coordinates corresponding to a first-type position point when the robot passes through that first-type position point in the indoor space;
the sensor data acquisition module is used for acquiring sensor data obtained by scanning the first type position points and the second type position points in the indoor space in real time;
the estimated coordinate determination module is used for determining a first estimated coordinate of each first-class position point and a second estimated coordinate of each second-class position point according to the sensor data;
a deviation calculation module for calculating a deviation between the first estimated coordinates and the absolute coordinates;
and the map generation module is used for generating a map according to each second estimated coordinate when the deviation is smaller than a preset threshold value.
A computer-readable storage medium, storing a computer program which, when executed by a processor, causes the processor to perform the steps of the above-described map generation method.
A computer device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the map generation method described above.
In the above scheme, when the map is generated, the indoor space is scanned starting from any first-type position point of the indoor space; when a first-type position point is passed, the absolute coordinates corresponding to that point are acquired; and the deviation between the first estimated coordinates and the absolute coordinates is calculated. A deviation smaller than the preset threshold indicates that the sensor data acquired by the sensors are reliable enough, and hence that the estimated coordinates of the position points determined from those sensors are also reliable, so a map can be generated from the estimated coordinates while guaranteeing its accuracy. Compared with generating a map by pasting a large number of two-dimensional codes, which consumes a large amount of manpower and material resources, this scheme can greatly save manpower and material resources.
Drawings
FIG. 1 is a diagram of an application environment of a map generation method in one embodiment;
FIG. 2 is a schematic flow chart diagram illustrating a method for map generation in one embodiment;
FIG. 3 is a flowchart illustrating a process of determining whether a second graphic code for locating a second type of location point exists in an image according to image data in an embodiment;
FIG. 4 is a schematic flow chart diagram illustrating map generation in one embodiment;
FIG. 5 is a schematic flow chart diagram for navigating according to a generated map in one embodiment;
FIG. 6 is a schematic diagram illustrating a process for generating a map when a known location point is represented by a two-dimensional code in one embodiment;
FIG. 7 is a flowchart illustrating a map generation method according to another embodiment;
FIG. 8 is a block diagram showing a configuration of a map generating apparatus according to an embodiment;
FIG. 9 is a block diagram showing the construction of a map generating apparatus according to another embodiment;
FIG. 10 is a block diagram showing the construction of a map generating apparatus according to still another embodiment;
FIG. 11 is a block diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It will be understood that, as used herein, the terms "first," "second," and the like may be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another. For example, a first type of location point may be referred to as a second type of location point, and similarly, a second type of location point may be referred to as a first type of location point, without departing from the scope of the present invention. The first type of location point and the second type of location point are both class location points, but they are not the same class location point.
Fig. 1 is an application environment diagram of a map generation method in one embodiment. Referring to fig. 1, the map generation method is applied to a map generation system. The map generation system includes a robot 110 and a server 120, connected through a network. The robot 110 may be an automated guided vehicle used to carry goods according to the generated map. The server 120 may be implemented as a stand-alone server or as a server cluster composed of a plurality of servers.
Taking the first type of position point as a known position point at a roadway entrance in the indoor space, and the second type of position point as a position point corresponding to each storage location on a shelf in a roadway, as an example, the robot 110 may start from any known position point in the indoor space, scan the indoor space, obtain in real time sensor data from scanning each storage location on the shelves in the roadways, determine estimated coordinates of each storage location according to the sensor data, obtain the absolute coordinates corresponding to a roadway entrance when passing through it, calculate the deviation between the estimated coordinates and the absolute coordinates at the roadway entrance, and generate a map according to each estimated coordinate when the deviation is smaller than a preset threshold. The robot 110 may also report the acquired sensor data to the server 120; the server 120 then determines the estimated coordinates of each storage location on the shelves in the roadways according to the sensor data, calculates the deviation between the absolute and estimated coordinates at the roadway entrances, and, when the calculated deviation is smaller than the preset threshold, generates a map according to the estimated coordinates of each storage location.
As shown in FIG. 2, in one embodiment, a map generation method is provided. The present embodiment is mainly illustrated by applying the method to the robot 110 in fig. 1. Referring to fig. 2, the map generation method specifically includes the following steps:
s202, starting from any first type position point of the indoor space, scanning the indoor space.
Where an indoor space is a space where a map is to be generated, such as an indoor warehouse. The process that the robot scans the indoor space is the process that the robot carries out data acquisition on the indoor space through each sensor in the moving process.
The indoor space includes first-type position points. A first-type position point is a feature point that has been given absolute coordinates and may be called a known position point. The known position points may be the position points at roadway entrances; that is, the position points at the roadway entrances may all be known position points, and the absolute coordinates are coordinates set in advance for these known points. To save the cost of manually measuring feature points, the number of known position points in the indoor space may be kept small, for example two or three. The robot can scan the indoor space through various sensors starting from any known position point. To ensure the accuracy of the map within a roadway, a known position point can be arranged at the roadway entrance. For example, the known position points may include the entrance of lane 1, the entrance of lane 2, or a trunk starting point C. In some embodiments, a two-dimensional code or a barcode may be pasted at a known position point, and the absolute coordinates of that point can then be obtained by analyzing the scanned two-dimensional code or barcode.
S204, when passing a first-type position point in the indoor space, acquiring the absolute coordinates corresponding to that first-type position point.
In one embodiment, the first-type position points include position points at roadway entrances in the indoor space and/or other known position points. Taking the first-type position points as the position points at the roadway entrances as an example, when the robot passes through a roadway entrance, it acquires the absolute coordinates corresponding to that roadway entrance.
Specifically, when the robot determines from the acquired image data during movement that it is passing a roadway entrance, it can acquire the absolute coordinates preset for that entrance. For example, the shelf at each roadway entrance in the indoor space can be used as a known position point and given corresponding absolute coordinates, so that the robot obtains the absolute coordinates of each roadway entrance it passes while moving; these absolute coordinates can then be used to correct the estimated coordinates of each storage location, yielding a highly accurate map.
And S206, acquiring sensor data obtained by scanning the first-type position points and the second-type position points in the indoor space in real time.
In one embodiment, the second type of location points includes location points corresponding to respective locations on shelves and/or location points corresponding to common areas in a roadway in the indoor space. By taking the second type of position points as the position points corresponding to each storage position on the shelf in the tunnel as an example, the robot can acquire sensor data obtained by scanning each storage position on the shelf in the indoor space tunnel in real time.
The sensor data is data collected by various sensors configured for the robot, the deviation direction and the deviation distance of the current state of the robot relative to the initial state can be measured through the configured sensors, and the deviation direction and the deviation distance are used as the sensor data scanned by the current robot to the indoor space. The sensors may include odometers, IMUs (inertial measurement units) consisting of gyroscopes, accelerometers, compasses, etc., and may also include laser rangefinders, visual sensors for acquiring image data, etc.
The sensor data may also include image data collected by a vision sensor, the vision sensor may be a camera, and the robot may perform image processing on the collected image data to determine each bin on the rack in the roadway. The image data and IMU measured offset data may be used in combination to determine a library position in the map to be generated and coordinates corresponding to the library position.
In one embodiment, the robot may acquire image data via a first sensor, which may be a vision sensor such as a camera or the like, and offset data via a second sensor, which may be various measurement sensors making up the IMU, for example. It should be noted that the terms "first" and "second" are not used to limit the number of sensors, but only to distinguish between the sensors collecting different data.
The storage locations are pick-up points on shelves in the roadways, such as storage location 1A01 on shelf 1A in lane 1, storage location 1B02 on shelf 1B in lane 1, and storage location 2A01 on shelf 2A in lane 2. The process of generating the map of a roadway is the process of determining the coordinates of these storage locations; once a map containing the storage locations exists, the robot moving in the indoor space can determine the direction and distance to travel next according to the coordinates of the storage locations, realizing map navigation. The coordinates of the storage locations may be two-dimensional planar coordinates or three-dimensional spatial coordinates. While scanning the indoor space, the robot acquires the data collected by all its sensors in real time, so that when a storage location on a shelf is scanned, the sensor data at that moment can be obtained.
And S208, determining the first estimated coordinates of the first type position points and the second estimated coordinates of the second type position points according to the sensor data.
The estimated coordinates are calculated from the sensor data acquired by the robot during movement. Because the data collected by the sensors contain noise and are not necessarily accurate, or because the confidence of the sensor data is not high enough, coordinates calculated from such data are called estimated coordinates.
Specifically, taking determining the estimated coordinates of each storage location from the sensor data as an example, the robot may initialize the state of each sensor, scan the indoor space from a certain known position point after initialization, acquire sensor data during the scanning process, and, when it determines from the image data that a storage location has been scanned, determine the estimated coordinates of that storage location relative to the known position point according to the current offset data. The known position point may be provided at the roadway entrance.
In one embodiment, when the robot detects the next known position point during the scanning process, it may again initialize the state of each sensor and continue scanning the indoor space; when a storage location on a shelf is then detected, its estimated coordinates can be calculated based on this known position point and the current sensor data.
For example, the initial coordinates of the robot corresponding to the X axis and the Y axis when the robot starts from any known position point are S (0,0), the initial angle measured by the robot when the robot starts at the initial coordinates is 0 degree, and the offset data corresponding to the robot when the robot scans a certain library position during the traveling process is 5 meters and 90 degrees, and then the estimated coordinates of the library position are (0,5) through calculation. In some embodiments, the estimated coordinates may be planar coordinates in two dimensions or spatial coordinates in three dimensions.
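The arithmetic in this example (an offset of 5 meters at 90 degrees from the start S(0, 0) giving (0, 5)) is a polar-to-Cartesian conversion; a minimal sketch, with the function name assumed for illustration:

```python
import math

def estimate_coordinate(start_xy, distance, angle_deg):
    """Estimated coordinate of a scanned point: the starting coordinate
    plus an offset given as (distance, angle), with the angle measured
    counter-clockwise from the +X axis as in the example above."""
    x = start_xy[0] + distance * math.cos(math.radians(angle_deg))
    y = start_xy[1] + distance * math.sin(math.radians(angle_deg))
    return (round(x, 9), round(y, 9))
```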
And S210, calculating the deviation between the first estimation coordinate and the absolute coordinate.
Taking the roadway entrance as an example, the robot can calculate the deviation between the estimated coordinates and the absolute coordinates of the roadway entrance. Specifically, when the robot passes through the roadway entrance, it obtains the corresponding absolute coordinates and calculates the deviation between the estimated coordinates and the absolute coordinates there. In one embodiment, the deviation may be calculated by the Euclidean distance formula. For example, when the estimated coordinate and the absolute coordinate are both two-dimensional, the calculated estimated coordinate is (0.96, 1.05), and the given absolute coordinate is (1, 1), in millimeters, the deviation is

E = √((1 − 0.96)² + (1 − 1.05)²) = √(0.04² + 0.05²) ≈ 0.064
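The deviation in this example can be computed directly with Python's `math.dist`, which implements the Euclidean distance (a small illustrative check):

```python
import math

estimated = (0.96, 1.05)   # coordinate calculated from sensor data
absolute = (1.0, 1.0)      # pre-assigned coordinate at the roadway entrance
deviation = math.dist(estimated, absolute)  # sqrt(0.04**2 + 0.05**2)
```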
And S212, when the deviation is smaller than a preset threshold value, generating a map according to the second estimated coordinates.
Specifically, the absolute coordinates at the entrance of the roadway can be used for performing closed-loop correction on the estimated coordinates of all the library positions, and when the deviation between the estimated coordinates and the absolute coordinates at the entrance of the roadway is small, it is indicated that the estimated coordinates calculated based on the sensor data are closer to the absolute coordinates, the noise in the sensor data is lower, the accuracy of the data acquired by the sensor is higher, and then the accuracy of the estimated coordinates of each library position calculated according to the sensor data is higher.
In one embodiment, the deviation being smaller than the preset threshold may mean that the deviation corresponding to each roadway entrance is smaller than the set threshold; that is, when the robot determines that the deviation corresponding to every roadway entrance is smaller than the threshold, it may output the map according to the estimated coordinates of each storage location calculated during scanning. Alternatively, it may mean that the total deviation calculated from the deviations at all roadway entrances is smaller than the set threshold. For example, if there are n roadway entrances with known absolute coordinates in the indoor warehouse, and the deviation corresponding to the i-th roadway entrance is E_i (i = 1, 2, …, n), the total deviation can be taken as the average of the individual deviations:

E_total = (E_1 + E_2 + … + E_n) / n

Only when the total deviation is smaller than the preset threshold does the robot output a map according to the calculated estimated coordinates of each storage location.
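The threshold check on the total deviation can be sketched as follows (names are illustrative, not from the patent):

```python
def total_deviation(entry_deviations):
    """Average of the per-entrance deviations E_1..E_n."""
    return sum(entry_deviations) / len(entry_deviations)

def may_output_map(entry_deviations, threshold):
    """True when the averaged deviation at the roadway entrances is
    below the preset threshold, so the map may be output."""
    return total_deviation(entry_deviations) < threshold
```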
According to the map generation method, when the map is generated, the indoor space is scanned starting from any first-type position point of the indoor space; when a first-type position point is passed, the corresponding absolute coordinates are acquired; and the deviation between the first estimated coordinates and the absolute coordinates is calculated. A deviation smaller than the preset threshold indicates that the sensor data acquired by the sensors are reliable enough, and hence that the estimated coordinates of the position points determined from the sensors are also reliable, so a map can be generated from the estimated coordinates while guaranteeing its accuracy. Compared with generating a map by pasting a large number of two-dimensional codes, this method can greatly save manpower and material resources.
In one embodiment, step S204 includes: collecting images corresponding to the first type of position points; and analyzing the first graphic code in the image to obtain absolute coordinates of the roadway entrance.
Taking the first type of position point as the position point at the entrance of the roadway as an example for explanation, the robot can acquire the image on the goods shelf at the entrance of the roadway; and analyzing the first graphic code in the image to obtain absolute coordinates at the entrance of the roadway.
The first graphic code is a mark graph containing positioning information. The first graphic code may be a two-dimensional code or a bar code. Known position points arranged in the indoor space can be marked by the first graphic code, so that the corresponding absolute coordinate at the entrance of the roadway can be obtained by analyzing the first graphic code. For example, a first graphic code can be pasted on the side edge of a goods shelf at the entrance of a roadway, and when the robot passes through the entrance of the roadway, the absolute coordinate corresponding to the entrance of the roadway can be obtained by acquiring the first graphic code and analyzing the first graphic code.
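Assuming the first graphic code simply encodes the entrance's absolute coordinates as text, parsing it might look like the sketch below; the `label:x,y` payload format is a hypothetical example, since the patent does not fix one.

```python
def parse_absolute_coordinates(payload):
    """Parse a decoded graphic-code payload such as 'LANE1:12.5,0.0'
    into (label, (x, y)). The payload format is an assumption."""
    label, _, coords = payload.partition(":")
    x_str, y_str = coords.split(",")
    return label, (float(x_str), float(y_str))
```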
In one embodiment, step S206 includes: acquiring image data acquired by a visual sensor in a roadway in real time; judging whether a second graphic code for positioning a second type of position point exists in the image according to the image data; if the current robot exists, first offset data of the current robot relative to the first-class position point is obtained; second offset data of the second graphic code relative to the current robot is determined from the image. Step S208 includes: and calculating the estimated coordinate of the second graphic code according to the first offset data, the absolute coordinate and the second offset data, and taking the estimated coordinate of the second graphic code as the second estimated coordinate.
Similarly, taking the storage location and the roadway entrance as an example, the robot can acquire image data collected by the visual sensor in the roadway in real time; judge whether a second graphic code exists in the image according to the image data; if it exists, acquire first offset data of the current robot relative to the roadway entrance; determine second offset data of the second graphic code relative to the current robot according to the image; and calculate the estimated coordinate of the second graphic code according to the first offset data, the absolute coordinate corresponding to the roadway entrance, and the second offset data, taking it as the estimated coordinate of the storage location.
Wherein the image data comprises an image of the indoor space acquired by the robot during the scanning. The offset data includes offset direction, offset distance, and the like. The first offset data is offset data of the current robot with respect to the entry of the roadway. The first offset data may be data acquired by a sensor when the robot travels from the entry of the roadway to the current position, or may be intermediate data calculated from data acquired when the robot travels from a known position point to the entry of the roadway and data acquired when the robot travels from the known position point to the current position. The second offset data is offset data of the second graphic code with respect to a vision sensor of the current robot. The second offset data can be obtained by calculating the pixel width size and the actual width size of the second graphic code in the acquired image. The second graphic code is a mark graphic code pasted on each library position, is used for positioning each library position, and can be a library position two-dimensional code or a library position bar code. In some embodiments, the various bins on the shelves in the lanes may also be marked by other landmark items.
Specifically, when the robot moves in a roadway, image data can be collected by the camera, and the robot determines from the image data whether a second graphic code exists. If so, the robot can calculate the distance between the second graphic code and its current camera according to the actual width of the second graphic code, the focal length of the camera, and the pixel width of the second graphic code in the image; then calculate the height of the second graphic code relative to the camera according to that distance and the torsion angle of the camera; and obtain, from the distance and the height, the second offset data of the second graphic code relative to the current robot in each direction. Further, the robot determines its own current estimated coordinate according to the currently acquired first offset data and the absolute coordinate corresponding to the roadway entrance, and then calculates the estimated coordinate of the second graphic code according to the second offset data and the robot's current estimated coordinate, taking it as the estimated coordinate of the storage location corresponding to the second graphic code.
It should be noted that, if the current estimated coordinate of the robot is a two-dimensional plane coordinate, the second offset data may be the planar offset of the second graphic code relative to the current robot; if the estimated coordinate is a three-dimensional space coordinate, the second offset data may be the spatial offset of the second graphic code relative to the current robot.
For example: when the robot travels from the roadway entrance to a second graphic code, the corresponding first offset data is 2.5 meters at 90 degrees; if the absolute coordinate corresponding to the roadway entrance is (0, 0), the current estimated coordinate of the robot is (0, 2.5). The robot then calculates the second offset data of the second graphic code relative to the current robot according to the pixel size of the second graphic code in the image, the torsion angle of the camera, and the actual size of the second graphic code. For instance, if the second graphic code is determined to be 0.3 meter in the negative direction of the robot's X axis, the estimated coordinate of the second graphic code on the plane is (-0.3, 2.5); and if the height of the robot's camera is 1.6 meters and the second graphic code is determined, from the torsion angle, to be 0.4 meter below the camera, the estimated coordinate of the second graphic code in space is (-0.3, 2.5, 1.2).
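The arithmetic in the example above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the helper names are hypothetical, and the pinhole-camera distance formula is an assumed model for how the distance might be recovered from the pixel width and focal length described earlier.

```python
import math

def distance_from_pixel_width(actual_width_m, focal_length_px, pixel_width_px):
    # Assumed pinhole model: real distance grows as the code's pixel width shrinks.
    return actual_width_m * focal_length_px / pixel_width_px

def robot_estimated_position(entrance_xy, travel_m, heading_deg):
    # First offset data (distance travelled, heading) applied to the
    # absolute coordinate of the roadway entrance.
    x0, y0 = entrance_xy
    rad = math.radians(heading_deg)
    return (x0 + travel_m * math.cos(rad), y0 + travel_m * math.sin(rad))

def code_estimated_position(robot_xy, lateral_m, camera_height_m, drop_m):
    # Second offset data: lateral offset along the robot's X axis, plus the
    # code's height derived from the camera torsion angle.
    rx, ry = robot_xy
    return (rx + lateral_m, ry, camera_height_m - drop_m)

# Numbers from the worked example: entrance (0, 0), 2.5 m at 90 degrees,
# code 0.3 m in the negative X direction, camera at 1.6 m, code 0.4 m below it.
robot = robot_estimated_position((0.0, 0.0), 2.5, 90.0)
code = code_estimated_position(robot, -0.3, 1.6, 0.4)
```

Up to floating-point rounding, `robot` is (0, 2.5) and `code` is (-0.3, 2.5, 1.2), matching the example.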
Similarly, the robot may determine the first estimated coordinate of the first-type position point it is currently passing as follows: the robot acquires, in real time, image data collected by the vision sensor; judges, according to the image data, whether a first graphic code exists in the image; if the first graphic code exists, acquires offset data of the current robot relative to the first-type position point it passed most recently; determines, according to the image, offset data of the first graphic code relative to the current robot; and calculates the estimated coordinate of the first graphic code according to the absolute coordinate corresponding to the most recently passed first-type position point and the two pieces of offset data, taking it as the estimated coordinate of the currently passed first-type position point.
By calculating the first estimated coordinates and the second estimated coordinates from the first offset data, the second offset data, and the absolute coordinates corresponding to the first-type position points, the accuracy of the estimated coordinates can be improved.
As shown in fig. 3, in an embodiment, determining whether a second graphic code for locating a second type of location point exists in the image according to the image data includes:
s302, acquiring the image attribute corresponding to the second graphic code.
The second graphic code corresponds to a storage location on the shelf; when the storage locations on the shelf are being positioned, whether a storage location has been scanned can be determined through its second graphic code. When scanning in the roadway, the camera may capture the shelf, the ground, a second graphic code, and so on, and the robot needs to determine from the acquired image data whether a second graphic code is present. The image attributes include the color and brightness characteristics of each pixel in the image, and may also include depth characteristics of the pixels. The image attributes of the second graphic code can be analyzed in advance and stored in correspondence with the second graphic code, so that the robot can obtain the image attributes corresponding to the second graphic code and thereby determine whether the acquired image contains a second graphic code.
S304, extracting image characteristics of the image according to the currently acquired image data.
S306, when the image characteristics are matched with the image attributes, determining that a second graphic code exists in the image.
Specifically, after the robot acquires the image data, the image features of the image may be extracted from the acquired image data in the same manner as the image attributes are extracted, and the extracted image features may be matched with the image attributes stored in advance to determine whether the currently acquired image data includes the second graphic code. And when the extracted image features are successfully matched with the pre-stored image attributes, determining that the second graphic code exists in the image acquired by the robot at the current position.
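The matching step in S304 to S306 can be sketched as follows. This is a toy illustration under stated assumptions: the patent does not specify the feature extractor or the matching rule, so a coarse brightness histogram stands in for the stored image attributes, and cosine similarity with a threshold stands in for the matching test.

```python
def extract_features(pixels, bins=8):
    # Toy stand-in for the image attributes: a normalized brightness
    # histogram over 0..255 pixel values.
    hist = [0] * bins
    for p in pixels:
        hist[min(p * bins // 256, bins - 1)] += 1
    n = len(pixels)
    return [h / n for h in hist]

def contains_code(pixels, stored_attributes, threshold=0.9):
    # Declare a second graphic code present when the extracted features
    # match the pre-stored attributes closely enough (cosine similarity).
    feats = extract_features(pixels, bins=len(stored_attributes))
    dot = sum(a * b for a, b in zip(feats, stored_attributes))
    na = sum(a * a for a in feats) ** 0.5
    nb = sum(b * b for b in stored_attributes) ** 0.5
    return dot / (na * nb) >= threshold

# The attributes of the second graphic code are analyzed in advance and stored.
template = [0, 255] * 32            # high-contrast code-like pattern
attrs = extract_features(template)
```

A captured patch that reproduces the template's histogram would match; a flat gray patch such as `[128] * 64` would not.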
In one embodiment, the map generation method further comprises: acquiring third offset data obtained by scanning a public area in an indoor space in real time; and determining the estimated coordinates of the current position of the robot in the public area according to the third offset data and the known position point.
Wherein the indoor space includes a public area and a roadway area for inventory, the roadway area being formed by the space between adjacent shelves. Specifically, the estimated coordinate of the robot's current position when scanning in the public area may be calculated from the third offset data of the current robot relative to a known position point and the absolute coordinate corresponding to that known position point. The third offset data obtained when the robot scans the public area may be acquired by an IMU (inertial measurement unit).
In one embodiment, the map generation method further comprises: when the robot passes a known position point, acquiring the absolute coordinate corresponding to the known position point; and calculating a second deviation between the estimated coordinate of the known position point and the corresponding absolute coordinate. In this case, generating the map according to each estimated coordinate when the deviation is smaller than a preset threshold comprises: generating the map according to each estimated coordinate when the second deviation is smaller than the preset threshold.
In this embodiment, when the robot scans the public area, each estimated coordinate can be corrected in a closed loop through the known position points in the public area, improving the mapping accuracy of the public area. Specifically, when the robot passes a known position point during scanning of the public area, the estimated coordinate of that known position point can be calculated from the estimated coordinate of the previous position point and the offset data of the current robot relative to the previous position point. During correction, the second deviation between the estimated coordinate of the known position point and the corresponding absolute coordinate is calculated; a second deviation smaller than the preset threshold indicates that the error in the data collected by each sensor on the robot is small, so the map can be generated according to the estimated coordinates of the storage locations on the shelves in the roadway.
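The closed-loop check described above reduces to comparing a Euclidean deviation against the preset threshold. A minimal sketch, assuming planar coordinates and an illustrative 0.05 m threshold (the patent does not fix a threshold value):

```python
import math

def closed_loop_check(estimated_xy, absolute_xy, threshold_m=0.05):
    # Second deviation between the estimated and absolute coordinates of a
    # known position point; the map is generated only when it is below the
    # preset threshold.
    deviation = math.dist(estimated_xy, absolute_xy)
    return deviation, deviation < threshold_m

dev, ok = closed_loop_check((1.02, 2.0), (1.0, 2.0))   # 2 cm drift: acceptable
_, far = closed_loop_check((1.2, 2.0), (1.0, 2.0))     # 20 cm drift: rescan
```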
In one embodiment, the method further comprises: when there is a blind area in the indoor space that has not been scanned, returning to the step of scanning the indoor space from any first-type position point of the indoor space; and when there are storage locations in the roadway that have not been scanned, returning to the step of scanning the indoor space from any first-type position point of the indoor space.
A blind area is an area of the indoor space that has not been scanned while the robot scans the indoor space. Specifically, if a blind area remains unscanned, scanning must continue to ensure that the generated map is complete; and if some storage locations remain unscanned, scanning must continue to ensure that the coordinates of each storage location in the resulting map are sufficiently accurate.
In an embodiment, the robot may count the categories to which all position points detected during scanning of the indoor space belong, and obtain the preset categories of position points set for the indoor space. It then determines whether the detected categories completely cover the preset categories. If they do not, a blind area that has not been scanned still exists in the indoor space; the robot returns to the step of scanning the indoor space from any known position point and continues scanning until the categories of the scanned position points completely cover the preset categories.
Further, when the categories of the scanned position points completely cover the preset categories, the robot counts the number of storage locations already scanned, obtains the total number of storage locations in the indoor space, and compares the two to judge whether any storage location has not been scanned. If unscanned storage locations remain, the robot returns to the step of scanning the indoor space from any known position point and continues scanning until all storage locations have been scanned.
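The two-stage completeness test above (category coverage first, then storage-location count) can be sketched as a single predicate. The function name and argument shapes are illustrative assumptions:

```python
def scanning_complete(detected_categories, preset_categories,
                      scanned_locations, total_locations):
    # Stage 1: every preset position-point category must have been observed;
    # otherwise a blind area remains and scanning must continue.
    if not set(preset_categories) <= set(detected_categories):
        return False
    # Stage 2: the number of distinct scanned storage locations must reach
    # the total number of storage locations in the indoor space.
    return len(set(scanned_locations)) >= total_locations

# Blind area: the 'corner' category was never seen.
incomplete = scanning_complete({'entrance'}, {'entrance', 'corner'}, {'a'}, 3)
# Categories covered and all 3 storage locations scanned.
complete = scanning_complete({'entrance', 'corner'}, {'entrance', 'corner'},
                             {'a', 'b', 'c'}, 3)
```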
Fig. 4 is a schematic flow chart of generating a map in one embodiment. Referring to fig. 4: first, a plurality of known position points with given absolute coordinates are set in the indoor space for which a map is to be generated. The robot starts from any known position point and scans the indoor space, obtaining sensor data during scanning and computing the estimated coordinate of each storage location from that data. When the robot passes a roadway entrance, it calculates and records the deviation between the estimated coordinate and the absolute coordinate at the entrance. The robot then judges whether a blind area that has not been scanned exists in the current indoor space; if so, it returns to the step of scanning the indoor space from any known position point. If not, it judges whether any storage location in the indoor space has not been scanned; if so, it again returns to that scanning step. If not, it judges whether the recorded deviation for each roadway entrance is smaller than the threshold; if not, it returns to the scanning step, and if so, it outputs the map according to the finally obtained estimated coordinates of each storage location.
In one embodiment, the sensor data includes data collected by a plurality of sensors, and the map generation method further includes: when the deviation is larger than the preset threshold, adjusting the confidence level corresponding to each sensor; returning to the step of scanning the indoor space from any first-type position point of the indoor space; processing the data collected by each sensor according to its adjusted confidence level; and determining the estimated coordinate of each storage location based on the processed data.
The sensor data obtained when the robot scans the indoor space can be produced by fusing the data collected by the individual sensors, and processing the fused sensor data improves the accuracy of the calculated estimated coordinate of each storage location. Owing to factors such as sensor precision and noise, the accuracy of the data acquired by each sensor differs. The confidence level expresses how much the data collected by a given sensor is trusted relative to the other sensors; in one embodiment it can be represented by the weight of that sensor's data in the fusion, with a larger weight indicating that the sensor's data is considered more reliable.
Specifically, when the deviation between the estimated coordinate calculated from the sensor data and the absolute coordinate is larger than the preset threshold, the confidence level in each sensor's parameters can be adjusted. After the adjustment, the robot returns to the step of scanning the indoor space from any known position point, fuses the data collected during scanning according to the adjusted confidence levels to obtain new sensor data, and determines the estimated coordinates of each position point and storage location based on the new sensor data.
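The weighted fusion and the confidence adjustment can be sketched as follows. This is an illustrative scheme under assumptions the patent does not specify: fusion is taken to be a weighted average, and the adjustment rule (shrinking a sensor's weight in proportion to its individual error) is one plausible choice among many.

```python
def fuse(estimates, confidence):
    # Confidence-weighted fusion of per-sensor estimates of one coordinate.
    total = sum(confidence.values())
    return sum(estimates[s] * w for s, w in confidence.items()) / total

def adjust_confidence(confidence, per_sensor_error, rate=0.5):
    # Shrink the weight of sensors whose individual estimates deviated most
    # from the absolute coordinate at the roadway entrance.
    return {s: w / (1.0 + rate * per_sensor_error[s])
            for s, w in confidence.items()}

estimates = {'imu': 2.0, 'vision': 4.0}      # hypothetical per-sensor estimates
confidence = {'imu': 1.0, 'vision': 1.0}
fused = fuse(estimates, confidence)          # equal weights: midpoint

# Suppose the IMU alone produced the large deviation; its weight shrinks,
# and the fused estimate moves toward the vision sensor's reading.
confidence2 = adjust_confidence(confidence, {'imu': 2.0, 'vision': 0.0})
fused2 = fuse(estimates, confidence2)
```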
Accordingly, the robot may calculate again the deviation between the estimated coordinates and the absolute coordinates at the entrance of the roadway to see whether the deviation is smaller than the threshold value to determine whether the map can be generated based on the estimated coordinates obtained this time.
In this embodiment, when the deviation is greater than the threshold, the confidence level of the data collected by each sensor can be adjusted, thereby improving the accuracy of the estimated coordinate of each position point.
In an embodiment, as shown in fig. 5, the map generation method further includes a step of navigating according to the generated map, which specifically includes:
s502, receiving a target coordinate to be moved to a target position.
The target location may be a target storage location. After a customer successfully places an order, the order system may determine, according to the goods information in the order, an identifier for marking the goods, such as a stock keeping unit (SKU); determine from the identifier which storage location in the warehouse holds the goods, as the target storage location; and issue the coordinates corresponding to the target storage location to the robot, which receives the target coordinates issued by the order system.
S504, in the process of going, real-time data obtained by scanning the current position is obtained.
After receiving the target coordinates, the robot starts from the initial position, scans the indoor space during the traveling process, and acquires real-time data acquired by the sensors, wherein the real-time data comprises image data and offset data obtained by fusing the data acquired by the current sensors.
And S506, determining the coordinates of the current position according to the real-time data.
During travel, the robot can calculate the coordinates of its current position in real time from the acquired real-time data and the coordinates of the starting point, and compare the coordinates of the current position with the target coordinates.
And S508, entering the target position according to the coordinate of the current position and the target coordinate.
After obtaining the coordinates of its current position, the robot knows where it is in the map; it compares the coordinates of the current position with the target coordinates, determines its next travel route, and travels along that route to the target coordinates to reach the target storage location.
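The compare-and-advance loop of S506 to S508 can be sketched as follows. This is a toy axis-aligned stepper, not the patent's route-planning method: the step rule (move along the axis with the larger remaining offset) is an illustrative assumption.

```python
import math

def next_step(current, target, step=1.0):
    # One travel increment: move along the axis with the larger remaining
    # offset between the current coordinate and the target coordinate.
    cx, cy = current
    tx, ty = target
    dx, dy = tx - cx, ty - cy
    if abs(dx) >= abs(dy):
        return (cx + math.copysign(min(step, abs(dx)), dx), cy)
    return (cx, cy + math.copysign(min(step, abs(dy)), dy))

def navigate(start, target, step=1.0, max_steps=1000):
    # Repeat S506/S508: recompute the current coordinate, compare with the
    # target, and advance until the target storage location is reached.
    pos, path = start, [start]
    for _ in range(max_steps):
        if math.dist(pos, target) < 1e-9:
            break
        pos = next_step(pos, target, step)
        path.append(pos)
    return path

path = navigate((0.0, 0.0), (2.0, 1.0))
```

Here the robot reaches (2.0, 1.0) via (1, 0) and (2, 0), four positions in all including the start.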
In this embodiment, after the map corresponding to the current environment is generated, the robot may navigate to the target library location according to the map and the currently scanned sensor data.
Fig. 6 is a schematic flow chart of generating a map when the known position points are represented by two-dimensional codes, in one embodiment. Referring to fig. 6: first, two-dimensional codes are pasted at the known position points (including the roadway entrances) according to the given absolute coordinates; these two-dimensional codes can be parsed to obtain the absolute coordinates. Second graphic codes are pasted on the side of each storage location on the shelf and are used only to identify the storage locations. The robot then moves through the indoor space, calculating the estimated coordinate of each position point from the currently collected sensor data as it travels. When the robot reaches a roadway entrance, it parses the two-dimensional code there to obtain the absolute coordinate, and calculates the deviation between this parsed absolute coordinate and the estimated coordinate at the entrance. The sensor data is calibrated according to the deviation, and the estimated coordinates of the storage locations and other position points are recalculated, until all storage locations have been scanned and the deviation between the estimated and absolute coordinates at the roadway entrances is less than the preset threshold; the map is then output according to the estimated coordinates of the second graphic codes pasted on the storage locations.
In one embodiment, as shown in fig. 7, the map generation method specifically includes the following steps:
s702, starting from any known position point of the indoor space, scanning the indoor space.
S704, when the robot passes through a roadway entrance, acquiring the absolute coordinate corresponding to the roadway entrance.
And S706, acquiring image data acquired by the vision sensor in the roadway in real time.
And S708, acquiring the image attribute corresponding to the second graphic code.
S710, extracting image characteristics of the image according to the currently acquired image data; and when the image characteristics are matched with the image attributes, determining that the second graphic code exists in the image.
S712, acquiring first offset data of the current robot relative to the roadway entrance; second offset data of the second graphic code relative to the current robot is determined from the image.
S714, calculating the estimated coordinate of the second graphic code according to the first offset data, the absolute coordinate corresponding to the roadway entrance, and the second offset data, and taking it as the estimated coordinate of the storage location.
And S716, calculating the deviation between the estimated coordinates and the absolute coordinates at the entrance of the roadway.
S718, acquiring third offset data obtained by scanning a public area in the indoor space in real time; and determining the estimated coordinates of the current position of the robot in the public area according to the third offset data and the known position point.
S720, when the known position point is passed, calculating a second deviation between the estimated coordinate of the known position point and the corresponding absolute coordinate.
And S722, when the deviation between the estimated coordinate and the absolute coordinate at the entrance of the roadway or the second deviation between the estimated coordinate of the known position point and the corresponding absolute coordinate is less than a preset threshold value, generating a map according to each estimated coordinate.
S724, judging whether a blind area that has not been scanned exists in the indoor space; if so, returning to step S702; if not, executing step S726.
S726, judging whether any storage location in the roadway has not been scanned; if so, returning to step S702; otherwise, executing step S728.
S728, judging whether the deviation or the second deviation is smaller than the preset threshold; if so, executing step S732; if not, executing step S730.
S730, adjusting the corresponding trust level of each sensor, and processing the data acquired by the corresponding sensor according to the adjusted trust level; the estimated coordinates of each bin are determined based on the processed respective data.
S732, receiving the target coordinates for traveling to the target storage location.
S734, acquiring real-time data obtained by scanning the current position in the process of traveling; and determining the coordinates of the current position according to the real-time data.
S736, traveling to the target storage location according to the coordinates of the current position and the target coordinates.
According to the above map generation method, when generating a map, the indoor space is scanned starting from any known position point, and the estimated coordinate of each storage location on the shelves is automatically calculated from the sensor data obtained by scanning the storage locations in the roadways. When the estimated coordinate calculated at a roadway entrance is close to the given absolute coordinate, the sensor data collected by the sensors is reliable enough, so the estimated coordinates of the storage locations determined from those sensors are also reliable, and the map can be generated from them. Compared with generating a map by pasting a large number of two-dimensional codes, which consumes considerable manpower and material resources, this method can greatly save manpower and material resources.
It should be understood that, although the steps in the flowcharts are displayed in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise, the execution of these steps is not strictly limited to the order shown, and they may be performed in other orders. Moreover, at least some of the steps in the above embodiments may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different times; and the order of executing these sub-steps or stages is not necessarily sequential, as they may be executed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
As shown in fig. 8, a map generation apparatus 800 is provided that includes a scanning module 802, an absolute coordinate acquisition module 804, a sensor data acquisition module 806, an estimated coordinate determination module 808, a deviation calculation module 810, and a map generation module 812. Wherein:
the scanning module 802 is configured to scan an indoor space from any first type of location point in the indoor space; the absolute coordinate acquisition module 804 is configured to acquire absolute coordinates corresponding to a first-type position point when the first-type position point passes through an indoor space; the sensor data acquisition module 806 is configured to acquire sensor data obtained by scanning the first type of location points and the second type of location points in the indoor space in real time; the estimated coordinate determination module 808 is configured to determine a first estimated coordinate of each first-type location point and a second estimated coordinate of each second-type location point according to the sensor data; the deviation calculation module 810 is configured to calculate a deviation between the first estimated coordinates and the absolute coordinates; the map generating module 812 is configured to generate a map according to each second estimated coordinate when the deviation is smaller than a preset threshold.
In one embodiment, the sensor data acquisition module 806 is further configured to acquire, in real time, image data collected by the vision sensor in the roadway; judge, according to the image data, whether a second graphic code for positioning a second-type position point exists in the image; if the second graphic code exists, acquire first offset data of the current robot relative to the first-type position point; and determine, from the image, second offset data of the second graphic code relative to the current robot.
The estimated coordinate determination module 808 is further configured to calculate an estimated coordinate of the second graphic code according to the first offset data, the absolute coordinate, and the second offset data, and use the estimated coordinate of the second graphic code as the second estimated coordinate.
In one embodiment, the sensor data obtaining module 806 is further configured to obtain an image attribute corresponding to the second graphic code; extracting image characteristics of an image according to currently acquired image data; and when the image characteristics are matched with the image attributes, determining that the second graphic code exists in the image.
In one embodiment, the scanning module 802 is further configured to scan the indoor space starting from any first-type position point of the indoor space when there is a blind area in the indoor space that has not been scanned, and to scan the indoor space starting from any first-type position point of the indoor space when there are second-type position points in the roadway that have not been scanned.
In one embodiment, as shown in FIG. 9, the sensor data includes data collected by a plurality of sensors; the above-mentioned device still includes:
a sensor data correction module 814, configured to adjust the corresponding confidence level of each sensor when the deviation is greater than a preset threshold;
the estimated coordinate determination module 808 is further configured to process data acquired by the corresponding sensor according to the adjusted confidence level; and determining the first estimated coordinates of the first type position points and the second estimated coordinates of the second type position points based on the processed data.
In one embodiment, as shown in fig. 10, the above apparatus further comprises:
a navigation module 816 for receiving target coordinates to be traveled to a target location; in the process of moving, acquiring real-time data obtained by scanning the current position; determining the coordinates of the current position according to the real-time data; and entering the target position according to the coordinate of the current position and the target coordinate.
In one embodiment, the absolute coordinate acquisition module 804 is further configured to acquire an image corresponding to the first type of location point; and analyzing the first graphic code in the image to obtain absolute coordinates at the entrance of the roadway.
When generating a map, the map generation apparatus 800 scans the indoor space starting from any known position point, and automatically calculates the estimated coordinate of each storage location on the shelves from the sensor data obtained by scanning the storage locations in the roadways of the indoor space. When the estimated coordinate calculated at a roadway entrance is close to the given absolute coordinate, the sensor data collected by the sensors is reliable enough, and therefore the estimated coordinates of the storage locations determined from those sensors are also reliable.
FIG. 11 is a diagram that illustrates an internal structure of the computer device in one embodiment. The computer device may specifically be the robot 110 in fig. 1. As shown in fig. 11, the computer device includes a processor, a memory, a network interface, and a sensor connected by a system bus. Wherein the sensor comprises a visual sensor and the memory comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system and may also store a computer program that, when executed by the processor, causes the processor to implement the map generation method. The internal memory may also have stored therein a computer program that, when executed by the processor, causes the processor to perform a map generation method.
Those skilled in the art will appreciate that the architecture shown in fig. 11 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, the map generation apparatus 800 provided by the present application may be implemented in the form of a computer program that is executable on a computer device such as the one shown in fig. 11. The memory of the computer device may store various program modules constituting the map generation apparatus 800, and the computer program constituted by the various program modules causes the processor to execute the steps in the map generation method according to the various embodiments of the present application described in the present specification.
For example, the computer device shown in fig. 11 may perform step S202 by the scanning module 802 in the map generating apparatus 800 shown in fig. 8; step S204 is executed by the absolute coordinate obtaining module 804; step S206 is performed by the sensor data acquisition module 806; step S208 is performed by the estimated coordinate determination module 808; step S210 is performed by the deviation calculation module 810; step S212 is performed by the map generation module 812.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the map generation method described above. Here, the steps of the map generation method may be steps in the map generation methods of the respective embodiments described above.
In one embodiment, a computer-readable storage medium is provided, storing a computer program that, when executed by a processor, causes the processor to perform the steps of the map generation method described above. Here, the steps of the map generation method may be steps in the map generation methods of the respective embodiments described above.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware related to instructions of a computer program, and the program can be stored in a non-volatile computer readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory, among others. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDRSDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), direct bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these technical features are described, but any such combination should be considered within the scope of this specification as long as it contains no contradiction.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but should not be construed as limiting the scope of the application. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, and these fall within the scope of protection of the application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (14)

1. A map generation method, applied to a robot, the method comprising:
after a state of each sensor is initialized, scanning an indoor space starting from any first-type position point of the indoor space, wherein the first-type position point is a known position point at a roadway entrance in the indoor space, the known position point having a preset absolute coordinate;
when the robot passes a first-type position point in the indoor space, parsing a first graphic code pasted at the first-type position point, acquiring an absolute coordinate corresponding to the first-type position point, and acquiring, for each sensor, sensor data corresponding to the first-type position point, wherein the sensor data corresponding to the first-type position point is obtained by fusing current data of each sensor, according to the trust degree adjusted the previous time, while the robot moves from the first-type position point passed the previous time, where each sensor was initialized, to the first-type position point passed this time; and, after determining a first estimated coordinate of the first-type position point according to the sensor data corresponding to the first-type position point and the absolute coordinate corresponding to the first-type position point passed the previous time, initializing the state of each sensor and continuing to scan the indoor space;
when the robot passes a second-type position point in the indoor space, acquiring sensor data corresponding to the second-type position point obtained by scanning the second-type position point, wherein the sensor data corresponding to the second-type position point is obtained by fusing current data of each sensor, according to the trust degree adjusted the previous time, while the robot moves from the first-type position point passed the previous time, where each sensor was initialized, to the second-type position point passed this time; and determining a second estimated coordinate of the second-type position point according to the sensor data corresponding to the second-type position point and the absolute coordinate corresponding to the first-type position point passed the previous time, wherein the second-type position point is a position point corresponding to each storage location on a shelf in a roadway of the indoor space, and the second-type position point is a position point whose coordinate is unknown;
calculating a deviation between the first estimated coordinate and the absolute coordinate;
when the deviation is smaller than a preset threshold, generating a map according to each second estimated coordinate; and
when the deviation is larger than the preset threshold, adjusting the trust degree corresponding to each sensor, returning to the step of scanning the indoor space starting from any first-type position point of the indoor space after the state of each sensor is initialized, fusing the data collected by each sensor during scanning according to the adjusted trust degrees, determining the first estimated coordinate of each first-type position point and the second estimated coordinate of each second-type position point based on the fused sensor data, and calculating the deviation between the first estimated coordinate and the absolute coordinate.
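A minimal one-dimensional sketch of the loop recited in claim 1 — trust-weighted fusion of per-sensor data, comparison against the known absolute coordinate of a first-type position point, and trust adjustment when the deviation exceeds the threshold — might look as follows. The fusion and adjustment rules here are illustrative assumptions; the claim fixes the control flow but not these particular formulas.

```python
def fuse(readings, trust):
    """Trust-weighted average of per-sensor 1-D position readings; the
    weights stand in for the per-sensor trust degrees of claim 1."""
    total = sum(trust[s] for s in readings)
    return sum(readings[s] * trust[s] for s in readings) / total

def adjust_trust(readings, absolute, trust, rate=0.5):
    """Down-weight each sensor in proportion to its error against the
    known absolute coordinate (one plausible adjustment rule; the claim
    does not prescribe a specific one)."""
    return {s: trust[s] / (1.0 + rate * abs(readings[s] - absolute))
            for s in readings}

# One pass of the check: fuse, compare with the absolute coordinate of
# the first-type position point, and re-fuse with adjusted trust when
# the deviation exceeds the preset threshold.
readings = {"odometry": 10.4, "imu": 9.1, "vision": 10.0}
trust = {"odometry": 1.0, "imu": 1.0, "vision": 1.0}
absolute = 10.0
estimate = fuse(readings, trust)
if abs(estimate - absolute) > 0.1:          # preset threshold
    trust = adjust_trust(readings, absolute, trust)
    estimate = fuse(readings, trust)        # moves closer to the absolute coordinate
```

In the patent's terms, the second estimated coordinates computed under the final trust degrees would then be used to generate the map.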
2. The method of claim 1, wherein acquiring, in real time, the sensor data obtained by scanning the first-type position points and the second-type position points in the indoor space comprises:
acquiring, in real time, image data collected by a visual sensor in a roadway of the indoor space;
judging, according to the image data, whether a second graphic code for locating the second-type position point exists in the image;
if the second graphic code exists, acquiring first offset data of the current robot relative to the first-type position point; and
determining, according to the image, second offset data of the second graphic code relative to the current robot;
wherein determining the first estimated coordinate of each first-type position point and the second estimated coordinate of each second-type position point from the sensor data comprises:
calculating an estimated coordinate of the second graphic code according to the first offset data, the absolute coordinate and the second offset data, and taking the estimated coordinate of the second graphic code as the second estimated coordinate.
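The coordinate calculation in claim 2 reduces to vector addition: the shelf code's estimated coordinate is the entrance's absolute coordinate, plus the robot's offset from the entrance, plus the code's offset from the robot. A 2-D sketch (the flat-floor assumption and all names are illustrative):

```python
def estimate_second_coordinate(absolute, first_offset, second_offset):
    """Add the robot's offset from the first-type point and the second
    graphic code's offset from the robot to the entrance's absolute
    coordinate, yielding the second estimated coordinate."""
    ax, ay = absolute
    fx, fy = first_offset
    sx, sy = second_offset
    return (ax + fx + sx, ay + fy + sy)

# entrance at (5, 0); robot 3 m down the roadway; code 0.5 m to one side
estimate_second_coordinate((5.0, 0.0), (0.0, 3.0), (0.5, 0.0))  # → (5.5, 3.0)
```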
3. The method of claim 2, wherein judging, according to the image data, whether the second graphic code for locating the second-type position point exists in the image comprises:
acquiring an image attribute corresponding to the second graphic code;
extracting image characteristics of an image from the currently acquired image data; and
when the image characteristics match the image attribute, determining that the second graphic code exists in the image.
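The presence test in claim 3 can be sketched as simple attribute matching: the second graphic code is deemed present when every expected image attribute is found among the features extracted from the current image. A real system would use an actual detector; this only mirrors the claim's logic, and all names are illustrative.

```python
def second_code_present(image_features, code_attributes):
    """Return True when every expected attribute of the second graphic
    code appears among the features extracted from the image."""
    return code_attributes.issubset(image_features)

second_code_present({"finder_pattern", "square", "high_contrast"},
                    {"finder_pattern", "square"})   # → True
second_code_present({"high_contrast"},
                    {"finder_pattern", "square"})   # → False
```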
4. The method of claim 1, further comprising:
when there is a blind area in the indoor space that has not been scanned, scanning the indoor space again starting from any first-type position point of the indoor space; and
when there is a second-type position point in a roadway of the indoor space that has not been scanned, returning to the step of scanning the indoor space starting from any first-type position point of the indoor space.
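The control flow of claim 4 is a re-scan loop: keep scanning from a first-type position point while any blind area, or any unscanned second-type position point in a roadway, remains. A sketch with hypothetical callables standing in for the robot's scanning and coverage checks:

```python
def scan_until_complete(scan_once, has_blind_area, has_unscanned_points):
    """Re-scan the indoor space, starting from a first-type position
    point, until no blind area and no unscanned second-type position
    point remains (claim-4 control flow; the callables are stand-ins)."""
    while has_blind_area() or has_unscanned_points():
        scan_once()

# toy usage: each scan pass clears one remaining blind area
remaining = {"blind_areas": 2}
scan_until_complete(
    lambda: remaining.update(blind_areas=remaining["blind_areas"] - 1),
    lambda: remaining["blind_areas"] > 0,
    lambda: False,
)
```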
5. The method according to claim 1, wherein after generating a map from each of the second estimated coordinates when the deviation is less than a preset threshold, the method further comprises:
receiving a target coordinate of a target position to advance to;
acquiring, while moving, real-time data obtained by scanning the current position;
determining a coordinate of the current position according to the real-time data; and
advancing to the target position according to the coordinate of the current position and the target coordinate.
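Claim 5's navigation step — recover the current coordinate from real-time scan data, then advance toward the received target coordinate — can be sketched as a bounded step taken once per scan cycle. The step size and names are illustrative.

```python
import math

def step_toward(current, target, step=0.5):
    """Take one step of at most `step` metres from the coordinate
    recovered from real-time data toward the target coordinate."""
    dx, dy = target[0] - current[0], target[1] - current[1]
    dist = math.hypot(dx, dy)
    if dist <= step:
        return target          # close enough to land on the target
    return (current[0] + step * dx / dist,
            current[1] + step * dy / dist)

# advance over successive scan cycles until the target is reached
pos, target = (0.0, 0.0), (3.0, 4.0)
for _ in range(20):
    pos = step_toward(pos, target, step=1.0)
    if pos == target:
        break
```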
6. The method of claim 1, wherein parsing the first graphic code pasted at the first-type position point when passing the first-type position point in the indoor space and acquiring the absolute coordinate corresponding to the first-type position point comprise:
collecting an image on a shelf at a roadway entrance; and
when an image corresponding to the first-type position point is collected, parsing the first graphic code in the image to obtain the absolute coordinate of the roadway entrance of the indoor space.
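If one assumes, purely for illustration, that the first graphic code at a roadway entrance encodes its absolute coordinate as a plain "x,y" payload, claim 6's parsing step reduces to decoding and splitting that payload (the actual payload format is not specified by the claims):

```python
def parse_first_code(payload):
    """Decode a hypothetical "x,y" payload from the first graphic code
    into the absolute coordinate of the roadway entrance."""
    x_str, y_str = payload.split(",")
    return (float(x_str), float(y_str))

parse_first_code("12.5,0.0")   # → (12.5, 0.0)
```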
7. A map generation apparatus, applied to a robot, the apparatus comprising:
a scanning module, configured to scan an indoor space, after a state of each sensor is initialized, starting from any first-type position point of the indoor space, wherein the first-type position point is a known position point at a roadway entrance in the indoor space, the known position point having a preset absolute coordinate;
an absolute coordinate acquisition module, configured to parse, when the robot passes a first-type position point in the indoor space, a first graphic code pasted at the first-type position point and acquire an absolute coordinate corresponding to the first-type position point;
a sensor data acquisition module, configured to acquire, when the robot passes a first-type position point in the indoor space, sensor data corresponding to the first-type position point for each sensor, wherein the sensor data corresponding to the first-type position point is obtained by fusing current data of each sensor, according to the trust degree adjusted the previous time, while the robot moves from the first-type position point passed the previous time, where each sensor was initialized, to the first-type position point passed this time;
an estimated coordinate determination module, configured to determine a first estimated coordinate of the first-type position point according to the sensor data corresponding to the first-type position point and the absolute coordinate corresponding to the first-type position point passed the previous time, after which the robot initializes the state of each sensor and continues to scan the indoor space;
wherein the sensor data acquisition module is further configured to acquire, when the robot passes a second-type position point in the indoor space, sensor data corresponding to the second-type position point obtained by scanning the second-type position point, wherein the sensor data corresponding to the second-type position point is obtained by fusing current data of each sensor, according to the trust degree adjusted the previous time, while the robot moves from the first-type position point passed the previous time, where each sensor was initialized, to the second-type position point passed this time;
the estimated coordinate determination module is further configured to determine a second estimated coordinate of the second-type position point according to the sensor data corresponding to the second-type position point and the absolute coordinate corresponding to the first-type position point passed the previous time, wherein the second-type position point is a position point corresponding to each storage location on a shelf in a roadway of the indoor space, and the second-type position point is a position point whose coordinate is unknown;
a deviation calculation module, configured to calculate a deviation between the first estimated coordinate and the absolute coordinate;
a map generation module, configured to generate a map according to each second estimated coordinate when the deviation is smaller than a preset threshold; and
a sensor data correction module, configured to adjust the trust degree corresponding to each sensor when the deviation is larger than the preset threshold, after which the apparatus, through the scanning module, continues to perform the step of scanning the indoor space starting from any first-type position point of the indoor space after the state of each sensor is initialized, and fuses the data collected by each sensor during scanning according to the adjusted trust degrees;
the estimated coordinate determination module being further configured to determine the first estimated coordinate of each first-type position point and the second estimated coordinate of each second-type position point based on the fused sensor data, after which the apparatus calculates the deviation between the first estimated coordinate and the absolute coordinate through the deviation calculation module.
8. The apparatus of claim 7, wherein the sensor data acquisition module is further configured to acquire, in real time, image data collected by a visual sensor in a roadway of the indoor space; judge, according to the image data, whether a second graphic code for locating the second-type position point exists in the image; if the second graphic code exists, acquire first offset data of the current robot relative to the first-type position point; and determine, according to the image, second offset data of the second graphic code relative to the current robot; and
the estimated coordinate determination module is further configured to calculate an estimated coordinate of the second graphic code according to the first offset data, the absolute coordinate, and the second offset data, and use the estimated coordinate of the second graphic code as the second estimated coordinate.
9. The apparatus of claim 8, wherein the sensor data acquisition module is further configured to acquire an image attribute corresponding to the second graphic code; extract image characteristics of an image from the currently acquired image data; and, when the image characteristics match the image attribute, determine that the second graphic code exists in the image.
10. The apparatus of claim 7, wherein the scanning module is further configured to scan the indoor space again, starting from any first-type position point of the indoor space, when there is a blind area in the indoor space that has not been scanned, and to scan the indoor space again, starting from any first-type position point of the indoor space, when there is a second-type position point in a roadway of the indoor space that has not been scanned.
11. The apparatus of claim 7, further comprising:
a navigation module, configured to receive a target coordinate of a target position to advance to; acquire, while moving, real-time data obtained by scanning the current position; determine a coordinate of the current position according to the real-time data; and advance to the target position according to the coordinate of the current position and the target coordinate.
12. The apparatus of claim 7, wherein the absolute coordinate acquisition module is further configured to collect an image on a shelf at a roadway entrance and, when an image corresponding to the first-type position point is collected, parse the first graphic code in the image to obtain the absolute coordinate of the roadway entrance of the indoor space.
13. A computer-readable storage medium, storing a computer program which, when executed by a processor, causes the processor to carry out the steps of the method according to any one of claims 1 to 6.
14. A computer device comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of the method according to any one of claims 1 to 6.
CN201811378230.6A 2018-11-19 2018-11-19 Map generation method, map generation device, computer-readable storage medium and computer equipment Active CN109637339B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201811378230.6A CN109637339B (en) 2018-11-19 2018-11-19 Map generation method, map generation device, computer-readable storage medium and computer equipment
CN202210794996.2A CN114999308A (en) 2018-11-19 2018-11-19 Map generation method, map generation device, computer-readable storage medium and computer equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811378230.6A CN109637339B (en) 2018-11-19 2018-11-19 Map generation method, map generation device, computer-readable storage medium and computer equipment

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202210794996.2A Division CN114999308A (en) 2018-11-19 2018-11-19 Map generation method, map generation device, computer-readable storage medium and computer equipment

Publications (2)

Publication Number Publication Date
CN109637339A CN109637339A (en) 2019-04-16
CN109637339B true CN109637339B (en) 2022-08-09

Family

ID=66068728

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202210794996.2A Pending CN114999308A (en) 2018-11-19 2018-11-19 Map generation method, map generation device, computer-readable storage medium and computer equipment
CN201811378230.6A Active CN109637339B (en) 2018-11-19 2018-11-19 Map generation method, map generation device, computer-readable storage medium and computer equipment

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202210794996.2A Pending CN114999308A (en) 2018-11-19 2018-11-19 Map generation method, map generation device, computer-readable storage medium and computer equipment

Country Status (1)

Country Link
CN (2) CN114999308A (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110186459B (en) * 2019-05-27 2021-06-29 深圳市海柔创新科技有限公司 Navigation method, mobile carrier and navigation system
CN112214012A (en) * 2019-07-11 2021-01-12 深圳市海柔创新科技有限公司 Navigation method, mobile carrier and navigation system
CN111178315B (en) * 2020-01-03 2023-03-14 深圳市无限动力发展有限公司 Method and device for identifying corner and computer equipment
CN111874512B (en) * 2020-06-10 2022-02-22 北京旷视机器人技术有限公司 Position adjusting method and device, lifting type robot and computer storage medium
CN112611370A (en) * 2020-11-20 2021-04-06 上海能辉科技股份有限公司 Vehicle for carrying truck battery and positioning system thereof
CN113573232B (en) * 2021-07-13 2024-04-19 深圳优地科技有限公司 Robot roadway positioning method, device, equipment and storage medium

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101149792A (en) * 2006-09-21 2008-03-26 国际商业机器公司 System and method for performing inventory using a mobile inventory robot
CN101310163A (en) * 2005-11-18 2008-11-19 丰田自动车株式会社 Mobile object position estimation apparatus and method
CN104036223A (en) * 2014-06-10 2014-09-10 中国人民解放军理工大学 Indoor cargo positioning and navigation system and method based on bar bodes
CN104316050A (en) * 2013-02-28 2015-01-28 三星电子株式会社 Mobile robot and method of localization and mapping of the same
CN104793619A (en) * 2015-04-17 2015-07-22 上海交通大学 Warehouse roadway automatic guided vehicle navigation device based on swing single-line laser radar
WO2016163563A1 (en) * 2015-04-09 2016-10-13 日本電気株式会社 Map generating device, map generating method, and program recording medium
CN205656496U (en) * 2015-11-26 2016-10-19 江苏美的清洁电器股份有限公司 Robot of sweeping floor and device is establish to indoor map thereof
CN106323290A (en) * 2016-08-24 2017-01-11 潘重光 Map generation system and method
CN106355427A (en) * 2016-08-12 2017-01-25 潘重光 Shopping guide map generating method and device
CN106847066A (en) * 2017-01-09 2017-06-13 北京京东尚科信息技术有限公司 Warehouse map constructing method and device
CN106940704A (en) * 2016-11-25 2017-07-11 北京智能管家科技有限公司 A kind of localization method and device based on grating map
CN107145578A (en) * 2017-05-08 2017-09-08 深圳地平线机器人科技有限公司 Map constructing method, device, equipment and system
CN107305376A (en) * 2016-04-19 2017-10-31 上海慧流云计算科技有限公司 A kind of automatic drawing robot of indoor map and method for drafting
WO2018074903A1 (en) * 2016-10-20 2018-04-26 엘지전자 주식회사 Control method of mobile robot
CN108803591A (en) * 2017-05-02 2018-11-13 北京米文动力科技有限公司 A kind of ground drawing generating method and robot

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20100111795A (en) * 2009-04-08 2010-10-18 (주) 한호기술 Self control moving system for robot and self control moving robot
EP2495632B1 (en) * 2009-10-30 2018-09-12 Yujin Robot Co., Ltd. Map generating and updating method for mobile robot position recognition
US9234965B2 (en) * 2010-09-17 2016-01-12 Qualcomm Incorporated Indoor positioning using pressure sensors
KR101272422B1 (en) * 2012-02-29 2013-06-07 부산대학교 산학협력단 Device and method for locationing using laser scanner and landmark matching
JP6132659B2 (en) * 2013-02-27 2017-05-24 シャープ株式会社 Ambient environment recognition device, autonomous mobile system using the same, and ambient environment recognition method
CN103353305A (en) * 2013-06-13 2013-10-16 张砚炳 Indoor positioning method and system based on mobile phone sensor
EP3249418A4 (en) * 2015-01-22 2018-02-28 Guangzhou Airob Robot Technology Co., Ltd. Rfid-based localization and mapping method and device thereof
CN105115506A (en) * 2015-07-27 2015-12-02 深圳先进技术研究院 Indoor positioning method and system
CN105467382A (en) * 2015-12-31 2016-04-06 南京信息工程大学 SVM (Support Vector Machine)-based multi-sensor target tracking data fusion algorithm and system thereof
ES2585977B1 (en) * 2016-03-15 2017-05-10 Tier1 Technology, S.L. ROBOTIZED EQUIPMENT FOR THE LOCATION OF ITEMS IN A STORE AND ITS OPERATING PROCEDURE
US9864377B2 (en) * 2016-04-01 2018-01-09 Locus Robotics Corporation Navigation using planned robot travel paths
CN106338991A (en) * 2016-08-26 2017-01-18 南京理工大学 Robot based on inertial navigation and two-dimensional code and positioning and navigation method thereof
KR20180094493A (en) * 2017-02-15 2018-08-23 최일권 Method and system for creating indoor map
CN107478214A (en) * 2017-07-24 2017-12-15 杨华军 A kind of indoor orientation method and system based on Multi-sensor Fusion
CN107727104B (en) * 2017-08-16 2019-04-30 北京极智嘉科技有限公司 Positioning and map building air navigation aid, apparatus and system while in conjunction with mark
CN108469826B (en) * 2018-04-23 2021-06-08 宁波Gqy视讯股份有限公司 Robot-based map generation method and system
CN108759853A (en) * 2018-06-15 2018-11-06 浙江国自机器人技术有限公司 A kind of robot localization method, system, equipment and computer readable storage medium
CN108537913A (en) * 2018-06-15 2018-09-14 浙江国自机器人技术有限公司 A kind of cruising inspection system

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101310163A (en) * 2005-11-18 2008-11-19 丰田自动车株式会社 Mobile object position estimation apparatus and method
CN101149792A (en) * 2006-09-21 2008-03-26 国际商业机器公司 System and method for performing inventory using a mobile inventory robot
CN104316050A (en) * 2013-02-28 2015-01-28 三星电子株式会社 Mobile robot and method of localization and mapping of the same
CN104036223A (en) * 2014-06-10 2014-09-10 中国人民解放军理工大学 Indoor cargo positioning and navigation system and method based on bar bodes
WO2016163563A1 (en) * 2015-04-09 2016-10-13 日本電気株式会社 Map generating device, map generating method, and program recording medium
CN104793619A (en) * 2015-04-17 2015-07-22 上海交通大学 Warehouse roadway automatic guided vehicle navigation device based on swing single-line laser radar
CN205656496U (en) * 2015-11-26 2016-10-19 江苏美的清洁电器股份有限公司 Robot of sweeping floor and device is establish to indoor map thereof
CN107305376A (en) * 2016-04-19 2017-10-31 上海慧流云计算科技有限公司 A kind of automatic drawing robot of indoor map and method for drafting
CN106355427A (en) * 2016-08-12 2017-01-25 潘重光 Shopping guide map generating method and device
CN106323290A (en) * 2016-08-24 2017-01-11 潘重光 Map generation system and method
WO2018074903A1 (en) * 2016-10-20 2018-04-26 엘지전자 주식회사 Control method of mobile robot
CN106940704A (en) * 2016-11-25 2017-07-11 北京智能管家科技有限公司 A kind of localization method and device based on grating map
CN106847066A (en) * 2017-01-09 2017-06-13 北京京东尚科信息技术有限公司 Warehouse map constructing method and device
CN108803591A (en) * 2017-05-02 2018-11-13 北京米文动力科技有限公司 A kind of ground drawing generating method and robot
CN107145578A (en) * 2017-05-08 2017-09-08 深圳地平线机器人科技有限公司 Map constructing method, device, equipment and system

Also Published As

Publication number Publication date
CN114999308A (en) 2022-09-02
CN109637339A (en) 2019-04-16

Similar Documents

Publication Publication Date Title
CN109637339B (en) Map generation method, map generation device, computer-readable storage medium and computer equipment
US10198632B2 (en) Survey data processing device, survey data processing method, and survey data processing program
EP3631494B1 (en) Integrated sensor calibration in natural scenes
CN107850449B (en) Method and system for generating and using positioning reference data
WO2018142900A1 (en) Information processing device, data management device, data management system, method, and program
CN110807350A (en) System and method for visual SLAM for scan matching
KR20190053217A (en) METHOD AND SYSTEM FOR GENERATING AND USING POSITIONING REFERENCE DATA
CN110675307A (en) Implementation method of 3D sparse point cloud to 2D grid map based on VSLAM
Knyaz et al. Photogrammetric technique for timber stack volume control
CN113822299B (en) Map construction method, device, equipment and storage medium
CN114248778B (en) Positioning method and positioning device of mobile equipment
CN113188509B (en) Distance measurement method and device, electronic equipment and storage medium
CN114371484A (en) Vehicle positioning method and device, computer equipment and storage medium
CN116630442B (en) Visual SLAM pose estimation precision evaluation method and device
CN117218350A (en) SLAM implementation method and system based on solid-state radar
US20220148216A1 (en) Position coordinate derivation device, position coordinate derivation method, position coordinate derivation program, and system
CN114494466B (en) External parameter calibration method, device and equipment and storage medium
Lari et al. System considerations and challenges in 3D mapping and modeling using low-cost UAV systems
CN115494533A (en) Vehicle positioning method, device, storage medium and positioning system
US9996085B2 (en) Automatic guiding system for analyzing pavement curvature and method for the same
CN116820074A (en) Method and device for determining congestion point position of robot, robot and storage medium
CN111414804B (en) Identification frame determining method, identification frame determining device, computer equipment, vehicle and storage medium
CN113034538A (en) Pose tracking method and device of visual inertial navigation equipment and visual inertial navigation equipment
JPWO2018212280A1 (en) Measuring device, measuring method and program
CN116358600A (en) Point cloud map positioning capability evaluation system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant