CN110967029B - Mapping method and device and intelligent robot - Google Patents

Mapping method and device and intelligent robot

Info

Publication number: CN110967029B
Application number: CN201911303185.2A
Authority: CN (China)
Prior art keywords: intelligent robot, detectable boundary, optimal
Legal status: Active (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Other languages: Chinese (zh)
Other versions: CN110967029A
Inventors: 高文超 (Gao Wenchao), 黄巍伟 (Huang Weiwei)
Current Assignee: Shenzhen Weidang Life Technology Co ltd (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Original Assignee: International Intelligent Machines Co ltd
Application filed by International Intelligent Machines Co ltd
Priority to CN201911303185.2A
Publication of application: CN110967029A
Application granted; publication of grant: CN110967029B

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30 Map- or contour-matching
    • G01C21/32 Structuring or formatting of map data

Abstract

Embodiments of the invention relate to the technical field of simultaneous localization and mapping (SLAM), and disclose a mapping method, a mapping device and an intelligent robot.

Description

Mapping method and device and intelligent robot
Technical Field
The invention relates to the technical field of simultaneous localization and mapping (SLAM), and in particular to a mapping method, a mapping device and an intelligent robot.
Background
With the development of simultaneous localization and mapping technology, robots have become increasingly intelligent and are now applied widely across industries. With the arrival of the intelligent era, indoor robots have appeared that can draw indoor maps and provide navigation.
In the course of implementing the present invention, the inventors found that the related art has at least the following problem: when an indoor map is drawn, the robot is usually remote-controlled manually to scan the workspace it is in, and the indoor map is drawn with a ranging sensor; this process depends on manual operation throughout.
Disclosure of Invention
In view of the foregoing defects in the prior art, an object of the embodiments of the present invention is to provide a mapping method and apparatus capable of autonomous operation, and an intelligent robot.
The purpose of the embodiment of the invention is realized by the following technical scheme:
in order to solve the above technical problem, in a first aspect, an embodiment of the present invention provides a mapping method applied to an intelligent robot, including:
acquiring a grid map currently detected by the intelligent robot;
extracting a detectable boundary of the grid map;
determining an optimal detectable boundary from the extracted detectable boundaries according to a preset fast exploration algorithm;
and controlling the intelligent robot to detect the unknown region from the optimal detectable boundary, and expanding the grid map.
In some embodiments, the step of determining an optimal detectable boundary from the extracted detectable boundaries according to a preset fast exploration algorithm further comprises:
acquiring the distance between the intelligent robot and each detectable boundary, the included angle between the advancing direction of the intelligent robot and each detectable boundary, and the size of each detectable boundary;
calculating the detection cost of each detectable boundary according to the distance between the intelligent robot and each detectable boundary, the included angle between the advancing direction of the intelligent robot and each detectable boundary and the size of each detectable boundary;
and taking the detectable boundary with the highest detection cost as the optimal detectable boundary.
In some embodiments, the detection cost of each detectable boundary is calculated by the following formula:

R_j = ω_d · f_j^D + ω_s · f_j^S + ω_r · f_j^R

where R_j represents the detection cost of the detectable boundary j, f_j^D represents the distance between the intelligent robot and the detectable boundary j, f_j^S represents the size of the detectable boundary j, f_j^R represents the included angle between the advancing direction of the intelligent robot and the detectable boundary j, and ω_d, ω_s and ω_r represent the weights of f_j^D, f_j^S and f_j^R, respectively.
In some embodiments, the step of controlling the intelligent robot to detect the unknown region from the optimal detectable boundary further comprises:
acquiring the intermediate point of the optimal detectable boundary;
finding a first passing path from the current position of the intelligent robot to the intermediate point in the grid map;
and if the first passing path is found, controlling the intelligent robot to move to the intermediate point along the first passing path, and then detecting an unknown region from the optimal detectable boundary.
In some embodiments, the method further comprises:
if the first passing path is not found, acquiring the center of gravity of a geometric figure formed by the optimal detectable boundary;
finding a second passing path from the current position of the intelligent robot to the center of gravity in the grid map;
and if the second passing path is found, controlling the intelligent robot to move to the gravity center along the second passing path, and then detecting an unknown region from the optimal detectable boundary.
In order to solve the foregoing technical problem, in a second aspect, an embodiment of the present invention provides a mapping apparatus, including:
the acquisition module is used for acquiring a grid map obtained by current detection of the intelligent robot;
the extraction module is used for extracting the detectable boundaries of the grid map;
the determining module is used for determining the optimal detectable boundary from the extracted detectable boundaries according to a preset fast exploration algorithm;
and the control module is used for controlling the intelligent robot to detect the unknown region from the optimal detectable boundary and expand the grid map.
In some embodiments, the determination module is further configured to obtain a distance between the intelligent robot and each detectable boundary, an angle between a direction of travel of the intelligent robot and each detectable boundary, and a size of each detectable boundary;
calculating the detection cost of each detectable boundary according to the distance between the intelligent robot and each detectable boundary, the included angle between the advancing direction of the intelligent robot and each detectable boundary and the size of each detectable boundary;
and taking the detectable boundary with the highest detection cost as the optimal detectable boundary.
In some embodiments, the detection cost of each of the detectable boundaries is calculated by the following formula:

R_j = ω_d · f_j^D + ω_s · f_j^S + ω_r · f_j^R

where R_j represents the detection cost of the detectable boundary j, f_j^D represents the distance between the intelligent robot and the detectable boundary j, f_j^S represents the size of the detectable boundary j, f_j^R represents the included angle between the advancing direction of the intelligent robot and the detectable boundary j, and ω_d, ω_s and ω_r represent the weights of f_j^D, f_j^S and f_j^R, respectively.
In some embodiments, the control module is further configured to obtain a midpoint of the optimal detectable boundary;
finding a first passing path from the current position of the intelligent robot to the intermediate point in the grid map;
and if the first passing path is found, controlling the intelligent robot to move to the intermediate point along the first passing path, and then detecting an unknown region from the optimal detectable boundary.
In some embodiments, the control module is further configured to, if the first passing path is not found, obtain the center of gravity of a geometric figure formed by the optimal detectable boundary;
finding a second passing path from the current position of the intelligent robot to the center of gravity in the grid map;
and if the second passing path is found, controlling the intelligent robot to move to the gravity center along the second passing path, and then detecting an unknown area from the optimal detectable boundary.
In order to solve the above technical problem, in a third aspect, an embodiment of the present invention provides an intelligent robot, including:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect as described above.
In order to solve the above technical problem, in a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium storing computer-executable instructions for causing a computer to perform the method according to the first aspect.
In order to solve the above technical problem, in a fifth aspect, the embodiments of the present invention further provide a computer program product, the computer program product comprising a computer program stored on a computer-readable storage medium, the computer program comprising program instructions that, when executed by a computer, cause the computer to execute the method according to the first aspect.
Compared with the prior art, the embodiments of the invention have the following beneficial effects. The embodiment of the invention provides a mapping method: a grid map currently detected by an intelligent robot is first acquired; the detectable boundaries of the grid map are then extracted; an optimal detectable boundary is determined from the extracted detectable boundaries according to a preset fast exploration algorithm; and finally the intelligent robot is controlled to detect the unknown region from the optimal detectable boundary and expand the grid map. The intelligent robot can thereby build the map autonomously, without manual operation, saving manpower.
Drawings
One or more embodiments are illustrated by the corresponding accompanying drawings, which are not to be construed as limiting the embodiments. Elements/modules and steps with the same reference numerals denote like elements/modules and steps unless otherwise specified, and the drawings are not to scale.
FIG. 1 is a schematic diagram of an application scenario of a method provided by an embodiment of the present invention;
FIG. 2 is a flow chart of a mapping method according to an embodiment of the present invention;
FIG. 3 is a sub-flow diagram of step 130 of the method of FIG. 2;
FIG. 4 is a sub-flowchart of step 140 of the method of FIG. 2;
fig. 5 is a schematic structural diagram of a mapping apparatus according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an intelligent robot according to an embodiment of the present invention.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the invention, but are not intended to limit the invention in any way. It should be noted that persons skilled in the art can make variations and modifications without departing from the concept of the invention, all of which fall within the scope of the present invention.
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It should be noted that, provided they do not conflict, the various features of the embodiments of the invention may be combined with each other within the scope of protection of the present application. Additionally, although functional modules are divided in the apparatus schematics and logical sequences are shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from the module division in the apparatus or the sequence in the flowcharts. Further, the terms "first", "second" and the like used herein do not limit the data or the execution order, but merely distinguish identical or similar items having substantially the same functions and effects.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
In addition, the technical features involved in the respective embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
Fig. 1 is a schematic diagram of an application environment of a mapping method according to an embodiment of the present invention. The application environment includes an intelligent robot 10, which can detect the surrounding environment through a detection device and rasterize it to obtain a grid map A.
The grid map A is composed of individual grid cells, each of which represents the state of the environment at its location. There are three states: the occupied state, indicating that an obstacle is present; the idle state, indicating that there is no obstacle and the intelligent robot 10 can pass freely; and the unknown state, which the intelligent robot 10 has not yet detected. The intelligent robot 10 can plan an advancing route according to the grid map A, and it continuously detects and updates the grid map A as it advances.
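The three cell states described above can be sketched in a few lines; the numeric encoding and the grid layout below are illustrative assumptions, not values fixed by the patent.

```python
# Illustrative encoding of the three grid-cell states (assumed values;
# the patent does not prescribe a concrete representation).
UNKNOWN, FREE, OCCUPIED = -1, 0, 1

# A small grid map A: every cell starts unknown, then cells the robot
# has observed are marked free or occupied.
grid = [[UNKNOWN] * 6 for _ in range(6)]
for r in range(1, 4):
    for c in range(1, 4):
        grid[r][c] = FREE      # area the robot has already swept
grid[1][3] = OCCUPIED          # one detected obstacle inside that area

free_cells = sum(row.count(FREE) for row in grid)
print(free_cells)  # → 8
```

As the robot advances, newly observed cells simply change state in place, which is what "continuously detects and updates the grid map" amounts to in this representation.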
The boundary between a grid cell in the idle or occupied state and a grid cell in the unknown state is marked as a detectable boundary of the grid map A. Combining the orientation of the intelligent robot 10 with its distance from each detectable boundary yields an optimal detectable boundary S, the detectable boundary that the intelligent robot 10 can reach most quickly. After the optimal detectable boundary S is determined, the intelligent robot 10 is driven to move to the midpoint D1 or the center of gravity D2 of the optimal detectable boundary S.
The intelligent robot 10 is provided with at least one detection device capable of detecting obstacles in space, for example a laser detection device, another distance sensor or a depth camera that can measure the distance between an obstacle and the intelligent robot 10. The intelligent robot 10 is mobile: it is provided with a locomotion device, and its moving path can be set according to the locomotion mode of that device. The intelligent robot 10 also has a built-in central processing unit that can receive and send instructions and process large amounts of data.
It should be noted that the mapping method provided in the embodiments of the present application is generally performed by the intelligent robot 10, and accordingly the mapping apparatus is generally disposed in the intelligent robot 10. One or more intelligent robots 10 may be used for mapping; the number is not limited in this application. When there are multiple robots, the intelligent robots 10 can share data in real time, enabling autonomous mapping of a large detection area.
Specifically, the embodiments of the present invention will be further explained below with reference to the drawings.
An embodiment of the present invention provides a mapping method, which can be executed by the intelligent robot 10, please refer to fig. 2, which shows a flowchart of a mapping method provided by an embodiment of the present invention, and the method includes, but is not limited to, the following steps:
step 110: and acquiring a grid map currently detected by the intelligent robot.
In the embodiment of the present invention, the grid map detected by the intelligent robot is acquired first; it may be the grid map A shown in Fig. 1. It should be noted that the grid map A shown in Fig. 1 does not include cells in the unknown state. The intelligent robot can obtain the grid map through detection by its various detection devices.
Step 120: detectable boundaries of the grid map are extracted.
In the embodiment of the present invention, the detectable boundaries of the grid map are then obtained. A detectable boundary is a boundary, on the grid map detected by the intelligent robot, between a cell in the occupied or idle state and a cell in the unknown state. For the grid map A shown in Fig. 1, this is the boundary on the outer periphery of the map formed by a plurality of grid cells.
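Step 120 can be sketched as a scan for known cells that touch unknown ones. This is an illustrative sketch under assumptions of my own (the patent gives no code): the state encoding and the 4-connected neighbourhood are not specified by the source.

```python
UNKNOWN, FREE, OCCUPIED = -1, 0, 1  # assumed state encoding

def extract_detectable_boundary(grid):
    """Return the known cells (free or occupied) adjacent to at least
    one unknown cell -- the detectable boundary of step 120."""
    rows, cols = len(grid), len(grid[0])
    boundary = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == UNKNOWN:
                continue
            # 4-connected neighbourhood check against unknown cells
            if any(0 <= r + dr < rows and 0 <= c + dc < cols
                   and grid[r + dr][c + dc] == UNKNOWN
                   for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))):
                boundary.append((r, c))
    return boundary

# 5x5 map with a 3x3 explored free patch in one corner.
grid = [[UNKNOWN] * 5 for _ in range(5)]
for r in range(3):
    for c in range(3):
        grid[r][c] = FREE
print(extract_detectable_boundary(grid))  # → [(0, 2), (1, 2), (2, 0), (2, 1), (2, 2)]
```

The returned cells trace exactly the outer rim of the explored patch, matching the "boundary on the outer periphery" described above.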
Step 130: And determining the optimal detectable boundary from the extracted detectable boundaries according to a preset fast exploration algorithm.
In the embodiment of the present invention, a preset fast exploration algorithm then selects, among the detectable boundaries, the boundary that can be reached fastest given the moving direction and distance of the intelligent robot. As shown in Fig. 1, the optimal detectable boundary is the boundary S drawn with a thick line, and the current moving direction of the intelligent robot is toward the middle point D1. The preset fast exploration algorithm enables the intelligent robot to expand unknown space quickly and efficiently by searching for the unknown space nearest to the robot that requires the smallest change in its moving direction.
Step 140: and controlling the intelligent robot to detect an unknown region from the optimal detectable boundary, and expanding the grid map.
After the optimal detectable boundary is obtained, the intelligent robot is driven to move to the optimal detectable boundary or its vicinity for detection, so as to acquire obstacle information for the unknown-state cells around the detectable boundary.
The embodiment of the invention provides a mapping method: a grid map currently detected by the intelligent robot is first acquired; the detectable boundaries of the grid map are extracted; an optimal detectable boundary is determined from the extracted detectable boundaries according to a preset fast exploration algorithm; and the intelligent robot is controlled to detect the unknown region from the optimal detectable boundary and expand the grid map. An intelligent robot executing the mapping method provided by the embodiment of the invention can therefore build a map autonomously, without manual operation, saving manpower.
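Steps 110-140 compose into a single exploration loop. The sketch below uses a hypothetical `robot` object (a stub here) bundling the operations the text names; none of these method names come from the patent, and the stub's scan data is invented purely so the loop is runnable.

```python
class StubRobot:
    """Minimal stand-in so the loop is runnable; a real robot would
    wrap sensors, the grid map and motion control."""
    def __init__(self):
        # Detectable boundaries returned by three successive scans;
        # the final empty list means the map is complete.
        self._scans = [["B1", "B2"], ["B3"], []]
        self.explored = []

    def current_grid_map(self):          # step 110 stand-in
        return None

    def extract_boundaries(self, grid):  # step 120 stand-in
        return self._scans.pop(0)

    def detection_cost(self, boundary):  # step 130 stand-in (arbitrary)
        return len(boundary)

    def explore_from(self, boundary):    # step 140 stand-in
        self.explored.append(boundary)

def autonomous_mapping(robot):
    """Repeat steps 110-140 until no detectable boundary remains."""
    while True:
        grid = robot.current_grid_map()                   # step 110
        boundaries = robot.extract_boundaries(grid)       # step 120
        if not boundaries:
            break                                         # map complete
        best = max(boundaries, key=robot.detection_cost)  # step 130
        robot.explore_from(best)                          # step 140

robot = StubRobot()
autonomous_mapping(robot)
print(robot.explored)  # → ['B1', 'B3']
```

The loop terminates precisely when extraction returns no boundary, which is the natural "map finished" condition implied by expanding the grid map until no unknown region is reachable.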
In some embodiments, please refer to fig. 3, which illustrates a sub-flowchart of step 130 of the method shown in fig. 2, wherein the step 130 includes, but is not limited to, the following steps:
step 131: and acquiring the distance between the intelligent robot and each detectable boundary, the included angle between the advancing direction of the intelligent robot and each detectable boundary and the size of each detectable boundary.
In the embodiment of the present invention, three quantities are needed to obtain the optimal detectable boundary. The distance between the intelligent robot and each detectable boundary is acquired to find the nearest detectable boundary the robot can reach; the included angle between the advancing direction of the intelligent robot and each detectable boundary is acquired to find the detectable boundary requiring the smallest rotation during movement; and the size of each detectable boundary is acquired to find the detectable boundary with the highest detection efficiency. The intelligent robot can measure the distance to each detectable boundary with a ranging device, such as a laser detection device; it can detect its advancing direction with a gyroscope; it can compute the included angle to each detectable boundary from the positions of the boundary's grid cells combined with the position of its own grid cell; and it can obtain the size of each detectable boundary from the number of grid cells the boundary occupies.
Step 132: and calculating the detection cost of each detectable boundary according to the distance between the intelligent robot and each detectable boundary, the included angle between the advancing direction of the intelligent robot and each detectable boundary and the size of each detectable boundary.
In an embodiment of the present invention, the distance between the intelligent robot and each detectable boundary, the included angle between the advancing direction of the intelligent robot and each detectable boundary, and the size of each detectable boundary are each multiplied by a respective weight and combined, yielding the detection cost of each detectable boundary.
Step 133: and taking the detectable boundary with the highest detection cost as the optimal detectable boundary.
In the embodiment of the invention, a higher detection cost means the detectable boundary is closer to the intelligent robot, has the largest range or area, and requires the smallest rotation of the robot. Taking the detectable boundary with the highest detection cost therefore yields the optimal detectable boundary, which the intelligent robot can reach most quickly and detect from most efficiently. Specifically, the detection cost of each detectable boundary is calculated by the following formula:

R_j = ω_d · f_j^D + ω_s · f_j^S + ω_r · f_j^R

where R_j represents the detection cost of the detectable boundary j, f_j^D represents the distance between the intelligent robot and the detectable boundary j, f_j^S represents the size of the detectable boundary j, f_j^R represents the included angle between the advancing direction of the intelligent robot and the detectable boundary j, and ω_d, ω_s and ω_r represent the weights of f_j^D, f_j^S and f_j^R, respectively.
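Steps 132 and 133 can be sketched as a weighted combination followed by an arg-max. The signs attached to the weights are an assumption of mine: the text says the highest-cost boundary is the nearest, largest one needing the smallest turn, so distance and angle are weighted negatively here; the patent does not fix signs or normalisation.

```python
import math

def detection_cost(f_d, f_s, f_r, w_d=1.0, w_s=1.0, w_r=1.0):
    """Weighted combination of distance f_d, size f_s and angle f_r.
    Negative signs on f_d and f_r are an illustrative assumption so
    that nearer, larger boundaries needing less turning score higher."""
    return -w_d * f_d + w_s * f_s - w_r * f_r

# Hypothetical frontiers: (distance in metres, size in cells, angle in radians).
frontiers = {"near_big": (2.0, 5, math.pi / 6),
             "far_small": (6.0, 3, math.pi / 2)}
best = max(frontiers, key=lambda name: detection_cost(*frontiers[name]))
print(best)  # → near_big
```

In practice the three weights would be tuned so that no single term dominates, since distance, cell count and angle live on very different scales.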
In some embodiments, please refer to fig. 4, which illustrates a sub-flowchart of step 140 of the method of fig. 2, wherein step 140 includes, but is not limited to, the following steps:
step 141: and acquiring the intermediate point of the optimal detectable boundary.
Step 142: finding a first passing path from the current position of the intelligent robot to the intermediate point in the grid map.
Step 143: and if the first passing path is found, controlling the intelligent robot to move to the intermediate point along the first passing path, and then detecting an unknown region from the optimal detectable boundary.
In the embodiment of the invention, after the optimal detectable boundary is obtained, the intelligent robot needs to be driven to the optimal detectable boundary or its vicinity for further detection, so as to expand the grid map. Preferably, referring also to Fig. 1, in order for the intelligent robot to detect the obstacle information of the unknown-state cells around the optimal detectable boundary S most conveniently, the middle point D1 of the optimal detectable boundary is selected, a first passing path from the current position of the intelligent robot to the middle point is obtained, the intelligent robot is driven to move to the middle point D1 along the first passing path, and detection then proceeds from the optimal detectable boundary S.
It should be noted that, if there is no obstacle on the straight line from the current position of the intelligent robot to the intermediate point D1, the first passing path is the straight segment from the current position to the intermediate point D1. If an obstacle lies on that straight line, a shortest-path algorithm, such as Dijkstra's algorithm or the A* algorithm, is used to obtain the shortest path from the current position of the intelligent robot to the intermediate point D1, and this shortest path is used as the first passing path.
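The path search above can be sketched with a breadth-first search; on a uniform-cost grid this returns a shortest path just like the Dijkstra/A* searches named in the text. The grid encoding and 4-connectivity are assumptions of this sketch, not details from the patent.

```python
from collections import deque

FREE, OCCUPIED = 0, 1  # illustrative cell encoding

def find_path(grid, start, goal):
    """Breadth-first search over free cells from start to goal.
    Returns the cell sequence, or None when the goal is unreachable
    (the 'path not found' branch in the text)."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cur = queue.popleft()
        if cur == goal:                  # reconstruct path backwards
            path = []
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        r, c = cur
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (r + dr, c + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == FREE and nxt not in prev):
                prev[nxt] = cur
                queue.append(nxt)
    return None

grid = [[0, 0, 0],
        [1, 1, 0],   # a wall forcing a detour
        [0, 0, 0]]
print(find_path(grid, (0, 0), (2, 0)))        # detours around the wall
print(find_path([[0, 1, 0]], (0, 0), (0, 2)))  # → None (blocked)
```

The `None` return value is exactly the condition under which the method falls back from the intermediate point D1 to the center of gravity D2.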
In some embodiments, with continued reference to fig. 4, the method further comprises:
step 144: if the first passing path is not found, the gravity center of the geometric figure formed by the optimal detectable boundary is obtained
Step 145: finding a second path in the grid map from the current location of the intelligent robot to the center of gravity
Step 146: and if the second passing path is found, controlling the intelligent robot to move to the gravity center along the second passing path, and then detecting an unknown area from the optimal detectable boundary.
In the embodiment of the present invention, an obstacle may be present at the intermediate point D1, in which case the intelligent robot cannot obtain the first passing path. In this case, the center of gravity of the geometric figure formed by the optimal detectable boundary is obtained, such as the center of gravity D2 shown in Fig. 1; a second passing path from the current position to the center of gravity D2 is obtained; and the intelligent robot is driven to move to the center of gravity D2 along the second passing path. As with the first passing path, when there is no obstacle on the straight line, the straight segment from the current position of the intelligent robot to the center of gravity D2 is the second passing path; when there is an obstacle, the shortest path from the current position to the center of gravity D2 is obtained and used as the second passing path.
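The two candidate targets, the intermediate point D1 tried first and the center of gravity D2 used as the fallback, can be sketched as follows. Treating the boundary as an ordered list of cells and taking the unweighted mean of cell coordinates for the center of gravity are assumptions of this sketch.

```python
def boundary_targets(cells):
    """Given the ordered cells of the optimal detectable boundary,
    return (intermediate point D1, center of gravity D2). D1 is the
    middle cell of the list; D2 is the mean of the cell coordinates."""
    mid = cells[len(cells) // 2]
    cg = (sum(r for r, _ in cells) / len(cells),
          sum(c for _, c in cells) / len(cells))
    return mid, cg

# An L-shaped boundary, chosen so that D1 and D2 differ visibly.
cells = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
d1, d2 = boundary_targets(cells)
print(d1, d2)  # → (2, 0) (1.4, 0.6)
```

Note that for a non-convex boundary the center of gravity can lie off the boundary itself, which is why the method still checks whether a passing path to D2 exists before driving there.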
Optionally, if an obstacle is also present at the center of gravity D2, so that the second passing path cannot be obtained, it may be determined whether there is an obstacle-free position on the straight line between the intermediate point D1 and the center of gravity D2; if so, the intelligent robot may be driven to that position.
An embodiment of the present invention further provides a mapping apparatus applied to an intelligent robot. Please refer to Fig. 5, which shows the structure of the mapping apparatus provided in the embodiment of the present invention. The mapping apparatus 200 includes: an acquisition module 210, an extraction module 220, a determination module 230, and a control module 240.
The acquisition module 210 is configured to acquire the grid map currently detected by the intelligent robot.
The extraction module 220 is configured to extract detectable boundaries of the grid map.
The determining module 230 is configured to determine an optimal detectable boundary from the extracted detectable boundaries according to a preset fast exploration algorithm.
The control module 240 is configured to control the intelligent robot to detect the unknown region from the optimal detectable boundary, and expand the grid map.
In some embodiments, the determining module 230 is further configured to obtain a distance between the intelligent robot and each detectable boundary, an angle between a forward direction of the intelligent robot and each detectable boundary, and a size of each detectable boundary;
calculating the detection cost of each detectable boundary according to the distance between the intelligent robot and each detectable boundary, the included angle between the advancing direction of the intelligent robot and each detectable boundary and the size of each detectable boundary;
and taking the detectable boundary with the highest detection cost as the optimal detectable boundary.
In some embodiments, the detection cost of each of the detectable boundaries is calculated by the following formula:

R_j = ω_d · f_j^D + ω_s · f_j^S + ω_r · f_j^R

where R_j represents the detection cost of the detectable boundary j, f_j^D represents the distance between the intelligent robot and the detectable boundary j, f_j^S represents the size of the detectable boundary j, f_j^R represents the included angle between the advancing direction of the intelligent robot and the detectable boundary j, and ω_d, ω_s and ω_r represent the weights of f_j^D, f_j^S and f_j^R, respectively.
In some embodiments, the control module 240 is further configured to obtain a midpoint of the optimal detectable boundary;
finding a first passing path from the current position of the intelligent robot to the intermediate point in the grid map;
and if the first passing path is found, controlling the intelligent robot to move to the intermediate point along the first passing path, and then detecting an unknown region from the optimal detectable boundary.
In some embodiments, the control module 240 is further configured to, if the first passing path is not found, obtain the center of gravity of a geometric figure formed by the optimal detectable boundary;
finding a second traffic path from a current location of the intelligent robot to the center of gravity in the grid map;
and if the second passing path is found, controlling the intelligent robot to move to the gravity center along the second passing path, and then detecting an unknown region from the optimal detectable boundary.
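The midpoint-then-centroid fallback performed by the control module can be sketched as follows. `find_path` stands in for any grid planner (e.g. A*) returning a list of cells or None; every helper name here is illustrative, not taken from the patent.

```python
def goal_for_boundary(boundary_cells, find_path, robot_pos):
    # Try the midpoint of the optimal detectable boundary first; if no
    # traversable path exists, fall back to the center of gravity of the
    # geometric figure the boundary cells form.
    midpoint = boundary_cells[len(boundary_cells) // 2]
    path = find_path(robot_pos, midpoint)
    if path is not None:
        return midpoint, path          # first traffic path found
    cx = sum(c[0] for c in boundary_cells) / len(boundary_cells)
    cy = sum(c[1] for c in boundary_cells) / len(boundary_cells)
    centroid = (round(cx), round(cy))
    path = find_path(robot_pos, centroid)
    if path is not None:
        return centroid, path          # second traffic path found
    return None, None                  # boundary currently unreachable
```

The centroid fallback matters when the midpoint cell itself lies on or behind an obstacle, while the interior of the figure the boundary encloses is still reachable.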
An embodiment of the present invention further provides an intelligent robot, please refer to fig. 6, which shows a hardware structure of an intelligent robot capable of executing the mapping method described in fig. 2 to fig. 4. The intelligent robot 10 may be the intelligent robot 10 shown in fig. 1.
The intelligent robot 10 includes: at least one processor 11; and a memory 12 communicatively coupled to the at least one processor 11; one processor 11 is taken as an example in fig. 6. The memory 12 stores instructions executable by the at least one processor 11, and the instructions are executed by the at least one processor 11 to enable the at least one processor 11 to execute the mapping method described in fig. 2 to 4. The processor 11 and the memory 12 may be connected by a bus or in another manner; connection by a bus is taken as an example in fig. 6.
The memory 12, as a non-volatile computer-readable storage medium, can be used for storing non-volatile software programs, non-volatile computer-executable programs and modules, such as the program instructions/modules corresponding to the mapping method in the embodiments of the present application, for example, the modules shown in fig. 5. The processor 11 executes the various functional applications and data processing of the intelligent robot by running the non-volatile software programs, instructions and modules stored in the memory 12, thereby implementing the mapping method of the foregoing method embodiments.
The memory 12 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the mapping apparatus, and the like. Further, the memory 12 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, the memory 12 may optionally include memory located remotely from the processor 11, and these remote memories may be connected to the mapping apparatus via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The one or more modules are stored in the memory 12, and when executed by the one or more processors 11, perform the mapping method in any of the above-described method embodiments, for example, perform the method steps of fig. 2 to 4 described above, and implement the functions of the modules and units in fig. 5.
This product can execute the method provided by the embodiments of the present application, and has the functional modules and beneficial effects corresponding to the executed method. For technical details not described in detail in this embodiment, refer to the methods provided in the embodiments of the present application.
Embodiments of the present application also provide a non-transitory computer-readable storage medium storing computer-executable instructions for execution by one or more processors, for example, to perform the method steps of fig. 2-4 described above to implement the functions of the modules in fig. 5.
Embodiments of the present application further provide a computer program product comprising a computer program stored on a non-volatile computer-readable storage medium, the computer program comprising program instructions that, when executed by a computer, cause the computer to perform a mapping method in any of the above-described method embodiments, for example, to perform the method steps of fig. 2 to 4 described above, to implement the functions of the modules in fig. 5.
The embodiment of the present invention provides a mapping method. The method first obtains the grid map currently detected by the intelligent robot, then extracts the detectable boundaries of the grid map, next determines an optimal detectable boundary from the extracted detectable boundaries according to a preset rapid exploitation algorithm, and finally controls the intelligent robot to detect an unknown region from the optimal detectable boundary, thereby expanding the grid map. An intelligent robot executing the mapping method provided by the embodiment of the present invention can build a map autonomously without manual operation, saving manpower.
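The boundary-extraction step of this method corresponds to the standard frontier test: a detectable-boundary cell is a free cell adjacent to at least one cell still in the unknown state. A sketch follows; the cell-state encoding is an assumption, not taken from the patent.

```python
FREE, OCCUPIED, UNKNOWN = 0, 1, -1   # assumed cell-state encoding

def extract_detectable_boundary(grid):
    # A detectable-boundary (frontier) cell is a free cell with at least
    # one 4-connected neighbour still in the unknown state.
    rows, cols = len(grid), len(grid[0])
    frontier = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != FREE:
                continue
            neighbours = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            if any(0 <= nr < rows and 0 <= nc < cols
                   and grid[nr][nc] == UNKNOWN
                   for nr, nc in neighbours):
                frontier.append((r, c))
    return frontier
```

Contiguous frontier cells returned by this test would then be grouped into the detectable boundaries among which the optimal one is selected.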
It should be noted that the above-described device embodiments are merely illustrative, where the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a general hardware platform, and certainly can also be implemented by hardware. Those skilled in the art will appreciate that all or part of the processes in the methods of the above embodiments can be implemented by a computer program instructing relevant hardware; the computer program can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, and not to limit it; within the idea of the invention, also technical features in the above embodiments or in different embodiments may be combined, steps may be implemented in any order, and there are many other variations of the different aspects of the invention as described above, which are not provided in detail for the sake of brevity; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (12)

1. A mapping method, applied to an intelligent robot, comprising the following steps:
acquiring a grid map currently detected by the intelligent robot;
extracting detectable boundaries of the grid map;
determining, from the extracted detectable boundaries according to a preset rapid exploitation algorithm, an optimal detectable boundary that the intelligent robot can reach fastest in terms of moving direction and distance;
and controlling the intelligent robot to move to detect an unknown region from a middle point on the optimal detectable boundary, or controlling the intelligent robot to move near the optimal detectable boundary to the center of gravity of a geometric figure formed by the optimal detectable boundary to detect the unknown region, so as to acquire the obstacle information in the grids in an unknown state around the detectable boundary and expand the grid map.
2. The mapping method of claim 1, wherein the step of determining an optimal detectable boundary from the extracted detectable boundaries according to a predetermined fast exploitation algorithm further comprises:
acquiring the distance between the intelligent robot and each detectable boundary, the included angle between the advancing direction of the intelligent robot and each detectable boundary, and the size of each detectable boundary;
calculating the detection cost of each detectable boundary according to the distance between the intelligent robot and each detectable boundary, the included angle between the advancing direction of the intelligent robot and each detectable boundary and the size of each detectable boundary;
and taking the detectable boundary with the highest detection cost as the optimal detectable boundary.
3. The method of claim 2, wherein the calculation of the detection cost for each of the detectable boundaries is formulated as follows:
cost_j = ω_d · f_j^D + ω_s · f_j^S + ω_r · f_j^R
wherein cost_j represents the detection cost of the detectable boundary j, f_j^D represents the distance between the intelligent robot and the detectable boundary j, f_j^S represents the size of the detectable boundary j, f_j^R represents the included angle between the heading of the intelligent robot and the detectable boundary j, ω_d represents the weight of f_j^D, ω_s represents the weight of f_j^S, and ω_r represents the weight of f_j^R.
4. The mapping method according to any of claims 1-3,
the step of controlling the intelligent robot to detect the unknown region from the optimal detectable boundary further comprises:
acquiring the intermediate point of the optimal detectable boundary;
finding a first traffic path from a current location of the intelligent robot to the intermediate point in the grid map;
and if the first passing path is found, controlling the intelligent robot to move to the intermediate point along the first passing path, and then detecting an unknown region from the optimal detectable boundary.
5. The mapping method according to claim 4, wherein the method further comprises:
if the first passing path is not found, acquiring the gravity center of a geometric figure formed by the optimal detectable boundary;
finding a second traffic path from a current location of the intelligent robot to the center of gravity in the grid map;
and if the second passing path is found, controlling the intelligent robot to move to the gravity center along the second passing path, and then detecting an unknown region from the optimal detectable boundary.
6. A mapping apparatus, applied to an intelligent robot, comprising:
the acquisition module is used for acquiring a grid map currently detected by the intelligent robot;
the extraction module is used for extracting the detectable boundaries of the grid map;
the determining module is used for determining, from the extracted detectable boundaries according to a preset rapid exploitation algorithm, an optimal detectable boundary that the intelligent robot can reach fastest in terms of moving direction and distance;
and the control module is used for controlling the intelligent robot to move near the optimal detectable boundary, to the center of gravity of a geometric figure formed by the optimal detectable boundary, to detect the unknown region, so as to acquire the obstacle information in the grids in an unknown state around the detectable boundary and expand the grid map.
7. The apparatus of claim 6,
the determining module is further used for acquiring the distance between the intelligent robot and each detectable boundary, the included angle between the advancing direction of the intelligent robot and each detectable boundary, and the size of each detectable boundary;
calculating the detection cost of each detectable boundary according to the distance between the intelligent robot and each detectable boundary, the included angle between the advancing direction of the intelligent robot and each detectable boundary and the size of each detectable boundary;
and taking the detectable boundary with the highest detection cost as the optimal detectable boundary.
8. The apparatus of claim 7, wherein the calculation formula for calculating the detection cost of each of the detectable boundaries is as follows:
cost_j = ω_d · f_j^D + ω_s · f_j^S + ω_r · f_j^R
wherein cost_j represents the detection cost of the detectable boundary j, f_j^D represents the distance between the intelligent robot and the detectable boundary j, f_j^S represents the size of the detectable boundary j, f_j^R represents the included angle between the heading of the intelligent robot and the detectable boundary j, ω_d represents the weight of f_j^D, ω_s represents the weight of f_j^S, and ω_r represents the weight of f_j^R.
9. The apparatus according to any one of claims 6 to 8,
the control module is further used for acquiring the intermediate point of the optimal detectable boundary;
finding a first traffic path from a current location of the intelligent robot to the intermediate point in the grid map;
and if the first passing path is found, controlling the intelligent robot to move to the intermediate point along the first passing path, and then detecting an unknown region from the optimal detectable boundary.
10. The apparatus of claim 9,
the control module is further used for acquiring the gravity center of a geometric figure formed by the optimal detectable boundary if the first passing path is not found;
finding a second traffic path from a current location of the intelligent robot to the center of gravity in the grid map;
and if the second passing path is found, controlling the intelligent robot to move to the gravity center along the second passing path, and then detecting an unknown area from the optimal detectable boundary.
11. An intelligent robot, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-5.
12. A computer program product, characterized in that the computer program product comprises a computer program stored on a computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to carry out the method according to any one of claims 1-5.
CN201911303185.2A 2019-12-17 2019-12-17 Picture construction method and device and intelligent robot Active CN110967029B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911303185.2A CN110967029B (en) 2019-12-17 2019-12-17 Picture construction method and device and intelligent robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911303185.2A CN110967029B (en) 2019-12-17 2019-12-17 Picture construction method and device and intelligent robot

Publications (2)

Publication Number Publication Date
CN110967029A CN110967029A (en) 2020-04-07
CN110967029B true CN110967029B (en) 2022-08-26

Family

ID=70034974

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911303185.2A Active CN110967029B (en) 2019-12-17 2019-12-17 Picture construction method and device and intelligent robot

Country Status (1)

Country Link
CN (1) CN110967029B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101000507A (en) * 2006-09-29 2007-07-18 浙江大学 Method for moving robot simultanously positioning and map structuring at unknown environment
JP2011180660A (en) * 2010-02-26 2011-09-15 Advanced Telecommunication Research Institute International Area-dividing device, area division program area division method and communication robot
CN102393741A (en) * 2011-08-25 2012-03-28 东南大学 Control system and control method for visual guiding mobile robot
JP2013117523A (en) * 2011-11-01 2013-06-13 Ge Aviation Systems Llc Methods for adjusting relative navigation system
CN104714551A (en) * 2015-03-23 2015-06-17 中国科学技术大学 Indoor area covering method suitable for vehicle type mobile robot
CN104898660A (en) * 2015-03-27 2015-09-09 中国科学技术大学 Indoor map building method for improving robot path planning efficiency
US9798327B2 (en) * 2016-01-08 2017-10-24 King Fahd University Of Petroleum And Minerals Apparatus and method for deploying sensors
CN108422670A (en) * 2018-03-09 2018-08-21 西安交通大学 A kind of paths planning method in discontinuous grid division three-dimensional point cloud face
CN109324626A (en) * 2018-12-18 2019-02-12 中新智擎科技有限公司 Robot and its short distance barrier-avoiding method, storage medium based on infrared distance measurement
CN109341707A (en) * 2018-12-03 2019-02-15 南开大学 Mobile robot three-dimensional map construction method under circumstances not known
CN109363585A (en) * 2018-12-17 2019-02-22 深圳市银星智能科技股份有限公司 Partition traversing method, sweeping method and sweeping robot thereof
CN109491394A (en) * 2018-12-17 2019-03-19 中新智擎科技有限公司 A kind of virtual barrier-avoiding method, device, storage medium and robot
CN109658432A (en) * 2018-12-27 2019-04-19 南京苏美达智能技术有限公司 A kind of the boundary generation method and system of mobile robot

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4608022A (en) * 1984-05-23 1986-08-26 Richard C. Bellofatto Air and sea navigational instrument simulation and instructional aid
SE0004466D0 (en) * 2000-12-04 2000-12-04 Abb Ab Mobile Robot
US9020639B2 (en) * 2009-08-06 2015-04-28 The Regents Of The University Of California Multimodal dynamic robotic systems
CN108507578B (en) * 2018-04-03 2021-04-30 珠海市一微半导体有限公司 Navigation method of robot
CN110146090B (en) * 2019-06-26 2023-04-18 张收英 Robot right walking navigation method and robot
CN110531760B (en) * 2019-08-16 2022-09-06 广东工业大学 Boundary exploration autonomous mapping method based on curve fitting and target point neighborhood planning


Also Published As

Publication number Publication date
CN110967029A (en) 2020-04-07

Similar Documents

Publication Publication Date Title
KR102273559B1 (en) Method, apparatus, and computer readable storage medium for updating electronic map
US11747477B2 (en) Data collecting method and system
CN107436148B (en) Robot navigation method and device based on multiple maps
US11709058B2 (en) Path planning method and device and mobile device
CN109214248B (en) Method and device for identifying laser point cloud data of unmanned vehicle
KR102106359B1 (en) Method and apparatus for identifying intersections in an electronic map
CN110956161A (en) Autonomous map building method and device and intelligent robot
US9207678B2 (en) Method and apparatus for constructing map for mobile robot
CN109163722B (en) Humanoid robot path planning method and device
CN107491068B (en) Mobile robot path planning method and device and path planning equipment
CN105652876A (en) Mobile robot indoor route planning method based on array map
CN111609852A (en) Semantic map construction method, sweeping robot and electronic equipment
CN109341698B (en) Path selection method and device for mobile robot
CN110705385B (en) Method, device, equipment and medium for detecting angle of obstacle
CN110764518B (en) Underwater dredging robot path planning method and device, robot and storage medium
CN111679664A (en) Three-dimensional map construction method based on depth camera and sweeping robot
JP2016149090A (en) Autonomous mobile device, autonomous mobile system, autonomous mobile method and program
CN111679661A (en) Semantic map construction method based on depth camera and sweeping robot
CN111609853A (en) Three-dimensional map construction method, sweeping robot and electronic equipment
CN111263308A (en) Positioning data acquisition method and system
CN104034338B (en) A kind of dynamic navigation method and device
CN108521809A (en) Obstacle information reminding method, system, unit and recording medium
CN111609854A (en) Three-dimensional map construction method based on multiple depth cameras and sweeping robot
CN110967029B (en) Picture construction method and device and intelligent robot
US20220187845A1 (en) Method for estimating positioning of moving object by using big cell grid map, recording medium in which program for implementing same is stored, and computer program stored in medium in order to implement same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230418

Address after: 518000 C5, college industrialization complex building, Shenzhen Virtual University Park, No. 2, Yuexing Third Road, high tech Zone community, Yuehai street, Nanshan District, Shenzhen, Guangdong

Patentee after: Shenzhen Weidang Life Technology Co.,Ltd.

Address before: 518000 floor 2, torch venture building, No. 22, Yanshan Road, Nanshan District, Shenzhen, Guangdong

Patentee before: INTERNATIONAL INTELLIGENT MACHINES Co.,Ltd.
