CN111290393A - Driving control method and device, intelligent robot and computer readable storage medium - Google Patents

Driving control method and device, intelligent robot and computer readable storage medium

Info

Publication number
CN111290393A
CN111290393A (application CN202010142799.3A)
Authority
CN
China
Prior art keywords
point cloud
dimensional
map
intelligent robot
preset path
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010142799.3A
Other languages
Chinese (zh)
Inventor
王恒
卜大鹏
陈侃
秦宝星
程昊天
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Gaussian Automation Technology Development Co Ltd
Original Assignee
Shanghai Gaussian Automation Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Gaussian Automation Technology Development Co Ltd filed Critical Shanghai Gaussian Automation Technology Development Co Ltd
Priority to CN202010142799.3A priority Critical patent/CN111290393A/en
Publication of CN111290393A publication Critical patent/CN111290393A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0248 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means in combination with a laser
    • G05D1/0255 Control of position or course in two dimensions specially adapted to land vehicles using acoustic signals, e.g. ultrasonic signals
    • G05D1/0268 Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
    • G05D1/0274 Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means using mapping information stored in a memory device
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29 Geographical information databases
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • General Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Optics & Photonics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Acoustics & Sound (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The application discloses a driving control method for an intelligent robot. The intelligent robot includes a body and a functional part mounted on the body at an installation height. The driving control method includes: acquiring a real-time two-dimensional or three-dimensional point cloud within a preset range around the intelligent robot; generating a first map from the real-time point cloud; generating a second map from those points in the real-time point cloud whose height relative to the driving surface is not higher than the installation height; judging whether the body can pass through the first map along a preset path and whether the functional part can pass through the second map along the preset path; and, if so, controlling the intelligent robot to travel along the preset path. The application also discloses a driving control device, an intelligent robot, and a computer-readable storage medium. Because the body and the functional part are checked separately against the first map and the second map, the intelligent robot can avoid obstacles more reliably, and the quality with which it completes its tasks is improved.

Description

Driving control method and device, intelligent robot and computer readable storage medium
Technical Field
The present disclosure relates to the field of intelligent robot control technologies, and more particularly, to a driving control method, a driving control apparatus, an intelligent robot, and a computer-readable storage medium.
Background
An intelligent robot needs to avoid obstacles while driving. Conventionally, the whole robot is treated as a single body when checking whether it will collide with an obstacle. In some working scenes, however, the driving passage is narrow or suspended obstacles are present, and because the outline of the robot body differs in size from that of the functional part, judging the robot as a whole easily leads to the misjudgment that a scene the robot could in fact pass through is impassable, which degrades the quality of the tasks the robot executes.
Disclosure of Invention
In view of the above, the present invention is directed to solving, at least to some extent, one of the problems in the related art. To this end, embodiments of the present application provide a travel control method, a travel control device, an intelligent robot, and a computer-readable storage medium.
The driving control method of the embodiment of the application is applied to an intelligent robot. The intelligent robot includes a body and a functional part mounted on the body at an installation height. The driving control method includes the following steps: acquiring a real-time two-dimensional or three-dimensional point cloud within a preset range around the intelligent robot; generating a first map from the real-time point cloud; generating a second map from those points in the real-time point cloud whose height relative to the driving surface is not higher than the installation height; judging whether the body can pass through the first map along a preset path and whether the functional part can pass through the second map along the preset path; and, if so, controlling the intelligent robot to travel along the preset path.
In the embodiment of the application, the intelligent robot includes a body and a functional part. A real-time two-dimensional or three-dimensional point cloud within a preset range around the robot is first acquired; a first map is then generated from the whole point cloud, and a second map from the points not higher than the installation height of the functional part. While driving, the robot judges whether the body can pass through the first map along a preset path and whether the functional part can pass through the second map along the same path, and if so, the robot is controlled to travel along that path. Because the body and the functional part are judged separately in the first map and the second map, a passage the robot could in fact traverse is no longer misjudged as impassable merely because the body and the functional part differ in outline size and the robot is evaluated as a whole. At the same time, the robot's drivable area is enlarged, it can execute its tasks better, and the quality with which it completes them is improved.
In some embodiments, acquiring the real-time two-dimensional or three-dimensional point cloud within the preset range around the intelligent robot includes: detecting a two-dimensional or three-dimensional point cloud within the preset range around the intelligent robot; filtering out noise points and points on the driving surface from the detected point cloud to form an intermediate point cloud; and converting the intermediate point cloud into the world coordinate system to form the real-time two-dimensional or three-dimensional point cloud.
In this embodiment, a two-dimensional or three-dimensional point cloud within the preset range around the robot is detected; noise points and points on the driving surface are then filtered out to form an intermediate point cloud, which is converted into the world coordinate system to form the real-time point cloud. The acquired real-time point cloud is therefore more accurate, and so is the map the robot builds from it.
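As a concrete illustration of this preprocessing step, here is a minimal sketch, not the patent's implementation: the function name, the (x, y, yaw) pose format, and the ground threshold `ground_eps` are all assumptions. Points on the driving surface are dropped by height and the remainder is transformed into the world frame with a 2-D rigid motion:

```python
import numpy as np

def filter_and_transform(points, robot_pose, ground_eps=0.03):
    """points: (N, 3) array in the robot frame; robot_pose: (x, y, yaw) in the world frame.

    Drops points on the driving surface (z close to zero) and maps the rest
    into the world coordinate system. A real system would also remove noise,
    e.g. with a statistical or radius outlier filter.
    """
    pts = points[points[:, 2] > ground_eps]          # crude driving-surface removal
    x, y, yaw = robot_pose
    c, s = np.cos(yaw), np.sin(yaw)
    world = pts.copy()
    world[:, 0] = c * pts[:, 0] - s * pts[:, 1] + x  # rotate into world frame, then translate
    world[:, 1] = s * pts[:, 0] + c * pts[:, 1] + y
    return world
```

A noise filter would normally run before the transform; it is omitted here to keep the sketch short.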
In some embodiments, generating the first map from the real-time two-dimensional or three-dimensional point cloud includes: projecting the real-time point cloud onto the driving surface to generate a first point cloud; generating the first map from the first point cloud; and marking the first point cloud in a first color within the first map.
In this embodiment, the real-time two-dimensional or three-dimensional point cloud is projected onto the driving surface to generate the first point cloud, the first map is generated from it, and the first point cloud is marked in a first color within the first map. Because the first map is two-dimensional, it reduces the robot's memory footprint and processor load, and marking the first point cloud in a first color makes it easy for the robot to recognize and for a user to distinguish.
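A hedged sketch of such a projection follows; the grid resolution, grid size, and the use of the value 1 to stand in for the "first color" are illustrative assumptions. All points, whatever their height, are flattened onto the driving surface and rasterized into a 2-D occupancy grid:

```python
import numpy as np

def project_to_grid(points, resolution=0.05, size=200):
    """Project world-frame points (N, 3) onto the driving surface as a 2-D grid.

    Every point marks the cell below it; the value 1 stands in for the
    "first color" of the first map. The grid is centered on the world origin.
    """
    grid = np.zeros((size, size), dtype=np.uint8)
    ix = (points[:, 0] / resolution).astype(int) + size // 2  # world x -> column
    iy = (points[:, 1] / resolution).astype(int) + size // 2  # world y -> row
    ok = (ix >= 0) & (ix < size) & (iy >= 0) & (iy < size)    # keep in-bounds cells only
    grid[iy[ok], ix[ok]] = 1
    return grid
```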
In some embodiments, generating the second map from the points in the real-time two-dimensional or three-dimensional point cloud whose height relative to the driving surface is not higher than the installation height includes: projecting those points onto the driving surface to generate a second point cloud; generating the second map from the second point cloud; and marking the second point cloud in a second color within the second map.
In this embodiment, the points in the real-time two-dimensional or three-dimensional point cloud whose height relative to the driving surface is not higher than the installation height are projected onto the driving surface to generate the second point cloud, the second map is then generated from it, and finally the second point cloud is marked in a second color within the second map.
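The second map differs from the first only in the height filter applied before projection. A sketch under the same illustrative assumptions (grid parameters, and the value 2 standing in for the "second color"):

```python
import numpy as np

def build_second_map(points, install_height, resolution=0.05, size=200):
    """Rasterize only the points at or below the functional part's installation height."""
    low = points[points[:, 2] <= install_height]  # keep points relevant to the functional part
    grid = np.zeros((size, size), dtype=np.uint8)
    ix = (low[:, 0] / resolution).astype(int) + size // 2
    iy = (low[:, 1] / resolution).astype(int) + size // 2
    ok = (ix >= 0) & (ix < size) & (iy >= 0) & (iy < size)
    grid[iy[ok], ix[ok]] = 2  # 2 stands in for the "second color"
    return grid
```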
In some embodiments, judging whether the body can pass through the first map along the preset path and whether the functional part can pass through the second map along the preset path includes: judging whether the body would interfere with a first-color area when driving along the preset path; and judging whether the functional part would interfere with a second-color area when driving along the preset path.
In this embodiment, by judging whether the body interferes with a first-color area and whether the functional part interferes with a second-color area, the robot can avoid obstacles without colliding with them and therefore drives more safely.
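Interference can be checked by sweeping a footprint along the path and testing it against the marked cells. The sketch below uses a square footprint and a path given in cell coordinates; both are simplifying assumptions for illustration, not the patent's method:

```python
import numpy as np

def path_is_clear(grid, path_cells, half_width):
    """True if a square footprint of (2*half_width + 1) cells centered on every
    path cell touches no marked (non-zero) cell of the given map."""
    size = grid.shape[0]
    for cx, cy in path_cells:
        x0, x1 = max(cx - half_width, 0), min(cx + half_width + 1, size)
        y0, y1 = max(cy - half_width, 0), min(cy + half_width + 1, size)
        if grid[y0:y1, x0:x1].any():  # footprint overlaps a colored area
            return False
    return True
```

The body would be tested with its own footprint against the first map, and the functional part with its (possibly different) footprint against the second map; the preset path is feasible only if both checks pass.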
In some embodiments, the driving control method further includes: if the body cannot pass through the first map along the preset path, and/or the functional part cannot pass through the second map along the preset path, adjusting the driving path of the intelligent robot so that the body avoids the first-color areas and the functional part avoids the second-color areas.
In this embodiment, if the body cannot pass through the first map along the preset path and/or the functional part cannot pass through the second map along the preset path, the robot adjusts its driving path so that the body avoids the first-color areas and the functional part avoids the second-color areas. The robot can therefore adjust its path in real time while driving, avoid colliding with obstacles, and drive safely.
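The patent does not fix a replanning algorithm. As one hedged illustration, a breadth-first search over the free cells of the marked grid yields a shortest detour in cell space; cell coordinates and 4-connectivity are assumptions of this sketch:

```python
from collections import deque

import numpy as np

def replan(grid, start, goal):
    """Shortest 4-connected path over free (zero) cells, or None if fully blocked.

    start and goal are (x, y) cell coordinates; grid is indexed grid[y, x].
    """
    size = grid.shape[0]
    prev = {start: None}          # predecessor map doubles as the visited set
    queue = deque([start])
    while queue:
        cur = queue.popleft()
        if cur == goal:           # walk the predecessor chain back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if 0 <= nx < size and 0 <= ny < size and nxt not in prev and grid[ny, nx] == 0:
                prev[nxt] = cur
                queue.append(nxt)
    return None
```

In practice the body would replan on the first map and the functional part on the second map, taking the union of the constraints.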
In some embodiments, the driving control method further includes: controlling the intelligent robot to drive around a working area to obtain an initial two-dimensional or three-dimensional point cloud of the working area; generating a third map from the initial point cloud; and generating the preset path from the third map.
In this embodiment, by controlling the robot to drive around the working area, an initial two-dimensional or three-dimensional point cloud of the working area is obtained; a third map is then generated from it, and the preset path is generated from the third map. Because the robot plans the preset path from its own working area, it can follow that path and drive safely.
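The patent leaves open how the preset path is derived from the third map. For a robot that must cover a working area, one common and purely illustrative choice is a boustrophedon (back-and-forth) sweep over the free cells; the cell-grid path representation is an assumption of this sketch:

```python
def coverage_path(width, height):
    """Boustrophedon sweep over a width x height cell grid: left-to-right on
    even rows, right-to-left on odd rows, visiting every cell exactly once."""
    path = []
    for y in range(height):
        xs = range(width) if y % 2 == 0 else range(width - 1, -1, -1)
        path.extend((x, y) for x in xs)
    return path
```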
The driving control device of the embodiment of the application is applied to an intelligent robot that includes a body and a functional part. The driving control device includes: an acquisition module for acquiring a real-time two-dimensional or three-dimensional point cloud within a preset range around the intelligent robot; a first mapping module for generating a first map from the real-time point cloud; a second mapping module for generating a second map from those points in the real-time point cloud whose height relative to the driving surface is not higher than the installation height; a judging module for judging whether the body can pass through the first map along a preset path and whether the functional part can pass through the second map along the preset path; and a control module for controlling the intelligent robot to travel along the preset path when both judgments succeed.
In the driving control device of the embodiment of the application, the intelligent robot includes a body and a functional part. A real-time two-dimensional or three-dimensional point cloud within a preset range around the robot is first acquired; a first map is then generated from the whole point cloud, and a second map from the points not higher than the installation height of the functional part. While driving, the device judges whether the body can pass through the first map along a preset path and whether the functional part can pass through the second map along the same path, and if so, the robot is controlled to travel along that path. Because the body and the functional part are judged separately in the first map and the second map, a passage the robot could in fact traverse is no longer misjudged as impassable merely because the body and the functional part differ in outline size and the robot is evaluated as a whole. At the same time, the robot's drivable area is enlarged, it can execute its tasks better, and the quality with which it completes them is improved.
In some embodiments, the acquisition module is further configured to: detect a two-dimensional or three-dimensional point cloud within the preset range around the intelligent robot; filter out noise points and points on the driving surface from the detected point cloud to form an intermediate two-dimensional or three-dimensional point cloud; and convert the intermediate point cloud into the world coordinate system to form the real-time two-dimensional or three-dimensional point cloud.
In this embodiment, a two-dimensional or three-dimensional point cloud within the preset range around the robot is detected; noise points and points on the driving surface are then filtered out to form an intermediate point cloud, which is converted into the world coordinate system to form the real-time point cloud.
In some embodiments, the first mapping module is further configured to: project the real-time two-dimensional or three-dimensional point cloud onto the driving surface to generate a first point cloud; generate the first map from the first point cloud; and mark the first point cloud in a first color within the first map.
In this embodiment, the real-time two-dimensional or three-dimensional point cloud is projected onto the driving surface to generate the first point cloud, the first map is generated from it, and the first point cloud is marked in a first color within the first map. Because the first map is two-dimensional, it reduces the robot's memory footprint and processor load, and marking the first point cloud in a first color makes it easy for the robot to recognize and for a user to distinguish.
In some embodiments, the second mapping module is further configured to: project the points in the real-time two-dimensional or three-dimensional point cloud whose height relative to the driving surface is not higher than the installation height onto the driving surface to generate a second point cloud; generate the second map from the second point cloud; and mark the second point cloud in a second color within the second map.
In this embodiment, the points in the real-time two-dimensional or three-dimensional point cloud whose height relative to the driving surface is not higher than the installation height are projected onto the driving surface to generate the second point cloud, the second map is then generated from it, and finally the second point cloud is marked in a second color within the second map.
In some embodiments, the judging module is further configured to: judge whether the body would interfere with a first-color area when driving along the preset path; and judge whether the functional part would interfere with a second-color area when driving along the preset path.
In this embodiment, by judging whether the body interferes with a first-color area and whether the functional part interferes with a second-color area, the robot can avoid obstacles without colliding with them and therefore drives more safely.
In some embodiments, the control module is further configured to: if the body cannot pass through the first map along the preset path, and/or the functional part cannot pass through the second map along the preset path, adjust the driving path of the intelligent robot so that the body avoids the first-color areas and the functional part avoids the second-color areas.
In this embodiment, if the body cannot pass through the first map along the preset path and/or the functional part cannot pass through the second map along the preset path, the robot adjusts its driving path so that the body avoids the first-color areas and the functional part avoids the second-color areas. The robot can therefore adjust its path in real time while driving, avoid colliding with obstacles, and drive safely.
In some embodiments, the control module is further configured to: control the intelligent robot to drive around a working area to obtain an initial two-dimensional or three-dimensional point cloud of the working area; generate a third map from the initial point cloud; and generate the preset path from the third map.
In this embodiment, by controlling the robot to drive around the working area, an initial two-dimensional or three-dimensional point cloud of the working area is obtained; a third map is then generated from it, and the preset path is generated from the third map. Because the robot plans the preset path from its own working area, it can follow that path and drive safely.
The intelligent robot of the embodiment of the application includes: one or more processors; a memory; and one or more programs, where the one or more programs are stored in the memory and executed by the one or more processors, and the programs include instructions for performing the driving control method of any of the above embodiments.
In the intelligent robot of the embodiment of the application, the robot includes a body and a functional part. A real-time two-dimensional or three-dimensional point cloud within a preset range around the robot is first acquired; a first map is then generated from the whole point cloud, and a second map from the points not higher than the installation height of the functional part. While driving, the robot judges whether the body can pass through the first map along a preset path and whether the functional part can pass through the second map along the same path, and if so, it is controlled to travel along that path. Because the body and the functional part are judged separately in the first map and the second map, a passage the robot could in fact traverse is no longer misjudged as impassable merely because the body and the functional part differ in outline size and the robot is evaluated as a whole. At the same time, the robot's drivable area is enlarged, it can execute its tasks better, and the quality with which it completes them is improved.
The computer-readable storage medium of the embodiments of the present application stores instructions that, when executed by one or more processors, cause the processors to perform the driving control method of any of the above embodiments.
In the computer-readable storage medium of the embodiment of the application, the intelligent robot includes a body and a functional part. A real-time two-dimensional or three-dimensional point cloud within a preset range around the robot is first acquired; a first map is then generated from the whole point cloud, and a second map from the points not higher than the installation height of the functional part. While driving, the robot judges whether the body can pass through the first map along a preset path and whether the functional part can pass through the second map along the same path, and if so, it is controlled to travel along that path. Because the body and the functional part are judged separately in the first map and the second map, a passage the robot could in fact traverse is no longer misjudged as impassable merely because the body and the functional part differ in outline size and the robot is evaluated as a whole. At the same time, the robot's drivable area is enlarged, it can execute its tasks better, and the quality with which it completes them is improved.
Additional aspects and advantages of embodiments of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of embodiments of the present application.
Drawings
The above and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic flowchart of a driving control method according to some embodiments of the present application;
FIG. 2 is a schematic structural diagram of an intelligent robot according to some embodiments of the present application;
FIG. 3 is a schematic block diagram of an intelligent robot according to some embodiments of the present application;
FIG. 4 is a schematic block diagram of a driving control device according to some embodiments of the present application;
FIG. 5 is a schematic flowchart of a driving control method according to some embodiments of the present application;
FIG. 6 is a schematic diagram of a scenario of a driving control method according to some embodiments of the present application;
FIG. 7 is a schematic diagram of a scenario of a driving control method according to some embodiments of the present application;
FIG. 8 is a schematic flowchart of a driving control method according to some embodiments of the present application;
FIG. 9 is a schematic diagram of a scenario of a driving control method according to some embodiments of the present application;
FIG. 10 is a schematic flowchart of a driving control method according to some embodiments of the present application;
FIG. 11 is a schematic diagram of a scenario of a driving control method according to some embodiments of the present application;
FIG. 12 is a schematic diagram of a scenario of a driving control method according to some embodiments of the present application;
FIG. 13 is a schematic flowchart of a driving control method according to some embodiments of the present application;
FIG. 14 is a schematic diagram of a scenario of a driving control method according to some embodiments of the present application;
FIG. 15 is a schematic diagram of a scenario of a driving control method according to some embodiments of the present application;
FIG. 16 is a schematic flowchart of a driving control method according to some embodiments of the present application;
FIG. 17 is a schematic diagram of a scenario of a driving control method according to some embodiments of the present application;
FIG. 18 is a schematic flowchart of a driving control method according to some embodiments of the present application;
FIG. 19 is a schematic flowchart of a driving control method according to some embodiments of the present application;
FIG. 20 is a schematic block diagram of a computer-readable storage medium according to some embodiments of the present application.
Detailed Description
Embodiments of the present application will be further described below with reference to the accompanying drawings. The same or similar reference numbers in the drawings identify the same or similar elements or elements having the same or similar functionality throughout.
In addition, the embodiments of the present application described below in conjunction with the accompanying drawings are exemplary and are only for the purpose of explaining the embodiments of the present application, and are not to be construed as limiting the present application.
In this application, unless expressly stated or limited otherwise, a first feature being "on" or "under" a second feature may mean that the two features are in direct contact, or in indirect contact through an intermediate medium. Moreover, a first feature "on", "over", or "above" a second feature may be directly above or obliquely above the second feature, or may simply mean that the first feature is at a higher level than the second feature. A first feature "under", "below", or "beneath" a second feature may be directly below or obliquely below the second feature, or may simply mean that the first feature is at a lower level than the second feature.
Referring to fig. 1 and 2, the driving control method according to the embodiments of the present application is applied to an intelligent robot 100. The intelligent robot 100 includes a body 40 and a functional element 50 mounted on the body 40 at an installation height H. The driving control method includes the steps of:
S010: acquiring a real-time two-dimensional or three-dimensional point cloud within a preset range around the intelligent robot 100;
S020: generating a first map according to the real-time two-dimensional or three-dimensional point cloud;
S030: generating a second map according to the points of the real-time two-dimensional or three-dimensional point cloud whose height relative to the driving surface M is not higher than the installation height H;
S040: judging whether the body 40 can pass through the first map along a preset path and whether the functional element 50 can pass through the second map along the preset path; and
S050: if yes, controlling the intelligent robot 100 to travel along the preset path.
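The five steps above can be summarized in a short sketch. Everything below is illustrative rather than taken from the application: the grid resolution `CELL`, the map size, the use of the value 254 for untouchable cells, and the square footprint windows are all our assumptions, and a real implementation would inflate obstacles and track the robot pose far more carefully.

```python
import numpy as np

CELL = 0.05  # grid resolution in metres (assumed)

def build_occupancy_map(points, size=100):
    """Project a point cloud onto the driving surface M as a 2-D occupancy grid."""
    grid = np.zeros((size, size), dtype=np.uint8)
    ij = (points[:, :2] / CELL).astype(int)          # (x, y) -> (col, row)
    ok = (ij >= 0).all(axis=1) & (ij < size).all(axis=1)
    ij = ij[ok]
    grid[ij[:, 1], ij[:, 0]] = 254                   # 254: a point that must not be touched
    return grid

def path_is_clear(grid, path, half_width):
    """True if a square footprint of the given half-width clears `grid` along `path`."""
    r = int(np.ceil(half_width / CELL))
    for x, y in path:
        i, j = int(y / CELL), int(x / CELL)
        window = grid[max(i - r, 0):i + r + 1, max(j - r, 0):j + r + 1]
        if window.any():
            return False
    return True

def can_travel(cloud, mount_height, path, body_half_width, tool_half_width):
    """Steps S020-S040: first map from all points, second map from the points
    no higher than the installation height H, then check body and functional element."""
    first_map = build_occupancy_map(cloud)                                  # S020
    second_map = build_occupancy_map(cloud[cloud[:, 2] <= mount_height])    # S030
    return (path_is_clear(first_map, path, body_half_width) and             # S040: body 40
            path_is_clear(second_map, path, tool_half_width))               # S040: element 50
```

With this split, a wide functional element passing under an obstacle mounted higher than H is allowed, because only the narrower body is checked against the full first map, while a low obstacle still blocks the functional element through the second map.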
Referring to fig. 3, the intelligent robot 100 according to the embodiments of the present application includes one or more processors 10, a memory 20, and one or more programs, where the one or more programs are stored in the memory 20 and executed by the one or more processors 10, and the programs include instructions for executing the driving control method of the embodiments of the present application. When executing the programs, the processor 10 may be configured to implement the driving control method of any embodiment of the present application, that is, to implement steps S010 through S050: the processor 10 may be configured to acquire a real-time two-dimensional or three-dimensional point cloud within a preset range around the intelligent robot 100; generate a first map according to the real-time two-dimensional or three-dimensional point cloud; generate a second map according to the points whose height relative to the driving surface M is not higher than the installation height H; judge whether the body 40 can pass through the first map along a preset path and whether the functional element 50 can pass through the second map along the preset path; and, if yes, control the intelligent robot 100 to travel along the preset path.
Referring to fig. 4, the driving control apparatus 200 according to the embodiments of the present application includes an obtaining module 210, a first mapping module 220, a second mapping module 230, a determining module 240, and a control module 250, which are respectively used to implement steps S010, S020, S030, S040, and S050. That is, the obtaining module 210 is configured to acquire a real-time two-dimensional or three-dimensional point cloud within a preset range around the intelligent robot 100; the first mapping module 220 is configured to generate a first map according to the real-time two-dimensional or three-dimensional point cloud; the second mapping module 230 is configured to generate a second map according to the points whose height relative to the driving surface M is not higher than the installation height H; the determining module 240 is configured to judge whether the body 40 can pass through the first map along the preset path and whether the functional element 50 can pass through the second map along the preset path; and if the determining module 240 outputs yes, the control module 250 executes step S050 and controls the intelligent robot 100 to travel along the preset path.
In the driving control method, the driving control apparatus 200, and the intelligent robot 100 of the embodiments of the present application, the intelligent robot 100 includes a body 40 and a functional element 50. The method first acquires a real-time two-dimensional or three-dimensional point cloud within a preset range around the intelligent robot 100, generates a first map from the complete point cloud, and generates a second map from the points no higher than the installation height H of the functional element 50. It then judges whether the body 40 can pass through the first map along a preset path and whether the functional element 50 can pass through the second map along that path; if both judgments succeed, the intelligent robot 100 is controlled to travel along the preset path. Because the body 40 and the functional element 50 are evaluated separately in the first map and the second map, the robot avoids the situation in which a passable route is rejected merely because obstacle avoidance treats the robot as a single whole despite the differing external dimensions of the body 40 and the functional element 50. At the same time, the travelable area of the intelligent robot 100 is enlarged, so the robot can perform its tasks better and complete them with higher quality.
The intelligent robot 100 may be an industrial robot, an agricultural robot, a home robot, a service robot, a cleaning robot, or the like, which is not limited herein. Further, the cleaning robot may be a sweeper, a floor scrubber, a vacuum cleaner, or a similar intelligent robot 100. The intelligent robot 100 may also include elements such as a communication interface 30 and a cleaning implement, and may be used to clean surfaces such as wooden floors, floor tiles, pavement, or concrete floors.
Specifically, the intelligent robot 100 includes a body 40 and a functional element 50. The body 40 refers to the main body of the intelligent robot 100, and the functional element 50 refers to an executing device of the intelligent robot 100; for example, when the intelligent robot 100 is a floor scrubber, the functional element 50 may be a brush, a squeegee, a dust-pushing part, or the like, which is not limited herein. The functional element 50 is mounted on the body 40 at a certain installation height H; for example, the functional element 50 may be mounted at a height of 10, 20, 30, 35, 40, or 50 centimeters on the body 40, which is not limited herein. The external dimensions of the body 40 and the functional element 50 differ: the width of the body 40 may be larger or smaller than the width of the functional element 50. In one example, the width of the functional element 50 is greater than the width of the body 40.
In step S010, a real-time two-dimensional or three-dimensional point cloud within a preset range around the intelligent robot 100 is acquired. The preset range may be set by the user, for example, the area within 2, 3, 4, 5, 8, or 10 meters of the intelligent robot 100, which is not limited herein. The acquired point cloud is real-time two-dimensional or three-dimensional point cloud data, that is, data detected in real time by elements such as the sensors on the intelligent robot 100, so the acquired point cloud data are accurate and up to date. It is to be understood that the acquired point cloud may be either a two-dimensional point cloud or a three-dimensional point cloud, which is not limited herein.
In step S020, the first map is generated according to the real-time two-dimensional or three-dimensional point cloud. It can be understood that the first map is generated from all of the acquired real-time point cloud data, so the first map contains every detected point; its data are therefore accurate and suitable for navigation and obstacle avoidance of the body 40 of the intelligent robot 100. The generation of the first map from the real-time two-dimensional or three-dimensional point cloud may be controlled by the first mapping module 220.
In step S030, a second map is generated according to the points of the real-time two-dimensional or three-dimensional point cloud whose height relative to the driving surface M is not higher than the installation height H. Since the functional element 50 is mounted on the body 40 at the installation height H, the points not higher than the installation height H are precisely the points between the functional element 50 and the driving surface M. The second map contains only these points, which makes it suitable for navigation and obstacle avoidance of the functional element 50.
Step S040 judges whether the body 40 can pass through the first map along the preset path and whether the functional element 50 can pass through the second map along the preset path; if the result of step S040 is yes, the control module 250 executes step S050 and controls the intelligent robot 100 to travel along the preset path. Because the first map contains all of the points, judging the body 40 against the first map makes the judgment of the body 40 during travel more accurate; meanwhile, the second map contains only the points below the installation height H of the functional element 50, so judging the functional element 50 against the second map more accurately determines whether the functional element 50 will collide with an obstacle during operation. By judging the body 40 and the functional element 50 separately against the first map and the second map, the method avoids the situation in which the intelligent robot 100 is judged as a single whole and cannot pass merely because the external dimensions of the body 40 and the functional element 50 differ. The travelable area of the intelligent robot 100 is thereby enlarged, the robot travels more accurately and safely, and it can clean a larger area with a better cleaning effect.
Further, in some embodiments, referring to fig. 5, step S010 includes the steps of:
S011: detecting a two-dimensional or three-dimensional point cloud within the preset range around the intelligent robot 100;
S012: filtering out the noise points in the detected two-dimensional or three-dimensional point cloud and the points on the driving surface M to form an intermediate point cloud; and
S013: converting the intermediate point cloud into the world coordinate system to form the real-time two-dimensional or three-dimensional point cloud.
In step S011, the sensor on the intelligent robot 100 detects a two-dimensional or three-dimensional point cloud within the preset range around it; the sensor may be an ultrasonic sensor, a laser sensor, an image sensor, or the like, which is not limited herein. The detected point cloud may contain noise points and may also include points on the driving surface M of the intelligent robot 100. In step S012, a data filtering node on the intelligent robot 100 therefore filters out the noise points and the points on the driving surface M to form an intermediate point cloud, so that the resulting point cloud data are more accurate and the map established from the real-time two-dimensional or three-dimensional point cloud is more reliable. Of course, other nodes on the intelligent robot 100 may also execute step S012, which is not limited herein.
Further, since the two-dimensional or three-dimensional point cloud is obtained by elements such as the sensor, it is initially expressed in the coordinate system of those elements. To generate maps and unify the data, the point cloud in the sensor coordinate system must be converted into the world coordinate system. In step S013, the intermediate point cloud formed in step S012 is converted into the world coordinate system to form the required real-time two-dimensional or three-dimensional point cloud, which is then in the same coordinate system as the intelligent robot 100. The final real-time point cloud is therefore more accurate, which facilitates obstacle avoidance and navigation while the intelligent robot 100 travels.
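Steps S011 to S013 can be sketched as below. This is a toy version: the ground threshold, neighbour count, and outlier factor are invented, the driving surface is assumed level in the sensor frame (a plane fit would normally be used), and the O(n^2) distance matrix stands in for the k-d-tree-based statistical outlier removal a real filtering node would use.

```python
import numpy as np

def to_world(points_sensor, T_world_sensor, ground_eps=0.03, k=8, noise_factor=2.0):
    """S011-S013 sketch: drop driving-surface points and sparse noise, then
    transform the remainder from the sensor frame to the world frame."""
    # S012a: remove returns on the driving surface M (near z = 0 in the sensor
    # frame, assuming a level sensor)
    pts = points_sensor[points_sensor[:, 2] > ground_eps]
    # S012b: crude statistical outlier removal - discard points whose mean
    # distance to their k nearest neighbours is unusually large
    if len(pts) > k:
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
        mean_knn = np.sort(d, axis=1)[:, 1:k + 1].mean(axis=1)  # skip self-distance
        keep = mean_knn < mean_knn.mean() + noise_factor * mean_knn.std()
        pts = pts[keep]
    # S013: homogeneous transform into the world coordinate system
    homo = np.hstack([pts, np.ones((len(pts), 1))])
    return (T_world_sensor @ homo.T).T[:, :3]
```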
The obtaining module 210 can be further configured to execute step S011, step S012, and step S013, and the processor 10 can be further configured to execute step S011, step S012, and step S013.
Referring to fig. 6 to 8, in some embodiments, step S020 includes the steps of:
S021: projecting the real-time two-dimensional or three-dimensional point cloud D1 onto the driving surface M to generate a first point cloud D11;
S022: generating a first map T1 according to the first point cloud D11; and
S023: marking the first point cloud D11 as a first color Y1 within the first map T1.
Specifically, the real-time two-dimensional or three-dimensional point cloud D1 acquired in step S010 includes all of the point cloud data acquired in real time. In step S021, the point cloud D1 is projected onto the driving surface M to generate a first point cloud D11; it can be understood that the first point cloud D11 is the two-dimensional point cloud obtained by projecting D1 onto the driving surface M. In step S022, the first map T1 is generated according to the first point cloud D11, so the first map T1 is a two-dimensional map; keeping the first map T1 two-dimensional reduces the memory footprint and the processor 10 occupancy of the intelligent robot 100 in subsequent calculation and storage. Finally, the first point cloud D11 is marked as a first color Y1 in the first map T1, which makes it easy for the intelligent robot 100 to identify and improves driving safety. It will be appreciated that an area of the first color Y1 indicates an obstacle within that area. The first color Y1 may be blue, black, purple, red, or the like, which is not limited herein.
Referring to fig. 7, the first map T1 further includes a third color Y3, which may be blue, light blue, gray, sky blue, red, or the like. The third color Y3 marks an area where the intelligent robot 100 might collide while driving; the intelligent robot 100 preferably does not enter the area of the third color Y3, or must drive with increased caution when entering it. The remaining area of the first map T1, excluding the first color Y1 and the third color Y3, that is, the area of the fourth color Y4 (shown in white in fig. 7), is the area where the intelligent robot 100 can travel normally.
In one embodiment, in the first map T1, the first color Y1 is purple, the third color Y3 is sky blue, and the fourth color Y4 is black, so that the colors are different from one area to another, thereby facilitating better recognition by the intelligent robot 100 and facilitating understanding of the first map T1 by the user.
The first mapping module 220 can be further configured to perform steps S021, S022, and S023, and the processor 10 can be further configured to perform steps S021, S022, and S023.
Referring to fig. 6, 9 and 10, in some embodiments, step S030 includes the steps of:
S031: projecting the points D2 of the real-time two-dimensional or three-dimensional point cloud D1 whose height relative to the driving surface M is not higher than the installation height H onto the driving surface M to generate a second point cloud D21;
S032: generating a second map T2 according to the second point cloud D21; and
S033: marking the second point cloud D21 as a second color Y2 within the second map T2.
Specifically, a height relative to the driving surface M that is not higher than the installation height H means a height in the range between the driving surface M and the installation height H of the functional element 50. The driving surface M may be a floor, a wall surface, a stair surface, or the like, which is not limited herein. Referring again to fig. 6, the real-time two-dimensional or three-dimensional point cloud D1 contains points at many heights; the points D2 between the driving surface M and the installation height H are projected onto the driving surface M to generate the second point cloud D21. It will be appreciated that the second point cloud D21 represents the obstacles whose height relative to the driving surface M is not higher than the installation height H.
Further, in step S032, the second map T2 is generated by combining the data of the second point cloud D21. Since the second point cloud D21 is the projection of the points D2 onto the driving surface M, it is a two-dimensional point cloud, and the generated second map T2 is a two-dimensional map. Compared with a three-dimensional map, a two-dimensional map occupies less memory and fewer processor 10 resources during operation, so using a two-dimensional map reduces the memory footprint and the processor 10 occupancy of the intelligent robot 100.
Further, in step S033, the second point cloud D21 is marked as a second color Y2 in the second map T2. Since the second point cloud D21 represents obstacle information, marking it as the second color Y2 helps the intelligent robot 100 recognize it as an obstacle, which improves obstacle avoidance and also helps the user read the map. The second map T2 also contains a fifth color Y5 and a sixth color Y6: an area of the fifth color Y5 indicates an area where a collision is likely if the robot drives into it, and an area of the sixth color Y6 indicates an area where normal, collision-free travel is possible.
Specifically, the second color Y2 may be blue, black, purple, red, green, or the like; the fifth color Y5 may be blue, light blue, gray, sky blue, red, or the like; and the sixth color Y6 may be black, white, green, yellow, or the like, without limitation. The first color Y1 may be the same as or different from the second color Y2, the third color Y3 may be the same as or different from the fifth color Y5, and the fourth color Y4 may be the same as or different from the sixth color Y6, which is not limited herein. In one embodiment, in the second map T2, the second color Y2 is purple, the fifth color Y5 is sky blue, and the sixth color Y6 is black, so the colors of the regions differ and are easy to distinguish, which facilitates recognition.
Further, in some embodiments, the first map T1 and the second map T2 are stored inside the intelligent robot 100 as rasterized map data, where each grid cell occupies one byte (8 bits) and therefore holds a value from 0 to 255: 255 represents an unknown area, 254 represents a known point that the intelligent robot must not touch, 253 represents an area that the intelligent robot 100 should avoid as far as possible, and 0 represents a travelable area in which the intelligent robot 100 can travel.
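The cell values above translate directly into code. Only the numeric values 255, 254, 253, and 0 come from the text; the symbolic names and helper functions are our illustration.

```python
import numpy as np

# Byte values for the rasterised first map T1 and second map T2
UNKNOWN = 255    # area not yet observed
OCCUPIED = 254   # known point the intelligent robot must not touch
AVOID = 253      # area to be avoided as far as possible
FREE = 0         # travelable area

def classify(cell):
    """Human-readable label for one grid byte."""
    return {UNKNOWN: "unknown", OCCUPIED: "obstacle", AVOID: "avoid"}.get(int(cell), "free")

def travelable(grid):
    """Boolean mask of the cells the intelligent robot 100 may plan through."""
    return grid == FREE
```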
The second mapping module 230 can be further configured to perform steps S031, S032, and S033, and the processor 10 can be further configured to perform steps S031, S032, and S033.
Referring to fig. 11 to 13, in some embodiments, step S040 includes the steps of:
S041: judging whether the body 40 interferes with the area of the first color Y1 while traveling along the preset path; and
S042: judging whether the functional element 50 interferes with the area of the second color Y2 while traveling along the preset path.
Specifically, steps S041 and S042 may be executed in either order or simultaneously, which is not limited herein. The first map T1 and the second map T2 created in steps S020 and S030 are used for the judgments concerning the body 40 and the functional element 50, respectively, so the intelligent robot can better judge whether it can travel along the preset route, avoiding the wrong judgments that would result from treating the body 40 and the functional element 50 as a single whole despite their differing external dimensions.
Further, the first point cloud D11 is marked as the first color Y1 in the first map T1; judging whether the body 40 can pass through the first map T1 along the preset path therefore means judging whether the body 40 would interfere with the area of the first color Y1 while traveling along the preset path. If it would not, the body 40 will not collide with an obstacle on the preset path and can travel safely. Likewise, the second point cloud D21 is marked as the second color Y2 in the second map T2; judging whether the functional element 50 can pass through the second map T2 along the preset path means judging whether the functional element 50 would interfere with the area of the second color Y2 while traveling along the preset path. If it would not, the functional element 50 will not collide with an obstacle on the preset path and can travel safely.
The determining module 240 may be further configured to execute step S041 and step S042, and the processor 10 may be further configured to execute step S041 and step S042.
Referring to fig. 14 to 16, in some embodiments, the driving control method further includes the steps of:
S060: if the body 40 cannot pass through the first map T1 along the preset path L, and/or the functional element 50 cannot pass through the second map T2 along the preset path L,
S061: adjusting the travel path of the intelligent robot 100 so that the body 40 avoids the area of the first color Y1 and the functional element 50 avoids the area of the second color Y2.
That is, when the body 40 cannot pass through the first map T1 along the preset path L, and/or the functional element 50 cannot pass through the second map T2 along the preset path L, the intelligent robot 100 automatically adjusts its travel path so that the body 40 avoids the area of the first color Y1 and the functional element 50 avoids the area of the second color Y2. In this way the intelligent robot 100 never collides with an obstacle while driving and always keeps driving safely.
Specifically, referring to fig. 14 and 15, the intelligent robot 100 initially travels along the preset path L, which means that both the body 40 and the functional element 50 travel along the preset path L. In fig. 14, the body 40 does not interfere with the area of the first color Y1 while traveling along the preset path L (as shown by the body 40 in dotted lines), but in fig. 15 the functional element 50 interferes with the area of the second color Y2 (as shown by the functional element 50 in dotted lines). If the intelligent robot 100 continued along the preset path L, it could collide with the obstacle and fail to operate normally. The intelligent robot 100 therefore adjusts its travel path to a first travel path L1 and travels along it, so that the body 40 avoids the area of the first color Y1 and the functional element 50 avoids the area of the second color Y2, and the intelligent robot 100 travels safely. After passing the obstacle along the first travel path L1, the intelligent robot 100 gradually returns to the preset path L, so it always keeps driving safely and avoids the damage or jamming, and the resulting loss of working efficiency, that a collision with an obstacle would cause.
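One very simple way to realize step S061 is to try laterally shifted copies of the preset path L until both the body check and the functional-element check pass. The lateral-offset search below is our own illustrative strategy, not the application's planner; real detours would be smooth curves that rejoin the preset path, as in fig. 15.

```python
import numpy as np

def adjust_path(path, clear_fns, offsets=np.arange(0.1, 1.01, 0.1)):
    """S061 sketch: return the preset path if it is already safe, otherwise
    the first laterally shifted candidate (first travel path L1) that passes
    every check in `clear_fns`, or None if no offset works."""
    path = np.asarray(path, dtype=float)
    if all(f(path) for f in clear_fns):
        return path                      # preset path L is already safe
    for off in offsets:
        for sign in (+1.0, -1.0):
            candidate = path + np.array([0.0, sign * off])  # shift in y
            if all(f(candidate) for f in clear_fns):
                return candidate         # adjusted first travel path L1
    return None                          # no safe detour at these offsets
```

The two entries of `clear_fns` would typically be the body-vs-first-map check and the functional-element-vs-second-map check from step S040.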
The control module 250 may be further configured to perform steps S060 and S061, and the processor 10 may be further configured to perform steps S060 and S061.
Referring to fig. 17 and 18, in some embodiments, the driving control method further includes:
S070: controlling the intelligent robot 100 to drive around a working area to acquire an initial two-dimensional or three-dimensional point cloud of the working area;
S071: generating a third map according to the initial two-dimensional or three-dimensional point cloud; and
S072: generating the preset path L according to the third map.
The intelligent robot 100 can thus automatically plan the preset path L from the point cloud data of the working area, so that it can drive along the preset path L while subsequently executing tasks. This allows the intelligent robot 100 to carry out subsequent tasks in an orderly and effective manner, avoids erratic driving within the working area, and makes the driving of the intelligent robot 100 safer.
Specifically, before the intelligent robot 100 starts to execute a task, it first drives a full circuit of the working area until every region of the area has been covered, continuously acquiring points along the way to form the initial two-dimensional or three-dimensional point cloud. A third map T3 is then generated from this initial point cloud; since the third map T3 includes all of the points in the working area, it is relatively accurate. Finally, the preset path L is generated according to the third map T3, so that the intelligent robot 100 can subsequently travel along the preset path L when executing tasks and can work in the area effectively and in an orderly manner, without driving erratically.
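For a cleaning robot, the preset path L generated from the third map T3 is typically a coverage path. The back-and-forth sweep below is only one textbook pattern, chosen by us for illustration; the row step is arbitrary and real coverage planners also handle concave free space and the robot footprint.

```python
import numpy as np

def coverage_path(third_map, row_step=2):
    """S072 sketch: a simple boustrophedon (back-and-forth) path over the
    free cells (value 0) of the third map T3, as (row, col) waypoints."""
    waypoints = []
    for i in range(0, third_map.shape[0], row_step):
        free = np.flatnonzero(third_map[i] == 0)
        if free.size == 0:
            continue                      # whole row blocked or unknown
        ends = [(i, int(free.min())), (i, int(free.max()))]
        if (len(waypoints) // 2) % 2 == 1:  # alternate the sweep direction
            ends.reverse()
        waypoints.extend(ends)
    return waypoints
```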
The control module 250 is further configured to execute step S070, step S071 and step S072, and the processor 10 is further configured to execute step S070, step S071 and step S072.
Further, referring to fig. 17 and 19, in some embodiments, step S072 includes the steps of:
S721: generating a first preset path for the body 40 according to the third map T3;
S722: generating a second preset path for the functional element 50 according to the third map T3; and
S723: processing the first preset path and the second preset path to generate the preset path.
Specifically, because the external dimensions of the body 40 and the functional element 50 differ, their travelable areas in the third map also differ: if the external dimensions of the body 40 are smaller than those of the functional element 50, the travelable area of the body 40 in the third map is larger than that of the functional element 50. The travelable paths generated for the body 40 and the functional element 50 in the third map may therefore differ to some extent.
In the embodiment shown in fig. 17, the external dimensions of the functional element 50 are larger than those of the body 40; the travelable area of the body 40 in the third map T3 is the area Z2, and the travelable area of the functional element 50 is the area Z1. It can be understood that the first preset path generated for the body 40 according to the third map T3 need only lie within the area Z2 for the body 40 to travel along it without colliding with an obstacle, and the second preset path generated for the functional element 50 need only lie within the area Z1 for the functional element 50 to travel along it without colliding with an obstacle. Since the area Z1 is smaller than the area Z2, the functional element 50 could collide with an obstacle if the robot drove along the first preset path alone. The first preset path and the second preset path must therefore be processed together to generate the preset path, so that neither the body 40 nor the functional element 50 collides with an obstacle in the third map while driving along it.
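One concrete way to "process" the two paths of step S723 is to plan only in the intersection of the two travelable areas, so any resulting path is safe for both components. The square footprint inflation and the cell-based radii below are our simplifications, not details from the application.

```python
import numpy as np

def inflate(obstacles, r):
    """Grow each obstacle cell by r grid cells in every direction."""
    out = obstacles.copy()
    for i, j in zip(*np.nonzero(obstacles)):
        out[max(i - r, 0):i + r + 1, max(j - r, 0):j + r + 1] = True
    return out

def merged_travelable(third_map, body_r, tool_r):
    """S723 sketch: the merged preset path may only use the cells where BOTH
    the body 40 (area Z2) and the functional element 50 (area Z1) fit."""
    obstacles = third_map != 0
    z2 = ~inflate(obstacles, body_r)   # travelable area of the body 40
    z1 = ~inflate(obstacles, tool_r)   # travelable area of the functional element 50
    return z1 & z2                     # safe for both, hence for the merged path
```

When the functional element is the wider component, as in fig. 17, the intersection simply equals the smaller area Z1, which matches the text's observation that the first preset path alone is not safe.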
The travel control device 200 may be further configured to execute step S721, step S722, and step S723, and the processor 10 may further execute step S721, step S722, and step S723.
Referring to fig. 3 again, the memory 20 is used for storing a computer program that can be run on the processor 10, and the processor 10 implements the driving control method according to any of the above embodiments when executing the computer program.
The memory 20 may comprise high-speed RAM memory and may also include non-volatile memory, such as at least one disk memory. Further, the intelligent robot 100 may further include a communication interface 30, which is used for communication between the memory 20 and the processor 10.
If the memory 20, the processor 10 and the communication interface 30 are implemented independently, the communication interface 30, the memory 20 and the processor 10 may be connected to each other through a bus and perform communication with each other. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (enhanced Industry Standard Architecture) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 10, but this is not intended to represent only one bus or type of bus.
Optionally, in a specific implementation, if the memory 20, the processor 10, and the communication interface 30 are integrated on a chip, the memory 20, the processor 10, and the communication interface 30 may complete communication with each other through an internal interface.
The processor 10 may be a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present application.
Referring to fig. 20, a non-transitory computer-readable storage medium 300 according to an embodiment of the present disclosure includes computer-executable instructions 301, which, when executed by one or more processors 400, cause the processors 400 to perform a driving control method according to any embodiment of the present disclosure.
For example, when the computer-executable instructions are executed by the processor 400, the processor 400 is configured to perform the following steps:
S010: identifying a working scene of the intelligent robot 100 according to the detection data of the plurality of sensors;
S020: matching a working mode according to the working scene, wherein the working mode is associated with preset working parameters of a plurality of cleaning executing pieces 40;
S030: controlling the cleaning executing pieces 40 to perform the cleaning task with the preset working parameters.
A computer program is stored on the storage medium 300; when executed by the processor 400, the computer program implements the travel control method described above.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code that include one or more executable instructions for implementing the steps of a custom logic function or process. Alternative implementations are included within the scope of the preferred embodiments of the present application; as those reasonably skilled in the art will understand, functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved.
The logic and/or steps represented in the flowcharts or otherwise described herein, for example an ordered listing of executable instructions that implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, a processor-containing system, or another system that can fetch instructions from the instruction execution system, apparatus, or device and execute them. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection (an electronic device) having one or more wires, a portable computer diskette (a magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). The computer-readable medium could even be paper or another suitable medium on which the program is printed, as the program can be electronically captured, for instance via optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or a combination of the following techniques known in the art may be used: a discrete logic circuit having logic gates for implementing logic functions on data signals, an application-specific integrated circuit having appropriate combinational logic gates, a programmable gate array (PGA), a field programmable gate array (FPGA), and the like.
Those skilled in the art will understand that all or part of the steps of the methods of the above embodiments may be implemented by program instructions directing the relevant hardware. The program may be stored in a computer-readable storage medium and, when executed, performs one of the steps of the method embodiments or a combination thereof.
In addition, the functional units in the embodiments of the present application may be integrated into one processing module, each unit may exist alone physically, or two or more units may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as a stand-alone product, it may also be stored in a computer-readable storage medium. The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like.
In the description herein, reference to the description of the terms "certain embodiments," "one embodiment," "some embodiments," "illustrative embodiments," "examples," "specific examples," or "some examples" means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "a plurality" means at least two, e.g., two or three, unless specifically limited otherwise.
Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations of the above embodiments may be made by those of ordinary skill in the art within the scope of the present application, which is defined by the claims and their equivalents.

Claims (10)

1. A travel control method for an intelligent robot, the intelligent robot comprising a body and a functional element mounted on the body at a mounting height, the travel control method comprising:
acquiring a real-time two-dimensional or three-dimensional point cloud within a preset range around the intelligent robot;
generating a first map according to the real-time two-dimensional or three-dimensional point cloud;
generating a second map according to the points in the real-time two-dimensional or three-dimensional point cloud whose height relative to the driving surface is not higher than the mounting height;
judging whether the body can pass through the first map along a preset path and whether the functional element can pass through the second map along the preset path; and
if so, controlling the intelligent robot to travel along the preset path.
2. The travel control method according to claim 1, wherein acquiring the real-time two-dimensional or three-dimensional point cloud within the preset range around the intelligent robot comprises:
detecting a two-dimensional or three-dimensional point cloud within the preset range around the intelligent robot;
filtering noise points and points on the driving surface out of the detected two-dimensional or three-dimensional point cloud to form an intermediate point cloud; and
converting the intermediate point cloud into a world coordinate system to form the real-time two-dimensional or three-dimensional point cloud.
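The three steps of claim 2 can be sketched as follows (illustrative assumptions: a simple height threshold stands in for the noise/ground filtering, and a 2-D rigid transform stands in for the conversion to the world coordinate system):

```python
import math

# Sketch of claim 2: filter out driving-surface returns to form the
# intermediate point cloud, then transform it into the world frame.
# The threshold and the pose parameterization are illustrative assumptions.

def preprocess(points, robot_pose, ground_eps=0.02):
    """points: (x, y, z) in the robot frame; robot_pose: (tx, ty, yaw)."""
    # Steps 1-2: drop ground returns (z <= ground_eps) to form the
    # intermediate point cloud. (A real filter would also remove noise,
    # e.g. statistical outliers; omitted here for brevity.)
    intermediate = [p for p in points if p[2] > ground_eps]
    # Step 3: rigid transform of x, y into the world coordinate system.
    tx, ty, yaw = robot_pose
    c, s = math.cos(yaw), math.sin(yaw)
    return [(c * x - s * y + tx, s * x + c * y + ty, z)
            for x, y, z in intermediate]

cloud = [(1.0, 0.0, 0.0), (1.0, 0.0, 0.5)]   # a ground hit and an obstacle
world = preprocess(cloud, robot_pose=(2.0, 0.0, 0.0))
print(world)  # [(3.0, 0.0, 0.5)] -- only the obstacle point survives
```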
3. The travel control method according to claim 1, wherein generating the first map according to the real-time two-dimensional or three-dimensional point cloud comprises:
projecting the real-time two-dimensional or three-dimensional point cloud onto the driving surface to generate a first point cloud;
generating the first map according to the first point cloud; and
marking the first point cloud in a first color within the first map.
4. The travel control method according to claim 3, wherein generating the second map according to the points in the real-time two-dimensional or three-dimensional point cloud whose height relative to the driving surface is not higher than the mounting height comprises:
projecting the points in the real-time two-dimensional or three-dimensional point cloud whose height relative to the driving surface is not higher than the mounting height onto the driving surface to generate a second point cloud;
generating the second map according to the second point cloud; and
marking the second point cloud in a second color within the second map.
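The projections of claims 3 and 4 can be sketched as follows (the grid size, the resolution, and the numeric color codes 1 and 2 are illustrative assumptions):

```python
# Sketch of claims 3-4: project the full cloud onto the driving surface for
# the first map, and only points no higher than the mounting height for the
# second map. Colors are encoded as cell values 1 and 2 (an assumption).

def project_to_grid(points, width, height, color, resolution=1.0):
    """Drop z and mark each projected (x, y) cell with the given color."""
    grid = [[0] * width for _ in range(height)]
    for x, y, _z in points:
        col, row = int(x / resolution), int(y / resolution)
        if 0 <= row < height and 0 <= col < width:
            grid[row][col] = color
    return grid

def build_maps(cloud, mount_height, width=4, height=4):
    first = project_to_grid(cloud, width, height, color=1)   # all points
    low = [p for p in cloud if p[2] <= mount_height]         # height cut
    second = project_to_grid(low, width, height, color=2)
    return first, second

cloud = [(1.0, 1.0, 0.2), (2.0, 1.0, 0.9)]  # one low, one high obstacle
first, second = build_maps(cloud, mount_height=0.5)
print(first[1][1], first[1][2])    # 1 1 : both appear in the first map
print(second[1][1], second[1][2])  # 2 0 : only the low point in the second
```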
5. The travel control method according to claim 4, wherein judging whether the body can pass through the first map along the preset path and whether the functional element can pass through the second map along the preset path comprises:
judging whether the body interferes with the area of the first color when traveling along the preset path; and
judging whether the functional element interferes with the area of the second color when traveling along the preset path.
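The interference test of claim 5 can be sketched as follows (the cell-offset footprints and the numeric color codes are illustrative assumptions):

```python
# Sketch of claim 5: walk the preset path and test the body footprint
# against first-color cells and the functional-element footprint against
# second-color cells. Footprints are lists of (dr, dc) cell offsets.

def interferes(path, footprint, grid, color):
    """True if any footprint cell along the path carries the given color."""
    for r, c in path:
        for dr, dc in footprint:
            rr, cc = r + dr, c + dc
            if 0 <= rr < len(grid) and 0 <= cc < len(grid[0]) \
                    and grid[rr][cc] == color:
                return True
    return False

def can_travel(path, body_fp, elem_fp, first_map, second_map):
    """Passable only if neither the body nor the element interferes."""
    body_ok = not interferes(path, body_fp, first_map, color=1)
    elem_ok = not interferes(path, elem_fp, second_map, color=2)
    return body_ok and elem_ok

first_map = [[0, 0], [0, 0]]
second_map = [[0, 2], [0, 0]]          # a low obstacle at cell (0, 1)
path = [(0, 0), (0, 1)]
print(can_travel(path, [(0, 0)], [(0, 0)], first_map, second_map))  # False
```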
6. The travel control method according to claim 4, further comprising:
if the body cannot pass through the first map along the preset path, and/or the functional element cannot pass through the second map along the preset path,
adjusting the driving path of the intelligent robot so that the body avoids the area of the first color and the functional element avoids the area of the second color.
7. The travel control method according to claim 1, further comprising:
controlling the intelligent robot to travel around a working area to obtain an initial two-dimensional or three-dimensional point cloud of the working area;
generating a third map according to the initial two-dimensional or three-dimensional point cloud; and
generating the preset path according to the third map.
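The map building and path generation of claim 7 can be sketched as follows (the occupancy grid and the breadth-first planner are illustrative assumptions; the application does not specify a particular planner):

```python
from collections import deque

# Sketch of claim 7: mark cells hit by the initial point cloud as occupied
# in a third map, then search a preset path through free cells (BFS here,
# an assumed stand-in for whatever planner an implementation would use).

def third_map(cloud, width, height):
    """Occupancy grid: True where an initial point projects onto a cell."""
    occ = [[False] * width for _ in range(height)]
    for x, y, _z in cloud:
        occ[int(y)][int(x)] = True
    return occ

def preset_path(occ, start, goal):
    """4-connected BFS through free cells; returns the cell sequence."""
    prev, queue = {start: None}, deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            path = []
            while (r, c) != start:
                path.append((r, c))
                r, c = prev[(r, c)]
            return [start] + path[::-1]
        for rr, cc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= rr < len(occ) and 0 <= cc < len(occ[0]) \
                    and not occ[rr][cc] and (rr, cc) not in prev:
                prev[(rr, cc)] = (r, c)
                queue.append((rr, cc))
    return None  # no collision-free preset path exists

occ = third_map([(1.0, 0.0, 0.3)], width=3, height=2)  # obstacle at (0, 1)
print(preset_path(occ, (0, 0), (0, 2)))
# [(0, 0), (1, 0), (1, 1), (1, 2), (0, 2)] -- detours around the obstacle
```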
8. A travel control device applied to an intelligent robot, wherein the intelligent robot comprises a body and a functional element, the travel control device comprising:
an acquisition module configured to acquire a real-time two-dimensional or three-dimensional point cloud within a preset range around the intelligent robot;
a first mapping module configured to generate a first map according to the real-time two-dimensional or three-dimensional point cloud;
a second mapping module configured to generate a second map according to the points in the real-time two-dimensional or three-dimensional point cloud whose height relative to the driving surface is not higher than the mounting height;
a judging module configured to judge whether the body can pass through the first map along a preset path and whether the functional element can pass through the second map along the preset path; and
a control module configured to control the intelligent robot to travel along the preset path when the body can pass through the first map along the preset path and the functional element can pass through the second map along the preset path.
9. An intelligent robot, comprising:
one or more processors, memory; and
one or more programs, wherein the one or more programs are stored in the memory and executed by the one or more processors, the programs comprising instructions for performing the travel control method of any of claims 1-7.
10. A non-transitory computer-readable storage medium containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the travel control method of any one of claims 1 to 7.
CN202010142799.3A 2020-03-04 2020-03-04 Driving control method and device, intelligent robot and computer readable storage medium Pending CN111290393A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010142799.3A CN111290393A (en) 2020-03-04 2020-03-04 Driving control method and device, intelligent robot and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010142799.3A CN111290393A (en) 2020-03-04 2020-03-04 Driving control method and device, intelligent robot and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN111290393A true CN111290393A (en) 2020-06-16

Family

ID=71021808

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010142799.3A Pending CN111290393A (en) 2020-03-04 2020-03-04 Driving control method and device, intelligent robot and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111290393A (en)


Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2571660A2 (en) * 2010-05-20 2013-03-27 iRobot Corporation Mobile human interface robot
CN105955275A (en) * 2016-05-26 2016-09-21 华讯方舟科技有限公司 Robot path programming method and system
CN107491070A (en) * 2017-08-31 2017-12-19 成都通甲优博科技有限责任公司 A kind of method for planning path for mobile robot and device
WO2018101631A2 (en) * 2016-11-30 2018-06-07 (주)유진로봇 Robotic vacuum cleaner, cleaning function control apparatus equipped in robotic vacuum cleaner, and multi-channel lidar-based obstacle detection apparatus equipped in robotic vacuum cleaner
CN108267748A (en) * 2017-12-06 2018-07-10 香港中文大学(深圳) A kind of omnidirectional three-dimensional point cloud ground drawing generating method and system
CN109144072A (en) * 2018-09-30 2019-01-04 亿嘉和科技股份有限公司 A kind of intelligent robot barrier-avoiding method based on three-dimensional laser
CN109276193A (en) * 2018-11-13 2019-01-29 苏州苏相机器人智能装备有限公司 A kind of robot and barrier-avoiding method of adjustable height and position
CN109658373A (en) * 2017-10-10 2019-04-19 中兴通讯股份有限公司 A kind of method for inspecting, equipment and computer readable storage medium
CN109740604A (en) * 2019-04-01 2019-05-10 深兰人工智能芯片研究院(江苏)有限公司 A kind of method and apparatus of running region detection
CN110221616A (en) * 2019-06-25 2019-09-10 清华大学苏州汽车研究院(吴江) A kind of method, apparatus, equipment and medium that map generates
CN110531759A (en) * 2019-08-02 2019-12-03 深圳大学 Path generating method, device, computer equipment and storage medium are explored by robot


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112060079A (en) * 2020-07-30 2020-12-11 深圳市优必选科技股份有限公司 Robot and collision detection method and device thereof
CN112060079B (en) * 2020-07-30 2022-02-22 深圳市优必选科技股份有限公司 Robot and collision detection method and device thereof
CN113524193A (en) * 2021-08-05 2021-10-22 诺亚机器人科技(上海)有限公司 Robot motion space marking method and device, robot and storage medium
CN113524193B (en) * 2021-08-05 2022-09-23 诺亚机器人科技(上海)有限公司 Robot motion space marking method and device, robot and storage medium

Similar Documents

Publication Publication Date Title
EP3185096B1 (en) A charging pile, method and device for recognizing the charging pile, and an autonomous cleaning device
KR102035018B1 (en) Apparatus for controlling cleaning function and robotic cleaner with the apparatus
CN108873013B (en) Method for acquiring passable road area by adopting multi-line laser radar
JP5488518B2 (en) Road edge detection device, driver support device, and road edge detection method
CN110852312B (en) Cliff detection method, mobile robot control method, and mobile robot
CN111290393A (en) Driving control method and device, intelligent robot and computer readable storage medium
JP4927541B2 (en) Method and computer program for detecting the contour of an obstacle around a vehicle
CN107110970B (en) Method for detecting at least one object in the surrounding area of a motor vehicle by means of an ultrasonic sensor, driver assistance system and motor vehicle
JP2006510097A (en) A method for recognizing and tracking objects
CN113741438A (en) Path planning method and device, storage medium, chip and robot
CN113633221B (en) Method, device and system for processing missed-scanning area of automatic cleaning equipment
CN110515095B (en) Data processing method and system based on multiple laser radars
CN111345739B (en) Control method, control device, intelligent robot and computer readable medium
CN110794831A (en) Method for controlling robot to work and robot
WO2021168854A1 (en) Method and apparatus for free space detection
CN116057481A (en) Surface type detection method and robot cleaner configured to perform the method
JP2006260098A (en) Obstacle detection unit
KR101000332B1 (en) Drivable road detecting method and outdoor driving robot using the same
US11960296B2 (en) Method and apparatus for autonomous mobile device
CN114529539A (en) Method and device for detecting road surface obstacle of unmanned equipment, unmanned equipment and storage medium
CN116533998B (en) Automatic driving method, device, equipment, storage medium and vehicle of vehicle
CN111366937A (en) Robot working method based on ultrasonic wave, working device, chip and robot
JP5492598B2 (en) Car shape detection device and car wash machine equipped with the same
CN111409053B (en) Steering calibration method and device, intelligent robot and computer readable storage medium
JP7403423B2 (en) robot vacuum cleaner

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200616