CN110928283A - Robot and intelligent moving method and device thereof - Google Patents

Robot and intelligent moving method and device thereof

Info

Publication number
CN110928283A
CN110928283A CN201811086262.9A CN201811086262A CN110928283A
Authority
CN
China
Prior art keywords
robot
area
obstacle
speed
maximum target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811086262.9A
Other languages
Chinese (zh)
Inventor
熊友军
黄高波
黄祥斌
周海浪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ubtech Robotics Corp
Original Assignee
Ubtech Robotics Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ubtech Robotics Corp filed Critical Ubtech Robotics Corp
Priority to CN201811086262.9A priority Critical patent/CN110928283A/en
Publication of CN110928283A publication Critical patent/CN110928283A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0223Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0255Control of position or course in two dimensions specially adapted to land vehicles using acoustic signals, e.g. ultra-sonic signals
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0257Control of position or course in two dimensions specially adapted to land vehicles using a radar

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Acoustics & Sound (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The intelligent moving method of the robot comprises the following steps: in the moving process of the robot, acquiring the position of an obstacle in the moving direction of the robot; judging whether the position of the obstacle is in a first area or a second area in front of the robot, wherein points in the first area are closer to the robot than points in the second area; when the position of the obstacle is in the second area, calculating the maximum target speed of the robot according to the real-time distance between the robot and the obstacle; and controlling the robot to move according to the current moving speed of the robot and the maximum target speed. In this way the robot does not have to stop moving as soon as an obstacle is detected, so its speed is adjusted more smoothly and the flexibility of its movement is improved.

Description

Robot and intelligent moving method and device thereof
Technical Field
The application belongs to the field of robots, and particularly relates to a robot and an intelligent moving method and device thereof.
Background
The motion scenes of a wheeled robot include map-based navigation and movement without a map. In map-based navigation, a navigation algorithm plans a curved moving path according to the layout of obstacles (such as walls, tables and chairs) in the map; when a dynamic obstacle is encountered during the motion, the robot stops and only moves again after the path has been replanned. Without a map, the robot mainly moves in a straight line for a specified distance or moves forward at a specified speed.
In both movement control modes, the robot must stop if it encounters an obstacle during movement. At present, for safety, a wheeled robot basically uses the braking distance at its higher speed as the obstacle avoidance distance, and this distance is usually large. While the robot is moving at a higher speed, if a person appears within the safety distance in front of it, the robot decelerates; but even after the person quickly moves away, the robot still decelerates to a stop. The moving speed of the robot cannot be adjusted flexibly and smoothly, and the movement mode is inflexible.
Disclosure of Invention
In view of this, embodiments of the present application provide a robot and an intelligent movement method and apparatus thereof, so as to solve the problems in the prior art that when the robot encounters an obstacle, its moving speed cannot be adjusted flexibly and smoothly and its movement mode is inflexible.
A first aspect of an embodiment of the present application provides an intelligent movement method for a robot, where the intelligent movement method for a robot includes:
in the moving process of the robot, acquiring the position of an obstacle in the moving direction of the robot;
judging whether the position of the obstacle is in a first area or a second area in front of the robot, wherein the distance between the point of the first area and the robot is smaller than the distance between the point in the second area and the robot;
when the position of the obstacle is in a second area, calculating the maximum target speed of the robot according to the real-time distance between the robot and the obstacle;
and controlling the robot to move according to the current moving speed of the robot and the maximum target speed.
With reference to the first aspect, in a first possible implementation manner of the first aspect, the calculating a maximum target speed of the robot according to the real-time distance between the robot and the obstacle when the position of the obstacle is in the second area includes:
when the position of the obstacle is in the second area, according to the formula V_target_max = V_max * (L - L_min) / (L_max - L_min), calculating the maximum target speed V_target_max at the position of the robot, wherein V_max is the maximum speed allowed at the position in the second area farthest from the robot, L_min is the distance from the robot to the closest point of the second area, L_max is the distance from the robot to the farthest point of the second area, and L is the real-time distance between the obstacle and the robot.
With reference to the first aspect, in a second possible implementation manner of the first aspect, the controlling the robot to move according to the current moving speed of the robot and the maximum target speed includes:
if the current moving speed is less than the maximum target speed, keeping the current moving speed;
and if the current moving speed is greater than the maximum target speed, gradually adjusting the current moving speed to the maximum target speed.
With reference to the first aspect, in a third possible implementation manner of the first aspect, the method further includes:
when the obstacle is in a first area in front of the robot, determining that the maximum target speed is 0;
and determining the speed adjustment rate of the robot according to the distance between the obstacle and the robot and the relative speed.
With reference to the first aspect, in a fourth possible implementation manner of the first aspect, the first area is located in front of the robot, its distance to the robot is less than or equal to a first predetermined value L_min, and its width is greater than or equal to the width of the robot; the second area is located in front of the robot, its distance to the robot is greater than the first predetermined value L_min and less than or equal to a second predetermined value L_max, and its width is greater than or equal to the width of the robot.
A second aspect of an embodiment of the present application provides a smart mobile device of a robot, including:
the position acquisition unit is used for acquiring the position of an obstacle in the moving direction of the robot in the moving process of the robot;
the position judging unit is used for judging whether the position of the obstacle is in a first area or a second area in front of the robot, wherein the distance between the point of the first area and the robot is smaller than the distance between the point of the second area and the robot;
the maximum target speed calculation unit is used for calculating the maximum target speed of the robot according to the real-time distance between the robot and the obstacle when the position of the obstacle is in the second area;
and the movement control unit is used for controlling the robot to move according to the current moving speed of the robot and the maximum target speed.
With reference to the second aspect, in a first possible implementation manner of the second aspect, the maximum target speed calculation unit is configured to:
when the position of the obstacle is in the second area, according to the formula V_target_max = V_max * (L - L_min) / (L_max - L_min), calculating the maximum target speed V_target_max at the position of the robot, wherein V_max is the maximum speed allowed at the position in the second area farthest from the robot, L_min is the distance from the robot to the closest point of the second area, L_max is the distance from the robot to the farthest point of the second area, and L is the real-time distance between the obstacle and the robot.
With reference to the second aspect, in a second possible implementation manner of the second aspect, the mobile control unit includes:
a speed holding subunit, configured to hold the current moving speed if the current moving speed is less than the maximum target speed;
and the speed adjusting subunit is configured to gradually adjust the current moving speed to the maximum target speed if the current moving speed is greater than the maximum target speed.
A third aspect of the embodiments of the present application provides a robot, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the robot intelligent movement method according to any one of the first aspect when executing the computer program.
A fourth aspect of embodiments of the present application provides a computer-readable storage medium, which stores a computer program, and the computer program, when executed by a processor, implements the steps of the robot intelligent movement method according to any one of the first aspect.
Compared with the prior art, the embodiments of the application have the following advantages: the area in front of the robot is divided into a first area and a second area, the second area being farther from the robot than the first area; when an obstacle is in the second area, the maximum target speed of the robot is calculated according to the distance between the obstacle and the robot, and the robot is controlled to move according to this maximum target speed and its current moving speed. In this way the robot does not have to stop moving as soon as an obstacle is detected, so its speed is adjusted more smoothly and the flexibility of its movement is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
Fig. 1 is a schematic flow chart of an implementation of an intelligent movement method of a robot according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a first area and a second area defined when a robot provided by an embodiment of the present application moves;
fig. 3 is a schematic flow chart illustrating an implementation of another robot intelligent movement method provided in an embodiment of the present application;
fig. 4 is a schematic diagram of a robot smart mobile device according to an embodiment of the present disclosure;
fig. 5 is a schematic view of a robot provided in an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
In order to explain the technical solution described in the present application, the following description will be given by way of specific examples.
Fig. 1 is a schematic flow chart of an implementation process of an intelligent robot moving method provided in an embodiment of the present application, which is detailed as follows:
in step S101, in the moving process of the robot, the position of an obstacle in the moving direction of the robot is acquired;
specifically, the robot may include a wheeled robot or other mobile structure type robot, and the robot may be driven to move at a specified speed by a driving system of the robot itself, or the robot may be controlled to complete the change of the moving speed according to a specified speed change mode.
The robot can include one or more of radar, RGBD degree of depth camera or ultrasonic sensor equidistance check out test set, through radar, or RGBD degree of depth camera, or through ultrasonic sensor, can detect whether there is the barrier in the certain limit in the robot place ahead to confirm the position of barrier. Alternatively, the distance between the obstacle and the robot may be determined according to the position of the detected obstacle.
The moving direction of the robot is the direction in front of the robot.
In step S102, determining whether the position of the obstacle is in a first area or a second area in front of the robot, wherein the distance between the point in the first area and the robot is smaller than the distance between the point in the second area and the robot;
In the moving process of the robot, if an obstacle in front of the robot is detected, it is further judged whether the position of the obstacle belongs to the first area or the second area, so that the speed of the robot can be adjusted more flexibly.
Fig. 2 is a schematic diagram of the second area and the first area provided in the embodiment of the present application, that is, a top view of the robot seen from above. As shown in fig. 2, the moving direction of the robot, that is, the direction in front of the robot, is the X direction. The distance from a point in the first area to the robot is less than or equal to a first predetermined value L_min, and the width W1 of the first area is greater than or equal to the width W of the robot, i.e., the ABED area in the figure. The width of the robot includes the width of the torso and the width of the two arms of the robot.
The second area is located in front of the robot, and the distance from a point in the second area to the robot is greater than the first predetermined value L_min and less than or equal to a second predetermined value L_max, i.e., the BCFE area in the figure. The width W2 of the second area may be greater than or equal to the width of the first area, that is, the width may gradually increase from the first area to the second area to accommodate errors in direction change when the robot moves.
In addition, the first predetermined value L_min and the second predetermined value L_max in the present application may be set to satisfy the relationship L_min < H < L_max, where H is the height of the robot. That is, when an obstacle is detected in front of the robot and the distance between the obstacle and the robot is equal to the height of the robot, the obstacle is detected as belonging to the second area.
The determination manner of the first area and the second area illustrated in fig. 2 is only a preferred embodiment and is not limited thereto; for example, the areas may also be arc-shaped.
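As a concrete illustration of the rectangular case of fig. 2, the following Python sketch classifies a detected obstacle point into the first area, the second area, or neither. It is only a sketch under assumptions: the coordinate frame (x forward along the X direction, y the lateral offset from the robot centre) and all names and sample values are illustrative and not taken from the patent.

    # Sketch only: classify an obstacle point against the rectangular
    # first/second areas of fig. 2. x is the forward distance, y the
    # lateral offset from the robot centre line; all names are illustrative.
    def classify_obstacle(x, y, l_min, l_max, w1, w2):
        if 0.0 <= x <= l_min and abs(y) <= w1 / 2.0:
            return "first_area"    # braking zone: maximum target speed is 0
        if l_min < x <= l_max and abs(y) <= w2 / 2.0:
            return "second_area"   # speed-limiting zone
        return "outside"           # this obstacle does not restrict the speed

    # Example: L_min = 0.5 m, L_max = 2.0 m, a 0.6 m wide first area and a
    # slightly wider 0.8 m second area.
    print(classify_obstacle(1.2, 0.1, 0.5, 2.0, 0.6, 0.8))  # -> second_area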
In step S103, when the position of the obstacle is in the second area, calculating a maximum target speed of the robot according to a real-time distance between the robot and the obstacle;
when the robot detects that the second area includes the obstacle, the maximum target speed corresponding to the position of the obstacle is calculated according to the distance between the current position of the robot and the obstacle, namely, the longer the distance between the robot and the obstacle is, the larger the maximum target speed is, the closer the distance between the robot and the obstacle is, and the smaller the maximum target speed is.
In an alternative embodiment of the present application, when the robot detects that the obstacle is in the second area, the maximum target speed of the robot may be calculated according to the following formula:
V_target_max = V_max * (L - L_min) / (L_max - L_min)
which gives the maximum target speed allowed for the robot relative to the obstacle in the second area, where V_max is the maximum speed allowed at the position in the second area farthest from the robot, L_min is the distance from the robot to the closest point of the second area, L_max is the distance from the robot to the farthest point of the second area, and L is the real-time distance between the obstacle and the robot.
The distance between the robot and the obstacle is obtained in real time, so the maximum target speed allowed for the robot can be recalculated in real time whether the robot moves or the obstacle moves. Therefore, with this method of determining the maximum target speed, the maximum target speed of the robot can be dynamically adjusted by detecting the distance between the obstacle and the robot in real time, which effectively meets the speed adjustment requirements for both static and dynamic obstacles.
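A minimal numerical sketch of the formula above; the function name and sample values are assumptions for illustration, and the result is clamped to [0, V_max] for distances outside the second area.

    # Sketch only: V_target_max = V_max * (L - L_min) / (L_max - L_min),
    # clamped so the result stays between 0 and V_max.
    def max_target_speed(l, l_min, l_max, v_max):
        ratio = (l - l_min) / (l_max - l_min)
        return max(0.0, min(v_max, v_max * ratio))

    # Example: with L_min = 0.5 m, L_max = 2.0 m and V_max = 1.0 m/s, an
    # obstacle at L = 1.25 m allows at most 0.5 m/s.
    print(max_target_speed(1.25, 0.5, 2.0, 1.0))  # 0.5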
In step S104, the robot is controlled to move according to the current moving speed of the robot and the maximum target speed.
Controlling and adjusting the current movement of the robot according to the current moving speed of the robot and the maximum target speed may include the following, as sketched after this passage:
When the current moving speed of the robot is smaller than the maximum target speed, the current moving speed already meets the set speed requirement and is kept; braking to prevent a collision with the obstacle is only used when the obstacle enters the first area of the robot.
When the current moving speed of the robot is greater than the maximum target speed, the maximum target speed is selected as the target moving speed of the robot, that is, the moving speed of the robot is changed gradually until it reaches the maximum target speed. This avoids the excessive braking and acceleration, and the resulting visible shaking or jerking of the robot, that would occur if the robot kept moving at the current moving speed and the obstacle then entered the first area.
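The behaviour described above can be sketched as a per-control-cycle speed update. The gradual change is approximated here by a simple deceleration limit; that limit, the names and the sample values are assumptions of this sketch, not requirements of the method.

    # Sketch only: one control cycle of step S104. Keep the speed if it is
    # already below the maximum target speed, otherwise ramp it down
    # gradually instead of braking abruptly.
    def control_step(v_current, v_target_max, decel_limit, dt):
        if v_current <= v_target_max:
            return v_current
        return max(v_target_max, v_current - decel_limit * dt)

    # Example: 0.8 m/s current speed, 0.5 m/s limit, 0.5 m/s^2 ramp,
    # 50 ms control cycle -> 0.775 m/s this cycle, 0.5 m/s after ~0.6 s.
    print(control_step(0.8, 0.5, 0.5, 0.05))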
In addition, as shown in fig. 3, a schematic flow chart of an implementation of another robot intelligent movement method provided in the embodiment of the present application is detailed as follows:
in step S301, in the moving process of the robot, the position of an obstacle in the moving direction of the robot is acquired;
in step S302, determining whether the position of the obstacle is in a first area or a second area in front of the robot, wherein the distance between the point in the first area and the robot is smaller than the distance between the point in the second area and the robot;
in step S303, when the position of the obstacle is in the second area, calculating a maximum target speed of the robot according to a real-time distance between the robot and the obstacle;
in step S304, the robot is controlled to move according to the current moving speed of the robot and the maximum target speed.
Steps S301 to S304 are substantially the same as steps S101 to S104 in fig. 1, and are not repeated herein.
In step S305, when the obstacle is in a first area in front of the robot, determining that the maximum target speed is 0;
When the obstacle is detected to be in the first area in front of the robot, a braking instruction is sent so that the robot comes to rest in time, preventing it from colliding with the obstacle.
In step S306, a speed adjustment rate of the robot is determined according to the distance and the relative speed between the obstacle and the robot.
After the maximum target speed of the robot is determined to be 0, the current speed of the robot needs to be adjusted, that is, the current moving speed of the robot is adjusted to 0. To make this adjustment smoother, the adjustment rate of the moving speed of the robot, namely the acceleration (deceleration) of the robot, can be determined according to the distance and the relative speed between the robot and the obstacle.
In addition, in the present application, when the obstacle is in the first area or the second area and the moving speed of the robot needs to be adjusted, a speed change curve may be determined; the speed change curve may be a first-order linear curve or a higher-order polynomial curve.
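The patent does not give an explicit expression for this adjustment rate. As an assumption for illustration only, one kinematic choice is the smallest constant deceleration that brings the relative speed to zero within the remaining distance:

    # Sketch only: deceleration needed to stop relative to the obstacle
    # within the remaining distance, a = v_rel^2 / (2 * d). The names, the
    # cap a_max and this formula itself are illustrative assumptions.
    def braking_deceleration(distance, relative_speed, a_max):
        if relative_speed <= 0.0:
            return 0.0     # not closing on the obstacle: no braking needed
        if distance <= 0.0:
            return a_max   # already at the obstacle: brake as hard as allowed
        return min(a_max, relative_speed ** 2 / (2.0 * distance))

    # Example: closing at 0.6 m/s with 0.4 m left needs at least 0.45 m/s^2.
    print(braking_deceleration(0.4, 0.6, 2.0))  # 0.45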
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 4 is a schematic structural diagram of a robot smart mobile device according to an embodiment of the present application, which is detailed as follows:
the robot intelligent mobile device includes:
a position acquiring unit 401, configured to acquire a position of an obstacle in a moving direction of the robot during movement of the robot;
a position determining unit 402, configured to determine whether a position of the obstacle is in a first area or a second area in front of the robot, where a distance between a point in the first area and the robot is smaller than a distance between a point in the second area and the robot;
a maximum target speed calculation unit 403, configured to calculate a maximum target speed of the robot according to a real-time distance between the robot and the obstacle when the position of the obstacle is in the second area;
a movement control unit 404, configured to control the robot to move according to the current moving speed of the robot and the maximum target speed.
Preferably, the maximum target speed calculation unit is configured to:
when the position of the obstacle is in the second area, according to the formula V_target_max = V_max * (L - L_min) / (L_max - L_min), calculating the maximum target speed V_target_max at the position of the robot, wherein V_max is the maximum speed allowed at the position in the second area farthest from the robot, L_min is the distance from the robot to the closest point of the second area, L_max is the distance from the robot to the farthest point of the second area, and L is the real-time distance between the obstacle and the robot.
Preferably, the movement control unit includes:
a speed holding subunit, configured to hold the current moving speed if the current moving speed is less than the maximum target speed;
and the speed adjusting subunit is configured to gradually adjust the current moving speed to the maximum target speed if the current moving speed is greater than the maximum target speed.
The robot intelligent movement device shown in fig. 4 corresponds to the robot intelligent movement method shown in fig. 1.
Fig. 5 is a schematic view of a robot provided in an embodiment of the present application. As shown in fig. 5, the robot 5 of this embodiment includes: a processor 50, a memory 51, and a computer program 52, such as a program implementing the intelligent movement method described above. The processor 50, when executing the computer program 52, implements the steps in the various method embodiments described above, such as steps S101 to S104 shown in fig. 1. Alternatively, the processor 50, when executing the computer program 52, implements the functions of each module/unit in the above-mentioned device embodiments, for example, the functions of the units 401 to 404 shown in fig. 4.
Illustratively, the computer program 52 may be partitioned into one or more modules/units, which are stored in the memory 51 and executed by the processor 50 to accomplish the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 52 in the robot 5. For example, the computer program 52 may be divided into a position acquisition unit, a position determination unit, a maximum target speed calculation unit, and a movement control unit, each unit having the following specific functions:
the position acquisition unit is used for acquiring the position of an obstacle in the moving direction of the robot in the moving process of the robot;
the position judging unit is used for judging whether the position of the obstacle is in a first area or a second area in front of the robot, wherein the distance between the point of the first area and the robot is smaller than the distance between the point of the second area and the robot;
the maximum target speed calculation unit is used for calculating the maximum target speed of the robot according to the real-time distance between the robot and the obstacle when the position of the obstacle is in the second area;
and the movement control unit is used for controlling the robot to move according to the current moving speed of the robot and the maximum target speed.
The robot may include, but is not limited to, the processor 50 and the memory 51. Those skilled in the art will appreciate that fig. 5 is merely an example of the robot 5 and does not constitute a limitation of the robot 5, which may include more or fewer components than shown, may combine some components, or may use different components; for example, the robot may also include input/output devices, network access devices, buses, etc.
The Processor 50 may be a Central Processing Unit (CPU), another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, etc. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 51 may be an internal storage unit of the robot 5, such as a hard disk or a memory of the robot 5. The memory 51 may also be an external storage device of the robot 5, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), or the like, provided on the robot 5. Further, the memory 51 may also include both an internal storage unit and an external storage device of the robot 5. The memory 51 is used for storing the computer program and other programs and data required by the robot. The memory 51 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow in the methods of the embodiments described above can be realized by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, realizes the steps of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be suitably increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer-readable media do not include electrical carrier signals and telecommunication signals.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. An intelligent movement method of a robot, the intelligent movement method of the robot comprising:
in the moving process of the robot, acquiring the position of an obstacle in the moving direction of the robot;
judging whether the position of the obstacle is in a first area or a second area in front of the robot, wherein the distance between the point of the first area and the robot is smaller than the distance between the point in the second area and the robot;
when the position of the obstacle is in a second area, calculating the maximum target speed of the robot according to the real-time distance between the robot and the obstacle;
and controlling the robot to move according to the current moving speed of the robot and the maximum target speed.
2. The method of claim 1, wherein the step of calculating the maximum target speed of the robot according to the real-time distance between the robot and the obstacle when the position of the obstacle is in the second area comprises:
when the position of the obstacle is in the second area, according to the formula V_target_max = V_max * (L - L_min) / (L_max - L_min), calculating the maximum target speed V_target_max at the position of the robot, wherein V_max is the maximum speed allowed at the position in the second area farthest from the robot, L_min is the distance from the robot to the closest point of the second area, L_max is the distance from the robot to the farthest point of the second area, and L is the real-time distance between the obstacle and the robot.
3. The robot intelligent movement method according to claim 1, wherein the controlling of the movement of the robot according to the current movement speed of the robot and the maximum target speed comprises:
if the current moving speed is less than the maximum target speed, keeping the current moving speed;
and if the current moving speed is greater than the maximum target speed, gradually adjusting the current moving speed to the maximum target speed.
4. The robotic smart mobility method of claim 1, further comprising:
when the obstacle is in a first area in front of the robot, determining that the maximum target speed is 0;
and determining the speed adjustment rate of the robot according to the distance between the obstacle and the robot and the relative speed.
5. The robot intelligent movement method according to claim 1, wherein the first area is located in front of the robot, its distance to the robot is less than or equal to a first predetermined value L_min, and its width is greater than or equal to the width of the robot; and the second area is located in front of the robot, its distance to the robot is greater than the first predetermined value L_min and less than or equal to a second predetermined value L_max, and its width is greater than or equal to the width of the robot.
6. A smart mobile device of a robot, comprising:
the position acquisition unit is used for acquiring the position of an obstacle in the moving direction of the robot in the moving process of the robot;
the position judging unit is used for judging whether the position of the obstacle is in a first area or a second area in front of the robot, wherein the distance between the point of the first area and the robot is smaller than the distance between the point of the second area and the robot;
the maximum target speed calculation unit is used for calculating the maximum target speed of the robot according to the real-time distance between the robot and the obstacle when the position of the obstacle is in the second area;
and the movement control unit is used for controlling the robot to move according to the current moving speed of the robot and the maximum target speed.
7. The robotic smart mobile device of claim 6, wherein the maximum target speed calculation unit is to:
when the position of the obstacle is in the second area, according to the formula V_target_max = V_max * (L - L_min) / (L_max - L_min), calculating the maximum target speed V_target_max at the position of the robot, wherein V_max is the maximum speed allowed at the position in the second area farthest from the robot, L_min is the distance from the robot to the closest point of the second area, L_max is the distance from the robot to the farthest point of the second area, and L is the real-time distance between the obstacle and the robot.
8. The robotic smart mobile device of claim 6, wherein the mobile control unit comprises:
a speed holding subunit, configured to hold the current moving speed if the current moving speed is less than the maximum target speed;
and the speed adjusting subunit is configured to gradually adjust the current moving speed to the maximum target speed if the current moving speed is greater than the maximum target speed.
9. A robot comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the robot smart mobility method according to any of claims 1 to 5 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, carries out the steps of the robot smart moving method according to any one of claims 1 to 5.
CN201811086262.9A 2018-09-18 2018-09-18 Robot and intelligent moving method and device thereof Pending CN110928283A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811086262.9A CN110928283A (en) 2018-09-18 2018-09-18 Robot and intelligent moving method and device thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811086262.9A CN110928283A (en) 2018-09-18 2018-09-18 Robot and intelligent moving method and device thereof

Publications (1)

Publication Number Publication Date
CN110928283A true CN110928283A (en) 2020-03-27

Family

ID=69855756

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811086262.9A Pending CN110928283A (en) 2018-09-18 2018-09-18 Robot and intelligent moving method and device thereof

Country Status (1)

Country Link
CN (1) CN110928283A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN204399056U (en) * 2015-01-15 2015-06-17 十堰金海源实业有限公司 AGV dolly collision avoidance system
CN204917859U (en) * 2015-09-09 2015-12-30 合肥泰禾光电科技股份有限公司 Anticollision AGV fork truck
CN205263649U (en) * 2015-11-18 2016-05-25 广东科达洁能股份有限公司 AGV dolly with variable safety protection system
CN106338996A (en) * 2016-10-20 2017-01-18 上海物景智能科技有限公司 Safe control method and system for mobile robot
CN206649342U (en) * 2016-11-15 2017-11-17 广州大学 A kind of automatical pilot transportation vehicle CAS

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111329399A (en) * 2020-04-09 2020-06-26 湖南格兰博智能科技有限责任公司 Finite-state-machine-based sweeper target point navigation method
CN113805571A (en) * 2020-05-29 2021-12-17 苏州科瓴精密机械科技有限公司 Robot walking control method and system, robot and readable storage medium
CN113805571B (en) * 2020-05-29 2024-03-12 苏州科瓴精密机械科技有限公司 Robot walking control method, system, robot and readable storage medium
CN114341761A (en) * 2020-12-25 2022-04-12 深圳市优必选科技股份有限公司 Collision avoidance method, mobile machine, and storage medium
US11797013B2 (en) 2020-12-25 2023-10-24 Ubtech North America Research And Development Center Corp Collision avoidance method and mobile machine using the same
CN114341761B (en) * 2020-12-25 2024-04-02 优必康(青岛)科技有限公司 Anti-collision method, mobile machine and storage medium

Similar Documents

Publication Publication Date Title
CN110928283A (en) Robot and intelligent moving method and device thereof
US11161246B2 (en) Robot path planning method and apparatus and robot using the same
US9927811B1 (en) Control system and method for controlling mobile warning triangle
CN110147091B (en) Robot motion control method and device and robot
US20180164804A1 (en) Tele-operated vehicle, and vehicle control device and control method thereof
US11485357B2 (en) Vehicle control device
EP3760505A1 (en) Method and apparatus for avoidance control of vehicle, electronic device and storage medium
CN111095384A (en) Information processing device, autonomous moving device, method, and program
CN105765495A (en) Reduced dead band for single joystick drive vehicle control
CN109955245A (en) A kind of barrier-avoiding method of robot, system and robot
CN110850859A (en) Robot and obstacle avoidance method and obstacle avoidance system thereof
CN111761581B (en) Path planning method and device, and narrow space traveling method and device
CN111136648A (en) Mobile robot positioning method and device and mobile robot
CN113771839B (en) Automatic parking decision planning method and system
CN113287991B (en) Control method and control device for cleaning robot
CN112508912A (en) Ground point cloud data filtering method and device and boom anti-collision method and system
EP3415405A2 (en) Mobile warning triangle and related obstacle avoidance method
CN110083158B (en) Method and equipment for determining local planning path
Hsu et al. Implementation of car-following system using LiDAR detection
CN110901384B (en) Unmanned vehicle control method, device, medium and electronic equipment
CN112445209A (en) Robot control method, robot, storage medium, and electronic apparatus
CN104423381A (en) Electronic equipment and protection method thereof
US20190367013A1 (en) Control Device and Method
CN114967695A (en) Robot and its escaping method, device and storage medium
US11115594B2 (en) Shutter speed adjusting method and apparatus, and robot using the same

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20200327)