CN116483063A - Edge behavior control method, system, robot and storage medium

Edge behavior control method, system, robot and storage medium

Info

Publication number
CN116483063A
Authority
CN
China
Prior art keywords
robot
force
reaction event
sensor
obstacle
Prior art date
Legal status
Pending
Application number
CN202211105827.XA
Other languages
Chinese (zh)
Inventor
王锦涛
杜川
Current Assignee
Yunjing Intelligence Technology Dongguan Co Ltd
Yunjing Intelligent Shenzhen Co Ltd
Original Assignee
Yunjing Intelligence Technology Dongguan Co Ltd
Yunjing Intelligent Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Yunjing Intelligence Technology Dongguan Co Ltd, Yunjing Intelligent Shenzhen Co Ltd filed Critical Yunjing Intelligence Technology Dongguan Co Ltd
Priority to CN202211105827.XA
Publication of CN116483063A

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0219Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory ensuring the processing of the whole working surface

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The application provides an edge behavior control method, system, robot and storage medium. The method is applied to a robot on which a plurality of sensors are arranged, and comprises the following steps: acquiring a reaction event, where the reaction event is generated from signals fed back to the robot when the plurality of sensors detect an obstacle or a specific area, the specific area comprising a forbidden zone and/or a base station area; acquiring a first resultant force generated by the robot when the reaction event occurs; acquiring a second resultant force generated by the robot when a reaction event at a historical moment occurred; and determining the target direction of the robot according to the first resultant force and the second resultant force. By determining the target direction and adjusting the robot's position so that it moves along that direction, the method solves the problem that a robot which fails to detect a suitable direction in a complex scene cannot escape from it.

Description

Edge behavior control method, system, robot and storage medium
Technical Field
The present disclosure relates to the field of robot control technologies, and in particular, to an edgewise behavior control method, system, robot, and storage medium.
Background
Robots are increasingly widely used in people's daily production and life. Some autonomously moving robots can move edgewise, i.e. along the edges of obstacles.
However, when an autonomous robot performs passive edgewise movement and encounters certain complex scenes, it may fail to detect a new target direction and fall into an endless loop of reciprocating motion. One example is a scene in which two obstacles are joined at one end so as to form an angle (possibly an acute angle). Once the robot enters such an inescapable scene, it cannot detect a suitable direction and becomes trapped in the behavioral loop.
Disclosure of Invention
The application provides an edgewise behavior control method, system, robot and storage medium, which aim to enable the robot to escape from complex scenes when it encounters them.
In a first aspect, the present application provides an edge behavior control method applied to a robot, where a plurality of sensors are disposed on the robot, the method includes:
acquiring a reaction event, wherein the reaction event is generated from signals fed back to the robot when the plurality of sensors detect an obstacle or a specific area, and the specific area comprises a forbidden zone and/or a base station area;
acquiring a first resultant force generated by the robot when the reaction event occurs;
acquiring a second resultant force generated by the robot when a reaction event at a historical moment occurs;
and determining the target direction of the robot according to the first resultant force and the second resultant force.
In a second aspect, the present application further provides an edgewise behavior control system, comprising a robot, sensors arranged on the robot, and a controller, the controller being capable of implementing the steps of the edgewise behavior control method described above.
In a third aspect, the present application also provides a robot comprising:
an obstacle detection module;
a moving module; and
a control device, wherein the obstacle detection module and the moving module are both connected with the control device, and the control device comprises: a memory, a processor, and a robot control program stored in the memory and runnable on the processor, wherein the robot control program, when executed by the processor, implements the steps of the edgewise behavior control method described above.
In a fourth aspect, the present application also provides a computer readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the edgewise behavior control method as described.
The application provides an edgewise behavior control method, system, robot and storage medium which, while the robot is currently moving edgewise, can acquire the reaction events generated when the various sensors detect an obstacle or a specific area together with the reaction events of historical states, determine the target direction of the robot by calculating the resultant force of the current and historical reaction events, and adjust the robot's position so that it moves along the target direction.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of a method for controlling edge behavior according to an embodiment of the present application;
FIG. 2 is a flow chart of sub-steps of an edge behavior control method provided in an embodiment of the present application;
FIG. 3 is a flowchart illustrating another sub-step of a method for controlling edge behavior according to an embodiment of the present disclosure;
FIG. 4 is a schematic flow chart of a further sub-step of a method for controlling edge behavior according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a first repulsive force in an edge behavior control method according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a first attractive force in an edge behavior control method according to an embodiment of the present application;
FIG. 7 is a schematic diagram of calculating resultant force, attractive force and repulsive force in an edge behavior control method according to an embodiment of the present application;
FIG. 8a is a schematic diagram of a first state of a robot touching a wall, without considering the historical state, according to an embodiment of the present application;
FIG. 8b is a schematic diagram of a second state of the robot touching the wall, without considering the historical state, according to an embodiment of the present application;
FIG. 8c is a schematic diagram of a third state of the robot touching the wall, without considering the historical state, according to an embodiment of the present application;
FIG. 8d is a schematic diagram of a fourth state of the robot touching the wall, without considering the historical state, according to an embodiment of the present application;
FIG. 9a is a schematic diagram of a first state of the robot touching the wall, considering the historical state, according to an embodiment of the present application;
FIG. 9b is a schematic diagram of a second state of the robot touching the wall, considering the historical state, according to an embodiment of the present application;
FIG. 9c is a schematic diagram of a third state of the robot touching the wall, considering the historical state, according to an embodiment of the present application;
FIG. 9d is a schematic diagram of a fourth state of the robot touching the wall, considering the historical state, according to an embodiment of the present application;
FIG. 9e is a schematic diagram of a fifth state of the robot touching the wall, considering the historical state, according to an embodiment of the present application;
FIG. 10 is a schematic block diagram of a computer device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
The flow diagrams depicted in the figures are merely illustrative and not necessarily all of the elements and operations/steps are included or performed in the order described. For example, some operations/steps may be further divided, combined, or partially combined, so that the order of actual execution may be changed according to actual situations.
Some embodiments of the present application are described in detail below with reference to the accompanying drawings. The following embodiments and features of the embodiments may be combined with each other without conflict.
To facilitate an understanding of the present application, some terminology will first be explained.
Active edge: the robot senses the distance and the angular deviation between itself and the 'edge' to be followed, compares them with preset values, and triggers an active edgewise instruction according to the comparison result. The robot then actively adjusts its movement direction so that the distance and angular deviation between itself and the edge to be followed equal, or approximately equal, the preset distance and the preset angle value. The preset distance is the manually preset distance between the robot and the edge to be followed; the preset angle value is the manually preset deviation between the robot's movement direction and the angle of the edge to be followed.
Passive edge: the robot receives the reaction events (i.e. reactions), and performs fusion processing on all the reaction events at the moment in a passive mode to adjust the motion reversal of the robot.
Reaction event: may be generated when any of the plurality of sensors preset on the robot detects or collides with an obstacle or a specific area.
The plurality of sensors may include a collision sensor, a cliff sensor, an ultrasonic sensor, a radome (radar cover) sensor, a front TOF sensor, and the like. TOF (Time of Flight) technology emits an infrared light source toward the object to be measured; a sensor then collects the light waves reflected back by the object, and the system calculates the distance between the object and the camera from the pulse difference or time difference of the received light waves. Several different kinds of sensors may be arranged on each robot, and there may be more than one sensor of each kind.
Specific area: may include an area manually set on the map built into the robot, a forbidden zone, etc. An area set on the built-in map is, for example, a predetermined cleaning area.
The embodiments of the application provide an edgewise behavior control method, system, robot and storage medium, which can determine the target direction of the robot and adjust the robot's position so that it moves along the target direction, solving the problem that a robot which fails to detect a suitable direction in a complex scene cannot escape from it.
When a reaction event occurs, the factors to be considered when the robot adjusts the direction mainly comprise the following three types:
(1) The force with which the robot avoids the current obstacle.
(2) The force drawing the robot toward the edge of the current obstacle for the next pass, so that the robot can smoothly complete the next stage of edgewise exploration.
(3) Avoiding the reaction events triggered at historical moments.
If all three factors are taken into consideration, the target direction of the robot can be determined more accurately.
In view of the above, the method for controlling the edge behavior provided in the embodiments of the present application considers the above three factors, so that the target direction can be determined more accurately.
Referring to fig. 1, fig. 1 is a flow chart of a method for controlling edge behavior according to an embodiment of the present application. The edgewise behavior control method is applied to a robot, wherein a plurality of sensors are arranged on the robot, and the method comprises steps S100 to S400.
Step S100, acquiring a reaction event, wherein the reaction event is generated from a signal fed back to the robot when a sensor detects an obstacle or a specific area, and the specific area comprises a forbidden zone and/or a base station area.
A sensor can emit a detection signal ahead of the movement direction; when an obstacle or a specific area is detected, the signal returns to the robot, which thereby obtains a feedback signal. When the distance between the robot and the edge of the obstacle or specific area is smaller than the preset distance, the robot generates a reaction event.
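For illustration only, the following Python sketch shows how such a reaction event might be represented and triggered; the class, the function name and the 0.05 m trigger distance are assumptions made for this sketch, not values taken from the application.

```python
import math
from dataclasses import dataclass

@dataclass
class ReactionEvent:
    sensor_id: str        # which sensor fed the signal back
    sensor_pos: tuple     # sensor mounting point in the robot body frame (x, y)
    obstacle_pos: tuple   # detected edge position in the world frame (x, y)

TRIGGER_DISTANCE = 0.05   # hypothetical preset distance, in metres

def maybe_generate_event(sensor_id, sensor_pos, robot_pos, obstacle_pos):
    """Generate a reaction event when the detected edge of an obstacle or
    specific area is closer to the robot than the preset distance."""
    if math.dist(robot_pos, obstacle_pos) < TRIGGER_DISTANCE:
        return ReactionEvent(sensor_id, sensor_pos, obstacle_pos)
    return None
```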
In step S200, a first resultant force generated by the robot when the reaction event occurs is obtained.
When a reaction event occurs, it indicates that the robot has detected, and may have touched, an obstacle or a cliff (i.e. a step) along its original movement direction. The robot therefore generates a resultant force corresponding to each reaction event, so as to adjust its movement direction. When several reaction events occur, the resultant forces corresponding to the individual reaction events are combined to obtain the first resultant force.
Specifically, referring to fig. 2, fig. 2 is a schematic flow chart of sub-steps of an edge behavior control method according to an embodiment of the present application. Step S200 may include:
s201, acquiring a first attractive force and a first repulsive force which are currently received by the robot, wherein the first repulsive force is acting force of the robot for avoiding a current obstacle, and the first attractive force is acting force of the robot for facing the edge of the current obstacle next time.
S202, synthesizing the first attractive force and the first repulsive force to obtain a third resultant force corresponding to the sensor triggering the reaction event.
It will be appreciated that when a certain reaction event occurs, the robot generates a resultant force, i.e. a third resultant force, corresponding to the sensor triggering the generation of the reaction event.
S203, synthesizing third resultant forces corresponding to all sensors generated by the current trigger reaction event, and obtaining the first resultant forces.
The robot generates a reaction event from the signal fed back by a sensor, and generates a first attractive force and a first repulsive force according to that reaction event. When an obstacle is detected, several sensors may detect or touch it and feed back signals, so the robot generates several first attractive forces and first repulsive forces. Therefore, when multiple reaction events occur at the same time, the robot combines the first attractive and repulsive forces generated from the feedback signals of the multiple sensors to obtain the first resultant force.
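A minimal sketch of this two-stage combination (a per-sensor third resultant force, then the first resultant force over all triggered sensors), with forces modelled as 2-D vectors; all names are assumptions:

```python
def third_resultant(attraction, repulsion):
    """Step S202: combine one sensor's first attractive force and first
    repulsive force into that sensor's third resultant force."""
    return (attraction[0] + repulsion[0], attraction[1] + repulsion[1])

def first_resultant(per_sensor_forces):
    """Step S203: sum the third resultant forces of all sensors that
    triggered a reaction event at the current moment."""
    thirds = [third_resultant(a, r) for a, r in per_sensor_forces]
    return (sum(f[0] for f in thirds), sum(f[1] for f in thirds))

# Example: two sensors fired, each with attraction (0, 1) and its own repulsion.
print(first_resultant([((0.0, 1.0), (-0.5, 0.0)),
                       ((0.0, 1.0), (0.0, -0.5))]))   # -> (-0.5, 1.5)
```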
Further, in an embodiment of the present application, acquiring the first attractive force and the first repulsive force currently received by the robot may include: determining the directions of the various forces currently received by the robot based on the positions of the sensors and in combination with the directions of the sensors towards the center of the robot; and determining each first repulsive force and first attractive force according to the magnitude of the acting force and the direction of the acting force.
A plurality of sensors are arranged on the robot, and the position of each is preset; once the type of a sensor is known, its mounting position on the robot is known. The magnitude of each acting force on the robot is also generally known in advance, since the magnitude of a particular acting force can be configured and adjusted according to the importance of the corresponding sensor.
Further, in the embodiment of the present application, referring to fig. 3, fig. 3 is a schematic flow chart of another sub-step of an edge behavior control method provided in the embodiment of the present application. Based on the position of the sensor, and in combination with the direction of the sensor toward the center of the robot, determining the first repulsive force currently experienced by the robot may include:
step S2021, when the reaction event occurs, acquiring a direction of the sensor corresponding to the reaction event toward the central position of the robot, to obtain a first direction.
Step S2022, acquiring the force of the robot in the first direction, and determining the first repulsive force in combination with the first direction.
The force of the robot along the first direction can be generally obtained in advance, and the magnitude of the first repulsive force can be determined by combining the force of the robot along the first direction.
Referring to fig. 5, fig. 5 is a schematic diagram of a first repulsive force in an edge behavior control method according to an embodiment of the present application. Here 'ultra sonic' denotes the ultrasonic sensor: when the ultrasonic sensor detects an obstacle ahead, it feeds a signal back to the robot, and the robot generates a first repulsive force. The direction of this first repulsive force is from the ultrasonic sensor toward the center of the robot. Similarly, the first repulsive force corresponding to each of the other sensors, such as cliff-1 and cliff-2 (cliff sensors), magnet (magnet sensor), ir (infrared sensor) and buffer-1 (collision sensor), can be determined in the same way.
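As a sketch of this rule, assuming the robot center is the body-frame origin and the per-sensor magnitude is configured in advance:

```python
import math

def first_repulsion(sensor_pos, magnitude):
    """First repulsive force of one triggered sensor: directed from the
    sensor's mounting position toward the robot centre (the body-frame
    origin), with a magnitude configured per sensor importance."""
    norm = math.hypot(sensor_pos[0], sensor_pos[1])
    return (-magnitude * sensor_pos[0] / norm,
            -magnitude * sensor_pos[1] / norm)

# A sensor mounted at the front of the body frame, e.g. (0.15, 0.0),
# yields a repulsion pointing straight backward: (-1.0, -0.0).
```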
Furthermore, it should be emphasized that different sensors trigger their feedback signals under different conditions. For example, touching a wall may trigger buffer-1, whereas a cliff (e.g. a step) may trigger cliff-1.
When a sensor detects an obstacle ahead and the distance between the obstacle and the robot is smaller than or equal to the preset distance, a signal is fed back to the robot, and the robot also generates a first attractive force. The force obtained by combining the first attractive force and the first repulsive force is the resultant force borne by the robot when the reaction event triggered by that sensor occurs.
Further, referring to fig. 4, fig. 4 is a schematic flow chart of another sub-step of the method for controlling edge behavior according to the embodiment of the present application. The robot is provided with a first side, a second side perpendicular to the first side, sensors are respectively arranged on the first side and the second side of the robot, the moving direction of the robot towards the first side is a second direction, and the moving direction of the robot towards the second side is a third direction; determining a first attractive force currently received by the robot based on the position of the sensor and in combination with the direction of the sensor towards the center of the robot, including:
step S2023, calculating a first distance between the position of the robot and the obstacle when the current reaction event occurs.
Step S2024, calculating a second distance between the position of the robot when the last reaction event occurs and the position of the robot when the current reaction event occurs.
Step S2025, when the first distance and the second distance are both less than or equal to a first preset threshold, determining the direction of the first attractive force to be the second direction or the third direction according to whether the sensor corresponding to the current reaction event is located on the first side or the second side.
Step S2026, determining the first attractive force according to the magnitude of the force of the robot in the second direction or the third direction, and the second direction or the third direction.
For example, referring to fig. 6, the second direction may be toward the left side of the robot and the third direction toward the front of the robot. Thus, when the robot follows an edge on its right side, the acting force generated after the right-side sensor sends a feedback signal points forward, whereas after the left-side sensor sends a feedback signal, the acting force generated by the robot points to the left.
Further, when the first distance and the second distance are both greater than the preset threshold, the target direction is the direction from the midpoint of the line segment between the robot's position and the obstacle, along the perpendicular bisector of that segment, toward the obstacle side.
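A rough sketch of steps S2023 to S2026 together with the fallback above; the 0.1 m threshold, the side labels and the function name are assumptions:

```python
def first_attraction(d_to_obstacle, d_between_events, triggered_side,
                     f_second_dir, f_third_dir, threshold=0.1):
    """When both distances are within the first preset threshold, the first
    attractive force points along the second direction (movement toward the
    first side) or the third direction (movement toward the second side),
    according to which side's sensor fired. f_second_dir and f_third_dir are
    the corresponding force vectors, known in advance."""
    if d_to_obstacle <= threshold and d_between_events <= threshold:
        return f_second_dir if triggered_side == "first" else f_third_dir
    # Both distances exceed the threshold: the perpendicular-bisector rule
    # described above supplies the target direction instead.
    return None
```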
In addition, the influence of the historical state on the robot also needs to be considered, so in the embodiment of the application the method further includes:
step S300, when a reaction event at a historical moment occurs, a second resultant force generated by the robot is obtained.
Specifically, step S300 includes: acquiring the second repulsive force and the second attractive force borne by the robot; combining the second repulsive force and the second attractive force to obtain a fourth resultant force corresponding to the reaction event; and, for each reaction event at a historical moment, obtaining the attenuation coefficient corresponding to the fourth resultant force borne by the robot, multiplying each attenuation coefficient by the fourth resultant force of the same reaction event, and summing to obtain the second resultant force.
In the embodiment of the application, the historical moments may include the moments at which the robot generated reaction events in the past. For example, they may include the most recent moment at which a sensor detected an obstacle and caused the robot to generate several reaction events, and the moment before that.
When the historical state is considered, the resultant force at each historical moment (i.e. the fourth resultant force obtained by combining the second attractive force and the second repulsive force) needs to be multiplied by the corresponding attenuation coefficient. For example, suppose the fourth resultant forces at historical moments are F2, F3, F4, ...; each force is then multiplied by the corresponding power of the attenuation coefficient α ∈ (0, 1), for instance F2′ = F2·α and F3′ = F3·α². Finally, F1, F2′, F3′, ... are combined to obtain the second resultant force, and the direction of this final resultant force is the target rotation direction of the robot.
The attenuation coefficient corresponding to each resultant force is α^n, where n denotes the number of reaction events that have occurred between that reaction event and the current one.
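A minimal sketch of this decayed summation, with the history ordered from the most recent reaction event backwards; the value of alpha is an assumption:

```python
def second_resultant(historical_fourths, alpha=0.5):
    """Weight each historical fourth resultant force by alpha**n, where n is
    the number of reaction events separating it from the current one, then
    sum; alpha in (0, 1) is the attenuation coefficient."""
    fx = fy = 0.0
    for n, (hx, hy) in enumerate(historical_fourths, start=1):
        fx += (alpha ** n) * hx
        fy += (alpha ** n) * hy
    return (fx, fy)
```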
Further, the acquiring of the second repulsive force and the second attractive force borne by the robot includes:
acquiring the position of the sensor when each reaction event at the historical moment occurs; and determining a second repulsive force and a second attractive force borne by the robot based on the position of the sensor and in combination with the direction of the sensor towards the center of the robot, wherein the second repulsive force is the acting force of the robot for avoiding the current obstacle, and the second attractive force is the acting force of the robot for facing the edge of the current obstacle next time.
The determining, based on the position of the sensor and in combination with the direction of the sensor toward the center of the robot, a second repulsive force currently experienced by the robot includes:
when the reaction event occurs, acquiring a fourth direction of the sensor corresponding to the reaction event towards the center position of the robot; the second repulsive force is determined in accordance with the magnitude of the force applied by the robot toward the fourth direction in combination with the fourth direction.
The method of acquiring the second repulsive force is substantially the same as that of acquiring the first repulsive force, so it is not repeated here.
Further, the robot is provided with a third side and a fourth side perpendicular to the third side, the third side and the fourth side of the robot are respectively provided with a sensor, the movement direction of the robot towards the third side is a fourth direction, and the movement direction of the robot towards the fourth side is a fifth direction. The determining, based on the position of the sensor and in combination with the direction of the sensor toward the center of the robot, a second attractive force to which the robot is subjected includes:
calculating a third distance between the position of the robot and the obstacle when the current reaction event occurs; calculating a fourth distance between the position of the robot when the last reaction event occurred and the position of the robot when the current reaction event occurs; when the third distance and the fourth distance are both smaller than or equal to a second preset threshold, determining the direction of the second attractive force to be the fourth direction or the fifth direction according to whether the sensor corresponding to the current reaction event is located on the third side or the fourth side; and determining the second attractive force according to the magnitude of the robot's force in the fourth or fifth direction, combined with that direction.
For example, the third side may be the right side of the robot and the fourth side the front of the robot. When the robot follows an edge on its right side, the acting force generated after the right-side sensor sends a feedback signal points forward, whereas after the left-side sensor sends a feedback signal, the acting force generated by the robot points to the left.
It will be appreciated that if the robot travels along an edge on its left side, the acting force generated when the left-side sensor sends a feedback signal and causes a reaction event points forward, and the acting force generated when the right-side sensor sends a feedback signal and causes a reaction event points to the right.
The method further comprises: when the third distance and the fourth distance are both greater than the second preset threshold, taking the direction from the midpoint of the line segment between the robot's position and the obstacle toward the obstacle as the target direction.
To help the robot get past the obstacle, taking this direction as the target direction comprises: acquiring the midpoint of the line segment between the position of the robot and the obstacle; determining, through that midpoint, a straight line perpendicular or nearly perpendicular to the line segment; and taking the direction of that straight line as the target direction.
The second preset threshold is set according to the actual situation. When the third distance and the fourth distance are both greater than the second preset threshold, a certain distance remains between the robot and the obstacle, and a straight line perpendicular or nearly perpendicular to the segment and passing through the midpoint can then be taken as the target direction.
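A sketch of this fallback rule; which of the two perpendicular senses to keep is left to the caller, since the application only specifies a line perpendicular (or nearly perpendicular) to the segment through its midpoint:

```python
import math

def bisector_target(robot_pos, obstacle_pos):
    """Return the midpoint of the robot-obstacle segment and the heading of
    one perpendicular to the segment through that midpoint (the +90-degree
    rotation of the robot-to-obstacle vector)."""
    dx = obstacle_pos[0] - robot_pos[0]
    dy = obstacle_pos[1] - robot_pos[1]
    mid = ((robot_pos[0] + obstacle_pos[0]) / 2,
           (robot_pos[1] + obstacle_pos[1]) / 2)
    return mid, math.atan2(dx, -dy)   # heading in radians
```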
S400, determining the target direction of the robot according to the first resultant force and the second resultant force.
After the first resultant force and the second resultant force are combined, the direction of the final resultant force is obtained; this is the target direction of the robot, and by moving along it the robot can get past the current obstacle.
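Step S400 thus reduces to a vector addition followed by taking the heading of the result; a one-function sketch (names assumed):

```python
import math

def target_direction(f_first, f_second):
    """Combine the current (first) and historical (second) resultant forces;
    the heading of the final resultant force is the target direction."""
    fx = f_first[0] + f_second[0]
    fy = f_first[1] + f_second[1]
    return math.atan2(fy, fx)   # target heading in radians
```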
According to the above scheme, while the robot is currently moving edgewise, the reaction events generated when the sensors detect an obstacle or a specific area, together with the reaction events of historical states, can be acquired; the target direction of the robot is determined by calculating the resultant force of the current and historical reaction events, and the robot's position is adjusted so that it moves along the target direction. This solves the problem that a robot which fails to detect a suitable direction in certain complex scenes cannot escape from them.
Referring to fig. 7, fig. 7 is a schematic diagram of calculating resultant force, attractive force and repulsive force in an edge behavior control method according to an embodiment of the present application. The azimuth terms left, right, front and back below are all determined with reference to the perspective of fig. 7. Taking the buffer-1 sensor on the right side of the robot as an example: the robot is following an edge on its right side, so the corresponding attractive force points to the front of the robot, while the repulsive force points from the buffer-1 sensor toward the center of the robot. With both magnitudes known, the repulsive and attractive forces are combined into a resultant force, whose direction is shown in fig. 7.
If the robot adjusts its angle at the current position, it may touch an obstacle. Therefore, in order to enable the robot to safely complete an in-place rotation by a certain angle, in an embodiment of the present application, adjusting the position of the robot includes:
acquiring the historical speed of the robot; driving the robot to return along a historical route according to the historical speed; when the reaction event is no longer generated, the robot is stopped from moving.
Returning the robot along the historical route at the historical speed makes the robot retreat along its original path; it need only back up along that path until the current reaction event is released. This maximally ensures that the robot returns to the scene it has just passed through, after which it can successfully avoid the obstacle by moving along the adjusted target direction.
When a reaction event occurs, the robot needs to back up until the reaction event is released. To avoid generating new reactions while backing up, the robot must reverse along the very trajectory on which the reaction was generated, until the reaction is released.
The historical speed and the historical route can be obtained in advance. In particular, the effect of returning the robot along its original path can be achieved by sending the recorded speed data to the robot in reverse order.
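A sketch of this reverse replay, assuming a hypothetical robot interface with set_velocity(linear, angular) and has_active_reaction(); the sample period dt is also an assumption:

```python
import time

def backtrack(robot, speed_history, dt=0.05):
    """Replay the recorded (linear, angular) speed samples in reverse with
    both components negated, so the robot retraces its original path, and
    stop as soon as the reaction event is released."""
    for v, w in reversed(speed_history):
        if not robot.has_active_reaction():
            break                      # reaction released: stop backing up
        robot.set_velocity(-v, -w)     # retrace one sample of the route
        time.sleep(dt)
    robot.set_velocity(0.0, 0.0)       # stop the robot
```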
To facilitate adjusting the robot, adjusting the movement direction of the robot so that the robot moves along the target direction comprises:
and adjusting the angular speed and the linear speed of the robot to enable the movement direction of the robot to be the target direction.
In this scheme, adjusting the angular speed of the robot changes its heading, while adjusting the linear speed makes it advance. By adjusting the angular speed and the linear speed sequentially or simultaneously, the robot can find the target direction as soon as possible and move along it.
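A sketch of such a sequential adjustment, assuming the same hypothetical robot interface plus a heading() accessor; the gain, forward speed and alignment tolerance are illustrative values only:

```python
import math

def steer_to_target(robot, target_heading, k_ang=1.5, v_forward=0.2,
                    align_tol=0.05):
    """Rotate until the heading converges on the target direction, then
    apply linear speed to advance along it."""
    # wrap the heading error into [-pi, pi]
    err = math.atan2(math.sin(target_heading - robot.heading()),
                     math.cos(target_heading - robot.heading()))
    if abs(err) > align_tol:
        robot.set_velocity(0.0, k_ang * err)   # still turning toward target
    else:
        robot.set_velocity(v_forward, 0.0)     # aligned: move forward
```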
Further, after the determining the target direction of the robot, the method further comprises:
and adjusting the position of the robot and the movement direction of the robot to enable the robot to move along the target direction.
To make the position adjustment easier, the robot may be stopped before its position is adjusted. The angular speed is then adjusted to change the robot's movement direction, and once the movement direction coincides with the target direction, the linear speed is adjusted so that the robot moves toward the target.
To facilitate understanding of the present embodiment, some application scenarios are explained below:
the participation in fig. 8 a-8 d is shown in fig. 8 a-8 d without regard to historical state factors in the robot motion. Wherein, the circular structure is the robot. When the robot moves forward (i.e., the dotted arrow part in fig. 8 a), the right side of the robot touches the wall; the sensor on the right side detects an obstacle, at the moment, the sensor on the right side feeds back a signal to the robot, the repulsive force which faces to the front and the repulsive force which faces to the center of the robot are generated by the robot, the attractive force and the repulsive force are synthesized to obtain a resultant force, and the direction of the resultant force is shown in figure 8 a. The robot can now adjust the direction and turn as indicated by the dashed arrow in fig. 8 b. At this time, the robot still moves forward, after reaching the position in fig. 8c, the left and right sides of the robot touch the wall, and the robot receives feedback signals of a plurality of sensors (the buffer-2 and the buffer-1 respectively), and starts to retreat.
Figs. 9a to 9e show an application scenario with multiple sensors, in which the historical state is considered. In fig. 9a, when the right side of the robot touches the wall and the buffer-1 sensor on the right side detects the obstacle, the forces generated for buffer-1 are an attractive force toward the front and a repulsive force from buffer-1 toward the center of the robot; the resultant of the attractive and repulsive forces is shown in fig. 9a. The robot adjusts its movement direction according to the direction of the resultant force; the dashed arrow in fig. 9b represents the robot beginning to turn.
Fig. 9c illustrates the robot after it has moved to the cliff while its right side touches the wall. At this moment both sensors send feedback signals, so the robot generates two pairs of attractive and repulsive forces, as shown in fig. 9d. After each pair is combined into a resultant force, the result is shown in fig. 9e. The robot can then back up to its earlier position and move along the target direction, thereby avoiding both the short wall and the cliff.
In an embodiment of the present application, an edgewise behavior control system is provided, including a robot, sensors disposed on the robot, and a controller, where the controller is capable of implementing the steps of the edgewise behavior control method described above.
In an embodiment of the present application, there is also provided a robot including: an obstacle detection module; a moving module; and a control device, wherein the obstacle detection module and the moving module are connected with the control device, and the control device comprises: a memory, a processor, and a robot control program stored in the memory and runnable on the processor, wherein the robot control program, when executed by the processor, implements the steps of the edge behavior control method described above.
It should be noted that, for convenience and brevity of description, specific working processes of the above-described apparatus and each module may refer to corresponding processes in the foregoing embodiments of the edge behavior control method, which are not described herein again.
The apparatus provided by the above embodiments may be implemented in the form of a computer program which may be run on a computer device as shown in fig. 10.
Referring to fig. 10, fig. 10 is a schematic block diagram of a computer device according to an embodiment of the present application. The computer device may be a terminal.
With reference to FIG. 10, the computer device includes a processor, memory, and a network interface connected by a system bus, where the memory may include a non-volatile storage medium and an internal memory.
The non-volatile storage medium may store an operating system and a computer program. The computer program comprises program instructions that, when executed, cause the processor to perform any of the edgewise behavior control methods.
The processor is used to provide computing and control capabilities to support the operation of the entire computer device.
The internal memory provides an environment for the execution of a computer program in a non-volatile storage medium that, when executed by a processor, causes the processor to perform any of the edgewise behavior control methods.
The network interface is used for network communication such as transmitting assigned tasks and the like. It will be appreciated by those skilled in the art that the structure shown in fig. 10 is merely a block diagram of some of the structures associated with the present application and is not limiting of the computer device to which the present application may be applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
It should be appreciated that the processor may be a central processing unit (Central Processing Unit, CPU), but may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field-programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. Wherein the general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
Wherein, in one embodiment, a plurality of sensors are arranged on the robot, and the processor is used for running a computer program stored in a memory to realize the following steps:
acquiring a reaction event, wherein the reaction event is generated from signals fed back to the robot when the plurality of sensors detect an obstacle or a specific area, and the specific area comprises a forbidden zone and/or a base station area; acquiring a first resultant force generated by the robot when the reaction event occurs; acquiring a second resultant force generated by the robot when a reaction event at a historical moment occurs; and determining the target direction of the robot according to the first resultant force and the second resultant force.
In one embodiment, the processor, after implementing determining the target direction of the robot, is configured to implement:
and adjusting the position of the robot and the movement direction of the robot to enable the robot to move along the target direction.
In one embodiment, the processor is configured to, prior to adjusting the position of the robot, implement:
stopping the robot movement.
In one embodiment, the processor, when acquiring the first resultant force generated by the robot when the reaction event occurs, is configured to:
acquiring a first attractive force and a first repulsive force which are currently received by the robot, wherein the first repulsive force is the acting force of the robot for avoiding a current obstacle, and the first attractive force is the acting force of the robot for facing the edge of the current obstacle next time; synthesizing the first attractive force and the first repulsive force to obtain a third resultant force corresponding to the sensor triggering the reaction event; and synthesizing the third resultant forces corresponding to all sensors generated by the current trigger reaction event to obtain the first resultant force.
In one embodiment, the processor is configured, when implementing the acquiring the first attractive force and the first repulsive force currently received by the robot, to implement:
when a reaction event occurs, acquiring the position of the sensor generating the reaction event; determining the directions of the various forces currently received by the robot based on the positions of the sensors and in combination with the directions of the sensors towards the center of the robot; and determining each first repulsive force and first attractive force according to the magnitude of the acting force and the direction of the acting force.
In an embodiment, when the processor implements the position based on the sensor and determines, in combination with the direction of the sensor toward the center of the robot, a first repulsive force currently experienced by the robot, the processor is configured to implement:
When the reaction event occurs, acquiring the direction of the sensor corresponding to the reaction event towards the central position of the robot to obtain a first direction; and acquiring the force in a first direction, and determining the first repulsive force by combining the first direction.
In an embodiment, the robot has a first side, a second side perpendicular to the first side, and sensors are respectively arranged on the first side and the second side of the robot, wherein the direction of the robot moving towards the first side is a second direction, and the direction of the robot moving towards the second side is a third direction; the processor is further configured to, when implementing the determination of the first attractive force currently received by the robot based on the position of the sensor and in combination with the direction of the sensor toward the center of the robot, implement:
calculating a first distance between the position of the robot and the obstacle when a current reaction event occurs; calculating a second distance between the position of the robot when the last reaction event occurred and the position of the robot when the current reaction event occurs; when the first distance and the second distance are both smaller than or equal to a first preset threshold, determining the direction of the first attractive force to be the second direction or the third direction according to whether the sensor corresponding to the current reaction event is located on the first side or the second side; and determining the first attractive force according to the magnitude of the robot's force in the second or third direction, combined with that direction.
In an embodiment, when implementing the edgewise behavior control method, the processor is further configured to implement: when the first distance and the second distance are both greater than the preset threshold, taking the direction from the midpoint of the line segment between the robot's position and the obstacle, along the perpendicular bisector of that segment, toward the obstacle as the target direction.
In an embodiment, when implementing acquiring the second resultant force generated by the robot when a reaction event at a historical moment occurs, the processor is configured to implement: acquiring the second repulsive force and the second attractive force borne by the robot; combining the second repulsive force and the second attractive force to obtain a fourth resultant force corresponding to the reaction event; and, for each reaction event at a historical moment, obtaining the attenuation coefficient corresponding to the fourth resultant force borne by the robot, multiplying each attenuation coefficient by the fourth resultant force of the same reaction event, and summing to obtain the second resultant force.
In an embodiment, the processor is configured to, when implementing acquiring the second repulsive force and the second attractive force to which the robot is subjected: acquire the position of the sensor when each reaction event at the historical moment occurs; and determine the second repulsive force and the second attractive force borne by the robot based on the position of the sensor and in combination with the direction of the sensor towards the center of the robot, wherein the second repulsive force is the acting force of the robot for avoiding the current obstacle, and the second attractive force is the acting force of the robot for facing the edge of the current obstacle next time.
In an embodiment, the processor is configured to determine, when implementing the sensor-based position and in combination with a direction of the sensor toward the center of the robot, a second repulsive force currently experienced by the robot, for implementing: when the reaction event occurs, acquiring a fourth direction of the sensor corresponding to the reaction event towards the center position of the robot; the second repulsive force is determined in accordance with the magnitude of the force applied by the robot toward the fourth direction in combination with the fourth direction.
In an embodiment, the robot has a third side and a fourth side perpendicular to the third side, sensors are respectively disposed on the third side and the fourth side of the robot, the direction of movement of the robot toward the third side is a fourth direction, and the direction of movement of the robot toward the fourth side is a fifth direction; when implementing determining the second attractive force borne by the robot based on the position of the sensor and in combination with the direction of the sensor towards the center of the robot, the processor is configured to implement the following steps:
calculating a third distance between the position of the robot and the obstacle when the current reaction event occurs; calculating a fourth distance between the position of the robot when the last reaction event occurred and the position of the robot when the current reaction event occurs; when the third distance and the fourth distance are both smaller than or equal to a second preset threshold, determining the direction of the second attractive force to be the fourth direction or the fifth direction according to whether the sensor corresponding to the current reaction event is located on the third side or the fourth side; and determining the second attractive force according to the magnitude of the robot's force in the fourth or fifth direction, combined with that direction.
In an embodiment, the processor, when implementing the edge behavior control method, is configured to implement: and when the third distance and the fourth distance are both larger than a second preset threshold, the direction of the middle point of the line segment between the position of the robot and the obstacle towards the obstacle is the target direction.
In an embodiment, the processor is configured to, when the direction of the midpoint of the line segment between the position of the robot and the obstacle toward the obstacle is the target direction, implement: acquiring the position of the robot and the midpoint position of a line segment between the obstacle; determining a straight line perpendicular or nearly perpendicular to the line segment and passing through the midpoint position by the midpoint position; and taking the direction of the straight line as the target direction.
In an embodiment, the processor, when implementing adjusting the position of the robot, is configured to implement:
acquiring the historical speed of the robot; driving the robot to return along a historical route according to the historical speed; when the reaction event is no longer generated, the robot is stopped from moving.
In an embodiment, when implementing adjusting the movement direction of the robot so that the robot moves along the target direction, the processor is configured to implement: adjusting the angular speed and the linear speed of the robot so that the movement direction of the robot is the target direction.
The embodiment of the application also provides a computer readable storage medium, wherein the computer readable storage medium stores a computer program, the computer program comprises program instructions, and the processor executes the program instructions to realize any of the edge behavior control methods provided by the embodiment of the application.
The computer readable storage medium may be an internal storage unit of the computer device according to the foregoing embodiment, for example, a hard disk or a memory of the computer device. The computer readable storage medium may also be an external storage device of the computer device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), or the like, which are provided on the computer device.
While the invention has been described with reference to certain preferred embodiments, it will be understood by those skilled in the art that various changes and equivalent substitutions may be made without departing from the scope of the invention. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (19)

1. An edgewise behavior control method, applied to a robot on which a plurality of sensors are arranged, the method comprising:
acquiring a reaction event, wherein the reaction event is generated from signals fed back to the robot when the plurality of sensors detect an obstacle or a specific area, and the specific area comprises a forbidden zone and/or a base station area;
acquiring a first resultant force generated by the robot when the reaction event occurs;
acquiring a second resultant force generated by the robot when a reaction event at a historical moment occurs;
and determining the target direction of the robot according to the first resultant force and the second resultant force.
2. The edgewise control method of claim 1, further comprising, after the determining the target direction for the robot:
and adjusting the position of the robot and the movement direction of the robot to enable the robot to move along the target direction.
3. The edgewise control method of claim 2, wherein prior to adjusting the position of the robot, the method further comprises:
stopping the robot movement.
4. The edgewise control method of claim 1, wherein the acquiring the first resultant force generated by the robot when the reaction event occurs comprises:
acquiring a first attractive force and a first repulsive force which are currently received by the robot, wherein the first repulsive force is acting force of the robot for avoiding a current obstacle, and the first attractive force is acting force of the robot for facing the edge of the current obstacle next time;
synthesizing the first attractive force and the first repulsive force to obtain a third combined force corresponding to the sensor triggering the reaction event;
and synthesizing third resultant forces corresponding to all sensors generated by the current trigger reaction event to obtain the first resultant forces.
5. The edgewise behavior control method according to claim 4, wherein the acquiring the first attractive force and the first repulsive force currently received by the robot includes:
when a reaction event occurs, acquiring the position of the sensor generating the reaction event;
determining the directions of the various forces currently received by the robot based on the positions of the sensors and in combination with the directions of the sensors towards the center of the robot;
And determining each first repulsive force and first attractive force according to the magnitude of the acting force and the direction of the acting force.
6. The edgewise control method according to claim 5, wherein the determining the first repulsive force currently experienced by the robot based on the position of the sensor in combination with the direction of the sensor toward the center of the robot includes:
when the reaction event occurs, acquiring the direction of the sensor corresponding to the reaction event towards the central position of the robot to obtain a first direction;
and acquiring the force in a first direction, and determining the first repulsive force by combining the first direction.
7. The edgewise control method of claim 5, wherein the robot has a first side, a second side perpendicular to the first side, the first and second sides of the robot are respectively provided with sensors, the direction of movement of the robot toward the first side is a second direction, and the direction of movement of the robot toward the second side is a third direction; determining a first attractive force currently received by the robot based on the position of the sensor and in combination with the direction of the sensor towards the center of the robot, including:
Calculating a first distance between the position of the robot and the obstacle when a current reaction event occurs;
calculating a second distance between the position of the robot when the last reaction event occurs and the position of the robot when the current reaction event occurs;
when the first distance and the second distance are smaller than or equal to a first preset threshold value, determining the direction of the first attractive force to be a second direction or a third direction according to the first side or the second side of the sensor corresponding to the current reaction event;
the first attractive force is determined according to the magnitude of the force of the robot in the second direction or the third direction and the second direction or the third direction.
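One way claim 7's threshold test might look in code; the side labels and force inputs are assumptions, since the claim leaves the sensing details open:

```python
def first_attractive(first_dist, second_dist, triggered_side,
                     force_second_dir, force_third_dir, threshold):
    """Pick the attractive force along the second or third direction,
    depending on which side's sensor raised the current reaction event."""
    if first_dist <= threshold and second_dist <= threshold:
        return force_second_dir if triggered_side == "first" else force_third_dir
    return None   # both distances above threshold: claim 8's fallback applies
```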
8. The edgewise behavior control method according to claim 7, wherein the method further comprises:
when the first distance and the second distance are both greater than the first preset threshold, taking as the target direction the direction from the midpoint of the line segment between the position of the robot and the obstacle, along the perpendicular bisector of the line segment, toward the obstacle.
9. The edgewise behavior control method according to claim 1, wherein the acquiring of the second resultant force generated by the robot when a reaction event at a historical moment occurred comprises:
acquiring a second repulsive force and a second attractive force borne by the robot;
combining the second repulsive force and the second attractive force to obtain a fourth resultant force corresponding to the reaction event; and
acquiring the attenuation coefficient corresponding to the fourth resultant force borne by the robot for each reaction event at a historical moment, multiplying each attenuation coefficient by the fourth resultant force of the same reaction event, and summing the products to obtain the second resultant force.
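Claim 9's decay-weighted sum over historical events, as a minimal sketch; the history representation and coefficient values are assumptions:

```python
def second_resultant(history):
    """history: (decay_coefficient, fourth_resultant) per past reaction
    event, with fourth_resultant as an (x, y) vector. Older events would
    typically carry smaller coefficients so their influence fades."""
    x = sum(k * f[0] for k, f in history)
    y = sum(k * f[1] for k, f in history)
    return (x, y)
```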
10. The edgewise behavior control method according to claim 9, wherein the acquiring of the second repulsive force and the second attractive force borne by the robot comprises:
acquiring the position of the sensor when each reaction event at a historical moment occurred; and
determining the second repulsive force and the second attractive force borne by the robot based on the position of the sensor, combined with the direction from the sensor toward the center of the robot, wherein the second repulsive force is the force driving the robot away from the current obstacle, and the second attractive force is the force drawing the robot toward the edge of the current obstacle for its next edgewise pass.
11. The edgewise behavior control method according to claim 10, wherein the determining of the second repulsive force borne by the robot based on the position of the sensor, combined with the direction from the sensor toward the center of the robot, comprises:
when the reaction event occurs, acquiring a fourth direction from the sensor corresponding to the reaction event toward the center position of the robot; and
determining the second repulsive force from the magnitude of the force on the robot in the fourth direction, combined with the fourth direction.
12. The edgewise behavior control method according to claim 10, wherein the robot has a third side and a fourth side perpendicular to the third side, sensors are disposed on the third side and the fourth side respectively, the direction in which the robot moves toward the third side is a fourth direction, and the direction in which the robot moves toward the fourth side is a fifth direction; and the determining of the second attractive force borne by the robot based on the position of the sensor, combined with the direction from the sensor toward the center of the robot, comprises:
calculating a third distance between the position of the robot and the obstacle when the current reaction event occurs;
calculating a fourth distance between the position of the robot when the previous reaction event occurred and the position of the robot when the current reaction event occurs;
when the third distance and the fourth distance are both less than or equal to a second preset threshold, determining the direction of the second attractive force to be the fourth direction or the fifth direction according to whether the sensor corresponding to the current reaction event is on the third side or the fourth side; and
determining the second attractive force from the magnitude of the force on the robot in the fourth direction or the fifth direction, combined with that direction.
13. The edgewise behavior control method according to claim 12, wherein the method further comprises:
when the third distance and the fourth distance are both greater than the second preset threshold, taking as the target direction the direction from the midpoint of the line segment between the position of the robot and the obstacle toward the obstacle.
14. The edgewise behavior control method according to claim 13, wherein the taking as the target direction the direction from the midpoint of the line segment between the position of the robot and the obstacle toward the obstacle comprises:
acquiring the position of the midpoint of the line segment between the position of the robot and the obstacle;
determining, through the midpoint position, a straight line perpendicular or nearly perpendicular to the line segment; and
taking the direction of the straight line as the target direction.
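The geometry of claims 13 and 14 reduces to taking the perpendicular of the robot-obstacle segment through its midpoint. A sketch follows; the choice between the two perpendicular orientations is left open here, as the claim itself does not fix the sign:

```python
import math

def target_direction(robot_pos, obstacle_pos):
    """Unit vector of the line through the segment midpoint,
    perpendicular to the robot-obstacle segment (claims 13-14)."""
    dx = obstacle_pos[0] - robot_pos[0]
    dy = obstacle_pos[1] - robot_pos[1]
    px, py = -dy, dx                     # rotate segment by 90 degrees
    norm = math.hypot(px, py) or 1.0
    return (px / norm, py / norm)
```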
15. The edgewise behavior control method of claim 2, wherein the adjusting of the position of the robot comprises:
acquiring the historical speed of the robot;
driving the robot back along its historical route at the historical speed; and
stopping the movement of the robot when the reaction event is no longer generated.
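Claim 15's back-off behavior, sketched with a hypothetical robot interface; move_toward, stop, and reaction_active are all assumed names, not from the patent:

```python
def back_off(robot, history, reaction_active):
    """Retrace the recorded route at the recorded speeds until no
    reaction event is raised any more, then stop (claim 15)."""
    for pose, speed in reversed(history):
        if not reaction_active():
            break
        robot.move_toward(pose, speed)   # hypothetical drive call
    robot.stop()
```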
16. The edgewise behavior control method according to claim 2, wherein the adjusting of the position of the robot and the movement direction of the robot so that the robot moves in the target direction comprises:
adjusting the angular velocity and the linear velocity of the robot so that the movement direction of the robot becomes the target direction.
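Claim 16 only requires that angular and linear velocity be adjusted until the heading matches the target direction; a proportional-steering sketch, with gains chosen arbitrarily for illustration:

```python
import math

def velocity_command(heading, target_heading, k_ang=1.5, v_max=0.3):
    """Turn toward the target heading; slow the linear speed while the
    heading error is large. Angles in radians."""
    error = math.atan2(math.sin(target_heading - heading),
                       math.cos(target_heading - heading))  # wrap to [-pi, pi]
    angular = k_ang * error
    linear = v_max * max(0.0, math.cos(error))
    return linear, angular
```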
17. An edgewise behavior control system, comprising a robot, sensors arranged on the robot, and a controller configured to implement the steps of the edgewise behavior control method according to any one of claims 1 to 16.
18. A robot, comprising:
an obstacle detection module;
a movement module; and
a control device, wherein the obstacle detection module and the movement module are both connected to the control device, and the control device comprises: a memory, a processor, and a robot control program stored on the memory and executable on the processor, the control program, when executed by the processor, implementing the steps of the edgewise behavior control method according to any one of claims 1 to 16.
19. A computer-readable storage medium, wherein the computer-readable storage medium has a computer program stored thereon, and the computer program, when executed by a processor, implements the steps of the edgewise behavior control method according to any one of claims 1 to 16.
CN202211105827.XA 2022-09-09 2022-09-09 Edge behavior control method, system, robot and storage medium Pending CN116483063A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211105827.XA CN116483063A (en) 2022-09-09 2022-09-09 Edge behavior control method, system, robot and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211105827.XA CN116483063A (en) 2022-09-09 2022-09-09 Edge behavior control method, system, robot and storage medium

Publications (1)

Publication Number Publication Date
CN116483063A true CN116483063A (en) 2023-07-25

Family

ID=87216546

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211105827.XA Pending CN116483063A (en) 2022-09-09 2022-09-09 Edge behavior control method, system, robot and storage medium

Country Status (1)

Country Link
CN (1) CN116483063A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116774711A (en) * 2023-08-23 2023-09-19 天津旗领机电科技有限公司 Deceleration control system and method
CN116774711B (en) * 2023-08-23 2023-10-31 天津旗领机电科技有限公司 Deceleration control system and method

Similar Documents

Publication Publication Date Title
CN107562048B (en) Dynamic obstacle avoidance control method based on laser radar
JP4316477B2 (en) Tracking method of mobile robot
KR100486737B1 (en) Method and apparatus for generating and tracing cleaning trajectory for home cleaning robot
US8515612B2 (en) Route planning method, route planning device and autonomous mobile device
US10948907B2 (en) Self-driving mobile robots using human-robot interactions
EP2107392A1 (en) Object recognition system for autonomous mobile body
US8731715B2 (en) Mobile device and method and computer-readable medium controlling same for using with sound localization
US20120136510A1 (en) Apparatus and method for detecting vehicles using laser scanner sensors
CN116483063A (en) Edge behavior control method, system, robot and storage medium
Wang et al. Acoustic robot navigation using distributed microphone arrays
CN108873875B (en) Robot steering motion control method and device, robot and storage medium
CN111103875B (en) Method, apparatus and storage medium for avoiding
WO2019047415A1 (en) Trajectory tracking method and apparatus, storage medium and processor
US20240000281A1 (en) Autonomous robot
JP2011224679A (en) Reaction robot, reaction control method, and reaction control program
Kim Control laws to avoid collision with three dimensional obstacles using sensors
WO2022060530A1 (en) Robot localization and mapping accommodating non-unique landmarks
Ribeiro Obstacle avoidance
WO2021246169A1 (en) Information processing device, information processing system, method, and program
Fazli et al. Simultaneous landmark classification, localization and map building for an advanced sonar ring
Huang Control approach for tracking a moving target by a wheeled mobile robot with limited velocities
JP2021144435A (en) Collision prevention device, mobile body, and program
KR102479619B1 (en) Autonomous driving robot capable of obstacle avoidance movement
US11872704B2 (en) Dynamic motion planning system
US20230012905A1 (en) Proximity detection for automotive vehicles and other systems based on probabilistic computing techniques

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 518000, Building 1, Yunzhongcheng A2902, Wanke Yuncheng Phase 6, Dashi Er Road, Xili Community, Xishan District, Shenzhen City, Guangdong Province

Applicant after: Yunjing intelligent (Shenzhen) Co.,Ltd.

Applicant after: Yunjing Intelligent Innovation (Shenzhen) Co.,Ltd.

Address before: 31st Floor, West Tower, Baidu International Building, No. 8 Haitian 1st Road, Binhai Community, Yuehai Street, Nanshan District, Shenzhen, Guangdong 518000

Applicant before: Yunjing intelligent (Shenzhen) Co.,Ltd.

Applicant before: YUNJING INTELLIGENCE TECHNOLOGY (DONGGUAN) Co.,Ltd.
