WO2019183804A1 - Robot avoidance control method and related apparatus - Google Patents

Robot avoidance control method and related apparatus

Info

Publication number
WO2019183804A1
Authority
WO
WIPO (PCT)
Prior art keywords
robot
external object
orientation information
environment
obstacle
Prior art date
Application number
PCT/CN2018/080702
Other languages
English (en)
French (fr)
Inventor
尤中乾
Original Assignee
尤中乾
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 尤中乾 filed Critical 尤中乾
Priority to PCT/CN2018/080702 priority Critical patent/WO2019183804A1/zh
Priority to EP18911429.1A priority patent/EP3779630A4/en
Priority to CN201880004168.6A priority patent/CN109906134B/zh
Priority to US17/042,020 priority patent/US20210060780A1/en
Publication of WO2019183804A1 publication Critical patent/WO2019183804A1/zh

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • B25J9/1666Avoiding collision or forbidden zones
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0268Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
    • G05D1/0274Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means using mapping information stored in a memory device
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20Input arrangements for video game devices
    • A63F13/21Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/216Input arrangements for video game devices characterised by their sensors, purposes or types using geographical information, e.g. location of the game device or player using GPS
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20Input arrangements for video game devices
    • A63F13/24Constructional details thereof, e.g. game controllers with detachable joystick handles
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55Controlling game characters or game objects based on the game progress
    • A63F13/57Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game
    • A63F13/573Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game using trajectories of game objects, e.g. of a golf ball according to the point of impact
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55Controlling game characters or game objects based on the game progress
    • A63F13/57Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game
    • A63F13/577Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game using determination of contact between game characters or objects, e.g. to avoid collision between virtual racing cars
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80Special adaptations for executing a specific game genre or game mode
    • A63F13/837Shooting of targets
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/90Constructional details or arrangements of video game devices not provided for in groups A63F13/20 or A63F13/25, e.g. housing, wiring, connections or cabinets
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F9/00Games not otherwise provided for
    • A63F9/24Electric games; Games using electronic circuits not otherwise provided for
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63HTOYS, e.g. TOPS, DOLLS, HOOPS OR BUILDING BLOCKS
    • A63H17/00Toy vehicles, e.g. with self-drive; Cranes, winches or the like; Accessories therefor
    • A63H17/26Details; Accessories
    • A63H17/36Steering-mechanisms for toy vehicles
    • A63H17/40Toy vehicles automatically steering or reversing by collision with an obstacle
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00Manipulators not otherwise provided for
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02Sensing devices
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02Sensing devices
    • B25J19/021Optical sensing devices
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/06Safety devices
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0214Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0238Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63HTOYS, e.g. TOPS, DOLLS, HOOPS OR BUILDING BLOCKS
    • A63H11/00Self-movable toy figures

Definitions

  • The present invention relates to the field of data processing technologies and, in particular, to a robot avoidance control method and a related device.
  • With the continuous development of artificial intelligence technology, intelligent robots have come into being. At present, intelligent robots are widely used in many fields, such as the smart home, service, and smart game fields. In practical applications, a robot may encounter obstacles, moving objects, and other objects while moving (for example, while walking). How to make a robot automatically avoid obstacles, moving objects, and other objects while it is moving is a current research hotspot.
  • The embodiments of the present invention provide a robot avoidance control method and a related device that can control a robot to effectively avoid external objects.
  • A first aspect of the embodiments of the present invention provides a robot avoidance control method, where the method includes:
  • when the robot receives a trigger from an external object, acquiring a position at which the robot is triggered by the external object;
  • determining orientation information of the external object according to the position at which the robot is triggered by the external object;
  • determining an avoidance movement policy according to the orientation information and a pre-acquired environment map of the environment where the robot is located, where the avoidance movement policy is used to control the robot to move within the environment map so as to avoid an external object, launched from the direction indicated by the orientation information, that would trigger the robot;
  • generating a move instruction according to the avoidance movement policy, the move instruction being used to control the robot to move.
  • A second aspect of the embodiments of the present invention provides a robot avoidance control apparatus, where the apparatus includes:
  • a first acquiring unit, configured to acquire, when the robot receives a trigger from an external object, a position at which the robot is triggered by the external object;
  • a first determining unit, configured to determine orientation information of the external object according to the position at which the robot is triggered by the external object;
  • a second determining unit, configured to determine an avoidance movement policy according to the orientation information of the external object and a pre-acquired environment map of the environment where the robot is located, where the avoidance movement policy is used to control the robot to move within the environment map so as to avoid an external object, launched from the direction indicated by the orientation information, that would trigger the robot;
  • an instruction generating unit, configured to generate a move instruction according to the avoidance movement policy, where the move instruction is used to control the robot to move.
  • A third aspect of the embodiments of the present invention provides a robot, including a processor and a memory, where the memory stores executable program code, and the processor is configured to invoke the executable program code to perform the robot avoidance control method of the first aspect.
  • A fourth aspect of the embodiments of the present invention provides a storage medium, where the storage medium stores instructions that, when executed on a computer, cause the computer to perform the robot avoidance control method of the first aspect.
  • In the embodiments of the present invention, when the robot receives a trigger from an external object, the position at which the robot is triggered is first acquired, and the orientation information of the external object is determined according to that position; an avoidance movement strategy is then determined according to the orientation information and a pre-acquired environment map of the environment where the robot is located; finally, a movement instruction is generated according to the avoidance movement strategy and used to control the movement of the robot, thereby controlling the robot to effectively avoid the external object.
  • FIG. 1 is a schematic flowchart of a robot avoidance control method according to an embodiment of the present invention.
  • FIG. 2 is a schematic diagram of a robot application scenario according to an embodiment of the present invention.
  • FIG. 3 is a schematic structural diagram of a robot avoidance control device according to an embodiment of the present invention.
  • FIG. 4 is a schematic structural diagram of a robot according to an embodiment of the present invention.
  • FIG. 1 is a schematic flowchart of a robot avoidance control method according to an embodiment of the present invention.
  • the robot avoidance control method described in the embodiment of the present invention includes:
  • When the robot receives a trigger from an external object, the robot acquires the position at which it is triggered by the external object.
  • The external object may be a moving object (such as a BB pellet or a water pellet), or may be light (such as a laser).
  • The external object may be a projectile emitted by a launching device (such as a BB gun, a water gun, or a laser emitter), or an object thrown by a user (such as a coin or a stone); the external object may also be an object that falls naturally in the environment where the robot is located (such as a water droplet). The external object may also be another type of object or substance, and may be in another motion state, which is not limited in the embodiments of the present invention.
  • The robot receiving a trigger from the external object may mean that the robot is struck by the external object, for example hit by a BB pellet fired from a BB gun, or that the robot is illuminated by the external object, for example hit by the laser emitted by a laser emitter.
  • The robot can detect whether it has been struck by a moving object through a preset vibration sensor, and detect whether it has been hit by light through a preset photosensitive sensor; if the robot detects that it has been struck by a moving object or hit by light, the robot determines that it has received a trigger from an external object.
  • When the robot receives the trigger of the external object, it acquires the position at which it was triggered, that is, the position at which it was struck by the moving object, or the position at which it was hit by the light.
  • The robot is preset with at least one vibration sensor and/or at least one photosensitive sensor, and the at least one vibration sensor and/or at least one photosensitive sensor are preset on at least one body part of the robot (for example, the head, an arm, or the torso).
  • The vibration sensor and the photosensitive sensor may be preset at the same position on the robot or at different positions on the robot.
  • The robot can also be triggered by other objects in other ways, and can detect whether it is triggered by such objects through other types of preset sensors.
  • The robot presets an initial reference life value N (for example, 12), that is, a preset total life value of the robot, and presets a mapping relationship between life values and body parts of the robot; for example, the head of the robot corresponds to a life value n1 (for example, 3), the torso corresponds to a life value n2 (for example, 2), and an arm corresponds to a life value n3 (for example, 1).
  • When the robot receives a trigger from the external object, the robot's life value is reduced according to the position at which it was triggered. For example, if the head of the robot is triggered, the robot subtracts n1 from the initial reference life value N to obtain an adjusted life value N1, and adjusts the initial reference life value N to N1. When the head of the robot is triggered again by the external object, or other parts of the robot are triggered by the external object, the adjustment can be deduced by analogy, and details are not described herein again.
  • After the robot reduces its life value according to the triggered position, if the current life value of the robot is not zero, timing starts; if it is detected that the robot is not triggered again by an external object within a first preset duration (for example, 30 s) after being triggered, that is, within the first preset duration the robot is not struck again by a moving object or hit again by light, the current life value of the robot is increased, for example by one. Under the premise that the robot is not triggered again by an external object, the current life value is increased once for each first preset duration that elapses, until the current life value of the robot equals the initial reference life value.
  • After the robot reduces its life value according to the triggered position, if the current life value of the robot is zero, the robot is controlled to enter a stationary state, that is, the robot stops moving, and timing starts; when it is detected from the timing result that the robot has been in the stationary state for more than a second preset duration (for example, 1 min), the current life value of the robot is reset to the initial reference life value N, and the robot is controlled to resume moving within its environment.
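The life-value bookkeeping described above can be sketched as follows. This is a minimal illustrative sketch, not the patent's implementation: the class and method names are hypothetical, and the constants simply reuse the example values from the text (N = 12; head 3, torso 2, arm 1; recovery once per first preset duration; reset after the second preset duration at rest).

```python
# Illustrative sketch of the life-value scheme; all names and the
# constants below are assumptions based on the examples in the text.
INITIAL_LIFE = 12                                  # initial reference life value N
PART_DAMAGE = {"head": 3, "torso": 2, "arm": 1}    # life value per body part

class LifeTracker:
    def __init__(self):
        self.life = INITIAL_LIFE
        self.stationary = False

    def on_trigger(self, part):
        """Reduce the life value according to the body part that was triggered."""
        self.life = max(0, self.life - PART_DAMAGE.get(part, 1))
        if self.life == 0:
            self.stationary = True   # life exhausted: robot stops moving
        return self.life

    def on_recovery_tick(self):
        """Called once per first preset duration (e.g. 30 s) with no new trigger."""
        if not self.stationary and self.life < INITIAL_LIFE:
            self.life += 1
        return self.life

    def on_stationary_timeout(self):
        """Called after the second preset duration (e.g. 1 min) in the stationary state."""
        self.life = INITIAL_LIFE
        self.stationary = False
        return self.life
```

A control loop would call `on_trigger` from the sensor callback and drive the two timer methods from its own clock.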
  • When the robot receives the trigger of the external object, the robot issues a trigger signal, that is, an alarm signal, to prompt the user that the robot has been hit by light or struck by a moving object. The robot may issue the trigger signal by flashing a light, by emitting a preset specific sound, or by performing a preset specific action (such as vibrating); the robot may also issue the trigger signal in other ways, which is not limited in the embodiments of the present invention.
  • The robot determines orientation information of the external object according to the position at which the robot is triggered by the external object.
  • After acquiring the position at which it was struck or hit by the external object, the robot determines the orientation information of the external object according to that position; the orientation information includes the direction information of the external object.
  • If the robot determines that the external object is a moving object, it first acquires the position at which it was struck by the moving object and the pressure information generated when the moving object struck the robot, the pressure information including the pressure magnitude and the pressure direction; the robot then determines the direction information of the external object according to the struck position, and analyzes the acquired pressure magnitude and pressure direction to determine the location information of the moving object, that is, the predicted location area from which the moving object was launched.
  • The robot may also acquire images of its environment at a preset time interval (for example, every 2 s or 3 s) through a preset camera, and process and analyze multiple images acquired at different times to obtain the orientation information of the external object, the orientation information including the direction information and the location information of the external object.
  • There may be one or more cameras preset on the robot, and each may be a monocular, binocular, or multi-lens camera, which is not limited in the embodiments of the present invention.
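As a rough illustration of how the pressure direction measured at the impact point could be turned into a bearing toward the launch source, the sketch below assumes the source lies opposite the direction in which the impact pushes the robot; the function name, the 2-D bearing convention, and the omission of the pressure magnitude (which could additionally hint at range) are all illustrative assumptions, not the patent's method.

```python
import math

def estimate_source_bearing(pressure_dir_xy):
    """Estimate the bearing (degrees, counter-clockwise from +x) of the
    launch source from the pressure direction measured at the impact
    point: the source lies opposite to the push direction."""
    px, py = pressure_dir_xy
    return math.degrees(math.atan2(-py, -px)) % 360.0
```

For example, an impact that pushes the robot along +x implies a source roughly at bearing 180 degrees.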
  • The robot determines an avoidance movement policy according to the orientation information of the external object and a pre-acquired environment map of the environment where the robot is located.
  • The robot is preset with an environment detecting device, which may be disposed on multiple parts of the robot, for example on the head of the robot or on another rotatable part. The environment detecting device may be, for example, a depth camera, which may be a monocular or multi-lens camera. The environment detecting device may also be a laser radar or the like, which is not limited in the embodiments of the present invention.
  • The environment map of the environment where the robot is located is obtained by the robot in advance. Specifically, the robot first performs obstacle recognition on its environment, or on the surroundings along its moving path, through the preset environment detecting device, and acquires obstacle information of the environment, where the obstacle information includes one or more of the distance between the obstacle and the robot, the orientation information of the obstacle, the shape information of the obstacle, and the size information of the obstacle. The robot then constructs an environment map of its environment in real time according to the obstacle information.
  • The robot can use Simultaneous Localization and Mapping (SLAM) technology to construct a two-dimensional environment map of its environment in real time according to the obstacle information, or use visual SLAM technology to construct a three-dimensional environment map in real time based on the obstacle information.
  • The robot may also use other technologies to construct the environment map, which is not limited in the embodiments of the present invention. After the robot obtains the environment map in advance, its movement route can be reasonably planned according to the environment map, so that obstacles can be effectively avoided when the robot is controlled to move within the environment, thereby protecting the robot.
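A full SLAM pipeline is beyond the scope of a sketch, but the idea of turning obstacle information (position and size) into a two-dimensional environment map can be illustrated with a minimal occupancy grid; the grid size, the integer cell coordinates, and the square obstacle encoding below are assumptions for illustration only.

```python
def build_occupancy_grid(obstacles, size=20):
    """Minimal stand-in for a 2-D environment map: mark the cells covered
    by each obstacle, given as (centre_x, centre_y, half_extent) in cells.
    0 = free, 1 = occupied."""
    grid = [[0] * size for _ in range(size)]
    for ox, oy, half in obstacles:
        for y in range(max(0, oy - half), min(size, oy + half + 1)):
            for x in range(max(0, ox - half), min(size, ox + half + 1)):
                grid[y][x] = 1
    return grid
```

A route planner would then search only the free cells of this grid when planning the robot's movement route.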
  • The avoidance movement strategy is determined according to the orientation information of the external object and the environment map of the environment where the robot is located, and is used to control the robot to move within the environment corresponding to the environment map so as to avoid an external object launched from the direction indicated by the orientation information of the external object.
  • the robot generates a movement instruction according to the avoidance movement strategy, and the movement instruction is used to control the movement of the robot.
  • The manner in which the robot determines the avoidance movement strategy according to the orientation information of the external object and the pre-acquired environment map may be as follows: the robot first predicts, according to the orientation information of the external object, the location area within the environment map that the external object will reach when launched again; it then determines target orientation information according to that predicted location area and the pre-acquired environment map, the target orientation information including a target direction and a target location.
  • The target direction may be the direction opposite to the direction indicated by the orientation information of the external object, or may be a direction at a preset angle (for example, 45 degrees or 90 degrees) to the direction indicated by the orientation information of the external object.
  • The target location may be a location within the environment map where the probability that the robot is triggered again by the external object is lowest. Specifically, the target location may be the location, within the location area in the environment map that the external object will reach when launched again, where the probability of the external object arriving is lowest. The target location may also be determined according to the target direction as a position separated by a preset distance (for example, 0.5 m) from the location area in the environment map that the external object will reach when launched again; that is, the target location lies outside the location area within the environment map that the external object will reach when launched again.
  • The robot first plans its avoidance route, that is, its movement route, according to the determined target orientation information. The avoidance route may be the route with the shortest distance from the current position of the robot to the position indicated by the target orientation information, the route taking the shortest time from the current position to that position, or the route with the smallest probability of the robot being triggered again by the external object on the way from the current position to the position indicated by the target orientation information. The robot then generates a movement instruction according to the planned avoidance route; the movement instruction is used to control the robot to move within its environment according to the avoidance route and to move to the position indicated by the target orientation information, so as to avoid an external object, launched from the direction indicated by the orientation information of the external object, that would trigger the robot.
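The "target direction opposite the threat, offset by a preset distance" rule above can be sketched geometrically; the function name, the bearing convention, and the clearance parameter (standing in for the 0.5 m-style preset distance) are illustrative assumptions.

```python
import math

def pick_target_position(robot_xy, threat_bearing_deg, clearance=0.5):
    """Pick a target position in the direction opposite the threat bearing,
    offset from the robot by a preset clearance distance (illustrative)."""
    away = math.radians((threat_bearing_deg + 180.0) % 360.0)
    x, y = robot_xy
    return (x + clearance * math.cos(away), y + clearance * math.sin(away))
```

A planner would then run a shortest-distance or shortest-time search over the environment map from the robot's current position to this target position.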
  • In another manner, the robot determines the avoidance movement strategy according to the orientation information of the external object and the pre-acquired environment map as follows: the robot first predicts, according to the orientation information of the external object, the location area within the environment map that the external object will reach when launched again; it then identifies a target obstacle according to that predicted location area, the pre-acquired environment map, and the pre-acquired obstacle information of the environment. The target obstacle may be an obstacle separated by a preset distance from the location area within the environment map that the external object will reach when launched again, that is, the target obstacle is located outside that location area. The target obstacle may also be an obstacle that faces the external object and blocks it, that is, the location area within the environment map that the external object will reach when launched again is located on the side of the target obstacle facing the external object, and the side of the target obstacle facing away from the external object is located outside that location area.
  • The robot first plans its avoidance route according to the obstacle information of the determined target obstacle. The avoidance route may be the route with the shortest distance, or the route taking the shortest time, from the current position of the robot to a position on the side of the target obstacle facing away from the external object. The robot then generates a movement instruction according to the planned avoidance route; the movement instruction is used to control the robot to move within its environment according to the avoidance route and to move to a position on the side of the target obstacle facing away from the external object, so as to avoid an external object, launched from the direction indicated by the orientation information of the external object, that would trigger the robot.
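Choosing a position on the side of the target obstacle facing away from the external object can be sketched as extending the threat-to-obstacle line past the obstacle and standing off behind it; the names and the standoff parameter are hypothetical, and the obstacle is treated as a point for simplicity.

```python
import math

def cover_position(obstacle_xy, threat_xy, standoff=0.3):
    """Position behind the target obstacle relative to the threat: extend
    the threat->obstacle line past the obstacle by a standoff distance."""
    ox, oy = obstacle_xy
    tx, ty = threat_xy
    dx, dy = ox - tx, oy - ty
    d = math.hypot(dx, dy) or 1.0   # avoid division by zero if coincident
    return (ox + standoff * dx / d, oy + standoff * dy / d)
```

A real planner would also account for the obstacle's shape and size information when deciding how far behind it to stand.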
  • The movement instruction is also used to control the speed and/or direction of the robot as it moves according to the avoidance route. For example, the movement instruction may be used to control the robot to continuously adjust its movement speed and/or its movement direction while moving according to the avoidance route.
  • For example, the robot's moving speed may be increased every first preset interval, the moving speed then being a first speed, and decreased every second preset interval, the moving speed then being a second speed. The values of the first preset interval and the second preset interval may be the same or different; the first speed is greater than the second speed. When the robot's moving speed is increased again, it may be greater than, smaller than, or equal to the first speed; that is, the value of the first speed may change continuously. For the same reason, the value of the second speed, and the values of the first and second preset intervals, may also change continuously, and details are not described herein again.
  • The robot may be controlled to move to the left (or forward) every third preset interval, covering a first distance, and to the right (or backward) every fourth preset interval, covering a second distance.
  • The values of the third and fourth preset intervals may be the same or different; the values of the first and second distances may be the same or different.
  • When the robot is next moved to the left (or forward), its moving distance may be greater than, less than, or equal to the first distance; that is, the value of the first distance may change continuously, as may the value of the second distance.
  • The values of the third and fourth preset intervals may also change continuously, which is not described again here. In this way, the robot can be controlled to continuously vary its speed and/or direction while moving along the avoidance route, further reducing the probability that the robot is triggered again by the external object while moving along that route.
  • The target orientation information or target obstacles determined by the robot may be multiple. The robot first analyzes the avoidance route corresponding to each piece of target orientation information or each target obstacle, and predicts the probability that each route will be struck or hit by the external object.
  • The robot then selects the route with the lowest such probability as the target avoidance route and generates a movement instruction from it, used to move the robot along the target avoidance route to its end point so as to evade an external object that would trigger the robot.
  • The moving speed of the robot is related to its current life value. The speed may be positively correlated with the current life value, that is, the greater the current life value, the faster the robot moves, and vice versa; or it may be negatively correlated, that is, the greater the current life value, the slower the robot moves, and vice versa. The embodiments of the present invention are not limited in this respect.
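A minimal sketch of the positive-correlation variant follows; the linear scaling and the speed bounds are assumptions made for illustration.

```python
def move_speed(current_life, max_life, min_speed=0.2, max_speed=1.0):
    """Scale the robot's speed linearly with its current life value.

    A full-health robot moves at max_speed; a robot at zero life
    would move at min_speed (before it enters the stationary state).
    """
    ratio = max(0.0, min(1.0, current_life / max_life))
    return min_speed + (max_speed - min_speed) * ratio
```

The negative-correlation variant mentioned in the text would simply use `1 - ratio` in place of `ratio`.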
  • After being triggered by an external object, the robot can determine the orientation information of the external object and determine an avoidance movement strategy according to that orientation information and the pre-acquired environment map of the environment in which the robot is located; the avoidance movement strategy indicates that the robot can use obstacles in its environment, or a direction different from the one indicated by the orientation information of the external object, to evade the external object, so that the robot can be controlled to evade external objects effectively.
  • FIG. 2 is a schematic diagram of a robot application scenario according to an embodiment of the present invention.
  • The robot is applied in a real-world game, where the environment of the robot is the user's home (or office).
  • The environment in which the robot is located includes obstacles such as stools, tables, cabinets, sofas, and walls, and the user holds a launching device.
  • The robot moves on the ground by means of movement modules (such as wheels or leg-like structures).
  • While moving, the robot detects obstacles in the surroundings of its path through an environment detecting device (such as a depth camera or a lidar), so as to determine where obstacles block passage and where passage is clear.
  • The robot moves in a passable direction, continues to detect obstacles around its path in real time, and acquires obstacle information, which includes one or more of the distance between an obstacle and the robot, the orientation of the obstacle, the shape of the obstacle, and the size of the obstacle.
  • From the obstacle information thus obtained, the robot builds an environment map of its environment in real time, thereby acquiring the environment map in advance; the position information of the obstacles is recorded in the environment map.
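The real-time map building step can be pictured as maintaining an occupancy grid that is updated as obstacle detections arrive. This toy class (grid representation and cell semantics are assumptions) stands in for the SLAM pipeline a real robot would use.

```python
class EnvironmentMap:
    """A tiny 2-D occupancy grid: 1 marks an obstacle cell, 0 free space."""

    def __init__(self, width, height):
        self.grid = [[0] * width for _ in range(height)]

    def add_obstacle(self, x, y):
        """Record an obstacle detection at integer cell (x, y)."""
        self.grid[y][x] = 1

    def is_free(self, x, y):
        """True if the cell is inside the map and not occupied."""
        return (0 <= y < len(self.grid) and 0 <= x < len(self.grid[0])
                and self.grid[y][x] == 0)
```

A route planner can then query `is_free` to decide where passage is blocked and where it is clear, which is exactly the distinction the detection step is making.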
  • The user shoots at the robot with a launcher that can emit a laser, fire BB pellets, or fire water beads.
  • The robot is equipped with a light sensor and/or a vibration sensor.
  • When a laser hits the robot, the light sensor senses it, the reading is collected by the robot, and the robot determines that it has been hit by the laser.
  • Objects such as BB pellets or water beads striking the robot can produce a brief, strong vibration.
  • The vibration sensor senses this vibration, the reading is collected by the robot, and the robot determines that it has been struck. After the robot detects that it has been hit by a laser, a BB pellet, or a water bead, it flashes a light, emits a sound, or vibrates to notify the user that it has been hit.
  • The current life value of the robot is updated according to the number and positions of the hits.
  • When the number of times the robot is hit reaches a preset number, the current life value of the robot becomes zero.
  • The robot is then controlled to enter a stationary state and stop moving. For example, suppose the total life value of the robot is 3 and the current life value is decremented by one each time the robot is hit; the robot enters the stationary state after being hit 3 times. After the robot updates its current life value, if the current life value is not zero, the robot is controlled to enter an escape mode.
  • In escape mode, the robot plans a movement route that escapes the shooting and generates a movement instruction for controlling the robot to move along that route to evade the laser, BB pellets, or water beads.
  • The robot determines the orientation information of the laser, BB pellet, or water bead from the position at which it was hit, analyzes the pre-acquired environment map, and selects a passable route in a direction away from the direction indicated by the orientation information.
  • Alternatively, the robot searches the environment map for obstacles; if an obstacle is found that can block the laser, BB pellets, or water beads, that obstacle is determined as the target obstacle, and the robot is moved to the side of the target obstacle that shelters it from the shooting, so that the robot can be controlled to effectively avoid the laser, BB pellets, or water beads.
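One way to picture the "obstacle can block the shot" idea is simple 2-D geometry: the sheltering position is the point just behind the obstacle relative to the shooter. This sketch treats the obstacle as a circle; the circular model and the safety margin are assumptions made for illustration.

```python
import math

def hiding_point(shooter, obstacle_center, radius, margin=0.3):
    """Point just behind a circular obstacle, on the far side from the shooter.

    shooter, obstacle_center: (x, y) positions in the map frame.
    radius: obstacle radius; margin: extra clearance behind it.
    """
    dx = obstacle_center[0] - shooter[0]
    dy = obstacle_center[1] - shooter[1]
    dist = math.hypot(dx, dy)
    # Extend the shooter->obstacle ray past the obstacle's far edge.
    scale = (dist + radius + margin) / dist
    return (shooter[0] + dx * scale, shooter[1] + dy * scale)
```

Standing at the returned point puts the obstacle directly on the line between the shooter and the robot, which is the condition the text describes for the target obstacle.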
  • The moving speed of the robot in escape mode is related to its life value.
  • When the life value of the robot is relatively high, the robot moves faster; otherwise it moves more slowly.
  • The life value of the robot can also be restored over time. After the robot is hit by a laser, a BB pellet, or a water bead, if it is not hit again for a certain period while escaping, its life value gradually recovers; for example, if the robot is not hit again within 1 minute, its current life value is increased by 1. If the robot enters the stationary state, then after it has stopped moving for a certain period, its life value is restored to the initial reference life value and the robot is controlled to resume moving.
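The hit/recover/respawn bookkeeping above can be condensed into a small state machine; the numbers (total life 3, 1-minute regeneration, a respawn delay) mirror the example, while the class and method names are illustrative.

```python
class LifeTracker:
    """Track the robot's life value as it is hit, regenerates, and respawns."""

    def __init__(self, total_life=3, regen_after=60.0, respawn_after=120.0):
        self.total_life = total_life
        self.life = total_life
        self.regen_after = regen_after      # seconds without a hit -> +1 life
        self.respawn_after = respawn_after  # stationary time -> full reset
        self.stationary = False

    def on_hit(self):
        """Called when a laser, BB pellet, or water bead hit is detected."""
        if self.stationary:
            return
        self.life = max(0, self.life - 1)
        if self.life == 0:
            self.stationary = True          # stop moving: the stationary state

    def on_timer(self, seconds_since_event):
        """Called periodically with the time since the last hit or stop."""
        if self.stationary:
            if seconds_since_event >= self.respawn_after:
                self.life = self.total_life # restore the initial reference value
                self.stationary = False     # resume moving
        elif seconds_since_event >= self.regen_after:
            self.life = min(self.total_life, self.life + 1)
```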
  • In this way, interaction between the robot and the user in the game can be realized; moreover, robot-related games can evolve from augmented-reality games to real-world games, effectively improving the user experience and increasing the variety and fun of the game.
  • The robot in the embodiments of the present invention may also be a robot with a flight function.
  • The method by which such a robot evades external objects during flight may also refer to the above description and is not described again here.
  • When the robot receives a trigger from an external object, it first acquires the position at which it was triggered and determines the orientation information of the external object from that position.
  • It then determines an avoidance movement strategy according to the orientation information of the external object and the pre-acquired environment map of the environment in which the robot is located; the strategy, determined from the orientation information and the environment map, is used to control the robot to move within the environment map so as to evade an external object emitted from the direction indicated by the orientation information
  • that would trigger the robot. Finally, a movement instruction is generated according to the avoidance movement strategy and used to control the robot to move, so that the robot can be controlled to effectively evade external objects.
  • FIG. 3 is a schematic structural diagram of a robot avoidance control apparatus according to an embodiment of the present invention.
  • the robot avoidance control device described in the embodiment of the present invention corresponds to the robot described above, and the robot avoidance control device includes:
  • a first acquiring unit 301 configured to acquire, when the robot receives the trigger of the external object, a location that is triggered by the external object;
  • a first determining unit 302 configured to determine orientation information of the external object according to the position at which the robot was triggered by the external object;
  • a second determining unit 303 configured to determine an avoidance movement strategy according to the orientation information of the external object and a pre-acquired environment map of the environment in which the robot is located, the avoidance movement strategy being determined from the orientation information and the environment map and used to control the robot to move within the environment map so as to evade an external object emitted from the direction indicated by the orientation information that would trigger the robot;
  • the instruction generating unit 304 is configured to generate a move instruction according to the avoidance movement policy, where the move instruction is used to control the robot movement.
  • The second determining unit 303 determines the avoidance movement strategy according to the orientation information of the external object and the environment map of the environment in which the robot is located specifically by: predicting, from the orientation information of the external object, the location area within the environment map that the external object will reach, and determining target orientation information from that location area and the environment map.
  • The instruction generating unit 304 generates the movement instruction according to the avoidance movement strategy specifically by: generating the movement instruction according to the target orientation information.
  • The movement instruction is used to control the robot to move within the environment map according to the target orientation information, so as to evade an external object emitted from the direction indicated by the orientation information that would trigger the robot.
  • The second determining unit 303 may also determine the avoidance movement strategy specifically by: predicting, from the orientation information of the external object, the location area within the environment map that the external object will reach, and determining a target obstacle from that location area, the environment map, and the pre-acquired obstacle information of the environment in which the robot is located.
  • In that case, the instruction generating unit 304 generates the movement instruction specifically by: generating the movement instruction according to the obstacle information of the target obstacle.
  • The movement instruction is used to control the robot to move to the side of the target obstacle facing away from the external object, so as to evade an external object emitted from the direction indicated by the orientation information that would trigger the robot.
  • The external object includes a moving object and light.
  • the robot avoidance control device further includes:
  • a detecting unit 305 configured to detect, via a preset vibration sensor, whether the robot receives a trigger from the moving object, and to detect, via a preset photosensitive sensor, whether the robot receives a trigger from the light;
  • an adjusting unit 306 configured to reduce the life value of the robot, when the robot receives a trigger from the external object, according to the position at which the robot was triggered.
  • The adjusting unit 306 is further configured to increase the life value of the robot if the life value is not zero and the robot is not triggered again by the external object within a first preset duration after being triggered.
  • The adjusting unit 306 is further configured to control the robot to enter a stationary state if the life value of the robot is zero,
  • and, when the robot has been in the stationary state for longer than a second preset duration, to reset the life value of the robot to the initial reference life value and control the robot to resume moving.
  • the robot avoidance control device further includes:
  • a second acquiring unit 307 configured to perform obstacle recognition on the environment in which the robot is located and acquire obstacle information of that environment;
  • a construction unit 308 configured to build an environment map of the environment in which the robot is located in real time according to the obstacle information;
  • the obstacle information includes one or more of distance information of the obstacle from the robot, orientation information of the obstacle, shape information of the obstacle, and size information of the obstacle.
  • the robot avoidance control device further includes:
  • the signal issuing unit 309 is configured to, when the robot receives the trigger of the external object, control the robot to issue a trigger signal, where the trigger signal includes a flash, a sound, or an action.
  • When the robot receives a trigger from an external object, the first acquiring unit 301 is first triggered to acquire the position at which the robot was triggered, and the first determining unit 302 is triggered to determine,
  • from that position, the orientation information of the external object; the second determining unit 303 is then triggered to determine an avoidance movement strategy from the orientation information and the pre-acquired environment map, the strategy being determined from the orientation information and the environment map and used to control the robot to move within the environment map so as to evade an external object that would trigger the robot.
  • Finally, the instruction generating unit 304 is triggered to generate a movement instruction according to the avoidance movement strategy, the movement instruction being used to control the robot to move.
  • In this way, the robot can be controlled to effectively evade external objects.
  • FIG. 4 is a schematic structural diagram of a robot according to an embodiment of the present invention.
  • the robot described in the embodiment of the present invention includes a processor 401, a user interface 402, a communication interface 403, and a memory 404.
  • the processor 401, the user interface 402, the communication interface 403, and the memory 404 can be connected by a bus or other means.
  • the embodiment of the present invention takes a bus connection as an example.
  • The processor 401 (or central processing unit, CPU) is the computing and control core of the robot; it can parse various instructions in the robot and process various kinds of robot data. For example, the CPU can parse an on/off instruction sent by the user to the robot and control the robot to perform the corresponding operation; it can also transfer various kinds of interaction data between the internal structures of the robot, and so on.
  • The user interface 402 is the medium through which the user and the robot interact and exchange information; concretely, it may include a display for output, a keyboard for input, and the like. It should be noted that the keyboard may be a physical keyboard, a touch-screen virtual keyboard, or a keyboard combining physical and touch-screen virtual keys.
  • The communication interface 403 may optionally include a standard wired interface or a wireless interface (such as Wi-Fi or a mobile communication interface) and may be used to transmit and receive data under the control of the processor 401; the communication interface 403 may also be used for the transmission and exchange of signaling or instructions inside the robot.
  • The memory 404 is the memory device of the robot, used to store programs and data. It can be understood that the memory 404 here can include both the robot's built-in memory and, of course, extended memory supported by the robot.
  • The memory 404 provides storage space that stores the robot's operating system, which may include, but is not limited to, an Android system, an iOS system, a Windows Phone system, and the like; the present invention is not limited in this respect.
  • the processor 401 performs the following operations by running the executable program code in the memory 404:
  • The avoidance movement strategy is determined according to the orientation information and the environment map and is used to control the robot to move within the environment map to evade an external object emitted from the direction indicated by the orientation information that would trigger the robot;
  • a movement instruction is generated according to the avoidance movement strategy, the movement instruction being used to control the robot to move.
  • The processor 401 determines the avoidance movement strategy according to the orientation information of the external object and the environment map of the environment in which the robot is located specifically by: predicting the location area within the environment map that the external object will reach, and determining target orientation information from that area and the environment map.
  • The movement instruction is then used to control the robot to move within the environment map according to the target orientation information, so as to evade an external object emitted from the direction indicated by the orientation information that would trigger the robot.
  • Alternatively, the processor 401 determines the avoidance movement strategy specifically by: predicting the location area within the environment map that the external object will reach, and determining a target obstacle from that area, the environment map, and the pre-acquired obstacle information.
  • The movement instruction is then used to control the robot to move to the side of the target obstacle facing away from the external object, so as to evade an external object emitted from the direction indicated by the orientation information that would trigger the robot.
  • the external object includes a moving object and light
  • the processor 401 is further configured to:
  • the life value of the robot is reduced and adjusted according to the position triggered by the external object.
  • After the processor 401 reduces the life value of the robot according to the position triggered by the external object, the processor 401 is further configured to:
  • increase the life value of the robot if the life value is not zero and the robot is not triggered again by the external object within a first preset duration after being triggered.
  • After the processor 401 reduces the life value of the robot according to the position triggered by the external object, the processor 401 is further configured to: control the robot to enter a stationary state if the life value is zero; and,
  • when the robot has been in the stationary state for longer than a second preset duration, reset the life value of the robot to the initial reference life value and control the robot to resume moving.
  • the processor 401 is further configured to:
  • the obstacle information includes one or more of distance information of the obstacle from the robot, orientation information of the obstacle, shape information of the obstacle, and size information of the obstacle.
  • the processor 401 is further configured to:
  • When the robot receives a trigger from the external object, the robot is controlled to emit a trigger signal, the trigger signal including a flashing light, a sound, or an action.
  • The processor 401, user interface 402, communication interface 403, and memory 404 described in the embodiments of the present invention may carry out the implementation of the robot described in the robot avoidance control method provided by the embodiments of the present invention,
  • and may also carry out the implementation described for the robot avoidance control apparatus of FIG. 3, which is not described again here.
  • When the robot receives a trigger from an external object, the processor 401 first acquires the position at which the robot was triggered and determines the orientation information of the external object from that position; it then determines,
  • according to the orientation information of the external object and the pre-acquired environment map of the environment in which the robot is located, an avoidance movement strategy, which is determined from the orientation information and the environment map and used to control the robot to move within the environment map so as to evade an external object emitted from the direction indicated by the orientation information that would trigger the robot.
  • the embodiment of the present invention further provides a computer readable storage medium, wherein the computer readable storage medium stores instructions, and when executed on a computer, causes the computer to execute the robot avoidance control method described in the foregoing method embodiment.
  • the embodiment of the invention further provides a computer program product comprising instructions, which when executed on a computer, causes the computer to execute the robot avoidance control method described in the above method embodiment.
  • The program may be stored in a computer-readable storage medium; the storage medium may include a flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.

Abstract

A robot avoidance control method and related apparatus, the method comprising: when a robot receives a trigger from an external object, acquiring the position at which the robot was triggered by the external object (S101); determining orientation information of the external object according to the position at which the robot was triggered by the external object (S102); determining an avoidance movement strategy according to the orientation information of the external object and a pre-acquired environment map of the environment in which the robot is located (S103), the avoidance movement strategy being determined from the orientation information and the environment map and used to control the robot to move within the environment map so as to evade an external object emitted from the direction indicated by the orientation information that would trigger the robot; and generating a movement instruction according to the avoidance movement strategy, the movement instruction being used to control the robot to move (S104). This method makes it possible to control the robot to effectively evade external objects.

Description

Robot Avoidance Control Method and Related Apparatus. Technical Field
The present invention relates to the technical field of data processing, and in particular to a robot avoidance control method and related apparatus.
Background
With the continuous development of artificial intelligence technology, intelligent robots have emerged. At present, intelligent robots are widely used in various fields, such as smart homes, services, and smart gaming. In practical applications, a robot may encounter obstacles, moving objects, and other matter while moving (for example, walking). How to make a robot automatically evade obstacles, moving objects, and other matter while it moves is a current research focus.
Summary of the Invention
Embodiments of the present invention provide a robot avoidance control method and related apparatus, which can control a robot to effectively evade external objects.
A first aspect of the embodiments of the present invention provides a robot avoidance control method, the method comprising:
when a robot receives a trigger from an external object, acquiring the position at which the robot was triggered by the external object;
determining orientation information of the external object according to the position at which the robot was triggered by the external object;
determining an avoidance movement strategy according to the orientation information of the external object and a pre-acquired environment map of the environment in which the robot is located, the avoidance movement strategy being determined from the orientation information and the environment map and used to control the robot to move within the environment map so as to evade an external object emitted from the direction indicated by the orientation information that would trigger the robot;
generating a movement instruction according to the avoidance movement strategy, the movement instruction being used to control the robot to move.
A second aspect of the embodiments of the present invention provides a robot avoidance control apparatus, the apparatus comprising:
a first acquiring unit, configured to acquire, when a robot receives a trigger from an external object, the position at which the robot was triggered by the external object;
a first determining unit, configured to determine orientation information of the external object according to the position at which the robot was triggered by the external object;
a second determining unit, configured to determine an avoidance movement strategy according to the orientation information of the external object and a pre-acquired environment map of the environment in which the robot is located, the avoidance movement strategy being determined from the orientation information and the environment map and used to control the robot to move within the environment map so as to evade an external object emitted from the direction indicated by the orientation information that would trigger the robot;
an instruction generating unit, configured to generate a movement instruction according to the avoidance movement strategy, the movement instruction being used to control the robot to move.
A third aspect of the embodiments of the present invention provides a robot, comprising a processor and a memory, the memory storing executable program code, the processor being configured to call the executable program code to execute the robot avoidance control method of the first aspect.
A fourth aspect of the embodiments of the present invention provides a storage medium storing instructions which, when run on a computer, cause the computer to execute the robot avoidance control method of the first aspect.
In the embodiments of the present invention, when a robot receives a trigger from an external object, the position at which the robot was triggered is first acquired, and the orientation information of the external object is determined according to that position; an avoidance movement strategy is then determined according to the orientation information of the external object and a pre-acquired environment map of the environment in which the robot is located; finally, a movement instruction is generated according to the avoidance movement strategy and used to control the robot to move, so that the robot can be controlled to effectively evade the external object.
Brief Description of the Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings required in the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and a person of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic flowchart of a robot avoidance control method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a robot application scenario according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a robot avoidance control apparatus according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a robot according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings in the embodiments of the present invention.
Referring to FIG. 1, FIG. 1 is a schematic flowchart of a robot avoidance control method according to an embodiment of the present invention. The robot avoidance control method described in this embodiment of the present invention includes:
S101. When a robot receives a trigger from an external object, the robot acquires the position at which it was triggered by the external object.
In the embodiments of the present invention, the external object may be a moving object (for example, a BB pellet or a water bead) or light (for example, a laser). The external object may be a projectile emitted by a launching device (for example, a BB gun, a water-bead gun, or a laser emitter), an object thrown by a user (for example, a coin or a stone), or an object falling naturally in the environment of the robot (for example, a water drop). It should be noted that the external object may also be another type of object or matter and may be in another state of motion; the embodiments of the present invention are not limited in this respect.
Receiving a trigger from an external object may mean that the robot is struck by the external object, for example by a BB pellet fired from a BB gun, or that the robot is hit by the external object, for example by a laser emitted from a laser emitter. Specifically, the robot may detect whether it has been struck by a moving object via a preset vibration sensor, and whether it has been hit by light via a preset photosensitive sensor; if the robot detects that it has been struck by a moving object or hit by light, it determines that it has received a trigger from an external object. Further, when the robot receives the trigger, it acquires the position at which it was triggered, that is, the position at which it was struck by the moving object or hit by the light. It should be noted that the robot is preset with at least one vibration sensor and/or at least one photosensitive sensor, arranged on at least one body part of the robot (for example, the head, arms, or torso). When the robot is provided with both vibration sensors and photosensitive sensors, they may be arranged at the same position or at different positions on the robot. The robot may also be triggered by an external object in other ways and may detect such triggers via other types of preset sensors.
In some feasible implementations, the robot is preset with an initial reference life value N (for example, 12), that is, the total preset life value of the robot, and with a mapping between life values and body parts of the robot; for example, the head corresponds to a life value n1 (for example, 3), the torso to a life value n2 (for example, 2), and the arms to a life value n3 (for example, 1). When the robot receives a trigger from an external object, its life value is reduced according to the position at which it was triggered. For example, when the triggered position is on the head, the robot subtracts n1 from the initial reference life value N to obtain the reduced life value N1 and sets its current life value to N1. The cases where the head is triggered again, or other body parts are triggered, can be deduced by analogy and are not described again here.
In some feasible implementations, after reducing the life value according to the triggered position, if the current life value is not zero, the robot starts timing; if the robot is not triggered again by an external object within a first preset duration (for example, 30 s) after being triggered, that is, it is not struck by a moving object or hit by light again within that duration, the current life value is increased, for example by one. Further, if the robot is still not triggered again within the first preset duration after this increase, the current life value is increased again. In other words, provided that the robot is not triggered again by an external object, its current life value is increased once every first preset duration until it equals the initial reference life value.
In some feasible implementations, after the life value is reduced according to the triggered position, if the current life value is zero, the robot is controlled to enter a stationary state, that is, to stop moving, and timing starts; when the timing result shows that the robot has been in the stationary state for longer than a second preset duration (for example, 1 min), the current life value is reset to the initial reference life value N and the robot is controlled to resume moving within its environment.
In some feasible implementations, when the robot receives a trigger from an external object, it emits a trigger signal, or alarm signal, to notify the user that the robot has been hit by the external object or struck by a moving object. The robot may emit the trigger signal by flashing a light, by emitting a preset specific sound, or by performing a preset specific action (for example, vibrating); the robot may also emit the trigger signal in other ways, and the embodiments of the present invention are not limited in this respect.
S102. The robot determines the orientation information of the external object according to the position at which it was triggered by the external object.
In the embodiments of the present invention, after acquiring the position at which it was struck or hit by the external object, the robot determines the orientation information of the external object according to that position; the orientation information includes the direction information of the external object.
In some feasible implementations, when the robot receives an impact from an external object, it determines that the external object is a moving object, and first acquires the position at which it was struck as well as the pressure information generated by the impact, the pressure information including the magnitude of the pressure value and the pressure direction; the robot then determines the direction of the external object from the struck position, and analyzes the acquired pressure magnitude and direction to determine the position information of the moving object, that is, to estimate the area from which the moving object was emitted.
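As a rough sketch of this inference (the 2-D frame and the assumed travel distance are illustrative assumptions): back-projecting from the struck position along the measured impact direction estimates where the moving object was launched.

```python
import math

def estimate_source(hit_point, pressure_dir, travel_dist):
    """Estimate the launch position by back-projecting the impact.

    hit_point: (x, y) where the robot was struck, in the map frame.
    pressure_dir: (dx, dy) direction in which the impact pushed the robot.
    travel_dist: assumed distance the projectile travelled.
    """
    norm = math.hypot(*pressure_dir)
    ux, uy = pressure_dir[0] / norm, pressure_dir[1] / norm
    # The projectile came from the direction opposite to the push.
    return (hit_point[0] - ux * travel_dist, hit_point[1] - uy * travel_dist)
```

In practice the pressure magnitude would feed the distance estimate; here the distance is passed in directly to keep the geometry visible.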
In some feasible implementations, the robot may capture an image of its environment via a preset camera at preset intervals (for example, every 2 s or 3 s) and analyze multiple images captured at different times to obtain the orientation information of the external object, including its direction information and position information. It should be noted that the robot may have one or more preset cameras, which may be monocular, binocular, or multi-lens; the embodiments of the present invention are not limited in this respect.
S103. The robot determines an avoidance movement strategy according to the orientation information of the external object and a pre-acquired environment map of the environment in which the robot is located.
In the embodiments of the present invention, the robot is provided with an environment detecting device, which may be arranged on multiple parts of the robot, in particular on the head or other rotatable parts. The environment detecting device may be, for example, a depth camera, which may be monocular or multi-lens, or a lidar; the embodiments of the present invention are not limited in this respect. The environment map of the environment in which the robot is located is acquired by the robot in advance. Specifically, the robot first performs obstacle recognition on its environment, or on the surroundings of its movement path, via the preset environment detecting device to acquire obstacle information of the environment; the obstacle information includes one or more of the distance between an obstacle and the robot, the orientation of the obstacle, the shape of the obstacle, and the size of the obstacle. The robot then builds an environment map in real time from the obstacle information. Specifically, the robot may use laser Simultaneous Localization and Mapping (SLAM) to build a two-dimensional environment map of its environment in real time from the obstacle information, or use visual SLAM to build a three-dimensional environment map. It should be noted that the robot may also use other techniques to build the environment map; the embodiments of the present invention are not limited in this respect. After the environment map is acquired in advance, the robot's movement route can be planned reasonably according to it, so that obstacles can be effectively avoided when the robot moves within the environment, protecting the robot.
In the embodiments of the present invention, the avoidance movement strategy is determined according to the orientation information of the external object and the environment map of the environment in which the robot is located, and is used to control the robot to move within the environment corresponding to the environment map so as to evade an external object emitted from the direction indicated by the orientation information that would trigger the robot.
S104. The robot generates a movement instruction according to the avoidance movement strategy, the movement instruction being used to control the robot to move.
In some feasible implementations, the robot determines the avoidance movement strategy according to the orientation information of the external object and the pre-acquired environment map as follows: the robot first predicts, from the orientation information of the external object, the location area within the environment map that the external object will reach when emitted again; it then determines target orientation information from the predicted location area and the pre-acquired environment map, the target orientation information including a target direction and a target position. The target direction may be the direction opposite to the one indicated by the orientation information of the external object, or a direction at a preset angle (for example, 45 or 90 degrees) to it. The target position may be a position within the environment map at which the probability of the robot being triggered again by the external object is low. Specifically, the target position may be the position, within the location area that the external object will reach when emitted again, where the probability of the external object arriving is lowest, that is, the position where the robot is least likely to be triggered again; the target position may also be determined from the target direction as a position separated by a preset distance (for example, 0.5 m) from the location area that the external object will reach when emitted again, that is, a position outside that location area.
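The target-direction choice described above — directly opposite the threat, or offset by a preset angle such as 45 or 90 degrees — reduces to rotating the threat bearing; the compass-bearing convention in this sketch is an assumption.

```python
def target_direction(threat_bearing_deg, offset_deg=180.0):
    """Bearing (degrees) the robot should move along, given the threat bearing.

    offset_deg = 180 flees straight away from the external object;
    90 or 45 moves at a preset angle to the line of fire instead.
    """
    return (threat_bearing_deg + offset_deg) % 360.0
```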
Further, the robot generates the movement instruction according to the avoidance movement strategy as follows: the robot first plans its avoidance route, that is, its movement route, according to the determined target orientation information. The avoidance route may be the shortest route from the robot's current position to the position indicated by the target orientation information, the route that takes the least time, or the route along which the robot is least likely to be triggered again by the external object, among others. The robot then generates a movement instruction according to the planned avoidance route; the movement instruction is used to control the robot to move within its environment along the avoidance route to the position indicated by the target orientation information, so as to evade an external object emitted from the direction indicated by the orientation information that would trigger the robot.
In some feasible implementations, the robot determines the avoidance movement strategy according to the orientation information of the external object and the pre-acquired environment map as follows: the robot first predicts, from the orientation information of the external object, the location area within the environment map that the external object will reach when emitted again; it then determines a target obstacle from the predicted location area, the pre-acquired environment map, and the pre-acquired obstacle information of the environment. The target obstacle may be an obstacle at a preset distance from the location area that the external object will reach when emitted again, that is, one located outside that location area; the target obstacle may also be an obstacle whose side facing the external object can block the external object, that is, the predicted location area lies on the side of the target obstacle facing the external object, while the side of the target obstacle facing away from the external object lies outside that location area.
Further, the robot generates the movement instruction according to the avoidance movement strategy as follows: the robot first plans its avoidance route according to the obstacle information of the determined target obstacle. The avoidance route may be the shortest route from the robot's current position to the position on the side of the target obstacle facing away from the external object, the route that takes the least time, or the route along which the robot is least likely to be triggered again by the external object, among others. The robot then generates a movement instruction according to the planned avoidance route; the movement instruction is used to control the robot to move within its environment along the avoidance route to the side of the target obstacle facing away from the external object, so as to evade an external object emitted from the direction indicated by the orientation information that would trigger the robot.
In some feasible implementations, the movement instruction is also used to control the speed and/or direction of the robot as it moves along the avoidance route. Specifically, the movement instruction may control the robot to continuously adjust its moving speed, or its moving direction, while moving along the avoidance route. For example, when the robot moves along the avoidance route according to the movement instruction, its speed may be increased every first preset interval to a first speed and decreased every second preset interval to a second speed. It should be noted that the first and second preset intervals may be equal or different; the first speed is greater than the second speed; when the robot's speed is increased again, it may be greater than, less than, or equal to the first speed, that is, the value of the first speed may change continuously; likewise, the value of the second speed, and the values of the first and second preset intervals, may also change continuously, which is not described again here.
Further, the robot may be controlled to move to the left (or forward) every third preset interval, covering a first distance, and to the right (or backward) every fourth preset interval, covering a second distance. It should be noted that the third and fourth preset intervals may be equal or different, as may the first and second distances. When the robot is next controlled to move to the left (or forward), its movement distance may be greater than, less than, or equal to the first distance, that is, the value of the first distance may change continuously; likewise, the value of the second distance, and the values of the third and fourth preset intervals, may also change continuously, which is not described again here. In this way, the robot can be controlled to continuously vary its speed and/or direction while moving along the avoidance route, further reducing the probability of being triggered again by the external object while moving along that route.
In some feasible implementations, the robot may determine multiple pieces of target orientation information or multiple target obstacles. The robot first analyzes the avoidance routes corresponding to each piece of target orientation information or each target obstacle, predicting for each route the probability of being struck or hit by the external object; it then selects the route with the lowest such probability as the target avoidance route and generates a movement instruction accordingly, the movement instruction being used to control the robot to move within its environment along the target avoidance route to its end point, so as to evade an external object emitted from the direction indicated by the orientation information that would trigger the robot.
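Selecting the target avoidance route then amounts to scoring each candidate with its predicted hit probability and taking the minimum; the scoring callable here is a placeholder for whatever predictor the robot actually uses.

```python
def pick_target_route(routes, hit_probability):
    """Return the candidate route least likely to be struck.

    routes: list of candidate avoidance routes (any representation).
    hit_probability: callable mapping a route to its predicted
    probability of being struck or hit by the external object.
    """
    return min(routes, key=hit_probability)
```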
In some feasible implementations, the robot's moving speed is related to its current life value. The speed may be positively correlated with the current life value, that is, the greater the current life value, the faster the robot moves, and vice versa; or it may be negatively correlated, that is, the greater the current life value, the slower the robot moves, and vice versa; the embodiments of the present invention are not limited in this respect.
In the above manner, after being triggered by an external object, the robot can determine the orientation information of the external object and determine an avoidance movement strategy from that orientation information and the pre-acquired environment map of the environment in which the robot is located; the avoidance movement strategy indicates that the robot can use obstacles in its environment, or a direction different from the one indicated by the orientation information of the external object, to evade the external object, so that the robot can be controlled to evade external objects effectively.
To better illustrate the technical solutions in the embodiments of the present invention, an example is described below. Referring also to FIG. 2, FIG. 2 is a schematic diagram of a robot application scenario according to an embodiment of the present invention. As shown in FIG. 2, the robot is applied in a real-world game; the environment of the robot is the user's home (or office) and includes obstacles such as stools, tables, cabinets, sofas, and walls, while the user holds a launching device. The robot moves on the ground by means of movement modules (for example, wheels or leg-like structures); while moving, it detects obstacles in the surroundings of its path via an environment detecting device (for example, a depth camera or a lidar), so as to determine where obstacles block passage and where passage is clear. The robot moves in a passable direction, continues to detect obstacles around its path in real time, and acquires obstacle information including one or more of the distance between an obstacle and the robot, the orientation of the obstacle, the shape of the obstacle, and the size of the obstacle. From this obstacle information, the robot builds an environment map of its environment in real time, thereby acquiring in advance an environment map in which the position information of the obstacles is recorded.
During the game, the user shoots at the robot with a hand-held launching device that can emit a laser, fire BB pellets, or fire water beads. The robot is provided with a photosensitive sensor and/or a vibration sensor. When a laser hits the robot, the photosensitive sensor senses it, the reading is collected by the robot, and the robot determines that it has been hit by the laser; an object such as a BB pellet or water bead striking the robot produces a brief, strong vibration, which the vibration sensor senses and the robot collects, allowing the robot to determine that it has been struck. After detecting that it has been hit by a laser, a BB pellet, or a water bead, the robot flashes a light, emits a sound, or vibrates to notify the user that it has been hit.
After the robot determines that it has been hit, it updates and records its current life value according to the number and positions of the hits. When the number of hits reaches a preset number, that is, when the current life value becomes zero, the robot is controlled to enter a stationary state and stop moving. For example, suppose the robot's total life value is 3 and each hit reduces the current life value by one; after being hit 3 times the robot enters the stationary state. After updating the current life value, if it is not zero, the robot is controlled to enter an escape mode. In escape mode, the robot plans a movement route that escapes the shooting and generates a movement instruction for controlling the robot to move along that route to evade the laser, BB pellets, or water beads. For example, the robot determines the orientation information of the laser, BB pellet, or water bead from the position at which it was hit, analyzes the pre-acquired environment map, selects a passable movement route in a direction away from the direction indicated by the orientation information, and controls the robot to move along it; alternatively, the robot searches the environment map for obstacles, and if an obstacle is found that can block the laser, BB pellets, or water beads, that obstacle is determined as the target obstacle and the robot is moved to the side of it that shelters it from the shooting, so that the robot can be controlled to effectively avoid the laser, BB pellets, or water beads.
Further, the robot's moving speed in escape mode is related to its life value: when the life value is relatively high, the robot moves faster; otherwise it moves more slowly. In addition, the life value can be restored over time: after being hit by a laser, BB pellet, or water bead, if the robot is not hit again for a certain period while escaping, its life value gradually recovers; for example, if the robot is not hit again within 1 minute, its current life value is increased by 1. If the robot enters the stationary state, then after it has stopped moving for a certain period, its life value is restored to the initial reference life value and it is controlled to resume moving. In this way, on the one hand, interaction between the robot and the user in the game is realized; on the other hand, robot-related games can evolve from augmented-reality games to real-world games, effectively improving the user experience and increasing the variety and fun of the game.
It should be noted that the robot in the embodiments of the present invention may also be a robot with a flight function; the method by which such a robot evades external objects during flight may also refer to the above description and is not described again here.
In the embodiments of the present invention, when the robot receives a trigger from an external object, the position at which it was triggered is first acquired and the orientation information of the external object is determined from that position; an avoidance movement strategy is then determined from the orientation information and the pre-acquired environment map of the environment in which the robot is located, the strategy being determined from the orientation information and the environment map and used to control the robot to move within the environment map so as to evade an external object emitted from the direction indicated by the orientation information that would trigger the robot; finally, a movement instruction is generated according to the avoidance movement strategy and used to control the robot to move, so that the robot can be controlled to effectively evade external objects.
Referring to FIG. 3, FIG. 3 is a schematic structural diagram of a robot avoidance control apparatus provided by an embodiment of the present invention. The robot avoidance control apparatus described in this embodiment corresponds to the robot described above, and the robot avoidance control apparatus includes:

a first acquiring unit 301, configured to acquire, when the robot receives a trigger from an external object, the position at which the robot was triggered by the external object;

a first determining unit 302, configured to determine orientation information of the external object according to the position at which the robot was triggered by the external object;

a second determining unit 303, configured to determine an evasion movement strategy according to the orientation information of the external object and a pre-acquired environment map of the environment in which the robot is located, the evasion movement strategy being determined from the orientation information and the environment map and used to control the robot to move within the environment map so as to evade an external object, emitted from the direction indicated by the orientation information, that would trigger the robot; and

an instruction generating unit 304, configured to generate a movement instruction according to the evasion movement strategy, the movement instruction being used to control the robot to move.
In some feasible implementations, the specific manner in which the second determining unit 303 determines the evasion movement strategy according to the orientation information of the external object and the pre-acquired environment map of the environment in which the robot is located is:

predicting, according to the orientation information of the external object, a position region within the environment map that the external object will reach; and

determining target orientation information according to the position region and the environment map;

and the specific manner in which the instruction generating unit 304 generates the movement instruction according to the evasion movement strategy is:

generating a movement instruction according to the target orientation information,

where the movement instruction is used to control the robot to move within the environment map according to the target orientation information, so as to evade an external object, emitted from the direction indicated by the orientation information, that would trigger the robot.
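The predict-then-redirect implementation above can be sketched on a grid: mark the cells the incoming object is expected to sweep through, then pick a neighboring cell outside that region. The straight-line threat model, grid layout, and neighborhood order are illustrative assumptions.

```python
import math

def predict_region(robot_cell, bearing_rad, steps=5):
    """Cells along the object's assumed straight-line path through the
    robot's cell, extended `steps` cells in both directions."""
    ci, cj = robot_cell
    di = round(math.cos(bearing_rad))
    dj = round(math.sin(bearing_rad))
    return {(ci + k * di, cj + k * dj) for k in range(-steps, steps + 1)}

def evasive_cell(robot_cell, region):
    """First neighboring cell outside the predicted region; the direction
    toward it plays the role of the target orientation information."""
    ci, cj = robot_cell
    for di, dj in ((0, 1), (0, -1), (1, 0), (-1, 0)):
        if (ci + di, cj + dj) not in region:
            return (ci + di, cj + dj)
    return robot_cell  # fully surrounded; stay put
```

A real implementation would additionally reject candidate cells that the environment map marks as occupied.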
In some feasible implementations, the specific manner in which the second determining unit 303 determines the evasion movement strategy according to the orientation information of the external object and the pre-acquired environment map of the environment in which the robot is located is:

predicting, according to the orientation information of the external object, a position region within the environment map that the external object will reach; and

determining a target obstacle according to the position region, the environment map, and pre-acquired obstacle information of the environment in which the robot is located;

and the specific manner in which the instruction generating unit 304 generates the movement instruction according to the evasion movement strategy is:

generating a movement instruction according to the obstacle information of the target obstacle,

where the movement instruction is used to control the robot to move to the side of the target obstacle facing away from the external object, so as to evade an external object, emitted from the direction indicated by the orientation information, that would trigger the robot.
In some feasible implementations, the external object includes a moving object and light, and the robot avoidance control apparatus further includes:

a detection unit 305, configured to detect, by means of a preset vibration sensor, whether the robot receives a trigger from the moving object, and to detect, by means of a preset photosensitive sensor, whether the robot receives a trigger from the light; and

an adjusting unit 306, configured to decrease the robot's life value according to the position at which the robot was triggered by the external object, when the robot receives a trigger from the external object.
In some feasible implementations, the adjusting unit 306 is further configured to increase the robot's life value if the robot's life value is not zero and the robot is not triggered again by the external object within a first preset duration after being triggered by the external object.
In some feasible implementations, the adjusting unit 306 is further configured to control the robot to enter a static state if the robot's life value is zero; and,

when the duration for which the robot has been in the static state exceeds a second preset duration, to reset the robot's life value to an initial reference life value and control the robot to resume moving.
In some feasible implementations, the robot avoidance control apparatus further includes:

a second acquiring unit 307, configured to perform obstacle recognition on the environment in which the robot is located and acquire obstacle information of that environment; and

a construction unit 308, configured to construct, in real time, the environment map of the environment in which the robot is located according to the obstacle information,

where the obstacle information includes one or more of: distance information between an obstacle and the robot, orientation information of an obstacle, shape information of an obstacle, and size information of an obstacle.
In some feasible implementations, the robot avoidance control apparatus further includes:

a signal emitting unit 309, configured to control the robot to emit a trigger signal when the robot receives a trigger from the external object, the trigger signal including a flashing light, a sound, or a motion.
It can be understood that the functions of the functional units of the robot avoidance control apparatus of the embodiments of the present invention can be specifically implemented according to the methods of the foregoing method embodiments; for the specific implementation process, reference may be made to the relevant description of the foregoing method embodiments, and details are not repeated here.
In the embodiments of the present invention, when the robot receives a trigger from an external object, the first acquiring unit 301 is first triggered to acquire the position at which the robot was triggered by the external object, and the first determining unit 302 is triggered to determine the external object's orientation information from that position. The second determining unit 303 is then triggered to determine an evasion movement strategy according to the orientation information and a pre-acquired environment map of the environment in which the robot is located; the evasion movement strategy is determined from the orientation information and the environment map and is used to control the robot to move within the environment map so as to evade an external object, emitted from the direction indicated by the orientation information, that would trigger the robot. Finally, the instruction generating unit 304 is triggered to generate a movement instruction according to the evasion movement strategy, the movement instruction being used to control the robot to move, so that the robot can be controlled to evade the external object effectively.
Referring to FIG. 4, FIG. 4 is a schematic structural diagram of a robot provided by an embodiment of the present invention. The robot described in this embodiment includes a processor 401, a user interface 402, a communication interface 403, and a memory 404. The processor 401, user interface 402, communication interface 403, and memory 404 may be connected by a bus or in other ways; the embodiments of the present invention take a bus connection as an example.
The processor 401 (or central processing unit, CPU) is the computing core and control core of the robot; it can parse various instructions within the robot and process various kinds of robot data. For example, the CPU can parse a power-on/off instruction sent to the robot by the user and control the robot to perform the power-on/off operation; as another example, the CPU can transfer various interaction data between the robot's internal components, and so on. The user interface 402 is the medium through which the user interacts and exchanges information with the robot; it may specifically include a display for output and a keyboard for input, where the keyboard may be a physical keyboard, a touchscreen virtual keyboard, or a combination of the two. The communication interface 403 may optionally include a standard wired interface or a wireless interface (such as Wi-Fi or a mobile communication interface) and, under the control of the processor 401, can be used to send and receive data; the communication interface 403 can also be used for the transmission and exchange of signaling or instructions within the robot. The memory 404 is the storage device of the robot, used to store programs and data. It can be understood that the memory 404 here may include the robot's built-in memory and, of course, may also include any extended memory the robot supports. The memory 404 provides storage space that stores the robot's operating system, which may include, but is not limited to, an Android system, an iOS system, a Windows Phone system, and so on; the present invention places no limitation on this.
In the embodiments of the present invention, the processor 401 performs the following operations by running executable program code in the memory 404:

when the robot receives a trigger from an external object, acquiring the position at which the robot was triggered by the external object;

determining orientation information of the external object according to the position at which the robot was triggered by the external object;

determining an evasion movement strategy according to the orientation information of the external object and a pre-acquired environment map of the environment in which the robot is located, the evasion movement strategy being determined from the orientation information and the environment map and used to control the robot to move within the environment map so as to evade an external object, emitted from the direction indicated by the orientation information, that would trigger the robot; and

generating a movement instruction according to the evasion movement strategy, the movement instruction being used to control the robot to move.
In some feasible implementations, the specific manner in which the processor 401 determines the evasion movement strategy according to the orientation information of the external object and the pre-acquired environment map of the environment in which the robot is located is:

predicting, according to the orientation information of the external object, a position region within the environment map that the external object will reach; and

determining target orientation information according to the position region and the environment map.

The specific manner in which the processor 401 generates the movement instruction according to the evasion movement strategy is:

generating a movement instruction according to the target orientation information,

where the movement instruction is used to control the robot to move within the environment map according to the target orientation information, so as to evade an external object, emitted from the direction indicated by the orientation information, that would trigger the robot.
In some feasible implementations, the specific manner in which the processor 401 determines the evasion movement strategy according to the orientation information of the external object and the pre-acquired environment map of the environment in which the robot is located is:

predicting, according to the orientation information of the external object, a position region within the environment map that the external object will reach; and

determining a target obstacle according to the position region, the environment map, and pre-acquired obstacle information of the environment in which the robot is located.

The specific manner in which the processor 401 generates the movement instruction according to the evasion movement strategy is:

generating a movement instruction according to the obstacle information of the target obstacle,

where the movement instruction is used to control the robot to move to the side of the target obstacle facing away from the external object, so as to evade an external object, emitted from the direction indicated by the orientation information, that would trigger the robot.
In some feasible implementations, the external object includes a moving object and light, and the processor 401 is further configured to:

detect, by means of a preset vibration sensor, whether the robot receives a trigger from the moving object, and detect, by means of a preset photosensitive sensor, whether the robot receives a trigger from the light; and

when the robot receives a trigger from the external object, decrease the robot's life value according to the position at which the robot was triggered by the external object.
In some feasible implementations, after decreasing the robot's life value according to the position at which the robot was triggered by the external object, the processor 401 is further configured to:

increase the robot's life value if the robot's life value is not zero and the robot is not triggered again by the external object within a first preset duration after being triggered by the external object.
In some feasible implementations, after decreasing the robot's life value according to the position at which the robot was triggered by the external object, the processor 401 is further configured to:

control the robot to enter a static state if the robot's life value is zero; and,

when the duration for which the robot has been in the static state exceeds a second preset duration, reset the robot's life value to an initial reference life value and control the robot to resume moving.
In some feasible implementations, the processor 401 is further configured to:

perform obstacle recognition on the environment in which the robot is located and acquire obstacle information of that environment; and

construct, in real time, the environment map of the environment in which the robot is located according to the obstacle information,

where the obstacle information includes one or more of: distance information between an obstacle and the robot, orientation information of an obstacle, shape information of an obstacle, and size information of an obstacle.
In some feasible implementations, the processor 401 is further configured to:

when the robot receives a trigger from the external object, control the robot to emit a trigger signal, the trigger signal including a flashing light, a sound, or a motion.
In specific implementations, the processor 401, user interface 402, communication interface 403, and memory 404 described in the embodiments of the present invention may execute the implementation described for the robot in the robot avoidance control method provided by the embodiments of the present invention, and may also execute the implementation described for the robot avoidance control apparatus of FIG. 3 provided by the embodiments of the present invention; details are not repeated here.
In the embodiments of the present invention, when the robot receives a trigger from an external object, the processor 401 first acquires the position at which the robot was triggered by the external object and determines the external object's orientation information from that position. It then determines an evasion movement strategy according to the orientation information and a pre-acquired environment map of the environment in which the robot is located; the evasion movement strategy is determined from the orientation information and the environment map and is used to control the robot to move within the environment map so as to evade an external object, emitted from the direction indicated by the orientation information, that would trigger the robot. Finally, it generates a movement instruction according to the evasion movement strategy, the movement instruction being used to control the robot to move, so that the robot can be controlled to evade the external object effectively.
An embodiment of the present invention further provides a computer-readable storage medium storing instructions that, when run on a computer, cause the computer to execute the robot avoidance control method described in the foregoing method embodiments.

An embodiment of the present invention further provides a computer program product containing instructions that, when run on a computer, cause the computer to execute the robot avoidance control method described in the foregoing method embodiments.
It should be noted that, for brevity of description, each of the foregoing method embodiments is expressed as a series of action combinations; however, those skilled in the art should appreciate that the present invention is not limited by the described order of actions, because according to the present invention some steps may be performed in other orders or simultaneously. Furthermore, those skilled in the art should also appreciate that the embodiments described in the specification are all preferred embodiments, and the actions and units involved are not necessarily required by the present invention.
Those of ordinary skill in the art can understand that all or part of the steps in the various methods of the foregoing embodiments can be completed by a program instructing the relevant hardware; the program can be stored in a computer-readable storage medium, and the storage medium may include a flash drive, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.
The robot avoidance control method and related apparatus provided by the embodiments of the present invention have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present invention, and the description of the foregoing embodiments is intended only to help understand the method of the present invention and its core idea. Meanwhile, for those of ordinary skill in the art, there will be changes in the specific implementations and application scope in accordance with the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (10)

  1. A robot avoidance control method, wherein the method comprises:
    when a robot receives a trigger from an external object, acquiring a position at which the robot was triggered by the external object;
    determining orientation information of the external object according to the position at which the robot was triggered by the external object;
    determining an evasion movement strategy according to the orientation information of the external object and a pre-acquired environment map of an environment in which the robot is located, wherein the evasion movement strategy is determined from the orientation information and the environment map and is used to control the robot to move within the environment map so as to evade an external object, emitted from the direction indicated by the orientation information, that would trigger the robot; and
    generating a movement instruction according to the evasion movement strategy, wherein the movement instruction is used to control the robot to move.
  2. The method according to claim 1, wherein the determining an evasion movement strategy according to the orientation information of the external object and the pre-acquired environment map of the environment in which the robot is located comprises:
    predicting, according to the orientation information of the external object, a position region within the environment map that the external object will reach; and
    determining target orientation information according to the position region and the environment map;
    and the generating a movement instruction according to the evasion movement strategy comprises:
    generating a movement instruction according to the target orientation information,
    wherein the movement instruction is used to control the robot to move within the environment map according to the target orientation information, so as to evade an external object, emitted from the direction indicated by the orientation information, that would trigger the robot.
  3. The method according to claim 1, wherein the determining an evasion movement strategy according to the orientation information of the external object and the pre-acquired environment map of the environment in which the robot is located comprises:
    predicting, according to the orientation information of the external object, a position region within the environment map that the external object will reach; and
    determining a target obstacle according to the position region, the environment map, and pre-acquired obstacle information of the environment in which the robot is located;
    and the generating a movement instruction according to the evasion movement strategy comprises:
    generating a movement instruction according to the obstacle information of the target obstacle,
    wherein the movement instruction is used to control the robot to move to the side of the target obstacle facing away from the external object, so as to evade an external object, emitted from the direction indicated by the orientation information, that would trigger the robot.
  4. The method according to any one of claims 1 to 3, wherein the external object comprises a moving object and light, and the method further comprises:
    detecting, by means of a preset vibration sensor, whether the robot receives a trigger from the moving object, and detecting, by means of a preset photosensitive sensor, whether the robot receives a trigger from the light; and
    when the robot receives a trigger from the external object, decreasing a life value of the robot according to the position at which the robot was triggered by the external object.
  5. The method according to claim 4, wherein after the decreasing a life value of the robot according to the position at which the robot was triggered by the external object, the method further comprises:
    increasing the life value of the robot if the life value of the robot is not zero and the robot is not triggered again by the external object within a first preset duration after being triggered by the external object.
  6. The method according to claim 4, wherein after the decreasing a life value of the robot according to the position at which the robot was triggered by the external object, the method further comprises:
    controlling the robot to enter a static state if the life value of the robot is zero; and,
    when a duration for which the robot has been in the static state exceeds a second preset duration, resetting the life value of the robot to an initial reference life value and controlling the robot to resume moving.
  7. The method according to claim 1, wherein the method further comprises:
    performing obstacle recognition on the environment in which the robot is located, and acquiring obstacle information of the environment in which the robot is located; and
    constructing, in real time, the environment map of the environment in which the robot is located according to the obstacle information,
    wherein the obstacle information comprises one or more of: distance information between an obstacle and the robot, orientation information of an obstacle, shape information of an obstacle, and size information of an obstacle.
  8. The method according to claim 1, wherein the method further comprises:
    when the robot receives a trigger from the external object, controlling the robot to emit a trigger signal, wherein the trigger signal comprises a flashing light, a sound, or a motion.
  9. A robot, comprising a processor and a memory, wherein the memory stores executable program code, and the processor is configured to call the executable program code to execute the robot avoidance control method according to any one of claims 1 to 8.
  10. A storage medium, wherein the storage medium stores instructions that, when run on a computer, cause the computer to execute the robot avoidance control method according to any one of claims 1 to 8.
PCT/CN2018/080702 2018-03-27 2018-03-27 Robot avoidance control method and related device WO2019183804A1 (zh)


Also Published As

Publication number Publication date
CN109906134A (zh) 2019-06-18
EP3779630A4 (en) 2021-08-11
CN109906134B (zh) 2022-06-24
US20210060780A1 (en) 2021-03-04
EP3779630A1 (en) 2021-02-17


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18911429

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2018911429

Country of ref document: EP

Effective date: 20201027