WO2023088316A1 - Interaction method and apparatus for mobile robot, mobile robot, and storage medium - Google Patents

Interaction method and apparatus for mobile robot, mobile robot, and storage medium

Info

Publication number
WO2023088316A1
WO2023088316A1 · PCT/CN2022/132312
Authority
WO
WIPO (PCT)
Prior art keywords
projected
pattern
information
mobile robot
projection
Prior art date
Application number
PCT/CN2022/132312
Other languages
English (en)
French (fr)
Inventor
朱俊安
张涛
郭璁
陈鹏
吴翔
曾飞
陈俊伟
Original Assignee
深圳市普渡科技有限公司
Priority date
Filing date
Publication date
Priority claimed from CN202111354791.4A (CN114265397B)
Priority claimed from CN202111355659.5A (CN114274117A)
Application filed by 深圳市普渡科技有限公司
Priority to EP22894848.5A (EP4350461A1)
Priority to KR1020247001573A (KR20240021954A)
Publication of WO2023088316A1

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D 1/20 Control system inputs
    • G05D 1/24 Arrangements for determining position or orientation
    • G05D 1/246 Arrangements for determining position or orientation using environment maps, e.g. simultaneous localisation and mapping [SLAM]
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/88 Lidar systems specially adapted for specific applications
    • G01S 17/89 Lidar systems specially adapted for specific applications for mapping or imaging
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D 1/20 Control system inputs
    • G05D 1/24 Arrangements for determining position or orientation
    • G05D 1/242 Means based on the reflection of waves generated by the vehicle
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D 1/20 Control system inputs
    • G05D 1/24 Arrangements for determining position or orientation
    • G05D 1/243 Means capturing signals occurring naturally from the environment, e.g. ambient optical, acoustic, gravitational or magnetic signals
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 15/00 Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
    • G01S 15/88 Sonar systems specially adapted for specific applications
    • G01S 15/93 Sonar systems specially adapted for specific applications for anti-collision purposes
    • G01S 15/931 Sonar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/86 Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 2107/00 Specific environments of the controlled vehicles
    • G05D 2107/60 Open buildings, e.g. offices, hospitals, shopping areas or universities
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 2109/00 Types of controlled vehicles
    • G05D 2109/10 Land vehicles
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 2111/00 Details of signals used for control of position, course, altitude or attitude of land, water, air or space vehicles
    • G05D 2111/10 Optical signals
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 2111/00 Details of signals used for control of position, course, altitude or attitude of land, water, air or space vehicles
    • G05D 2111/30 Radio signals

Definitions

  • the present application relates to the field of artificial intelligence, in particular to an interaction method and device for a mobile robot, a mobile robot and a storage medium.
  • Mobile robots are currently used in restaurants, shopping malls, hotels and other places with a large flow of people.
  • While driving, a mobile robot will often encounter right-of-way conflicts with pedestrians.
  • Information interaction between mobile robots and pedestrians mainly takes the form of speech and actions: a robot receives human instructions through a microphone, determines the corresponding prompt information, and plays a prompt sound through a speaker to describe that information to people; alternatively, it receives action instructions and conveys information by performing different mechanical actions.
  • Interaction between mobile robots and pedestrians is usually realized by voice broadcast so that pedestrians can learn the robot's driving intention. For example, when the mobile robot is about to turn right, it plays the voice "I am going to turn right, please keep out of the way" to inform pedestrians.
  • However, prompt information conveyed by sound is affected by factors such as the distance between the person and the mobile robot, ambient noise and regional language, and prompt actions are likewise limited by that distance. In noisy places such as restaurants and shopping malls in particular, the robot's voice broadcast is difficult to transmit clearly to pedestrians, and the interaction effect is poor. The robot therefore struggles to describe the prompt information quickly and accurately, which leads to low interaction efficiency and low interaction accuracy between the mobile robot and pedestrians.
  • the present application provides an interaction method and device for a mobile robot, a mobile robot and a storage medium.
  • In a first aspect, an interaction method for a mobile robot is provided.
  • the mobile robot is provided with a projection device and an environment perception sensor.
  • the method includes:
  • obtaining map data information of the space where the mobile robot is located and obtaining real-time environment perception data collected by the environment perception sensor, the real-time environment perception data including real-time obstacle information and real-time indication information for indicating road conditions around the mobile robot;
  • obtaining target driving path information of the mobile robot based on the real-time obstacle information and the map data information, and determining a ground projection area according to the target driving path information and the real-time indication information;
  • obtaining a pattern to be projected, and determining projection parameters corresponding to the pattern to be projected according to the pattern to be projected and the ground projection area, the pattern to be projected being used to indicate the driving intention of the mobile robot; and
  • controlling the projection device according to the projection parameters to project the pattern to be projected onto the ground projection area.
  • In a second aspect, an interaction device for a mobile robot is provided, comprising:
  • the obtaining module is used to obtain map data information of the space where the mobile robot is located and to obtain real-time environment perception data collected by the environment perception sensor, the real-time environment perception data includes real-time obstacle information and real-time indication information for indicating road conditions around the mobile robot;
  • the path module is used to obtain the target driving path information of the mobile robot based on real-time obstacle information and map data information, and determine the ground projection area according to the target driving path information and real-time indication information;
  • the determination module is used to obtain the pattern to be projected and determine the projection parameters corresponding to the pattern to be projected according to the pattern to be projected and the ground projection area, the pattern to be projected being used to indicate the driving intention of the mobile robot;
  • the projection module is used to control the projection device according to the projection parameters to project the pattern to be projected onto the ground projection area.
  • In a third aspect, a mobile robot is provided, including a projection device, an environment perception sensor and a processor;
  • the environment perception sensor is used to collect real-time environment perception data, and the real-time environment perception data includes real-time obstacle information and real-time indication information for indicating road conditions around the mobile robot;
  • the processor is used to obtain map data information of the space where the mobile robot is located and the real-time environment perception data; obtain target driving path information of the mobile robot based on the real-time obstacle information and the map data information; determine the ground projection area according to the target driving path information and the real-time indication information; obtain the pattern to be projected, and determine the projection parameters corresponding to the pattern to be projected according to the pattern to be projected and the ground projection area, the pattern to be projected being used to indicate the driving intention of the mobile robot; and control the projection device according to the projection parameters to project the pattern to be projected onto the ground projection area;
  • the projection device is used for projecting the pattern to be projected onto the ground projection area.
  • In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored; when the computer program is executed by a processor, the interaction method for a mobile robot described in the first aspect is implemented.
  • FIG. 1 is a schematic structural diagram of a mobile robot in an embodiment of the present application.
  • FIG. 2 is a schematic flowchart of an interaction method for a mobile robot in an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a projection area of a mobile robot in an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a projection application of a mobile robot in an embodiment of the present application.
  • FIG. 5 is a schematic flowchart of step 101 in an embodiment of the present application.
  • FIG. 6 is a schematic flowchart of step 101 in another embodiment of the present application.
  • FIG. 7 is a schematic flowchart of step 102 in an embodiment of the present application.
  • FIG. 8 is a schematic flowchart of step 103 in an embodiment of the present application.
  • FIG. 9 is a schematic diagram of the operation of the RGBD sensor in an embodiment of the present application.
  • FIG. 10 is a schematic structural diagram of a laser projection device in an embodiment of the present application.
  • FIG. 11 is a schematic structural diagram of a laser projection device in another embodiment of the present application.
  • FIG. 12 is a schematic flowchart of step 104 in an embodiment of the present application.
  • FIG. 13 is a schematic flowchart of an interaction method for a mobile robot in another embodiment of the present application.
  • FIG. 14 is a schematic flowchart of an interaction method for a mobile robot in another embodiment of the present application.
  • FIG. 15 is a structural block diagram of an interaction device of a mobile robot in an embodiment of the present application.
  • FIG. 16 is a flowchart of an interaction method based on a mobile robot in an embodiment of the present application.
  • FIG. 17 is a flowchart of step 105 in the obstacle-based robot interaction method in an embodiment of the present application.
  • FIG. 18 is a schematic diagram of a non-overlapping area between a pattern to be projected and an obstacle area in an embodiment of the present application.
  • FIG. 19 is a schematic diagram of an overlapping area between a pattern to be projected and an obstacle area in an embodiment of the present application.
  • FIG. 20 is a schematic diagram of an overlapping area between the mobile robot and the obstacle area during movement in an embodiment of the present application.
  • FIG. 21 is a schematic diagram of the internal structure of the robot in an embodiment of the present application.
  • the mobile robot interaction method provided by the embodiment of the present application may be executed by an interaction device of the mobile robot.
  • the interaction device of the mobile robot is set on the mobile robot as shown in Figure 1, and may be implemented through software, hardware or a combination of software and hardware.
  • the terminal can be a personal computer, laptop, media player, smart TV, smartphone, tablet, and portable wearable device, among others.
  • the mobile robot is provided with environment perception sensors and a laser projection device. There may be one, two or more environment perception sensors; when there are several, each is configured differently.
  • FIG. 1 shows an exemplary mobile robot in which the environment perception sensors include an RGBD camera 1 and a radar device 3; a hub motor may also be included. It should be noted that the sensor types and installation positions of the environment perception sensors may be adjusted according to the actual situation.
  • FIG. 2 shows a flowchart of an interaction method for a mobile robot provided by an embodiment of the present application.
  • This embodiment is described by taking the method applied to a terminal as an example. It can be understood that the method can also be applied to a system including a terminal and a server, and is implemented through interaction between the terminal and the server.
  • the interaction method of the mobile robot may include the following steps:
  • Step 101 Obtain map data information of the space where the mobile robot is located and obtain real-time environment perception data collected by environment perception sensors.
  • the real-time environment perception data includes real-time obstacle information and real-time indication information for indicating road conditions around the mobile robot.
  • Obstacles include stationary obstacles and moving obstacles, and the number of obstacles of each type is not limited.
  • the real-time indication information for indicating the road condition around the mobile robot at least includes road shape information around the mobile robot and obstacle distribution on the surrounding road.
  • the environment awareness sensor includes at least an RGBD camera.
  • the RGBD camera is used to detect the distance from the obstacle around the mobile robot to the mobile robot, the obstacle identification information and the real-time indication information indicating the road condition around the mobile robot.
  • the mobile robot obtains real-time environment perception data by processing the color image and depth image collected by the RGBD camera.
  • In some embodiments, the saved map data information is retrieved directly from a preset storage area, which may be a server or a terminal of the mobile robot.
  • the map data information is constructed by the mobile robot in real time. During the movement of the mobile robot, the environment perception sensor is used to collect the data required to construct the map, and the map is constructed and improved based on the collected data.
  • Step 102 Obtain target driving path information of the mobile robot based on real-time environment perception data and map data information, and determine a ground projection area according to the target driving path information and real-time indication information.
  • the map data information includes location information of these static obstacles.
  • Before the mobile robot starts to drive, it first obtains the start position and the end position, and then determines an initial driving path from the start position to the end position based on the map data information.
  • When the environment perception sensor detects moving obstacles (such as pedestrians) around the mobile robot, an obstacle avoidance operation is performed to change the driving route; that is, the target driving path information of the mobile robot is obtained based on the real-time environment perception data and the map data information.
  • The mobile robot uses a path planning algorithm for path planning to obtain the target driving path information, where the path planning algorithm may be an incremental heuristic algorithm, a BUG algorithm, a graph search algorithm, or a combination of multiple path planning algorithms.
  • After acquiring the target driving path information, the mobile robot determines the road surface area it will travel over in a future period of time as the ground projection area, according to the target driving path. The length of that period can be determined according to the traveling speed of the mobile robot.
  • In FIG. 3, figure (a) is a three-dimensional schematic diagram of the space around the mobile robot
  • 6 is the projection light outlet of the projection device
  • 7-10 are obstacles
  • 11 is a schematic diagram of the projection area
  • 12 is the mobile robot
  • figure (b) is the ground distribution map corresponding to figure (a)
  • 7'-10' is the contact surface between obstacles 7-10 and the ground
  • 12' is the contact surface between the mobile robot 12 and the ground
  • 13 represents the target driving direction of the mobile robot.
  • The coordinate point at the center of the contact surface between the mobile robot 12 and the ground is taken as the robot's coordinate position, i.e. d0(x0, y0) in figure (b). A series of moving coordinate points of the mobile robot is determined according to the target driving path information; these points form a center line, i.e. curve 14 in figure (b). The center line is then translated to both sides by a distance equal to half the width of the mobile robot's base, giving two edge lines. The area between the two edge lines is the ground projection area, i.e. 11' in figure (b), as sketched in the code below.
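As a concrete illustration of the edge-line construction just described, the following Python sketch translates a planned centerline to both sides by half the robot's base width. It is a minimal sketch under assumed names (`projection_area_edges`, a waypoint list); the patent does not specify an implementation.

```python
import math

def projection_area_edges(centerline, base_width):
    """centerline: list of (x, y) moving coordinate points; returns (left_edge, right_edge)."""
    half = base_width / 2.0  # translation distance: half the base width
    left, right = [], []
    for i, (x, y) in enumerate(centerline):
        # Heading at this waypoint (forward/backward difference at the ends).
        x1, y1 = centerline[max(i - 1, 0)]
        x2, y2 = centerline[min(i + 1, len(centerline) - 1)]
        heading = math.atan2(y2 - y1, x2 - x1)
        # Unit normal perpendicular to the direction of travel.
        nx, ny = -math.sin(heading), math.cos(heading)
        left.append((x + half * nx, y + half * ny))
        right.append((x - half * nx, y - half * ny))
    return left, right

# The region between the two returned edge lines is the ground projection area (11' in figure (b)).
left, right = projection_area_edges([(0.0, 0.0), (0.5, 0.1), (1.0, 0.3)], base_width=0.5)
```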
  • the direction of the ground projection area is determined according to the target driving path information, and the size and shape of the ground projection area are determined according to the road surface shape information and the real-time obstacle distribution information.
  • For example, when the road surface shape information indicates a curved road, the shape of the ground projection area is correspondingly curved.
  • When the real-time obstacle distribution information shows that the free space in front of an obstacle is relatively narrow, the ground projection area is reduced accordingly.
  • Step 103 Obtain the information of the pattern to be projected, and determine the projection parameters corresponding to the pattern to be projected according to the information of the pattern to be projected and the projection area on the ground.
  • the pattern information to be projected is used to indicate the driving intention of the mobile robot.
  • the pattern to be projected may be a text pattern, a graphic pattern, or a combination of text and geometric patterns, and may also be an animation.
  • the pattern information to be projected can be displayed on the ground by flashing.
  • the projection parameters include projection angle, projection color, projection content, projection time and so on.
  • Step 104 Control the laser projection device according to the projection parameters to project the pattern information to be projected onto the ground projection area.
  • When the projection parameters are determined, the mobile robot adjusts the projection device 2 according to them, so that the projection device 2 projects the pattern to be projected onto the ground projection area; surrounding pedestrians learn the driving intention of the mobile robot by viewing the projected information on the ground.
  • In this way, the ground projection area is determined according to the target driving path information of the mobile robot and the real-time indication information describing the surrounding road conditions, and the laser projection device is adjusted based on the projection parameters corresponding to the pattern to be projected, so that the pattern representing the mobile robot's driving intention is projected onto the ground projection area. Pedestrians can thus learn the robot's driving intention from the projected pattern, which solves the poor interaction caused by the noisy environment where the robot is located and improves the interaction effect between mobile robots and pedestrians.
  • In one embodiment, before step 101 obtains the map data information of the space where the mobile robot is located and the real-time environment perception data collected by the environment perception sensor, the interaction method of the mobile robot further includes step 201, step 202 and step 203:
  • Step 201 Obtain historical environment perception data collected by the environment perception sensor when the environment of the space where the mobile robot is located satisfies a preset environment condition.
  • the preset environmental conditions include at least one of a small number of pedestrians in the environment of the space where the mobile robot is located and no one in the environment of the space where the mobile robot is located.
  • the historical environment perception data includes static obstacle information in the space where the mobile robot is located, such as tables, chairs or trash cans.
  • When the preset environmental condition is that the number of pedestrians in the environment where the mobile robot is located is small, the pedestrian-related information in the original perception data collected by the environment perception sensor is filtered out to obtain the historical environment perception data.
  • the mobile robot determines when to perform the above historical environment perception data collection operation according to the acquired historical environment perception data collection time information, for example, setting the historical environment perception data collection time to 23:00 every night.
  • Step 202 Determine the spatial coordinate information of the space where the mobile robot is located according to the historical environment perception data, and create a map of the space according to the spatial coordinate information.
  • The spatial coordinate information is that of the entire space where the mobile robot is located or of the space the mobile robot will pass through, for example, the spatial coordinate information of a restaurant, of a shopping mall, or of the mobile robot's service area within the shopping mall. For example, when the service area of the mobile robot is the second floor of the shopping mall, the spatial coordinate information of that floor needs to be determined.
  • the spatial coordinate information is two-dimensional coordinate information or three-dimensional coordinate information.
  • two-dimensional coordinates are established with the ground as a plane, and a reference position point is set.
  • the reference position point is the position point of a certain static obstacle in space, or a reference object is placed on the ground, and the position point where the reference object is located is used as the reference position point.
  • The two-dimensional coordinates of the other position points in the space are then determined relative to the reference position point.
  • Step 203 Use the data information of the created map as the map data information.
  • In this embodiment, the spatial coordinate information of the space is determined from the historical environment perception data collected while the environment of the space satisfies the preset environmental conditions, and a map of the space is created from that information. Because the map is built from data collected under these conditions, interference information in the space is reduced, which lowers both the difficulty of map construction and the amount of map data; an occupancy-grid sketch follows.
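A minimal sketch of one way step 202 could turn historical perception data into map data: static-obstacle points are rasterized onto a ground-plane occupancy grid anchored at a reference position point. The grid size and resolution are illustrative assumptions, not the patent's implementation.

```python
def build_occupancy_grid(obstacle_points, origin, size=(100, 100), resolution=0.1):
    """obstacle_points: (x, y) positions of static obstacles; origin: reference position point."""
    ox, oy = origin
    rows, cols = size
    grid = [[0] * cols for _ in range(rows)]
    for x, y in obstacle_points:
        c = int((x - ox) / resolution)
        r = int((y - oy) / resolution)
        if 0 <= r < rows and 0 <= c < cols:
            grid[r][c] = 1  # cell occupied by a static obstacle (table, chair, trash can, ...)
    return grid

# Example: two static obstacles mapped relative to a reference point at (0, 0).
grid = build_occupancy_grid([(1.2, 0.4), (3.0, 2.5)], origin=(0.0, 0.0))
```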
  • In one embodiment, the environment perception sensors include a radar device and a camera device.
  • This embodiment relates to the process of acquiring the real-time environment perception data collected by the environment perception sensors in step 101. Based on the embodiment shown in FIG. 5, as shown in FIG. 6, the process includes step 301, step 302 and step 303:
  • Step 301 Obtain the real-time distance information between the obstacle and the mobile robot collected by the radar device.
  • the radar device includes at least one of a lidar device and an ultrasonic radar device.
  • the lidar device is used to detect the distance between objects around the robot and the robot within the range of 2D or 3D plane.
  • Step 302 Obtain real-time obstacle recognition information collected by the camera device, road shape information of the road around the mobile robot, and real-time obstacle distribution information on the road around the mobile robot.
  • the camera device includes an RGBD camera; or the camera device includes an RGBD camera and an RGB camera.
  • the real-time obstacle identification information includes identifying whether the obstacle is a pedestrian.
  • an image recognition algorithm is used to recognize the image of the obstacle collected by the RGB camera or the RGBD camera to determine whether the obstacle is a pedestrian.
  • When the camera device includes an RGBD camera and an RGB camera, the RGB camera is used in conjunction with the radar device: when the radar device detects an obstacle, the mobile robot starts the RGB camera to perform a collection operation and obtain the real-time obstacle identification information.
  • Step 303 Use real-time obstacle identification information and real-time distance information as real-time obstacle information, and use road surface shape information and real-time obstacle distribution information as real-time instruction information.
  • In this embodiment, the real-time distance between obstacles and the mobile robot is obtained through the radar device, while the real-time obstacle identification information, the road surface shape information and the real-time obstacle distribution information of the road around the mobile robot are obtained through the camera device, realizing the acquisition of real-time environment perception data. Using multiple collection devices together improves both the diversity and the reliability of the real-time environment perception data; a minimal packaging sketch follows.
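The following sketch shows one plausible way to package step 303's outputs; the field names are assumptions, not the patent's data structures.

```python
from dataclasses import dataclass

@dataclass
class ObstacleInfo:          # real-time obstacle information
    distance_m: float        # real-time distance from the radar device
    is_pedestrian: bool      # identification result from the camera images

@dataclass
class IndicationInfo:            # real-time indication information
    road_shape: str              # e.g. "straight" or "curved"
    obstacle_distribution: list  # obstacle positions on the surrounding road

def fuse(radar_distance_m, identified_pedestrian, road_shape, distribution):
    """Combine radar and camera outputs into the two real-time data records."""
    return (ObstacleInfo(radar_distance_m, identified_pedestrian),
            IndicationInfo(road_shape, distribution))
```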
  • this embodiment involves obtaining the target travel path information of the mobile robot based on real-time obstacle information and map data information in step 102, including Step 401 and Step 402.
  • Step 401 Determine the real-time position of the mobile robot and the position of the obstacle according to the map data information and the real-time obstacle information.
  • the coordinate position of the mobile robot in the map is obtained as the real-time position, and then the coordinate position of the obstacle in the map is determined as the position of the obstacle according to the real-time obstacle information.
  • Step 402 Obtain the target end position of the mobile robot, determine the shortest path information from the real-time position to the target end position based on the real-time position and the position of the obstacle, and use the shortest path information as the target driving path information of the mobile robot.
  • The shortest path information from the real-time position to the target end position is determined using a shortest path algorithm.
  • the shortest path algorithm includes Dijkstra algorithm, Bellman-Ford algorithm, Floyd algorithm and SPFA algorithm and so on.
  • In this embodiment, the real-time position of the mobile robot and the positions of obstacles are determined from the map data information and real-time obstacle information, the target end position of the mobile robot is obtained, and the shortest path information from the real-time position to the target end position is computed. This realizes real-time determination of the target driving path information and improves the reliability of the mobile robot's path planning; a grid-based Dijkstra sketch is given below.
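Of the shortest-path algorithms the text names, Dijkstra is the simplest to sketch. The grid encoding, 4-connectivity and unit step cost below are illustrative assumptions.

```python
import heapq

def dijkstra(grid, start, goal):
    """grid[r][c] == 1 marks an obstacle cell; returns the path cost or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return None  # target end position unreachable from the real-time position

# Real-time position (0, 0) to target end position (2, 2), routing around obstacle cells.
cost = dijkstra([[0, 1, 0], [0, 1, 0], [0, 0, 0]], (0, 0), (2, 2))
```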
  • this embodiment involves determining the projection parameters of the laser projection device according to the pattern information to be projected and the ground projection area in step 103, including the steps 501 and step 502.
  • Step 501 For each pixel in the pattern to be projected, according to the ground projection area, determine the projection angle corresponding to the pixel, the projection time corresponding to the pixel, and the projection color corresponding to the pixel.
  • The correspondence between each pixel in the pattern to be projected and a spatial coordinate point in the ground projection area is established, and the projection angle corresponding to each pixel is obtained from this correspondence.
  • the RGBD camera is used to obtain the vertical distance information between the road around the mobile robot and the RGBD camera.
  • For each pixel, the original projection angle, projection time and projection color that would apply when projecting the pattern onto a flat road surface are first assumed. A projection angle correction parameter is then obtained from the vertical distance between the road surface around the mobile robot and the RGBD camera, and the actual projection angle corresponding to the pixel is finally computed from the correction parameter and the original projection angle.
  • Step 502 Use the projection angle corresponding to each pixel, the projection time corresponding to each pixel, and the projection color corresponding to each pixel as projection parameters of the laser projection device.
  • This embodiment determines the projection parameters of the projection device and improves the projection effect of the pattern to be projected by determining the projection angle, projection time and projection color corresponding to each pixel. Moreover, since color information can be set per pixel, the pattern projected on the road surface can be colorful, which more easily attracts the attention of surrounding pedestrians and further improves the interaction effect between the mobile robot and pedestrians; a sketch of the angle correction follows.
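A hedged sketch of the angle correction: start from the angle that would hit a flat floor, then re-aim using the vertical distance the RGBD camera actually measures for that ground point. The simple ray geometry (angles measured from the vertical) and the function name are assumptions; the patent does not give the formula.

```python
import math

def corrected_projection_angle(flat_angle_rad, camera_height_m, measured_height_m):
    """flat_angle_rad: original projection angle assuming a flat floor, measured from vertical."""
    # Horizontal range where the ray would land on a perfectly flat floor.
    flat_range_m = camera_height_m * math.tan(flat_angle_rad)
    # Re-aim so the pixel lands at the same horizontal range on the measured surface.
    return math.atan2(flat_range_m, measured_height_m)

# With a surface 5 cm lower than assumed, the corrected angle is slightly steeper.
angle = corrected_projection_angle(math.radians(60), camera_height_m=0.30, measured_height_m=0.35)
```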
  • In one embodiment, the projection device includes a vibrating mirror (galvanometer), visible light lasers and a lens, as shown in FIG. 10 and FIG. 11. The galvanometer, which is either a rotating galvanometer or a MEMS solid-state galvanometer, controls the projection direction of the laser; the visible light lasers emit laser light in the visible frequency range for display; and the lens combines the lasers of different colors.
  • When the galvanometer is a rotating galvanometer, as shown in FIG. 10, the first visible light laser 16, the second visible light laser 17 and the third visible light laser 18 emit laser light; the lens 15 combines the received laser light into one beam, and the first rotating galvanometer 13 and the second rotating galvanometer 14 adjust the direction of the combined beam to finally project the pattern to be projected 19.
  • When the galvanometer is a MEMS solid-state galvanometer, as shown in FIG. 11, the projection device includes a lens 15, a first visible light laser 16, a second visible light laser 17 and a third visible light laser 18. The three lasers emit laser light, the lens 15 combines the received light into one beam, and the MEMS solid-state galvanometer 20 adjusts the direction of the combined beam to finally project the pattern to be projected 19. A sketch of mapping one pixel to laser and mirror commands follows.
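The sketch below maps one projected pixel to device commands: the projection color sets the relative emission levels of the three RGB lasers combined by the lens, and the projection angle sets the galvanometer deflection. The linear model and the mechanical range are assumptions for illustration only.

```python
def pixel_to_commands(color_rgb, proj_angle_deg, max_mirror_deg=(20.0, 20.0)):
    """color_rgb: (r, g, b) in 0..255; proj_angle_deg: (horizontal, vertical) deflection."""
    # Relative power for the first, second and third visible light lasers.
    laser_levels = tuple(channel / 255.0 for channel in color_rgb)
    # Clamp the requested deflection to the galvanometer's mechanical range.
    mirror = tuple(max(-m, min(m, a)) for a, m in zip(proj_angle_deg, max_mirror_deg))
    return {"laser_levels": laser_levels, "mirror_deg": mirror}

# A pure-red pixel deflected 5 degrees right and 12 degrees down.
cmd = pixel_to_commands((255, 0, 0), (5.0, -12.0))
```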
  • this embodiment involves adjusting the laser projection device according to the projection parameters in step 104 to project the pattern information to be projected onto the ground projection area, including steps 601, 602 and 603.
  • Step 601 Determine the rotation angle of the vibrating mirror corresponding to each pixel according to the projection angle corresponding to each pixel, and determine the laser emission information of the visible laser and the laser synthesis information of the lens corresponding to each pixel according to the projection color corresponding to each pixel.
  • The visible light lasers include red, green and blue (RGB) primary-color lasers, and the laser emission information includes the visible light frequency band to be emitted.
  • the visible light frequency bands corresponding to the three visible light lasers in FIG. 10 or FIG. 11 are determined according to the projected colors.
  • Step 602 Determine the projection order of each pixel according to the projection time corresponding to each pixel.
  • Step 603 According to the projection order of the pixels, adjust the laser projection device according to the galvanometer rotation angle, the laser emission information and the lens laser synthesis information corresponding to each pixel, so as to project the pattern to be projected onto the ground projection area.
  • This embodiment realizes the visual display of the pattern to be projected on the ground projection area and can project colorful patterns on the ground, which helps capture pedestrians' attention and improves the interaction effect; a minimal projection-loop sketch follows.
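A minimal sketch of steps 601 to 603 as a control loop: sort the pixels by projection time and emit one command per pixel. The pixel record layout and the `send` callback are assumptions.

```python
def project(pixels, send=print):
    """pixels: [{"t": seconds, "mirror_deg": (ax, ay), "laser_rgb": (r, g, b)}, ...]"""
    for p in sorted(pixels, key=lambda p: p["t"]):                  # step 602: projection order
        send({"mirror": p["mirror_deg"], "laser": p["laser_rgb"]})  # step 603: drive the device

project([{"t": 0.002, "mirror_deg": (1.0, 2.0), "laser_rgb": (255, 0, 0)},
         {"t": 0.001, "mirror_deg": (0.5, 2.0), "laser_rgb": (0, 255, 0)}])
```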
  • the interaction method of the mobile robot further includes:
  • Step 701 According to the target driving route information and real-time environment perception data, determine whether the preset projection condition is met.
  • The preset projection conditions include at least one of the following: the driving direction of the mobile robot changes within a preset future time period, the driving state of the mobile robot is paused, there are pedestrians around the mobile robot, and the mobile robot is currently in the running state.
  • The preset projection condition is related to the driving situation of the mobile robot, and different patterns to be projected can be set for different preset projection conditions. For example, when the driving direction of the mobile robot changes, the pattern to be projected can be a combination of the arrow mark for the new direction and text; when the driving state of the mobile robot is paused, the pattern to be projected can be a text pattern such as "Please go ahead" or "Starting in xxx minutes".
  • When the preset projection condition is that the mobile robot is currently running, the robot detects whether it is in the power-on state and, if so, starts the projection device for projection. In this case the projection device is always projecting, and the pattern projected onto the ground can be changed in real time.
  • In another embodiment, the preset projection condition is that the sound intensity around the mobile robot is higher than a preset value: when the surrounding sound is louder than the preset value, interaction is performed by means of projection; when it is lower, voice reminders are used instead.
  • Step 702 If the judgment result is that the preset projection condition is met, determine the ground projection area according to the target driving route information.
  • In this embodiment, whether the preset projection condition is met is judged according to the target driving route information and the real-time environment perception data, and the pattern to be projected is projected only when the condition is met. This improves the flexibility of the projection settings, reduces the energy consumption and computation of the mobile robot, and extends the service life of the laser projection device; a condition-check sketch follows.
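A sketch of the condition check, combining the listed preset conditions with the sound-intensity embodiment. The state field names and the 70 dB threshold are illustrative assumptions.

```python
def should_project(state, ambient_sound_db, sound_threshold_db=70.0):
    """Return True when interaction should use projection rather than voice."""
    projection_conditions = (
        state.get("direction_change_ahead", False),  # direction changes in the preset window
        state.get("paused", False),                  # driving state is paused
        state.get("pedestrians_nearby", False),      # perception reports nearby pedestrians
        state.get("running", False),                 # robot is currently in the running state
    )
    # Sound-intensity embodiment: prefer projection over voice when it is too noisy.
    too_noisy_for_voice = ambient_sound_db > sound_threshold_db
    return any(projection_conditions) or too_noisy_for_voice

should_project({"pedestrians_nearby": True}, ambient_sound_db=65.0)  # -> True
```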
  • the pattern to be projected is obtained in step 103, including:
  • Step 801 According to the target driving route information, judge whether the pattern currently projected by the mobile robot can reflect the driving intention of the mobile robot.
  • the pattern currently projected by the mobile robot is the projected pattern projected onto the ground at the current moment.
  • Step 802 If so, use the currently projected pattern of the mobile robot as the pattern to be projected, that is, as the pattern to be projected onto the ground at the moment following the current moment.
  • Step 803 If not, generate a pattern to be projected according to the driving intention of the mobile robot.
  • different patterns to be projected are set according to different driving intentions of the mobile robot.
  • When the driving intention changes, the pattern projected on the ground also changes; that is, the projected pattern at the next moment differs from the projected pattern at the previous moment. For example, when the robot switches from going straight to turning, the currently projected pattern representing "going straight ahead" is converted into a projection pattern representing "turn left" or "turn right".
  • By judging whether the currently projected pattern of the mobile robot reflects its driving intention and, when it does not, generating a pattern to be projected according to that intention, this embodiment adjusts the projected pattern in real time. Pedestrians can thus accurately grasp the driving intention of the mobile robot, which improves the accuracy of the information the robot conveys and further improves the interaction effect between the mobile robot and pedestrians; a reuse-or-regenerate sketch follows.
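A minimal reuse-or-regenerate sketch for steps 801 to 803; the intention-to-pattern vocabulary is an assumption for illustration.

```python
PATTERNS = {
    "straight": "forward-arrow",
    "turn_left": "left-arrow",
    "turn_right": "right-arrow",
    "paused": "please-go-ahead-text",
}

def next_pattern(current_pattern, driving_intention):
    wanted = PATTERNS.get(driving_intention, "forward-arrow")
    if current_pattern == wanted:  # step 802: current pattern still reflects the intention
        return current_pattern
    return wanted                  # step 803: generate the pattern for the new intention

next_pattern("forward-arrow", "turn_left")  # -> "left-arrow"
```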
  • an interaction method for a mobile robot includes the following steps:
  • Step 901 Obtain historical environment perception data collected by the environment perception sensor when the environment of the space where the mobile robot is located satisfies a preset environment condition.
  • Step 902 Determine the spatial coordinate information of the space where the mobile robot is located according to the historical environment perception data, and create a spatial map according to the spatial coordinate information, using the map as map data information.
  • Step 903 Obtain real-time distance information between the obstacle and the mobile robot collected by the radar device, real-time obstacle identification information collected by the camera device, road shape information of the road around the mobile robot, and real-time obstacle distribution information on the road around the mobile robot.
  • Step 904 Use real-time obstacle identification information and real-time distance information as real-time obstacle information, and use road surface shape information and real-time obstacle distribution information as real-time indication information.
  • Step 905 Determine the real-time position of the mobile robot and the position of the obstacle according to the map data information and the real-time obstacle information.
  • Step 906 Obtain the target end position of the mobile robot, determine the shortest path information from the real-time position to the target end position based on the real-time position and the position of the obstacle, and use the shortest path information as the target driving path information of the mobile robot.
  • Step 907 Determine whether the preset projection conditions are met according to the target driving route information and real-time environment perception data, and determine the ground projection area according to the target driving route information and real-time indication information if the judgment result is in accordance with the preset projection conditions.
  • the preset projection conditions include at least one of the following conditions: the driving direction of the mobile robot changes within a preset time period in the future, the driving state of the mobile robot is paused, there are pedestrians around the mobile robot, and the mobile robot is currently running state.
  • Step 908 Obtain the pattern to be projected.
  • When the preset projection condition is that the mobile robot is currently running, judge according to the target driving path information whether the pattern currently projected by the mobile robot reflects its driving intention; if so, use the currently projected pattern as the pattern to be projected; if not, generate the pattern to be projected according to the driving intention of the mobile robot.
  • Step 909 For each pixel in the pattern to be projected, according to the ground projection area, determine the projection angle corresponding to the pixel, the projection time corresponding to the pixel, and the projection color corresponding to the pixel.
  • Step 910 Use the projection angle corresponding to each pixel, the projection time corresponding to each pixel, and the projection color corresponding to each pixel as projection parameters of the laser projection device.
  • Step 911 Determine the rotation angle of the vibrating mirror corresponding to each pixel according to the projection angle corresponding to each pixel, and determine the laser emission information of the visible laser and the laser synthesis information of the lens corresponding to each pixel according to the projection color corresponding to each pixel.
  • Step 912 Determine the projection sequence of each pixel according to the projection time corresponding to each pixel.
  • Step 913 According to the projection order of the pixels, adjust the laser projection device according to the galvanometer rotation angle, the laser emission information and the lens laser synthesis information corresponding to each pixel, so as to project the pattern to be projected onto the ground projection area.
  • In this embodiment, the pattern to be projected is projected onto the ground by the laser projection device so that pedestrians know the driving intention of the mobile robot, which improves the interaction effect between the robot and pedestrians and solves the technical problem of poor interaction caused by the noisy environment of the space where the robot is located.
  • the projection pattern projected on the road can be a colorful pattern, which can better capture the attention of pedestrians and improve the interaction effect.
  • the projection conditions can be preset to improve the flexibility of the projection device, and the projection pattern can be adjusted according to the actual scene. The accuracy of information conveyed by the mobile robot to pedestrians is improved, and the interaction effect between the mobile robot and pedestrians is further improved.
  • the interaction method of the mobile robot may further include:
  • Step 105 Project the pattern to be projected in real time during the running process, and obtain the obstacle area existing on the road surface during the running process;
  • Step 106 Detect whether there is an overlapping area between the pattern to be projected and the obstacle area; when there is, adjust the pattern to be projected according to the overlapping area so that the adjusted pattern no longer overlaps the obstacle area.
  • The pattern to be projected may specifically be a travel instruction map. By determining the curve overlapping area between the pattern to be projected and the obstacle area and adjusting the pattern accordingly, the projected pattern emitted by the robot deforms dynamically around the obstacle area, and the adjusted pattern no longer overlaps it. This realizes information interaction between the robot and the obstacle and improves the efficiency and accuracy of the information interaction between robot and human.
  • the acquisition of the obstacle area existing on the road surface during operation includes:
  • Obstacle information is collected in real time during operation, and pixel information corresponding to the obstacle information is mapped in a preset projection map;
  • In one embodiment, the pattern to be projected includes an initial pattern to be projected and enlarged patterns to be projected that are generated at different magnification ratios at different times. Projecting the pattern to be projected in real time during operation then includes: gradually enlarging the initial pattern according to a preset magnification ratio to form the enlarged patterns, and projecting at least one of the initial pattern and the enlarged patterns.
  • Step 107 Obtain an initial pattern to be projected.
  • Step 108 Perform gradual enlargement processing on the initial pattern to be projected according to a preset enlargement ratio to form an enlarged pattern to be projected.
  • Step 109 Display the initial pattern to be projected and the enlarged patterns to be projected in sequence; whichever of them is currently displayed is the pattern to be projected. An enlargement sketch follows.
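A sketch of the gradual enlargement, treating the initial pattern as a polyline scaled about the robot's position; the 25% step and frame count are assumptions.

```python
def enlarge(points, origin, ratio):
    """Scale a polyline pattern about `origin` by `ratio`."""
    ox, oy = origin
    return [(ox + ratio * (x - ox), oy + ratio * (y - oy)) for x, y in points]

initial = [(0.0, 0.5), (0.2, 1.0), (-0.2, 1.0)]  # initial pattern to be projected
# Steps 108-109: form and display the enlarged patterns in sequence.
frames = [enlarge(initial, (0.0, 0.0), 1.0 + 0.25 * k) for k in range(4)]
```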
  • In one embodiment, adjusting the pattern to be projected according to the overlapping area includes:
  • the overlapping pattern to be projected refers to the initial pattern to be projected or an enlarged pattern to be projected that overlaps the obstacle area;
  • the boundary intersection point refers to the intersection point between the mid-perpendicular line and the edge of the obstacle area, and it lies in the curve overlapping area;
  • the pattern to be projected is adjusted according to the two remaining curve segments, the curve intersection points and the boundary intersection point.
  • In another embodiment, adjusting the pattern to be projected according to the overlapping area includes:
  • the overlapping pattern to be projected includes an overlapping region that overlaps the obstacle area and a remaining region that does not overlap it;
  • adjusting the pattern to be projected according to the two remaining curve segments, the curve intersection points and the boundary intersection point to obtain the adjusted pattern includes:
  • recording the pattern formed by connecting the two remaining curve segments with the connecting line segments as the adjusted pattern to be projected; a clipping sketch is given below.
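The sketch below uses the shapely geometry library (an assumption; the patent names no library) to cut away the part of the projected curve that falls inside the obstacle area; the endpoints of the remaining segments are the boundary points that the connecting line segments would join.

```python
from shapely.geometry import LineString, Polygon

pattern = LineString([(0, 0), (0, 3)])                  # pattern to be projected, as a curve
obstacle = Polygon([(-1, 1), (1, 1), (1, 2), (-1, 2)])  # obstacle area on the road surface

# The two remaining curve segments outside the obstacle area.
remaining = pattern.difference(obstacle)
# `remaining` is a MultiLineString here; its inner endpoints (0, 1) and (0, 2) lie on the
# obstacle edge, and the adjusted pattern is routed around the obstacle between them.
```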
  • In one embodiment, after comparing the vertical distance with a preset distance threshold, the method further includes: determining an adjusted color parameter of the pattern to be projected according to the position distance, and projecting the adjusted pattern to be projected according to that color parameter, as sketched below.
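A sketch of the distance-dependent color adjustment: nearer obstacles shift the projected pattern toward a warning color. The linear red-to-green blend and the thresholds are assumptions, not the patent's mapping.

```python
def color_for_distance(distance_m, warn_m=0.5, safe_m=2.0):
    """Return an (r, g, b) color: red when the obstacle is close, green when far."""
    t = max(0.0, min(1.0, (distance_m - warn_m) / (safe_m - warn_m)))
    return (int(255 * (1 - t)), int(255 * t), 0)

color_for_distance(0.6)  # -> mostly red: obstacle just beyond the warning distance
```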
  • At least some of the steps in FIG. 2, FIGS. 5 to 8, FIGS. 12 to 14 and FIGS. 16 to 17 may include multiple sub-steps or stages. These are not necessarily completed at the same moment and may be performed at different times, and their execution order is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
  • an interaction device for a mobile robot which includes an acquisition module, a path module, a determination module, and a projection module, specifically:
  • the acquisition module is used to acquire the map data information of the space where the mobile robot is located and the real-time environment perception data collected by the environment perception sensor.
  • the real-time environment perception data includes real-time obstacle information and real-time indication information for indicating road conditions around the mobile robot;
  • the path module is used to obtain the target driving path information of the mobile robot based on real-time obstacle information and map data information, and determine the ground projection area according to the target driving path information and real-time indication information;
  • the determination module is used to obtain the pattern to be projected and determine the projection parameters corresponding to the pattern to be projected according to the pattern to be projected and the ground projection area, the pattern to be projected being used to indicate the driving intention of the mobile robot;
  • the projection module is used to control the projection device according to the projection parameters to project the pattern to be projected onto the ground projection area.
  • In one embodiment, the device also includes a map module, which is specifically used to: obtain the historical environment perception data collected by the environment perception sensor when the environment of the space where the mobile robot is located satisfies the preset environmental condition; determine the spatial coordinate information of that space according to the historical environment perception data and create a map of the space according to the spatial coordinate information; and use the data information of the map as the map data information.
  • In one embodiment, the environment perception sensors include a radar device and a camera device, and the acquisition module is used to: obtain the real-time distance information between obstacles and the mobile robot collected by the radar device, and the real-time obstacle identification information, road surface shape information and real-time obstacle distribution information collected by the camera device; use the real-time obstacle identification information and real-time distance information as the real-time obstacle information; and use the road surface shape information and real-time obstacle distribution information as the real-time indication information.
  • In one embodiment, the path module is used to: determine the real-time position of the mobile robot and the positions of obstacles according to the map data information and real-time obstacle information; obtain the target end position of the mobile robot; determine the shortest path information from the real-time position to the target end position based on the real-time position and the obstacle positions; and use the shortest path information as the target driving path information of the mobile robot.
  • In one embodiment, the determination module is used to: for each pixel in the pattern to be projected, determine, according to the ground projection area, the projection angle, projection time and projection color corresponding to the pixel; and use these per-pixel values as the projection parameters of the projection device.
  • the projection device includes a vibrating mirror, a visible light laser and a lens, and the projection module is used for:
  • the projection device is adjusted according to the rotation angle of the galvanometer corresponding to each pixel, the laser emission information corresponding to each pixel, and the laser synthesis information of the lens corresponding to each pixel, to project the pattern to be projected onto the ground projection area.
  • In one embodiment, the path module is further used to: judge whether the preset projection condition is met according to the target driving route information and the real-time environment perception data, and, if the judgment result is that it is met, determine the ground projection area according to the target driving route information.
  • the preset projection conditions include at least one of the following conditions:
  • the driving direction of the mobile robot changes within a preset future time period, the driving state of the mobile robot is paused, there are pedestrians around the mobile robot, and the mobile robot is currently in the running state.
  • the determination module is specifically used for:
  • when the preset projection condition is that the mobile robot is currently running, judge according to the target driving path information whether the pattern currently projected by the mobile robot reflects its driving intention; if so, use the currently projected pattern as the pattern to be projected; if not, generate the pattern to be projected according to the driving intention of the mobile robot.
  • the determination module is also specifically used for:
  • if the real-time obstacle information indicates that the obstacles around the mobile robot are moving obstacles, perform the step of judging, according to the target driving path information, whether the currently projected pattern of the mobile robot reflects its driving intention.
• In one embodiment, the interaction device of the mobile robot may further include:
• an obstacle area acquisition module, configured to project the pattern to be projected in real time during operation and to acquire the obstacle area present on the road surface during operation;
• an overlapping area detection module, configured to detect whether there is an overlapping area between the pattern to be projected and the obstacle area, and, when there is, to adjust the pattern to be projected according to the overlapping area so that no overlapping area remains between the pattern to be projected and the obstacle area.
• A mobile robot in an embodiment of this application includes a projection device, an environment perception sensor, and a processor;
• the environment perception sensor is configured to collect real-time environment perception data, the real-time environment perception data including real-time obstacle information and real-time indication information for indicating road conditions around the mobile robot;
• the processor is configured to acquire the map data information of the space where the mobile robot is located and the real-time environment perception data; acquire the target driving path information of the mobile robot based on the real-time obstacle information and the map data information; determine the ground projection area according to the target driving path information and the real-time indication information; acquire the pattern to be projected, which indicates the driving intention of the mobile robot; determine the projection parameters corresponding to the pattern to be projected according to the pattern to be projected and the ground projection area; and control the projection device according to the projection parameters so that the pattern to be projected is projected onto the ground projection area;
• the projection device is configured to project the pattern to be projected onto the ground projection area.
• In one embodiment, the processor is further configured to: acquire historical environment perception data collected by the environment perception sensor when the environment of the space where the mobile robot is located meets a preset environment condition; determine the spatial coordinate information of the space according to the historical environment perception data; create a map of the space according to the spatial coordinate information; and use the data information of the map as the map data information.
• In one embodiment, the environment perception sensor includes a radar device and a camera device;
• the radar device is configured to collect real-time distance information between obstacles and the mobile robot;
• the camera device is configured to collect real-time obstacle identification information, road surface shape information of the road around the mobile robot, and real-time obstacle distribution information of that road;
• the processor is configured to acquire the real-time distance information and the real-time obstacle identification information and use them as the real-time obstacle information, and to acquire the road surface shape information and the real-time obstacle distribution information and use them as the real-time indication information.
• In one embodiment, the processor is configured to:
• determine the real-time position of the mobile robot and the positions of the obstacles according to the map data information and the real-time obstacle information; acquire the target end position of the mobile robot; determine the shortest path information from the real-time position to the target end position based on the real-time position and the positions of the obstacles; and use the shortest path information as the target driving path information of the mobile robot.
• In one embodiment, the processor is configured to: for each pixel of the pattern to be projected, determine the projection angle, projection time, and projection color corresponding to the pixel according to the ground projection area, and use the projection angle, projection time, and projection color corresponding to each pixel as the projection parameters of the projection device.
• In one embodiment, the projection device includes a galvanometer, a visible-light laser, and a lens, and the processor is configured to:
• determine the rotation angle of the galvanometer for each pixel according to the projection angle corresponding to the pixel, and determine the laser emission information of the visible-light laser and the laser synthesis information of the lens for each pixel according to the projection color corresponding to the pixel; determine the projection order of the pixels according to the projection time corresponding to each pixel; and, in that order, adjust the projection device according to the rotation angle of the galvanometer, the laser emission information, and the laser synthesis information corresponding to each pixel, so as to project the pattern to be projected onto the ground projection area;
• the projection device is configured to project each pixel onto the ground projection area in the projection order of the pixels, according to the rotation angle of the galvanometer, the laser emission information, and the laser synthesis information of the lens corresponding to each pixel.
• In one embodiment, the processor is further configured to:
• judge whether a preset projection condition is met according to the target driving path information and the real-time environment perception data, the preset projection condition including at least one of the following conditions: the driving direction of the mobile robot changes within a preset future time period; the driving state of the mobile robot is paused; there are pedestrians around the mobile robot; the mobile robot is currently in the running state; and, when the judgment result is that the preset projection condition is met, determine the ground projection area according to the target driving path information.
• In one embodiment, the processor is further configured to:
• when the preset projection condition is that the mobile robot is currently in the running state, judge whether the pattern currently projected by the mobile robot reflects the driving intention of the mobile robot according to the target driving path information; if it does, use the currently projected pattern as the pattern to be projected; if it does not, generate the pattern to be projected according to the driving intention of the mobile robot.
• In one embodiment, the processor is further configured to:
• if the real-time obstacle information indicates that an obstacle around the mobile robot is a moving obstacle, perform the step of judging, according to the target driving path information, whether the pattern currently projected by the mobile robot reflects the driving intention of the mobile robot.
• In one embodiment, as shown in Figure 21, the mobile robot further includes a memory storing computer-readable instructions executable on the processor, and the processor implements the following steps when executing the computer-readable instructions.
• Step 105: Project the pattern to be projected in real time during operation, and acquire the obstacle area present on the road surface during operation.
• Understandably, the pattern to be projected characterizes the robot's traveling intention and may be a curve, a straight line, an image, and so on. It can be projected in real time by, for example, a laser device arranged on the robot, for instance onto the road surface ahead of the robot or onto equipment on the road ahead of the robot.
• In one embodiment, the pattern to be projected is formed by presetting a certain number of points in the robot's forward direction and connecting those points with curves or straight lines into one coherent figure.
• In another embodiment, the pattern to be projected is a curve obtained by connecting a preset number of curve nodes through Bezier curves, as sketched below.
• The preset number can be set according to specific requirements; for example, it can be set to 5, 7, 9, 10, and so on.
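For illustration only, the following Python sketch shows one way a preset number of curve nodes could be joined through a Bezier curve into a projectable polyline; the node coordinates, sample count, and function names are illustrative assumptions and are not part of the embodiments.

```python
from typing import List, Tuple

Point = Tuple[float, float]

def bezier_point(nodes: List[Point], t: float) -> Point:
    """Evaluate the Bezier curve defined by `nodes` at parameter t (De Casteljau)."""
    pts = list(nodes)
    while len(pts) > 1:
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]

def pattern_polyline(nodes: List[Point], samples: int = 50) -> List[Point]:
    """Sample the Bezier curve into a polyline that a projector could draw."""
    return [bezier_point(nodes, i / (samples - 1)) for i in range(samples)]

# Five preset nodes ahead of the robot (coordinates are illustrative).
nodes = [(0.0, 0.0), (0.2, 0.5), (0.0, 1.0), (-0.2, 1.5), (0.0, 2.0)]
polyline = pattern_polyline(nodes)
```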
• The operation process may include: the process of the robot moving; the waiting process in which the robot stops because it encounters an obstacle while moving; the process in which the robot stays fixed at a certain place after starting; and so on.
• In some embodiments, the pattern to be projected may specifically be a traveling indication map.
• The obstacle area is an area containing the obstacle information detected while the robot travels. The obstacle information includes static obstacle information and dynamic obstacle information: static obstacle information refers to the position information of static obstacles (for example, in a meal-delivery scenario, tables, chairs, lockers, and other obstacles that cannot move by themselves), and dynamic obstacle information refers to the position information of dynamic obstacles (such as pedestrians, other robots, and other objects that can move by themselves).
• In one embodiment, step 105, namely acquiring the obstacle area present on the road surface during operation, includes:
• collecting obstacle information in real time during operation, and mapping pixel information corresponding to the obstacle information into a preset projection map.
• During operation, static and dynamic obstacles can be detected by an obstacle detection device arranged on the robot so as to acquire their real-time position information, i.e., the obstacle information. The obstacle detection device may be a lidar sensor, an RGBD (RGB Depth) camera, or an ultrasonic sensor.
• When the obstacle information is placed in the preset projection map, each piece of obstacle information needs to be mapped into pixel information in the map, i.e., one piece of obstacle information corresponds to one piece of pixel information.
• The preset projection map can be displayed in a projection display interface provided on the robot.
• In the preset projection map, each piece of obstacle information can be represented by pixel information, and the map is updated synchronously whenever the robot collects obstacle information.
• The projection display interface is a display screen arranged at the front end or rear end of the robot; the screen can be a touch screen or a dot-matrix screen, so that the preset projection map and the obstacle information can be displayed on it.
• A minimum-area region of a preset shape containing all of the pixel information is then determined from the projection region and recorded as the obstacle area. The preset shape can be set as an ellipse, a circle, a square, an irregular figure, or another shape.
• In this embodiment the preset shape is set as a circle, so the minimum-area region is the smallest circular region containing all of the pixel information. If the region is set too large, it causes data redundancy, and the robot interacts with the obstacle before it actually approaches it, which reduces the accuracy of the robot's interaction.
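A minimal sketch of how the smallest circular region containing all obstacle pixels might be approximated is given below; it uses Ritter's bounding-circle approximation rather than an exact smallest-enclosing-circle algorithm, and all names and values are illustrative assumptions.

```python
import math
from typing import List, Tuple

def bounding_circle(points: List[Tuple[float, float]]):
    """Approximate the minimum circle containing all obstacle pixels (Ritter's method)."""
    p = points[0]
    q = max(points, key=lambda r: (r[0] - p[0])**2 + (r[1] - p[1])**2)
    s = max(points, key=lambda r: (r[0] - q[0])**2 + (r[1] - q[1])**2)
    cx, cy = (q[0] + s[0]) / 2.0, (q[1] + s[1]) / 2.0
    rad = math.dist(q, s) / 2.0
    for x, y in points:
        d = math.hypot(x - cx, y - cy)
        if d > rad:                      # grow the circle to include the outlier
            new_rad = (rad + d) / 2.0
            shift = (d - new_rad) / d    # move the centre toward the point
            cx += (x - cx) * shift
            cy += (y - cy) * shift
            rad = new_rad
    return (cx, cy), rad

center, radius = bounding_circle([(1.0, 2.0), (2.5, 2.2), (1.8, 3.1)])
```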
• In one embodiment, the pattern to be projected includes an initial pattern to be projected and different enlarged patterns to be projected generated at different times with different magnification ratios, and projecting the pattern to be projected in real time during operation includes:
• projecting the initial pattern to be projected, and projecting the generated enlarged patterns to be projected arranged with the initial pattern at the different times.
• The enlarged patterns in this embodiment are generated from the initial pattern at different times with different magnification ratios; their number may be two, three, or more, and is not limited here. Note that if the first enlarged pattern is obtained by enlarging the initial pattern by a given factor, then even if the second enlarged pattern is obtained by enlarging the first enlarged pattern again, it is in essence obtained by enlarging the initial pattern by the compounded factor.
• The magnification ratio can be selected according to specific magnification requirements.
• In another embodiment, the pattern to be projected includes at least one of an initial pattern to be projected and an enlarged pattern to be projected, the enlarged pattern being formed by enlarging the initial pattern according to a preset magnification ratio. As shown in Figure 17, projecting the pattern to be projected in real time during operation specifically includes:
• gradually enlarging the initial pattern to be projected according to the preset magnification ratio to form the enlarged pattern to be projected, and projecting at least one of the initial pattern and the enlarged pattern in real time during operation.
• Gradually enlarging the initial pattern according to the preset magnification ratio to form the enlarged pattern, and projecting at least one of the initial pattern and the enlarged pattern, includes the following steps.
• Step 107: Acquire the initial pattern to be projected.
• The pattern to be projected includes at least one of the initial pattern and the enlarged pattern; if the initial pattern does not appear at the current moment, it will appear at a later moment.
• The initial pattern to be projected is stored in the memory of the robot.
• Step 108: Enlarge the initial pattern to be projected according to the preset magnification ratio to form the enlarged pattern to be projected.
• The preset magnification ratio can be set according to specific magnification requirements and can be a fixed value or a variable value. Note that in this embodiment there is an enlargement boundary: after the initial pattern has been gradually enlarged a certain number of times (for example three, four, or five times), the enlargement stops.
• When each enlarged pattern is produced by enlarging the previous one (the first from the initial pattern, the second from the first, and so on), the preset magnification ratio is a fixed value, for example 20%, 30%, 40%, or 50%. For instance, with a preset ratio of 20%, the initial pattern is enlarged by 20% at the current moment to obtain the first enlarged pattern, and at the next moment the first enlarged pattern is enlarged by a further 20% to obtain the second enlarged pattern.
• When every enlarged pattern is produced directly from the initial pattern, the preset magnification ratio is a variable value; for example, it can be set to 10% (the ratio of the first enlarged pattern relative to the initial pattern), 15% (the second), 20% (the third), and 25% (the fourth).
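The time-sequenced enlargement could, for example, be generated as follows; the scaling origin (taken here as the pattern point nearest the robot) and the ratio list are assumptions made for illustration only.

```python
from typing import List, Tuple

Point = Tuple[float, float]

def enlarged_patterns(initial: List[Point], ratios: List[float]) -> List[List[Point]]:
    """Scale the initial pattern about its first point by each ratio.

    Each entry of `ratios` is the magnification of one enlarged pattern relative
    to the initial pattern, e.g. [0.10, 0.15, 0.20, 0.25] for the variable-ratio
    case described above, or [0.20, 0.40, 0.60] for an accumulated fixed step.
    """
    ox, oy = initial[0]  # scaling origin: assumed to be the point nearest the robot
    out = []
    for r in ratios:
        s = 1.0 + r
        out.append([(ox + (x - ox) * s, oy + (y - oy) * s) for x, y in initial])
    return out

patterns = enlarged_patterns([(0.0, 0.0), (0.0, 1.0), (0.2, 2.0)], [0.10, 0.15, 0.20])
```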
• Step 109: Display at least one of the initial pattern to be projected and the enlarged pattern to be projected in time sequence.
• The enlarged patterns to be projected may number one, two, or more.
• When there is one enlarged pattern, at least one of the initial pattern and the enlarged pattern is projected in chronological order (that is, displayed sequentially): only the initial pattern or only the enlarged pattern may be displayed at a given moment, or both may be displayed at every moment.
• When there are two enlarged patterns, at least one of the initial pattern and the enlarged patterns is displayed sequentially.
• For example, the initial pattern is displayed at one moment, one of the enlarged patterns at the next moment, and the other enlarged pattern at the moment after that, cycling in this order. Alternatively, the initial pattern is displayed at one moment, the initial pattern plus one enlarged pattern at the next moment, and the initial pattern plus both enlarged patterns at the moment after that, again cycling in this order.
• There are many ways to display at least one of the initial pattern and the enlarged patterns in time sequence; several examples are listed below, though this application is not limited to them. The display is explained by taking three step-by-step enlargements as an example.
• The pattern after the first enlargement is the first enlarged pattern to be projected;
• the pattern after the second enlargement is the second enlarged pattern to be projected;
• the pattern after the third enlargement is the third enlarged pattern to be projected.
• Example 1 of dynamically displaying the pattern to be projected.
• The initial pattern is displayed at the first moment;
• the enlarged pattern after the first enlargement is displayed at the second moment;
• the enlarged pattern after the second enlargement is displayed at the third moment;
• the enlarged pattern after the third enlargement is displayed at the fourth moment.
• From the fifth moment onward, the patterns displayed at the above four moments are cycled in order, until an obstacle is encountered or the robot's moving direction changes, at which point the pattern to be projected is deformed.
• Example 2 of dynamically displaying the pattern to be projected.
• The display from the first moment to the fourth moment is the same as in Example 1;
• from the fifth moment onward, the enlarged pattern of the fourth moment remains displayed until an obstacle is encountered or the robot's moving direction changes, at which point the pattern to be projected is deformed.
• Example 3 of dynamically displaying the pattern to be projected. The initial pattern is displayed at the first moment;
• the initial pattern and the first enlarged pattern are displayed at the second moment; the initial pattern, the first enlarged pattern, and the second enlarged pattern are displayed at the third moment; and the initial pattern, the first enlarged pattern, the second enlarged pattern, and the third enlarged pattern are displayed at the fourth moment.
• No particular order of the initial pattern and the enlarged patterns is required within a display: the initial pattern may be displayed before the enlarged patterns, or the enlarged patterns before the initial pattern.
• From the fifth moment onward, the patterns displayed at the above four moments (the initial pattern, or the initial pattern together with the enlarged patterns) are cycled in order, until an obstacle is encountered or the robot's moving direction changes, at which point the pattern to be projected is deformed.
• Example 4 of dynamically displaying the pattern to be projected.
• The display from the first moment to the fourth moment is the same as in Example 3;
• from the fifth moment onward, the initial pattern and all enlarged patterns of the fourth moment remain displayed until an obstacle is encountered or the robot's moving direction changes, at which point the pattern to be projected is deformed.
• Step 106: Detect whether there is an overlapping area between the pattern to be projected and the obstacle area; when there is, adjust the pattern to be projected according to the overlapping area so that no overlapping area remains between the pattern to be projected and the obstacle area.
• As noted above, the pattern to be projected includes the initial pattern and the enlarged patterns, so whenever the projected initial pattern or an enlarged pattern overlaps with the obstacle area, it can be determined that an overlapping area exists between the pattern to be projected and the obstacle area.
• For example, when the robot is relatively far from the obstacle area, the initial pattern may not overlap it, but an enlarged pattern may; when the enlarged pattern and the obstacle area intersect, the intersection is the overlapping area.
• When the robot is relatively close to the obstacle area, the initial pattern itself may already overlap it. Therefore, in this embodiment, it is determined that an overlapping area exists between the pattern to be projected and the obstacle area when either the initial pattern or an enlarged pattern overlaps the obstacle area.
• For example, as shown in Figure 18, B1 is the initial pattern to be projected,
• B2 is the enlarged pattern obtained after the first enlargement,
• and B3 is the enlarged pattern obtained after the second enlargement.
• Here neither the initial pattern nor the enlarged patterns overlap with the obstacle area, so it can be determined that no overlapping area exists between the pattern to be projected and the obstacle area.
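As a sketch of the overlap detection described above, assuming the projected pattern is represented as a polyline and the obstacle area as a circle (both assumptions made for illustration):

```python
import math
from typing import List, Tuple

def point_seg_dist(px, py, ax, ay, bx, by) -> float:
    """Distance from point (px, py) to the segment from (ax, ay) to (bx, by)."""
    abx, aby = bx - ax, by - ay
    l2 = abx * abx + aby * aby
    if l2 == 0.0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * abx + (py - ay) * aby) / l2))
    return math.hypot(px - (ax + t * abx), py - (ay + t * aby))

def pattern_overlaps_obstacle(polyline: List[Tuple[float, float]],
                              center: Tuple[float, float], radius: float) -> bool:
    """True if any segment of the projected pattern enters the circular obstacle area."""
    cx, cy = center
    return any(point_seg_dist(cx, cy, a[0], a[1], b[0], b[1]) <= radius
               for a, b in zip(polyline, polyline[1:]))
```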
• In some embodiments, the initial pattern to be projected may specifically be an initial indication map, and the enlarged pattern to be projected may specifically be an enlarged indication map.
• As noted above, the obstacle area is mapped into the preset projection map. If the pattern that the robot needs to project in real time while traveling is also mapped into the preset projection map, then the robot's current real-time position can be mapped into the map as well,
• together with the position information of the obstacle area, and the preset projection map can then be used to simulate whether an overlapping area exists between the pattern the robot would project at its current real-time position and the obstacle area.
• The robot's current real-time position and the true position of the obstacle area may be shown in the preset projection map directly, or mapped into it at a certain scale; this is not limited here.
• When an overlapping area exists between the pattern to be projected and the obstacle area, the pattern must be adjusted according to the overlapping area so that no overlapping area remains, thereby realizing the interaction between the robot and people.
• In one embodiment, adjusting the pattern to be projected according to the overlapping area in step 106 includes:
• determining, within the overlapping area, the two curve intersection points between the overlapping pattern to be projected and the obstacle area, where the overlapping pattern to be projected refers to the initial pattern or the enlarged pattern that overlaps the obstacle area.
• "Curve" is used here as a general term for straight and non-straight lines;
• non-straight lines can be wavy lines, arc-shaped lines, and so on.
• The initial pattern to be projected may be composed of straight segments, of non-straight segments, or of a combination of straight and non-straight segments.
• In some embodiments, the overlapping pattern to be projected may specifically be an overlapping indication map.
• As noted above, the obstacle area in this application is a circular obstacle area, i.e., the smallest circular region containing all of the obstacle information (A1 in Figure 19).
• When the overlapping pattern to be projected (L5 in Figure 19) and the obstacle area have an overlapping area, the two curve intersection points between them are determined within that area (curve intersection point 1 and curve intersection point 2 in Figure 19), i.e., the two points where the overlapping pattern crosses the boundary of the circular obstacle area.
• As noted above, the pattern to be projected is composed of a preset number of curve nodes. After the two curve intersection points between the overlapping pattern and the obstacle area are determined, the segment of the overlapping pattern lying between the two intersection points is deleted (the dotted segment of the overlapping pattern L5 inside the obstacle area A1 in Figure 19); that is, all curve nodes of the overlapping pattern lying between the two intersection points (all nodes on that dotted segment in Figure 19) are deleted. This yields one remaining curve segment ending at one of the intersection points (L1 in Figure 19) and another remaining curve segment starting at the other intersection point (L2 in Figure 19).
• L3 in Figure 19 is the connecting line between the two curve intersection points, i.e., the chord between them.
• L4 is the perpendicular bisector of that connecting line.
• The bisector point (intersection point 4 in Figure 19) is the point where the perpendicular bisector crosses the connecting line between the two curve intersection points.
• The vertical distance between the bisector point and the boundary intersection point is then detected and compared with a preset distance threshold. The boundary intersection point (3 in Figure 19) is the intersection of the perpendicular bisector with the edge of the obstacle area, and it lies within the overlapping area (A2 in Figure 19).
• The preset distance threshold can be set according to the desired visual effect of the pattern projected in real time; for example, it can be set to the radius of the obstacle area (as noted above, the obstacle area in this application is a circular region).
• If the vertical distance is less than or equal to the preset distance threshold, the pattern to be projected is adjusted according to the two remaining curve segments, the curve intersection points, and the boundary intersection point:
• the two remaining curve segments are joined through the two curve intersection points and the boundary intersection point, and the connections between the curve intersection points and the boundary intersection point are themselves drawn as curves (whose arc can be determined from the obstacle area, i.e., chosen so that the connecting curve does not overlap the obstacle area), yielding the adjusted pattern to be projected (L6 in Figure 19). If the vertical distance is greater than the preset distance threshold, projection of the pattern to be projected is stopped.
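The geometric adjustment described above (intersection points 1 and 2, chord L3, perpendicular bisector L4, boundary point 3, adjusted pattern L6) might be sketched as follows; the crossing computation, the straight-line detour, and the choice of detour direction are simplifying assumptions rather than the embodiments' exact construction.

```python
import math

def seg_circle_hits(a, b, c, r):
    """Parameters t in [0, 1] at which segment a->b crosses circle (c, r)."""
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    dx, dy = bx - ax, by - ay
    fx, fy = ax - cx, ay - cy
    A = dx * dx + dy * dy
    B = 2.0 * (fx * dx + fy * dy)
    C = fx * fx + fy * fy - r * r
    disc = B * B - 4.0 * A * C
    if A == 0.0 or disc < 0.0:
        return []
    s = math.sqrt(disc)
    return [t for t in ((-B - s) / (2 * A), (-B + s) / (2 * A)) if 0.0 <= t <= 1.0]

def adjust_pattern(poly, center, radius, max_detour=None):
    """Cut the part of the pattern inside the obstacle circle and detour around it."""
    if max_detour is None:
        max_detour = radius          # assumed threshold: the obstacle-area radius
    cx, cy = center
    hits = []                        # boundary crossings in traversal order
    for i, (a, b) in enumerate(zip(poly, poly[1:])):
        for t in seg_circle_hits(a, b, center, radius):
            hits.append((i, (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))))
    if len(hits) < 2:
        return poly                  # no overlapping area to remove
    (i1, p1), (i2, p2) = hits[0], hits[-1]
    mx, my = (p1[0] + p2[0]) / 2.0, (p1[1] + p2[1]) / 2.0   # chord midpoint (point 4)
    dm = math.hypot(mx - cx, my - cy)
    if dm < 1e-9:
        return None                  # chord through the centre: treat as too deep
    # boundary point 3: where the perpendicular bisector meets the circle edge
    bpt = (cx + (mx - cx) / dm * radius, cy + (my - cy) / dm * radius)
    if math.hypot(bpt[0] - mx, bpt[1] - my) > max_detour:
        return None                  # vertical distance too large: stop projecting
    # remaining segments L1 and L2 joined through points 1, 3, and 2
    return poly[:i1 + 1] + [p1, bpt, p2] + poly[i2 + 1:]
```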
• In another embodiment, adjusting the pattern to be projected according to the overlapping area in step 106 includes: recording the initial pattern or enlarged pattern that overlaps the obstacle area as the overlapping pattern to be projected, the overlapping pattern consisting of an overlapping region that overlaps the obstacle area and a remaining region that does not;
• deleting the overlapping region of the overlapping pattern to obtain the adjusted pattern to be projected.
• For example, suppose only the initial pattern is displayed at the first moment, only the first enlarged pattern at the second moment, and the second enlarged pattern at the third moment, with the second enlarged pattern remaining displayed at the fourth and subsequent moments.
• If the second enlarged pattern is the overlapping pattern, the overlapping region within it is deleted to obtain the adjusted pattern to be projected, and the adjusted pattern is displayed at the subsequent moments.
• Alternatively, the overlapping pattern is reduced by a preset ratio so that it becomes tangent to the edge of the obstacle area, yielding the adjusted pattern to be projected.
• For example, suppose only the initial pattern is displayed at the first moment, only the first enlarged pattern at the second moment, and the second enlarged pattern at the third moment, with this sequence cycling at subsequent moments.
• The second enlarged pattern, the first enlarged pattern, or the initial pattern may each become the overlapping pattern, and the overlapping pattern is reduced by the preset ratio so that it is tangent to the edge of the obstacle area.
• The preset reduction ratio is a variable, i.e., the ratios corresponding to overlapping patterns at different moments differ;
• the preset ratio is calculated from the degree of overlap between the overlapping pattern and the obstacle area.
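A sketch of the tangency-based reduction is given below; it assumes the pattern is a polyline shrunk about an origin near the robot, and that its clearance from the obstacle centre decreases monotonically as the scale grows, so the tangent scale can be found by bisection. All names and tolerances are illustrative.

```python
import math

def _seg_dist(px, py, ax, ay, bx, by):
    """Distance from point (px, py) to segment AB."""
    abx, aby = bx - ax, by - ay
    l2 = abx * abx + aby * aby
    if l2 == 0.0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * abx + (py - ay) * aby) / l2))
    return math.hypot(px - (ax + t * abx), py - (ay + t * aby))

def shrink_until_tangent(poly, origin, center, radius, tol=1e-4):
    """Bisect the scale about `origin` until the pattern just touches the circle edge
    (assumes the unscaled pattern overlaps the obstacle area)."""
    ox, oy = origin
    cx, cy = center

    def clearance(scale):
        pts = [(ox + (x - ox) * scale, oy + (y - oy) * scale) for x, y in poly]
        return min(_seg_dist(cx, cy, a[0], a[1], b[0], b[1])
                   for a, b in zip(pts, pts[1:]))

    lo, hi = 0.0, 1.0        # scale 0 collapses the pattern at the robot
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if clearance(mid) > radius:
            lo = mid         # still clear of the obstacle: may grow
        else:
            hi = mid         # overlaps or touches: must shrink
    return [(ox + (x - ox) * lo, oy + (y - oy) * lo) for x, y in poly]
```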
• Figure 20 is a schematic diagram of an overlapping area between the pattern projected by the mobile robot during motion and the obstacle area.
• The obstacle area A1 lies ahead of the robot in its direction of motion.
• The patterns projected by the robot in real time during operation include the initial pattern and the enlarged patterns obtained by gradually enlarging it.
• The initial pattern and the enlarged patterns are displayed dynamically; as soon as the initial pattern or any enlarged pattern overlaps the obstacle area, it is determined that an overlapping area exists between the pattern projected in real time by the robot and the obstacle area.
• In Figure 20, the enlarged pattern B4 overlaps the obstacle area A1, while the initial pattern and the enlarged patterns generated before B4 all lie between the robot and the obstacle area A1.
• In one embodiment, the processor further implements the following steps when executing the computer-readable instructions:
• acquiring the current position information of the robot, i.e., its real-time position information, and determining the position distance between the robot and the obstacle area according to the current position information;
• determining the color parameter of the adjusted pattern to be projected according to the position distance, and projecting the adjusted pattern according to that color parameter.
• The color parameter may include the type of color, the depth of color, and so on.
• When the robot is far from the obstacle area, the pattern to be projected can be displayed in a light color;
• as the robot approaches, the color of the pattern gradually becomes darker.
• For example, at a large position distance the color parameter can be a light, shallow-depth blue laser beam,
• and at a small position distance it can be a deep red laser beam.
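For illustration, a distance-to-colour mapping of this kind could be sketched as follows; the distance bounds and the two endpoint colours are assumed values, not part of the embodiments.

```python
def pattern_color(distance: float, near: float = 0.5, far: float = 3.0):
    """Interpolate from deep red (near the obstacle) to light blue (far from it).

    `distance` is the position distance in metres; the bounds and the RGB
    endpoints below are illustrative assumptions."""
    t = max(0.0, min(1.0, (distance - near) / (far - near)))
    near_rgb = (200, 0, 0)       # deep red when close to the obstacle area
    far_rgb = (150, 200, 255)    # light blue when far away
    return tuple(round(n + (f - n) * t) for n, f in zip(near_rgb, far_rgb))

print(pattern_color(0.6))  # nearly deep red
print(pattern_color(2.8))  # nearly light blue
```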
• In one embodiment, the position at which the adjusted pattern is projected is likewise determined according to the position distance.
• By adjusting the pattern to be projected according to the curve overlapping area, the pattern emitted by the robot is dynamically deformed according to the obstacle area, and the adjusted pattern does not overlap the obstacle area; this realizes the information interaction between the robot and the obstacle and improves the efficiency and accuracy of the information interaction between the robot and people.
• In one embodiment, the processor further implements the following steps when executing the computer-readable instructions:
• updating the overlapping area according to the position distance.
• The overlapping position information refers to the robot's real-time position at the moment the initial pattern or an enlarged pattern overlaps the obstacle area.
• The pattern projected by the robot in real time is gradually enlarged from the initial pattern, i.e., it includes the initial pattern and the enlarged patterns projected cyclically in real time, with a certain interval between successive patterns (between the initial pattern and an enlarged pattern, or between different enlarged patterns). Because the robot's position keeps changing, the part of the transmitted pattern that overlaps the obstacle area may differ from position to position, so the overlapping area needs to be updated at different positions.
• When the initial pattern or an enlarged pattern overlaps the obstacle area,
• the current position of the robot, i.e., the overlapping position information, is recorded,
• and the position distance between the robot and the obstacle area is determined from the overlapping position information.
• The overlapping areas at different robot positions differ: for the same pattern (initial or enlarged), as the robot gets closer to the obstacle area,
• the overlapping area between the pattern and the obstacle area may grow. The overlapping area can therefore be updated in real time, and the pattern to be projected adjusted according to the updated overlapping area, making the interaction between the robot and people more flexible and accurate.
• In one embodiment, the processor further implements the following steps when executing the computer-readable instructions:
• determining the size of the overlapping area according to the magnification.
• The magnification refers to the magnification of an enlarged pattern relative to the initial pattern; for example, with a preset magnification ratio of 20%, the first enlarged pattern has a magnification of 20% relative to the initial pattern, and the second enlarged pattern has a magnification of 40% relative to the initial pattern.
• At different magnifications, the overlapping area between the corresponding enlarged pattern and the obstacle area differs.
• The size of the overlapping area can therefore be determined from the magnification: when a pattern (the initial pattern or an enlarged pattern) overlaps the obstacle area, the size of the overlapping area is adjusted according to the magnification and the overlapping area is updated in real time, so that the pattern to be projected is adjusted according to the updated overlapping area, making the interaction between the robot and people more flexible and accurate.
• In an embodiment of this application, a computer-readable storage medium is provided on which a computer program is stored; when the computer program is executed by a processor, the steps in the foregoing method embodiments are implemented.
• Any references to memory, storage, databases, or other media used in the various embodiments provided in this application may include at least one of non-volatile memory and volatile memory.
  • Non-volatile memory may include read-only memory (Read-Only Memory, ROM), magnetic tape, floppy disk, flash memory or optical memory, etc.
  • Volatile memory can include Random Access Memory (RAM) or external cache memory.
  • RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)

Abstract

An interaction method and apparatus for a mobile robot, a computer device, a storage medium, and a computer program product. The method comprises: acquiring map data information of the space where the mobile robot is located and real-time environment perception data collected by an environment perception sensor, the real-time environment perception data comprising real-time obstacle information and real-time indication information for indicating road conditions around the mobile robot (101); acquiring target driving path information of the mobile robot on the basis of the real-time obstacle information and the map data information, and determining a ground projection area according to the target driving path information and the real-time indication information (102); acquiring a pattern to be projected, and determining projection parameters corresponding to the pattern to be projected according to the pattern to be projected and the ground projection area, the pattern to be projected being used to indicate the driving intention of the mobile robot (103); and controlling a projection device according to the projection parameters so as to project the pattern to be projected onto the ground projection area (104).

Description

Interaction method and apparatus for a mobile robot, mobile robot, and storage medium
Related Applications
This application claims priority to Chinese Patent Application No. 202111354791.4, filed on November 16, 2021 and entitled "Interaction method and apparatus for mobile robot, mobile robot and storage medium", and to Chinese Patent Application No. 202111355659.5, filed on November 16, 2021 and entitled "Robot, obstacle-based robot interaction method, apparatus and medium", the entire contents of both of which are incorporated herein by reference.
Technical Field
This application relates to the field of artificial intelligence, and in particular to an interaction method and apparatus for a mobile robot, a mobile robot, and a storage medium.
Background
Mobile robots are now deployed in restaurants, shopping malls, hotels, and other places with heavy foot traffic. While a mobile robot is driving, right-of-way conflicts with pedestrians frequently occur. An interaction method is therefore needed that enables pedestrians to understand the mobile robot's driving intention in time and act accordingly to resolve such conflicts.
In conventional technology, information interaction between a mobile robot and pedestrians mainly takes the form of speech or motion. For example, the mobile robot receives a person's instruction through a microphone, determines the corresponding prompt information, and emits a prompt sound through a loudspeaker to describe the content of that information; or it receives motion instructions and conveys information by performing different mechanical actions. Typically, voice broadcasting is used to realize the interaction between the mobile robot and pedestrians so that pedestrians can learn the robot's driving intention; for example, when the mobile robot is turning right it plays the message "I am turning right, please watch out" to inform pedestrians.
In conventional technology, the prompt information is conveyed by prompt sounds or body motions. The prompt sound is affected by the distance between the person and the mobile robot, the ambient sound, regional differences in language, and other factors, and prompt motions are likewise affected by the distance between the person and the robot. Especially in relatively noisy places such as restaurants and shopping malls, the voice broadcast by the mobile robot is difficult to deliver clearly to pedestrians, and the interaction effect is poor. As a result, the mobile robot has difficulty describing the prompt information to people quickly and accurately, which leads to low interaction efficiency and low interaction accuracy between the mobile robot and pedestrians.
Summary
This application provides an interaction method and apparatus for a mobile robot, a mobile robot, and a storage medium.
In a first aspect, an interaction method for a mobile robot is provided, the mobile robot being provided with a projection device and an environment perception sensor, and the method including:
acquiring map data information of the space where the mobile robot is located, and acquiring real-time environment perception data collected by the environment perception sensor, the real-time environment perception data including real-time obstacle information and real-time indication information for indicating road conditions around the mobile robot;
acquiring target driving path information of the mobile robot based on the real-time obstacle information and the map data information, and determining a ground projection area according to the target driving path information and the real-time indication information;
acquiring a pattern to be projected, and determining projection parameters corresponding to the pattern to be projected according to the pattern to be projected and the ground projection area, the pattern to be projected being used to indicate the driving intention of the mobile robot;
controlling the projection device according to the projection parameters so as to project the pattern to be projected onto the ground projection area.
In a second aspect, an interaction apparatus for a mobile robot is provided, the apparatus including:
an acquisition module, configured to acquire map data information of the space where the mobile robot is located and real-time environment perception data collected by an environment perception sensor, the real-time environment perception data including real-time obstacle information and real-time indication information for indicating road conditions around the mobile robot;
a path module, configured to acquire target driving path information of the mobile robot based on the real-time obstacle information and the map data information, and to determine a ground projection area according to the target driving path information and the real-time indication information;
a determining module, configured to acquire a pattern to be projected and to determine projection parameters corresponding to the pattern to be projected according to the pattern to be projected and the ground projection area, the pattern to be projected being used to indicate the driving intention of the mobile robot;
a projection module, configured to control a projection device according to the projection parameters so as to project the pattern to be projected onto the ground projection area.
In a third aspect, a mobile robot is provided, the mobile robot including a projection device, an environment perception sensor, and a processor;
the environment perception sensor is configured to collect real-time environment perception data, the real-time environment perception data including real-time obstacle information and real-time indication information for indicating road conditions around the mobile robot;
the processor is configured to acquire map data information of the space where the mobile robot is located and the real-time environment perception data; acquire target driving path information of the mobile robot based on the real-time obstacle information and the map data information; determine a ground projection area according to the target driving path information and the real-time indication information; acquire a pattern to be projected, which is used to indicate the driving intention of the mobile robot; determine projection parameters corresponding to the pattern to be projected according to the pattern to be projected and the ground projection area; and control the projection device according to the projection parameters so as to project the pattern to be projected onto the ground projection area;
the projection device is configured to project the pattern to be projected onto the ground projection area.
In a fourth aspect, a computer-readable storage medium is provided on which a computer program is stored; when the computer program is executed by a processor, the interaction method for a mobile robot of the first aspect is implemented.
Details of one or more embodiments of this application are set forth in the drawings and the description below. Other features and advantages of this application will become apparent from the specification, the drawings, and the claims.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of this application or in conventional technology more clearly, the drawings needed in the description of the embodiments or of conventional technology are briefly introduced below. Evidently, the drawings described below are only some embodiments of this application; for those of ordinary skill in the art, other drawings can be obtained from them without creative effort.
Figure 1 is a schematic structural diagram of a mobile robot in an embodiment of this application.
Figure 2 is a schematic flowchart of an interaction method for a mobile robot in an embodiment of this application.
Figure 3 is a schematic diagram of the projection area of a mobile robot in an embodiment of this application.
Figure 4 is a schematic diagram of a projection application of a mobile robot in an embodiment of this application.
Figure 5 is a schematic flowchart of step 101 in an embodiment of this application.
Figure 6 is a schematic flowchart of step 101 in another embodiment of this application.
Figure 7 is a schematic flowchart of step 102 in an embodiment of this application.
Figure 8 is a schematic flowchart of step 103 in an embodiment of this application.
Figure 9 is a schematic diagram of the operation of an RGBD sensor in an embodiment of this application.
Figure 10 is a schematic structural diagram of a laser projection device in an embodiment of this application.
Figure 11 is a schematic structural diagram of a laser projection device in another embodiment of this application.
Figure 12 is a schematic flowchart of step 104 in an embodiment of this application.
Figure 13 is a schematic flowchart of an interaction method for a mobile robot in another embodiment of this application.
Figure 14 is a schematic flowchart of an interaction method for a mobile robot in yet another embodiment of this application.
Figure 15 is a structural block diagram of an interaction apparatus for a mobile robot in an embodiment of this application.
Figure 16 is a flowchart of a mobile-robot-based interaction method in an embodiment of this application.
Figure 17 is a flowchart of step 105 in an obstacle-based robot interaction method in an embodiment of this application.
Figure 18 is a schematic diagram in which no overlapping area exists between the pattern to be projected and the obstacle area in an embodiment of this application.
Figure 19 is a schematic diagram in which an overlapping area exists between the pattern to be projected and the obstacle area in an embodiment of this application.
Figure 20 is a schematic diagram in which an overlapping area exists between the mobile robot and the obstacle area during motion in an embodiment of this application.
Figure 21 is a schematic diagram of the internal structure of a robot in an embodiment of this application.
Detailed Description
To facilitate understanding of this application, it is described more fully below with reference to the accompanying drawings, which show embodiments of this application. However, this application can be implemented in many different forms and is not limited to the embodiments described herein; rather, these embodiments are provided to make the disclosure of this application more thorough and complete.
Unless defined otherwise, all technical and scientific terms used herein have the same meanings as commonly understood by those skilled in the technical field of this application. The terms used in the specification are only for describing specific embodiments and are not intended to limit this application. The term "and/or" as used in this specification includes any and all combinations of the associated listed items.
The interaction method for a mobile robot provided in the embodiments of this application can be executed by an interaction apparatus for a mobile robot, which is arranged on a mobile robot as shown in Figure 1 and can be implemented as part or all of a terminal of the mobile robot by software, hardware, or a combination of the two. The terminal may be a personal computer, a notebook computer, a media player, a smart TV, a smartphone, a tablet computer, a portable wearable device, or the like.
The mobile robot is provided with a plurality of environment perception sensors and a laser projection device. There may be one, two, or more environment perception sensors, and when there are several, their arrangements differ. Figure 1 shows an example of such a mobile robot: the environment perception sensors include an RGBD camera 1 and a radar device 3, and the projection device 2 is arranged above the driving mechanism 4 of the robot, where the driving mechanism 4 may include hub motors. Note that the sensor types and mounting positions of the environment perception sensors can be adjusted according to actual conditions.
Refer to Figure 2, which shows a flowchart of an interaction method for a mobile robot provided in an embodiment of this application. This embodiment is illustrated by applying the method to a terminal; it will be understood that the method can also be applied to a system including a terminal and a server and implemented through their interaction. As shown in Figure 2, the interaction method for a mobile robot can include the following steps.
Step 101: Acquire map data information of the space where the mobile robot is located and real-time environment perception data collected by the environment perception sensor.
The real-time environment perception data includes real-time obstacle information and real-time indication information for indicating road conditions around the mobile robot. Obstacles include two types, static obstacles and moving obstacles, and the data of each type is not limited. The real-time indication information for indicating the road conditions around the mobile robot includes at least the road surface shape information around the robot and the distribution of obstacles on the surrounding road surface.
In some embodiments, the environment perception sensor includes at least an RGBD camera, which is used to detect the distances of surrounding obstacles from the mobile robot, obstacle identification information, and the real-time indication information indicating the surrounding road conditions. The mobile robot processes the color and depth images collected by the RGBD camera to obtain the real-time environment perception data.
In one optional implementation, saved map data information is retrieved directly from a preset storage area, which may be a server or the terminal of the mobile robot. In another optional implementation, the map data information is constructed by the mobile robot in real time: while moving, the robot uses the environment perception sensors to collect the data needed for map construction and builds and refines the map based on the collected data.
Step 102: Acquire target driving path information of the mobile robot based on the real-time environment perception data and the map data information, and determine the ground projection area according to the target driving path information and the real-time indication information.
In a restaurant or shopping-mall environment, static obstacles can be regarded as objects whose positions are fixed over a period of time, such as tables and chairs, trash cans, and cabinets. In some embodiments, the map data information includes the position information of these static obstacles. Before starting to drive, the mobile robot first acquires its start and end positions and determines an initial driving path between them based on the map data information. When the environment perception sensor detects a moving obstacle (for example, a pedestrian) around the robot, an obstacle-avoidance operation is performed and the driving route is changed; that is, the target driving path information of the robot is acquired based on the real-time environment perception data and the map data information.
In some embodiments, the mobile robot performs path planning with a path-planning algorithm based on the obtained real-time environment perception data and map data information to obtain the target driving path information, where the path-planning algorithm includes incremental heuristic algorithms, the BUG algorithm, graph search methods, combinations of multiple path-planning algorithms, and so on.
In some embodiments, after acquiring the target driving path information, the mobile robot determines, from the target driving path, the road surface area it will drive over in a coming period of time as the ground projection area, where the length of that period can be determined from the robot's driving speed.
As shown in Figure 3, panel (a) is a three-dimensional schematic diagram of the space around the mobile robot: 6 is the projection outlet of the projection device, 7-10 are obstacles, 11 indicates the projection area, and 12 is the mobile robot. Panel (b) is the corresponding ground layout of panel (a): 7'-10' are the contact surfaces of obstacles 7-10 with the ground, 12' indicates the contact surface of robot 12 with the ground, and 13 indicates the robot's target driving direction.
The coordinate point corresponding to the centre of the contact surface between robot 12 and the ground is taken as the robot's coordinate position, i.e., d0(x0, y0) in panel (b). A series of movement coordinate points of the robot over the coming period is determined from the target driving path information; these points form a centre line, curve 14 in panel (b). The centre line is then translated to both sides by a distance equal to half the width of the robot's base, producing two edge lines. The region between the two edge lines is the ground projection area, 11' in panel (b).
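For illustration, the offsetting of the centre line (curve 14) into the two edge lines bounding the ground projection area 11' might be sketched as follows; the polyline representation and all names are assumptions made for illustration.

```python
import math
from typing import List, Tuple

def ground_projection_area(centerline: List[Tuple[float, float]], half_width: float):
    """Offset the planned centre line to both sides by half the robot base width,
    giving the two edge lines that bound the ground projection area."""
    left, right = [], []
    for i in range(len(centerline) - 1):
        (x0, y0), (x1, y1) = centerline[i], centerline[i + 1]
        length = math.hypot(x1 - x0, y1 - y0) or 1.0
        nx, ny = -(y1 - y0) / length, (x1 - x0) / length   # unit normal to the segment
        left.append((x0 + nx * half_width, y0 + ny * half_width))
        right.append((x0 - nx * half_width, y0 - ny * half_width))
        if i == len(centerline) - 2:                       # close off the final point
            left.append((x1 + nx * half_width, y1 + ny * half_width))
            right.append((x1 - nx * half_width, y1 - ny * half_width))
    return left, right

edges = ground_projection_area([(0.0, 0.0), (0.0, 1.0), (0.3, 2.0)], half_width=0.25)
```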
In some embodiments, the direction of the ground projection area is determined from the target driving path information, and its size and shape are determined from the road surface shape information and the real-time obstacle distribution information. For example, when the road surface shape is curved, the ground projection area is curved; when the real-time obstacle distribution indicates that the free space in front of an obstacle is narrow, the ground projection area must be made smaller.
Step 103: Acquire the pattern information to be projected, and determine the projection parameters corresponding to the pattern to be projected according to the pattern information to be projected and the ground projection area.
The pattern information to be projected is used to indicate the driving intention of the mobile robot. The pattern may be a text pattern, a graphic pattern, a combination of text and geometric figures, or an animation, and it may be displayed blinking on the ground.
In some embodiments, the projection parameters include the projection angle, projection color, projection content, projection time, and so on.
Step 104: Control the laser projection device according to the projection parameters so as to project the pattern information to be projected onto the ground projection area.
As shown in Figure 4, once the projection parameters are determined, the mobile robot adjusts projection device 2 according to them so that it projects the pattern information onto the ground projection area, and surrounding pedestrians learn the robot's driving intention by looking at the projected information on the ground.
In this embodiment, the ground projection area is determined from the robot's target driving path information and the real-time indication information indicating the surrounding road conditions, and the laser projection device is adjusted based on the projection parameters of the determined pattern so as to project the pattern characterizing the robot's driving intention onto the ground projection area. Pedestrians can thus learn the robot's driving intention from the pattern projected on the ground, which solves the technical problem of poor interaction caused by the noisy environment of the space where the robot is located and improves the interaction between the mobile robot and pedestrians.
In the implementation of this application, referring to Figure 5 and based on the embodiment shown in Figure 2, before the map data information and the real-time environment perception data are acquired in step 101, the interaction method provided by this embodiment further includes steps 201, 202, and 203.
Step 201: Acquire historical environment perception data collected by the environment perception sensor when the environment of the space where the mobile robot is located meets a preset environment condition.
The preset environment condition includes at least one of the following: few pedestrians in the environment of the space where the robot is located, or no people in it.
In some embodiments, the historical environment perception data includes information on static obstacles present in the space, such as tables, chairs, or trash cans. When the preset environment condition is that there are few pedestrians in the environment, pedestrian-related information is filtered out of the raw perception data collected by the sensor to obtain the historical environment perception data.
In some embodiments, the mobile robot determines when to perform the above historical data collection according to acquired collection-time information; for example, the collection time can be set to 23:00 every night.
Step 202: Determine the spatial coordinate information of the space where the mobile robot is located according to the historical environment perception data, and create a map of the space according to the spatial coordinate information.
The spatial coordinate information is that of the entire space where the robot is located or of the space the robot will pass through, for example the coordinate information of a restaurant or shopping mall, or of the space corresponding to the robot's service area within a mall. For instance, if the robot's service area is the second floor of a mall, the spatial coordinate information of that floor must be determined.
The spatial coordinate information is two-dimensional or three-dimensional. In some embodiments, as shown in Figure 3, two-dimensional coordinates are established with the ground as the plane and a reference position point is set. For example, the reference point is the position of some static obstacle in the space, or a reference object is placed on the ground and its position is used as the reference point. Based on the two-dimensional coordinates of the reference point, the two-dimensional coordinates of the other positions in the space are determined.
Step 203: Use the data information of the map as the map data information.
In this embodiment, the historical environment perception data collected when the environment of the space meets the preset environment condition is used to determine the spatial coordinate information and to create the map of the space. Because the map is built from data collected under an environment meeting the preset condition, interference information in the space is reduced, which lowers the difficulty of map construction and the volume of the map data information.
In an example of this application, each environment perception sensor includes a radar device and a camera device. Referring to Figure 6, this embodiment concerns the process of acquiring the real-time environment perception data collected by the environment perception sensor in step 101. Based on the embodiment shown in Figure 5 and as shown in Figure 6, the process includes steps 301, 302, and 303.
Step 301: Acquire the real-time distance information between obstacles and the mobile robot collected by the radar device.
In some embodiments, the radar device includes at least one of a lidar device and an ultrasonic radar device. The lidar device detects, within a 2D or 3D plane, the distances of objects around the robot from the robot.
Step 302: Acquire the real-time obstacle identification information, the road surface shape information of the road around the robot, and the real-time obstacle distribution information of that road collected by the camera device.
In some embodiments, the camera device includes an RGBD camera, or an RGBD camera together with an RGB camera.
The real-time obstacle identification information includes identifying whether the obstacle is a pedestrian. In some embodiments, an image-recognition algorithm is applied to the obstacle images collected by the RGB or RGBD camera to determine whether the obstacle is a pedestrian.
In some embodiments, when the camera device includes both an RGBD camera and an RGB camera, the RGB camera is used together with the radar device: when the radar detects an obstacle, the robot starts the RGB camera to perform collection for obtaining the real-time obstacle identification information.
Step 303: Use the real-time obstacle identification information and the real-time distance information as the real-time obstacle information, and use the road surface shape information and the real-time obstacle distribution information as the real-time indication information.
In this embodiment, the real-time distance information between obstacles and the robot is obtained through the radar device, and the real-time obstacle identification information, road surface shape information, and real-time obstacle distribution information through the camera device, realizing the acquisition of the real-time environment perception data. Using multiple collection devices together increases the diversity and the reliability of the real-time environment perception data.
In an embodiment of this application, referring to Figure 7 and based on the embodiment shown in Figure 2, this embodiment concerns acquiring the target driving path information of the mobile robot based on the real-time obstacle information and the map data information in step 102, including steps 401 and 402.
Step 401: Determine the real-time position of the mobile robot and the positions of the obstacles according to the map data information and the real-time obstacle information.
In some embodiments, the robot's coordinate position in the map is taken as the real-time position, and the obstacle's coordinate position in the map is then determined from the real-time obstacle information as the obstacle position.
Step 402: Acquire the target end position of the mobile robot; based on the real-time position and the obstacle positions, determine the shortest path information from the real-time position to the target end position, and use it as the target driving path information of the robot.
In some embodiments, a shortest-path algorithm is used to determine the shortest path information from the real-time position to the target end position, the shortest-path algorithms including the Dijkstra algorithm, the Bellman-Ford algorithm, the Floyd algorithm, the SPFA algorithm, and so on. A sketch of one of them follows.
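As an illustrative sketch of one of the named algorithms, Dijkstra's algorithm on a 4-connected occupancy grid (0 = free, 1 = obstacle) can compute the shortest path information; the grid representation and names are assumptions made for illustration.

```python
import heapq

def dijkstra_grid(grid, start, goal):
    """Shortest path on an occupancy grid from `start` to `goal` (row, col) cells."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), float("inf")):
            continue  # stale entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1.0
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    if goal not in prev and goal != start:
        return None  # unreachable
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

grid = [[0, 0, 0], [1, 1, 0], [0, 0, 0]]
print(dijkstra_grid(grid, (0, 0), (2, 0)))
```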
In this embodiment, the robot's real-time position and the obstacle positions are determined from the map data information and the real-time obstacle information, the robot's target end position is acquired, and the shortest path information from the real-time position to the target end position is determined based on the real-time position and the obstacle positions. This realizes real-time determination of the robot's target driving path information and improves the reliability of the robot's path planning.
In an embodiment of this application, referring to Figure 8 and based on the embodiment shown in Figure 2, this embodiment concerns determining the projection parameters of the laser projection device according to the pattern information to be projected and the ground projection area in step 103, including steps 501 and 502.
Step 501: For each pixel in the pattern to be projected, determine the projection angle, the projection time, and the projection color corresponding to the pixel according to the ground projection area.
In some embodiments, the correspondence between each pixel of the pattern and a spatial coordinate point in the ground projection area is acquired, and the projection angle, projection time, and projection color of each pixel are determined from that correspondence.
In some embodiments, as shown in Figure 9, the ground projection area may contain uneven regions. An RGBD camera is used to acquire the vertical distance information between the road surface around the robot and the camera.
For each pixel, the original projection angle, projection time, and projection color that would apply if the pattern were projected onto a flat road surface are first assumed; then, from the vertical distance information between the surrounding road surface and the RGBD camera, a projection-angle correction parameter is acquired, and the actual projection angle of the sampling point is obtained from the correction parameter and the original angle. That actual angle is the projection angle corresponding to the sampling point.
Step 502: Use the projection angle, projection time, and projection color corresponding to each pixel as the projection parameters of the laser projection device.
In this embodiment, determining a projection angle, projection time, and projection color for each pixel of the pattern realizes the determination of the projection parameters of the projection device and improves the projection effect of the pattern. The color information of every pixel can also be set so that the pattern projected onto the road surface is a colored pattern, which attracts the attention of surrounding pedestrians more easily and further improves the interaction between the robot and pedestrians.
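For illustration, a simplified per-pixel parameter computation for a flat floor is sketched below; the angle model, the mounting height, and the timing values are assumptions, and the unevenness correction described above is omitted.

```python
import math
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class PixelParam:
    angle: Tuple[float, float]  # galvanometer deflection (pan, tilt) in radians
    time: float                 # projection time within the frame, in seconds
    color: Tuple[int, int, int] # RGB projection colour

def projection_parameters(pixels: List[Tuple[float, float, Tuple[int, int, int]]],
                          height: float, pixel_period: float = 2e-5) -> List[PixelParam]:
    """Map each pattern pixel (x lateral, y forward, rgb) in ground coordinates,
    relative to the point below the projector, to an angle, time, and colour."""
    params = []
    for i, (x, y, rgb) in enumerate(pixels):
        pan = math.atan2(x, y)                        # left/right deflection
        tilt = math.atan2(math.hypot(x, y), height)   # downward deflection
        params.append(PixelParam((pan, tilt), i * pixel_period, rgb))
    return params

params = projection_parameters([(0.0, 1.0, (255, 0, 0)), (0.1, 1.2, (0, 255, 0))], height=0.8)
```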
In the embodiments of this application, the projection device includes a galvanometer, visible-light lasers, and a lens. As shown in Figures 10 and 11, the galvanometer is a rotating galvanometer or a MEMS solid-state galvanometer used to control the projection direction of the laser; the visible-light lasers emit laser light in the visible band for display; and the lens synthesizes lasers of multiple colors.
In one embodiment, when the galvanometer is a rotating galvanometer, as shown in Figure 10, the projection device includes a first rotating galvanometer 13, a second rotating galvanometer 14, a lens 15, and a first visible-light laser 16, a second visible-light laser 17, and a third visible-light laser 18. The three lasers each emit laser light, the lens 15 synthesizes the received beams into a single ray, the first rotating galvanometer 13 and the second rotating galvanometer 14 adjust the direction of the synthesized ray, and the pattern 19 to be projected is finally projected.
In another embodiment, when the galvanometer is a MEMS solid-state galvanometer, as shown in Figure 11, the projection device includes a MEMS solid-state galvanometer 20, a lens 15, and the first, second, and third visible-light lasers 16, 17, and 18. The three lasers each emit laser light, the lens 15 synthesizes the received beams into a single ray, the MEMS solid-state galvanometer 20 adjusts the direction of the synthesized ray, and the pattern 19 to be projected is finally projected.
Referring to Figure 12 and based on the embodiment shown in Figure 9, this embodiment concerns adjusting the laser projection device according to the projection parameters to project the pattern information onto the ground projection area in step 104, including steps 601, 602, and 603.
Step 601: Determine the rotation angle of the galvanometer for each pixel according to the projection angle corresponding to the pixel, and determine the laser emission information of the visible-light lasers and the laser synthesis information of the lens for each pixel according to the projection color corresponding to the pixel.
The lasers of the visible-light lasers include the three primary colors red, green, and blue (RGB), and the laser emission information includes the visible-light frequency band. In some embodiments, the visible-light bands of the three visible-light lasers in Figure 10 or Figure 11 are determined from the projection color.
Step 602: Determine the projection order of the pixels according to the projection time corresponding to each pixel.
Step 603: In the projection order of the pixels, adjust the laser projection device according to the rotation angle of the galvanometer, the laser emission information, and the laser synthesis information of the lens corresponding to each pixel, so as to project the pattern information onto the ground projection area.
This embodiment realizes the visual display of the pattern information in the ground projection area, and a colored pattern can be projected on the ground, which helps capture pedestrians' attention and improves the interaction effect.
In an embodiment of this application, referring to Figure 13 and based on the embodiment shown in Figure 2, before the step of determining the ground projection area according to the target driving path information and the real-time indication information, the interaction method for a mobile robot further includes:
Step 701: Judge whether a preset projection condition is met according to the target driving path information and the real-time environment perception data.
The preset projection condition includes at least one of the following: the driving direction of the mobile robot changes within a preset future time period; the driving state of the robot is paused; there are pedestrians around the robot; the robot is currently in the running state.
In some embodiments, the preset projection condition is related to the robot's driving situation, and different pattern information can be set for different preset projection conditions. For example, when the driving direction changes, the pattern information can be "a combination of an arrow mark corresponding to the driving direction and text"; when the driving state is paused, it can be a text pattern such as "After you" or "Will start moving in xxx minutes".
In some embodiments, the preset projection condition is that the robot is currently in the running state: it is detected whether the robot is powered on, and if so, the projection device is started to project. In this case the projection device stays in the pattern-projecting state, and the pattern projected onto the ground can change in real time.
In some embodiments, the preset projection condition is that the sound intensity around the robot is higher than a preset value. A sound collection device is arranged on the robot to collect the surrounding sound; when the intensity of the surrounding sound is above the preset value, interaction is performed by projection, and when it is below the preset value, interaction is performed by voice prompt.
Step 702: When the judgment result is that the preset projection condition is met, determine the ground projection area according to the target driving path information.
In this embodiment, whether the preset projection condition is met is judged from the target driving path information and the real-time environment perception data, and the ground projection area is determined from the target driving path information only when the condition is met. Because the pattern is projected only when the preset projection condition is met, the flexibility of the projection settings is improved, the robot's energy consumption and computation load are reduced, and the service life of the laser projection device is extended.
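A minimal sketch of the condition check in step 701 is given below; the dictionary keys summarising the judgment results are illustrative assumptions.

```python
def check_projection_conditions(state: dict) -> list:
    """Return the names of the preset projection conditions that are met.

    `state` is an assumed summary of the judgment of step 701, e.g.
    {"direction_changes_soon": True, "paused": False,
     "pedestrians_nearby": True, "running": True}."""
    names = ("direction_changes_soon", "paused", "pedestrians_nearby", "running")
    return [n for n in names if state.get(n, False)]

# Projection proceeds (step 702) if at least one condition is met.
if check_projection_conditions({"running": True}):
    pass  # determine the ground projection area from the target driving path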
In an embodiment of this application, based on the embodiment shown in Figure 13, when the projection condition is that the mobile robot is currently in the running state, acquiring the pattern to be projected in step 103 includes:
Step 801: Judge, according to the target driving path information, whether the pattern currently projected by the robot reflects the robot's driving intention.
The pattern currently projected by the robot is the pattern projected onto the ground at the current moment.
Step 802: If it does, use the pattern currently projected by the robot as the pattern to be projected.
Here the pattern to be projected is the pattern to be projected onto the ground at the moment following the current moment.
Step 803: If it does not, generate the pattern to be projected according to the robot's driving intention.
In some embodiments, different patterns to be projected are set for different driving intentions. When the driving intention changes, the pattern projected on the ground changes with it, i.e., the pattern at the next moment differs from that at the previous moment. For example, when the driving intention changes from "go straight ahead" to "turn left" or "turn right", the currently projected pattern (representing "go straight ahead") must be converted into a pattern representing "turn left" or "turn right".
By judging whether the currently projected pattern reflects the robot's driving intention and generating the pattern to be projected from the driving intention when it does not, this embodiment adjusts the projected pattern in real time according to the robot's driving intention, enabling pedestrians to grasp the intention accurately, improving the accuracy with which the robot conveys information to pedestrians, and further improving the interaction between the robot and pedestrians.
In an embodiment of this application, as shown in Figure 14, an interaction method for a mobile robot is provided, including the following steps:
Step 901: Acquire historical environment perception data collected by the environment perception sensor when the environment of the space where the robot is located meets a preset environment condition.
Step 902: Determine the spatial coordinate information of the space according to the historical environment perception data, create a map of the space according to the spatial coordinate information, and use the map as the map data information.
Step 903: Acquire the real-time distance information between obstacles and the robot collected by the radar device, and the real-time obstacle identification information, the road surface shape information of the surrounding road, and the real-time obstacle distribution information of the surrounding road collected by the camera device.
Step 904: Use the real-time obstacle identification information and the real-time distance information as the real-time obstacle information, and use the road surface shape information and the real-time obstacle distribution information as the real-time indication information.
Step 905: Determine the robot's real-time position and the obstacle positions according to the map data information and the real-time obstacle information.
Step 906: Acquire the robot's target end position; based on the real-time position and the obstacle positions, determine the shortest path information from the real-time position to the target end position, and use it as the target driving path information.
Step 907: Judge whether a preset projection condition is met according to the target driving path information and the real-time environment perception data; when it is met, determine the ground projection area according to the target driving path information and the real-time indication information.
The preset projection condition includes at least one of the following: the driving direction of the robot changes within a preset future time period; the driving state of the robot is paused; there are pedestrians around the robot; the robot is currently in the running state.
Step 908: Acquire the pattern to be projected.
When the preset projection condition is that the robot is currently in the running state, judge from the target driving path information whether the currently projected pattern reflects the robot's driving intention; if it does, use it as the pattern to be projected, and if not, generate the pattern to be projected from the driving intention.
Step 909: For each pixel in the pattern to be projected, determine the projection angle, projection time, and projection color corresponding to the pixel according to the ground projection area.
Step 910: Use the projection angle, projection time, and projection color corresponding to each pixel as the projection parameters of the laser projection device.
Step 911: Determine the rotation angle of the galvanometer for each pixel from the projection angle corresponding to the pixel, and determine the laser emission information of the visible-light lasers and the laser synthesis information of the lens for each pixel from the projection color corresponding to the pixel.
Step 912: Determine the projection order of the pixels from the projection time corresponding to each pixel.
Step 913: In the projection order of the pixels, adjust the laser projection device according to the rotation angle of the galvanometer, the laser emission information, and the laser synthesis information of the lens corresponding to each pixel, so as to project the pattern information onto the ground projection area.
In this embodiment, the pattern to be projected is projected onto the ground by the laser projection device so that pedestrians learn the robot's driving intention, improving the interaction between the robot and pedestrians and solving the technical problem of poor interaction caused by the noisy environment of the space where the robot is located. Moreover, the pattern projected on the road surface can be a colored pattern, which better captures pedestrians' attention and improves the interaction effect. In addition, projection conditions can be preset, improving the flexibility of the projection device, and the projected pattern can be adjusted to the actual scene, improving the accuracy with which the robot conveys information to pedestrians and further improving the interaction between the robot and pedestrians.
In an embodiment of this application, after the projection device is controlled according to the projection parameters to project the pattern to be projected onto the ground projection area in step 104, the interaction method for a mobile robot may further include:
Step 105: Project the pattern to be projected in real time during operation, and acquire the obstacle area present on the road surface during operation;
Step 106: Detect whether there is an overlapping area between the pattern to be projected and the obstacle area; when there is, adjust the pattern to be projected according to the overlapping area so that no overlapping area remains between the pattern to be projected and the obstacle area.
In this embodiment, the pattern to be projected may specifically be a traveling indication map. By determining the curve overlapping area between the pattern and the obstacle area and adjusting the pattern according to it, the pattern emitted by the robot is dynamically deformed according to the obstacle area, and the adjusted pattern does not overlap the obstacle area, realizing information interaction between the robot and obstacles and improving the efficiency and accuracy of information interaction between the robot and people.
In one embodiment, acquiring the obstacle area present on the road surface during operation includes:
collecting obstacle information in real time during operation, and mapping pixel information corresponding to the obstacle information into a preset projection map;
determining the minimum-area region from the projection region containing all of the pixel information, and recording the minimum-area region as the obstacle area.
In one embodiment, the pattern to be projected includes an initial pattern to be projected and different enlarged patterns to be projected generated at different times with different magnification ratios, and projecting the pattern in real time during operation includes:
projecting the initial pattern to be projected, and projecting the generated enlarged patterns to be projected arranged with the initial pattern at the different times.
In one embodiment, the pattern to be projected includes at least one of an initial pattern to be projected and an enlarged pattern to be projected, and projecting the pattern in real time during operation includes:
gradually enlarging the initial pattern according to a preset magnification ratio to form the enlarged pattern, and projecting at least one of the initial pattern and the enlarged pattern.
In one embodiment, gradually enlarging the initial pattern according to the preset magnification ratio to form the enlarged pattern and projecting at least one of the initial pattern and the enlarged pattern includes:
Step 107: Acquire the initial pattern to be projected.
Step 108: Gradually enlarge the initial pattern according to the preset magnification ratio to form the enlarged pattern to be projected.
Step 109: Display at least one of the initial pattern and the enlarged pattern in time sequence; the displayed at least one of the initial pattern and the enlarged pattern is the pattern to be projected.
In one embodiment, adjusting the pattern to be projected according to the overlapping area includes:
determining, within the overlapping area, the two curve intersection points between the overlapping pattern to be projected and the obstacle area, the overlapping pattern to be projected referring to the initial pattern or the enlarged pattern;
deleting the segment of the overlapping pattern lying between the two curve intersection points to obtain the two remaining curve segments of the overlapping pattern after deletion;
determining the bisector point corresponding to the connecting line between the two curve intersection points;
detecting the vertical distance between the bisector point and the boundary intersection point, and comparing the vertical distance with a preset distance threshold, the boundary intersection point being the intersection of the perpendicular bisector with the edge of the obstacle area and lying within the curve overlapping area;
when the vertical distance is less than or equal to the preset distance threshold, adjusting the pattern to be projected according to the two remaining curve segments, the curve intersection points, and the boundary intersection point.
In one embodiment, adjusting the pattern to be projected according to the overlapping area includes:
recording the initial pattern or enlarged pattern that overlaps the obstacle area as the overlapping pattern to be projected, the overlapping pattern including an overlapping region that overlaps the obstacle area and a remaining region that does not;
deleting the overlapping region of the overlapping pattern, or reducing the overlapping pattern by a preset ratio so that it is tangent to the edge of the obstacle area, to obtain the adjusted pattern to be projected.
In one embodiment, adjusting the pattern to be projected according to the two remaining curve segments, the curve intersection points, and the boundary intersection point to obtain the adjusted pattern includes:
connecting the two curve intersection points with the boundary intersection point in a preset connection manner to obtain connecting segments;
recording the pattern formed by connecting the two remaining curve segments and the connecting segments as the adjusted pattern to be projected.
In one embodiment, after comparing the vertical distance with the preset distance threshold, the method further includes:
stopping projection of the pattern to be projected when the vertical distance is greater than the preset distance threshold.
In one embodiment, after adjusting the pattern to be projected according to the overlapping area, the method further includes:
acquiring the current position information of the robot and determining the position distance between the robot and the obstacle area according to the current position information;
determining the color parameter of the adjusted pattern to be projected according to the position distance, and projecting the adjusted pattern according to the color parameter.
It should be understood that the sequence numbers of the steps in the above embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic and does not constitute any limitation on the implementation of the embodiments of this application.
It should also be understood that although the steps in the flowcharts of Figures 2, 5 to 8, 12 to 14, and 16 to 17 are shown in sequence as indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, the execution of these steps is not strictly limited in order, and they may be executed in other orders. Moreover, at least some of the steps in these figures may include multiple steps or stages that are not necessarily completed at the same moment but may be executed at different moments, and their execution order is not necessarily sequential; they may be executed in turn or alternately with other steps or with at least part of the steps or stages within other steps.
在本申请实施例中,如图15所示,提供了一种移动机器人的交互装置,该装置包括获取模块、路径模块、确定模块以及投影模块,具体的:
获取模块,用于获取移动机器人所处空间的地图数据信息以及环境感知传感器采集的实时环境感知数据,实时环境感知数据包括实时障碍物信息和用于指示移动机器人周围路况的实时指示信息;
路径模块,用于基于实时障碍物信息以及地图数据信息,获取移动机器人的目标行驶路径信息,并根据目标行驶路径信息以及实时指示信息,确定地面投影区域;
确定模块,用于获取待投影图案,根据待投影图案以及地面投影区域,确定待投影图案对应的投影参数,待投影图案信息用于指示移动机器人的行驶意图;
投影模块,用于根据投影参数控制投影装置以将待投影图案信息投影到地面投影区域。
在一个实施例中,该装置还包括地图模块,该地图模块具体用于:
获取环境感知传感器在移动机器人所处空间的环境满足预设环境条件的情况下采集到的历史环境感知数据;
根据历史环境感知数据,确定移动机器人所处空间的空间坐标信息,并根据空间坐标信息创建空间的地图;
将地图的数据信息作为地图数据信息。
在一个实施例中,环境感知传感器包括雷达装置以及相机装置,获取模块用于:
获取雷达装置采集到的障碍物与移动机器人的实时距离信息;
获取相机装置采集到的实时障碍物识别信息、移动机器人周围路面的路面形状信息以及移动机器人周围路面的实时障碍物分布信息;
将实时障碍物识别信息以及实时距离信息作为实时障碍物信息,将路面形状信息以及实时障碍物分布信息作为实时指示信息。
在一个实施例中,该路径模块用于:
根据地图数据信息以及实时障碍物信息,确定移动机器人的实时位置以及障碍物的位置;
获取移动机器人的目标终点位置,基于实时位置以及障碍物的位置,确定从实时位置到目标终点位置的最短路径信息,将最短路径信息作为移动机器人的目标行驶路径信息。
在一个实施例中,该确定模块用于:
针对待投影图案中的各像素点,根据地面投影区域,确定像素点对应的投射角度、像素点对应的投射时间以及像素点对应的投射颜色;
将各像素点对应的投射角度、各像素点对应的投射时间以及各像素点对应的投射颜色作为投影装置的投影参数。
在一个实施例中,投影装置包括振镜、可见光激光器以及透镜,该投影模块用于:
根据各像素点对应的投射角度确定各像素点对应的振镜的旋转角度,根据各像素点对应的投射颜色确定各像素点对应的可见光激光器的激光发射信息以及透镜的激光合成信息;
根据各像素点对应的投射时间,确定各像素点的投射顺序;
按照各像素点的投射顺序,根据各像素点对应的振镜的旋转角度、各像素点对应的激光发射信息以及各像素点对应的透镜的激光合成信息调整投影装置以将待投影图案信息投影到地面投影区域。
在一个实施例中,该路径模块还具体用于:
根据目标行驶路径信息以及实时环境感知数据,判断是否符合预设投影条件;
在判断结果为符合预设投影条件的情况下,根据目标行驶路径信息确定地面投影区域。
在一个实施例中,预设投影条件至少包括以下条件中的一种:
在未来预设时间段内移动机器人的行驶方向发生变化、移动机器人的行驶状态为暂停状态、移动机器人周围存在行人以及移动机器人当前处于运行状态。
在一个实施例中,确定模块具体用于:
在预设投影条件为移动机器人当前处于运行状态的情况下,根据目标行驶路径信息,判断移动机器人当前投影的图案是否能够反映移动机器人的行驶意图;
若能,则将移动机器人当前投影的图案作为待投影图案;
若否,则根据移动机器人的行驶意图生成待投影图案。
在一个实施例中,该确定模块还具体用于:
若实时障碍物信息指示移动机器人周围存在的障碍物为移动障碍物,则执行根据目标行驶路径信息,判断移动机器人当前投影的图案是否能够反映移动机器人的行驶意图的步骤。
在一个实施例中,所述移动机器人的交互装置还可以包括:
障碍物区域获取模块,用于在运行过程中实时投射待投影图案,并获取在运行过程中路面上存在的障碍物区域;
重叠区域检测模块,用于检测所述待投影图案与所述障碍物区域之间是否存在重叠区域,在所述待投影图案与所述障碍物区域之间存在重叠区域时,根据所述重叠区域调整所述待投影图案,以令所述待投影图案与所述障碍物区域之间不存在重叠区域。
在本申请一实施例中,提供了一种移动机器人,移动机器人包括投影装置、环境感知传感器和处理器;
环境感知传感器,用于采集实时环境感知数据,实时环境感知数据包括实时障碍物信息和用于指示移动机器人周围路况的实时指示信息;
处理器,用于获取移动机器人所处空间的地图数据信息以及实时环境感知数据,基于实时障碍物信息以及地图数据信息,获取移动机器人的目标行驶路径信息,并根据目标行驶路径信息以及实时指示信息,确定地面投影区域,获取待投影图案,根据待投影图案以及地面投影区域,确定待投影图案对应的投影参数,待投影图案用于指示移动机器人的行驶意图;根据投影参数控制投影装置以将待投影图案投影到地面投影区域;
投影装置,用于将待投影图案投影到地面投影区域。
在一个实施例中,处理器还用于:
获取环境感知传感器在移动机器人所处空间的环境满足预设环境条件的情况下采集到的历史环境感知数据;根据历史环境感知数据,确定移动机器人所处空间的空间坐标信息,并根据空间坐标信息创建空间的地图;将地图的数据信息作为地图数据信息。
在一个实施例中,该环境感知传感器包括雷达装置以及相机装置;
雷达装置,用于采集障碍物与移动机器人的实时距离信息;
相机装置,用于采集实时障碍物识别信息、移动机器人周围路面的路面形状信息以及移动机器人周围路面的实时障碍物分布信息;
处理器,用于获取实时距离信息以及实时障碍物识别信息,并将实时距离信息以及实时障碍物识别信息作为实时障碍物信息;获取路面形状信息以及实时障碍物分布信息并将路面形状信息以及实时障碍物分布信息作为实时指示信息。
在一个实施例中,该处理器用于:
根据地图数据信息以及实时障碍物信息,确定移动机器人的实时位置以及障碍物的位置;获取移动机器人的目标终点位置,基于实时位置以及障碍物的位置,确定从实时位置到目标终点位置的最短路径信息,将最短路径信息作为移动机器人的目标行驶路径信息。
在一个实施例中,该处理器用于:
针对待投影图案中的各像素点,根据地面投影区域,确定像素点对应的投射角度、像素点对应的投射时间以及像素点对应的投射颜色;将各像素点对应的投射角度、各像素点对应的投射时间以及各像素点对应的投射颜色作为投影装置的投影参数。
在一个实施例中,该投影装置包括振镜、可见光激光器以及透镜,该处理器用于:
根据各像素点对应的投射角度确定各像素点对应的振镜的旋转角度,根据各像素点对应的投射颜色确定各像素点对应的可见光激光器的激光发射信息以及透镜的激光合成信息;根据各像素点对应的投射时间,确定各像素点的投射顺序;按照各像素点的投射顺序,根据各像素点对应的振镜的旋转角度、各像素点对应的激光发射信息以及各像素点对应的透镜的激光合成信息调整投影装置以将待投影图案信息投影到地面投影区域;
该投影装置,用于按照各像素点的投射顺序,根据各像素点对应的振镜的旋转角度、各像素点对应的激光发射信息、各像素点对应的透镜的激光合成信息将各像素点投影到地面投影区域。
在一个实施例中,处理器还用于:
根据目标行驶路径信息以及实时环境感知数据,判断是否符合预设投影条件,预设投影条件至少包括以下条件中的一种:在未来预设时间段内移动机器人的行驶方向发生变化、移动机器人的行驶状 态为暂停状态、移动机器人周围存在行人以及移动机器人当前处于运行状态;在判断结果为符合预设投影条件的情况下,根据目标行驶路径信息确定地面投影区域。
在一个实施例中,该处理器还用于:
在预设投影条件为移动机器人当前处于运行状态的情况下,根据目标行驶路径信息,判断移动机器人当前投影的图案是否能够反映移动机器人的行驶意图;若能,则将移动机器人当前投影的图案作为待投影图案;若否,则根据移动机器人的行驶意图生成待投影图案。
在一个实施例中,该处理器还具体用于:
若实时障碍物信息指示移动机器人周围存在的障碍物为移动障碍物,则执行根据目标行驶路径信息,判断移动机器人当前投影的图案是否能够反映移动机器人的行驶意图的步骤。
在一实施例中,结合图21所示,所述移动机器人还包括存储器,所述存储器中存储有可在所述处理器上运行的计算机可读指令,所述处理器用于执行所述计算机可读指令时实现如下步骤。
步骤105:在运行过程中实时投射待投影图案,并获取在运行过程中路面上存在的障碍物区域。
可以理解地,待投影图案为表征机器人行进意图的待投影图案,该待投影图案可以为曲线待投影图案、直线待投影图案、图像待投影图案等;该待投影图案可以通过如设置在机器人上的激光装置进行实时投射,例如投射在机器人前方的行进道路的路面上,亦或者在机器人前方的行进道路上的设备上。在一个实施例中,待投影图案为在机器人的前行方向上预设一定数量的点,并采用曲线或者直线将该一定数量的点连接起来形成一个连贯的曲线图。在另一实施例中,待投影图案为通过贝塞尔曲线将预设数量的曲线节点连接得到的曲线。其中,预设数量可以根据具体需求进行设定,示例性地,预设数量可以设定为5个、7个、9个、10个等。运行过程可以包括:机器人运动的过程、机器人在运动过程中由于遇到障碍物而停止运动的等待过程、机器人启动后固定在某个地点没有运动的过程等。在一些实施例中,该待投影图案具体可以为行进指示图。
障碍物区域为包括机器人行进过程中检测到的障碍物信息的区域;其中,障碍物信息包括静态障碍物信息和动态障碍物信息;其中,静态障碍物信息是指静态障碍物(例如送餐机器人场景下的桌子,椅子,储物柜等不可自行移动的障碍物)的位置信息;动态障碍物信息是指动态障碍物(如行人,其它机器人等可自行移动的物体)的位置信息。
在一实施例中,步骤105中,也即所述获取在运行过程中路面上存在的障碍物区域,包括:
在运行过程中实时采集障碍物信息,并在预设投影图中映射出与所述障碍物信息对应的像素信息。
在一具体实施方式中,在机器人运行过程中,可以通过设置在机器人上的障碍物检测装置对静态障碍物以及动态障碍物进行探测,进而获取各静态障碍物以及动态障碍物的实时位置信息,也即障碍物信息。其中,障碍物检测装置可以为激光雷达传感器,RGBD(RGB Depth,RGB深度)相机或者超声波传感器等。
在一个实施例中,在将障碍物信息设置于预设投影图上时,需要将各障碍物信息在预设投影图中映射成像素信息,也即一个障碍物信息对应于一个像素信息。其中,预设投影图可以在设置在机器人上的投影展示界面中展示,在该预设投影图中可以通过像素信息表征各障碍物信息,且在机器人采集到障碍物信息时,会同步更新至该预设投影图中。其中,投影展示界面为设置在机器人的前端亦或者后端的显示屏,该显示屏可以为触摸屏或者点阵屏等,如此即可在该投影展示界面上展示预设投影图以及障碍物信息。
从包含所有所述像素信息的投影区域中确定最小面积区域,并将所述最小面积区域记录为所述障碍物区域。
可以理解地,在机器人与障碍物之间的交互过程中,需要涵盖到所有的障碍物信息,因此在预设投影图中映射出与障碍物信息对应的像素信息之后,可以通过划分一个预设形状且包含所有像素信息的最小面积区域,并将该最小面积区域记录为与障碍物信息对应的障碍物区域。可选地,预设形状可以设定为椭圆形、圆形、方形、不规则图形等形状。在本实施例中,将预设形状设定为圆形,最小面积区域即为包含所有像素信息且圆形面积区域最小的区域。若区域面积设定过大,则会造成数据的冗余,以及在机器人未接近障碍物之前提前做出与障碍物的交互,降低了机器人交互的准确性。
在一实施例中,所述待投影图案包括初始待投影图案及按照不同时刻以不同放大比例生成的不同 的放大待投影图案,所述在运行过程中实时投射待投影图案,包括:
投射所述初始待投影图案及按照所述不同时刻与所述初始待投影图案排列投射已生成的放大待投影图案。
可以理解地,本实施例中的放大待投影图案是基于初始待投影图案按照不同时刻以不同放大比例生成的不同的放大待投影图案,该放大待投影图案的数量可以为两个、三个等,在此不进行限定。需要说明的是,假设在对初始待投影图案进行放大一倍得到第一个放大待投影图案之后,即使第二个放大待投影图案若是基于第一个放大待投影图案进行放大一倍处理,其本质是基于初始待投影图案进行两倍放大处理得到的。因此,在获取到初始待投影图案以及按照不同时刻以不同放大比例生成的不同的放大待投影图案之后,投射初始待投影图案以及按照不同时刻与初始待投影图案排列投射已生成的放大待投影图案。其中,放大比例可以根据具体放大需求进行选择。
In yet another embodiment, the pattern to be projected includes at least one of an initial pattern to be projected and an enlarged pattern to be projected, the enlarged pattern being formed by enlarging the initial pattern at a preset magnification ratio. As shown in FIG. 17, projecting the pattern to be projected in real time during operation specifically includes:
progressively enlarging the initial pattern at the preset magnification ratio to form the enlarged pattern, and projecting at least one of the initial pattern and the enlarged pattern in real time during operation.
Progressively enlarging the initial pattern at the preset magnification ratio to form the enlarged pattern and projecting at least one of the initial pattern and the enlarged pattern includes:
Step 107: acquire the initial pattern to be projected.
It can be understood that the pattern to be projected includes at least one of the initial pattern and the enlarged pattern; if the initial pattern does not appear at the current moment, it will appear at some later moment. The initial pattern is stored in the memory of the robot.
Step 108: enlarge the initial pattern at the preset magnification ratio to form the enlarged pattern.
It can be understood that the preset magnification ratio may be set according to the specific enlargement requirement; it may be a fixed value or a variable value. Note that in this embodiment the enlargement has a boundary: after the initial pattern has been progressively enlarged a certain number of times (for example three, four or five), the enlargement stops.
When the first enlarged pattern is produced from the initial pattern and the second enlarged pattern is produced from the first, the preset magnification ratio is a fixed value, for example 20%, 30%, 40% or 50%. Illustratively, with a ratio of 20%: at the current moment the initial pattern is enlarged by 20% to obtain the first enlarged pattern, and at the next moment that first enlarged pattern is enlarged by a further 20% to obtain the second enlarged pattern.
Further, when every enlarged pattern is produced directly from the initial pattern, the preset magnification ratio is a variable value, for example 10% (the first enlarged pattern relative to the initial pattern), 15% (the second), 20% (the third) and 25% (the fourth).
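The two ratio schemes can be made concrete with a small sketch that scales a polyline pattern about an anchor point (the anchor choice, here the robot's position at the origin, is an assumption): chained enlargement compounds a fixed ratio, while direct enlargement applies a per-step ratio to the initial pattern.

```python
def scale_pattern(points, ratio, anchor=(0.0, 0.0)):
    """Enlarge a polyline pattern by `ratio` (0.2 == +20%) about `anchor`."""
    ax, ay = anchor
    s = 1.0 + ratio
    return [(ax + (x - ax) * s, ay + (y - ay) * s) for x, y in points]

base = [(0.0, 0.0), (0.5, 0.1), (1.0, 0.3)]
# scheme 1: chained, fixed 20% per step -> compounded 1.2, 1.44, 1.728x
chained = [base]
for _ in range(3):
    chained.append(scale_pattern(chained[-1], 0.20))
# scheme 2: each enlarged pattern taken directly from the initial pattern
direct = [scale_pattern(base, r) for r in (0.10, 0.15, 0.20, 0.25)]
```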
Step 109: display at least one of the initial pattern to be projected and the enlarged pattern to be projected in time sequence.
It can be understood that there may be one, two or more enlarged patterns. With one enlarged pattern, displaying at least one of the initial and enlarged patterns in time sequence (i.e., projecting in time order) may mean showing only the initial pattern or only the enlarged pattern at a given moment, or showing both at every moment. With two enlarged patterns, it may mean showing only the initial pattern at one moment, one enlarged pattern at the next, and the other enlarged pattern at the moment after that, cycling in that order; or showing the initial pattern at one moment, the initial pattern plus one enlarged pattern at the next, and the initial pattern plus both enlarged patterns at the moment after that, cycling in that order.
There are many ways to display at least one of the initial pattern and the enlarged patterns in time sequence; several examples are listed below, but the present application is not limited to them. The description takes three progressive enlargements as an example: the pattern after the first enlargement is the first enlarged pattern, after the second the second enlarged pattern, and after the third the third enlarged pattern. (A sketch of the four schedules follows the examples.)
Dynamic display example 1: at the first moment display the initial pattern, at the second the first enlarged pattern, at the third the second enlarged pattern, and at the fourth the third enlarged pattern. From the fifth moment on, cycle through the patterns of those four moments, until the pattern deforms on meeting an obstacle or on a change in the robot's direction of motion.
Dynamic display example 2: the first four moments are as in example 1; from the fifth moment on, keep displaying the enlarged pattern of the fourth moment, until the pattern deforms on meeting an obstacle or on a change in the robot's direction of motion.
Dynamic display example 3: at the first moment display the initial pattern; at the second the initial pattern and the first enlarged pattern; at the third the initial pattern, the first and the second enlarged patterns; at the fourth the initial pattern and all three enlarged patterns. No order is imposed within a moment: the initial pattern may be shown before or after the enlarged patterns. From the fifth moment on, cycle through the displays of those four moments, until the pattern deforms on meeting an obstacle or on a change in the robot's direction of motion.
Dynamic display example 4: the first four moments are as in example 3; from the fifth moment on, keep displaying the initial pattern and all the enlarged patterns of the fourth moment, until the pattern deforms on meeting an obstacle or on a change in the robot's direction of motion.
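Examples 1 to 4 are all periodic schedules over the same four frames, so a small scheduler captures them; the frame encoding below is an assumption for illustration.

```python
FRAMES = ["initial", "enlarged_1", "enlarged_2", "enlarged_3"]

def frames_to_show(t, mode):
    """Which patterns to project at discrete time step t (0-based) under the
    four dynamic display examples. Returns a list of frame names."""
    if mode == 1:   # example 1: cycle one frame at a time
        return [FRAMES[t % 4]]
    if mode == 2:   # example 2: run through once, then hold the largest frame
        return [FRAMES[min(t, 3)]]
    if mode == 3:   # example 3: cycle cumulative sets {0}, {0,1}, {0,1,2}, {0..3}
        return FRAMES[: (t % 4) + 1]
    if mode == 4:   # example 4: grow the set once, then hold the full set
        return FRAMES[: min(t, 3) + 1]
    raise ValueError("mode must be 1..4")
```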
Step 106: detect whether an overlap region exists between the pattern to be projected and the obstacle region; when an overlap region exists, adjust the pattern to be projected according to the overlap region so that no overlap region remains between the pattern to be projected and the obstacle region.
It can be understood that, as stated above, the pattern to be projected includes the initial pattern and the enlarged patterns; therefore, when a projected initial pattern or enlarged pattern overlaps the obstacle region, it is determined that an overlap region exists between the pattern to be projected and the obstacle region.
For example, when the robot is still far from the obstacle region, the initial pattern may not overlap it while an enlarged pattern does; where an enlarged pattern and the obstacle region intersect, the intersecting area is the overlap region. When the robot is close to the obstacle region, the initial pattern itself may already overlap it. Hence in this embodiment, as soon as either the initial pattern or an enlarged pattern is detected to overlap the obstacle region, an overlap region between the pattern to be projected and the obstacle region is determined to exist.
Illustratively, as shown in FIG. 18, B1 is the initial pattern, B2 the pattern after the first enlargement, and B3 the pattern after the second enlargement; here neither the initial pattern nor the enlarged patterns overlap the obstacle region, so it is determined that no overlap region exists between the pattern to be projected and the obstacle region.
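With a circular obstacle region, overlap detection reduces to asking whether any segment of the projected polyline passes within the circle's radius of its centre. A sketch:

```python
import math

def segment_circle_overlap(p, q, centre, radius):
    """True if segment p-q intersects (or lies inside) the circle."""
    px, py = p
    qx, qy = q
    cx, cy = centre
    dx, dy = qx - px, qy - py
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0:                       # degenerate segment: a point
        return math.hypot(px - cx, py - cy) <= radius
    # parameter of the closest point on the segment to the circle centre
    t = max(0.0, min(1.0, ((cx - px) * dx + (cy - py) * dy) / seg_len2))
    return math.hypot(px + t * dx - cx, py + t * dy - cy) <= radius

def pattern_overlaps(polyline, centre, radius):
    """True if any segment of the polyline pattern overlaps the region."""
    return any(segment_circle_overlap(a, b, centre, radius)
               for a, b in zip(polyline, polyline[1:]))
```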
In some embodiments, the initial pattern to be projected may specifically be an initial indication graph, and the enlarged pattern to be projected an enlarged indication graph.
It can be understood that, the obstacle region having been mapped into the preset projection map as described above, the pattern that the robot must project in real time during travel can be mapped into the same map: the robot's current real-time position and the position information of the obstacle region are both mapped into the preset projection map, which can then simulate whether the pattern projected from the robot's current position overlaps the obstacle region. The robot's current real-time position and the true position of the obstacle region may be shown in the map directly, or after mapping at some scale; no limitation is imposed here.
It can be understood that when an overlap region exists between the pattern to be projected and the obstacle region, this embodiment adjusts the pattern according to the overlap region so that no overlap remains, thereby realizing the interaction between the robot and people.
In one embodiment, adjusting the pattern to be projected according to the overlap region in step 106 includes:
determining, in the overlap region, the two curve intersection points of the overlapping pattern and the obstacle region, i.e., the intersection points of the curve of the overlapping pattern with the curve of the obstacle region; the overlapping pattern is the initial pattern or the enlarged pattern. "Curve" is used here as a collective term for straight and non-straight lines; a non-straight line may for example be a wavy line or an arced line. The initial pattern may consist of straight segments, of non-straight segments, or of a combination of the two.
It can be understood that in some embodiments the overlapping pattern may specifically be an overlapping indication graph. As stated above, the obstacle region of the present application is circular, i.e., the minimum-area circular region containing all the obstacle information (A1 in FIG. 19). When the overlapping pattern (L5 in FIG. 19) and the obstacle region share an overlap region, the two curve intersection points of the overlapping pattern and the obstacle region are determined in that overlap region (intersection points 1 and 2 in FIG. 19), i.e., the two points where the overlapping pattern crosses the boundary of the circular obstacle region.
Delete the segment of the overlapping pattern lying between the two curve intersection points, obtaining the two remaining curve segments of the overlapping pattern after deletion.
It can be understood that, the pattern to be projected being built from a preset number of curve nodes as stated above, after the two curve intersection points are determined in the overlap region, the part of the overlapping pattern between them is deleted (the dashed segment of the overlapping pattern L5 inside the obstacle region A1 in FIG. 19); that is, all curve nodes of the overlapping pattern lying between the two intersection points (all nodes on that dashed segment) are deleted, yielding one remaining curve segment ending at one intersection point (L1 in FIG. 19) and another remaining curve segment starting at the other intersection point (L2 in FIG. 19).
Determine the perpendicular-bisector intersection point corresponding to the line connecting the two curve intersection points.
Illustratively, L3 in FIG. 19 is the line connecting the two curve intersection points, i.e., the segment between them; L4 in FIG. 19 is the perpendicular bisector corresponding to that line; and the perpendicular-bisector intersection point (point 4 in FIG. 19) is where the perpendicular bisector crosses the connecting line, i.e., its midpoint.
Detect the vertical distance between the perpendicular-bisector intersection point and a boundary intersection point, and compare the vertical distance with a preset distance threshold; the boundary intersection point is the intersection of the perpendicular bisector with the edge of the obstacle region, and it lies within the curve overlap region.
The boundary intersection point (point 3 in FIG. 19) is where the perpendicular bisector meets the edge of the obstacle region, lying within the overlap region (A2 in FIG. 19). The preset distance threshold may be set according to the desired visual effect of the pattern projected in real time; for example, it may be set to the radius of the obstacle region (the obstacle region being circular, as stated above).
When the vertical distance is less than or equal to the preset distance threshold, adjust the pattern to be projected according to the two remaining curve segments, the curve intersection points and the boundary intersection point.
Specifically, after the vertical distance has been compared with the preset distance threshold: if it is less than or equal to the threshold, the two remaining curve segments are joined through the two curve intersection points and the boundary intersection point, the connections between the curve intersection points and the boundary intersection point likewise being curves (whose arc may be determined from the obstacle region, so that these connecting curves do not overlap the obstacle region), yielding the adjusted pattern (L6 in FIG. 19); if the vertical distance is greater than the threshold, projection of the pattern is stopped.
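Putting the geometry of this adjustment together for a circular obstacle region: find the two curve intersection points, the chord midpoint (the perpendicular-bisector intersection point), the boundary intersection point, apply the distance test, and detour along the region's edge. The sketch below handles a single straight overlapping segment; the arc construction is one plausible reading of "connected as a curve whose arc is determined from the obstacle region", and the threshold is left as a parameter (the text suggests the region radius as one choice).

```python
import math

def adjust_segment(p, q, centre, radius, dist_threshold, n_arc=8):
    """Reroute segment p-q around the circular obstacle region (centre, radius).

    Returns the adjusted polyline, or None when the vertical distance between
    the chord midpoint and the boundary intersection point exceeds the
    threshold (the text then stops projecting). Assumes p != q.
    """
    cx, cy = centre
    dx, dy = q[0] - p[0], q[1] - p[1]
    # line-circle intersection: the two curve intersection points
    a = dx * dx + dy * dy
    b = 2 * (dx * (p[0] - cx) + dy * (p[1] - cy))
    c = (p[0] - cx) ** 2 + (p[1] - cy) ** 2 - radius ** 2
    disc = b * b - 4 * a * c
    if disc <= 0:
        return [p, q]                            # no overlap: keep as-is
    t1, t2 = sorted(((-b - math.sqrt(disc)) / (2 * a),
                     (-b + math.sqrt(disc)) / (2 * a)))
    i1 = (p[0] + t1 * dx, p[1] + t1 * dy)
    i2 = (p[0] + t2 * dx, p[1] + t2 * dy)
    mid = ((i1[0] + i2[0]) / 2, (i1[1] + i2[1]) / 2)   # bisector meets chord
    vx, vy = mid[0] - cx, mid[1] - cy
    norm = math.hypot(vx, vy) or 1.0                   # guard: chord through centre
    boundary = (cx + vx / norm * radius, cy + vy / norm * radius)
    if math.dist(mid, boundary) > dist_threshold:
        return None                                    # too deep: stop projecting
    # detour from i1 to i2 along the shorter arc of the region's edge
    a1 = math.atan2(i1[1] - cy, i1[0] - cx)
    a2 = math.atan2(i2[1] - cy, i2[0] - cx)
    delta = (a2 - a1 + math.pi) % (2 * math.pi) - math.pi
    arc = [(cx + radius * math.cos(a1 + delta * k / n_arc),
            cy + radius * math.sin(a1 + delta * k / n_arc))
           for k in range(n_arc + 1)]
    return [p] + arc + [q]
```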
In yet another embodiment, adjusting the pattern to be projected according to the overlap region in step 106 includes: recording the initial pattern or enlarged pattern that shares an overlap region with the obstacle region as the overlapping pattern, the overlapping pattern comprising the overlap region that overlaps the obstacle region and a remaining region that does not;
deleting the overlap region of the overlapping pattern, or shrinking the overlapping pattern at a preset ratio until it is tangent to the edge of the obstacle region, to obtain the adjusted pattern.
In one embodiment, the overlap region of the overlapping pattern is deleted to obtain the adjusted pattern. For example, the projection shows only the initial pattern at the first moment, only the first enlarged pattern at the second, the second enlarged pattern at the third, and keeps showing the second enlarged pattern from the fourth moment on. On meeting an obstacle, the second enlarged pattern becomes the overlapping pattern; its overlap region is deleted to obtain the adjusted pattern, which is displayed at the subsequent moments.
In one embodiment, the overlapping pattern is shrunk at a preset ratio until it is tangent to the edge of the obstacle region, obtaining the adjusted pattern. For example, the projection shows only the initial pattern at the first moment, only the first enlarged pattern at the second, the second enlarged pattern at the third, and cycles in that order thereafter. On meeting an obstacle, the second enlarged pattern, the first enlarged pattern or the initial pattern may each become the overlapping pattern; the overlapping pattern is shrunk at the preset ratio until it is tangent to the edge of the obstacle region. This shrink ratio is a variable: it differs for the overlapping patterns of different moments, and is computed from the degree of overlap between the overlapping pattern and the obstacle region.
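One way to realize the shrink-to-tangent variant, as a sketch: scale the overlapping polyline about the robot's position and search for the largest scale at which its closest approach to the obstacle circle still equals the radius. The bisection assumes the pattern extends from the robot toward the obstacle, so that shrinking monotonically increases clearance; names and defaults are illustrative.

```python
import math

def min_dist_to_centre(polyline, centre):
    """Closest distance from any segment of `polyline` to `centre`."""
    cx, cy = centre
    best = float("inf")
    for (px, py), (qx, qy) in zip(polyline, polyline[1:]):
        dx, dy = qx - px, qy - py
        l2 = dx * dx + dy * dy or 1e-12          # guard zero-length segments
        t = max(0.0, min(1.0, ((cx - px) * dx + (cy - py) * dy) / l2))
        best = min(best, math.hypot(px + t * dx - cx, py + t * dy - cy))
    return best

def shrink_to_tangent(polyline, centre, radius, origin=(0.0, 0.0), iters=40):
    """Bisection on scale s in (0, 1] so the scaled pattern becomes tangent
    to the obstacle circle (closest approach == radius)."""
    scale = lambda s: [(origin[0] + (x - origin[0]) * s,
                        origin[1] + (y - origin[1]) * s) for x, y in polyline]
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if min_dist_to_centre(scale(mid), centre) >= radius:
            lo = mid        # still clear of the region: can afford this scale
        else:
            hi = mid        # overlaps: shrink further
    return scale(lo)
```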
In one embodiment, FIG. 20 shows the mobile robot's projection overlapping an obstacle region during motion. The obstacle region A1 lies ahead in the robot's direction of travel. As stated above, the pattern projected in real time during operation includes the initial pattern and the enlarged patterns obtained by progressively enlarging it, displayed dynamically during operation; as soon as the initial pattern or any enlarged pattern overlaps the obstacle region, the pattern projected in real time is determined to overlap the obstacle region. In FIG. 20, the enlarged pattern B4 and all patterns projected before it (earlier enlarged patterns and the initial pattern) have no overlap region with the obstacle region A1, while the enlarged pattern B5 following B4 does overlap A1; it is therefore determined that the pattern projected in real time during operation overlaps the obstacle region. Accordingly, B5 is adjusted according to its overlap region with A1 so that the adjusted B5 no longer overlaps A1.
In one embodiment, after step 106, i.e., after the pattern to be projected has been adjusted according to the overlap region, the processor, when executing the computer-readable instructions, further implements the following steps:
acquiring current position information of the robot, and determining, according to the current position information, the positional distance between the robot and the obstacle region;
It can be understood that the current position information is the robot's current real-time position; after it is acquired in real time, the positional distance between the robot and the obstacle region is determined from it.
determining a color parameter of the adjusted pattern according to the positional distance, and projecting the adjusted pattern according to the color parameter.
It can be understood that the color parameter may include the kind of color and its depth. When the robot is far from the obstacle, the pattern may be displayed in a light color; as the distance between robot and obstacle shrinks, the displayed color deepens. For example, when the robot is far from the obstacle region (say 1 km away), the color parameter may select a pale light-blue laser beam; when the robot is near the obstacle region (say 100 m away), a deep dark-red laser beam.
Specifically, after the robot's current position information is acquired and the positional distance to the obstacle region is determined from it, the color parameter of the adjusted pattern is determined according to that distance, and the adjusted pattern is projected with that color parameter, i.e., projected onto the ground, realizing the interaction with people.
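As a sketch of the distance-to-color mapping: linearly blend from a light blue far away to a deep red up close. The breakpoints follow the 1 km / 100 m example above; the RGB endpoints are assumptions.

```python
def color_for_distance(dist_m, far_m=1000.0, near_m=100.0,
                       far_rgb=(173, 216, 230),   # light blue
                       near_rgb=(139, 0, 0)):     # deep red
    """Blend far_rgb -> near_rgb as the robot closes on the obstacle region."""
    # clamp, then normalise: 0.0 at/beyond far_m, 1.0 at/inside near_m
    w = (far_m - min(max(dist_m, near_m), far_m)) / (far_m - near_m)
    return tuple(round(f + (n - f) * w) for f, n in zip(far_rgb, near_rgb))

assert color_for_distance(1000) == (173, 216, 230)
assert color_for_distance(100) == (139, 0, 0)
```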
In this embodiment, by determining the curve overlap region between the pattern to be projected and the obstacle region and adjusting the pattern according to it, the pattern emitted by the robot deforms dynamically with different obstacle regions, and the adjusted pattern no longer overlaps the obstacle region; this realizes the information interaction between the robot and the obstacle and improves the efficiency and accuracy of the information interaction between the robot and people.
In one embodiment, after it is determined, when the initial pattern or the enlarged pattern overlaps the obstacle region, that an overlap region exists between the pattern to be projected and the obstacle region, the processor, when executing the computer-readable instructions, further implements the following steps:
acquiring overlap position information of the robot at the moment the initial pattern or the enlarged pattern overlaps the obstacle region, and determining, according to the overlap position information, the positional distance between the robot and the obstacle region;
updating the overlap region according to the positional distance.
It can be understood that the overlap position information is the robot's real-time position at the moment the initial or enlarged pattern overlaps the obstacle region. As stated above, the pattern projected in real time is progressively enlarged from the initial pattern, i.e., it comprises the initial pattern and the enlarged patterns, projected cyclically in real time, with a certain interval between successive patterns (initial and enlarged, or different enlarged patterns); hence, at different robot positions, the part of the projected pattern that overlaps the obstacle region may differ, and the overlap region must be updated for each position.
Thus, when the initial or an enlarged pattern overlaps the obstacle region, the robot's current position (the overlap position information) is acquired and the positional distance to the obstacle region is determined from it. The overlap region differs at different robot positions: for the same pattern (initial or enlarged), the overlap region may grow as the robot closes on the obstacle region. The overlap region can therefore be updated in real time and the pattern adjusted according to the updated overlap region, making the interaction between the robot and people more flexible and more accurate.
In one embodiment, after it is determined, when the initial pattern or the enlarged pattern overlaps the obstacle region, that an overlap region exists between the pattern to be projected and the obstacle region, the processor, when executing the computer-readable instructions, further implements the following steps:
acquiring the magnification factor of the enlarged pattern that shares the overlap region with the obstacle region;
determining the size of the overlap region according to the magnification factor.
It can be understood that the magnification factor is the factor of the enlarged pattern relative to the initial pattern. Illustratively, with a preset magnification ratio of 20%, the first enlarged pattern's factor relative to the initial pattern is 20% and the second's is 40%. Since different magnification factors yield different overlap regions with the obstacle region, this embodiment determines the size of the overlap region from the magnification factor; when a different pattern (initial or enlarged, as above) overlaps the obstacle region, the size of the overlap region is adjusted according to that factor. The overlap region can thus be updated in real time and the pattern adjusted according to it, again making the interaction between the robot and people more flexible and more accurate.
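A sketch of the bookkeeping: under the direct-enlargement scheme the k-th enlarged pattern's factor is k times the preset ratio, and the size of the overlap region can be re-estimated from the scaled pattern by sampling. The sampling density and the length-fraction measure are assumptions.

```python
import math

def magnification(k, preset_ratio=0.20):
    """Factor of the k-th enlarged pattern relative to the initial pattern
    under the direct-enlargement scheme (k = 1 -> 20%, k = 2 -> 40%, ...)."""
    return k * preset_ratio

def overlap_fraction(polyline, k, centre, radius, samples_per_seg=50):
    """Rough size of the overlap region: fraction of the k-th enlarged
    pattern's sampled points that fall inside the obstacle circle."""
    s = 1.0 + magnification(k)
    inside = total = 0
    for (px, py), (qx, qy) in zip(polyline, polyline[1:]):
        for i in range(samples_per_seg):
            t = i / (samples_per_seg - 1)
            x, y = (px + (qx - px) * t) * s, (py + (qy - py) * t) * s
            inside += math.hypot(x - centre[0], y - centre[1]) <= radius
            total += 1
    return inside / total if total else 0.0
```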
In one embodiment, a computer-readable storage medium is provided, storing a computer program which, when executed by a processor, implements the steps of the above method embodiments.
A person of ordinary skill in the art will understand that all or part of the flows of the above method embodiments can be completed by a computer program instructing the relevant hardware. The computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the flows of the above method embodiments. Any reference to memory, storage, a database or other media used in the embodiments provided by the present application may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory and the like. Volatile memory may include random access memory (RAM) or an external cache. By way of illustration and not limitation, RAM may take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features of the above embodiments are described; nevertheless, as long as a combination of these technical features contains no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is comparatively specific and detailed, but they are not to be construed as limiting the scope of the invention patent. It should be pointed out that a person of ordinary skill in the art may make various modifications and improvements without departing from the concept of the present application, all of which fall within its scope of protection. The scope of protection of this patent shall therefore be determined by the appended claims.

Claims (25)

  1. An interaction method for a mobile robot, the mobile robot being provided with a projection apparatus and an environment perception sensor, the method comprising:
    acquiring map data information of the space in which the mobile robot is located, and acquiring real-time environment perception data collected by the environment perception sensor, the real-time environment perception data comprising real-time obstacle information and real-time indication information indicating the road conditions around the mobile robot;
    acquiring target travel path information of the mobile robot based on the real-time obstacle information and the map data information, and determining a ground projection region according to the target travel path information and the real-time indication information;
    acquiring a pattern to be projected, and determining projection parameters corresponding to the pattern to be projected according to the pattern to be projected and the ground projection region, the pattern to be projected indicating the travel intention of the mobile robot;
    controlling the projection apparatus according to the projection parameters so as to project the pattern to be projected onto the ground projection region.
  2. The interaction method according to claim 1, wherein before the acquiring of the map data information of the space in which the mobile robot is located and of the real-time environment perception data collected by the environment perception sensor, the method further comprises:
    acquiring historical environment perception data collected by the environment perception sensor while the environment of the space in which the mobile robot is located satisfies a preset environment condition;
    determining spatial coordinate information of the space in which the mobile robot is located according to the historical environment perception data, and creating a map of the space according to the spatial coordinate information;
    taking the data information of the map as the map data information.
  3. The interaction method according to claim 2, wherein the environment perception sensor comprises a radar apparatus and a camera apparatus, and the acquiring of the real-time environment perception data collected by the environment perception sensor comprises:
    acquiring real-time distance information between obstacles and the mobile robot collected by the radar apparatus;
    acquiring real-time obstacle identification information, road-surface shape information of the road surface around the mobile robot, and real-time obstacle distribution information of the road surface around the mobile robot, all collected by the camera apparatus;
    taking the real-time obstacle identification information and the real-time distance information as the real-time obstacle information, and taking the road-surface shape information and the real-time obstacle distribution information as the real-time indication information.
  4. The interaction method according to claim 1, wherein the acquiring of the target travel path information of the mobile robot based on the real-time obstacle information and the map data information comprises:
    determining the real-time position of the mobile robot and the positions of obstacles according to the map data information and the real-time obstacle information;
    acquiring a target end position of the mobile robot, determining, based on the real-time position and the positions of the obstacles, shortest-path information from the real-time position to the target end position, and taking the shortest-path information as the target travel path information of the mobile robot.
  5. The interaction method according to claim 1, wherein the determining of the projection parameters corresponding to the pattern to be projected according to the pattern to be projected and the ground projection region comprises:
    for each pixel point in the pattern to be projected, determining, according to the ground projection region, a projection angle, a projection time and a projection color corresponding to the pixel point;
    taking the projection angles, projection times and projection colors corresponding to the respective pixel points as the projection parameters corresponding to the pattern to be projected.
  6. The interaction method according to claim 5, wherein the projection apparatus comprises a galvanometer, a visible-light laser and a lens, and the controlling of the projection apparatus according to the projection parameters so as to project the pattern to be projected onto the ground projection region comprises:
    determining the rotation angle of the galvanometer for each pixel point according to the projection angle corresponding to that pixel point, and determining the laser emission information of the visible-light laser and the laser synthesis information of the lens for each pixel point according to the projection color corresponding to that pixel point;
    determining the projection order of the pixel points according to their corresponding projection times;
    adjusting the projection apparatus, in the projection order of the pixel points and according to each pixel point's galvanometer rotation angle, laser emission information and lens laser synthesis information, so as to project the pattern to be projected onto the ground projection region.
  7. The interaction method according to claim 1, wherein before the determining of the ground projection region according to the target travel path information and the real-time indication information, the interaction method further comprises:
    judging, according to the target travel path information and the real-time environment perception data, whether a preset projection condition is met;
    and correspondingly, the determining of the ground projection region according to the target travel path information comprises:
    determining the ground projection region according to the target travel path information when the judgment result is that the preset projection condition is met.
  8. The interaction method according to claim 7, wherein the preset projection condition comprises at least one of the following:
    the travel direction of the mobile robot changes within a future preset time period;
    the travel state of the mobile robot is a paused state;
    there are pedestrians around the mobile robot;
    the mobile robot is currently in an operating state.
  9. The interaction method according to claim 8, wherein, when the preset projection condition is that the mobile robot is currently in an operating state, the acquiring of the pattern to be projected comprises:
    judging, according to the target travel path information, whether the pattern currently projected by the mobile robot can reflect the travel intention of the mobile robot;
    if it can, taking the pattern currently projected by the mobile robot as the pattern to be projected;
    if not, generating the pattern to be projected according to the travel intention of the mobile robot.
  10. The interaction method according to claim 9, wherein the judging, according to the target travel path information, of whether the pattern currently projected by the mobile robot can reflect the travel intention of the mobile robot comprises:
    executing the step of judging, according to the target travel path information, whether the pattern currently projected by the mobile robot can reflect the travel intention of the mobile robot if the real-time obstacle information indicates that an obstacle around the mobile robot is a moving obstacle.
  11. The interaction method according to claim 1, wherein after the controlling of the projection apparatus according to the projection parameters so as to project the pattern to be projected onto the ground projection region, the interaction method further comprises:
    projecting the pattern to be projected in real time during operation, and acquiring an obstacle region present on the road surface during operation;
    detecting whether an overlap region exists between the pattern to be projected and the obstacle region, and, when an overlap region exists, adjusting the pattern to be projected according to the overlap region so that no overlap region remains between the pattern to be projected and the obstacle region.
  12. The interaction method according to claim 11, wherein the acquiring of the obstacle region present on the road surface during operation comprises:
    collecting obstacle information in real time during operation, and mapping pixel information corresponding to the obstacle information into a preset projection map;
    determining a minimum-area region from the projection region containing all the pixel information, and recording the minimum-area region as the obstacle region.
  13. The interaction method according to claim 11, wherein the pattern to be projected comprises an initial pattern to be projected and different enlarged patterns to be projected generated at different moments with different magnification ratios, and the projecting of the pattern to be projected in real time during operation comprises:
    projecting the initial pattern to be projected, and projecting, at the different moments and arranged with the initial pattern, the enlarged patterns that have already been generated.
  14. The interaction method according to claim 11, wherein the pattern to be projected comprises at least one of an initial pattern to be projected and an enlarged pattern to be projected, the enlarged pattern being formed by enlarging the initial pattern at a preset magnification ratio, and the projecting of the pattern to be projected in real time during operation comprises:
    projecting at least one of the initial pattern to be projected and the enlarged pattern to be projected in real time during operation.
  15. The interaction method according to claim 13 or 14, wherein the adjusting of the pattern to be projected according to the overlap region comprises:
    determining, in the overlap region, two curve intersection points of the overlapping pattern and the obstacle region, the overlapping pattern being the initial pattern or the enlarged pattern;
    deleting the line of the overlapping pattern lying between the two curve intersection points, obtaining the two remaining curve segments of the overlapping pattern after deletion;
    determining the intersection point of the perpendicular bisector corresponding to the line connecting the two curve intersection points;
    detecting the vertical distance between the perpendicular-bisector intersection point and a boundary intersection point, and comparing the vertical distance with a preset distance threshold, the boundary intersection point being the intersection of the perpendicular bisector with the edge of the obstacle region and lying within the curve overlap region;
    when the vertical distance is less than or equal to the preset distance threshold, adjusting the pattern to be projected according to the two remaining curve segments, the curve intersection points and the boundary intersection point to obtain an adjusted pattern, no overlap region existing between the adjusted pattern and the obstacle region.
  16. The interaction method according to claim 14, wherein the adjusting of the pattern to be projected according to the overlap region comprises:
    recording the initial pattern or enlarged pattern that shares an overlap region with the obstacle region as the overlapping pattern, the overlapping pattern comprising the overlap region that overlaps the obstacle region and a remaining region that does not;
    deleting the overlap region of the overlapping pattern, or shrinking the overlapping pattern at a preset ratio until it is tangent to the edge of the obstacle region, to obtain the adjusted pattern.
  17. The interaction method according to claim 15, wherein the adjusting of the pattern to be projected according to the two remaining curve segments, the curve intersection points and the boundary intersection point to obtain the adjusted pattern comprises:
    connecting the two curve intersection points with the boundary intersection point in a preset connection manner to obtain a connecting segment;
    recording the pattern formed by connecting the two remaining curve segments with the connecting segment as the adjusted pattern.
  18. The interaction method according to claim 15, wherein after the comparing of the vertical distance with the preset distance threshold, the method further comprises:
    stopping projection of the pattern to be projected when the vertical distance is greater than the preset distance threshold.
  19. The interaction method according to claim 11, wherein after the adjusting of the pattern to be projected according to the overlap region, the method further comprises:
    acquiring current position information of the robot, and determining the positional distance between the robot and the obstacle region according to the current position information;
    determining a color parameter of the adjusted pattern according to the positional distance, and projecting the adjusted pattern according to the color parameter.
  20. An interaction apparatus for a mobile robot, the mobile robot being provided with a projection apparatus and an environment perception sensor, the interaction apparatus comprising:
    an acquisition module configured to acquire map data information of the space in which the mobile robot is located and real-time environment perception data collected by the environment perception sensor, the real-time environment perception data comprising real-time obstacle information and real-time indication information indicating the road conditions around the mobile robot;
    a path module configured to acquire target travel path information of the mobile robot based on the real-time obstacle information and the map data information, and to determine a ground projection region according to the target travel path information and the real-time indication information;
    a determination module configured to acquire a pattern to be projected and to determine projection parameters corresponding to the pattern to be projected according to the pattern to be projected and the ground projection region, the pattern to be projected indicating the travel intention of the mobile robot;
    a projection module configured to control the projection apparatus according to the projection parameters so as to project the pattern to be projected onto the ground projection region.
  21. The interaction apparatus for a mobile robot according to claim 20, further comprising:
    an obstacle region acquisition module configured to project the pattern to be projected in real time during operation and to acquire an obstacle region present during operation;
    an overlap region detection module configured to detect whether an overlap region exists between the pattern to be projected and the obstacle region, and, when an overlap region exists, to adjust the pattern to be projected according to the overlap region so that no overlap region remains between the pattern to be projected and the obstacle region.
  22. A mobile robot, comprising a projection apparatus, an environment perception sensor and a processor;
    the environment perception sensor being configured to collect real-time environment perception data comprising real-time obstacle information and real-time indication information indicating the road conditions around the mobile robot;
    the processor being configured to acquire map data information of the space in which the mobile robot is located and the real-time environment perception data, to acquire target travel path information of the mobile robot based on the real-time obstacle information and the map data information, to determine a ground projection region according to the target travel path information and the real-time indication information, to acquire a pattern to be projected, to determine projection parameters corresponding to the pattern to be projected according to the pattern to be projected and the ground projection region, the pattern to be projected indicating the travel intention of the mobile robot, and to control the projection apparatus according to the projection parameters so as to project the pattern to be projected onto the ground projection region;
    the projection apparatus being configured to project the pattern to be projected onto the ground projection region.
  23. The mobile robot according to claim 22, wherein the processor is further configured to:
    judge, according to the target travel path information and the real-time environment perception data, whether a preset projection condition is met, the preset projection condition comprising at least one of the following: the travel direction of the mobile robot changes within a future preset time period, the travel state of the mobile robot is a paused state, there are pedestrians around the mobile robot, and the mobile robot is currently in an operating state;
    determine the ground projection region according to the target travel path information when the judgment result is that the preset projection condition is met.
  24. The mobile robot according to claim 23, wherein the processor is further configured to:
    judge, when the preset projection condition is that the mobile robot is currently in an operating state, according to the target travel path information, whether the pattern currently projected by the mobile robot can reflect the travel intention of the mobile robot;
    if it can, take the pattern currently projected by the mobile robot as the pattern to be projected;
    if not, generate the pattern to be projected according to the travel intention of the mobile robot.
  25. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the interaction method according to any one of claims 1 to 19.
PCT/CN2022/132312 2021-11-16 2022-11-16 Interaction method and apparatus for mobile robot, and mobile robot and storage medium WO2023088316A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP22894848.5A EP4350461A1 (en) 2021-11-16 2022-11-16 Interaction method and apparatus for mobile robot, and mobile robot and storage medium
KR1020247001573A KR20240021954A (ko) 2021-11-16 2022-11-16 이동 로봇의 인터랙션 방법, 장치, 이동 로봇 및 저장매체(interaction method and apparatus for mobile robot, and mobile robot and storage medium)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202111354791.4A CN114265397B (zh) 2021-11-16 2021-11-16 Interaction method and apparatus for mobile robot, mobile robot and storage medium
CN202111354791.4 2021-11-16
CN202111355659.5 2021-11-16
CN202111355659.5A CN114274117A (zh) 2021-11-16 2021-11-16 Robot, obstacle-based robot interaction method, apparatus and medium

Publications (1)

Publication Number Publication Date
WO2023088316A1 (zh)

Family

ID=86396251

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/132312 WO2023088316A1 (zh) 2021-11-16 2022-11-16 移动机器人的交互方法、装置、移动机器人和存储介质

Country Status (3)

Country Link
EP (1) EP4350461A1 (zh)
KR (1) KR20240021954A (zh)
WO (1) WO2023088316A1 (zh)

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060187010A1 (en) * 2005-02-18 2006-08-24 Herbert Berman Vehicle motion warning device
US20160337626A1 (en) * 2014-12-25 2016-11-17 Panasonic Intellectual Property Management Co., Ltd. Projection apparatus
CN105976457A (zh) * 2016-07-12 2016-09-28 百度在线网络技术(北京)有限公司 用于指示车辆行车动态的方法和装置
CN106406312A (zh) * 2016-10-14 2017-02-15 平安科技(深圳)有限公司 导览机器人及其移动区域标定方法
CN107139832A (zh) * 2017-05-08 2017-09-08 杨科 一种汽车光学投影警示系统及其方法
CN108303972A (zh) * 2017-10-31 2018-07-20 腾讯科技(深圳)有限公司 移动机器人的交互方法及装置
CN110039535A (zh) * 2018-01-17 2019-07-23 阿里巴巴集团控股有限公司 机器人交互方法及机器人
CN109491875A (zh) * 2018-11-09 2019-03-19 浙江国自机器人技术有限公司 一种机器人信息显示方法、系统及设备
CN109927624A (zh) * 2019-01-18 2019-06-25 驭势(上海)汽车科技有限公司 车辆移动的目标区域的投影方法、hmi计算机系统及车辆
JP2020154635A (ja) * 2019-03-19 2020-09-24 株式会社フジタ 無人移動装置
CN110442126A (zh) * 2019-07-15 2019-11-12 北京三快在线科技有限公司 一种移动机器人及其避障方法
CN114265397A (zh) * 2021-11-16 2022-04-01 深圳市普渡科技有限公司 移动机器人的交互方法、装置、移动机器人和存储介质
CN114274117A (zh) * 2021-11-16 2022-04-05 深圳市普渡科技有限公司 机器人、基于障碍物的机器人交互方法、装置及介质

Also Published As

Publication number Publication date
KR20240021954A (ko) 2024-02-19
EP4350461A1 (en) 2024-04-10


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 22894848; Country of ref document: EP; Kind code of ref document: A1)
WWE Wipo information: entry into national phase (Ref document number: 2022894848; Country of ref document: EP)
ENP Entry into the national phase (Ref document number: 2024500596; Country of ref document: JP; Kind code of ref document: A)
ENP Entry into the national phase (Ref document number: 2022894848; Country of ref document: EP; Effective date: 20240102)
ENP Entry into the national phase (Ref document number: 20247001573; Country of ref document: KR; Kind code of ref document: A)
WWE Wipo information: entry into national phase (Ref document number: 1020247001573; Country of ref document: KR)