WO2023088316A1 - Interaction method and apparatus for mobile robot, mobile robot and storage medium - Google Patents
Interaction method and apparatus for mobile robot, mobile robot and storage medium
- Publication number
- WO2023088316A1 (PCT/CN2022/132312)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- projected
- pattern
- information
- mobile robot
- projection
- Prior art date
Classifications
- G05D1/246: Arrangements for determining position or orientation using environment maps, e.g. simultaneous localisation and mapping [SLAM]
- G05D1/242: Means based on the reflection of waves generated by the vehicle
- G05D1/243: Means capturing signals occurring naturally from the environment, e.g. ambient optical, acoustic, gravitational or magnetic signals
- G01S17/89: Lidar systems specially adapted for mapping or imaging
- G01S17/86: Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
- G01S15/931: Sonar systems specially adapted for anti-collision purposes of land vehicles
- G05D2107/60: Specific environments of the controlled vehicles: open buildings, e.g. offices, hospitals, shopping areas or universities
- G05D2109/10: Types of controlled vehicles: land vehicles
- G05D2111/10: Optical signals used for control of position, course, altitude or attitude
- G05D2111/30: Radio signals used for control of position, course, altitude or attitude
Description
- the present application relates to the field of artificial intelligence, in particular to an interaction method and device for a mobile robot, a mobile robot and a storage medium.
- Mobile robots are currently used in restaurants, shopping malls, hotels and other places with a large flow of people.
- During driving, mobile robots often encounter right-of-way conflicts with pedestrians.
- the information interaction between mobile robots and pedestrians mainly includes speech and action forms.
- mobile robots receive human instructions through microphones, determine the prompt information corresponding to the instructions, and send prompt sounds to people through speakers.
- the prompt sound describes the content of the prompt information to people; alternatively, after receiving action instructions, the robot can convey information by performing different mechanical actions.
- the interaction between mobile robots and pedestrians is usually realized by voice broadcast, so that pedestrians can know the driving intention of the mobile robot. For example, when the mobile robot is turning right, it plays the voice prompt "I am going to turn right, please keep out of the way" to inform pedestrians.
- However, the prompt information is conveyed through sound or body movements. The prompt sound is affected by factors such as the distance between the person and the mobile robot, ambient noise, and regional language, and prompt actions are likewise affected by the distance between the person and the mobile robot. Especially in noisy places such as restaurants and shopping malls, it is difficult for a voice broadcast to reach pedestrians clearly, so the interaction effect is poor. It is therefore difficult for the mobile robot to quickly and accurately convey prompt information, resulting in low interaction efficiency and low interaction accuracy between the mobile robot and pedestrians.
- the present application provides an interaction method and device for a mobile robot, a mobile robot and a storage medium.
- In a first aspect, an interaction method for a mobile robot is provided.
- the mobile robot is provided with a projection device and an environment perception sensor.
- the method includes:
- obtaining map data information of the space where the mobile robot is located and obtaining real-time environment perception data collected by the environment perception sensor, where the real-time environment perception data includes real-time obstacle information and real-time indication information for indicating road conditions around the mobile robot;
- obtaining target driving path information of the mobile robot based on the real-time obstacle information and the map data information, and determining a ground projection area according to the target driving path information and the real-time indication information;
- obtaining a pattern to be projected, and determining projection parameters corresponding to the pattern to be projected according to the pattern to be projected and the ground projection area, the pattern to be projected being used to indicate the driving intention of the mobile robot; and
- controlling the projection device according to the projection parameters to project the pattern to be projected onto the ground projection area.
- In a second aspect, an interaction device for a mobile robot is provided, comprising:
- the obtaining module is used to obtain map data information of the space where the mobile robot is located and to obtain real-time environment perception data collected by the environment perception sensor, the real-time environment perception data includes real-time obstacle information and real-time indication information for indicating road conditions around the mobile robot;
- the path module is used to obtain the target driving path information of the mobile robot based on real-time obstacle information and map data information, and determine the ground projection area according to the target driving path information and real-time indication information;
- the determination module is used to obtain the pattern to be projected, and to determine the projection parameters corresponding to the pattern to be projected according to the pattern to be projected and the ground projection area, the pattern to be projected being used to indicate the driving intention of the mobile robot;
- the projection module is used to control the projection device according to the projection parameters to project the pattern to be projected onto the ground projection area.
- In a third aspect, a mobile robot is provided, including a projection device, an environment perception sensor, and a processor;
- the environment perception sensor is used to collect real-time environment perception data, and the real-time environment perception data includes real-time obstacle information and real-time indication information for indicating road conditions around the mobile robot;
- the processor is configured to obtain map data information of the space where the mobile robot is located and obtain the real-time environment perception data; obtain target driving path information of the mobile robot based on the real-time obstacle information and the map data information; determine the ground projection area according to the target driving path information and the real-time indication information; obtain a pattern to be projected, and determine projection parameters corresponding to the pattern to be projected according to the pattern to be projected and the ground projection area, the pattern to be projected being used to indicate the driving intention of the mobile robot; and control the projection device according to the projection parameters to project the pattern to be projected onto the ground projection area;
- the projection device is used for projecting the pattern to be projected onto the ground projection area.
- In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored; when the computer program is executed by a processor, the interaction method for a mobile robot as described in the first aspect is implemented.
- FIG. 1 is a schematic structural diagram of a mobile robot in an embodiment of the present application.
- FIG. 2 is a schematic flowchart of an interaction method for a mobile robot in an embodiment of the present application.
- FIG. 3 is a schematic diagram of a projection area of a mobile robot in an embodiment of the present application.
- FIG. 4 is a schematic diagram of a projection application of a mobile robot in an embodiment of the present application.
- FIG. 5 is a schematic flowchart of step 101 in an embodiment of the present application.
- FIG. 6 is a schematic flowchart of step 101 in another embodiment of the present application.
- FIG. 7 is a schematic flowchart of step 102 in an embodiment of the present application.
- FIG. 8 is a schematic flowchart of step 103 in an embodiment of the present application.
- FIG. 9 is a schematic diagram of the operation of the RGBD sensor in an embodiment of the present application.
- FIG. 10 is a schematic structural diagram of a laser projection device in an embodiment of the present application.
- FIG. 11 is a schematic structural diagram of a laser projection device in another embodiment of the present application.
- FIG. 12 is a schematic flowchart of step 104 in an embodiment of the present application.
- FIG. 13 is a schematic flowchart of an interaction method for a mobile robot in another embodiment of the present application.
- FIG. 14 is a schematic flowchart of an interaction method for a mobile robot in another embodiment of the present application.
- FIG. 15 is a structural block diagram of an interaction device of a mobile robot in an embodiment of the present application.
- FIG. 16 is a flowchart of an interaction method based on a mobile robot in an embodiment of the present application.
- FIG. 17 is a flow chart of step 105 in the obstacle-based robot interaction method in an embodiment of the present application.
- FIG. 18 is a schematic diagram of a non-overlapping area between a pattern to be projected and an obstacle area in an embodiment of the present application.
- FIG. 19 is a schematic diagram of an overlapping area between a pattern to be projected and an obstacle area in an embodiment of the present application.
- FIG. 20 is a schematic diagram of an overlapping area between the mobile robot and the obstacle area during movement in an embodiment of the present application.
- FIG. 21 is a schematic diagram of the internal structure of the robot in an embodiment of the present application.
- the mobile robot interaction method provided by the embodiment of the present application may be executed by an interaction device of the mobile robot.
- the interaction device of the mobile robot is provided on the mobile robot shown in FIG. 1 and may be implemented through software, hardware, or a combination of software and hardware.
- the terminal can be a personal computer, laptop, media player, smart TV, smartphone, tablet, and portable wearable device, among others.
- the mobile robot is provided with a plurality of environment perception sensors and a laser projection device. There may be one, two, or more environment perception sensors; when there are multiple, each may be installed at a different position.
- FIG. 1 shows an exemplary mobile robot. As shown in FIG. 1, the environment perception sensors include an RGBD camera 1 and a radar device 3; a hub motor may also be included. It should be noted that the sensor type and installation position of each environment perception sensor may be adjusted according to the actual situation.
- FIG. 2 shows a flowchart of an interaction method for a mobile robot provided by an embodiment of the present application.
- This embodiment is described by taking the method applied to a terminal as an example. It can be understood that the method can also be applied to a system including a terminal and a server, and is implemented through interaction between the terminal and the server.
- the interaction method of the mobile robot may include the following steps:
- Step 101 Obtain map data information of the space where the mobile robot is located and obtain real-time environment perception data collected by environment perception sensors.
- the real-time environment perception data includes real-time obstacle information and real-time indication information for indicating road conditions around the mobile robot.
- Obstacles include stationary obstacles and moving obstacles, and the number of obstacles of each type is not limited.
- the real-time indication information for indicating the road condition around the mobile robot at least includes road shape information around the mobile robot and obstacle distribution on the surrounding road.
- the environment awareness sensor includes at least an RGBD camera.
- the RGBD camera is used to detect the distance from the obstacle around the mobile robot to the mobile robot, the obstacle identification information and the real-time indication information indicating the road condition around the mobile robot.
- the mobile robot obtains real-time environment perception data by processing the color image and depth image collected by the RGBD camera.
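- As a rough illustration of this kind of depth processing, the following minimal Python sketch extracts the nearest obstacle distance from a depth frame; the 10 m range cutoff and the zero-means-invalid convention are assumptions for the example, not values from the application.

```python
import numpy as np

def nearest_obstacle_distance(depth_image_m, max_range_m=10.0):
    """Return the distance (in meters) to the closest valid depth pixel.

    depth_image_m: HxW array of per-pixel depths from an RGBD camera,
    with 0 (or values beyond max_range_m) treated as invalid readings.
    """
    valid = (depth_image_m > 0) & (depth_image_m <= max_range_m)
    if not np.any(valid):
        return None  # nothing detected within range
    return float(depth_image_m[valid].min())

# Example: a synthetic 4x4 depth frame with one obstacle at ~1.2 m
frame = np.full((4, 4), 5.0)
frame[2, 1] = 1.2
frame[0, 0] = 0.0  # invalid pixel
print(nearest_obstacle_distance(frame))  # -> 1.2
```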
- In one implementation, the saved map data information is retrieved directly from a preset storage area, where the preset storage area may be a server or a terminal of the mobile robot.
- the map data information is constructed by the mobile robot in real time. During the movement of the mobile robot, the environment perception sensor is used to collect the data required to construct the map, and the map is constructed and improved based on the collected data.
- Step 102 Obtain target driving path information of the mobile robot based on real-time environment perception data and map data information, and determine a ground projection area according to the target driving path information and real-time indication information.
- the map data information includes location information of these static obstacles.
- Before the mobile robot starts to drive, it first obtains the start position and the end position, and then determines an initial driving path from the start position to the end position based on the map data information.
- When the environment perception sensor detects moving obstacles (such as pedestrians) around the mobile robot, an obstacle avoidance operation is performed to change the driving route; that is, the target driving path information of the mobile robot is obtained based on the real-time environment perception data and the map data information.
- the mobile robot uses a path planning algorithm for path planning to obtain the target driving path information, where the path planning algorithm may be an incremental heuristic algorithm, a BUG algorithm, a graph search algorithm, or a combination of multiple path planning algorithms, etc.
- After the mobile robot acquires the target driving route information, it determines the road surface area that it will drive over in a future period of time as the ground projection area according to the target driving route, where the length of that future period may be determined according to the traveling speed of the mobile robot.
- In FIG. 3, figure (a) is a three-dimensional schematic diagram of the space around the mobile robot:
- 6 is the projection light outlet of the projection device;
- 7-10 are obstacles;
- 11 is a schematic diagram of the projection area;
- 12 is the mobile robot.
- Figure (b) is the ground distribution map corresponding to figure (a):
- 7'-10' are the contact surfaces between obstacles 7-10 and the ground;
- 12' is the contact surface between the mobile robot 12 and the ground;
- 13 represents the target driving direction of the mobile robot.
- The coordinate point corresponding to the center of the contact surface between the mobile robot 12 and the ground is taken as the coordinate position of the mobile robot, that is, d0(x0, y0) in figure (b). According to the target driving path information, a series of moving coordinate points of the mobile robot is determined; these points form a center line, that is, curve 14 in figure (b). The center line is then translated to both sides by a certain distance to obtain two edge lines, where the translation distance is half the width of the mobile robot's base.
- the area between the two edge lines is the ground projection area, namely 11' in Figure (b).
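- The centerline-offset construction described above can be sketched as follows; this simplified planar version assumes the path is given as a list of 2D waypoints and offsets each segment along its normal, which the application does not prescribe in detail.

```python
import math

def projection_area_edges(centerline, base_width):
    """Offset a path polyline to both sides by half the robot base width.

    centerline: list of (x, y) moving coordinate points along the target
    driving path (curve 14 in figure (b)); returns the two edge polylines
    that bound the ground projection area.
    """
    assert len(centerline) >= 2, "need at least one path segment"
    half = base_width / 2.0
    left, right = [], []
    for (x0, y0), (x1, y1) in zip(centerline, centerline[1:]):
        dx, dy = x1 - x0, y1 - y0
        norm = math.hypot(dx, dy) or 1.0
        nx, ny = -dy / norm, dx / norm  # unit normal, left of travel direction
        left.append((x0 + nx * half, y0 + ny * half))
        right.append((x0 - nx * half, y0 - ny * half))
    # offset the final point using the last segment's normal
    left.append((x1 + nx * half, y1 + ny * half))
    right.append((x1 - nx * half, y1 - ny * half))
    return left, right

left_edge, right_edge = projection_area_edges([(0, 0), (1, 0), (2, 1)], base_width=0.6)
```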
- the direction of the ground projection area is determined according to the target driving path information, and the size and shape of the ground projection area are determined according to the road surface shape information and the real-time obstacle distribution information.
- When the road surface shape information indicates a curved road, the shape of the ground projection area is also curved.
- When the real-time obstacle distribution information shows that the free space in front of an obstacle is relatively narrow, the ground projection area needs to be made smaller.
- Step 103 Obtain the information of the pattern to be projected, and determine the projection parameters corresponding to the pattern to be projected according to the information of the pattern to be projected and the projection area on the ground.
- the pattern information to be projected is used to indicate the driving intention of the mobile robot.
- the pattern to be projected may be a text pattern, a graphic pattern, or a combination of text and geometric patterns, and may also be an animation.
- the pattern information to be projected can be displayed on the ground by flashing.
- the projection parameters include projection angle, projection color, projection content, projection time and so on.
- Step 104 Control the laser projection device according to the projection parameters to project the pattern information to be projected onto the ground projection area.
- When the projection parameters are determined, the mobile robot adjusts the projection device 2 according to the projection parameters so that the projection device 2 projects the pattern to be projected onto the ground projection area; surrounding pedestrians learn the driving intention of the mobile robot by viewing the projected information on the ground.
- In the above interaction method, the ground projection area is determined according to the target driving path information of the mobile robot and the real-time indication information indicating the surrounding road conditions, and the laser projection device is adjusted based on the projection parameters determined for the pattern to be projected, so that the pattern representing the driving intention of the mobile robot is projected onto the ground projection area. Pedestrians can thus learn the driving intention of the mobile robot from the projected pattern, which solves the problem of poor interaction caused by the noisy environment where the robot is located and improves the interaction effect between mobile robots and pedestrians.
- In one embodiment, before step 101 obtains the map data information of the space where the mobile robot is located and the real-time environment perception data collected by the environment perception sensor, the interaction method further includes step 201, step 202 and step 203:
- Step 201 Obtain historical environment perception data collected by the environment perception sensor when the environment of the space where the mobile robot is located satisfies a preset environment condition.
- the preset environmental conditions include at least one of a small number of pedestrians in the environment of the space where the mobile robot is located and no one in the environment of the space where the mobile robot is located.
- the historical environment perception data includes static obstacle information in the space where the mobile robot is located, such as tables, chairs or trash cans.
- When the preset environmental condition is that the number of pedestrians in the environment is small, the information related to pedestrians in the original perception data collected by the environment perception sensor is filtered out to obtain the historical environment perception data.
- the mobile robot determines when to perform the above historical environment perception data collection operation according to the acquired historical environment perception data collection time information, for example, setting the historical environment perception data collection time to 23:00 every night.
- Step 202 Determine the spatial coordinate information of the space where the mobile robot is located according to the historical environment perception data, and create a map of the space according to the spatial coordinate information.
- The spatial coordinate information may be that of the entire space where the mobile robot is located or of the space the mobile robot will pass through, for example, the spatial coordinate information of a restaurant, a shopping mall, or the service area of the mobile robot within the shopping mall. For example, when the service area of the mobile robot is the second floor of the shopping mall, the spatial coordinate information of the second floor needs to be determined.
- the spatial coordinate information is two-dimensional coordinate information or three-dimensional coordinate information.
- two-dimensional coordinates are established with the ground as a plane, and a reference position point is set.
- the reference position point is the position point of a certain static obstacle in space, or a reference object is placed on the ground, and the position point where the reference object is located is used as the reference position point.
- Based on the reference position point, the two-dimensional coordinates corresponding to other position points in the space are determined.
- Step 203 Use the data information of the created map as the map data information.
- In this embodiment, the spatial coordinate information of the space is determined from historical environment perception data collected when the environment satisfies the preset environmental conditions, and a map of the space is created from that information. Because the map is built from data collected in an environment meeting the preset conditions, interference information in the space is reduced, which lowers both the difficulty of map construction and the amount of map data information.
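- For illustration, static-obstacle points expressed in the reference-point coordinate system could be rasterized into a simple occupancy grid as below; the resolution and map size are illustrative assumptions, not values from the application.

```python
import numpy as np

def build_occupancy_map(static_points, resolution=0.05, size_m=(20.0, 20.0)):
    """Rasterize static-obstacle points (in meters, relative to a chosen
    reference position point) into a 2D occupancy grid.

    static_points: iterable of (x, y) coordinates of static obstacles
    taken from historical environment perception data.
    """
    w = int(size_m[0] / resolution)
    h = int(size_m[1] / resolution)
    grid = np.zeros((h, w), dtype=np.uint8)
    for x, y in static_points:
        col, row = int(x / resolution), int(y / resolution)
        if 0 <= row < h and 0 <= col < w:
            grid[row, col] = 1  # mark cell as occupied by a static obstacle
    return grid

# Tables/chairs recorded when the space is empty (illustrative coordinates)
occupancy = build_occupancy_map([(1.0, 2.0), (1.05, 2.0), (3.2, 4.8)])
```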
- In one embodiment, the environment perception sensor includes a radar device and a camera device.
- This embodiment relates to the process of acquiring real-time environment perception data collected by the environment perception sensor in step 101. Based on the embodiment shown in FIG. 5, as shown in FIG. 6, the process includes step 301, step 302 and step 303:
- Step 301 Obtain the real-time distance information between the obstacle and the mobile robot collected by the radar device.
- the radar device includes at least one of a lidar device and an ultrasonic radar device.
- the lidar device is used to detect the distance between objects around the robot and the robot within a 2D plane or 3D space.
- Step 302 Obtain real-time obstacle recognition information collected by the camera device, road shape information of the road around the mobile robot, and real-time obstacle distribution information on the road around the mobile robot.
- the camera device includes an RGBD camera; or the camera device includes an RGBD camera and an RGB camera.
- the real-time obstacle identification information includes identifying whether the obstacle is a pedestrian.
- an image recognition algorithm is used to recognize the image of the obstacle collected by the RGB camera or the RGBD camera to determine whether the obstacle is a pedestrian.
- When the camera device includes an RGBD camera and an RGB camera, the RGB camera is used in conjunction with the radar device: when the radar device detects an obstacle, the mobile robot starts the RGB camera to perform a collection operation to obtain the real-time obstacle identification information.
- Step 303 Use real-time obstacle identification information and real-time distance information as real-time obstacle information, and use road surface shape information and real-time obstacle distribution information as real-time instruction information.
- In this embodiment, the real-time distance information between obstacles and the mobile robot is obtained by means of the radar device, while the real-time obstacle identification information, the road shape information of the surrounding road surface, and the real-time obstacle distribution information are obtained by means of the camera device, realizing the acquisition of real-time environment perception data. Using multiple collection devices together improves both the diversity and the reliability of the real-time environment perception data.
- this embodiment involves obtaining the target travel path information of the mobile robot based on real-time obstacle information and map data information in step 102, including Step 401 and Step 402.
- Step 401 Determine the real-time position of the mobile robot and the position of the obstacle according to the map data information and the real-time obstacle information.
- the coordinate position of the mobile robot in the map is obtained as the real-time position, and then the coordinate position of the obstacle in the map is determined as the position of the obstacle according to the real-time obstacle information.
- Step 402 Obtain the target end position of the mobile robot, determine the shortest path information from the real-time position to the target end position based on the real-time position and the position of the obstacle, and use the shortest path information as the target driving path information of the mobile robot.
- The shortest path information from the real-time position to the target end position is determined using a shortest path algorithm.
- the shortest path algorithm includes Dijkstra algorithm, Bellman-Ford algorithm, Floyd algorithm and SPFA algorithm and so on.
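- As one concrete instance of this shortest-path step, here is a minimal Dijkstra sketch on a 4-connected occupancy grid (obstacle cells marked 1); the grid representation is an assumption for the example, not the application's prescribed data structure.

```python
import heapq

def dijkstra_grid(grid, start, goal):
    """Shortest 4-connected path on an occupancy grid (0 = free, 1 = obstacle).

    Returns the list of cells from start to goal, or None if unreachable.
    """
    h, w = len(grid), len(grid[0])
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, cell = heapq.heappop(heap)
        if cell == goal:
            path = [cell]
            while cell in prev:
                cell = prev[cell]
                path.append(cell)
            return path[::-1]
        if d > dist.get(cell, float("inf")):
            continue  # stale queue entry
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < h and 0 <= nc < w and grid[nr][nc] == 0:
                nd = d + 1.0
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(dijkstra_grid(grid, (0, 0), (2, 0)))  # routes around the obstacle row
```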
- In this embodiment, the real-time position of the mobile robot and the position of the obstacle are determined according to the map data information and the real-time obstacle information, the target end position of the mobile robot is obtained, and the shortest path information from the real-time position to the target end position is determined. This realizes real-time determination of the target driving path information and improves the reliability of the mobile robot's path planning.
- This embodiment involves determining the projection parameters of the laser projection device according to the pattern to be projected and the ground projection area in step 103, and includes step 501 and step 502.
- Step 501 For each pixel in the pattern to be projected, according to the ground projection area, determine the projection angle corresponding to the pixel, the projection time corresponding to the pixel, and the projection color corresponding to the pixel.
- Specifically, a correspondence between each pixel in the pattern to be projected and a spatial coordinate point in the ground projection area is obtained, and the projection angle, projection time and projection color corresponding to each pixel are determined according to this correspondence.
- the RGBD camera is used to obtain the vertical distance information between the road around the mobile robot and the RGBD camera.
- For each pixel, first assume the original projection angle, projection time and projection color that would apply when projecting the pattern onto a flat road surface; then obtain a projection angle correction parameter according to the vertical distance information between the road surface around the mobile robot and the RGBD camera; finally, obtain the actual projection angle for the pixel from the correction parameter and the original projection angle.
- Step 502 Use the projection angle corresponding to each pixel, the projection time corresponding to each pixel, and the projection color corresponding to each pixel as projection parameters of the laser projection device.
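- A minimal sketch of this per-pixel angle computation under a simple geometric model is shown below; the projector pose and the flat-floor-plus-offset correction model are assumptions for illustration, not the application's exact formula.

```python
import math

def pixel_projection_angle(target_xy, projector_xyz, ground_offset_m=0.0):
    """Compute pan/tilt angles that aim the projector at a ground point.

    target_xy: (x, y) ground coordinate the pixel should land on.
    projector_xyz: (x, y, z) position of the projection light outlet.
    ground_offset_m: correction term derived from the RGBD camera's
    vertical-distance measurement when the road surface is not flat
    (0 for a flat floor).
    """
    px, py, pz = projector_xyz
    tx, ty = target_xy
    dx, dy = tx - px, ty - py
    horizontal = math.hypot(dx, dy)
    pan = math.atan2(dy, dx)
    # a raised or lowered road surface changes the effective drop height
    tilt = math.atan2(pz - ground_offset_m, horizontal)
    return pan, tilt

pan, tilt = pixel_projection_angle((1.5, 0.4), (0.0, 0.0, 1.2))
```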
- This embodiment determines the projection parameters of the projection device and improves the projection effect by determining the projection angle, projection time and projection color for each pixel of the pattern to be projected. At the same time, color information can be set for each pixel, so that the pattern projected on the road surface is colorful, attracts the attention of surrounding pedestrians more easily, and further improves the interaction effect between the mobile robot and pedestrians.
- In one embodiment, the projection device includes a vibrating mirror, visible light lasers and a lens, as shown in FIG. 10 and FIG. 11. The vibrating mirror is a rotating galvanometer or a MEMS solid-state vibrating mirror and is used to control the projection direction of the laser; the visible light lasers perform the display by emitting laser light in the visible frequency range; and the lens is used to synthesize lasers of various colors.
- When the vibrating mirror is a rotating galvanometer, as shown in FIG. 10, the projection device includes a first rotating galvanometer 13, a second rotating galvanometer 14, a lens 15, a first visible light laser 16, a second visible light laser 17 and a third visible light laser 18. The three visible light lasers emit laser light, the lens 15 synthesizes the received laser light into one beam, and the first rotating galvanometer 13 and the second rotating galvanometer 14 adjust the direction of the synthesized beam to finally project the pattern to be projected 19.
- When the vibrating mirror is a MEMS solid-state vibrating mirror, as shown in FIG. 11, the projection device includes a MEMS solid-state vibrating mirror 20, a lens 15, a first visible light laser 16, a second visible light laser 17 and a third visible light laser 18. The three visible light lasers emit laser light, the lens 15 synthesizes the received laser light into one beam, the MEMS solid-state vibrating mirror 20 adjusts the direction of the combined beam, and finally the pattern to be projected 19 is projected.
- this embodiment involves adjusting the laser projection device according to the projection parameters in step 104 to project the pattern information to be projected onto the ground projection area, including steps 601, 602 and 603.
- Step 601 Determine the rotation angle of the vibrating mirror corresponding to each pixel according to the projection angle corresponding to each pixel, and determine the laser emission information of the visible light lasers and the laser synthesis information of the lens corresponding to each pixel according to the projection color corresponding to each pixel.
- the laser corresponding to the visible light laser includes red, green and blue (RGB) three primary color lasers, and the laser emission information includes the visible light frequency band.
- the visible light frequency bands corresponding to the three visible light lasers in FIG. 10 or FIG. 11 are determined according to the projected colors.
- Step 602 Determine the projection order of each pixel according to the projection time corresponding to each pixel.
- Step 603 According to the projection order of the pixels, adjust the laser projection device according to the rotation angle of the galvanometer, the laser emission information, and the laser synthesis information of the lens corresponding to each pixel, so as to project the pattern to be projected onto the ground projection area.
- This embodiment realizes the visual display of the pattern information to be projected on the ground projection area, and can project colorful patterns on the ground, which is convenient for capturing the attention of pedestrians and improving the interaction effect.
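- The per-pixel drive loop might look like the sketch below; set_galvo and set_lasers are hypothetical placeholders for the device control interface, which the application does not specify.

```python
from dataclasses import dataclass

@dataclass
class PixelCommand:
    time_s: float   # projection time for this pixel
    pan: float      # galvanometer rotation angles (radians)
    tilt: float
    rgb: tuple      # emission intensities for the R/G/B visible light lasers

def execute_projection(pixels, set_galvo, set_lasers):
    """Drive the projector pixel by pixel in projection-time order.

    set_galvo(pan, tilt) and set_lasers(r, g, b) stand in for the
    device-specific control interface; they are placeholders here.
    """
    for px in sorted(pixels, key=lambda p: p.time_s):
        set_galvo(px.pan, px.tilt)  # steer the combined beam
        set_lasers(*px.rgb)         # synthesize the pixel's color

execute_projection(
    [PixelCommand(0.002, 0.1, -0.8, (255, 0, 0)),
     PixelCommand(0.001, 0.0, -0.8, (0, 255, 0))],
    set_galvo=lambda pan, tilt: None,
    set_lasers=lambda r, g, b: None,
)
```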
- the interaction method of the mobile robot further includes:
- Step 701 According to the target driving route information and real-time environment perception data, determine whether the preset projection condition is met.
- the preset projection conditions include at least one of the following: the driving direction of the mobile robot changes within a preset future time period, the driving state of the mobile robot is paused, there are pedestrians around the mobile robot, or the mobile robot is currently in the running state.
- The preset projection conditions are related to the driving situation of the mobile robot, and different patterns to be projected can be set for different preset projection conditions. For example, when the driving direction of the mobile robot changes, the pattern to be projected can be a combination of the arrow mark corresponding to the driving direction and text; when the driving state of the mobile robot is paused, the pattern to be projected can be a text pattern such as "please go ahead" or "will start moving in xxx minutes", and so on.
- When the preset projection condition is that the mobile robot is currently running, the robot detects whether it is in the powered-on state, and if so, starts the projection device. In this case the projection device of the mobile robot is always projecting, and the pattern projected onto the ground can be changed in real time.
- The preset projection condition may also be that the sound intensity around the mobile robot is higher than a preset value: when the surrounding sound intensity is higher than the preset value, interaction is performed by projection; when it is lower than the preset value, voice reminders are used instead.
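- A minimal sketch of this sound-based mode selection follows; the 70 dB threshold is an illustrative assumption, as the application only refers to a preset value.

```python
def choose_interaction_mode(sound_db, threshold_db=70.0):
    """Pick projection in noisy environments, voice otherwise.

    threshold_db stands in for the preset value compared against
    the ambient sound intensity.
    """
    return "projection" if sound_db >= threshold_db else "voice"

print(choose_interaction_mode(82.0))  # noisy restaurant -> "projection"
print(choose_interaction_mode(45.0))  # quiet corridor  -> "voice"
```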
- Step 702 If the judgment result is that the preset projection condition is met, determine the ground projection area according to the target driving route information.
- In this embodiment, whether the preset projection condition is met is judged according to the target driving route information and real-time environment perception data, and the pattern is projected only when the condition is met. This improves the flexibility of the projection settings, reduces the energy consumption and computation of the mobile robot, and extends the service life of the laser projection device.
- the pattern to be projected is obtained in step 103, including:
- Step 801 According to the target driving route information, judge whether the pattern currently projected by the mobile robot can reflect the driving intention of the mobile robot.
- the pattern currently projected by the mobile robot is the projected pattern projected onto the ground at the current moment.
- Step 802 If yes, use the currently projected pattern of the mobile robot as the pattern to be projected.
- the projection pattern here is the pattern to be projected onto the ground at the moment following the current moment.
- Step 803 If not, generate a pattern to be projected according to the driving intention of the mobile robot.
- different patterns to be projected are set according to different driving intentions of the mobile robot.
- When the driving intention changes, the pattern projected on the ground also changes; that is, the projected pattern at the next moment differs from the projected pattern at the previous moment.
- For example, when the driving direction of the mobile robot changes, the currently projected pattern representing "going straight ahead" is converted into a projection pattern representing "turn left" or "turn right".
- This embodiment judges whether the currently projected pattern reflects the driving intention of the mobile robot and, when it does not, generates a new pattern to be projected according to that intention, thereby adjusting the projection in real time. Pedestrians can thus accurately grasp the driving intention of the mobile robot, which improves the accuracy of the information conveyed and further improves the interaction effect between the mobile robot and pedestrians.
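- The keep-or-regenerate decision could be sketched as a simple lookup; the pattern identifiers below are hypothetical, since the application describes text, graphic, combined and animated patterns without enumerating them.

```python
# Hypothetical pattern identifiers for illustration only.
INTENTION_PATTERNS = {
    "straight": "arrow_forward",
    "turn_left": "arrow_left_with_text",
    "turn_right": "arrow_right_with_text",
    "pause": "text_please_go_ahead",
}

def pattern_for_intention(current_pattern, intention):
    """Keep the current pattern if it still reflects the driving
    intention; otherwise generate (here: look up) a new one."""
    desired = INTENTION_PATTERNS[intention]
    return current_pattern if current_pattern == desired else desired

print(pattern_for_intention("arrow_forward", "turn_left"))  # switches pattern
```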
- an interaction method for a mobile robot includes the following steps:
- Step 901 Obtain historical environment perception data collected by the environment perception sensor when the environment of the space where the mobile robot is located satisfies a preset environment condition.
- Step 902 Determine the spatial coordinate information of the space where the mobile robot is located according to the historical environment perception data, and create a spatial map according to the spatial coordinate information, using the map as map data information.
- Step 903 Obtain real-time distance information between the obstacle and the mobile robot collected by the radar device, real-time obstacle identification information collected by the camera device, road shape information of the road around the mobile robot, and real-time obstacle distribution information on the road around the mobile robot.
- Step 904 Use real-time obstacle identification information and real-time distance information as real-time obstacle information, and use road surface shape information and real-time obstacle distribution information as real-time indication information.
- Step 905 Determine the real-time position of the mobile robot and the position of the obstacle according to the map data information and the real-time obstacle information.
- Step 906 Obtain the target end position of the mobile robot, determine the shortest path information from the real-time position to the target end position based on the real-time position and the position of the obstacle, and use the shortest path information as the target driving path information of the mobile robot.
- Step 907 Determine whether the preset projection conditions are met according to the target driving route information and real-time environment perception data; if so, determine the ground projection area according to the target driving route information and real-time indication information.
- the preset projection conditions include at least one of the following conditions: the driving direction of the mobile robot changes within a preset time period in the future, the driving state of the mobile robot is paused, there are pedestrians around the mobile robot, and the mobile robot is currently running state.
- Step 908 Obtain the pattern to be projected.
- When the preset projection condition is that the mobile robot is currently running, judge, according to the target driving path information, whether the pattern currently projected by the mobile robot reflects its driving intention; if yes, use the currently projected pattern as the pattern to be projected; if not, generate the pattern to be projected according to the driving intention of the mobile robot.
- Step 909 For each pixel in the pattern to be projected, according to the ground projection area, determine the projection angle corresponding to the pixel, the projection time corresponding to the pixel, and the projection color corresponding to the pixel.
- Step 910 Use the projection angle corresponding to each pixel, the projection time corresponding to each pixel, and the projection color corresponding to each pixel as projection parameters of the laser projection device.
- Step 911 Determine the rotation angle of the vibrating mirror corresponding to each pixel according to the projection angle corresponding to each pixel, and determine the laser emission information of the visible laser and the laser synthesis information of the lens corresponding to each pixel according to the projection color corresponding to each pixel.
- Step 912 Determine the projection sequence of each pixel according to the projection time corresponding to each pixel.
- Step 913 According to the projection sequence of the pixels, adjust the laser projection device according to the rotation angle of the galvanometer, the laser emission information, and the laser synthesis information of the lens corresponding to each pixel, so as to project the pattern to be projected onto the ground projection area.
- In this embodiment, the pattern to be projected is projected onto the ground using a laser projection device so that pedestrians know the driving intention of the mobile robot. This improves the interaction effect between the mobile robot and pedestrians and solves the technical problem of poor interaction caused by the noisy environment of the space where the robot is located.
- the projection pattern projected on the road can be a colorful pattern, which can better capture the attention of pedestrians and improve the interaction effect.
- Projection conditions can be preset to improve the flexibility of the projection device, and the projection pattern can be adjusted according to the actual scene, improving the accuracy of the information conveyed by the mobile robot to pedestrians and further improving the interaction effect between the mobile robot and pedestrians.
- the interaction method of the mobile robot may further include:
- Step 105 Project the pattern to be projected in real time during the running process, and obtain the obstacle area existing on the road surface during the running process;
- Step 106 Detect whether there is an overlapping area between the pattern to be projected and the obstacle area; when there is, adjust the pattern to be projected according to the overlapping area so that the adjusted pattern no longer overlaps the obstacle area.
- The pattern to be projected may specifically be a travel instruction map. By determining the curve overlapping area between the pattern to be projected and the obstacle area and adjusting the pattern according to that overlapping area, the projected pattern emitted by the robot is dynamically deformed around the obstacle area, and the adjusted pattern no longer overlaps it. This realizes information interaction between the robot and the obstacle and improves both the efficiency and the accuracy of the information interaction between the robot and humans.
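- As a simplified stand-in for this overlap handling, the sketch below tests and clips axis-aligned rectangles representing the pattern's ground footprint and the obstacle area; the application's own adjustment operates on curve segments, so this rectangle clipping is only an assumption-laden approximation.

```python
def rects_overlap(a, b):
    """Axis-aligned overlap test between two ground rectangles given as
    (x_min, y_min, x_max, y_max)."""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def adjust_projection(pattern_rect, obstacle_rect):
    """Shrink the far edge of the pattern so it stops short of the
    obstacle area (one simple deformation; the application describes a
    curve-segment-based adjustment instead)."""
    if not rects_overlap(pattern_rect, obstacle_rect):
        return pattern_rect
    x0, y0, x1, y1 = pattern_rect
    return (x0, y0, x1, min(y1, obstacle_rect[1]))  # clip along travel axis

print(adjust_projection((0, 0, 1, 3), (-1, 2, 2, 4)))  # -> (0, 0, 1, 2)
```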
- In one embodiment, acquiring the obstacle area existing on the road surface during operation includes: collecting obstacle information in real time during operation, mapping the pixel information corresponding to the obstacle information into a preset projection map, and taking the area covered by the mapped pixel information as the obstacle area.
- In one embodiment, the pattern to be projected includes at least one of an initial pattern to be projected and enlarged patterns to be projected generated at different magnification ratios at different times. Projecting the pattern to be projected in real time during operation includes: gradually enlarging the initial pattern to be projected according to a preset magnification ratio to form the enlarged pattern to be projected, and projecting at least one of the initial pattern to be projected and the enlarged pattern to be projected.
- Step 107 Obtain an initial pattern to be projected.
- Step 108 performing gradual enlargement processing on the initial pattern to be projected according to a preset enlargement ratio to form an enlarged pattern to be projected.
- Step 109 Sequentially display at least one of the initial pattern to be projected and the enlarged pattern to be projected, the displayed pattern(s) constituting the pattern to be projected.
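- The progressive enlargement of steps 107-109 could be sketched as follows, scaling the initial pattern's vertices about their centroid; the step count and maximum scale are illustrative assumptions.

```python
def enlarged_pattern_sequence(initial_points, steps=4, max_scale=2.0):
    """Generate progressively enlarged copies of the initial pattern
    (a list of (x, y) vertices) around its centroid, one per time step."""
    cx = sum(x for x, _ in initial_points) / len(initial_points)
    cy = sum(y for _, y in initial_points) / len(initial_points)
    sequence = []
    for i in range(1, steps + 1):
        s = 1.0 + (max_scale - 1.0) * i / steps  # preset magnification ratio
        sequence.append([(cx + (x - cx) * s, cy + (y - cy) * s)
                         for x, y in initial_points])
    return sequence

frames = enlarged_pattern_sequence([(0, 0), (1, 0), (0.5, 1)])
# frames[0] is slightly enlarged, frames[-1] is at 2x scale
```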
- In one embodiment, adjusting the pattern to be projected according to the overlapping area includes: determining the overlapping pattern to be projected, where the overlapping pattern refers to the initial pattern to be projected or the enlarged pattern to be projected that overlaps the obstacle area; determining the curve intersection points between that pattern and the obstacle area and the two remaining curve segments lying outside the overlapping area; determining the boundary intersection point, which refers to the intersection between the mid-perpendicular line and the edge of the obstacle area and is located in the curve overlapping area; and adjusting the pattern to be projected according to the two remaining curve segments, the curve intersection points and the boundary intersection point.
- In one embodiment, adjusting the pattern to be projected according to the overlapping area includes: determining the overlapping pattern to be projected, which includes the overlapping area that coincides with the obstacle area and the remaining regions that do not overlap with it; and obtaining the adjusted pattern to be projected from the two remaining curve segments, the curve intersection points and the boundary intersection point, where the pattern formed by connecting the two remaining curve segments with connecting line segments is recorded as the adjusted pattern to be projected.
- In one embodiment, after comparing the vertical distance with a preset distance threshold, the method further includes: determining an adjusted color parameter of the pattern to be projected according to the position distance, and projecting the adjusted pattern to be projected according to the color parameter.
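- One plausible reading of this distance-to-color mapping is sketched below as a red-to-green gradient; the thresholds and the gradient itself are assumptions, since the application only says the color parameter is determined according to the position distance.

```python
def color_for_distance(distance_m, near_m=0.5, far_m=3.0):
    """Map the robot-to-obstacle distance onto a red->green gradient:
    red when close (urgent), green when far. Thresholds are illustrative."""
    t = max(0.0, min(1.0, (distance_m - near_m) / (far_m - near_m)))
    return (int(255 * (1 - t)), int(255 * t), 0)  # (R, G, B)

print(color_for_distance(0.6))  # mostly red: obstacle is close
print(color_for_distance(2.8))  # mostly green: obstacle is far
```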
- At least some of the steps in FIG. 2, FIGS. 5 to 8, FIGS. 12 to 14, and FIGS. 16 to 17 may include multiple sub-steps or stages. These sub-steps or stages are not necessarily completed at the same time and may be performed at different times; their execution order is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
- an interaction device for a mobile robot which includes an acquisition module, a path module, a determination module, and a projection module, specifically:
- the acquisition module is used to acquire the map data information of the space where the mobile robot is located and the real-time environment perception data collected by the environment perception sensor.
- the real-time environment perception data includes real-time obstacle information and real-time indication information for indicating road conditions around the mobile robot;
- the path module is used to obtain the target driving path information of the mobile robot based on real-time obstacle information and map data information, and determine the ground projection area according to the target driving path information and real-time indication information;
- the determination module is used to obtain the pattern to be projected, and to determine the projection parameters corresponding to the pattern to be projected according to the pattern to be projected and the ground projection area, the pattern to be projected being used to indicate the driving intention of the mobile robot;
- the projection module is used to control the projection device according to the projection parameters to project the pattern to be projected onto the ground projection area.
- the device also includes a map module, which is specifically used for:
- determine the spatial coordinate information of the space where the mobile robot is located according to the historical environment perception data, create a map of the space according to the spatial coordinate information, and use the data information of the map as the map data information.
- the environmental perception sensor includes a radar device and a camera device
- the acquisition module is used for:
- Real-time obstacle identification information and real-time distance information are used as real-time obstacle information, and road surface shape information and real-time obstacle distribution information are used as real-time indication information.
- the path module is used to:
- determine the real-time position of the mobile robot and the position of the obstacle according to the map data information and the real-time obstacle information;
- obtain the target end position of the mobile robot, determine the shortest path information from the real-time position to the target end position based on the real-time position and the position of the obstacle, and use the shortest path information as the target driving path information of the mobile robot.
- the determining module is used to:
- the projection angle corresponding to each pixel, the projection time corresponding to each pixel, and the projection color corresponding to each pixel are used as projection parameters of the projection device.
- the projection device includes a vibrating mirror, a visible light laser and a lens, and the projection module is used for:
- the projection device is adjusted according to the rotation angle of the galvanometer corresponding to each pixel, the laser emission information corresponding to each pixel, and the laser synthesis information of the lens corresponding to each pixel, so as to project the pattern to be projected onto the ground projection area.
- the path module is also specifically used for:
- the ground projection area is determined according to the target driving path information.
- the preset projection conditions include at least one of the following conditions:
- the driving direction of the mobile robot changes within a future preset time period; the driving state of the mobile robot is paused; there are pedestrians around the mobile robot; or the mobile robot is currently in the running state (a condition-check sketch follows below).
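- A condition check of this kind reduces to an "any of" test; the sketch below is illustrative only, and the predicate names on the robot object are assumptions rather than an interface defined here:

```python
def should_project(robot, horizon_s=3.0):
    """True when at least one preset projection condition holds (a sketch;
    `robot` and its predicate methods are hypothetical)."""
    return (
        robot.heading_changes_within(horizon_s)  # direction change in a future preset period
        or robot.is_paused()                     # driving state is paused
        or robot.pedestrians_nearby()            # pedestrians around the robot
        or robot.is_running()                    # robot is currently running
    )
```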
- the determination module is specifically used for:
- when the preset projection condition is that the mobile robot is currently running, it is judged, according to the target driving path information, whether the pattern currently projected by the mobile robot can reflect the driving intention of the mobile robot; if so, the currently projected pattern is used as the pattern to be projected;
- if not, the pattern to be projected is generated according to the driving intention of the mobile robot.
- the determination module is also specifically used for:
- if the real-time obstacle information indicates that the obstacles around the mobile robot are moving obstacles, the step of judging, according to the target driving path information, whether the currently projected pattern of the mobile robot can reflect the driving intention of the mobile robot is performed.
- the interaction device of the mobile robot may also include:
- the obstacle area acquisition module is used to project the pattern to be projected in real time during operation and to obtain the obstacle area existing on the road surface during operation;
- an overlapping area detection module configured to detect whether there is an overlapping area between the pattern to be projected and the obstacle area, and, when there is an overlapping area between the pattern to be projected and the obstacle area, to adjust the pattern to be projected according to the overlapping area so that no overlapping area remains between the pattern to be projected and the obstacle area.
- a mobile robot in an embodiment of the present application includes a projection device, an environment perception sensor, and a processor;
- the environment perception sensor is used to collect real-time environment perception data, and the real-time environment perception data includes real-time obstacle information and real-time indication information for indicating road conditions around the mobile robot;
- the processor is used to obtain the map data information of the space where the mobile robot is located and the real-time environment perception data; obtain the target driving path information of the mobile robot based on the real-time obstacle information and the map data information; determine the ground projection area according to the target driving path information and the real-time indication information; obtain the pattern to be projected and determine the projection parameters corresponding to the pattern to be projected according to the pattern to be projected and the ground projection area, where the pattern to be projected is used to indicate the driving intention of the mobile robot; and control the projection device according to the projection parameters so that the pattern to be projected is projected onto the ground projection area;
- the projection device is used for projecting the pattern to be projected onto the ground projection area.
- the processor is further configured to:
- the environment perception sensor includes a radar device and a camera device;
- the radar device is used to collect real-time distance information between obstacles and the mobile robot;
- the camera device is used to collect real-time obstacle identification information, road shape information of the road surface around the mobile robot, and real-time obstacle distribution information of the road surface around the mobile robot;
- the processor is used to acquire the real-time distance information and the real-time obstacle identification information and use them as the real-time obstacle information, and to acquire the road surface shape information and the real-time obstacle distribution information and use them as the real-time indication information.
- the processor is used to:
- determine the real-time position of the mobile robot and the positions of obstacles according to the map data information and the real-time obstacle information; obtain the target end position of the mobile robot, determine the shortest path information from the real-time position to the target end position based on the real-time position and the positions of the obstacles, and use the shortest path information as the target driving path information of the mobile robot.
- the processor is used to: determine, for each pixel in the pattern to be projected and according to the ground projection area, the projection angle, projection time, and projection color corresponding to that pixel, and use these as the projection parameters corresponding to the pattern to be projected.
- the projection device includes a vibrating mirror, a visible light laser and a lens, and the processor is used for:
- the projection order of the pixels is determined according to the projection time corresponding to each pixel; following the projection order of the pixels, the projection device is adjusted according to the rotation angle of the galvanometer corresponding to each pixel, the laser emission information corresponding to each pixel, and the laser synthesis information of the lens corresponding to each pixel, so as to project the pattern to be projected onto the ground projection area;
- the projection device is used to project each pixel onto the ground projection area in the projection order of the pixels, according to the rotation angle of the galvanometer, the laser emission information, and the laser synthesis information of the lens corresponding to each pixel.
- the processor is further configured to:
- the preset projection conditions include at least one of the following conditions: the driving direction of the mobile robot changes within a future preset time period, the driving state of the mobile robot is paused, there are pedestrians around the mobile robot, and the mobile robot is currently running; if the judgment result is that the preset projection conditions are met, the ground projection area is determined according to the target driving path information.
- the processor is also used to:
- when the preset projection condition is that the mobile robot is currently running, it is judged, according to the target driving path information, whether the pattern currently projected by the mobile robot can reflect the driving intention of the mobile robot; if so, the pattern currently projected by the mobile robot is used as the pattern to be projected; if not, the pattern to be projected is generated according to the driving intention of the mobile robot.
- the processor is also specifically configured to:
- if the real-time obstacle information indicates that the obstacles around the mobile robot are moving obstacles, the step of judging, according to the target driving path information, whether the currently projected pattern of the mobile robot can reflect the driving intention of the mobile robot is performed.
- the mobile robot further includes a memory, and the memory stores computer-readable instructions that can run on the processor; the processor implements the following steps when executing the computer-readable instructions.
- Step 105: Project the pattern to be projected in real time during the running process, and obtain the obstacle area existing on the road surface during the running process.
- the pattern to be projected is a pattern that characterizes the robot's traveling intention, and it can be a curve to be projected, a straight line to be projected, an image to be projected, and so on; the pattern can be projected in real time by a laser device, for example onto the road ahead of the robot or onto equipment on the road ahead of the robot.
- the pattern to be projected is formed by presetting a certain number of points in the forward direction of the robot and connecting these points with a curve or a straight line to form a coherent graphic.
- the pattern to be projected is a curve obtained by connecting a preset number of curve nodes with a Bezier curve.
- the preset number can be set according to specific requirements; for example, it can be set to 5, 7, 9, 10, and so on (a curve-sampling sketch follows below).
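- As a sketch of how such a curve can be sampled for projection, the de Casteljau evaluation below turns the preset curve nodes into a dense polyline; the node list and sample count are illustrative assumptions:

```python
def bezier_point(nodes, t):
    """De Casteljau evaluation of a Bezier curve at parameter t in [0, 1];
    `nodes` is the preset list of (x, y) curve nodes."""
    pts = list(nodes)
    while len(pts) > 1:  # repeatedly interpolate between neighbouring points
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]

def bezier_polyline(nodes, samples=50):
    """Sample the curve densely enough to drive the projector along it."""
    return [bezier_point(nodes, i / (samples - 1)) for i in range(samples)]
```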
- the running process may include: the process of the robot moving; the waiting process when the robot stops after encountering an obstacle during movement; the process in which the robot, after starting, remains fixed at a certain place without moving; and so on.
- the pattern to be projected may specifically be a traveling instruction map.
- the obstacle area is an area including the obstacle information detected during the robot's travel, where the obstacle information includes static obstacle information and dynamic obstacle information: static obstacle information refers to the location information of static obstacles (for example, non-movable obstacles such as tables, chairs, and lockers in a meal-delivery robot scene), and dynamic obstacle information refers to the location information of dynamic obstacles (such as pedestrians, other robots, and other objects that can move by themselves).
- in step 105, obtaining the obstacle area existing on the road surface during operation includes:
- obstacle information is collected in real time during operation, and pixel information corresponding to the obstacle information is mapped into a preset projection map.
- the obstacle detection device may be a laser radar sensor, an RGBD (RGB-Depth) camera, or an ultrasonic sensor.
- each piece of obstacle information needs to be mapped into pixel information in the preset projection map; that is, one piece of obstacle information corresponds to one piece of pixel information (a coordinate-mapping sketch follows below).
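- A one-to-one mapping of this kind is typically just a frame conversion; the sketch below assumes a map origin and a resolution in metres per pixel, both of which are hypothetical parameters:

```python
def world_to_pixel(x, y, origin, resolution):
    """Map a world-frame obstacle position to one cell (one piece of pixel
    information) of the preset projection map."""
    ox, oy = origin  # world position of the map's (0, 0) pixel
    return int((x - ox) / resolution), int((y - oy) / resolution)
```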
- the preset projection map can be displayed in the projection display interface set on the robot.
- the information of each obstacle can be represented by pixel information, and when the robot collects obstacle information, it is updated synchronously to the preset projection map.
- the projection display interface is a display screen arranged at the front end or rear end of the robot, and the display screen can be a touch screen or a dot matrix screen, so that preset projection images and obstacle information can be displayed on the projection display interface.
- the preset shape can be set as an oval, a circle, a square, an irregular figure, or another shape.
- the preset shape is set as a circle, and the smallest area is the circular area of smallest size that contains all the pixel information. If the area is set too large, it causes data redundancy, and the robot interacts with the obstacle too early, before it actually approaches the obstacle, which reduces the accuracy of the robot's interaction (a circle-fitting sketch follows below).
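- Computing the exact smallest enclosing circle needs an algorithm such as Welzl's; the cheap approximation below (centroid centre, radius to the farthest pixel) is a sketch that slightly over-estimates the minimum circle:

```python
from math import hypot

def approx_obstacle_circle(pixels):
    """Approximate circular obstacle area containing all pixel information.
    `pixels` is a non-empty list of (x, y) obstacle pixels."""
    cx = sum(x for x, _ in pixels) / len(pixels)
    cy = sum(y for _, y in pixels) / len(pixels)
    radius = max(hypot(x - cx, y - cy) for x, y in pixels)
    return (cx, cy), radius
```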
- the pattern to be projected includes an initial pattern to be projected and different enlarged patterns to be projected that are generated at different times with different magnification ratios; projecting the pattern to be projected in real time during operation includes:
- the enlarged patterns to be projected in this embodiment are different enlarged patterns obtained from the initial pattern to be projected at different times and with different magnification ratios; the number of enlarged patterns can be two, three, and so on, and is not limited here. It should be noted that, assuming the first enlarged pattern is obtained by doubling the initial pattern, even if the second enlarged pattern is then obtained by doubling the first enlarged pattern, it is in essence obtained by magnifying the initial pattern to be projected.
- the initial pattern to be projected is projected, and the generated enlarged patterns to be projected are projected at the different times in arrangement with the initial pattern to be projected.
- the magnification ratio can be selected according to specific magnification requirements.
- the pattern to be projected includes at least one of an initial pattern to be projected and an enlarged pattern to be projected, where the enlarged pattern is formed by enlarging the initial pattern according to a preset magnification ratio. As shown in Figure 17, projecting the pattern to be projected in real time during operation specifically includes:
- the initial pattern to be projected is gradually enlarged according to a preset magnification ratio to form the enlarged pattern to be projected, and at least one of the initial pattern to be projected and the enlarged pattern to be projected is projected in real time during operation.
- gradually enlarging the initial pattern to be projected according to the preset magnification ratio to form the enlarged pattern to be projected, and projecting at least one of the initial pattern to be projected and the enlarged pattern to be projected, includes:
- Step 107: Obtain an initial pattern to be projected.
- the pattern to be projected includes at least one of an initial pattern to be projected and an enlarged pattern to be projected; if the initial pattern does not appear in the pattern projected at the current moment, it will appear at a later moment.
- the initial pattern to be projected is stored in the memory of the robot.
- Step 108: Enlarge the initial pattern to be projected according to a preset magnification ratio to form an enlarged pattern to be projected.
- the preset magnification ratio can be set according to specific magnification requirements, and it can be a fixed value or a variable value. It should be noted that, in this embodiment, there is an enlargement boundary for enlarging the initial pattern to be projected; that is, after the initial pattern has been gradually enlarged to a certain multiple (such as three, four, or five times), the enlarging process stops.
- the preset magnification ratio may be a fixed value; for example, it can be set to 20%, 30%, 40%, or 50%. Assuming the preset magnification ratio is set to 20%, after the first enlarged pattern is obtained at the current moment by enlarging the initial pattern by 20%, at the next moment the first enlarged pattern is enlarged by a further 20% to obtain the second enlarged pattern, which is thus 40% larger than the initial pattern.
- the preset magnification ratio may also be a variable value; for example, it can be set to 10% (the magnification ratio of the first enlarged pattern relative to the initial pattern), 15% (the second enlarged pattern relative to the initial pattern), 20% (the third enlarged pattern relative to the initial pattern), and 25% (the fourth enlarged pattern relative to the initial pattern); a generation sketch follows below.
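- As an illustration of both schedules, the sketch below scales the initial pattern about its centroid, with each ratio taken relative to the initial pattern as in the examples above; the scaling centre is an assumption, since the embodiments do not fix one:

```python
def enlarged_patterns(initial, ratios=(0.10, 0.15, 0.20, 0.25)):
    """Generate enlarged patterns from an initial list of (x, y) points.
    ratios=(0.2, 0.4, 0.6) would reproduce the fixed 20% schedule, since
    each ratio here is relative to the initial pattern."""
    cx = sum(x for x, _ in initial) / len(initial)
    cy = sum(y for _, y in initial) / len(initial)
    patterns = []
    for r in ratios:
        s = 1.0 + r  # e.g. a 20% enlargement scales by a factor of 1.2
        patterns.append([(cx + s * (x - cx), cy + s * (y - cy)) for x, y in initial])
    return patterns
```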
- Step 109: Display at least one of the initial pattern to be projected and the enlarged pattern to be projected in time sequence.
- there may be one, two, or more enlarged patterns to be projected.
- at least one of the initial pattern to be projected and the enlarged pattern to be projected is projected in chronological order (that is, displayed sequentially); at a given moment, only the initial pattern or only an enlarged pattern may be displayed, or the initial pattern and the enlarged patterns may be displayed together at each moment.
- at least one of the initial pattern to be projected and the enlarged pattern to be projected is displayed sequentially.
- the initial pattern to be projected may be displayed at one moment, one of the enlarged patterns at the next moment, and another enlarged pattern at the moment after that, with this sequence cycled in turn. Alternatively, the initial pattern may be displayed at one moment, the initial pattern together with one enlarged pattern at the next moment, and the initial pattern together with two enlarged patterns at the moment after that.
- the display method is explained below using three stepwise enlargements as an example.
- the pattern obtained after the first enlargement is referred to as the first enlarged pattern to be projected, the pattern obtained after the second enlargement as the second enlarged pattern to be projected, and the pattern obtained after the third enlargement as the third enlarged pattern to be projected.
- Example 1 of dynamically displaying the pattern to be projected:
- the initial pattern to be projected is displayed at the first moment
- the enlarged pattern to be projected after the first enlargement is displayed at the second moment
- the enlarged pattern to be projected after the second enlargement is displayed at the third moment
- the enlarged pattern to be projected after the third enlargement is displayed at the fourth moment.
- the patterns displayed at the above four moments, from the initial pattern to the enlarged patterns, are cycled in sequence until an obstacle is encountered or the robot's moving direction changes, at which point the pattern to be projected is deformed.
- Example 2 of dynamically displaying the pattern to be projected:
- in Example 2, the display from the first moment to the fourth moment is the same as in Example 1; the enlarged pattern displayed at the fourth moment then remains displayed until an obstacle is encountered or the robot's moving direction changes, at which point the pattern to be projected deforms.
- Example 3 of dynamically displaying the pattern to be projected: the initial pattern to be projected is displayed at the first moment; the second moment displays the initial pattern and the first enlarged pattern; the third moment displays the initial pattern, the first enlarged pattern, and the second enlarged pattern; and the fourth moment displays the initial pattern, the first enlarged pattern, the second enlarged pattern, and the third enlarged pattern. There is no required order between the initial pattern and the enlarged patterns when they are displayed together: the initial pattern may be displayed first and then the enlarged patterns, or the enlarged patterns first and then the initial pattern.
- the patterns displayed at the above four moments, whether the initial pattern alone or the initial pattern together with each enlarged pattern, are cycled in sequence until an obstacle is encountered or the robot's moving direction changes, at which point the pattern to be projected is deformed.
- Example 4 of dynamically displaying the pattern to be projected:
- in Example 4, the display from the first moment to the fourth moment is the same as in Example 3; the initial pattern and each enlarged pattern displayed at the fourth moment continue to be displayed at the fifth and subsequent moments until an obstacle is encountered or the robot's moving direction changes, at which point the pattern to be projected is deformed (a sequencing sketch follows below).
- Step 106: Detect whether there is an overlapping area between the pattern to be projected and the obstacle area, and when there is an overlapping area, adjust the pattern to be projected according to the overlapping area so that no overlapping area remains between the pattern to be projected and the obstacle area.
- the pattern to be projected includes the initial pattern to be projected and the enlarged patterns to be projected, so when the projected initial pattern or an enlarged pattern overlaps with the obstacle area, it can be determined that there is an overlapping area between the pattern to be projected and the obstacle area.
- when the distance between the robot and the obstacle area is relatively large, there may be no overlapping area between the initial pattern and the obstacle area, while an enlarged version of the initial pattern, that is, an enlarged pattern to be projected, may overlap the obstacle area; for example, when there is an intersection area between an enlarged pattern and the obstacle area, that intersection area is the overlapping area.
- when the distance between the robot and the obstacle area is relatively small, there may be an overlapping area between the initial pattern itself and the obstacle area; therefore, in this embodiment, whenever the initial pattern or an enlarged pattern overlaps with the obstacle area, it is determined that there is an overlapping area between the pattern to be projected and the obstacle area.
- B1 is the initial pattern to be projected, B2 is the enlarged pattern obtained after the first enlargement, and B3 is the enlarged pattern obtained after the second enlargement; at this moment, neither the initial pattern nor the enlarged patterns overlap with the obstacle area, so it can be determined that there is no overlapping area between the pattern to be projected and the obstacle area.
- the initial pattern to be projected may specifically be an initial indication image; the enlarged pattern to be projected may specifically be an enlarged indication image.
- the obstacle area is mapped into the preset projection map; if the pattern that the robot needs to project in real time while traveling is also mapped into the preset projection map, the robot's current real-time position can likewise be mapped into the preset projection map together with the position information of the obstacle area, and then whether there is an overlapping area between the pattern to be projected at the current real-time position and the obstacle area can be simulated in the preset projection map (a membership-test sketch follows below).
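- With everything in the same map frame, the overlap test itself is a membership check against the circular obstacle area; the sketch below samples the pattern as points and is illustrative only:

```python
from math import hypot

def overlap_region(pattern_pts, center, radius):
    """Return the sampled pattern points that fall inside the circular
    obstacle area; a non-empty result means an overlapping area exists."""
    cx, cy = center
    return [(x, y) for x, y in pattern_pts if hypot(x - cx, y - cy) <= radius]
```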
- the current real-time position of the robot and the real position of the obstacle area can be displayed in the preset projection map at their real scale, or they can be mapped into the preset projection map at a certain ratio; this is not limited here.
- the pattern to be projected needs to be adjusted according to the overlapping area so that no overlapping area remains between the pattern to be projected and the obstacle area, thereby realizing the interaction between the robot and humans.
- in step 106, adjusting the pattern to be projected according to the overlapping area includes:
- the overlapping pattern to be projected refers to the initial pattern to be projected or the enlarged pattern to be projected that overlaps the obstacle area.
- "curve" is here a general term covering both straight and non-straight lines.
- non-straight lines can be wavy lines, curved lines, etc.
- the initial pattern to be projected may be composed of straight line segments, non-straight line segments, or a combination of straight line segments and non-straight line segments.
- the overlapping pattern to be projected may specifically be an overlapping indication image.
- the obstacle area in this application is a circular obstacle area, that is, the circular area of smallest size that includes all the obstacle information (such as A1 in FIG. 19).
- in the overlapping area, the two curve intersection points between the overlapping pattern to be projected and the obstacle area are determined (such as curve intersection point 1 and curve intersection point 2 in Figure 19); these are the two points where the boundary of the overlapping pattern intersects the circular obstacle area.
- since the pattern to be projected is composed of a preset number of curve nodes, after the two curve intersection points between the overlapping pattern and the obstacle area are determined, the line segment of the overlapping pattern lying between the two intersection points is deleted (the dotted line segment inside obstacle area A1 in the overlapping pattern L5 in Figure 19); that is, all curve nodes that lie between the two intersection points and belong to the overlapping pattern (all the nodes on the dotted segment inside A1 in Figure 19) are deleted, yielding one remaining curve segment ending at one of the intersection points (L1 in FIG. 19) and another remaining curve segment starting at the other intersection point (L2 in FIG. 19).
- L3 in FIG. 19 is the connecting line between the two curve intersection points, that is, the line segment between the two curve intersection points.
- L4 is the mid-perpendicular (perpendicular bisector) corresponding to the line connecting the two curve intersection points.
- the mid-perpendicular intersection point (such as intersection point 4 in Figure 19) is the point where the mid-perpendicular crosses the line connecting the two curve intersection points.
- the boundary intersection point (3 in Figure 19) refers to the intersection point between the mid-perpendicular and the edge of the obstacle area, and the boundary intersection point is located within the curve overlapping area (A2 in Figure 19).
- the preset distance threshold can be set according to the desired visual effect of the pattern projected in real time; for example, it can be set to the radius of the obstacle area (as noted above, the obstacle area in this application is a circular area).
- when the vertical distance is less than or equal to the preset distance threshold, the pattern to be projected is adjusted according to the two remaining curve segments, the curve intersection points, and the boundary intersection point.
- the two remaining curve segments are connected through the two curve intersection points and the boundary intersection point, and the two curve intersection points and the boundary intersection point are themselves connected as a curve (the curvature of this curve can be determined from the obstacle area, that is, the connection is made so that the curve does not overlap the obstacle area), yielding the adjusted pattern to be projected (L6 in FIG. 19); if the vertical distance is greater than the preset distance threshold, projection of the pattern to be projected is stopped (a geometry sketch follows below).
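- Under the assumption that the two curve intersection points lie on the edge of the circular obstacle area, the quantities used above (the mid-perpendicular foot, the boundary intersection point, and the vertical distance between them) follow from elementary chord geometry; a sketch:

```python
from math import hypot

def detour_geometry(p1, p2, center, radius):
    """Geometry for rerouting around a circular obstacle area. p1 and p2 are
    the two curve intersection points on the circle edge; returns the
    mid-perpendicular foot on their connecting line, the boundary
    intersection point on the circle, and the vertical distance between them."""
    mx, my = (p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2  # foot of the mid-perpendicular
    cx, cy = center
    d = hypot(mx - cx, my - cy)  # the mid-perpendicular of a chord passes through the centre
    if d == 0:
        ux, uy = 1.0, 0.0  # chord through the centre: any radial direction works
    else:
        ux, uy = (mx - cx) / d, (my - cy) / d
    boundary = (cx + radius * ux, cy + radius * uy)  # intersection with the circle edge
    return (mx, my), boundary, radius - d  # vertical distance = radius - d
```

- If the returned vertical distance is within the preset threshold, the remaining segments are joined through the first intersection point, the boundary intersection point, and the second intersection point; otherwise projection stops, as described above.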
- adjusting the pattern to be projected according to the overlapping area in step 106 includes: recording the initial pattern to be projected or the enlarged pattern to be projected that overlaps with the obstacle area as the overlapping pattern to be projected, where the overlapping pattern includes an overlapping area that overlaps the obstacle area and a remaining area that does not overlap the obstacle area;
- the overlapping area of the overlapping pattern to be projected is deleted to obtain the adjusted pattern to be projected.
- for example, suppose only the initial pattern is displayed at the first moment, only the first enlarged pattern at the second moment, and the second enlarged pattern at the third moment, with the second enlarged pattern continuing to be displayed at the fourth and subsequent moments. If the second enlarged pattern is an overlapping pattern to be projected, the overlapping area in the second enlarged pattern is deleted to obtain the adjusted pattern to be projected, and the adjusted pattern is displayed at the subsequent moments.
- the overlapping to-be-projected pattern is reduced by a preset ratio so that the overlapping to-be-projected pattern is tangent to the edge of the obstacle area, so as to obtain an adjusted to-be-projected pattern.
- suppose the pattern to be projected displays only the initial pattern at the first moment, only the first enlarged pattern at the second moment, and the second enlarged pattern at the third moment, with this sequence cycled at subsequent moments.
- the second enlarged pattern, the first enlarged pattern, or the initial pattern may each become an overlapping pattern to be projected, and the overlapping pattern is reduced by a preset ratio so that it is tangent to the edge of the obstacle area.
- the preset reduction ratio is a variable; that is, the preset reduction ratios corresponding to the overlapping patterns at different moments differ.
- the preset ratio is calculated based on the degree of overlap between the overlapping pattern to be projected and the obstacle area (a tangency sketch follows below).
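- One possible reading of "calculated based on the degree of overlap", approximating the overlapping pattern by its own enclosing circle, is sketched below; this is an assumption for illustration, not a formula from the embodiments:

```python
from math import hypot

def tangent_scale(pattern_center, pattern_radius, obs_center, obs_radius):
    """Reduction factor that shrinks the overlapping pattern until it is
    externally tangent to the circular obstacle area; a deeper overlap
    yields a smaller factor."""
    gap = hypot(pattern_center[0] - obs_center[0],
                pattern_center[1] - obs_center[1]) - obs_radius
    return max(0.0, min(1.0, gap / pattern_radius))
```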
- FIG. 20 is a schematic diagram of an overlapping area between the mobile robot and the obstacle area during motion.
- the obstacle area A1 is in front of the moving direction of the robot.
- the patterns projected by the robot in real time during operation include the initial pattern to be projected and the enlarged patterns obtained by gradually enlarging it.
- the initial pattern and the enlarged patterns are displayed dynamically; as long as the initial pattern or any enlarged pattern overlaps with the obstacle area, it is determined that there is an overlapping area between the pattern projected by the robot in real time and the obstacle area.
- the enlarged pattern to be projected B4, together with the enlarged patterns or the initial pattern projected before it, all lie between the robot and the obstacle area A1.
- the processor further implements the following steps when executing the computer-readable instructions:
- the current position information refers to the real-time position information of the robot; after the current position information of the robot is obtained in real time, the position distance between the robot and the obstacle area is determined according to the current position information.
- the color parameter of the adjusted pattern to be projected is determined according to the position distance, and the adjusted pattern is projected according to that color parameter.
- the color parameter may include the type of color and the depth of color, etc.
- when the robot is far from the obstacle area, the pattern to be projected can be displayed in a light color, and as the robot approaches, the color of the pattern gradually becomes darker; for example, at a large distance the color parameter can select a light blue laser beam of shallow depth, and at a small distance a deep red laser beam (an interpolation sketch follows below).
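- A distance-to-colour mapping of this kind can be a simple linear blend; the endpoint colours follow the light-blue/deep-red example above, while the distance thresholds below are illustrative assumptions:

```python
def color_for_distance(distance_m, near_m=0.5, far_m=3.0):
    """RGB colour for the projected pattern: light blue when far from the
    obstacle area, deep red when near (thresholds are hypothetical)."""
    t = (far_m - min(max(distance_m, near_m), far_m)) / (far_m - near_m)
    light_blue, deep_red = (173, 216, 230), (139, 0, 0)
    return tuple(round(a + t * (b - a)) for a, b in zip(light_blue, deep_red))
```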
- the projection position of the adjusted pattern to be projected is also determined according to the position distance.
- the pattern to be projected is adjusted according to the overlapping area of the curve, so that the pattern projected by the robot is dynamically deformed according to the obstacle area; moreover, the adjusted pattern does not overlap the obstacle area, thereby realizing information interaction between the robot and the obstacle and improving the efficiency and accuracy of the information interaction between the robot and humans.
- when the processor executes the computer-readable instructions, the following steps are also implemented:
- the overlapping area is updated according to the position distance.
- the overlapping position information refers to the current real-time position information of the robot when the initial pattern to be projected or the enlarged pattern to be projected overlaps with the obstacle area.
- the pattern projected by the robot in real time is gradually enlarged from the initial pattern; that is, it includes the initial pattern and the enlarged patterns, projected cyclically in real time, with a certain interval between successive patterns (between the initial pattern and an enlarged pattern, or between different enlarged patterns). As the robot's position differs, the part of the projected pattern that overlaps the obstacle area may therefore differ, so the overlapping area needs to be updated at the different positions.
- when the initial pattern to be projected or an enlarged pattern to be projected overlaps with the obstacle area, the current position of the robot, that is, the overlapping position information, is obtained, and the position distance between the robot and the obstacle area is determined according to the overlapping position information.
- the overlapping areas between the robot and the obstacle area differ at different positions; for the same pattern to be projected (initial or enlarged), as the robot gets closer to the obstacle area, the overlapping area between the pattern and the obstacle area may become larger. The overlapping area can therefore be updated in real time, and the pattern to be projected is then adjusted according to the updated overlapping area, making the interaction between the robot and humans more flexible and accurate.
- when the processor executes the computer-readable instructions, the following steps are also implemented:
- the area size of the overlapping area is determined according to the magnification.
- the magnification refers to the magnification of an enlarged pattern relative to the initial pattern; for example, assuming the preset magnification ratio is 20%, the magnification of the first enlarged pattern relative to the initial pattern is 20%, and the magnification of the second enlarged pattern relative to the initial pattern is 40%.
- at different magnifications, the overlapping area between the corresponding enlarged pattern and the obstacle area differs.
- the size of the overlapping area can therefore be determined according to the magnification; when a given pattern to be projected (the initial pattern or an enlarged pattern) overlaps the obstacle area, the size of the overlapping area is adjusted according to the magnification and the overlapping area is updated in real time, so that the pattern to be projected can be adjusted according to the updated overlapping area, making the interaction between the robot and humans more flexible and accurate.
- a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the steps in the foregoing method embodiments are implemented.
- any references to memory, storage, database or other media used in the various embodiments provided in the present application may include at least one of non-volatile memory and volatile memory.
- Non-volatile memory may include read-only memory (Read-Only Memory, ROM), magnetic tape, floppy disk, flash memory or optical memory, etc.
- Volatile memory can include Random Access Memory (RAM) or external cache memory.
- RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM).
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- General Physics & Mathematics (AREA)
- Aviation & Aerospace Engineering (AREA)
- Automation & Control Theory (AREA)
- Computer Networks & Wireless Communication (AREA)
- Electromagnetism (AREA)
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
Abstract
Description
Claims (25)
- An interaction method for a mobile robot, the mobile robot being provided with a projection device and an environment perception sensor, the method comprising: acquiring map data information of the space where the mobile robot is located, and acquiring real-time environment perception data collected by the environment perception sensor, the real-time environment perception data including real-time obstacle information and real-time indication information for indicating the road conditions around the mobile robot; obtaining target driving path information of the mobile robot based on the real-time obstacle information and the map data information, and determining a ground projection area according to the target driving path information and the real-time indication information; obtaining a pattern to be projected, and determining projection parameters corresponding to the pattern to be projected according to the pattern to be projected and the ground projection area, the pattern to be projected being used to indicate the driving intention of the mobile robot; and controlling the projection device according to the projection parameters to project the pattern to be projected onto the ground projection area.
- The interaction method according to claim 1, wherein before acquiring the map data information of the space where the mobile robot is located and the real-time environment perception data collected by the environment perception sensor, the method further comprises: acquiring historical environment perception data collected by the environment perception sensor when the environment of the space where the mobile robot is located satisfies a preset environment condition; determining spatial coordinate information of the space where the mobile robot is located according to the historical environment perception data, and creating a map of the space according to the spatial coordinate information; and using the data information of the map as the map data information.
- The interaction method according to claim 2, wherein the environment perception sensor comprises a radar device and a camera device, and acquiring the real-time environment perception data collected by the environment perception sensor comprises: acquiring real-time distance information between obstacles and the mobile robot collected by the radar device; acquiring real-time obstacle identification information, road surface shape information of the road surface around the mobile robot, and real-time obstacle distribution information of the road surface around the mobile robot collected by the camera device; and using the real-time obstacle identification information and the real-time distance information as the real-time obstacle information, and using the road surface shape information and the real-time obstacle distribution information as the real-time indication information.
- The interaction method according to claim 1, wherein obtaining the target driving path information of the mobile robot based on the real-time obstacle information and the map data information comprises: determining the real-time position of the mobile robot and the positions of obstacles according to the map data information and the real-time obstacle information; and obtaining a target end position of the mobile robot, determining shortest path information from the real-time position to the target end position based on the real-time position and the positions of the obstacles, and using the shortest path information as the target driving path information of the mobile robot.
- The interaction method according to claim 1, wherein determining the projection parameters corresponding to the pattern to be projected according to the pattern to be projected and the ground projection area comprises: for each pixel in the pattern to be projected, determining, according to the ground projection area, the projection angle corresponding to the pixel, the projection time corresponding to the pixel, and the projection color corresponding to the pixel; and using the projection angle, projection time, and projection color corresponding to each pixel as the projection parameters corresponding to the pattern to be projected.
- The interaction method according to claim 5, wherein the projection device comprises a galvanometer, a visible light laser, and a lens, and controlling the projection device according to the projection parameters to project the pattern to be projected onto the ground projection area comprises: determining the rotation angle of the galvanometer corresponding to each pixel according to the projection angle corresponding to the pixel, and determining the laser emission information of the visible light laser and the laser synthesis information of the lens corresponding to each pixel according to the projection color corresponding to the pixel; determining the projection order of the pixels according to the projection time corresponding to each pixel; and, following the projection order of the pixels, adjusting the projection device according to the rotation angle of the galvanometer corresponding to each pixel, the laser emission information corresponding to each pixel, and the laser synthesis information of the lens corresponding to each pixel, so as to project the pattern to be projected onto the ground projection area.
- The interaction method according to claim 1, wherein before determining the ground projection area according to the target driving path information and the real-time indication information, the interaction method further comprises: judging whether a preset projection condition is satisfied according to the target driving path information and the real-time environment perception data; correspondingly, determining the ground projection area according to the target driving path information comprises: determining the ground projection area according to the target driving path information when the judgment result is that the preset projection condition is satisfied.
- The interaction method according to claim 7, wherein the preset projection condition comprises at least one of the following conditions: the driving direction of the mobile robot changes within a future preset time period; the driving state of the mobile robot is a paused state; there are pedestrians around the mobile robot; and the mobile robot is currently in a running state.
- The interaction method according to claim 8, wherein, when the preset projection condition is that the mobile robot is currently in a running state, obtaining the pattern to be projected comprises: judging, according to the target driving path information, whether the pattern currently projected by the mobile robot can reflect the driving intention of the mobile robot; if so, using the pattern currently projected by the mobile robot as the pattern to be projected; if not, generating the pattern to be projected according to the driving intention of the mobile robot.
- The interaction method according to claim 9, wherein judging, according to the target driving path information, whether the pattern currently projected by the mobile robot can reflect the driving intention of the mobile robot comprises: if the real-time obstacle information indicates that the obstacles around the mobile robot are moving obstacles, performing the step of judging, according to the target driving path information, whether the pattern currently projected by the mobile robot can reflect the driving intention of the mobile robot.
- The interaction method according to claim 1, wherein, after controlling the projection device according to the projection parameters to project the pattern to be projected onto the ground projection area, the interaction method further comprises: projecting the pattern to be projected in real time during operation, and obtaining an obstacle area existing on the road surface during operation; and detecting whether there is an overlapping area between the pattern to be projected and the obstacle area, and when there is an overlapping area between the pattern to be projected and the obstacle area, adjusting the pattern to be projected according to the overlapping area so that no overlapping area exists between the pattern to be projected and the obstacle area.
- The interaction method according to claim 11, wherein obtaining the obstacle area existing on the road surface during operation comprises: collecting obstacle information in real time during operation, and mapping pixel information corresponding to the obstacle information in a preset projection map; and determining a minimum-area region from the projection region containing all of the pixel information, and recording the minimum-area region as the obstacle area.
- The interaction method according to claim 11, wherein the pattern to be projected comprises an initial pattern to be projected and different enlarged patterns to be projected generated at different times with different magnification ratios, and projecting the pattern to be projected in real time during operation comprises: projecting the initial pattern to be projected, and projecting the generated enlarged patterns to be projected at the different times in arrangement with the initial pattern to be projected.
- The interaction method according to claim 11, wherein the pattern to be projected comprises at least one of an initial pattern to be projected and an enlarged pattern to be projected, the enlarged pattern to be projected being formed by enlarging the initial pattern to be projected according to a preset magnification ratio, and projecting the pattern to be projected in real time during operation comprises: projecting at least one of the initial pattern to be projected and the enlarged pattern to be projected in real time during operation.
- The interaction method according to claim 13 or 14, wherein adjusting the pattern to be projected according to the overlapping area comprises: determining, in the overlapping area, two curve intersection points between the overlapping pattern to be projected and the obstacle area, the overlapping pattern to be projected referring to the initial pattern to be projected or the enlarged pattern to be projected; deleting the line in the overlapping pattern to be projected that lies between the two curve intersection points, to obtain two remaining curve segments of the overlapping pattern to be projected after the deletion; determining the intersection point of the mid-perpendicular corresponding to the line connecting the two curve intersection points; detecting the vertical distance between the mid-perpendicular intersection point and a boundary intersection point, and comparing the vertical distance with a preset distance threshold, the boundary intersection point referring to the intersection point of the mid-perpendicular with the edge of the obstacle area and being located within the curve overlapping area; and, when the vertical distance is less than or equal to the preset distance threshold, adjusting the pattern to be projected according to the two remaining curve segments, the curve intersection points, and the boundary intersection point, to obtain an adjusted pattern to be projected, wherein no overlapping area exists between the adjusted pattern to be projected and the obstacle area.
- The interaction method according to claim 14, wherein adjusting the pattern to be projected according to the overlapping area comprises: recording an initial pattern to be projected or an enlarged pattern to be projected that has an overlapping area with the obstacle area as an overlapping pattern to be projected, the overlapping pattern to be projected comprising an overlapping area that overlaps the obstacle area and a remaining area that does not overlap the obstacle area; and deleting the overlapping area of the overlapping pattern to be projected, or reducing the overlapping pattern to be projected by a preset ratio so that the overlapping pattern to be projected is tangent to the edge of the obstacle area, to obtain an adjusted pattern to be projected.
- The interaction method according to claim 15, wherein adjusting the pattern to be projected according to the two remaining curve segments, the curve intersection points, and the boundary intersection point to obtain the adjusted pattern to be projected comprises: connecting the two curve intersection points with the boundary intersection point in a preset connection manner to obtain a connecting segment; and recording the pattern to be projected formed by connecting the two remaining curve segments and the connecting segment as the adjusted pattern to be projected.
- The interaction method according to claim 15, wherein, after comparing the vertical distance with the preset distance threshold, the processor further implements the following step when executing the computer-readable instructions: stopping projecting the pattern to be projected when the vertical distance is greater than the preset distance threshold.
- The interaction method according to claim 11, wherein, after adjusting the pattern to be projected according to the overlapping area, the processor further implements the following steps when executing the computer-readable instructions: obtaining current position information of the robot, and determining the position distance between the robot and the obstacle area according to the current position information; and determining the color parameter of the adjusted pattern to be projected according to the position distance, and projecting the adjusted pattern to be projected according to the color parameter.
- An interaction device for a mobile robot, wherein the mobile robot is provided with a projection device and an environment perception sensor, the interaction device comprising: an acquisition module configured to acquire map data information of the space where the mobile robot is located and to acquire real-time environment perception data collected by the environment perception sensor, the real-time environment perception data including real-time obstacle information and real-time indication information for indicating the road conditions around the mobile robot; a path module configured to obtain target driving path information of the mobile robot based on the real-time obstacle information and the map data information, and to determine a ground projection area according to the target driving path information and the real-time indication information; a determination module configured to obtain a pattern to be projected and to determine projection parameters corresponding to the pattern to be projected according to the pattern to be projected and the ground projection area, the pattern to be projected being used to indicate the driving intention of the mobile robot; and a projection module configured to control the projection device according to the projection parameters to project the pattern to be projected onto the ground projection area.
- The interaction device for a mobile robot according to claim 20, further comprising: an obstacle area acquisition module configured to project the pattern to be projected in real time during operation and to obtain an obstacle area existing during operation; and an overlapping area detection module configured to detect whether there is an overlapping area between the pattern to be projected and the obstacle area, and, when there is an overlapping area between the pattern to be projected and the obstacle area, to adjust the pattern to be projected according to the overlapping area so that no overlapping area exists between the pattern to be projected and the obstacle area.
- A mobile robot, comprising a projection device, an environment perception sensor, and a processor; the environment perception sensor being configured to collect real-time environment perception data, the real-time environment perception data including real-time obstacle information and real-time indication information for indicating the road conditions around the mobile robot; the processor being configured to acquire map data information of the space where the mobile robot is located and the real-time environment perception data; obtain target driving path information of the mobile robot based on the real-time obstacle information and the map data information; determine a ground projection area according to the target driving path information and the real-time indication information; obtain a pattern to be projected and determine projection parameters corresponding to the pattern to be projected according to the pattern to be projected and the ground projection area, the pattern to be projected being used to indicate the driving intention of the mobile robot; and control the projection device according to the projection parameters to project the pattern to be projected onto the ground projection area; and the projection device being configured to project the pattern to be projected onto the ground projection area.
- The mobile robot according to claim 22, wherein the processor is further configured to: judge whether a preset projection condition is satisfied according to the target driving path information and the real-time environment perception data, the preset projection condition including at least one of the following conditions: the driving direction of the mobile robot changes within a future preset time period, the driving state of the mobile robot is a paused state, there are pedestrians around the mobile robot, and the mobile robot is currently in a running state; and determine the ground projection area according to the target driving path information when the judgment result is that the preset projection condition is satisfied.
- The mobile robot according to claim 23, wherein the processor is further configured to: when the preset projection condition is that the mobile robot is currently in a running state, judge, according to the target driving path information, whether the pattern currently projected by the mobile robot can reflect the driving intention of the mobile robot; if so, use the pattern currently projected by the mobile robot as the pattern to be projected; if not, generate the pattern to be projected according to the driving intention of the mobile robot.
- A computer-readable storage medium on which a computer program is stored, wherein, when the computer program is executed by a processor, the steps of the interaction method according to any one of claims 1 to 19 are implemented.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP22894848.5A EP4350461A1 (en) | 2021-11-16 | 2022-11-16 | Interaction method and apparatus for mobile robot, and mobile robot and storage medium |
KR1020247001573A KR20240021954A (ko) | 2021-11-16 | 2022-11-16 | 이동 로봇의 인터랙션 방법, 장치, 이동 로봇 및 저장매체(interaction method and apparatus for mobile robot, and mobile robot and storage medium) |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111354791.4A CN114265397B (zh) | 2021-11-16 | 2021-11-16 | 移动机器人的交互方法、装置、移动机器人和存储介质 |
CN202111354791.4 | 2021-11-16 | ||
CN202111355659.5 | 2021-11-16 | ||
CN202111355659.5A CN114274117A (zh) | 2021-11-16 | 2021-11-16 | 机器人、基于障碍物的机器人交互方法、装置及介质 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023088316A1 true WO2023088316A1 (zh) | 2023-05-25 |
Family
ID=86396251
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/132312 WO2023088316A1 (zh) | 2021-11-16 | 2022-11-16 | 移动机器人的交互方法、装置、移动机器人和存储介质 |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP4350461A1 (zh) |
KR (1) | KR20240021954A (zh) |
WO (1) | WO2023088316A1 (zh) |
2022
- 2022-11-16 KR KR1020247001573A patent/KR20240021954A/ko unknown
- 2022-11-16 WO PCT/CN2022/132312 patent/WO2023088316A1/zh active Application Filing
- 2022-11-16 EP EP22894848.5A patent/EP4350461A1/en active Pending
Patent Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060187010A1 (en) * | 2005-02-18 | 2006-08-24 | Herbert Berman | Vehicle motion warning device |
US20160337626A1 (en) * | 2014-12-25 | 2016-11-17 | Panasonic Intellectual Property Management Co., Ltd. | Projection apparatus |
CN105976457A (zh) * | 2016-07-12 | 2016-09-28 | 百度在线网络技术(北京)有限公司 | 用于指示车辆行车动态的方法和装置 |
CN106406312A (zh) * | 2016-10-14 | 2017-02-15 | 平安科技(深圳)有限公司 | 导览机器人及其移动区域标定方法 |
CN107139832A (zh) * | 2017-05-08 | 2017-09-08 | 杨科 | 一种汽车光学投影警示系统及其方法 |
CN108303972A (zh) * | 2017-10-31 | 2018-07-20 | 腾讯科技(深圳)有限公司 | 移动机器人的交互方法及装置 |
CN110039535A (zh) * | 2018-01-17 | 2019-07-23 | 阿里巴巴集团控股有限公司 | 机器人交互方法及机器人 |
CN109491875A (zh) * | 2018-11-09 | 2019-03-19 | 浙江国自机器人技术有限公司 | 一种机器人信息显示方法、系统及设备 |
CN109927624A (zh) * | 2019-01-18 | 2019-06-25 | 驭势(上海)汽车科技有限公司 | 车辆移动的目标区域的投影方法、hmi计算机系统及车辆 |
JP2020154635A (ja) * | 2019-03-19 | 2020-09-24 | 株式会社フジタ | 無人移動装置 |
CN110442126A (zh) * | 2019-07-15 | 2019-11-12 | 北京三快在线科技有限公司 | 一种移动机器人及其避障方法 |
CN114265397A (zh) * | 2021-11-16 | 2022-04-01 | 深圳市普渡科技有限公司 | 移动机器人的交互方法、装置、移动机器人和存储介质 |
CN114274117A (zh) * | 2021-11-16 | 2022-04-05 | 深圳市普渡科技有限公司 | 机器人、基于障碍物的机器人交互方法、装置及介质 |
Also Published As
Publication number | Publication date |
---|---|
KR20240021954A (ko) | 2024-02-19 |
EP4350461A1 (en) | 2024-04-10 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22894848; Country of ref document: EP; Kind code of ref document: A1 |
| WWE | Wipo information: entry into national phase | Ref document number: 2022894848; Country of ref document: EP |
| ENP | Entry into the national phase | Ref document number: 2024500596; Country of ref document: JP; Kind code of ref document: A |
| ENP | Entry into the national phase | Ref document number: 2022894848; Country of ref document: EP; Effective date: 20240102 |
| ENP | Entry into the national phase | Ref document number: 20247001573; Country of ref document: KR; Kind code of ref document: A |
| WWE | Wipo information: entry into national phase | Ref document number: 1020247001573; Country of ref document: KR |