CN114265397A - Interaction method and device for mobile robot, mobile robot and storage medium


Info

Publication number
CN114265397A
CN114265397A (application CN202111354791.4A)
Authority
CN
China
Prior art keywords
mobile robot
information
projection
real
pattern
Prior art date
Legal status
Granted
Application number
CN202111354791.4A
Other languages
Chinese (zh)
Other versions
CN114265397B (en)
Inventor
王宽
张涛
郭璁
陈鹏
吴翔
朱俊安
杨璐雅
张陈路
曾飞
陈俊伟
Current Assignee
Shenzhen Pudu Technology Co Ltd
Original Assignee
Shenzhen Pudu Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Pudu Technology Co Ltd
Priority to CN202111354791.4A
Publication of CN114265397A
Priority to EP22894848.5A
Priority to PCT/CN2022/132312
Priority to KR1020247001573A
Application granted; publication of CN114265397B
Legal status: Active

Abstract

The application relates to an interaction method and device of a mobile robot, a computer device, a storage medium and a computer program product. The method comprises the following steps: obtaining map data information of a space where the mobile robot is located and obtaining real-time environment perception data collected by an environment perception sensor, wherein the real-time environment perception data comprise real-time obstacle information and real-time indication information used for indicating road conditions around the mobile robot; acquiring target driving path information of the mobile robot based on the real-time obstacle information and the map data information, and determining a ground projection area according to the target driving path information and the real-time indication information; acquiring a pattern to be projected, and determining a projection parameter corresponding to the pattern to be projected according to the pattern to be projected and the ground projection area, wherein the pattern to be projected is used for indicating the driving intention of the mobile robot; and controlling the projection device according to the projection parameters to project the pattern to be projected onto the ground projection area. The method can improve the interaction effect of the mobile robot.

Description

Interaction method and device for mobile robot, mobile robot and storage medium
Technical Field
The present application relates to the field of artificial intelligence, and in particular, to an interaction method and apparatus for a mobile robot, a mobile robot, and a storage medium.
Background
Mobile robots are currently applied in places with heavy pedestrian traffic, such as restaurants, shopping malls, and hotels. Right-of-way conflicts with pedestrians often occur while a mobile robot is traveling. In view of this, it is necessary to provide an interaction manner so that pedestrians can learn the driving intention of the mobile robot in time and react accordingly to resolve the right-of-way conflict.
In the prior art, interaction between a mobile robot and pedestrians is usually realized by voice broadcast, so that pedestrians can learn the driving intention of the mobile robot; for example, when the mobile robot is about to turn right, it plays a voice such as "I am about to turn right, please step aside" to inform pedestrians.
However, the above interaction mode is easily affected by the environment; especially in noisy places such as restaurants and shopping malls, the voice broadcast by the mobile robot can hardly reach pedestrians clearly, and the interaction effect is poor.
Disclosure of Invention
In view of the above, it is necessary to provide an interaction method and apparatus for a mobile robot, and a storage medium, which can improve human-computer interaction effects.
In a first aspect, an interaction method for a mobile robot is provided, where the mobile robot is provided with a projection device and an environment sensing sensor, and the method includes:
obtaining map data information of a space where the mobile robot is located and obtaining real-time environment perception data collected by the environment perception sensor, wherein the real-time environment perception data comprise real-time obstacle information and real-time indication information used for indicating road conditions around the mobile robot;
acquiring target running path information of the mobile robot based on the real-time obstacle information and the map data information, and determining a ground projection area according to the target running path information and the real-time indication information;
acquiring a pattern to be projected, and determining a projection parameter corresponding to the pattern to be projected according to the pattern to be projected and a ground projection area, wherein the pattern to be projected is used for indicating the driving intention of the mobile robot;
and controlling the projection device according to the projection parameters to project the pattern to be projected to the ground projection area.
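As a concrete illustration of the four steps above, the following Python sketch wires them together in one control cycle. All method names on the hypothetical `robot` object are placeholders for the modules described in this application, not an actual API:

```python
def interaction_step(robot):
    """One interaction cycle; `robot` is a hypothetical facade object."""
    # Step 1: map data plus real-time perception (obstacles + road indication).
    map_data = robot.get_map_data()
    obstacles, indication = robot.get_realtime_perception()
    # Step 2: plan the target driving path, then derive the ground projection area.
    path = robot.plan_path(obstacles, map_data)
    area = robot.ground_projection_area(path, indication)
    # Step 3: choose the pattern expressing the driving intention and compute
    # the projection parameters for that area.
    pattern = robot.get_pattern_to_project(path)
    params = robot.projection_params(pattern, area)
    # Step 4: control the projection device to project the pattern onto the area.
    robot.projector.project(params)
```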
In one embodiment, before obtaining map data information of a space where the mobile robot is located and real-time environment perception data collected by the environment perception sensor, the method further comprises:
acquiring historical environmental perception data acquired by an environmental perception sensor under the condition that the environment of a space where a mobile robot is located meets a preset environmental condition;
determining space coordinate information of a space where the mobile robot is located according to historical environment perception data, and creating a map of the space according to the space coordinate information;
the data information of the map is used as map data information.
In one embodiment, the environmental awareness sensor includes a radar device and a camera device, and the acquiring of the real-time environmental awareness data collected by the environmental awareness sensor includes:
acquiring real-time distance information between an obstacle and the mobile robot, which is acquired by a radar device;
acquiring real-time obstacle identification information acquired by a camera device, road surface shape information of a road surface around the mobile robot and real-time obstacle distribution information of the road surface around the mobile robot;
and taking the real-time obstacle identification information and the real-time distance information as real-time obstacle information, and taking the road surface shape information and the real-time obstacle distribution information as real-time indication information.
In one embodiment, acquiring target travel path information of the mobile robot based on the real-time obstacle information and the map data information includes:
determining the real-time position of the mobile robot and the position of the obstacle according to the map data information and the real-time obstacle information;
and acquiring a target end point position of the mobile robot, determining shortest path information from the real-time position to the target end point position based on the real-time position and the position of the obstacle, and taking the shortest path information as target driving path information of the mobile robot.
In one embodiment, determining a projection parameter corresponding to a pattern to be projected according to the pattern to be projected and a ground projection area includes:
determining a projection angle corresponding to the pixel point, a projection time corresponding to the pixel point and a projection color corresponding to the pixel point according to the ground projection area for each pixel point in the pattern to be projected;
and taking the projection angle corresponding to each pixel point, the projection time corresponding to each pixel point and the projection color corresponding to each pixel point as the projection parameters of the projection device.
In one embodiment, the projection device comprises a galvanometer, a visible light laser and a lens, and the projection device is adjusted according to projection parameters to project a pattern to be projected to a ground projection area, and the projection device comprises:
determining the rotation angle of a galvanometer corresponding to each pixel point according to the projection angle corresponding to each pixel point, and determining the laser emission information of a visible light laser corresponding to each pixel point and the laser synthesis information of a lens according to the projection color corresponding to each pixel point;
determining the projection sequence of each pixel point according to the projection time corresponding to each pixel point;
and adjusting the projection device according to the projection sequence of each pixel point and the rotation angle of the galvanometer corresponding to each pixel point, the laser emission information corresponding to each pixel point and the laser synthesis information of the lens corresponding to each pixel point so as to project the pattern to be projected to the ground projection area.
In one embodiment, before determining the ground projection area according to the target driving path information and the real-time indication information, the method further includes:
judging whether preset projection conditions are met or not according to the target driving path information and the real-time environment perception data;
correspondingly, the method for determining the ground projection area according to the target driving path information comprises the following steps:
and determining a ground projection area according to the target running path information under the condition that the judgment result is that the preset projection condition is met.
In one embodiment, the preset projection condition includes at least one of the following conditions:
the driving direction of the mobile robot is changed in a future preset time period, the driving state of the mobile robot is a suspended state, pedestrians exist around the mobile robot, and the mobile robot is currently in a running state.
In one embodiment, in the case that the preset projection condition is that the mobile robot is currently in a running state, acquiring a pattern to be projected includes:
judging whether the pattern currently projected by the mobile robot can reflect the driving intention of the mobile robot or not according to the target driving path information;
if so, taking the pattern currently projected by the mobile robot as a pattern to be projected;
and if not, generating the pattern to be projected according to the driving intention of the mobile robot.
In one embodiment, the determining whether the pattern currently projected by the mobile robot can reflect the driving intention of the mobile robot according to the target driving path information includes:
and if the real-time obstacle information indicates that the obstacles existing around the mobile robot are mobile obstacles, executing a step of judging whether the pattern currently projected by the mobile robot can reflect the driving intention of the mobile robot according to the target driving path information.
In a second aspect, an interaction apparatus for a mobile robot is provided, the apparatus comprising:
the acquisition module is used for acquiring map data information of a space where the mobile robot is located and acquiring real-time environment perception data collected by an environment perception sensor, wherein the real-time environment perception data comprises real-time obstacle information and real-time indication information used for indicating the road conditions around the mobile robot;
the path module is used for acquiring target running path information of the mobile robot based on the real-time obstacle information and the map data information, and determining a ground projection area according to the target running path information and the real-time indication information;
the determining module is used for acquiring a pattern to be projected, and determining a projection parameter corresponding to the pattern to be projected according to the pattern to be projected and the ground projection area, wherein the pattern to be projected is used for indicating the driving intention of the mobile robot;
and the projection module is used for controlling the projection device according to the projection parameters so as to project the pattern to be projected onto the ground projection area.
In a third aspect, a mobile robot is provided, the mobile robot comprising a projection device, an environmental perception sensor, and a processor;
the environment perception sensor is used for acquiring real-time environment perception data, and the real-time environment perception data comprises real-time obstacle information and real-time indication information used for indicating the surrounding road conditions of the mobile robot;
the processor is used for acquiring map data information of a space where the mobile robot is located and acquiring real-time environment perception data, acquiring target driving path information of the mobile robot based on the real-time obstacle information and the map data information, determining a ground projection area according to the target driving path information and the real-time indication information, acquiring a pattern to be projected, determining projection parameters corresponding to the pattern to be projected according to the pattern to be projected and the ground projection area, wherein the pattern to be projected is used for indicating the driving intention of the mobile robot; controlling a projection device according to the projection parameters to project the pattern to be projected to a ground projection area;
and the projection device is used for projecting the pattern to be projected to the ground projection area.
In one embodiment, the processor is further configured to:
judging whether a preset projection condition is met or not according to the target driving path information and the real-time environment perception data, wherein the preset projection condition comprises at least one of the following conditions: the driving direction of the mobile robot changes in a future preset time period, the driving state of the mobile robot is a suspended state, pedestrians exist around the mobile robot, and the mobile robot is currently in a running state;
and determining a ground projection area according to the target running path information under the condition that the judgment result is that the preset projection condition is met.
In one embodiment, the processor is further configured to:
judging whether a currently projected pattern of the mobile robot can reflect the driving intention of the mobile robot or not according to the target driving path information under the condition that the preset projection condition is that the mobile robot is currently in the running state;
if so, taking the pattern currently projected by the mobile robot as a pattern to be projected;
and if not, generating the pattern to be projected according to the driving intention of the mobile robot.
In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, realizes the interaction method of the mobile robot according to any one of the first aspect.
According to the interaction method and device for the mobile robot, the mobile robot and the storage medium, the ground projection area is determined according to the target driving path information of the mobile robot and the real-time indication information used for indicating the surrounding road conditions of the mobile robot, and the laser projection device is adjusted based on the projection parameters corresponding to the determined pattern to be projected so as to project the pattern to be projected for representing the driving intention of the mobile robot to the ground projection area, so that pedestrians can know the driving intention of the mobile robot according to the projection pattern information projected to the ground by the projection device, the technical problem of poor interaction effect caused by noisy sound of the space environment where the robot is located is solved, and the interaction effect between the mobile robot and the pedestrians is improved.
Drawings
FIG. 1 is a schematic diagram of a mobile robot in one embodiment;
FIG. 2 is a schematic flow chart diagram illustrating a method of interaction of a mobile robot in one embodiment;
FIG. 3 is a schematic diagram of a projection area of a mobile robot in one embodiment;
FIG. 4 is a diagram of a mobile robot projection application in one embodiment;
FIG. 5 is a schematic flow chart of step 101 in one embodiment;
FIG. 6 is a schematic flow chart of step 101 in another embodiment;
FIG. 7 is a schematic flow chart of step 102 in one embodiment;
FIG. 8 is a schematic flow chart of step 103 in one embodiment;
FIG. 9 is a schematic diagram of an RGBD sensor in one embodiment;
FIG. 10 is a schematic diagram showing a structure of a laser projection apparatus according to an embodiment;
FIG. 11 is a schematic view of another embodiment of a laser projection apparatus;
FIG. 12 is a schematic flow chart of step 104 in one embodiment;
fig. 13 is a flowchart illustrating an interaction method of the mobile robot in another embodiment;
fig. 14 is a flowchart illustrating an interaction method of a mobile robot in accordance with still another embodiment;
fig. 15 is a block diagram showing the structure of an interaction device of a mobile robot in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In the interaction method of the mobile robot provided in the embodiment of the present application, the execution subject may be an interaction device of the mobile robot; the interaction device of the mobile robot is disposed on the mobile robot as shown in fig. 1, and may be implemented by software, hardware, or a combination of software and hardware to become part or all of a terminal of the mobile robot. The terminal can be a personal computer, a notebook computer, a media player, a smart television, a smart phone, a tablet computer, a portable wearable device and the like.
The mobile robot is provided with an environment perception sensor and a laser projection device. There may be one, two, or more environment perception sensors; when there are a plurality of environment perception sensors, they are arranged at different positions. Fig. 1 exemplarily shows a mobile robot; as shown in fig. 1, the environment perception sensors include an RGBD camera 1 and a radar device 3, and the projection device 2 is disposed above a traveling mechanism 4 of the mobile robot, where the traveling mechanism 4 may include a wheel hub motor. It should be noted that the sensor type and the installation position of the environment perception sensor may be adjusted according to actual conditions.
Referring to fig. 2, a flowchart of an interaction method of a mobile robot according to an embodiment of the present application is shown. The embodiment is illustrated by applying the method to a terminal, and it can be understood that the method can also be applied to a system comprising the terminal and a server, and is implemented by interaction between the terminal and the server. As shown in fig. 2, the interaction method of the mobile robot may include the steps of:
step 101, obtaining map data information of a space where the mobile robot is located and obtaining real-time environment perception data collected by an environment perception sensor.
The real-time environment perception data comprise real-time obstacle information and real-time indication information used for indicating the road conditions around the mobile robot. The obstacles include two types, namely static obstacles and moving obstacles, and the number of obstacles of each type is not limited. The real-time indication information for indicating the road conditions around the mobile robot at least comprises the road surface shape information around the mobile robot and the obstacle distribution on the surrounding road.
Optionally, the environmental perception sensor includes at least an RGBD camera. The RGBD camera is used for detecting the distance between obstacles around the mobile robot and the mobile robot, obstacle identification information and real-time indication information indicating road conditions around the mobile robot. The mobile robot obtains real-time environment perception data by processing the color image and the depth image acquired by the RGBD camera.
In an optional implementation, the map data information is obtained by directly retrieving stored map data information from a preset storage area, where the preset storage area may be on a server or on a terminal of the mobile robot. In another optional implementation, the map data information is constructed by the mobile robot in real time: during the movement of the mobile robot, the environment perception sensor collects the data required for map building, and the map is built and refined based on the collected data.
And 102, acquiring target running path information of the mobile robot based on the real-time environment perception data and the map data information, and determining a ground projection area according to the target running path information and the real-time indication information.
In a restaurant or mall environment, a static obstacle may be considered a fixed object that stays at a fixed location for a period of time, such as a table, a chair, a trash can, or a cabinet. Optionally, the map data information includes the position information of the static obstacles. Before the mobile robot starts traveling, it acquires a start position and an end position, and then determines an initial travel path from the start position to the end position based on the map data information. When the environment perception sensor detects that a moving obstacle (such as a pedestrian) exists around the mobile robot, an obstacle avoidance operation is executed, the travel route of the mobile robot is changed, and the target driving path information of the mobile robot is obtained based on the real-time environment perception data and the map data information.
Optionally, the mobile robot performs path planning by using a path planning algorithm according to the obtained real-time environment perception data and map data information to obtain target driving path information, wherein the path planning algorithm includes an incremental heuristic algorithm, a BUG algorithm, a graph search method or a combination algorithm fusing multiple path planning algorithms, and the like.
Optionally, after the mobile robot acquires the target travel path information, a road surface area that the mobile robot is to travel through in a future period of time is determined as a ground projection area according to the target travel path. Wherein the length of the future period of time may be determined according to the travel speed of the mobile robot.
As shown in fig. 3, diagram (a) is a schematic perspective view of the space around the mobile robot, in which 6 is the projection light outlet of the projection device, 7-10 are obstacles, 11 is the projection area, and 12 is the mobile robot; diagram (b) is the ground distribution diagram corresponding to diagram (a), in which 7'-10' are the contact surfaces between the obstacles 7-10 and the ground, 12' is the contact surface between the mobile robot 12 and the ground, and 13 indicates the target traveling direction of the mobile robot.
The coordinate point corresponding to the center of the contact surface between the mobile robot 12 and the ground is taken as the coordinate position of the mobile robot, i.e., d0(x0, y0) in diagram (b). A series of moving coordinate points of the mobile robot in a future period of time is determined according to the target driving path information; these points form a central line, i.e., curve 14 in diagram (b). The central line is then translated to both sides by a distance equal to half the width of the bottom surface of the mobile robot, yielding two edge lines. The area between the two edge lines is the ground projection area, 11' in diagram (b).
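The centerline-and-edge-line construction just described can be sketched directly. The following is a minimal illustration (the function name and the perpendicular-offset computation are assumptions, not the patent's implementation); the center points would be taken from the target driving path over the future period of time, whose length depends on the travel speed as noted above:

```python
import math

def ground_projection_area(center_points, robot_width):
    """Offset a path centerline to both sides by half the robot's width,
    yielding the two edge lines that bound the ground projection area.

    center_points: list of (x, y) map coordinates the robot will pass through
    robot_width:   width of the robot's contact surface with the ground
    """
    half = robot_width / 2.0
    left_edge, right_edge = [], []
    for i, (x, y) in enumerate(center_points):
        # Approximate the local travel direction from neighboring points.
        x_prev, y_prev = center_points[max(i - 1, 0)]
        x_next, y_next = center_points[min(i + 1, len(center_points) - 1)]
        dx, dy = x_next - x_prev, y_next - y_prev
        norm = math.hypot(dx, dy) or 1.0
        nx, ny = -dy / norm, dx / norm  # unit normal to the travel direction
        left_edge.append((x + nx * half, y + ny * half))
        right_edge.append((x - nx * half, y - ny * half))
    return left_edge, right_edge
```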
Optionally, the direction of the ground projection area is determined according to the target driving path information, and the size and shape of the ground projection area are determined according to the road surface shape information and the real-time obstacle distribution information. For example, when the road surface shape information indicates a curved road, the ground projection area is curved accordingly; when the real-time obstacle distribution information indicates that the free space in front of an obstacle is narrow, the ground projection area needs to be reduced.
Step 103, obtaining the information of the pattern to be projected, and determining the projection parameters corresponding to the pattern to be projected according to the information of the pattern to be projected and the ground projection area.
Wherein the pattern to be projected is used to indicate the driving intention of the mobile robot. The pattern to be projected can be a character pattern, a graphic pattern, a pattern combining characters and geometric figures, or an animation. The pattern can also be displayed on the ground in a blinking manner.
Optionally, the projection parameters include projection angle, projection color, projection content, projection time, and the like.
And 104, controlling the laser projection device according to the projection parameters to project the pattern information to be projected to the ground projection area.
As shown in fig. 4, after the projection parameters are determined, the mobile robot adjusts the projection device 2 according to the projection parameters, so that the projection device 2 projects the information of the pattern to be projected onto the ground projection area, and surrounding pedestrians can know the driving intention of the mobile robot by looking at the projection information of the ground.
According to the embodiment, the ground projection area is determined according to the target driving path information of the mobile robot and the real-time indication information used for indicating the surrounding road conditions of the mobile robot, the laser projection device is adjusted based on the determined projection parameters corresponding to the patterns to be projected, so that the patterns to be projected for representing the driving intention of the mobile robot are projected to the ground projection area, a pedestrian can know the driving intention of the mobile robot according to the projection pattern information projected to the ground by the projection device, the technical problem of poor interaction effect caused by noisy environmental sound of the space where the robot is located is solved, and the interaction effect between the mobile robot and the pedestrian is improved.
In an embodiment of the present application, referring to fig. 5, based on the embodiment shown in fig. 2, before the map data information of the space where the mobile robot is located and the real-time environment perception data collected by the environment perception sensor are obtained in step 101, the interaction method for the mobile robot provided in this embodiment further includes steps 201, 202, and 203:
step 201, acquiring historical environmental perception data acquired by an environmental perception sensor under the condition that the environment of a space where the mobile robot is located meets a preset environmental condition.
The preset environmental condition comprises at least one of the following: there are few pedestrians in the environment of the space where the mobile robot is located, or there is no one in that environment.
Optionally, the historical environmental awareness data includes static obstacle information existing in a space where the mobile robot is located, such as a table and a chair or a trash can. When the preset environmental condition is that the number of pedestrians in the environment of the space where the mobile robot is located is small, filtering information related to the pedestrians in the original sensing data acquired by the environment sensing sensor to obtain historical environment sensing data.
Optionally, the mobile robot determines when to perform the above historical environment perception data acquisition operation according to the acquired historical environment perception data acquisition time information, for example, setting the historical environment perception data acquisition time to 23:00 every night.
step 202, according to the historical environmental perception data, determining the space coordinate information of the space where the mobile robot is located, and creating a map of the space according to the space coordinate information.
The spatial coordinate information is spatial coordinate information of the entire space where the mobile robot is located or spatial coordinate information of a space where the mobile robot passes, for example, spatial coordinate information of a restaurant or a mall or spatial coordinate information of a space corresponding to a service area of the mobile robot in the mall. For example, when the service area of the mobile robot is the area of mall floor 2, the spatial coordinate information of mall floor 2 needs to be determined.
The space coordinate information is two-dimensional coordinate information or three-dimensional coordinate information. Optionally, as shown in fig. 3, two-dimensional coordinates are established with the ground as the plane, and a reference position point is set. For example, the reference position point is the position of a static obstacle in the space, or a reference object is placed on the ground and the position of the reference object is taken as the reference position point. The two-dimensional coordinates of the other position points in the space are determined based on the two-dimensional coordinates of the reference position point.
In step 203, the data information of the map is used as the map data information.
In the embodiment, the space coordinate information of the space is determined by acquiring historical environment sensing data acquired by the environment sensing sensor under the condition that the environment of the space where the mobile robot is located meets the preset environment condition, and the map of the space is created according to the space coordinate information. Because the map is constructed on the basis of the historical environmental perception data collected in the space environment meeting the preset environmental conditions, the interference information in the space is reduced, and the difficulty of the construction of the map and the data volume of the map data information are further reduced.
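To make the map-building step concrete, the sketch below rasterizes the static-obstacle coordinates extracted from historical perception data into a 2D occupancy grid, whose data then serve as the map data information. The grid representation, resolution, and size are illustrative assumptions; the application does not prescribe a map format:

```python
import numpy as np

def build_occupancy_map(static_obstacle_points, resolution=0.05, size=(400, 400)):
    """Rasterize static-obstacle points (meters, map frame, relative to the
    reference position point) into an occupancy grid: 1 = occupied, 0 = free."""
    grid = np.zeros(size, dtype=np.uint8)
    for x, y in static_obstacle_points:
        i, j = int(y / resolution), int(x / resolution)
        if 0 <= i < size[0] and 0 <= j < size[1]:
            grid[i, j] = 1  # cell occupied by a static obstacle
    return grid
```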
In an embodiment of the present application, the environment perception sensor includes a radar device and a camera device. Referring to fig. 6, this embodiment relates to the process of acquiring the real-time environment perception data collected by the environment perception sensor in step 101. Based on the embodiment shown in fig. 5, as shown in fig. 6, the process includes steps 301, 302, and 303:
and 301, acquiring real-time distance information between the obstacle and the mobile robot, which is acquired by the radar device.
Optionally, the radar device includes at least one of a laser radar device and an ultrasonic radar device. The laser radar device is used for detecting the distance between an object around the robot and the robot in a 2D or 3D plane range.
Step 302, acquiring real-time obstacle identification information acquired by a camera device, road surface shape information of the road surface around the mobile robot, and real-time obstacle distribution information of the road surface around the mobile robot.
Optionally, the camera device includes an RGBD camera; or the camera arrangement comprises an RGBD camera and an RGB camera.
Wherein the real-time obstacle identification information includes identifying whether the obstacle is a pedestrian. Optionally, an image recognition algorithm is used to recognize an image of an obstacle acquired by the RGB camera or the RGBD camera, and it is determined whether the obstacle is a pedestrian.
Optionally, when the camera device includes an RGBD camera and an RGB camera, the RGB camera is used in combination with the radar device, and when the radar device detects an obstacle, the mobile robot starts the RGB camera to perform a collecting operation for obtaining real-time obstacle identification information.
Step 303, using the real-time obstacle identification information and the real-time distance information as real-time obstacle information, and using the road surface shape information and the real-time obstacle distribution information as real-time indication information.
According to this embodiment of the application, the real-time environment perception data are acquired by obtaining the real-time distance information between the obstacle and the mobile robot by means of the radar device, and obtaining the real-time obstacle identification information, the road surface shape information of the road surface around the mobile robot, and the real-time obstacle distribution information by means of the camera device. The cooperation of multiple collection devices improves both the diversity and the reliability of the real-time environment perception data.
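One way to picture the fusion of steps 301-303 is as two simple records, one per information stream; the radar contributes distance, the camera contributes identification and road indication. Field names are illustrative assumptions:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class RealTimeObstacleInfo:
    """Camera identification result combined with radar distance."""
    is_pedestrian: bool   # real-time obstacle identification information
    distance_m: float     # real-time distance between obstacle and robot

@dataclass
class RealTimeIndicationInfo:
    """Road conditions around the robot, from the camera device."""
    road_shape: str                                 # e.g. "straight" or "curved"
    obstacle_positions: List[Tuple[float, float]]   # distribution on the road

def fuse_perception(radar_distance, is_pedestrian, road_shape, distribution):
    # Step 303: identification + distance -> obstacle info;
    # road shape + distribution -> indication info.
    return (RealTimeObstacleInfo(is_pedestrian, radar_distance),
            RealTimeIndicationInfo(road_shape, distribution))
```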
In the embodiment of the present application, referring to fig. 7, based on the embodiment shown in fig. 2, the embodiment relates to acquiring target travel path information of a mobile robot based on real-time obstacle information and map data information in step 102, and includes steps 401 and 402:
step 401, determining the real-time position of the mobile robot and the position of the obstacle according to the map data information and the real-time obstacle information.
Optionally, the coordinate position of the mobile robot in the map is acquired as a real-time position, and then the coordinate position of the obstacle in the map is determined as the position of the obstacle according to the real-time obstacle information.
Step 402, acquiring a target end point position of the mobile robot, determining shortest path information from the real-time position to the target end point position based on the real-time position and the position of the obstacle, and using the shortest path information as target driving path information of the mobile robot.
Optionally, the shortest path information from the real-time location to the destination location is determined using a shortest path algorithm. The shortest path algorithm includes Dijkstra algorithm, Bellman-Ford algorithm, Floyd algorithm, SPFA algorithm, and the like.
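Of the algorithms named above, Dijkstra's algorithm is the most common choice. The grid-based sketch below is an illustration under assumed conventions (occupancy grid with 0 = free, 4-connected unit-cost moves), not the patent's planner:

```python
import heapq

def dijkstra_grid(grid, start, goal):
    """Shortest path on an occupancy grid from start to goal, both (row, col).
    With unit edge costs this reduces to uniform-cost search."""
    rows, cols = len(grid), len(grid[0])
    dist, prev = {start: 0}, {}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:                        # reconstruct the path
            path = [node]
            while node in prev:
                node = prev[node]
                path.append(node)
            return path[::-1]
        if d > dist.get(node, float("inf")):    # stale heap entry
            continue
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)], prev[(nr, nc)] = nd, node
                    heapq.heappush(heap, (nd, (nr, nc)))
    return None  # no path to the target end point
```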
According to the embodiment, the real-time position of the mobile robot and the position of the obstacle are determined according to the map data information and the real-time obstacle information, the target end point position of the mobile robot is obtained, and the shortest path information from the real-time position to the target end point position is determined based on the real-time position and the position of the obstacle, so that the real-time determination of the target driving path information of the mobile robot is realized, and the reliability of path planning of the mobile robot is improved.
In the embodiment of the present application, referring to fig. 8, based on the embodiment shown in fig. 2, the embodiment relates to determining the projection parameters of the laser projection apparatus according to the pattern information to be projected and the ground projection area in step 103, and includes steps 501 and 502:
step 501, aiming at each pixel point in the pattern to be projected, determining a projection angle corresponding to the pixel point, a projection time corresponding to the pixel point and a projection color corresponding to the pixel point according to the ground projection area.
Optionally, a corresponding relationship between each pixel point in the pattern to be projected and a certain spatial coordinate point in the ground projection area is obtained, and a projection angle corresponding to each pixel point, a projection time corresponding to each pixel point, and a projection color corresponding to each pixel point are determined according to the corresponding relationship.
Optionally, as shown in fig. 9, the ground projection area may contain an uneven area. The RGBD camera is used to acquire the vertical distance information between the road surface around the mobile robot and the RGBD camera.
For each pixel point, an original projection angle, projection time, and projection color are first determined on the assumption that the pattern is projected onto a flat road surface; a projection angle correction parameter is then obtained according to the vertical distance information between the road surface around the mobile robot and the RGBD camera; finally, the actual projection angle corresponding to the pixel point is obtained according to the projection angle correction parameter and the original projection angle.
Step 502, using the projection angle corresponding to each pixel point, the projection time corresponding to each pixel point, and the projection color corresponding to each pixel point as the projection parameters of the laser projection device.
In this embodiment, the projection parameters of the projection device are determined by respectively determining the projection angle, projection time, and projection color corresponding to each pixel point of the pattern to be projected, which improves the projection effect of the pattern to be projected. Meanwhile, the color information of each pixel point can be set, so that the projection pattern projected onto the road surface is a multicolored pattern, which more easily attracts the attention of surrounding pedestrians and further improves the interaction effect between the mobile robot and pedestrians.
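The per-pixel angle correction can be illustrated as follows. The specific formula is an assumption for demonstration: the application states only that a correction parameter is derived from the RGBD vertical-distance measurement, not how. Here the original angle assumes a flat floor at the nominal emitter height, and the measured vertical distance replaces that nominal height:

```python
import math

def pixel_projection_params(pixel, target_xy, emitter_xy, nominal_height,
                            measured_height, color, t):
    """Illustrative projection parameters for one pixel of the pattern.
    All argument names are assumptions, not the patent's notation."""
    dx = target_xy[0] - emitter_xy[0]
    dy = target_xy[1] - emitter_xy[1]
    ground_range = math.hypot(dx, dy)
    original = math.atan2(ground_range, nominal_height)    # flat-road angle
    corrected = math.atan2(ground_range, measured_height)  # uneven-road angle
    return {"pixel": pixel, "angle": corrected,
            "correction": corrected - original,  # projection angle correction
            "color": color, "time": t}
```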
In an embodiment of the present application, the projection device includes a galvanometer, a visible light laser, and a lens, as shown in fig. 10 and fig. 11. The galvanometer is a rotary galvanometer or a MEMS solid-state galvanometer for controlling the projection direction of the laser light; the visible light laser is configured to emit laser light in the visible light band for display; and the lens is configured to combine laser light of multiple colors.
In one embodiment, when the galvanometer is a rotary galvanometer, as shown in fig. 10, the projection device includes a first rotary galvanometer 13, a second rotary galvanometer 14, a lens 15, and a first visible light laser 16, a second visible light laser 17, and a third visible light laser 18. The three visible light lasers each emit laser light, the lens 15 combines the received laser light into one beam, and the first rotary galvanometer 13 and the second rotary galvanometer 14 then adjust the direction of the combined beam to finally project the pattern 19 to be projected.
In another embodiment, when the galvanometer is a MEMS solid-state galvanometer, as shown in fig. 11, the projection device includes a MEMS solid-state galvanometer 20, a lens 15, and a first visible light laser 16, a second visible light laser 17, and a third visible light laser 18. The three visible light lasers each emit laser light, the lens 15 combines the received laser light into one beam, and the MEMS solid-state galvanometer 20 then adjusts the direction of the combined beam to finally project the pattern 19 to be projected.
Referring to fig. 12, based on the embodiment shown in fig. 9, the present embodiment relates to adjusting the laser projection apparatus according to the projection parameters in step 104 to project the pattern information to be projected onto the ground projection area, and includes steps 601, 602, and 603:
step 601, determining the rotation angle of the galvanometer corresponding to each pixel point according to the projection angle corresponding to each pixel point, and determining the laser emission information of the visible light laser corresponding to each pixel point and the laser synthesis information of the lens according to the projection color corresponding to each pixel point.
The laser light corresponding to the visible light laser comprises red, green and blue (RGB) tricolor laser light, and the laser emission information comprises a visible light frequency band. Optionally, the visible light frequency bands corresponding to the 3 visible light lasers in fig. 10 or fig. 11 are determined according to the projection colors.
Step 602, determining a projection sequence of each pixel point according to the projection time corresponding to each pixel point.
Step 603, according to the projection sequence of each pixel point, adjusting the laser projection device according to the rotation angle of the galvanometer corresponding to each pixel point, the laser emission information corresponding to each pixel point and the laser synthesis information of the lens corresponding to each pixel point so as to project the pattern information to be projected to the ground projection area.
The embodiment realizes the visual display of the information of the pattern to be projected in the ground projection area, and can project the color pattern on the ground, thereby being convenient for catching the attention of pedestrians and improving the interaction effect.
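Steps 601 to 603 can be summarized as the control loop below, where `galvo` and `lasers` stand for hypothetical hardware driver objects; real galvanometer and laser driver interfaces will differ:

```python
def drive_projection(pixel_params, galvo, lasers):
    """Project one frame: steer the galvanometer per pixel in projection
    order and set the RGB laser outputs that the lens combines into one beam."""
    for p in sorted(pixel_params, key=lambda p: p["time"]):  # projection order
        galvo.set_angle(p["angle"])                   # galvanometer rotation angle
        r, g, b = p["color"]                          # projection color of this pixel
        lasers.set_intensity(red=r, green=g, blue=b)  # three visible light lasers
    # Rescanning the frame quickly enough makes the pattern appear
    # persistent to the human eye.
```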
In an embodiment of the application, referring to fig. 13, based on the embodiment shown in fig. 2, before the step of determining the ground projection area according to the target driving path information and the real-time indication information, the interaction method of the mobile robot further includes:
and 701, judging whether the preset projection condition is met or not according to the target running path information and the real-time environment perception data.
Wherein the preset projection condition at least comprises one of the following conditions: the driving direction of the mobile robot is changed in a future preset time period, the driving state of the mobile robot is a suspended state, pedestrians exist around the mobile robot, and the mobile robot is currently in a running state.
Optionally, the preset projection condition is related to the driving condition of the mobile robot. Different patterns to be projected can be set for different preset projection conditions. For example, when the driving direction of the mobile robot changes, the pattern to be projected may be a combination of an arrow mark corresponding to the driving direction and characters; when the driving state of the mobile robot is the suspended state, the pattern to be projected may be a text pattern such as "you go first" or "will start moving in xxx minutes".
Optionally, the preset projection condition is that the mobile robot is currently in a running state. Whether the mobile robot is in a running state is detected; if so, the projection device is started to project. In this case, the projection device of the mobile robot is always in a pattern-projecting state, and the projected pattern on the ground can be changed in real time.
Optionally, the preset projection condition is that the sound intensity around the mobile robot is higher than a preset value. A sound collection device is arranged on the mobile robot and used to collect the sound around the mobile robot. When the intensity of the sound around the mobile robot is higher than the preset value, interaction is performed in the projection mode; when the intensity of the ambient sound is lower than the preset value, interaction is performed in the voice reminder mode.
And step 702, determining a ground projection area according to the target running path information under the condition that the judgment result is that the preset projection condition is met.
According to the embodiment, whether the preset projection condition is met or not is judged according to the target driving path information and the real-time environment perception data, the ground projection area is determined according to the target driving path information under the condition that the judgment result is that the preset projection condition is met, and due to the fact that the projection of the pattern to be projected is executed under the condition that the preset projection condition is met, the projection setting flexibility of the projection device is improved, the energy consumption and the calculation amount of the mobile robot are reduced, and the service life of the laser projection device is prolonged.
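Taken together, the preset projection conditions of this embodiment amount to a simple disjunction. In the sketch below, the `state` dictionary and the 70 dB threshold are illustrative assumptions:

```python
def projection_condition_met(state):
    """True if any preset projection condition holds for the current state."""
    return any((
        state.get("direction_changes_soon", False),  # turn within preset period
        state.get("is_suspended", False),            # driving state is suspended
        state.get("pedestrians_nearby", False),      # pedestrians detected
        state.get("is_running", False),              # robot currently running
        state.get("ambient_db", 0.0) > 70.0,         # noisy surroundings (assumed)
    ))
```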
In this embodiment of the application, based on the embodiment shown in fig. 13, in the case that the projection condition is that the mobile robot is currently in the running state, the acquiring, in step 103, a pattern to be projected includes:
step 801, judging whether the pattern currently projected by the mobile robot can reflect the driving intention of the mobile robot or not according to the target driving path information.
The pattern currently projected by the mobile robot is the projection pattern projected onto the ground at the current moment.
And step 802, if yes, taking the pattern currently projected by the mobile robot as a pattern to be projected.
The pattern to be projected is the projection pattern to be projected onto the ground at the next moment after the current moment.
And step 803, if not, generating a pattern to be projected according to the driving intention of the mobile robot.
Optionally, different patterns to be projected are set according to different driving intentions of the mobile robot. When the driving intention of the mobile robot changes, the pattern projected onto the ground also changes, i.e., the projected pattern at the next moment differs from the projected pattern at the previous moment. For example, when the driving intention of the mobile robot changes from "go straight" to "turn left" or "turn right", the currently projected pattern (i.e., the projected pattern representing "go straight") needs to be converted into a projected pattern representing "turn left" or "turn right".
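Steps 801 to 803 reduce to a small decision rule. In the sketch below, the intention-to-pattern mapping is hypothetical; the application does not fix the concrete pattern assets:

```python
# Hypothetical mapping from driving intention to projected pattern.
INTENTION_PATTERNS = {
    "straight":   "arrow_straight",
    "turn_left":  "arrow_left_with_text",
    "turn_right": "arrow_right_with_text",
    "suspended":  "text_you_go_first",
}

def pattern_to_project(current_pattern, intention):
    """Keep the current pattern if it still reflects the driving intention
    (step 802); otherwise generate/select a new one (step 803)."""
    desired = INTENTION_PATTERNS[intention]
    return current_pattern if current_pattern == desired else desired
```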
The embodiment realizes the purpose of adjusting the projection pattern in real time according to the driving intention of the mobile robot by judging whether the currently projected pattern of the mobile robot can reflect the driving intention of the mobile robot or not and generating the pattern to be projected according to the driving intention of the mobile robot when the currently projected pattern of the mobile robot cannot reflect the driving intention of the mobile robot, so that a pedestrian can accurately grasp the driving intention of the mobile robot, the accuracy of information transmission of the mobile robot to the pedestrian is improved, and the interaction effect between the mobile robot and the pedestrian is further improved.
In the embodiment of the present application, as shown in fig. 14, there is provided an interaction method of a mobile robot, including the steps of:
step 901, acquiring historical environmental perception data acquired by an environmental perception sensor under the condition that the environment of the space where the mobile robot is located meets preset environmental conditions.
And 902, determining space coordinate information of a space where the mobile robot is located according to the historical environment sensing data, creating a map of the space according to the space coordinate information, and taking the map as map data information.
Step 903, acquiring real-time distance information between the obstacle and the mobile robot, which is acquired by the radar device, and real-time obstacle identification information, road surface shape information of the road surface around the mobile robot, and real-time obstacle distribution information of the road surface around the mobile robot, which is acquired by the camera device.
Step 904, using the real-time obstacle identification information and the real-time distance information as real-time obstacle information, and using the road surface shape information and the real-time obstacle distribution information as real-time indication information.
Step 905, determining the real-time position of the mobile robot and the position of the obstacle according to the map data information and the real-time obstacle information.
Step 906, acquiring a target end point position of the mobile robot, determining shortest path information from the real-time position to the target end point position based on the real-time position and the position of the obstacle, and using the shortest path information as target driving path information of the mobile robot.
And 907, judging whether the preset projection condition is met or not according to the target driving path information and the real-time environment sensing data, and determining a ground projection area according to the target driving path information and the real-time indication information under the condition that the judgment result is that the preset projection condition is met.
Wherein the preset projection condition at least comprises one of the following conditions: the driving direction of the mobile robot is changed in a future preset time period, the driving state of the mobile robot is a suspended state, pedestrians exist around the mobile robot, and the mobile robot is currently in a running state.
Step 908, obtain the pattern to be projected.
Judging, in the case that the preset projection condition is that the mobile robot is currently in a running state, whether the currently projected pattern of the mobile robot can reflect the driving intention of the mobile robot according to the target driving path information; if so, taking the currently projected pattern as the pattern to be projected; if not, generating the pattern to be projected according to the driving intention of the mobile robot.
In step 909, for each pixel point in the pattern to be projected, the projection angle corresponding to the pixel point, the projection time corresponding to the pixel point, and the projection color corresponding to the pixel point are determined according to the ground projection area.
Step 910, using the projection angle corresponding to each pixel point, the projection time corresponding to each pixel point, and the projection color corresponding to each pixel point as the projection parameters of the laser projection apparatus.
And 911, determining the rotation angle of the galvanometer corresponding to each pixel point according to the projection angle corresponding to each pixel point, and determining the laser emission information of the visible light laser corresponding to each pixel point and the laser synthesis information of the lens according to the projection color corresponding to each pixel point.
Step 912, determining the projection sequence of each pixel point according to the projection time corresponding to each pixel point.
And step 913, adjusting the laser projection device according to the projection sequence of each pixel point and according to the rotation angle of the galvanometer corresponding to each pixel point, the laser emission information corresponding to each pixel point and the laser synthesis information of the lens corresponding to each pixel point, so as to project the pattern information to be projected to the ground projection area.
In this embodiment, the laser projection device projects the pattern to be projected onto the ground so that pedestrians can learn the driving intention of the mobile robot, which improves the interaction effect between the mobile robot and pedestrians and solves the technical problem of poor interaction caused by the noisy sound of the space environment where the robot is located. Moreover, the pattern projected onto the road surface can be multicolored, which better catches pedestrians' attention and improves the interaction effect. In addition, projection conditions can be preset, which improves the flexibility of the projection device; the projected pattern can also be adjusted according to the actual scene, which improves the accuracy of the information conveyed by the mobile robot to pedestrians and further improves the interaction effect between the mobile robot and pedestrians.
It should be understood that although the steps in the flowcharts of fig. 2, 5-6, 8-9, and 12-14 are shown in the order indicated by the arrows, the steps are not necessarily performed in that order. Unless explicitly stated otherwise herein, the execution of these steps is not strictly limited to the order shown, and they may be performed in other orders. Moreover, at least some of the steps in fig. 2, 5-6, 8-9, and 12-14 may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times, and these sub-steps or stages are not necessarily performed sequentially but may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
In an embodiment of the present application, as shown in fig. 15, an interaction apparatus of a mobile robot is provided, where the apparatus includes an acquisition module, a path module, a determining module, and a projection module; specifically:
the system comprises an acquisition module, a display module and a display module, wherein the acquisition module is used for acquiring map data information of a space where the mobile robot is located and real-time environment perception data acquired by an environment perception sensor, and the real-time environment perception data comprises real-time obstacle information and real-time indication information used for indicating the surrounding road conditions of the mobile robot;
the path module is used for acquiring target running path information of the mobile robot based on the real-time obstacle information and the map data information, and determining a ground projection area according to the target running path information and the real-time indication information;
the determining module is used for acquiring a pattern to be projected and determining a projection parameter corresponding to the pattern to be projected according to the pattern to be projected and the ground projection area, wherein the pattern to be projected is used for indicating the driving intention of the mobile robot;
and the projection module is used for controlling the projection device according to the projection parameters so as to project the pattern to be projected to the ground projection area.
In one embodiment, the apparatus further comprises a map module, the map module specifically configured to:
acquiring historical environmental perception data acquired by an environmental perception sensor under the condition that the environment of a space where a mobile robot is located meets a preset environmental condition;
determining space coordinate information of a space where the mobile robot is located according to historical environment perception data, and creating a map of the space according to the space coordinate information;
the data information of the map is used as map data information.
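For illustration only, a minimal sketch of this map-creation flow follows. The helper `scan_to_points()` is assumed (it would convert one historical scan into 2-D space coordinates), and the occupancy grid is merely one possible representation of the map data information.

```python
# Illustrative sketch of map creation from historical perception data;
# scan_to_points() is a hypothetical helper returning (N, 2) coordinates.
import numpy as np

def build_map(historical_scans, resolution=0.05):
    points = np.vstack([scan_to_points(s) for s in historical_scans])
    origin = points.min(axis=0)                       # space coordinate origin
    cells = np.floor((points - origin) / resolution).astype(int)
    grid = np.zeros(cells.max(axis=0) + 1, dtype=np.uint8)
    grid[cells[:, 0], cells[:, 1]] = 1                # 1 = occupied cell
    return origin, grid                               # grid serves as map data
```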
In one embodiment, the environment perception sensor includes a radar device and a camera device, and the acquisition module is specifically configured to:
acquiring real-time distance information between an obstacle and the mobile robot, which is acquired by a radar device;
acquiring real-time obstacle identification information acquired by a camera device, road surface shape information of a road surface around the mobile robot and real-time obstacle distribution information of the road surface around the mobile robot;
and taking the real-time obstacle identification information and the real-time distance information as real-time obstacle information, and taking the road surface shape information and the real-time obstacle distribution information as real-time indication information.
In one embodiment, the path module is specifically configured to:
determining the real-time position of the mobile robot and the position of the obstacle according to the map data information and the real-time obstacle information;
and acquiring a target end point position of the mobile robot, determining shortest path information from the real-time position to the target end point position based on the real-time position and the position of the obstacle, and taking the shortest path information as target driving path information of the mobile robot.
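For illustration only, the sketch below shows shortest-path planning on an occupancy grid using breadth-first search; BFS is one concrete way to obtain a shortest collision-free path on a uniform grid, and the `(row, col)` cell conventions are assumptions, not part of this disclosure.

```python
# Illustrative shortest-path search from the real-time position (start)
# to the target end point (goal); grid cells with value 0 are free.
from collections import deque

def shortest_path(grid, start, goal):
    queue, parent = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:                       # reconstruct the target path
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dr, cell[1] + dc)
            if (0 <= nxt[0] < grid.shape[0] and 0 <= nxt[1] < grid.shape[1]
                    and grid[nxt] == 0 and nxt not in parent):
                parent[nxt] = cell             # avoid obstacle cells
                queue.append(nxt)
    return None                                # no collision-free path exists
```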
In one embodiment, the determining module is specifically configured to:
for each pixel point in the pattern to be projected, determining a projection angle, a projection time and a projection color corresponding to the pixel point according to the ground projection area;
and taking the projection angle corresponding to each pixel point, the projection time corresponding to each pixel point and the projection color corresponding to each pixel point as the projection parameters of the projection device.
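For illustration only, the per-pixel projection parameters could be assembled as sketched below. `PixelParams` is reused from the earlier sketch, `pattern_rgb` is assumed to be an H×W×3 image array, and `ground_to_angles()` is a hypothetical calibration function mapping a ground point to galvanometer angles.

```python
# Illustrative computation of projection parameters for every pixel point;
# ground_to_angles() is a hypothetical calibration function.
def projection_params(pattern_rgb, region_origin, cell_size, dt=1e-5):
    params, t = [], 0.0
    for row in range(pattern_rgb.shape[0]):            # raster-scan order
        for col in range(pattern_rgb.shape[1]):
            # ground point covered by this pixel inside the projection area
            x = region_origin[0] + col * cell_size
            y = region_origin[1] + row * cell_size
            params.append(PixelParams(
                angle=ground_to_angles(x, y),          # projection angle
                time=t,                                # projection time
                color=tuple(pattern_rgb[row, col]),    # projection color
            ))
            t += dt
    return params
```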
In one embodiment, the projection device includes a galvanometer, a visible light laser and a lens, and the projection module is specifically configured to:
determining the rotation angle of a galvanometer corresponding to each pixel point according to the projection angle corresponding to each pixel point, and determining the laser emission information of a visible light laser corresponding to each pixel point and the laser synthesis information of a lens according to the projection color corresponding to each pixel point;
determining the projection sequence of each pixel point according to the projection time corresponding to each pixel point;
and adjusting the projection device, in the projection sequence of the pixel points, according to the rotation angle of the galvanometer, the laser emission information and the laser synthesis information of the lens corresponding to each pixel point, so as to project the pattern to be projected to the ground projection area.
In one embodiment, the path module is further specifically configured to:
judging whether preset projection conditions are met or not according to the target driving path information and the real-time environment perception data;
and determining a ground projection area according to the target running path information under the condition that the judgment result is that the preset projection condition is met.
In one embodiment, the preset projection condition includes at least one of the following conditions:
the driving direction of the mobile robot will change within a future preset time period; the driving state of the mobile robot is a paused state; pedestrians exist around the mobile robot; and the mobile robot is currently in a running state.
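For illustration only, the disjunctive check of these preset projection conditions can be sketched as follows; the `state` snapshot and its boolean fields are hypothetical names mirroring the four conditions above.

```python
# Illustrative check of the preset projection conditions; any one
# condition being satisfied is sufficient to trigger projection.
def should_project(state):
    return (state.direction_will_change   # turn planned within preset period
            or state.is_paused            # driving state is paused
            or state.pedestrians_nearby   # pedestrians around the robot
            or state.is_running)          # robot currently in running state
```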
In one embodiment, the determining module is specifically configured to:
judging whether a currently projected pattern of the mobile robot can reflect the driving intention of the mobile robot or not according to the target driving path information under the condition that the preset projection condition is that the mobile robot is currently in the running state;
if so, taking the pattern currently projected by the mobile robot as a pattern to be projected;
and if not, generating the pattern to be projected according to the driving intention of the mobile robot.
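For illustration only, a minimal sketch of this selection logic follows; `reflects_intention()` and `generate_pattern()` are hypothetical helpers standing in for the judgment and pattern-generation steps described above.

```python
# Illustrative pattern selection while the robot is in a running state.
def pattern_to_project(current_pattern, target_path, driving_intention):
    if current_pattern is not None and reflects_intention(current_pattern, target_path):
        return current_pattern                 # keep the pattern already projected
    return generate_pattern(driving_intention) # otherwise render a new pattern
```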
In one embodiment, the determining module is further specifically configured to:
and if the real-time obstacle information indicates that the obstacles existing around the mobile robot are mobile obstacles, executing a step of judging whether the pattern currently projected by the mobile robot can reflect the driving intention of the mobile robot according to the target driving path information.
In an embodiment of the present application, there is provided a mobile robot including a projection device, an environmental awareness sensor, and a processor;
the environment perception sensor is used for acquiring real-time environment perception data, and the real-time environment perception data comprises real-time obstacle information and real-time indication information used for indicating the surrounding road conditions of the mobile robot;
the processor is used for acquiring map data information and real-time environment perception data of a space where the mobile robot is located, acquiring target driving path information of the mobile robot based on the real-time obstacle information and the map data information, determining a ground projection area according to the target driving path information and real-time indication information, acquiring a pattern to be projected, determining projection parameters corresponding to the pattern to be projected according to the pattern to be projected and the ground projection area, wherein the pattern to be projected is used for indicating the driving intention of the mobile robot; controlling a projection device according to the projection parameters to project the pattern to be projected to a ground projection area;
and the projection device is used for projecting the pattern to be projected to the ground projection area.
In one embodiment, the processor is further configured to:
acquiring historical environmental perception data acquired by an environmental perception sensor under the condition that the environment of a space where a mobile robot is located meets a preset environmental condition; determining space coordinate information of a space where the mobile robot is located according to historical environment perception data, and creating a map of the space according to the space coordinate information; the data information of the map is used as map data information.
In one embodiment, the environment perception sensor comprises a radar device and a camera device;
the radar device is used for acquiring real-time distance information between the obstacle and the mobile robot;
the camera device is used for acquiring real-time obstacle identification information, road surface shape information of the road surface around the mobile robot and real-time obstacle distribution information of the road surface around the mobile robot;
the processor is used for acquiring real-time distance information and real-time obstacle identification information and taking the real-time distance information and the real-time obstacle identification information as real-time obstacle information; and acquiring road surface shape information and real-time obstacle distribution information, and taking the road surface shape information and the real-time obstacle distribution information as real-time indication information.
In one embodiment, the processor is configured to:
determining the real-time position of the mobile robot and the position of the obstacle according to the map data information and the real-time obstacle information; and acquiring a target end point position of the mobile robot, determining shortest path information from the real-time position to the target end point position based on the real-time position and the position of the obstacle, and taking the shortest path information as target driving path information of the mobile robot.
In one embodiment, the processor is configured to:
for each pixel point in the pattern to be projected, determining a projection angle, a projection time and a projection color corresponding to the pixel point according to the ground projection area; and taking the projection angle, the projection time and the projection color corresponding to each pixel point as the projection parameters of the projection device.
In one embodiment, the projection device includes a galvanometer, a visible laser, and a lens, the processor to:
determining the rotation angle of the galvanometer corresponding to each pixel point according to the projection angle corresponding to each pixel point, and determining the laser emission information of the visible light laser corresponding to each pixel point and the laser synthesis information of the lens according to the projection color corresponding to each pixel point; determining the projection sequence of the pixel points according to the projection time corresponding to each pixel point; and adjusting the projection device, in the projection sequence of the pixel points, according to the rotation angle of the galvanometer, the laser emission information and the laser synthesis information of the lens corresponding to each pixel point, so as to project the pattern to be projected to the ground projection area;
the projection device is used for projecting each pixel point to a ground projection area according to the projection sequence of each pixel point and according to the rotation angle of the galvanometer corresponding to each pixel point, the laser emission information corresponding to each pixel point and the laser synthesis information of the lens corresponding to each pixel point.
In one embodiment, the processor is further configured to:
judging whether preset projection conditions are met or not according to the target driving path information and the real-time environment perception data, wherein the preset projection conditions include at least one of the following conditions: the driving direction of the mobile robot will change within a future preset time period, the driving state of the mobile robot is a paused state, pedestrians exist around the mobile robot, and the mobile robot is currently in a running state; and determining a ground projection area according to the target running path information under the condition that the judgment result is that the preset projection condition is met.
In one embodiment, the processor is further configured to:
judging whether a currently projected pattern of the mobile robot can reflect the driving intention of the mobile robot or not according to the target driving path information under the condition that the preset projection condition is that the mobile robot is currently in the running state; if so, taking the pattern currently projected by the mobile robot as a pattern to be projected; and if not, generating the pattern to be projected according to the driving intention of the mobile robot.
In one embodiment, the processor is further specifically configured to:
and if the real-time obstacle information indicates that the obstacles existing around the mobile robot are mobile obstacles, executing a step of judging whether the pattern currently projected by the mobile robot can reflect the driving intention of the mobile robot according to the target driving path information.
In an embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods in the embodiments described above can be implemented by a computer program instructing the relevant hardware. The computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database or other medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, or the like. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features contains no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (15)

1. An interaction method of a mobile robot is characterized in that a projection device and an environment perception sensor are arranged on the mobile robot, and the method comprises the following steps:
obtaining map data information of a space where the mobile robot is located and obtaining real-time environment sensing data collected by the environment sensing sensor, wherein the real-time environment sensing data comprises real-time obstacle information and real-time indication information used for indicating road conditions around the mobile robot;
acquiring target running path information of the mobile robot based on the real-time obstacle information and the map data information, and determining a ground projection area according to the target running path information and the real-time indication information;
acquiring a pattern to be projected, and determining a projection parameter corresponding to the pattern to be projected according to the pattern to be projected and the ground projection area, wherein the pattern to be projected is used for indicating the driving intention of the mobile robot;
controlling the projection device according to the projection parameters to project the pattern to be projected to the ground projection area.
2. The interaction method according to claim 1, wherein before the obtaining of the map data information of the space where the mobile robot is located and the real-time environment perception data collected by the environment perception sensor, the method further comprises:
acquiring historical environmental perception data acquired by the environmental perception sensor under the condition that the environment of the space where the mobile robot is located meets a preset environmental condition;
determining space coordinate information of a space where the mobile robot is located according to the historical environmental perception data, and creating a map of the space according to the space coordinate information;
and taking the data information of the map as the map data information.
3. The interaction method according to claim 2, wherein the environment sensing sensor comprises a radar device and a camera device, and the acquiring the real-time environment sensing data collected by the environment sensing sensor comprises:
acquiring real-time distance information between the obstacle and the mobile robot, which is acquired by the radar device;
acquiring real-time obstacle identification information acquired by the camera device, road surface shape information of the road surface around the mobile robot and real-time obstacle distribution information of the road surface around the mobile robot;
and taking the real-time obstacle identification information and the real-time distance information as the real-time obstacle information, and taking the road surface shape information and the real-time obstacle distribution information as the real-time indication information.
4. The interaction method according to claim 1, wherein the obtaining target travel path information of the mobile robot based on the real-time obstacle information and the map data information includes:
determining the real-time position of the mobile robot and the position of an obstacle according to the map data information and the real-time obstacle information;
and acquiring a target end point position of the mobile robot, determining shortest path information from the real-time position to the target end point position based on the real-time position and the position of the obstacle, and taking the shortest path information as target driving path information of the mobile robot.
5. The interaction method according to claim 1, wherein the determining a projection parameter corresponding to the pattern to be projected according to the pattern to be projected and the ground projection area comprises:
aiming at each pixel point in the pattern to be projected, determining a projection angle corresponding to the pixel point, a projection time corresponding to the pixel point and a projection color corresponding to the pixel point according to the ground projection area;
and taking the projection angle corresponding to each pixel point, the projection time corresponding to each pixel point and the projection color corresponding to each pixel point as the projection parameters corresponding to the pattern to be projected.
6. The interaction method according to claim 5, wherein the projection device comprises a galvanometer, a visible light laser and a lens, and the controlling the projection device to project the pattern to be projected onto the ground projection area according to the projection parameters comprises:
determining a rotation angle of the galvanometer corresponding to each pixel point according to a projection angle corresponding to each pixel point, and determining laser emission information of the visible light laser corresponding to each pixel point and laser synthesis information of the lens according to a projection color corresponding to each pixel point;
determining the projection sequence of each pixel point according to the projection time corresponding to each pixel point;
and according to the projection sequence of each pixel point, adjusting the projection device according to the rotation angle of the galvanometer corresponding to each pixel point, the laser emission information corresponding to each pixel point and the laser synthesis information of the lens corresponding to each pixel point so as to project the pattern to be projected to the ground projection area.
7. The interaction method according to claim 1, wherein before determining the ground projection area according to the target driving path information and the real-time indication information, the interaction method further comprises:
judging whether preset projection conditions are met or not according to the target driving path information and the real-time environment perception data;
correspondingly, the determining the ground projection area according to the target driving path information includes:
and determining a ground projection area according to the target running path information under the condition that the judgment result is that the preset projection condition is met.
8. The interaction method according to claim 7, wherein the preset projection condition comprises at least one of the following conditions:
the driving direction of the mobile robot will change within a future preset time period;
the driving state of the mobile robot is a paused state;
pedestrians exist around the mobile robot;
the mobile robot is currently in a running state.
9. The interaction method according to claim 8, wherein in a case that the preset projection condition is that the mobile robot is currently in a running state, the acquiring a pattern to be projected includes:
judging whether a pattern currently projected by the mobile robot can reflect the driving intention of the mobile robot or not according to the target driving path information;
if yes, taking the pattern currently projected by the mobile robot as the pattern to be projected;
and if not, generating the pattern to be projected according to the driving intention of the mobile robot.
10. The interaction method according to claim 9, wherein the determining whether the pattern currently projected by the mobile robot reflects the driving intention of the mobile robot according to the target driving path information comprises:
and if the real-time obstacle information indicates that the obstacles existing around the mobile robot are mobile obstacles, judging whether the pattern currently projected by the mobile robot can reflect the driving intention of the mobile robot according to the target driving path information.
11. An interaction device of a mobile robot is characterized in that a projection device and an environment perception sensor are arranged on the mobile robot, and the interaction device comprises:
the acquisition module is used for acquiring map data information of a space where the mobile robot is located and acquiring real-time environment perception data acquired by the environment perception sensor, wherein the real-time environment perception data comprises real-time obstacle information and real-time indication information used for indicating road conditions around the mobile robot;
the path module is used for acquiring target running path information of the mobile robot based on the real-time obstacle information and the map data information, and determining a ground projection area according to the target running path information and the real-time indication information;
the determining module is used for acquiring a pattern to be projected, determining a projection parameter corresponding to the pattern to be projected according to the pattern to be projected and the ground projection area, wherein the pattern to be projected is used for indicating the driving intention of the mobile robot;
and the projection module is used for controlling the projection device according to the projection parameters so as to project the pattern to be projected to the ground projection area.
12. A mobile robot is characterized by comprising a projection device, an environment perception sensor and a processor;
the environment perception sensor is used for acquiring real-time environment perception data, and the real-time environment perception data comprises real-time obstacle information and real-time indication information used for indicating the surrounding road conditions of the mobile robot;
the processor is used for acquiring map data information of a space where the mobile robot is located, acquiring real-time environment perception data, acquiring target driving path information of the mobile robot based on the real-time obstacle information and the map data information, determining a ground projection area according to the target driving path information and the real-time indication information, acquiring a pattern to be projected, and determining projection parameters corresponding to the pattern to be projected according to the pattern to be projected and the ground projection area, wherein the pattern to be projected is used for indicating a driving intention of the mobile robot; controlling the projection device according to the projection parameters to project the pattern to be projected to the ground projection area;
the projection device is used for projecting the pattern to be projected to the ground projection area.
13. The mobile robot of claim 12, wherein the processor is further configured to:
judging whether preset projection conditions are met or not according to the target driving path information and the real-time environment perception data, wherein the preset projection conditions comprise at least one of the following conditions: the driving direction of the mobile robot will change within a future preset time period, the driving state of the mobile robot is a paused state, pedestrians exist around the mobile robot, and the mobile robot is currently in a running state;
and determining a ground projection area according to the target running path information under the condition that the judgment result is that the preset projection condition is met.
14. The mobile robot of claim 13, wherein the processor is further configured to:
judging whether a currently projected pattern of the mobile robot can reflect the driving intention of the mobile robot or not according to the target driving path information under the condition that the preset projection condition is that the mobile robot is currently in a running state;
if yes, taking the pattern currently projected by the mobile robot as the pattern to be projected;
and if not, generating the pattern to be projected according to the driving intention of the mobile robot.
15. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 10.
CN202111354791.4A 2021-11-16 2021-11-16 Interaction method and device of mobile robot, mobile robot and storage medium Active CN114265397B (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN202111354791.4A CN114265397B (en) 2021-11-16 2021-11-16 Interaction method and device of mobile robot, mobile robot and storage medium
EP22894848.5A EP4350461A1 (en) 2021-11-16 2022-11-16 Interaction method and apparatus for mobile robot, and mobile robot and storage medium
PCT/CN2022/132312 WO2023088316A1 (en) 2021-11-16 2022-11-16 Interaction method and apparatus for mobile robot, and mobile robot and storage medium
KR1020247001573A KR20240021954A (en) 2021-11-16 2022-11-16 INTERACTION METHOD AND APPARATUS FOR MOBILE ROBOT, AND MOBILE ROBOT AND STORAGE MEDIUM

Publications (2)

Publication Number Publication Date
CN114265397A true CN114265397A (en) 2022-04-01
CN114265397B CN114265397B (en) 2024-01-16

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant