CN113778086B - Map construction and use method, robot and storage medium - Google Patents

Map construction and use method, robot and storage medium

Info

Publication number
CN113778086B
CN113778086B (application CN202111030844.7A)
Authority
CN
China
Prior art keywords
robot
point
current
current robot
road condition
Prior art date
Legal status
Active
Application number
CN202111030844.7A
Other languages
Chinese (zh)
Other versions
CN113778086A (en)
Inventor
张飞 (Zhang Fei)
陈军 (Chen Jun)
李通 (Li Tong)
Current Assignee
Shanghai Keenlon Intelligent Technology Co Ltd
Original Assignee
Shanghai Keenlon Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Keenlon Intelligent Technology Co Ltd filed Critical Shanghai Keenlon Intelligent Technology Co Ltd
Priority to CN202111030844.7A
Publication of CN113778086A
Application granted
Publication of CN113778086B


Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0238 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
    • G05D1/024 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0253 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow
    • G05D1/0255 Control of position or course in two dimensions specially adapted to land vehicles using acoustic signals, e.g. ultrasonic signals
    • G05D1/0276 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
    • G05D1/0278 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle using satellite positioning signals, e.g. GPS


Abstract

The embodiment of the application discloses a map construction and use method, a robot and a storage medium. The method includes: identifying road condition information at the position of the current robot when the current robot travels through each position point of each travel path; drawing at least one road condition area in a target map according to the road condition information of the current robot at each position point, wherein each road condition area covers at least one travel path; and configuring control parameters for each road condition area. With this technical scheme, road condition areas can be drawn in the map automatically from road condition information during map construction, so that the robot can navigate and travel better according to the constructed map.

Description

Map construction and use method, robot and storage medium
Technical Field
The embodiment of the application relates to the technical field of computer mapping, in particular to a map construction and use method, a robot and a storage medium.
Background
With the continuous development of robotics, robots are playing a growing role in people's life and work. Mobile robots in particular, thanks to their flexibility and mobility, can help people complete tasks in many scenarios; in settings such as logistics transportation, power inspection and indoor guidance, they have gradually replaced manual work to perform specified tasks independently.
In the prior art, a robot usually travels at a fixed speed along a preset path while working, so flexible control of the robot cannot be achieved, which reduces its travel efficiency and safety.
Improvements are therefore needed to address these problems in the prior art.
Disclosure of Invention
The application provides a map construction and use method, a robot and a storage medium, so that road condition areas can be drawn in a map automatically from road condition information during map construction, and the robot can navigate and travel better according to the constructed map.
In a first aspect, an embodiment of the present application provides a map construction method, including:
identifying road condition information at the position of the current robot when the current robot travels through each position point of each travel path;
drawing at least one road condition area in a target map according to the road condition information of the current robot at each position point, wherein each road condition area covers at least one travel path; and
configuring control parameters for each road condition area.
In a second aspect, an embodiment of the present application provides a map using method, including:
acquiring the current position of a target robot;
determining, according to the current position, whether the target robot is in a road condition area in a target map, wherein the target map is determined according to any one of the map construction methods provided by the embodiments of the first aspect; and
controlling the travel of the target robot according to the control parameters configured for the road condition area.
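The use method above can be sketched as follows. This is a minimal illustration, assuming axis-aligned rectangular road condition areas for simplicity (the construction method also allows polygonal areas); all class, field and parameter names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class RoadConditionArea:
    # Axis-aligned rectangle as a stand-in for the drawn area.
    x_min: float
    y_min: float
    x_max: float
    y_max: float
    control_params: dict  # e.g. {"speed": 0.3, "mode": "entertainment"}

    def contains(self, x: float, y: float) -> bool:
        # Is the robot's current position inside this road condition area?
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

def control_for_position(areas, x, y, default_params):
    """Return the control parameters to apply at the target robot's
    current position: the parameters of the area containing it, or the
    defaults when it is in no road condition area."""
    for area in areas:
        if area.contains(x, y):
            return area.control_params
    return default_params

areas = [RoadConditionArea(0, 0, 5, 5, {"speed": 0.3, "mode": "entertainment"})]
params = control_for_position(areas, 2.0, 3.0, {"speed": 1.0, "mode": "task"})
```

A real implementation would run this check continuously against the robot's localized pose and feed the resulting parameters to the motion controller.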
In a third aspect, an embodiment of the present application further provides a map construction apparatus, where the apparatus includes:
The road condition information identification module is used for identifying the road condition information of the position of the current robot when the current robot runs at each position point of each running path;
The road condition area drawing module is used for drawing at least one road condition area in a target map according to the road condition information of the current robot at each position point; wherein the road condition area is covered with at least one driving path;
and the control parameter configuration module is used for configuring control parameters for each road condition area.
In a fourth aspect, an embodiment of the present application further provides a map using apparatus, including:
The position acquisition module is used for acquiring the current position of the target robot;
the road condition area determining module is used for determining whether the target robot is in a road condition area in a target map according to the current position; wherein, the target map is determined according to any one of the map construction methods provided by the embodiments of the first aspect;
and the running control module is used for controlling the running of the target robot according to the control parameters configured in the road condition area.
In a fifth aspect, an embodiment of the present application further provides a robot, including:
One or more processors;
Storage means for storing one or more programs,
When the one or more programs are executed by the one or more processors, the one or more processors are caused to implement any one of the map construction methods as provided by the embodiments of the first aspect; and/or implementing any one of the map use methods as provided by the embodiments of the second aspect.
In a sixth aspect, embodiments of the present application also provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements any of the map construction methods as provided by the embodiments of the first aspect; and/or implementing any one of the map use methods as provided by the embodiments of the second aspect.
According to the embodiment of the application, road condition information is identified at the position of the current robot when the current robot travels through each position point of each travel path; at least one road condition area is drawn in a target map according to the road condition information of the current robot at each position point, wherein each road condition area covers at least one travel path; and control parameters are configured for each road condition area. With this technical scheme, road condition areas can be drawn in the map automatically from road condition information during map construction, and when the robot passes through a road condition area while navigating by the constructed map, it can adjust itself according to the control parameters configured for that area, improving its travel efficiency and safety.
Drawings
Fig. 1 is a flowchart of a map construction method according to a first embodiment of the present application;
fig. 2 is a flowchart of a map construction method according to a second embodiment of the present application;
fig. 3 is a flowchart of a map construction method according to a third embodiment of the present application;
fig. 4 is a schematic diagram of a map construction device according to a fourth embodiment of the present application;
fig. 5 is a schematic diagram of a map using apparatus according to a fifth embodiment of the present application;
Fig. 6 is a schematic view of a robot according to a sixth embodiment of the present application.
Detailed Description
The application is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein merely illustrate the application and do not limit it. It should be further noted that, for convenience of description, only the structures related to the present application, not all structures, are shown in the drawings.
Example 1
Fig. 1 is a flowchart of a map construction method according to an embodiment of the present application. This embodiment is applicable to drawing road condition areas in a map during map construction. The method may be performed by a map construction apparatus, which may be implemented in software and/or hardware.
Referring to fig. 1, a map construction method provided by an embodiment of the present application is applied to a current robot, and includes:
S110, identifying road condition information of the position of the current robot when the current robot runs at each position point of each running path.
The current robot may be a mobile robot that autonomously or interactively performs various humanlike tasks in operating scenarios such as logistics transportation, power inspection and indoor guidance.
In this embodiment, when the current robot moves, it may travel along a pre-planned travel path. Each travel path is provided with a preset number of position points, and the number of position points can be set according to actual conditions. When the current robot passes a position point on the travel path, it can identify the road condition information at that point and store it together with the position point.
The road condition information may include whether the road is congested, whether it is passable, whether it is an uphill or downhill slope, and whether it is a special road surface (such as cement, tile or carpet). Of course, the road condition information may also include any other information that could affect the robot's normal travel speed or working mode. The application does not limit the concrete content of the road condition information; what the current robot should identify can be determined according to the actual situation.
Specifically, the current robot is provided with corresponding information monitoring modules for the different kinds of road condition information, and these modules detect the road condition information.
For example, the current robot may collect people-flow information at each position point through a machine vision camera to determine whether the road is congested.
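The classification step described above can be sketched as follows. All threshold values and field names here are hypothetical illustration choices, not values from the patent.

```python
def traffic_info_at_point(people_count, slope_deg,
                          congestion_threshold=8, slope_threshold=2.0):
    """Classify raw sensor readings at one position point into road
    condition labels. `people_count` might come from a vision camera,
    `slope_deg` from an inertial sensor; thresholds are hypothetical."""
    return {
        "congested": people_count > congestion_threshold,
        "uphill": slope_deg > slope_threshold,
        "downhill": slope_deg < -slope_threshold,
    }

# A point with 12 detected people on level ground.
info = traffic_info_at_point(people_count=12, slope_deg=0.5)
```

The resulting labels would then be stored together with the position point, as the text describes.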
S120, drawing at least one road condition area in a target map according to the road condition information of the current robot at each position point; wherein the road condition area is covered with at least one travel path.
The target map refers to a map to be constructed and perfected. In this embodiment, according to the road condition information obtained by the current robot at each location point, a corresponding road condition area may be drawn in the target map.
For example, when the current robot moves in the operation scene and finds that a certain travel path is not passable, the current robot may, according to the recognized road condition information, mark the area where that travel path lies in the target map as an impassable area.
In this embodiment, the road condition area may be a polygonal area. Of course, it can also take other shapes, such as a rectangular or circular area; the specific shape can be determined according to the actual situation.
Specifically, when drawing a road condition area in the target map, the position points whose road condition information satisfies a set condition may be taken as the position points belonging to that area. For example, a position point where the people flow exceeds a set value may be assigned to a congestion area. Different preset conditions can be set for different kinds of road condition information, and their specific values can be determined according to actual requirements.
Therefore, according to the determined position points in the road condition areas, the corresponding road condition areas can be determined by combining the position points.
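The point-selection-and-combination step can be sketched like this, using a bounding rectangle as a simple stand-in for the drawn polygonal area; function and variable names are hypothetical.

```python
def draw_condition_area(points, info_at, condition):
    """Select the position points whose road condition information
    satisfies `condition` and combine them into a bounding rectangle
    (x_min, y_min, x_max, y_max), a simple stand-in for a drawn area.
    Returns None when no point satisfies the condition."""
    selected = [p for p in points if condition(info_at[p])]
    if not selected:
        return None
    xs = [x for x, _ in selected]
    ys = [y for _, y in selected]
    return (min(xs), min(ys), max(xs), max(ys))

points = [(0, 0), (1, 0), (2, 0), (3, 0)]
flow = {(0, 0): 2, (1, 0): 9, (2, 0): 11, (3, 0): 1}  # people counts per point
area = draw_condition_area(points, flow, lambda n: n > 8)  # congestion rule
```

A production mapper might instead fit a convex hull or polygon around the selected points, but the selection logic is the same.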
In some embodiments, after the drawing the at least one road condition area in the target map, the method may further include: and correspondingly creating a road condition area layer in the road condition area in the target map.
It can be understood that the road condition area layer is created in the target map, so that the road condition area layer can be combined with the target map, and the target map with the road condition area layer can be generated, so that the navigation running of the robot can be guided better according to the constructed target map.
Specifically, the present embodiment may set a current path point in the target map according to the current position of the current robot, and add the travel path between the current path point and the previous history path point.
The current position of the current robot can be acquired through a GPS (Global Positioning System) sensor.
The previous history route point refers to a route point passed by the current robot at a history time, and the route point has been added to the target map.
It can be understood that by setting the current path point in the target map according to the current position of the current robot and adding the travel path between the current path point and the previous history path point, the generation of the path point and the travel path when the current robot moves is realized. In this embodiment, the road condition area is covered with at least one driving path generated by the above method.
Alternatively, the timing of adding path points may be governed by a path point state, which is either on or off. When the state is on, path point generation is supported; when it is off, it is not.
It can be appreciated that, depending on the waypoint state, the current robot can conditionally generate waypoints, avoiding repeated generation of waypoints on repeated road segments.
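The on/off gating can be sketched as a small recorder class; the class and attribute names are hypothetical.

```python
class WaypointRecorder:
    """Gate path point generation on an on/off state so that no
    duplicate path points are generated on repeated road segments."""

    def __init__(self):
        self.enabled = True   # on: path point generation supported
        self.waypoints = []

    def maybe_add(self, position):
        # Record the position only while generation is enabled.
        if self.enabled:
            self.waypoints.append(position)
            return True
        return False

rec = WaypointRecorder()
rec.maybe_add((0.0, 0.0))
rec.enabled = False            # e.g. entering an already-mapped segment
added = rec.maybe_add((1.5, 0.0))
```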
Optionally, setting the current path point in the target map according to the current position of the current robot includes: judging whether the current position of the current robot is within the path range, the current position being a position point other than the first path point (the preset starting point of the robot's travel); if so, determining whether the current position satisfies a preset path point establishment condition; and if that condition is satisfied, taking the current position as the current path point of the robot's travel path.
The path range refers to a path range that the current robot can travel, and the current position of the current robot is a position point except for the first path point.
In this embodiment, the determining process of the path range that the current robot can travel may include: if any label to be added in the actual running scene of the current robot is determined to exist in the target map, determining the range of a label closed loop according to the label coordinates in the target map; and determining the current path range of the robot according to the range of the label closed loop.
The tags to be added may be RFID (Radio Frequency Identification) tags to be added to the target map, and each corresponds one-to-one with a tag-coordinate position in the actual running scene.
When the current robot walks in the actual running scene, it repeatedly confirms the tags to be added until every such tag has been added to the target map. Each time a tag is added, it is connected to the previously added tag by a line segment, so that once all tags have been added they form a closed tag loop. The extent of the loop can be determined from the coordinates of each tag in the target map, and mapping that extent onto the actual running scene gives the current path range of the robot.
For example, if the tags to be added are attached along the four sides of a rectangular ceiling, the path range determined from their coordinates is the vertical projection of that rectangle onto the ground.
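Checking whether the robot is inside the closed tag loop can be done with a standard ray-casting point-in-polygon test; this sketch assumes the loop is given as ordered 2D tag coordinates, and the function name is hypothetical.

```python
def in_label_loop(point, loop):
    """Ray-casting point-in-polygon test. `loop` is the closed loop of
    tag coordinates in order around the boundary; returns True when
    `point` lies inside the loop, i.e. inside the path range."""
    x, y = point
    inside = False
    n = len(loop)
    for i in range(n):
        x1, y1 = loop[i]
        x2, y2 = loop[(i + 1) % n]
        # Does edge (x1,y1)-(x2,y2) straddle the horizontal ray at y?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Tags at the four corners of a rectangular ceiling, projected to the ground.
ceiling_tags = [(0, 0), (4, 0), (4, 4), (0, 4)]
```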
It can be understood that only when the current robot is determined to travel within the correct path range, the determination of the path point establishment condition is performed for the current position of the current robot to determine whether the path point needs to be established, so that the establishment of the path point which does not conform to the reality is avoided outside the path range.
In this embodiment, after the current robot travels within the above-mentioned path range, a determination is made as to whether the current position of the current robot satisfies a preset path point establishment condition. The judging process of whether the current position of the current robot meets the preset path point establishment condition comprises the following steps: determining a distance between the current position of the current robot and a previous history path point according to the current position of the current robot and the position of the previous history path point; and judging whether the distance between the current position of the current robot and the previous historical path point exceeds a preset distance threshold, and if so, determining that the current position of the current robot meets a preset path point establishment condition.
The path point establishment condition may be predetermined, and when the distance between the current position of the current robot and the previous history path point in the target map reaches a preset distance threshold, the current position of the current robot is automatically added to the target map. The preset distance threshold may be 1.5m, 1.3m or other distance values, and the selection of the specific preset distance threshold may be set according to an actual running scene of the robot, which is not limited in the embodiment of the present application.
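The establishment condition reduces to a distance check against the previous history path point; a minimal sketch using the 1.5 m example value (the function name is hypothetical):

```python
import math

def should_add_waypoint(current_pos, prev_waypoint, threshold=1.5):
    """A new path point is established once the robot has moved at
    least `threshold` metres (1.5 m per the example above) from the
    previous history path point."""
    return math.dist(current_pos, prev_waypoint) >= threshold
```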
Alternatively, the travel path added between the current path point and the previous history path point may be either a one-way travel path or a two-way travel path.
A bidirectional travel path means the current robot may travel either from the current path point to the previous history path point or from the previous history path point to the current path point; a unidirectional travel path means the robot may travel in only one of those two directions.
For example, in a certain scenario, the current robot can only travel forward, but cannot return from the original path, and at this time, the generated travel path should be a unidirectional travel path; if the current robot can travel forward or return from the original path, the generated travel path should be a bidirectional travel path.
It can be understood that the determination of the type of the specific running path is related to the actual running scene of the robot, and can be determined according to the actual situation, and the determination of the type of the running path is not limited in any way.
Optionally, adding a travel path between the current path point and the previous history path point after the path point establishment condition is satisfied may include: determining a road section distance between a current path point and a previous history path point in a target map; and judging whether a preset path point connection condition is met between a current path point and a previous history path point according to the road section distance, and if so, establishing a running path between the current path point and the previous history path point.
The path point connection condition may be predetermined: when the distance between the current path point and the previous history path point is within the preset road-section length threshold, a travel path is automatically established between them. The preset road-section length threshold may be 2.0 m, 1.8 m or another length value; the specific value may be set according to the actual running scene of the robot, which the embodiment of the present application does not limit.
Specifically, the determining, according to the road segment distance, whether the current path point and the previous history path point meet the preset path point connection condition may include: comparing the road section distance with a preset road section length threshold; and judging whether the road section distance exceeds a preset road section length threshold value, if not, determining that the current path point and the previous history path point meet the preset path point connection condition.
It can be appreciated that, each time the current robot determines a current path point, it can determine whether a travel path needs to be added between the current path point and a previous history path point according to a preset road length threshold.
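The connection check is the mirror image of the establishment check: connect only when the segment is short enough. A sketch using the 2.0 m example value (function name hypothetical):

```python
import math

def should_connect(point_a, point_b, max_segment=2.0):
    """A travel path is established between two path points only when
    the road-section distance between them does not exceed the length
    threshold (2.0 m per the example above)."""
    return math.dist(point_a, point_b) <= max_segment
```

The same predicate can be applied between the current path point and any history path point, as described next.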
In this embodiment, in addition to determining whether or not it is necessary to add a travel path between a current route point and a previous history route point, it includes: determining whether the distance between the current path point and any historical path point meets a path connection condition; if yes, a driving path is added between the current path point and the historical path point.
Wherein, any one history route point refers to any other history route point in the target map besides the previous history route point.
It will be appreciated that in addition to determining whether a travel path needs to be added between a current waypoint and a previous historical waypoint, the relationship between the current waypoint and other historical waypoints in the target map may be determined based on the distance between the current waypoint and any of the historical waypoints.
S130, configuring control parameters for each road condition area.
The control parameters include static parameters related to the road condition area of which the robot must be informed before it travels into the area; these may include the route width (for example, narrowest, narrow, medium, wide or widest), the size of the area, and other related information. The control parameters may also, or instead, include operating parameters that must be adjusted before the robot travels into the area, for example regulating the robot's travel speed or working mode.
In this embodiment, different road condition areas may be configured with different control parameters.
Typically, the control parameters include travel speed and/or operating mode.
The travel speed is a travel speed of the robot, and is used for controlling the travel speed of the robot. The traveling speed may be classified into a plurality of levels according to the magnitude of the traveling speed, such as five levels of slowest speed, slower speed, medium speed, faster speed and fastest speed.
The working mode refers to the robot's mode of operation, and several different working modes can be set according to actual requirements. The classification includes, but is not limited to, entertainment mode, non-entertainment mode, manual mode, automatic mode, task mode and non-task mode; the working modes may also be combinations of the above. The control parameters of each road condition area can be configured according to its actual situation, which the embodiment of the present application does not limit.
Generally, a fixed normal travel speed is set while the robot moves. If a congestion area lies ahead, the robot can decelerate appropriately before passing through it, for example reducing its normal travel speed by a set proportion (such as 10%). This improves the robot's trafficability and ensures its travel efficiency and safety.
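The proportional deceleration can be sketched in one function; the 10% reduction matches the example above, and the function name is hypothetical.

```python
def adjusted_speed(normal_speed, in_congestion_area, reduction=0.10):
    """Reduce the normal travel speed by a set proportion (10% here,
    as in the example) before passing through a congestion area;
    otherwise keep the normal speed."""
    if in_congestion_area:
        return normal_speed * (1.0 - reduction)
    return normal_speed
```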
For another example, in a certain operation scene the robot is a leading advertisement robot, and the entrance area where it is located is set as an interaction area; before the robot returns to the interaction area, its working mode can be switched automatically to the entertainment mode so that it can interact with users.
It can be understood that by setting control parameters conforming to actual running for each road condition area, the robot can automatically switch to a proper working state according to different running scenes, for example, the robot can control the moving speed of the robot according to the actual running scenes, and the robot can also switch the working modes according to the different running scenes.
In some embodiments, the control parameters may also include a warning level, control conditions of the robot's motor lock, and the like. The warning level may be set according to the road surface flatness of the road condition area or to any other relevant information.
It should be noted that drawing at least one road condition area in the target map according to the road condition information of the current robot at each position point, and configuring control parameters for each area, may be performed either on the current robot itself or by a background management server that draws the target map and to which the current robot sends the road condition information of each position point; which of the two performs the operations can be decided according to the actual situation.
Optionally, the configuring the control parameter to each road condition area includes: and configuring control parameters to each road condition area according to the area identification of each road condition area.
The area identifier may be an index identifier set for the road condition area when the road condition area is constructed.
In this embodiment, each road condition area may be distinguished by different indexes, and corresponding control parameters may be configured for each road condition area according to the different indexes.
Optionally, the configuring the control parameter to each road condition area includes: determining the color attribute of the road condition area layer according to the control parameters of each road condition area; and displaying the road condition area layer in the target map according to the color attribute of the road condition area layer.
Specifically, the colors of the road condition area layers can be set distinctly according to the different levels of the configured control parameters. For example, the road condition areas may be assigned different colors according to the driving speed level in the control parameters, where each color may be filled in as an RGB (red, green, blue) triplet or as a gray value.
It can be understood that by setting different color attributes for the road condition area layer, the road condition area containing different control parameters can be displayed differently in the target map, so as to play roles of indication and highlighting.
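As a sketch of how a speed level in the control parameters might map to the color attribute of a road condition area layer; the level names and RGB values below are illustrative assumptions, not values prescribed by the method.

```python
# Hypothetical mapping from a driving-speed level in the control
# parameters to an RGB fill color for the road condition area layer.
SPEED_LEVEL_COLORS = {
    "full_speed": (0, 200, 0),    # green: no restriction
    "decelerate": (255, 165, 0),  # orange: reduced speed
    "forbidden":  (255, 0, 0),    # red: no-go area
}

def layer_color(speed_level: str) -> tuple:
    """Return the RGB fill color used to display a road condition area layer."""
    return SPEED_LEVEL_COLORS[speed_level]
```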
According to the embodiment of the application, when the current robot runs at each position point of each running path, the road condition information of the position of the current robot is identified; drawing at least one road condition area in a target map according to the road condition information of the current robot at each position point; wherein the road condition area is covered with at least one driving path; configuring control parameters to each road condition area; through the technical scheme, when the map is constructed, the road condition area can be automatically drawn in the map according to the road condition information, and when the robot passes through the road condition area according to the constructed map, the robot can be correspondingly adjusted according to the control parameters configured in the road condition area, so that the running efficiency and the running safety of the robot are improved.
Example Two
Fig. 2 is a flowchart of a map construction method according to a second embodiment of the present application, where the present embodiment optimizes the foregoing solution based on the foregoing embodiment.
Further, the operation of identifying the road condition information of the current robot's position is refined into identifying, for each location point, whether the current robot is at least one of a single slope position, a forbidden position, a preset deceleration position and a carpet position. Correspondingly, the operation of drawing at least one road condition area in the target map according to the road condition information of the current robot at each location point is refined into: drawing a slope area in the target map according to each location point at which the current robot is at a single slope position; drawing a forbidden area in the target map according to each location point at which the current robot is at a forbidden position; drawing a deceleration area in the target map according to each location point at which the current robot is at a preset deceleration position; and drawing a carpet area in the target map according to each location point at which the current robot is at a carpet position, so as to perfect the recognition of road condition information and the drawing of road condition areas.
Wherein the same or corresponding terms as those of the above-described embodiments are not explained in detail herein.
Referring to fig. 2, the map construction method provided in this embodiment includes:
S210, for each location point at which the current robot travels along each travel path, identifying whether the current robot is at least one of a single slope position, a forbidden position, a preset deceleration position and a carpet position at that location point.
In this embodiment, the recognition of the road condition information of each location point is classified, which mainly includes recognition of whether each location point is at least one of a single slope location, a forbidden location, a preset deceleration location and a carpet location.
S220, drawing a slope area in the target map according to each position point of the current robot at the single slope position.
Optionally, the identifying, for each location point, whether the current robot is at a single slope position at the location point includes: determining, according to the attitude data of the current robot at each location point, whether the current robot is in a slope driving state, the slope driving state including an uphill driving state and a downhill driving state; and, if the current robot is in the same slope driving state at all adjacent location points of the location point, determining that the current robot is at a single slope position at the location point. The adjacent location points are other location points in a preset area to which the location point belongs, or other location points traveled within a preset period to which the travel time of the location point belongs.
The attitude data can be acquired by an attitude sensor arranged in the current robot.
Specifically, whether the current robot is in a slope driving state can be determined according to a preset attitude threshold. The attitude threshold includes an uphill attitude threshold and a downhill attitude threshold; the two may be the same or different, and their values can be set according to the actual situation.
In this embodiment, if the attitude data of the current robot at the location point is greater than the uphill attitude threshold, the current robot is determined to be in an uphill driving state; if the attitude data at the location point is smaller than the downhill attitude threshold, the current robot is determined to be in a downhill driving state.
In order to determine more accurately whether the current robot is at a single slope position at the location point, the driving state of the current robot at each adjacent location point must also be monitored for each location point. An adjacent location point may be another location point in the preset area to which the location point belongs, or another location point whose travel time falls within the preset period to which the travel time of the location point belongs.
The preset area and the preset period may be set according to actual situations, for example, considering factors such as performance (climbing ability) or safety of the robot.
Of course, for simplicity, one position point before and after the current position point may be directly used as an adjacent position point, so as to determine the slope driving states of the previous position point, the current position point and the subsequent position point. If the slope running state of at least one position point is inconsistent with the slope running states of other position points, determining that the current robot is not at the single slope position at the position point.
It can be understood that when all adjacent position points of the current robot at the position point are in the same slope running state, the position of the current robot at the position point can be accurately determined to be in a single slope position, and the identification of the road condition information of the position point is realized.
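The adjacent-point check described above can be sketched as follows. The pitch thresholds are assumed values, and the attitude data is reduced to a single pitch angle per location point for illustration.

```python
UPHILL_THRESHOLD_DEG = 3.0     # assumed uphill attitude threshold
DOWNHILL_THRESHOLD_DEG = -3.0  # assumed downhill attitude threshold

def slope_state(pitch_deg: float) -> str:
    """Classify one location point's attitude reading into a driving state."""
    if pitch_deg > UPHILL_THRESHOLD_DEG:
        return "uphill"
    if pitch_deg < DOWNHILL_THRESHOLD_DEG:
        return "downhill"
    return "flat"

def is_single_slope(adjacent_pitches) -> bool:
    """True when every adjacent location point is in the same slope
    driving state (all uphill, or all downhill)."""
    states = {slope_state(p) for p in adjacent_pitches}
    return len(states) == 1 and "flat" not in states
```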
Alternatively, the identifying, for each location point, whether the current robot is at a single slope position at the location point includes: for each location point, if the gradient angles of the current robot at all adjacent location points of the location point meet a set gradient stability condition, determining that the current robot is at a single slope position at the location point. The adjacent location points are other location points in a preset area to which the location point belongs, or other location points traveled within a preset period to which the travel time of the location point belongs.
Specifically, a dual-laser ranging device is arranged in the current robot, wherein a corresponding measuring angle is also configured for the dual-laser ranging device in advance. When the current robot runs, the gradient angle of the current robot at the position point can be determined according to the detection distance of the two lasers acquired by the double-laser distance measuring device and based on a pre-configured measurement angle.
The set gradient stability condition may be determined according to the actual situation, and may specifically be set as a gradient threshold, which may include an uphill gradient threshold and a downhill gradient threshold. If the gradient angle is greater than the set uphill gradient threshold, the robot is determined to be going uphill; if the gradient angle is smaller than the set downhill gradient threshold, the robot is determined to be going downhill.
Likewise, to more accurately determine whether the current robot is in a single slope position at that location point, the slope angle of each neighboring location point of the current robot at that location point needs to be monitored for each location point.
It can be appreciated that, by means of the attitude sensor and the dual-laser ranging device, the determination of whether the current robot is at a single slope position at each location point is effectively realized.
It should be noted that, compared with determining whether the current robot is at a single slope position from the attitude data measured by the attitude sensor, determining it from the gradient angle calculated by the dual-laser ranging device allows the slope to be detected in advance, rather than only once the robot is already on the slope.
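A minimal sketch of estimating a gradient angle from two laser ranges is given below. The geometry is an assumption: both lasers are taken to point straight down and to be mounted a known baseline apart along the travel direction, whereas the patent's device uses a pre-configured measurement angle that the text does not detail.

```python
import math

def slope_angle_deg(front_range_m: float, rear_range_m: float,
                    baseline_m: float) -> float:
    """Estimate the gradient angle from two downward laser ranges.

    Sketch only, under the assumptions stated above.  A positive result
    is treated as an up slope (the ground rising toward the front laser).
    """
    return math.degrees(math.atan2(rear_range_m - front_range_m, baseline_m))
```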
In this embodiment, for each location point, after identifying whether the current robot is at the single-slope location at the location point, the location points may be drawn in the target map according to the determined location points at the single-slope location to form the slope region. Wherein the ramp region is covered with at least one travel path.
S230, drawing a forbidden region in the target map according to each position point of the forbidden position of the current robot.
Optionally, the identifying, for each location point, whether the current robot is in a forbidden location at the location point includes: and for each position point, if the gradient angle of the current robot at the position point meets the set forbidden condition, determining that the current robot is at a forbidden position at the position point.
The set forbidden condition means that safe running of the robot is endangered when the gradient angle reaches a certain gradient threshold; therefore, location points whose gradient angle meets the set forbidden condition can be set as forbidden positions.
Specifically, an uphill forbidden gradient threshold and a downhill forbidden gradient threshold can be set according to the actual situation, and the two thresholds may be the same or different. When the gradient angle is greater than the set uphill forbidden gradient threshold, the current robot is determined to be at a forbidden position at the location point; when the gradient angle is smaller than the set downhill forbidden gradient threshold, the current robot is determined to be at a forbidden position at the location point.
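The forbidden-position test can be sketched as follows; both threshold values are illustrative assumptions.

```python
UPHILL_FORBIDDEN_DEG = 10.0     # assumed uphill forbidden gradient threshold
DOWNHILL_FORBIDDEN_DEG = -10.0  # assumed downhill forbidden gradient threshold

def is_forbidden_position(gradient_angle_deg: float) -> bool:
    """True when the gradient angle at a location point meets the set
    forbidden condition (too steep uphill or downhill for safe running)."""
    return (gradient_angle_deg > UPHILL_FORBIDDEN_DEG
            or gradient_angle_deg < DOWNHILL_FORBIDDEN_DEG)
```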
In this embodiment, for each location point, after identifying whether the current robot is at the forbidden location at the location point, the location points may be drawn in the target map according to the determined location points at the forbidden location to form the forbidden area. Wherein the forbidden area is covered with at least one travel path.
S240, drawing a deceleration area in the target map according to each position point of the current robot at the preset deceleration position.
Optionally, the identifying, for each location point, whether the current robot is at a preset deceleration position at the location point includes: for each location point, determining, according to the attitude data of the current robot at the location point, whether the current robot is in a slope driving state, where the slope driving state includes an uphill driving state and a downhill driving state; and if, within the preset area to which the location point belongs or within the preset period to which the travel time of the location point belongs, the current robot changes from a continuous uphill driving state to a continuous downhill driving state, determining that the current robot is at a preset deceleration position at the location point; and/or, for each location point, if the road section to which the location point belongs is a narrow road section or a congested road section, determining that the current robot is at a preset deceleration position at the location point.
A narrow road section can be determined by identifying the width of the travel path: when the width of the road section to which the location point belongs is smaller than a preset width threshold, the road section is confirmed to be a narrow road section. A congested road section can be determined by identifying the number of pedestrians on the travel path: when the number of pedestrians on the road section to which the location point belongs is greater than a preset number threshold, the road section is determined to be a congested road section.
It can be appreciated that when the current robot changes from a continuous uphill driving state to a continuous downhill driving state, it is passing over a speed bump; of course, it is also possible to effectively determine that the current robot is at a preset deceleration position at the location point according to other information about the road section to which the location point belongs, such as whether it is a narrow or congested road section.
In this embodiment, for each location point, after identifying whether the current robot is at the preset deceleration position at the location point, the location points may be drawn in the target map according to the determined location points at the preset deceleration position to form the deceleration region. Wherein the deceleration zone is covered with at least one travel path.
S250, drawing a carpet area in the target map according to each position point of the current robot at the carpet position.
Here, a carpet surface differs from an ordinary cement road surface: the robot experiences continuous jolting when traveling on carpet.
Optionally, the identifying, for each location point, whether the current robot is in a carpet position at the location point includes: for each position point, determining whether the current robot is positioned at a carpet position at the position point according to the ultrasonic data of the position point and preset carpet ultrasonic data acquired by an ultrasonic sensor in the current robot; and/or, for each position point, determining whether the current robot is positioned at the carpet position at the position point according to the vibration data collected by the vertical axis of the accelerometer in the current robot.
In this embodiment, the chassis of the current robot is provided with an ultrasonic sensor, which collects different ultrasonic data for different road surface regions.
The preset carpet ultrasonic data may be determined in advance. For example, carpets of different materials used in hotel or restaurant scenes are purchased in advance, the current robot is driven on road surfaces paved with these carpets, and each carpet surface is stored together with its corresponding ultrasonic data, so as to build a carpet ultrasonic database.
Thus, when the current robot collects ultrasonic data at a location point, that data can be matched against the preset carpet ultrasonic data in the carpet ultrasonic database. If preset carpet ultrasonic data consistent with the collected data exists, it can be determined that the current robot is in a carpet area, and the specific carpet surface on which the robot is located can also be identified, because even among carpet surfaces the collected ultrasonic data differ from one carpet to another.
Specifically, the process of matching the ultrasonic data with the preset carpet ultrasonic data may include: comparing the waveform corresponding to the collected ultrasonic data with the waveform corresponding to the preset carpet ultrasonic data, and if the waveform similarity is not less than a preset waveform threshold, determining that the current robot is at a carpet position at the location point.
Different preset waveform thresholds can be set for different carpet pavements, and the setting of the preset waveform thresholds can be determined according to actual conditions.
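The waveform-matching step can be sketched as below. The similarity measure (cosine similarity over sampled waveforms) and the threshold value are assumptions for illustration; the patent does not specify the similarity metric.

```python
def waveform_similarity(a, b) -> float:
    """Cosine similarity between two equal-length waveform samples."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def matching_carpet(sample, carpet_db, threshold=0.95):
    """Return the name of the best-matching preset carpet waveform whose
    similarity is not less than the threshold, or None if no carpet
    surface in the database matches."""
    best_name, best_sim = None, threshold
    for name, preset in carpet_db.items():
        sim = waveform_similarity(sample, preset)
        if sim >= best_sim:
            best_name, best_sim = name, sim
    return best_name
```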
In this embodiment, for each location point, it may also be determined from the data collected by an accelerometer whether the current robot is at a carpet position at that location point. For example, an accelerometer is arranged in the current robot, and whether the current robot is at a carpet position at the location point can be determined according to the vibration data acquired by the accelerometer in the vertical-axis direction. Meanwhile, the waveform corresponding to the vibration data can be compared for similarity against the waveform of each preset vibration record in a vibration database, so as to determine the specific carpet surface on which the current robot is located.
The vibration database is pre-stored with vibration data collected by the vertical axis of the accelerometer when the current robot runs on each carpet road surface, and the collected vibration data has a corresponding relation with the carpet road surface.
It will be appreciated that the determination of whether the current robot is in a carpet position at each location point is effectively accomplished by means of the ultrasonic sensor and accelerometer for that location point.
In this embodiment, for each location point, after identifying whether the current robot is at the carpet position at that location point, the location points may be drawn in the target map according to the determined location points at the carpet position to form the carpet area. Wherein the carpet area is covered with at least one travel path.
S260, configuring control parameters to each road condition area.
Here, the road condition areas include at least one of the slope area, the forbidden area, the deceleration area and the carpet area drawn above. Specifically, corresponding control parameters can be configured for each road condition area according to the differences between the areas.
According to the embodiment of the application, on the basis of the embodiment, the identification of road condition information is refined, the identification of the road condition information such as the single slope position, the forbidden position, the preset deceleration position and the carpet position is realized, the drawing of the corresponding road condition area in the target map is realized according to the road condition information of the current robot at each position point, and the robot can better navigate according to the constructed target map.
Example Three
Fig. 3 is a flowchart of a map using method according to a third embodiment of the present application. The embodiment of the application can be applied to the situation of guiding the robot to navigate and travel according to the constructed target map. The method may be performed by a map-using device, which may be implemented in software and/or hardware.
Referring to fig. 3, a map using method provided by an embodiment of the present application is applied to a target robot, and includes:
S310, acquiring the current position of the target robot.
The current position of the target robot can be obtained via a positioning sensor carried by the robot, for example a GPS sensor. The target robot may be the same robot as, or a different robot from, the aforementioned current robot.
S320, determining whether the target robot is in the road condition area in the target map according to the current position.
The target map is determined according to the map construction method provided by any embodiment of the present application, and details are not repeated here.
It can be understood that, if it is determined that the target robot is within a road condition area of the target map, the target robot may use the parameters configured for that road condition area to guide its movement or switch its working state.
S330, performing running control on the target robot according to the control parameters configured in the road condition area.
Specifically, when the target robot travels in a slope area, its running speed can be actively adjusted according to the preset control parameters of the slope area. When the pitch angle of the slope is large, the target robot can give an early warning in advance; meanwhile, when the slope angle exceeds a safe slope-driving threshold, a self-protection mode can be set for the robot through a preset working mode.
Optionally, the controlling the driving of the target robot according to the control parameter configured in the road condition area includes: searching control parameters configured in the road condition area according to the area identification of the road condition area; or determining control parameters configured in the road condition area according to the color attribute of the road condition area layer corresponding to the road condition area; and carrying out running control on the target robot according to the control parameters.
The area identifier may be an index identifier set for each road condition area when it is constructed; according to the index identifier, the control parameters configured for the road condition area can be looked up. Alternatively, the control parameters configured for a road condition area can be determined according to the color attribute of the corresponding road condition area layer, such as the depth of the color or the use of different colors.
In this embodiment, when the road condition areas are drawn on the target map, different colors are set for the corresponding road condition area layers according to the control parameters of each area, for distinction and prompting; for example, the speed allowed in a road condition area can be represented by different gray values of the same color.
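Both lookup routes can be sketched together as follows; the identifiers, colors and parameter values in the tables are hypothetical, introduced only to illustrate the two lookup paths.

```python
# Hypothetical lookup tables; all identifiers, colors and parameter
# values are illustrative assumptions.
PARAMS_BY_AREA_ID = {
    "slope_01":  {"speed": 0.5, "mode": "climb"},
    "carpet_01": {"speed": 0.7, "mode": "normal"},
}
PARAMS_BY_LAYER_COLOR = {
    (255, 0, 0): {"speed": 0.0, "mode": "stop"},    # red: forbidden area
    (0, 200, 0): {"speed": 1.0, "mode": "normal"},  # green: unrestricted
}

def control_params(area_id=None, layer_color=None):
    """Look up a road condition area's control parameters either by its
    index identifier or by the color attribute of its layer."""
    if area_id is not None:
        return PARAMS_BY_AREA_ID[area_id]
    return PARAMS_BY_LAYER_COLOR[layer_color]
```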
It can be understood that the target robot is controlled to run according to the control parameters configured in the road condition area, so that the movement of the target robot can be ensured to be more intelligent and accord with the actual scene.
For example, the target robot may be a meal delivery robot that performs meal delivery tasks in a restaurant operation scene, in which a target map with road condition area layers is stored in advance. According to this pre-stored target map, the meal delivery robot can automatically control its running according to the control parameters configured for each road condition area, such as its local running speed, working mode and autonomous movement range, so that meals are transported safely to the target dining table and the robot runs more efficiently, safely and stably.
It should be noted that the target robot according to the embodiment of the present application may be the same as or different from the current robot according to the previous embodiment, and the present application is not limited thereto.
The embodiment of the application obtains the current position of the target robot; determining whether the target robot is in a road condition area in a target map according to the current position; and carrying out running control on the target robot according to the control parameters configured in the road condition area, so that the movement of the target robot is more intelligent and accords with the actual scene.
Example Four
Fig. 4 is a schematic structural diagram of a map building apparatus according to a fourth embodiment of the present application. Referring to fig. 4, a map construction device provided by an embodiment of the present application is configured on a current robot, and the device includes: the road condition information identifying module 410, the road condition area drawing module 420 and the control parameter configuration module 430.
The road condition information identifying module 410 is configured to identify road condition information of a location where the current robot is located when the current robot travels at each location point of each travel path; the road condition area drawing module 420 is configured to draw at least one road condition area in a target map according to the road condition information of the current robot at each position point; wherein the road condition area is covered with at least one driving path; the control parameter configuration module 430 is configured to configure control parameters for each of the road condition areas.
According to the embodiment of the application, when the current robot runs at each position point of each running path, the road condition information of the position of the current robot is identified; drawing at least one road condition area in a target map according to the road condition information of the current robot at each position point; wherein the road condition area is covered with at least one driving path; configuring control parameters to each road condition area; through the technical scheme, when the map is constructed, the road condition area can be automatically drawn in the map according to the road condition information, and when the robot passes through the road condition area according to the constructed map, the robot can correspondingly adjust according to the control parameters configured by the road condition area, so that the running efficiency and the running safety of the robot are improved.
Further, the control parameters include at least a travel speed and/or an operating mode.
Further, the traffic information identifying module 410 includes: the road condition information identification unit is used for identifying whether the current robot is at least one of a single slope position, a forbidden position, a preset deceleration position and a carpet position at each position point.
Further, the road condition area drawing module 420 includes: a slope region drawing unit, configured to draw a slope region in the target map according to each position point of the current robot at a single slope position; a forbidden region drawing unit, configured to draw a forbidden region in the target map according to each position point of the current robot at a forbidden position; a deceleration region drawing unit, configured to draw a deceleration region in the target map according to each position point of the current robot at a preset deceleration position; and the carpet region drawing unit is used for drawing the carpet region in the target map according to each position point of the current robot at the carpet position.
Further, the traffic information identifying unit includes: a driving state first determining subunit, configured to determine, for each location point, whether the current robot is in a slope driving state according to pose data of the current robot at the location point; wherein the slope driving state comprises an ascending slope driving state and a descending slope driving state; a slope position determining subunit, configured to determine that the current robot is at a single slope position at the location point if all neighboring location points of the location point are in the same slope running state; the adjacent position points are other position points in a preset area to which the position point belongs, or other position points which are driven in a preset period to which the driving time of the position point belongs.
Further, the traffic information identifying unit includes: a stable condition judging subunit, configured to determine, for each location point, that the current robot is at a single slope position at the location point if the slope angles of the current robot at all neighboring location points of the location point meet a set slope stable condition; the adjacent position points are other position points in a preset area to which the position point belongs, or other position points which are driven in a preset period to which the driving time of the position point belongs.
Further, the traffic information identifying unit includes: and the forbidden condition judging subunit is used for determining that the current robot is at a forbidden position at the position point for each position point if the gradient angle of the current robot at the position point meets the set forbidden condition.
Further, the traffic information identifying unit includes: a second running state determining subunit, configured to determine, for each location point, whether the current robot is in a slope running state according to posture data of the current robot at the location point; wherein the slope driving state comprises an ascending slope driving state and a descending slope driving state; a deceleration region determining subunit, configured to determine that, if the current robot is in a preset region to which the location point belongs or in a preset period to which a running time of the location point belongs, the current robot changes from a continuous uphill running state to a continuous downhill running state, and then the current robot is in a preset deceleration region at the location point; and/or, for each position point, if the road section to which the position point belongs is a narrow road section or a congestion road section, determining that the current robot is in a preset deceleration area at the position point.
Further, the traffic information identifying unit includes: the carpet position determining subunit is used for determining whether the current robot is positioned at a carpet position at each position point according to the ultrasonic data of the position point and preset carpet ultrasonic data acquired by the ultrasonic sensor in the current robot; and/or, for each position point, determining whether the current robot is positioned at the carpet position at the position point according to the vibration data collected by the vertical axis of the accelerometer in the current robot.
Further, the apparatus further includes: a driving path adding module, configured to set a current path point in a target map according to the current position of the current robot and to add a driving path between the current path point and the previous historical path point.
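The path-point bookkeeping of this module can be sketched as follows; the 0.5 m distance threshold and all identifiers are assumptions for the sketch:

```python
import math

# Minimal sketch of the driving path adding module: a new path point is
# recorded once the robot has moved beyond a distance threshold from the
# previous historical path point. The threshold value is assumed.
DIST_THRESHOLD_M = 0.5  # assumption

def maybe_add_path_point(path, current_pos, threshold=DIST_THRESHOLD_M):
    """path: list of (x, y) waypoints; appends current_pos if far enough away."""
    if not path:
        path.append(current_pos)
        return True
    px, py = path[-1]
    cx, cy = current_pos
    if math.hypot(cx - px, cy - py) > threshold:
        path.append(current_pos)  # new driving path segment: path[-2] -> path[-1]
        return True
    return False

path = [(0.0, 0.0)]
maybe_add_path_point(path, (0.3, 0.0))  # too close, ignored
maybe_add_path_point(path, (0.9, 0.0))  # beyond threshold, appended
print(path)  # [(0.0, 0.0), (0.9, 0.0)]
```

Spacing waypoints this way keeps the stored driving path compact while still tracing the route the robot actually took.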
The map construction device provided by the embodiments of the present application can execute the map construction method provided by any embodiment of the present application, and has the functional modules and beneficial effects corresponding to the executed method.
Example five
Fig. 5 is a schematic structural diagram of a map using apparatus according to a fifth embodiment of the present application. Referring to fig. 5, the map using apparatus provided in this embodiment is configured in a target robot and includes: a position acquisition module 510, a road condition area determining module 520, and a driving control module 530.
The position acquisition module 510 is configured to acquire the current position of the target robot; the road condition area determining module 520 is configured to determine, according to the current position, whether the target robot is in a road condition area of a target map, wherein the target map is determined according to the map construction method of any one of claims 1 to 7; and the driving control module 530 is configured to control the driving of the target robot according to the control parameters configured for the road condition area.
In this embodiment of the present application, the current position of the target robot is acquired; whether the target robot is in a road condition area of the target map is determined according to the current position; and the driving of the target robot is controlled according to the control parameters configured for the road condition area, so that the movement of the target robot is more intelligent and better matches the actual scene.
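The "is the robot inside a road condition area" step can be sketched as a simple containment test. Here each area is approximated by an axis-aligned bounding box purely for illustration; the application does not specify the area geometry:

```python
# Hedged sketch of the road condition area check: each area is approximated by
# an axis-aligned bounding box (xmin, ymin, xmax, ymax). The region names and
# coordinates below are illustrative assumptions.

def region_containing(pos, regions):
    """regions: dict name -> (xmin, ymin, xmax, ymax); returns first match or None."""
    x, y = pos
    for name, (xmin, ymin, xmax, ymax) in regions.items():
        if xmin <= x <= xmax and ymin <= y <= ymax:
            return name
    return None

regions = {"decel_zone": (0.0, 0.0, 2.0, 1.0), "carpet": (3.0, 0.0, 5.0, 4.0)}
print(region_containing((1.0, 0.5), regions))    # decel_zone
print(region_containing((10.0, 10.0), regions))  # None
```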
Further, the driving control module 530 includes: a control parameter searching unit, configured to look up the control parameters configured for the road condition area according to the area identifier of the road condition area, or to determine the control parameters configured for the road condition area according to the color attribute of the road condition area layer corresponding to the road condition area; and a driving control unit, configured to control the driving of the target robot according to the control parameters.
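The two lookup strategies of the control parameter searching unit can be sketched as below. The table contents (identifiers, colors, speeds, warning levels) are illustrative assumptions, not values from the application:

```python
# Hedged sketch of the two lookup strategies: by area identifier, or by the
# color attribute of the area's map layer. All table contents are assumed.
CONTROL_BY_REGION_ID = {
    "slope_area_1":  {"speed_mps": 0.4, "warning_level": "high", "motor_lock": False},
    "carpet_area_2": {"speed_mps": 0.6, "warning_level": "low",  "motor_lock": False},
}

CONTROL_BY_LAYER_COLOR = {
    "#ff0000": {"speed_mps": 0.0, "warning_level": "high", "motor_lock": True},   # e.g. forbidden
    "#ffff00": {"speed_mps": 0.3, "warning_level": "mid",  "motor_lock": False},  # e.g. deceleration
}

def lookup_control_params(region_id=None, layer_color=None):
    """Prefer the area identifier; fall back to the layer's color attribute."""
    if region_id is not None and region_id in CONTROL_BY_REGION_ID:
        return CONTROL_BY_REGION_ID[region_id]
    if layer_color is not None:
        return CONTROL_BY_LAYER_COLOR.get(layer_color)
    return None

print(lookup_control_params(region_id="slope_area_1")["speed_mps"])  # 0.4
print(lookup_control_params(layer_color="#ff0000")["motor_lock"])    # True
```

Keying the fallback on the layer color is what lets the color attribute chosen during map construction double as a machine-readable control hint.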
The map using device provided by the embodiments of the present application can execute the map using method provided by any embodiment of the present application, and has the functional modules and beneficial effects corresponding to the executed method.
Example six
Fig. 6 is a block diagram of an exemplary robot 612 suitable for implementing embodiments of the present application, according to a sixth embodiment. The robot 612 shown in fig. 6 is merely an example and should not impose any limitation on the functionality or scope of use of the embodiments of the present application.
As shown in fig. 6, robot 612 is in the form of a general purpose computing device. Components of robot 612 may include, but are not limited to: one or more processors or processing units 616, a system memory 628, and a bus 618 that connects the various system components (including the system memory 628 and processing units 616).
Bus 618 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include the Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
Robot 612 typically includes a variety of computer system readable media. Such media can be any available media that can be accessed by robot 612 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 628 may include computer-system-readable media in the form of volatile memory, such as Random Access Memory (RAM) 630 and/or cache memory 632. Robot 612 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 634 can be used to read from or write to non-removable, nonvolatile magnetic media (not shown in FIG. 6, commonly referred to as a "hard drive"). Although not shown in fig. 6, a magnetic disk drive for reading from and writing to a removable non-volatile magnetic disk (e.g., a "floppy disk"), and an optical disk drive for reading from or writing to a removable non-volatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In such cases, each drive may be coupled to bus 618 through one or more data medium interfaces. The system memory 628 may include at least one program product having a set (e.g., at least one) of program modules configured to carry out the functions of the embodiments of the present application.
A program/utility 640 having a set (at least one) of program modules 642 may be stored in, for example, the system memory 628, such program modules 642 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment. Program modules 642 generally perform the functions and/or methods of the described embodiments of the present application.
The robot 612 may also communicate with one or more external devices 614 (e.g., keyboard, pointing device, display 624, etc.), one or more devices that enable a user to interact with the robot 612, and/or any devices (e.g., network card, modem, etc.) that enable the robot 612 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 622. Also, robot 612 may communicate with one or more networks, such as a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet, through network adapter 620. As shown, network adapter 620 communicates with other modules of robot 612 over bus 618. It should be appreciated that although not shown in fig. 6, other hardware and/or software modules may be used in connection with robot 612, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
The processing unit 616 executes various functional applications and performs data processing by running programs stored in the system memory 628, for example implementing the map construction method or the map use method provided by any embodiment of the present application.
Example seven
Embodiment seven of the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the map construction method provided by any embodiment of the present application, or implements the map use method provided by any embodiment of the present application.
From the above description of embodiments, it will be clear to a person skilled in the art that the present application may be implemented by means of software and necessary general purpose hardware, but of course also by means of hardware, although in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as a floppy disk, a Read Only Memory (ROM), a Random Access Memory (RAM), a FLASH memory (FLASH), a hard disk, or an optical disk of a computer, etc., and include several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the embodiments of the present application.
It should be noted that, in the above embodiment of the map construction device, the units and modules included are divided only according to functional logic, but the division is not limited thereto, so long as the corresponding functions can be implemented; in addition, the specific names of the functional units are only for distinguishing them from each other and are not used to limit the protection scope of the present application.
The above are only preferred embodiments of the present application and the technical principles applied therein. It will be understood by those skilled in the art that the present application is not limited to the particular embodiments described herein, and that various obvious changes, rearrangements and substitutions can be made by those skilled in the art without departing from the scope of the application. Therefore, while the application has been described in connection with the above embodiments, it is not limited to them, and may be embodied in many other equivalent forms without departing from its spirit or scope, which is defined by the following claims.

Claims (10)

1. A map construction method, comprising:
judging, according to the current position of the current robot, whether the current position is within a path range;
if the current position is within the path range, determining the distance between the current position and a previous historical path point according to the current position and the position of the previous historical path point;
judging whether the distance exceeds a preset distance threshold;
if so, determining that the current position of the current robot meets a preset path point establishment condition, determining the current position as the current path point of the current robot's driving path, and adding a driving path between the current path point and the previous historical path point;
when the current robot drives through each position point of each driving path, identifying the road condition information of the current robot's position;
drawing at least one road condition area in a target map according to the road condition information of the current robot at each position point, and correspondingly creating a road condition area layer for each road condition area, wherein each road condition area covers at least one driving path;
configuring control parameters for each road condition area, wherein the control parameters at least comprise a working mode, a warning level, a driving speed and motor lock control parameters of the robot, and the working mode comprises an entertainment mode;
determining the color attribute of the road condition area layer according to the control parameters;
and displaying the road condition area layer in the target map according to the color attribute.
2. The method of claim 1, wherein the identifying the traffic information of the current robot location comprises:
for each position point, identifying whether the current robot is at at least one of a single slope position, a forbidden position, a preset deceleration position and a carpet position at the position point;
correspondingly, the drawing of at least one road condition area in the target map according to the road condition information of the current robot at each position point comprises at least one of the following steps:
drawing a slope area in the target map according to each position point at which the current robot is at a single slope position;
drawing a forbidden area in the target map according to each position point at which the current robot is at a forbidden position;
drawing a deceleration area in the target map according to each position point at which the current robot is at a preset deceleration position;
and drawing a carpet area in the target map according to each position point at which the current robot is at a carpet position.
3. The method of claim 2, wherein for each location point, identifying whether the current robot is in a single slope position at that location point comprises:
determining, for each position point, whether the current robot is in a slope driving state according to the posture data of the current robot at the position point, wherein the slope driving state comprises an uphill driving state and a downhill driving state;
if the current robot is in the same slope driving state at all adjacent position points of the position point, determining that the current robot is at a single slope position at the position point;
the adjacent position points are other position points in a preset area to which the position point belongs, or other position points which are driven in a preset period to which the driving time of the position point belongs.
4. The method of claim 2, wherein for each location point, identifying whether the current robot is in a single slope position at that location point comprises:
for each position point, if the gradient angles of the current robot at all adjacent position points of the position point meet the set gradient stability condition, determining that the current robot is at a single gradient position at the position point;
the adjacent position points are other position points in a preset area to which the position point belongs, or other position points which are driven in a preset period to which the driving time of the position point belongs.
5. The method of claim 2, wherein for each position point, identifying whether the current robot is at a forbidden position at that position point comprises:
for each position point, if the slope angle of the current robot at the position point meets the set forbidden condition, determining that the current robot is at a forbidden position at the position point.
6. The method of claim 2, wherein for each location point, identifying whether the current robot is at a preset deceleration position at that location point comprises:
for each position point, determining whether the current robot is in a slope driving state according to the posture data of the current robot at the position point, wherein the slope driving state comprises an uphill driving state and a downhill driving state;
if, within the preset area to which the position point belongs or within the preset period to which the driving time of the position point belongs, the current robot changes from a continuous uphill driving state to a continuous downhill driving state, determining that the current robot is in a preset deceleration area at the position point;
and/or,
for each position point, if the road section to which the position point belongs is a narrow or congested road section, determining that the current robot is in a preset deceleration area at the position point.
7. The method of claim 2, wherein for each location point, identifying whether the current robot is in a carpet position at that location point comprises:
for each position point, determining whether the current robot is at a carpet position at the position point according to the ultrasonic data collected at the position point by the ultrasonic sensor in the current robot and preset carpet ultrasonic data; and/or,
for each position point, determining whether the current robot is at a carpet position at the position point according to the vibration data collected on the vertical axis of the accelerometer in the current robot.
8. A map use method, characterized by comprising:
Acquiring the current position of a target robot;
Determining whether the target robot is in a road condition area in a target map according to the current position; wherein the target map is determined according to the map construction method of any one of claims 1 to 7;
And carrying out running control on the target robot according to the control parameters configured in the road condition area.
9. A robot, comprising:
One or more processors;
Storage means for storing one or more programs,
when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the map construction method according to any one of claims 1-7 and/or the map use method according to claim 8.
10. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the map construction method according to any one of claims 1-7 and/or the map use method according to claim 8.
CN202111030844.7A 2021-09-03 2021-09-03 Map construction and use method, robot and storage medium Active CN113778086B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111030844.7A CN113778086B (en) 2021-09-03 2021-09-03 Map construction and use method, robot and storage medium


Publications (2)

Publication Number Publication Date
CN113778086A CN113778086A (en) 2021-12-10
CN113778086B true CN113778086B (en) 2024-06-18

Family

ID=78840950


Country Status (1)

Country Link
CN (1) CN113778086B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114383597A (en) * 2022-01-12 2022-04-22 北京百度网讯科技有限公司 Request processing method and device, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102829791A (en) * 2011-06-14 2012-12-19 上海博泰悦臻电子设备制造有限公司 Vehicle-mounted terminal based navigation unit and navigation path correction method
CN109186618A (en) * 2018-08-31 2019-01-11 平安科技(深圳)有限公司 Map constructing method, device, computer equipment and storage medium
CN110347164A (en) * 2019-08-08 2019-10-18 北京云迹科技有限公司 A kind of speed adjusting method, device and storage medium
CN112684803A (en) * 2021-03-11 2021-04-20 上海擎朗智能科技有限公司 Control method and device for mobile robot, mobile robot and storage medium

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101846228B1 (en) * 2016-12-09 2018-04-06 국방과학연구소 Driving guiding method for unmanned robot in inclined road
EP3423787B8 (en) * 2017-05-22 2020-04-08 Baidu.com Times Technology (Beijing) Co., Ltd. Method and system for updating maps based on control feedbacks of autonomous driving vehicles
CN109084789A (en) * 2018-07-02 2018-12-25 四川斐讯信息技术有限公司 A kind of smartwatch air navigation aid, smartwatch and system
KR20200015340A (en) * 2018-08-02 2020-02-12 주식회사 유진로봇 Mobile Robot That Moves Dynamically according to Attribute Block, User Terminal Device, and Map Management Server
CN112578783B (en) * 2019-09-29 2022-08-30 杭州海康机器人技术有限公司 Walking control method and device for automatic guided transport vehicle
CN110941265A (en) * 2019-11-05 2020-03-31 盟广信息技术有限公司 Map entry method and device, computer equipment and storage medium
CN112828879B (en) * 2019-11-25 2022-08-05 上海高仙自动化科技发展有限公司 Task management method and device, intelligent robot and medium
CN112585656B (en) * 2020-02-25 2022-06-17 华为技术有限公司 Method and device for identifying special road conditions, electronic equipment and storage medium




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant