CN110315538B - Method and device for displaying obstacles on an electronic map, and robot - Google Patents


Info

Publication number: CN110315538B
Application number: CN201910566499.5A
Authority: CN (China)
Other versions: CN110315538A (Chinese, zh)
Prior-art keywords: obstacle, electronic map, static, robot
Inventors: 李昂, 李少海, 郭盖华, 杨白
Current and original assignee: Shenzhen LD Robot Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Application filed by Shenzhen LD Robot Co Ltd
Priority: CN201910566499.5A
Publication of application CN110315538A; application granted; publication of grant CN110315538B
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)

Classifications

    • B — PERFORMING OPERATIONS; TRANSPORTING
    • B25 — HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J — MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 13/00 — Controls for manipulators
    • B25J 9/00 — Programme-controlled manipulators
    • B25J 9/16 — Programme controls
    • B25J 9/1602 — Programme controls characterised by the control system, structure, architecture
    • B25J 9/161 — Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • B25J 9/1656 — Programme controls characterised by programming, planning systems for manipulators
    • B25J 9/1664 — Programme controls characterised by motion, path, trajectory planning
    • B25J 9/1666 — Avoiding collision or forbidden zones

Abstract

The invention belongs to the technical field of robots and provides a method and a device for displaying obstacles on an electronic map, and a robot. The method for displaying obstacles on an electronic map comprises the steps of collecting information of obstacles, judging the type of the obstacles in the collected information, determining the display mode of the obstacles on the electronic map according to the type of the obstacles, and displaying the obstacles on the electronic map according to the determined display mode. Because the display mode is determined by the type of the obstacle, only some of the obstacles need be displayed on the electronic map, which reduces the number of times the robot avoids obstacles, thereby reducing the possibility that the robot misses areas and improving the cleaning efficiency of the robot.

Description

Method and device for displaying obstacles on an electronic map, and robot
Technical Field
The present invention relates to the field of robots, and in particular, to a method and an apparatus for displaying an obstacle on an electronic map, a robot, and a computer-readable storage medium.
Background
Robots in the prior art usually perform indoor cleaning according to an existing electronic map. When all indoor obstacles are displayed on the electronic map, the number of times the robot avoids obstacles may increase, so that the robot misses the areas where some obstacles are located, for example the edge areas around furniture or around pets.
Therefore, a new technical solution is needed to solve the above technical problems.
Disclosure of Invention
In view of this, embodiments of the present invention provide a method and an apparatus for displaying obstacles on an electronic map, and a robot, in which the display mode of an obstacle is determined according to its type, so that only some of the obstacles need be displayed on the electronic map. This is beneficial to reducing the number of times the robot avoids obstacles, thereby reducing the possibility that the robot misses areas, and further improving the cleaning efficiency of the robot.
A first aspect of an embodiment of the present invention provides a method for displaying an obstacle on an electronic map, including:
acquiring information of obstacles;
judging the type of the obstacle in the acquired information;
determining a display mode of the obstacle on the electronic map according to the type of the obstacle;
and displaying the obstacles on the electronic map according to the determined display mode.
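The four steps of the first aspect can be sketched in a few lines of Python. This is an illustrative sketch only; the function and parameter names are hypothetical and not part of the patent.

```python
def display_obstacles(readings, classify, choose_mode):
    """Sketch of the four claimed steps.

    readings    -- collected obstacle information            (step 1)
    classify    -- maps a reading to an obstacle type        (step 2)
    choose_mode -- maps a type to a display mode             (step 3)
    returns     -- (reading, mode) pairs the map would draw  (step 4)
    """
    shown = []
    for reading in readings:
        kind = classify(reading)       # e.g. "static", "semi-static", "dynamic"
        mode = choose_mode(kind)       # e.g. "normal", "highlight", "hide"
        if mode != "hide":             # hidden obstacles are simply not drawn
            shown.append((reading, mode))
    return shown
```

With a policy that hides dynamic obstacles, a wall is drawn while a pet is not, which is the effect the patent claims.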
A second aspect of an embodiment of the present invention provides an apparatus for displaying an obstacle on an electronic map, including:
the acquisition module is used for acquiring information of the obstacle;
the judging module is used for judging the type of the obstacle in the acquired information;
the determining module is used for determining the display mode of the obstacle on the electronic map according to the type of the obstacle;
and the display module is used for displaying the obstacles on the electronic map according to the determined display mode.
A third aspect of embodiments of the present invention provides a robot, including a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the method of the first aspect when executing the computer program.
A fourth aspect of embodiments of the present invention provides a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the method mentioned in the first aspect.
Compared with the prior art, the embodiment of the invention has the following beneficial effects: in this embodiment, information of an obstacle is first collected, the type of the obstacle in the collected information is then determined, the display mode of the obstacle on the electronic map is determined according to its type, and finally the obstacle is displayed on the electronic map according to the determined display mode. Because the display mode is determined by the type of the obstacle, only some of the obstacles need be displayed on the electronic map; the number of times the robot avoids obstacles is reduced, the possibility that the robot misses areas is reduced, and the cleaning efficiency of the robot is improved. The method and the device therefore have high usability and practicability.
Drawings
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed for the embodiments or the prior-art descriptions are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart illustrating a method for displaying an obstacle on an electronic map according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of a method for displaying an obstacle on an electronic map according to a second embodiment of the present invention;
fig. 3 is a schematic structural diagram of a device for displaying an obstacle on an electronic map according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of a robot according to a fourth embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
It should be understood that, the sequence numbers of the steps in this embodiment do not mean the execution sequence, and the execution sequence of each process should be determined by the function and the inherent logic of the process, and should not constitute any limitation on the implementation process of the embodiment of the present invention.
It should be noted that, the descriptions of "first" and "second" in this embodiment are used to distinguish different regions, modules, and the like, and do not represent a sequential order, and the descriptions of "first" and "second" are not limited to be of different types.
In order to explain the technical means of the present invention, the following description will be given by way of specific examples.
Example one
Fig. 1 is a schematic flow chart of a method for displaying an obstacle on an electronic map according to an embodiment of the present invention, where an execution subject of the method may be a robot, and specifically, the method may include the following steps:
S101: Acquiring information of the obstacles.
The electronic map includes, but is not limited to, a grid map, a topological map, and a vector map. An obstacle is any object that can block the movement of the robot, such as a table, a chair, or a carpet. The robot is a kind of intelligent household appliance, such as a floor-sweeping, floor-mopping, or cleaning robot, which can automatically complete floor-cleaning tasks in a room with a certain amount of artificial intelligence.
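Of the map kinds mentioned, a grid map is the simplest to sketch. The following minimal occupancy grid (an assumption of this sketch; the patent does not prescribe a data structure) shows the operations the later steps rely on: marking, clearing, and querying obstacle cells.

```python
class GridMap:
    """Minimal occupancy-grid sketch: 0 = free cell, 1 = obstacle cell."""

    def __init__(self, width, height):
        self.cells = [[0] * width for _ in range(height)]

    def mark_obstacle(self, x, y):
        """Display an obstacle at cell (x, y)."""
        self.cells[y][x] = 1

    def clear(self, x, y):
        """Hide/delete an obstacle at cell (x, y)."""
        self.cells[y][x] = 0

    def is_blocked(self, x, y):
        return self.cells[y][x] == 1
```

Hiding a dynamic obstacle then simply means never calling `mark_obstacle` for it, or calling `clear` on its cell.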
In one embodiment, the information of the obstacle is the information of the obstacle collected by the robot through a vision sensor or an optical sensor, and includes the information of the obstacle encountered by the robot in front, back and side directions.
S102: and judging the type of the obstacle in the acquired information.
The type of the obstacle includes, but is not limited to, a static obstacle, a dynamic obstacle, and a semi-static obstacle. A static obstacle is one that is in a static state for longer than in any other state over a period of time, such as a wall; a dynamic obstacle is one that is in motion for longer than in any other state, such as a pet in a room; a semi-static obstacle lies between the static and dynamic cases and spends roughly equal time stationary and moving over a period of time, such as a door or a wardrobe.
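The informal definitions above (mostly stationary, mostly moving, roughly equal) can be turned into a small classifier. The tolerance band defining "roughly equal" is an assumption of this sketch, not a value from the patent.

```python
def classify_by_motion(stationary_seconds, moving_seconds, tolerance=0.2):
    """Classify an obstacle from how long it was observed stationary vs moving.

    Mostly stationary -> "static"; mostly moving -> "dynamic";
    roughly equal (within +/- tolerance/2 of 50%) -> "semi-static".
    """
    total = stationary_seconds + moving_seconds
    if total == 0:
        raise ValueError("no observations")
    frac_static = stationary_seconds / total
    if abs(frac_static - 0.5) <= tolerance / 2:
        return "semi-static"
    return "static" if frac_static > 0.5 else "dynamic"
```

A wall observed stationary for almost the whole period classifies as static, a pet as dynamic, and a door with balanced observations as semi-static.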
In one embodiment, the information may be preprocessed, for example, smoothed, filtered, and denoised, before determining the type of obstacle in the acquired information.
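One of the simplest preprocessing steps of the kind mentioned (smoothing/denoising) is a moving-average filter over a sequence of sensor readings. The window size is an assumption of this sketch.

```python
def smooth(values, window=3):
    """Moving-average filter: each reading is replaced by the mean of
    the readings in a window centred on it (truncated at the ends)."""
    half = window // 2
    out = []
    for i in range(len(values)):
        lo, hi = max(0, i - half), min(len(values), i + half + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out
```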
S103: and determining the display mode of the obstacle on the electronic map according to the type of the obstacle.
The display modes include, but are not limited to, normal display, highlight, hide, or delete.
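A display-mode decision can be as simple as a lookup table. The particular mapping below matches the embodiment described in step S203 (static shown normally, semi-static highlighted, dynamic hidden); it is one possible policy, not fixed by the patent, and the default for unknown types is an assumption.

```python
# obstacle type -> display mode (one possible policy)
DISPLAY_POLICY = {
    "static": "normal",
    "semi-static": "highlight",
    "dynamic": "hide",
}

def display_mode(obstacle_type):
    """Look up the display mode; unknown types default to normal display."""
    return DISPLAY_POLICY.get(obstacle_type, "normal")
```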
S104: and displaying the obstacles on the electronic map according to the determined display mode.
In one embodiment, after the obstacle is displayed on the electronic map according to the determined display mode, the obstacle can be stored locally or in a cloud.
It should be noted that the method can be applied to the robot in the process of constructing the electronic map, such as the edgewise process, or in the process of updating the existing electronic map, such as the process of cleaning the robot or the process of performing backfill search by the robot.
Therefore, in the embodiment of the invention, determining the display mode of the obstacles according to their type can reduce the number of obstacles displayed on the electronic map, which reduces the number of times the robot avoids obstacles, reduces the possibility that the robot misses areas, and improves the cleaning efficiency of the robot; the method has high usability and practicability.
Example two
Fig. 2 is a schematic flow chart of a method for displaying an obstacle on an electronic map according to a second embodiment of the present invention, which is a further detailed and descriptive description of steps S101 and S103 in the first embodiment, and the method may include the following steps:
S201: Image information including an obstacle is acquired.
In one embodiment, the image information may be acquired by a vision sensor or an optical sensor mounted on the robot, such as a camera, a lidar or an infrared sensor.
In one embodiment, the robot may be controlled to acquire the image information in real time by a vision sensor or an optical sensor.
In one embodiment, the image information may be several depth maps.
S202: and judging the type of the obstacle in the acquired image information.
In one embodiment, determining the type of obstacle in the acquired image information may include:
and judging whether the acquired image information contains obstacles in preset shapes and/or obstacles in preset states.
In one embodiment, the determining whether the acquired image information includes an obstacle with a preset shape and/or an obstacle with a preset state may include:
a1: and judging whether the acquired image information contains the obstacle with the preset shape or not according to a preset machine learning model.
In one embodiment, the machine learning model may be an existing or future-use machine learning model.
In an embodiment, the step A1 may specifically include:
b1: pictures containing obstacles of different shapes are obtained.
B2: and inputting the obtained picture into a preset machine learning model for training.
B3: and inputting the acquired image information into a preset machine learning model for classification, and identifying the shape of an obstacle contained in the image information.
B4: determining a type of the obstacle according to the identified shape of the obstacle.
In one embodiment, the collected image information may be input into the machine learning model frame by frame, or the collected image information may be input into the machine learning model according to a video stream format.
In an embodiment, in order to improve the accuracy of determining the type of the obstacle, the height and/or size of the obstacle in the image information may also be considered. For example, when the shape of the obstacle is identified as a square by the preset machine learning model, and the obstacle is also tall and large, it can basically be determined to be a wall, whose type is a static obstacle.
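The height-and-size refinement above can be sketched as a simple rule on top of the shape classifier. The thresholds are assumptions of this sketch; the patent gives no numeric values.

```python
def refine_square_shape(height_m, footprint_m2,
                        min_wall_height=1.5, min_wall_area=2.0):
    """Refine a 'square' ML classification using height and size:
    a tall, large square object is very likely a wall, i.e. static."""
    if height_m >= min_wall_height and footprint_m2 >= min_wall_area:
        return "static"        # basically a wall
    return "unknown"           # too small to decide; fall back to other cues
```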
A2: and judging whether the position of the same obstacle changes in two adjacent frames of images according to the acquired image information.
In one embodiment, the type of the obstacle may be determined by judging whether the number of times its position changes within a preset time exceeds a preset value. For example, if the position of an obstacle changes more than 5 times within the preset time, the obstacle can basically be determined to be dynamic; if its position changes far fewer than 5 times, the obstacle can basically be determined to be static; and if the number of changes is close to 5, the obstacle can basically be determined to be semi-static.
It should be understood that the type of the obstacle can be determined either directly, by judging whether the acquired image information contains an obstacle in a preset state, or indirectly, by first judging whether the image information contains an obstacle of a preset shape and then inferring the type from the specific obstacle identified; in this way the type of the obstacle can be determined from multiple dimensions.
S203: if the type of the obstacle is a static obstacle or a semi-static obstacle, displaying the current position of the obstacle on the electronic map, and if the type of the obstacle is a dynamic obstacle, hiding the current position of the obstacle on the electronic map.
In one embodiment, in order to distinguish static obstacles from semi-static obstacles displayed on the electronic map, the static obstacles and the semi-static obstacles may be displayed on the electronic map in different ways, such as displaying static obstacles normally on the electronic map and highlighting semi-static obstacles on the electronic map.
Since the position of a semi-static obstacle is fixed for a while, displaying its position at every different time on the electronic map would make it difficult to reduce the number of obstacles displayed. Therefore, in one embodiment, only the current position of a given semi-static obstacle is displayed on the electronic map. Specifically: judge whether a position of the semi-static obstacle is already displayed on the electronic map; if so, hide or delete that displayed position, and display the current position of the semi-static obstacle on the electronic map.
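The "hide the stale position, show the current one" rule amounts to an upsert keyed by obstacle identity. The dict representation and the obstacle-id key are assumptions of this sketch.

```python
def update_semi_static(map_positions, obstacle_id, current_pos):
    """Keep only the current position of a semi-static obstacle.

    map_positions -- dict: obstacle id -> position currently displayed.
    If an old position is already shown for this obstacle, it is removed
    (the 'hide or delete' step) before the current position is displayed.
    """
    map_positions.pop(obstacle_id, None)   # hide/delete the stale position
    map_positions[obstacle_id] = current_pos
    return map_positions
```

After the update, a door that moved from one cell to another occupies exactly one position on the map, never two.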
S204: and displaying the obstacles on the electronic map according to the determined display mode.
The step S204 is the same as the step S104 in the first embodiment, and the specific implementation process thereof can refer to the description of the step S104, which is not repeated herein.
S205: and controlling the robot to plan a path according to the current electronic map.
The current electronic map is an electronic map only displaying static obstacles and/or semi-static obstacles; the path plan may be used for area cleaning or edgewise exploration of the robot.
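To see why planning on the filtered map works, a breadth-first search over a grid in which only static and semi-static obstacles are marked is enough; it is a stand-in only, since the patent does not specify the planner.

```python
from collections import deque

def plan_path(grid, start, goal):
    """BFS path search on a grid map (1 = blocked, 0 = free).
    Returns a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:                      # walk back to recover the path
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None
```

Fewer marked obstacles mean fewer blocked cells, so the planner finds shorter detour-free paths, which is the efficiency gain the embodiment describes.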
As can be seen from the above, compared with the first embodiment, the second embodiment of the present invention displays only static obstacles and/or semi-static obstacles on the electronic map, which reduces the number of obstacles displayed and is beneficial to reducing the number of times the robot avoids obstacles, thereby improving the efficiency of the robot's global cleaning. In addition, controlling the robot to plan its path according to the current electronic map makes it easier for the robot to plan an optimal path; the method has strong usability and practicability.
EXAMPLE III
Fig. 3 is a schematic structural diagram of a device for displaying an obstacle on an electronic map according to a third embodiment of the present invention, and only a part related to the third embodiment of the present invention is shown for convenience of description.
The device for displaying obstacles on the electronic map can be a software unit, a hardware unit, or a combined software and hardware unit built into the robot, or can be integrated into the robot as an independent component.
The device for displaying the obstacles on the electronic map comprises:
the acquisition module 31 is used for acquiring information of obstacles;
a judging module 32, configured to judge a type of an obstacle in the acquired information;
the determining module 33 is configured to determine a display manner of the obstacle on the electronic map according to the type of the obstacle;
and the display module 34 is configured to display the obstacle on the electronic map according to the determined display mode.
In one embodiment, the judging module 32 is specifically configured to:
and judging whether the acquired image information contains obstacles in preset shapes and/or obstacles in preset states.
In one embodiment, the judging module 32 is further specifically configured to:
judging whether the acquired image information contains an obstacle with a preset shape or not according to a preset machine learning model; and/or,
and judging whether the position of the same obstacle changes in two adjacent frames of images according to the acquired image information.
In an embodiment, the determining module 33 is specifically configured to:
if the type of the obstacle is a static obstacle or a semi-static obstacle, displaying the current position of the obstacle on the electronic map;
and if the type of the obstacle is a dynamic obstacle, hiding the current position of the obstacle on the electronic map.
In one embodiment, the apparatus further comprises:
and the hiding or deleting module is used for judging whether the position of the semi-static barrier is displayed on the electronic map, and hiding or deleting the displayed position of the semi-static barrier on the electronic map if the position of the semi-static barrier is displayed on the electronic map.
In one embodiment, the apparatus further comprises:
and the path planning module is used for controlling the robot to plan a path according to the current electronic map.
Example four
Fig. 4 is a schematic structural diagram of a robot according to a fourth embodiment of the present invention. As shown in fig. 4, the robot 4 of this embodiment includes: a processor 40, a memory 41 and a computer program 42 stored in said memory 41 and executable on said processor 40. The processor 40, when executing the computer program 42, implements the steps of the first embodiment of the method, such as the steps S101 to S104 shown in fig. 1. Alternatively, the steps in the second embodiment of the method described above, for example, steps S201 to S205 shown in fig. 2, are implemented. The processor 40, when executing the computer program 42, implements the functions of the modules/units in the above-described device embodiments, such as the functions of the modules 31 to 34 shown in fig. 3.
Illustratively, the computer program 42 may be partitioned into one or more modules/units that are stored in the memory 41 and executed by the processor 40 to implement the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 42 in the robot 4. For example, the computer program 42 may be divided into an acquisition module, a judgment module, a determination module and a display module, and the specific functions of each module are as follows:
the acquisition module is used for acquiring information of the obstacle;
the judging module is used for judging the type of the obstacle in the acquired information;
the determining module is used for determining the display mode of the obstacle on the electronic map according to the type of the obstacle;
and the display module is used for displaying the obstacles on the electronic map according to the determined display mode.
The robot may include, but is not limited to, a processor 40, a memory 41. Those skilled in the art will appreciate that fig. 4 is merely an example of a robot 4 and is not intended to be limiting of robot 4 and may include more or fewer components than shown, or some components in combination, or different components, e.g., the robot may also include input output devices, network access devices, buses, etc.
The Processor 40 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), an off-the-shelf Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 41 may be an internal storage unit of the robot 4, such as a hard disk or a memory of the robot 4. The memory 41 may also be an external storage device of the robot 4, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), or the like, provided on the robot 4. Further, the memory 41 may also include both an internal storage unit and an external storage device of the robot 4. The memory 41 is used for storing the computer program and other programs and data required by the robot. The memory 41 may also be used to temporarily store data that has been output or is to be output.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art would appreciate that the modules, elements, and/or method steps of the various embodiments described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method embodiments may be implemented. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like. It should be noted that the computer readable medium may contain content that is subject to appropriate increase or decrease as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer readable media does not include electrical carrier signals and telecommunications signals as is required by legislation and patent practice.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (9)

1. A method of displaying obstacles on an electronic map, comprising:
acquiring information about an obstacle;
determining the type of the obstacle from the acquired information;
determining a display mode of the obstacle on the electronic map according to the type of the obstacle, wherein the display mode comprises normal display, highlighted display, hiding, or deletion;
displaying the obstacle on the electronic map in the determined display mode;
wherein determining the display mode of the obstacle on the electronic map according to the type of the obstacle comprises:
determining whether a position of a semi-static obstacle is displayed on the electronic map; and
if a position of the semi-static obstacle is displayed on the electronic map, hiding or deleting that displayed position and displaying the current position of the semi-static obstacle on the electronic map, wherein a semi-static obstacle is an obstacle whose behavior is between static and dynamic.
2. The method according to claim 1, wherein the information is image information containing an obstacle;
correspondingly, determining the type of the obstacle from the acquired information comprises:
determining whether the acquired image information contains an obstacle of a preset shape and/or an obstacle in a preset state.
3. The method according to claim 2, wherein determining whether the acquired image information contains an obstacle of a preset shape and/or an obstacle in a preset state comprises:
determining, with a preset machine learning model, whether the acquired image information contains an obstacle of a preset shape; and/or
determining, from the acquired image information, whether the position of the same obstacle changes between two adjacent frames.
4. The method of claim 1, wherein determining the display mode of the obstacle on the electronic map according to the type of the obstacle comprises:
if the obstacle is a static obstacle, displaying the current position of the obstacle on the electronic map; and
if the obstacle is a dynamic obstacle, hiding the current position of the obstacle on the electronic map.
5. The method according to any one of claims 1 to 4, further comprising, after displaying the obstacle on the electronic map in the determined display mode:
controlling the robot to plan a path according to the current electronic map.
6. An apparatus for displaying obstacles on an electronic map, comprising:
an acquisition module configured to acquire information about an obstacle;
a judgment module configured to determine the type of the obstacle from the acquired information;
a determination module configured to determine a display mode of the obstacle on the electronic map according to the type of the obstacle, wherein the display mode comprises normal display, highlighted display, hiding, or deletion;
a display module configured to display the obstacle on the electronic map in the determined display mode; and
a hiding-or-deleting module configured to determine whether a position of a semi-static obstacle is displayed on the electronic map and, if so, to hide or delete that displayed position and display the current position of the semi-static obstacle on the electronic map, wherein a semi-static obstacle is an obstacle whose behavior is between static and dynamic.
7. The apparatus of claim 6, further comprising:
a path planning module configured to control the robot to plan a path according to the current electronic map.
8. A robot comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 5.
9. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method according to any one of claims 1 to 5.
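For illustration only, the obstacle-classification and display logic recited in claims 1, 3, and 4 could be sketched roughly as follows. This is a minimal, hypothetical sketch, not the patented implementation: all names (`ObstacleType`, `ElectronicMap`, `position_changed`) and the tolerance parameter are invented for the example.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class ObstacleType(Enum):
    STATIC = auto()       # e.g. a wall: drawn at its current position
    SEMI_STATIC = auto()  # e.g. a chair: a stale position is replaced by the current one
    DYNAMIC = auto()      # e.g. a person: hidden from the map


def position_changed(prev: tuple, curr: tuple, tol: float = 0.1) -> bool:
    # Claim 3: an obstacle whose position differs between two adjacent
    # frames (here, beyond an assumed tolerance) may be treated as dynamic.
    return abs(prev[0] - curr[0]) > tol or abs(prev[1] - curr[1]) > tol


@dataclass
class ElectronicMap:
    # obstacle id -> last displayed (x, y) position
    displayed: dict = field(default_factory=dict)

    def update(self, obstacle_id: str, kind: ObstacleType, position: tuple) -> None:
        if kind is ObstacleType.DYNAMIC:
            # Claim 4: dynamic obstacles are hidden on the map.
            self.displayed.pop(obstacle_id, None)
        else:
            # Claims 1 and 4: for static and semi-static obstacles, any
            # previously displayed position is overwritten by the current one.
            self.displayed[obstacle_id] = position


m = ElectronicMap()
m.update("chair", ObstacleType.SEMI_STATIC, (2, 3))
m.update("chair", ObstacleType.SEMI_STATIC, (5, 1))   # the chair was moved
m.update("person", ObstacleType.DYNAMIC, (4, 4))
print(m.displayed)  # {'chair': (5, 1)}
```

A path planner (claim 5) would then operate on the `displayed` positions of the current map, so that hidden dynamic obstacles do not leave stale no-go regions behind.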
CN201910566499.5A 2019-06-27 2019-06-27 Method and device for displaying barrier on electronic map and robot Active CN110315538B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910566499.5A CN110315538B (en) 2019-06-27 2019-06-27 Method and device for displaying barrier on electronic map and robot


Publications (2)

Publication Number Publication Date
CN110315538A CN110315538A (en) 2019-10-11
CN110315538B true CN110315538B (en) 2021-05-07

Family

ID=68120465

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910566499.5A Active CN110315538B (en) 2019-06-27 2019-06-27 Method and device for displaying barrier on electronic map and robot

Country Status (1)

Country Link
CN (1) CN110315538B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111830970B (en) * 2020-06-12 2022-03-04 珠海一微半导体股份有限公司 Regional cleaning planning method for robot walking along edge, chip and robot
CN112161624A (en) * 2020-09-11 2021-01-01 上海高仙自动化科技发展有限公司 Marking method, marking device, intelligent robot and readable storage medium
CN112380942A (en) * 2020-11-06 2021-02-19 北京石头世纪科技股份有限公司 Method, device, medium and electronic equipment for identifying obstacle

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3299922B1 (en) * 2015-05-22 2022-09-07 FUJIFILM Corporation Robot device and movement control method for robot device
CN105955273A (en) * 2016-05-25 2016-09-21 速感科技(北京)有限公司 Indoor robot navigation system and method
CN106863305B (en) * 2017-03-29 2019-12-17 赵博皓 Floor sweeping robot room map creating method and device
CN107643755B (en) * 2017-10-12 2022-08-09 南京中高知识产权股份有限公司 Efficient control method of sweeping robot
CN109709945B (en) * 2017-10-26 2022-04-15 深圳市优必选科技有限公司 Path planning method and device based on obstacle classification and robot
CN108958263A (en) * 2018-08-03 2018-12-07 江苏木盟智能科技有限公司 A kind of Obstacle Avoidance and robot

Also Published As

Publication number Publication date
CN110315538A (en) 2019-10-11

Similar Documents

Publication Publication Date Title
CN110315538B (en) Method and device for displaying barrier on electronic map and robot
US11189078B2 (en) Automated understanding of three dimensional (3D) scenes for augmented reality applications
US10572970B2 (en) Extracting 2D floor plan from 3D GRID representation of interior space
WO2020248458A1 (en) Information processing method and apparatus, and storage medium
US20160189419A1 (en) Systems and methods for generating data indicative of a three-dimensional representation of a scene
CN109871420B (en) Map generation and partition method and device and terminal equipment
CN111176301A (en) Map construction method and sweeping method of sweeping robot
CN109582015B (en) Indoor cleaning planning method and device and robot
JP2014238883A (en) Method and device for drawing virtual object in real environment
CN112462758B (en) Drawing establishing method and device, computer readable storage medium and robot
AU2013222016A1 (en) Method, system and apparatus for determining a property of an image
CN112085838A (en) Automatic cleaning equipment control method and device and storage medium
WO2015031855A1 (en) Expanding a digital representation of a physical plane
US20230343031A1 (en) Point cloud filtering
CN113703439A (en) Autonomous mobile device control method, device, equipment and readable storage medium
CN109605374B (en) Method and device for displaying movement path of robot and robot
CN111898557A (en) Map creation method, device, equipment and storage medium from mobile equipment
CN111191306A (en) Room design effect display method and system
CN109146973B (en) Robot site feature recognition and positioning method, device, equipment and storage medium
CN114489058A (en) Sweeping robot, path planning method and device thereof and storage medium
WO2022108844A1 (en) Multi-view visual data damage detection
Seresht et al. Automatic building recognition from digital aerial images
CN110315537A (en) A kind of method, apparatus and robot controlling robot motion
GB2439184A (en) Obstacle detection in a surveillance system
Reiterer A semi-automatic image-based measurement system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: 518000 room 1601, building 2, Vanke Cloud City phase 6, Tongfa South Road, Xili community, Xili street, Nanshan District, Shenzhen City, Guangdong Province (16th floor, block a, building 6, Shenzhen International Innovation Valley)

Patentee after: Shenzhen Ledong robot Co.,Ltd.

Address before: 518000 16th floor, building B1, Nanshan wisdom garden, 1001 Xueyuan Avenue, Taoyuan Street, Nanshan District, Shenzhen City, Guangdong Province

Patentee before: SHENZHEN LD ROBOT Co.,Ltd.
