CN116758233A - Robot control method, robot, client device, and storage medium - Google Patents

Robot control method, robot, client device, and storage medium

Info

Publication number
CN116758233A
CN116758233A (application number CN202310728464.3A)
Authority
CN
China
Prior art keywords
dimensional
map
room
user
display area
Prior art date
Legal status
Pending
Application number
CN202310728464.3A
Other languages
Chinese (zh)
Inventor
周川艳
李婷丹
徐祥
李晓文
Current Assignee
Ecovacs Robotics Suzhou Co Ltd
Original Assignee
Ecovacs Robotics Suzhou Co Ltd
Priority date
Filing date
Publication date
Application filed by Ecovacs Robotics Suzhou Co Ltd filed Critical Ecovacs Robotics Suzhou Co Ltd
Priority to CN202310728464.3A
Publication of CN116758233A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/05: Geographic models
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 19/003: Navigation within 3D models or images
    • G06T 2200/00: Indexing scheme for image data processing or generation, in general
    • G06T 2200/08: Indexing scheme involving all processing steps from image acquisition to 3D model generation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Remote Sensing (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides a robot control method, a robot, a client device, and a storage medium. The method comprises: displaying a map of an environment, where the map comprises a three-dimensional stereograph representing area boundaries in the environment and two-dimensional display areas representing the position, projected shape, and size of objects within the environment; in response to a user's selection of the two-dimensional display area of a target object in the map and a selection of a first entity graphic, replacing or filling the two-dimensional display area of the target object with the first entity graphic on the map, the first entity graphic being associated with an entity name; receiving a user-triggered instruction containing the entity name; and controlling the robot, based on the map, to move to the target object corresponding to the entity name. With the scheme provided by the embodiments of the application, the workload of building a three-dimensional map by the robot can be effectively reduced, and the user can control the robot more conveniently.

Description

Robot control method, robot, client device, and storage medium
This application is a divisional application of Chinese patent application No. 201911167481.4, filed on November 25, 2019.
Technical Field
The application belongs to the technical field of robots, and in particular relates to a three-dimensional map interaction method and device, a robot, and a storage medium.
Background
With the continuous development of artificial intelligence technology, intelligent robots of various kinds, such as logistics robots, sweeping robots, and welcome robots, are increasingly entering people's daily lives.
Taking a sweeping robot as an example, the robot can detect its environment with sensors and generate a planar map of the interior of the environment, which may contain various object marks. The sweeping robot uses this planar map to perform cleaning work autonomously: for example, when it receives a cleaning instruction from a user or a reserved cleaning time arrives, it starts cleaning and generally cleans all rooms in the planar map in a preset order.
Disclosure of Invention
The three-dimensional map interaction method and device, robot, and storage medium provided by aspects of the application enable the robot to build a three-dimensional map of rooms, and the three-dimensional map can be edited as required.
The embodiment of the application provides a three-dimensional map interaction method, which comprises the following steps:
acquiring three-dimensional information of a wall surface and a door frame in the environment;
establishing a basic three-dimensional map according to the three-dimensional information;
acquiring two-dimensional information of each object in the environment;
displaying each object at the corresponding position of the basic three-dimensional map according to the two-dimensional information, wherein each object is displayed on the basic three-dimensional map in a two-dimensional display area mode;
and selecting at least one entity graph corresponding to a certain two-dimensional display area from the entity graph options to replace or fill the certain two-dimensional display area, so as to obtain the three-dimensional map of the environment.
The embodiment of the application provides a three-dimensional map interaction device, which comprises:
the three-dimensional information acquisition module is used for acquiring three-dimensional information of the wall surface and the door frame in the environment;
the three-dimensional map building module is used for building a basic three-dimensional map according to the three-dimensional information;
the two-dimensional information acquisition module is used for acquiring the two-dimensional information of each object in the environment;
the two-dimensional display module is used for displaying each object at the corresponding position of the basic three-dimensional map according to the two-dimensional information, and each object is displayed on the basic three-dimensional map in a two-dimensional display area mode;
And the map editing module is used for selecting at least one entity graph corresponding to a certain two-dimensional display area from the entity graph options to replace or fill the certain two-dimensional display area so as to obtain the three-dimensional map of the environment.
Embodiments of the present application provide a computer-readable storage medium storing a computer program that, when executed by one or more processors, causes the one or more processors to perform acts comprising:
acquiring three-dimensional information of a wall surface and a door frame in the environment;
establishing a basic three-dimensional map according to the three-dimensional information;
acquiring two-dimensional information of each object in the environment;
displaying each object at the corresponding position of the basic three-dimensional map according to the two-dimensional information, wherein each object is displayed on the basic three-dimensional map in a two-dimensional display area mode;
and selecting at least one entity graph corresponding to a certain two-dimensional display area from the entity graph options to replace or fill the certain two-dimensional display area, so as to obtain the three-dimensional map of the environment.
An embodiment of the present application provides a robot including: a machine body provided with one or more processors, one or more memories storing computer programs, and a sensor;
The one or more processors configured to execute the computer program to:
acquiring three-dimensional information of a wall surface and a door frame in the environment;
establishing a basic three-dimensional map according to the three-dimensional information;
acquiring two-dimensional information of each object in the environment;
displaying each object at the corresponding position of the basic three-dimensional map according to the two-dimensional information, wherein each object is displayed on the basic three-dimensional map in a two-dimensional display area mode;
and selecting at least one entity graph corresponding to a certain two-dimensional display area from the entity graph options to replace or fill the certain two-dimensional display area, so as to obtain the three-dimensional map of the environment.
In some embodiments of the present application, the robot detects the environment in which it is located, obtains three-dimensional information of the wall surfaces and door frames of each room in the environment, and displays a basic three-dimensional map of the rooms in a map editing interface based on that three-dimensional information, so that the user can intuitively understand the layout of the environment. Further, objects in the environment can be detected to obtain two-dimensional information of each object. Each object is then displayed on the basic three-dimensional map as a two-dimensional display area, according to the correspondence between the object and its room and the object's two-dimensional information, yielding a three-dimensional map that matches the actual layout of the environment. After this map is obtained, the user selects entity graphics from the entity graphic options to replace or fill the two-dimensional display areas. This effectively reduces the workload of the robot in building the three-dimensional map and increases user participation. From the resulting three-dimensional map the user can clearly see the three-dimensional layout of each room, the object types represented by the entity graphics, and the relative positions and sizes of the entity graphics within the rooms, which helps to improve the user's experience when viewing the map.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
fig. 1 is a schematic flow chart of a three-dimensional map interaction method according to an embodiment of the present application;
fig. 2 is a schematic view of an effect of a three-dimensional room according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a three-dimensional map interactive interface according to an embodiment of the present application;
fig. 4 is a schematic flow chart of a room naming method according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a room naming process according to an embodiment of the present application;
FIG. 6 is a schematic diagram of an effect of naming a room;
FIG. 7 is a schematic flow chart of a method for replacing or filling entity patterns according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a physical graphic replacement or population function interface according to an embodiment of the present application;
FIG. 9a is a schematic diagram of an entity graph replacement process according to an embodiment of the present application;
FIG. 9b is a schematic diagram of a physical graphic filling process according to an embodiment of the present application;
FIG. 10 is a schematic diagram of a graphical adjustment interface for object entities according to an embodiment of the present application;
FIG. 11a is a schematic diagram showing the effect of entity graph replacement according to an embodiment of the present application;
FIG. 11b is a schematic diagram illustrating an effect of filling a physical graphic according to an embodiment of the present application;
FIG. 12 is a schematic diagram of adding entity graphics according to an embodiment of the present application;
FIG. 13 is a schematic diagram of an edited three-dimensional map according to an embodiment of the present application;
fig. 14 is a schematic diagram of a three-dimensional map interaction device according to an embodiment of the present application;
fig. 15 is a schematic structural diagram of a robot according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be clearly and completely described below with reference to specific embodiments of the present application and corresponding drawings. It will be apparent that the described embodiments are only some, but not all, embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The terminology used in the embodiments of the application is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise; "plurality" generally means at least two, but does not exclude the case of at least one.
The word "if", as used herein, may be interpreted as "when", "upon", "in response to determining", or "in response to detecting", depending on the context. Similarly, the phrases "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined", "in response to determining", "when (the stated condition or event) is detected", or "in response to detecting (the stated condition or event)", depending on the context.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a product or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such product or system. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a commodity or system comprising such elements.
The following describes in detail the technical solutions provided by the embodiments of the present application with reference to the accompanying drawings.
In the application, a self-moving robot can move autonomously, perform its service functions, and also has capabilities such as computing, communication, and Internet access. The self-moving robot provided by the embodiments of the application may be a drone, an unmanned vehicle, or the like. The basic service function of the self-moving robot differs with the application scene: it may be a sweeping robot, a following robot, a welcome robot, and so on. For example, for a sweeping robot used in homes, office buildings, shopping malls, and similar scenes, the basic service function is to clean the floor in those scenes; for a glass-wiping robot used in homes, office buildings, shopping malls, and similar scenes, the basic service function is to clean the glass; for a following robot, the basic service function is to follow a target object; and for a welcome robot, the basic service function is to greet customers and guide them to their destinations.
In order to improve the autonomous working capacity of the self-mobile robot, the application provides a three-dimensional map interaction method. Fig. 1 is a schematic flow chart of a three-dimensional map interaction method according to an embodiment of the present application. The method comprises the following steps:
101: and acquiring three-dimensional information of the wall surface and the door frame in the environment.
102: and establishing a basic three-dimensional map according to the three-dimensional information.
103: two-dimensional information of each object in the environment is acquired.
104: and displaying each object at the corresponding position of the basic three-dimensional map according to the two-dimensional information, wherein each object is displayed on the basic three-dimensional map in a two-dimensional display area mode.
105: and selecting at least one entity graph corresponding to a certain two-dimensional display area from the entity graph options to replace or fill the certain two-dimensional display area, so as to obtain the three-dimensional map of the environment.
In practical applications, the robot is provided with at least one time-of-flight (TOF) module (for example, a TOF camera) that includes a TOF projector and a TOF receiver. As the robot moves through the current environment, it collects a series of contour points through the TOF module and connects them into a contour graph. Specifically, the TOF module emits a detection signal, typically a laser signal, which is reflected by an object as a response signal; after receiving the response signal, the TOF module calculates the time of flight of the laser to obtain depth data. All of the acquired depth data are analyzed to generate a corresponding three-dimensional depth point cloud. Distance information for each pixel point is then obtained from this point cloud, and the wall surfaces and door frames of the rooms, each formed by many pixel points, are determined according to the corresponding distance values. In this way the robot automatically generates the three-dimensional stereograph.
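For illustration only, a minimal Python sketch of such a depth-to-wall pipeline is given below. The function names, camera parameters, grid size, and the simple vertical-extent heuristic are assumptions made for this sketch, not the patent's actual implementation.

import numpy as np

def depth_frame_to_points(depth, fx, fy, cx, cy):
    """Back-project a TOF depth image (in meters) into camera-frame 3D points."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    x = (us - cx) * depth / fx
    y = (vs - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                    # drop invalid zero-depth pixels

def candidate_wall_cells(points, cell=0.05, min_height=1.5, min_hits=200):
    """Grid the ground plane; cells whose points span a tall vertical extent are
    treated as wall (or door-frame edge) candidates."""
    ij = np.floor(points[:, [0, 2]] / cell).astype(int)   # x (lateral), z (forward)
    walls = []
    for key in {tuple(k) for k in ij}:
        mask = np.all(ij == key, axis=1)
        heights = points[mask, 1]
        if mask.sum() >= min_hits and np.ptp(heights) >= min_height:
            walls.append((key[0] * cell, key[1] * cell))  # ground-plane cell position
    return walls

Connecting adjacent wall cells would then yield the contour graph from which the basic three-dimensional map is drawn.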
The robot also needs to detect the objects contained in each room. To reduce the workload of building the three-dimensional map, in the technical scheme of the application the robot only acquires two-dimensional information of each object through a sensor and does not need to acquire three-dimensional information of the objects. The sensor used to acquire the two-dimensional information may be an infrared sensor, an ultrasonic sensor, or the like. For example, an infrared sensor can acquire the position of each object in the room as well as two-dimensional information such as its size and projected shape. A two-dimensional display area is constructed from this basic information and displayed in the basic three-dimensional map of the corresponding room; the two-dimensional display area may be a two-dimensional shaded area or an unfilled two-dimensional wire-frame area. The shape of the two-dimensional display area is determined by the two-dimensional information and corresponds to the projected shape of the object on the three-dimensional map. Because rooms contain many objects of many types, requiring the robot to autonomously identify each object and compute its three-dimensional shape would cost considerable work, the recognition results would not necessarily be accurate, and the interaction efficiency and display effect of the three-dimensional map would suffer. In the manner described above, the user can intuitively see the size and layout of the rooms in the basic three-dimensional map, together with the three-dimensional wall surfaces and door frames and the two-dimensional display areas in each room that represent the positions, sizes, and projected shapes of the objects.
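As a rough illustration, one possible record for such a two-dimensional display area is sketched below in Python. The field names and the pixel-scaling helper are assumptions of this description, not structures defined by the patent.

from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class DisplayArea2D:
    room_id: str                                  # room in which the object was detected
    position: Tuple[float, float]                 # center on the floor plan, in meters
    footprint: List[Tuple[float, float]]          # projected outline vertices, in meters
    filled: bool = False                          # shaded area vs. unfilled wire frame
    entity_name: Optional[str] = None             # set later when the user picks a graphic

def scale_to_map(area: DisplayArea2D, pixels_per_meter: float) -> List[Tuple[float, float]]:
    """Convert the real-world footprint to map pixels so the area keeps its true
    proportion to the room when the map is zoomed in or out."""
    return [(x * pixels_per_meter, y * pixels_per_meter) for x, y in area.footprint]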
In order to enable a user to more intuitively see relevant information such as the type, the size and the like of the object represented by each two-dimensional display area in the three-dimensional map, the two-dimensional display areas can be further replaced or filled with entity graphs with the same type as the actual object in an interactive mode. The physical graphic can be a two-dimensional graphic or a three-dimensional graphic. These physical graphics are typically household items. Of course, if the mobile robot is used in an office, the physical graphic may be an office product, such as a desk and chair, a printer, etc.; if in a mall, the entity graphic may be an elevator, a trash can, a dining table, or the like. The scheme of replacing or filling the two-dimensional display area by a solid graphic will be specifically illustrated below.
In order to facilitate understanding of the interaction process through the three-dimensional map in the above environment, an exemplary description is provided below with reference to fig. 2, and fig. 2 is a schematic diagram of an effect of a three-dimensional room according to an embodiment of the present application. The interface shown in fig. 2 includes a map display area and an editing tool area. As can be seen in the map display area in fig. 2, the environment in which the robot is located comprises a plurality of rooms, each of which comprises a wall surface 201, a door frame 202 and a two-dimensional display area 203 inside the room. The wall surface 201 is indicated by white frame lines in fig. 2, the door frame 202 is indicated by rectangular frame lines in the wall surface 201, and communication between the respective rooms is achieved by the door frame 202. In each room, each detected object is represented by a two-dimensional display area 203. The position, size and projection shape of the two-dimensional display area 203 are displayed according to the actual measurement result, and the size of the two-dimensional display area 203 is determined according to the proportional relation between the object and the room. Through the scheme, the user can more truly and intuitively see the environment layout.
In order to improve the display effect, the user can render the three-dimensional wall-surface graphics and door-frame graphics as required, for example in different colors or different designs. This satisfies diverse display preferences and also helps the user distinguish the rooms by color.
After the three-dimensional map is constructed, it can be further edited so that the user can recognize the map more easily and control the robot through it more effectively. For example, as shown in fig. 3, which is a schematic diagram of a three-dimensional map interactive interface provided by an embodiment of the present application, a map editing control for enabling the map editing function is provided at the upper right corner of the map; if the user triggers this control, the map editing function is executed and the map editing interface is entered. Map editing can be divided into a room naming function and a two-dimensional display area replacement or filling function, which are described separately below.
Fig. 4 is a flow chart of a room naming method according to an embodiment of the present application. The method comprises the following specific steps:
401: in response to a user selection operation of the room naming function, a room naming option containing at least one room name is displayed.
402: and in response to a user selecting operation of a certain room in the room naming options, associating a room name with the certain room.
The map display area and the editing tool area can be seen in fig. 3. Each room in the map display area corresponds to a room naming label, and each room naming label has a default name, shown in fig. 3 as room A, room B, and so on. In the editing tool area for room naming at the bottom, many room names can be seen, such as: master bedroom, second bedroom, kitchen, master bathroom, and so on.
A "certain room" here refers to a room selected by the user in the current basic three-dimensional map. Naming a room means associating a room name with it, which is done by selecting a name from the editing tool area; optionally, a room name may also be typed directly into the room naming label corresponding to that room.
The following illustrates how the rooms are named. Fig. 5 is a schematic diagram of a room naming process according to an embodiment of the present application. When the user clicks any room naming label in the map display area, the label is shown in a selected state (for example, by highlighting or a change of color). If the user does not modify or replace the content of the selected label but clicks the room naming label corresponding to another room, the selected state switches to the newly clicked room, that is, the newly clicked room naming label becomes highlighted or changes display color. If the user does not modify or replace the content of the selected label but clicks an area of the map other than a room naming label, the selected state of the room naming label of the current room is cancelled.
If, while a room naming label is selected, the user clicks a desired room name in the editing tool area, for example the name "second bedroom", the room naming label in the map display area is changed to "second bedroom" and a cancel button is displayed on the border of the label; if the user clicks the cancel button, the naming is abandoned. If the user instead clicks another room naming label or another area, the corresponding operation is performed and the content of the room naming label is saved and displayed.
Suppose that after finishing naming the "second bedroom", the user clicks the room naming label of another room, the "dining room": the previous "second bedroom" name is saved and, at the same time, the newly clicked room naming label becomes selected. To avoid duplicate room names, after the user has named the "second bedroom", the "second bedroom" entry in the editing tool area turns grey and becomes non-selectable, so it cannot be chosen again.
It is easy to understand that after the user finishes naming the "dining room", if the user clicks the "dining room" label again, the label returns to the selected state so that the room name can be changed; the user may select another available room name from the editing tool area or enter a custom name. If the custom room name is identical to a name already used in the current map, the modification is refused, the user is prompted that the name is duplicated, and the user is asked to enter a different room name.
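Purely as an illustration of this selection and de-duplication behaviour, a small Python sketch is given below; the class name, method names, and error handling are assumptions of this description rather than part of the patent.

class RoomNaming:
    """Tracks which room names have been assigned and which remain selectable."""

    def __init__(self, room_ids, name_options):
        self.names = {rid: None for rid in room_ids}      # room id -> chosen name
        self.options = set(name_options)                  # e.g. {"master bedroom", "kitchen", ...}

    def selectable(self):
        """Names not yet in use stay selectable; used names are greyed out."""
        used = {n for n in self.names.values() if n}
        return self.options - used

    def assign(self, room_id, name):
        """Assign a name, refusing duplicates within the same map."""
        used = {n for rid, n in self.names.items() if n and rid != room_id}
        if name in used:
            raise ValueError("duplicate room name; please choose another name")
        self.names[room_id] = name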
Through the method, after the user finishes naming all the rooms in the map, the map shown in fig. 6 can be obtained, and fig. 6 is a schematic diagram of the effect of naming the rooms. The rooms formed by the three-dimensional wall surfaces and the door frames, and the names of the respective rooms are clearly shown in fig. 6, and the rooms are in one-to-one correspondence with the names. The user perfects the room naming on the basis of the map generated by the robot, and the automatic workload of the robot can be reduced. The map obtained in this way can meet the requirements of calling and controlling the robot through the basic three-dimensional map and the viewing requirement of a user on the basic three-dimensional map.
As an alternative embodiment, after the user sends a voice instruction containing a room name to the robot, the robot is controlled to move into the room corresponding to that room name. For example, the user opens a client used to control the robot and says "clean the second bedroom"; the robot then moves to the second bedroom by itself and cleans the floor. If the user has several robots at home, the robot's name can be added to the voice instruction, for example "Little A, clean the second bedroom; Little B, clean the master bedroom". In this way the user can control the robots more conveniently and accurately, improving the interaction experience.
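A minimal sketch of how such an utterance might be routed to a robot and a named room is shown below. The keyword matching, the dictionary layout, and the clean_room() controller call are illustrative assumptions, not the patent's interface.

def dispatch_clean_command(text, robots, room_names):
    """Route a voice command to a robot and a named room.

    text: recognized utterance, e.g. "Little A, clean the second bedroom"
    robots: mapping of robot name -> controller object (assumed to expose clean_room())
    room_names: room names the user assigned on the map
    """
    target_robot = next((name for name in robots if name in text), None)
    target_room = next((name for name in room_names if name in text), None)
    if target_room is None:
        return None                                   # no named room in the utterance
    controller = robots[target_robot] if target_robot else next(iter(robots.values()))
    controller.clean_room(target_room)                # hypothetical call: move there and clean
    return target_robot, target_room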
Fig. 7 is a flowchart of a method for replacing or filling entity graphics according to an embodiment of the present application. The method comprises the following specific steps:
701: the entity graphic options including at least one entity graphic are displayed in response to a user-triggered graphic replacement or fill operation.
702: and in response to the selection operation of the user on the certain two-dimensional display area and the selection operation of the first entity graph, replacing or filling the certain two-dimensional display area with the first entity graph.
Fig. 8 is a schematic diagram of a functional interface for replacing or filling entity graphics according to an embodiment of the present application. The map display area and the editing tool area can be seen in fig. 8. Each object is displayed in its corresponding room in the map display area as a two-dimensional display area; the projected shape of the two-dimensional display area is determined by actual detection, its size relative to the room follows the actual proportions, and if the map is enlarged or reduced, the two-dimensional display area is enlarged or reduced in the same proportion. As can be seen from fig. 8, the editing tool area contains a number of entity graphics of furniture and household appliances, such as sofas, beds, wardrobes, dining tables, refrigerators, and the like. In practical applications, the user can update these furniture and appliance graphics by downloading graphic packages according to personal preference. For example, a user who likes panel furniture in a simple white style can directly search for and download the corresponding data package.
The following illustrates how the two-dimensional display area is replaced or filled by a solid graphic.
Fig. 9a is a schematic diagram of an entity graphic replacement process according to an embodiment of the present application. After the user clicks any two-dimensional display area in the map display area, that area is shown in a selected state (for example, by highlighting or a change of color); if the user then clicks an entity graphic in the editing tool area, the corresponding two-dimensional display area is replaced with that entity graphic. If the user does not replace the selected two-dimensional display area but clicks another two-dimensional display area, the selected state switches to the newly clicked area, which becomes highlighted or changes display color. If the user does not replace the selected two-dimensional display area but clicks an area of the map other than a two-dimensional display area, the selected state of the current two-dimensional display area is cancelled.
Fig. 9b is a schematic diagram of an entity graphic filling process according to an embodiment of the present application. When the user clicks any two-dimensional display area in the map display area (here the two-dimensional display area is an unfilled wire-frame area), the area is shown in a selected state, for example by highlighting or a fill color. If the user then clicks an entity graphic in the editing tool area, the entity graphic is filled into the corresponding two-dimensional display area; the user can adjust the size and shape of the entity graphic as needed and can also modify its color and other properties. To guarantee the filling effect, the size of the entity graphic is constrained: when one or more entity graphics are filled into the two-dimensional display area, the maximum enlarged size of the entity graphics must not exceed the wire-frame range of that two-dimensional display area. If the user does not fill the selected two-dimensional display area, or finishes filling and adjusting size, angle, shape, and so on, and then clicks another two-dimensional display area, the selected state switches to the newly clicked area and the previously completed filling result is saved. If the user does not fill the selected two-dimensional display area but clicks an area of the map other than a two-dimensional display area, the selected state of the current two-dimensional display area is cancelled.
If, while a two-dimensional display area is in the selected state, the user clicks a household item in the editing tool area, that is, an entity graphic of furniture or a household appliance, for example the entity graphic of a "bed", the two-dimensional display area in the map display area is replaced or filled with the "bed" entity graphic. Fig. 10 is a schematic diagram of an entity graphic adjustment interface of an object according to an embodiment of the present application. After the entity graphic is replaced or filled at the position corresponding to the two-dimensional display area, its size, position, and angle can be adjusted. As can be seen from fig. 10, a rotation control for rotating the entity graphic is arranged at the upper left corner of the graphic, and the user can rotate the bed to adjust its placement according to actual requirements; a cancel control is arranged at the upper right corner, and if the user no longer wants to place a bed, triggering this control abandons the replacement or filling operation; a zoom control is arranged at the lower right corner, and the user can scale the size of the bed as needed. It should be noted, however, that the enlargement of the bed is limited: its maximum size is not allowed to exceed the size of the two-dimensional display area being replaced or filled.
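The size constraint described above can be expressed as a simple clamp on the requested scale factor; the sketch below uses axis-aligned bounding sizes and hypothetical names, which are assumptions of this description.

def clamp_scale(graphic_w, graphic_h, area_w, area_h, requested_scale):
    """Return the largest allowed scale factor, never exceeding the request,
    so the enlarged graphic stays inside the two-dimensional display area."""
    max_scale = min(area_w / graphic_w, area_h / graphic_h)
    return min(requested_scale, max_scale)

# Example: a 1.6 m x 2.0 m bed graphic inside a 2.0 m x 2.2 m display area may be
# enlarged by at most min(2.0 / 1.6, 2.2 / 2.0) = 1.1x, even if the user drags further.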
Suppose that after completing the replacement or filling of the bed in the second bedroom on the map, the user clicks another two-dimensional display area, for example the one corresponding to the sofa in the living room: the bed that was replaced or filled in the second bedroom is saved and, at the same time, the new two-dimensional display area becomes selected. The user then selects the sofa entity graphic in the editing tool area, and the selected two-dimensional display area is replaced or filled with the sofa. If the user later clicks the already placed bed entity graphic in the second bedroom again, that entity graphic returns to the selected state and can again be scaled, replaced or filled, rotated, and so on.
So that the robot can accurately understand which object the user designates, an entity name of the corresponding object may be associated in the background data with each object's entity graphic. When determining an object's entity name, it may be associated with the room name: for example, the entity name of the bed in the "master bedroom" may be set to "master bedroom bed", and the entity name of the bed in the "second bedroom" may be set to "second bedroom bed". As another example, if the only dining table in the current home is located in the dining room, the entity name of the entity graphic corresponding to that dining table can simply be associated with it. These naming manners are only examples and do not limit the technical scheme of the application; whatever naming manner is adopted, entity graphics and entity names correspond one to one within the same map.
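As an illustrative sketch only, such room-aware, one-to-one entity naming could be generated as below; the naming pattern and helper name are assumptions of this description.

def make_entity_name(room_name, object_type, existing_names):
    """Build an entity name that reflects the room and stays unique within one map,
    e.g. "master bedroom bed" or "second bedroom bed"."""
    base = f"{room_name} {object_type}"
    name, suffix = base, 2
    while name in existing_names:                 # keep entity names one-to-one per map
        name = f"{base} {suffix}"
        suffix += 1
    existing_names.add(name)
    return name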
By the above method, after the user finishes replacing or filling all the two-dimensional display areas in the map, the map shown in fig. 11a and 11b can be obtained. Fig. 11a is a schematic diagram of an effect of entity graph replacement according to an embodiment of the present application. The entity patterns in the respective rooms are clearly shown in fig. 11a, and the entity patterns are in one-to-one correspondence with entity names. The user can clearly see the information of all objects in the current three-dimensional map. Fig. 11b is a schematic diagram of an effect of filling a physical image according to an embodiment of the present application. The physical graphics in each room, and the filled-in relationship between the physical graphics and the two-dimensional display area, are clearly shown in fig. 11 b. Here, the entity graph and the entity name are in one-to-one correspondence. The user can clearly see the information of all objects in the current three-dimensional map.
As an alternative embodiment, after the user sends, through the client, a voice instruction containing the entity name of a certain object to the robot, the robot is controlled to move to the position of that object. For example, the user opens the client used to control the robot and says "go clean by the second bedroom bed"; the robot then moves to the bed in the second bedroom and cleans the floor there. In this way the robot can be controlled not only to move within a certain room but also, precisely, to move to the position of a certain object in that room, so the user can control the robot more simply, conveniently, and accurately, improving the user experience.
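A rough sketch of resolving such an entity name to a position on the map and dispatching the robot is given below; the map structure and the navigate_to() call are assumptions, not the patent's API.

def go_to_entity(text, entity_positions, robot):
    """entity_positions: mapping of entity name -> (x, y) position on the map."""
    for name, position in entity_positions.items():
        if name in text:                          # e.g. "go clean by the second bedroom bed"
            robot.navigate_to(position)           # hypothetical navigation call
            return name
    return None                                   # no known entity name in the utterance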
In practical applications, a single two-dimensional display area may correspond to several pieces of furniture or appliances; for example, some users place a refrigerator and a water dispenser right next to each other in the living room, so the refrigerator and the water dispenser are represented in the generated map by one two-dimensional display area. Fig. 12 is a schematic diagram of adding an entity graphic according to an embodiment of the present application. As an alternative implementation, as shown in fig. 12, an adding control is displayed in association with the first entity graphic at the position corresponding to the replaced or filled two-dimensional display area; if the user triggers the adding control and selects a second entity graphic from the entity graphic options, the second entity graphic of the other object is added to the two-dimensional display area.
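For illustration, a simple containment check for this case, matching the constraint stated later in claim 3 that the first and second entity graphics stay within the display area's wire frame, might look as follows; the box representation and names are assumptions of this description.

def fits_in_area(graphic_boxes, area_box):
    """Boxes are (x_min, y_min, x_max, y_max); every placed graphic must lie
    entirely inside the wire frame of the two-dimensional display area."""
    ax0, ay0, ax1, ay1 = area_box
    return all(ax0 <= gx0 and ay0 <= gy0 and gx1 <= ax1 and gy1 <= ay1
               for gx0, gy0, gx1, gy1 in graphic_boxes)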
Fig. 13 is a schematic diagram of an edited three-dimensional map according to an embodiment of the present application. Through the embodiment, the names of the rooms in the map are named, and the two-dimensional display areas corresponding to the objects are replaced or filled, so that the clearly visible map with the three-dimensional display effect can be obtained, a user can conveniently view the map, and the robot is controlled according to the three-dimensional map.
In the above embodiments, the entity graphic may be a two-dimensional graphic; preferably, a three-dimensional entity graphic may be used.
According to the technical scheme, on the basis of the three-dimensional map generated by the robot, the three-dimensional map is edited according to actual conditions, and the room names and the entity graphs corresponding to the objects in each room are edited, so that the difficulty of the robot in identifying the two-dimensional display area can be effectively reduced.
Fig. 14 is a schematic diagram of a three-dimensional map interaction device according to an embodiment of the present application, where the device includes:
the three-dimensional information acquisition module 141 is used for acquiring three-dimensional information of the wall surface and the door frame in the environment;
the three-dimensional map building module 142 is configured to build a basic three-dimensional map according to the three-dimensional information;
a two-dimensional information acquisition module 143, configured to acquire two-dimensional information of each object in the environment;
the two-dimensional display module 144 is configured to display each object at a corresponding position of the basic three-dimensional map according to the two-dimensional information, where each object is displayed on the basic three-dimensional map in a two-dimensional display area manner;
the map editing module 145 is configured to select at least one entity graphic corresponding to a certain two-dimensional display area from the entity graphic options to replace or fill the certain two-dimensional display area, so as to obtain a three-dimensional map of the environment.
Optionally, the entity pattern is a two-dimensional pattern or a three-dimensional pattern.
Optionally, the size and direction of the entity graphic may be adjusted.
Optionally, the entity graphic is a household article, an office supply, or a public facility.
Optionally, the two-dimensional information includes a position, a projected shape, a size of the object.
Optionally, the three-dimensional information acquisition module 141 is configured to acquire depth data inside the environment; generate a three-dimensional depth point cloud from the depth data; and determine the three-dimensional information of the wall surfaces and door frames in the environment according to the distance information of each point in the three-dimensional depth point cloud.
Optionally, the map editing module 145 is configured to display a room naming option including at least one room name in response to a room naming operation triggered by a user; and in response to a user selecting operation of a certain room in the room naming options, associating a room name with the certain room.
Optionally, the map editing module 145 is configured to set an already-used room name to a non-selectable state.
Optionally, the system further includes a voice control module 146, configured to receive a first voice command triggered by a user, where the first voice command includes the room name; and controlling the robot to move to the room corresponding to the room name.
Optionally, the map editing module 145 is configured to display the entity graphic option including at least one entity graphic in response to an image replacement or filling operation triggered by a user;
and in response to the selection operation of the user on the certain two-dimensional display area and the selection operation of the first entity graph, replacing or filling the certain two-dimensional display area with the first entity graph.
Optionally, the map editing module 145 is configured to associate entity names with the at least one entity graphic respectively.
Optionally, the voice control module 146 is configured to receive a second voice command triggered by the user, where the second voice command includes an entity name; and controlling the robot to move to the object corresponding to the entity name.
Optionally, the map editing module 145 is configured to display the adding control in association with the first entity graphic; and in response to the user triggering the adding control and selecting a second entity graphic from the entity graphic options, display the second entity graphic in association with the first entity graphic, so that the first entity graphic and the second entity graphic replace or fill the two-dimensional display area.
Fig. 15 is a schematic structural diagram of a robot according to an embodiment of the present application. The robot comprises a machine body, one or more processors 1501, one or more memories 1502 storing computer programs, and sensors 1503, wherein the sensors 1503 comprise at least one external sensor 1503 deployed on the robot and other sensors mounted on the machine body for maintaining basic functions of the self-moving device. In addition, the self-mobile device may include necessary components such as a power supply component 1504.
The at least one external sensor is used for collecting information within the sensing range of its signal;
one or more processors 1501 for executing the computer program to: acquiring three-dimensional information of a wall surface and a door frame in the environment; establishing a basic three-dimensional map according to the three-dimensional information; acquiring two-dimensional information of each object in the environment; displaying each object at the corresponding position of the basic three-dimensional map according to the two-dimensional information, wherein each object is displayed on the basic three-dimensional map in a two-dimensional display area mode; and selecting at least one entity graph corresponding to a certain two-dimensional display area from the entity graph options to replace or fill the certain two-dimensional display area, so as to obtain the three-dimensional map of the environment.
Optionally, the entity pattern is a two-dimensional pattern or a three-dimensional pattern.
Optionally, the size and direction of the entity graphic may be adjusted.
Optionally, the entity graphic is a household article, an office supply, or a public facility.
Optionally, the two-dimensional information includes a position, a projected shape, a size of the object.
Optionally, the sensor 1503 includes a time-of-flight module that obtains depth data inside the environment; a three-dimensional depth point cloud is generated from the depth data; and the three-dimensional information of the wall surfaces and door frames in the environment is determined according to the distance information of each point in the three-dimensional depth point cloud.
Optionally, the one or more processors 1501 are configured to display a room naming option containing at least one room name in response to a user-triggered room naming operation; and in response to a user selecting operation of a certain room in the room naming options, associating a room name with the certain room.
Optionally, one or more processors 1501 are configured to set an already-used room name to a non-selectable state.
Optionally, the one or more processors 1501 are configured to receive a first voice instruction triggered by a user, where the first voice instruction includes the room name; and controlling the robot to move to the room corresponding to the room name.
Optionally, one or more processors 1501 for displaying the entity graphic options including at least one entity graphic in response to a user-triggered image replacement or filling operation;
and in response to the selection operation of the user on the certain two-dimensional display area and the selection operation of the first entity graph, replacing or filling the certain two-dimensional display area with the first entity graph.
Optionally, one or more processors 1501 are configured to associate each of the at least one entity graph with an entity name.
Optionally, the one or more processors 1501 are configured to receive a second voice instruction triggered by the user, where the second voice instruction includes an entity name; and controlling the robot to move to the object corresponding to the entity name.
Optionally, one or more processors 1501 are configured to display the adding control in association with the first entity graphic; and in response to the user triggering the adding control and selecting a second entity graphic from the entity graphic options, display the second entity graphic in association with the first entity graphic, so that the first entity graphic and the second entity graphic replace or fill the two-dimensional display area.
An embodiment of the application further provides a computer-readable storage medium storing a computer program which, when executed by one or more processors, causes the one or more processors to perform the steps in the method embodiments shown in fig. 1 to 13.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random Access Memory (RAM) and/or nonvolatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of computer-readable media.
Computer readable media, including both non-transitory and non-transitory, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase change memory (PRAM), static Random Access Memory (SRAM), dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), read Only Memory (ROM), electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by the computing device. Computer-readable media, as defined herein, does not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article or apparatus that comprises the element.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the application are to be included in the scope of the claims of the present application.

Claims (12)

1. A robot control method, comprising:
displaying a map of the environment; wherein the map comprises: a three-dimensional stereograph representing a boundary of an area in the environment and a two-dimensional display area representing a position, a projected shape, and a size of an object within the environment;
in response to a user selection operation of a two-dimensional display area of a target object in the map and a first entity pattern selection operation, replacing or filling the two-dimensional display area of the target object with the first entity pattern on the map; wherein, the first entity graph is associated with an entity name;
receiving an instruction triggered by a user and comprising the entity name;
and controlling the robot to move to the target object corresponding to the entity name based on the map.
2. The method of claim 1, wherein the zone boundary divides the space of the environment into at least one room; the method further comprises the steps of:
Responding to a room naming operation triggered by a user, and associating a room name with a target room in the map;
and when the entity name of the object in the target room is determined, associating with the room name to obtain the entity name capable of reflecting the room in which the object is located.
3. The method as recited in claim 1, further comprising:
displaying an adding control at a position corresponding to the two-dimensional display area of the target object;
if the user triggers the adding control and selects a second entity graph from the entity graph options, the second entity graph is added and displayed in the two-dimensional display area of the target object;
wherein the first physical graphic and the second physical graphic do not exceed a wire frame range of a two-dimensional display area of the target object.
4. The method as recited in claim 1, further comprising:
responding to the adjustment operation of the first entity graph, and adjusting the first entity graph;
wherein the adjusting operation includes at least one of: size adjustment, shape adjustment, position adjustment, direction adjustment, color modification; and the adjusted first entity graph does not exceed the wire frame range of the two-dimensional display area of the target object.
5. The method according to any one of claims 1 to 4, further comprising:
detecting the environment while the robot moves in the environment to acquire depth data;
establishing a basic three-dimensional map based on all of the acquired depth data; wherein the basic three-dimensional map comprises a three-dimensional figure of the area boundary;
acquiring two-dimensional information of an object in the environment, wherein the two-dimensional information comprises the position, projected shape, and size of the object;
and constructing a two-dimensional display area in the basic three-dimensional map based on the two-dimensional information of the object to obtain the map.
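
An assumption-laden outline of the map-building pipeline in claim 5, reusing the EnvironmentMap and DisplayArea2D sketches above: depth data gathered while the robot moves yields a basic three-dimensional map of area boundaries, after which two-dimensional object information (position, projected shape, size) is added as display areas. `build_base_3d_map` is only stubbed here; a point-cloud-based sketch of it follows claim 6.

```python
def build_base_3d_map(depth_frames) -> "EnvironmentMap":
    """Stub: derive a 3D figure of the area boundaries from depth data (see the claim-6 sketch)."""
    return EnvironmentMap(boundary_figure={"walls": [], "door_frames": []})

def build_map(depth_frames, detected_objects) -> "EnvironmentMap":
    env_map = build_base_3d_map(depth_frames)
    for obj in detected_objects:
        # Two-dimensional information of the object: position, projected shape, and size.
        env_map.objects[obj["id"]] = DisplayArea2D(
            x=obj["x"], y=obj["y"], width=obj["w"], height=obj["h"])
    return env_map
```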
6. The method of claim 5, wherein establishing a basic three-dimensional map based on all of the acquired depth data comprises:
analyzing all of the acquired depth data to generate a three-dimensional depth point cloud;
acquiring distance information of each pixel point from the three-dimensional depth point cloud;
and determining the wall surfaces and door frames in the environment according to the distance information of each pixel point, and generating three-dimensional figures of the wall surfaces and door frames so as to establish the basic three-dimensional map.
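
A rough sketch of the claim-6 idea: parse the accumulated depth data into a three-dimensional point cloud, read per-point distance information, and keep the points a simple height-and-range heuristic attributes to walls and door frames. Real systems would use plane fitting (e.g. RANSAC) and opening detection; the thresholds and NumPy usage here are assumptions for illustration only.

```python
import numpy as np

def depth_frames_to_point_cloud(depth_frames) -> np.ndarray:
    """Stack per-frame (N, 3) point arrays into a single (M, 3) point cloud."""
    return np.concatenate([np.asarray(f, dtype=float) for f in depth_frames], axis=0)

def wall_and_door_frame_points(cloud: np.ndarray,
                               min_height: float = 1.8,
                               max_range: float = 8.0) -> np.ndarray:
    """Distance-based heuristic: points near ceiling height and within sensor range
    are treated as wall / door-frame structure rather than furniture clutter."""
    distances = np.linalg.norm(cloud[:, :2], axis=1)   # horizontal distance from the sensor
    mask = (cloud[:, 2] >= min_height) & (distances <= max_range)
    return cloud[mask]
```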
7. A robot control method, comprising:
displaying a map of the environment; wherein the map comprises: a three-dimensional stereograph representing a boundary of an area in the environment and a two-dimensional display area representing a position, a projected shape, and a size of an object within the environment; the area boundary divides the space of the environment into at least one room;
in response to a room naming operation triggered by a user, associating a room name with a target room in the map;
receiving an instruction including the room name triggered by a user;
controlling the robot to move into a target room corresponding to the room name based on the map;
and performing autonomous cleaning according to the two-dimensional display areas of all objects in the target room in the map.
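
An editorial sketch of the claim-7 flow: resolve a room name from the user's instruction, move the robot into that room, then run coverage cleaning that treats the 2D display areas of the room's objects as keep-out regions. `robot.go_to` and `robot.cover_area` are hypothetical interfaces, not a documented robot API.

```python
def clean_named_room(env_map, rooms: dict, command: str, robot) -> bool:
    """rooms maps a room name to an object exposing entry_point, polygon, and contains(x, y)."""
    room_name = next((name for name in rooms if name in command), None)
    if room_name is None:
        return False
    room = rooms[room_name]
    robot.go_to(*room.entry_point)                                        # move into the target room
    keep_out = [a for a in env_map.objects.values() if room.contains(a.x, a.y)]
    robot.cover_area(room.polygon, keep_out=keep_out)                     # autonomous cleaning around objects
    return True
```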
8. The method of claim 7, wherein associating a room name for a target room in the map in response to a user-triggered room naming operation comprises:
in response to a room naming operation triggered by a user, displaying room naming options containing at least one room name;
and in response to the user's selection operation among the room naming options for the target room, associating the room name selected by the user with the target room.
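
A minimal sketch of the claim-8 interaction: show room naming options containing at least one room name and bind the user's choice to the target room. The default option list, data structures, and function names are assumptions; the UI layer is omitted.

```python
ROOM_NAME_OPTIONS = ["living room", "bedroom", "kitchen", "bathroom", "study"]

def room_naming_options() -> list:
    """Options displayed in response to a user-triggered room naming operation."""
    return list(ROOM_NAME_OPTIONS)

def associate_room_name(rooms: dict, target_room_id: str, chosen_name: str) -> None:
    """Associate the name the user selected with the target room."""
    rooms[chosen_name] = rooms.pop(target_room_id)
```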
9. The method according to claim 7 or 8, further comprising:
detecting the environment while the robot moves in the environment to acquire depth data;
establishing a basic three-dimensional map based on all of the acquired depth data; wherein the basic three-dimensional map comprises a three-dimensional figure of the area boundary;
acquiring two-dimensional information of an object in the environment, wherein the two-dimensional information comprises the position, projected shape, and size of the object;
and constructing a two-dimensional display area in the basic three-dimensional map based on the two-dimensional information of the object to obtain the map.
10. A robot, comprising: a machine body provided with a memory and a processor; wherein:
the memory is used for storing one or more computer programs;
the processor, coupled to the memory, is configured to execute the one or more computer programs stored in the memory to implement the steps of the robot control method of any one of claims 1 to 6, or the steps of the robot control method of any one of claims 7 to 9.
11. A client device, comprising: a memory and a processor; wherein:
the memory stores one or more computer instructions;
the processor, coupled to the memory, is configured to execute the one or more computer instructions to implement the steps of the robot control method of any one of claims 1 to 6, or the steps of the robot control method of any one of claims 7 to 9.
12. A computer-readable storage medium comprising a computer program; when executed by a processor, the computer program implements the steps of the robot control method of any one of claims 1 to 6, or the steps of the robot control method of any one of claims 7 to 9.
CN202310728464.3A 2019-11-25 2019-11-25 Robot control method, robot, client device, and storage medium Pending CN116758233A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310728464.3A CN116758233A (en) 2019-11-25 2019-11-25 Robot control method, robot, client device, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911167481.4A CN112837412B (en) 2019-11-25 2019-11-25 Three-dimensional map interaction method, three-dimensional map interaction device, robot and storage medium
CN202310728464.3A CN116758233A (en) 2019-11-25 2019-11-25 Robot control method, robot, client device, and storage medium

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201911167481.4A Division CN112837412B (en) 2019-11-25 2019-11-25 Three-dimensional map interaction method, three-dimensional map interaction device, robot and storage medium

Publications (1)

Publication Number Publication Date
CN116758233A true CN116758233A (en) 2023-09-15

Family

ID=75922478

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201911167481.4A Active CN112837412B (en) 2019-11-25 2019-11-25 Three-dimensional map interaction method, three-dimensional map interaction device, robot and storage medium
CN202310728464.3A Pending CN116758233A (en) 2019-11-25 2019-11-25 Robot control method, robot, client device, and storage medium

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201911167481.4A Active CN112837412B (en) 2019-11-25 2019-11-25 Three-dimensional map interaction method, three-dimensional map interaction device, robot and storage medium

Country Status (1)

Country Link
CN (2) CN112837412B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114610820A (en) * 2021-12-31 2022-06-10 北京石头创新科技有限公司 Optimization method and device for three-dimensional map display

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6133033B2 (en) * 2012-09-28 2017-05-24 東芝ライフスタイル株式会社 Electric vacuum cleaner
US9183666B2 (en) * 2013-03-15 2015-11-10 Google Inc. System and method for overlaying two-dimensional map data on a three-dimensional scene
WO2016106358A1 (en) * 2014-12-22 2016-06-30 Robert Bosch Gmbh System and methods for interactive hybrid-dimension map visualization
CN106067191A (en) * 2016-05-25 2016-11-02 深圳市寒武纪智能科技有限公司 The method and system of semantic map set up by a kind of domestic robot
CN106780735B (en) * 2016-12-29 2020-01-24 深圳先进技术研究院 Semantic map construction method and device and robot
CN107632285B (en) * 2017-09-19 2021-05-04 北京小米移动软件有限公司 Map creating and modifying method and device
CN108873912A (en) * 2018-08-21 2018-11-23 深圳乐动机器人有限公司 Management map method, apparatus, computer equipment and storage medium
CN110211214A (en) * 2019-05-07 2019-09-06 高新兴科技集团股份有限公司 Texture stacking method, device and the storage medium of three-dimensional map
CN110415347B (en) * 2019-07-22 2023-08-25 高新兴科技集团股份有限公司 Method and device for fusing three-dimensional live-action map and two-dimensional plane map and electronic equipment

Also Published As

Publication number Publication date
CN112837412A (en) 2021-05-25
CN112837412B (en) 2023-08-04

Similar Documents

Publication Publication Date Title
US10325409B2 (en) Object holographic augmentation
CN111542420B (en) Mobile home robot and control method thereof
US9041731B2 (en) Method and a user interaction system for controlling a lighting system, a portable electronic device and a computer program product
RU2708287C1 (en) Method and device for drawing room layout
AU2014240544B2 (en) Translated view navigation for visualizations
US20210221001A1 (en) Map-based framework for the integration of robots and smart devices
KR20130110907A (en) Apparatus and method for remote controlling based on virtual reality and augmented reality
CN111985022A (en) Processing method and device for on-line decoration, electronic equipment and storage medium
US11269350B2 (en) Method for creating an environment map for a processing unit
CN112837412B (en) Three-dimensional map interaction method, three-dimensional map interaction device, robot and storage medium
CN105701252A (en) Method and device for providing guest room information
US20230389762A1 (en) Visual fiducial for behavior control zone
CN111986305A (en) Furniture display method and device, electronic equipment and storage medium
KR20180096468A (en) A map generation system and method
CN112784664A (en) Semantic map construction and operation method, autonomous mobile device and storage medium
KR20210083574A (en) A method for providing tag interfaces using a virtual space interior an apparatus using it
CN114158980A (en) Job method, job mode configuration method, device, and storage medium
CN112866070A (en) Interaction method, interaction device, storage medium and electronic equipment
AU2020304463B2 (en) Method and apparatus for displaying item information in current space, and medium
CN110962132B (en) Robot system
CA3031840A1 (en) A device for location based services
CN111830998A (en) Operation method, virtual wall adding method, equipment and storage medium
US20240127552A1 (en) Augmented reality method and system enabling commands to control real-world devices
CN112034849B (en) Area selection processing method for self-moving equipment and self-moving equipment
CN115830162B (en) House type diagram display method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination