CN110704140A - Map processing method, map processing device, terminal equipment and storage medium


Info

Publication number
CN110704140A
CN110704140A (application CN201810747436.5A)
Authority
CN
China
Prior art keywords
floor
map
map generation
user
robot
Prior art date
Legal status
Pending
Application number
CN201810747436.5A
Other languages
Chinese (zh)
Inventor
周川艳
李晓文
Current Assignee
Ecovacs Robotics Suzhou Co Ltd
Original Assignee
Ecovacs Robotics Suzhou Co Ltd
Application filed by Ecovacs Robotics Suzhou Co Ltd filed Critical Ecovacs Robotics Suzhou Co Ltd
Priority to CN201810747436.5A
Publication of CN110704140A

Classifications

    • G06F 9/451: Execution arrangements for user interfaces (G Physics; G06 Computing; G06F Electric digital data processing; G06F 9/00 Arrangements for program control; G06F 9/44 Arrangements for executing specific programs)
    • G06F 3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range (G06F 3/01 Input arrangements for interaction between user and computer; G06F 3/048 GUI-based interaction techniques)


Abstract

The embodiment of the invention provides a map processing method, a map processing device, a terminal device and a storage medium. The method comprises: in response to a map generation interface selection operation triggered by a user, determining a target map generation interface selected by the user from at least two map generation interfaces corresponding to a robot, wherein the target map generation interface comprises operation functions related to map generation; and executing a map generation instruction triggered by the user through the operation functions. The operation functions contained in the at least two map generation interfaces are not completely the same, and the interfaces correspond to the floor scene of the environment where the robot is located, the floor scene being either a single-floor scene or a multi-floor scene. The user can thus select a suitable map generation interface according to the actual floor scene, and the floor map of that scene is then generated with manual assistance, based on the map-generation-related operation functions provided in the interface, which improves both the accuracy and the generation efficiency of the floor map result.

Description

Map processing method, map processing device, terminal equipment and storage medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a map processing method, an apparatus, a terminal device, and a storage medium.
Background
With the development of artificial intelligence technology, various intelligent robots increasingly enter people's lives, such as robots used in hospital settings and household robots such as floor sweeping robots.
In order to further increase the degree of intelligence of robots and make them easier to use, application programs for controlling robots have appeared, referred to as robot control applications (Apps). Through a robot control App, a user can control a robot and manage its related settings. Taking a floor sweeping robot as an example, the user can select a cleaning mode in the App to make the robot clean in that mode, set the water consumption, or schedule cleaning tasks.
For a sweeping robot, the sweeping effect and efficiency depend to a great extent on the map of the home environment: only if the map of the environment in which the robot operates is built accurately can the robot complete its cleaning tasks faster and better based on that map.
In practical applications, the environment of the robot can be summarized into two main scenes: a single-floor scene and a multi-floor scene. At present, the multi-floor scene simply reuses the map building and usage mode of the single-floor scene. In brief, during its work, i.e., the sweeping process, the sweeping robot checks whether a floor map is stored; if so, it further checks whether the floor environment it is currently on matches the stored floor map, and if they do not match, it generates a new floor map during sweeping to replace the stored one.
In practice, however, the intelligence of the sweeping robot is not yet perfect, and its autonomous matching decisions are not always correct. This can make the resulting floor map inconsistent with the actual environment, and an implementation that relies entirely on the sweeping robot to build floor maps automatically is too complex.
Disclosure of Invention
In view of this, embodiments of the present invention provide a map processing method, an apparatus, a terminal device, and a storage medium, so as to enable a user to assist in generating a floor map required in a robot work process according to an actual home floor scene, and improve accuracy and generation efficiency of a floor map generation result.
The embodiment of the invention provides a map processing method, which comprises the following steps:
responding to a map generation interface selection operation triggered by a user, determining a target map generation interface selected by the user from at least two kinds of map generation interfaces corresponding to a robot, wherein the target map generation interface comprises an operation function related to map generation;
executing a map generation instruction triggered by the user through the operation function;
wherein the operation functions related to map generation contained in the at least two map generation interfaces are not completely the same; the at least two map generation interfaces correspond to the floor scene of the environment where the robot is located, and the floor scene is a single-floor scene or a multi-floor scene.
An embodiment of the present invention provides a map processing apparatus, including:
a determining module, configured to determine, in response to a map generation interface selection operation triggered by a user, a target map generation interface selected by the user from at least two map generation interfaces corresponding to the robot, where the target map generation interface comprises operation functions related to map generation;
a processing module, configured to execute a map generation instruction triggered by the user through the operation functions;
wherein the operation functions related to map generation contained in the at least two map generation interfaces are not completely the same; the at least two map generation interfaces correspond to the floor scene of the environment where the robot is located, and the floor scene is a single-floor scene or a multi-floor scene.
An embodiment of the present invention provides a terminal device, including: a memory and a processor, wherein:
the memory is configured to store one or more computer instructions that, when executed by the processor, implement:
responding to a map generation interface selection operation triggered by a user, determining a target map generation interface selected by the user from at least two kinds of map generation interfaces corresponding to a robot, wherein the target map generation interface comprises an operation function related to map generation;
executing a map generation instruction triggered by the user through the operation function;
wherein the operation functions related to map generation contained in the at least two map generation interfaces are not completely the same; the at least two map generation interfaces correspond to the floor scene of the environment where the robot is located, and the floor scene is a single-floor scene or a multi-floor scene.
Embodiments of the present invention provide a computer-readable storage medium storing computer instructions that, when executed by one or more processors, cause the one or more processors to perform acts comprising:
responding to a map generation interface selection operation triggered by a user, determining a target map generation interface selected by the user from at least two kinds of map generation interfaces corresponding to a robot, wherein the target map generation interface comprises an operation function related to map generation;
executing a map generation instruction triggered by the user through the operation function;
wherein the operation functions related to map generation contained in the at least two map generation interfaces are not completely the same; the at least two map generation interfaces correspond to the floor scene of the environment where the robot is located, and the floor scene is a single-floor scene or a multi-floor scene.
In the embodiment of the invention, different map generation interfaces can be provided for different floor scenes, and the operation functions related to map generation contained in the different interfaces are not completely the same. The user can therefore select a suitable map generation interface according to the actual floor scene, and the floor map of that scene is then generated in a manually assisted manner based on the map-generation-related operation functions provided in the interface, improving the accuracy and generation efficiency of the floor map result.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
Fig. 1 is a schematic structural diagram of a map processing system according to an embodiment of the present invention;
FIG. 2 is a flowchart of a map processing method according to an embodiment of the present invention;
fig. 3 is a schematic view of an interface change state of a map generation interface selection process of a robot control App according to an embodiment of the present invention;
fig. 4 is a schematic view of an interface change state of another map generation interface selection process of a robot control App according to an embodiment of the present invention;
fig. 5 is a schematic view of an interface change state of a map generation interface selection process of a robot control App according to another embodiment of the present invention;
fig. 6 is a schematic view of an interface change state of a map generation interface selection process of a robot control App according to another embodiment of the present invention;
fig. 7 is a schematic view of an interface change state of a map generation interface selection process of a robot control App according to an embodiment of the present invention;
FIG. 8 is a flowchart of another map processing method according to an embodiment of the present invention;
fig. 9 is a schematic diagram of an interface change state of a floor map selection process of a robot control App according to an embodiment of the present invention;
fig. 10 is a schematic diagram of an interface change state of another floor map selection process of a robot control App according to an embodiment of the present invention;
fig. 11 is a schematic diagram of an interface change state of a floor map selection process of another robot control App according to an embodiment of the present invention;
fig. 12 is a schematic diagram of an interface change state of a floor map selection process of yet another robot control App according to an embodiment of the present invention;
fig. 13 is a schematic structural diagram of a map processing apparatus according to an embodiment of the present invention;
fig. 14 is a schematic structural diagram of a terminal device corresponding to the map processing apparatus provided in the embodiment shown in fig. 13.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the embodiments of the present invention and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, and "a plurality of" generally means at least two, but does not exclude the case of at least one, unless the context clearly dictates otherwise.
It should be understood that the term "and/or" as used herein merely describes an association between associated objects, meaning that three relationships may exist, e.g., A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
The words "if", as used herein, may be interpreted as "at … …" or "at … …" or "in response to a determination" or "in response to a detection", depending on the context. Similarly, the phrases "if determined" or "if detected (a stated condition or event)" may be interpreted as "when determined" or "in response to a determination" or "when detected (a stated condition or event)" or "in response to a detection (a stated condition or event)", depending on the context.
It is also noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that an article or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such an article or system. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of additional identical elements in the article or system that comprises the element.
In addition, the sequence of steps in each method embodiment described below is only an example and is not strictly limited.
Fig. 1 is a schematic structural diagram of a map processing system according to an embodiment of the present invention, as shown in fig. 1, the map processing system includes: the robot comprises a terminal device and a robot, wherein the terminal device is in communication connection with the robot.
Optionally, the communication connection between the terminal device and the robot includes, but is not limited to, a connection over a mobile cellular network such as 2G (GSM), 2.5G (GPRS), 3G (WCDMA, TD-SCDMA, CDMA2000, UMTS), 4G (LTE), 4G+ (LTE+) or WiMAX, and may also be a connection over Bluetooth, Wi-Fi, and the like.
The terminal device may be a smart phone, a tablet computer, a personal computer, or the like, and serves as the device that controls the robot. The terminal device is provided with the robot control App, so that the user can conveniently control the robot through the App.
The robot mentioned in the embodiments of the present invention mainly refers to a robot that works in some floor scene and needs to use the floor map of the floor it is on during work. In practical applications it may be, for example, a floor sweeping robot in a home environment or a catering service robot in a restaurant environment.
Generally, when the environment in which the robot is located is a home environment, most floor scenes are single-floor or multi-floor scenes; of course, other floor scenes are not excluded. For example, a hotel often has multiple floors with multiple rooms on each floor, which is also a floor scene. The floor scenes in these examples may be referred to as a home single-floor scene, a home multi-floor scene, and a hotel multi-floor scene, respectively. For convenience of description, regardless of whether the environment of the robot is a home environment, a hotel environment or any other environment, floor scenes are generally divided into single-floor scenes and multi-floor scenes.
The embodiment of the invention provides a scheme for assisting the robot in creating floor maps corresponding to the floor scene of its environment, and for selecting the floor map to be used during the robot's normal work. This improves the accuracy and generation efficiency of the floor map result and ensures that the floor map used during work matches the floor the robot is on.
For this purpose, the terminal device may provide map generation interfaces corresponding to different floor scenes, such as a single floor map generation interface corresponding to a single floor scene, and a multi-floor map generation interface corresponding to a multi-floor scene. Each map generation interface comprises an operation function related to map generation, but the operation functions related to map generation in different map generation interfaces are not identical. Therefore, the user can select a proper map generation interface according to the actual floor scene of the environment where the robot is located, and then manually intervene in the generation of the floor map corresponding to each floor under the floor scene through the operation function on the selected map generation interface, so that the floor maps suitable for each floor are finally obtained.
In this embodiment, when a floor map corresponding to a floor scene (referred to as a target floor scene for convenience of description) of an environment where the robot is located needs to be generated, a user may trigger a map generation interface selection operation to the terminal device to select a desired map generation interface from at least two kinds of map generation interfaces provided by the terminal device. For the terminal device, the terminal device may determine which map generation interface the user selects from the at least two map generation interfaces based on a map generation interface selection operation triggered by the user. For convenience of description, the map generation interface selected by the user is referred to as a target map generation interface.
After the user selects a target map generation interface, the terminal device displays it at a suitable time. The target map generation interface contains operation functions related to map generation, so the user can trigger a map generation instruction through an operation function, and the terminal device executes that instruction. It is understood that when the user selects a certain operation function, the instruction corresponding to that function is triggered on the terminal device, which executes it in response.
In practical applications, because different map generation interfaces provide different operation functions related to map generation, the map generation instructions that the user can trigger differ between interfaces. For example: floors correspond one-to-one to floor maps, and a single-floor scene has only one floor, so the operation functions provided in the single-floor map generation interface can be various map editing functions, such as floor map naming, map area division and naming, and virtual wall drawing. In a multi-floor scene with N floors, N > 1, N floor maps, one per floor, need to be generated; therefore, in addition to the various map editing functions, a new map creation function may be provided in the multi-floor map generation interface, so that when the user triggers it, the robot is controlled to generate a floor map of the new floor the user is currently on.
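The patent describes App behavior rather than code, so the following Kotlin sketch is purely illustrative (Kotlin is chosen arbitrarily, since robot control Apps typically target mobile platforms; nothing in the text prescribes a language). It models the two interface types and shows that their operation-function sets differ exactly by the new map creation function; every type and function name here is an invented stand-in.

```kotlin
// Hypothetical model of the two map generation interfaces; not the patent's API.
enum class MapEditFunction { NAME_MAP, DIVIDE_AND_NAME_AREAS, DRAW_VIRTUAL_WALL, SAVE, DELETE }

sealed class MapGenerationInterface {
    abstract val operationFunctions: Set<String>

    // Single-floor scene: map editing functions only.
    object SingleFloor : MapGenerationInterface() {
        override val operationFunctions: Set<String> =
            MapEditFunction.values().map { it.name }.toSet()
    }

    // Multi-floor scene: the same editing functions plus new-map creation,
    // because N floors require N one-to-one floor maps.
    object MultiFloor : MapGenerationInterface() {
        override val operationFunctions: Set<String> =
            MapEditFunction.values().map { it.name }.toSet() + "CREATE_NEW_MAP"
    }
}

fun main() {
    // The two sets are "not completely the same", as the text puts it.
    println(MapGenerationInterface.MultiFloor.operationFunctions -
            MapGenerationInterface.SingleFloor.operationFunctions) // [CREATE_NEW_MAP]
}
```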
Optionally, a robot control App may be installed on the terminal device, so that the terminal device may implement the processing logic of the terminal device by running the robot control App, that is, implement the map processing method provided by the embodiment of the present invention.
Based on the processing system, the following describes in detail a specific implementation procedure of the map processing method provided by the embodiment of the present invention, with a terminal device as an execution subject.
Fig. 2 is a flowchart of a map processing method according to an embodiment of the present invention, where the map processing method may be executed by the terminal device shown in fig. 1, and as shown in fig. 2, the method may include the following steps:
201. Responding to the map generation interface selection operation triggered by the user, and determining a target map generation interface selected by the user from at least two map generation interfaces corresponding to the robot, where the target map generation interface comprises operation functions related to map generation.
The operation functions related to map generation contained in the at least two map generation interfaces are not completely the same. The at least two map generation interfaces correspond to the floor scene of the environment where the robot is located, and the floor scene is a single-floor scene or a multi-floor scene.
202. Executing a map generation instruction triggered by the user through an operation function contained in the target map generation interface.
In this embodiment, different map generation interfaces may be provided for different floor scenarios, and the operation functions related to map generation contained in the different interfaces are not completely the same. The user can therefore select a suitable map generation interface according to the actual floor scene, and the floor map of that scene is then generated with manual assistance, based on the map-generation-related operation functions provided in the interface, which improves the accuracy and generation efficiency of the floor map result.
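A minimal sketch of steps 201 and 202 follows, assuming a controller class and callback names (onInterfaceSelected, onInstruction) that do not appear in the patent; the function-name strings are likewise placeholders.

```kotlin
// Hedged sketch of steps 201-202; all identifiers are assumptions.
enum class FloorScene { SINGLE_FLOOR, MULTI_FLOOR }

class MapProcessingController {
    private val functionsByScene = mapOf(
        FloorScene.SINGLE_FLOOR to
            setOf("NAME_MAP", "DIVIDE_AND_NAME_AREAS", "DRAW_VIRTUAL_WALL", "SAVE", "DELETE"),
        FloorScene.MULTI_FLOOR to
            setOf("NAME_MAP", "DIVIDE_AND_NAME_AREAS", "DRAW_VIRTUAL_WALL", "SAVE", "DELETE", "CREATE_NEW_MAP"),
    )
    private var target: FloorScene? = null

    // Step 201: determine the target map generation interface from the user's
    // map generation interface selection operation.
    fun onInterfaceSelected(scene: FloorScene) {
        target = scene
    }

    // Step 202: execute a map generation instruction triggered through an
    // operation function contained in the target interface.
    fun onInstruction(function: String) {
        val scene = checkNotNull(target) { "no target interface selected yet" }
        require(function in functionsByScene.getValue(scene)) { "function not offered by this interface" }
        // ... dispatch to map editing logic, or send a command to the robot ...
    }
}
```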
In the embodiment of the present invention, the timing and manner for the user to trigger the map generation interface selection operation are not limited, and any manner that enables the user to select the required map generation interface according to the actual floor scene may be applied to the embodiment of the present invention. Several alternative implementations are described below:
In optional implementation 1, step 201 may be implemented as: in response to the user starting the robot control application, displaying a map generation interface selection prompt page, where the prompt page contains floor scene identifiers corresponding to the at least two map generation interfaces; and determining the target map generation interface selected by the user according to the user's selection operation on that prompt page. In this implementation, the method may further include: displaying the target map generation interface in response to a map generation interface viewing operation triggered by the user.
In practical applications, the operation of starting the robot control application program (hereinafter referred to as a robot control App) may be an operation of starting the robot control App for the first time after a user installs the robot control App in a terminal device.
The execution process of optional implementation 1 is described with reference to fig. 3. As shown in fig. 3, when the user starts the robot control App for the first time, a map generation interface selection prompt page may automatically pop up over the home page of the App. This prompt page is mainly used to let the user select the map generation interface to be used according to the floor scene the robot is in. For example, as shown in fig. 3, the prompt page may be displayed as a selection dialog box floating over the home page, although the specific display form is not limited to this. It may contain two options, "only one floor of map is needed" and "multi-floor maps are needed"; in this example the two options can be regarded as a single-floor scene identifier and a multi-floor scene identifier, respectively. On this basis, when the robot is in a single-floor scene, the user can select the option "only one floor of map is needed", and from this selection operation the target map generation interface is determined to be the single-floor map generation interface. When the robot is in a multi-floor scene, the user can select the option "multi-floor maps are needed", and the target map generation interface is determined to be the multi-floor map generation interface.
It is understood that the specific contents included in the map generation interface selection prompt page illustrated in fig. 3 are only examples, and in fact, any guidance contents capable of guiding the user to select a desired map generation interface are suitable for the embodiment of the present invention.
When the user selects one of the two options, the map generation interface selection prompt page is closed, and the home page of the robot control App is presented. The home page may include various functions available to the user; optionally, a control that triggers viewing of the target map generation interface selected by the user may be displayed on the home page, such as the button labeled "map management" illustrated in fig. 3. When the user triggers the map generation interface viewing operation by clicking this button, the selected target map generation interface is presented.
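As a rough sketch of this first-start flow, assume the App records whether an interface was already chosen; the preference key and the way the option strings are matched are invented for illustration.

```kotlin
// Hypothetical first-start flow for optional implementation 1.
class FirstStartFlow(private val prefs: MutableMap<String, String>) {
    // On App start: if no choice was saved yet (e.g. the very first start),
    // pop the selection prompt page over the home page.
    fun onAppStarted(showPrompt: (List<String>) -> Unit) {
        if ("map_interface" !in prefs) {
            showPrompt(listOf("only one floor of map is needed", "multi-floor maps are needed"))
        }
    }

    // Persist the choice; the "map management" button later opens the
    // corresponding target map generation interface.
    fun onOptionChosen(option: String) {
        prefs["map_interface"] = if (option.startsWith("only")) "single_floor" else "multi_floor"
    }
}
```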
It should be noted that, firstly, when the robot has not yet generated any floor map, i.e., has not worked yet, it may be triggered to work on the current floor at any time through the robot control App or by directly operating the trigger button on the robot body; during this first work it generates a floor map corresponding to the current floor, called the first floor map for convenience of description. Regardless of whether the target map generation interface the user selects, before or after the robot's first work, is the single-floor or the multi-floor map generation interface, the terminal device adds the first floor map to the target map generation interface after receiving it from the robot. Secondly, the timing of the viewing operation is not limited: in implementation 1 the user may trigger it whenever he or she wants to view the selected target map generation interface. If, at that moment, the robot has already completed its first work and sent the first floor map to the terminal device, the first floor map is displayed in the target map generation interface; conversely, if the robot has not yet generated the first floor map, the target map generation interface contains no floor map, only the operation functions related to map generation.
To make the differences between the map generation interfaces easier to understand, fig. 3 illustrates the case where the user selects the option "only one floor of map is needed". After the user clicks the "map management" button to display the single-floor map generation interface, and assuming the first floor map, such as map one illustrated in fig. 3, has already been generated and added to it, the single-floor map generation interface may include the map editing functions corresponding to a single-floor scene, such as map naming, virtual wall drawing, area division and naming, saving, and deletion. The user can name map one through the map naming function; divide map one into several sub-areas and name each of them through the area division and naming function, for example dividing it according to the room layout of the floor; and draw virtual walls on map one through the virtual wall drawing function. A virtual wall marks a position the robot must not enter during subsequent work; for a sweeping robot, for example, the area enclosed by a virtual wall is not to be cleaned.
Therefore, when the target map generation interface selected by the user is the single-floor map generation interface, that is, the current target floor scene is a single-floor scene, and assuming that the terminal device has added the received floor map generated by the robot's first work to the single-floor map generation interface, step 202 may be implemented as:
in response to the user's editing operations on the floor map generated by the robot's first work, performed through the various map editing functions contained in the single-floor map generation interface, executing those editing operations and storing the edited floor map.
It should be noted that, in a single-floor scenario, because there is only one floor, the single-floor map generation interface generally contains only one floor map. If the user deletes that floor map, then during the robot's subsequent work there is no floor map available, so the robot regenerates the floor map of the floor, and the terminal device adds the regenerated map to the single-floor map generation interface.
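A hedged sketch of step 202 in the single-floor case follows: edits are applied to the map from the robot's first work and the result is stored. The FloorMap type and its fields are invented for illustration; the patent specifies none of them.

```kotlin
// Hypothetical single-floor editing flow; all types are illustrative.
data class FloorMap(
    var name: String? = null,
    val areaNames: MutableList<String> = mutableListOf(),
    val virtualWalls: MutableList<Pair<Pair<Double, Double>, Pair<Double, Double>>> = mutableListOf(),
)

class SingleFloorMapInterface(private var map: FloorMap?) {
    fun nameMap(name: String) { map?.name = name }

    fun nameArea(areaName: String) { map?.areaNames?.add(areaName) }

    // A virtual wall is a segment the robot must not cross in later work.
    fun drawVirtualWall(from: Pair<Double, Double>, to: Pair<Double, Double>) {
        map?.virtualWalls?.add(from to to)
    }

    fun save(store: (FloorMap) -> Unit) { map?.let(store) }

    // Only one map exists; deleting it makes the robot regenerate the floor
    // map on its next run, and the regenerated map is re-added here.
    fun delete() { map = null }
    fun onMapRegenerated(regenerated: FloorMap) { map = regenerated }
}
```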
In contrast, as shown in fig. 4, if the user selects the option "multi-floor maps are needed", the multi-floor map generation interface is presented after the user clicks the "map management" button in fig. 4. Assuming that the first floor map, such as map one illustrated in fig. 4, has already been generated and added to it, the multi-floor map generation interface contains the operation functions related to map generation that correspond to a multi-floor scene. Unlike the single-floor map generation interface, it includes both the various map editing functions and a new map creation function. The map editing functions include, for example, map naming, virtual wall drawing, area division and naming, saving, and deletion, as illustrated in fig. 4; the new map creation function is represented, for example, by the "+" control in fig. 4. Because a floor map must be generated for each floor in a multi-floor scene, the user can trigger the generation of a new floor map by clicking the "+" control in the multi-floor map generation interface.
Based on this, the above step 202 can be implemented as:
in response to the user's selection of the new map creation function, sending a new map creation instruction to the robot so as to control the robot to generate, during its work, a third floor map corresponding to the floor it is currently on;
adding the third floor map to the multi-floor map generation interface;
and in response to the user's editing operations on the third floor map through the various map editing functions, storing the edited third floor map.
That is, in practical applications, in a multi-floor scenario, suppose the robot first works on one floor; the first floor map generated then (map one in fig. 4) is added to the multi-floor map generation interface, and the user can edit and save map one through the map editing functions provided there. The user can then move the robot to a second floor and click the new map creation function, whereupon the terminal device issues a new map creation instruction to the robot. This instruction is equivalent to a control instruction that starts the robot working, so the robot starts working on the current floor, i.e., the second floor, and generates the corresponding floor map during its work, assumed to be map two in fig. 4. When the robot finishes working, it sends map two to the terminal device, which adds it to the multi-floor map generation interface, and the user can further edit and save it through the various map editing functions.
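The multi-floor flow above can be sketched as follows, under the assumption of an invented message string for the new-map instruction and with map data simplified to strings; none of this is the patent's actual protocol.

```kotlin
// Hypothetical multi-floor flow: "+" sends a new-map creation instruction,
// the robot maps the floor it is on, and the returned map joins the interface.
class MultiFloorMapInterface(private val sendToRobot: (String) -> Unit) {
    private val floorMaps = linkedMapOf<String, String>() // name -> map data (simplified)

    // The user moved the robot to a new floor and tapped "+".
    fun onCreateNewMapTapped() {
        sendToRobot("CREATE_NEW_MAP") // doubles as the instruction to start working
    }

    // The robot finished working on the new floor and pushed the map back.
    fun onFloorMapReceived(defaultName: String, mapData: String) {
        floorMaps[defaultName] = mapData
    }

    // Editing (e.g. renaming) and saving then proceed as on any floor map.
    fun rename(oldName: String, newName: String) {
        floorMaps.remove(oldName)?.let { floorMaps[newName] = it }
    }
}
```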
In summary, comparing fig. 3 and fig. 4, the difference between the map-generation-related operation functions provided on the single-floor and multi-floor map generation interfaces is mainly that the multi-floor map generation interface has a new map creation function and the single-floor interface does not. This is because in a single-floor scene the robot only ever works on one floor, whose map is usually created during the first work; even if the map has to be regenerated later because the existing floor map was deleted, the uniqueness of the floor guarantees that the correspondence between floor map and floor cannot go wrong. In a multi-floor scene, the new map creation function gives the user an entry point for manually intervening in floor map creation as needed: the user controls the robot to generate a map whenever a map of a new floor is required. From the new map creation operation triggered by the user, the robot knows that it is on a new floor, and after generating the new floor map it automatically sends it to the terminal device so that the user can name and otherwise edit it. This guarantees the accuracy of the correspondence between floor maps and floors, and the scheme is simple to implement.
Besides optional implementation 1 above, the selection of the desired map generation interface can also be realized as follows. In optional implementation 2, step 201 may be implemented as: if a first floor map sent by the robot is received, displaying a map generation interface selection prompt page, where the first floor map is generated during the robot's first work and the prompt page contains floor scene identifiers corresponding to the at least two map generation interfaces; and determining the target map generation interface selected by the user according to the user's selection operation on that prompt page. In this implementation, the method may further include: displaying the target map generation interface in response to a map generation interface viewing operation triggered by the user.
In this implementation, the robot may be controlled to work for the first time on the current floor through a control button provided by the robot control App, or its first work may be triggered by operating a physical button on the robot body. A sweeping robot, for example, performs sweeping work; a following robot performs following work. During this work, the robot traverses the walkable area of the current floor to generate the first floor map.
This implementation is described with reference to fig. 5, where the first floor map is, for example, map one illustrated in fig. 5.
After the robot generates the first floor map, it sends it to the terminal device, which then displays a map generation interface selection prompt page as illustrated in fig. 5 on the interface of the robot control App. The prompt page contains two options, "only one floor of map is needed" and "multi-floor maps are needed", which in this example can be regarded as a single-floor scene identifier and a multi-floor scene identifier, respectively. Thus, when the user selects the option "only one floor of map is needed", the target map generation interface is determined from this selection operation to be the single-floor map generation interface; when the user selects "multi-floor maps are needed", it is determined to be the multi-floor map generation interface. It should be noted that after the target map generation interface is determined, the first floor map is automatically added to it, so that the user can edit the first floor map through the map editing functions it provides.
After the user finishes the selection, the map generation interface selection prompt page is dismissed and, as shown in fig. 5, the home page of the robot control App can be displayed. In this implementation, optionally, the user decides whether to display the target map generation interface: when the user clicks the "map management" button in fig. 5 to trigger the viewing operation, the selected target map generation interface is displayed. The operations the user can then perform on it depend on which interface it is; see the related description of the preceding implementation. Fig. 5 illustrates the case where the user selects the multi-floor map generation interface.
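Implementation 2 differs from implementation 1 only in its trigger, which the following sketch makes explicit: receiving the robot's first floor map is what opens the prompt page, and the map is then attached to whichever interface the user picks. All names and the string-typed map data are invented.

```kotlin
// Hypothetical flow for optional implementation 2.
class FirstMapPromptFlow(private val showPrompt: () -> Unit) {
    private var pendingFirstMap: String? = null // map data, simplified

    fun onFirstFloorMapReceived(mapData: String) {
        pendingFirstMap = mapData
        showPrompt() // "only one floor" vs "multi-floor" options
    }

    // The first floor map is added to the chosen target interface
    // automatically, so the user can edit it there right away.
    fun onSceneChosen(multiFloor: Boolean, addToInterface: (String, String) -> Unit) {
        pendingFirstMap?.let {
            addToInterface(if (multiFloor) "multi_floor" else "single_floor", it)
        }
    }
}
```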
In optional implementation 3, step 201 may be implemented as: in response to a map generation interface viewing operation triggered by the user, displaying a map generation interface selection prompt page containing floor scene identifiers corresponding to the at least two map generation interfaces; and determining the target map generation interface selected by the user according to the user's selection operation on that prompt page. In this implementation, the target map generation interface can be displayed immediately after it is determined.
This implementation is explained with reference to fig. 6. As shown in fig. 6, when the user opens the robot control App, a control for triggering the viewing operation, such as the "map management" button shown in fig. 6, may be displayed on the home page. When the user clicks this button, a map generation interface selection prompt page is displayed; it may be a selection dialog box, as shown in fig. 6, containing the two options "only one floor of map is needed" and "multi-floor maps are needed", which can be regarded as single-floor and multi-floor scene identifiers, respectively. When the user selects "only one floor of map is needed", the target map generation interface is determined to be the single-floor map generation interface; when the user selects "multi-floor maps are needed", it is determined to be the multi-floor map generation interface. Once determined, the target map generation interface pops up automatically. The operations the user can then perform on it depend on which interface it is; see the related description of the preceding implementations. Fig. 6 illustrates the case where the user selects the multi-floor map generation interface.
In optional implementation 4, step 201 may be implemented as: if a second floor map sent by the robot is received, displaying a map saving prompt page, where the second floor map is sent when the robot determines that it does not match any stored floor map, and the prompt page contains an option for choosing whether to save the second floor map as a floor map of a multi-floor scene; and determining, from the user's operation of saving the second floor map as a floor map of the multi-floor scene, that the target map generation interface selected by the user is the multi-floor map generation interface. In this implementation, the target map generation interface can be displayed immediately after it is determined.
It should be noted that each floor map generated by the robot may be stored not only in the terminal device, i.e., the robot control App, but also in the robot itself: the user can send a floor map that is to be kept to the robot through the App for storage.
In this implementation, the user can thus control the robot to work normally according to actual needs, for example starting the sweeping robot when the floor of some storey needs cleaning. During its work the robot can, optionally, identify the floor environment it is currently working in. If the current working floor environment matches a stored floor map, that map can be used for the subsequent work; for a sweeping robot, for example, when the current floor environment matches the stored map of a floor, the robot can sweep according to that map, which may indicate which areas can and cannot be swept, where a carpet is laid so that suction should be increased, and so on. Conversely, if the robot finds that the current working floor environment matches none of the stored floor maps, it has probably never worked in this environment before. In that case the robot generates, during its work, a floor map corresponding to the current environment, called the second floor map, and sends it to the terminal device, and the user decides whether it should be kept as the floor map of this environment.
After the terminal device receives the second floor map, the robot control App can be opened and a map saving prompt page displayed on its home page. As shown in fig. 7, the prompt page may optionally contain the text "a new environment map has been identified; save it as a new floor map?" with corresponding yes/no options. When the user selects "yes", it is determined that the scene is a multi-floor scene and the second floor map is a new floor map; hence the target map generation interface selected by the user is determined to be the multi-floor map generation interface, and the second floor map, map two in fig. 7, can be added to it.
This implementation provides a way for the user to intervene in multi-floor map generation in a multi-floor scene: when the robot identifies a new floor map, the user decides whether to treat it as the map of some new floor. Accordingly, the robot notifies the terminal device only when it determines that the currently generated second floor map does not match any already stored floor map, which implies that at least the first floor map is already stored in the robot.
If no floor map is stored in the robot yet, the robot has not worked before. In that case, optionally, after the robot generates a first floor map it may send it to the terminal device, which may add it to every map generation interface, i.e., to both the single-floor and the multi-floor map generation interface. As shown in fig. 7, the first floor map, represented by map one, was already added to the multi-floor map generation interface before map two. If the user wants to view a map generation interface before the second floor map is received, the robot control App does not yet know whether the robot is in a multi-floor or a single-floor scene; optionally, the App may then pick one of the two interfaces at random to display, and regardless of which one it is, only the various map editing functions are displayed, so the user can only edit map one there. After the second floor map, map two, is received, the user's selection determines the interface: when the user selects the "yes" option in fig. 7, the multi-floor map generation interface is selected, and the displayed multi-floor map generation interface now shows the new map creation function in addition to the map editing functions; when the user selects "no", the single-floor map generation interface is selected and map two is discarded.
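The match-or-notify logic of implementation 4 can be sketched as below, with matching reduced to string equality purely for brevity; the real robot would compare map geometry, and every identifier here is an assumption.

```kotlin
// Hypothetical sketch of implementation 4's decision flow.
class NewEnvironmentFlow(private val storedMaps: MutableList<String>) {
    // Robot side, during work: check the freshly built map against storage.
    fun robotCheck(currentMap: String, notifyTerminal: (String) -> Unit) {
        if (currentMap !in storedMaps) notifyTerminal(currentMap) // the second floor map
        // else: keep working with the matching stored floor map
    }

    // Terminal side: "new environment map identified - save as new floor map?"
    fun onUserAnswer(saveAsNewFloor: Boolean, candidate: String) {
        if (saveAsNewFloor) storedMaps += candidate // multi-floor interface chosen
        // otherwise the candidate is discarded (single-floor interface chosen)
    }
}
```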
In summary, in the embodiments of the present invention, different map generation interfaces may be provided for different floor scenes, and the operation functions related to map generation contained in the different interfaces are not completely the same. The user can therefore select a suitable map generation interface according to the actual floor scene, and the floor map of that scene is then generated with manual assistance, based on the map-generation-related operation functions provided in the interface, which improves the accuracy and generation efficiency of the floor map result.
The preceding embodiments introduced how floor maps are generated. In a multi-floor scene, the robot's later normal work also involves choosing a floor map, so the embodiment of the invention further provides a scheme by which the user can select, for the robot, the floor map corresponding to the floor it currently needs to work on. The selection process is described with reference to the embodiment shown in fig. 8.
Fig. 8 is a flowchart of another map processing method according to an embodiment of the present invention, and as shown in fig. 8, the method includes the following steps:
801. Displaying a floor map selection interface.
802. Responding to a floor map selection operation triggered by the user in the floor map selection interface, and sending a work instruction to the robot so that the robot works according to the target floor map selected by the user.
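A minimal sketch of steps 801 and 802 follows; the work instruction format is an assumption, since the patent does not specify how the selected map is carried to the robot.

```kotlin
// Hypothetical packing of the user's choice into a work instruction.
data class WorkInstruction(val targetFloorMapId: String)

class FloorMapSelector(private val send: (WorkInstruction) -> Unit) {
    // Step 802: react to the user's floor map selection operation by sending
    // a work instruction so the robot works with the selected target map.
    fun onFloorMapSelected(mapId: String) {
        send(WorkInstruction(targetFloorMapId = mapId))
    }
}
```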
In practical applications, how the user operates the robot control App to display the floor map selection interface is not specifically limited. For example, a button for triggering floor map selection may be provided on the home page or on some other page of the App, and the user clicks it to display the floor map selection interface.
Since floor map selection generally only arises in a multi-floor scenario, the floor map selection interface may be exposed only if the user has chosen the multi-floor map generation interface. In a single-floor scene there is only one floor map, so the user can edit it in the single-floor map generation interface and send it to the robot, which then uses it in subsequent work. The present embodiment may therefore be performed when the user has selected the multi-floor map generation interface and several floor maps have been generated in it.
In addition, in an optional embodiment, in a multi-floor scenario, the floor map selection interface may be the same interface as the multi-floor map generation interface, that is, a plurality of generated floor maps may be displayed in the multi-floor map generation interface, so that a user may select a currently required floor map from the multi-floor map generation interface according to an actual requirement and issue the selected floor map to the robot.
However, the floor map selection interface and the multi-floor map generation interface may also be designed independently. Several alternative floor map selection interface styles and selection methods are given below, though the implementation is not limited to these. For convenience of description, the floor map that the user needs to select, i.e., the one corresponding to the floor the robot currently works on, is called the target floor map.
As shown in fig. 9, in an alternative manner, a floor switching button may be included in the floor map selection interface for the user to select the target floor map by performing a switching operation on the floor switching button. It can be understood that the floor map selection interface further includes a map display area as illustrated in fig. 9, so that the map display area switches and displays the maps of the floors based on the switching operation triggered by the user on the floor switching button, thereby facilitating the user to find the desired target floor map. In addition, the floor map selection interface may further include a button (not shown in fig. 9) similar to "confirm" for determining which floor map the user has selected, and after the target floor map is displayed by switching, the user clicks "confirm" to trigger the terminal device to issue the work instruction to the robot, so that the target floor map can be carried in the work instruction and issued to the robot.
As shown in fig. 10, in another alternative, a floor identifier may be included in the floor map selection interface for the user to select a target floor map by performing a selection operation on the floor identifier. That is, the floor marks corresponding to the generated multiple floor maps may be displayed on the floor map selection interface, and the user may click a desired floor mark. Fig. 10 illustrates 1F, 2F, and 3F as floor marks of the first floor, the second floor, and the third floor, respectively. Optionally, the floor map selection interface further includes a map display area as illustrated in fig. 10, so that a corresponding floor map is displayed in the map display area based on a selection operation of the floor identifier by the user.
As shown in fig. 11, in another alternative, thumbnails of the stored floor maps are included in the floor map selection interface, so that the user can select the target floor map by selecting a thumbnail. Specifically, as illustrated in fig. 11, the floor map selection interface may optionally include a thumbnail display area in which thumbnails of several floor maps, such as a 1F map, a 2F map, and a 3F map, are displayed. In addition, the floor map selection interface further includes an enlarged view display area as illustrated in fig. 11: when the user selects the thumbnail of a floor map, the thumbnail display area can be collapsed or hidden, and an enlarged version of the corresponding floor map is displayed in the enlarged view display area.
As shown in fig. 12, in another alternative, a perspective view formed by a plurality of stored floor maps is included in the floor map selection interface, so that the user can select a target floor map by performing a selection operation on the plurality of floor maps. Optionally, the perspective view may reflect an upper-lower positional relationship between a plurality of floors corresponding to the plurality of floor maps. For example, in the perspective view of the three layers formed by the map 1, the map 2, and the map 3 respectively corresponding to the first floor, the second floor, and the third floor illustrated in fig. 12, when the user wants to select the map 2 as the target floor map, the user only needs to click the layer where the map 2 is located.
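As one concrete illustration of these styles, here is a sketch of the floor-identifier variant of fig. 10: tapping "1F", "2F", or "3F" shows that floor's map in the display area and marks it as the target. All names are invented.

```kotlin
// Hypothetical floor-identifier selection (fig. 10 style).
class FloorIdentifierSelection(
    private val mapsByFloor: Map<String, String>, // e.g. "2F" -> map data
    private val display: (String) -> Unit,
) {
    var targetFloorMap: String? = null
        private set

    fun onFloorIdentifierTapped(floorId: String) {
        mapsByFloor[floorId]?.let {
            display(it)        // show the map in the map display area
            targetFloorMap = it
        }
    }
}
```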
In summary, in the embodiment of the present invention, the user can manually select the required floor map according to actual needs and issue it to the robot, ensuring that the floor map used during the robot's work matches the floor environment where the robot is located.
The map processing apparatus of one or more embodiments of the present invention will be described in detail below. Those skilled in the art will appreciate that each of these apparatuses can be built from commercially available hardware components configured through the steps taught by the present solution.
Fig. 13 is a schematic structural diagram of a map processing apparatus according to an embodiment of the present invention, and as shown in fig. 13, the apparatus includes: a determining module 11 and a processing module 12.
The determining module 11 is configured to determine, in response to a map generation interface selection operation triggered by a user, a target map generation interface selected by the user from at least two kinds of map generation interfaces corresponding to the robot, where the target map generation interface includes an operation function related to map generation.
The processing module 12 is configured to execute a map generation instruction triggered by the user through the operation function;
wherein the operation functions related to map generation included in the at least two map generation interfaces are not completely the same, the at least two map generation interfaces correspond to the floor scene of the environment where the robot is located, and the floor scene is a single-floor scene or a multi-floor scene.
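As a rough structural sketch of these two modules, with hypothetical class and method names standing in for the actual implementation:

```python
# Sketch of the fig. 13 apparatus: the determining module resolves the user's
# selection to one of the two interface kinds, and the processing module
# executes the instructions triggered through that interface's functions.
class DeterminingModule:                     # module 11
    INTERFACES = ("single_floor", "multi_floor")

    def determine(self, user_selection: str) -> str:
        if user_selection not in self.INTERFACES:
            raise ValueError(f"unknown map generation interface: {user_selection}")
        return user_selection                # the target map generation interface

class ProcessingModule:                      # module 12
    def execute(self, instruction: dict) -> None:
        # e.g. {"op": "create_new_map"} or {"op": "edit_map", "edits": [...]}
        print(f"executing map generation instruction: {instruction}")

target = DeterminingModule().determine("multi_floor")
ProcessingModule().execute({"op": "create_new_map", "interface": target})
```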
In an optional embodiment, the apparatus further comprises a display module 13, configured to display a map generation interface selection prompt page in response to the user's operation of starting the robot control application, where the map generation interface selection prompt page includes the floor scene identifiers corresponding to the at least two map generation interfaces.

Accordingly, the determining module 11 may be configured to determine the target map generation interface selected by the user according to the user's selection operation in the map generation interface selection prompt page.

Correspondingly, the display module 13 may be further configured to display the target map generation interface in response to a map generation interface viewing operation triggered by the user.
In an optional embodiment, the display module 13 may be further configured to: if a first floor map sent by the robot is received, display a map generation interface selection prompt page, where the first floor map is the floor map generated by the robot's first work, and the page includes the floor scene identifiers corresponding to the at least two map generation interfaces.

Accordingly, the determining module 11 may be configured to determine the target map generation interface selected by the user according to the user's selection operation in the map generation interface selection prompt page.

Correspondingly, the display module 13 may be further configured to display the target map generation interface in response to a map generation interface viewing operation triggered by the user.
In an optional embodiment, the display module 13 may be further configured to display a map generation interface selection prompt page in response to a map generation interface viewing operation triggered by the user, where the page includes the floor scene identifiers corresponding to the at least two map generation interfaces.

Accordingly, the determining module 11 may be configured to determine the target map generation interface selected by the user according to the user's selection operation in the map generation interface selection prompt page.
In an optional embodiment, the display module 13 may be further configured to: if a second floor map sent by the robot is received, display a map saving prompt page, where the second floor map is sent when the robot determines that it does not match any stored floor map, and the map saving prompt page includes an option for choosing whether to save the second floor map as a floor map of a multi-floor scene.

Accordingly, the determining module 11 may be configured to determine, according to the user's operation of saving the second floor map as a floor map of the multi-floor scene, that the target map generation interface selected by the user is the multi-floor map generation interface.
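The following sketch illustrates this optional embodiment's flow with illustrative names; the robot-side map matching is assumed to have already happened before the terminal is notified.

```python
# Sketch: the robot reports a map that matches no stored floor map, the
# terminal shows the saving prompt, and saving the map as a multi-floor map
# implies that the user's target interface is the multi-floor one.
def on_unmatched_map_received(second_floor_map, stored_maps, prompt_save):
    if prompt_save(second_floor_map):   # "save as a floor map of a multi-floor scene?"
        stored_maps.append(second_floor_map)
        return "multi_floor"            # target map generation interface inferred
    return None                         # user declined; no interface inferred

stored = ["1F map"]
choice = on_unmatched_map_received("2F map", stored, prompt_save=lambda m: True)
assert choice == "multi_floor" and stored == ["1F map", "2F map"]
```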
Optionally, if the target map generation interface is a single-floor map generation interface corresponding to a single-floor scene, and the operation functions of the single-floor map generation interface include a plurality of map editing functions, the processing module 12 may be configured to: add the received floor map generated by the robot's first work to the single-floor map generation interface; and, in response to the user's editing of that floor map through the map editing functions, store the edited floor map.
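A minimal sketch of this single-floor branch, representing maps and edits as plain dictionaries purely for illustration:

```python
# Sketch: the map from the robot's first work session is loaded into the
# single-floor interface, the user's edits are applied, and the result is
# what gets stored.
def single_floor_flow(first_work_map: dict, edits: list) -> dict:
    interface = {"kind": "single_floor", "map": dict(first_work_map)}
    for edit in edits:               # each edit stands in for one editing function
        interface["map"].update(edit)
    return interface["map"]          # the edited floor map is stored

saved = single_floor_flow({"rooms": 3}, edits=[{"labels": ["kitchen", "hall"]}])
print(saved)  # {'rooms': 3, 'labels': ['kitchen', 'hall']}
```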
Optionally, if the target map generation interface is a multi-floor map generation interface corresponding to a multi-floor scene, and the operation functions of the multi-floor map generation interface include a plurality of map editing functions and a new map creation function, the processing module 12 may be configured to: in response to the user's selection of the new map creation function, send a new map creation instruction to the robot so as to control the robot to generate a third floor map corresponding to the current floor during its work; add the third floor map to the multi-floor map generation interface; and, in response to the user's editing of the third floor map through the map editing functions, store the edited third floor map.
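A corresponding sketch of the multi-floor branch, under the same illustrative conventions:

```python
# Sketch: selecting the new map creation function sends a creation instruction
# to the robot, the returned third floor map is added to the interface's map
# list, and the user's edits are applied and stored.
def multi_floor_new_map(send_to_robot, interface_maps: list, edits: list) -> dict:
    third_floor_map = send_to_robot({"op": "create_new_map"})  # robot maps the current floor
    interface_maps.append(third_floor_map)   # shown in the multi-floor interface
    for edit in edits:
        third_floor_map.update(edit)
    return third_floor_map                   # the edited map is stored

maps = [{"floor": "1F"}]
result = multi_floor_new_map(lambda cmd: {"floor": "2F"}, maps, [{"name": "second floor"}])
assert result == {"floor": "2F", "name": "second floor"} and len(maps) == 2
```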
Optionally, the display module 13 may be further configured to display a floor map selection interface.

Accordingly, the processing module 12 may be further configured to send a work instruction to the robot in response to a floor map selection operation triggered by the user in the floor map selection interface, so that the robot works according to the target floor map selected by the user. Several optional styles of this interface follow, with a robot-side sketch after them.
Optionally, the floor map selection interface includes a floor switching button, so that the user can select the target floor map by performing a switching operation on the floor switching button.
Optionally, the floor map selection interface includes a floor identifier, so that the user can select the target floor map by performing a selection operation on the floor identifier.
Optionally, the floor map selection interface includes thumbnails of a plurality of stored floor maps, so that the user can select the target floor map by performing a selection operation on the thumbnails.
Optionally, the floor map selection interface includes a perspective view formed by a plurality of stored floor maps, so that the user can select the target floor map by performing a selection operation on the plurality of floor maps.
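For completeness, a sketch of the robot side of this exchange, again with assumed names and message shapes matching the earlier examples:

```python
# Sketch: on receiving the work instruction, the robot loads the carried
# target floor map and works against it (for example, for localization
# and path planning).
def on_work_instruction(instruction: dict) -> None:
    target_map = instruction["floor_map"]  # target floor map carried in the instruction
    print(f"working according to {target_map!r}")

on_work_instruction({"type": "work_instruction", "floor_map": "2F map"})
```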
The apparatus shown in fig. 13 can execute the map processing method provided by the foregoing embodiments; for details not described here, reference may be made to the related description of those embodiments. The implementation process and technical effects of this technical solution are likewise described in the foregoing embodiments and are not repeated here.
Having described the internal functions and structure of the map processing apparatus, in one possible design the structure of the map processing apparatus may be implemented as a terminal device of a user, such as a mobile phone, a smart wearable device, or a PC. As shown in fig. 14, the terminal device may include a processor 21 and a memory 22, where the memory 22 stores a program enabling the terminal device to execute the map processing method provided in the foregoing embodiments, and the processor 21 is configured to execute the program stored in the memory 22.
The program comprises one or more computer instructions which, when executed by the processor 21, are capable of performing the steps of:
responding to a map generation interface selection operation triggered by a user, determining a target map generation interface selected by the user from at least two kinds of map generation interfaces corresponding to a robot, wherein the target map generation interface comprises an operation function related to map generation;
executing a map generation instruction triggered by the user through the operation function;
wherein the operation functions related to map generation included in the at least two map generation interfaces are not completely the same, the at least two map generation interfaces correspond to the floor scene of the environment where the robot is located, and the floor scene is a single-floor scene or a multi-floor scene.
Optionally, the processor 21 is further configured to perform all or part of the steps in the foregoing embodiments.
The structure of the terminal device may further include a communication interface 23, which is used for communicating with other devices or a communication network.
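A rough sketch of how these three components might cooperate, with hypothetical names standing in for the actual program:

```python
# Sketch of the fig. 14 arrangement: the memory holds the program implementing
# the method, the processor executes it, and the communication interface
# forwards the resulting instructions to the robot.
class TerminalDevice:
    def __init__(self, program, comm_send):
        self.memory = {"program": program}   # memory 22: stores the program
        self.comm_send = comm_send           # communication interface 23

    def run(self, user_event) -> None:
        program = self.memory["program"]     # processor 21 executes the program
        instruction = program(user_event)
        if instruction is not None:
            self.comm_send(instruction)      # sent to the robot over the network

device = TerminalDevice(program=lambda e: {"op": e}, comm_send=print)
device.run("create_new_map")                 # prints {'op': 'create_new_map'}
```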
Additionally, embodiments of the present invention provide a computer-readable storage medium storing computer instructions that, when executed by one or more processors, cause the one or more processors to perform acts comprising:
responding to a map generation interface selection operation triggered by a user, and determining a target map generation interface selected by the user from at least two map generation interfaces corresponding to a robot, wherein the target map generation interface comprises an operation function related to map generation;
executing a map generation instruction triggered by the user through the operation function;
wherein the operation functions related to map generation included in the at least two map generation interfaces are not completely the same, the at least two map generation interfaces correspond to the floor scene of the environment where the robot is located, and the floor scene is a single-floor scene or a multi-floor scene.
In addition, the computer instructions, when executed by one or more processors, may further cause the one or more processors to execute the programs involved in the map processing methods in the above embodiments.
The above-described apparatus embodiments are merely illustrative. The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment, which those of ordinary skill in the art can understand and implement without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, or by a combination of hardware and software. Based on this understanding, the above technical solutions, in essence or in the part contributing to the prior art, may be embodied in the form of a computer program product carried on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable mapping apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable mapping apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable mapping apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable mapping apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (15)

1. A map processing method, comprising:
responding to a map generation interface selection operation triggered by a user, determining a target map generation interface selected by the user from at least two kinds of map generation interfaces corresponding to a robot, wherein the target map generation interface comprises an operation function related to map generation;
executing a map generation instruction triggered by the user through the operation function;
wherein the operation functions related to map generation included in the at least two map generation interfaces are not completely the same, the at least two map generation interfaces correspond to the floor scene of the environment where the robot is located, and the floor scene is a single-floor scene or a multi-floor scene.
2. The method of claim 1, wherein determining the target map generation interface selected by the user from the at least two kinds of map generation interfaces corresponding to the robot, in response to the map generation interface selection operation triggered by the user, comprises:
responding to the operation of starting the robot control application program by the user, and displaying a map generation interface selection prompt page, wherein the map generation interface selection prompt page comprises floor scene identifications corresponding to the at least two map generation interfaces;
and determining the target map generation interface selected by the user according to the selection operation of the user in the map generation interface selection prompt page.
3. The method of claim 1, wherein determining the target map generation interface selected by the user from the at least two kinds of map generation interfaces corresponding to the robot, in response to the map generation interface selection operation triggered by the user, comprises:
if a first floor map sent by the robot is received, displaying a map generation interface selection prompt page, wherein the first floor map is generated by the robot working for the first time, and the map generation interface selection prompt page comprises floor scene identifications corresponding to the at least two kinds of map generation interfaces;
and determining the target map generation interface selected by the user according to the selection operation of the user in the map generation interface selection prompt page.
4. The method of claim 1, wherein determining the target map generation interface selected by the user from the at least two kinds of map generation interfaces corresponding to the robot, in response to the map generation interface selection operation triggered by the user, comprises:
responding to the map generation interface viewing operation triggered by the user, and displaying a map generation interface selection prompt page, wherein the map generation interface selection prompt page comprises floor scene identifications corresponding to the at least two kinds of map generation interfaces;
and determining the target map generation interface selected by the user according to the selection operation of the user in the map generation interface selection prompt page.
5. The method of claim 1, wherein determining the target map generation interface selected by the user from the at least two kinds of map generation interfaces corresponding to the robot, in response to the map generation interface selection operation triggered by the user, comprises:
if a second floor map sent by the robot is received, displaying a map saving prompt page, wherein the second floor map is sent when the robot determines that it does not match any stored floor map, and the map saving prompt page includes an option for choosing whether to save the second floor map as a floor map of a multi-floor scene;
and determining, according to the user's operation of saving the second floor map as a floor map of the multi-floor scene, that the target map generation interface selected by the user is the multi-floor map generation interface.
6. The method according to any one of claims 1 to 4, wherein the target map generation interface is a single-floor map generation interface corresponding to a single-floor scene, and the operation functions of the single-floor map generation interface include a plurality of map editing functions;
before executing the map generation instruction triggered by the user through the operation function, the method further comprises the following steps:
adding the received floor map generated by the first work of the robot into the single-floor map generation interface;
the executing of the map generation instruction triggered by the user through the operation function includes:
and responding to the editing operation of the user on the floor map generated by the first work of the robot according to the multiple map editing functions, and storing the edited floor map.
7. The method according to any one of claims 1 to 5, wherein the target map generation interface is a multi-floor map generation interface corresponding to a multi-floor scene, and the operation functions of the multi-floor map generation interface comprise a plurality of map editing functions and a new map creation function;
the executing of the map generation instruction triggered by the user through the operation function includes:
responding to the selection operation of the user on the new map creation function, sending a new map creation instruction to the robot so as to control the robot to generate a third floor map corresponding to the current floor during the working process;
adding the third floor map to the multi-floor map generation interface;
and responding to the editing operation of the user on the third floor map according to the multiple map editing functions, and storing the edited third floor map.
8. The method of claim 7, further comprising:
displaying a floor map selection interface;
and responding to the floor map selection operation triggered by the user in the floor map selection interface, and sending a work instruction to the robot so as to enable the robot to work according to the target floor map selected by the user.
9. The method of claim 8, wherein a floor switching button is included in the floor map selection interface, for the user to select the target floor map by performing a switching operation on the floor switching button.
10. The method of claim 8, wherein a floor identifier is included in the floor map selection interface for the user to select the target floor map by selecting the floor identifier.
11. The method of claim 8, wherein thumbnails of a plurality of stored floor maps are included in the floor map selection interface, so that the user can select the target floor map by selecting the thumbnails.
12. The method of claim 8, wherein the floor map selection interface comprises a perspective view formed by a plurality of stored floor maps, so that the user can select the target floor map by performing a selection operation on the plurality of floor maps.
13. A map processing apparatus, comprising:
the system comprises a determining module, a judging module and a judging module, wherein the determining module is used for responding to map generation interface selection operation triggered by a user, and determining a target map generation interface selected by the user from at least two map generation interfaces corresponding to a robot, and the target map generation interface comprises an operation function related to map generation;
the processing module is used for executing a map generation instruction triggered by the user through the operation function;
the at least two map generation interfaces comprise different operation functions related to map generation, correspond to floor scenes of the environment where the robot is located, and are single-floor scenes or multi-floor scenes.
14. A terminal device, comprising: a memory, a processor, and a display screen; wherein
the memory is to store one or more computer instructions that, when executed by the processor, implement:
responding to a map generation interface selection operation triggered by a user, determining a target map generation interface selected by the user from at least two kinds of map generation interfaces corresponding to a robot, wherein the target map generation interface comprises an operation function related to map generation;
executing a map generation instruction triggered by the user through the operation function;
wherein the operation functions related to map generation included in the at least two map generation interfaces are not completely the same, the at least two map generation interfaces correspond to the floor scene of the environment where the robot is located, and the floor scene is a single-floor scene or a multi-floor scene.
15. A computer-readable storage medium storing computer instructions, which when executed by one or more processors, cause the one or more processors to perform acts comprising:
responding to a map generation interface selection operation triggered by a user, determining a target map generation interface selected by the user from at least two kinds of map generation interfaces corresponding to a robot, wherein the target map generation interface comprises an operation function related to map generation;
executing a map generation instruction triggered by the user through the operation function;
wherein the operation functions related to map generation included in the at least two map generation interfaces are not completely the same, the at least two map generation interfaces correspond to the floor scene of the environment where the robot is located, and the floor scene is a single-floor scene or a multi-floor scene.
CN201810747436.5A 2018-07-09 2018-07-09 Map processing method, map processing device, terminal equipment and storage medium Pending CN110704140A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810747436.5A CN110704140A (en) 2018-07-09 2018-07-09 Map processing method, map processing device, terminal equipment and storage medium

Publications (1)

Publication Number Publication Date
CN110704140A (en) 2020-01-17

Family

ID=69192542

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810747436.5A Pending CN110704140A (en) 2018-07-09 2018-07-09 Map processing method, map processing device, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110704140A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111857136A (en) * 2020-07-02 2020-10-30 珠海格力电器股份有限公司 Target map processing method and device
CN112022002A (en) * 2020-08-21 2020-12-04 苏州三六零机器人科技有限公司 Map editing method, device, equipment and storage medium for sweeper
CN114185529A (en) * 2021-11-29 2022-03-15 江苏集萃智能制造技术研究所有限公司 Method for automatically generating guidance interface of guidance robot
WO2022133697A1 (en) * 2020-12-22 2022-06-30 北京洛必德科技有限公司 Mobile robot map construction method and apparatus, and computer device and storage medium
TWI779592B (en) * 2021-05-05 2022-10-01 萬潤科技股份有限公司 Map editing method and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101126808A (en) * 2007-08-02 2008-02-20 中国科学院自动化研究所 Robot navigation system and navigation method
CN104115082A (en) * 2012-02-08 2014-10-22 罗伯特有限责任公司 Method for automatically triggering a self-positioning process
WO2017079777A2 (en) * 2015-11-11 2017-05-18 RobArt GmbH Subdivision of maps for robot navigation
CN106793905A (en) * 2014-08-19 2017-05-31 三星电子株式会社 Clean robot and the control device for clean robot, control system and control method
CN107703890A (en) * 2013-01-21 2018-02-16 上海科斗电子科技有限公司 Intelligent control software system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LI YI ET AL.: "Design and development of an indoor navigation system based on a mobile phone app", ELECTRONIC DESIGN ENGINEERING *

Similar Documents

Publication Publication Date Title
CN110704140A (en) Map processing method, map processing device, terminal equipment and storage medium
WO2023051227A1 (en) Control method and apparatus for cleaning device
US11886203B2 (en) Flight control method and apparatus, and control device
CN102799385B (en) desktop control method and device
CN105490897B (en) Control method and control device of household appliance and mobile terminal
CN111265151B (en) Robot control method, device and storage medium
KR102207443B1 (en) Method for providing graphic user interface and apparatus for the same
CN105144058B (en) Prompt is placed in delay
CN105068721A (en) Operation menu display method and terminal
JP2005004734A5 (en)
EP2963544A1 (en) Coordinating activity views across operating system domains
CN105378628A (en) Start and application navigation
CN106201523A (en) The control method of application program, actuation means and terminal
KR101395442B1 (en) linkage application control system and method of controlling of application loading
CN106126115A (en) A kind of method and device of the disk of EVM(extended virtual machine)
CN105117249A (en) Method and device for adding desktop widget on android terminal
JPH05204859A (en) Command storing system for os
CN111158254A (en) Method and device for starting scene and mobile phone
CN107544723B (en) Application program interaction method, device and system
CN113518187A (en) Video editing method and device
WO2012155844A1 (en) Method and device for automatic removal of code
JPH09274553A (en) Device and method for controlling window display
CN106484468A (en) The management method, managing device of application program and mobile terminal
CN105183707A (en) Content editing method and terminal
CN108090112A (en) The exchange method and device of a kind of search interface

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200117)