CN111414786A - Method and device for information interaction with a working robot

Publication number
CN111414786A
Authority
CN
China
Prior art keywords
working environment
working
image
robot
work
Prior art date
Legal status
Pending
Application number
CN201910222204.2A
Other languages
Chinese (zh)
Inventor
鲍亮
汤进举
Current Assignee
Ecovacs Robotics Suzhou Co Ltd
Original Assignee
Ecovacs Robotics Suzhou Co Ltd
Priority date
Filing date
Publication date
Application filed by Ecovacs Robotics Suzhou Co Ltd
Priority to CN201910222204.2A
Publication of CN111414786A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00: Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/40: Document-oriented image-based pattern recognition
    • G06V30/42: Document-oriented image-based pattern recognition based on the type of document
    • G06V30/422: Technical drawings; Geographical maps

Abstract

The application discloses a method for information interaction with a working robot. The method comprises: acquiring, through a user side, a working environment map of the working robot; receiving operation information on a certain position or a certain recognition result of the working environment map; in response to that operation information, acquiring an image of the working environment corresponding to that position or recognition result; and displaying the image of the working environment corresponding to the position or recognition result. By viewing the displayed image of the corresponding working environment, the user can check the details of the working environment of the working robot clearly, accurately and intuitively.

Description

Method and device for information interaction with working robot
Technical Field
The application relates to the field of information interaction, in particular to a method and a device for information interaction with a working robot.
Background
Image recognition refers to a technique of using a computer to process, analyze and understand an image in order to recognize targets and objects of various different patterns.
Scene recognition is a technology for realizing human visual function by using a computer, and the research goal of the technology is to enable the computer to process images or videos and automatically recognize and understand scene information in the images and videos.
Although robots can at present recognize objects in a scene, the user usually cannot participate in the recognition process, or the robot simply gives a recognition result and lacks interactivity with the user.
In daily life, with the continuous development of science and technology, various working robots are involved in many aspects of life. For example, an intelligent companion robot greatly enriches a user's leisure through abundant audio and video material, and a sweeping robot can greatly save the user's physical labor; working robots thus provide great convenience for users' lives. At present, a sweeping robot can build a map and provide it to a user side, and based on the map the user side can partition the map, set a virtual wall, clean at a fixed point, and so on. However, because the information contained in the currently built map is limited and only relates to the contour lines of some obstacles and wall surfaces, the user can only learn the rough outline of the working environment from those contour lines and cannot clearly and accurately learn its details, so the user cannot accurately set a virtual wall, clean at a fixed point, subdivide areas, and so on.
Disclosure of Invention
The application provides a method and a device for information interaction with a working robot, which aim to solve the problem that the working robot in the prior art cannot provide clear and accurate details of a working environment for a user.
The application provides a method for information interaction with a working robot, which comprises the following steps:
a user side acquires a working environment map of a working robot;
receiving operation information of a user on a certain position or a certain identification result of the working environment map;
responding to the operation information of a certain position or a certain identification result of the working environment map, and acquiring an image of the working environment corresponding to the certain position or the certain identification result of the working environment map;
and displaying the image of the working environment corresponding to the position or the identification result.
Optionally, the working environment map is generated as follows: the working robot collects images of its working environment;
identifying the object in the image and generating a corresponding identification result;
and identifying the recognition result at the corresponding position of the work environment map, namely the position of the recognition result in the work environment map corresponds to the position of the object corresponding to the recognition result in the work environment of the work robot.
Optionally, the recognition result in the work environment map, one or more images of the work environment used for obtaining the recognition result, and the position in the work environment map and the one or more images of the work environment corresponding to the position in the work environment map are correspondingly stored in the work robot or a cloud server connected to the work robot.
Optionally, the image showing the working environment corresponding to the position or the recognition result includes one or more images showing the working environment corresponding to the position or the recognition result.
Optionally, the displaying, in response to operation information of a certain recognition result of the work environment map, an image of a work environment corresponding to the recognition result, further includes:
and receiving a judgment result for judging whether the identification is correct or not according to the displayed image of the working environment and the corresponding identification result.
Optionally, the recognition result that has not been operated on the work environment map is displayed in a display mode different from the display mode of the recognition result that has been operated.
Optionally, the display mode includes: highlighting, thickening and adding identification on the recognition result.
Optionally, the method includes:
and when the user side is not positioned in the interface of the current working environment currently, responding to a new identification result generated by the working robot in the working environment map, generating information for reminding the user whether to check, if the user clicks to check, jumping to the interface of the current working environment map, acquiring the image of the working environment corresponding to the identification result, and displaying the acquired image of the working environment.
Optionally, the receiving a judgment result that the user judges whether the identification is correct according to the displayed image of the working environment and the corresponding identification result includes:
generating options for the displayed images of each working environment for the user to select;
and if the image has the object corresponding to the recognition result of the user trigger operation, selecting a corresponding option.
Optionally, if the option selected by the user indicates that the recognition result is wrong, pushing a possible recognition result for the user to select; and if the user selects one of the recognition results, replacing the recognition result judged to be wrong on the original working environment map with the recognition result.
Optionally, the possible recognition result is generated according to some objects that are usually present in the working environment where the working robot is located or according to other possible recognition results generated in the recognition process.
Optionally, the displaying, in response to operation information on a certain position of the work environment map, an image of a work environment corresponding to the position includes:
responding to the operation information of a certain position in the working environment map, and acquiring an image of a working environment acquired when the working robot works at the position;
and displaying the acquired image of the working environment.
Optionally, the method includes:
and displaying the acquired image of the working environment in a slide show mode.
Optionally, the displaying, in response to operation information on a certain position of the work environment map, an image of a work environment corresponding to the position includes:
determining area information of a certain position where a user triggers operation, wherein the certain position is located in a map;
acquiring an image of a working environment acquired when the working robot works in a corresponding area according to the area information;
and displaying the acquired image of the working environment.
The present application also provides a device for information interaction with a work robot, comprising:
the acquisition unit is used for acquiring a working environment map of the working robot;
the receiving unit is used for receiving operation information of a certain position or a certain identification result of the working environment map from a user;
the operation unit is used for responding to operation information of a certain position or a certain identification result of the working environment map and acquiring an image of the working environment corresponding to the certain position or the certain identification result of the working environment map;
and the display unit is used for displaying the image of the working environment corresponding to the position or the identification result.
Compared with the prior art, the method and device for information interaction with a working robot provided by the application acquire the working environment map of the working robot through the user side, receive operation information on a certain position or a certain recognition result of the working environment map, respond to that operation information by acquiring an image of the working environment corresponding to that position or recognition result, and display that image. The user can clearly, accurately and intuitively check the details of the working environment of the working robot by viewing the displayed image of the corresponding working environment.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments described in the present application, and other drawings can be obtained by those skilled in the art according to the drawings.
FIG. 1 is a flow chart of one embodiment of a method for information interaction with a work robot of the present application;
FIG. 2 is a schematic diagram of a work environment map generated by a work robot according to an embodiment of a method for information interaction with a work robot of the present application;
fig. 3 is a schematic diagram of an embodiment of an apparatus for information interaction with a work robot according to the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. However, the application can be implemented in many other forms different from those described herein, and those skilled in the art can make similar generalizations without departing from the spirit and scope of the present application; the application is therefore not limited to the specific embodiments disclosed below.
The application provides a method and a device for information interaction with a working robot, which are provided by the following specific embodiments:
Fig. 1 is a flow chart of an embodiment of a method for information interaction with a working robot according to the present application. The method comprises the following steps:
step S101: the user side obtains a working environment map of the working robot.
In daily life, with the development of computer technology, working robots have entered many aspects of people's lives and bring great convenience to people's work and daily life. Working robots that are common in family life include sweeping robots, intelligent companion robots and the like.
The method for information interaction with a working robot provided in this embodiment mainly depends on the working environment map generated by the working robot. Therefore, the user side first needs to acquire the working environment map of the working robot.
It should be noted that, in this embodiment, the method is applied to a sweeping robot commonly used in family life in order to illustrate the step of obtaining the working environment map of the working robot. Of course, in other embodiments, the method provided by the application can also be used with other working robots.
In the prior art, the working environment map is established from data acquired by the robot's sensors, which is not described again here; on that basis, the working environment map in this text additionally includes recognition results for some objects in the working environment. The specific steps are as follows:
step S101-1: the working robot collects images of its working environment.
Taking the sweeping robot as an example, the step of acquiring the image of the working environment by the working robot will be described in detail. The sweeping robot can acquire images in the home environment of a user in the working process of the home environment of the user, the images contain objects (such as shoes, garbage cans, dining tables, power lines, socks and the like) in the home environment of the user, and generally, the sweeping robot acquires the images of the working environment of the sweeping robot through an image sensor (such as a camera) of the sweeping robot.
Objects in the working environment are thus captured through the working robot's own image sensor. For example, during work the sweeping robot collects, through its own camera, images containing the various objects in the user's home environment.
In addition, the working robot can measure the distance between an object and itself according to the calibration of its image sensor (camera), so as to determine the position of the object in the working environment map.
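As an illustrative sketch only (the patent does not prescribe a particular ranging method, and the focal length and object size below are hypothetical calibration values), a monocular pinhole-camera model can estimate the distance to an object from its apparent size in the image:

```python
# Illustrative sketch: similar-triangles range estimate with a calibrated
# monocular camera. All numeric values are hypothetical examples.

def estimate_distance_m(focal_length_px: float,
                        real_height_m: float,
                        pixel_height_px: float) -> float:
    """distance = focal_length * real_height / apparent_pixel_height"""
    if pixel_height_px <= 0:
        raise ValueError("pixel height must be positive")
    return focal_length_px * real_height_m / pixel_height_px

# Example: an object ~0.30 m tall appears 120 px tall with a 600 px focal length.
print(estimate_distance_m(600.0, 0.30, 120.0))  # -> 1.5 (metres)
```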
Step S101-2: and identifying the object in the image and generating a corresponding identification result.
After the step S101-1, the working robot collects the image of the working environment, and may store the image in its local storage, or may store the image in a cloud server connected to the working robot. And then, identifying the object in the image through an identification algorithm built in the working robot or an identification algorithm built in a cloud server connected with the working robot and generating a corresponding identification result.
Here, still taking the example of the sweeping robot, after the sweeping robot collects the image of the working environment of the sweeping robot, the image may be stored in a local storage of the sweeping robot, and then the object in the image is identified by an identification algorithm built in the sweeping robot and a corresponding identification result is generated.
In addition, the sweeping robot can also upload images containing objects in the family environment of the user to a cloud server connected with the sweeping robot after acquiring the images, and then the cloud server identifies the objects in the images through a built-in identification algorithm and generates corresponding identification results.
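As a minimal sketch of step S101-2 under stated assumptions (the data structures and the placeholder detector below are illustrative; the patent only requires that objects in the collected images are recognized and recognition results generated, whether on the robot or on its connected cloud server):

```python
# Minimal sketch of step S101-2. "run_detector" is a hypothetical placeholder
# for the built-in recognition algorithm on the robot or its cloud server.

from dataclasses import dataclass
from typing import List

@dataclass
class RecognitionResult:
    label: str          # e.g. "trash can", "shoes", "power cord"
    confidence: float   # detector score in [0, 1]
    image_id: str       # the collected image the result was derived from

def run_detector(image_id: str) -> List[RecognitionResult]:
    # Placeholder; a real system would run an image-recognition model here.
    return [RecognitionResult(label="trash can", confidence=0.91, image_id=image_id)]

def recognize_images(image_ids: List[str]) -> List[RecognitionResult]:
    """Recognize the objects in each collected image of the working environment."""
    results: List[RecognitionResult] = []
    for image_id in image_ids:
        results.extend(run_detector(image_id))
    return results

print(recognize_images(["image_11", "image_12"]))
```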
Step S101-3: and identifying a recognition result at a corresponding position of the working environment map, namely, the position of the recognition result in the working environment map corresponds to the position of the object corresponding to the recognition result in the working environment of the working robot.
In practical application, a working robot typically builds a corresponding working environment map, inside itself or on a cloud server connected to it, using its various sensors during work. A conventional working environment map only contains the approximate outline of the working environment the robot passes through during work; it cannot indicate in detail what objects exist in the working environment, nor can images of the working environment corresponding to different areas be viewed by selecting those areas, so details of the robot's working environment cannot be checked. In order to let the user clearly and intuitively check the details of the working environment through the map generated by the working robot, after steps S101-1 and S101-2, the working robot or the cloud server connected to it uses the collected images of the working environment and the recognition results to mark each recognition result at the corresponding position of the working environment map, i.e. the position of the recognition result in the map corresponds to the position, in the robot's working environment, of the object to which the recognition result refers, thereby generating a working environment map containing the recognition results of objects. After a recognition result is generated, it is highlighted, thickened, and given an added marker; after the user clicks and views it, the highlighting, thickening and marker are removed, so as to distinguish recognition results that the user has not yet clicked and viewed.
In addition, in order to conveniently inquire about images of related working environments later, the working robot correspondingly stores the images of the working environments and the identification results thereof in the working robot or a cloud server connected with the working robot in the process of generating a working environment map containing the identification results of the objects, wherein the stored images in the working environments are one or more images used in the process of obtaining the identification results.
Here, still taking the sweeping robot commonly used in a user's home environment as an example, as shown in fig. 2, after the sweeping robot collects images of its surroundings through its own camera while working in the user's home environment and recognizes the corresponding objects in those images through step S101-2 above, the recognition results are marked at the corresponding positions of the working environment map generated inside the robot or on the cloud server connected to it, so as to generate a working environment map containing the recognition results of objects. For example, the sweeping robot recognizes objects such as shoes, a trash can and a power cord from the collected images during work, and the corresponding recognition results are marked, as shown in fig. 2, at 2-102 (marker for the shoes), 2-103 (marker for the trash can), 2-110 (marker for the power cord) and so on at the corresponding positions of the established map. At the same time, for the images of the working environment used to obtain a recognition result, for example when the sweeping robot or its connected cloud server obtains the trash-can recognition result 2-103 using the collected images 11, 12 and 13, the robot or the cloud server stores the recognition result and the corresponding images in a form similar to {trash can marker, [image 11, image 12, image 13]}. In addition, in this embodiment the recognition result of the corresponding object is marked on the working environment map in the form of an image icon; the recognition result may also be marked in the form of text, sound, or the like, which is not described one by one here.
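A minimal sketch of the storage form just described, e.g. {trash can marker, [image 11, image 12, image 13]}; the dictionary layout and names below are assumptions, since the patent only requires that each recognition result is stored together with the images used to obtain it:

```python
# Sketch of storing a recognition result together with the images used to
# obtain it. The dictionary layout is an assumption for illustration.

from collections import defaultdict
from typing import Dict, List

result_to_images: Dict[str, List[str]] = defaultdict(list)

def store_recognition(result_id: str, image_ids: List[str]) -> None:
    """Associate a recognition marker (e.g. the 2-103 trash-can marker)
    with the working-environment images it was recognized from."""
    result_to_images[result_id].extend(image_ids)

store_recognition("trash_can_2-103", ["image_11", "image_12", "image_13"])
print(result_to_images["trash_can_2-103"])  # -> ['image_11', 'image_12', 'image_13']
```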
In addition, in the process of generating the working environment map containing the recognition results of objects, the working robot also stores, in itself or in the cloud server connected to it, the images of the working environment corresponding to positions in the working environment map. Specifically, there are two ways. In the first way, the working robot converts the position in the actual environment at which an image of the working environment was collected into the relative position in the generated working environment map, and establishes a correspondence between that relative position and the image; a clustering analysis algorithm may also be used to associate images collected at positions close to that position with the same relative position in the map.
For example, while the sweeping robot collects images of its surroundings through its own camera during work in the user's home environment, it can synchronously record its position in the home environment at the time each image is collected. Then, when the working environment map is generated in the sweeping robot or in the cloud server connected to it, the actual position information corresponding to each collected image of the working environment can be converted into relative position information in the working environment map, and a correspondence is established between the relative position and the images collected at the corresponding actual position. In practice, because one or more images of the working environment are collected at a given position of the actual working environment or at positions adjacent to it, a clustering analysis algorithm can be used during the conversion between actual positions and relative positions to associate the images collected at or near that position with a single relative position, so that the images of the actual working environment corresponding to a position can be viewed simply by clicking that position in the working environment map. For example, suppose the sweeping robot collects one or more images (say image X1, image X2 and image X3) at the actual coordinate position (X, Y) of the user's home environment (X ≥ 0 and Y ≥ 0, purely schematic), and the information of those images records the actual coordinate position (X, Y) of the robot when they were collected; the robot may also collect one or more further images (say image Y1 and image Y2) at positions (X1, Y1) … (Xn, Yn) (n ≥ 1) close to (X, Y). While the working environment map is generated in the sweeping robot or its connected cloud server, a correspondence is synchronously established between the images collected at an actual position and the relative position of that actual position in the map: for instance, the relative position (P, Q) in the map corresponding to the actual coordinate position (X, Y) is associated with image X1, image X2 and image X3, and, according to the clustering analysis algorithm, the images collected at the nearby positions (X1, Y1) … (Xn, Yn) are associated with the same relative position (P, Q), finally establishing a correspondence of the form {(P, Q), [image X1, image X2, image X3, image Y1, image Y2]}. At the same time, the sweeping robot or the cloud server connected to it stores the established correspondence.
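A sketch of this first correspondence mode under stated assumptions: actual capture coordinates are converted to relative map coordinates, and images captured at nearby positions are merged under the same map coordinate. The linear conversion and the distance threshold below are hypothetical; the patent leaves the concrete transform and clustering algorithm open:

```python
# Sketch: convert actual positions to relative map coordinates and merge
# images collected at nearby positions under one map entry. MAP_SCALE and
# MERGE_RADIUS_M are assumed values, not taken from the patent.

from math import hypot
from typing import Dict, List, Tuple

MAP_SCALE = 0.05      # metres of real space per map cell (assumed)
MERGE_RADIUS_M = 0.5  # captures within 0.5 m are treated as the same spot (assumed)

position_index: Dict[Tuple[int, int], List[str]] = {}

def to_map_coords(x_m: float, y_m: float) -> Tuple[int, int]:
    """Convert an actual position (X, Y) to a relative map position (P, Q)."""
    return round(x_m / MAP_SCALE), round(y_m / MAP_SCALE)

def index_image(x_m: float, y_m: float, image_id: str) -> None:
    """File the image under an existing nearby map coordinate if one exists."""
    for (p, q), images in position_index.items():
        if hypot(p * MAP_SCALE - x_m, q * MAP_SCALE - y_m) <= MERGE_RADIUS_M:
            images.append(image_id)
            return
    position_index[to_map_coords(x_m, y_m)] = [image_id]

index_image(2.00, 3.00, "image_X1")
index_image(2.10, 3.05, "image_X2")  # close to the first capture -> same entry
print(position_index)                # -> {(40, 60): ['image_X1', 'image_X2']}
```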
Here, the description of establishing the correspondence between the image of the actual position of the working robot and the corresponding position in the generated working environment map is only schematic, and in practice, a more specific method may be designed according to an actual scene and requirements, and details are not described here.
In addition, there is also a mode that the working robot or the cloud server connected with the working robot can automatically partition the working environment map while generating the working environment map, and simultaneously, the corresponding relation between each partition and the working environment image corresponding to the partition is stored. For example, the working robot or the cloud server connected to the working robot divides the working environment map corresponding to the working robot into m partitions, and the corresponding relations are stored in a form similar to { partition 1, [ image 11 … image 1n ] }, …, { partition m, [ image m1 … image mn ] } (where m ≧ 1, and n ≧ 1).
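A minimal sketch of this second correspondence mode, i.e. {partition 1, [image 11 … image 1n]}, …, {partition m, [image m1 … image mn]}; the partition numbering and the lookup helper are assumptions for illustration:

```python
# Sketch: each automatically generated partition keeps the images collected
# while the robot worked inside it. The concrete layout is an assumption.

from typing import Dict, List

partition_images: Dict[int, List[str]] = {
    1: ["image_11", "image_12"],
    2: ["image_21"],
}

def images_for_partition(partition_id: int) -> List[str]:
    """Return the working-environment images stored for one partition."""
    return partition_images.get(partition_id, [])

print(images_for_partition(1))  # -> ['image_11', 'image_12']
```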
Here, only two ways of establishing the correspondence between images of the working environment collected by the working robot at positions of the actual working environment and the relative positions in the generated working environment map have been described; in practice, which way to use can be selected in a corresponding settings menu. Moreover, as actual needs or technology evolve, other ways besides these two may also be adopted, which are not described here again.
The working environment map generated in the step S101-3 is more detailed and complete than a common working environment map in the prior art, and not only covers the outline of the working environment of the working robot, but also includes the recognition result of the object recognized by the working robot in the working process.
After the working robot or the cloud server connected with the working robot generates the working environment map, the generated working environment map is stored in the working robot or the cloud server connected with the working robot, in order to obtain the working environment map of the working robot, the working environment map can be obtained through the working robot, and the working environment map can also be obtained by requesting the working robot or the cloud server connected with the working robot through the mobile terminal device. Specifically, the working environment map can be acquired by requesting a working robot or a cloud server connected to the working robot through an APP (application program) installed on the mobile terminal device, where the mobile terminal device may be a mobile terminal device such as a mobile phone, a Pad, or a mobile PC.
In this embodiment, for example, by installing an APP corresponding to the working robot on a mobile phone, a working environment map as shown in fig. 2 generated by the working robot in a working process is obtained by requesting the working robot or a cloud server connected to the working robot through the APP.
Step S102: and receiving operation information of a certain position or a certain identification result of the working environment map by a user.
After the working environment map of the working robot is acquired through the step S101, the operation information of the user on a certain position or a certain recognition result of the working environment map is received.
This includes: displaying the current working environment map and receiving operation information of the user on a certain position or recognition result of the working environment map, where the operation may be a touch or click on a touchscreen, a click through an external device, or the like.
The current working environment map may be displayed on a display device of the working robot, or on a mobile terminal device after it requests the map from the working robot or the cloud server connected to it. Here, still taking the sweeping robot as an example, the user installs the APP corresponding to the sweeping robot on a mobile phone; after the working environment map of the sweeping robot is obtained through the APP, the map is displayed in the APP, and operation information of the user on a certain area or recognition result of the map is received.
Step S103: and responding to the operation information of a certain position or a certain identification result of the working environment map, and acquiring an image of the working environment corresponding to the certain position or the certain identification result of the working environment map.
After receiving the operation information of the user on the certain position or the identification result of the working environment map through the step S102, in the working environment map of the working robot, in response to the operation information on the certain position or the identification result of the working environment map, an image of the working environment corresponding to the certain position or the identification result of the working environment map is acquired.
Step S104: and displaying the image of the working environment corresponding to the position or the identification result.
In step S103, in response to the operation information on the certain position or the identification result of the work environment map, after the image of the work environment corresponding to the certain position or the identification result of the work environment map is acquired, the work robot or the mobile terminal device corresponding to the work robot may display the image of the work environment corresponding to the position or the identification result. The image for displaying the working environment corresponding to the position or the recognition result comprises: and displaying one or more images of the working environment corresponding to the position or the identification result.
In addition, there are several ways of responding to operation information on a certain recognition result of the working environment map, acquiring the images of the working environment corresponding to that recognition result, and displaying them. One way is to respond to operation information on a recognition result marked on the working environment map by acquiring the images of the working environment corresponding to that recognition result and displaying them. Here, the sweeping robot is still used to illustrate this way: after the user obtains and displays the working environment map shown in fig. 2 through the corresponding APP on a mobile phone and views the map in the APP, when the user clicks and selects a certain recognition result, the APP requests the sweeping robot or the cloud server connected to it, according to that recognition result, for the images of the working environment corresponding to it, and displays the obtained images. For example, when the user clicks the trash-can marker 2-103 in the working environment map shown in fig. 2, the APP responds to the click by requesting, according to the 2-103 marker, the images of the working environment used to obtain that marker from the sweeping robot or its connected cloud server; after receiving the request, the sweeping robot or the cloud server searches for the images used to obtain the 2-103 marker according to the marker information in the request and returns them for the APP to display. For example, if the images of the working environment used to obtain the 2-103 marker are found through the correspondence {trash can 2-103, [image 21, image 22, image 23]}, the image list [image 21, image 22, image 23] is returned and displayed to the user by the APP.
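A minimal sketch of this look-up under stated assumptions (it reuses the hypothetical result_to_images structure sketched earlier; none of the names below come from the patent itself):

```python
# Sketch: when the user taps a recognition marker (e.g. 2-103), the APP sends
# the marker identifier and the robot or its cloud server returns the stored
# image list for display. The names below are assumptions for illustration.

from typing import Dict, List

result_to_images: Dict[str, List[str]] = {
    "trash_can_2-103": ["image_21", "image_22", "image_23"],
}

def images_for_recognition(result_id: str) -> List[str]:
    """Server-side handler: map a recognition marker to its source images."""
    return result_to_images.get(result_id, [])

# Client side: respond to the tap by requesting and "displaying" the images.
tapped = "trash_can_2-103"
for image_id in images_for_recognition(tapped):
    print("display", image_id)
```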
The other way applies when the user side is not currently in the interface of the current working environment map: in response to the working robot generating a new recognition result in the working environment map, information is generated to remind the user whether to view it; if the user taps to view, the interface jumps to the current working environment map, the images of the working environment corresponding to the recognition result are acquired, and the acquired images are displayed. Here, still taking the sweeping robot as an example, when the sweeping robot recognizes a collected image and generates a new recognition result on the working environment map during work, a notification message is displayed on the robot's display device, or the corresponding APP on the mobile terminal generates a notification message in the terminal's notification bar to notify the user. When the user views it, the APP is opened, the interface of the current working environment map is displayed, and the images of the working environment corresponding to the recognition result are acquired and displayed. For example, when the sweeping robot obtains the recognition result of a power cord during work, the corresponding APP on the user's mobile phone can display, in the phone's notification bar, a message similar to "A new object, a power cord, has been recognized; tap to view". After the user taps it, the APP opens and displays the interface of the current working environment map; at the same time, the APP requests the images of the working environment used to recognize the power cord from the sweeping robot or its connected cloud server, and displays the obtained images for the user to view.
Of course, a recognition result on the working environment map that has not yet been clicked is displayed in a manner different from one that has been clicked. For example, when the sweeping robot generates a new recognition result on the map during work, before the user clicks and views it, the new recognition result is displayed highlighted or thickened, or with an added marker. For example, if the sweeping robot successively recognizes a power cord, a trash can and a dining table during work while the user-side APP is not open and the user has not yet viewed the corresponding notification messages, then the next time the user opens the APP to view the working environment map built by the sweeping robot, the image markers of the newly generated recognition results carry a green dot in the upper right corner, indicating recognition results that are newly generated and have not yet been selected for viewing, reminding the user to click and select each corresponding recognition result and thus obtain the corresponding images of the working environment for viewing. The above describes only one form of distinguishing the states of recognition results in the working environment map generated by the working robot; other forms of expression may also be used, which are not described one by one here.
In addition, the above two modes of displaying the image of the work environment corresponding to the recognition result in response to the operation information of the recognition result of the work environment map are described. The acquiring of the image of the working environment corresponding to the identification result specifically refers to requesting the working robot or a cloud server connected to the working robot to acquire the image of the working environment corresponding to the identification result according to the identification result. For the specific obtaining manner, detailed descriptions are given in the above two manners, and details are not repeated here. Of course, there may be other ways than the above two ways, which are not described one by one here.
In addition, while displaying the images of the working environment corresponding to a recognition result in the working environment map of the working robot, a judgment result, given by the user according to the displayed images and the corresponding recognition result as to whether the recognition is correct, can further be received. Specifically, this includes: generating options for each displayed image of the working environment for the user to select, and, if an object corresponding to the recognition result operated on by the user exists in the image, selecting the corresponding option.
Still taking the sweeping robot as an example, for example, when a user clicks the recognition result of shoes shown in 2-102 in the working environment map shown in fig. 2, the APP on the mobile terminal device requests the sweeping robot or the cloud server connected to the sweeping robot to acquire the image of the working environment corresponding to the 2-102 identifier according to the 2-102 identifier, the cloud server connected to the sweeping robot or the sweeping robot receives the request and returns the image of the corresponding working environment to the APP, then, the APP generates 2 options of "correct recognition" and "wrong recognition" for each acquired image, if there are shoes in the image, the user selects "correct recognition", otherwise, if there are no shoes in the image, the user selects "wrong recognition".
Another implementation manner may be that the acquired images are provided for the user to select in units of groups, each group includes a plurality of images, each group generates an option for the user to select, if an object corresponding to the recognition result of the object exists in each image of a group, a corresponding option is selected, and if an object corresponding to the recognition result of the object does not exist in any image of a group, another option is selected.
For example, the user clicks the recognition result of shoes shown by the 2-102 marker in the working environment map shown in fig. 2; the APP on the mobile terminal device requests, according to the 2-102 marker, the images of the working environment corresponding to it from the sweeping robot or its connected cloud server, which receives the request and returns the corresponding images to the APP. The APP then presents the images group by group, each group containing several images, and generates the two options "correct recognition" and "wrong recognition" for each group; if shoes exist in every image of a group, the user selects "correct recognition", and otherwise, if shoes do not appear in the images of the group, the user selects "wrong recognition".
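A minimal sketch of the two option-generation modes just described (one option pair per displayed image, or one per group of images); the data structures below are assumptions, only the option semantics come from the text:

```python
# Sketch of the two option-generation modes: per image and per group.

from typing import Dict, List

def options_per_image(image_ids: List[str]) -> Dict[str, List[str]]:
    """Mode 1: every displayed image gets its own pair of options."""
    return {image_id: ["correct recognition", "wrong recognition"]
            for image_id in image_ids}

def options_per_group(groups: List[List[str]]) -> List[List[str]]:
    """Mode 2: images are shown group by group, one option pair per group."""
    return [["correct recognition", "wrong recognition"] for _ in groups]

images = ["image_31", "image_32", "image_33", "image_34"]
print(options_per_image(images))
print(options_per_group([images[:2], images[2:]]))
```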
Further, the forms of the options generated for the images are described. They may take the "correct recognition" and "wrong recognition" forms exemplified above, or forms such as "YES" and "NO", or a smiling-face and a crying-face expression, i.e. any pair of expressions able to convey the opposite meanings of correct and incorrect recognition, which are not described further here.
Options are generated for each displayed image of the working environment for the user to select, and if an object corresponding to the recognition result operated on by the user exists in the image, the corresponding option is selected. When the user selects "correct recognition", the images used in recognizing the object and the corresponding recognition result can serve as data and data labels, so that a data set from the actual working environment of the working robot is collected for training the recognition algorithm in the working robot or in the cloud server connected to it. If the option selected by the user indicates that the recognition result is wrong, possible recognition results are pushed for the user to select from, and if the user selects one of them, the recognition result judged to be wrong on the original working environment map is replaced with the selected one. Here, still taking the sweeping robot as an example, the user views the working environment map shown in fig. 2 through the corresponding APP on the mobile terminal; after the user clicks the shoe recognition result shown at 2-102, options are generated for the images in the image set [image 31, image 32, image 33, image 34] used to obtain the 2-102 marker, for the user to select. If, by looking at one or more of image 31, image 32, image 33 and image 34, the user finds that what exists in the images is a trash can rather than shoes, the user selects the "wrong recognition" option generated for those images. When the APP receives the judgment of "wrong recognition", it pushes possibly correct recognition results for the user to select from, for example generating the options "trash can", "power cord", "dining table" and "shoes". If one of the options is the recognition result corresponding to the object actually present in the images, the user selects that correct recognition result; the APP then uploads the replacement to the sweeping robot or its connected cloud server, and on the original working environment map the recognition result judged to be wrong is replaced with the new correct one. For example, since the object existing in one or more of image 31, image 32, image 33 and image 34 is a trash can, and the original shoe recognition result is therefore wrong, the user selects "wrong recognition" and then chooses the "trash can" option from the new recognition results; the APP then uploads the replacement to the sweeping robot or its connected cloud server and, on the original working environment map, replaces the shoe recognition result at 2-102 with the trash-can recognition result.
Regarding the new recognition results generated for the user to select from: the possible recognition results are generated according to objects that usually appear in the working environment where the working robot is located, or according to other candidate recognition results produced during the recognition process. When the images [image 31, image 32, image 33, image 34] of the working environment corresponding to the 2-102 recognition result are judged and a recognition error is found, the generated options "trash can", "power cord", "dining table" and "shoes", i.e. the options offered for the user to choose from, are either objects that usually appear in the user's home environment where the sweeping robot is located, such as "dining table" and "shoes", or other candidate recognition results produced when the recognition algorithm processed 2-102, such as "trash can" and "power cord". In other words, after receiving the judgment of a recognition error, the APP corresponding to the sweeping robot does not generate options for the user at random; instead, from back-end statistics of how frequently objects appear in the images collected in the user's home environment during the robot's work, and from the record, kept for each recognition result, of the other candidate results that appeared during its recognition, it purposefully selects the higher-frequency objects and those other candidate results as options, so as to improve the hit rate.
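A sketch of how such candidate options might be assembled, combining frequently appearing objects with the recognizer's alternative results; the frequency statistics, alternative lists and selection rule below are hypothetical:

```python
# Sketch: build candidate labels from (a) objects that appear frequently in the
# home environment and (b) other candidate results produced during recognition.

from typing import Dict, List

def candidate_labels(frequency_stats: Dict[str, int],
                     alternatives: List[str],
                     top_n: int = 3) -> List[str]:
    """Return the top-N frequent objects plus the recognizer's alternatives."""
    frequent = sorted(frequency_stats, key=frequency_stats.get, reverse=True)[:top_n]
    # Preserve order while removing duplicates between the two sources.
    return list(dict.fromkeys(frequent + alternatives))

stats = {"dining table": 42, "shoes": 30, "sofa": 12}  # hypothetical statistics
alts = ["trash can", "power cord"]                     # other candidates for 2-102
print(candidate_labels(stats, alts))                   # options pushed to the user
```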
It was described above that, while displaying the images of the working environment corresponding to a recognition result in the working environment map of the working robot, a judgment of whether the recognition is correct can be made according to the displayed images and the corresponding recognition result. After this judgment operation, if the judgment result shows that the recognition is correct, the images used when recognizing the object and the corresponding recognition result can serve as data and data labels; the data labels are transmitted back to the working robot's local storage holding the original image data, or to the cloud server connected to the working robot, where the corresponding data mapping is performed, and the image data and the corresponding data labels are collected into a data set from the actual working environment of the working robot, to be used for training the recognition algorithm in the working robot or its connected cloud server. In this way, whether the scene recognition results of the built-in recognition algorithm are correct is collected, which makes it convenient to compute the true recognition accuracy. At the same time, the user's judgment of the images used for each object's recognition result is recorded as the data label of the corresponding image, i.e. the image data is annotated, avoiding tedious manual labeling; transmitting the labels back to the local storage of the working robot holding the original images, or to its connected cloud server, amounts to collecting a data set for the recognition algorithm, which can be used for its training, testing and validation.
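A minimal sketch of turning the user's judgment into a labeled sample (the list-of-tuples dataset format is an assumption; the patent only requires that confirmed results are transmitted back and collected into a data set for training, testing and validation):

```python
# Sketch: a confirmed recognition becomes an (image, label) pair in a data set
# kept on the robot or its cloud server. The format is an assumption.

from typing import List, Tuple

dataset: List[Tuple[str, str]] = []   # (image_id, label) pairs

def record_judgment(image_id: str, label: str, user_says_correct: bool) -> None:
    """Store a user-confirmed recognition as a training sample."""
    if user_says_correct:
        dataset.append((image_id, label))

record_judgment("image_31", "trash can", user_says_correct=True)
record_judgment("image_32", "shoes", user_says_correct=False)  # not stored
print(dataset)  # -> [('image_31', 'trash can')]
```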
In addition, in the work environment map of the work robot, in response to operation information on a certain position of the work environment map, an image of the work environment corresponding to the position is displayed. One way is that: and responding to the operation information of a certain position in the working environment map, acquiring an image of the working environment acquired when the working robot works at the position, and displaying the acquired image of the working environment.
Here, still taking the sweeping robot as an example, when the user clicks a certain position, in response to the click the APP obtains the relative coordinate position of that position in the working environment map, for example the coordinate position (P, Q) described above. The APP then requests, according to the coordinate position (P, Q), the images of the working environment corresponding to it from the sweeping robot or its connected cloud server, which receives the request and returns the corresponding images to the APP. For example, if it is found through the correspondence {(P, Q), [image X1, image X2, image X3, image Y1, image Y2]} that the images of the working environment corresponding to the coordinate position (P, Q) are image X1, image X2, image X3, image Y1 and image Y2, the image list [image X1, image X2, image X3, image Y1, image Y2] is returned to the requesting APP, which displays the obtained images in the corresponding interface.
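A minimal sketch of this position-based look-up, reusing the hypothetical position index sketched earlier; the nearest-key search is one possible implementation, not something mandated by the patent:

```python
# Sketch: given the relative map coordinate of the tap, return the images
# stored for the nearest indexed coordinate. Names are assumptions.

from math import hypot
from typing import Dict, List, Tuple

position_index: Dict[Tuple[int, int], List[str]] = {
    (40, 60): ["image_X1", "image_X2", "image_X3", "image_Y1", "image_Y2"],
}

def images_near(p: int, q: int) -> List[str]:
    """Return the image list stored for the indexed coordinate closest to (p, q)."""
    if not position_index:
        return []
    nearest = min(position_index, key=lambda key: hypot(key[0] - p, key[1] - q))
    return position_index[nearest]

print(images_near(41, 59))  # a tap near (40, 60) -> its stored images
```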
Further, another way is: determining the area information of a certain position where a user triggers operation, which is located in a map, acquiring the image of the working environment collected when the working robot works in the corresponding area according to the area information, and displaying the acquired image of the working environment.
Here, still taking the sweeping robot as an example, after the user opens the working environment map of the sweeping robot through a mobile terminal such as a mobile phone APP, the user may click any position in an area shown in the map. Then, in response to the click on that position, the APP determines, from the area information of the clicked position in the working environment map, that the partition corresponding to the position is, say, partition 1, and sends a request to the sweeping robot or its connected cloud server, according to the information of partition 1, for the images of the working environment corresponding to partition 1. After receiving the request, the sweeping robot or the cloud server searches for the images of the working environment corresponding to partition 1 according to the partition information in the request and returns them to the APP. After receiving the images of the working environment corresponding to the position, the APP displays them in the interface of the working environment map for the user to view.
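A minimal sketch of this area-based response: determine which partition the tapped position falls into, then fetch that partition's images. Axis-aligned rectangular partitions are an assumption; the patent only states that the map is automatically partitioned:

```python
# Sketch: resolve a tapped map position to its partition, then return the
# partition's stored images. Rectangular partitions are an assumption.

from typing import Dict, List, Optional, Tuple

# partition id -> (min_p, min_q, max_p, max_q) bounding box in map coordinates
partitions: Dict[int, Tuple[int, int, int, int]] = {
    1: (0, 0, 50, 50),
    2: (50, 0, 100, 50),
}
partition_images: Dict[int, List[str]] = {1: ["image_11", "image_12"], 2: ["image_21"]}

def partition_of(p: int, q: int) -> Optional[int]:
    """Return the partition whose bounding box contains the tapped position."""
    for pid, (p0, q0, p1, q1) in partitions.items():
        if p0 <= p < p1 and q0 <= q < q1:
            return pid
    return None

pid = partition_of(30, 20)            # a tap inside partition 1
print(partition_images.get(pid, []))  # images displayed in the map interface
```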
For both of the above response modes, because there may be many images of the working environment corresponding to a position, the acquired images of the working environment can be shown as a slideshow when they are displayed. For example, after the images of the working environment corresponding to a certain position are acquired, they are displayed in slideshow form in the working environment map interface shown by the APP, which is convenient for the user to browse. Of course, the user may also configure the usual slideshow settings, such as the image switching manner and switching interval, in the corresponding settings; since the related slideshow techniques are mature prior art, they are not described here again.
The embodiment of the method for information interaction with the working robot is described in detail above, and by the method for information interaction with the working robot, a user can check details of the working environment of the working robot conveniently, clearly and accurately check the details of the working environment, and then can accurately set a virtual wall, clean a fixed point and subdivide an area on a working environment map, so that the operation fineness and accuracy of the user are greatly improved.
In the above description, a method for information interaction with a work robot is provided, and correspondingly, an apparatus for information interaction with a work robot is also provided. Please refer to fig. 3, which is a schematic diagram of an embodiment of an apparatus for information interaction with a work robot according to the present application. Because the device embodiment is basically similar to the method embodiment, the description is simple, and the relevant points can be referred to partial description of the method embodiment. The device embodiments described below are merely illustrative.
An apparatus for information interaction with a work robot of the present embodiment includes:
an acquisition unit 301 configured to acquire a work environment map of the work robot;
a receiving unit 302, configured to receive operation information of a user on a certain position or a certain recognition result of the work environment map;
an operation unit 303, configured to obtain, in response to operation information on a certain position or a certain recognition result of the work environment map, an image of a work environment corresponding to the certain position or the certain recognition result of the work environment map;
and the display unit 304 is configured to display an image of the working environment corresponding to the position or the identification result.
The above embodiments of the method and apparatus for information interaction with a working robot provided by the present application are described in detail, and specific examples are applied herein to illustrate the principles and implementations of the present application, and the above descriptions of the embodiments are only used to help understand the method and core ideas of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, the specific embodiments and the application range may be changed. In view of the above, the description should not be taken as limiting the application. The protection scope of the present application shall be subject to the scope defined by the claims of the present application.

Claims (15)

1. A method for information interaction with a work robot, comprising:
a user side acquires a working environment map of a working robot;
receiving operation information of a user on a certain position or a certain identification result of the working environment map;
responding to the operation information of a certain position or a certain identification result of the working environment map, and acquiring an image of the working environment corresponding to the certain position or the certain identification result of the working environment map;
and displaying the image of the working environment corresponding to the position or the identification result.
2. Method for information interaction with a working robot according to claim 1, characterized in that:
the working robot collects images of the working environment;
identifying the object in the image and generating a corresponding identification result;
and identifying the recognition result at the corresponding position of the work environment map, namely the position of the recognition result in the work environment map corresponds to the position of the object corresponding to the recognition result in the work environment of the work robot.
3. The method for information interaction with a working robot according to claim 2, characterized in that:
the recognition result in the working environment map, the one or more working environment images used to obtain the recognition result, the position in the working environment map, and the one or more working environment images corresponding to that position are correspondingly stored in the working robot or in a cloud server connected with the working robot.
4. The method for information interaction with a working robot according to claim 1, wherein presenting the image of the working environment corresponding to the position or the recognition result comprises presenting one or more images of the working environment corresponding to the position or the recognition result.
5. The method for information interaction with a working robot according to claim 1, wherein presenting an image of the working environment corresponding to a recognition result, in response to operation information on that recognition result on the working environment map, further comprises:
receiving a judgment result, made by the user according to the displayed image of the working environment and the corresponding recognition result, as to whether the recognition is correct.
6. The method for information interaction with a working robot according to claim 5, wherein a recognition result on the working environment map that has not been operated on is displayed in a manner different from that of a recognition result that has been operated on.
7. The method for information interaction with a working robot according to claim 6, wherein the display manner comprises: highlighting the recognition result, displaying it in bold, or adding a mark to it.
8. The method for information interaction with a working robot according to claim 1, comprising:
when the user side is not currently in the interface of the current working environment map, generating, in response to a new recognition result generated by the working robot in the working environment map, information reminding the user whether to view it; if the user clicks to view, jumping to the interface of the current working environment map, acquiring the image of the working environment corresponding to the recognition result, and displaying the acquired image of the working environment.
9. The method for information interaction with a working robot according to claim 5, wherein receiving the judgment result, made by the user according to the displayed image of the working environment and the corresponding recognition result, as to whether the recognition is correct comprises:
generating an option for each displayed image of the working environment for the user to select;
and if the image contains the object corresponding to the recognition result on which the user triggered the operation, selecting the corresponding option.
10. The method for information interaction with a working robot according to claim 9, wherein if the option selected by the user indicates that the recognition result is wrong, possible recognition results are pushed for the user to select from; and if the user selects one of them, the recognition result judged to be wrong on the original working environment map is replaced with the selected recognition result.
11. The method for information interaction with a working robot according to claim 10, characterized in that the possible recognition results are generated from objects that commonly appear in the working environment in which the working robot is located, or from other candidate recognition results generated during the recognition process.
12. The method for information interaction with a working robot according to claim 1, wherein presenting the image of the working environment corresponding to a certain position of the working environment map, in response to the operation information on that position, comprises:
in response to the operation information on a certain position in the working environment map, acquiring an image of the working environment captured when the working robot was working at that position;
and displaying the acquired image of the working environment.
13. The method for information interaction with a working robot according to claim 12, wherein presenting the acquired image of the working environment comprises:
displaying the acquired image of the working environment in a slideshow manner.
14. The method for information interaction with a working robot according to claim 1, wherein presenting the image of the working environment corresponding to a certain position of the working environment map, in response to the operation information on that position, comprises:
determining area information of the area in the map in which the position operated on by the user is located;
acquiring, according to the area information, an image of the working environment captured when the working robot was working in the corresponding area;
and displaying the acquired image of the working environment.
15. An apparatus for information interaction with a work robot, comprising:
the acquisition unit is used for acquiring a working environment map of the working robot;
the receiving unit is used for receiving operation information of a certain position or a certain identification result of the working environment map from a user;
the operation unit is used for responding to operation information of a certain position or a certain identification result of the working environment map and acquiring an image of the working environment corresponding to the certain position or the certain identification result of the working environment map;
and the display unit is used for displaying the image of the working environment corresponding to the position or the identification result.
CN201910222204.2A 2019-03-22 2019-03-22 Method and device for information interaction with working robot Pending CN111414786A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910222204.2A CN111414786A (en) 2019-03-22 2019-03-22 Method and device for information interaction with working robot

Publications (1)

Publication Number Publication Date
CN111414786A true CN111414786A (en) 2020-07-14

Family

ID=71490777

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910222204.2A Pending CN111414786A (en) 2019-03-22 2019-03-22 Method and device for information interaction with working robot

Country Status (1)

Country Link
CN (1) CN111414786A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100324769A1 (en) * 2007-02-13 2010-12-23 Yutaka Takaoka Environment map generating method and mobile robot (as amended)
CN106441298A (en) * 2016-08-26 2017-02-22 陈明 Method for map data man-machine interaction with robot view image
CN107272454A (en) * 2017-06-19 2017-10-20 中国人民解放军国防科学技术大学 A kind of real time human-machine interaction method based on virtual reality

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113341752A (en) * 2021-06-25 2021-09-03 杭州萤石软件有限公司 Intelligent door lock and cleaning robot linkage method and intelligent home system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination