CN106292657B - Mobile robot and patrol path setting method thereof


Info

Publication number
CN106292657B
Authority
CN
China
Prior art keywords: user, command, patrol, mobile robot, sound source
Legal status: Active
Application number
CN201610596000.1A
Other languages
Chinese (zh)
Other versions
CN106292657A (en)
Inventor
陈本东
张芊
Current Assignee
Beijing Horizon Robotics Technology Research and Development Co Ltd
Original Assignee
Beijing Horizon Robotics Technology Research and Development Co Ltd
Application filed by Beijing Horizon Robotics Technology Research and Development Co Ltd filed Critical Beijing Horizon Robotics Technology Research and Development Co Ltd
Publication of CN106292657A publication Critical patent/CN106292657A/en
Application granted granted Critical
Publication of CN106292657B publication Critical patent/CN106292657B/en

Classifications

    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 - Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 - Control of position or course in two dimensions
    • G05D1/021 - Control of position or course in two dimensions specially adapted to land vehicles

Abstract

The invention relates to a mobile robot and a patrol path setting method thereof. A patrol path setting method for a mobile robot may include: entering a patrol path setting mode in response to a first command of a user; locking the user and tracking the user's movements; determining a movement path while tracking movement of the user; and storing the determined movement path as a patrol path. The method is simple and convenient to operate, requires no additional equipment, and has a wide application range.

Description

Mobile robot and patrol path setting method thereof
Technical Field
The present invention relates generally to the field of human-computer interaction technology, and more particularly, to a mobile robot and a method of setting a patrol route of the mobile robot.
Background
Intelligent robots draw on a cross-section of disciplines including, for example, artificial intelligence, wireless communication, semiconductors, computer science, mechanical engineering, and electrical engineering. With the continuous development of science and technology, many kinds of intelligent robots have been developed and manufactured to serve various fields of human production and life, for example, sweeping robots, nursing robots, transfer robots, and security patrol robots. These robots can move within a work area autonomously or along a set path to perform their designed functions.
In the related art, a movement path, also referred to as a patrol path, is generally set for a mobile robot by running an app on a portable device such as a mobile phone or a tablet computer. Taking a cell phone as an example, it may establish a communication connection with the mobile robot through, for example, Bluetooth, a wireless local area network, the internet, or a cellular network. By running the app, the mobile phone can display a work environment map stored on the mobile robot and can edit a patrol route for the mobile robot on that map. After editing of the patrol path is completed, the app may save the patrol path to a memory included in the mobile robot for execution by the mobile robot.
Disclosure of Invention
However, the conventional patrol path setting method described above has many drawbacks. First, it is only suitable for cases where the mobile robot already has a map of its working environment, because the app must edit the patrol path on the basis of that map. Second, it requires additional equipment such as a mobile phone or tablet computer, and the requirement for a large display screen, for example, increases the system cost. Furthermore, the app typically involves complicated operations, which are difficult for people unfamiliar with computers, such as the elderly.
In order to solve at least one of the above problems, the present application proposes a novel patrol path setting scheme for a mobile robot, which sets a patrol path conveniently and rapidly by having the mobile robot track the movement of a human user.
According to an aspect of the present application, there is provided a patrol path setting method for a mobile robot, which may include: entering a patrol path setting mode in response to a first command of a user; locking the user and tracking the user's movements; determining a movement path while tracking movement of the user; and storing the determined movement path as a patrol path.
In some embodiments, the patrol path setting method may further include: completing the patrol path in response to a second command by the user, or in response to tracking the user back to an origin.
In some embodiments, when the first command is a local trigger command or a remote communication command, locking the user may include locking the user based on image recognition. In some embodiments, locking the user based on image recognition may include: capturing an environmental image with an image sensor; identifying one or more potential users in the captured environmental image; locking a potential user when only one is identified; and locking the closest potential user when multiple are identified. In some embodiments, when multiple potential users are identified, the type of the first command is further determined: the closest of the multiple potential users is locked if the first command is a local trigger command; the first potential user to move is locked if the first command is a remote communication command; and if multiple users are moving, the closest moving user is locked.
In some embodiments, when the first command is a voice command, locking the user may include locking the user based on a combination of image recognition and sound source localization. In some embodiments, locking the user based on a combination of image recognition and sound source localization may include: determining a sound source direction based on the first command, which is a voice command; rotating an image sensor toward the sound source direction; capturing an environmental image in the sound source direction with the image sensor; and identifying and locking a user in the captured environmental image in the sound source direction. In some embodiments, locking the user based on a combination of image recognition and sound source localization may include: step a, determining a sound source direction based on the first command, which is a voice command; step b, rotating the image sensor toward the sound source direction; step c, capturing an environmental image in the sound source direction with the image sensor; step d, identifying the user based on the captured environmental image; step e, locking the user if the user is successfully identified based on the captured environmental image; step f, if the user is not successfully identified based on the captured environmental image, determining according to a predetermined strategy whether to advance in the sound source direction; step g, if it is determined according to the predetermined strategy to advance in the sound source direction, driving the mobile robot in the sound source direction and then returning to step c; and step h, if it is determined according to the predetermined strategy not to advance in the sound source direction, not driving the mobile robot forward and also returning to step c.
In some embodiments, tracking the movement of the user may include: obtaining contour information of the user; and tracking the movement of the user's contour. In some embodiments, the contour information may include one or more of a whole-body contour, a head contour, or other local contours of a human body.
In some embodiments, tracking the movement of the user may further comprise: performing a corresponding movement in response to a third command from the user. The third command may be a voice command or a remote communication command.
In some embodiments, determining the movement path may include: determining relative coordinates of each waypoint traversed by the mobile robot with respect to a previous waypoint.
In some embodiments, storing the determined movement path as a patrol path may comprise: storing all waypoints passed by the mobile robot as the patrol path; or storing, among all waypoints passed by the mobile robot, one or more waypoints indicated by a fourth command of the user as the patrol path. The fourth command may be one of a voice command, a local trigger command, and a remote communication command.
In some embodiments, the mobile robot already has a map of the work environment. The stored patrol route is a route located in a map of the work environment.
In some embodiments, the mobile robot does not yet have a map of the work environment. At this time, the patrol path setting method may further include: while tracking the user's movements, running a SLAM algorithm to build a map of the work environment. The stored patrol route is a route located in the map of the constructed work environment.
In some embodiments, the patrol path setting method may further include: setting one or more of a patrol time and a patrol frequency of the patrol path in response to a fifth command of the user. The fifth command may be one of a voice command, a local trigger command, and a remote communication command. In some embodiments, setting one or more of a patrol time and a patrol frequency of the patrol path may be performed when the mobile robot tracks that the user completes the patrol path. After the setting of the patrol time and patrol frequency is completed, the mobile robot may exit the patrol route setting mode.
According to another aspect of the present application, there is provided a mobile robot, which may include: a driving part for moving the mobile robot; at least one image sensor for capturing images; at least one microphone for capturing sound; and at least one processor in communication with the image sensor and the microphone, the processor configured to execute computer instructions stored in a computer-readable medium to perform the methods discussed above.
In some embodiments, the at least one microphone may comprise a microphone array.
According to yet another aspect of the application, a computer program product is provided, which may comprise computer program instructions, which, when executed by one or more processors, cause the processors to perform the method discussed above.
According to yet another aspect of the present application, there is provided a computer readable medium, which may have stored thereon computer program instructions, which, when executed by one or more processors, may cause the processors to perform the method discussed above.
Compared with the prior art, the scheme provided by the present application can set a patrol path for a mobile robot without any additional equipment. It also avoids the complex operation of an app, making it simple and convenient, so that even people unfamiliar with computers can easily set a patrol path for the mobile robot. In addition, the scheme applies even when the mobile robot has no map of its working environment, giving it a wider range of applications and scenarios.
Drawings
The above and other objects, features, and advantages of the present application will become more apparent from the following detailed description of exemplary embodiments with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the exemplary embodiments and are incorporated in and constitute a part of this specification; they illustrate embodiments of the application and, together with the description, serve to explain the application without limiting it. In the drawings, the same or similar reference numbers generally refer to the same or similar parts or steps.
Fig. 1 is a flowchart illustrating a patrol path setting method for a mobile robot according to an exemplary embodiment of the present invention.
Fig. 2 is a flowchart illustrating a method of locking a user based on image recognition according to an exemplary embodiment of the present invention.
Fig. 3A is a flowchart illustrating a method of locking a user based on a combination of image recognition and sound source localization according to an exemplary embodiment of the present invention.
Fig. 3B is a flowchart illustrating a method of locking a user based on a combination of image recognition and sound source localization according to another exemplary embodiment of the present invention.
FIG. 4A is a diagram illustrating an example scenario in which a mobile robot tracks user movement according to an example embodiment of the invention.
FIG. 4B is a diagram illustrating an example scenario in which a mobile robot tracks user movement according to another example embodiment of the present invention.
Fig. 5 is a block diagram illustrating a structure of a mobile robot according to another exemplary embodiment of the present invention.
Detailed Description
Hereinafter, example embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be understood that the described embodiments are only some embodiments of the present application and not all embodiments of the present application, and that the present application is not limited by the example embodiments described herein.
Summary of the Invention
As described above, in the related art, setting a patrol path for a mobile robot requires running an app on a portable device such as a mobile phone or tablet computer; the app must read the environment map stored on the mobile robot and draw or edit the patrol path on that map. This conventional method has many disadvantages: it requires additional equipment, involves complicated operations, and presumes prior knowledge of an environment map. In a dynamic environment, moreover, the environment map stored on the mobile robot may no longer be consistent with the actual environment, so a patrol path set in the app may no longer be traversable.
The invention solves the above problems with a simple technical scheme. One basic idea is that the user leads the mobile robot along the patrol path once; the robot remembers that path and can patrol it later. This scheme may also be referred to as a "teach-replay" mode. It avoids the need for a mobile phone and app altogether: there is no complex interface or function to operate, and the robot only needs to be walked along the patrol path once, which is simple and quick. The solution applies not only when the mobile robot already knows the environment map, but also when it does not. In the latter case, for example, the mobile robot may build an environment map with a simultaneous localization and mapping (SLAM) algorithm while following the user along the patrol path. Because the setting is completed while traversing the actual path, the set patrol path is consistent with the latest real environment, which avoids the problem of a path set in a phone app contradicting the real, dynamic environment and becoming unusable.
Having described the general principles of the present application, various non-limiting embodiments of the present application will now be described with reference to the accompanying drawings.
Exemplary embodiments
Fig. 1 is a flowchart illustrating a patrol path setting method for a mobile robot according to an exemplary embodiment of the present invention. As shown in fig. 1, the patrol path setting method 100 for a mobile robot may begin at step 110, when the mobile robot receives a first command from a user to enter a patrol path setting mode. The first command may take any form. In some embodiments, the first command may be a local trigger command, such as a command the user triggers by pressing a physical button or a touch screen on the mobile robot itself. In some embodiments, the first command may be a remote communication command, such as an infrared command issued with a remote control dedicated to the mobile robot, or a command issued from the user's own cell phone, which may establish a communication connection with the mobile robot through a wireless local area network, the internet, or a cellular network and run a specific app to control the mobile robot. It should be understood that when an app is used here to trigger patrol path setting, the app need not read the environment map stored in the mobile robot. Preferably, in some embodiments, the first command may be a voice command. For example, the user may say "set patrol path" to the mobile robot; a microphone component such as a microphone array picks up the user's voice, a speech recognition algorithm extracts the keyword "set patrol path", and the robot enters the patrol path setting mode in response. After entering the patrol path setting mode in response to the first command, the mobile robot may also provide an output indicating that the mode was entered successfully: for example, it may emit a steady or flashing light of a certain color with a light-emitting part, or announce "patrol path setting mode entered" through a speaker.
Upon entering the patrol path setting mode, the mobile robot may determine at step 120 whether the first command it received was a voice command or a non-voice command, such as the aforementioned local trigger command or remote communication command. Based on the form of the first command, the mobile robot performs different processing. When the first command is a voice command, the mobile robot may identify and lock the user based on a combination of image recognition and sound source localization, as shown in step 140; when the first command is not a voice command, the mobile robot may identify and lock the user based on image recognition alone, as shown in step 130. The locking steps 130 and 140 are described in more detail below.
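As a rough illustration of this branch (the names CommandType, lock_by_image, and lock_by_image_and_sound are illustrative only and do not appear in the patent), the dispatch of steps 120 to 140 might look like the following Python sketch:

```python
from enum import Enum, auto

class CommandType(Enum):
    VOICE = auto()
    LOCAL_TRIGGER = auto()   # physical button or touch screen on the robot
    REMOTE = auto()          # infrared remote control, phone app, etc.

def lock_user(command_type):
    """Step 120: choose a locking strategy based on the form of the first command."""
    if command_type is CommandType.VOICE:
        # Step 140: a voice command also yields a sound source direction.
        return lock_by_image_and_sound()
    # Step 130: local trigger or remote communication command,
    # so only image recognition is available for locking.
    return lock_by_image()

def lock_by_image():
    ...  # sketched after the discussion of Fig. 2 below

def lock_by_image_and_sound():
    ...  # sketched after the discussion of Figs. 3A/3B below
```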
After successfully locking the user in step 130 or step 140, the mobile robot may track and move with the user, as shown in step 150. The user may lead the mobile robot along the patrol route to be set in the work environment. In addition to following the user's movement with a tracking algorithm, the mobile robot may receive a second command from the user directing it to move. For example, in some application scenarios the user may wish the mobile robot to patrol an area that a human cannot or should not enter; the user can then issue a second command sending the robot to that area without going there personally. It follows that the second command is preferably a voice command or a remote communication command.
While moving with the user, the mobile robot may execute a positioning algorithm to determine its movement path. The mobile robot may use various existing positioning methods: relative positioning methods such as dead reckoning; absolute positioning methods such as GPS positioning, beacon positioning, map-matching positioning, and probabilistic positioning; or combined methods using both. Different positioning methods suit different application scenarios; for example, an indoor robot may adopt a relative positioning method, while an outdoor robot may adopt an absolute or combined positioning method.
Taking dead reckoning, a relative positioning method suitable for indoor robots, as an example: the mobile robot knows its initial position and sets it as the origin. As the robot follows the user, it determines the displacement vector of each waypoint relative to the previous waypoint using, for example, displacement-measuring components such as odometers and accelerometers together with heading sensors such as angular-rate gyroscopes, magnetic compasses, and differential odometers. Such a displacement vector relative to the previous waypoint is also referred to as relative coordinates; by summing the relative coordinates of all waypoints, the displacement vector of the robot's current position relative to the origin, also referred to as absolute coordinates, is obtained. In some embodiments, the mobile robot may employ dead reckoning to determine the relative coordinates of the waypoints on its movement path.
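A minimal sketch of this accumulation follows, assuming each waypoint reports a traveled distance and an absolute heading from the heading sensor (the function name and input format are illustrative):

```python
import math

def dead_reckon(relative_moves):
    """Sum per-waypoint relative coordinates (displacement vectors) into
    absolute coordinates, with the start position as the origin.

    relative_moves: list of (distance, heading_rad) pairs, e.g. an odometer
    reading plus a gyroscope/compass heading at each waypoint.
    """
    x, y = 0.0, 0.0            # the initial position is the origin
    path = [(x, y)]
    for distance, heading in relative_moves:
        x += distance * math.cos(heading)   # add this waypoint's relative
        y += distance * math.sin(heading)   # coordinates to the running sum
        path.append((x, y))                 # absolute coordinates so far
    return path

# Three legs of travel; the final absolute coordinates are the sum of all
# relative displacement vectors, as described above.
print(dead_reckon([(1.0, 0.0), (1.0, math.pi / 2), (0.5, math.pi)]))
```

Because every term in the sum carries measurement error, the estimate drifts over time, which is exactly the accumulation problem discussed next.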
It can be understood that dead reckoning is a cumulative process: measurement and calculation errors accumulate over time, so positioning accuracy degrades continuously. Thus, in some embodiments, dead reckoning may be combined with other sensor information. For example, when a map of the current work environment is already stored in the mobile robot, the robot may use laser sensors, sonar sensors, and the like to measure its distance to map features and correct the coordinates of its current position.
It will of course be appreciated that the invention also applies when the mobile robot does not yet have a map of the work environment. In this case, the mobile robot may construct a map of the work environment while following the user, using, for example, a simultaneous localization and mapping (SLAM) algorithm, and determine its movement path within that map. The mobile robot may also adopt various algorithms to improve positioning accuracy: for example, UMBmark calibration can correct systematic errors caused by the robot's physical structure and the accuracy of its inertial sensors, the Gyrodometry algorithm can correct heading errors caused by uneven ground, and sensor-fusion approaches have been proposed to correct the influence of wheel slip on positioning. These algorithms are well known in the art and are not described in detail here.
While more relative positioning algorithm embodiments are given above, it should be understood that absolute positioning or combined positioning may also be used when the mobile robot is in a suitable application scenario. Again, these positioning algorithms are known in the art and therefore will not be described in detail here.
As described above, the mobile robot determines its movement path (in other words, the coordinates of the waypoints on that path) at step 160. When the mobile robot follows the user back to the origin, it may determine that the entire movement path has been completed and store the movement path as a patrol path, as shown in step 170. In other embodiments, the mobile robot may determine that the path is complete and perform the storing step 170 upon receiving a third command from the user. The third command may be any form of command, such as the aforementioned voice command, local trigger command, or remote communication command; for example, the mobile robot may treat the voice command "patrol end" as completing the entire patrol route. This is useful, for example, when the patrol path is not a closed loop, or when the patrol should continue along other paths after passing back through the origin.
Although not shown in the flow chart of fig. 1, in some embodiments, after determining that the entire patrol path is complete, patrol parameters such as patrol time and patrol frequency may also be set before the path is stored (step 170). For example, the mobile robot may play a prompt such as "please set patrol time", and the user may reply with a voice command such as "13:30" or "22:00 to 7:00 the next day". The mobile robot may likewise prompt "please set patrol frequency", possibly offering options such as "option A: daily" for the user to select. The user may set the patrol frequency verbally or choose from the offered options. Of course, the patrol parameters may also be set in other ways or at other times; for example, they may be set when first entering the patrol path setting mode, or via a touch display screen on the mobile robot. When a set parameter such as the patrol time conflicts with another patrol route, the user may be prompted to choose which patrol takes priority or which is abandoned. After the parameters are set, the mobile robot may store the movement path together with these parameters as a patrol path.
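One plausible shape for the stored record is sketched below; the patent does not specify a storage format, so every field name here is an assumption for illustration:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class PatrolPath:
    """Hypothetical record persisted at step 170: the recorded waypoints
    plus the patrol parameters gathered from the user's commands."""
    waypoints: List[Tuple[float, float]]      # coordinates relative to the origin
    patrol_time: Optional[str] = None         # e.g. "13:30" or "22:00-07:00"
    patrol_frequency: Optional[str] = None    # e.g. "daily"

path = PatrolPath(waypoints=[(0.0, 0.0), (2.0, 0.5), (3.5, 2.0)],
                  patrol_time="13:30", patrol_frequency="daily")
```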
Finally, after the setting of the patrol path is completed, the mobile robot may exit the patrol path setting mode in step 180, and may inform the user of the exit of the patrol path setting mode by, for example, an alert tone or a light signal.
As previously described with reference to steps 130 and 140, the mobile robot may lock the user using image recognition alone or combined with sound source localization, depending on the type of the first command. Fig. 2 shows a flowchart of a method for locking a user based on image recognition according to an exemplary embodiment of the present invention, which may be used, for example, in step 130 of fig. 1. As shown in fig. 2, the method 200 may begin at step 210 by capturing an image of the environment. It should be understood that, in the present invention, capturing an image covers not only capturing a single frame but also capturing an image sequence, i.e., multiple frames taken continuously or at predetermined intervals and used together for the intended purpose; the same understanding applies, for example, to picking up ambient sound. In step 210, the mobile robot may rotate to capture images of its full 360-degree surroundings.
Next, at step 220, the mobile robot may identify potential users using an image recognition algorithm based on the captured environmental image. For example, the mobile robot may identify the potential user using a face recognition algorithm, and may also identify the potential user using other algorithms, such as a body contour recognition algorithm, a body feature contour recognition algorithm, such as a head contour recognition algorithm, a torso contour recognition algorithm, and so forth.
The identification results may then be examined to determine whether a potential user or multiple potential users were identified at step 230. In an ideal case, when only one potential user is identified, it may be determined that the potential user is the user who is going to take the mobile robot for patrol path setting, and thus the user is directly locked, as shown in step 240. On the other hand, when two or more potential users are identified in step 220, it may be necessary to further identify the user who issued the first command to the mobile robot from among the plurality of potential users. Although not shown in the flow diagram of fig. 2, in some embodiments, the closest one of the plurality of potential users may be directly locked. In other embodiments, one may also proceed to step 250 to further determine whether the type of the first command is a local command or a remote command. When it is determined that the first command is a locally triggered command, such as a command triggered by a physical button or touch screen on the mobile robot itself, it may be determined that the user issuing the first command is next to the mobile robot, and therefore the mobile robot locks the closest user, as shown in step 260. When it is determined that the first command is a telecommunication command, such as a command issued via a cell phone or remote control, the potential user who moved first may be locked in step 270, since it may be assumed that the user will immediately take the mobile robot to patrol after triggering the patrol path setting mode of the mobile robot. In some special cases, there may be multiple potential users moving at the same time, and as a default policy, the closest moving user may be locked at that time. Through these steps, the process of locking the user using image recognition is completed.
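The selection policy of steps 230 to 270 can be summarized in a few lines. The sketch below assumes the perception stack reports, for each detected person, a distance and whether they are moving; "first to move" is approximated by whoever is currently moving, and the final fallback is an assumption not spelled out in the text:

```python
def choose_user(candidates, command_is_local):
    """Pick the person to lock per Fig. 2. Each candidate is a dict such as
    {"distance": 1.8, "moving": False}."""
    if len(candidates) == 1:
        return candidates[0]                          # step 240: only one person
    if command_is_local:
        # Step 260: the command came from a button/touch screen on the robot,
        # so the commanding user must be standing right next to it.
        return min(candidates, key=lambda c: c["distance"])
    moving = [c for c in candidates if c["moving"]]
    if moving:
        # Step 270 and the default policy: lock the moving user, and the
        # closest one if several are moving at once.
        return min(moving, key=lambda c: c["distance"])
    # Assumed fallback (not specified in the text): nobody has moved yet,
    # so lock the closest candidate.
    return min(candidates, key=lambda c: c["distance"])
```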
Fig. 3A illustrates a flowchart of a method of locking a user based on a combination of image recognition and sound source localization according to an exemplary embodiment of the present invention, and fig. 3B illustrates a flowchart of a method of locking a user based on a combination of image recognition and sound source localization according to another exemplary embodiment of the present invention, and the methods illustrated in fig. 3A and 3B may be used, for example, in step 140 of fig. 1.
Referring first to FIG. 3A, a method 300 for locking a user based on a combination of image recognition and sound source localization may begin at step 310 by determining a sound source direction from the first command, which is a voice command. The mobile robot may perform the sound source localization of step 310 using, for example, a microphone array. In short, because the microphones in the array are mounted at different positions, the sound wave of the user's voice command reaches each microphone with a different phase after traveling through the air. From the phase differences with which the microphones pick up the same sound, the time difference of arrival at each pair of microphones can be calculated; the sound source then lies on one sheet of a hyperboloid whose foci are the positions of that pair of microphones and whose parameter is the path-length difference corresponding to the arrival-time difference. Multiple microphone pairs give multiple time differences and thus multiple hyperboloids, and the sound source lies at their intersection. This is the basic principle of sound source localization. In step 310, the exact position of the sound source (direction plus distance) need not be determined; the direction alone suffices.
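For a single microphone pair in the far field, the hyperboloid degenerates into a cone, and the direction follows from the arrival-time difference alone. A minimal sketch (the 343 m/s figure assumes air at roughly 20 degrees Celsius; the function name is illustrative):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, assuming air at roughly 20 degrees Celsius

def pair_direction(delta_t, mic_spacing):
    """Far-field angle of arrival for one microphone pair:
    sin(theta) = c * delta_t / d, with theta measured from the pair's
    broadside (the perpendicular bisector of the pair)."""
    s = SPEED_OF_SOUND * delta_t / mic_spacing
    s = max(-1.0, min(1.0, s))   # clamp against measurement noise
    return math.asin(s)

# Sound arrives 0.2 ms later at the second microphone of a 10 cm pair:
print(math.degrees(pair_direction(0.0002, 0.10)))  # about 43 degrees off broadside
```

Combining the angles obtained from several pairs in the array resolves the remaining ambiguity and yields the single source direction used in step 310.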
Then, at step 320, the image sensor on the mobile robot may be rotated to the sound source direction determined at step 310, and an image in the sound source direction is captured at step 330. At step 340, the user is identified and locked using the captured image in the direction of the sound source. The steps of capturing an image and identifying and locking the user may be substantially as described above with reference to fig. 2 and will not be described in detail here.
In the method shown in fig. 3A, because sound source localization determines the direction of the user, the accuracy of user locking can be greatly improved and locking time reduced: a user in the sound source direction can be identified and locked directly, without considering users in other directions. It should also be appreciated that the sound source direction may be continuously refined using voice commands subsequent to the first command, further improving locking accuracy.
In some cases, even if the correct sound direction is determined, the user may not be identified and locked. The method 300' shown in fig. 3B solves this problem to some extent. Steps 310, 320 and 330 in the method 300' of fig. 3B may be the same as the method 300 of fig. 3A, and thus a repeated description thereof is omitted here.
Referring to FIG. 3B, after the image is captured at step 330, the method proceeds to step 350 to determine whether the user was successfully identified. If so, it proceeds to step 360 and locks the user; otherwise it proceeds to step 370, where it determines, based on a predetermined strategy, whether the robot can move toward the sound source. The predetermined strategy considers, for example, whether the robot is at a position from which it cannot proceed (e.g., it has reached a wall) or whether the current process has been interrupted by a new voice command. If step 370 determines that the robot can continue toward the sound source, it moves some distance in that direction at step 380 and then repeats the image capture of step 330; being closer to the sound source increases the chance of successfully identifying the user. If step 370 determines that the robot cannot continue toward the sound source, the method also returns to step 330 and repeats the capture-and-identify process in an attempt to lock the user.
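Steps 330 to 380 amount to a capture-identify-advance loop. A sketch follows (the robot interface and the attempt cap are assumptions; the flowchart itself loops until the user is locked):

```python
def lock_by_approaching(robot, source_direction, step_m=0.5, max_attempts=10):
    """Fig. 3B loop: capture an image toward the sound source, try to
    identify the user, and if that fails, advance when the predetermined
    strategy allows (e.g. no wall ahead, no interrupting command).
    `robot` is a hypothetical interface with capture/identify/lock/
    can_advance/advance methods."""
    for _ in range(max_attempts):
        image = robot.capture(source_direction)      # step c
        user = robot.identify(image)                 # step d
        if user is not None:
            robot.lock(user)                         # step e
            return user
        if robot.can_advance(source_direction):      # step f: predetermined strategy
            robot.advance(source_direction, step_m)  # step g: move closer, retry
        # step h: cannot advance; stay put and retry the capture anyway
    return None
```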
Although embodiments for identifying and locking users based on a combination of image recognition and sound source localization are described above, the present invention is not limited to these particular embodiments; other embodiments are also possible, such as those disclosed in the applicant's prior Chinese patent application 201610341566.X, the entire contents of which are incorporated herein by reference.
In the above described embodiments, image recognition and speech recognition techniques are used. It will be appreciated that the present invention may utilize not only existing image and speech recognition techniques, but also related recognition techniques that are being or will be developed in the future. It should also be understood that the image recognition described herein includes not only recognition of visual images, but also recognition techniques such as infrared images, laser images, ultrasonic images, and the like.
FIG. 4A illustrates an example scenario in which a mobile robot tracks user movement according to an example embodiment of the present invention. For example, mobile robot 10 may be deployed in work environment 400, which may contain several unreachable areas, e.g., 402 and 404. Docking station 410 for mobile robot 10 is located somewhere in work environment 400; when idle, mobile robot 10 may dock at docking station 410 for charging. Mobile robot 10 already has a map of the entire work environment 400 and can set the location of docking station 410 as the origin.
By performing the patrol path setting method described above, mobile robot 10 can follow user 20 and thereby generate movement path 420. While in some embodiments described above the entire path 420 may be saved as the patrol path, in other embodiments only certain waypoints on path 420, such as waypoints 422, 424, 426, and 428, may be saved as the patrol path. For example, when mobile robot 10 reaches a waypoint such as 422, user 20 may issue the command "set patrol point" to mark it as a location that must be patrolled. In some embodiments, user 20 may even set the time at which that location should be patrolled. As user 20 leads robot 10 along, the user may set a number of such patrol points, e.g., points 424, 426, and 428. When the entire path is completed, mobile robot 10 may save this sequence of points as the patrol path. During later patrols, mobile robot 10 reaches the predetermined patrol points at the predetermined times along the path, and the legs between patrol points may be determined autonomously by mobile robot 10, as sketched below.
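A sketch of how such patrol points might be recorded during the walk (the keyword string and pose format are assumptions for illustration):

```python
patrol_points = []

def on_voice_command(keyword, current_pose):
    """Hypothetical handler: when the user says "set patrol point" at a
    waypoint (e.g. 422, 424, 426, 428 in Fig. 4A), record that pose as a
    mandatory patrol point."""
    if keyword == "set patrol point":
        patrol_points.append(current_pose)

on_voice_command("set patrol point", (2.0, 0.5))   # e.g. waypoint 422
on_voice_command("set patrol point", (3.5, 2.0))   # e.g. waypoint 424
# Only this ordered sequence is stored as the patrol path; the robot plans
# the legs between consecutive points on its own during later patrols.
```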
The example scenario of FIG. 4B is similar to that of FIG. 4A, except that mobile robot 10 does not yet have a map of work environment 400'. Accordingly, in the case of fig. 4B, mobile robot 10 may perform simultaneous localization and mapping (SLAM) to generate a map of the work environment while following the movement of user 20. As shown in fig. 4B, mobile robot 10 has generated the portion of the map along path 420. When the patrol route is completed, mobile robot 10 may save this partial map first and complete the map of the entire work environment at an appropriate later time.
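A minimal sketch of this interleaving (the `slam` object stands in for any SLAM implementation; every method name here is an assumption, not an API from the patent):

```python
def set_patrol_path_with_slam(robot, slam):
    """Fig. 4B case: with no prior map, mapping, localization, and path
    recording run together while the robot follows the user."""
    path = []
    while not robot.patrol_finished():
        odometry = robot.read_odometry()
        scan = robot.read_range_scan()      # laser / sonar observation
        pose = slam.update(odometry, scan)  # localize in, and extend, the map
        path.append(pose)
        robot.follow_user()                 # keep tracking the locked user
    return path, slam.get_map()             # the partial map is saved with the path
```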
Having described embodiments of some operations related to a patrol path setting method for a mobile robot, an example of the mobile robot is described below with reference to fig. 5. Fig. 5 is a block diagram illustrating a structure of a mobile robot according to another exemplary embodiment of the present invention. As shown in fig. 5, the mobile robot 10 may include an image sensor 11, a microphone 12, a memory 13, an output part 14, a driving part 15, and a processor 17, which are connected to each other through a bus system 16.
The image sensor 11 may be, for example, one or more cameras, which may be monocular cameras, binocular cameras, or more. It should be understood, however, that the present invention is not limited to visible light imaging, and that the image sensor 11 may also be, for example, an infrared imaging device, an ultrasonic imaging device, a laser imaging device, or the like, such that these devices may capture images that can be used for user identification. Furthermore, the image sensor 11 can also be used to monitor the working environment during patrols.
The microphone 12 may be a single microphone or, preferably, a microphone array, for receiving voice commands, performing sound source localization, and the like.
The memory 13 may store data or program instructions for use by the processor 17. For example, the processor 17 may execute program instructions stored in the memory 13 to implement those methods and steps described above using data acquired from the image sensor 11 and the microphone 12 and using data stored in the memory 13. The results of the execution by the processor 17 may be stored in the memory 13 or output via the output means 14. The output component 14 may be various types of output components such as speakers, printers, displays, light bulbs, and the like. The bus system 16 ensures that data can be transmitted between these components, including the drive component 15.
The drive component 15 may be used to drive the mobile robot 10 in motion. For example, the driving part 15 may include a motor, and wheels or a crawler or the like driven by the motor. The operation of the motor may be controlled by the processor 17.
The processor 17 may include one or more processors, or may include a plurality of processor cores packaged in one processor. In addition to a conventional central processing unit, the processor 17 may also be a so-called coprocessor, such as a graphics coprocessor for processing image or video signals, an audio coprocessor for processing audio signals, or the like. The processor 17 may also comprise a so-called controller or the like dedicated to processing a certain component or a certain signal.
Although not shown, mobile robot 10 may also include many other components, such as sensors, such as gyroscopes, lasers, sonars, actuators that perform certain functions, such as cleaning, handling, detection, and the like, as well as communication components for establishing wireless communication connections with the outside world. Mobile robot 10 may have a variety of different components depending on the field in which mobile robot 10 is designed to be used.
Application scenario example
The mobile robot 10 of the present invention is applicable to various scenarios. For example, robot 10 may be a household sweeping robot, with the above method used to set its sweeping route. Robot 10 may also be a transfer robot in a factory, carrying goods from one place to another at designated times along designated routes. Robot 10 may also be a security patrol robot for homes, communities, or factories, set to patrol a fixed route at night to keep them secure; such a robot can play the role of a traditional night watchman. Robot 10 may also be a nursing robot for homes, nursing homes, hospitals, and the like, patrolling a predetermined route to confirm that, for example, the elderly or patients are in a normal state. For instance, when robot 10 "sees" an elderly person or patient lying on the ground, or "hears" a cry for help, it may promptly send a notification or warning to the relevant person or organization.
It should be understood that the mobile robot 10 and associated methods described above may also be applied to many other aspects where it is desirable to move the robot 10 along a prescribed path.
Exemplary computer program product and computer-readable storage Medium
In addition to the above methods and apparatus, embodiments of the present application may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps of the patrol path setting method described herein according to various embodiments of the present application.
The computer program product may be written with program code for performing the operations of embodiments of the present application in any combination of one or more programming languages, including an object-oriented programming language such as Java or C++ and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present application may also be a computer-readable storage medium having stored thereon computer program instructions that, when executed by a processor, cause the processor to perform the steps of the patrol path setting method according to various embodiments of the present application described herein.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing describes the general principles of the present application in conjunction with specific embodiments. However, the advantages and effects mentioned in the present application are merely examples and are not limiting; they should not be considered essential to the various embodiments of the present application. The specific details disclosed above are provided for illustration and ease of understanding only; the application is not limited to those precise details, and the disclosure is not intended to be exhaustive.
The block diagrams of devices, apparatuses, and systems referred to in this application are given only as illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. These devices, apparatuses, and systems may be connected, arranged, and configured in any manner, as will be appreciated by those skilled in the art. Words such as "including," "comprising," and "having" are open-ended words meaning "including, but not limited to," and are used interchangeably therewith. The word "or" as used herein means, and is used interchangeably with, "and/or," unless the context clearly dictates otherwise. The phrase "such as" is used herein to mean, and is used interchangeably with, "such as but not limited to".
It should also be noted that in the apparatus and methods of the present application, the components or steps may be disassembled and/or reassembled. These decompositions and/or recombinations are to be considered as equivalents of the present application.
Although the steps of the methods are described above in a certain order, it should be understood that the steps may be performed in a different order or that multiple steps may be performed simultaneously; in some embodiments, certain steps may even be performed continuously. The methods of the present invention encompass all of these orders of execution.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, the description is not intended to limit embodiments of the application to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (18)

1. A patrol path setting method for a mobile robot, comprising:
entering a patrol path setting mode in response to a first command of a user;
locking the user based on image recognition and tracking movement of the user;
determining a movement path while tracking movement of the user; and
the determined movement path is stored as a patrol path,
wherein locking the user based on image recognition comprises:
capturing an environmental image with an image sensor;
identifying potential users in the captured environmental image, wherein when a plurality of potential users are identified, a further determination is made as to the type of the first command, and if the first command is a local trigger command, a closest one of the plurality of potential users is locked.
2. The patrol path setting method according to claim 1, further comprising:
completing the patrol path in response to a second command by the user, or in response to tracking the user back to an origin.
3. The patrol path setting method of claim 1, wherein when the first command is a local trigger command or a remote communication command, locking the user comprises locking the user based on image recognition, and
wherein, when the first command is a voice command, locking the user comprises locking the user based on a combination of image recognition and sound source localization.
4. The patrol path setting method of claim 1, wherein one potential user is locked when the one potential user is identified.
5. The patrol path setting method of claim 1, wherein if the first command is a remote communication command, a first moving user of the plurality of potential users is locked, and if a plurality of users are all moving, a closest moving user is locked.
6. The patrol path setting method according to claim 3, wherein locking the user based on a combination of image recognition and sound source localization comprises:
determining a sound source direction based on the first command as a voice command;
rotating an image sensor to the sound source direction;
capturing an environmental image in the sound source direction with the image sensor;
identifying and locking a user in the captured environmental image in the direction of the sound source.
7. The patrol path setting method according to claim 3, wherein locking the user based on a combination of image recognition and sound source localization comprises:
step a, determining a sound source direction based on the first command, which is a voice command;
step b, rotating the image sensor toward the sound source direction;
step c, capturing an environmental image in the sound source direction with the image sensor;
step d, identifying the user based on the captured environmental image;
step e, locking the user if the user is successfully identified based on the captured environmental image;
step f, if the user is not successfully identified based on the captured environmental image, determining according to a predetermined strategy whether to advance in the sound source direction;
step g, if it is determined according to the predetermined strategy to advance in the sound source direction, driving the mobile robot in the sound source direction, and then returning to step c; and
step h, if it is determined according to the predetermined strategy not to advance in the sound source direction, not driving the mobile robot in the sound source direction, and also returning to step c.
8. The patrol path setting method of claim 1, wherein tracking the movement of the user comprises:
obtaining contour information of the user; and
tracking the movement of the user's contour,
wherein the contour information includes one or more of a whole-body contour, a head contour, a torso contour, or other local contours of a human body.
9. The patrol path setting method of claim 8, wherein tracking the movement of the user further comprises:
in response to a third command from the user, wherein the third command is a voice command or a remote communication command, performing the corresponding movement.
10. The patrol path setting method according to claim 1, wherein the determining the moving path includes:
determining relative coordinates of each waypoint traversed by the mobile robot with respect to a previous waypoint.
11. The patrol path setting method of claim 10, wherein storing the determined movement path as a patrol path comprises:
storing all waypoints passed by the mobile robot as the patrol path; or
storing, among all waypoints passed by the mobile robot, one or more waypoints indicated by a fourth command of the user as the patrol path, wherein the fourth command is one of a voice command, a local trigger command, and a remote communication command.
12. The patrol path setting method according to claim 1, wherein the mobile robot has a map of a work environment, and
wherein the stored patrol route is a route located in a map of the work environment.
13. The patrol path setting method according to claim 1, further comprising:
running a SLAM algorithm to construct a map of a work environment while tracking movements of the user,
wherein the stored patrol route is a route located in a map of the constructed work environment.
14. The patrol path setting method according to claim 1, further comprising:
setting one or more of a patrol time and a patrol frequency of the patrol path in response to a fifth command of the user, wherein the fifth command is one of a voice command, a local trigger command, and a remote communication command.
15. The patrol path setting method of claim 14, wherein setting one or more of a patrol time and a patrol frequency of the patrol path is performed when the mobile robot traces that the user completes the patrol path.
16. A mobile robot, comprising:
a driving part for moving the mobile robot;
at least one image sensor for capturing an image;
at least one microphone for capturing sound; and
at least one processor in communication with the image sensor and the microphone, the processor configured to execute computer instructions stored in a computer readable medium to perform the method of any of claims 1-15.
17. The mobile robot of claim 16 wherein the at least one microphone comprises a microphone array.
18. A computer-readable medium having stored therein computer program instructions which, when executed by one or more processors, cause the processors to perform the method of any one of claims 1-15.
CN201610596000.1A 2016-07-22 2016-07-26 Mobile robot and patrol path setting method thereof Active CN106292657B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2016105874699 2016-07-22
CN201610587469 2016-07-22

Publications (2)

Publication Number Publication Date
CN106292657A CN106292657A (en) 2017-01-04
CN106292657B true CN106292657B (en) 2020-05-01

Family

ID=57652757

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610596000.1A Active CN106292657B (en) 2016-07-22 2016-07-26 Mobile robot and patrol path setting method thereof

Country Status (1)

Country Link
CN (1) CN106292657B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106781332A (en) * 2017-02-14 2017-05-31 上海斐讯数据通信技术有限公司 The method and system of alarm are realized by sweeping robot
CN107180277B (en) * 2017-05-24 2020-03-27 江西理工大学 Unmanned aerial vehicle inspection path planning method applying elite reverse harmony search
CN107315414B (en) * 2017-07-14 2021-04-27 灵动科技(北京)有限公司 Method and device for controlling robot to walk and robot
CN107633270A (en) * 2017-09-29 2018-01-26 上海与德通讯技术有限公司 Intelligent identification Method, robot and computer-readable recording medium
CN109048899A (en) * 2018-08-15 2018-12-21 深圳市烽焌信息科技有限公司 A kind of patrol robot and storage medium
CN109062212A (en) * 2018-08-15 2018-12-21 深圳市烽焌信息科技有限公司 A kind of robot and storage medium for patrol
CN109085833A (en) * 2018-08-15 2018-12-25 深圳市烽焌信息科技有限公司 A kind of patrol robot and storage medium
CN109159131A (en) * 2018-09-25 2019-01-08 浙江智玲机器人科技有限公司 Unmanned meal delivery robot
CN111436861B (en) * 2018-12-27 2023-02-17 北京奇虎科技有限公司 Block edge closing processing method, electronic equipment and readable storage medium
CN109784821A (en) * 2019-04-04 2019-05-21 白冰 A kind of garden servicer and garden service system
CN112017661A (en) * 2019-05-31 2020-12-01 江苏美的清洁电器股份有限公司 Voice control system and method of sweeping robot and sweeping robot
CN110941265A (en) * 2019-11-05 2020-03-31 盟广信息技术有限公司 Map entry method and device, computer equipment and storage medium
CN117148836A (en) * 2021-08-20 2023-12-01 科沃斯机器人股份有限公司 Self-moving robot control method, device, equipment and readable storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104635737A (en) * 2015-01-22 2015-05-20 马鞍山纽泽科技服务有限公司 Garden tour blind-guide-handcart intelligent system
CN104932501A (en) * 2015-05-29 2015-09-23 深圳市傲通环球空气过滤器有限公司 Movement device with path self-learning function
CN105159302A (en) * 2015-09-23 2015-12-16 许继集团有限公司 Automatic guided vehicle (AGV) and automatic navigation method thereof

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4079792B2 (en) * 2003-02-06 2008-04-23 松下電器産業株式会社 Robot teaching method and robot with teaching function


Also Published As

Publication number Publication date
CN106292657A (en) 2017-01-04


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant