WO2020259274A1 - Area recognition method, robot and storage medium (区域识别方法、机器人和存储介质) - Google Patents

Area recognition method, robot and storage medium (区域识别方法、机器人和存储介质)

Info

Publication number: WO2020259274A1
Authority: WIPO (PCT)
Prior art keywords: area, label image, robot, image, identified
Application number: PCT/CN2020/095049
Other languages: English (en), French (fr)
Inventors: 温贤达, 刘德, 郑卓斌, 王立磊
Original Assignee: 广东宝乐机器人股份有限公司
Application filed by 广东宝乐机器人股份有限公司
Publication of WO2020259274A1

Classifications

    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 - Control of position or course in two dimensions
    • G05D1/021 - Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 - Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0234 - Control of position or course in two dimensions specially adapted to land vehicles using optical markers or beacons
    • G05D1/0268 - Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
    • G05D1/0274 - Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means using mapping information stored in a memory device
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 - Programme-controlled manipulators
    • B25J9/16 - Programme controls
    • B25J9/1602 - Programme controls characterised by the control system, structure, architecture
    • B25J9/1694 - Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 - Vision controlled systems

Description (translated from Chinese)

  • This application claims priority to Chinese Patent Application No. 201910479524.6, entitled "区域识别方法、机器人和存储介质" and filed with the China National Intellectual Property Administration on June 24, 2019, the entire contents of which are incorporated herein by reference.
  • This application relates to the field of robotics, and in particular to an area recognition method, a robot, and a storage medium.
  • With the development of robotics, robots can now replace humans for part of indoor work.
  • When a robot works while moving, it is usually necessary to limit its working area so that it works only inside the working area and is kept out of non-working areas; for example, it may be allowed to work only in the living room or bedroom and be barred from the bathroom and kitchen.
  • There are two main existing schemes for dividing working and non-working areas. In scheme one, an infrared virtual wall limits the working area: an infrared virtual-wall generator emits a beam of infrared light that serves as the wall, and when the robot's detector senses the beam the robot backs away from it, so that the robot works only inside the region bounded by the generator. In scheme two, a magnetic-strip virtual wall limits the working area: a magnetic strip is laid on the floor, and when the robot's sensor detects the strip's magnetic signal the robot backs away from the strip, so that the robot works only inside the region bounded by the strip.
  • However, in both schemes the virtual wall fails when the infrared device or the magnetic strip fails. This lowers the accuracy with which the robot detects the virtual wall, so the robot cannot reliably identify the areas the wall divides and may mistakenly enter the wrong area.
  • On this basis, in view of the above technical problem, it is necessary to provide an area identification method, robot, and storage medium that can accurately identify area types and keep the robot from mistakenly entering the wrong area.
  • In a first aspect, an area identification method is provided. The method includes: identifying a first label image and a second label image, where the first label image and the second label image have a corresponding relationship; separately obtaining first position information corresponding to the first label image and second position information corresponding to the second label image; setting a virtual wall between the first label image and the second label image according to the first position information and the second position information; and determining the area type of an area to be identified according to the first label image and the second label image, where the area to be identified is the area, bounded by the virtual wall, that does not contain the robot.
  • In one embodiment, identifying the first label image and the second label image includes: obtaining the first label image; analyzing the first label image to obtain the relative positional relationship between the first label image and the second label image; and, according to the relative positional relationship, instructing the robot to move in the direction of the second label image to obtain the second label image.
  • In one embodiment, before the step of instructing the robot to move in the direction of the second label image, the method further includes: saving the robot's current position as a to-be-worked position; and, after the second label image is obtained, returning to the to-be-worked position.
  • In one embodiment, determining the area type of the area to be identified includes: separately obtaining first data information corresponding to the first label image and second data information corresponding to the second label image, where the first data information and the second data information indicate the area type of the area to be identified; and determining the area type of the area to be identified according to the first data information and the second data information.
  • In one embodiment, the method further includes: if the first data information is the same as the second data information, determining that the virtual wall is set successfully; and if the first data information differs from the second data information, determining that setting the virtual wall failed and issuing an alarm.
  • In one embodiment, the first data information and the second data information include the scene type of the area to be identified, and the method further includes: setting the cleaning mode of the area to be identified to the cleaning mode corresponding to that scene type.
  • In one embodiment, the method further includes: if the area type of the area to be identified is determined to be a non-working area, marking the area to be identified as a non-working area in the electronic map and prohibiting the robot from entering it.
  • In one embodiment, the method further includes: if the area type of the area to be identified is determined to be a working area, marking the area to be identified as a working area in the electronic map.
  • In one embodiment, the method further includes: marking the area to be identified as an area to be cleaned and cleaning it after cleaning of the current area is finished; or saving the robot's current position as a to-be-worked position and, after cleaning of the area to be identified is finished, returning to that position and continuing to clean the current area.
  • In one embodiment, the method further includes: marking the scene type and cleaning progress of each area in the electronic map.
  • In one embodiment, separately obtaining the first position information and the second position information includes: obtaining the first position coordinates and first shooting direction when the robot recognizes the first label image; obtaining the second position coordinates and second shooting direction when the robot recognizes the second label image; separately calculating the first and second area proportions that the first and second label images occupy in the environment image; determining, from a preset correspondence between area proportion and shooting distance, the first shooting distance corresponding to the first area proportion and the second shooting distance corresponding to the second area proportion; obtaining the position information of the first label image from the first position coordinates, the first shooting direction, and the first shooting distance; and obtaining the position information of the second label image from the second position coordinates, the second shooting direction, and the second shooting distance.
  • In one embodiment, the first label image and the second label image are provided on image cards, and the image cards are attached by suction cups to the two sides of the entrance of the area to be identified.
  • In one embodiment, the surface of each label image is covered with a fluorescent layer.
  • In a second aspect, a robot is provided. The robot includes:
  • An image recognition module for recognizing a first label image and a second label image; the first label image and the second label image have a corresponding relationship;
  • An obtaining module configured to obtain first position information corresponding to the first label image and second position information corresponding to the second label image respectively;
  • A virtual wall setting module, for setting a virtual wall between the first label image and the second label image according to the first position information and the second position information;
  • An area recognition module, for determining the area type of the area to be identified according to the first label image and the second label image, where the area to be identified is the area, bounded by the virtual wall, that does not contain the robot.
  • In a third aspect, a robot is provided, including a memory and a processor. The memory stores a computer program, and when executing the program the processor implements the following steps: identifying a first label image and a second label image, the two having a corresponding relationship; separately obtaining first position information corresponding to the first label image and second position information corresponding to the second label image; setting a virtual wall between the two label images according to the first and second position information; and determining the area type of the area to be identified according to the two label images, the area to be identified being the area, bounded by the virtual wall, that does not contain the robot.
  • In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored. When executed by a processor, the computer program implements the following steps: identifying a first label image and a second label image, the two having a corresponding relationship; separately obtaining first position information corresponding to the first label image and second position information corresponding to the second label image; setting a virtual wall between the two label images according to the first and second position information; and determining the area type of the area to be identified according to the two label images, the area to be identified being the area, bounded by the virtual wall, that does not contain the robot.
  • With the area recognition method, robot, and storage medium described above, the first label image and the second label image are recognized; the first position information corresponding to the first label image and the second position information corresponding to the second label image are obtained; a virtual wall is set between the two label images according to the first and second position information; and the area type of the area to be identified is determined according to the two label images. A virtual wall can thus be established between a pair of label images, dividing the current area, and the information carried by the label images determines the area type of the area to be identified, which improves the accuracy of virtual-wall detection.
  • FIG. 1 is a diagram of the implementation environment of the area identification method provided by an embodiment of this application;
  • FIG. 2 is a flowchart of an area identification method provided by an embodiment of this application;
  • FIG. 3 is a flowchart of another area identification method provided by an embodiment of this application;
  • FIG. 4 is a flowchart of another area identification method provided by an embodiment of this application;
  • FIG. 5 is a flowchart of another area identification method provided by an embodiment of this application;
  • FIG. 6 is a schematic diagram of a cleaning process provided by an embodiment of this application;
  • FIG. 7 is a schematic diagram of another cleaning process provided by an embodiment of this application;
  • FIG. 8 is a flowchart of another area identification method provided by an embodiment of this application;
  • FIG. 9 is a block diagram of a robot provided by an embodiment of this application;
  • FIG. 10 is a block diagram of another robot provided by an embodiment of this application;
  • FIG. 11 is a block diagram of a robot provided by an embodiment of this application.
  • To make the objectives, technical solutions, and advantages of this application clearer, this application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here only explain this application and do not limit it.
  • The area identification method provided in this application can be applied to the implementation environment shown in FIG. 1.
  • In one embodiment, the robot 101 communicates directly with the terminal device 103. In another optional embodiment, the robot 101 communicates with the server 102, and the server 102 communicates with the terminal device 103. The robot 101 can be, but is not limited to, any of various intelligent robots, self-moving robots, and sweeping robots; the server 102 can be implemented as an independent server or as a server cluster composed of multiple servers; and the terminal device 103 can be, but is not limited to, a smartphone, desktop computer, notebook computer, palmtop computer, or the like.
  • FIG. 2 shows a flowchart of an area recognition method provided by this embodiment, and the area recognition method can be applied to the robot 101 in the implementation environment described above.
  • Step 202: Identify a first label image and a second label image; the first label image and the second label image have a corresponding relationship.
  • The first label image and the second label image may be label images of any form; specifically, each may be a barcode, a two-dimensional code, text, or another specific image. The method used to recognize the first and second label images may be chosen according to the form of the label image, which the embodiments of this application do not limit in detail.
  • Specifically, the first label image and the second label image have a corresponding relationship and appear as a pair.
  • Specifically, the robot includes an image acquisition module used to recognize the first and second label images. While the robot moves along a preset route on the electronic map, the image acquisition module collects environment images in real time and recognizes the first and second label images in them according to a preset image recognition method.
  • The electronic map is established before this procedure starts and is saved in the robot's memory. The electronic map may likewise be stored on a server that communicates with the robot, and also on a terminal device that communicates with the robot or the server. Whenever the electronic map changes, the change is synchronized across the robot, the server, and the terminal device.
  • Step 204: Separately obtain the first position information corresponding to the first label image and the second position information corresponding to the second label image.
  • In one embodiment of this application, the robot obtains, in turn, the first position information of the first label image and the second position information of the second label image. The first and second position information respectively represent the relative coordinates of the first and second label images on the electronic map.
  • Specifically, the robot moves along a preset route on the electronic map. When the robot's image acquisition module recognizes the first label image, the robot reads its own relative position coordinates on the electronic map; because the robot is close to the first label image at that moment, its own relative coordinates can be taken as an approximation of the first label image's relative coordinates on the map. Correspondingly, when the robot recognizes the second label image, its relative position coordinates on the electronic map at that moment are used as the second position information.
  • Step 206: Set a virtual wall between the first label image and the second label image according to the first position information and the second position information.
  • Specifically, after acquiring the relative coordinates of the first and second label images on the electronic map, the robot draws a line segment between the two coordinates; the two coordinates and the segment between them constitute the virtual wall. The robot then modifies the electronic map according to this virtual-wall information, that is, it adds the newly set virtual wall to the map, and synchronizes the modified map to the server and the terminal device.
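The following is a minimal sketch of this step, assuming the electronic map is a 2-D occupancy grid; the grid layout, cell values, and function names are hypothetical illustrations, not taken from the patent.

```python
# Minimal sketch: rasterize the segment between the two label coordinates
# onto a hypothetical occupancy grid, marking every cell on it as a wall.
from typing import List, Tuple

FREE, WALL = 0, 1

def bresenham(p0: Tuple[int, int], p1: Tuple[int, int]) -> List[Tuple[int, int]]:
    """All grid cells on the straight segment from p0 to p1."""
    (x0, y0), (x1, y1) = p0, p1
    cells = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        cells.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return cells

def set_virtual_wall(grid: List[List[int]], first_pos: Tuple[int, int],
                     second_pos: Tuple[int, int]) -> None:
    """Mark the two label coordinates and every cell between them as a wall."""
    for x, y in bresenham(first_pos, second_pos):
        grid[y][x] = WALL

# Usage: a 10x10 map with a virtual wall spanning a doorway from (2, 5) to (7, 5).
grid = [[FREE] * 10 for _ in range(10)]
set_virtual_wall(grid, (2, 5), (7, 5))
```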
  • Step 208: Determine the area type of the area to be identified according to the first label image and the second label image; the area to be identified is the area, bounded by the virtual wall, that does not contain the robot.
  • In one embodiment of this application, the established virtual wall divides the previous area into two areas with the wall as the boundary. The area on the robot's side is the current area, and the area on the other side of the wall, which does not contain the robot, is the area to be identified.
  • In addition, the first and second label images also carry the area type of the area to be identified; the robot obtains that area type by analyzing the two label images.
  • In the area recognition method provided by this embodiment, the first and second label images are recognized; the first position information corresponding to the first label image and the second position information corresponding to the second label image are obtained; a virtual wall is set between the two label images according to that position information; and the area type of the area to be identified, i.e., the area bounded by the virtual wall that does not contain the robot, is determined from the two label images. A corresponding virtual wall can thus be established quickly from a pair of label images, which improves the accuracy of virtual-wall detection, and the pair of label images determines the area type of the area to be identified, which ensures the accuracy of area recognition.
  • FIG. 3 shows a flowchart of another area recognition method provided by this embodiment; it can be applied to the robot 101 in the implementation environment described above. On the basis of the embodiment shown in FIG. 2, step 202 may specifically include the following steps:
  • Step 302: Obtain the first label image.
  • In one embodiment of this application, while moving, the robot collects environment images in real time through the image acquisition module and recognizes the first label image in them. The image acquisition module may have a fixed orientation or an arbitrary orientation; this embodiment does not limit it. When the module has an arbitrary orientation, the robot saves the module's current orientation at the moment the first label image is acquired.
  • Step 304: Analyze the first label image to obtain the relative positional relationship between the first label image and the second label image.
  • In one embodiment of this application, the first label image contains its relative positional relationship to the second label image. According to the image type of the first label image, the robot selects a corresponding image recognition and analysis method and analyzes the first label image to obtain that relative positional relationship.
  • Step 306: According to the relative positional relationship, instruct the robot to move in the direction of the second label image to obtain the second label image.
  • In one embodiment of this application, the robot combines the acquired relative positional relationship with the collection direction of the image acquisition module and moves in the direction of the second label image to obtain it.
  • In a specific embodiment, the first label image contains "left" information and the second label image contains "right" information; this "left"/"right" information reflects the relative positional relationship between the two images. That is, the first label image is the left-hand image of the pair, and after recognizing and parsing the "left" information in it the robot can search to the right to find the second label image of the pair. Suppose the robot collected the first label image while facing due north and parses the "left" information from it; that information indicates the second label image lies to the right of the collection direction, i.e., due east. The robot keeps the image acquisition module facing due north and moves due east to obtain the second label image.
  • In the area recognition method provided by this embodiment, the robot analyzes the first label image to obtain the relative positional relationship between the first and second label images, and that relationship directs the robot to move toward the second label image to obtain it. Because each label image carries the relative position of the other, the robot can quickly and accurately find the second image even when it has found only one, which speeds up construction of the virtual wall.
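The patent does not specify how a label encodes its side and type, so the sketch below assumes a simple "pair_id|side|area_type|scene" text payload (for example, the string read out of a two-dimensional code); all field names are hypothetical.

```python
# Sketch of parsing a decoded label payload under the assumed encoding.
from dataclasses import dataclass

@dataclass
class LabelInfo:
    pair_id: str      # identifies which pair of labels this one belongs to
    side: str         # "left" or "right" relative to its partner
    area_type: str    # "work" or "non_work"
    scene: str        # "kitchen", "bedroom", ...

def parse_label(payload: str) -> LabelInfo:
    pair_id, side, area_type, scene = payload.split("|")
    if side not in ("left", "right"):
        raise ValueError(f"unexpected side marker: {side}")
    return LabelInfo(pair_id, side, area_type, scene)

def search_direction(label: LabelInfo) -> str:
    """A 'left' label tells the robot its partner lies to the right, and vice versa."""
    return "right" if label.side == "left" else "left"

# Usage: a left-hand, non-working-area kitchen label of pair "door-03".
first = parse_label("door-03|left|non_work|kitchen")
assert search_direction(first) == "right"
```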
  • When the robot recognizes the first label image it may be in a specific working mode, and after acquiring the second label image it needs to re-enter that mode. FIG. 4 therefore shows a flowchart of another area recognition method provided by an embodiment of this application; it can be applied to the robot 101 in the implementation environment described above. On the basis of the embodiment shown in FIG. 3, the following steps may be performed before step 306:
  • Step 402: Save the robot's current position as the to-be-worked position.
  • In one embodiment of this application, when the robot has obtained the first label image, and before it is instructed to move toward the second label image, it saves its current relative position coordinates as the to-be-worked position.
  • In a specific embodiment, the robot is in a specific working mode before it acquires the first label image. Acquiring the first label image triggers the virtual-wall establishment procedure, so the current working mode must be suspended while the robot fetches the second label image to complete that procedure. Fetching the second label image takes the robot away from its current working position; saving the current position as the to-be-worked position lets the robot return to it quickly and re-enter the specific working mode.
  • Step 404: After acquiring the second label image, return to the to-be-worked position.
  • In one embodiment of this application, as soon as the robot obtains the second label image it returns to the to-be-worked position; on the way back it can carry out operations such as analyzing the second label image and establishing the virtual wall.
  • In the area recognition method provided by this embodiment, the robot's current position is saved as the to-be-worked position, and the robot returns to it after obtaining the second label image. The robot can therefore return to its previous position quickly once the virtual wall has been established and immediately resume its previous working mode, minimizing the disruption the virtual-wall procedure causes to the current work.
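As a sketch of this suspend-and-return behavior, assuming a hypothetical robot API with pose() and navigate_to() methods (neither is named in the patent):

```python
# Sketch of saving and restoring the to-be-worked position around the detour
# the robot makes to fetch the second label image; all methods are assumed.
class WallSetupSession:
    def __init__(self, robot):
        self.robot = robot
        self.to_be_worked_pos = None

    def begin(self):
        # Called after the first label image is recognized, before moving away.
        self.to_be_worked_pos = self.robot.pose()

    def finish(self):
        # Called once the second label image is acquired; analysis of the second
        # image and wall establishment can run while driving back.
        self.robot.navigate_to(self.to_be_worked_pos)
```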
  • FIG. 5 shows a flowchart of another area recognition method provided by an embodiment of the present application.
  • the area recognition method can be applied to the robot 101 in the implementation environment described above. Based on the embodiment shown in FIG. 2, the above step 208 may specifically include the following steps:
  • Step 502: Separately obtain first data information corresponding to the first label image and second data information corresponding to the second label image, where the first data information and the second data information indicate the area type of the area to be identified.
  • Specifically, the first and second data information may indicate whether the area to be identified is a working area or a non-working area. Further, they may also characterize the scene type of the area to be identified; scene types may include living room, bedroom, kitchen, and the like.
  • In one embodiment of this application, when the robot obtains the first label image it analyzes it to obtain the first data information; correspondingly, when it obtains the second label image it analyzes it to obtain the second data information.
  • In another embodiment, when the robot obtains the first and second label images it simply saves them, and when the robot's computational load falls below a preset threshold it analyzes the saved label images to obtain the first and second data information.
  • Step 504: Determine the area type of the area to be identified according to the first data information and the second data information.
  • In the area recognition method provided by this embodiment, the first data information corresponding to the first label image and the second data information corresponding to the second label image are obtained separately, and the area type of the area to be identified is determined from them. The robot can thus identify the area type quickly from the data carried in the label images, and by deploying different first and second label images the area type of an area can be changed simply and flexibly, suiting a variety of scenarios.
  • In practical applications, a user may place two label images that do not belong to the same pair on the two sides of the entrance of the area to be identified, which can cause recognition errors. The embodiments of this application therefore provide another area recognition method, applicable to the robot 101 in the implementation environment described above. On the basis of the embodiments above, step 504 may specifically include the following steps.
  • The robot compares the acquired first data information with the second data information and decides from the result whether the virtual wall was set successfully. If the first data information is the same as the second data information, the virtual wall is judged to have been set successfully, and the robot sets the area type contained in both as the area type of the area to be identified.
  • If the first data information differs from the second data information, the robot judges that setting the virtual wall failed: it deletes from the electronic map the virtual-wall data previously established from the first and second position information and sends an alarm to the server or terminal. The alarm indicates that establishing the virtual wall failed and that the first and second label images on the two sides of the entrance are not from the same pair, so that the user replaces at least one of them.
  • In the area recognition method provided by this embodiment, the robot judges whether the first and second data information match, and on that basis declares the virtual wall successfully set or failed and raises an alarm. When it finds the label images set up incorrectly, the robot can alert the user, solving the recognition errors that arise when a user places two label images from different pairs on the two sides of the entrance.
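A sketch of this consistency check, reusing the hypothetical LabelInfo from the earlier sketch; the robot.map and send_alarm calls are likewise assumptions for illustration, not the patent's API.

```python
# Sketch: the wall stands only if both labels carry the same pair id and area
# data; otherwise the tentative wall is rolled back and an alarm is raised.
def validate_pair(first: "LabelInfo", second: "LabelInfo") -> bool:
    return (first.pair_id == second.pair_id
            and first.area_type == second.area_type
            and first.scene == second.scene)

def finish_wall_setup(robot, first: "LabelInfo", second: "LabelInfo") -> None:
    if validate_pair(first, second):
        robot.map.commit_virtual_wall()        # keep the wall on the electronic map
        robot.set_area_type(first.area_type)   # both labels agree on the area type
    else:
        robot.map.delete_virtual_wall()        # drop the wall built from the positions
        robot.send_alarm("virtual wall setup failed: the two labels are not a "
                         "pair; replace at least one label image")
```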
  • In a robot's actual working scenario, besides distinguishing working areas from non-working areas, the specific scene type of the area to be identified often also needs to be recognized so that a matching working mode can be used inside it. The embodiments of this application therefore provide another area recognition method, applicable to the implementation environment described above. On the basis of the embodiments above, the first data information and the second data information include the scene type of the area to be identified, and after step 504 the method may specifically include: setting the cleaning mode of the area to be identified to the cleaning mode corresponding to that scene type.
  • Specifically, after acquiring the first and second data information, which include the scene type of the area to be identified, the robot can further extract that scene type and set the cleaning mode of the area to be identified to the mode corresponding to it. When the robot later enters the area to be identified, it cleans it using that scene-specific cleaning mode.
  • The scene type is the room type of the area to be identified and may include kitchen, living room, bedroom, and so on. Because the environmental dirt differs between scene types, the cleaning mode must be adapted accordingly.
  • In a specific embodiment, if the area to be identified is recognized as a kitchen, where oily fumes are heavy, its cleaning mode can be set to a kitchen-specific mode, for example one that increases mopping power and water output. If the area is recognized as a bedroom, where hair is more common, its cleaning mode can be set to a bedroom-specific mode, for example one that increases fan suction and side-brush speed.
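A sketch of a scene-to-cleaning-mode table matching the examples above; the field names and numeric settings are illustrative assumptions, not values from the patent.

```python
# Sketch: look up a cleaning mode from the scene type decoded off the labels.
from dataclasses import dataclass

@dataclass
class CleaningMode:
    mop_power: int        # mopping power, percent
    water_output: int     # water output, percent
    fan_suction: int      # fan suction, percent
    side_brush_rpm: int   # side-brush speed

CLEANING_MODES = {
    # kitchen: heavy oily fumes -> more mopping power and water output
    "kitchen": CleaningMode(mop_power=100, water_output=100, fan_suction=60, side_brush_rpm=900),
    # bedroom: more hair -> stronger suction and faster side brush
    "bedroom": CleaningMode(mop_power=40, water_output=30, fan_suction=100, side_brush_rpm=1500),
    "default": CleaningMode(mop_power=60, water_output=50, fan_suction=80, side_brush_rpm=1200),
}

def mode_for_scene(scene: str) -> CleaningMode:
    return CLEANING_MODES.get(scene, CLEANING_MODES["default"])
```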
  • In the area recognition method provided by this embodiment, obtaining the scene type of the area to be identified from the first and second data information allows the area to be further classified by scene, which broadens the applicability of this application to different scenarios.
  • The embodiments of this application also provide another area identification method, applicable to the implementation environment described above. On the basis of the embodiments above, the following steps may be performed after step 208:
  • If the area type of the area to be identified is determined to be a non-working area, mark it as a non-working area in the electronic map and prohibit the robot from entering it.
  • Specifically, the robot recognizes from the first and second label images that the area to be identified is a non-working area and marks it in the electronic map with a color, text, or similar indicator. The marked map can be synchronized to the server over the communication link, and the server can in turn push it to the other terminals connected to it, so that the user can follow the current division of areas.
  • If the area type of the area to be identified is determined to be a working area, mark it as a working area in the electronic map.
  • Specifically, in the same way as above, the robot marks the working-area information of the area to be identified in the electronic map with a color, text, or similar indicator; the color used for working areas differs clearly from the color used for non-working areas.
  • In the area recognition method provided by this embodiment, marking areas to be identified as working or non-working on the electronic map lets the user follow the current division of areas more easily and conveniently.
  • The embodiments of this application also provide another area identification method, applicable to the implementation environment described above. On the basis of the embodiments above, marking the area to be identified as a working area in the electronic map may specifically include the following steps:
  • Mark the area to be identified as an area to be cleaned, and clean it after cleaning of the current area is finished.
  • Specifically, when the area type of the area to be identified is judged to be a working area, the robot marks it as an area to be cleaned and continues cleaning the current area. If further working areas are recognized while the current area is being cleaned, they are likewise marked as areas to be cleaned, until cleaning of the current area is finished.
  • Further, if several areas to be cleaned exist once the current area is finished, they can be cleaned in the order in which they were recognized; alternatively, an optimal cleaning order can be generated from the relative positions of the areas to be cleaned, and the areas cleaned in that order (see the scheduling sketch after this list).
  • As shown in FIG. 6, while cleaning the hall the robot recognizes that the bedroom, study, and kitchen are working areas and saves them as areas to be cleaned; after finishing the hall it cleans the study, bedroom, and kitchen in a certain order, applying a different cleaning mode to each scene type.
  • In this embodiment, because the current area is finished first and the queued areas are then cleaned in order, the time wasted switching cleaning modes is reduced when areas use different modes, and the battery drain of repeated mode switches is avoided.
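The patent only says an optimal cleaning order is generated from the areas' relative positions; the greedy nearest-neighbor pass below is one assumed heuristic for that, not the patented algorithm.

```python
# Sketch: order queued areas by repeatedly visiting the nearest unvisited one.
import math
from typing import Dict, List, Tuple

Point = Tuple[float, float]

def cleaning_order(robot_pos: Point, areas: Dict[str, Point]) -> List[str]:
    order, here, pending = [], robot_pos, dict(areas)
    while pending:
        name = min(pending, key=lambda n: math.dist(here, pending[n]))
        order.append(name)
        here = pending.pop(name)
    return order

# Usage: entrance coordinates of the queued rooms on the electronic map.
print(cleaning_order((0, 0), {"study": (2, 1), "bedroom": (5, 4), "kitchen": (1, 6)}))
```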
  • The embodiments of this application also provide another area identification method, applicable to the implementation environment described above. On the basis of the embodiments above, marking the area to be identified as a working area in the electronic map may alternatively include the following steps:
  • Save the robot's current position as the to-be-worked position; after cleaning of the area to be identified is finished, return to the to-be-worked position and continue cleaning the current area.
  • Specifically, while cleaning the current area, whenever the robot recognizes that the area type of an area to be identified is a working area, it stops cleaning the current area, saves the relative position coordinates of its current position on the electronic map as the to-be-worked position, and immediately cleans the recognized area using the corresponding cleaning mode. After stopping, the robot moves toward the area to be identified and, on entering it, cleans it with the corresponding mode; when that cleaning is finished it returns to the to-be-worked position and resumes cleaning the current area.
  • As shown in FIG. 7, when the robot recognizes during cleaning of the hall that the bedroom is a working area, it immediately cleans the bedroom with the bedroom's cleaning mode, then returns to its previous working position in the hall and continues cleaning it; likewise, when the study and kitchen are detected to be working areas, the detected areas are cleaned immediately, until cleaning of the hall is complete.
  • In this embodiment, during cleaning of the current area, if an area to be identified is found to be a working area, the robot immediately cleans it in the corresponding cleaning mode and then resumes cleaning the current area. This guarantees that areas to be identified are cleaned promptly, and saving the to-be-worked position avoids leaving the current area incompletely cleaned because another area was cleaned in the middle.
  • The embodiments of this application also provide another area identification method, applicable to the implementation environment described above. On the basis of the embodiments above, after the step of marking the area to be identified as a working area in the electronic map when its area type is determined to be a working area, the method may specifically include the following step: mark the scene type and cleaning progress of each area in the electronic map.
  • Specifically, each time the robot detects an area to be identified it analyzes that area's scene type and promptly marks the recognized scene type on the corresponding area of the electronic map. Once marking is complete, the robot can synchronize the marked map to the server, and the server can synchronize it to the other terminals connected to it, so that the user can follow the area type of every area on the current map.
  • In addition, while cleaning the various areas, the robot updates each area's cleaning progress at a certain frequency and marks that progress on each area of the electronic map. The cleaning progress may be the time already spent cleaning, the estimated remaining cleaning time, the area already cleaned, the area remaining to be cleaned, and so on; it may be marked with text, with color intensity, with a color-filled region, or in similar ways. As before, once marking is complete the robot can synchronize the marked map to the server, and the server can synchronize it to the other terminals connected to it, so that the user can follow the cleaning progress of every area on the current map.
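A sketch of per-area progress bookkeeping as described above; the fields and the synchronization hook are assumptions for illustration only.

```python
# Sketch: track the cleaning-progress quantities named above for each area.
from dataclasses import dataclass
from typing import Dict

@dataclass
class AreaStatus:
    scene: str
    cleaned_s: float = 0.0       # time already spent cleaning
    cleaned_m2: float = 0.0      # area already cleaned
    remaining_m2: float = 0.0    # area still to clean

progress: Dict[str, AreaStatus] = {"bedroom": AreaStatus("bedroom", remaining_m2=12.0)}

def update_progress(name: str, dt_s: float, dm2: float) -> None:
    st = progress[name]
    st.cleaned_s += dt_s
    st.cleaned_m2 += dm2
    st.remaining_m2 = max(0.0, st.remaining_m2 - dm2)
    # after marking, the electronic map would be synced to the server/terminals here
```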
  • FIG. 8 shows a flowchart of another area identification method provided by an embodiment of the present application.
  • the area identification method can be applied to the implementation environment described above.
  • the above step 204 may specifically include the following steps:
  • Step 902: Obtain the first position coordinates and the first shooting direction at the moment the robot recognizes the first label image.
  • Step 904: Obtain the second position coordinates and the second shooting direction at the moment the robot recognizes the second label image.
  • In this embodiment, whenever the robot detects a label image it saves its current position coordinates, i.e., its relative position on the electronic map, together with the shooting direction of the image acquisition module when the label image was collected.
  • Step 906: Separately calculate the first and second area proportions that the first and second label images occupy in the environment image.
  • Specifically, before the image acquisition module recognizes a label image it has already been collecting, in real time, environment images that contain the label image, and it uses an image recognition algorithm to locate the label image within the environment image. The robot then calculates the proportion of the environment image's area that the label image occupies. This area proportion reflects the robot's distance from the label image; for example, the smaller the proportion, the farther the robot is from the label image.
  • Step 908: From the preset correspondence between area proportion and shooting distance, determine the first shooting distance corresponding to the first area proportion and the second shooting distance corresponding to the second area proportion.
  • As described above, the distance between the robot and a label image corresponds in a definite way to the label image's area proportion, so from the preset correspondence the robot can obtain the first shooting distance for the first area proportion and the second shooting distance for the second area proportion.
  • Step 910: Obtain the position information corresponding to the first label image from the first position coordinates, the first shooting direction, and the first shooting distance.
  • Step 912: Obtain the position information corresponding to the second label image from the second position coordinates, the second shooting direction, and the second shooting distance.
  • In this embodiment, because the robot has the coordinate position, shooting direction, and shooting distance at the moment each label image was photographed, it can compute the actual coordinate position of each label image on the electronic map.
  • In the area recognition method provided by this embodiment, acquiring the coordinate position, shooting direction, and shooting distance at the moment each label image is photographed yields more accurate relative coordinates for the label images on the electronic map, which in turn guarantees the accuracy of the virtual-wall placement and improves the accuracy with which the area to be identified is divided and recognized.
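Steps 902-912 amount to projecting the shooting distance along the shooting direction from the robot's position. The sketch below works under that reading; the proportion-to-distance table is a hypothetical calibration, since the patent only states that such a preset correspondence exists.

```python
# Sketch: recover a label's map coordinates from the robot's pose at the
# moment the label was photographed, via an assumed calibration table.
import math

# (area proportion of the label in the environment image, shooting distance in m),
# sorted by proportion in descending order; values are illustrative.
CALIBRATION = [(0.20, 0.5), (0.05, 1.0), (0.0125, 2.0)]

def distance_from_proportion(prop: float) -> float:
    """Linear interpolation over the preset proportion/distance correspondence."""
    for (p_hi, d_hi), (p_lo, d_lo) in zip(CALIBRATION, CALIBRATION[1:]):
        if p_lo <= prop <= p_hi:
            t = (prop - p_lo) / (p_hi - p_lo)
            return d_lo + t * (d_hi - d_lo)
    # clamp outside the calibrated range
    return CALIBRATION[0][1] if prop > CALIBRATION[0][0] else CALIBRATION[-1][1]

def label_position(robot_xy, heading_rad, proportion):
    """Project the shooting distance along the shooting direction from the robot."""
    d = distance_from_proportion(proportion)
    return (robot_xy[0] + d * math.cos(heading_rad),
            robot_xy[1] + d * math.sin(heading_rad))

# Usage: robot at (3.0, 2.0) facing along +x; the label fills 5% of the frame.
print(label_position((3.0, 2.0), 0.0, 0.05))  # -> (4.0, 2.0)
```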
  • In one embodiment of this application, the first label image and the second label image are provided on image cards, and the image cards are attached by suction cups to the two sides of the entrance of the area to be identified.
  • In this embodiment, the combination of suction cup and image card attaches the label image to a door frame or wall, so the label image can be taken down at will and its mounting position changed, which is very convenient. In a further embodiment, the suction cup and the image card can be designed to form a certain tilt angle, so that the robot can capture the image more easily, improving recognition.
  • In a further embodiment, the surface of each label image is covered with a fluorescent layer. The fluorescent layer allows the label image to be collected and recognized by the robot even in weak light.
  • It should be understood that although the steps in the flowcharts of FIGS. 2-5 and 8 are displayed in the order indicated by the arrows, these steps are not necessarily executed in that order. Unless explicitly stated herein, the execution of these steps is not strictly ordered, and they may be executed in other orders. Moreover, at least some of the steps in FIGS. 2-5 and 8 may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different moments, and whose execution order is not necessarily sequential; they may be executed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
  • FIG. 9 shows a block diagram of a robot 1000 provided by an embodiment of the present application.
  • As shown in FIG. 9, the robot 1000 may include an image recognition module 1001, an acquisition module 1002, a virtual wall setting module 1003, and an area recognition module 1004, where:
  • the image recognition module 1001 is used to recognize the first label image and the second label image.
  • the acquiring module 1002 is configured to respectively acquire the first location information corresponding to the first label image and the second location information corresponding to the second label image.
  • the virtual wall setting module 1003 is configured to set a virtual wall between the first label image and the second label image according to the first position information and the second position information.
  • The area recognition module 1004 is configured to determine the area type of the area to be identified according to the first label image and the second label image, the area to be identified being the area, bounded by the virtual wall, that does not contain the robot.
  • In one embodiment of this application, the image recognition module 1001 is specifically configured to: obtain the first label image; analyze the first label image to obtain the relative positional relationship between the first and second label images; and, according to that relationship, instruct the robot to move in the direction of the second label image to obtain it.
  • In one embodiment of this application, the image recognition module 1001 is further configured to: save the robot's current position as the to-be-worked position; and, after the second label image is acquired, return to the to-be-worked position.
  • In one embodiment of this application, the acquisition module 1002 is specifically configured to: obtain the first position coordinates and first shooting direction when the robot recognizes the first label image; obtain the second position coordinates and second shooting direction when the robot recognizes the second label image; separately calculate the first and second area proportions of the first and second label images in the environment image; determine, from the preset correspondence between area proportion and shooting distance, the first shooting distance corresponding to the first area proportion and the second shooting distance corresponding to the second area proportion; obtain the position information of the first label image from the first position coordinates, first shooting direction, and first shooting distance; and obtain the position information of the second label image from the second position coordinates, second shooting direction, and second shooting distance.
  • In one embodiment of this application, the area recognition module 1004 is specifically configured to: separately obtain first data information corresponding to the first label image and second data information corresponding to the second label image, the first and second data information indicating the area type of the area to be identified; and determine the area type of the area to be identified from the first and second data information.
  • In one embodiment of this application, the area recognition module 1004 is further configured to: if the first data information is the same as the second data information, judge that the virtual wall is set successfully; and if the first data information differs from the second data information, judge that setting the virtual wall failed and issue an alarm.
  • Referring to FIG. 10, the embodiments of this application also provide a robot 1100. In addition to the modules of the robot 1000, the robot 1100 may optionally include an area type setting module 1005, a scene type setting module 1006, and a work control module 1007, where:
  • The area type setting module 1005 is configured to: if the area type of the area to be identified is determined to be a non-working area, mark the area to be identified as a non-working area in the electronic map and prohibit the robot from entering it.
  • In one embodiment of this application, the area type setting module 1005 is further configured to: if the area type of the area to be identified is determined to be a working area, mark the area to be identified as a working area in the electronic map.
  • In one embodiment of this application, the first data information and the second data information include the scene type of the area to be identified, and the scene type setting module 1006 is configured to set the cleaning mode of the area to be identified to the cleaning mode corresponding to that scene type.
  • The work control module 1007 is configured to: mark the area to be identified as an area to be cleaned, and clean it after cleaning of the current area is finished.
  • In one embodiment of this application, the work control module 1007 is further configured to: save the robot's current position as the to-be-worked position; and, after cleaning of the area to be identified is finished, return to the to-be-worked position and continue cleaning the current area.
  • In one embodiment of this application, the work control module 1007 is further configured to mark the scene type and cleaning progress of each area in the electronic map.
  • For the specific limitations of the robot, refer to the limitations of the area identification method above; they are not repeated here. Each module of the robot described above may be implemented wholly or partly in software, hardware, or a combination of the two. The modules may be embedded in, or independent of, a processor of the computer device in hardware form, or stored in the memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to each module.
  • In one embodiment, a robot is provided whose internal structure may be as shown in FIG. 11. The robot includes a processor, a memory, a network interface, an image acquisition module, and a database connected by a system bus. The processor provides computing and control capability. The memory includes a non-volatile storage medium and internal memory; the non-volatile storage medium stores an operating system, a computer program, and a database, and the internal memory provides the environment in which the operating system and computer program in the non-volatile storage medium run. The network interface communicates with external terminals over a network connection. When executed by the processor, the computer program implements an area identification method.
  • Those skilled in the art can understand that FIG. 11 is only a block diagram of part of the structure related to the solution of this application and does not limit the robots to which the solution applies; a specific robot may include more or fewer components than shown, combine certain components, or arrange the components differently.
  • In one embodiment, a robot is provided, including a memory and a processor. A computer program is stored in the memory, and when executing it the processor implements the following steps: recognizing a first label image and a second label image, the two having a corresponding relationship; separately obtaining first position information corresponding to the first label image and second position information corresponding to the second label image; setting a virtual wall between the two label images according to the first and second position information; and determining the area type of the area to be identified according to the two label images, the area to be identified being the area, bounded by the virtual wall, that does not contain the robot.
  • In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored. When executed by a processor, the computer program implements the following steps: recognizing a first label image and a second label image, the two having a corresponding relationship; separately obtaining first position information corresponding to the first label image and second position information corresponding to the second label image; setting a virtual wall between the two label images according to the first and second position information; and determining the area type of the area to be identified according to the two label images, the area to be identified being the area, bounded by the virtual wall, that does not contain the robot.
  • Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double-data-rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Image Analysis (AREA)
  • Manipulator (AREA)

Abstract

An area recognition method includes: recognizing a first label image and a second label image; separately obtaining first position information corresponding to the first label image and second position information corresponding to the second label image; setting a virtual wall between the first label image and the second label image according to the first and second position information; and determining the area type of an area to be identified according to the first and second label images, the area to be identified being the area, bounded by the virtual wall, that does not contain the robot (101). The method improves the accuracy of virtual-wall detection and hence the accuracy of recognizing the area to be identified. A robot and a storage medium are also provided.

Description

区域识别方法、机器人和存储介质
本申请要求于2019年06月24日提交中国专利局,申请号为201910479524.6,申请名称为″区域识别方法、机器人和存储介质″的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及机器人领域,特别是涉及一种区域识别方法、机器人和存储介质。
背景技术
随着机器人技术的发展,机器人已经可以取代人类完成一部分室内的工作。在机器人进行移动工作时,通常需要限定机器人的工作区域,使得机器人只在工作区工作并禁止机器人进入非工作区,例如,仅在客厅或卧室中工作,并禁止进入卫生间及厨房等。
现有划分工作区及非工作区的方案主要有两种:方案一,采用红外虚拟墙限定工作区域,红外虚拟墙产生装置发射一束红外光作为虚拟墙,机器人探测器探测到该光束后,后退离开该光束,使得机器人工作在红外虚拟墙产生装置限定的区域内;方案二,采用磁条虚拟墙限定工作区域,将磁条铺设在地面上,机器人设置的传感器探测到该磁条发出的磁信号时,机器人后退离开该磁条,使得机器人工作在磁条所限定的工作区域内。
然而,上述两种限制机器人工作区域的方案中,建立的虚拟墙会因红外装置、磁条的失效而失效,降低了机器人对虚拟墙检测的准确率,进而使得机器人无法准确识别虚拟墙划分的区域,出现机器人误进入到错误区域的问题。
发明内容
基于此,有必要针对上述技术问题,提供一种能够准确识别区域类型,避免机器人误入错误区域的区域识别方法、装置、机器人和存储介质。
第一方面,提供了一种区域识别方法,所述方法包括:
识别第一标签图像及第二标签图像;所述第一标签图像与所述第二标签图像存在对应关系;
分别获取所述第一标签图像对应的第一位置信息和所述第二标签图像对应的第二位置信息;
根据所述第一位置信息及所述第二位置信息在所述第一标签图像和所述第二标签图像之间设置虚拟墙;
根据所述第一标签图像及所述第二标签图像确定待识别区域的区域类型,所述待识别区域为以所述虚拟墙为界限划分的不包括机器人的区域。
在其中一个实施例中,所述识别所述第一标签图像及所述第二标签图像,包括:
获取所述第一标签图像;
分析所述第一标签图像,得到所述第一标签图像与所述第二标签图像的相对位置关系;
根据所述相对位置关系指示所述机器人向所述第二标签图像方向移动以获取所述第二标签图像。
在其中一个实施例中,在指示所述机器人向所述第二标签图像方向移动的步骤之前,所述方法还包括:
保存所述机器人当前位置作为待工作位置;
在获取到所述第二标签图像后,返回所述待工作位置。
在其中一个实施例中,所述根据所述第一标签图像及所述第二标签图像 确定待识别区域的区域类型,所述方法还包括:
分别获取所述第一标签图像对应的第一数据信息和所述第二标签图像对应的第二数据信息,所述第一数据信息及所述第二数据信息用于指示所述待识别区域的区域类型;
根据所述第一数据信息及所述第二数据信息判定所述待识别区域的区域类型。
在其中一个实施例中,所述方法还包括:
若所述第一数据信息与所述第二数据信息相同,判定虚拟墙设置成功;
若所述第一数据信息与所述第二数据信息不同,判定虚拟墙设置失败,发出告警提示。
在其中一个实施例中,所述第一数据信息及所述第二数据信息包含所述待识别区域的场景类型,所述方法还包括:
将所述待识别区域的清扫模式设置为与所述场景类型对应的清扫模式。
在其中一个实施例中,所述方法还包括:
若判定所述待识别区域的区域类型为非工作区,在电子地图中将所述待识别区域标记为非工作区域,并禁止所述机器人进入非工作区。
在其中一个实施例中,所述方法还包括:
若判定所述待识别区域的区域类型为工作区,在电子地图中将所述待识别区域标记为工作区域。
在其中一个实施例中,所述方法还包括:
将所述待识别区域标记为待清扫区域,在对当前区域清扫完成后对所述待清扫区域进行清扫;或
保存所述机器人当前位置作为待工作位置,在对所述待识别区域清扫完成后,返回所述待工作位置,继续清扫当前区域。
在其中一个实施例中,所述方法还包括:
在所述电子地图中标记每一区域的场景类型及清扫程度。
在其中一个实施例中,所述分别获取所述第一标签图像对应的第一位置信息和所述第二标签图像对应的第二位置信息,所述方法还包括:
获取所述机器人识别所述第一标签图像时的第一位置坐标和第一拍摄方向;
获取所述机器人识别所述第二标签图像时的第二位置坐标和第二拍摄方向;
分别计算所述第一标签图像及所述第二标签图像在环境图像中的第一面积比例及第二面积比例;
根据预设的面积比例和拍摄距离之间的对应关系,分别确定所述第一面积比例对应的第一拍摄距离,及所述第二面积比例对应的第二拍摄距离;
根据所述第一位置坐标、所述第一拍摄方向及所述第一拍摄距离获取所述第一标签图像对应的位置信息;
根据所述第二位置坐标、所述第二拍摄方向及所述第二拍摄距离获取所述第二标签图像对应的位置信息。
在其中一个实施例中,所述第一标签图像及所述第二标签图像设置在图像卡片上,所述图像卡片通过吸盘吸附在所述待识别区域入口的两侧。
在其中一个实施例中,所述标签图像表面覆盖有荧光层。
第二方面,提供了一种机器人,所述机器人包括:
图像识别模块,用于识别第一标签图像及第二标签图像;所述第一标签图像与所述第二标签图像存在对应关系;
获取模块,用于分别获取所述第一标签图像对应的第一位置信息和所述第二标签图像对应的第二位置信息;
虚拟墙设置模块,用于根据所述第一位置信息及所述第二位置信息在所述第一标签图像和所述第二标签图像之间设置虚拟墙;
区域识别模块,用于根据所述第一标签图像及所述第二标签图像确定待识别区域的区域类型,所述待识别区域为以所述虚拟墙为界限划分的不包括机器人的区域。
第三方面,提供一种机器人,包括存储器和处理器,所述存储器存储有计算机程序,所述处理器执行所述计算机程序时实现以下步骤:
识别第一标签图像及第二标签图像;所述第一标签图像与所述第二标签图像存在对应关系;
分别获取所述第一标签图像对应的第一位置信息和所述第二标签图像对应的第二位置信息;
根据所述第一位置信息及所述第二位置信息在所述第一标签图像和所述第二标签图像之间设置虚拟墙;
根据所述第一标签图像及所述第二标签图像确定待识别区域的区域类型;所述待识别区域为以所述虚拟墙为界限划分的不包括机器人的区域。
第四方面,提供一种计算机可读存储介质,其上存储有计算机程序,所述计算机程序被处理器执行时实现以下步骤:
识别第一标签图像及第二标签图像;所述第一标签图像与所述第二标签图像存在对应关系;
分别获取所述第一标签图像对应的第一位置信息和所述第二标签图像对应的第二位置信息;
根据所述第一位置信息及所述第二位置信息在所述第一标签图像和所述第二标签图像之间设置虚拟墙;
根据所述第一标签图像及所述第二标签图像确定待识别区域的区域类型;所述待识别区域为以所述虚拟墙为界限划分的不包括机器人的区域。
上述区域识别方法、装置、机器人和存储介质,通过识别第一标签图像及第二标签图像,分别获取所述第一标签图像对应的第一位置信息和所述第 二标签图像对应的第二位置信息,根据所述第一位置信息及所述第二位置信息在所述第一标签图像和所述第二标签图像之间设置虚拟墙。根据所述第一标签图像及所述第二标签图像确定待识别区域的区域类型。可以在一组标签图像之间建立虚拟墙,进而实现对当前区域的划分,并且通过标签图像携带的信息确定待识别区域的区域类型,提高了虚拟墙检测的准确度。
附图说明
图1为本申请实施例提供的区域识别方法的实施环境图;
图2为本申请实施例提供的一种区域识别方法的流程图;
图3为本申请实施例提供的另一种区域识别方法的流程图;
图4为本申请实施例提供的另一种区域识别方法的流程图;
图5为本申请实施例提供的另一种区域识别方法的流程图;
图6为本申请实施例提供的一种清扫流程示意图;
图7为本申请实施例提供的另一种清扫流程示意图;
图8为本申请实施例提供的另一种区域识别方法的流程图;
图9为本申请实施例提供的一种机器人的框图;
图10为本申请实施例提供的另一种机器人的框图;
图11为本申请实施例提供的一种机器人的框图。
具体实施方式
为了使本申请的目的、技术方案及优点更加清楚明白,以下结合附图及实施例,对本申请进行进一步详细说明。应当理解,此处描述的具体实施例仅仅用以解释本申请,并不用于限定本申请。
本申请提供的区域识别方法,可以应用于如图1所示的实施环境中。在一个实施例中,机器人101可以与终端设备103直接通信。在另一个可选的实施例中,所述机器人101可以与服务器102进行通信,所述服务器102可 以与终端设备103进行通信。其中,机器人101可以但不限于是各种智能机器人、自移动机器人和扫地机器人,服务器104可以用独立的服务器或者是多个服务器组成的服务器集群来实现,终端设备103可以但不限于是智能手机、台式计算机、笔记本电脑、掌上计算机等。
请参考图2,其示出了本实施例提供的一种区域识别方法的流程图,该区域识别方法可以应用于上文所述的实施环境中的机器人101中。
步骤202,识别第一标签图像及第二标签图像;所述第一标签图像与所述第二标签图像存在对应关系。
其中,所述第一标签图像及所述第二标签图像可以任意形式的标签图像,具体可以为条形码,二维码,文字或其他特定图像。对所述第一标签图像及所述第二标签图像的识别方法可以根据标签图像形式的不同而选择对应的识别方法,本发明实施例对此不做详细限定。
具体的,所述第一标签图像与所述第二标签图像存在对应关系,并且成组出现。
具体地,机器人包含图像采集模块,所述图像采集模块用于识别所述第一标签图像及第二标签图像,在机器人在电子地图中按照预设路线移动时,所述图像采集模块会实时采集环境图像,并根据预设的图像识别方法在所述环境图像中识别以得到所述第一标签图像及第二标签图像。所述电子地图在本申请流程开始之前已经建立,并保存在机器人的存储器中。同样的,所述电子地图可以保存在与所述机器人通信的服务器中,同时,所述电子地图也可以保存在与所述机器人或所述服务器通信的终端设备中。每当所述电子地图发生更改时,都会在机器人、服务器及终端设备中同步。
步骤204,分别获取所述第一标签图像对应的第一位置信息和所述第二标签图像对应的第二位置信息。
在本申请的一个实施例中,机器人会依次获取第一标签图像的第一位置信息及所述第二标签图像的第二位置信息,所述第一位置信息和所述第二位置信息分别用于表示所述第一标签图像和所述第二标签图像在电子地图的相对坐标。
具体的,所述机器人在电子地图中按照预设路线移动,当机器人的图像采集模块识别到所述第一标签图像时,所述机器人获取自身在所述电子地图中的相对位置坐标,由于机器人识别到所述第一标签图像时与所述第一标签图像距离较近,可以将机器人的相对坐标近似的作为所述第一标签图像在电子地图中的相对坐标。
相应的,当机器人识别到所述第二标签图像时,将此时所述机器人在电子地图中的相对位置坐标作为所述第二位置信息。
步骤206,根据所述第一位置信息及所述第二位置信息在所述第一标签图像和所述第二标签图像之间设置虚拟墙。
具体的,机器人在获取到所述第一标签图像和所述第二标签图像分别在所述电子地图中的相对坐标之后,在这两个相对坐标之间建立连线,并将所述两个相对坐标及其之间的连线即为设置的虚拟墙。接着,机器人根据该虚拟墙信息对所述电子地图进行修改,即在所述电子地图中增加设置好的虚拟墙。另外,机器人会将修改后的电子地图同步到服务器及终端设备中。
步骤208,根据所述第一标签图像及所述第二标签图像确定待识别区域的区域类型;所述待识别区域为以所述虚拟墙为界限划分的不包括机器人的区域。
在本申请的一个实施例中,建立好的虚拟墙会将之前的区域划为以所述虚拟墙为界限的两个区域,其中,所述机器人所在的一侧区域为当前区域,以所述虚拟墙为界限划分的不包括机器人的区域为待识别区域。
此外,所述第一标签图像及所述第二标签图像还包含所述待识别区域的 区域类型,所述机器人通过分析所述第一标签图像及所述第二标签图像,获得所述待识别区域的区域类型。
在本申请实施例提供的区域识别方法中,通过识别第一标签图像及第二标签图像,分别获取所述第一标签图像对应的第一位置信息和所述第二标签图像对应的第二位置信息,根据所述第一位置信息及所述第二位置信息在所述第一标签图像和所述第二标签图像之间设置虚拟墙,根据所述第一标签图像及所述第二标签图像确定待识别区域的区域类型,其中,所述待识别区域为以所述虚拟墙为界限划分的不包括机器人的区域。根据本申请实施例提供的区域识别方法,可以通过一组标签图像快速的建立对应的虚拟墙,提高了虚拟墙检测的准确度,并且通过该组标签图像确定待识别区域的区域类型,从而保证了区域识别的准确性。
请参考图3,其示出了本实施例提供的另一种区域识别方法的流程图,该区域识别方法可以应用于上文所述的实施环境中的机器人101中。在上述图2所示实施例的基础上,上述步骤202具体可以包括以下步骤:
步骤302,获取所述第一标签图像。
在本申请的一个实施例中,机器人在移动过程中通过图像采集模块实时采集环境图像并识别得到所述第一标签图像。所述图像采集模块可以是固定朝向的,也可以是任意朝向的,本实施例对此不作限定。当图像采集模块是任意朝向时,在获取到所述第一标签图像时,会保存当前朝向信息。
步骤304,分析所述第一标签图像,得到所述第一标签图像与所述第二标签图像的相对位置关系。
在本申请的一个实施例中,所述第一标签图像中包含与所述第二标签图像的相对位置关系,机器人会根据第一标签图像的图像类型,采用对应的图像识别分析方法对所述第一标签图像进行分析,得到所述第一标签图像与所述第二标签图像的相对位置关系。
步骤306,根据所述相对位置关系指示所述机器人向所述第二标签图像方向移动以获取所述第二标签图像。
在本申请的一个实施例中,机器人会根据获取到的相对位置关系,结合图像采集模块的采集方向,向所述第二标签图像方向移动以获取所述第二标签图像。
在一个具体的实施例中,所述第一标签图像中包含″左″信息,所述第二标签图像中包含″右″信息,所述″左″″右″信息反映了所述第一标签图像与所述第二标签图像的相对位置关系。也就是说,所述第一标签图像为这一组标签图像的左侧标签图像,所述机器人在识别并分析得到第一标签图像中的″左″信息时,可以根据所述″左″信息向右寻找以得到这一组标签图像中的第二标签图像。机器人采集到所述第一标签图像的方向是正北方向,所述机器人分析所述第一标签图像得到″左″信息,所述″左″信息表示第二标签图像在采集方向的右侧,即正北方向的右侧方向(正东方向),所述机器人将图像采集模块的采集方向保持为正北方向,并向正东方向移动,以获取所述第二标签图像。
在本申请实施例提供的区域识别方法中,通过分析所述第一标签图像,得到所述第一标签图像与所述第二标签图像的相对位置关系,并根据所述相对位置关系指示所述机器人向所述第二标签图像方向移动以获取所述第二标签图像。由于在便签图像中添加了另一个标签图像的相对位置关系,可以使得机器人在只发现一个标签图像的情况下,也可以快速并准确的寻找出另一个标签图像,提高了虚拟墙建设的速度。
在机器人识别到所述第一标签图像时,可能正处于一种特定工作模式,当获取到所述第二标签图像之后,需要重新进入到特定工作模式中,因此,请参阅图4,其示出了本申请实施例提供的另一种区域识别方法的流程图,该区域识别方法可以应用于上文所述的实施环境中的机器人101中。在上述 图3所示实施例的基础上,步骤306之前,具体可以包括以下步骤:
步骤402,保存所述机器人当前位置作为待工作位置。
在本申请的一个实施例中,当所述机器人获取到所述第一标签图像,并在指示所述机器人向所述第二标签图像方向移动之前,会保存当前的相对位置坐标作为待工作位置。
在一个具体的实施例中,所述机器人在获取到第一标签图像之前,正处于一种特定工作模式。由于获取到了第一标签图像,触发了虚拟墙建立进程,需要暂停当前的特定工作模式去获取第二标签图像以完成虚拟墙建立进程。但是,获取第二标签图像时会离开当前工作位置,保存当前位置作为待工作位置的目的是为了使所述机器人可以迅速返回先前位置,并重新进入特定工作模式中。
步骤404,在获取到所述第二标签图像后,返回所述待工作位置。
在本申请的一个实施例中,在所述机器人获取到第二标签图像后,立即返回所述待工作位置,在返回待工作位置的途中,可以进行所述第二标签图像的分析、虚拟墙建立等操作。
在本申请实施例提供的区域识别方法中,通过保存所述机器人当前位置作为待工作位置,在获取到所述第二标签图像后,返回所述待工作位置。使得机器人在完成虚拟墙建立进程之后,可以快速返回先前位置,并立即恢复之前的工作模式,进而使得虚拟墙建立进程对当前工作模式的影响程度降到最低。请参阅图5,其示出了本申请实施例提供的另一种区域识别方法的流程图,该区域识别方法可以应用于上文所述的实施环境中的机器人101中。在上述图2所示实施例的基础上,上述步骤208具体可以包括以下步骤:
步骤502,分别获取所述第一标签图像对应的第一数据信息和所述第二标签图像对应的第二数据信息,所述第一数据信息及所述第二数据信息用于指示所述待识别区域的区域类型。
具体的,所述第一数据信息及所述第二数据信息可以用于指示所述待识别区域是工作区或是非工作区。进一步的,所述第一数据信息及所述第二数据信息还可以用于表征所述待识别区域的场景类型,场景类型可以包括客厅、卧室、厨房等。
在本申请的一个实施例中,当机器人获取到所述第一标签图像时,分析所述第一标签图像,以得到所述第一数据信息;相应的,当机器人获取到所述第二标签图像时,分析所述第二标签图像,以得到所述第二数据信息。
在本申请的另一个实施例中,当机器人获取到所述第一标签图像及所述第二标签图像时,保存所述第一标签图像及所述第二标签图像,在机器人的计算负载低于预设阈值时,分析保存的第一标签图像及第二标签图像,以得到所述第一数据信息及所述第二数据信息。
步骤504,根据所述第一数据信息及所述第二数据信息判定所述待识别区域的区域类型。
在本申请实施例提供的区域识别方法中,通过别获取所述第一标签图像对应的第一数据信息和所述第二标签图像对应的第二数据信息,根据所述第一数据信息及所述第二数据信息判定所述待识别区域的区域类型。使得机器人可以根据标签图像中的数据信息快速识别所述待识别区域的区域类型。并且,通过设置不同的第一标签图像和所述第二标签图像,可以简单灵活对所述待识别区域进行区域类型的更改,可以适用于多种场景。
在本申请的实际应用中,由于用户将不是同一组的两个标签图像设置在待识别区域入口的两侧,可能会出现机器人识别错误的问题。因此,本申请实施例还提供了另一种区域识别方法,该区域识别方法可以应用于上文所述的实施环境中的机器人101中。在上文所述的实施例的基础上,上述步骤504具体可以包括以下步骤:
若所述第一数据信息与所述第二数据信息相同,判定虚拟墙设置成功。
具体的,机器人将获取到的所述第一数据信息及所述第二数据信息进行对比,根据对比结果确定虚拟墙是否设置成功。若所述第一数据信息与所述第二数据信息相同,判定虚拟墙设置成功。机器人将所述第一数据信息与所述第二数据信息包含的相同的区域类型信息设置为所述待识别区域的区域类型。
若所述第一数据信息与所述第二数据信息不同,判定虚拟墙设置失败,发出告警提示。
在本实施例中,若所述第一数据信息与所述第二数据信息不同,机器人判定虚拟墙设置失败,机器人在电子地图中将之前根据第一位置信息及第二位置信息建立的虚拟墙数据删除。并发出告警提示至服务器或终端,所述告警提示用于表示该虚拟墙建立失败,并指示待识别区域入口两侧的第一标签图像与第二标签图像不是同一组标签图像,以使用户至少更换其中一个标签图像。
在本申请实施例提供的区域识别方法中,通过判断所述第一数据信息与所述第二数据信息是否相同,进而判定虚拟墙设置成功或判定虚拟墙设置失败,发出告警提示。机器人可以在发现标签图像设置错误的情况下,以告警信息提示用户。解决了因用户将不是同一组的两个标签图像设置在待识别区域入口的两侧,可能会出现机器人识别错误的问题。
在机器人的实际工作场景中,除了对于工作区和非工作区的划分,往往还需要对所述带识别区域的具体场景类型进行识别,以便采用对应的工作模式在所述待识别区域内工作。因此,本申请实施例还提供了另一种区域识别方法,该区域识别方法可以应用于上文所述的实施环境中。在上文所述的实施例的基础上,所述第一数据信息及所述第二数据信息包含所述待识别区域的场景类型,在步骤504之后,具体可以包括:将所述待识别区域的清扫模 式设置为与所述场景类型对应的清扫模式。
具体的,机器人在获取到所述第一数据信息及所述第二数据信息之后,由于所述第一数据信息及所述第二数据信息包含所述待识别区域的场景类型,机器人还可以进一步获取到所述待识别区域的场景类型。接着,机器人将所述待识别区域的清扫模式设置为与所述场景类型对应的清扫模式。进而使得机器人进入所述待识别区域内时,便会采用与所述场景类型对应的清扫模式对所述待识别区域进行清扫。
其中,所述场景类型为所述待识别区域的房间类型,可以包括:厨房,客厅,卧室等。由于每种场景类型中存在的环境垃圾不同,因此,需要采用的清扫模式也需要适应性的更改。
在一个具体的实施例中,若识别出所述待识别区域的区域类型为厨房时,由于厨房油烟较重,便可以将所述带识别区域的清扫模式设置为针对于厨房的清扫模式,例如采用加大拖地的功率和出水量的模式。当识别出所述待识别区域的区域类型为卧室时,由于卧室毛发较多,便可以将所述带识别区域的清扫模式设置为针对于卧室的清扫模式,例如采用加大风机的吸力和边刷的转速的模式。
在本申请实施例提供的区域识别方法中,通过获取所述第一数据信息及所述第二数据信息中的所述待识别区域的场景类型,可以更进一步的对所述待识别区域进行场景类型的划分,继而进一步的提升了本申请在不同场景的下的应用型。
本申请实施例还提供了另一种区域识别方法,该区域识别方法可以应用于上文所述的实施环境中。在上文所述的实施例的基础上,在步骤208之后,具体可以包括以下步骤:
若判定所述待识别区域的区域类型为非工作区,在电子地图中将所述待 识别区域标记为非工作区域,并禁止所述机器人进入非工作区。
具体的,机器人通过第一标签图像和第二标签图像识别出待识别区域的区域类型为非工作区,并通过颜色、文字等标示对电子地图中该待识别区域进行标记,标记后的电子地图可以通过通信连接同步至服务器中,服务器也可将标记后的电子地图同步发送至与其连接的其他终端,以使用户掌握当前区域划分状态。
若判定所述待识别区域的区域类型为工作区,在电子地图中将所述待识别区域标记为工作区域。
具体的,与上文标记方式相同,机器人会将待识别区域的工作区信息通过颜色、文字等标示对电子地图中该待识别区域进行标记,工作区的颜色标记与非工作区的颜色标记有明显差别。
在本申请实施例提供的区域识别方法中,通过在所述电子地图中对待识别区域标记工作区和非工作区,使得用户可以更加轻松便捷地掌握当前区域划分状态。
本申请实施例还提供了另一种区域识别方法,该区域识别方法可以应用于上文所述的实施环境中。在上文所述的实施例的基础上,所述若判定所述待识别区域的区域类型为工作区,在电子地图中将所述待识别区域标记为工作区域,具体可以包括以下步骤:
将所述待识别区域标记为待清扫区域,在对当前区域清扫完成后对所述待清扫区域进行清扫。
具体的,当判断所述待识别区域的区域类型为工作区时,机器人会将其标记为待清扫区域,并继续对当前区域进行清扫。若在对当前区域进行清扫的过程中,再一次识别到其他的工作区,将其他工作区也设置为待清扫区域,直到当前区域完成清扫工作。
进一步的,在对当前区域清扫完成后,若存在多个待清扫区域,可以按照识别到的顺序依次进行清扫。也可以根据所述多个待清扫区域的相对位置,生成一个最优清扫顺序,并按照该最优清扫顺序对所述多个待清扫区域进行清扫。
如图6所示,机器人在对大厅进行打扫的过程中,识别到卧室、书房、厨房是工作区域,机器人将卧室、书房、厨房保存为待工作区域,在对大厅清扫完成之后,按照一定顺序对书房、卧室、厨房进行清扫,并且针对不同场景类型的待工作区域采用不同的清扫模式。
在本实施例中,由于先完成对当前区域的清扫,再依顺序对多个待识别区域进行清扫,在各个区域的清扫模式不相同的情况下,减少了清扫模式切换时间的浪费,也避免了多次切换清扫模式而造成的电量损失。
本申请实施例还提供了另一种区域识别方法,该区域识别方法可以应用于上文所述的实施环境中。在上文所述的实施例的基础上,所述若判定所述待识别区域的区域类型为工作区,在电子地图中将所述待识别区域标记为工作区域,具体可以包括以下步骤:
保存所述机器人当前位置作为待工作位置;
在对所述待识别区域清扫完成后,返回所述待工作位置,继续清扫当前区域。
具体的,机器人对当前区域的清扫过程中,每当识别所述待识别区域的区域类型为工作区时,停止对当前区域的清扫,保存当前位置在电子地图的相对位置坐标作为待工作位置,立即对所述带识别区域按照对应的清扫模式进行清扫。在机器人停止对当前区域清扫时,向所述待识别区域方向移动,当进入所述待识别区域时,采用对应的清扫模式进行清扫。然后,当完成所述待识别区域的清扫工作后,返回所述待工作位置,并继续对当前区域进行 清扫。
As shown in FIG. 7, while cleaning the hall, the robot immediately cleans the bedroom with the bedroom's cleaning mode as soon as it identifies the bedroom as a working area, and on finishing the bedroom it returns to its previous working position in the hall and continues cleaning it; likewise, when the study and the kitchen are detected as working areas, the robot immediately cleans each detected area, until cleaning of the hall is complete.
In this embodiment, during the cleaning of the current area, if the robot finds that an area to be identified is a working area, it immediately cleans that area with the corresponding cleaning mode and, once done, resumes cleaning the current area. This guarantees that areas to be identified are cleaned promptly, and saving the pending-work position prevents the current area from being left incompletely cleaned because another area was cleaned in between.
The embodiments of the present application further provide another area recognition method, which can be applied in the implementation environment described above. On the basis of the embodiments described above, after the step of marking the area to be identified as a working area in the electronic map when its area type is determined to be a working area, the method may specifically include the following step:
mark the scene type and the cleaning progress of each area in the electronic map.
Specifically, whenever the robot detects an area to be identified, it analyzes the area's scene type and promptly marks the identified scene type on the corresponding area in the electronic map. After marking, the robot can synchronize the marked electronic map to the server, and the server can forward it to other terminals connected to it, so that the user knows the area type of each area in the current electronic map.
In addition, while cleaning the individual areas, the robot updates the cleaning progress of each area at a certain frequency and marks that progress on each area of the electronic map. The cleaning progress may be, for example, the time already spent cleaning, the estimated remaining cleaning time, the area already cleaned, or the area remaining to be cleaned. It may be marked with text annotations, color depth, color fill area, and so on. Likewise, after marking, the robot can synchronize the marked electronic map to the server, and the server can forward it to other terminals connected to it, so that the user knows the cleaning progress of each area in the current electronic map.
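A minimal sketch of such periodic progress marking, assuming the same hypothetical map and server interfaces as above and using the cleaned-area fraction as the metric (one of the options just listed):

```python
def update_cleaning_progress(emap, server, area):
    # Fraction of the area already cleaned; deeper shading = more cleaned.
    done = area.cleaned_m2 / area.total_m2
    emap.annotate(area.id, text=f"{done:.0%} cleaned")
    emap.shade(area.id, intensity=done)
    server.sync_map(emap)  # keep connected terminals up to date
```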
Referring to FIG. 8, which shows a flowchart of another area recognition method provided by the embodiments of the present application, this area recognition method can be applied in the implementation environment described above. On the basis of the embodiment shown in FIG. 2, step 204 may specifically include the following steps:
Step 902: acquire the first position coordinates and the first shooting direction at the time the robot identifies the first label image;
Step 904: acquire the second position coordinates and the second shooting direction at the time the robot identifies the second label image;
In this embodiment, whenever the robot detects a label image, it saves its current position coordinates, that is, its relative position in the electronic map, together with the shooting direction of the image acquisition module when the label image was captured.
Step 906: calculate the first area ratio and the second area ratio of the first label image and the second label image, respectively, in the environment image;
Specifically, before recognizing a label image, the image acquisition module captures in real time an environment image containing the label image, and the label image is then recognized from the environment image using an image recognition algorithm. The robot then computes the area ratio of the label image within the environment image. This area ratio reflects the robot's distance from the label image: for example, the smaller the area ratio, the farther the robot is from the label image.
Step 908: according to a preset correspondence between area ratio and shooting distance, determine the first shooting distance corresponding to the first area ratio and the second shooting distance corresponding to the second area ratio;
As described above, the shooting distance between the robot and a label image corresponds to the area ratio in a definite way, so the robot can obtain the first shooting distance corresponding to the first area ratio and the second shooting distance corresponding to the second area ratio from the preset correspondence between area ratio and shooting distance.
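Steps 906 and 908 can be illustrated with a short Python sketch: the label's area ratio is its bounding-box area over the frame area, and an assumed calibration table maps ratios to shooting distances by interpolation. The table values are invented for illustration and are not from this application:

```python
import numpy as np

# Assumed preset correspondence: smaller area ratio = farther away.
RATIOS = np.array([0.012, 0.025, 0.050, 0.100, 0.200])  # ascending for np.interp
DISTANCES_M = np.array([8.0, 4.0, 2.0, 1.0, 0.5])

def label_area_ratio(bbox, frame_shape):
    # Ratio of the label's pixel area to the whole environment image.
    (x0, y0, x1, y1), (h, w) = bbox, frame_shape[:2]
    return ((x1 - x0) * (y1 - y0)) / float(h * w)

def shooting_distance(ratio):
    # Interpolate the preset ratio-to-distance table.
    return float(np.interp(ratio, RATIOS, DISTANCES_M))

print(shooting_distance(0.05))  # -> 2.0 meters under the assumed table
```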
Step 910: obtain the position information corresponding to the first label image from the first position coordinates, the first shooting direction, and the first shooting distance;
Step 912: obtain the position information corresponding to the second label image from the second position coordinates, the second shooting direction, and the second shooting distance.
In this embodiment, since the robot has the position coordinates, shooting direction, and shooting distance at the time each label image was captured, it can derive the label image's actual coordinate position in the electronic map.
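The projection from robot pose to label position reduces to one trigonometric step; the formula below is an assumed formalization consistent with the description (position coordinates, shooting direction taken as an angle on the map, and shooting distance):

```python
import math

def label_map_position(robot_xy, shooting_direction_rad, shooting_distance):
    # Project forward from the robot's map position along the shooting
    # direction by the estimated shooting distance.
    x, y = robot_xy
    return (x + shooting_distance * math.cos(shooting_direction_rad),
            y + shooting_distance * math.sin(shooting_direction_rad))

# Example: robot at (3.0, 2.0) facing 90 degrees sees the label 2 m ahead.
print(label_map_position((3.0, 2.0), math.pi / 2, 2.0))  # ~ (3.0, 4.0)
```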
In the area recognition method provided by the embodiments of the present application, by acquiring the position coordinates, shooting direction, and shooting distance at the time a label image is captured, a more accurate relative coordinate position of the label image in the electronic map is obtained, which ensures the accuracy of the virtual wall placement and thereby improves the accuracy of dividing and identifying the area to be identified.
In one embodiment of the present application, the first label image and the second label image are provided on image cards, and the image cards are attached by suction cups to the two sides of the entrance of the area to be identified. In this embodiment, the combination of suction cup and image card allows the label image to be attached to a door frame or wall, removed at will, and re-attached elsewhere, which is highly convenient. In a further embodiment, the suction cup and the image card can be designed to form a certain tilt angle, making it easier for the robot to capture the image and improving image recognition.
In a further embodiment, the surface of the label image is covered with a fluorescent layer. The fluorescent layer allows the label image to be captured and recognized by the robot even in weak light.
It should be understood that, although the steps in the flowcharts of FIGS. 2-5 and 8 are displayed in the order indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, there is no strict ordering constraint on their execution, and they may be executed in other orders. Moreover, at least some of the steps in FIGS. 2-5 and 8 may comprise multiple sub-steps or stages, which need not be completed at the same moment but may be executed at different times, and whose execution order need not be sequential; they may be executed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
Referring to FIG. 9, which shows a block diagram of a robot 1000 provided by an embodiment of the present application. As shown in FIG. 9, the robot 1000 may include an image recognition module 1001, an acquisition module 1002, a virtual wall setting module 1003, and an area identification module 1004, where:
the image recognition module 1001 is configured to identify a first label image and a second label image;
the acquisition module 1002 is configured to acquire first position information corresponding to the first label image and second position information corresponding to the second label image, respectively;
the virtual wall setting module 1003 is configured to set a virtual wall between the first label image and the second label image according to the first position information and the second position information;
the area identification module 1004 is configured to determine the area type of an area to be identified according to the first label image and the second label image, the area to be identified being an area bounded by the virtual wall that does not contain the robot.
In one embodiment of the present application, the image recognition module 1001 is specifically configured to: acquire the first label image; analyze the first label image to obtain the relative positional relationship between the first label image and the second label image; and instruct the robot, according to the relative positional relationship, to move toward the second label image to acquire the second label image.
In one embodiment of the present application, the image recognition module 1001 is configured to: save the robot's current position as a pending-work position; and return to the pending-work position after the second label image has been acquired.
In one embodiment of the present application, the acquisition module 1002 is specifically configured to: acquire the first position coordinates and the first shooting direction at the time the robot identifies the first label image; acquire the second position coordinates and the second shooting direction at the time the robot identifies the second label image; calculate the first area ratio and the second area ratio of the first label image and the second label image, respectively, in the environment image; determine, according to a preset correspondence between area ratio and shooting distance, the first shooting distance corresponding to the first area ratio and the second shooting distance corresponding to the second area ratio; obtain the position information corresponding to the first label image from the first position coordinates, the first shooting direction, and the first shooting distance; and obtain the position information corresponding to the second label image from the second position coordinates, the second shooting direction, and the second shooting distance.
In one embodiment of the present application, the area identification module 1004 is specifically configured to: acquire first data information corresponding to the first label image and second data information corresponding to the second label image, respectively, the first data information and the second data information indicating the area type of the area to be identified; and determine the area type of the area to be identified according to the first data information and the second data information.
In one embodiment of the present application, the area identification module 1004 is specifically configured to: determine that the virtual wall is set successfully if the first data information is identical to the second data information; and determine that the virtual wall setup has failed and issue an alert if the first data information differs from the second data information.
Referring to FIG. 8, an embodiment of the present application further provides a robot 1100. In addition to the modules included in the robot 1000, the robot 1100 may optionally further include an area type setting module 1005, a scene type setting module 1006, and a work control module 1007, where:
the area type setting module 1005 is configured to: if the area type of the area to be identified is determined to be a non-working area, mark the area to be identified as a non-working area in the electronic map and forbid the robot from entering the non-working area.
In one embodiment of the present application, the area type setting module 1005 is configured to: if the area type of the area to be identified is determined to be a working area, mark the area to be identified as a working area in the electronic map.
In one embodiment of the present application, the first data information and the second data information contain the scene type of the area to be identified, and the scene type setting module 1006 is configured to set the cleaning mode of the area to be identified to the cleaning mode corresponding to the scene type.
The work control module 1007 is configured to: mark the area to be identified as an area to be cleaned, and clean the area to be cleaned after cleaning of the current area is complete.
In one embodiment of the present application, the work control module 1007 is configured to: save the robot's current position as a pending-work position; and, after cleaning of the area to be identified is complete, return to the pending-work position and continue cleaning the current area.
In one embodiment of the present application, the work control module 1007 is configured to mark the scene type and the cleaning progress of each area in the electronic map.
For the specific limitations of the robot, see the limitations of the area recognition method above, which are not repeated here. Each of the modules in the robot described above may be implemented wholly or partly in software, hardware, or a combination of the two. The modules may be embedded in, or independent of, a processor in a computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke them to perform the operations corresponding to each module.
In one embodiment, a robot is provided whose internal structure may be as shown in FIG. 9. The robot includes a processor, a memory, a network interface, an image acquisition module, and a database connected through a system bus. The processor of the robot provides computing and control capability. The memory of the robot includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The network interface of the robot is used to communicate with external terminals over a network connection. The computer program, when executed by the processor, implements an area recognition method.
Those skilled in the art will understand that the structure shown in FIG. 9 is merely a block diagram of part of the structure relevant to the solution of the present application and does not limit the robot to which the solution is applied; a specific robot may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a robot is provided that includes a memory and a processor, the memory storing a computer program that, when executed by the processor, implements the following steps:
identifying a first label image and a second label image, the first label image and the second label image having a correspondence relationship;
acquiring first position information corresponding to the first label image and second position information corresponding to the second label image, respectively;
setting a virtual wall between the first label image and the second label image according to the first position information and the second position information;
determining the area type of an area to be identified according to the first label image and the second label image, the area to be identified being an area bounded by the virtual wall that does not contain the robot.
In one embodiment, a computer-readable storage medium is provided on which a computer program is stored; when executed by a processor, the computer program implements the following steps:
identifying a first label image and a second label image, the first label image and the second label image having a correspondence relationship;
acquiring first position information corresponding to the first label image and second position information corresponding to the second label image, respectively;
setting a virtual wall between the first label image and the second label image according to the first position information and the second position information;
determining the area type of an area to be identified according to the first label image and the second label image, the area to be identified being an area bounded by the virtual wall that does not contain the robot.
Those of ordinary skill in the art will understand that all or part of the processes in the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware. The computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the above methods. Any reference to memory, storage, database, or other media used in the embodiments provided in the present application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features of the above embodiments have been described; however, as long as a combination of these technical features contains no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention patent. It should be noted that those of ordinary skill in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within the scope of protection of the present application. Therefore, the scope of protection of this patent application shall be governed by the appended claims.

Claims (16)

  1. An area recognition method, characterized in that the method comprises:
    identifying a first label image and a second label image, the first label image and the second label image having a correspondence relationship;
    acquiring first position information corresponding to the first label image and second position information corresponding to the second label image, respectively;
    setting a virtual wall between the first label image and the second label image according to the first position information and the second position information;
    determining the area type of an area to be identified according to the first label image and the second label image, the area to be identified being an area bounded by the virtual wall that does not contain the robot.
  2. The method according to claim 1, characterized in that identifying the first label image and the second label image comprises:
    acquiring the first label image;
    analyzing the first label image to obtain the relative positional relationship between the first label image and the second label image;
    instructing the robot, according to the relative positional relationship, to move toward the second label image to acquire the second label image.
  3. The method according to claim 2, characterized in that, before the step of instructing the robot to move toward the second label image, the method further comprises:
    saving the robot's current position as a pending-work position;
    returning to the pending-work position after the second label image has been acquired.
  4. The method according to claim 1, characterized in that determining the area type of the area to be identified according to the first label image and the second label image comprises:
    acquiring first data information corresponding to the first label image and second data information corresponding to the second label image, respectively, the first data information and the second data information indicating the area type of the area to be identified;
    determining the area type of the area to be identified according to the first data information and the second data information.
  5. The method according to claim 4, characterized in that the method further comprises:
    if the first data information is identical to the second data information, determining that the virtual wall is set successfully;
    if the first data information differs from the second data information, determining that the virtual wall setup has failed and issuing an alert.
  6. The method according to claim 4, characterized in that the first data information and the second data information contain the scene type of the area to be identified, and the method further comprises:
    setting the cleaning mode of the area to be identified to the cleaning mode corresponding to the scene type.
  7. The method according to any one of claims 1 to 6, characterized in that the method further comprises:
    if the area type of the area to be identified is determined to be a non-working area, marking the area to be identified as a non-working area in the electronic map and forbidding the robot from entering the non-working area.
  8. The method according to any one of claims 1 to 6, characterized in that the method further comprises:
    if the area type of the area to be identified is determined to be a working area, marking the area to be identified as a working area in the electronic map.
  9. The method according to claim 8, characterized in that the method further comprises:
    marking the area to be identified as an area to be cleaned, and cleaning the area to be cleaned after cleaning of the current area is complete; or
    saving the robot's current position as a pending-work position and, after cleaning of the area to be identified is complete, returning to the pending-work position and continuing to clean the current area.
  10. The method according to claim 8, characterized in that the method further comprises:
    marking the scene type and the cleaning progress of each area in the electronic map.
  11. The method according to any one of claims 1 to 6, characterized in that acquiring the first position information corresponding to the first label image and the second position information corresponding to the second label image, respectively, comprises:
    acquiring the first position coordinates and the first shooting direction at the time the robot identifies the first label image;
    acquiring the second position coordinates and the second shooting direction at the time the robot identifies the second label image;
    calculating the first area ratio and the second area ratio of the first label image and the second label image, respectively, in the environment image;
    determining, according to a preset correspondence between area ratio and shooting distance, the first shooting distance corresponding to the first area ratio and the second shooting distance corresponding to the second area ratio;
    obtaining the position information corresponding to the first label image from the first position coordinates, the first shooting direction, and the first shooting distance;
    obtaining the position information corresponding to the second label image from the second position coordinates, the second shooting direction, and the second shooting distance.
  12. The method according to claim 1, characterized in that the first label image and the second label image are provided on image cards, the image cards being attached by suction cups to the two sides of the entrance of the area to be identified.
  13. The method according to claim 1, characterized in that the surface of the label image is covered with a fluorescent layer.
  14. A robot, characterized by comprising:
    an image recognition module configured to identify a first label image and a second label image, the first label image and the second label image having a correspondence relationship;
    an acquisition module configured to acquire first position information corresponding to the first label image and second position information corresponding to the second label image, respectively;
    a virtual wall setting module configured to set a virtual wall between the first label image and the second label image according to the first position information and the second position information;
    an area identification module configured to determine the area type of an area to be identified according to the first label image and the second label image, the area to be identified being an area bounded by the virtual wall that does not contain the robot.
  15. A robot comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 13.
  16. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 13.
PCT/CN2020/095049 2019-06-24 2020-06-09 Area recognition method, robot and storage medium WO2020259274A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910548570.7 2019-06-24
CN201910548570.7A CN110450152A (zh) 2019-06-24 2019-06-24 Area recognition method, robot and storage medium

Publications (1)

Publication Number Publication Date
WO2020259274A1 true WO2020259274A1 (zh) 2020-12-30

Family

ID=68480818

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/095049 WO2020259274A1 (zh) 2019-06-24 2020-06-09 Area recognition method, robot and storage medium

Country Status (2)

Country Link
CN (1) CN110450152A (zh)
WO (1) WO2020259274A1 (zh)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110450152A (zh) * 2019-06-24 2019-11-15 广东宝乐机器人股份有限公司 区域识别方法、机器人和存储介质
CN113128545B (zh) * 2020-01-16 2023-08-29 科沃斯机器人股份有限公司 机器人采集样本的方法以及其装置
CN111198549B (zh) * 2020-02-18 2020-11-06 湖南伟业动物营养集团股份有限公司 一种基于大数据的家禽养殖监测管理系统
CN111248818B (zh) * 2020-03-05 2021-08-13 美智纵横科技有限责任公司 一种状态控制方法、扫地机器人及计算机存储介质
CN111399502A (zh) * 2020-03-09 2020-07-10 惠州拓邦电气技术有限公司 移动机器人的建图方法、装置及移动机器人
CN111374614A (zh) * 2020-03-19 2020-07-07 北京小米移动软件有限公司 清洁设备的控制方法、装置及存储介质
CN111523334B (zh) * 2020-04-09 2023-09-19 美智纵横科技有限责任公司 虚拟禁区的设置方法、装置、终端设备、标签和存储介质
CN111539398B (zh) * 2020-07-13 2021-10-01 追觅创新科技(苏州)有限公司 自移动设备的控制方法、装置及存储介质
CN112171659A (zh) * 2020-08-17 2021-01-05 深圳市优必选科技股份有限公司 一种机器人及其限制区域识别方法和装置
CN112363516A (zh) * 2020-10-26 2021-02-12 深圳优地科技有限公司 虚拟墙生成方法、装置、机器人及存储介质
CN113183141A (zh) * 2021-06-09 2021-07-30 乐聚(深圳)机器人技术有限公司 双足机器人的行走控制方法、装置、设备及存储介质
CN114339593A (zh) * 2021-12-21 2022-04-12 美智纵横科技有限责任公司 可移动设备及其控制方法、控制装置、可读存储介质

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101021267B1 (ko) * 2010-09-20 2011-03-11 주식회사 모뉴엘 Cleaning robot system and control method thereof
TW201240636A (en) * 2011-04-11 2012-10-16 Micro Star Int Co Ltd Cleaning system
CN104062973B (zh) * 2014-06-23 2016-08-24 西北工业大学 Mobile robot SLAM method based on image marker recognition
AU2016214109B2 (en) * 2015-02-05 2021-07-01 Grey Orange Pte. Ltd. Apparatus and method for navigation path compensation
US9868211B2 (en) * 2015-04-09 2018-01-16 Irobot Corporation Restricting movement of a mobile robot
CN106155049A (zh) 2015-04-15 2016-11-23 小米科技有限责任公司 Intelligent cleaning device, guiding method therefor, guide post, and intelligent cleaning system

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160113195A1 (en) * 2014-10-28 2016-04-28 Deere & Company Robotic mower navigation system
CN108227687A (zh) * 2016-12-09 2018-06-29 广东德豪润达电气股份有限公司 Method for an intelligent robot to recognize a virtual boundary, travel method, and beacon
CN109421067A (zh) * 2017-08-31 2019-03-05 Neato机器人技术公司 Robot virtual boundaries
CN109744945A (zh) * 2017-11-08 2019-05-14 杭州萤石网络有限公司 Area attribute determination method, apparatus and system, and electronic device
CN107981790A (zh) * 2017-12-04 2018-05-04 深圳市沃特沃德股份有限公司 Indoor area division method and sweeping robot
CN107997690A (zh) * 2017-12-04 2018-05-08 深圳市沃特沃德股份有限公司 Indoor area division method and sweeping robot
CN110450152A (zh) * 2019-06-24 2019-11-15 广东宝乐机器人股份有限公司 Area recognition method, robot and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117173415A (zh) * 2023-11-03 2023-12-05 南京特沃斯清洁设备有限公司 Visual analysis method and system for large floor scrubbers
CN117173415B (zh) * 2023-11-03 2024-01-26 南京特沃斯清洁设备有限公司 Visual analysis method and system for large floor scrubbers

Also Published As

Publication number Publication date
CN110450152A (zh) 2019-11-15


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20830927

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20830927

Country of ref document: EP

Kind code of ref document: A1