WO2019091310A1 - Region attribute determination - Google Patents

Region attribute determination

Info

Publication number
WO2019091310A1
WO2019091310A1 · PCT/CN2018/112923 · CN2018112923W
Authority
WO
WIPO (PCT)
Prior art keywords
area
obstacle
cleaning
target
map
Prior art date
Application number
PCT/CN2018/112923
Other languages
English (en)
French (fr)
Inventor
沈冰伟
蒋腻聪
朱建华
郭斌
杜安强
蒋海青
Original Assignee
杭州萤石软件有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 杭州萤石软件有限公司
Priority to PL18876322.1T (PL3689215T3)
Priority to ES18876322T (ES2951843T3)
Priority to EP18876322.1A (EP3689215B1)
Priority to US16/762,448 (US11877716B2)
Publication of WO2019091310A1

Classifications

    • A - HUMAN NECESSITIES
    • A47 - FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L - DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L9/00 - Details or accessories of suction cleaners, e.g. mechanical means for controlling the suction or for effecting pulsating action; Storing devices specially adapted to suction cleaners or parts thereof; Carrying-vehicles specially adapted for suction cleaners
    • A47L9/28 - Installation of the electric equipment, e.g. adaptation or attachment to the suction cleaner; Controlling suction cleaners by electric means
    • A47L9/2836 - Installation of the electric equipment, e.g. adaptation or attachment to the suction cleaner; Controlling suction cleaners by electric means characterised by the parts which are controlled
    • A47L9/2852 - Elements for displacement of the vacuum cleaner or the accessories therefor, e.g. wheels, casters or nozzles
    • A - HUMAN NECESSITIES
    • A47 - FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L - DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L11/00 - Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L11/24 - Floor-sweeping machines, motor-driven
    • A - HUMAN NECESSITIES
    • A47 - FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L - DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L11/00 - Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L11/40 - Parts or details of machines not provided for in groups A47L11/02 - A47L11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers
    • A - HUMAN NECESSITIES
    • A47 - FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L - DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L11/00 - Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L11/40 - Parts or details of machines not provided for in groups A47L11/02 - A47L11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers
    • A47L11/4011 - Regulation of the cleaning machine by electric means; Control systems and remote control systems therefor
    • A - HUMAN NECESSITIES
    • A47 - FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L - DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L11/00 - Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L11/40 - Parts or details of machines not provided for in groups A47L11/02 - A47L11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers
    • A47L11/4061 - Steering means; Means for avoiding obstacles; Details related to the place where the driver is accommodated
    • A - HUMAN NECESSITIES
    • A47 - FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L - DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L9/00 - Details or accessories of suction cleaners, e.g. mechanical means for controlling the suction or for effecting pulsating action; Storing devices specially adapted to suction cleaners or parts thereof; Carrying-vehicles specially adapted for suction cleaners
    • A47L9/009 - Carrying-vehicles; Arrangements of trollies or wheels; Means for avoiding mechanical obstacles
    • A - HUMAN NECESSITIES
    • A47 - FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L - DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L9/00 - Details or accessories of suction cleaners, e.g. mechanical means for controlling the suction or for effecting pulsating action; Storing devices specially adapted to suction cleaners or parts thereof; Carrying-vehicles specially adapted for suction cleaners
    • A47L9/28 - Installation of the electric equipment, e.g. adaptation or attachment to the suction cleaner; Controlling suction cleaners by electric means
    • A47L9/2805 - Parameters or conditions being sensed
    • A47L9/2826 - Parameters or conditions being sensed the condition of the floor
    • A - HUMAN NECESSITIES
    • A47 - FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L - DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L9/00 - Details or accessories of suction cleaners, e.g. mechanical means for controlling the suction or for effecting pulsating action; Storing devices specially adapted to suction cleaners or parts thereof; Carrying-vehicles specially adapted for suction cleaners
    • A47L9/28 - Installation of the electric equipment, e.g. adaptation or attachment to the suction cleaner; Controlling suction cleaners by electric means
    • A47L9/2894 - Details related to signal transmission in suction cleaners
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00 - Computing arrangements using knowledge-based models
    • G06N5/02 - Knowledge representation; Symbolic representation
    • A - HUMAN NECESSITIES
    • A47 - FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L - DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L2201/00 - Robotic cleaning machines, i.e. with automatic control of the travelling movement or the cleaning operation
    • A47L2201/04 - Automatic control of the travelling movement; Automatic obstacle detection
    • A - HUMAN NECESSITIES
    • A47 - FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L - DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L2201/00 - Robotic cleaning machines, i.e. with automatic control of the travelling movement or the cleaning operation
    • A47L2201/06 - Control of the cleaning action for autonomous devices; Automatic detection of the surface condition before, during or after cleaning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 - Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 - Target detection

Definitions

  • the present application relates to the field of artificial intelligence, and in particular to a method, device, system, and electronic device for determining region attributes.
  • intelligent cleaning equipment, such as sweeping robots, is a popular class of smart appliance that uses a degree of artificial intelligence to clean a target scene automatically.
  • in the related art, a virtual wall is a physical device that emits signals perceivable by the intelligent cleaning device, forming a virtual barrier that prevents the device from entering a specific area for cleaning.
  • the intelligent cleaning device thus realizes intelligent partitioning through virtual walls during cleaning;
  • however, each area to be isolated requires its own virtual wall device, which leads to high cost, and since a virtual wall needs a power supply, it is inconvenient to use.
  • in view of this, the present application provides a method, device, system, and electronic device for determining region attributes, to reduce the cost of implementing intelligent partitioning by the intelligent cleaning device.
  • in one aspect, the present application provides a method for determining region attributes, the method comprising: identifying an annotation line in a target map, wherein the target map is a map of the target scene to be cleaned, on which the intelligent cleaning device depends during the cleaning process; determining, based on the identified annotation line and the auxiliary objects in the target map, a closed area in the target map and a connected area based on the first position, wherein the auxiliary objects include the map boundary and obstacles,
  • and the first position is the position of a preset reference object in the target map; determining the closed area as a custom cleaning area in the cleaning process of the intelligent cleaning device; and determining the connected area as a normal cleaning area in the cleaning process of the intelligent cleaning device.
  • in another aspect, the present application provides a region attribute determining apparatus, comprising: an annotation line identification unit configured to identify an annotation line in a target map, wherein the target map is a map of the target scene to be cleaned, on which the intelligent cleaning device depends during the cleaning process; a closed and connected area determining unit configured to determine, based on the identified annotation line and the auxiliary objects in the target map, a closed area in the target map and a connected area based on the first position, wherein the auxiliary objects include the map boundary and obstacles, and the first position is the position of a preset reference object in the target map; and a region attribute determining unit configured to determine the closed area as a custom cleaning area in the cleaning process of the intelligent cleaning device, and to determine the connected area as a normal cleaning area in the cleaning process of the intelligent cleaning device.
  • in a further aspect, the present application provides an electronic device including an internal bus, a memory, a processor, and a communication interface.
  • the processor, the communication interface, and the memory communicate with each other through the internal bus.
  • the memory is configured to store machine executable instructions corresponding to the region attribute determining method.
  • the processor is configured to read the machine executable instructions from the memory and execute them to: identify an annotation line in a target map, wherein the target map is a map of the target scene to be cleaned, on which the intelligent cleaning device depends during the cleaning process; determine, based on the identified annotation line and the auxiliary objects in the target map, a closed area in the target map and a connected area based on the first position, wherein
  • the auxiliary objects include the map boundary and obstacles, and the first position is the position of a preset reference object in the target map; and determine the closed area as a custom
  • cleaning area in the cleaning process of the intelligent cleaning device, and the connected area as a normal cleaning area in that cleaning process.
  • in a further aspect, the present application provides an intelligent cleaning system including a mobile terminal and a smart cleaning device.
  • the mobile terminal is configured to obtain a user instruction about an annotation line in the target map and send the user instruction to the smart cleaning device, so that the smart cleaning device identifies the annotation line in the target map based on the user instruction,
  • wherein the target map is a map of the target scene to be cleaned, on which the intelligent cleaning device depends during the cleaning process.
  • the intelligent cleaning device is configured to: identify the annotation line in the target map; determine, based on the identified annotation line and the auxiliary objects in the target map, a closed area in the target map and a connected area based on the first position, wherein the auxiliary objects include the map boundary and obstacles, and the first position is the position of the preset reference object in the target map; determine the closed area as a custom cleaning area in the cleaning process of the intelligent cleaning device, and the connected area as a normal cleaning area in that process; and clean the target scene according to the determined custom cleaning area and normal cleaning area.
  • in a further aspect, the present application provides an intelligent cleaning system including a mobile terminal, a cloud server, and a smart cleaning device.
  • the mobile terminal is configured to obtain a user instruction about an annotation line in the target map and send the user instruction to the cloud server, so that the cloud server identifies the annotation line in the target map based on the user instruction,
  • wherein the target map is a map of the target scene to be cleaned, on which the smart cleaning device depends during the cleaning process.
  • the cloud server is configured to: identify the annotation line in the target map; determine, based on the identified annotation line and the auxiliary objects in the target map, a closed area in the target map and a connected area based on the first position, wherein the auxiliary objects include the map boundary and obstacles, and the first position is the position of the preset reference object in the target map; and determine the closed area as a custom cleaning area in the cleaning process of the smart cleaning device, and the connected area as a normal cleaning area in that process.
  • the intelligent cleaning device is configured to clean the target scene according to the custom cleaning area and the normal cleaning area determined by the cloud server.
  • in the solutions above, the user can draw annotation lines in the target map according to cleaning requirements to divide areas; in the process of determining region attributes, the annotation lines in the target map are identified, and, based on the identified annotation lines and
  • the auxiliary objects in the map, the connected area and the closed area are determined; the connected area is determined as the normal cleaning area and the closed area as the custom cleaning area. Since no virtual wall device is required, intelligent partitioning can be performed according to the user's personalized cleaning requirements, thereby reducing the cost of implementing intelligent partitioning.
  • FIG. 1 is a flowchart of a method for determining an area attribute according to an exemplary embodiment of the present application
  • FIGS. 2a and 2b are schematic diagrams of maps in which the annotation line is a straight line;
  • FIG. 3 is a schematic diagram of a map in which the annotation line is a broken line;
  • FIG. 4 is a schematic diagram of a map in which the annotation line is a polygon;
  • FIG. 5 is a flowchart of a method for determining an area attribute according to another exemplary embodiment of the present application.
  • FIG. 6 is a schematic diagram showing the positional relationship of the coordinate points used in obtaining the rotation angle and the translation vector;
  • FIG. 7a is a diagram of the positional relationship between a rectangle and an obstacle;
  • FIG. 7b shows the result of applying single-obstacle boundary processing to the obstacle in FIG. 7a;
  • FIG. 8a is a diagram of the positional relationship between a rectangle and an obstacle;
  • FIG. 8b shows the result of applying single-obstacle boundary processing to the obstacle in FIG. 8a;
  • FIG. 9a is a diagram of the positional relationship between a rectangle and an obstacle;
  • FIG. 9b shows the result of applying single-obstacle boundary processing to the obstacle in FIG. 9a;
  • FIG. 10a is a diagram of the positional relationship between a rectangle and several obstacles;
  • FIG. 10b shows the result of applying the first type of multi-obstacle boundary processing to the obstacles in FIG. 10a;
  • FIG. 10c shows the result of applying the second type of multi-obstacle boundary processing to the obstacles in FIG. 10a;
  • FIG. 11 is a structural diagram of an area attribute determining apparatus according to an exemplary embodiment of the present application.
  • FIG. 12 is a schematic structural diagram of an area attribute determining apparatus according to another exemplary embodiment of the present application.
  • FIG. 13 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present application.
  • FIG. 14 is a schematic structural diagram of an intelligent cleaning system according to an exemplary embodiment of the present application.
  • FIG. 15 is a schematic structural diagram of an intelligent cleaning system according to another exemplary embodiment of the present application.
  • although terms such as first, second, and third may be used to describe various information in this application, such information should not be limited by these terms. These terms are only used to distinguish information of the same type from each other.
  • for example, the first information may also be referred to as the second information without departing from the scope of the present application;
  • similarly, the second information may also be referred to as the first information.
  • the word "if" as used herein may be interpreted as "when", "upon", or "in response to a determination".
  • the present application provides a method, a device, and an electronic device for determining a region attribute, so as to solve the problem of high cost and inconvenient use caused by realizing intelligent partitioning in the prior art.
  • the execution subject of the area attribute determining method provided by the present application may be an area attribute determining apparatus.
  • the area attribute determining device may be functional software running in the intelligent cleaning device.
  • the smart cleaning device cleans the target scene according to the customized cleaning area and the normal cleaning area determined by the smart cleaning device.
  • alternatively, the area attribute determining device may be functional software running in the cloud server corresponding to the smart cleaning device.
  • in this case, the smart cleaning device cleans the target scene according to the custom cleaning area and the normal cleaning area determined by the cloud server.
  • the intelligent cleaning device includes but is not limited to a cleaning robot, and the so-called cleaning robot may also be referred to as an automatic cleaning machine, a smart vacuum, a robot vacuum cleaner, and the like.
  • a method for determining a region attribute may include the following steps:
  • the target map is a map of the target scene to be cleaned by the intelligent cleaning device during the cleaning process.
  • the annotation line can include, but is not limited to, a straight line, a broken line, or a polygon.
  • the map corresponding to the target scene may be constructed in advance.
  • the user can draw an annotation line in the target map, based on the partitioning idea of setting different cleaning attributes for the connected area and the closed area, so that a closed area is formed by the drawn annotation line.
  • the target map of the target scene may be constructed by the smart cleaning device itself, or may be shared with the smart cleaning device by other electronic devices, such as other intelligent cleaning devices of the same or a different type. It should be noted that when constructing the target map of the target scene, one or more of laser, radar, ultrasound, and visual cameras may be relied on; any map construction technique well known in the art may be used, and this is not limited here. In addition, it should be emphasized that when the area attribute determining device runs on the smart cleaning device, the target map used when determining the area attributes may be built by the smart cleaning device itself or
  • shared by other devices; and when the area attribute determining device runs on the cloud server corresponding to the smart cleaning device, the target map used when determining the area attributes may be uploaded by the smart cleaning device corresponding to the cloud server, or uploaded by other devices.
  • the smart cleaning device can display the target map through its own display screen.
  • the user can issue a user instruction about the annotation line in the target map, for example by drawing an annotation line or by giving coordinate information.
  • the smart cleaning device can then obtain the user instruction about the annotation line and identify the annotation line in the target map based on the user instruction.
  • the target map can be displayed by a mobile terminal in communication with the smart cleaning device.
  • the user can issue a user instruction about the annotation line in the target map displayed by the mobile terminal, such as issuing a user instruction by drawing an annotation line or issuing a user instruction by giving coordinate information.
  • the smart cleaning device can then obtain the user instruction about the annotation line from the mobile terminal and identify the annotation line in the target map based on the user instruction.
  • the target map can be displayed by the mobile terminal that communicates with the smart cleaning device.
  • the user can issue a user instruction about the annotation line in the target map displayed by the mobile terminal, such as issuing a user instruction by drawing an annotation line or issuing a user instruction by giving coordinate information.
  • the cloud server can obtain user instructions regarding the annotation line from the mobile terminal and identify the annotation lines in the target map based on the user instruction.
  • the target map is a map that the smart cleaning device relies on during the cleaning of the target scene.
  • for example, the intelligent cleaning device can convert the coordinates in the target map into coordinates in the world coordinate system and then use the coordinates in the world coordinate system to clean the target scene. How the intelligent cleaning device relies on the target map for cleaning can be implemented by any technique well known in the art, and is therefore not further described here.
  • the function of drawing an annotation line in the target map can be implemented by any technique well known in the art; likewise, the annotation line in the map can be identified by any recognition technique well known in the art, which is not limited here. Moreover, the color, thickness, and the like of the annotation line can be set by the user.
  • different cleaning attributes may be set for the closed area and the connected area.
  • the auxiliary object may include a map boundary and an obstacle, and the first position is a position of the preset reference object in the target map.
  • an object with a relatively fixed position is usually selected as a preset reference object.
  • the preset reference object may include any one or more of the following: a charging pile of the smart cleaning device, or a non-moving object other than the charging pile in the target scene.
  • the non-moving object can be a wardrobe, a bed, a table, and the like.
  • different types of annotation lines form closed areas in different ways. The following describes how a closed area is formed for each type of annotation line:
  • when the annotation line is a straight line or a broken line, the closed area is an area enclosed by the annotation line and the target auxiliary object that does not include the first position,
  • where the target auxiliary object is an auxiliary object in the target map that intersects the annotation line.
  • as shown in FIG. 2a, the preset reference object is a charging pile 01;
  • the annotation line is a straight line L1;
  • the straight line L1 and the map boundary enclose a closed area, i.e., the shaded area z1 in the figure.
  • as shown in FIG. 2b, the preset reference object is a charging pile 01;
  • the annotation line is two straight lines L2 and L3;
  • the straight line L2 and the map boundary enclose a closed area, i.e., the shaded area z2 in the figure, and
  • the straight line L3 and the map boundary enclose a closed area, i.e., the shaded area z3 in the figure.
  • as shown in FIG. 3, the preset reference object is the charging pile 01;
  • the annotation line is the broken line L4;
  • the broken line L4 and the map boundary enclose a closed area, i.e., the shaded area z4 in the figure.
  • when the annotation line is a polygon, the closed area is the area enclosed by the sides of the polygon.
  • as shown in FIG. 4, the preset reference object is a charging pile 01;
  • the annotation line is a diamond L5;
  • the area enclosed by the diamond L5 is a closed area, i.e., the shaded area z5 in the figure.
  • the connected area based on the first position may be determined by any region-determination technique well known in the art, which is not limited here; one common option is a flood fill from the first position, as sketched below.
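One region-determination technique well known in the art is a breadth-first flood fill over an occupancy grid. The sketch below is illustrative only, not a method mandated by the application; the grid encoding (0 = free, 1 = map boundary, obstacle, or annotation line) and the function name are assumptions.

```python
from collections import deque

def split_areas(grid, first_position):
    """Flood-fill the free cells reachable from first_position (e.g. the
    cell of the charging pile).  Reached cells form the connected area;
    free cells never reached are cut off by annotation lines and belong
    to closed areas.  grid: 2-D list, 0 = free, 1 = blocked."""
    rows, cols = len(grid), len(grid[0])
    connected = set()
    queue = deque([first_position])
    while queue:
        r, c = queue.popleft()
        if (r, c) in connected or not (0 <= r < rows and 0 <= c < cols):
            continue
        if grid[r][c] == 1:  # blocked by boundary, obstacle, or annotation line
            continue
        connected.add((r, c))
        queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    closed = {(r, c) for r in range(rows) for c in range(cols)
              if grid[r][c] == 0 and (r, c) not in connected}
    return connected, closed
```

The connected set then maps to the normal cleaning area and the closed set to the custom cleaning area, matching step S103 below.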
  • S103: Determine the closed area as a custom cleaning area in the cleaning process of the intelligent cleaning device, and determine the connected area as a normal cleaning area in the cleaning process of the intelligent cleaning device.
  • specifically, after the closed area and the connected area are determined, the closed area may be determined as a custom cleaning area in the process of cleaning the target scene by the intelligent cleaning device, and the connected area may be determined as the normal cleaning area in that process.
  • the so-called normal cleaning area is an area that the intelligent cleaning device cleans according to its normal cleaning program; the cleaning type corresponding to the custom cleaning area can be set by the user, for example, focused cleaning or prohibited cleaning.
  • so-called focused cleaning requires the intelligent cleaning device to increase its cleaning power, while a prohibited area does not need to be cleaned by the intelligent cleaning device at all (an illustrative dispatch follows).
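As a toy illustration of these three cleaning types, the dispatch below maps each area attribute to a behavior; the enum names and power values are invented for the example and do not come from the application.

```python
from enum import Enum

class AreaAttribute(Enum):
    NORMAL = "normal"          # cleaned by the normal cleaning program
    FOCUSED = "focused"        # custom area: clean with increased power
    PROHIBITED = "prohibited"  # custom area: not cleaned at all

def suction_power(attr: AreaAttribute) -> int:
    """Return an illustrative suction-power setting for an area attribute."""
    if attr is AreaAttribute.PROHIBITED:
        return 0    # the device skips the area entirely
    if attr is AreaAttribute.FOCUSED:
        return 100  # increased cleaning power for focused cleaning
    return 60       # default power in the normal cleaning area
```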
  • the user can draw one or more annotation lines in the target map to form one or more closed areas.
  • when there are a plurality of annotation lines, they may be of the same type or of different types.
  • in the solution above, the user can draw annotation lines in the target map according to cleaning requirements to divide areas, and the annotation lines in the target map are then identified in the process of determining the area attributes.
  • the connected area and the closed area are determined based on the identified annotation lines and the auxiliary objects in the target map; the connected area is determined as the normal cleaning area, and the closed area as the custom cleaning area. Since the solution does not require virtual wall equipment, intelligent partitioning can be performed according to the user's personalized cleaning requirements, thereby effectively reducing cost and making intelligent partitioning convenient to implement.
  • the present application also provides a method for determining the area attribute.
  • a method for determining an area attribute may include the following steps:
  • the auxiliary object includes a map boundary and an obstacle, and the first position is a position of the preset reference object in the target map.
  • S203: Determine the closed area as a custom cleaning area in the cleaning process of the intelligent cleaning device, and determine the connected area as a normal cleaning area in the cleaning process of the intelligent cleaning device.
  • S201-S203 in this embodiment are similar to S101-S103 in the foregoing embodiment, except that the annotation line in this embodiment is a rectangle, whereas in the previous embodiment the annotation line may be a straight line, a broken line, a polygon, or the like.
  • the obstacle coordinate information is information obtained by the intelligent cleaning device performing obstacle detection on the target scene.
  • in one case, the intelligent cleaning device can clean according to the custom cleaning area and the normal cleaning area while performing obstacle detection on the target scene during the cleaning process, so as to correct the two types of cleaning areas.
  • in another case, the intelligent cleaning device may perform obstacle detection on the target scene before cleaning, to correct the two types of cleaning areas.
  • the area attribute determining device can directly obtain the obstacle coordinate information detected by the smart cleaning device.
  • when the area attribute determining device runs on the cloud server, the smart cleaning device may upload the detected obstacle coordinate information to the cloud server,
  • from which the area attribute determining device may obtain the obstacle coordinate information directly.
  • the intelligent cleaning device can detect the coordinates of the obstacle under the coordinate system with the intelligent cleaning device as the origin based on radar, laser, ultrasound, visual camera, and the like. Then, the coordinates of the obstacle in the coordinate system with the intelligent cleaning device as the origin can be directly used as the obstacle coordinate information.
  • the obstacle coordinate information may also be the coordinates of the obstacle in the world coordinate system. That is to say, after obtaining the coordinates of the obstacle in the coordinate system with the intelligent cleaning device as the origin, the obtained coordinates are converted into the coordinates in the world coordinate system.
  • the following describes the conversion relationship between coordinates in the target map and coordinates in the world coordinate system, as well as between coordinates in the coordinate system with the intelligent cleaning device as the origin and coordinates in the world coordinate system.
  • the conversion formula (1) between coordinates in the target map and coordinates in the world coordinate system is as follows:

$$\begin{bmatrix} X_m \\ Y_m \end{bmatrix} = s\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} + \begin{bmatrix} t_x \\ t_y \end{bmatrix} \qquad (1)$$

  • where $[x\ y]^T$ is the coordinate in the target map, $s$ is the scale factor, $\theta$ is the rotation angle between the coordinate system of the target map and the world coordinate system, $[t_x\ t_y]^T$ is the translation vector between the coordinate system of the target map and the world coordinate system, and $[X_m\ Y_m]^T$ is the coordinate in the world coordinate system.
  • $s$, $\theta$, and $[t_x\ t_y]^T$ are constants that were determined when the target map was constructed.
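Under the reconstruction of formula (1) above, the conversion is a plane similarity transform; a minimal sketch:

```python
import math

def map_to_world(x, y, s, theta, tx, ty):
    """Formula (1): convert a target-map coordinate [x y]^T to the world
    coordinate [X_m Y_m]^T using scale s, rotation theta, and translation."""
    Xm = s * (math.cos(theta) * x - math.sin(theta) * y) + tx
    Ym = s * (math.sin(theta) * x + math.cos(theta) * y) + ty
    return Xm, Ym

def world_to_map(Xm, Ym, s, theta, tx, ty):
    """Inverse of formula (1): subtract the translation, rotate by -theta,
    and divide by the scale factor."""
    dx, dy = Xm - tx, Ym - ty
    x = (math.cos(theta) * dx + math.sin(theta) * dy) / s
    y = (-math.sin(theta) * dx + math.cos(theta) * dy) / s
    return x, y
```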
  • the conversion formula (2) between the coordinates in the coordinate system with the intelligent cleaning device as the origin and the coordinates in the world coordinate system is as follows:
$$\begin{bmatrix} X'_m \\ Y'_m \end{bmatrix} = \begin{bmatrix} \cos\theta' & -\sin\theta' \\ \sin\theta' & \cos\theta' \end{bmatrix}\begin{bmatrix} x'_r \\ y'_r \end{bmatrix} + \begin{bmatrix} t'_x \\ t'_y \end{bmatrix} \qquad (2)$$

  • where $[x'_r\ y'_r]^T$ is the coordinate of the detected obstacle in the coordinate system with the intelligent cleaning device as the origin, $\theta'$ is the rotation angle of the intelligent cleaning device in the world coordinate system, $[t'_x\ t'_y]^T$ is the translation vector of the intelligent cleaning device in the world coordinate system, and $[X'_m\ Y'_m]^T$ is the coordinate of the obstacle in the world coordinate system.
  • assuming the pose of the intelligent cleaning device in the world coordinate system at some moment is the point $p = [x\ y\ \theta]^T$, its pose $p' = [x'\ y'\ \theta']^T$ at the next moment (sufficiently close in time to the previous one) can be obtained according to the following formulas (3)-(7):

$$\Delta s = \frac{\Delta s_r + \Delta s_l}{2} \quad (3) \qquad \Delta\theta = \frac{\Delta s_r - \Delta s_l}{L} \quad (4)$$
$$x' = x + \Delta s\,\cos\!\left(\theta + \frac{\Delta\theta}{2}\right) \quad (5) \qquad y' = y + \Delta s\,\sin\!\left(\theta + \frac{\Delta\theta}{2}\right) \quad (6) \qquad \theta' = \theta + \Delta\theta \quad (7)$$

  • where $\Delta s_l$ and $\Delta s_r$ are the distances traveled by the left and right wheels of the intelligent cleaning device as it moves from point $p$ to point $p'$, which can be obtained from the relevant sensors; the heading of the intelligent cleaning device is taken as the positive direction; and $L$ is the distance between the left and right wheels. Then, when the intelligent cleaning device is located at point $p'$ in the world coordinate system, its rotation angle in the world coordinate system is $\theta'$ and its translation vector is $[x'\ y']^T$, which can be substituted into formula (2).
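Formulas (3)-(7), as reconstructed above, are the standard differential-drive odometry update; a minimal sketch:

```python
import math

def odometry_update(x, y, theta, ds_l, ds_r, L):
    """Update the pose p = [x y theta]^T to p' = [x' y' theta']^T from the
    wheel travel distances ds_l, ds_r and the wheel separation L.
    Valid only when the time step between p and p' is sufficiently small."""
    ds = (ds_r + ds_l) / 2.0        # (3) distance traveled by the center
    dtheta = (ds_r - ds_l) / L      # (4) change of heading
    x_new = x + ds * math.cos(theta + dtheta / 2.0)  # (5)
    y_new = y + ds * math.sin(theta + dtheta / 2.0)  # (6)
    theta_new = theta + dtheta                       # (7)
    return x_new, y_new, theta_new
```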
  • in this way, the custom cleaning area, i.e., the area enclosed by the annotation line, can be compared with the positions of obstacles in the real environment of the target scene, so that boundary refinement processing can be performed on the custom cleaning area.
  • then, the normal cleaning area can be corrected based on the processing result obtained by the boundary refinement processing.
  • thus, the accuracy of intelligent partitioning is further improved while the cost of implementing intelligent partitioning is reduced.
  • in one implementation, the step of performing boundary refinement processing on the custom cleaning area based on the obstacle coordinate information and the annotation line may include: if it is determined, based on the obstacle coordinate information, that there is no obstacle in the scene area corresponding to the annotation line, outputting first notification information prompting the user to confirm whether the annotation is erroneous; and, when the user's feedback on the first notification information is YES, identifying a new annotation line, determining the new closed area corresponding to the identified new annotation line, and adjusting the current custom cleaning area to the new closed area.
  • in another implementation, the step of performing boundary refinement processing on the custom cleaning area based on the obstacle coordinate information and the annotation line may include: if it is determined, based on the obstacle coordinate information, that there is no obstacle in the scene area corresponding to the annotation line, determining, based on the obstacle coordinate information, whether the distance between the closest obstacle and the scene area corresponding to the annotation line is less than a predetermined distance threshold; when the judgment result is YES, outputting second notification information prompting the user to confirm whether to adjust the area according to the closest obstacle; and, when the user's feedback on the second notification information is YES, performing single-obstacle boundary processing on the target map for the closest obstacle to obtain a first obstacle boundary, and adjusting the current custom cleaning area to the area enclosed by the first obstacle boundary.
  • here, the closest obstacle is the obstacle in the target scene that is closest to the scene area corresponding to the annotation line.
  • the scene area corresponding to the annotation line is the area in the actual environment of the target scene that corresponds to the closed area formed by the annotation line.
  • the specific process of determining whether there is an obstacle in the scene area corresponding to the annotation line is: determining the coordinate set in the world coordinate system corresponding to the obstacle coordinate information of each obstacle, and the coordinate set in the world coordinate system corresponding to the annotation line; and then determining, for each obstacle, whether its coordinate set and the coordinate set corresponding to the annotation line share no coordinates. When the judgment result is YES for every obstacle, there is no obstacle in the scene area corresponding to the annotation line (a sketch of this check follows).
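On discretized world coordinates the check reduces to set disjointness; a minimal sketch, assuming the coordinate sets are already quantized to common grid cells:

```python
def region_is_obstacle_free(obstacle_coord_sets, line_region_coords):
    """True when no obstacle shares a coordinate with the scene area
    corresponding to the annotation line.
    obstacle_coord_sets: one set of (x, y) cells per obstacle.
    line_region_coords:  set of (x, y) cells of the annotated scene area."""
    return all(obstacle.isdisjoint(line_region_coords)
               for obstacle in obstacle_coord_sets)
```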
  • for example, in the positional relationship shown in FIG. 7a, the obstacle D is located outside the area corresponding to the rectangle 10, so there is no obstacle inside the rectangle 10. It should be noted that the gray areas in
  • FIGS. 7a and 7b serve only as an exemplary background carrying the rectangle 10 and the obstacle D, to conveniently show their relative positions in the same scene; they have no limiting meaning. Similarly, the gray areas in FIGS. 8, 9, and 10 serve the same purpose as in FIG. 7.
  • similarly, determining whether the distance between the closest obstacle and the scene area corresponding to the annotation line is less than a predetermined distance threshold may also be realized based on the positional relationship between the coordinate set in the world coordinate system corresponding to the closest obstacle and the coordinate set in the world coordinate system corresponding to the annotation line.
  • the single-obstacle boundary processing may include: determining the outer contour coordinate set of the single obstacle in the target map; determining, from the outer contour coordinate set, the vertex coordinate set V_A corresponding to the obstacle; connecting the coordinate points corresponding to the coordinates in the vertex coordinate set V_A; and determining the rectangle formed by the connection as the obstacle boundary of the obstacle.
  • the vertex coordinate set V_A can be expressed as follows:
  • V_A = {(x_min, y_min), (x_min, y_max), (x_max, y_min), (x_max, y_max)}, where x_min, x_max, y_min, and y_max are the extreme values of the outer contour coordinate set in the x and y directions (a sketch follows).
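The single-obstacle boundary processing is, in effect, an axis-aligned bounding rectangle over the outer contour; a minimal sketch:

```python
def single_obstacle_boundary(outer_contour):
    """Compute the vertex coordinate set V_A (the axis-aligned bounding
    rectangle) of one obstacle from its outer contour coordinates.
    outer_contour: iterable of (x, y) points in the target map."""
    xs = [p[0] for p in outer_contour]
    ys = [p[1] for p in outer_contour]
    x_min, x_max = min(xs), max(xs)
    y_min, y_max = min(ys), max(ys)
    # V_A = {(x_min, y_min), (x_min, y_max), (x_max, y_min), (x_max, y_max)}
    return [(x_min, y_min), (x_min, y_max),
            (x_max, y_max), (x_max, y_min)]  # corners in drawing order
```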
  • after the obstacle D in FIG. 7a is processed by the single-obstacle boundary processing, the result is as shown in FIG. 7b.
  • the outer contour coordinate set of an obstacle in the target map can be obtained through the conversion formula (1) and the conversion formula (2) described above.
  • if the user determines that the annotation is erroneous, the user can feed back the result as YES and draw a new annotation line, so that the new annotation line can be identified subsequently; if the user determines that the annotation is correct, the user can feed back the result as NO.
  • in the latter case, the subsequent refinement processing may not be performed,
  • that is, the current custom cleaning area is taken by default as the exact area.
  • in another implementation, the step of performing boundary refinement processing on the custom cleaning area based on the obstacle coordinate information and the annotation line may include: if it is determined, based on the obstacle coordinate information, that the scene area corresponding to the annotation line intersects a single obstacle, determining the proportion of the intersecting part in the area of the intersecting obstacle. It should be noted that the intersection described in the present application refers to the overlap between the obstacle and the scene area corresponding to the annotation line.
  • if the area ratio is greater than a predetermined ratio threshold, single-obstacle boundary processing on the target map is performed for the intersecting obstacle to obtain a second obstacle boundary, and the current custom cleaning area is adjusted to the area enclosed by the second obstacle boundary.
  • if the area ratio is not greater than the predetermined ratio threshold, third notification information prompting the user to confirm whether the annotation is erroneous is output.
  • when the user's feedback on the third notification information is YES,
  • a new annotation line is identified, the new closed area corresponding to the identified new annotation line is determined, and the current custom cleaning area is adjusted to the new closed area; or, alternatively, single-obstacle boundary processing on the target map is performed for the intersecting obstacle to obtain a third obstacle boundary, and the current custom cleaning area is adjusted to the area enclosed by the third obstacle boundary.
  • the specific process of determining whether the scene area corresponding to the annotation line intersects a single obstacle is: determining the coordinate set in the world coordinate system corresponding to the obstacle coordinate information, and the coordinate set in the world coordinate system corresponding to the annotation line; and then determining whether the coordinate set corresponding to the annotation line contains some or all of the coordinates in the world coordinate system corresponding to exactly one obstacle. When the judgment result is YES, the scene area corresponding to the annotation line intersects a single obstacle.
  • when calculating the above area ratio, the calculation may be based on region areas or, equivalently, on the number of shared coordinates.
  • alternatively, the ratio of the number of grid cells of the obstacle inside the scene area corresponding to the annotation line to the total number of grid cells of the obstacle on the map may be calculated to determine the proportion of the intersecting part in the area of the intersecting obstacle; a sketch of this grid-based computation follows.
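Computed over grid cells, the ratio is simply the number of shared cells divided by the obstacle's total cells; a minimal sketch:

```python
def intersection_ratio(obstacle_cells, line_region_cells):
    """Proportion of an obstacle's grid cells that lie inside the scene
    area corresponding to the annotation line; both arguments are sets
    of (x, y) grid coordinates."""
    if not obstacle_cells:
        return 0.0
    return len(obstacle_cells & line_region_cells) / len(obstacle_cells)

# e.g. trigger the boundary adjustment when
# intersection_ratio(...) > RATIO_THRESHOLD  (the threshold value is a design choice)
```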
  • of course, if the user determines that the annotation is correct, the feedback result may be NO; in that case, the subsequent refinement processing may not be performed.
  • in the positional relationship shown in FIG. 8a, the area corresponding to the rectangle 20 contains a single obstacle E; after the obstacle E is processed by the single-obstacle boundary processing, the result is as shown in FIG. 8b.
  • in the positional relationship shown in FIG. 9a, the area corresponding to the rectangle 30 partially intersects the single obstacle F.
  • after the single-obstacle boundary processing, the result is as shown in FIG. 9b.
  • in another implementation, the step of performing boundary refinement processing on the custom cleaning area based on the obstacle coordinate information and the annotation line may include: if it is determined, based on the obstacle coordinate information, that the scene area corresponding to the annotation line intersects at least two obstacles, performing the first type of multi-obstacle boundary processing or the second type of multi-obstacle
  • boundary processing on the target map for the at least two intersecting obstacles to obtain a fourth obstacle boundary; and adjusting the current custom cleaning area to the area enclosed by the fourth obstacle boundary.
  • the specific process of determining, based on the obstacle coordinate information, that the scene area corresponding to the annotation line intersects at least two obstacles is: determining the coordinate set in the world coordinate system corresponding to the obstacle coordinate information, and the coordinate set in the world coordinate system corresponding to the annotation line; and then determining whether the coordinate set corresponding to the annotation line contains some or all of the coordinates in the world coordinate system corresponding to at least two obstacles. When the judgment result is YES, the scene area corresponding to the annotation line intersects at least two obstacles. In the positional relationship shown in FIG. 10a, the area corresponding to the rectangle 40 intersects the obstacles D, E, and F.
  • the first type of multi-obstacle boundary processing may include performing the single-obstacle boundary processing described above on each of the at least two obstacles; for each obstacle,
  • the vertex coordinate set V_A can be expressed as follows:
  • V_A = {(x_min, y_min), (x_min, y_max), (x_max, y_min), (x_max, y_max)}.
  • after the obstacles D, E, and F in FIG. 10a are processed by the first type of multi-obstacle boundary processing, the result is as shown in FIG. 10b.
  • the second type of multi-obstacle boundary processing may include: determining the outer contour coordinate set of each of the at least two obstacles in the target map; determining, from all of the determined outer contour coordinate sets, the vertex coordinate set V_B of the entire area where the at least two obstacles are located; sequentially taking each x value in the search range [x_amin, x_amax] as a first-type target object and traversing the outer contour coordinate sets of the at least two obstacles, as described in detail below for the sets S_x and S_y; and determining the obstacle boundary from the collected coordinates by a concave envelope algorithm.
  • the vertex coordinate set V_B can be expressed as follows:
  • V_B = {(x_amin, y_amin), (x_amin, y_amax), (x_amax, y_amin), (x_amax, y_amax)}.
  • where x_amin and x_amax are the minimum and maximum values in the x direction of the vertex coordinate set V_B, and
  • y_amin and y_amax are the minimum and maximum values in the y direction,
  • taken over the coordinate sets
  • of all obstacles in the scene area corresponding to the annotation line.
  • the respective vertex coordinate sets can be obtained.
  • in FIG. 10a, the obstacles D, E, and F lie within the annotation line 40, and the corresponding vertex coordinate set can be obtained for each of them;
  • the vertex coordinate set formed by the overall maximum and minimum values among them is V_B.
  • for example, the vertex coordinate set V_B is extracted from the coordinate set of the entire region, which has 17 coordinates.
  • assuming, for example, that x_amin = 5 and x_amax = 8, the outer contour coordinate sets will be traversed using the x values 5, 6, 7, and 8 in turn as the first-type target objects;
  • and assuming y_amin = 10 and y_amax = 20, the outer contour coordinate sets will be traversed using the y values 10, 11, 12, ..., 20 in turn as the second-type target objects (a code sketch of this scan follows).
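The scan that builds S_x and S_y can be transcribed directly from the description above; the sketch assumes the outer contours are given as sets of integer (x, y) coordinates:

```python
def scan_boundary_sets(contours, x_amin, x_amax, y_amin, y_amax):
    """Second type of multi-obstacle boundary processing: for each x in
    [x_amin, x_amax] keep the points with the smallest and largest y among
    all contour points with that x (set S_x), and symmetrically build S_y.
    contours: list of coordinate sets {S_1, ..., S_N}, one per obstacle."""
    points = set().union(*contours)
    s_x, s_y = set(), set()
    for x in range(x_amin, x_amax + 1):
        column = [p for p in points if p[0] == x]
        if column:
            s_x.add(min(column, key=lambda p: p[1]))  # smallest y
            s_x.add(max(column, key=lambda p: p[1]))  # largest y
    for y in range(y_amin, y_amax + 1):
        row = [p for p in points if p[1] == y]
        if row:
            s_y.add(min(row, key=lambda p: p[0]))     # smallest x
            s_y.add(max(row, key=lambda p: p[0]))     # largest x
    return s_x, s_y  # their union is fed to the concave envelope step
```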
  • the concave envelope algorithm includes, but is not limited to, the rolling ball method and the Alpha shape algorithm.
  • the basic principle of the rolling ball method is: a ball of a given radius is rolled around the outside of the point set, and the points it touches in turn are connected to form the boundary.
  • the basic principle of the Alpha shape algorithm is: on the basis of the convex hull, a parameter α is set, and in the process of reconstructing the shape, vertices that are too far apart are not connected as they would be in the convex hull; if the parameter α tends to infinity, the alpha shape approaches the convex hull, and if α is small, the alpha shape tends to cave in at certain positions to better fit the shape of the point set.
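One common way to realize an alpha shape is to keep, from a Delaunay triangulation, only the triangles whose circumradius is below a threshold; boundary edges are those belonging to exactly one kept triangle. The sketch below uses scipy.spatial.Delaunay and one of the two usual parameter conventions (a radius threshold alpha rather than 1/alpha); it is an illustration, not the application's prescribed implementation.

```python
import numpy as np
from scipy.spatial import Delaunay

def alpha_shape_edges(points, alpha):
    """Boundary edges of the alpha shape of a 2-D point set (n x 2 array).
    A triangle is kept when its circumradius is < alpha; an edge on the
    concave boundary is used by exactly one kept triangle."""
    tri = Delaunay(points)
    edge_count = {}
    for ia, ib, ic in tri.simplices:
        pa, pb, pc = points[ia], points[ib], points[ic]
        a = np.linalg.norm(pb - pc)  # side lengths of the triangle
        b = np.linalg.norm(pa - pc)
        c = np.linalg.norm(pa - pb)
        s = (a + b + c) / 2.0
        area = max(s * (s - a) * (s - b) * (s - c), 1e-12) ** 0.5  # Heron
        circum_r = a * b * c / (4.0 * area)
        if circum_r < alpha:  # keep only triangles small enough
            for e in ((ia, ib), (ib, ic), (ia, ic)):
                e = tuple(sorted(e))
                edge_count[e] = edge_count.get(e, 0) + 1
    return [e for e, n in edge_count.items() if n == 1]
```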
  • after the obstacles D, E, and F in FIG. 10a are processed by the second type of multi-obstacle boundary processing, the result is as shown in FIG. 10c.
  • the embodiment of the present application further provides an area attribute determining apparatus.
  • as shown in FIG. 11, the apparatus may include an annotation line identification unit 310, a closed and connected area determining unit 320, and an area attribute determining unit 330.
  • An annotation line identification unit 310 is configured to identify the annotation line in the target map.
  • the target map is a map of the target scene to be cleaned by the intelligent cleaning device during the cleaning process.
  • the closed and connected area determining unit 320 is configured to determine a closed area in the target map and a connected area based on the first position based on the identified marked line and the auxiliary object in the target map.
  • the auxiliary object includes a map boundary and an obstacle, and the first position is a position of the preset reference object in the target map.
  • the area attribute determining unit 330 is configured to determine the closed area as a custom cleaning area in the cleaning process of the smart cleaning device, and determine the connected area as a normal cleaning area in the cleaning process of the intelligent cleaning device.
  • with this apparatus, the user can draw annotation lines in the target map according to cleaning requirements to divide areas, and the annotation lines in the target map are then identified in the process of determining the area attributes; based on the identified annotation lines and
  • the auxiliary objects in the map, the connected area and the closed area are determined, the connected area is determined as the normal cleaning area, and the closed area as the custom cleaning area. Since the solution does not require a virtual wall device, intelligent partitioning can be performed according to the user's personalized cleaning requirements, thereby solving the problems of high cost and inconvenient use in implementing intelligent partitioning in the prior art.
  • in one implementation, the annotation line may comprise a straight line or a broken line.
  • correspondingly, the closed area is an area enclosed by the annotation line and the target auxiliary object that does not include the first position,
  • where the target auxiliary object is an auxiliary object in the target map that intersects the annotation line.
  • in another implementation, the annotation line may comprise a polygon.
  • correspondingly, the closed area is the area enclosed by the sides of the polygon.
  • the preset reference object includes any one or more of the following:
  • the apparatus may further include an obstacle coordinate information obtaining unit 340, a fine processing unit 350, and a correcting unit 360.
  • the obstacle coordinate information obtaining unit 340 is configured to obtain current obstacle coordinate information in the target scene, where the obstacle coordinate information is information obtained by the intelligent cleaning device performing obstacle detection on the target scene. .
  • the refinement processing unit 350 is configured to perform boundary refinement processing on the custom cleaning area based on the obstacle coordinate information and the label line;
  • the correcting unit 360 is configured to correct the normal cleaning area based on the processing result obtained by the refinement processing unit 350.
  • in one implementation, the refinement processing unit 350 is specifically configured to: if it is determined, based on the obstacle coordinate information, that there is no obstacle in the scene area corresponding to the annotation line, perform the following operations.
  • specifically, if it is determined, based on the obstacle coordinate information, that the distance between the closest obstacle and the scene area corresponding to the annotation line is less than a predetermined distance threshold,
  • single-obstacle boundary processing is performed for the closest obstacle to obtain a first obstacle boundary, and the custom cleaning area is adjusted to the area enclosed by the first obstacle boundary.
  • here, the closest obstacle is the obstacle in the target scene that is closest to the scene area corresponding to the annotation line.
  • the refinement processing unit 350 is specifically configured to:
  • a single obstacle boundary processing on the target map is performed on the intersecting obstacles to obtain a second obstacle boundary.
  • the custom cleaning area is adjusted to an area enclosed by the second obstacle boundary.
  • the refinement processing unit 350 is specifically configured to:
  • the processing manner of the single obstacle boundary processing includes:
  • a set of outer contour coordinates of a single obstacle in the target map is determined.
  • V_A = {(x_min, y_min), (x_min, y_max), (x_max, y_min), (x_max, y_max)};
  • the coordinate points corresponding to the respective coordinates in the vertex coordinate set V A are connected, and the rectangle formed by the connection is determined as the obstacle boundary of the obstacle.
  • in one implementation, the first type of multi-obstacle boundary processing may include:
  • performing the single-obstacle boundary processing on each of the plurality of obstacles.
  • in another implementation, the second type of multi-obstacle boundary processing may include:
  • a set of outer contour coordinates of each of the at least two obstacles in the target map is determined.
  • a vertex coordinate set V B of the entire area in which the at least two obstacles are located is determined from all of the determined outer contour coordinate sets.
  • sequentially taking each x value in the search range [x_amin, x_amax] as a first-type target object, the outer contour coordinate sets {S_1, S_2, ... S_i ... S_N} of the at least two obstacles are traversed
  • to find the coordinates whose x value equals the current first-type target object; the coordinates with the smallest y value and the largest y value are obtained from all the coordinates found for the current first-type target object, and
  • the obtained coordinates are included in the set S_x, where N is the number of obstacles in the scene area corresponding to the annotation line, x_amin is the minimum value in the x direction of the vertex coordinate set V_B, and x_amax is the maximum value in the x direction of the vertex coordinate set V_B.
  • similarly, sequentially taking each y value in the search range [y_amin, y_amax] as a second-type target object, the outer contour coordinate sets {S_1, S_2, ... S_i ... S_N} of the at least two obstacles are traversed
  • to find the coordinates whose y value equals the current second-type target object; the coordinates with the smallest x value and the largest x value are obtained from all the coordinates found for the current second-type target object, and
  • the obtained coordinates are included in the set S_y,
  • where y_amin is the minimum value in the y direction of the vertex coordinate set V_B, and
  • y_amax is the maximum value in the y direction of the vertex coordinate set V_B.
  • the present application further provides an electronic device, as shown in FIG. 13, the electronic device includes: an internal bus 410, a memory 420, a processor 430, and a communications interface 440;
  • the processor 430, the communication interface 440, and the memory 420 complete communication with each other through the internal bus 410.
  • the memory 420 is configured to store machine executable instructions corresponding to the area attribute determining method.
  • the processor 430 is configured to read the machine readable instructions on the memory 420 and execute the instructions to implement the following operations.
  • the target map is a map of the target scene to be cleaned by the intelligent cleaning device during the cleaning process.
  • the first location is a location of a preset reference object in the target map.
  • the closed area is determined as a custom cleaning area in the cleaning process of the intelligent cleaning device, and the connected area is determined as a normal cleaning area in the cleaning process of the intelligent cleaning device.
  • the electronic device may be a smart cleaning device or a cloud server corresponding to the smart cleaning device.
  • the memory 420 may be, for example, a non-volatile memory.
  • the processor 430 may call logic instructions in the memory 420 that implement the area attribute determining method, so as to execute the area attribute determining method described above.
  • the present application also provides an intelligent cleaning system.
  • the intelligent cleaning system includes: a mobile terminal 1410 and a smart cleaning device 1420.
  • the mobile terminal 1410 is configured to obtain a user instruction about an annotation line in the target map and send the user instruction to the smart cleaning device 1420, so that the smart cleaning device 1420 identifies the annotation line in the target map based on the user instruction.
  • here, the target map is a map of the target scene to be cleaned, on which the smart cleaning device 1420 depends during the cleaning process.
  • the smart cleaning device 1420 is configured to: identify the annotation line in the target map; determine, based on the identified annotation line and the auxiliary objects in the target map, the closed area in the target map and the connected area based on the first position, wherein the auxiliary objects include the map boundary and obstacles, and the first position is the position of the preset reference object in the target map; determine the closed area as a custom cleaning area in the cleaning process of the smart cleaning device 1420, and the connected area as a normal cleaning area in that cleaning process; and clean the target scene according to the determined custom cleaning area and normal cleaning area.
  • the smart cleaning device 1420 includes, but is not limited to, a sweeping robot.
  • the so-called sweeping robot is also called an automatic sweeper, a smart vacuum cleaner, a robot vacuum cleaner, and the like.
  • the mobile terminal 1410 includes, but is not limited to, a smart phone, a tablet computer, and a notebook computer.
  • in use, the target map can be displayed in the mobile terminal 1410, and the user can issue a user instruction about the annotation line in the target map, for example by drawing an annotation line or by giving coordinate information.
  • the target map in the smart cleaning device 1420 can be built by the smart cleaning device or shared by other devices.
  • the intelligent partitioning can be performed according to the user's personalized cleaning requirements. Therefore, the problem of high cost and inconvenient use in implementing the intelligent partitioning in the prior art is solved.
  • Corresponding to the above method embodiments, the present application further provides an intelligent cleaning system.
  • As shown in FIG. 15, the intelligent cleaning system includes a mobile terminal 1510, a cloud server 1530, and an intelligent cleaning device 1520.
  • The mobile terminal 1510 is configured to obtain a user instruction about an annotation line in the target map and send the user instruction to the cloud server 1530, so that the cloud server 1530 identifies the annotation line in the target map based on the user instruction.
  • The target map is a map of the target scene to be cleaned on which the intelligent cleaning device 1520 relies during cleaning.
  • The cloud server 1530 is configured to: identify the annotation line in the target map; determine, based on the identified annotation line and the auxiliary objects in the target map, the closed area in the target map and the connected area based at a first position, where the auxiliary objects include map boundaries and obstacles, and the first position is the position of the preset reference object in the target map; and determine the closed area as a custom cleaning area of the intelligent cleaning device 1520 during cleaning, and the connected area as a normal cleaning area of the intelligent cleaning device 1520 during cleaning.
  • The intelligent cleaning device 1520 is configured to clean the target scene according to the custom cleaning area and the normal cleaning area determined by the cloud server 1530.
  • The intelligent cleaning device 1520 includes, but is not limited to, a sweeping robot, also called an automatic sweeper, smart vacuum, robot vacuum cleaner, and the like.
  • The mobile terminal 1510 includes, but is not limited to, a smartphone, a tablet computer, or a notebook computer.
  • The target map can be displayed on the mobile terminal 1510, and the user can issue a user instruction about an annotation line in the target map, for example by drawing an annotation line or by giving coordinate information.
  • The target map on the cloud server 1530 may be uploaded by the intelligent cleaning device or by another device.
  • Since intelligent partitioning can be performed according to the user's personalized cleaning requirements without a virtual-wall device, the high cost and inconvenience of implementing intelligent partitioning in the prior art are resolved.
  • If implemented in the form of software functional units and sold or used as standalone products, the functions of the logic instructions implementing the region attribute determination method may be stored in a computer-readable storage medium.
  • Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium and including several instructions.
  • The instructions are used to cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention.
  • The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
  • Other embodiments of the present application will be readily apparent to those skilled in the art from consideration of the specification and practice of the application disclosed herein.
  • The present application is intended to cover any variations, uses, or adaptations of the application that follow its general principles and include common general knowledge or customary technical means in the art not disclosed herein.
  • The specification and examples are to be regarded as illustrative only, with the true scope and spirit of the application being indicated by the following claims.

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Processing Or Creating Images (AREA)
  • Navigation (AREA)
  • Image Generation (AREA)
  • Signal Processing For Digital Recording And Reproducing (AREA)
  • Error Detection And Correction (AREA)

Abstract

A region attribute determination method, apparatus and system, and an electronic device. The method includes: identifying an annotation line in a target map (S101), where the target map is a map of the target scene to be cleaned on which an intelligent cleaning device (1420) relies during cleaning; determining, based on the identified annotation lines (L1, L2, L3, L4, L5) and auxiliary objects in the target map, closed areas (Z1, Z2, Z3, Z4, Z5) in the target map and a connected area based at a first position (S102), where the auxiliary objects include map boundaries and obstacles, and the first position is the position of a preset reference object in the target map; and determining the closed areas (Z1, Z2, Z3, Z4, Z5) as custom cleaning areas of the intelligent cleaning device (1420) during cleaning, and the connected area as a normal cleaning area of the intelligent cleaning device (1420) during cleaning (S103).

Description

Region Attribute Determination
Cross-Reference to Related Application
This patent application claims priority to Chinese Patent Application No. 201711090900.X, filed on November 8, 2017 and entitled "Region attribute determination method, apparatus and system, and electronic device", which is incorporated herein by reference in its entirety.
Technical Field
The present application relates to the field of artificial intelligence, and in particular to a region attribute determination method, apparatus and system, and an electronic device.
Background
Intelligent cleaning devices, such as sweeping robots, are popular smart appliances. With a certain degree of artificial intelligence, they can automatically complete cleaning work within a scene to be cleaned.
To achieve intelligent partitioning, an intelligent cleaning device is usually equipped with a virtual wall. A virtual wall is a physical device that emits a signal perceivable by the intelligent cleaning device, forming a virtual barrier that prevents the device from entering a specific area to clean.
Although a virtual wall allows an intelligent cleaning device to partition the scene intelligently while cleaning, a physical virtual-wall device must be provided, which raises the cost, and because the virtual wall needs a power supply, it is inconvenient to use.
Summary
In view of this, the present application provides a region attribute determination method, apparatus and system, and an electronic device, to reduce the cost of intelligent partitioning by an intelligent cleaning device.
Specifically, the present application is implemented by the following technical solutions.
In a first aspect, the present application provides a region attribute determination method, including: identifying an annotation line in a target map, where the target map is a map of the target scene to be cleaned on which an intelligent cleaning device relies during cleaning; determining, based on the identified annotation line and auxiliary objects in the target map, a closed area in the target map and a connected area based at a first position, where the auxiliary objects include map boundaries and obstacles, and the first position is the position of a preset reference object in the target map; and determining the closed area as a custom cleaning area of the intelligent cleaning device during cleaning, and the connected area as a normal cleaning area of the intelligent cleaning device during cleaning.
In a second aspect, the present application provides a region attribute determination apparatus, including: an annotation line identification unit configured to identify an annotation line in a target map, where the target map is a map of the target scene to be cleaned on which an intelligent cleaning device relies during cleaning; a closed and connected area determination unit configured to determine, based on the identified annotation line and auxiliary objects in the target map, a closed area in the target map and a connected area based at a first position, where the auxiliary objects include map boundaries and obstacles, and the first position is the position of a preset reference object in the target map; and a region attribute determination unit configured to determine the closed area as a custom cleaning area of the intelligent cleaning device during cleaning, and the connected area as a normal cleaning area of the intelligent cleaning device during cleaning.
In a third aspect, the present application provides an electronic device including an internal bus, a memory, a processor, and a communications interface, where the processor, the communications interface, and the memory communicate with one another through the internal bus. The memory is configured to store machine-readable instructions corresponding to the region attribute determination method. The processor is configured to read the machine-readable instructions from the memory and execute them to: identify an annotation line in a target map, where the target map is a map of the target scene to be cleaned on which an intelligent cleaning device relies during cleaning; determine, based on the identified annotation line and auxiliary objects in the target map, a closed area in the target map and a connected area based at a first position, where the auxiliary objects include map boundaries and obstacles, and the first position is the position of a preset reference object in the target map; and determine the closed area as a custom cleaning area of the intelligent cleaning device during cleaning, and the connected area as a normal cleaning area of the intelligent cleaning device during cleaning.
In a fourth aspect, the present application provides an intelligent cleaning system including a mobile terminal and an intelligent cleaning device. The mobile terminal is configured to obtain a user instruction about an annotation line in a target map and send the user instruction to the intelligent cleaning device, so that the device identifies the annotation line in the target map based on the user instruction, where the target map is a map of the target scene to be cleaned on which the intelligent cleaning device relies during cleaning. The intelligent cleaning device is configured to: identify the annotation line in the target map; determine, based on the identified annotation line and auxiliary objects in the target map, a closed area in the target map and a connected area based at a first position, where the auxiliary objects include map boundaries and obstacles, and the first position is the position of a preset reference object in the target map; determine the closed area as a custom cleaning area during its cleaning and the connected area as a normal cleaning area during its cleaning; and clean the target scene according to the determined custom cleaning area and normal cleaning area.
In a fifth aspect, the present application provides an intelligent cleaning system including a mobile terminal, a cloud server, and an intelligent cleaning device. The mobile terminal is configured to obtain a user instruction about an annotation line in a target map and send the user instruction to the cloud server, so that the server identifies the annotation line in the target map based on the user instruction, where the target map is a map of the target scene to be cleaned on which the intelligent cleaning device relies during cleaning. The cloud server is configured to: identify the annotation line in the target map; determine, based on the identified annotation line and auxiliary objects in the target map, a closed area in the target map and a connected area based at a first position, where the auxiliary objects include map boundaries and obstacles, and the first position is the position of a preset reference object in the target map; and determine the closed area as a custom cleaning area of the intelligent cleaning device during cleaning, and the connected area as a normal cleaning area of the intelligent cleaning device during cleaning. The intelligent cleaning device is configured to clean the target scene according to the custom cleaning area and normal cleaning area determined by the cloud server.
In the solutions provided by the present application, a user can draw annotation lines in the target map according to cleaning requirements to divide it into regions. During region attribute determination, the annotation lines in the target map are identified; the connected area and closed areas are determined based on the identified annotation lines and the auxiliary objects in the target map; the connected area is determined as the normal cleaning area, and the closed areas are determined as custom cleaning areas. Since intelligent partitioning can be performed according to the user's personalized cleaning requirements without any virtual-wall device, the cost of implementing intelligent partitioning is reduced.
附图说明
图1是本申请一示例性实施例示出的一种区域属性确定方法的流程图;
图2a和图2b是标注线为直线的地图示意图;
图3是标注线为折线的地图示意图;
图4是标注线为多边形的地图示意图;
图5是本申请另一示例性实施例示出的一种区域属性确定方法的流程图;
图6是旋转角度和平移向量求取过程所利用的坐标点位置关系示意图;
图7a是矩形与障碍物的一种位置关系图,图7b为图7a中障碍物经过单障碍物边界处理所得的结果图;
图8a是矩形与障碍物的一种位置关系图,图8b为图8a中障碍物经过单障碍物边界处理所得的结果图;
图9a是矩形与障碍物的一种位置关系图,图9b为图9a中障碍物经过单障碍物边界处理所得的结果图;
图10a是矩形与障碍物的一种位置关系图,图10b为图10a中障碍物经过第一类多障碍物边界处理所得的结果图,图10c为图10a中障碍物经过第二类多障碍物边界处理所得的结果图;
图11是本申请一示例性实施例示出的一种区域属性确定装置的结构图;
图12是本申请另一示例性实施例示出的一种区域属性确定装置的结构示意图;
图13是本申请一示例性实施例示出的一种电子设备的结构示意图;
图14是本申请一示例性实施例示出的一种智能清扫系统的结构示意图;
图15是本申请另一示例性实施例示出的一种智能清扫系统的结构示意图。
Detailed Description
Exemplary embodiments will be described in detail here, examples of which are shown in the accompanying drawings. When the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatuses and methods consistent with some aspects of the application as detailed in the appended claims.
The terminology used in the present application is for the purpose of describing particular embodiments only and is not intended to limit the application. The singular forms "a", "the", and "said" used in the present application and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, and so on may be used in the present application to describe various information, the information should not be limited by these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of the present application, first information may also be called second information, and similarly, second information may also be called first information. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
To solve the problems of the prior art, the present application provides a region attribute determination method, apparatus, and electronic device, to address the high cost and inconvenience of implementing intelligent partitioning in the prior art.
A region attribute determination method provided by the present application is first introduced below.
It should be noted that the executor of the region attribute determination method provided by the present application may be a region attribute determination apparatus. The apparatus may be functional software running on an intelligent cleaning device, in which case the device cleans the target scene according to the custom cleaning area and normal cleaning area it determines itself. The apparatus may also be functional software running on a cloud server corresponding to the intelligent cleaning device, in which case the device cleans the target scene according to the custom cleaning area and normal cleaning area determined by the cloud server. Specifically, the intelligent cleaning device includes, but is not limited to, a sweeping robot, which may also be called an automatic sweeper, smart vacuum, robot vacuum cleaner, and the like.
As shown in FIG. 1, a region attribute determination method provided by the present application may include the following steps.
S101: identify an annotation line in a target map, where the target map is a map of the target scene to be cleaned on which the intelligent cleaning device relies during cleaning. The annotation line may include, but is not limited to, a straight line, a polyline, or a polygon.
To achieve intelligent partitioning of the target scene to be cleaned, a map corresponding to the target scene, i.e., the target map, can be built in advance. When the user wants the target scene cleaned by region according to personalized requirements, the user can draw annotation lines in the target map, based on the partitioning idea of assigning different cleaning attributes to connected and closed areas, so that closed areas and the connected area are distinguished by the drawn lines.
The target map of the target scene may be built by the intelligent cleaning device itself, or shared to it by another electronic device, such as an intelligent cleaning device of the same or a different type. When building the target map, one or more of laser, radar, ultrasound, vision cameras, and the like may be relied upon; the specific construction technique may be any map-building technique well known in the art and is not limited here. In addition, when the region attribute determination apparatus runs on the intelligent cleaning device, the target map used to determine region attributes may be built by the device itself or shared by another device; when the apparatus runs on the cloud server corresponding to the device, the target map used by the server may be uploaded by the corresponding intelligent cleaning device or by another device.
It can be understood that when the region attribute determination apparatus runs on the intelligent cleaning device, in one implementation the device can display the target map on its own screen. The user can then issue a user instruction about an annotation line in the target map, for example by drawing an annotation line or by giving coordinate information, and the device obtains the instruction and identifies the annotation line in the target map accordingly. In another implementation, the target map can be displayed by a mobile terminal in communication with the intelligent cleaning device; the user issues the instruction in the map displayed on the terminal, and the device obtains the instruction from the terminal and identifies the annotation line accordingly.
When the region attribute determination apparatus runs on the cloud server corresponding to the intelligent cleaning device, the target map can be displayed by a mobile terminal in communication with the device. The user issues a user instruction about an annotation line in the displayed map, for example by drawing an annotation line or by giving coordinate information, and the cloud server obtains the instruction from the terminal and identifies the annotation line in the target map accordingly.
It should also be emphasized that the target map is the map on which the intelligent cleaning device relies while cleaning the target scene. During cleaning, the device can convert coordinates in the target map into coordinates in the world coordinate system and then clean the target scene using the world-frame coordinates. How an intelligent cleaning device cleans based on a map may use any technique well known in the art and is not described further here.
Those skilled in the art will appreciate that the drawing functionality for annotation lines in the target map may be implemented by any technique well known in the art, and the annotation lines drawn in the map may be identified by any known recognition technique; neither is limited here. The color, thickness, and other properties of the annotation line may all be set by the user.
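To make the identification step concrete, the following minimal sketch (illustrative only, not part of the application; the function and its grid representation are assumptions) rasterizes one segment of a user-drawn annotation line, given by two grid endpoints, into the set of map cells it covers, using Bresenham's line algorithm:

```python
def rasterize_segment(x0, y0, x1, y1):
    """Grid cells covered by the segment (x0, y0)-(x1, y1), via Bresenham."""
    cells = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        cells.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return cells
```

A polyline or polygon annotation line could then be rasterized by applying the same routine to each of its segments.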
S102: determine, based on the identified annotation line and the auxiliary objects in the target map, the closed area in the target map and the connected area based at a first position.
After the closed area and the connected area based at the first position are determined, different cleaning attributes can be set for the closed area and the connected area.
The auxiliary objects may include map boundaries and obstacles, and the first position is the position of a preset reference object in the target map. In practice, an object with a relatively fixed position is usually chosen as the preset reference object. Optionally, the preset reference object may include any one or more of: the charging dock of the intelligent cleaning device, or a non-moving object in the target scene other than the charging dock, for example a wardrobe, a bed, or a table.
It should be noted that different types of annotation lines form closed areas in different ways. The formation of the closed area is introduced below for each type of annotation line.
When the annotation line is a straight line, the closed area is the area enclosed by the annotation line and a target auxiliary object that does not contain the first position, where the target auxiliary object is an auxiliary object in the target map that intersects the annotation line. In the map example shown in FIG. 2a, the preset reference object is the charging dock 01 and the annotation line is the straight line L1; L1 and the map boundary form a closed area, namely the shaded area z1. In the map example shown in FIG. 2b, the preset reference object is the charging dock 01 and the annotation lines are the two straight lines L2 and L3; L2 and the map boundary form one closed area, the shaded area z2, and L3 and the map boundary form another closed area, the shaded area z3.
When the annotation line is a polyline, the closed area is likewise the area enclosed by the annotation line and a target auxiliary object that does not contain the first position, where the target auxiliary object is an auxiliary object in the target map that intersects the annotation line. In the map example shown in FIG. 3, the preset reference object is the charging dock 01 and the annotation line is the polyline L4; L4 and the map boundary form a closed area, the shaded area z4.
When the annotation line is a polygon, the closed area is the area enclosed by the sides of the polygon. In the map example shown in FIG. 4, the preset reference object is the charging dock 01 and the annotation line is the diamond L5; the area enclosed by L5 is a closed area, the shaded area z5.
It should be noted that the above description of closed areas is merely exemplary and should not limit the present application. Once the first position and the annotation line are determined, the connected area based at the first position may be determined by any connected-region determination technique well known in the art, for example a flood fill on a grid map, as sketched below.
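As an illustrative sketch of one such well-known technique (an assumption for concreteness, not the application's prescribed method), a breadth-first flood fill can extract the connected area from an occupancy grid in which obstacles, map boundaries, and rasterized annotation lines are marked as blocked:

```python
from collections import deque

FREE, BLOCKED = 0, 1  # BLOCKED marks obstacles, boundaries, annotation lines

def connected_area(grid, first_position):
    """Return the set of cells reachable from first_position (4-connected)."""
    rows, cols = len(grid), len(grid[0])
    start = first_position
    if grid[start[0]][start[1]] == BLOCKED:
        return set()
    seen = {start}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == FREE and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return seen
```

Cells outside the returned set that are not blocked then belong to closed areas.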
S103: determine the closed area as a custom cleaning area of the intelligent cleaning device during cleaning, and the connected area as a normal cleaning area of the intelligent cleaning device during cleaning.
After the closed area and the connected area are determined, based on the partitioning idea that closed and connected areas correspond to different cleaning attributes, the closed area can be determined as the custom cleaning area while the device cleans the target scene, and the connected area as the normal cleaning area. It can be understood that the normal cleaning area is the area the intelligent cleaning device cleans according to its normal cleaning program, while the cleaning category of a custom cleaning area can be set by the user, for example emphasized cleaning or no cleaning: emphasized cleaning means the device should clean with greater intensity, and no cleaning means the device need not clean there at all.
It should be emphasized that in practice, since users' cleaning requirements vary widely, a user may draw one or more annotation lines in the target map according to those requirements, forming one or more closed areas. The multiple annotation lines may be of the same type or of different types.
In the solution provided by the present application, a user can draw annotation lines in the target map according to cleaning requirements to divide it into regions; during region attribute determination, the annotation lines are identified, the connected area and closed areas are determined based on the identified lines and the auxiliary objects in the target map, and the connected area is determined as the normal cleaning area while the closed areas are determined as custom cleaning areas. Since this solution performs intelligent partitioning according to the user's personalized cleaning requirements without any virtual-wall device, it effectively reduces the cost of intelligent partitioning and improves convenience.
In addition, a target map usually lacks high-precision scale information, and user activity may temporarily move, add, or remove obstacles. As a result, the custom cleaning area the user specifies on the target map may deviate from the actual region in the target scene the user intends, i.e., the region meant to be customized. Therefore, to further improve the precision of intelligent partitioning, for a polygonal annotation line that is a rectangle, the present application further provides a region attribute determination method.
For a rectangular annotation line, as shown in FIG. 5, a region attribute determination method provided by another embodiment of the present application may include the following steps.
S201: identify an annotation line in a target map.
S202: determine, based on the identified annotation line and the auxiliary objects in the target map, the closed area in the target map and the connected area based at a first position.
The auxiliary objects include map boundaries and obstacles, and the first position is the position of a preset reference object in the target map.
S203: determine the closed area as a custom cleaning area of the intelligent cleaning device during cleaning, and the connected area as a normal cleaning area of the intelligent cleaning device during cleaning.
S201-S203 in this embodiment are similar to S101-S103 in the previous embodiment; the only difference is that here the annotation line is a rectangle, whereas in the previous embodiment it may be a straight line, a polyline, a polygon, and so on.
S204: obtain current obstacle coordinate information in the target scene.
The obstacle coordinate information is information obtained by the intelligent cleaning device performing obstacle detection on the target scene.
It can be understood that after the custom cleaning area and normal cleaning area are determined, the intelligent cleaning device can clean according to them and perform obstacle detection on the target scene during cleaning to correct the two kinds of cleaning areas. Alternatively, after the two areas are determined, the device can perform obstacle detection on the target scene before cleaning to correct them.
When the region attribute determination apparatus runs on the intelligent cleaning device, the apparatus can directly obtain the obstacle coordinate information detected by the device. When the apparatus runs on the cloud server corresponding to the device, the device can upload the detected obstacle coordinate information to the server, from which the apparatus obtains it.
In addition, the intelligent cleaning device can detect the coordinates of obstacles in a coordinate system with the device as origin, based on radar, laser, ultrasound, a vision camera, and so on. The coordinates in the device-origin frame can then be used directly as the obstacle coordinate information. Alternatively, the obstacle coordinate information can be the obstacles' coordinates in the world coordinate system; that is, after the coordinates in the device-origin frame are obtained, they are converted into world-frame coordinates. For clarity, the conversion between coordinates in the target map and coordinates in the world coordinate system, and the conversion between coordinates in the device-origin frame and coordinates in the world coordinate system, are introduced below.
The conversion formula (1) between the coordinates of the user-given rectangle in the target map and its coordinates in the world coordinate system is as follows:

$$\begin{bmatrix} x \\ y \end{bmatrix} = s \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} X_m \\ Y_m \end{bmatrix} + \begin{bmatrix} t_x \\ t_y \end{bmatrix} \qquad (1)$$

where [x y]^T are coordinates in the target map, s is the scale factor, θ is the rotation angle between the target map's coordinate system and the world coordinate system, [t_x t_y]^T is the translation vector between the target map's coordinate system and the world coordinate system, and [X_m Y_m]^T are coordinates in the world coordinate system. Here s, θ, and [t_x t_y]^T are constants already determined when the target map was built.
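For illustration, a minimal Python sketch of formula (1) and its inverse, using the symbols defined above (the helper functions themselves are hypothetical, not code from the application):

```python
import math

def world_to_map(Xm, Ym, s, theta, tx, ty):
    """Formula (1): a world-frame point mapped into target-map coordinates."""
    x = s * (math.cos(theta) * Xm - math.sin(theta) * Ym) + tx
    y = s * (math.sin(theta) * Xm + math.cos(theta) * Ym) + ty
    return x, y

def map_to_world(x, y, s, theta, tx, ty):
    """Inverse of formula (1): undo the translation, rotate back by -theta,
    and divide out the scale factor."""
    dx, dy = x - tx, y - ty
    Xm = (math.cos(theta) * dx + math.sin(theta) * dy) / s
    Ym = (-math.sin(theta) * dx + math.cos(theta) * dy) / s
    return Xm, Ym
```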
Correspondingly, the conversion formula (2) between coordinates in the coordinate system with the intelligent cleaning device as origin and coordinates in the world coordinate system is as follows:

$$\begin{bmatrix} X'_m \\ Y'_m \end{bmatrix} = \begin{bmatrix} \cos\theta' & -\sin\theta' \\ \sin\theta' & \cos\theta' \end{bmatrix} \begin{bmatrix} x'_r \\ y'_r \end{bmatrix} + \begin{bmatrix} t'_x \\ t'_y \end{bmatrix} \qquad (2)$$

where [x'_r y'_r]^T are the coordinates of an obstacle detected by the intelligent cleaning device in the coordinate system with the device as origin, θ' is the rotation angle of the device in the world coordinate system, [t'_x t'_y]^T is the translation vector of the device in the world coordinate system, and [X'_m Y'_m]^T are the obstacle's coordinates in the world coordinate system. θ' and [t'_x t'_y]^T can be determined from odometry information, as follows.
As shown in FIG. 6, given that the coordinates of the intelligent cleaning device at point p in the world coordinate system at some moment are [x y θ]^T, the coordinates [x' y' θ']^T of the device at point p' at the next moment (separated from the previous one by a sufficiently small interval) can be obtained from the following formulas (3)-(7):

Δd = (Δs_r + Δs_l) / 2          (3),

Δθ = (Δs_r − Δs_l) / L          (4),

Δx = Δd cos(θ + Δθ/2)          (5),

Δy = Δd sin(θ + Δθ/2)          (6),

[x' y' θ']^T = [x + Δx  y + Δy  θ + Δθ]^T          (7),

where Δs_l and Δs_r are the distances traveled by the device's left and right wheels, respectively, as it moves from p to p', which can be obtained from the relevant sensors; the device's forward direction is taken as the positive direction; and L is the distance between the device's left and right wheels. Then, when the device is at point p' in the world coordinate system, its corresponding rotation angle in the world coordinate system is θ', and its translation vector is

[t'_x t'_y]^T = [x' y']^T.
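A minimal sketch tying formulas (2)-(7) together (the helper functions are hypothetical; the wheel-distance readings ds_left and ds_right are assumed to come from the wheel sensors mentioned above):

```python
import math

def odometry_step(x, y, theta, ds_left, ds_right, wheel_base):
    """One small odometry update, formulas (3)-(7): pose at p -> pose at p'."""
    dd = (ds_right + ds_left) / 2.0             # (3) distance traveled
    dtheta = (ds_right - ds_left) / wheel_base  # (4) heading change
    x += dd * math.cos(theta + dtheta / 2.0)    # (5)
    y += dd * math.sin(theta + dtheta / 2.0)    # (6)
    return x, y, theta + dtheta                 # (7)

def obstacle_to_world(xr, yr, x, y, theta):
    """Formula (2): a detection (xr, yr) in the robot frame mapped to the
    world frame, with rotation theta and translation (x, y) read off the
    current pose, since [t'_x t'_y]^T = [x' y']^T."""
    Xm = math.cos(theta) * xr - math.sin(theta) * yr + x
    Ym = math.sin(theta) * xr + math.cos(theta) * yr + y
    return Xm, Ym
```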
S205: refine the boundary of the custom cleaning area based on the obstacle coordinate information and the annotation line.
After the obstacle coordinate information is obtained, the positional relationship, in the real environment of the target scene, between the custom cleaning area, i.e., the area enclosed by the annotation line, and the obstacles can be compared, so that the boundary of the custom cleaning area is refined.
For clarity of the solution and of the layout, specific implementations of refining the boundary of the current custom cleaning area based on the obstacle coordinate information and the annotation line are introduced later.
S206: correct the normal cleaning area based on the result of the boundary refinement.
After the boundary of the custom cleaning area has been refined, the normal cleaning area can be corrected based on the result of the refinement.
This embodiment further improves the precision of intelligent partitioning while reducing its implementation cost.
For clarity, several specific implementations of refining the boundary of the custom cleaning area based on the obstacle coordinate information and the annotation line are introduced below.
Optionally, in one specific implementation, refining the boundary of the custom cleaning area based on the obstacle coordinate information and the annotation line may include: if it is determined, based on the obstacle coordinate information, that no obstacle exists in the scene region corresponding to the annotation line, outputting first notification information prompting whether the annotation is erroneous; and, when the user's feedback based on the first notification information is yes, identifying a new annotation line, determining the new closed area corresponding to the identified new annotation line, and adjusting the current custom cleaning area to the new closed area.
Optionally, in another specific implementation, refining the boundary of the custom cleaning area based on the obstacle coordinate information and the annotation line may include: if it is determined, based on the obstacle coordinate information, that no obstacle exists in the scene region corresponding to the annotation line, determining, based on the obstacle coordinate information, whether the distance between the nearest obstacle and the scene region corresponding to the annotation line is less than a predetermined distance threshold; when the determination is yes, outputting second notification information prompting whether to adjust the area according to the nearest obstacle; and, when the user's feedback based on the second notification information is yes, performing single-obstacle boundary processing on the nearest obstacle on the target map to obtain a first obstacle boundary, and adjusting the current custom cleaning area to the area enclosed by the first obstacle boundary. The nearest obstacle is the obstacle in the target scene closest to the scene region corresponding to the annotation line.
The scene region corresponding to the annotation line is the region in the actual environment of the target scene that corresponds to the closed area formed based on the annotation line.
Determining, based on the obstacle coordinate information, whether no obstacle exists in the scene region corresponding to the annotation line proceeds as follows: determine the world-frame coordinate set corresponding to each obstacle's coordinate information and the world-frame coordinate set corresponding to the annotation line; then determine whether each obstacle's world-frame coordinate set and the annotation line's world-frame coordinate set share no common coordinates. If this holds for every obstacle, no obstacle exists in the scene region corresponding to the annotation line. FIG. 7a shows such a positional relationship: obstacle D lies outside the region corresponding to rectangle 10, so no obstacle exists inside the rectangle. It should be noted that the gray area in FIGS. 7a and 7b is merely an exemplary background carrying rectangle 10 and obstacle D, used to show the relative positions of the rectangle and the obstacle within the same scene, and has no limiting meaning; the gray areas in the subsequent FIGS. 8, 9, and 10 serve the same purpose as in FIG. 7.
Likewise, determining, based on the obstacle coordinate information, whether the distance between the nearest obstacle and the scene region corresponding to the annotation line is less than the predetermined distance threshold can be implemented from the positional relationship between the nearest obstacle's world-frame coordinate set and the annotation line's world-frame coordinate set.
Specifically, the single-obstacle boundary processing may include: determining the outer-contour coordinate set of the single obstacle in the target map; determining, from the outer-contour coordinate set, the vertex coordinate set V_A corresponding to the obstacle; and connecting the coordinate points corresponding to the coordinates in V_A, with the rectangle formed by the connection determined as the obstacle's boundary. The vertex coordinate set V_A can be expressed as:
V_A = {(x_min, y_min), (x_min, y_max), (x_max, y_min), (x_max, y_max)}.
As shown in FIGS. 7a and 7b, the result of single-obstacle boundary processing applied to obstacle D in FIG. 7a can be seen in FIG. 7b. The obstacle's outer-contour coordinate set in the target map can be obtained through the conversion formulas (1) and (2) above.
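A minimal sketch of this single-obstacle boundary processing (a hypothetical helper operating on an obstacle's outer-contour coordinate set, assumed to be a list of (x, y) pairs already converted onto the target map):

```python
def single_obstacle_boundary(contour):
    """Vertex coordinate set V_A (axis-aligned bounding rectangle) of one
    obstacle, from its outer-contour coordinate set [(x0, y0), (x1, y1), ...]."""
    xs = [p[0] for p in contour]
    ys = [p[1] for p in contour]
    x_min, x_max = min(xs), max(xs)
    y_min, y_max = min(ys), max(ys)
    return [(x_min, y_min), (x_min, y_max), (x_max, y_min), (x_max, y_max)]
```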
In addition, after the first notification information is output, if the user judges the annotation to be erroneous, the user can not only feed back yes but also draw a new annotation line so that it can subsequently be identified. If the user judges the annotation to be correct, the user can feed back no, in which case no further refinement is performed.
Similarly, after the second notification information is output, if the user does not wish to adjust the area according to the nearest obstacle, the user can feed back no, and no further refinement is performed. Moreover, when it is determined, based on the obstacle coordinate information, that the distance between the nearest obstacle and the scene region corresponding to the annotation line is not less than the predetermined distance threshold, no further refinement need be performed; that is, the current custom area is taken as accurate by default.
Optionally, in yet another specific implementation, refining the boundary of the custom cleaning area based on the obstacle coordinate information and the annotation line may include: if it is determined, based on the obstacle coordinate information, that the scene region corresponding to the annotation line intersects a single obstacle, determining the proportion of the intersected obstacle's area occupied by the intersecting part. Note that "intersects" in the present application means the obstacle overlaps the scene region corresponding to the annotation line.
When the area proportion is greater than a predetermined proportion threshold, single-obstacle boundary processing is performed on the intersected obstacle on the target map to obtain a second obstacle boundary, and the current custom cleaning area is adjusted to the area enclosed by the second obstacle boundary.
When the area proportion is not greater than the predetermined proportion threshold, third notification information prompting whether the annotation is erroneous is output. When the user's feedback based on the third notification information is yes, either a new annotation line is identified, the new closed area corresponding to it is determined, and the current custom cleaning area is adjusted to the new closed area; or single-obstacle boundary processing is performed on the intersected obstacle on the target map to obtain a third obstacle boundary, and the current custom cleaning area is adjusted to the area enclosed by the third obstacle boundary.
Determining, based on the obstacle coordinate information, whether the scene region corresponding to the annotation line intersects a single obstacle proceeds as follows: determine the world-frame coordinate set corresponding to the obstacle coordinate information and the world-frame coordinate set corresponding to the annotation line, and determine whether the annotation line's world-frame coordinate set contains some or all of the world-frame coordinates of exactly one obstacle; if so, the scene region corresponding to the annotation line intersects a single obstacle.
In some examples, the above area proportion can be computed from region areas, or from counts of identical coordinates. Alternatively, in a rasterized map, the proportion of the intersecting part within the intersected obstacle can be determined by computing the ratio between the number of grid cells the obstacle occupies within the scene region corresponding to the annotation line and the number of grid cells the obstacle occupies on the map, as sketched below.
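In grid-map terms, the last of these options can be sketched as follows (a hypothetical helper; both arguments are assumed to be sets of grid cells on the same rasterized map):

```python
def intersection_ratio(obstacle_cells, region_cells):
    """Fraction of an obstacle's grid cells that lie inside the scene region
    corresponding to the annotation line."""
    if not obstacle_cells:
        return 0.0
    return len(obstacle_cells & region_cells) / len(obstacle_cells)

# e.g., refine only when the obstacle mostly lies inside the region:
# if intersection_ratio(cells_F, cells_of_rectangle) > RATIO_THRESHOLD: ...
```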
After the third notification information is output, if the user determines the annotation is not erroneous, the user can feed back no, in which case no further refinement is performed.
For the specific procedure of single-obstacle boundary processing, refer to the implementation described above, which is not repeated here. In the positional relationship shown in FIG. 8a, the region corresponding to rectangle 20 contains the single obstacle E; the result of single-obstacle boundary processing on obstacle E is shown in FIG. 8b. In the positional relationship shown in FIG. 9a, the region corresponding to rectangle 30 partially intersects the single obstacle F; the result of single-obstacle boundary processing on obstacle F can be seen in FIG. 9b.
Optionally, in another specific implementation, refining the boundary of the custom cleaning area based on the obstacle coordinate information and the annotation line may include: if it is determined, based on the obstacle coordinate information, that the scene region corresponding to the annotation line intersects at least two obstacles, performing first-type or second-type multi-obstacle boundary processing on the intersected at least two obstacles on the target map to obtain a fourth obstacle boundary; and adjusting the current custom cleaning area to the area enclosed by the fourth obstacle boundary.
Determining, based on the obstacle coordinate information, whether the scene region corresponding to the annotation line intersects at least two obstacles proceeds as follows: determine the world-frame coordinate set corresponding to the obstacle coordinate information and the world-frame coordinate set corresponding to the annotation line, and determine whether the annotation line's world-frame coordinate set contains some or all of the world-frame coordinates of at least two obstacles; if so, the scene region corresponding to the annotation line intersects at least two obstacles. In the positional relationship shown in FIG. 10a, the region corresponding to rectangle 40 intersects obstacles D, E, and F.
The first-type multi-obstacle boundary processing may include performing the following on each of the at least two obstacles:
determine the obstacle's outer-contour coordinate set in the target map; determine, from the outer-contour coordinate set, the vertex coordinate set V_A corresponding to the obstacle; and connect the coordinate points corresponding to the coordinates in V_A, with the rectangle formed by the connection determined as the obstacle's boundary, where V_A can be expressed as:
V_A = {(x_min, y_min), (x_min, y_max), (x_max, y_min), (x_max, y_max)}.
The result of first-type multi-obstacle boundary processing applied to obstacles D, E, and F in FIG. 10a can be seen in FIG. 10b.
The second-type multi-obstacle boundary processing may include: determining the outer-contour coordinate set of each of the at least two obstacles in the target map; determining, from all the determined outer-contour coordinate sets, the vertex coordinate set V_B of the overall region occupied by the at least two obstacles; taking in turn each x value in the search range [x_amin, x_amax] as a first-type target object, traversing the collection {S_1, S_2, ..., S_i, ..., S_N} of the obstacles' outer-contour coordinate sets to find the coordinates whose x value equals the current first-type target object, obtaining, from all the found coordinates for the current first-type target object, the coordinates with the minimum and maximum y values, and adding the obtained coordinates to the set S_x; taking in turn each y value in the search range [y_amin, y_amax] as a second-type target object, traversing the collection {S_1, S_2, ..., S_i, ..., S_N} to find the coordinates whose y value equals the current second-type target object, obtaining, from all the found coordinates for the current second-type target object, the coordinates with the minimum and maximum x values, and adding the obtained coordinates to the set S_y; taking the intersection of S_x and S_y; and, based on a concave hull algorithm, connecting the coordinate points corresponding to the coordinates in the intersection, with the closed curve formed by the connection determined as the multi-obstacle boundary corresponding to the at least two obstacles. Here N is the number of obstacles in the scene region corresponding to the annotation line.
The vertex coordinate set V_B can be expressed as:
V_B = {(x_amin, y_amin), (x_amin, y_amax), (x_amax, y_amin), (x_amax, y_amax)},
where x_amin is the minimum value in the x direction in V_B; x_amax is the maximum value in the x direction in V_B; y_amin is the minimum value in the y direction in V_B; and y_amax is the maximum value in the y direction among the vertex coordinate sets of all obstacles in the scene region corresponding to the annotation line.
For all obstacles in the scene region corresponding to the annotation line, the respective vertex coordinate sets can be obtained. As shown in FIG. 10a, obstacles D, E, and F lie within annotation line 40, and each has its own vertex coordinate set. Among these three vertex coordinate sets, the minimum x_amin and maximum x_amax in the x direction and the minimum y_amin and maximum y_amax in the y direction are found; the vertex coordinate set formed from these extrema is V_B.
For example, suppose there are three obstacles, with 5 coordinates in obstacle 1's outer-contour coordinate set, 7 in obstacle 2's, and 5 in obstacle 3's; then the vertex coordinate set V_B is extracted from the 17-coordinate set of the overall region.
Also, if the search range is [x_amin=5, x_amax=8], the collection of outer-contour coordinate sets is traversed with the x values 5, 6, 7, and 8 in turn as first-type target objects; similarly, if the search range is [y_amin=10, y_amax=20], the collection is traversed with the y values 10, 11, 12, ..., 20 in turn as second-type target objects.
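The first stage of this second-type processing — building S_x and S_y and intersecting them — can be sketched as follows (a hypothetical helper; integer grid coordinates are assumed):

```python
def extreme_point_set(contour_sets, x_range, y_range):
    """Build S_x (per-x extremes in y) and S_y (per-y extremes in x) over
    the union of the obstacles' outer-contour sets {S_1, ..., S_N}, and
    return their intersection S_x & S_y."""
    all_points = set().union(*contour_sets)
    s_x, s_y = set(), set()
    for x in range(x_range[0], x_range[1] + 1):
        column = [p for p in all_points if p[0] == x]
        if column:
            s_x.add(min(column, key=lambda p: p[1]))
            s_x.add(max(column, key=lambda p: p[1]))
    for y in range(y_range[0], y_range[1] + 1):
        row = [p for p in all_points if p[1] == y]
        if row:
            s_y.add(min(row, key=lambda p: p[0]))
            s_y.add(max(row, key=lambda p: p[0]))
    return s_x & s_y
```

The returned points would then be connected by a concave hull algorithm, as described next.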
The concave hull algorithm includes, but is not limited to, the rolling-ball method and the alpha-shape algorithm. The basic principle of the rolling-ball method is:
(1) first find the point with the smallest Y value (taking the largest X on ties) as the initial point;
(2) starting from the initial point, find the nearest point in the point set to form the initial edge; a circle of given radius R is then pinned on that chord, giving the first initial chord;
(3) search for subsequent chords in a loop: if the previous chord is DE, the next chord must start from E and connect E to a point F within an R-neighborhood of E. Point F can be found by the following rule: first sort the points in E's R-neighborhood by polar angle, centered at E with the vector ED as reference; then, for each neighborhood point F_i (with i in [1, N)), construct the circle with EF_i as chord and check whether it contains any other point; if it does not, and the chord length is below a given threshold, EF_i is the new chord. Otherwise, search using half the distance between the two farthest points of the point set as a temporary radius R, and take as the new chord the shortest chord EF_i among those whose angle with chord DE is below a given threshold;
(4) find all chords in turn until no new chord can be found or a point that has already served as a chord endpoint is reached.
The basic principle of the alpha-shape algorithm is: on top of the convex hull, a parameter α is set so that, when the alpha shape reconstructs the outline, it does not connect vertices that are too far apart as a convex hull would. If α tends to infinity, the alpha shape approaches the convex hull without limit; if α is taken small, the alpha shape tends to cave in at certain places, fitting the outline of the point set more closely.
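As an illustrative sketch of the alpha-shape variant only (conventions for the α parameter differ between implementations; here α is treated as a circumradius threshold so that a large α approaches the convex hull, matching the description above, and SciPy's Delaunay triangulation is assumed to be available):

```python
import numpy as np
from scipy.spatial import Delaunay

def alpha_shape_edges(points, alpha):
    """Boundary edges of the alpha shape of a 2-D point set: keep the
    Delaunay triangles whose circumradius is below alpha, then return
    the edges (as index pairs) used by exactly one kept triangle."""
    pts = np.asarray(points, dtype=float)  # needs >= 3 non-collinear points
    tri = Delaunay(pts)
    edge_count = {}
    for ia, ib, ic in tri.simplices:
        a, b, c = pts[ia], pts[ib], pts[ic]
        la = np.hypot(*(b - c))  # triangle side lengths
        lb = np.hypot(*(a - c))
        lc = np.hypot(*(a - b))
        area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1])
                         - (b[1] - a[1]) * (c[0] - a[0]))
        if area == 0.0:
            continue  # degenerate (collinear) triangle
        if (la * lb * lc) / (4.0 * area) < alpha:  # circumradius filter
            for edge in ((ia, ib), (ib, ic), (ic, ia)):
                key = tuple(sorted(edge))
                edge_count[key] = edge_count.get(key, 0) + 1
    return [e for e, n in edge_count.items() if n == 1]
```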
The result of second-type multi-obstacle boundary processing applied to obstacles D, E, and F in FIG. 10a can be seen in FIG. 10c.
It should be emphasized that the above specific implementations of refining the boundary of the custom cleaning area based on the obstacle coordinate information and the annotation line are merely examples and should not limit the embodiments of the present application.
Corresponding to the above method embodiments, an embodiment of the present application further provides a region attribute determination apparatus. As shown in FIG. 11, the apparatus may include an annotation line identification unit 310, a closed and connected area determination unit 320, and a region attribute determination unit 330.
The annotation line identification unit 310 is configured to identify an annotation line in a target map, where the target map is a map of the target scene to be cleaned on which the intelligent cleaning device relies during cleaning.
The closed and connected area determination unit 320 is configured to determine, based on the identified annotation line and the auxiliary objects in the target map, the closed area in the target map and the connected area based at a first position, where the auxiliary objects include map boundaries and obstacles, and the first position is the position of a preset reference object in the target map.
The region attribute determination unit 330 is configured to determine the closed area as a custom cleaning area of the intelligent cleaning device during cleaning, and the connected area as a normal cleaning area of the intelligent cleaning device during cleaning.
In the solution provided by the present application, a user can draw annotation lines in the target map according to cleaning requirements to divide it into regions; during region attribute determination, the annotation lines are identified, the connected area and closed areas are determined based on the identified lines and the auxiliary objects in the target map, and the connected area is determined as the normal cleaning area while the closed areas are determined as custom cleaning areas. Since this solution performs intelligent partitioning according to the user's personalized cleaning requirements without any virtual-wall device, it solves the high cost and inconvenience of implementing intelligent partitioning in the prior art.
Optionally, the annotation line may include a straight line or a polyline.
Correspondingly, the closed area is the area enclosed by the annotation line and a target auxiliary object that does not contain the first position, where the target auxiliary object is an auxiliary object in the target map that intersects the annotation line.
Optionally, the annotation line may include a polygon.
Correspondingly, the closed area is the area enclosed by the sides of the polygon.
Optionally, the preset reference object includes any one or more of:
the charging dock of the intelligent cleaning device; or a non-moving object in the target scene other than the charging dock.
Optionally, when the polygon is a rectangle, as shown in FIG. 12, the apparatus may further include an obstacle coordinate information obtaining unit 340, a refinement unit 350, and a correction unit 360.
The obstacle coordinate information obtaining unit 340 is configured to obtain current obstacle coordinate information in the target scene, where the obstacle coordinate information is information obtained by the intelligent cleaning device performing obstacle detection on the target scene.
The refinement unit 350 is configured to refine the boundary of the custom cleaning area based on the obstacle coordinate information and the annotation line.
The correction unit 360 is configured to correct the normal cleaning area based on the processing result obtained by the refinement unit 350.
Optionally, the refinement unit 350 is specifically configured to: based on the obstacle coordinate information, if it is determined that no obstacle exists in the scene region corresponding to the annotation line, perform the following operations.
Output first notification information prompting whether the annotation is erroneous; when the user's feedback based on the first notification information is yes, identify a new annotation line, determine the new closed area corresponding to the identified new annotation line, and adjust the custom cleaning area to the new closed area.
Further, the refinement unit 350 is specifically configured to: based on the obstacle coordinate information, if it is determined that the distance between the nearest obstacle and the scene region corresponding to the annotation line is less than a predetermined distance threshold, perform the following operations.
Output second notification information prompting whether to adjust the area according to the nearest obstacle; when the user's feedback based on the second notification information is yes, perform single-obstacle boundary processing on the nearest obstacle on the target map to obtain a first obstacle boundary, and adjust the custom cleaning area to the area enclosed by the first obstacle boundary, where the nearest obstacle is the obstacle in the target scene closest to the scene region corresponding to the annotation line.
Optionally, the refinement unit 350 is specifically configured to:
based on the obstacle coordinate information, if it is determined that the scene region corresponding to the annotation line intersects a single obstacle, perform the following operations.
Determine the proportion of the intersected obstacle's area occupied by the intersecting part.
When the area proportion is greater than a predetermined proportion threshold, perform single-obstacle boundary processing on the intersected obstacle on the target map to obtain a second obstacle boundary.
Adjust the custom cleaning area to the area enclosed by the second obstacle boundary.
When the area proportion is not greater than the predetermined proportion threshold, output third notification information prompting whether the annotation is erroneous; when the user's feedback based on the third notification information is yes, identify a new annotation line, determine the new closed area corresponding to the identified new annotation line, and adjust the custom cleaning area to the new closed area; or, perform single-obstacle boundary processing on the intersected obstacle on the target map to obtain a third obstacle boundary, and adjust the custom cleaning area to the area enclosed by the third obstacle boundary.
Optionally, the refinement unit 350 is specifically configured to:
based on the obstacle coordinate information, if it is determined that the scene region corresponding to the annotation line intersects at least two obstacles, perform multi-obstacle boundary processing on the intersected at least two obstacles on the target map to obtain a fourth obstacle boundary; and adjust the custom cleaning area to the area enclosed by the fourth obstacle boundary.
Optionally, the single-obstacle boundary processing includes:
determining the outer-contour coordinate set of the single obstacle in the target map;
determining, from the outer-contour coordinate set, the vertex coordinate set V_A corresponding to the obstacle, where
V_A = {(x_min, y_min), (x_min, y_max), (x_max, y_min), (x_max, y_max)};
connecting the coordinate points corresponding to the coordinates in V_A, with the rectangle formed by the connection determined as the obstacle's boundary.
Optionally, the multi-obstacle boundary processing may include:
performing single-obstacle boundary processing on each of the multiple obstacles.
Optionally, the multi-obstacle boundary processing may include:
determining the outer-contour coordinate set of each of the at least two obstacles in the target map;
determining, from all the determined outer-contour coordinate sets, the vertex coordinate set V_B of the overall region occupied by the at least two obstacles;
taking in turn each x value in the search range [x_amin, x_amax] as a first-type target object, traversing the collection {S_1, S_2, ..., S_i, ..., S_N} of the obstacles' outer-contour coordinate sets to find the coordinates whose x value equals the current first-type target object, obtaining, from all the found coordinates for the current first-type target object, the coordinates with the minimum and maximum y values, and adding the obtained coordinates to the set S_x, where N is the number of obstacles in the scene region corresponding to the annotation line, x_amin is the minimum value in the x direction in V_B, and x_amax is the maximum value in the x direction in V_B;
taking in turn each y value in the search range [y_amin, y_amax] as a second-type target object, traversing the collection {S_1, S_2, ..., S_i, ..., S_N} to find the coordinates whose y value equals the current second-type target object, obtaining, from all the found coordinates for the current second-type target object, the coordinates with the minimum and maximum x values, and adding the obtained coordinates to the set S_y, where y_amin is the minimum value in the y direction in V_B and y_amax is the maximum value in the y direction in V_B;
taking the intersection of S_x and S_y and, based on a concave hull algorithm, connecting the coordinate points corresponding to the coordinates in the intersection, with the closed curve formed by the connection determined as the multi-obstacle boundary corresponding to the at least two obstacles.
In addition, the present application further provides an electronic device. As shown in FIG. 13, the electronic device includes an internal bus 410, a memory 420, a processor 430, and a communications interface 440, where the processor 430, the communications interface 440, and the memory 420 communicate with one another through the internal bus 410.
The memory 420 is configured to store machine-readable instructions corresponding to the region attribute determination method.
The processor 430 is configured to read the machine-readable instructions from the memory 420 and execute the instructions to implement the following operations.
Identify an annotation line in a target map, where the target map is a map of the target scene to be cleaned on which the intelligent cleaning device relies during cleaning.
Determine, based on the identified annotation line and the auxiliary objects in the target map, the closed area in the target map and the connected area based at a first position, where the auxiliary objects include map boundaries and obstacles, and the first position is the position of a preset reference object in the target map.
Determine the closed area as a custom cleaning area of the intelligent cleaning device during cleaning, and the connected area as a normal cleaning area of the intelligent cleaning device during cleaning.
For descriptions of the specific steps of the region attribute determination method, refer to the method embodiments of the present application, which are not repeated here. It should be emphasized that the electronic device may be the intelligent cleaning device or a cloud server corresponding to the intelligent cleaning device.
The memory 420 may be, for example, a non-volatile memory. The processor 430 may invoke logic instructions in the memory 420 that implement the region attribute determination method, to execute the region attribute determination method described above.
In addition, corresponding to the above method embodiments, the present application further provides an intelligent cleaning system. As shown in FIG. 14, the intelligent cleaning system includes a mobile terminal 1410 and an intelligent cleaning device 1420.
The mobile terminal 1410 is configured to obtain a user instruction about an annotation line in the target map and send the user instruction to the intelligent cleaning device 1420, so that the device 1420 identifies the annotation line in the target map based on the user instruction, where the target map is a map of the target scene to be cleaned on which the intelligent cleaning device 1420 relies during cleaning.
The intelligent cleaning device 1420 is configured to: identify the annotation line in the target map; determine, based on the identified annotation line and the auxiliary objects in the target map, the closed area in the target map and the connected area based at a first position, where the auxiliary objects include map boundaries and obstacles, and the first position is the position of a preset reference object in the target map; determine the closed area as a custom cleaning area of the device 1420 during cleaning and the connected area as a normal cleaning area of the device 1420 during cleaning; and clean the target scene according to the determined custom cleaning area and normal cleaning area.
In specific applications, the intelligent cleaning device 1420 includes, but is not limited to, a sweeping robot, also called an automatic sweeper, smart vacuum, robot vacuum cleaner, and the like. The mobile terminal 1410 includes, but is not limited to, a smartphone, a tablet computer, or a notebook computer. The target map can be displayed on the mobile terminal 1410, where the user can issue a user instruction about an annotation line, for example by drawing an annotation line or by giving coordinate information. The target map on the intelligent cleaning device 1420 may be built by the device itself or shared by another device.
For descriptions of the specific steps of the region attribute determination method, refer to the method embodiments of the present application, which are not repeated here.
Since this solution performs intelligent partitioning according to the user's personalized cleaning requirements without any virtual-wall device, it solves the high cost and inconvenience of implementing intelligent partitioning in the prior art.
In addition, corresponding to the above method embodiments, the present application further provides an intelligent cleaning system. As shown in FIG. 15, the intelligent cleaning system includes a mobile terminal 1510, a cloud server 1530, and an intelligent cleaning device 1520.
The mobile terminal 1510 is configured to obtain a user instruction about an annotation line in the target map and send the user instruction to the cloud server 1530, so that the server 1530 identifies the annotation line in the target map based on the user instruction, where the target map is a map of the target scene to be cleaned on which the intelligent cleaning device 1520 relies during cleaning.
The cloud server 1530 is configured to: identify the annotation line in the target map; determine, based on the identified annotation line and the auxiliary objects in the target map, the closed area in the target map and the connected area based at a first position, where the auxiliary objects include map boundaries and obstacles, and the first position is the position of a preset reference object in the target map; and determine the closed area as a custom cleaning area of the device 1520 during cleaning and the connected area as a normal cleaning area of the device 1520 during cleaning.
The intelligent cleaning device 1520 is configured to clean the target scene according to the custom cleaning area and normal cleaning area determined by the cloud server 1530.
In specific applications, the intelligent cleaning device 1520 includes, but is not limited to, a sweeping robot, also called an automatic sweeper, smart vacuum, robot vacuum cleaner, and the like. The mobile terminal 1510 includes, but is not limited to, a smartphone, a tablet computer, or a notebook computer. The target map can be displayed on the mobile terminal 1510, where the user can issue a user instruction about an annotation line, for example by drawing an annotation line or by giving coordinate information. The target map on the cloud server 1530 may be uploaded by the intelligent cleaning device or by another device.
For descriptions of the specific steps of the region attribute determination method, refer to the method embodiments of the present application, which are not repeated here.
Since this solution performs intelligent partitioning according to the user's personalized cleaning requirements without any virtual-wall device, it solves the high cost and inconvenience of implementing intelligent partitioning in the prior art.
If implemented in the form of software functional units and sold or used as standalone products, the functions of the logic instructions implementing the region attribute determination method may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc. Other embodiments of the present application will be readily apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. The present application is intended to cover any variations, uses, or adaptations of the application that follow its general principles and include common general knowledge or customary technical means in the art not disclosed herein. The specification and examples are to be regarded as illustrative only, with the true scope and spirit of the application being indicated by the following claims.
It should be understood that the present application is not limited to the precise structures described above and shown in the accompanying drawings, and various modifications and changes may be made without departing from its scope. The scope of the present application is limited only by the appended claims.
The above are merely preferred embodiments of the present application and are not intended to limit it; any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present application shall fall within its scope of protection.

Claims (16)

  1. A region attribute determination method, comprising:
    identifying an annotation line in a target map, wherein the target map is a map of the target scene to be cleaned on which an intelligent cleaning device relies during cleaning;
    determining, based on the identified annotation line and auxiliary objects in the target map, a closed area in the target map and a connected area based at a first position, wherein the auxiliary objects comprise map boundaries and obstacles, and the first position is the position of a preset reference object in the target map;
    determining the closed area as a custom cleaning area of the intelligent cleaning device during cleaning; and
    determining the connected area as a normal cleaning area of the intelligent cleaning device during cleaning.
  2. The method according to claim 1, wherein
    the annotation line comprises a straight line or a polyline; and
    the closed area is an area enclosed by the annotation line and a target auxiliary object and not containing the first position, wherein the target auxiliary object is an auxiliary object in the target map that intersects the annotation line.
  3. The method according to claim 1, wherein
    the annotation line comprises a polygon; and
    the closed area is the area enclosed by the sides of the polygon.
  4. The method according to any one of claims 1-3, wherein the preset reference object comprises any one or more of:
    a charging dock of the intelligent cleaning device,
    a non-moving object in the target scene other than the charging dock.
  5. The method according to claim 3, wherein the polygon is a rectangle, and the method further comprises:
    obtaining current obstacle coordinate information in the target scene, wherein the obstacle coordinate information is information obtained by the intelligent cleaning device performing obstacle detection on the target scene;
    performing boundary refinement on the custom cleaning area based on the obstacle coordinate information and the annotation line; and
    correcting the normal cleaning area based on the result of the boundary refinement.
  6. The method according to claim 5, wherein performing the boundary refinement on the custom cleaning area based on the obstacle coordinate information and the annotation line comprises:
    if it is determined, based on the obstacle coordinate information, that no obstacle exists in the scene region corresponding to the annotation line, outputting first notification information prompting whether the annotation is erroneous;
    when the user's feedback based on the first notification information is yes, identifying a new annotation line;
    determining a new closed area corresponding to the identified new annotation line; and
    adjusting the custom cleaning area to the new closed area.
  7. The method according to claim 5, wherein performing the boundary refinement on the custom cleaning area based on the obstacle coordinate information and the annotation line comprises:
    determining, based on the obstacle coordinate information, whether the distance between a nearest obstacle and the scene region corresponding to the annotation line is less than a predetermined distance threshold, wherein the nearest obstacle is the obstacle in the target scene closest to the scene region corresponding to the annotation line;
    when the determination result is yes, outputting second notification information prompting whether to adjust the area according to the nearest obstacle;
    when the user's feedback based on the second notification information is yes, performing single-obstacle boundary processing on the nearest obstacle on the target map to obtain a first obstacle boundary; and
    adjusting the custom cleaning area to the area enclosed by the first obstacle boundary.
  8. The method according to claim 5, wherein performing the boundary refinement on the custom cleaning area based on the obstacle coordinate information and the annotation line comprises:
    if it is determined, based on the obstacle coordinate information, that the scene region corresponding to the annotation line intersects a single obstacle, determining the proportion of the intersected obstacle's area occupied by the intersecting part;
    when the area proportion is greater than a predetermined proportion threshold,
    performing single-obstacle boundary processing on the intersected obstacle on the target map to obtain a second obstacle boundary, and
    adjusting the custom cleaning area to the area enclosed by the second obstacle boundary;
    when the area proportion is not greater than the predetermined proportion threshold,
    outputting third notification information prompting whether the annotation is erroneous, and, when the user's feedback based on the third notification information is yes, identifying a new annotation line, determining a new closed area corresponding to the identified new annotation line, and adjusting the custom cleaning area to the new closed area; or,
    performing single-obstacle boundary processing on the intersected obstacle on the target map to obtain a third obstacle boundary, and adjusting the custom cleaning area to the area enclosed by the third obstacle boundary.
  9. The method according to claim 5, wherein performing the boundary refinement on the custom cleaning area based on the obstacle coordinate information and the annotation line comprises:
    if it is determined, based on the obstacle coordinate information, that the scene region corresponding to the annotation line intersects at least two obstacles, performing multi-obstacle boundary processing on the intersected at least two obstacles on the target map to obtain a fourth obstacle boundary; and
    adjusting the custom cleaning area to the area enclosed by the fourth obstacle boundary.
  10. The method according to claim 7 or 8, wherein the single-obstacle boundary processing comprises:
    determining an outer-contour coordinate set of the single obstacle in the target map;
    determining, from the outer-contour coordinate set, a vertex coordinate set V_A corresponding to the obstacle; and
    connecting the coordinate points corresponding to the coordinates in the vertex coordinate set V_A, and determining the rectangle formed by the connection as the obstacle boundary of the obstacle.
  11. The method according to claim 9, wherein the multi-obstacle boundary processing comprises:
    performing single-obstacle boundary processing on each of the at least two obstacles.
  12. The method according to claim 8, wherein the multi-obstacle boundary processing comprises:
    determining an outer-contour coordinate set of each of the at least two obstacles in the target map;
    determining, from all the determined outer-contour coordinate sets, a vertex coordinate set V_B of the overall region occupied by the at least two obstacles;
    taking in turn each x value in a search range [x_amin, x_amax] as a first-type target object, traversing the collection {S_1, S_2, ..., S_i, ..., S_N} of the outer-contour coordinate sets of the at least two obstacles to find the coordinates whose x value equals the current first-type target object, obtaining, from all the coordinates found for the current first-type target object, the coordinates with the minimum y value and the maximum y value, and adding the obtained coordinates to a set S_x, wherein N is the number of obstacles in the scene region corresponding to the annotation line, x_amin is the minimum value in the x direction in the vertex coordinate set V_B, and x_amax is the maximum value in the x direction in the vertex coordinate set V_B;
    taking in turn each y value in a search range [y_amin, y_amax] as a second-type target object, traversing the collection {S_1, S_2, ..., S_i, ..., S_N} of the outer-contour coordinate sets of the at least two obstacles to find the coordinates whose y value equals the current second-type target object, obtaining, from all the coordinates found for the current second-type target object, the coordinates with the minimum x value and the maximum x value, and adding the obtained coordinates to a set S_y, wherein y_amin is the minimum value in the y direction in the vertex coordinate set V_B, and y_amax is the maximum value in the y direction in the vertex coordinate set V_B; and
    taking the intersection of the set S_x and the set S_y, connecting, based on a concave hull algorithm, the coordinate points corresponding to the coordinates in the intersection, and determining the closed curve formed by the connection as the multi-obstacle boundary corresponding to the at least two obstacles.
  13. A region attribute determination apparatus, comprising:
    an annotation line identification unit configured to identify an annotation line in a target map, wherein the target map is a map of the target scene to be cleaned on which an intelligent cleaning device relies during cleaning;
    a closed and connected area determination unit configured to determine, based on the identified annotation line and auxiliary objects in the target map, a closed area in the target map and a connected area based at a first position, wherein the auxiliary objects comprise map boundaries and obstacles, and the first position is the position of a preset reference object in the target map; and
    a region attribute determination unit configured to determine the closed area as a custom cleaning area of the intelligent cleaning device during cleaning, and the connected area as a normal cleaning area of the intelligent cleaning device during cleaning.
  14. An electronic device, comprising an internal bus, a memory, a processor, and a communications interface;
    wherein the processor, the communications interface, and the memory communicate with one another through the internal bus;
    the memory is configured to store machine-readable instructions corresponding to a region attribute determination method; and
    the processor is configured to read the machine-readable instructions from the memory and execute the instructions to implement the region attribute determination method according to any one of claims 1-12.
  15. An intelligent cleaning system, comprising a mobile terminal and an intelligent cleaning device;
    wherein the mobile terminal is configured to obtain a user instruction about an annotation line in a target map and send the user instruction to the intelligent cleaning device, so that the intelligent cleaning device identifies the annotation line in the target map based on the user instruction, wherein the target map is a map of the target scene to be cleaned on which the intelligent cleaning device relies during cleaning; and
    the intelligent cleaning device is configured to:
    identify the annotation line in the target map;
    determine, based on the identified annotation line and auxiliary objects in the target map, a closed area in the target map and a connected area based at a first position, wherein the auxiliary objects comprise map boundaries and obstacles, and the first position is the position of a preset reference object in the target map;
    determine the closed area as a custom cleaning area of the intelligent cleaning device during cleaning, and the connected area as a normal cleaning area of the intelligent cleaning device during cleaning; and
    clean the target scene according to the determined custom cleaning area and normal cleaning area.
  16. An intelligent cleaning system, comprising a mobile terminal, a cloud server, and an intelligent cleaning device;
    wherein the mobile terminal is configured to obtain a user instruction about an annotation line in a target map and send the user instruction to the cloud server, so that the cloud server identifies the annotation line in the target map based on the user instruction, wherein the target map is a map of the target scene to be cleaned on which the intelligent cleaning device relies during cleaning;
    the cloud server is configured to:
    identify the annotation line in the target map;
    determine, based on the identified annotation line and auxiliary objects in the target map, a closed area in the target map and a connected area based at a first position, wherein the auxiliary objects comprise map boundaries and obstacles, and the first position is the position of a preset reference object in the target map; and
    determine the closed area as a custom cleaning area of the intelligent cleaning device during cleaning, and the connected area as a normal cleaning area of the intelligent cleaning device during cleaning; and
    the intelligent cleaning device is configured to clean the target scene according to the custom cleaning area and the normal cleaning area determined by the cloud server.
PCT/CN2018/112923 2017-11-08 2018-10-31 Region attribute determination WO2019091310A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
PL18876322.1T PL3689215T3 (pl) 2017-11-08 2018-10-31 Określanie atrybutu obszaru
ES18876322T ES2951843T3 (es) 2017-11-08 2018-10-31 Determinación de un atributo de región
EP18876322.1A EP3689215B1 (en) 2017-11-08 2018-10-31 Region attribute determination
US16/762,448 US11877716B2 (en) 2017-11-08 2018-10-31 Determining region attribute

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711090900.XA 2017-11-08 2017-11-08 Region attribute determination method, apparatus and system, and electronic device
CN201711090900.X 2017-11-08

Publications (1)

Publication Number Publication Date
WO2019091310A1 true WO2019091310A1 (zh) 2019-05-16

Family

ID=66401401

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/112923 WO2019091310A1 (zh) 2017-11-08 2018-10-31 区域属性确定

Country Status (6)

Country Link
US (1) US11877716B2 (zh)
EP (1) EP3689215B1 (zh)
CN (1) CN109744945B (zh)
ES (1) ES2951843T3 (zh)
PL (1) PL3689215T3 (zh)
WO (1) WO2019091310A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113693521A (zh) * 2021-02-10 2021-11-26 北京石头世纪科技股份有限公司 Control method and apparatus for an automatic cleaning device, medium, and electronic device

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112051841B (zh) * 2019-06-05 2023-06-02 南京苏美达智能技术有限公司 Obstacle boundary generation method and apparatus
CN110450152A (zh) * 2019-06-24 2019-11-15 广东宝乐机器人股份有限公司 Region identification method, robot, and storage medium
CN112631267B (zh) * 2019-10-09 2023-01-24 苏州宝时得电动工具有限公司 Control method for an automatic walking device and automatic walking device
CN111552286B (zh) * 2020-04-22 2024-05-07 深圳市优必选科技股份有限公司 Robot and movement control method and apparatus thereof
CN111474932B (zh) * 2020-04-23 2021-05-11 大连理工大学 Mobile robot mapping and navigation method integrating scenario experience
CN113806455B (zh) * 2020-06-12 2024-03-29 未岚大陆(北京)科技有限公司 Map construction method, device, and storage medium
CN111728535B (zh) * 2020-06-22 2021-12-28 上海高仙自动化科技发展有限公司 Method and apparatus for generating a cleaning path, electronic device, and storage medium
CN111753768B (zh) * 2020-06-29 2023-07-28 北京百度网讯科技有限公司 Method and apparatus for representing obstacle shapes, electronic device, and storage medium
CN111722630B (zh) * 2020-06-30 2024-02-02 深圳银星智能集团股份有限公司 Partition boundary expansion method, apparatus, device, and storage medium for a cleaning robot
CN112180945B (zh) * 2020-10-22 2023-08-04 南京苏美达智能技术有限公司 Method for automatically generating obstacle boundaries and automatic walking device
CN113469843B (zh) * 2021-06-29 2023-07-21 青岛海尔科技有限公司 Control method and apparatus for a storage appliance, storage medium, and electronic device
CN113741446B (zh) * 2021-08-27 2024-04-16 深圳市优必选科技股份有限公司 Method for autonomous robot exploration, terminal device, and storage medium
CN114983272B (zh) * 2022-05-30 2024-05-14 美智纵横科技有限责任公司 Scene map processing method and apparatus, cleaning assembly, and cleaning device

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040096083A1 (en) * 2002-11-19 2004-05-20 Honda Motor Co., Ltd. Mobile apparatus
CN101101203A (zh) * 2006-07-05 2008-01-09 三星电子株式会社 Device, method, medium, and mobile robot for dividing regions using feature points
CN104161487A (zh) * 2013-05-17 2014-11-26 恩斯迈电子(深圳)有限公司 Mobile device
JP2016019029A (ja) * 2014-07-04 2016-02-01 シャープ株式会社 Mobile radio wave monitoring device
CN106983460A (zh) * 2017-04-07 2017-07-28 小狗电器互联网科技(北京)股份有限公司 Image-based region cleaning control method for a sweeping robot
TW201726044A (zh) * 2016-01-28 2017-08-01 原相科技股份有限公司 Automatic cleaner control method and automatic cleaner
CN107028558A (zh) * 2016-02-03 2017-08-11 原相科技股份有限公司 Computer-readable recording medium and automatic cleaner
CN107239074A (zh) * 2016-03-29 2017-10-10 苏州宝时得电动工具有限公司 Automatic working system and map building method for its working area
CN107328417A (zh) * 2017-06-13 2017-11-07 上海斐讯数据通信技术有限公司 Intelligent sweeping robot positioning method and system

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005296511A (ja) * 2004-04-15 2005-10-27 Funai Electric Co Ltd Self-propelled vacuum cleaner
WO2014050689A1 (ja) * 2012-09-27 2014-04-03 シャープ株式会社 Image processing device
CN103472823B (zh) * 2013-08-20 2015-11-18 苏州两江科技有限公司 Grid map creation method for an intelligent robot
CN104460663A (zh) * 2013-09-23 2015-03-25 科沃斯机器人科技(苏州)有限公司 Method for controlling a cleaning robot with a smartphone
KR102306709B1 (ko) * 2014-08-19 2021-09-29 삼성전자주식회사 Cleaning robot, control apparatus for a cleaning robot, control system, and control method
CN106652756A (zh) * 2015-11-03 2017-05-10 圆通速递有限公司 Method for accurately drawing polygons based on an electronic map and its application
KR20170053351A (ko) * 2015-11-06 2017-05-16 삼성전자주식회사 Cleaning robot and control method thereof
CN106774294A (zh) * 2015-11-20 2017-05-31 沈阳新松机器人自动化股份有限公司 Virtual wall design method for a mobile robot
CN106272420B (zh) * 2016-08-30 2019-07-02 北京小米移动软件有限公司 Robot and robot control method


Also Published As

Publication number Publication date
US20200359867A1 (en) 2020-11-19
PL3689215T3 (pl) 2023-08-28
ES2951843T3 (es) 2023-10-25
EP3689215A4 (en) 2021-01-20
CN109744945A (zh) 2019-05-14
US11877716B2 (en) 2024-01-23
EP3689215A1 (en) 2020-08-05
CN109744945B (zh) 2020-12-04
EP3689215A8 (en) 2020-09-23
EP3689215B1 (en) 2023-05-31

Similar Documents

Publication Publication Date Title
WO2019091310A1 (zh) 2019-05-16 Region attribute determination
US10198823B1 (en) Segmentation of object image data from background image data
EP3505866B1 (en) Method and apparatus for creating map and positioning moving entity
AU2017387638B2 (en) Computer vision systems and methods for detecting and modeling features of structures in images
US20230260151A1 (en) Simultaneous Localization and Mapping Method, Device, System and Storage Medium
CN108885459B (zh) 导航方法、导航系统、移动控制系统及移动机器人
CN109074083B (zh) 移动控制方法、移动机器人及计算机存储介质
CN110568447B (zh) 视觉定位的方法、装置及计算机可读介质
WO2020119684A1 (zh) 一种3d导航语义地图更新方法、装置及设备
US10068373B2 (en) Electronic device for providing map information
WO2020223974A1 (zh) 更新地图的方法及移动机器人
CN109584302B (zh) 相机位姿优化方法、装置、电子设备和计算机可读介质
US6898518B2 (en) Landmark-based location of users
WO2019062651A1 (zh) 一种定位建图的方法及系统
CN111094895B (zh) 用于在预构建的视觉地图中进行鲁棒自重新定位的系统和方法
WO2020113423A1 (zh) 目标场景三维重建方法、系统及无人机
TW201941098A (zh) 智慧型裝置跟焦方法、裝置、智慧型裝置及儲存媒體
US20220156969A1 (en) Visual localization method, terminal, and server
CN110657804B (zh) 室内位置服务
KR20160129000A (ko) 모바일 디바이스를 위한 실시간 3d 제스처 인식 및 트랙킹 시스템
JP2024509690A (ja) 三次元地図を構築する方法および装置
CN113095184B (zh) 定位方法、行驶控制方法、装置、计算机设备及存储介质
WO2019113859A1 (zh) 基于机器视觉的虚拟墙构建方法及装置、地图构建方法、可移动电子设备
EP4177790A1 (en) Map creation method and apparatus for self-moving device, and device and storage medium
WO2019045728A1 (en) ELECTRONIC DEVICES, METHODS, AND COMPUTER PROGRAM PRODUCTS FOR CONTROLLING 3D MODELING OPERATIONS BASED ON POSITION MEASUREMENTS

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18876322

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2018876322

Country of ref document: EP

Effective date: 20200429

NENP Non-entry into the national phase

Ref country code: DE