WO2020200282A1 - Method and apparatus for constructing a robot working area map, robot, and medium - Google Patents

Method and apparatus for constructing a robot working area map, robot, and medium

Info

Publication number
WO2020200282A1
Authority
WO
WIPO (PCT)
Prior art keywords
obstacle
robot
edge
array
unit
Prior art date
Application number
PCT/CN2020/083000
Other languages
English (en)
French (fr)
Inventor
乌尔奇
牛延升
刘帅
Original Assignee
北京石头世纪科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京石头世纪科技股份有限公司
Priority to EP20784889.6A (EP3951544A4)
Priority to US17/601,026 (US20220167820A1)
Publication of WO2020200282A1


Classifications

    • G05D1/0246: Control of position or course in two dimensions, specially adapted to land vehicles, using optical position detecting means, namely a video camera in combination with image processing means
    • A47L11/4011: Regulation of the cleaning machine by electric means; control systems and remote control systems therefor
    • G01S17/86: Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G01S17/931: Lidar systems specially adapted for anti-collision purposes of land vehicles
    • G05D1/024: Optical position detecting means using obstacle or wall sensors in combination with a laser
    • G05D1/0248: Optical position detecting means using a video camera with image processing means in combination with a laser
    • G05D1/0274: Internal positioning means using mapping information stored in a memory device
    • G05D1/242: Determining position or orientation based on the reflection of waves generated by the vehicle
    • G05D1/243: Determining position or orientation by capturing signals occurring naturally from the environment, e.g. ambient optical, acoustic, gravitational or magnetic signals
    • G05D1/2462: Determining position or orientation using environment maps, e.g. simultaneous localisation and mapping [SLAM], with feature-based mapping
    • G05D1/6482: Performing a task within a working area or space, e.g. cleaning, by dividing the whole area or space into sectors to be processed separately
    • G06T7/13: Image analysis; segmentation; edge detection
    • G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. edges, contours, loops, corners, strokes or intersections; connectivity analysis
    • G06V20/50: Scenes; context or environment of the image
    • A47L2201/04: Robotic cleaning machines; automatic control of the travelling movement; automatic obstacle detection
    • G05D2105/10: Specific applications of the controlled vehicles, namely cleaning, vacuuming or polishing
    • G05D2107/40: Specific environments of the controlled vehicles, namely the indoor domestic environment
    • G05D2109/10: Types of controlled vehicles, namely land vehicles
    • G05D2111/10: Optical signals
    • G05D2111/17: Coherent light, e.g. laser signals
    • G05D2111/67: Sensor fusion
    • G06T2207/30261: Subject of image, namely vehicle exterior or vicinity; obstacle

Definitions

  • the present disclosure relates to the field of control technology, and in particular to a method, device, robot and medium for constructing a robot working area map.
  • Conventional sweeping robots use inertial navigation, lidar, or cameras for map planning and navigation.
  • When users use a sweeping robot to clean the floor, they see the division of the cleaning area in real time on a mobile device.
  • However, this division of the cleaning area is not on a per-room basis.
  • Instead, the cleaned area is divided arbitrarily into multiple zones based only on its coordinate information.
  • The conventional display methods described above can no longer meet users' needs.
  • For example, the living room of a home is an area with frequent foot traffic.
  • The natural division method described above confuses the user: because the map is formed in real time, it is difficult for the user to direct the robot straight to the area to be cleaned during the next cleaning.
  • In general, existing map formation methods cannot effectively divide rooms, so the user cannot accurately specify a particular room for cleaning.
  • For example, the living room is a frequently used area with a lot of dust.
  • Yet the user cannot clearly and accurately instruct the sweeping robot to go to the living room and clean the entire living room area.
  • Current technology can only guide the robot to the living room area; it cannot guarantee that the entire living room will be cleaned after arrival.
  • the embodiments of the present disclosure provide a method, device, robot, and storage medium for constructing a map of a robot working area.
  • embodiments of the present disclosure provide a method for constructing a robot work area map, the method including:
  • the work area is divided into a plurality of sub-areas based on the reference information.
  • the real-time scanning of obstacles in the driving path and recording the position parameters of the obstacles include:
  • using lidar to scan obstacles in the driving path in real time, and determining whether the scanned position is an edge position of the obstacle;
  • after recording the coordinate parameters of the edge position of the obstacle for each scan, the method includes:
  • the acquiring image information of the obstacle in the driving path in real time includes:
  • multiple pieces of image information of the edge are acquired in real time from different positions and/or different angles through the camera device.
  • when the position is determined to be the edge of the obstacle, after acquiring multiple pieces of image information of the edge in real time from different positions and/or different angles through a camera device, the method includes:
  • the determining, according to the position parameter and the image information, that the obstacle serves as reference information of the working area includes:
  • the edge of the obstacle is determined as the reference position of the working area.
  • the dividing the work area into multiple sub-areas based on the reference information includes:
  • the multiple sub-regions are marked.
  • the mark includes: color and/or name mark.
  • an embodiment of the present disclosure provides an apparatus for constructing a robot work area map, the apparatus including:
  • the scanning unit is used to scan obstacles in the driving path in real time and record the position parameters of the obstacles;
  • a camera unit for acquiring image information of the obstacle in the driving path in real time
  • a determining unit configured to determine, according to the position parameter and the image information, that the obstacle serves as reference information of the working area;
  • the dividing unit is configured to divide the work area into multiple sub-areas based on the reference information.
  • the scanning unit is also used to:
  • using lidar to scan obstacles in the driving path in real time, and determining whether the scanned position is an edge position of the obstacle;
  • the scanning unit is also used to:
  • the camera unit is also used for:
  • multiple pieces of image information of the edge are acquired in real time from different positions and/or different angles through the camera device.
  • the camera unit is also used for:
  • the determining unit is further configured to:
  • the edge of the obstacle is determined as the reference position of the working area.
  • the dividing unit is further used for:
  • the multiple sub-regions are marked.
  • the mark includes: color and/or name mark.
  • an embodiment of the present disclosure provides an apparatus for constructing a robot work area map, including a processor and a memory, the memory storing computer program instructions executable by the processor; when the processor executes the computer program instructions, any of the method steps described above are implemented.
  • an embodiment of the present disclosure provides a robot, including the device described in any one of the above.
  • embodiments of the present disclosure provide a non-transitory computer-readable storage medium that stores computer program instructions that, when called and executed by a processor, implement any of the method steps described above.
  • FIG. 1 is a schematic diagram of an application scenario provided by an embodiment of the disclosure.
  • FIG. 2 is a top view of a robot structure provided by an embodiment of the disclosure.
  • FIG. 3 is a bottom view of a robot structure provided by an embodiment of the disclosure.
  • FIG. 4 is a front view of a robot structure provided by an embodiment of the disclosure.
  • FIG. 5 is a perspective view of a robot structure provided by an embodiment of the disclosure.
  • FIG. 6 is a block diagram of a robot structure provided by an embodiment of the disclosure.
  • FIG. 7 is a schematic flowchart of a method for constructing a robot map provided by an embodiment of the disclosure.
  • FIG. 8 is a schematic diagram of a sub-process of a method for constructing a robot map provided by an embodiment of the disclosure.
  • FIG. 9 is a schematic diagram of a sub-process of a method for constructing a robot map provided by an embodiment of the disclosure.
  • FIG. 10 is a schematic structural diagram of a robot map construction device provided by another embodiment of the present disclosure.
  • FIG. 11 is a schematic diagram of the electronic structure of a robot provided by an embodiment of the disclosure.
  • FIG. 12 is a schematic diagram of the construction result of a robot map provided by an embodiment of the disclosure.
  • the robot 100 can travel on the ground through various combinations of movement relative to the following three mutually perpendicular axes defined by the main body 110: the front and rear axis X, the lateral axis Y, and the central vertical axis Z.
  • the forward driving direction along the front and rear axis X is denoted as “forward”
  • the backward driving direction along the front and rear axis X is denoted as “rear”.
  • the lateral axis Y essentially extends between the right wheel and the left wheel of the robot along the axis defined by the center point of the driving wheel module 141.
  • the robot 100 can rotate around the Y axis.
  • the robot 100 can rotate around the Z axis. In the forward direction of the robot, when the robot 100 is tilted to the right of the X axis, it is “turn right”, and when the robot 100 is tilted to the left of the X axis, it is “turn left”.
  • the application scenario includes a robot, such as a sweeping robot, a mopping robot, a vacuum cleaner, a lawn mower, and so on.
  • the robot may be a sweeping robot or a mopping robot.
  • the robot may be provided with a voice recognition system to receive voice instructions from the user, and rotate in the direction of the arrow according to the voice instructions to respond to the user's voice instructions. At the same time, the robot can clean in the direction indicated by the arrow after responding to the instruction, and scan and photograph the cleaning area to obtain map information of the room.
  • the robot can also be equipped with a voice output device to output prompt voices.
  • the robot may be provided with a touch-sensitive display to receive operation instructions input by the user.
  • The robot can also be provided with wireless communication modules, such as a Wi-Fi module and a Bluetooth module, to connect with a smart terminal and receive, through the wireless communication module, operation instructions transmitted by the user via the smart terminal.
  • the robot 100 includes a machine body 110, a sensing system 120, a control system, a driving system 140, a cleaning system, an energy system, and a human-computer interaction system 170.
  • The machine body 110 includes a forward portion 111 and a rearward portion 112 and has an approximately circular shape (circular both front and rear); it may also have other shapes, including but not limited to an approximate D shape that is squared at the front and rounded at the rear.
  • The sensing system 120 includes a position determining device 121 located above the machine body 110, a buffer 122 located in the forward portion 111 of the machine body 110, a cliff sensor 123, and sensing devices such as an ultrasonic sensor, an infrared sensor, a magnetometer, an accelerometer, a gyroscope, and an odometer, which provide the control system 130 with various position and movement status information of the machine.
  • The position determining device 121 includes, but is not limited to, at least one camera and a laser distance measuring device (LDS). The following uses a triangulation-based laser ranging device as an example to illustrate how position is determined.
  • The basic principle of triangulation ranging is the proportional relationship of similar triangles, which will not be repeated here.
  • the laser distance measuring device includes a light emitting unit and a light receiving unit.
  • the light-emitting unit may include a light source that emits light
  • the light source may include a light-emitting element, such as an infrared or visible light emitting diode (LED) that emits infrared light or visible light.
  • the light source may be a light emitting element that emits a laser beam.
  • a laser diode (LD) is used as an example of a light source.
  • A light source using a laser beam can make measurements more accurate than other types of light.
  • the laser diode can be a point laser, which measures the two-dimensional position information of obstacles, or it can be a line laser, which measures three-dimensional position information of obstacles within a certain range.
  • the light receiving unit may include an image sensor on which a light spot reflected or scattered by an obstacle is formed.
  • the image sensor may be a collection of multiple unit pixels in a single row or multiple rows. These light-receiving elements can convert optical signals into electrical signals.
  • the image sensor may be a complementary metal oxide semiconductor (CMOS) sensor or a charge coupled device (CCD) sensor, and it is preferably a complementary metal oxide semiconductor (CMOS) sensor due to cost advantages.
  • the light receiving unit may include a light receiving lens assembly. The light reflected or scattered by the obstacle may travel through the light receiving lens assembly to form an image on the image sensor.
  • the light receiving lens assembly may include a single lens or multiple lenses.
  • the base may support the light-emitting unit and the light-receiving unit, and the light-emitting unit and the light-receiving unit are arranged on the base and separated from each other by a certain distance.
  • The base may be rotatably arranged on the main body 110; alternatively, the base itself may not rotate, with a rotating element provided instead to rotate the emitted and received light.
  • The rotational angular velocity of the rotating element can be obtained with an optocoupler element and a code disc: the optocoupler senses the tooth gaps on the code disc, and the instantaneous angular velocity is obtained by dividing the angular spacing between tooth gaps by the time taken to traverse one gap.
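  • As a rough illustration only (a sketch, not part of the original disclosure; all names and values are hypothetical), the instantaneous angular velocity follows directly from the tooth-gap geometry described above:

        import math

        def instantaneous_angular_velocity(gap_count: int, gap_time_s: float) -> float:
            # Angular spacing of one tooth gap on a code disc with
            # `gap_count` evenly spaced gaps.
            gap_angle_rad = 2 * math.pi / gap_count
            # Instantaneous angular velocity = gap spacing / traversal time.
            return gap_angle_rad / gap_time_s

        # Example: a 36-gap disc traversing one gap in 5 ms -> about 34.9 rad/s.
        omega = instantaneous_angular_velocity(36, 0.005)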
  • A data processing device connected to the light-receiving unit, such as a DSP, records obstacle distance values at all angles relative to the robot's 0-degree direction and transmits them to a data processing unit in the control system 130, such as an application processor (AP) containing a CPU; the CPU runs a particle-filter-based positioning algorithm to obtain the current position of the robot and draws a map based on this position for navigation.
  • The positioning algorithm preferably uses simultaneous localization and mapping (SLAM).
  • Although a triangulation-based laser ranging device can in principle measure distances at essentially unlimited range beyond a certain minimum, long-distance measurement, for example 6 meters or more, is very difficult to realize in practice, mainly because of the limited size of the pixel units on the light-receiving sensor, and also because of the photoelectric conversion speed of the sensor, the data transmission speed between the sensor and the connected DSP, and the calculation speed of the DSP.
  • Measured values from the laser ranging device are also subject to temperature-induced changes that the system cannot tolerate, mainly because thermal expansion and deformation of the structure between the light-emitting and light-receiving units change the angle between the incident and emitted light, and because the two units themselves suffer from temperature drift; after long-term use, deformation accumulated from temperature changes, vibration, and other factors will likewise seriously affect the measurement results.
  • The accuracy of the measurement results directly determines the accuracy of the drawn map and is the basis for the robot's further strategies, so it is particularly important.
  • the forward part 111 of the machine body 110 can carry a buffer 122.
  • Events in the travel path, such as collisions with obstacles or walls, are detected at the buffer 122 by a sensor system, such as an infrared sensor.
  • Based on the events detected by the buffer 122, the robot can control the driving wheel module 141 to respond, for example by moving away from the obstacle.
  • The control system 130 is arranged on the main circuit board in the machine body 110 and includes a non-transitory memory, such as a hard disk, flash memory, or random access memory, and a computing processor for communication, such as a central processing unit or an application processor.
  • The application processor uses a positioning algorithm, such as SLAM, to draw a real-time map of the robot's environment according to the obstacle information fed back by the laser ranging device.
  • the control system 130 can plan the most efficient and reasonable cleaning path and cleaning method based on the map information drawn by SLAM, which greatly improves the cleaning efficiency of the robot.
  • the driving system 140 can manipulate the robot 100 to travel across the ground based on driving commands having distance and angle information, such as x, y, and ⁇ components.
  • the driving system 140 includes a driving wheel module 141.
  • the driving wheel module 141 can simultaneously control the left wheel and the right wheel.
  • the driving wheel module 141 preferably includes a left driving wheel module and a right driving wheel module.
  • the left and right driving wheel modules are opposed to each other along a transverse axis defined by the main body 110.
  • the robot may include one or more driven wheels 142, which include but are not limited to universal wheels.
  • the driving wheel module includes a walking wheel, a driving motor, and a control circuit for controlling the driving motor.
  • the driving wheel module can also be connected to a circuit for measuring driving current and an odometer.
  • the driving wheel module 141 can be detachably connected to the main body 110 to facilitate disassembly, assembly and maintenance.
  • the driving wheel may have a biased drop suspension system, fastened in a movable manner, such as rotatably attached, to the robot body 110, and receives a spring bias that is biased downward and away from the robot body 110.
  • the spring bias allows the driving wheel to maintain contact and traction with the ground with a certain ground force, while the cleaning element of the robot 100 also contacts the ground 10 with a certain pressure.
  • the cleaning system may be a dry cleaning system and/or a wet cleaning system.
  • the main cleaning function comes from the cleaning system 151 composed of a roller brush, a dust box, a fan, an air outlet, and the connecting parts between the four.
  • The roller brush, which interferes with the ground to a certain extent, sweeps up debris and rolls it to the front of the dust suction port between the roller brush and the dust box; the debris is then drawn into the dust box by the suction airflow generated by the fan, which passes through the dust box.
  • the dust removal capability of the sweeper can be characterized by the dust pick-up efficiency (DPU).
  • The dust pick-up efficiency (DPU) is affected by the structure and material of the roller brush, by the wind utilization rate of the air duct formed by the suction port, dust box, fan, air outlet, and the connecting parts between them, and by the type and power of the fan; it is therefore a complex system design problem.
  • Improving dust pick-up capability is especially significant for cleaning robots with limited energy, because it directly and effectively reduces energy requirements: a machine that could clean 80 square meters of floor on a single charge can evolve into one that cleans 100 square meters or more per charge.
  • With fewer recharges, the service life of the battery is also greatly extended, so the frequency with which users must replace the battery decreases.
  • the dry cleaning system may also include a side brush 152 with a rotating shaft, which is at an angle relative to the ground for moving debris to the rolling brush area of the cleaning system.
  • the energy system includes rechargeable batteries, such as nickel-metal hydride batteries and lithium batteries.
  • the rechargeable battery can be connected with a charging control circuit, a battery pack charging temperature detection circuit, and a battery undervoltage monitoring circuit.
  • the charging control circuit, battery pack charging temperature detection circuit, and battery undervoltage monitoring circuit are then connected to the single-chip control circuit.
  • The main unit is connected to the charging pile for charging through charging electrodes arranged on the side of or below the fuselage. If dust adheres to an exposed charging electrode, the plastic body around the electrode may melt and deform due to charge accumulation during charging, and the electrode itself may even deform, so that normal charging cannot continue.
  • The human-computer interaction system 170 includes buttons on the host panel for the user to select functions; it may also include a display screen and/or indicator lights and/or a speaker, which show the user the current state of the machine or the available function options; and it may further include a mobile phone client program.
  • The mobile phone client can show the user a map of the environment where the device is located, along with the position of the machine, providing richer and more user-friendly functions.
  • FIG. 6 is a block diagram of a sweeping robot according to the present disclosure.
  • The sweeping robot may include: a microphone array unit for recognizing the user's voice, a communication unit for communicating with a remote control device or other devices, a moving unit for driving the main body, a cleaning unit, and a memory unit for storing information.
  • The input unit (buttons of the sweeping robot, etc.) and the object detection sensor can be connected to the control unit to transmit predetermined information to the control unit or receive predetermined information from it.
  • The microphone array unit compares the voice input through the receiving unit with the information stored in the memory unit to determine whether the input voice corresponds to a specific command. If so, the corresponding command is transmitted to the control unit. If the detected voice cannot be matched against the information stored in the memory unit, it can be regarded as noise and ignored.
  • For example, the detected voice corresponds to the words "come here", and the information stored in the memory unit contains a text control command ("come here") corresponding to those words.
  • the corresponding command can be transmitted to the control unit.
  • the direction detection unit may detect the direction of the voice by using the time difference or level of the voice input to the plurality of receiving units.
  • the direction detection unit transmits the direction of the detected voice to the control unit.
  • the control unit may determine the movement path by using the voice direction detected by the direction detection unit.
  • the position detection unit may detect the coordinates of the subject in predetermined map information.
  • the information detected by the camera and the map information stored in the memory unit may be compared with each other to detect the current position of the subject.
  • the location detection unit can also use the Global Positioning System (GPS).
  • the position detection unit can detect whether the subject is arranged in a specific position.
  • the position detection unit may include a unit for detecting whether the main body is arranged on the charging pile.
  • For example, in one method of detecting whether the main body is arranged on the charging pile, it may be detected whether the main body is at the charging position according to whether electric power is being input into the charging unit. As another example, whether the main body is at the charging position may be detected through a charging position detection unit arranged on the main body or on the charging pile.
  • the communication unit may transmit/receive predetermined information to/from a remote control device or other devices.
  • the communication unit can update the map information of the sweeping robot.
  • the driving unit can operate the moving unit and the cleaning unit.
  • the driving unit may move the moving unit along a moving path determined by the control unit.
  • The memory unit stores predetermined information related to the operation of the cleaning robot. For example, map information of the area where the cleaning robot operates, control command information corresponding to the voice recognized by the microphone array unit, direction angle information detected by the direction detection unit, position information detected by the position detection unit, and obstacle information detected by the object detection sensor can all be stored in the memory unit.
  • the control unit can receive information detected by the receiving unit, camera, and object detection sensor.
  • the control unit may recognize the user's voice based on the transmitted information, detect the direction in which the voice occurs, and detect the location of the cleaning robot.
  • the control unit can also operate the mobile unit and the cleaning unit.
  • the embodiments of the present disclosure provide a method, a device, a robot, and a storage medium for constructing a robot work area map, so as to enable the robot to clearly divide the work area map and clean the designated cleaning area accurately.
  • the embodiment of the present disclosure provides a method for constructing a robot work area map, and the method includes the following method steps:
  • Step S102 Scan the obstacles in the travel path in real time, and record the position parameters of the obstacles.
  • the sweeping robot receives the user's voice control instructions to start the cleaning task. At this time, the sweeping robot does not have a detailed map of the work area, but only performs basic cleaning tasks. The sweeping robot obtains information about obstacles in the driving path while cleaning.
  • the real-time scanning may use at least one laser radar located on the robot device to perform scanning.
  • The specific hardware structure is described above and will not be repeated here. As shown in FIG. 8, the specific method steps are as follows:
  • Step S1022 Use lidar to scan obstacles in the driving path in real time, and determine whether the scanning position is the edge position of the obstacle;
  • While the sweeping robot performs its cleaning task, the lidar scans obstacles in the driving path, such as walls and beds, in real time. Edge position information of an obstacle is obtained when the scan changes, for example when the previous continuous scan was of a wall and a door frame position is suddenly scanned.
  • The position is provisionally recognized as a target position, and the robot can perform repeated scans on the spot or nearby to confirm whether it is the target (door frame) position: if the width and height obtained at the scanned position are close to the door frame parameters (width, height) stored in the robot's storage device, the position is judged to be the target position.
  • For example, a typical door frame is 50-100 cm wide and 200-240 cm high; if the corresponding scanning parameters fall into these ranges, the position is judged to be the target position, as sketched below.
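  • A minimal sketch of this range check, assuming the 50-100 cm width and 200-240 cm height example values above (the function and constant names are illustrative, not from the patent):

        # Stored door-frame parameter ranges (example values from the text above).
        DOOR_WIDTH_RANGE_CM = (50.0, 100.0)
        DOOR_HEIGHT_RANGE_CM = (200.0, 240.0)

        def is_door_frame(scan_width_cm: float, scan_height_cm: float) -> bool:
            # A repeatedly scanned edge position is judged to be the target
            # (door frame) position when both its width and its height fall
            # into the stored parameter ranges.
            w_lo, w_hi = DOOR_WIDTH_RANGE_CM
            h_lo, h_hi = DOOR_HEIGHT_RANGE_CM
            return w_lo <= scan_width_cm <= w_hi and h_lo <= scan_height_cm <= h_hi

        print(is_door_frame(82, 201))   # True: plausible door frame
        print(is_door_frame(200, 152))  # False: rejected as a non-door edge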
  • Step S1024 when it is determined that it is the edge position of the obstacle, repeatedly scanning the edge position multiple times;
  • Step S1026 Record the coordinate parameters of the edge position of the obstacle in each scan.
  • The scanning process described above is also a process of data recording.
  • The data recorded at this time are the coordinate parameters of the scanned position (door frame).
  • For example, taking the position of the charging pile as the origin of the coordinate system, the coordinates of the left door frame are constructed; the two-dimensional coordinates are a1(90,150), a2(91,151), a3(89,150), a4(92,152), and so on.
  • The pairs [a1,b1], [a2,b2], [a3,b3], [a4,b4] of left and right door frame coordinates constitute an array of door frame widths; calculation gives widths of 80 cm, 82 cm, 82 cm, and 200 cm.
  • Similarly, coordinate data for the door frame height can be obtained, such as c1(0,200), c2(0,201), c3(0,203), c4(0,152), giving height data of 200 cm, 201 cm, 203 cm, 152 cm, and so on.
  • The above data are stored in the storage device of the cleaning robot, and the cycle of cleaning, scanning, recording, and storing is repeated until cleaning is completed.
  • Optionally, the following calculation method is included.
  • The calculation may be performed after the sweeping robot has returned to the charging pile and is being charged; the robot is then idle, which is conducive to processing and analyzing the large amount of recorded data. This is one preferred approach, specifically as follows:
  • Coordinate parameters that agree to within a certain neighborhood are selected from the multiple recorded sets of coordinate parameters.
  • For example, the aforementioned coordinate parameters a1(90,150), a2(91,151), a3(89,150), a4(92,152) are all adjacent coordinate parameters; coordinates whose values agree to within plus or minus 5 can be determined to be adjacent coordinate parameters.
  • Among b1(170,150), b2(173,151), b3(171,150), b4(292,152), the parameter b4(292,152) can be regarded as exceeding the neighborhood and is excluded from the normal parameter range.
  • Similarly, c4(0,152) is also a non-adjacent parameter.
  • The aggregation can be performed in a variety of ways; one way, for example, is to average the coordinate parameters belonging to the same position, such as a1(90,150), a2(91,151), a3(89,150), a4(92,152) above.
  • The averaged result a(90.5, 150.75) is stored as the first array for subsequent retrieval; a sketch of this aggregation follows.
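  • A minimal sketch of this neighbor filtering and averaging, assuming the plus-or-minus-5 neighborhood and the sample coordinates above (the function name and anchoring choice are assumptions):

        from statistics import mean

        def aggregate_edge_points(points, tol=5.0):
            # Keep points that lie within `tol` of the first recorded scan on
            # both axes, then average the survivors into one representative
            # coordinate (an entry of the "first array").
            ref = points[0]
            kept = [p for p in points
                    if abs(p[0] - ref[0]) <= tol and abs(p[1] - ref[1]) <= tol]
            return (mean(p[0] for p in kept), mean(p[1] for p in kept))

        left = [(90, 150), (91, 151), (89, 150), (92, 152)]       # a1..a4
        right = [(170, 150), (173, 151), (171, 150), (292, 152)]  # b1..b4; b4 is an outlier

        a = aggregate_edge_points(left)   # -> (90.5, 150.75)
        b = aggregate_edge_points(right)  # b4 rejected as non-adjacent
        print(a, b, b[0] - a[0])          # b[0] - a[0] approximates the door width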
  • Step S104 Acquire image information of the obstacle in the driving path in real time.
  • the sweeping robot receives the user's voice control instructions to start the cleaning task. At this time, the sweeping robot does not have a detailed map of the work area, but only performs basic cleaning tasks. The sweeping robot obtains image information of obstacles in the driving path while cleaning.
  • At least one camera device located on the robot device can be used for scanning.
  • The specific hardware structure is described above and will not be repeated here. As shown in FIG. 9, the specific method steps are as follows:
  • Step S1042 Determine whether it is the edge of the obstacle.
  • the judgment result of the scanning radar can be called, or it can be determined by itself based on the camera image.
  • For example, when the judgment result of the scanning radar is invoked, in combination with the door frame position determined in step S1024, the camera captures images at the same time as the radar performs its repeated scans.
  • Step S1044 when it is determined to be the edge of the obstacle, acquire multiple pieces of image information of the edge in real time from different positions and/or different angles through the camera.
  • The calculation may be performed after the sweeping robot returns to the pile for charging; the robot is then idle, which is beneficial for processing and analyzing the large amount of data.
  • A characteristic line is a border line of the door frame, including any characteristic line determined from differences in gray values; the characteristic lines are obtained from images taken at different angles and positions.
  • The characteristic lines obtained above are classified essentially by clustering: lines located near the same position are grouped together, while more scattered lines are assigned to the next group.
  • When the number of characteristic lines in the same group exceeds a certain threshold, for example 10, the position is regarded as a marked position; that is, the position is confirmed to be the door frame position.
  • Otherwise the characteristic lines are not inferred to indicate a door; for example, groups with fewer than 10 lines are not regarded as door frame positions. A sketch of this grouping step follows below.
  • the coordinate parameter of the position is recorded.
  • the parameter can be a two-dimensional or three-dimensional coordinate parameter.
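  • A hedged sketch of the clustering-and-threshold step above: the 10-line threshold is the example value from the text, while the bucket size and all names are assumptions for illustration:

        from collections import defaultdict

        def door_positions(feature_lines, cell=5.0, threshold=10):
            # Each characteristic line is summarized here by one (x, y) point.
            # Lines whose x-coordinates fall into the same `cell`-sized bucket
            # are clustered into one group; a group with more than `threshold`
            # members is confirmed as a door frame position.
            groups = defaultdict(list)
            for x, y in feature_lines:
                groups[round(x / cell)].append((x, y))
            confirmed = []
            for members in groups.values():
                if len(members) > threshold:
                    cx = sum(m[0] for m in members) / len(members)
                    cy = sum(m[1] for m in members) / len(members)
                    confirmed.append((cx, cy))  # e.g. the "second array" entry A(90, 150)
            return confirmed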
  • Step S106 Determine, according to the position parameter and the image information, that the obstacle serves as reference information of the working area.
  • the determination process is preferably carried out after the sweeping robot returns to the pile for charging. At this time, the robot is in an idle state, which is beneficial to the processing and analysis of big data, as follows:
  • The door frame recognition program and the room division program are invoked through the central control program to execute the corresponding steps; after the corresponding programs and parameters are called:
  • The first array a(90.5, 150.75) and the second array A(90, 150) are as described above; the parameters in the two arrays are compared and analyzed to confirm whether both identify the same position.
  • When the corresponding parameters agree to within a set range, the edge of the obstacle is determined to be the reference position of the working area.
  • The range can be set based on experience, for example a value of 3.
  • This effective double identification makes the recognition of door frames and the like more accurate, which benefits the accurate division of areas. A sketch of this comparison follows.
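  • The double identification can be sketched as a tolerance match between the two arrays; the value 3 is the example range above, and the function name is hypothetical:

        def confirm_reference_positions(lidar_array, camera_array, tol=3.0):
            # Keep a position as working-area reference information only when
            # the lidar ("first array") and the camera ("second array") report
            # it within `tol` of each other on both axes.
            confirmed = []
            for lx, ly in lidar_array:
                for cx, cy in camera_array:
                    if abs(lx - cx) <= tol and abs(ly - cy) <= tol:
                        confirmed.append(((lx + cx) / 2, (ly + cy) / 2))
                        break
            return confirmed

        # a(90.5, 150.75) from lidar against A(90, 150) from the camera -> confirmed.
        print(confirm_reference_positions([(90.5, 150.75)], [(90, 150)]))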
  • Step S108 Divide the work area into multiple sub-areas based on the reference information.
  • this step includes:
  • the mark includes: color and/or name mark.
  • The user can also edit the room names on the above blocks, such as living room, bedroom, kitchen, bathroom, and balcony, and save them. Before the next start-up, the user can assign the robot to one of these areas for partial cleaning, as shown in FIG. 12.
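  • The patent does not spell out the division algorithm itself; as one illustrative sketch only (an assumption, using SciPy's connected-component labeling), confirmed door positions can act as virtual walls in an occupancy grid, after which each connected free-space component becomes one sub-area:

        import numpy as np
        from scipy import ndimage

        def divide_into_rooms(occupancy, door_cells):
            # occupancy: 2D bool array, True = free floor, False = obstacle.
            # door_cells: (row, col) grid cells confirmed as door positions.
            grid = occupancy.copy()
            for r, c in door_cells:
                grid[r, c] = False              # close the doorway temporarily
            # Label each connected free region: labels 1..n_rooms are the
            # sub-areas that can then be colored and/or named on the map.
            labels, n_rooms = ndimage.label(grid)
            return labels, n_rooms

        floor = np.ones((4, 7), dtype=bool)
        floor[:, 3] = False                     # a wall splitting the floor
        floor[2, 3] = True                      # with a doorway cell in it
        labels, n = divide_into_rooms(floor, door_cells=[(2, 3)])
        print(n)                                # 2 sub-areas (rooms)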
  • The robot provided by the embodiments of the present disclosure can use lidar to perform two-dimensional scanning and obtain door frame width data with high accuracy. Since door frame widths fall within a predictable range for a room, the sweeping robot marks any recognized width in that range as a candidate dividing point of the room. To further improve accuracy, the camera is used to extract and recognize features of the door, and the two sets of data are cross-verified to obtain the position parameters of the door accurately; the rooms are then divided by the positions of the doors to form an accurate room layout map.
  • This dual insurance of radar scanning and camera imaging greatly improves the accuracy of room door recognition and avoids chaotic room segmentation caused by incorrect recognition of doors.
  • the user controls the robot to execute related control instructions through voice commands.
  • The embodiment of the present disclosure provides an apparatus for constructing a robot work area map, including a scanning unit 1002, a camera unit 1004, a confirming unit 1006, and a dividing unit 1008, which are used to perform the specific steps of the above method, as follows:
  • Scanning unit 1002 used to scan obstacles in the driving path in real time, and record position parameters of the obstacles.
  • the sweeping robot receives the user's voice control instructions to start the cleaning task. At this time, the sweeping robot does not have a detailed map of the work area, but only performs basic cleaning tasks. The sweeping robot obtains information about obstacles in the driving path while cleaning.
  • the real-time scanning may use at least one laser radar located on the robot device to perform scanning.
  • the specific hardware structure is described above, and will not be repeated here.
  • the scanning unit 1002 is further configured to perform the following steps:
  • Step S1022 Use lidar to scan obstacles in the driving path in real time, and determine whether the scanning position is the edge position of the obstacle;
  • While the sweeping robot performs its cleaning task, the lidar scans obstacles in the driving path, such as walls and beds, in real time. Edge position information of an obstacle is obtained when the scan changes, for example when the previous continuous scan was of a wall and a door frame position is suddenly scanned.
  • The position is provisionally recognized as a target position, and the robot can perform repeated scans on the spot or nearby to confirm whether it is the target (door frame) position: if the width and height obtained at the scanned position are close to the door frame parameters (width, height) stored in the robot's storage device, the position is judged to be the target position.
  • For example, a typical door frame is 50-100 cm wide and 200-240 cm high; if the corresponding scanning parameters fall into these ranges, the position is judged to be the target position.
  • Step S1024 when it is determined that it is the edge position of the obstacle, repeatedly scanning the edge position multiple times;
  • Step S1026 Record the coordinate parameters of the edge position of the obstacle in each scan.
  • The scanning process described above is also a process of data recording.
  • The data recorded at this time are the coordinate parameters of the scanned position (door frame).
  • For example, taking the position of the charging pile as the origin of the coordinate system, the coordinates of the left door frame are constructed; the two-dimensional coordinates are a1(90,150), a2(91,151), a3(89,150), a4(92,152), and so on.
  • The pairs [a1,b1], [a2,b2], [a3,b3], [a4,b4] of left and right door frame coordinates constitute an array of door frame widths; calculation gives widths of 80 cm, 82 cm, 82 cm, and 200 cm.
  • Similarly, coordinate data for the door frame height can be obtained, such as c1(0,200), c2(0,201), c3(0,203), c4(0,152), giving height data of 200 cm, 201 cm, 203 cm, 152 cm, and so on.
  • The above data are stored in the storage device of the cleaning robot, and the cycle of cleaning, scanning, recording, and storing is repeated until cleaning is completed.
  • Optionally, the following calculation method is included.
  • The calculation may be performed after the sweeping robot has returned to the charging pile and is being charged; the robot is then idle, which is conducive to processing and analyzing the large amount of recorded data. This is one preferred approach, specifically as follows:
  • Coordinate parameters that agree to within a certain neighborhood are selected from the multiple recorded sets of coordinate parameters.
  • For example, the aforementioned coordinate parameters a1(90,150), a2(91,151), a3(89,150), a4(92,152) are all adjacent coordinate parameters; coordinates whose values agree to within plus or minus 5 can be determined to be adjacent coordinate parameters.
  • Among b1(170,150), b2(173,151), b3(171,150), b4(292,152), the parameter b4(292,152) can be regarded as exceeding the neighborhood and is excluded from the normal parameter range.
  • Similarly, c4(0,152) is also a non-adjacent parameter.
  • The aggregation can be performed in a variety of ways; one way, for example, is to average the coordinate parameters belonging to the same position, such as a1(90,150), a2(91,151), a3(89,150), a4(92,152) above.
  • The averaged result a(90.5, 150.75) is stored as the first array for subsequent retrieval.
  • Camera unit 1004 used to obtain image information of the obstacle in the driving path in real time.
  • the sweeping robot receives the user's voice control instructions to start the cleaning task. At this time, the sweeping robot does not have a detailed map of the work area, but only performs basic cleaning tasks. The sweeping robot obtains image information of obstacles in the driving path while cleaning.
  • At least one camera device located on the robot device can be used for scanning.
  • the specific hardware structure is described as above, and will not be repeated here.
  • the camera unit 1004 also performs the following method steps:
  • Step S1042 Determine whether it is the edge of the obstacle.
  • the judgment result of the scanning radar can be called, or it can be determined by itself based on the camera image.
  • For example, when the judgment result of the scanning radar is invoked, in combination with the door frame position determined in step S1024, the camera captures images at the same time as the radar performs its repeated scans.
  • Step S1044 when it is determined to be the edge of the obstacle, acquire multiple pieces of image information of the edge in real time from different positions and/or different angles through the camera.
  • The calculation may be performed after the sweeping robot returns to the pile for charging; the robot is then idle, which is beneficial for processing and analyzing the large amount of data.
  • A characteristic line is a border line of the door frame, including any characteristic line determined from differences in gray values; the characteristic lines are obtained from images taken at different angles and positions.
  • The characteristic lines obtained above are classified essentially by clustering: lines located near the same position are grouped together, while more scattered lines are assigned to the next group.
  • When the number of characteristic lines in the same group exceeds a certain threshold, for example 10, the position is regarded as a marked position; that is, the position is confirmed to be the door frame position.
  • Otherwise the characteristic lines are not inferred to indicate a door; for example, groups with fewer than 10 lines are not regarded as door frame positions.
  • the coordinate parameter of the position is recorded.
  • the parameter can be a two-dimensional or three-dimensional coordinate parameter.
  • The confirmation unit 1006 is configured to determine, according to the position parameter and the image information, that the obstacle serves as reference information of the working area.
  • the determination process is preferably carried out after the sweeping robot returns to the pile for charging. At this time, the robot is in an idle state, which is beneficial to the processing and analysis of big data, as follows:
  • The door frame recognition program and the room division program are invoked through the central control program to execute the corresponding steps; after the corresponding programs and parameters are called:
  • The first array a(90.5, 150.75) and the second array A(90, 150) are as described above; the parameters in the two arrays are compared and analyzed to confirm whether both identify the same position.
  • When the corresponding parameters agree to within a set range, the edge of the obstacle is determined to be the reference position of the working area.
  • The range can be set based on experience, for example a value of 3.
  • This effective double identification makes the recognition of door frames and the like more accurate, which benefits the accurate division of areas.
  • the dividing unit 1008 is configured to divide the working area into multiple sub-areas based on the reference information.
  • Some possible implementations include:
  • the mark includes: color and/or name mark.
  • The user can also edit the room names on the above blocks, such as living room, bedroom, kitchen, bathroom, and balcony, and save them. Before the next start-up, the user can assign the robot to one of these areas for partial cleaning.
  • The robot provided by the embodiments of the present disclosure can use lidar to perform two-dimensional scanning and obtain door frame width data with high accuracy. Since door frame widths fall within a predictable range for a room, the sweeping robot marks any recognized width in that range as a candidate dividing point of the room. To further improve accuracy, the camera is used to extract and recognize features of the door, and the two sets of data are cross-verified to obtain the position parameters of the door accurately; the rooms are then divided by the positions of the doors to form an accurate room layout map.
  • This dual insurance of radar scanning and camera imaging greatly improves the accuracy of room door recognition and avoids chaotic room segmentation caused by incorrect recognition of doors.
  • the embodiment of the present disclosure provides a device for constructing a robot work area map, including a processor and a memory.
  • the memory stores computer program instructions that can be executed by the processor.
  • When the processor executes the computer program instructions, the method steps of any of the foregoing embodiments are implemented.
  • Embodiments of the present disclosure provide a robot, including the device for constructing a map of a robot working area as described in any of the above embodiments.
  • the embodiments of the present disclosure provide a non-transitory computer-readable storage medium that stores computer program instructions that, when called and executed by a processor, implement the method steps of any of the foregoing embodiments.
  • The robot 1100 may include a processing device (such as a central processing unit or a graphics processor) 1101, which can execute various appropriate actions and processing according to a program stored in a read-only memory (ROM) 1102 or a program loaded from a storage device 1108 into a random access memory (RAM) 1103.
  • In the RAM 1103, various programs and data required for the operation of the robot 1100 are also stored.
  • the processing device 1101, the ROM 1102, and the RAM 1103 are connected to each other through a bus 1104.
  • An input/output (I/O) interface 1105 is also connected to the bus 1104.
  • the following devices can be connected to the I/O interface 1105: including input devices 1106 such as touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; including, for example, liquid crystal display (LCD), speakers, vibration An output device 1107 such as a device; a storage device 1108 such as a magnetic tape, a hard disk, etc.; and a communication device 1109.
  • the communication device 1109 may allow the electronic robot 1100 to perform wireless or wired communication with other robots to exchange data.
  • FIG. 7 shows an electronic robot 1100 with various devices, it should be understood that it is not required to implement or have all the devices shown. It may alternatively be implemented or provided with more or fewer devices.
  • the process described above with reference to the flowchart can be implemented as a computer software program.
  • the embodiments of the present disclosure include a computer program product, which includes a computer program carried on a computer-readable medium, and the computer program contains program code for executing the method shown in the flowchart.
  • the computer program may be downloaded and installed from the network through the communication device 1109, or installed from the storage device 1108, or installed from the ROM 1102.
  • When the computer program is executed by the processing device 1101, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are executed.
  • the aforementioned computer-readable medium in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two.
  • The computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier wave, and a computer-readable program code is carried therein. This propagated data signal can take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • the computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium.
  • The computer-readable signal medium may send, propagate, or transmit the program for use by or in combination with the instruction execution system, apparatus, or device.
  • the program code contained on the computer-readable medium can be transmitted by any suitable medium, including but not limited to: wire, optical cable, RF (Radio Frequency), etc., or any suitable combination of the above.
  • the above-mentioned computer-readable medium may be included in the above-mentioned robot; or it may exist alone without being assembled into the robot.
  • the computer program code used to perform the operations of the present disclosure may be written in one or more programming languages or a combination thereof.
  • The above-mentioned programming languages include object-oriented programming languages such as Java, Smalltalk, and C++, and also conventional procedural programming languages such as the "C" language or similar programming languages.
  • The program code can be executed entirely on the user's computer, partly on the user's computer, as an independent software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
  • The remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through an Internet connection using an Internet service provider).
  • Each block in the flowchart or block diagram can represent a module, program segment, or part of code that contains one or more executable instructions for realizing the specified logical functions.
  • the functions marked in the block may also occur in a different order from the order marked in the drawings. For example, two blocks shown in succession can actually be executed substantially in parallel, or they can sometimes be executed in the reverse order, depending on the functions involved.
  • Each block in the block diagram and/or flowchart, and combinations of blocks in the block diagram and/or flowchart, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • the units involved in the embodiments described in the present disclosure may be implemented in a software manner, or may be implemented in a hardware manner.
  • the name of the unit does not constitute a limitation on the unit itself under certain circumstances.
  • The first obtaining unit can also be described as "a unit for obtaining at least two Internet Protocol addresses".
  • the device embodiments described above are merely illustrative.
  • The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solutions of the embodiments. Those of ordinary skill in the art can understand and implement this without creative work.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Electromagnetism (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Theoretical Computer Science (AREA)
  • Optics & Photonics (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Manipulator (AREA)

Abstract

A method for constructing a map of a working region of a robot (100), an apparatus, a robot (100), and a medium. The method for constructing a map of the working region of the robot (100) includes: scanning obstacles in the travel path in real time and recording position parameters of the obstacles (S102); acquiring image information of the obstacles in the travel path in real time (S104); determining reference information of the obstacles with respect to the working region according to the position parameters and the image information (S106); and dividing the working region into multiple sub-regions based on the reference information (S108). By the dual insurance of radar scanning and camera shooting, the method greatly improves the accuracy of room door recognition and avoids the chaotic room segmentation caused by incorrectly recognized doors.

Description

Method and apparatus for constructing a robot working area map, robot, and medium
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to Chinese Patent Application No. 201910261018.X filed on April 2, 2019, the entire disclosure of which is incorporated herein by reference as part of this application.
TECHNICAL FIELD
The present disclosure relates to the field of control technologies, and in particular to a method and apparatus for constructing a robot working area map, a robot, and a medium.
BACKGROUND
With the development of technology, various intelligent robots have appeared, such as sweeping robots, mopping robots, vacuum cleaners, and lawn mowers. These robots can receive voice instructions input by users through a speech recognition system and perform the operations indicated by the voice instructions, which not only frees labor but also saves labor costs.
A typical sweeping robot uses inertial navigation, lidar, or a camera for map planning and navigation. When a user sweeps the floor with the robot, the user sees on a mobile device, in real time, how the cleaned region is divided. However, such division of the cleaning region is not a room-by-room division; the cleaning region is divided arbitrarily into multiple zones based only on the coordinate information of the cleaning region.
As user demands gradually increase, the above conventional display mode can no longer satisfy them. In certain situations, for example, the living room is a frequently used area, and when the user wants the robot to go clean the living room, the above natural division confuses the user: because the map is formed in real time, it is difficult for the user to send the robot directly to the region to be cleaned at the next cleaning.
Typical map construction cannot divide rooms effectively, so a user cannot accurately designate a specific room to clean. For example, the living room is a frequently used area with more dust, yet the user cannot clearly and accurately instruct the sweeping robot to go to the living room and clean its entire area. Current technology can only get the robot to the living room region, but cannot guarantee that, once there, it cleans the whole living room.
Therefore, a way is needed for the robot to divide rooms into regions, so that the user can accurately designate the robot to clean a precise region.
It should be noted that the information disclosed in the above background section is only used to enhance understanding of the background of the present disclosure, and therefore may include information that does not constitute prior art known to those of ordinary skill in the art.
SUMMARY
In view of this, embodiments of the present disclosure provide a method and apparatus for constructing a robot working area map, a robot, and a storage medium.
In a first aspect, an embodiment of the present disclosure provides a method for constructing a robot working area map, the method including:
scanning obstacles in a travel path in real time, and recording position parameters of the obstacles;
acquiring image information of the obstacles in the travel path in real time;
determining reference information of the obstacles with respect to the working area according to the position parameters and the image information;
dividing the working area into multiple sub-areas based on the reference information.
In some possible implementations, scanning obstacles in the travel path in real time and recording the position parameters of the obstacles includes:
scanning obstacles in the travel path in real time with a lidar, and determining whether a scanned position is an edge position of the obstacle;
when the edge position of the obstacle is determined, repeatedly scanning the edge position multiple times;
recording the coordinate parameters of the obstacle edge position for each scan.
In some possible implementations, after recording the coordinate parameters of the obstacle edge position for each scan, the method includes:
selecting, from multiple sets of the coordinate parameters, the coordinate parameters that satisfy a certain proximity value;
aggregating the selected coordinate parameters;
storing the aggregated coordinate parameters in a first array.
In some possible implementations, acquiring the image information of the obstacles in the travel path in real time includes:
determining whether it is an edge of the obstacle;
when the edge of the obstacle is determined, acquiring multiple pieces of image information of the edge in real time from different positions and/or different angles through a camera device.
In some possible implementations, after acquiring multiple pieces of image information of the edge in real time from different positions and/or different angles through a camera device when the edge of the obstacle is determined, the method includes:
extracting feature lines from the generated image information;
grouping feature lines having similar angles and similar positions into the same group;
when the feature lines in the same group exceed a certain threshold, identifying the position as a marked position;
recording the position coordinates of the marked position and storing them in a second array.
In some possible implementations, determining the reference information of the obstacle with respect to the working area according to the position parameters and the image information includes:
comparing the first array and the second array;
when the first array and the second array are close to within a certain range, determining the obstacle edge as a reference position of the working area.
In some possible implementations, dividing the working area into multiple sub-areas based on the reference information includes:
taking the reference information as the entrance of each sub-area, and dividing the working area into multiple sub-areas;
marking the multiple sub-areas.
In some possible implementations, the mark includes: a color and/or name mark.
In a second aspect, an embodiment of the present disclosure provides an apparatus for constructing a robot working area map, the apparatus including:
a scanning unit configured to scan obstacles in a travel path in real time and record position parameters of the obstacles;
a camera unit configured to acquire image information of the obstacles in the travel path in real time;
a determining unit configured to determine reference information of the obstacles with respect to the working area according to the position parameters and the image information;
a dividing unit configured to divide the working area into multiple sub-areas based on the reference information.
In some possible implementations, the scanning unit is further configured to:
scan obstacles in the travel path in real time with a lidar, and determine whether a scanned position is an edge position of the obstacle;
when the edge position of the obstacle is determined, repeatedly scan the edge position multiple times;
record the coordinate parameters of the obstacle edge position for each scan.
In some possible implementations, the scanning unit is further configured to:
select, from multiple sets of the coordinate parameters, the coordinate parameters that satisfy a certain proximity value;
aggregate the selected coordinate parameters;
store the aggregated coordinate parameters in a first array.
In some possible implementations, the camera unit is further configured to:
determine whether it is an edge of the obstacle;
when the edge of the obstacle is determined, acquire multiple pieces of image information of the edge in real time from different positions and/or different angles through a camera device.
In some possible implementations, the camera unit is further configured to:
extract feature lines from the generated image information;
group feature lines having similar angles and similar positions into the same group;
when the feature lines in the same group exceed a certain threshold, identify the position as a marked position;
record the position coordinates of the marked position and store them in a second array.
In some possible implementations, the determining unit is further configured to:
compare the first array and the second array;
when the first array and the second array are close to within a certain range, determine the obstacle edge as a reference position of the working area.
In some possible implementations, the dividing unit is further configured to:
take the reference information as the entrance of each sub-area, and divide the working area into multiple sub-areas;
mark the multiple sub-areas.
In some possible implementations, the mark includes: a color and/or name mark.
In a third aspect, an embodiment of the present disclosure provides an apparatus for constructing a robot working area map, including a processor and a memory, where the memory stores computer program instructions executable by the processor, and when the processor executes the computer program instructions, the method steps of any of the above are implemented.
In a fourth aspect, an embodiment of the present disclosure provides a robot including the apparatus described in any of the above.
In a fifth aspect, an embodiment of the present disclosure provides a non-transitory computer-readable storage medium storing computer program instructions which, when invoked and executed by a processor, implement the method steps of any of the above.
BRIEF DESCRIPTION OF THE DRAWINGS
To explain the technical solutions in the embodiments of the present disclosure or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are some embodiments of the present disclosure, and for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a schematic diagram of an application scenario provided by an embodiment of the present disclosure;
FIG. 2 is a top view of a robot structure provided by an embodiment of the present disclosure;
FIG. 3 is a bottom view of a robot structure provided by an embodiment of the present disclosure;
FIG. 4 is a front view of a robot structure provided by an embodiment of the present disclosure;
FIG. 5 is a perspective view of a robot structure provided by an embodiment of the present disclosure;
FIG. 6 is a structural block diagram of a robot provided by an embodiment of the present disclosure;
FIG. 7 is a schematic flowchart of a robot map construction method provided by an embodiment of the present disclosure;
FIG. 8 is a schematic sub-flowchart of a robot map construction method provided by an embodiment of the present disclosure;
FIG. 9 is a schematic sub-flowchart of a robot map construction method provided by an embodiment of the present disclosure;
FIG. 10 is a schematic structural diagram of a robot map construction apparatus provided by another embodiment of the present disclosure;
FIG. 11 is a schematic diagram of the electronic structure of a robot provided by an embodiment of the present disclosure;
FIG. 12 is a schematic diagram of a robot map construction result provided by an embodiment of the present disclosure.
DETAILED DESCRIPTION
To make the objectives, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments of the present disclosure are described clearly and completely below with reference to the drawings in the embodiments of the present disclosure. Obviously, the described embodiments are some of the embodiments of the present disclosure, not all of them. Based on the embodiments of the present disclosure, all other embodiments obtained by those of ordinary skill in the art without creative work fall within the protection scope of the present disclosure.
It should be understood that although the terms first, second, third, etc. may be used in the embodiments of the present disclosure to describe ..., these ... should not be limited to these terms. These terms are only used to distinguish ... from each other. For example, without departing from the scope of the embodiments of the present disclosure, a first ... may also be called a second ..., and similarly, a second ... may also be called a first ....
To describe the behavior of the robot more clearly, the following directions are defined:
As shown in FIG. 5, the robot 100 can travel on the ground through various combinations of movement relative to the following three mutually perpendicular axes defined by the main body 110: a front-rear axis X, a lateral axis Y, and a central vertical axis Z. The forward driving direction along the front-rear axis X is denoted "forward", and the rearward driving direction along the front-rear axis X is denoted "backward". The lateral axis Y extends substantially between the right and left wheels of the robot along an axis defined by the center points of the drive wheel modules 141.
The robot 100 can rotate about the Y axis. When the forward portion of the robot 100 is tilted upward and the rearward portion is tilted downward, it is "pitched up"; when the forward portion of the robot 100 is tilted downward and the rearward portion is tilted upward, it is "pitched down". In addition, the robot 100 can rotate about the Z axis. In the forward direction of the robot, when the robot 100 tilts to the right of the X axis it is "turning right", and when it tilts to the left of the X axis it is "turning left".
Referring to FIG. 1, a possible application scenario provided by an embodiment of the present disclosure includes a robot, such as a sweeping robot, mopping robot, vacuum cleaner, or lawn mower. In some embodiments, the robot may be a sweeping robot or a mopping robot. In implementation, the robot may be provided with a speech recognition system to receive the user's voice instructions and rotate along the arrow direction in response to them. After responding to the instruction, the robot can clean along the direction indicated by the arrow, and scan and photograph the cleaning region to acquire the map information of the room. The robot may also be provided with a voice output device to output prompt speech. In other embodiments, the robot may be provided with a touch-sensitive display to receive operation instructions input by the user. The robot may also be provided with wireless communication modules such as a WIFI module or a Bluetooth module, so as to connect with a smart terminal and receive, through the wireless communication module, operation instructions transmitted by the user using the smart terminal.
The structure of the related robot is described below, as shown in FIGS. 2-5.
The robot 100 includes a machine body 110, a sensing system 120, a control system, a drive system 140, a cleaning system, an energy system, and a human-machine interaction system 170.
The machine body 110 includes a forward portion 111 and a rearward portion 112, and has an approximately circular shape (circular both front and rear); it may also have other shapes, including but not limited to an approximately D shape with a flat front and round rear.
As shown in FIGS. 2 and 4, the sensing system 120 includes a position determining device 121 located above the machine body 110, a bumper 122 located on the forward portion 111 of the machine body 110, cliff sensors 123, and sensing devices such as ultrasonic sensors, infrared sensors, a magnetometer, an accelerometer, a gyroscope, and an odometer, and provides the control system 130 with various position information and motion state information of the machine. The position determining device 121 includes but is not limited to at least one camera and a laser distance sensor (LDS). The following takes an LDS based on the triangulation ranging method as an example to explain how position determination is performed. The basic principle of triangulation is based on the proportionality of similar triangles and is not repeated here.
The laser distance sensor includes a light-emitting unit and a light-receiving unit. The light-emitting unit may include a light source that emits light, and the light source may include a light-emitting element, for example an infrared or visible-light light-emitting diode (LED) that emits infrared or visible light. Preferably, the light source may be a light-emitting element that emits a laser beam. In this embodiment, a laser diode (LD) is taken as an example of the light source. Specifically, owing to the monochromatic, directional, and collimated characteristics of a laser beam, a light source using a laser beam can make the measurement more accurate compared with other light. For example, compared with a laser beam, the infrared or visible light emitted by an LED is affected by surrounding environmental factors (such as the color or texture of objects) and may therefore be less accurate in measurement. The laser diode may be a point laser, which measures two-dimensional position information of obstacles, or a line laser, which measures three-dimensional position information of obstacles within a certain range.
The light-receiving unit may include an image sensor on which light spots reflected or scattered by an obstacle are formed. The image sensor may be a set of multiple unit pixels in a single row or multiple rows. These light-receiving elements can convert optical signals into electrical signals. The image sensor may be a complementary metal-oxide-semiconductor (CMOS) sensor or a charge-coupled device (CCD) sensor, with a CMOS sensor preferred because of its cost advantage. Moreover, the light-receiving unit may include a light-receiving lens assembly. The light reflected or scattered by the obstacle may travel through the lens assembly to form an image on the image sensor. The lens assembly may include a single lens or multiple lenses.
A base may support the light-emitting unit and the light-receiving unit, which are arranged on the base and spaced apart from each other by a specific distance. To measure obstacles in the 360-degree direction around the robot, the base may be rotatably arranged on the main body 110, or the base itself may not rotate while the emitted and received light is rotated by a rotating element. The rotational angular velocity of the rotating element can be obtained by providing an optocoupler element and a code disc: the optocoupler element senses the tooth gaps on the code disc, and the instantaneous angular velocity is obtained by dividing the tooth-gap spacing distance by the time taken to pass over that spacing. The denser the tooth gaps on the code disc, the higher the measurement accuracy and precision, but the structure is more delicate and the computation load higher; conversely, the sparser the tooth gaps, the lower the measurement accuracy and precision, but the structure can be relatively simple, the computation load is smaller, and some cost can be reduced.
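As a concrete illustration of this computation: if the code disc carries a known number of evenly spaced tooth gaps, the instantaneous angular velocity is the angular spacing of adjacent gaps divided by the measured transit time. The following is a minimal Python sketch of that relation; the function name, the evenly spaced gap assumption, and the 360-gap disc are illustrative assumptions, not values given in the disclosure:

```python
import math

def instantaneous_angular_velocity(gap_count: int, pass_time_s: float) -> float:
    """Angular velocity in rad/s from one tooth-gap transit.

    gap_count   -- number of evenly spaced tooth gaps on the code disc
    pass_time_s -- time the optocoupler measures between adjacent gaps
    """
    gap_angle = 2 * math.pi / gap_count   # angular spacing of adjacent gaps
    return gap_angle / pass_time_s

# A denser disc (larger gap_count) resolves velocity more finely,
# at the cost of a more delicate structure and more computation.
print(instantaneous_angular_velocity(gap_count=360, pass_time_s=0.002))
```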
A data processing device connected to the light-receiving unit, such as a DSP, records the obstacle distance values at all angles relative to the robot's 0-degree direction and transmits them to a data processing unit in the control system 130, such as an application processor (AP) containing a CPU. The CPU runs a particle-filter-based localization algorithm to obtain the current position of the robot and draws a map based on this position for navigation. The localization algorithm preferably uses simultaneous localization and mapping (SLAM).
Although in principle a laser distance sensor based on triangulation can measure distance values at arbitrarily far distances beyond a certain point, in practice long-distance measurement, for example beyond 6 meters, is very difficult, mainly because of the size limit of the pixel units on the sensor of the light-receiving unit, and also because of the photoelectric conversion speed of the sensor, the data transmission speed between the sensor and the connected DSP, and the computation speed of the DSP. Measured values obtained by the laser distance sensor under the influence of temperature also undergo changes the system cannot tolerate, mainly because thermal-expansion deformation of the structure between the light-emitting unit and the light-receiving unit changes the angle between the incident and emitted light, and the light-emitting and light-receiving units themselves also have temperature drift problems. After long-term use of the laser distance sensor, deformation accumulated from temperature changes, vibration, and other factors also seriously affects the measurement results. The accuracy of the measurement results directly determines the accuracy of the drawn map and is the basis for the robot's further strategy execution, and is thus particularly important.
As shown in FIGS. 2 and 3, the forward portion 111 of the machine body 110 may carry the bumper 122. During cleaning, when the drive wheel modules 141 propel the robot to walk on the ground, the bumper 122 detects, via a sensor system such as infrared sensors, one or more events in the travel path of the robot 100. The robot can respond to events detected by the bumper 122, such as obstacles and walls, by controlling the drive wheel modules 141, for example to move away from the obstacle.
The control system 130 is arranged on a circuit board inside the machine body 110 and includes a computing processor, such as a central processing unit or application processor, that communicates with a non-transitory memory such as a hard disk, flash memory, or random access memory. The application processor uses a localization algorithm, such as SLAM, to draw a real-time map of the environment where the robot is located according to the obstacle information fed back by the laser distance sensor. Combined with distance and speed information fed back by the bumper 122, cliff sensors 123, ultrasonic sensors, infrared sensors, magnetometer, accelerometer, gyroscope, odometer, and other sensing devices, it comprehensively judges the current working state of the sweeper, such as crossing a threshold, getting onto a carpet, located at a cliff, stuck above or below, dust box full, or picked up, and also gives specific next-step action strategies for different situations, so that the robot's work better meets the owner's requirements and provides a better user experience. Further, the control system 130 can plan the most efficient and reasonable cleaning path and cleaning mode based on the real-time map information drawn by SLAM, greatly improving the cleaning efficiency of the robot.
The drive system 140 can maneuver the robot 100 to travel across the ground based on drive commands with distance and angle information, such as x, y, and θ components. The drive system 140 includes drive wheel modules 141, which can control the left and right wheels simultaneously; to control the machine's motion more precisely, the drive wheel modules 141 preferably include a left drive wheel module and a right drive wheel module. The left and right drive wheel modules are opposed along a lateral axis defined by the main body 110. To let the robot move more stably on the ground or with stronger motion capability, the robot may include one or more driven wheels 142, including but not limited to universal wheels. A drive wheel module includes a traveling wheel, a drive motor, and a control circuit controlling the drive motor; the drive wheel module may also be connected to a circuit measuring the drive current and to an odometer. The drive wheel modules 141 may be detachably connected to the main body 110 for easy disassembly and maintenance. The drive wheel may have a biased drop-type suspension system, movably fastened, for example rotatably attached, to the robot main body 110, and receiving a spring bias directed downward and away from the robot main body 110. The spring bias allows the drive wheel to maintain contact and traction with the ground with a certain landing force, while the cleaning elements of the robot 100 also contact the ground 10 with a certain pressure.
The cleaning system may be a dry cleaning system and/or a wet cleaning system. As a dry cleaning system, the main cleaning function comes from the sweeping system 151 formed by the rolling brush, dust box, fan, air outlet, and the connecting parts between the four. The rolling brush, which has a certain interference with the ground, sweeps up garbage on the ground and rolls it to the front of the suction opening between the rolling brush and the dust box, where it is sucked into the dust box by the suction airflow generated by the fan and passing through the dust box. The dust removal capability of a sweeper can be characterized by the dust pick-up efficiency (DPU) of garbage. The DPU is affected by the rolling brush structure and material, by the airflow utilization of the air duct formed by the suction opening, dust box, fan, air outlet, and the connecting parts between the four, and by the type and power of the fan; it is a complex system design problem. Compared with an ordinary plug-in vacuum cleaner, improving dust removal capability means more for a cleaning robot with limited energy, because it directly and effectively reduces the energy requirement: a machine that could clean 80 square meters of ground on one charge can evolve into one that cleans 100 square meters or more on one charge. The service life of a battery charged less often also increases greatly, so the frequency with which the user must replace the battery decreases. More intuitively and importantly, dust removal capability is the most obvious and important element of user experience: the user directly concludes whether the sweeping or mopping is clean. The dry cleaning system may also include a side brush 152 having a rotating shaft at a certain angle relative to the ground, for moving debris into the rolling brush region of the cleaning system.
The energy system includes a rechargeable battery, such as a nickel-metal-hydride or lithium battery. The rechargeable battery can be connected to a charging control circuit, a battery pack charging temperature detection circuit, and a battery under-voltage monitoring circuit, which are in turn connected to a single-chip microcomputer control circuit. The host charges by connecting to the charging pile through charging electrodes arranged on the side or bottom of the machine body. If dust adheres to the exposed charging electrodes, the cumulative effect of charge during charging will cause the plastic body around the electrodes to melt and deform, and may even deform the electrodes themselves so that normal charging can no longer continue.
The human-machine interaction system 170 includes buttons on the host panel for the user to select functions; it may also include a display screen and/or indicator lights and/or a speaker, which show the user the current machine state or function selection items; and it may also include a mobile phone client program. For a path-navigation-type cleaning device, the mobile phone client can show the user a map of the environment where the device is located, as well as the position of the machine, providing the user with richer and more user-friendly function items.
FIG. 6 is a block diagram of a sweeping robot according to the present disclosure.
The sweeping robot according to the current embodiment may include: a microphone array unit for recognizing the user's speech, a communication unit for communicating with a remote control device or other devices, a moving unit for driving the main body, a cleaning unit, and a memory unit for storing information. The input unit (buttons of the sweeping robot, etc.), object detection sensor, charging unit, microphone array unit, direction detection unit, position detection unit, communication unit, drive unit, and memory unit can be connected to the control unit, so as to transmit predetermined information to or receive predetermined information from the control unit.
The microphone array unit may compare the speech input through the receiving unit with the information stored in the memory unit to determine whether the input speech corresponds to a specific command. If it is determined that the input speech corresponds to a specific command, the corresponding command is transmitted to the control unit. If the detected speech cannot be compared with the information stored in the memory unit, the detected speech may be regarded as noise and ignored.
For example, the detected speech corresponds to the words "come over, come here, to here, over here", and a text control command (come here) corresponding to those words exists in the information stored in the memory unit. In this case, the corresponding command may be transmitted to the control unit.
The direction detection unit may detect the direction of the speech by using the time difference or level of the speech input to the multiple receiving units. The direction detection unit transmits the detected direction of the speech to the control unit. The control unit may determine the movement path by using the speech direction detected by the direction detection unit.
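As an illustration of the time-difference approach, the standard far-field relation for a pair of microphones spaced a distance d apart gives the arrival angle as θ = arcsin(c·Δt/d), where c is the speed of sound and Δt the arrival-time difference. The sketch below applies that textbook relation; it is an assumption for illustration, not the implementation in this disclosure:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s at room temperature

def direction_from_tdoa(delta_t: float, mic_spacing: float) -> float:
    """Estimate the sound direction (degrees off the mic-pair normal)
    from the time difference of arrival between two microphones."""
    s = SPEED_OF_SOUND * delta_t / mic_spacing
    s = max(-1.0, min(1.0, s))            # clamp against measurement noise
    return math.degrees(math.asin(s))

# A 0.1 ms difference across a 10 cm pair is roughly 20 degrees off-axis.
print(direction_from_tdoa(delta_t=1e-4, mic_spacing=0.10))
```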
The position detection unit may detect the coordinates of the main body within predetermined map information. In one embodiment, the information detected by the camera and the map information stored in the memory unit may be compared with each other to detect the current position of the main body. In addition to the camera, the position detection unit may also use a global positioning system (GPS).
In a broad sense, the position detection unit may detect whether the main body is arranged at a specific position. For example, the position detection unit may include a unit for detecting whether the main body is arranged on the charging pile.
For example, in a method for detecting whether the main body is arranged on the charging pile, whether the main body is arranged at the charging position may be detected according to whether electric power is input into the charging unit. For another example, whether the main body is arranged at the charging position may be detected by a charging position detection unit arranged on the main body or the charging pile.
The communication unit may transmit predetermined information to, and receive it from, a remote control device or other devices. The communication unit may update the map information of the sweeping robot.
The drive unit may operate the moving unit and the cleaning unit. The drive unit may move the moving unit along the movement path determined by the control unit.
Predetermined information related to the operation of the sweeping robot is stored in the memory unit. For example, map information of the region where the sweeping robot is arranged, control command information corresponding to the speech recognized by the microphone array unit, direction angle information detected by the direction detection unit, position information detected by the position detection unit, and obstacle information detected by the object detection sensor may be stored in the memory unit.
The control unit may receive the information detected by the receiving unit, the camera, and the object detection sensor. Based on the transmitted information, the control unit may recognize the user's speech, detect the direction in which the speech occurs, and detect the position of the sweeping robot. Furthermore, the control unit may also operate the moving unit and the cleaning unit.
Embodiments of the present disclosure provide a method and apparatus for constructing a robot working area map, a robot, and a storage medium, so that the robot can clearly divide the working area map and accurately go to a designated cleaning region to clean.
As shown in FIG. 7, applied to the robot in the application scenario of FIG. 1, the user controls the robot through voice instructions to execute related control instructions. An embodiment of the present disclosure provides a method for constructing a robot working area map, the method including the following method steps:
Step S102: scanning obstacles in the travel path in real time, and recording position parameters of the obstacles.
The sweeping robot receives the user's voice control instruction and begins to perform the cleaning task. At this time, because the sweeping robot has no detailed map of the working area, it only performs a basic cleaning task, and it acquires obstacle information in the travel path while cleaning.
The real-time scanning can be performed with at least one lidar located on the robot device; the specific hardware structure is described above and is not repeated here. As shown in FIG. 8, the specific method steps are as follows:
Step S1022: scanning obstacles in the travel path in real time with a lidar, and determining whether the scanned position is an edge position of the obstacle;
The sweeping robot performs the cleaning task while the lidar scans obstacles in the travel path in real time, such as walls and beds. When continued scanning obtains the edge position information of an obstacle, for example the previous continuous scans were of an obstacle wall and a door frame position is suddenly scanned, this position is identified as a target position. The robot can perform repeated scanning in place or nearby to confirm whether the position is the target (door frame) position. When the width and height acquired at the scanned position are judged to be close to the door frame parameters (width, height) stored in the robot's storage device, the position is judged to be the target position. A typical door frame is 50-100 cm wide and 200-240 cm high, so if the corresponding scanning parameters fall into this range, the position is judged to be a target position.
Step S1024: when the edge position of the obstacle is determined, repeatedly scanning the edge position multiple times;
After the above steps confirm that the position is a door frame position, the width and height of the door frame are repeatedly scanned from the original position or a slightly moved position, that is, repeatedly scanned from different positions and different angles to obtain multiple sets of scan data. When scanning, multiple door frame width scans may be performed first to obtain width data, and then multiple height scans to obtain door frame height data.
Step S1026: recording the coordinate parameters of the obstacle edge position for each scan.
The above scanning process is also a data recording process. The data recorded at this time are the coordinate parameters of the scanned position (door frame). For example, with the charging pile position as the coordinate origin, the left door frame coordinates are constructed; for example the two-dimensional coordinates are a1(90,150), a2(91,151), a3(89,150), a4(92,152), etc. Similarly, the right side of the door frame is scanned to obtain the right door frame coordinates, for example b1(170,150), b2(173,151), b3(171,150), b4(292,152), etc. The pairs [a1,b1], [a2,b2], [a3,b3], [a4,b4] then form an array of door frame widths, and calculation shows the door frame widths are 80 cm, 82 cm, 82 cm, and 200 cm. Similarly, coordinate data of the door frame height can be obtained, for example c1(0,200), c2(0,201), c3(0,203), c4(0,152), giving door frame heights of 200 cm, 201 cm, 203 cm, and 152 cm. The above data are stored in the sweeping robot's storage device. The repeated actions of cleaning, scanning, recording, and storing continue until cleaning is completed.
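To make the width calculation concrete, the following Python sketch reproduces the numbers above: each width is the distance between a paired left and right edge coordinate, and widths outside the typical 50-100 cm door range can be discarded as spurious. The variable names are illustrative; only the coordinates and the 50-100 cm range come from the text:

```python
import math

left = [(90, 150), (91, 151), (89, 150), (92, 152)]       # a1..a4
right = [(170, 150), (173, 151), (171, 150), (292, 152)]  # b1..b4

# Width of each scan pair [a_i, b_i] as the Euclidean distance.
widths = [math.dist(a, b) for a, b in zip(left, right)]
print([round(w) for w in widths])          # [80, 82, 82, 200]

# Keep only widths plausible for a door frame (50-100 cm); the 200 cm
# outlier from b4 falls outside the expected range and is dropped.
candidates = [w for w in widths if 50 <= w <= 100]
print(candidates)
```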
In some possible implementations, after recording the coordinate parameters of the obstacle edge position for each scan, the following calculation method is included. This calculation can be performed after the sweeping robot returns to the charging pile; at this time the robot is idle, which is conducive to processing and analyzing the large volume of data. This is one of the preferred methods, as follows:
First, selecting, from the multiple sets of coordinate parameters, the coordinate parameters that satisfy a certain proximity value.
For example, the above coordinate parameters a1(90,150), a2(91,151), a3(89,150), a4(92,152) are all neighboring coordinate parameters; coordinate values within plus or minus 5 of the corresponding position can be identified as neighboring. Among the above coordinate parameters b1(170,150), b2(173,151), b3(171,150), b4(292,152), b4(292,152) can be identified as exceeding the neighboring range, and this parameter is excluded from the normal parameter range. c4(0,152) is also a non-neighboring parameter.
Second, aggregating the selected coordinate parameters.
Aggregation can be performed in multiple ways; one of them, for example, is to average the parameters of the same position. For example, for a1(90,150), a2(91,151), a3(89,150), a4(92,152) above, the neighboring values are (90+91+89+92)/4=90.5 and (150+151+150+152)/4=150.75, giving the neighboring coordinate a(90.5,150.75).
Third, storing the aggregated coordinate parameters in a first array.
For example, the calculated a(90.5,150.75) above is stored as the first array for subsequent retrieval and use.
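Putting the selection and aggregation together: drop samples farther than plus or minus 5 from the group, then average the rest. A minimal Python sketch reproducing the example numbers follows; using the median as the group reference point is an assumption for illustration, as the text only specifies the ±5 proximity test and the averaging:

```python
from statistics import median

def aggregate(samples, tolerance=5):
    """Keep scan samples within `tolerance` of the group, then average
    them into one aggregated coordinate for the first array."""
    mx = median(p[0] for p in samples)
    my = median(p[1] for p in samples)
    kept = [p for p in samples
            if abs(p[0] - mx) <= tolerance and abs(p[1] - my) <= tolerance]
    n = len(kept)
    return (sum(p[0] for p in kept) / n, sum(p[1] for p in kept) / n)

# a1..a4 from the text yield the first-array entry a(90.5, 150.75).
print(aggregate([(90, 150), (91, 151), (89, 150), (92, 152)]))
```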
Step S104: acquiring image information of the obstacles in the travel path in real time.
The sweeping robot receives the user's voice control instruction and begins to perform the cleaning task. At this time, because the sweeping robot has no detailed map of the working area, it only performs a basic cleaning task, and it acquires obstacle image information in the travel path while cleaning.
Shooting can be performed with at least one camera device located on the robot device; the specific hardware structure is described above and is not repeated here. As shown in FIG. 9, the specific method steps are as follows:
Step S1042: determining whether it is an edge of the obstacle.
In this step, determining whether it is an obstacle edge position can invoke the judgment result of the scanning radar, or it can be determined independently from the camera images. Preferably, the judgment result of the scanning radar is invoked; for example, in combination with the determination in step S1024 that it is a door frame position, camera shooting is carried out simultaneously during the radar's repeated scanning.
Step S1044: when the edge of the obstacle is determined, acquiring multiple pieces of image information of the edge in real time from different positions and/or different angles through a camera device.
After multiple images are acquired, the following calculation method steps may also be included. This calculation can be performed after the sweeping robot returns to the charging pile; at this time the robot is idle, which is conducive to processing and analyzing the large volume of data.
First, extracting feature lines from the generated image information.
The feature lines are the door frame border lines, including any feature lines determined according to differences in grayscale values, and the feature lines are acquired from images taken from different angles and different positions.
Second, grouping feature lines having similar angles and similar positions into the same group.
The acquired feature lines are classified basically by aggregating the feature lines located near the same position into one group; those farther apart are divided into the next group.
Third, when the feature lines in the same group exceed a certain threshold, the position is identified as a marked position.
When the number of feature lines in the same group exceeds a certain threshold, for example a threshold of 10 lines, a group with more than 10 is identified as a marked position; that is, the position is confirmed as a door frame position. In addition, in the case where only a small number of feature lines gather together in a single group, these feature lines may not be inferred as a door; for example, fewer than 10 are not identified as a door frame position.
Fourth, recording the position coordinates of the marked position and storing them in a second array.
For the determined marked position, the coordinate parameters of the position are recorded; as described above, the parameters may be two-dimensional or three-dimensional coordinate parameters. For example, the coordinate parameters of the door frame are recorded as A(90,150).
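One plausible reading of this grouping-and-threshold rule in code: bin the extracted line segments by angle and position, and accept a bin as a marked (door frame) position once it collects more than the threshold number of lines. The sketch below assumes lines arrive as (angle, x-position) pairs and uses the threshold of 10 from the example; the bin sizes and representation are illustrative assumptions:

```python
from collections import defaultdict

def find_marked_positions(lines, angle_bin=10.0, pos_bin=5.0, threshold=10):
    """Group feature lines with similar angle and position; a group whose
    size exceeds `threshold` is taken as a marked (door frame) position.

    lines -- iterable of (angle_deg, x_position) tuples from all images.
    """
    groups = defaultdict(list)
    for angle, x in lines:
        key = (round(angle / angle_bin), round(x / pos_bin))
        groups[key].append((angle, x))

    marked = []
    for members in groups.values():
        if len(members) > threshold:           # small groups are not a door
            xs = [m[1] for m in members]
            marked.append(sum(xs) / len(xs))   # aggregated x of the group
    return marked  # coordinates to store in the second array

# Twelve near-vertical lines near x = 90 yield one marked position ~90.5.
sample = [(89.0 + i * 0.1, 90 + (i % 3) * 0.5) for i in range(12)]
print(find_marked_positions(sample))
```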
Step S106: determining the reference information of the obstacle with respect to the working area according to the position parameters and the image information.
This determination process is preferably carried out after the sweeping robot returns to the charging pile; at this time the robot is idle, which is conducive to processing and analyzing the large volume of data. Specifically:
In this method, the central control program needs to invoke the door frame recognition program and the room division program to execute the corresponding steps. After the corresponding programs and parameters are invoked:
First, comparing the first array and the second array.
For example, taking the first array a(90.5,150.75) and the second array A(90,150) above, the parameters in the related arrays are compared and analyzed to confirm a second time whether they are parameters of the same position.
Second, when the first array and the second array are close to within a certain range, determining the obstacle edge as a reference position of the working area.
This range can be set based on experience, for example a value of 3. For the first array a(90.5,150.75) and the second array A(90,150) above, comparison gives 90.5-90=0.5 and 150.75-150=0.75, both within the range of 3, so the coordinate position is identified as a door frame position.
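Expressed in code, this double confirmation is a simple per-coordinate tolerance check. A minimal Python sketch of the comparison described above (the function name and array layout are illustrative assumptions):

```python
def is_same_position(first, second, tolerance=3.0):
    """Return True if two door-position estimates agree within `tolerance`.

    first  -- (x, y) aggregated from repeated lidar scans, e.g. a(90.5, 150.75)
    second -- (x, y) derived from camera feature lines,    e.g. A(90, 150)
    """
    return all(abs(p - q) <= tolerance for p, q in zip(first, second))

# From the example: |90.5 - 90| = 0.5 and |150.75 - 150| = 0.75, both
# within the empirically chosen range of 3, so the obstacle edge is
# confirmed as a door frame, i.e. a reference position of the work area.
assert is_same_position((90.5, 150.75), (90, 150))
```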
Through the above radar scan data and camera data, the effective double confirmation makes the identification of door frames and the like more accurate, which is conducive to accurate division of the regions.
Step S108: dividing the working area into multiple sub-areas based on the reference information.
In some possible implementations, this step includes:
taking the reference information as the entrance of each sub-area, dividing the working area into multiple sub-areas, and marking the multiple sub-areas. For example, the mark includes a color and/or name mark.
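The disclosure does not spell out the segmentation algorithm itself; one common way to realize "divide the working area by taking the door positions as sub-area entrances" on a grid map is to treat the confirmed doorway cells as separators and flood-fill the remaining free space, each connected fill becoming one sub-area. The following Python sketch rests on that assumption and is not the disclosed method:

```python
from collections import deque

def divide_into_subareas(free, doorways):
    """Label connected free cells, treating doorway cells as separators.

    free     -- set of (x, y) free grid cells from the SLAM map
    doorways -- set of (x, y) cells confirmed as door (reference) positions
    Returns {cell: area_id}.
    """
    open_cells = free - doorways
    labels, next_id = {}, 1
    for start in open_cells:
        if start in labels:
            continue
        queue = deque([start])
        labels[start] = next_id
        while queue:
            x, y = queue.popleft()
            for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if nb in open_cells and nb not in labels:
                    labels[nb] = next_id
                    queue.append(nb)
        next_id += 1
    return labels

# Two 1-D "rooms" joined through a doorway at x = 2 get two area ids.
cells = {(x, 0) for x in range(5)}
print(divide_into_subareas(cells, {(2, 0)}))
```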
Configuring different colors for different regions helps color-blind users identify and distinguish the divided blocks. The user can also edit room names on the above blocks, such as living room, bedroom, kitchen, bathroom, and balcony, and save them. Before the machine starts next time, the user can designate the robot to clean one of the above regions locally, as shown in FIG. 12.
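As an illustration of the color/name marking, a per-sub-area record could look like the following; this is a hypothetical sketch for illustration, not a data structure given in the disclosure:

```python
from dataclasses import dataclass

@dataclass
class SubArea:
    area_id: int
    color: str       # display mark, e.g. on the phone client map
    name: str = ""   # user-editable label such as "living room"

# The user edits names and saves them; on the next start-up the robot
# can be assigned to one labelled sub-area for partial cleaning.
rooms = [SubArea(1, "red", "living room"), SubArea(2, "blue", "bedroom")]
target = next(r for r in rooms if r.name == "living room")
print(f"clean sub-area {target.area_id} ({target.name})")
```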
The robot provided by the embodiments of the present disclosure can use lidar to perform two-dimensional scanning and obtain door frame width data with high accuracy. Since the width of a door frame falls within a predictable range in a room, once the sweeping robot recognizes this width through accurate scanning, it marks the position as a candidate room dividing point. At the same time, to further improve the accuracy rate, the features of the door are extracted and identified with the camera, and the two sets of data are combined and verified to accurately obtain the position parameters of the door; the room is divided into regions by the position of the door, forming an accurate room layout map. By the dual insurance of radar scanning and camera shooting, the present disclosure greatly improves the accuracy of room door recognition and avoids the chaotic room segmentation caused by incorrectly recognized doors.
As shown in FIG. 10, applied to the robot in the application scenario of FIG. 1, the user controls the robot through voice instructions to execute related control instructions. An embodiment of the present disclosure provides an apparatus for constructing a robot working area map, including a scanning unit 1002, a camera unit 1004, a confirming unit 1006, and a dividing unit 1008, configured to execute the specific steps of the method described above, as follows:
Scanning unit 1002: configured to scan obstacles in the travel path in real time and record position parameters of the obstacles.
The sweeping robot receives the user's voice control instruction and begins to perform the cleaning task. At this time, because the sweeping robot has no detailed map of the working area, it only performs a basic cleaning task, and it acquires obstacle information in the travel path while cleaning.
The real-time scanning can be performed with at least one lidar located on the robot device; the specific hardware structure is described above and is not repeated here. As shown in FIG. 8, the scanning unit 1002 is further configured to execute the following steps:
Step S1022: scanning obstacles in the travel path in real time with a lidar, and determining whether the scanned position is an edge position of the obstacle;
The sweeping robot performs the cleaning task while the lidar scans obstacles in the travel path in real time, such as walls and beds. When continued scanning obtains the edge position information of an obstacle, for example the previous continuous scans were of an obstacle wall and a door frame position is suddenly scanned, this position is identified as a target position. The robot can perform repeated scanning in place or nearby to confirm whether the position is the target (door frame) position. When the width and height acquired at the scanned position are judged to be close to the door frame parameters (width, height) stored in the robot's storage device, the position is judged to be the target position. A typical door frame is 50-100 cm wide and 200-240 cm high, so if the corresponding scanning parameters fall into this range, the position is judged to be a target position.
Step S1024: when the edge position of the obstacle is determined, repeatedly scanning the edge position multiple times;
After the above steps confirm that the position is a door frame position, the width and height of the door frame are repeatedly scanned from the original position or a slightly moved position, that is, repeatedly scanned from different positions and different angles to obtain multiple sets of scan data. When scanning, multiple door frame width scans may be performed first to obtain width data, and then multiple height scans to obtain door frame height data.
Step S1026: recording the coordinate parameters of the obstacle edge position for each scan.
The above scanning process is also a data recording process. The data recorded at this time are the coordinate parameters of the scanned position (door frame). For example, with the charging pile position as the coordinate origin, the left door frame coordinates are constructed; for example the two-dimensional coordinates are a1(90,150), a2(91,151), a3(89,150), a4(92,152), etc. Similarly, the right side of the door frame is scanned to obtain the right door frame coordinates, for example b1(170,150), b2(173,151), b3(171,150), b4(292,152), etc. The pairs [a1,b1], [a2,b2], [a3,b3], [a4,b4] then form an array of door frame widths, and calculation shows the door frame widths are 80 cm, 82 cm, 82 cm, and 200 cm. Similarly, coordinate data of the door frame height can be obtained, for example c1(0,200), c2(0,201), c3(0,203), c4(0,152), giving door frame heights of 200 cm, 201 cm, 203 cm, and 152 cm. The above data are stored in the sweeping robot's storage device. The repeated actions of cleaning, scanning, recording, and storing continue until cleaning is completed.
In some possible implementations, after recording the coordinate parameters of the obstacle edge position for each scan, the following calculation method is included. This calculation can be performed after the sweeping robot returns to the charging pile; at this time the robot is idle, which is conducive to processing and analyzing the large volume of data. This is one of the preferred methods, as follows:
First, selecting, from the multiple sets of coordinate parameters, the coordinate parameters that satisfy a certain proximity value.
For example, the above coordinate parameters a1(90,150), a2(91,151), a3(89,150), a4(92,152) are all neighboring coordinate parameters; coordinate values within plus or minus 5 of the corresponding position can be identified as neighboring. Among the above coordinate parameters b1(170,150), b2(173,151), b3(171,150), b4(292,152), b4(292,152) can be identified as exceeding the neighboring range, and this parameter is excluded from the normal parameter range. c4(0,152) is also a non-neighboring parameter.
Second, aggregating the selected coordinate parameters.
Aggregation can be performed in multiple ways; one of them, for example, is to average the parameters of the same position. For example, for a1(90,150), a2(91,151), a3(89,150), a4(92,152) above, the neighboring values are (90+91+89+92)/4=90.5 and (150+151+150+152)/4=150.75, giving the neighboring coordinate a(90.5,150.75).
Third, storing the aggregated coordinate parameters in a first array.
For example, the calculated a(90.5,150.75) above is stored as the first array for subsequent retrieval and use.
Camera unit 1004: configured to acquire image information of the obstacles in the travel path in real time.
The sweeping robot receives the user's voice control instruction and begins to perform the cleaning task. At this time, because the sweeping robot has no detailed map of the working area, it only performs a basic cleaning task, and it acquires obstacle image information in the travel path while cleaning.
Shooting can be performed with at least one camera device located on the robot device; the specific hardware structure is described above and is not repeated here. As shown in FIG. 9, the camera unit 1004 further executes the following method steps:
Step S1042: determining whether it is an edge of the obstacle.
In this step, determining whether it is an obstacle edge position can invoke the judgment result of the scanning radar, or it can be determined independently from the camera images. Preferably, the judgment result of the scanning radar is invoked; for example, in combination with the determination in step S1024 that it is a door frame position, camera shooting is carried out simultaneously during the radar's repeated scanning.
Step S1044: when the edge of the obstacle is determined, acquiring multiple pieces of image information of the edge in real time from different positions and/or different angles through a camera device.
After multiple images are acquired, the following calculation method steps may also be included. This calculation can be performed after the sweeping robot returns to the charging pile; at this time the robot is idle, which is conducive to processing and analyzing the large volume of data.
First, extracting feature lines from the generated image information.
The feature lines are the door frame border lines, including any feature lines determined according to differences in grayscale values, and the feature lines are acquired from images taken from different angles and different positions.
Second, grouping feature lines having similar angles and similar positions into the same group.
The acquired feature lines are classified basically by aggregating the feature lines located near the same position into one group; those farther apart are divided into the next group.
Third, when the feature lines in the same group exceed a certain threshold, the position is identified as a marked position.
When the number of feature lines in the same group exceeds a certain threshold, for example a threshold of 10 lines, a group with more than 10 is identified as a marked position; that is, the position is confirmed as a door frame position. In addition, in the case where only a small number of feature lines gather together in a single group, these feature lines may not be inferred as a door; for example, fewer than 10 are not identified as a door frame position.
Fourth, recording the position coordinates of the marked position and storing them in a second array.
For the determined marked position, the coordinate parameters of the position are recorded; as described above, the parameters may be two-dimensional or three-dimensional coordinate parameters. For example, the coordinate parameters of the door frame are recorded as A(90,150).
Confirming unit 1006: configured to determine the reference information of the obstacle with respect to the working area according to the position parameters and the image information.
This determination process is preferably carried out after the sweeping robot returns to the charging pile; at this time the robot is idle, which is conducive to processing and analyzing the large volume of data. Specifically:
In this method, the central control program needs to invoke the door frame recognition program and the room division program to execute the corresponding steps. After the corresponding programs and parameters are invoked:
First, comparing the first array and the second array.
For example, taking the first array a(90.5,150.75) and the second array A(90,150) above, the parameters in the related arrays are compared and analyzed to confirm a second time whether they are parameters of the same position.
Second, when the first array and the second array are close to within a certain range, determining the obstacle edge as a reference position of the working area.
This range can be set based on experience, for example a value of 3. For the first array a(90.5,150.75) and the second array A(90,150) above, comparison gives 90.5-90=0.5 and 150.75-150=0.75, both within the range of 3, so the coordinate position is identified as a door frame position.
Through the above radar scan data and camera data, the effective double confirmation makes the identification of door frames and the like more accurate, which is conducive to accurate division of the regions.
Dividing unit 1008: configured to divide the working area into multiple sub-areas based on the reference information.
In some possible implementations, this specifically includes:
taking the reference information as the entrance of each sub-area, dividing the working area into multiple sub-areas, and marking the multiple sub-areas. For example, the mark includes a color and/or name mark.
Configuring different colors for different regions helps color-blind users identify and distinguish the divided blocks. The user can also edit room names on the above blocks, such as living room, bedroom, kitchen, bathroom, and balcony, and save them. Before the machine starts next time, the user can designate the robot to clean one of the above regions locally.
The robot provided by the embodiments of the present disclosure can use lidar to perform two-dimensional scanning and obtain door frame width data with high accuracy. Since the width of a door frame falls within a predictable range in a room, once the sweeping robot recognizes this width through accurate scanning, it marks the position as a candidate room dividing point. At the same time, to further improve the accuracy rate, the features of the door are extracted and identified with the camera, and the two sets of data are combined and verified to accurately obtain the position parameters of the door; the room is divided into regions by the position of the door, forming an accurate room layout map. By the dual insurance of radar scanning and camera shooting, the present disclosure greatly improves the accuracy of room door recognition and avoids the chaotic room segmentation caused by incorrectly recognized doors.
An embodiment of the present disclosure provides an apparatus for constructing a robot working area map, including a processor and a memory, where the memory stores computer program instructions executable by the processor, and when the processor executes the computer program instructions, the method steps of any of the foregoing embodiments are implemented.
An embodiment of the present disclosure provides a robot including the apparatus for constructing a robot working area map described in any of the above embodiments.
An embodiment of the present disclosure provides a non-transitory computer-readable storage medium storing computer program instructions which, when invoked and executed by a processor, implement the method steps of any of the foregoing embodiments.
As shown in FIG. 11, the robot 1100 may include a processing device (such as a central processing unit, graphics processor, etc.) 1101, which can execute various appropriate actions and processing according to a program stored in read-only memory (ROM) 1102 or a program loaded from a storage device 1108 into random access memory (RAM) 1103. In the RAM 1103, various programs and data required for the operation of the electronic robot 1100 are also stored. The processing device 1101, the ROM 1102, and the RAM 1103 are connected to each other through a bus 1104. An input/output (I/O) interface 1105 is also connected to the bus 1104.
Generally, the following devices can be connected to the I/O interface 1105: input devices 1106 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, and gyroscope; output devices 1107 including, for example, a liquid crystal display (LCD), speakers, and vibrators; storage devices 1108 including, for example, a magnetic tape and a hard disk; and a communication device 1109. The communication device 1109 may allow the electronic robot 1100 to perform wireless or wired communication with other robots to exchange data. Although FIG. 11 shows an electronic robot 1100 with various devices, it should be understood that implementing or having all the devices shown is not required; more or fewer devices may alternatively be implemented or provided.
In particular, according to embodiments of the present disclosure, the process described above with reference to the flowchart can be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication device 1109, or installed from the storage device 1108, or installed from the ROM 1102. When the computer program is executed by the processing device 1101, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are executed.
It should be noted that the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried. This propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. The computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium; it may send, propagate, or transmit the program for use by or in combination with the instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any suitable medium, including but not limited to: a wire, an optical cable, RF (radio frequency), etc., or any suitable combination of the above.
The above computer-readable medium may be included in the above robot; or it may exist alone without being assembled into the robot.
The computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof. The above programming languages include object-oriented programming languages such as Java, Smalltalk, and C++, and also conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on the user's computer, partly on the user's computer, as an independent software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In cases involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it may be connected to an external computer (for example, through an Internet connection using an Internet service provider).
The flowcharts and block diagrams in the drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagram may represent a module, program segment, or part of code that contains one or more executable instructions for realizing the specified logical functions. It should also be noted that in some alternative implementations, the functions marked in the blocks may occur in an order different from that marked in the drawings. For example, two blocks shown in succession may actually be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units involved in the embodiments described in the present disclosure may be implemented in software or in hardware. The name of a unit does not, in some cases, constitute a limitation on the unit itself; for example, the first obtaining unit may also be described as "a unit for obtaining at least two Internet Protocol addresses".
The apparatus embodiments described above are merely illustrative. The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solutions of this embodiment. Those of ordinary skill in the art can understand and implement this without creative work.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present disclosure, not to limit them. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions recorded in the foregoing embodiments, or equivalently replace some of the technical features therein; and these modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present disclosure.

Claims (19)

  1. A method for constructing a robot working area map, wherein the method comprises:
    scanning obstacles in a travel path in real time, and recording position parameters of the obstacles;
    acquiring image information of the obstacles in the travel path in real time;
    determining reference information of the obstacles with respect to the working area according to the position parameters and the image information;
    dividing the working area into multiple sub-areas based on the reference information.
  2. The method according to claim 1, wherein scanning obstacles in the travel path in real time and recording the position parameters of the obstacles comprises:
    scanning obstacles in the travel path in real time with a lidar, and determining whether a scanned position is an edge position of the obstacle;
    when the edge position of the obstacle is determined, repeatedly scanning the edge position multiple times;
    recording coordinate parameters of the obstacle edge position for each scan.
  3. The method according to claim 2, wherein after recording the coordinate parameters of the obstacle edge position for each scan, the method comprises:
    selecting, from multiple sets of the coordinate parameters, coordinate parameters that satisfy a certain proximity value;
    aggregating the selected coordinate parameters;
    storing the aggregated coordinate parameters in a first array.
  4. The method according to claim 3, wherein acquiring the image information of the obstacles in the travel path in real time comprises:
    determining whether it is an edge of the obstacle;
    when the edge of the obstacle is determined, acquiring multiple pieces of image information of the edge in real time from different positions and/or different angles through a camera device.
  5. The method according to claim 4, wherein after acquiring multiple pieces of image information of the edge in real time from different positions and/or different angles through a camera device when the edge of the obstacle is determined, the method comprises:
    extracting feature lines from the generated image information;
    grouping feature lines having similar angles and similar positions into a same group;
    when the feature lines in the same group exceed a certain threshold, identifying the position as a marked position;
    recording position coordinates of the marked position and storing them in a second array.
  6. The method according to claim 5, wherein determining the reference information of the obstacle with respect to the working area according to the position parameters and the image information comprises:
    comparing the first array and the second array;
    when the first array and the second array are close to within a certain range, determining the obstacle edge as a reference position of the working area.
  7. The method according to claim 1, wherein dividing the working area into multiple sub-areas based on the reference information comprises:
    taking the reference information as an entrance of each sub-area, and dividing the working area into multiple sub-areas;
    marking the multiple sub-areas.
  8. The method according to claim 7, wherein the mark comprises: a color and/or name mark.
  9. An apparatus for constructing a robot working area map, wherein the apparatus comprises:
    a scanning unit configured to scan obstacles in a travel path in real time and record position parameters of the obstacles;
    a camera unit configured to acquire image information of the obstacles in the travel path in real time;
    a determining unit configured to determine reference information of the obstacles with respect to the working area according to the position parameters and the image information;
    a dividing unit configured to divide the working area into multiple sub-areas based on the reference information.
  10. The apparatus according to claim 9, wherein the scanning unit is further configured to:
    scan obstacles in the travel path in real time with a lidar, and determine whether a scanned position is an edge position of the obstacle;
    when the edge position of the obstacle is determined, repeatedly scan the edge position multiple times;
    record coordinate parameters of the obstacle edge position for each scan.
  11. The apparatus according to claim 10, wherein the scanning unit is further configured to:
    select, from multiple sets of the coordinate parameters, coordinate parameters that satisfy a certain proximity value;
    aggregate the selected coordinate parameters;
    store the aggregated coordinate parameters in a first array.
  12. The apparatus according to claim 11, wherein the camera unit is further configured to:
    determine whether it is an edge of the obstacle;
    when the edge of the obstacle is determined, acquire multiple pieces of image information of the edge in real time from different positions and/or different angles through a camera device.
  13. The apparatus according to claim 12, wherein the camera unit is further configured to:
    extract feature lines from the generated image information;
    group feature lines having similar angles and similar positions into a same group;
    when the feature lines in the same group exceed a certain threshold, identify the position as a marked position;
    record position coordinates of the marked position and store them in a second array.
  14. The apparatus according to claim 13, wherein the determining unit is further configured to:
    compare the first array and the second array;
    when the first array and the second array are close to within a certain range, determine the obstacle edge as a reference position of the working area.
  15. The apparatus according to claim 9, wherein the dividing unit is further configured to:
    take the reference information as an entrance of each sub-area, and divide the working area into multiple sub-areas;
    mark the multiple sub-areas.
  16. The apparatus according to claim 15, wherein the mark comprises: a color and/or name mark.
  17. An apparatus for constructing a robot working area map, comprising a processor and a memory, wherein the memory stores computer program instructions executable by the processor, and when the processor executes the computer program instructions, the method steps of any one of claims 1-8 are implemented.
  18. A robot, comprising the apparatus according to any one of claims 9-17.
  19. A non-transitory computer-readable storage medium, wherein computer program instructions are stored thereon which, when invoked and executed by a processor, implement the method steps of any one of claims 1-8.
PCT/CN2020/083000 2019-04-02 2020-04-02 机器人工作区域地图构建方法、装置、机器人和介质 WO2020200282A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP20784889.6A EP3951544A4 (en) 2019-04-02 2020-04-02 METHOD AND APPARATUS FOR CREATING A ROBOT WORKSPACE MAP, ROBOT AND MEDIA
US17/601,026 US20220167820A1 (en) 2019-04-02 2020-04-02 Method and Apparatus for Constructing Map of Working Region for Robot, Robot, and Medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910261018.X 2019-04-02
CN201910261018.XA CN109947109B (zh) 2019-04-02 2019-04-02 机器人工作区域地图构建方法、装置、机器人和介质

Publications (1)

Publication Number Publication Date
WO2020200282A1 true WO2020200282A1 (zh) 2020-10-08

Family

ID=67013509

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/083000 WO2020200282A1 (zh) 2019-04-02 2020-04-02 机器人工作区域地图构建方法、装置、机器人和介质

Country Status (4)

Country Link
US (1) US20220167820A1 (zh)
EP (1) EP3951544A4 (zh)
CN (2) CN109947109B (zh)
WO (1) WO2020200282A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112200907A (zh) * 2020-10-29 2021-01-08 久瓴(江苏)数字智能科技有限公司 扫地机器人地图数据生成方法、装置、计算机设备和介质
CN112783158A (zh) * 2020-12-28 2021-05-11 广州辰创科技发展有限公司 多种无线传感识别技术融合方法、设备及存储介质
WO2022027869A1 (zh) * 2020-08-02 2022-02-10 珠海一微半导体股份有限公司 一种基于边界的机器人区域划分方法、芯片及机器人
CN114302326A (zh) * 2021-12-24 2022-04-08 珠海优特电力科技股份有限公司 定位区域的确定方法、定位方法、装置和定位设备

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109947109B (zh) * 2019-04-02 2022-06-21 北京石头创新科技有限公司 机器人工作区域地图构建方法、装置、机器人和介质
CN110926476B (zh) * 2019-12-04 2023-09-01 三星电子(中国)研发中心 一种智能机器人的伴随服务方法及装置
CN111419118A (zh) * 2020-02-20 2020-07-17 珠海格力电器股份有限公司 一种划分区域的方法、装置、终端及计算机可读介质
CN113495937A (zh) * 2020-03-20 2021-10-12 珠海格力电器股份有限公司 一种机器人的控制方法、装置、电子设备及存储介质
US11875572B2 (en) 2020-03-25 2024-01-16 Ali Corporation Space recognition method, electronic device and non-transitory computer-readable storage medium
CN111538330B (zh) * 2020-04-09 2022-03-04 北京石头世纪科技股份有限公司 一种图像选取方法、自行走设备及计算机存储介质
CN111445775A (zh) * 2020-04-14 2020-07-24 河南城建学院 一种用于室内教学的雷达检测辅助试验模型框架及方法
CN111427357A (zh) * 2020-04-14 2020-07-17 北京石头世纪科技股份有限公司 一种机器人避障方法、装置和存储介质
CN111753695B (zh) * 2020-06-17 2023-10-13 上海宜硕网络科技有限公司 一种模拟机器人充电返回路线的方法、装置和电子设备
CN111920353A (zh) * 2020-07-17 2020-11-13 江苏美的清洁电器股份有限公司 清扫控制方法、清扫区域划分方法、装置、设备、存储介质
CN112015175A (zh) * 2020-08-12 2020-12-01 深圳华芯信息技术股份有限公司 用于移动机器人的房间分割方法、系统、终端以及介质
CN112380942A (zh) * 2020-11-06 2021-02-19 北京石头世纪科技股份有限公司 一种识别障碍物的方法、装置、介质和电子设备
WO2022099468A1 (zh) * 2020-11-10 2022-05-19 深圳市大疆创新科技有限公司 雷达及雷达的数据处理方法、可移动平台、存储介质
CN112656986A (zh) * 2020-12-29 2021-04-16 东莞市李群自动化技术有限公司 基于机器人的消毒处理方法、装置、设备和介质
CN113932825B (zh) * 2021-09-30 2024-04-09 深圳市普渡科技有限公司 机器人导航路径宽度获取系统、方法、机器人及存储介质
CN114594761B (zh) * 2022-01-05 2023-03-24 美的集团(上海)有限公司 机器人的路径规划方法、电子设备及计算机可读存储介质
CN117315038A (zh) * 2022-06-21 2023-12-29 松灵机器人(深圳)有限公司 异常区域标定方法及相关装置
CN115268470B (zh) * 2022-09-27 2023-08-18 深圳市云鼠科技开发有限公司 清洁机器人的障碍物位置标记方法、装置以及介质
WO2024065398A1 (zh) * 2022-09-29 2024-04-04 深圳汉阳科技有限公司 自动铲雪方法、装置、设备及可读存储介质
CN115908730A (zh) * 2022-11-11 2023-04-04 南京理工大学 一种在低通信带宽下远程操控端基于边缘的三维场景重建系统方法
CN116543050B (zh) * 2023-05-26 2024-03-26 深圳铭创智能装备有限公司 一种透明曲面基板定位方法、计算机设备和存储介质
CN117173415B (zh) * 2023-11-03 2024-01-26 南京特沃斯清洁设备有限公司 用于大型洗地机的视觉分析方法及系统

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102018481A (zh) * 2009-09-11 2011-04-20 德国福维克控股公司 驱动清洁机器人的方法
US20120143372A1 (en) * 2010-12-06 2012-06-07 Samsung Electronics Co., Ltd. Robot and method for planning path of the same
CN106175606A (zh) * 2016-08-16 2016-12-07 北京小米移动软件有限公司 机器人及其实现自主操控的方法、装置
CN106239517A (zh) * 2016-08-23 2016-12-21 北京小米移动软件有限公司 机器人及其实现自主操控的方法、装置
CN106983449A (zh) * 2010-07-01 2017-07-28 德国福维克控股公司 具有区域划分的测绘制图
CN108303092A (zh) * 2018-01-12 2018-07-20 浙江国自机器人技术有限公司 一种自行规划路径的清洗方法
CN108885453A (zh) * 2015-11-11 2018-11-23 罗伯特有限责任公司 用于机器人导航的地图的划分
CN109947109A (zh) * 2019-04-02 2019-06-28 北京石头世纪科技股份有限公司 机器人工作区域地图构建方法、装置、机器人和介质

Family Cites Families (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7098435B2 (en) * 1996-10-25 2006-08-29 Frederick E. Mueller Method and apparatus for scanning three-dimensional objects
CN1782668A (zh) * 2004-12-03 2006-06-07 曾俊元 以视频感知的障碍物防撞方法与装置
KR101461185B1 (ko) * 2007-11-09 2014-11-14 삼성전자 주식회사 스트럭쳐드 라이트를 이용한 3차원 맵 생성 장치 및 방법
KR20090077547A (ko) * 2008-01-11 2009-07-15 삼성전자주식회사 이동 로봇의 경로 계획 방법 및 장치
US9043129B2 (en) * 2010-10-05 2015-05-26 Deere & Company Method for governing a speed of an autonomous vehicle
CN102254190A (zh) * 2010-12-13 2011-11-23 中国科学院长春光学精密机械与物理研究所 采用有方向的特征线实现图像匹配的方法
CA2886451C (en) * 2013-01-18 2024-01-02 Irobot Corporation Environmental management systems including mobile robots and methods using same
KR102158695B1 (ko) * 2014-02-12 2020-10-23 엘지전자 주식회사 로봇 청소기 및 이의 제어방법
US9516806B2 (en) * 2014-10-10 2016-12-13 Irobot Corporation Robotic lawn mowing boundary determination
US11069082B1 (en) * 2015-08-23 2021-07-20 AI Incorporated Remote distance estimation system and method
JP6798779B2 (ja) * 2015-11-04 2020-12-09 トヨタ自動車株式会社 地図更新判定システム
CN106737653A (zh) * 2015-11-20 2017-05-31 哈尔滨工大天才智能科技有限公司 一种机器人视觉中障碍物刚柔性的判别方法
FR3051275A1 (fr) * 2016-05-13 2017-11-17 Inst Vedecom Procede de traitement d’image pour la reconnaissance de marquage au sol et systeme pour la detection du marquage au sol
GB2552251B (en) * 2016-05-24 2019-09-04 Securi Cabin Ltd A system for steering a trailer towards a payload
WO2018046617A1 (en) * 2016-09-07 2018-03-15 Starship Technologies Oü Method and system for calibrating multiple cameras
GB201621404D0 (en) * 2016-12-15 2017-02-01 Trw Ltd A method of tracking objects in a scene
CN106595682B (zh) * 2016-12-16 2020-12-04 上海博泰悦臻网络技术服务有限公司 一种地图数据的差分更新方法、系统及服务器
CN106863305B (zh) * 2017-03-29 2019-12-17 赵博皓 一种扫地机器人房间地图创建方法及装置
CN107330925B (zh) * 2017-05-11 2020-05-22 北京交通大学 一种基于激光雷达深度图像的多障碍物检测和跟踪方法
CN107030733B (zh) * 2017-06-19 2023-08-04 合肥虹慧达科技有限公司 一种轮式机器人
JP6946087B2 (ja) * 2017-07-14 2021-10-06 キヤノン株式会社 情報処理装置及びその制御方法、並びに、プログラム
CN107817509A (zh) * 2017-09-07 2018-03-20 上海电力学院 基于rtk北斗和激光雷达的巡检机器人导航系统及方法
CN111328386A (zh) * 2017-09-12 2020-06-23 罗博艾特有限责任公司 通过自主移动机器人对未知环境的探察
CN108873880A (zh) * 2017-12-11 2018-11-23 北京石头世纪科技有限公司 智能移动设备及其路径规划方法、计算机可读存储介质
CN108509972A (zh) * 2018-01-16 2018-09-07 天津大学 一种基于毫米波和激光雷达的障碍物特征提取方法
US10618537B2 (en) * 2018-02-12 2020-04-14 Vinod Khosla Autonomous rail or off rail vehicle movement and system among a group of vehicles
CN109188459B (zh) * 2018-08-29 2022-04-15 东南大学 一种基于多线激光雷达的坡道小障碍物识别方法
KR20230153417A (ko) * 2021-03-03 2023-11-06 가디언 글라스, 엘엘씨 전기장들의 변화들을 생성 및 검출하기 위한 시스템들 및/또는 방법들
US11940800B2 (en) * 2021-04-23 2024-03-26 Irobot Corporation Navigational control of autonomous cleaning robots

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102018481A (zh) * 2009-09-11 2011-04-20 德国福维克控股公司 驱动清洁机器人的方法
CN106983449A (zh) * 2010-07-01 2017-07-28 德国福维克控股公司 具有区域划分的测绘制图
US20120143372A1 (en) * 2010-12-06 2012-06-07 Samsung Electronics Co., Ltd. Robot and method for planning path of the same
CN108885453A (zh) * 2015-11-11 2018-11-23 罗伯特有限责任公司 用于机器人导航的地图的划分
CN106175606A (zh) * 2016-08-16 2016-12-07 北京小米移动软件有限公司 机器人及其实现自主操控的方法、装置
CN106239517A (zh) * 2016-08-23 2016-12-21 北京小米移动软件有限公司 机器人及其实现自主操控的方法、装置
CN108303092A (zh) * 2018-01-12 2018-07-20 浙江国自机器人技术有限公司 一种自行规划路径的清洗方法
CN109947109A (zh) * 2019-04-02 2019-06-28 北京石头世纪科技股份有限公司 机器人工作区域地图构建方法、装置、机器人和介质

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022027869A1 (zh) * 2020-08-02 2022-02-10 珠海一微半导体股份有限公司 一种基于边界的机器人区域划分方法、芯片及机器人
CN112200907A (zh) * 2020-10-29 2021-01-08 久瓴(江苏)数字智能科技有限公司 扫地机器人地图数据生成方法、装置、计算机设备和介质
CN112200907B (zh) * 2020-10-29 2022-05-27 久瓴(江苏)数字智能科技有限公司 扫地机器人地图数据生成方法、装置、计算机设备和介质
CN112783158A (zh) * 2020-12-28 2021-05-11 广州辰创科技发展有限公司 多种无线传感识别技术融合方法、设备及存储介质
CN114302326A (zh) * 2021-12-24 2022-04-08 珠海优特电力科技股份有限公司 定位区域的确定方法、定位方法、装置和定位设备
CN114302326B (zh) * 2021-12-24 2023-05-23 珠海优特电力科技股份有限公司 定位区域的确定方法、定位方法、装置和定位设备

Also Published As

Publication number Publication date
CN109947109B (zh) 2022-06-21
EP3951544A1 (en) 2022-02-09
CN114942638A (zh) 2022-08-26
US20220167820A1 (en) 2022-06-02
CN109947109A (zh) 2019-06-28
EP3951544A4 (en) 2022-12-28

Similar Documents

Publication Publication Date Title
WO2020200282A1 (zh) 机器人工作区域地图构建方法、装置、机器人和介质
WO2021212926A1 (zh) 自行走机器人避障方法、装置、机器人和存储介质
US20210251450A1 (en) Automatic cleaning device and cleaning method
WO2021008439A1 (zh) 一种自动清洁设备控制方法、装置、设备和介质
CN111990929B (zh) 一种障碍物探测方法、装置、自行走机器人和存储介质
WO2021043080A1 (zh) 一种清洁机器人及其控制方法
WO2021208530A1 (zh) 一种机器人避障方法、装置和存储介质
TW202110380A (zh) 一種清潔機器人及其控制方法
WO2021042982A1 (zh) 一种清洁机器人及其控制方法
CN114468898B (zh) 机器人语音控制方法、装置、机器人和介质
CN109932726B (zh) 机器人测距校准方法、装置、机器人和介质
CN109920424A (zh) 机器人语音控制方法、装置、机器人和介质
WO2022041737A1 (zh) 一种测距方法、装置、机器人和存储介质
CN109920425B (zh) 机器人语音控制方法、装置、机器人和介质
US20240029298A1 (en) Locating method and apparatus for robot, and storage medium
WO2022227876A1 (zh) 一种测距方法、装置、机器人和存储介质
CN210673215U (zh) 一种多光源探测机器人
CN116977858A (zh) 一种地面识别方法、装置、机器人和存储介质
CN116942017A (zh) 自动清洁设备、控制方法及存储介质
CN117008148A (zh) 打滑状态的检测方法、设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20784889

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2020784889

Country of ref document: EP

Effective date: 20211102