CN112484713A - Map construction method, navigation method and control system of mobile robot - Google Patents

Map construction method, navigation method and control system of mobile robot

Info

Publication number
CN112484713A
CN112484713A (application CN202011103630.3A)
Authority
CN
China
Prior art keywords
mobile robot
light
emitting device
image
light emitting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011103630.3A
Other languages
Chinese (zh)
Inventor
Inventor not disclosed (不公告发明人)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ankobot Shanghai Smart Technologies Co ltd
Shankou Shenzhen Intelligent Technology Co ltd
Original Assignee
Ankobot Shanghai Smart Technologies Co ltd
Shankou Shenzhen Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ankobot Shanghai Smart Technologies Co ltd, Shankou Shenzhen Intelligent Technology Co ltd filed Critical Ankobot Shanghai Smart Technologies Co ltd
Priority to CN202011103630.3A
Publication of CN112484713A
Legal status: Pending (current)

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The application provides a map construction method, a navigation method and a control system of a mobile robot. The map construction method comprises the following steps: acquiring at least one image in a state where the light emitting mode of a light emitting device in the physical space of the mobile robot is controlled; extracting the positioning features of the light-emitting device from the at least one image; and determining the positioning position of the light-emitting device in the map based on the position of the positioning features of the light-emitting device in the at least one image, so as to record the positioning features and the positioning position of the light-emitting device in the map as landmark data of the light-emitting device. With this method and system, landmark data of the light-emitting device can be obtained from images acquired while the light-emitting mode of the light-emitting device is controlled, and a map containing the landmark data of the light-emitting device can then be constructed, so that positioning, navigation and other operations can be performed according to the map.

Description

Map construction method, navigation method and control system of mobile robot
Technical Field
The application relates to the technical field of mobile robots, in particular to a map construction method, a navigation method and a control system of a mobile robot.
Background
When the mobile robot is at an unknown position in an unknown environment, VSLAM (Visual Simultaneous Localization and Mapping) technology may be used to navigate and to construct a map. Specifically, the mobile robot constructs a map or generates a navigation route from at least one of: images of the surrounding environment captured by an image capturing device, and movement information about the mobile robot's own movement, so that the mobile robot can move autonomously.
However, when the mobile robot performs map construction and real-time localization based on VSLAM technology, the influence of ambient light (for example, insufficient ambient light) may prevent the mobile robot from constructing a map with accurate landmarks from the acquired images, making it difficult to perform positioning and navigation based on that map.
Disclosure of Invention
In view of the above drawbacks of the related art, an object of the present application is to provide a map construction method, a navigation method, and a control system for a mobile robot, which address the prior-art problems that, under the influence of ambient light, a mobile robot cannot construct a map with accurate landmarks from acquired images and therefore cannot perform positioning and navigation according to such a map.
To achieve the above and other related objects, a first aspect of the present application provides a map construction method of a mobile robot, the map construction method including the steps of: acquiring at least one image in a state where a light emitting pattern of a light emitting device in a physical space of the mobile robot is controlled; extracting the positioning features of the light-emitting device from the at least one image; and determining the positioning position of the light-emitting device in the map based on the position of the positioning feature of the light-emitting device in the at least one image, so as to record the positioning feature and the positioning position of the light-emitting device in the map as the landmark data of the light-emitting device.
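For illustration only, the three steps of the first aspect can be read as the following minimal Python sketch; the robot and device interfaces (set_mode, capture_images, extract_device_features, estimate_map_position, add_landmark) are hypothetical placeholders and are not part of the application.

```python
# Illustrative sketch of the first-aspect steps; all helper interfaces are hypothetical.
from dataclasses import dataclass

@dataclass
class Landmark:
    features: list        # positioning features of the light emitting device
    map_position: tuple   # positioning position of the device in the map

def build_landmark(robot, device, emit_mode, world_map):
    # Step 1: acquire at least one image while the light emitting mode is controlled.
    device.set_mode(emit_mode)                 # hypothetical wireless control call
    images = robot.capture_images(n=2)         # hypothetical camera call

    # Step 2: extract the positioning features of the light emitting device.
    features, pixel_pos = robot.extract_device_features(images, emit_mode)

    # Step 3: determine the positioning position in the map from the image position,
    # and record features and position as landmark data of the light emitting device.
    map_position = robot.estimate_map_position(pixel_pos)
    world_map.add_landmark(device.id, Landmark(features, map_position))
```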
In certain embodiments of the first aspect of the present application, the manner in which the light emission pattern of the light emitting device is controlled comprises at least one of: the intelligent terminal is in wireless communication with the light-emitting device to control the light-emitting mode of the light-emitting device; the intelligent terminal is also in communication connection with the mobile robot; the mobile robot wirelessly communicates with the light emitting device through an intermediate device to control a light emitting mode of the light emitting device; and the mobile robot directly wirelessly communicates with the light emitting device to control a light emitting mode of the light emitting device.
In certain embodiments of the first aspect of the present application, the step of acquiring at least one image in a state where a light emission pattern of the light emitting device is controlled in a physical space of the mobile robot includes: acquiring at least one image when it is determined that the mobile robot is located within a preset range from the light emitting device and in a state in which a light emitting mode of the light emitting device is controlled; and/or acquiring at least one image when it is determined that the mobile robot is not obstructed by an obstacle and in a state where a light emitting mode of the light emitting device is controlled.
In certain embodiments of the first aspect of the present application, the step of extracting the localization features of the light emitting device from at least one image comprises: and performing image analysis on the acquired at least one image according to the lighting mode to determine that the acquired at least one image contains the corresponding lighting device, so as to extract the positioning features by using the at least one image containing the lighting device.
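As one way to picture this image analysis, the sketch below (an assumption, not taken from the application) compares two grayscale frames captured in two controlled light emitting states and treats the image region whose brightness changed as commanded as belonging to the light emitting device; the threshold values are arbitrary.

```python
import numpy as np

def locate_device_by_mode(frame_bright, frame_dark, diff_threshold=40, min_pixels=20):
    """Return the pixel centroid of the region that brightened between the 'dark'
    and 'bright' states of the controlled light emitting device, or None if the
    device does not appear in these frames."""
    diff = frame_bright.astype(np.int16) - frame_dark.astype(np.int16)
    mask = diff > diff_threshold          # pixels that brightened as commanded
    if mask.sum() < min_pixels:           # too few changed pixels: device not visible
        return None
    ys, xs = np.nonzero(mask)
    return float(xs.mean()), float(ys.mean())   # image position of the device
```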
In certain embodiments of the first aspect of the present application, the lighting pattern of the lighting device comprises: a blinking mode, a color changing mode, a bright-dark mode, or a steady-state mode.
In certain embodiments of the first aspect of the present application, said extracting the localization features of the light emitting device from at least one image comprises: the lighting mode comprises at least one lighting state, and the positioning characteristics of the lighting device are extracted from at least one image acquired in one lighting state; or the lighting mode comprises at least two lighting states, and the positioning characteristics of the lighting device are extracted from a plurality of images acquired in at least two lighting states.
In certain embodiments of the first aspect of the present application, determining the localized position of the light emitting device in the map based on the position of the localized feature of the light emitting device in the at least one image comprises: and determining the positioning position of the light-emitting device in the map based on the positioning characteristics of the light-emitting device and the relative positions of the recorded positioning characteristics of other objects in the map in at least one image.
In certain embodiments of the first aspect of the present application, determining a localized position of a light emitting device in a map based on a position of a localized feature of the light emitting device in the at least one image comprises: determining a relative spatial position in physical space between a light emitting device and the mobile robot based on a position of a locating feature of the light emitting device in the at least one image; and determining the positioning position of the light-emitting device in the map according to the relative spatial position and the positioning position of the mobile robot in the map.
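A minimal two-dimensional illustration of this embodiment, under the assumption that the relative spatial position has already been expressed in the robot's coordinate frame and that the robot's map pose (x, y, heading) is known:

```python
import math

def device_map_position(rel_x, rel_y, robot_x, robot_y, robot_theta):
    """Transform the device's position in the robot frame (rel_x, rel_y) into map
    coordinates using the robot's map pose (robot_x, robot_y, robot_theta)."""
    c, s = math.cos(robot_theta), math.sin(robot_theta)
    return (robot_x + c * rel_x - s * rel_y,
            robot_y + s * rel_x + c * rel_y)
```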
In certain embodiments of the first aspect of the present application, when the light-emitting device is plural, the manner in which the light-emitting pattern of the light-emitting device is controlled in the physical space of the mobile robot includes: the light emitting devices are sequentially controlled according to the same light emitting pattern; or the light emitting devices may be controlled in accordance with different light emitting patterns.
In certain embodiments of the first aspect of the present application, the landmark data of the light emitting device further includes identification information of the light emitting device.
A second aspect of the present application provides a navigation method of a mobile robot, the navigation method including the steps of: acquiring at least one image, and extracting a second positioning feature from the at least one image; and determining the position of the mobile robot in the first map by matching the second positioning feature with a first positioning feature in the first map, wherein the first positioning feature is the positioning feature in the landmark data of the corresponding light-emitting device obtained by the map construction method according to any one of the first aspect of the present application, and the position of the mobile robot in the first map is used for generating the navigation route.
In certain embodiments of the second aspect of the present application, the navigation method further comprises: constructing a second map according to images acquired at different positions and movement data thereof during movement of the mobile robot, wherein the second map comprises second positioning features of the light-emitting devices; the step of determining the location of the mobile robot in the first map by matching the second location features with the first location features in the first map comprises: the location of the mobile robot in the second map is mapped to the location in the first map by matching the second location feature and the first location feature.
In certain embodiments of the second aspect of the present application, the navigation method further comprises: and fusing the first map and the second map according to the matched first positioning characteristic and the second positioning characteristic.
In certain embodiments of the second aspect of the present application, mapping the location of the mobile robot in the second map to the location in the first map by matching the second localization feature and the first localization feature comprises: determining a deviation of the positioning position of the light emitting device in the two maps by matching the second positioning feature and the first positioning feature; and mapping the position of the mobile robot in the second map to the position in the first map according to the positioning position deviation.
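As a simplified illustration of using the positioning-position deviation: if the matched light-emitting device lies at one position in the second map and at another in the first map, a translation-only deviation maps a second-map position into the first map. A real implementation would typically estimate a full rigid transform from several matches; this sketch assumes a pure translation.

```python
def map_pose_to_first_map(robot_in_map2, device_in_map2, device_in_map1):
    """Shift a second-map position into the first map using the deviation between
    the matched device's positioning positions in the two maps (translation only)."""
    dx = device_in_map1[0] - device_in_map2[0]
    dy = device_in_map1[1] - device_in_map2[1]
    return robot_in_map2[0] + dx, robot_in_map2[1] + dy
```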
In certain embodiments of the second aspect of the present application, acquiring at least one image comprises: at least one image is acquired in a state where a light emitting pattern of a light emitting device in a physical space of the mobile robot is controlled.
In certain embodiments of the second aspect of the present application, the manner in which the light emission pattern of the light emitting device is controlled comprises at least one of: the intelligent terminal is in wireless communication with the light-emitting device to control the light-emitting mode of the light-emitting device; the intelligent terminal is also in communication connection with the mobile robot; the mobile robot wirelessly communicates with the light emitting device through an intermediate device to control a light emitting mode of the light emitting device; and the mobile robot directly wirelessly communicates with the light emitting device to control a light emitting mode of the light emitting device.
In certain embodiments of the second aspect of the present application, the navigation method further comprises: acquiring at least one image when it is determined that the mobile robot is located within a preset range from the light emitting device and in a state in which a light emitting mode of the light emitting device is controlled; and/or acquiring at least one image when it is determined that the mobile robot is not obstructed by an obstacle and in a state where a light emitting mode of the light emitting device is controlled.
In certain embodiments of the second aspect of the present application, the step of extracting the second localization features of the light emitting device from at least one image comprises: and performing image analysis on the acquired at least one image according to the lighting mode to determine that the acquired at least one image contains the corresponding lighting device, so as to extract the second positioning feature by using the at least one image containing the lighting device.
In certain embodiments of the second aspect of the present application, the light emission pattern of the light emitting device comprises: a blinking mode, a color changing mode, a bright-dark mode, or a steady-state mode.
In certain embodiments of the second aspect of the present application, extracting second localization features of the light-emitting device from the at least one image comprises: the lighting mode comprises at least one lighting state, and second positioning features of the lighting device are extracted from at least one image acquired in one lighting state; or the lighting pattern comprises at least two lighting states, and the second positioning features of the lighting device are extracted from a plurality of images acquired in at least two lighting states.
In some embodiments of the second aspect of the present application, when the light emitting device is plural, the manner in which the light emitting pattern of the light emitting device in the physical space of the mobile robot is controlled includes: the light emitting devices are sequentially controlled according to the same light emitting pattern; or the light emitting devices may be controlled in accordance with different light emitting patterns.
In certain embodiments of the second aspect of the present application, determining the location of the mobile robot in the first map by matching the second localization feature with the first localization feature in the first map comprises: determining a relative spatial position in physical space between the light emitting device and the mobile robot based on the position of the second localization feature of the light emitting device in the at least one image; and determining the position of the mobile robot in the first map according to the relative spatial position and the positioning position of the first positioning feature matched with the second positioning feature in the first map.
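This is the inverse of the transform sketched earlier for map construction: knowing the matched device's positioning position in the first map and its position relative to the robot, the robot's own position can be recovered. The sketch below is a two-dimensional assumption that additionally uses an estimate of the robot's heading in the first map:

```python
import math

def robot_position_in_first_map(device_in_map1, rel_x, rel_y, robot_theta):
    """Recover the robot's first-map position from the matched device's map
    position and the device's position (rel_x, rel_y) in the robot frame."""
    c, s = math.cos(robot_theta), math.sin(robot_theta)
    # Subtract the relative offset after rotating it into the map frame.
    return (device_in_map1[0] - (c * rel_x - s * rel_y),
            device_in_map1[1] - (s * rel_x + c * rel_y))
```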
A third aspect of the present application provides a control system of a mobile robot, including: the interface device is used for being in communication connection with the light-emitting device; a storage device storing at least one program; processing means, connected to said storage means and to said interface means, for executing said at least one program for coordinating said storage means and said interface means to perform a method for mapping a mobile robot as described in any of the first aspects of the present application and/or to perform a method for navigating a mobile robot as described in any of the second aspects of the present application.
A fourth aspect of the present application provides a mobile robot including: an image pickup device for picking up an image; the interface device is used for being in communication connection with the light-emitting device; a storage device storing at least one program; the mobile device is used for controlling the mobile robot to execute a mobile operation; and a processing device connected to the storage device, the interface device, the image capturing device, and the mobile device, and configured to execute the at least one program to coordinate the storage device, the interface device, the image capturing device, and the mobile device to perform the map construction method of the mobile robot according to any one of the first aspects of the present application and/or perform the navigation method of the mobile robot according to any one of the second aspects of the present application.
A fifth aspect of the present application provides a computer-readable storage medium storing at least one program which, when invoked, executes and implements a mapping method for a mobile robot as described in any one of the first aspects of the present application, and/or executes a navigation method for a mobile robot as described in any one of the second aspects of the present application.
To sum up, the map construction method, the navigation method, and the control system of the mobile robot of the present application acquire at least one image containing a light-emitting device while the light-emitting mode of that device in the physical space of the mobile robot is controlled, and obtain the landmark data of the light-emitting device from the acquired image(s), thereby constructing a map containing the landmark data of the light-emitting device. On the one hand, such a map can be constructed under different ambient light conditions; on the other hand, positioning and navigation can be performed based on a map containing landmark data of the light-emitting device, and when accurate positioning and navigation are not possible, relocalization and navigation can be performed using at least one image acquired while the light-emitting mode of the light-emitting device is controlled.
Drawings
The specific features of the invention to which this application relates are set forth in the appended claims. The features and advantages of the invention to which this application relates will be better understood by reference to the exemplary embodiments described in detail below and the accompanying drawings. The brief description of the drawings is as follows:
fig. 1 is a schematic diagram of an application scenario of the present application in an embodiment.
Fig. 2 is a flowchart illustrating a mapping method for a mobile robot according to an embodiment of the present disclosure.
Fig. 3 is a schematic diagram of an architecture of an intelligent terminal in an embodiment.
FIG. 4 is a schematic diagram of two images obtained in two light emitting states according to an embodiment of the present application.
Fig. 5 is a schematic diagram of two images obtained in two light emitting states according to the present application in another embodiment.
Fig. 6 is a schematic diagram illustrating the principle of determining the relative spatial position between the light emitting device and the mobile robot based on the imaging principle according to the present application.
Fig. 7 is a flowchart illustrating a navigation method according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present application is provided for illustrative purposes, and other advantages and capabilities of the present application will become apparent to those skilled in the art from the present disclosure.
In the following description, reference is made to the accompanying drawings that describe several embodiments of the application. It is to be understood that other embodiments may be utilized and that changes in the module or unit composition, electrical, and operation may be made without departing from the spirit and scope of the present disclosure. The following detailed description is not to be taken in a limiting sense, and the scope of embodiments of the present application is defined only by the claims of the issued patent. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
Although the terms first, second, etc. may be used herein to describe various elements, information, or parameters, these elements or parameters should not be limited by these terms; the terms are only used to distinguish one element or parameter from another. For example, a first map may be referred to as a second map, and similarly a second map may be referred to as a first map, without departing from the scope of the various described embodiments. The first map and the second map each describe a map, but they are not the same map unless the context clearly dictates otherwise. The same applies to the first positioning feature and the second positioning feature.
Also, as used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises", "comprising", "includes", and/or "including", when used in this specification, specify the presence of stated features, steps, operations, elements, components, items, species, and/or groups, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, species, and/or groups thereof. The terms "or" and "and/or" as used herein are to be construed as inclusive, meaning any one or any combination. Thus, "A, B or C" or "A, B and/or C" means any of the following: A; B; C; A and B; A and C; B and C; A, B and C. An exception to this definition occurs only when a combination of elements, functions, steps, or operations is inherently mutually exclusive in some way.
When the mobile robot constructs a map based on a VSLAM (Visual simultaneous localization and Mapping) technique, it may obtain landmark data required for constructing the map by acquiring an image captured by an image capturing device and analyzing and processing the acquired image. The mobile robot can determine its position by landmark data.
Wherein the landmark data comprises: the positioning position of the object in the physical space where the mobile robot is located in the map and the positioning characteristics of the object. The locating features include image features of the object in the image. Wherein the image features reflect contour features, angular features, point features, line features, etc. of the object. The localized position includes map coordinates of the image feature in a map. It should be noted that the landmark data of an object includes at least one positioning feature and at least one positioning position. For example, an object includes 3 location features and corresponding 3 locations in landmark data.
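As a concrete illustration of this landmark data (one object may contribute several positioning features, each with its own positioning position), a minimal representation might look as follows; the field names are assumptions, not terms used by the application.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class LandmarkEntry:
    feature: tuple                  # one positioning feature, e.g. a descriptor of a contour/corner/point/line
    map_coord: Tuple[float, float]  # positioning position of that feature in the map

@dataclass
class ObjectLandmark:
    object_id: str                  # e.g. identification information of a light emitting device
    entries: List[LandmarkEntry]    # e.g. 3 features with 3 corresponding positions
```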
Changes in ambient light affect the quality of the images acquired by the mobile robot: for example, when the ambient light is insufficient or too strong, or when it changes, the acquired images may be under-exposed or over-exposed, or the light intensity may be unstable across multiple images. Under such conditions the mobile robot cannot determine accurate landmark data from stable image features in the acquired images, so the map it constructs from those images contains inaccurate or unusable landmark data, which in turn reduces the accuracy of the constructed map.
Further, when the mobile robot navigates based on an inaccurate map, it is also unable to accurately determine its position in physical space. For example, in an indoor environment, a map constructed by the mobile robot at night may include landmark data of a chair; when the robot positions and navigates in the daytime based on that map, the positioning features of the chair extracted from its acquired images cannot be matched with the positioning features in the map, so the robot cannot perform accurate positioning and navigation from the map and the acquired images.
Therefore, the application provides a map construction method, a navigation method and a control system of a mobile robot, which are used for acquiring at least one image containing a light-emitting device in a state that the light-emitting mode of the light-emitting device in a physical space of the mobile robot is controlled, so as to construct a map containing accurate landmark data according to the acquired at least one image, and further perform accurate positioning and navigation according to the map.
Referring to fig. 1, fig. 1 is a schematic diagram illustrating an application scenario of the present application in an embodiment, as shown in fig. 1, a working environment of a mobile robot includes a mobile robot 10 and a light emitting device 20.
The mobile robot is a machine that automatically performs specific work. It can accept human commands, run pre-programmed routines, or act according to strategies formulated with artificial intelligence techniques. The mobile robot can be used indoors or outdoors, in industrial, commercial, or household settings; it can replace personnel for security patrols, guest reception, or floor cleaning, and can also be used for family companionship, office assistance, and the like. The mobile robot is provided with at least one image capturing device for capturing images of its operating environment so as to perform VSLAM (Visual Simultaneous Localization and Mapping); based on the constructed map, the mobile robot can plan navigation paths for tasks such as patrolling, cleaning, and tidying. Generally, the mobile robot caches the map constructed during its operation in local storage, or uploads the map to a server or the cloud for storage, or uploads it to the user's intelligent terminal for storage.
The mobile robot 10 includes: a storage device 11, an interface device 12, a processing device 13, an image pickup device 14, and a moving device 15.
The image capturing device 14 is configured to convert light energy reflected by each measurement point of each object captured in the physical space within the range of the angle of view into image data with a corresponding pixel resolution. Wherein the measurement point is a light reflection area on the physical object corresponding to each pixel position in the image data based on a light reflection principle. The image capturing device 14 includes, but is not limited to, at least one of the following: a CCD or CMOS sensor-integrated image pickup device, and a wide-angle image pickup device. Examples of the image pickup apparatus integrated with the CCD or CMOS sensor device include a monocular image pickup apparatus, a binocular image pickup apparatus, and the like. Examples of the wide-angle imaging device include a fisheye imaging device.
The image data includes color image data in a matrix form describing an ambient environment captured within a viewing angle range of the image capturing apparatus. The number of pixel rows/columns in the matrix-form color image data corresponds to the pixel resolution of the image pickup device. The color image data reflects a wavelength band of light that can be reflected by each object measurement point in the surrounding environment and that can be acquired by the image pickup device, and includes: RGB data, R/G/B data, or light intensity data (also called grayscale data), wherein any of the R/G/B data can be used as the light intensity data, in other words, the light intensity data and the color image data of a single color are common data. Or the light intensity data is determined by detecting the intensity degree of the light beam in the preset wave band reflected by the surface of the object in the surrounding environment by the image pickup device in the visual angle range.
The interface device 12 is used for being connected with the light-emitting device 20 in a communication mode and for acquiring the image acquired by the image pickup device 14, and the interface device 12 comprises a network interface, a data line interface and the like. Wherein the network interface includes, but is not limited to, at least one of: network interface devices based on ethernet, network interface devices based on mobile networks (3G, 4G, 5G, etc.), network interface devices based on short-range communication (WiFi, bluetooth, etc.), and the like. The mobile robot is in communication with the light emitting device 20 through the interface device 12. The data line interface includes, but is not limited to, at least one of: USB interface, and RS232, etc. The interface device 12 is connected to the storage device 11, the processing device 13, the mobile device 15, the image pickup device 14, and the like.
The storage device 11 includes, but is not limited to, at least one of the following: Read-Only Memory (ROM), Random Access Memory (RAM), and non-volatile RAM (NVRAM). For example, the storage 11 includes a flash memory device or other non-volatile solid-state storage device. In certain embodiments, the storage 11 may also include memory remote from the one or more processing devices 13, such as network-attached memory accessed via RF circuitry or external ports and a communications network, which may be the internet, one or more intranets, local area networks, wide area networks, storage area networks, and the like, or suitable combinations thereof. The memory controller may control access to the memory by other components of the device, such as the CPU and peripheral interfaces. In some embodiments, the storage device 11 further stores a map constructed according to the map construction method of the present application.
The processing means 13 are connected to said interface means 12 and to the storage means 11. The processing means 13 comprise one or more processors. The processing means 13 is operable to perform data read and write operations with the storage means 11. The processing means 13 performs such functions as controlling the lighting pattern of the lighting means 20, extracting positioning features of said lighting means 20 from the image, etc. The processing device 13 includes one or more general purpose microprocessors, one or more application specific processors (ASICs), one or more Digital Signal Processors (DSPs), one or more Field Programmable logic arrays (FPGAs), or any combination thereof. The processing device 13 is also operatively coupled with I/O ports that enable the mobile robot to interact with various other electronic devices in the mobile robot, and input structures that enable a user to interact with the mobile robot. Thus, the input structures may include buttons, keyboards, mice, touch pads, and the like. The other electronic devices include, but are not limited to, at least one of: a motor in the mobile device 15 of the mobile robot, and a slave processor, such as a Micro Controller Unit (MCU), dedicated to control the mobile device 15 of the mobile robot.
The moving device 15 is used for controlling the mobile robot to perform moving operations. In practical embodiments, the moving device 15 may include a traveling mechanism and a driving mechanism, where the traveling mechanism may be disposed at the bottom of the mobile robot and the driving mechanism is disposed in the housing of the mobile robot. The traveling mechanism may use traveling wheels; in one implementation it may include, for example, at least two universal traveling wheels that realize movements such as advancing, retreating, steering, and rotating. In other implementations, the traveling mechanism may, for example, comprise a combination of two straight travel wheels and at least one auxiliary steering wheel, where the two straight travel wheels are mainly used for forward and reverse travel when the auxiliary steering wheel is not engaged, and steering, rotation, and similar movements are achieved when the auxiliary steering wheel is engaged and cooperates with the two straight travel wheels. The driving mechanism can be, for example, a driving motor used to drive the traveling wheels of the traveling mechanism. In a specific implementation, the driving motor can be a reversible driving motor, and a speed change mechanism can further be arranged between the driving motor and the axle of the traveling wheel.
The light emitting device 20 is used for communicating with the mobile robot 10 and emitting light that can be sensed by the image capturing device 14. In an embodiment, the mobile robot uses the interface device 12 to communicate with an interface device in the equipment where the light emitting device 20 is located, so as to establish a communication connection with the light emitting device 20. Examples of the interface device in the equipment where the light emitting device 20 is located are network interfaces, including but not limited to at least one of the following: Ethernet network interface devices, mobile-network (3G, 4G, 5G, etc.) network interface devices, short-range communication network interface devices, and the like. Based on this, the wireless communication signal used to establish the communication connection between the light-emitting device 20 and the mobile robot includes, but is not limited to, at least one of the following: WiFi signals, ZigBee signals, Bluetooth signals, infrared signals, and the like.
Here, the light emitting apparatus 20 includes a light emitting device, a control circuit of the light emitting device, and a wireless communication module signal-connected to the control circuit. The lighting device may be a stand-alone product, for example, a smart light. The lighting device may also be part of a smart device, such as a display panel on the smart device, and/or an indicator light, etc. Wherein, the intelligent device can be applied in industrial, commercial or household environment, and the intelligent device includes but is not limited to at least one of the following: smart appliances in a home environment (e.g., smart lights, smart air conditioners, smart televisions, etc.), and smart devices in an industrial environment (e.g., smart lights in an industrial environment, etc.), smart devices in a commercial environment (e.g., self-checkout devices in a mall, smart lights in a parking lot, etc.), and so forth.
Here, the mobile robot further includes an environment sensing device (not shown) for sensing the surrounding environment, such as an inertial navigation device and/or an environment measuring device. The processing device 13 of the mobile robot coordinates the mobile device 15, the image capturing device 14, the storage device 11, the interface device 12, the environment sensing device, and the like to perform the map construction and/or navigation scheme of the mobile robot provided by the present application by calling at least one program. The processing device 13 forms a control system of the mobile robot by hardware and software for controlling the movement of the mobile device 15 using data provided by the image pickup device 14 and the like, and even data provided by the environment sensing device and the like. Based on this, the control system comprises: processing means 13, storage means 11, and interface means 12. The control system achieves the purpose of obtaining a map describing a physical space in which the mobile robot moves by executing the map construction method of the present application and/or achieves the purpose of positioning and navigation by executing the navigation method of the present application.
Referring to fig. 2, fig. 2 is a flowchart illustrating a mapping method of a mobile robot according to an embodiment of the present disclosure, where the mapping method includes: step S110, step S120, and step S130. The mapping method may be performed by the mobile robot shown in fig. 1. Wherein the processing device coordinates hardware such as the storage device and the interface device to execute the following steps.
In step S110, at least one image is acquired in a state where a light emitting pattern of a light emitting device in a physical space of the mobile robot is controlled. Examples of the light emitting device are the same as or similar to those described above, and will not be described in detail here.
The lighting mode includes a blinking mode, a color changing mode, a bright-dark mode, or a steady-state mode. In the blinking mode, the light-emitting device blinks at a preset blinking frequency according to the received control instruction. For example, the different light emitting states in the blinking mode alternate in brightness or color at a preset blinking frequency, where the blinking frequency is the number of periodic on-off cycles the light emitting state completes per unit time. For example, according to the received control instruction, the light-emitting device is controlled to blink at a frequency of 1 Hz, being bright for 0.5 s and dark for 0.5 s within each second. In the color changing mode, the light-emitting state of the light-emitting device switches between at least two colors according to the received control instruction. In one example, the light-emitting device receives a plurality of control instructions, each containing different color information, so that the light-emitting device enters a color changing mode comprising light-emitting states of at least two colors. In another example, a single received control instruction includes a plurality of pieces of color information and a color lighting rule corresponding to each piece of color information, so that the light-emitting device enters a color changing mode comprising light-emitting states of at least two colors; the color lighting rule includes the light-emitting duration and light-emitting order corresponding to each piece of color information. In the bright-dark mode, the light-emitting state of the light-emitting device switches between at least two brightness levels according to the received control instruction. In one example, the light-emitting device receives a plurality of control instructions, each containing different brightness information, so that the light-emitting device enters a bright-dark mode comprising light-emitting states of at least two brightness levels. In another example, a single received control instruction includes a plurality of pieces of brightness information and a brightness lighting rule corresponding to each piece of brightness information, so that the light-emitting device enters a bright-dark mode comprising light-emitting states of at least two brightness levels; the brightness lighting rule includes the light-emitting duration and light-emitting order corresponding to each piece of brightness information. In the steady-state mode, the light-emitting device emits light in the single light-emitting state corresponding to the received control instruction, i.e., with the color and/or brightness specified by the instruction.
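These modes can be pictured as small control-instruction payloads. The encoding below is purely an assumed illustration; the application does not prescribe field names or units.

```python
# Assumed control-instruction payloads for the four light emitting modes.
blink_cmd = {"mode": "blink", "frequency_hz": 1.0}                  # e.g. 0.5 s bright / 0.5 s dark at 1 Hz
color_cmd = {"mode": "color_change",
             "rules": [{"color": "yellow", "duration_s": 0.5},      # color lighting rule: per-color
                       {"color": "purple", "duration_s": 0.5}]}     # light-emitting time and order
bright_cmd = {"mode": "bright_dark",
              "rules": [{"brightness": 0.2, "duration_s": 0.5},     # brightness lighting rule
                        {"brightness": 0.9, "duration_s": 0.5}]}
steady_cmd = {"mode": "steady", "color": "white", "brightness": 0.6}
```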
The change in brightness in the bright-dark mode, the change in color in the color-changing mode, and the change in brightness/color in the blinking mode may be slight; such a slight change may be one that is barely perceptible, or even imperceptible, to the human eye, yet the mobile robot can recognize it from an image. The light emitted in the steady-state mode may have a faint color and/or intensity, or may lie outside the wavelength band perceived by the human eye (e.g., near-infrared light).
It should be noted that any of the above light emitting modes may further specify a mode duration, i.e., how long the corresponding mode lasts, so as to reduce other effects of the light emitting device on the environment. The mode duration can be stored in the light-emitting device or carried in the control instruction. In some examples, the mode duration is stored on the light-emitting device side; for example, the stored duration of the blinking mode is 2 s, which limits the discomfort a blinking LED lamp causes to people in the room. In other examples, the mode duration is carried in the control instruction. For example, the device sending the control instruction encapsulates a preset fixed mode duration in the instruction. As another example, the device sending the control instruction analyzes data detected by the image capturing device and/or the environment sensing device of the mobile robot to predict the mode duration and encapsulates the predicted duration in the instruction. One such analysis is to compute the mean light intensity of the image data acquired before the control instruction is sent, so as to determine whether the ambient light is favorable for visual positioning; if so, a preset first duration is encapsulated in the control instruction as the mode duration, otherwise a preset second duration is encapsulated, where the first duration is shorter than the second duration.
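The duration-selection analysis described above could be sketched as follows; the intensity threshold and the two candidate durations are assumed values only.

```python
import numpy as np

def choose_mode_duration(recent_gray_frames, intensity_threshold=80,
                         first_duration_s=1.0, second_duration_s=3.0):
    """Pick the mode duration to embed in the control instruction from the mean
    light intensity of images acquired before the instruction is sent."""
    mean_intensity = float(np.mean([frame.mean() for frame in recent_gray_frames]))
    ambient_light_favorable = mean_intensity >= intensity_threshold
    # The first duration is shorter than the second, as described in the text.
    return first_duration_s if ambient_light_favorable else second_duration_s
```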
In order to enable the mobile robot to acquire an image corresponding to each of the light emission states in the light emission pattern, the mobile robot may control the light emission pattern of the light emission device according to the frequency at which the image pickup device acquires the image. For example, the image pickup apparatus can pick up 30 frames of images per second, and the color change mode includes two light emitting states of yellow and purple. In order to obtain at least one image in both the case where the light emitting state is yellow and the case where the light emitting state is purple, respectively, the maintaining time of the purple light emitting state and the yellow light emitting state in the color change mode should not be less than 1/30 seconds.
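The constraint in this example generalizes to: each light emitting state should be held for at least one camera frame period. A simple check is sketched below (an assumed helper, not part of the application).

```python
def state_hold_time_ok(hold_time_s, camera_fps=30):
    """Each light emitting state must last at least one frame period
    (1/30 s at 30 frames per second) so the camera captures at least
    one image per state."""
    return hold_time_s >= 1.0 / camera_fps
```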
In the physical space where the mobile robot is located, the light emitting mode of the light emitting device may be controlled by the mobile robot or may be controlled by the smart terminal.
In one embodiment, the mobile robot is in communication connection with a smart terminal, and the smart terminal is also in wireless communication with the light-emitting device to control the light-emitting mode of the light-emitting device; the intelligent terminal can be a smart phone, AR glasses, a smart watch, a tablet computer and other devices. A smart terminal held by a user is provided with a client (for example, APP used with the mobile robot) capable of providing communication with the mobile robot for the user, so that the user can perform wireless communication with the mobile robot by using the smart terminal. Referring to fig. 3, which is a schematic diagram illustrating an architecture of an intelligent terminal in an embodiment, the intelligent terminal 30 includes: storage means 31, interface means 32, processing means 33, and display means 34. The processing device 33 in the intelligent terminal coordinates the storage device 31, the interface device 32, the display device 34 and the like by calling at least one program to realize control of the light emitting mode of the light emitting device, acquisition of information such as a map constructed by the mobile robot and/or an instant position of the mobile robot in the constructed map, or display of the information transmitted by the mobile robot in an image or text manner.
The intelligent terminal is in communication connection with the mobile robot and also in communication connection with the light-emitting device in the physical space where the mobile robot is located. When a user operates the intelligent terminal, the intelligent terminal sends at least one control instruction to the light-emitting device on one hand, and sends a message containing the light-emitting mode to the mobile robot on the other hand, so that the mobile robot knows the light-emitting mode of the controlled light-emitting device. The message is, for example, a corresponding control instruction, or information sent by the intelligent terminal based on a message pushing mechanism.
In another embodiment, the light emitting pattern of the light emitting device is controlled by the mobile robot. Specifically, during wireless communication between the mobile robot and the light-emitting device directly or indirectly, the mobile robot issues a control instruction to control a light-emitting mode of the light-emitting device.
In one example, the mobile robot wirelessly communicates directly with the lighting device to control a lighting pattern of the lighting device.
In some specific examples, the mobile robot establishes a wireless communication connection with the light-emitting devices through a handshake protocol or the like, and then sends a control instruction to the corresponding light-emitting devices. Wherein, the data packet is transmitted through a wireless local area network such as Bluetooth or WiFi.
For example, the mobile robot and the light-emitting device perform wireless communication by using a bluetooth method, the mobile robot and the light-emitting device may be in an automatic pairing state, and when the mobile robot can search a bluetooth signal (i.e., a data packet) of the light-emitting device, the mobile robot and the light-emitting device may be in pairing connection, and the mobile robot may control a light-emitting mode of the light-emitting device; the mobile robot can also realize pairing connection with the light-emitting device through a pre-stored pairing password.
In other specific examples, the mobile robot sends a control command to the light-emitting devices within the preset range by periodically sending the control command, and the like. Wherein, the control instruction is transmitted by wireless communication modes such as infrared, ultrasonic and the like.
For example, the mobile robot and the light emitting device perform wireless communication by using an infrared signal, the mobile robot periodically actively emits the infrared signal, and the light emitting device may be in a preset light emitting mode when receiving the infrared signal emitted by the mobile robot, where the preset light emitting mode may be any one of the light emitting modes described above.
In another example, the mobile robot wirelessly communicates with the light-emitting device through an intermediate device to control the light-emitting mode of the light-emitting device. The intermediate device is a device for controlling intelligent household appliances in a home environment, intelligent devices in an industrial environment, or intelligent devices in a commercial environment. The intermediate device may wirelessly communicate with the mobile robot and with the light-emitting device, respectively, using wireless communication signals; these signals are the same as or similar to those described above and are not described in detail here. Taking as an example an intermediate device that implements intelligent household appliance control in a home environment (e.g., a smart-home hub or gateway), the mobile robot is communicatively connected to the intermediate device, the intermediate device is communicatively connected to the light-emitting device, and the mobile robot forwards the control instruction through the intermediate device to control the light-emitting device to switch to or maintain the corresponding light-emitting state according to the corresponding light-emitting mode. The intermediate device forwards the received control instruction according to the communication modes (such as communication protocol and transmission channel) corresponding to the mobile robot and the light-emitting device, respectively.
It should be noted that the above examples may be used in combination, for example, after receiving the control instruction, the intermediate device may respectively feed back a response message to the mobile robot and send the control instruction to the light-emitting device, where the response message is used to indicate that the intermediate device has sent the control instruction to the light-emitting device.
The mobile robot further acquires at least one image under the condition that the light emission pattern of the light emitting device is controlled.
Here, the mobile robot acquires one image or a plurality of images having a time series at least after confirming that the control instruction is issued. For example, an image pickup device of the mobile robot continuously acquires images and stores the images in a storage device (such as a cache), and when it is confirmed that a control instruction is issued, one or a plurality of images having a time series are extracted from the storage device.
Following the examples described above, the ways in which the mobile robot confirms that the control instruction has been issued include: upon receiving a message from a third-party device such as the intelligent terminal, confirming that the sender has also issued a control instruction to the light-emitting device; confirming issuance based on received response information corresponding to the control instruction; or confirming issuance at the moment the mobile robot itself sends out the control instruction.
In some embodiments, the mobile robot also acquires an image before confirming the issuance of the control instruction, so as to collect a plurality of images for reflecting the light emitting process of the light emitting device from the time the control instruction is not received until the light emitting device emits light according to the light emitting pattern in the control instruction. For example, an image pickup device of the mobile robot continuously acquires images and stores the images in a storage device (such as a cache), and a plurality of images having a time series corresponding to the above-described process are extracted from the storage device.
In order to effectively reduce the situation that the light-emitting device is not shot, improve the accuracy of the mobile robot in acquiring the image containing the light-emitting device, and reduce the computing resource of the mobile robot, in some embodiments, at least one image is acquired when the mobile robot or the intelligent terminal determines that the mobile robot is located within a preset range from the light-emitting device, and in a state that the light-emitting mode of the light-emitting device is controlled. The preset range is a distance range preset to ensure that the mobile robot can shoot the light-emitting device and to ensure that the mobile robot can transmit a control command to the light-emitting device. For example, the preset range is between 3m and 1 m. In one example, when the mobile robot determines that the mobile robot is within a preset range from the light-emitting device by detecting the strength of a wireless communication signal between the mobile robot and the light-emitting device, the mobile robot sends a control instruction to control the light-emitting device to be in a corresponding light-emitting mode, so that the mobile robot acquires a corresponding image. In another example, when the intelligent terminal determines that the mobile robot is located within a preset range of a light-emitting device through positioning interaction with the mobile robot, the intelligent terminal prompts a user, controls the light-emitting device to be in a corresponding light-emitting mode under the operation of the user, and sends a message containing the light-emitting mode to the mobile robot so that the mobile robot can acquire a corresponding image.
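One common way to approximate "within a preset range" from wireless signal strength is the log-distance path-loss model; the sketch below assumes this model and example parameter values, neither of which is specified by the application.

```python
def estimate_distance_from_rssi(rssi_dbm, rssi_at_1m_dbm=-59.0, path_loss_exponent=2.0):
    """Rough distance estimate (meters) from received signal strength using the
    log-distance path-loss model; rssi_at_1m_dbm is the expected RSSI at 1 m."""
    return 10 ** ((rssi_at_1m_dbm - rssi_dbm) / (10.0 * path_loss_exponent))

def within_preset_range(rssi_dbm, min_m=1.0, max_m=3.0):
    """Check the 1 m to 3 m preset range used as an example in the text."""
    distance = estimate_distance_from_rssi(rssi_dbm)
    return min_m <= distance <= max_m
```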
In other embodiments, the mobile robot further improves the probability that the acquired image contains the light-emitting device by jointly analyzing at least one item of data provided by the environment sensing device and/or the image capturing device to determine that the mobile robot is not obstructed by an obstacle, for example an obstacle such as a table, sofa, or bed that forms a space the mobile robot can pass under. At least one image is acquired when it is determined that the mobile robot is not blocked by an obstacle and the light emitting mode of the light emitting device is controlled.
Here, whether the mobile robot is obstructed by an obstacle is determined by analyzing whether at least one item of data satisfies a control condition. The control condition includes at least one of: a condition based on the ratio of the image size of an obstacle in the image captured by the image capturing device to the image size of the whole image being lower than a preset ratio threshold, and a condition based on the distance measurement data, provided by the environment sensing device, between the top of the mobile robot and an obstacle above it being greater than a preset distance threshold. The image size is, for example, the number of pixels. The distance measurement data is, for example, data provided by a measurement sensor such as a laser ranging sensor, a binocular imaging device, or a ToF sensor.
Taking the image-size criterion as an example, when the ratio of the number of pixels of each obstacle in the image acquired by the image capturing device to the number of pixels of the whole image is lower than the preset ratio, the mobile robot controls the light emitting mode of the light emitting device by wireless communication. For example, for a mobile robot used in a home environment with a preset ratio of 70%, when the robot is under a bed the bed board may occupy 90% of the pixels in the acquired image; the image size of the obstacle then fails the control condition, so the light emitting mode of the light emitting device is not controlled by wireless communication and no attempt is made to determine whether the acquired image contains the light emitting device. By judging whether the image size of obstacles in the image satisfies the control condition, the mobile robot avoids controlling the light emitting mode of the light emitting device, or trying to determine whether the acquired image contains it, when the robot is close to a large obstacle (such as a supermarket shelf or industrial processing equipment) or is inside a low space formed by an obstacle (such as under a sofa, a bed, or a vehicle).
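A compact illustration of the two control conditions above (obstacle image-size ratio and overhead clearance); the clearance threshold is an arbitrary assumption, while the 70% ratio follows the example in the text.

```python
def control_allowed(obstacle_pixel_count, total_pixel_count,
                    overhead_clearance_m=None,
                    ratio_threshold=0.70, clearance_threshold_m=0.3):
    """Return True when the control conditions are met: each obstacle occupies less
    than the preset ratio of the image, and (if measured) the clearance above the
    robot exceeds the preset distance threshold."""
    ratio_ok = (obstacle_pixel_count / total_pixel_count) < ratio_threshold
    clearance_ok = (overhead_clearance_m is None
                    or overhead_clearance_m > clearance_threshold_m)
    return ratio_ok and clearance_ok
```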
In some embodiments, when there are a plurality of light-emitting devices in the physical space that can wirelessly communicate with the mobile robot, the light-emitting mode of each light-emitting device and/or the timing at which each light-emitting device enters its light-emitting mode needs to be selectively controlled, in order to address at least one engineering concern such as the sparsity of the light-emitting devices in the physical space, the efficiency with which the mobile robot records the landmark data of each light-emitting device, and the minimization of mutual optical interference between the light-emitting devices; and/or to improve the accuracy of determining that each light-emitting device in the acquired at least one image corresponds to the controlled light-emitting mode; and to improve the accuracy of extracting the positioning features of each light-emitting device from the at least one image in the subsequent step S120.
In one example, the light emitting devices may be sequentially controlled in the same light emitting pattern. Specifically, the mobile robot or the smart terminal controls one of the plurality of light emitting devices in a light emitting mode, and does not control the other light emitting devices while controlling the light emitting device in the mode.
In a specific example, when there are a plurality of light-emitting devices, the mobile robot or the intelligent terminal determines the distance between the mobile robot and each light-emitting device according to the signal strength of the wireless communication signal between them, and sequentially controls the light-emitting devices in order of increasing distance.
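A minimal sketch of this nearest-first ordering is shown below, under the assumption that a stronger wireless signal indicates a shorter distance; control_device() and acquire_images() stand in for the robot's actual control and image-acquisition steps and are not functions defined in this application.

```python
def control_devices_nearest_first(device_rssi_dbm, control_device, acquire_images):
    """device_rssi_dbm maps a device identifier to its latest RSSI reading in dBm."""
    # Sort by descending signal strength, i.e. from the nearest device to the farthest.
    for device in sorted(device_rssi_dbm, key=device_rssi_dbm.get, reverse=True):
        control_device(device)     # put this device into the chosen light-emitting mode
        acquire_images(device)     # acquire at least one image before the next device
```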
In order to avoid controlling the same light-emitting device multiple times, the mobile robot or the intelligent terminal controls one of the plurality of light-emitting devices according to a light-emitting mode so that the mobile robot obtains at least one image. The mobile robot performs image analysis on the acquired at least one image, and once it determines that the at least one image contains that light-emitting device, the mobile robot or the intelligent terminal controls another light-emitting device according to the same light-emitting mode. The intelligent terminal can determine, through communication with the mobile robot, that the mobile robot has acquired an image containing the light-emitting device it controlled. The manner in which image analysis is used to determine that the at least one image contains the light-emitting device will be described in detail later.
In another example, the light-emitting devices are controlled in different light-emitting modes. Specifically, the mobile robot or the intelligent terminal may control the light-emitting devices simultaneously or sequentially according to different light-emitting modes, so that the positions of the different light-emitting devices in the image can be determined, and their positioning features extracted, according to their different light-emitting modes.
In a specific example, in order to improve the efficiency with which the mobile robot constructs the map, when the density of the light-emitting devices in the physical space satisfies a simultaneous-control condition, the mobile robot or the intelligent terminal controls the light-emitting devices simultaneously according to different light-emitting modes; otherwise, it may control them sequentially according to different light-emitting modes. The density is exemplified by the number of light-emitting devices within a preset range from the mobile robot, and on this basis the simultaneous-control condition is exemplified by the number of light-emitting devices within the preset range being higher than a preset number. For example, if the preset range is 3 m, the preset number is 2, and there are 3 light-emitting devices within 3 m of the mobile robot, the mobile robot or the intelligent terminal may control the light-emitting devices simultaneously according to different light-emitting modes.
Further, in order to prevent optical interference between the light-emitting devices from affecting the mobile robot's image analysis when they are controlled simultaneously according to different light-emitting modes, if the density of the light-emitting devices in the physical space satisfies the simultaneous-control condition but the distance between light-emitting devices is less than a preset distance, the mobile robot or the intelligent terminal controls the light-emitting devices sequentially according to the different light-emitting modes. The distance between light-emitting devices can be obtained from the coordinates broadcast by the light-emitting devices, where the broadcast coordinates are expressed in the same coordinate system, for example the coordinate system of a Bluetooth positioning scheme.
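The following sketch illustrates, under assumed thresholds, how the two preceding conditions could be combined: the devices are controlled simultaneously in different modes only when enough of them are within the preset range and no two of them are close enough to interfere optically; otherwise they are controlled sequentially. The coordinates are assumed to be broadcast in a shared frame such as that of a Bluetooth positioning scheme.

```python
import itertools
import math

def choose_control_strategy(device_positions, robot_position,
                            preset_range_m=3.0, preset_count=2, min_separation_m=1.0):
    """device_positions and robot_position are (x, y) coordinates in the shared frame."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    nearby = [p for p in device_positions if dist(p, robot_position) <= preset_range_m]
    dense_enough = len(nearby) > preset_count
    well_separated = all(dist(p, q) >= min_separation_m
                         for p, q in itertools.combinations(nearby, 2))
    return "simultaneous" if dense_enough and well_separated else "sequential"
```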
Note that any two of the four modes, i.e., the blinking mode, the color-changing mode, the bright-dark mode, and the steady-state mode, may be regarded as different light-emitting modes; so may blinking modes of different frequencies, color-changing modes containing light-emitting states of different colors, bright-dark modes containing light-emitting states of different luminances, or different steady-state modes.
Based on any one of the above embodiments, the light emitting mode of the light emitting device in the physical space of the mobile robot can be controlled, and further, the mobile robot can acquire at least one image.
The mobile robot may perform step S120 according to the acquired at least one image.
In step S120, the mobile robot extracts the positioning feature of the light-emitting device from the at least one image. The positioning feature of the light-emitting device refers to an image feature of the light-emitting device in the acquired image, where the image feature reflects contour features, corner features, point features, line features, and the like, of the light-emitting device.
Specifically, the mobile robot performs image analysis on the acquired at least one image according to the light-emitting mode to determine that the acquired at least one image contains the corresponding light-emitting device, so that the at least one image containing the light-emitting device can be used to extract the positioning feature. Here, the mobile robot analyses the acquired at least one image according to the light-emitting mode by detecting pixel values within an image and/or the variation of pixel values across different images, so as to determine the image area of the controlled light-emitting device in the image (i.e., to determine that the acquired at least one image contains the corresponding light-emitting device). The manner in which the image area is determined will be described in detail later. For example, according to the color and/or brightness of the light-emitting state in the steady-state mode, it is determined whether there are pixel values in the acquired at least one image that match that color and/or brightness; if so, the acquired at least one image is determined to contain the corresponding light-emitting device, and the mobile robot can then extract the positioning features of the light-emitting device from the image, among the acquired at least one image, that contains it. For another example, according to the change of light-emitting state in a blinking mode, bright-dark mode, or color-changing mode, it is determined whether the change in pixel values of the image areas describing the same object in at least two acquired images corresponds to that change of light-emitting state; if so, the at least two acquired images are determined to contain the corresponding light-emitting device, and the mobile robot can then extract the positioning features of the light-emitting device from the images, among the at least two acquired images, that contain it.
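As an illustration of the first check (matching the color/brightness of a steady-state mode), the sketch below tests whether an image contains enough pixels within an assumed HSV range for the controlled color; the bounds and the minimum pixel count are hypothetical calibration values.

```python
import cv2
import numpy as np

def contains_lighting_state(image_bgr, lower_hsv=(20, 100, 100), upper_hsv=(35, 255, 255),
                            min_pixels=50):
    """Return True when the image contains pixels matching the controlled light-emitting state."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    return cv2.countNonZero(mask) >= min_pixels
```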
In one embodiment, the lighting mode includes at least one lighting state, and the mobile robot may extract a positioning feature of the lighting device from at least one image acquired in one of the lighting states. In particular, in case the lighting pattern comprises one lighting state or at least two lighting states, the image area of the lighting device in the image and its positioning features may be determined from only one image or a plurality of images acquired in one of the lighting states.
In an example, taking a light-emitting mode that includes one light-emitting state, the mobile robot extracts, from at least one image acquired in that light-emitting state, the positioning feature corresponding to the light-emitting device and its position in the corresponding image, using an image area formed from the connected domain of pixel values that match the light-emitting state.
Taking as an example the case where the mobile robot extracts the positioning features corresponding to the light-emitting device and their positions from a single acquired image, the mobile robot extracts the connected domain formed by the pixel values matching the light-emitting state and extends that connected domain outward by a preset number of pixels to obtain at least one image area; screens each image area using its contour shape, the proportion of its contour size within the image, and the like, to filter out image areas that clearly do not belong to the light-emitting device; extracts, in at least one screened image area, image features and their positions in the corresponding image using an extraction algorithm obtained by machine learning or a preset feature extraction template; and takes the extracted image features and their positions as the positioning features of the light-emitting device and their positions in the corresponding image.
Taking a light-emitting mode that includes a color-based light-emitting state as an example, for instance a steady-state mode that is a yellow light-emitting state of fixed luminance, the mobile robot extends the connected domain formed by the pixel values representing yellow in one image by a preset number of pixels to obtain at least one image area; screens each image area according to image characteristics of the light-emitting device and/or the actual size of the light-emitting device pre-stored in the storage device, so as to filter out image areas that clearly do not belong to the light-emitting device; and then extracts the positioning features of the light-emitting device, and their positions in the corresponding image, from the at least one screened image area.
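A hedged sketch of this region-extraction step is given below: pixels matching the yellow light-emitting state are thresholded, the resulting connected domains are extended outward by a few pixels, and regions whose size is implausible for the light-emitting device are filtered out. The HSV bounds, the dilation width, and the area limits are assumptions; feature extraction (for example with a learned extractor or a feature template) would then run on the surviving regions.

```python
import cv2
import numpy as np

def candidate_regions(image_bgr, lower_hsv=(20, 100, 100), upper_hsv=(35, 255, 255),
                      dilate_px=3, min_area=30, max_area=5000):
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    kernel = np.ones((2 * dilate_px + 1, 2 * dilate_px + 1), np.uint8)
    mask = cv2.dilate(mask, kernel)                      # extend each connected domain outward
    count, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    regions = []
    for i in range(1, count):                            # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        if min_area <= area <= max_area:                 # drop regions clearly not the device
            x, y, w, h = stats[i, :4]                    # left, top, width, height
            regions.append((x, y, w, h))
    return regions
```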
In yet another example, still taking a light-emitting mode that includes one light-emitting state, the mobile robot extracts the positioning features corresponding to the light-emitting device and their positions in the corresponding images from a plurality of acquired images. Unlike the way of extracting a positioning feature and its position from a single image, the mobile robot determines, according to the light-emitting state in the light-emitting mode, each image region in the plurality of images that matches that light-emitting state, and extracts from these image regions the positioning features that satisfy a matching relationship together with their positions in the corresponding images. Examples of the matching relationship include at least one of: the similarity of the descriptors of the positioning features satisfies a preset similarity condition; the position error (also called pixel position error) between matched positioning features in different images satisfies a preset error condition; and the pixel displacement between the image regions/matched positioning features in different images and the movement data of the mobile robot satisfy a preset proportional-scale condition. The movement data is measured by the mobile robot using data provided by the image capturing device and/or the environment sensing device, examples of which include: data on the distance and orientation over which the mobile robot moves autonomously, and/or data on the distance and orientation over which the mobile robot moves relative to objects in the environment.
Taking the case where the light emission pattern includes a light emission state based on color, for example, the steady-state light emission pattern is a light emission state of yellow with fixed luminance, the mobile robot can determine each image region corresponding to the light emission state of yellow from the plurality of images in the manner described in the foregoing embodiments. And the mobile robot takes the positioning features which meet the matching relation in each image area as the positioning features of the light-emitting device, and extracts the positions of the positioning features of the light-emitting device in the corresponding images.
In some examples, image features corresponding to image regions in a plurality of images that meet a yellow light-emitting state are counted, and the image feature with the largest number is used as a positioning feature of the light-emitting device.
It should be noted that the above manner of extracting the positioning features and the positions is only an example, and the mobile robot may combine the above example to screen out more suitable image areas, and extract the positioning features of the light emitting devices and the positions thereof in the corresponding images.
In another example, taking an example in which the lighting pattern includes two lighting states, the mobile robot extracts the localization feature of the lighting device from at least one image acquired in one of the lighting states, based on the at least one image and a pixel value representing the lighting state in the image. For example, the color change mode includes two light emitting states of yellow and purple, and the mobile robot may extract the positioning feature of the light emitting device from at least one image acquired in the yellow light emitting state according to the position of the pixel value representing yellow in the image in the manner described in the above example, or may extract the positioning feature of the light emitting device from at least one image acquired in the purple light emitting state according to the position of the pixel value representing purple in the image in the manner described in the above example.
When determining the image area of the light-emitting device in an image based on one light-emitting state, the luminance state and the color state indicated by the pixel values in the image may be considered in combination, or only the color state, or only the luminance state, may be considered.
In another embodiment, the light-emitting mode includes at least two light-emitting states, and the positioning feature of the light-emitting device is extracted from a plurality of images acquired in at least two of the light-emitting states. When the light-emitting mode includes two light-emitting states, the mobile robot may extract the positioning feature of the light-emitting device from the plurality of images according to the images acquired in each light-emitting state and the variation of pixel values across those images. When the light-emitting mode includes more than two light-emitting states, the mobile robot may extract the positioning feature either from the images acquired in all of the light-emitting states and the variation of pixel values across them, or only from the images acquired in some of the light-emitting states and the variation of pixel values across them, where such a subset includes at least two light-emitting states.
Specifically, when the change of the pixel values corresponding to a plurality of image regions representing the same object in a plurality of images represents the change of the light-emitting state in the light-emitting mode, the object can be determined to be the light-emitting device; the mobile robot may then extract from the plurality of images the positioning features corresponding to the light emitting devices and their positions in the respective images.
In some embodiments, in order to determine more accurately that the change of the pixel values in a set of image areas reflects the change of light-emitting state in the light-emitting mode, the storage device may store in advance the pixel values, pixel-value ranges, or pixel-value differences corresponding to the light-emitting states; the mobile robot may then determine, according to the pixel values or pixel-value differences corresponding to the light-emitting states in the light-emitting mode, the image areas whose changes reflect the change of light-emitting state in the light-emitting mode, and take the positioning features corresponding to those image areas as the positioning features of the light-emitting device.
When the light-emitting mode includes a plurality of light-emitting states, the light-emitting mode corresponds to a plurality of such differences. For example, when the light-emitting mode includes the three light-emitting states red, yellow, and green, the light-emitting mode corresponds to three pixel-value differences, namely the difference between the pixel values corresponding to the red and yellow light-emitting states, the difference between those corresponding to the red and green light-emitting states, and the difference between those corresponding to the yellow and green light-emitting states. The mobile robot can determine the image areas reflecting the change of some of the light-emitting states in the light-emitting mode according to at least one of these pixel-value differences.
In one example, take a bright-dark mode that includes two light-emitting states, one of luminance 0 (the light-emitting device does not emit light) and one of luminance C (C ≠ 0). The mobile robot acquires a plurality of images across the two light-emitting states, at least one of which is acquired in the luminance-0 state and at least one of which is acquired in the luminance-C state. According to the change of pixel values in the acquired images, the position of the light-emitting device in the images can be determined. For example, please refer to fig. 4, which is a schematic diagram of two images obtained in the two light-emitting states in an embodiment of the present application. As shown in fig. 4, image D is obtained in the light-emitting state of luminance 0 and image E is obtained in the light-emitting state of luminance C, where image area m in image E and image area n in image D represent the same object in the physical space. Taking image D as a reference, the pixel values of image D are compared with those at the corresponding positions in image E; if the difference between the pixel value at each pixel position in image area m of image E and that at each pixel position in image area n of image D satisfies the pixel-value difference corresponding to the two light-emitting states, the object at image areas m and n is determined to be the light-emitting device. The positioning features of the light-emitting device can then be extracted from the two images according to its positions in images D and E, respectively. It should be noted that, in some embodiments, the positioning feature of the light-emitting device may be extracted from only image D or only image E.
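For illustration, the following sketch compares the two (aligned) grayscale images and keeps the pixels whose brightness differs by roughly the amount expected between the luminance-0 and luminance-C states; the expected difference and the tolerance are assumed calibration values, and image registration is assumed to have been done beforehand.

```python
import cv2
import numpy as np

def emitting_region_mask(image_off_gray, image_on_gray, expected_diff=120, tolerance=40):
    """Boolean mask of pixels whose change matches the bright-dark light-emitting states."""
    diff = cv2.absdiff(image_on_gray, image_off_gray).astype(np.int32)
    return np.abs(diff - expected_diff) <= tolerance
```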
In another example, take a color-changing mode that includes two light-emitting states, red and green. Please refer to fig. 5, which is a schematic diagram of two images obtained in the two light-emitting states in another embodiment of the present application. Image F is obtained in the red light-emitting state and image G is obtained in the green light-emitting state, where image area a in image F and image area b in image G represent the same object in the physical space. If the difference between the pixel values of image F in image area a and those of image G in image area b satisfies the pixel-value difference corresponding to the red and green light-emitting states, the object represented by image area a in image F and image area b in image G is the light-emitting device. The positioning features of the light-emitting device can then be extracted from the two images according to its image areas in images F and G, respectively.
In yet another example, the light-emitting mode is a blinking mode, and the mobile robot may extract the positioning feature of the light-emitting device from a plurality of images according to the periodicity of the change of the pixel values representing the same object in those images. For example, the mobile robot extracts the positioning feature of the light-emitting device from 8 acquired images: the pixel values of the image region representing the same object are unchanged between the first and second images, the pixel values of that region in the third and fourth images differ from those in the first two images, and the remaining four images repeat the pattern of the first four. According to this periodic variation, the object in the corresponding image region of each image can be determined to be the light-emitting device. It should be noted that, for a fixed image acquisition rate of the image capturing device, different blinking frequencies of the blinking mode lead to different numbers of images per change period; for example, the higher the blinking frequency, the fewer the images per change period.
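The periodicity test could, for example, look like the sketch below: the mean brightness of the image region describing the same object is collected over the frames, grouped into half-period blocks (two frames per block, matching the 8-image example above and assumed to be derivable from the blink frequency and frame rate), and accepted only if consecutive blocks alternate with sufficient contrast.

```python
import numpy as np

def matches_blink_period(region_brightness, frames_per_half_period=2, min_contrast=30):
    """region_brightness: mean brightness of the candidate region in each acquired frame."""
    b = np.asarray(region_brightness, dtype=float)
    usable = len(b) // frames_per_half_period * frames_per_half_period
    blocks = b[:usable].reshape(-1, frames_per_half_period).mean(axis=1)
    diffs = np.diff(blocks)
    # Consecutive half-period blocks must alternate up/down with enough contrast.
    return bool(np.all(np.abs(diffs) >= min_contrast) and np.all(diffs[:-1] * diffs[1:] < 0))
```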
In step S130, the mobile robot determines the positioning position of the light-emitting device in a map based on the position of the positioning feature of the light-emitting device in the at least one image, so as to record the positioning feature and positioning position of the light-emitting device in the map as landmark data of the light-emitting device. The map is constructed from images and movement data acquired at different positions during the movement of the mobile robot, on the basis of mapping the starting position of the mobile robot to the starting coordinate position of the constructed map. The map of the mobile robot needs to store landmark data, which includes the positioning position in the map of an object in the physical space where the mobile robot is located, and the positioning features of that object. It should be noted that the landmark data of an object includes at least one positioning feature and at least one positioning position; for example, the landmark data of an object may include 3 positioning features and the corresponding 3 positioning positions. It should also be noted that the positioning feature of the light-emitting device includes: positioning features extracted from one or more images corresponding to one light-emitting state in the light-emitting mode, or positioning features extracted from a plurality of images corresponding to at least two light-emitting states.
Specifically, the mobile robot determines the positioning position of the light-emitting device in the map either by determining, from the position of the positioning feature of the light-emitting device in the image, the relative spatial position between the light-emitting device and the mobile robot in the physical space, or by determining the relative position, within the image, between the positioning feature of the light-emitting device and the positioning features of other objects already recorded in the map.
In one embodiment, the location position of the lighting device in the map is determined based on the relative positions of the location features of the lighting device and the recorded location features of other objects in the map in at least one image. The other objects recorded in the map refer to objects in which landmark data are already recorded in the map. The relative positions include: pixel position deviations between the locating features of the light emitting device and the locating features of another object on the image.
In one example, one image contains the light-emitting device and an object whose landmark data has been recorded in the map. The relative spatial position of the light-emitting device and the object in the actual physical space is determined according to the relative position of their positioning features in the image (for example, the number of pixels by which the positioning features of the light-emitting device and of the object differ in the lateral direction of the image, and the number of pixels by which they differ in the longitudinal direction) together with the physical reference information of the image capturing device, where the physical reference information includes: the physical height of the image capturing device above the travel plane (e.g., the ground), the focal length of the image capturing device, the angle of view of the image capturing device, and the angle of the main optical axis of the image capturing device relative to the horizontal or vertical plane. The positioning position of the light-emitting device in the map is then determined according to the relative spatial position of the light-emitting device and the object and the positioning position of the object in the map.
In another example, the mobile robot may further determine, for each image, a location position of the lighting apparatus in the map obtained in each image according to the location features of the lighting apparatus and relative positions of the location features of other objects recorded in the map in the plurality of images, perform data processing (e.g., averaging processing, median processing, etc.) on the plurality of location positions obtained in the plurality of images, and use the processed location position as a required location position of the lighting apparatus in the map.
In another embodiment, a relative spatial position between the light emitting device and the mobile robot in physical space is determined based on the position of the locating feature of the light emitting device in the at least one image; and determining the positioning position of the light-emitting device in the map according to the relative spatial position and the positioning position of the mobile robot in the map. The relative spatial positions include: the distance and the azimuth angle between the light-emitting device and the mobile robot.
In one example, the mobile robot determines the relative spatial position between the light-emitting device and the mobile robot in the physical space based on the position of the light-emitting device in one image and the physical reference information. It should be noted that the light-emitting device may lie above the travel plane, but when the relative spatial position is determined from a single image the light-emitting device is treated as lying in the travel plane. Please refer to fig. 6, which is a schematic diagram of the principle, based on the imaging principle, of determining the relative spatial position between the light-emitting device and the mobile robot according to the present application. As shown, the diagram includes three coordinate systems: the image coordinate system UO1V, the world coordinate system with O3 as its origin, and the camera coordinate system with O2 as its origin. Suppose the light-emitting device includes a point P in the actual physical space. Given the physical height H of the image capturing device above the travel plane (which can be obtained, for example, from the Z-axis coordinate of the image capturing device in the world coordinate system), the world coordinate point M corresponding to the image coordinate center, the distance O3M from the world coordinate origin O3 to M, the image coordinate O1 of the lens center point, the image coordinate P1 of the measured pixel point, the length and width of an actual pixel, and the focal length of the image capturing device, the length O3P can be obtained by derivation, and from the length O3P the physical distance between the current position of the mobile robot and point P is obtained. To determine the azimuth angle between point P and the position of the mobile robot, the mobile robot calculates it according to the correspondence, pre-stored in the storage device, between each pixel point of the image and an actual physical azimuth angle; each pixel point corresponds to an azimuth angle, which can be calculated from parameters such as the number of pixels, the focal length of the image capturing device, and the angle of view. The mobile robot can then determine the positioning position of the light-emitting device in the map according to this relative spatial position and the positioning position of the mobile robot in the map.
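A simplified, hedged sketch of this single-image geometry is given below, under the stated assumption that point P lies in the travel plane and assuming a pinhole camera whose main optical axis is tilted a known angle below the horizontal; all parameter names are illustrative, and the computation is a simplification of the figure rather than a reproduction of it.

```python
import math

def distance_and_azimuth(u, v, cx, cy, focal_px, camera_height_m, axis_tilt_rad):
    """(u, v): pixel coordinates of P; (cx, cy): principal point;
    focal_px: focal length expressed in pixels (focal length / pixel pitch)."""
    # Angle of the pixel ray below the main optical axis, then below the horizontal.
    depression = axis_tilt_rad + math.atan2(v - cy, focal_px)
    if depression <= 0:
        raise ValueError("ray does not intersect the travel plane")
    ground_distance = camera_height_m / math.tan(depression)   # distance from the robot to P along the plane
    azimuth = math.atan2(u - cx, focal_px)                      # left/right bearing of P
    return ground_distance, azimuth
```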
In another example, the mobile robot includes at least two image capturing devices, and the mobile robot may determine the relative spatial position between the light-emitting device and the mobile robot based on the binocular-ranging principle by simultaneously acquiring, from different image capturing devices, two images containing the light-emitting device. Specifically, the mobile robot determines the relative spatial position between the light-emitting device and the mobile robot according to the position of the light-emitting device in each image, the focal length of the image capturing devices, and the positional relationship between the image capturing devices, and then determines the positioning position of the light-emitting device in the map according to the relative spatial position and the positioning position of the mobile robot in the map. The positional relationship between the image capturing devices includes the distance between their optical centers (i.e., the baseline).
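For a rectified stereo pair, the depth relation used here reduces to Z = f · B / d, as in the minimal sketch below; the variable names are assumptions, and lens distortion and rectification are taken as already handled.

```python
def stereo_depth_m(u_left, u_right, focal_px, baseline_m):
    """Depth of the light-emitting device from its horizontal pixel positions in the two views."""
    disparity = u_left - u_right              # pixel offset between the left and right images
    if disparity <= 0:
        raise ValueError("invalid disparity for a rectified stereo pair")
    return focal_px * baseline_m / disparity  # Z = f * B / d
```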
In yet another example, the mobile robot determines a relative spatial position between the light emitting device and the mobile robot in a physical space based on two images including the light emitting device acquired from one image pickup device, a correspondence of an image coordinate system and a physical space coordinate system, and a position of the light emitting device in each image. Wherein the two images are images of different moments of time (for example, a current moment image and a last moment image) acquired by the image pickup device. For an implementation of the above steps, referring to the robot positioning method described in patent publication No. CN107907131B, a relative spatial position between the robot and the object is determined based on the acquired positions of matching features of the object in the current-time image and the previous-time image and the correspondence between the image coordinate system and the physical space coordinate system.
It should be noted that the examples provided in steps S120 and S130 are only examples. To reduce the consumption of computational resources and to integrate steps S120 and S130 more closely, in some embodiments, during the period in which the light-emitting device emits light according to the light-emitting mode, the mobile robot determines, from the plurality of images acquired in that period and the corresponding movement data, the positioning features that match the corresponding light-emitting states and their positions in the corresponding images, and determines the positioning position of the light-emitting device in the map.
The mobile robot determines image areas describing the same object in the plurality of images according to the movement data, determines image areas corresponding to the same object in the plurality of images according to the same or different light-emitting states in the light-emitting modes, extracts the positioning features of the light-emitting devices with matching relations and the positions of the light-emitting devices in the corresponding images from the image areas, and then determines the positions of the light-emitting devices in the map according to the positions of the positioning features of the light-emitting devices in the images. Wherein the movement data is measured by the mobile robot by using data provided by the image capturing device and/or the environment sensing device, and examples thereof include at least one of the following: data of the distance and orientation that the mobile robot moves autonomously, data of the distance and orientation that the mobile robot moves relative to objects within the environment, and preset physical scale constants (such as the lens separation between the binocular cameras, or the distance between the mobile robot and the plane of travel). Examples of the matching relationship include at least one of: the similarity of the descriptors of the positioning features meets a preset similarity condition, the position error (also called pixel position error) between the matched positioning features in different images meets a preset error condition, and the moving pixel data between the image areas/matched positioning features in different images and the moving data of the mobile robot meet a preset proportional scale condition.
It should be noted that, in light of the above example and the respective examples in steps S120 and S130, those skilled in the art can combine these approaches to obtain further ways of determining the landmark data of the light-emitting device, which are therefore not enumerated here.
Based on any of the above embodiments, the positioning position of the light emitting device can be determined, and based on this, the mobile robot records the positioning feature and the positioning position of the light emitting device as landmark data of the light emitting device in the map. The location features of the landmark data include: the positioning features extracted by the mobile robot from one image or the positioning features extracted by the mobile robot from a plurality of images. For example, the mobile robot extracts different positioning features of the light-emitting device from two images acquired in two light-emitting states, and can record both the extracted different positioning features in a map.
It should be noted that, if there is no landmark data at the location position of the light emitting device in the map, the obtained location feature and location position of the light emitting device can be directly recorded in the map as landmark data of the light emitting device. If landmark data has already been recorded at this position, the existing landmark data may be recorded in the map as the landmark data of the light-emitting device, or the existing landmark data may be recorded in the map together with the positioning feature of the light-emitting device obtained in the foregoing embodiment.
In some embodiments, identification information of the light emitting device is further included in the landmark data. Wherein the identification information of the light emitting device is used to distinguish the light emitting device from other objects. When the light emitting device is plural, the identification information may also be used to distinguish different light emitting devices in the physical space. The identification information is determined based on at least one of the following: a physical address of the lighting apparatus (e.g. a Mac address and/or an IP address), a digest of the physical address (e.g. a Hash value), a device number of the lighting apparatus, or unique random data, etc.
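As an illustration of how such landmark data might be organized, the sketch below stores the positioning features, the positioning positions in the map, and an identifier derived from a digest of the device's physical address; the field layout is hypothetical and not a data format defined by this application.

```python
import hashlib
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class LightLandmark:
    device_id: str                                                        # e.g. SHA-256 digest of the MAC address
    positions: List[Tuple[float, float]] = field(default_factory=list)    # positioning positions in the map
    features: List[bytes] = field(default_factory=list)                   # serialized positioning-feature descriptors

def device_identifier(mac_address: str) -> str:
    """Derive identification information from the device's physical address."""
    return hashlib.sha256(mac_address.encode("utf-8")).hexdigest()

# landmark = LightLandmark(device_identifier("AA:BB:CC:DD:EE:FF"))
```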
The landmark data stored in the map includes not only the landmark data corresponding to the light-emitting devices but also the landmark data corresponding to other objects. The landmark data corresponding to another object includes: the positioning features extracted from images acquired with or without a light-emitting mode being controlled, their positions in the respective images, the positioning position of that object in the map, and so on. In other words, the map contains landmark data gathered under a richer range of light environments, so a map constructed by this map construction method enables the mobile robot to adapt to more light environments.
According to the map construction method of the mobile robot, the mobile robot can obtain the map recorded with the landmark data of the light-emitting device under different visual conditions based on at least one image containing the light-emitting device, which is acquired in the state that the light-emitting mode of the light-emitting device in the physical space of the mobile robot is controlled, so that the mobile robot can perform positioning and navigation based on the map.
Based on the landmark data of the light-emitting device obtained in any of the above embodiments, the present application further provides a navigation method for a mobile robot. Please refer to fig. 7, which is a schematic flow chart of an embodiment of the navigation method of the present application; as shown in the figure, the navigation method includes step S210 and step S220. The navigation method may be performed by the mobile robot shown in fig. 1, in which the processing device coordinates hardware such as the storage device and the interface device to execute the following steps.
For the convenience of the subsequent description, the positioning feature in the landmark data of the corresponding light emitting device in the map stored in the mobile robot is referred to as a first positioning feature.
In step S210, the mobile robot acquires at least one image, and extracts a second positioning feature from the at least one image.
In one embodiment, the storage device stores therein positioning features of the light emitting device. The mobile robot extracts a second positioning feature of the light-emitting device, which is matched with the stored image feature, from the at least one image according to the at least one image acquired by the image capturing device, and the extracted second positioning feature is used for determining the position of the mobile robot in the first map.
In another embodiment, the at least one image is acquired in a state where a light emitting pattern of a light emitting device is controlled in a physical space of the mobile robot. The embodiment in which the lighting pattern of the lighting device is controlled and the embodiment in which the second location feature of the lighting device is extracted from the acquired at least one image are the same as or similar to those described above will not be described in detail herein. By acquiring at least one image in a state where the light-emitting mode of the light-emitting device is controlled, the mobile robot can extract the second positioning feature of the light-emitting device under different visual conditions.
In some embodiments, during the movement of the mobile robot, a second map is constructed from the images acquired at different positions and the corresponding movement data. When the mobile robot detects an interruption condition, it constructs a temporary second map based on its current position in the physical space. The interruption condition includes any one of the following cases: the movement is interrupted by the intervention of an external force, the data needed for positioning from the first map is missing or insufficient, or the movement is interrupted by the intervention of a control command. For example, when the light is insufficient, the mobile robot keeps acquiring images, and if it cannot determine its position in the first map from the acquired images and the first map, it constructs the second map. For another example, when a user carries the mobile robot from room A to room B, the mobile robot determines that an interruption condition has occurred if it detects that the duration for which lift-off (suspended) data is generated exceeds a preset warning duration threshold, and constructs the second map when the interruption condition occurs.
The movement data includes angle data related to a moving direction of the mobile robot and distance data related to a moving distance of the mobile robot. The movement data may be provided by a movement measurement device. The movement measuring device includes a counting sensor and an angle sensor (e.g., a gyroscope) provided at any position of a driving motor, a wheel, or a mechanical structure between the driving motor and the wheel. The moving path describing the actual movement of the mobile robot can be obtained in the map by using the continuously measured moving data, and then the mobile robot can construct a second map according to the second positioning features of each object extracted from the images acquired during the movement and the moving path in the map.
Wherein the second map is constructed to include a second locating feature of the light emitting device. In one example, the storage device stores therein image characteristics of the light emitting device. The mobile robot extracts a second positioning feature of the light-emitting device, which matches the stored image feature, from the at least one image according to the at least one image acquired by the image pickup device. In another example, the at least one image is acquired in a state where a light emitting pattern of a light emitting device is controlled in a physical space of the mobile robot. The embodiment in which the lighting pattern of the lighting device is controlled and the embodiment in which the second location feature of the lighting device is extracted from the acquired at least one image are the same as or similar to those described above will not be described in detail herein. By acquiring at least one image including the light-emitting device in a state where the light-emitting mode of the light-emitting device is controlled, the mobile robot can extract the second positioning feature of the light-emitting device under different visual conditions, and the landmark data including the second positioning feature of the light-emitting device is recorded in the second map.
The mobile robot may determine a position of the mobile robot in a map according to the second positioning feature of the light emitting device.
In step S220, the mobile robot determines the position of the mobile robot in the first map by matching the second positioning feature with the first positioning feature in the first map. Wherein the first positioning feature is a positioning feature in the landmark data of the light emitting device obtained in the map construction method described above. In one example, the landmark data of the light emitting device includes identification information, and the mobile robot can distinguish the landmark data of the light emitting device from landmark data of other objects in a map according to the identification information.
In an embodiment, the mobile robot determines a relative spatial position in physical space between the light emitting device and the mobile robot based on a position of the second localization feature of the light emitting device in the at least one image; and determining the position of the mobile robot in the first map according to the relative spatial position and the positioning position of the first positioning feature matched with the second positioning feature in the first map. The embodiment in which the mobile robot determines the relative spatial position between the light emitting device and the mobile robot in the physical space according to the position of the second positioning feature in the at least one image is the same as or similar to the embodiment described in the map building method, and will not be described in detail here.
Specifically, the mobile robot determines, according to the second positioning feature, a first positioning feature that matches the second positioning feature in the first map, and when the matching first positioning feature is a positioning feature of a plurality of objects, the mobile robot may further uniquely determine, according to the identification information of the light-emitting device, a first positioning feature that matches the second positioning feature of the light-emitting device in the first map, and then determine, according to the determined position of the first positioning feature in the map and the relative spatial position, the position of the mobile robot in the first map.
In order to determine the position of the mobile robot in the first map while the mobile robot moves according to the constructed second map, step S220 includes step S221.
In step S221, the mobile robot maps the position of the mobile robot in the second map to the position in the first map by matching the second positioning feature and the first positioning feature. Specifically, for the case where the same lighting device is described in the determined first map and second map, the mobile robot further determines a deviation of the positioning positions of the first positioning feature and the second positioning feature that describe the same lighting device in the respective maps, and maps the position of the mobile robot in the second map to the position in the first map using the determined positioning position deviation. Wherein the positional deviation comprises a displacement deviation, and/or an angular deviation, of the light emitting device between the positional positions in the first map and the second map, respectively.
In one embodiment, the deviation of the positioning position of the lighting device in the two maps is determined by matching the second positioning feature and the first positioning feature. And mapping the position of the mobile robot in the second map to the position in the first map according to the positioning position deviation.
In a specific embodiment, the mobile robot performs a matching operation between the plurality of first positioning features of the light-emitting device in the first map and the plurality of second positioning features of the light-emitting device in the second map. When the first positioning features matching each second positioning feature of the light-emitting device are obtained, the first positioning position of each first positioning feature in the first map and the second positioning position of each second positioning feature in the second map are also obtained, and from these the positioning position deviation of the light-emitting device is obtained. The mobile robot then transforms each positioning position in the second map as a whole according to the positioning position deviation, so as to align the second map with the first map. The alignment operation performs a matrix transformation on each positioning position in the second map according to the rotation angle, translation displacement, and scaling described by the positioning position deviation; for example, the mobile robot performs at least one of a rotation transformation, a translation transformation, or a scaling transformation on the second map, or transforms the second map in blocks to reduce the amount of computation in a single calculation. The mobile robot can determine its position in the transformed second map, and then, according to the positioning positions of the same light-emitting device in the first map and the second map together with its own position in the transformed second map, map its position in the second map to its position in the first map.
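As a hedged sketch of this alignment step, the code below estimates a 2-D rotation and translation (scaling is omitted for simplicity, although the description above also allows it) from the matched positioning positions of the same light-emitting devices in the two maps, using the standard SVD-based (Kabsch) procedure, and then maps a position from the second map into the first map.

```python
import numpy as np

def estimate_rigid_transform(second_pts, first_pts):
    """second_pts, first_pts: (N, 2) arrays of matched positioning positions, N >= 2."""
    A = np.asarray(second_pts, dtype=float)
    B = np.asarray(first_pts, dtype=float)
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t                               # first_map_point ≈ R @ second_map_point + t

def map_position_to_first_map(position_in_second_map, R, t):
    return R @ np.asarray(position_in_second_map, dtype=float) + t
```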
In certain embodiments, the step S221 further includes: and fusing the first map and the second map according to the matched first positioning characteristic and the second positioning characteristic. Specifically, according to the foregoing embodiment, the mobile robot performs matrix conversion on each positioning position in the second map based on the matched first positioning feature and second positioning feature, and fuses the converted second map with the first map. For example, redundant information in the converted second map is removed; and updating the converted second map. The redundant information is landmark data and a map area generated after the map areas describing the same physical area in the first map and the second map are fused. For example, the mobile robot deletes the map area and the landmark data thereof overlapped in the second map generated after the fusion.
In some practical applications, the mobile robot further sends the fused map to the intelligent terminal, so that the intelligent terminal can display the fused map.
Based on the position of the mobile robot in the first map, the mobile robot may generate a navigation route, i.e., a moving route from the current position to a target position. Once the position of the mobile robot in the first map (i.e., the current position) and the position of the target position in the first map are determined, the mobile robot may generate a moving route from the current position to the target position based on the first map. In one example, the mobile robot generates the navigation route from the obstacle information in the first map and a control instruction. The control instruction includes at least one of: an instruction sent by the user to the mobile robot through the intelligent terminal, and an instruction generated by the mobile robot itself. For example, the instruction may be the user designating the target position of the mobile robot in a map displayed by the intelligent terminal; for another example, the mobile robot is a cleaning robot and the instruction is an edge-sweeping instruction generated by the mobile robot.
The navigation method of the mobile robot can enable the mobile robot to carry out positioning and navigation based on the first map containing the landmark data of the light-emitting device. Furthermore, when the visual condition is poor, the mobile robot cannot determine the position in the first map according to the currently acquired image, or the second map constructed by the mobile robot has a positioning position deviation with the first map, the mobile robot can perform accurate positioning and navigation by using the navigation method.
The present application also provides a computer-readable and writable storage medium storing at least one program that, when invoked, executes and implements at least one embodiment of the mapping method described above with respect to the mobile robot shown in fig. 2 and/or executes and implements at least one embodiment of the navigation method described above with respect to the mobile robot shown in fig. 7. The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for enabling a mobile robot equipped with the storage medium to perform all or part of the steps of the method according to the embodiments of the present application.
In the embodiments provided herein, the computer-readable and writable storage medium may include read-only memory, random-access memory, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, a USB flash drive, a removable hard disk, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, Digital Subscriber Line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that computer-readable-writable storage media and data storage media do not include connections, carrier waves, signals, or other transitory media, but are intended to be non-transitory, tangible storage media. Disk and disc, as used in this application, includes Compact Disc (CD), laser disc, optical disc, Digital Versatile Disc (DVD), floppy disk and blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.
In one or more exemplary aspects, the functions described in the computer program of the methods described herein may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. The steps of a method or algorithm disclosed herein may be embodied in a processor-executable software module, which may be located on a tangible, non-transitory computer-readable and/or writable storage medium. Tangible, non-transitory computer readable and writable storage media may be any available media that can be accessed by a computer.
The flowcharts and block diagrams in the figures described above of the present application illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The above embodiments are merely illustrative of the principles and utilities of the present application and are not intended to limit the application. Any person skilled in the art can modify or change the above-described embodiments without departing from the spirit and scope of the present application. Accordingly, it is intended that all equivalent modifications or changes which can be made by those skilled in the art without departing from the spirit and technical concepts disclosed in the present application shall be covered by the claims of the present application.

Claims (25)

1. A map construction method of a mobile robot, characterized by comprising the steps of:
acquiring at least one image in a state where a light emitting pattern of a light emitting device in a physical space of the mobile robot is controlled;
extracting the positioning features of the light-emitting device from the at least one image;
and determining the positioning position of the light-emitting device in the map based on the position of the positioning feature of the light-emitting device in the at least one image, so as to record the positioning feature and the positioning position of the light-emitting device in the map as the landmark data of the light-emitting device.
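Illustrative note (not part of the claims): the sequence recited in claim 1 can be pictured with the minimal Python sketch below. Every name in it (record_light_landmark, the control/acquire/extract callables, LandmarkData) is a hypothetical placeholder introduced here for illustration only, not an interface disclosed by this application.

    from dataclasses import dataclass

    @dataclass
    class LandmarkData:
        device_id: str
        feature: tuple        # positioning feature of the light-emitting device
        map_position: tuple   # positioning position of the device in the map

    def record_light_landmark(device_id, control_light, acquire_images,
                              extract_feature, locate_in_map, landmark_map):
        control_light(device_id, mode="blink")   # light-emitting mode is controlled
        images = acquire_images()                # acquire at least one image
        feature = extract_feature(images)        # extract the positioning feature
        if feature is None:
            return None                          # device not visible in the images
        position = locate_in_map(feature)        # positioning position in the map
        landmark_map.append(LandmarkData(device_id, feature, position))
        return landmark_map[-1]

    # Toy usage with stand-in callables:
    landmarks = []
    record_light_landmark(
        "lamp_kitchen",
        control_light=lambda dev, mode: None,
        acquire_images=lambda: ["frame"],
        extract_feature=lambda imgs: (12.0, 34.0),
        locate_in_map=lambda feat: (2.5, 1.0),
        landmark_map=landmarks)
    print(landmarks)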
2. The map construction method of a mobile robot according to claim 1, wherein the manner in which the light emission pattern of the light emitting device is controlled includes at least one of:
the intelligent terminal is in wireless communication with the light-emitting device to control the light-emitting mode of the light-emitting device; the intelligent terminal is also in communication connection with the mobile robot;
the mobile robot wirelessly communicates with the light emitting device through an intermediate device to control a light emitting mode of the light emitting device; and
the mobile robot directly wirelessly communicates with the light emitting device to control a light emitting mode of the light emitting device.
3. The map construction method of a mobile robot according to claim 1, wherein the step of acquiring at least one image in a state where a light emission pattern of a light emitting device is controlled in a physical space of the mobile robot comprises:
acquiring at least one image when it is determined that the mobile robot is located within a preset range from the light emitting device and in a state in which a light emitting mode of the light emitting device is controlled; and/or
acquiring at least one image when it is determined that the mobile robot is not obstructed by an obstacle and in a state in which a light emitting mode of the light emitting device is controlled.
4. The map construction method of a mobile robot according to claim 1, wherein the step of extracting the positioning feature of the light-emitting device from the at least one image comprises: performing image analysis on the acquired at least one image according to the light-emitting mode to determine that the acquired at least one image contains the corresponding light-emitting device, so as to extract the positioning feature by using the at least one image containing the light-emitting device.
5. The map construction method for a mobile robot according to claim 1, wherein the light emission pattern of the light emitting device includes: a blinking mode, a color changing mode, a bright-dark mode, or a steady-state mode.
6. The map construction method of a mobile robot according to claim 1, wherein extracting the positioning feature of the light-emitting device from the at least one image comprises:
the light-emitting mode comprises at least one lighting state, and the positioning feature of the light-emitting device is extracted from at least one image acquired in one lighting state; or
the light-emitting mode comprises at least two lighting states, and the positioning feature of the light-emitting device is extracted from a plurality of images acquired in at least two of the lighting states.
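Illustrative note (not part of the claims): when the light-emitting mode supplies two lighting states, as in the second alternative of claim 6, the positioning feature can in principle be isolated by differencing frames captured in the two states. The sketch below assumes plain grayscale NumPy arrays and a fixed brightness threshold; a real extractor would work on camera frames and use a proper blob or descriptor method.

    import numpy as np

    def locate_blinking_light(frame_on, frame_off, threshold=50.0):
        """Return the (row, col) centroid of pixels that brighten in the 'on' state."""
        diff = frame_on.astype(np.float32) - frame_off.astype(np.float32)
        mask = diff > threshold                  # pixels lit only while the device is on
        if not mask.any():
            return None                          # the controlled device is not in view
        rows, cols = np.nonzero(mask)
        return float(rows.mean()), float(cols.mean())

    # Toy usage: a 100x100 scene with a light patch around pixel (30, 70).
    off = np.full((100, 100), 20, dtype=np.uint8)
    on = off.copy()
    on[28:33, 68:73] = 255
    print(locate_blinking_light(on, off))        # approximately (30.0, 70.0)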
7. The map construction method of a mobile robot according to claim 1, wherein determining the positioning position of the light-emitting device in the map based on the position of the positioning feature of the light-emitting device in the at least one image comprises:
determining the positioning position of the light-emitting device in the map based on the relative positions, in the at least one image, between the positioning feature of the light-emitting device and the recorded positioning features of other objects in the map.
8. The map construction method of a mobile robot according to claim 1, wherein determining the positioning position of the light-emitting device in the map based on the position of the positioning feature of the light-emitting device in the at least one image comprises:
determining a relative spatial position in physical space between the light-emitting device and the mobile robot based on the position of the positioning feature of the light-emitting device in the at least one image;
and determining the positioning position of the light-emitting device in the map according to the relative spatial position and the positioning position of the mobile robot in the map.
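Illustrative note (not part of the claims): the second step of claim 8 amounts to transforming a device position expressed in the robot frame into map coordinates using the robot's own positioning position. The sketch below assumes a planar (x, y, heading) pose convention, which is an illustrative assumption rather than something specified by the claim.

    import math

    def device_position_in_map(robot_x, robot_y, robot_theta, rel_forward, rel_left):
        """Transform a robot-frame offset (forward, left) into map coordinates."""
        map_x = robot_x + rel_forward * math.cos(robot_theta) - rel_left * math.sin(robot_theta)
        map_y = robot_y + rel_forward * math.sin(robot_theta) + rel_left * math.cos(robot_theta)
        return map_x, map_y

    # Robot at (2, 1) facing +90 degrees; device seen 3 m ahead and 0.5 m to its left.
    print(device_position_in_map(2.0, 1.0, math.pi / 2, 3.0, 0.5))  # approximately (1.5, 4.0)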
9. The map construction method of a mobile robot according to claim 1, wherein when the number of the light emitting devices is plural, a manner in which a light emitting pattern of the light emitting device in a physical space of the mobile robot is controlled includes:
the light emitting devices are sequentially controlled according to the same light emitting pattern; or
the light emitting devices are controlled in different light emitting modes.
10. The map construction method of a mobile robot according to claim 1 or 9, wherein the landmark data of the light-emitting device further includes identification information of the light-emitting device.
11. A navigation method of a mobile robot, characterized by comprising the steps of:
acquiring at least one image, and extracting a second positioning feature from the at least one image;
determining the position of the mobile robot in the first map by matching the second positioning feature with a first positioning feature in the first map, wherein the first positioning feature is the positioning feature in the landmark data of the corresponding light-emitting device obtained by the map construction method according to any one of claims 1 to 10, and the position of the mobile robot in the first map is used for generating a navigation route.
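Illustrative note (not part of the claims): the matching step of claim 11 can be pictured as a nearest-neighbour search of the newly extracted descriptor against the landmark descriptors recorded in the first map. Plain feature vectors and a Euclidean distance threshold are assumptions made here purely for illustration.

    import numpy as np

    def match_landmark(second_feature, first_map_features, max_distance=0.5):
        """Return the id of the recorded landmark whose descriptor best matches, or None."""
        best_id, best_dist = None, float("inf")
        for landmark_id, descriptor in first_map_features.items():
            dist = float(np.linalg.norm(second_feature - descriptor))
            if dist < best_dist:
                best_id, best_dist = landmark_id, dist
        return best_id if best_dist <= max_distance else None

    # Toy usage: two recorded light-emitting devices in the first map.
    first_map = {"lamp_livingroom": np.array([1.0, 0.2, 0.7]),
                 "lamp_hallway":    np.array([0.1, 0.9, 0.3])}
    print(match_landmark(np.array([0.95, 0.25, 0.68]), first_map))  # "lamp_livingroom"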
12. The navigation method of a mobile robot according to claim 11, further comprising:
constructing a second map according to images acquired at different positions and movement data thereof during movement of the mobile robot, wherein the second map comprises second positioning features of the light-emitting devices;
the step of determining the position of the mobile robot in the first map by matching the second positioning feature with the first positioning feature in the first map comprises: mapping the position of the mobile robot in the second map to the position in the first map by matching the second positioning feature and the first positioning feature.
13. The navigation method of a mobile robot according to claim 12, further comprising: fusing the first map and the second map according to the matched first positioning feature and second positioning feature.
14. The navigation method of a mobile robot according to claim 12, wherein mapping the position of the mobile robot in the second map to the position in the first map by matching the second positioning feature and the first positioning feature comprises:
determining a positioning position deviation of the light-emitting device between the two maps by matching the second positioning feature and the first positioning feature;
and mapping the position of the mobile robot in the second map to the position in the first map according to the positioning position deviation.
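Illustrative note (not part of the claims): if the deviation between the two maps is treated as a pure translation, the mapping in claim 14 reduces to adding the offset between the matched landmark's two positioning positions to the robot's position in the second map. A full implementation would typically also estimate rotation, for example by rigidly aligning several matched landmarks; the translation-only case below is only a sketch.

    def map_robot_to_first_map(robot_in_second, device_in_second, device_in_first):
        """Shift the robot's second-map position by the landmark's positioning-position deviation."""
        dx = device_in_first[0] - device_in_second[0]
        dy = device_in_first[1] - device_in_second[1]
        return robot_in_second[0] + dx, robot_in_second[1] + dy

    # The same lamp is recorded at (4, 3) in the second map and (5, 2) in the first map.
    print(map_robot_to_first_map((1.0, 1.0), (4.0, 3.0), (5.0, 2.0)))  # (2.0, 0.0)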
15. The navigation method of a mobile robot according to claim 11 or 12, wherein acquiring at least one image comprises:
at least one image is acquired in a state where a light emitting pattern of a light emitting device in a physical space of the mobile robot is controlled.
16. The navigation method of a mobile robot according to claim 15, wherein the manner in which the light-emitting pattern of the light-emitting device is controlled includes at least one of:
the intelligent terminal is in wireless communication with the light-emitting device to control the light-emitting mode of the light-emitting device; the intelligent terminal is also in communication connection with the mobile robot;
the mobile robot wirelessly communicates with the light emitting device through an intermediate device to control a light emitting mode of the light emitting device; and
the mobile robot directly wirelessly communicates with the light emitting device to control a light emitting mode of the light emitting device.
17. The navigation method of a mobile robot according to claim 15, further comprising:
acquiring at least one image when it is determined that the mobile robot is located within a preset range from the light emitting device and in a state in which a light emitting mode of the light emitting device is controlled; and/or
acquiring at least one image when it is determined that the mobile robot is not obstructed by an obstacle and in a state in which a light emitting mode of the light emitting device is controlled.
18. The navigation method of a mobile robot according to claim 15, wherein the step of extracting the second positioning feature of the light-emitting device from the at least one image comprises: performing image analysis on the acquired at least one image according to the light-emitting mode to determine that the acquired at least one image contains the corresponding light-emitting device, so as to extract the second positioning feature by using the at least one image containing the light-emitting device.
19. The navigation method of a mobile robot according to claim 15, wherein the light emission pattern of the light emitting device includes: a blinking state, a color changing state, a bright-dark state, or a steady state.
20. The navigation method of a mobile robot according to claim 15, wherein extracting the second positioning feature of the light-emitting device from the at least one image comprises:
the light-emitting mode comprises at least one lighting state, and the second positioning feature of the light-emitting device is extracted from at least one image acquired in one lighting state; or
the light-emitting mode comprises at least two lighting states, and the second positioning feature of the light-emitting device is extracted from a plurality of images acquired in at least two of the lighting states.
21. The navigation method of a mobile robot according to claim 15, wherein when the number of the light-emitting devices is plural, a manner in which a light-emitting pattern of the light-emitting device in a physical space of the mobile robot is controlled includes:
the light emitting devices are sequentially controlled according to the same light emitting pattern; or
the light emitting devices are controlled in different light emitting modes.
22. The navigation method of a mobile robot according to claim 11, wherein determining the position of the mobile robot in the first map by matching the second positioning feature with the first positioning feature in the first map comprises:
determining a relative spatial position in physical space between the light-emitting device and the mobile robot based on the position of the second positioning feature of the light-emitting device in the at least one image;
and determining the position of the mobile robot in the first map according to the relative spatial position and the positioning position, in the first map, of the first positioning feature matched with the second positioning feature.
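Illustrative note (not part of the claims): claim 22 is essentially the inverse of claim 8: once the observed device is matched to a landmark whose positioning position in the first map is known, the robot's own position follows by subtracting the robot-to-device offset expressed in map axes. The planar (x, y, heading) convention below is again an illustrative assumption.

    import math

    def robot_position_in_first_map(device_map_xy, robot_theta, rel_forward, rel_left):
        """Recover the robot's map position from a matched landmark and the observed offset."""
        dx = rel_forward * math.cos(robot_theta) - rel_left * math.sin(robot_theta)
        dy = rel_forward * math.sin(robot_theta) + rel_left * math.cos(robot_theta)
        return device_map_xy[0] - dx, device_map_xy[1] - dy

    # The matched lamp sits at (1.5, 4.0) in the first map; it is observed 3 m ahead
    # and 0.5 m to the left of a robot whose heading is +90 degrees.
    print(robot_position_in_first_map((1.5, 4.0), math.pi / 2, 3.0, 0.5))  # approximately (2.0, 1.0)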
23. A control system of a mobile robot, comprising:
an interface device, configured to be communicatively connected with the light-emitting device;
a storage device, storing at least one program;
a processing device, connected to the storage device and the interface device, and configured to execute the at least one program so as to coordinate the storage device and the interface device to perform the map construction method of a mobile robot according to any one of claims 1-10 and/or the navigation method of a mobile robot according to any one of claims 11-22.
24. A mobile robot, characterized in that the mobile robot comprises:
an image pickup device, configured to capture images;
an interface device, configured to be communicatively connected with the light-emitting device;
a storage device, storing at least one program;
a movement device, configured to drive the mobile robot to perform movement operations;
a processing device, connected to the storage device, the interface device, the image pickup device, and the movement device, and configured to execute the at least one program so as to coordinate the storage device, the interface device, the image pickup device, and the movement device to perform the map construction method of a mobile robot according to any one of claims 1-10 and/or the navigation method of a mobile robot according to any one of claims 11-22.
25. A computer-readable storage medium, characterized by storing at least one program which, when executed, implements the map construction method of a mobile robot according to any one of claims 1-10 and/or the navigation method of a mobile robot according to any one of claims 11-22.
CN202011103630.3A 2020-10-15 2020-10-15 Map construction method, navigation method and control system of mobile robot Pending CN112484713A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011103630.3A CN112484713A (en) 2020-10-15 2020-10-15 Map construction method, navigation method and control system of mobile robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011103630.3A CN112484713A (en) 2020-10-15 2020-10-15 Map construction method, navigation method and control system of mobile robot

Publications (1)

Publication Number Publication Date
CN112484713A true CN112484713A (en) 2021-03-12

Family

ID=74926125

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011103630.3A Pending CN112484713A (en) 2020-10-15 2020-10-15 Map construction method, navigation method and control system of mobile robot

Country Status (1)

Country Link
CN (1) CN112484713A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108549376A (en) * 2018-04-16 2018-09-18 爱啃萝卜机器人技术(深圳)有限责任公司 A kind of navigation locating method and system based on beacon
US10293489B1 (en) * 2017-12-15 2019-05-21 Ankobot (Shanghai) Smart Technologies Co., Ltd. Control method and system, and cleaning robot using the same
US20190164306A1 (en) * 2016-08-19 2019-05-30 Guangzhou Airob Robot Technology Co., Ltd. Method and Apparatus For Map Constructing And Map Correcting Based On Light-Emitting Device
CN109916408A (en) * 2019-02-28 2019-06-21 深圳市鑫益嘉科技股份有限公司 Robot indoor positioning and air navigation aid, device, equipment and storage medium
CN111256701A (en) * 2020-04-26 2020-06-09 北京外号信息技术有限公司 Equipment positioning method and system
WO2020129992A1 (en) * 2018-12-17 2020-06-25 Groove X株式会社 Robot, charging station for robot, and landmark device

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190164306A1 (en) * 2016-08-19 2019-05-30 Guangzhou Airob Robot Technology Co., Ltd. Method and Apparatus For Map Constructing And Map Correcting Based On Light-Emitting Device
US10293489B1 (en) * 2017-12-15 2019-05-21 Ankobot (Shanghai) Smart Technologies Co., Ltd. Control method and system, and cleaning robot using the same
CN108549376A (en) * 2018-04-16 2018-09-18 爱啃萝卜机器人技术(深圳)有限责任公司 A kind of navigation locating method and system based on beacon
WO2020129992A1 (en) * 2018-12-17 2020-06-25 Groove X株式会社 Robot, charging station for robot, and landmark device
CN109916408A (en) * 2019-02-28 2019-06-21 深圳市鑫益嘉科技股份有限公司 Robot indoor positioning and air navigation aid, device, equipment and storage medium
CN111256701A (en) * 2020-04-26 2020-06-09 北京外号信息技术有限公司 Equipment positioning method and system

Similar Documents

Publication Publication Date Title
EP3691417B1 (en) Automatic stage lighting tracking system and control method therefor
KR100506533B1 (en) Mobile robot and autonomic traveling system and method thereof
EP3123289B1 (en) Locating a portable device based on coded light
US10371504B2 (en) Light fixture commissioning using depth sensing device
KR100669250B1 (en) System and method for real-time calculating location
CN110893085B (en) Cleaning robot and charging path determining method thereof
US10006989B1 (en) Disabling robot sensors
US20220161430A1 (en) Recharging Control Method of Desktop Robot
CN108288289B (en) LED visual detection method and system for visible light positioning
CN106162144A (en) A kind of visual pattern processing equipment, system and intelligent machine for overnight sight
WO2013104314A1 (en) System for determining three-dimensional position of transmission device relative to detecting device
TWI702805B (en) System and method for guiding a machine capable of autonomous movement
US20210390301A1 (en) Indoor vision positioning system and mobile robot
WO2019214643A1 (en) Method for guiding autonomously movable machine by means of optical communication device
CN112204345A (en) Indoor positioning method of mobile equipment, mobile equipment and control system
WO2024055788A1 (en) Laser positioning method based on image informaton, and robot
US20180356834A1 (en) Vehicle that moves automatically within an environment as well as a system with a vehicle and an external illuminating device
CN112484713A (en) Map construction method, navigation method and control system of mobile robot
CN115022553B (en) Dynamic control method and device for light supplement lamp
US11676286B2 (en) Information management apparatus, information management method, and non-transitory recording medium
JP2014160017A (en) Management device, method and program
CN114445494A (en) Image acquisition and processing method, image acquisition device and robot
CN111246120B (en) Image data processing method, control system and storage medium for mobile device
US12005837B2 (en) Enhanced illumination-invariant imaging
US20230048410A1 (en) Enhanced Illumination-Invariant Imaging

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210312