WO2021013230A1 - Robot control method, robot, terminal, server and control system - Google Patents



Publication number: WO2021013230A1
Authority: WIPO (PCT)
Prior art keywords: coordinate system, robot, coordinates, coordinate, server
Application number: PCT/CN2020/103859
Other languages: English (en), Chinese (zh)
Inventors: 薛清风, 彭洪彬
Original assignee: 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Application filed by 华为技术有限公司
Publication of WO2021013230A1


Classifications

    • A: HUMAN NECESSITIES
    • A47: FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L: DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L 11/00: Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L 11/24: Floor-sweeping machines, motor-driven
    • A47L 11/40: Parts or details of machines not provided for in groups A47L 11/02 - A47L 11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers
    • A47L 11/4011: Regulation of the cleaning machine by electric means; Control systems and remote control systems therefor
    • A47L 11/4061: Steering means; Means for avoiding obstacles; Details related to the place where the driver is accommodated
    • A47L 2201/00: Robotic cleaning machines, i.e. with automatic control of the travelling movement or the cleaning operation
    • A47L 2201/04: Automatic control of the travelling movement; Automatic obstacle detection

Definitions

  • This application relates to the field of artificial intelligence technology, and in particular to a robot control method, robot, terminal, server and control system.
  • The other is that the user manually selects the target point on the electronic map drawn in the mobile APP (as shown in Figure 1), and the selection is sent to the sweeping robot.
  • The main interactive operation relies on the electronic map generated by SLAM (Simultaneous Localization and Mapping) technology.
  • the embodiments of the present application provide a robot control method, robot, terminal, server, and control system to control the robot to accurately move to a target point.
  • the first aspect of the embodiments of the present application provides a method for controlling a robot.
  • This method is applied to a system composed of robots, terminals, and servers.
  • the robot carries a camera and lidar.
  • The method includes: the robot creates a visual SLAM map through its own camera and a laser SLAM map through its own lidar, takes the coordinate system of the visual SLAM map projected on the horizontal plane as the first coordinate system, and takes the coordinate system of the laser SLAM map as the second coordinate system;
  • the robot uploads the coordinate conversion relationship between the first coordinate system and the second coordinate system and the visual SLAM map to the server;
  • The terminal captures a screenshot of the current interface to obtain the target image frame, extracts the characteristic data of the target point, and uploads the target image frame and the characteristic data of the target point to the server;
  • The server receives the target image frame and the characteristic data of the target point, and determines the first coordinate of the target point in the first coordinate system according to the target image frame, the characteristic data of the target point, and the visual SLAM map; the server converts the first coordinate into a second coordinate according to the coordinate conversion relationship, the second coordinate being the coordinate of the target point in the second coordinate system, and sends the second coordinate to the robot so that the robot can move to the target point.
  • With this method, the user does not need to select the target point on an electronic map with a low degree of fidelity to the real environment, but instead selects the target point directly on the terminal interface. The user can therefore accurately select the position to which the robot should be moved, so the robot can accurately move to the target point.
  • the origin of the first coordinate system and the origin of the second coordinate system may or may not overlap.
  • When the origin of the first coordinate system does not coincide with the origin of the second coordinate system, the coordinate conversion relationship between the two coordinate systems is more complicated; when the two origins coincide, the coordinate conversion relationship is relatively simple. If the origin of the first coordinate system does not coincide with the origin of the second coordinate system, the coordinate conversion relationship includes the angle between the axial direction of the first coordinate system and the axial direction of the second coordinate system, and the relative position of the origin of the first coordinate system and the origin of the second coordinate system.
  • The coordinate conversion relationship may alternatively include the coordinates of the same point in the first coordinate system and in the second coordinate system, or the angle between the axial direction of the first coordinate system and the axial direction of the second coordinate system.
  • The included angle between the axial directions of the two coordinate systems can be measured before the robot leaves the factory, or can be measured after the robot reaches the user's hands.
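  • For illustration only, the following is a minimal sketch of such a coordinate conversion, assuming a planar (2D) relationship described by an included angle theta between the axial directions and by the position of the second coordinate system's origin expressed in the first coordinate system; the symbol names are assumptions made for this sketch and are not defined by the application.

```python
import math

def first_to_second(x1, y1, theta, origin2_in_first):
    """Convert a point from the first coordinate system (visual SLAM map
    projected onto the horizontal plane) to the second coordinate system
    (laser SLAM map).

    theta: included angle from the first system's axes to the second
           system's axes, in radians (an assumed parameterization)
    origin2_in_first: (x, y) of the second system's origin, expressed in
                      the first coordinate system
    """
    # Shift so that the second system's origin becomes the reference point.
    dx = x1 - origin2_in_first[0]
    dy = y1 - origin2_in_first[1]
    # Rotate the shifted point into the second system's axes.
    x2 = math.cos(theta) * dx + math.sin(theta) * dy
    y2 = -math.sin(theta) * dx + math.cos(theta) * dy
    return x2, y2

# Example: origins coincide and the axes differ by 30 degrees.
print(first_to_second(1.0, 0.0, math.radians(30), (0.0, 0.0)))
```

  • When the two origins coincide, origin2_in_first is simply (0, 0) and only the rotation remains, which corresponds to the simpler conversion relationship described above.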
  • a second aspect of the embodiments of the present application provides a method for controlling a robot, which is applied to a robot including a camera and a lidar.
  • The method includes: the robot creates a visual SLAM map through the camera and a laser SLAM map through the lidar; the robot uploads the coordinate conversion relationship between the first coordinate system and the second coordinate system to the server, where the first coordinate system is the projected coordinate system of the visual SLAM map on the horizontal plane, and the second coordinate system is the coordinate system of the laser SLAM map;
  • The robot uploads the visual SLAM map to the server; the robot receives the second coordinates obtained by the server based on the visual SLAM map and the coordinate conversion relationship between the first coordinate system and the second coordinate system, where the second coordinates are the coordinates of the target point in the second coordinate system;
  • the robot determines the movement path according to the second coordinates and the coordinates of the current position of the robot in the second coordinate system; the robot moves to the target point according to the movement path.
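  • As a rough, purely illustrative sketch of the robot-side path determination, the following plans a path on an occupancy grid derived from the laser SLAM map using breadth-first search; the grid representation and helper names are assumptions, and a real sweeping robot would typically use a more elaborate planner.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search on an occupancy grid.

    grid:  2D list of cells, 0 = free, 1 = occupied
    start: (row, col) cell of the robot's current position in the second
           coordinate system, discretized onto the grid
    goal:  (row, col) cell containing the target point's second coordinates
    Returns the list of cells from start to goal, or None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:       # walk back through the parents
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in parent):
                parent[nxt] = cell
                queue.append(nxt)
    return None

# Example: a 3x3 map with an obstacle wall in the middle row.
print(plan_path([[0, 0, 0], [1, 1, 0], [0, 0, 0]], (0, 0), (2, 0)))
```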
  • The coordinate conversion relationship includes the angle between the axial direction of the first coordinate system and the axial direction of the second coordinate system, and the relative position of the origin of the first coordinate system and the origin of the second coordinate system.
  • Alternatively, the coordinate conversion relationship includes the coordinates of the same point in the first coordinate system and in the second coordinate system, or the angle between the axial direction of the first coordinate system and the axial direction of the second coordinate system.
  • the third aspect of the embodiments of the present application provides a robot control method, which is applied to a terminal.
  • The method includes: the terminal receives a user's touch operation; the terminal captures a screenshot of the current interface to obtain the target image frame; the terminal determines the characteristic data of the target point; the terminal uploads the target image frame and the characteristic data of the target point to the server.
  • The robot control method further includes: the terminal extracts feature points from the target image frame according to a preset feature extraction algorithm; the terminal determines the feature data of the feature points; and the terminal uploads the feature data of the feature points to the server.
  • The robot control method further includes: the terminal determines the screen coordinates of the target point and the screen coordinates of the feature points; the terminal determines the relative position of the target point and the feature points according to the screen coordinates of the target point and of the feature points, as well as the feature data of the target point and of the feature points; the terminal uploads the relative position of the target point and the feature points to the server.
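  • A minimal terminal-side sketch of the steps described above, assuming OpenCV's ORB detector as the preset feature extraction algorithm (the application does not name a specific algorithm) and using pixel offsets on the screen as the relative position of the target point with respect to each feature point:

```python
import cv2          # OpenCV is an assumption; the application names no library
import numpy as np

def describe_target(target_image_frame, target_point_xy):
    """Extract feature points and descriptors from the captured frame and
    compute the relative position of the touched target point to each one.

    target_image_frame: BGR image (the screenshot of the current interface)
    target_point_xy:    (x, y) screen coordinates of the user's touch point
    Returns the feature descriptors, the feature points' screen coordinates,
    and the target point's offset from each feature point.
    """
    orb = cv2.ORB_create(nfeatures=500)
    gray = cv2.cvtColor(target_image_frame, cv2.COLOR_BGR2GRAY)
    keypoints, descriptors = orb.detectAndCompute(gray, None)

    tx, ty = target_point_xy
    screen_coords = np.array([kp.pt for kp in keypoints], dtype=np.float32)
    relative_positions = [(tx - x, ty - y) for x, y in screen_coords]

    # These correspond to what the terminal uploads to the server: feature
    # data of the feature points, their screen coordinates, and the relative
    # position of the target point with respect to each feature point.
    return descriptors, screen_coords, relative_positions
```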
  • the fourth aspect of the embodiments of the present application provides a robot control method, which is applied to a server.
  • The method includes: the server receives the visual SLAM map sent by the robot; the server receives the coordinate conversion relationship between the first coordinate system and the second coordinate system sent by the robot, where the first coordinate system is the projected coordinate system of the visual SLAM map on the horizontal plane, the second coordinate system is the coordinate system of the laser SLAM map acquired by the robot, and the visual SLAM map and the laser SLAM map are created by the robot in the same environment; the server receives the target image frame and the characteristic data of the target point uploaded by the terminal; the server determines the first coordinate of the target point in the first coordinate system according to the target image frame, the characteristic data of the target point, and the visual SLAM map; the server converts the first coordinate into the second coordinate, the second coordinate being the coordinate of the target point in the second coordinate system; the server sends the second coordinate to the robot.
  • The server determines the first coordinate of the target point in the first coordinate system according to the target image frame, the characteristic data of the target point, and the visual SLAM map, including: the server obtains the feature data of the feature points in the target image frame; the server matches the feature data of the feature points with the feature data in the visual SLAM map to determine the coordinates of the feature points in the first coordinate system; the server determines the relative position of the target point and the feature points; the server determines the first coordinate of the target point in the first coordinate system according to the coordinates of the feature points in the first coordinate system and the relative position of the target point and the feature points.
  • The server determines the relative position of the target point and the feature points, including: the server receives the screen coordinates of the target point and the screen coordinates of the feature points uploaded by the terminal, as well as the feature data of the target point and the feature data of the feature points; the server determines the relative position of the target point and the feature points according to the screen coordinates of the target point and of the feature points, as well as the feature data of the target point and of the feature points.
  • The server converts the first coordinate into the second coordinate, including: the server converts the first coordinate into the second coordinate according to the relative position of the origin of the first coordinate system and the origin of the second coordinate system, and the included angle between the axis of the first coordinate system and the axis of the second coordinate system.
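  • For illustration only, a minimal server-side sketch of the steps in this aspect, assuming the visual SLAM map exposes stored feature descriptors together with their positions in the first coordinate system, ORB-style binary descriptors matched by brute-force Hamming distance, and relative positions that can be applied directly in the first coordinate system (a simplification); names such as map_descriptors and map_positions are assumptions, not terms from the application.

```python
import math
import cv2
import numpy as np

def locate_target(frame_descriptors, relative_positions,
                  map_descriptors, map_positions,
                  theta, origin2_in_first):
    """Estimate the target point's second coordinates (laser SLAM map).

    frame_descriptors:  descriptors of feature points in the target image frame
    relative_positions: (dx, dy) of the target point relative to each feature point
    map_descriptors:    descriptors stored with the visual SLAM map (assumed layout)
    map_positions:      (x, y) first-coordinate-system positions of those descriptors
    theta, origin2_in_first: the coordinate conversion relationship (included angle
                             and relative position of the two origins)
    """
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(frame_descriptors, map_descriptors)
    if not matches:
        return None

    # Anchor the target point on every matched feature point and average.
    estimates = []
    for m in matches:
        fx, fy = map_positions[m.trainIdx]        # feature point, first system
        dx, dy = relative_positions[m.queryIdx]   # target relative to that feature
        estimates.append((fx + dx, fy + dy))
    x1, y1 = np.mean(estimates, axis=0)           # first coordinate of the target

    # Convert the first coordinate into the second coordinate (rotation by the
    # included angle plus the offset between the two origins).
    ox, oy = origin2_in_first
    x2 = math.cos(theta) * (x1 - ox) + math.sin(theta) * (y1 - oy)
    y2 = -math.sin(theta) * (x1 - ox) + math.cos(theta) * (y1 - oy)
    return x2, y2
```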
  • In some embodiments, the origin of the first coordinate system coincides with the origin of the second coordinate system.
  • A fifth aspect of the embodiments of the present application provides a robot including a memory and a processor. The memory is used to store information including program instructions, and the processor is used to control the execution of the program instructions. When the program instructions are loaded and executed by the processor, the robot is caused to execute the method described in the second aspect.
  • A sixth aspect of the embodiments of the present application provides a terminal including a memory and a processor. The memory is used to store information including program instructions, and the processor is used to control the execution of the program instructions. When the program instructions are loaded and executed by the processor, the terminal is caused to execute the method described in the third aspect.
  • A seventh aspect of the embodiments of the present application provides a server including a memory and a processor. The memory is used to store information including program instructions, and the processor is used to control the execution of the program instructions. When the program instructions are loaded and executed by the processor, the server is caused to execute the method described in the fourth aspect.
  • An eighth aspect of the embodiments of the present application provides a control system.
  • the control system includes the robot described in the fifth aspect, the terminal described in the sixth aspect, and the server described in the seventh aspect.
  • The robot constructs two maps, the visual SLAM map and the laser SLAM map, and the robot or the server can learn the conversion relationship between the two maps. Then, based on the user's selection in the image of the actual environment provided by the terminal, the server can determine the position of the target point selected by the user in the visual SLAM map and, based on the conversion relationship between the two maps, obtain the position of the target point in the laser SLAM map. Therefore, the robot can be conveniently controlled, based on the laser SLAM map, to move to the target point selected by the user.
  • With this solution, the user does not need to select the target point on an electronic map with a low degree of fidelity to the real environment, but instead selects the target point directly on the terminal interface. The user can therefore accurately select the position to which the robot should be moved, so that the robot can accurately move to the target point.
  • Figure 1 is a schematic diagram of an electronic map provided by the prior art
  • FIG. 2A is a schematic diagram of a control system provided by an embodiment of the application.
  • FIG. 2B is a schematic structural diagram of a robot provided by an embodiment of the application.
  • FIG. 2C is a software structure block diagram of a robot provided by an embodiment of this application.
  • FIG. 3 is a schematic diagram of selecting a target point through a terminal according to an embodiment of the application.
  • FIG. 4A is a schematic diagram of an image obtained by a terminal photographing an indoor environment according to an embodiment of the application;
  • FIG. 4B is a schematic diagram of feature points in an image obtained by a terminal photographing an indoor environment according to an embodiment of the application;
  • FIG. 5A is a flowchart of interaction between a robot, a terminal, and a server provided by an embodiment of this application;
  • FIG. 5B is a flowchart of another interaction between a robot, a terminal, and a server provided by an embodiment of the application.
  • FIG. 6 is a schematic diagram of the right-hand coordinate system and the ground provided by an embodiment of the application.
  • FIG. 7 is a schematic diagram of the positional relationship between the first coordinate system and the second coordinate system provided by an embodiment of the application.
  • FIG. 8 is a flowchart of a method for controlling a robot provided by an embodiment of the application.
  • FIG. 9 is a flowchart of another method for controlling a robot provided by an embodiment of the application.
  • "At least one" refers to one or more, and "multiple" refers to two or more.
  • "And/or" describes an association relationship between associated objects and indicates that three relationships can exist. For example, "A and/or B" can mean: A exists alone, both A and B exist, or B exists alone, where A and B can be singular or plural.
  • the character “/” generally indicates that the associated objects are in an "or” relationship.
  • "The following at least one item (a)” or similar expressions refers to any combination of these items, including any combination of a single item (a) or plural items (a).
  • at least one item (a) of a, b, or c can represent: a, b, c, ab, ac, bc, or abc, where a, b, and c can be single or multiple .
  • an embodiment of the present application provides a control system, including a terminal 100, a server 200, and a robot 300.
  • The terminal 100 is also called user equipment (UE), which is a device that provides voice and/or data connectivity to users.
  • Common terminals include, for example, mobile phones, tablet computers, notebook computers, palmtop computers, and mobile Internet devices (MID).
  • the terminal 100 can control the robot 300 through the server 200.
  • the robot 300 can also be replaced with other electronic devices that have functions similar to those described in the embodiments of the present application and can be controlled by the terminal 100.
  • FIG. 2B shows a schematic structural diagram of the terminal 100.
  • The terminal 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, buttons 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, etc.
  • The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, etc.
  • the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the terminal 100.
  • the terminal 100 may include more or fewer components than shown, or combine certain components, or split certain components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units.
  • The processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • the different processing units may be independent devices or integrated in one or more processors.
  • the controller can generate operation control signals according to the instruction operation code and timing signals to complete the control of fetching and executing instructions.
  • a memory may also be provided in the processor 110 to store instructions and data.
  • the memory in the processor 110 is a cache memory.
  • The memory can store instructions or data that the processor 110 has just used or used cyclically. If the processor 110 needs to use the instruction or data again, it can call it directly from the memory. This avoids repeated accesses, reduces the waiting time of the processor 110, and improves system efficiency.
  • the processor 110 may include one or more interfaces.
  • The interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the I2C interface is a two-way synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL).
  • the processor 110 may include multiple sets of I2C buses.
  • the processor 110 may be coupled to the touch sensor 180K, charger, flash, camera 193, etc. through different I2C bus interfaces.
  • the processor 110 may couple the touch sensor 180K through an I2C interface, so that the processor 110 and the touch sensor 180K communicate through the I2C bus interface to implement the touch function of the terminal 100.
  • the I2S interface can be used for audio communication.
  • the processor 110 may include multiple sets of I2S buses.
  • the processor 110 may be coupled with the audio module 170 through an I2S bus to realize communication between the processor 110 and the audio module 170.
  • the audio module 170 may transmit audio signals to the wireless communication module 160 through an I2S interface, so as to realize the function of answering calls through a Bluetooth headset.
  • the PCM interface can also be used for audio communication to sample, quantize and encode analog signals.
  • the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface.
  • the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to realize the function of answering calls through the Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus can be a two-way communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • the UART interface is generally used to connect the processor 110 and the wireless communication module 160.
  • the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to implement the Bluetooth function.
  • the audio module 170 may transmit audio signals to the wireless communication module 160 through a UART interface, so as to realize the function of playing music through a Bluetooth headset.
  • the MIPI interface can be used to connect the processor 110 with the display screen 194, the camera 193 and other peripheral devices.
  • The MIPI interface includes a camera serial interface (CSI), a display serial interface (DSI), etc.
  • the processor 110 and the camera 193 communicate through a CSI interface to implement the shooting function of the terminal 100.
  • the processor 110 and the display screen 194 communicate through a DSI interface to realize the display function of the terminal 100.
  • the GPIO interface can be configured through software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface can be used to connect the processor 110 with the camera 193, the display screen 194, the wireless communication module 160, the audio module 170, the sensor module 180, and so on.
  • GPIO interface can also be configured as I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the USB interface 130 is an interface that complies with the USB standard specification, and specifically may be a Mini USB interface, a Micro USB interface, a USB Type C interface, and so on.
  • the USB interface 130 can be used to connect a charger to charge the terminal 100, and can also be used to transfer data between the terminal 100 and peripheral devices. It can also be used to connect headphones and play audio through the headphones. This interface can also be used to connect other electronic devices, such as AR devices.
  • the interface connection relationship between the modules illustrated in the embodiment of the present application is merely a schematic description, and does not constitute a structural limitation of the terminal 100.
  • the terminal 100 may also adopt different interface connection modes in the foregoing embodiments, or a combination of multiple interface connection modes.
  • the charging management module 140 is used to receive charging input from the charger.
  • the charger can be a wireless charger or a wired charger.
  • the charging management module 140 may receive the charging input of the wired charger through the USB interface 130.
  • the charging management module 140 may receive the wireless charging input through the wireless charging coil of the terminal 100. While the charging management module 140 charges the battery 142, it can also supply power to the terminal 100 through the power management module 141.
  • the power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110.
  • the power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, the internal memory 121, the display screen 194, the camera 193, and the wireless communication module 160.
  • the power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle times, and battery health status (leakage, impedance).
  • the power management module 141 may also be provided in the processor 110.
  • the power management module 141 and the charging management module 140 may also be provided in the same device.
  • the wireless communication function of the terminal 100 can be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, and the baseband processor.
  • the antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in the terminal 100 can be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
  • the antenna can be used in combination with a tuning switch.
  • the mobile communication module 150 can provide a wireless communication solution including 2G/3G/4G/5G and the like applied to the terminal 100.
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), etc.
  • the mobile communication module 150 can receive electromagnetic waves by the antenna 1, and perform processing such as filtering, amplifying and transmitting the received electromagnetic waves to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modem processor, and convert it into electromagnetic waves for radiation via the antenna 1.
  • at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110.
  • at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be provided in the same device.
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low-frequency baseband signal is processed by the baseband processor and then passed to the application processor.
  • the application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays an image or video through the display screen 194.
  • the modem processor may be an independent device.
  • the modem processor may be independent of the processor 110 and be provided in the same device as the mobile communication module 150 or other functional modules.
  • The wireless communication module 160 can provide wireless communication solutions applied to the terminal 100, including wireless local area network (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and other wireless communication solutions.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2, frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110.
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110, perform frequency modulation, amplify it, and convert it into electromagnetic wave radiation via the antenna 2.
  • the antenna 1 of the terminal 100 is coupled with the mobile communication module 150, and the antenna 2 is coupled with the wireless communication module 160, so that the terminal 100 can communicate with the network and other devices through wireless communication technology.
  • The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite-based augmentation systems (SBAS).
  • the terminal 100 implements a display function through a GPU, a display screen 194, and an application processor.
  • the GPU is a microprocessor for image processing, connected to the display 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • the processor 110 may include one or more GPUs, which execute program instructions to generate or change display information.
  • the display screen 194 is used to display images, videos, etc.
  • the display screen 194 includes a display panel.
  • The display panel can adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), etc.
  • the terminal 100 may include one or N display screens 194, and N is a positive integer greater than one.
  • the terminal 100 can realize a shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, and an application processor.
  • The ISP is used to process the data fed back from the camera 193. For example, when taking a picture, the shutter is opened, light is transmitted to the photosensitive element of the camera through the lens, the light signal is converted into an electrical signal, and the photosensitive element of the camera transfers the electrical signal to the ISP for processing, which converts it into an image visible to the naked eye.
  • The ISP can also optimize the noise, brightness, and skin color of the image, as well as parameters such as the exposure and color temperature of the shooting scene.
  • the ISP may be provided in the camera 193.
  • the camera 193 is used to capture still images or videos.
  • the object generates an optical image through the lens and projects it to the photosensitive element.
  • The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • ISP outputs digital image signals to DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other formats.
  • the terminal 100 may include 1 or N cameras 193, and N is a positive integer greater than 1.
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the terminal 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
  • Video codecs are used to compress or decompress digital video.
  • the terminal 100 may support one or more video codecs.
  • the terminal 100 can play or record videos in multiple encoding formats, for example: moving picture experts group (MPEG) 1, MPEG2, MPEG3, MPEG4, and so on.
  • The NPU is a neural-network (NN) computing processor.
  • Through the NPU, applications such as intelligent cognition of the terminal 100 can be implemented, for example, image recognition, facial recognition, speech recognition, and text understanding.
  • the external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the terminal 100.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to realize the data storage function. For example, save music, video and other files in an external memory card.
  • the internal memory 121 may be used to store computer executable program code, where the executable program code includes instructions.
  • the internal memory 121 may include a storage program area and a storage data area.
  • the storage program area can store an operating system, at least one application program (such as a sound playback function, an image playback function, etc.) required by at least one function.
  • the data storage area can store data (such as audio data, phone book, etc.) created during the use of the terminal 100.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), etc.
  • the processor 110 executes various functional applications and data processing of the terminal 100 by running instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
  • the terminal 100 can implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. For example, music playback, recording, etc.
  • the audio module 170 is used to convert digital audio information into an analog audio signal for output, and is also used to convert an analog audio input into a digital audio signal.
  • the audio module 170 can also be used to encode and decode audio signals.
  • the audio module 170 may be provided in the processor 110, or part of the functional modules of the audio module 170 may be provided in the processor 110.
  • The speaker 170A, also called a "loudspeaker", is used to convert audio electrical signals into sound signals.
  • the terminal 100 can listen to music through the speaker 170A, or listen to a hands-free call.
  • The receiver 170B, also called an "earpiece", is used to convert audio electrical signals into sound signals.
  • When the terminal 100 answers a call or a voice message, the voice can be heard by bringing the receiver 170B close to the human ear.
  • The microphone 170C, also called a "mic", is used to convert sound signals into electrical signals.
  • The user can make a sound by moving the mouth close to the microphone 170C, so as to input the sound signal into the microphone 170C.
  • the terminal 100 may be provided with at least one microphone 170C. In other embodiments, the terminal 100 may be provided with two microphones 170C, which can implement noise reduction functions in addition to collecting sound signals. In other embodiments, the terminal 100 may also be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and realize directional recording functions.
  • the earphone interface 170D is used to connect wired earphones.
  • The earphone interface 170D may be the USB interface 130, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
  • the pressure sensor 180A is used to sense the pressure signal and can convert the pressure signal into an electrical signal.
  • The pressure sensor 180A may be provided on the display screen 194. There are many types of pressure sensor 180A, for example, a capacitive pressure sensor.
  • The capacitive pressure sensor may include at least two parallel plates with conductive material.
  • the terminal 100 determines the intensity of the pressure according to the change in capacitance.
  • the terminal 100 detects the intensity of the touch operation according to the pressure sensor 180A.
  • the terminal 100 may also calculate the touched position according to the detection signal of the pressure sensor 180A.
  • touch operations that act on the same touch location but have different touch operation strengths may correspond to different operation instructions.
  • the gyro sensor 180B may be used to determine the movement posture of the terminal 100.
  • In some embodiments, the angular velocity of the terminal 100 around three axes (i.e., the x, y, and z axes) can be determined by the gyro sensor 180B.
  • the gyro sensor 180B can be used for image stabilization.
  • the gyro sensor 180B detects the shake angle of the terminal 100, calculates the distance that the lens module needs to compensate according to the angle, and allows the lens to counteract the shake of the terminal 100 through a reverse movement to achieve anti-shake.
  • the gyro sensor 180B can also be used for navigation and somatosensory game scenes.
  • the air pressure sensor 180C is used to measure air pressure.
  • the terminal 100 calculates the altitude based on the air pressure value measured by the air pressure sensor 180C to assist positioning and navigation.
  • the magnetic sensor 180D includes a Hall sensor.
  • The terminal 100 may use the magnetic sensor 180D to detect the opening and closing of a flip holster.
  • In some embodiments, the terminal 100 can detect the opening and closing of the flip according to the magnetic sensor 180D, and features such as automatic unlocking upon opening the flip cover can then be set based on the detected opening and closing state.
  • the acceleration sensor 180E can detect the magnitude of the acceleration of the terminal 100 in various directions (generally three axes). When the terminal 100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to identify the posture of the terminal 100, and be used in applications such as horizontal and vertical screen switching, and pedometer.
  • The distance sensor 180F is used to measure distance; the terminal 100 can measure distance by infrared or laser. In some embodiments, when shooting a scene, the terminal 100 may use the distance sensor 180F to measure the distance to achieve fast focusing.
  • the proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector such as a photodiode.
  • the light emitting diode may be an infrared light emitting diode.
  • the terminal 100 emits infrared light to the outside through the light emitting diode.
  • the terminal 100 uses a photodiode to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the terminal 100. When insufficient reflected light is detected, the terminal 100 may determine that there is no object near the terminal 100.
  • the terminal 100 can use the proximity light sensor 180G to detect that the user holds the terminal 100 close to the ear to talk, so as to automatically turn off the screen to save power.
  • The proximity light sensor 180G can also be used in a leather case mode or a pocket mode to automatically unlock and lock the screen.
  • the ambient light sensor 180L is used to sense the brightness of the ambient light.
  • the terminal 100 can adjust the brightness of the display screen 194 automatically according to the perceived brightness of the ambient light.
  • the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the terminal 100 is in a pocket to prevent accidental touch.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the terminal 100 can use the collected fingerprint characteristics to realize fingerprint unlocking, access application locks, fingerprint photographs, fingerprint answering calls, and so on.
  • the temperature sensor 180J is used to detect temperature.
  • the terminal 100 uses the temperature detected by the temperature sensor 180J to execute a temperature processing strategy. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold value, the terminal 100 executes to reduce the performance of the processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection.
  • In other embodiments, when the temperature is lower than another threshold, the terminal 100 heats the battery 142 to avoid abnormal shutdown of the terminal 100 due to low temperature; in still other embodiments, when the temperature is lower than yet another threshold, the terminal 100 boosts the output voltage of the battery 142 to avoid abnormal shutdown caused by low temperature.
  • The touch sensor 180K is also called a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch-controlled screen".
  • the touch sensor 180K is used to detect touch operations acting on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • the visual output related to the touch operation can be provided through the display screen 194.
  • the touch sensor 180K may also be disposed on the surface of the terminal 100, which is different from the position of the display screen 194.
  • the bone conduction sensor 180M can acquire vibration signals.
  • the bone conduction sensor 180M can obtain the vibration signal of the vibrating bone mass of the human voice.
  • the bone conduction sensor 180M can also contact the human pulse and receive the blood pressure pulse signal.
  • the bone conduction sensor 180M may also be provided in the earphone, combined with the bone conduction earphone.
  • the audio module 170 can parse the voice signal based on the vibration signal of the vibrating bone block of the voice obtained by the bone conduction sensor 180M, and realize the voice function.
  • the application processor may analyze the heart rate information based on the blood pressure beat signal obtained by the bone conduction sensor 180M, and realize the heart rate detection function.
  • the button 190 includes a power button, a volume button, and so on.
  • the button 190 may be a mechanical button. It can also be a touch button.
  • the terminal 100 may receive key input, and generate key signal input related to user settings and function control of the terminal 100.
  • the motor 191 can generate vibration prompts.
  • the motor 191 can be used for incoming call vibration notification, and can also be used for touch vibration feedback.
  • touch operations applied to different applications can correspond to different vibration feedback effects.
  • Touch operations acting on different areas of the display screen 194 can also correspond to different vibration feedback effects of the motor 191.
  • Different application scenarios (for example, time reminders, receiving information, alarm clocks, games, etc.) can also correspond to different vibration feedback effects.
  • the touch vibration feedback effect can also support customization.
  • the indicator 192 may be an indicator light, which may be used to indicate the charging status, power change, or to indicate messages, missed calls, notifications, and so on.
  • the SIM card interface 195 is used to connect to the SIM card.
  • the SIM card can be inserted into the SIM card interface 195 or pulled out from the SIM card interface 195 to achieve contact and separation with the terminal 100.
  • the terminal 100 may support 1 or N SIM card interfaces, and N is a positive integer greater than 1.
  • the SIM card interface 195 can support Nano SIM cards, Micro SIM cards, SIM cards, etc.
  • the same SIM card interface 195 can insert multiple cards at the same time. The types of the multiple cards can be the same or different.
  • the SIM card interface 195 can also be compatible with different types of SIM cards.
  • the SIM card interface 195 may also be compatible with external memory cards.
  • the terminal 100 interacts with the network through the SIM card to implement functions such as call and data communication.
  • the terminal 100 adopts an eSIM, that is, an embedded SIM card.
  • the eSIM card can be embedded in the terminal 100 and cannot be separated from the terminal 100.
  • the software system of the terminal 100 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture.
  • the embodiment of the present application takes an Android system with a layered architecture as an example to illustrate the software structure of the terminal 100 by way of example.
  • FIG. 2C is a block diagram of the software structure of the terminal 100 according to an embodiment of the present application.
  • The layered architecture divides the software into several layers, and each layer has a clear role and division of labor. The layers communicate with each other through software interfaces.
  • the Android system is divided into four layers, from top to bottom, the application layer, the application framework layer, the Android runtime and system library, and the kernel layer.
  • the application layer can include a series of application packages.
  • the application package may include applications such as camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, short message, etc.
  • the application framework layer provides application programming interfaces (application programming interface, API) and programming frameworks for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer may include a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, and so on.
  • the window manager is used to manage window programs.
  • the window manager can obtain the size of the display, determine whether there is a status bar, lock the screen, take a screenshot, etc.
  • the content provider is used to store and retrieve data and make these data accessible to applications.
  • the data may include video, image, audio, phone calls made and received, browsing history and bookmarks, phone book, etc.
  • the view system includes visual controls, such as controls that display text and controls that display pictures.
  • the view system can be used to build applications.
  • the display interface can be composed of one or more views.
  • a display interface that includes a short message notification icon may include a view that displays text and a view that displays pictures.
  • the phone manager is used to provide the communication function of the terminal 100. For example, the management of the call status (including connecting, hanging up, etc.).
  • the resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, etc.
  • the notification manager enables the application to display notification information in the status bar, which can be used to convey notification-type messages, and it can disappear automatically after a short stay without user interaction.
  • the notification manager is used to notify the download completion, message reminder, etc.
  • The notification manager can also display a notification in the status bar at the top of the system in the form of a chart or scroll bar text, such as a notification of an application running in the background, or a notification that appears on the screen in the form of a dialog window.
  • For example, a text message is prompted in the status bar, a prompt sound is emitted, the terminal vibrates, or the indicator light flashes.
  • The Android runtime includes core libraries and a virtual machine. The Android runtime is responsible for the scheduling and management of the Android system.
  • The core library consists of two parts: one part is the functional functions that the Java language needs to call, and the other part is the core library of Android.
  • the application layer and the application framework layer run in a virtual machine.
  • The virtual machine executes the Java files of the application layer and the application framework layer as binary files.
  • the virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
  • the system library can include multiple functional modules. For example: surface manager (surface manager), media library (Media Libraries), three-dimensional graphics processing library (for example: OpenGL ES), 2D graphics engine (for example: SGL), etc.
  • the surface manager is used to manage the display subsystem and provides a combination of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files.
  • the media library can support multiple audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to realize 3D graphics drawing, image rendering, synthesis, and layer processing.
  • the 2D graphics engine is a drawing engine for 2D drawing.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least display driver, camera driver, audio driver, and sensor driver.
  • the corresponding hardware interrupt is sent to the kernel layer.
  • The kernel layer processes the touch operation into an original input event (including information such as the touch coordinates and the time stamp of the touch operation).
  • the original input events are stored in the kernel layer.
  • The application framework layer obtains the original input event from the kernel layer and identifies the control corresponding to the input event. Taking an example in which the touch operation is a touch click operation and the control corresponding to the click operation is the control of the camera application icon, the camera application calls the interface of the application framework layer to start the camera application, and then starts the camera driver by calling the kernel layer.
  • the camera 193 captures still images or videos.
  • the user can use the terminal to download an APP for controlling the robot.
  • the APP associates the terminal with the robot, and uploads the association relationship to the server.
  • the server stores the association relationship between the terminal and the robot.
  • A robot can be associated with one terminal, or with two or more terminals. For example, if A purchases the robot R1, the server stores the association relationship between A's mobile phone and the robot R1, so that when A opens the APP on his mobile phone, he can control the movement of the robot R1. For another example, someone buys a robot R2 and gives it to B.
  • There are three people in B's family, and B associates all three people's mobile phones with the robot R2, so that when any one of the three people in B's family opens the APP on their mobile phone, they are able to control the movement of the robot R2.
  • By associating the terminal with the robot and storing the association relationship between the terminal and the robot, the robot can be controlled by the terminal(s) associated with it.
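  • For illustration only, a minimal sketch of how a server might keep such a many-to-many association between terminals and robots; the data structure and identifiers are assumptions, since the application does not specify a storage format.

```python
from collections import defaultdict

class AssociationStore:
    """Tracks which terminals are allowed to control which robots."""

    def __init__(self):
        self._robots_by_terminal = defaultdict(set)

    def associate(self, terminal_id, robot_id):
        # Called when the APP uploads a terminal-robot association.
        self._robots_by_terminal[terminal_id].add(robot_id)

    def can_control(self, terminal_id, robot_id):
        # Checked before the server forwards a control request to the robot.
        return robot_id in self._robots_by_terminal[terminal_id]

store = AssociationStore()
store.associate("phone_A", "robot_R1")            # A's phone controls robot R1
for phone in ("phone_B1", "phone_B2", "phone_B3"):
    store.associate(phone, "robot_R2")            # B's family controls robot R2
print(store.can_control("phone_B2", "robot_R2"))  # True
```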
  • After the user purchases the robot, the user places the robot in an indoor environment, for example, in the user's own home, and turns on the robot; the robot then constructs a laser SLAM map and a visual SLAM map of the indoor environment.
  • SLAM (Simultaneous Localization And Mapping) usually refers to a system that collects and processes data from various sensors on a robot or other carrier to generate positioning of the carrier's own position and posture as well as map information of the scene.
  • SLAM technology is critical to the action and interaction capabilities of robots or other agents, because it represents the basis of this ability: knowing where you are, knowing the surrounding environment, and knowing how to act autonomously next. It has a wide range of applications in areas such as autonomous driving, service robots, unmanned aerial vehicles, AR/VR, etc. It can be said that all agents with certain mobility have some form of SLAM system.
  • SLAM systems usually contain multiple sensors and multiple functional modules.
  • common robot SLAM systems currently take two forms: SLAM based on lidar (laser SLAM) and SLAM based on vision (visual SLAM, also written VSLAM).
  • Laser SLAM was born out of early ranging-based positioning methods (such as ultrasonic and infrared single-point ranging).
  • the emergence and popularization of lidar makes the measurement faster and more accurate, and the information is richer.
  • the object information collected by lidar presents a series of scattered points with accurate angle and distance information, which are called point clouds.
  • the laser SLAM system calculates the relative movement distance and posture change of the lidar by matching and comparing two point clouds at different times, thus completing the positioning of the robot itself.
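  • As a concrete illustration of this idea (not code from the patent), the following Python sketch estimates the 2D rotation and translation between two laser scans using the Kabsch/SVD method, under the simplifying assumption that the two point clouds are already in one-to-one correspondence; a real laser SLAM front end would first have to establish those correspondences, for example with ICP.

```python
import numpy as np

def estimate_motion_2d(prev_scan, curr_scan):
    """Estimate the 2D rotation R and translation t that map prev_scan onto
    curr_scan, assuming the two point clouds are already in one-to-one
    correspondence (a real SLAM front end must first establish matches)."""
    p = np.asarray(prev_scan, dtype=float)   # shape (N, 2)
    q = np.asarray(curr_scan, dtype=float)   # shape (N, 2)
    p_c, q_c = p - p.mean(axis=0), q - q.mean(axis=0)
    # Kabsch: SVD of the cross-covariance gives the optimal rotation.
    U, _, Vt = np.linalg.svd(p_c.T @ q_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = q.mean(axis=0) - R @ p.mean(axis=0)
    return R, t

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scan0 = rng.uniform(-2.0, 2.0, size=(50, 2))
    theta = np.deg2rad(10.0)
    R_true = np.array([[np.cos(theta), -np.sin(theta)],
                       [np.sin(theta),  np.cos(theta)]])
    scan1 = scan0 @ R_true.T + np.array([0.3, -0.1])
    R_est, t_est = estimate_motion_2d(scan0, scan1)
    # Should recover roughly 10 degrees and the translation (0.3, -0.1).
    print(np.rad2deg(np.arctan2(R_est[1, 0], R_est[0, 0])), t_est)
```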
  • Lidar distance measurement is more accurate, the error model is simple, it runs stably in environments other than direct sunlight, and it is easier to process point clouds.
  • the point cloud information itself contains direct geometric relationships, making the path planning and navigation of the robot intuitive.
  • For humans, the eyes are the main source of information about the outside world.
  • Visual SLAM has similar characteristics: it can obtain massive, redundant texture information from the environment and has very strong scene recognition capabilities.
  • early visual SLAM was based on filtering theory, and its nonlinear error model and huge computational load were obstacles to practical deployment. In recent years, with sparse nonlinear optimization theory (bundle adjustment) and advances in camera technology and computing performance, visual SLAM running in real time is no longer out of reach.
  • a visual SLAM system consists of a front end and a back end.
  • the front end is responsible for quickly computing the robot's pose from incremental visual measurements.
  • the back end is mainly responsible for two functions: one is loop closure, that is, when the robot is determined to have returned to a place it has visited before, finding the loop and correcting the poses of the two visits; the other is relocalizing the robot from visual texture information when front-end tracking is lost. Simply put, the front end is responsible for fast positioning, and the back end is responsible for slower map maintenance.
  • after the robot constructs the laser SLAM map and the visual SLAM map of the indoor environment, it transmits the constructed visual SLAM map to the server, and the server receives and stores the visual SLAM map.
  • the user is in an indoor space and wants to control the movement of the robot through the terminal.
  • the robot can be a sweeping robot, and the user wants to control it through the terminal to move to a certain position in a certain room and clean there. For the convenience of description, this position is called the target point.
  • the user’s terminal downloads the APP for controlling the robot
  • the user opens the terminal’s APP
  • the APP calls the terminal’s camera. See Figure 3.
  • the user can tilt the terminal at an appropriate angle so that the camera can capture an image containing the target point, and the current image captured by the camera is displayed on the terminal interface.
  • the user touches a point on the terminal interface with a finger to select the target point.
  • the contact point between the user's finger and the terminal screen is called the touch point. The terminal then takes a screenshot of the current interface to obtain the target image frame.
  • the touch point is a point on the screen of the terminal
  • the target point is the position the user wants the robot to move to.
  • if the desired position is not visible, the user can tilt or rotate the terminal to adjust the direction of the terminal's camera, or change the touch point on the screen.
  • the target point in the image corresponds to a position in the indoor three-dimensional space.
  • Figure 4A shows the target image frame obtained by the mobile phone taking a screenshot of the current interface.
  • point A is the touch point.
  • the terminal uses a preset feature extraction algorithm to extract feature points in the target image frame, for example, point B1, point B2, ..., point B9 are the extracted feature points, and the terminal determines the screen coordinates of the feature point.
  • the preset feature extraction algorithm may be, for example, the SIFT (Scale-Invariant Feature Transform) algorithm; a minimal extraction sketch is given below.
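  • As a hedged illustration of this kind of feature extraction (the patent does not provide code), the sketch below uses OpenCV's SIFT implementation; the image file name is a placeholder for the target image frame, and an OpenCV build with SIFT support (version 4.4.0 or later) is assumed.

```python
import cv2

# Hypothetical file name; in the application this would be the screenshot
# (target image frame) taken when the user touches the screen.
frame = cv2.imread("target_frame.png")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

sift = cv2.SIFT_create()                       # requires OpenCV >= 4.4.0
keypoints, descriptors = sift.detectAndCompute(gray, None)

# Each keypoint's pt attribute is its screen coordinate in pixels;
# the corresponding descriptor row is the "feature data" to upload.
for kp, desc in zip(keypoints[:9], descriptors[:9]):
    print(kp.pt, desc.shape)
```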
  • the first coordinate system is the projected coordinate system of the coordinate system of the visual SLAM map on the horizontal plane
  • the second coordinate system is the coordinate system of the laser SLAM map
  • both the first coordinate system and the second coordinate system are two-dimensional coordinate systems.
  • the terminal uploads the screen coordinates of the feature point and the target point, and the feature data of the feature point and the target point to the server; it can also upload the data of the target image frame to the server.
  • the server determines the relative position of the target point with respect to each feature point from the feature data of the feature points and the target point, finds the coordinates of these feature points in the visual SLAM map from their feature data, and obtains the coordinates of the target point in the visual SLAM map based on the relative position of the target point and each feature point. That is, the first coordinates of the target point in the first coordinate system are acquired.
  • the server converts the coordinates of the target point in the first coordinate system into the coordinates of the target point in the second coordinate system.
  • the server sends the coordinates of the target point in the second coordinate system to the sweeping robot, and the sweeping robot plans a movement path according to those coordinates and the coordinates of its current position in the second coordinate system.
  • the sweeping robot moves to the target point along the movement path and then cleans the area at the target point.
  • the robot control method provided by the embodiment of the present application involves the interaction between the terminal, the server, and the robot.
  • the control method of the robot shown in FIG. 5A includes the following steps S1 to S10.
  • the robot control method shown in FIG. 5B includes the following steps S1-S4, steps S5'-S7', and steps S8-S10.
  • Figure 5A is described in detail below.
  • Step S1 The robot constructs a visual SLAM map and a laser SLAM map.
  • the process of the robot constructing the laser SLAM map includes step S400 to step S403.
  • Lidar is installed in the robot, which can emit laser light and receive the laser light reflected by obstacles.
  • the obstacles refer to objects placed indoors and generally stationary.
  • Step S400 Predetermine the origin and coordinate system of the laser SLAM map to be created.
  • the selection rule for the origin may be set before the robot leaves the factory. The origin may be, for example, the position of the robot's charging pile.
  • Step S401 When the robot is moving indoors, the lidar continuously emits laser light, and the laser light is reflected by the obstacle point before being received by the lidar.
  • an obstacle point refers to a point on an obstacle.
  • the laser is usually fired at, and reflected from, a point on an obstacle.
  • Step S402 the robot determines the direction of the obstacle point according to the orientation of the lidar; and determines the distance between the obstacle point and the robot according to the length of time from when the laser is emitted to when the laser is received.
  • the robot can obtain the coordinates of its current location from its own sensors. In this way, based on the round-trip time of the laser, the distance between the obstacle point and the robot's current position can be determined, and from that the distance between the obstacle point and the origin can also be determined.
  • Step S403 By moving indoors and continuously emitting laser pulses, the robot obtains the direction of each obstacle point and the distance between each obstacle point and the origin, and then creates the laser SLAM map based on these directions and distances. A small time-of-flight sketch follows.
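  • The following sketch (an illustration only, with made-up parameter names) shows the time-of-flight relation behind steps S402 and S403: the one-way range is half the round-trip time multiplied by the speed of light, and the obstacle point's map coordinates follow from the robot's pose and the beam direction. Real lidars typically report the range directly, with calibration applied.

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def obstacle_point(robot_x, robot_y, robot_heading, beam_angle, round_trip_s):
    """Return the (x, y) coordinates of an obstacle point in the laser SLAM
    (second) coordinate system, given the robot pose, the beam direction
    relative to the robot, and the laser round-trip time."""
    distance = SPEED_OF_LIGHT * round_trip_s / 2.0   # one-way range
    direction = robot_heading + beam_angle           # beam direction in the map frame
    return (robot_x + distance * math.cos(direction),
            robot_y + distance * math.sin(direction))

# Example: robot at (1.0, 2.0) facing 30 degrees, beam fired 10 degrees to the
# left, echo received after 20 ns (about 3 m one-way).
print(obstacle_point(1.0, 2.0, math.radians(30), math.radians(10), 20e-9))
```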
  • the process of the robot constructing the visual SLAM map includes steps S501 to S505.
  • a camera is installed in the robot to take pictures of the surrounding environment.
  • Step S501 Predetermine the origin and coordinate system of the visual SLAM map to be created.
  • the selection rule for the origin may be set before the robot leaves the factory. The origin may be, for example, the position of the robot's charging pile.
  • Step S502 The robot moves indoors and takes pictures of the surrounding environment through the camera. After the origin and coordinate system are determined, the robot can obtain the coordinates of its current location from its own sensors while it moves.
  • Step S503 The robot extracts feature points of the captured image according to a preset feature extraction algorithm, and obtains the positions of these feature points relative to the robot.
  • the preset feature extraction algorithm may be, for example, the SIFT (Scale-Invariant Feature Transform) algorithm.
  • SIFT descriptors are scale-invariant, and the algorithm detects key points in the image.
  • Step S504 The robot calculates the coordinates of the feature point in the coordinate system of the visual SLAM map according to the position of the feature point relative to the robot and the current coordinate of the robot in the coordinate system of the visual SLAM map.
  • Step S505 By moving indoors and continuously acquiring the coordinates of the surrounding feature points, the robot draws the visual SLAM map from the coordinates of those points in the coordinate system of the visual SLAM map. A minimal sketch of the pose transform used in step S504 follows.
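  • The sketch below illustrates the planar part of the computation in step S504, assuming the feature's position relative to the robot and the robot's pose are expressed in the same horizontal plane; the patent's visual SLAM map is three-dimensional, so this is a simplification for illustration.

```python
import math

def feature_world_coords(robot_pose, feature_in_robot_frame):
    """Transform a feature point's position measured relative to the robot
    into the coordinate system of the visual SLAM map, given the robot's
    pose (x, y, heading) in that map."""
    rx, ry, heading = robot_pose
    fx, fy = feature_in_robot_frame
    # Rotate the relative offset by the robot heading, then translate.
    wx = rx + fx * math.cos(heading) - fy * math.sin(heading)
    wy = ry + fx * math.sin(heading) + fy * math.cos(heading)
    return wx, wy

# Robot at (2.0, 1.0) heading 90 degrees, feature 1 m straight ahead of it:
print(feature_world_coords((2.0, 1.0, math.radians(90)), (1.0, 0.0)))
# -> approximately (2.0, 2.0)
```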
  • the first coordinate system is the projected coordinate system of the coordinate system of the visual SLAM map on the horizontal plane.
  • for a given point, the abscissa of the point in the visual SLAM map is the same as the abscissa of the point in the first coordinate system, and the ordinate of the point in the visual SLAM map is the same as the ordinate of the point in the first coordinate system.
  • the second coordinate system is the coordinate system of the laser SLAM map.
  • the abscissa of a point in the laser SLAM map is the same as the abscissa of the point in the second coordinate system, and the ordinate of the point in the laser SLAM map is the same as the ordinate of the point in the second coordinate system.
  • the laser SLAM map can be used to correct the visual SLAM map.
  • suppose the abscissa and ordinate of a point in the visual SLAM map (for convenience of description, the point is called point B) are X0 and Y0, and the abscissa and ordinate of the same point in the laser SLAM map are X0' and Y0'.
  • the coordinates of the point in the visual SLAM map can then be corrected according to its coordinates in the laser SLAM map, by combining the two sets of coordinates with constants K1 and K2.
  • K1 and K2 are constants, and the values of K1 and K2 can be equal or unequal; for example, both K1 and K2 can be set to 1/2.
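  • The patent text here does not spell out the exact correction formula, so the following sketch assumes a simple weighted combination of the two maps' coordinates with weights K1 and K2; treat the form of the formula as an assumption, not the patent's definition.

```python
def correct_point(x0, y0, x0_laser, y0_laser, k1=0.5, k2=0.5):
    """Correct a visual-SLAM-map coordinate using the laser-SLAM-map
    coordinate of the same point. The weighted-combination form and the
    default K1 = K2 = 1/2 are assumptions, not the patent's exact formula."""
    x_corrected = k1 * x0 + k2 * x0_laser
    y_corrected = k1 * y0 + k2 * y0_laser
    return x_corrected, y_corrected

# Visual map says (1.02, 2.10), laser map says (1.00, 2.00):
print(correct_point(1.02, 2.10, 1.00, 2.00))  # -> (1.01, 2.05)
```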
  • Step S2 The robot uploads the visual SLAM map to the server.
  • Step S3 The robot uploads the relative position of the origin of the first coordinate system and the origin of the second coordinate system, and the angle between the axial direction of the first coordinate system and the axial direction of the second coordinate system to the server.
  • if the distance between the two origins and the direction of one origin relative to the other are known, the relative position of the two origins can be known. If the coordinates of the origin of the first coordinate system in the second coordinate system are known, or the coordinates of the origin of the second coordinate system in the first coordinate system are known, the relative position of the two origins can also be known. In step S3, the robot uploads the relative position of the origin of the first coordinate system and the origin of the second coordinate system to the server.
  • specifically, the robot uploads any of the following information to the server: the distance between the origin of the first coordinate system and the origin of the second coordinate system together with the direction of one origin relative to the other; or the coordinates of the origin of the first coordinate system in the second coordinate system; or the coordinates of the origin of the second coordinate system in the first coordinate system.
  • the origin of the first coordinate system and the origin of the second coordinate system may or may not coincide.
  • the formula for calculating the second coordinate based on the first coordinate is more complicated when the two origins do not overlap.
  • when the two origins coincide, the robot only needs to upload the angle between the axial direction of the first coordinate system and the axial direction of the second coordinate system to the server.
  • the angle between the axial direction of the first coordinate system and the axial direction of the second coordinate system specifically refers to the angle between the horizontal axis of the first coordinate system and the horizontal axis of the second coordinate system, or, the first coordinate system The angle between the vertical axis of and the vertical axis of the second coordinate system.
  • the included angle lies between [0, ⁇ ].
  • in the figure, xOy represents the first coordinate system, x'Oy' represents the second coordinate system, and θ is the axial angle between the two coordinate systems.
  • Step S4 The terminal displays a visual interface.
  • the user can open an APP installed on the terminal for controlling the robot.
  • the APP calls the camera of the terminal to take an image of the current environment and provides a visual interface for the user.
  • the user can see the indoor environment in the image taken by the camera, for example, the user can see the sofa, desk, chair, bookcase, etc. in the image taken by the camera. If the user wants the robot to move to a certain position in front of the sofa, the position in front of the sofa is the target point; if the user wants the robot to move to a certain position beside the bookcase, the position beside the bookcase is the target point.
  • Step S5 The terminal receives the user's touch operation, takes a screenshot of the interface to obtain the target image frame, extracts the feature points in the target image frame, and determines the feature data of each feature point and the relative position of the target point and each feature point. Generally speaking, the number of extracted feature points is greater than 2.
  • the terminal takes a screenshot of the interface displayed on the screen to obtain the target image frame, and the terminal extracts the characteristic points in the target image frame.
  • Figure 4B shows some of the extracted characteristic points (for example, point B1, point B2, ..., point B9) .
  • the terminal determines the relative position of the target point and each feature point.
  • the feature extraction algorithm used by the terminal to extract feature points in the target image frame in step S5 is the same as the feature extraction algorithm used when the robot constructs the SLAM map.
  • Step S6 The terminal uploads the characteristic data of each characteristic point and the relative position of the target point and each characteristic point to the server.
  • Step S7 The server finds the coordinates of these characteristic points from the visual SLAM map according to the characteristic data of each characteristic point; and obtains the coordinates of the target point in the visual SLAM map based on the relative position of the target point and each characteristic point. That is, the coordinates of the target point in the first coordinate system are acquired. For the convenience of description, the coordinates are called the first coordinates.
  • the server receives the feature data of the feature points uploaded by the terminal, searches for the corresponding feature points in the visual SLAM map stored by itself, and determines the coordinates of the feature points in the visual SLAM map.
  • the coordinates of the feature points in the visual SLAM map can be three-dimensional coordinates.
  • the three dimensions are the abscissa, the ordinate, and the vertical coordinate.
  • the abscissa of a feature point's three-dimensional coordinates is taken as the abscissa of that feature point in the first coordinate system, and its ordinate is taken as the ordinate of that feature point in the first coordinate system.
  • the coordinates of the target point in the first coordinate system are determined according to the coordinates of the characteristic point in the first coordinate system and the relative position of the characteristic point and the target point, that is, the first coordinate is determined.
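  • The aggregation used by the server is not specified beyond "based on the relative position of the target point and each feature point"; the sketch below assumes the relative position is a planar offset in the first coordinate system and averages the per-feature estimates, purely for illustration.

```python
def target_from_features(feature_coords, offsets_to_target):
    """Estimate the target point's coordinates in the first coordinate system
    from N feature-point coordinates and the relative position (offset) of the
    target with respect to each feature point. Representing the relative
    position as a planar offset and averaging the N estimates are assumptions
    made for illustration."""
    estimates = [(fx + dx, fy + dy)
                 for (fx, fy), (dx, dy) in zip(feature_coords, offsets_to_target)]
    n = len(estimates)
    x = sum(e[0] for e in estimates) / n
    y = sum(e[1] for e in estimates) / n
    return x, y

features = [(1.0, 1.0), (2.0, 1.0), (1.5, 2.0)]
offsets = [(0.5, 0.2), (-0.5, 0.2), (0.0, -0.8)]
print(target_from_features(features, offsets))  # -> (1.5, 1.2)
```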
  • Step S8 The server converts the first coordinates into the second coordinates according to the relative position of the origins of the first and second coordinate systems and the angle between the axial direction of the first coordinate system and the axial direction of the second coordinate system.
  • in the figure, xOy represents the first coordinate system and x'Oy' represents the second coordinate system, and the angle between the horizontal axis of the first coordinate system and the horizontal axis of the second coordinate system is θ.
  • the coordinates of point A (the target point) in the first coordinate system are (X, Y), and its coordinates in the second coordinate system are (X', Y').
  • the distance between point A and the origin O is r, and the angle between the line segment OA and the horizontal axis of the first coordinate system is α.
  • from r, α, and θ, the coordinates (X', Y') of the target point in the second coordinate system can be calculated; a hedged conversion sketch follows.
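  • A minimal conversion sketch is given below. It assumes the second coordinate system's axes are rotated by θ relative to the first and that, if the origins do not coincide, the first system's origin expressed in the second system is supplied as an offset; the sign convention and the patent's formulas (7) to (9) are not reproduced here.

```python
import math

def first_to_second(x, y, theta, origin_offset=(0.0, 0.0)):
    """Convert coordinates (x, y) of the target point in the first coordinate
    system into the second coordinate system, assuming the second system's
    axes are rotated by theta relative to the first and that origin_offset
    gives the first system's origin expressed in the second system."""
    r = math.hypot(x, y)                 # distance from the origin O to point A
    alpha = math.atan2(y, x)             # angle between OA and the first x-axis
    x2 = r * math.cos(alpha - theta) + origin_offset[0]
    y2 = r * math.sin(alpha - theta) + origin_offset[1]
    return x2, y2

# With theta = 90 degrees and coinciding origins, the point (0, 1) on the first
# system's y-axis lies on the second system's x-axis:
print(first_to_second(0.0, 1.0, math.radians(90)))  # -> approximately (1.0, 0.0)
```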
  • The angle between the horizontal axis of the first coordinate system and the horizontal axis of the second coordinate system is θ. The positive direction of the horizontal axis of the first coordinate system is determined by fixed parameters of the camera, and the positive direction of the horizontal axis of the second coordinate system is determined by fixed parameters of the lidar, so θ is determined and is a fixed constant. θ can be deduced from the coordinates of the same point in the first coordinate system and its coordinates in the second coordinate system.
  • the method of deducing θ is introduced in detail below. Determine a point (for the convenience of description, call this point C), make the robot move to point C, and read the robot's coordinates in the first coordinate system and in the second coordinate system: the coordinates of point C in the first coordinate system are (X1, Y1), and the coordinates of point C in the second coordinate system are (X1', Y1').
  • the first coordinate system is the projection of the visual SLAM coordinate system onto the horizontal plane, so there is also cumulative error in the first coordinate system: the farther a point is from the origin, the more inaccurate its coordinates. Therefore, by selecting a position close to the origin (for example, a point C within 30 cm of the origin), the obtained coordinates of the robot in the first coordinate system are more accurate, and the θ calculated from them is also more accurate.
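  • The sketch below shows one way to deduce θ from point C, under the assumption that the two origins coincide (both being the charging pile), which is consistent with choosing point C close to the origin.

```python
import math

def estimate_theta(c_first, c_second):
    """Estimate the axial angle theta between the two coordinate systems from
    the coordinates of the same point C in the first system (X1, Y1) and in
    the second system (X1', Y1'). This assumes the two origins coincide."""
    x1, y1 = c_first
    x1p, y1p = c_second
    theta = math.atan2(y1, x1) - math.atan2(y1p, x1p)
    # Wrap and take the absolute value so the included angle lies in [0, pi].
    theta = abs(math.atan2(math.sin(theta), math.cos(theta)))
    return theta

# Point C 30 cm from the origin, seen on the y-axis of the first system and
# on the x-axis of the second system:
print(math.degrees(estimate_theta((0.0, 0.30), (0.30, 0.0))))  # -> 90.0
```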
  • Step S9 The server sends the second coordinates to the robot.
  • the second coordinate is the coordinate of the target point in the laser SLAM map.
  • Step S10 the robot determines the movement path according to the second coordinates, and moves to the target point according to the movement path.
  • the robot knows the coordinates of its current position in the second coordinate system, and also knows the coordinates of the target point in the second coordinate system, determines the motion path according to the laser SLAM map, and moves to the target point according to the motion path.
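  • The patent does not prescribe a particular planning algorithm; the sketch below uses a plain breadth-first search on an occupancy grid derived from the laser SLAM map, only to illustrate planning a path from the robot's current cell to the cell containing the target point.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search on an occupancy grid (0 = free, 1 = obstacle).
    Returns a list of (row, col) cells from start to goal, or None if the
    goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from:
                came_from[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(plan_path(grid, (0, 0), (2, 0)))
```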
  • Steps S1-S4 in the robot control method shown in FIG. 5B are the same as steps S1-S4 shown in FIG. 5A. After step S4, the method further includes the following steps S5'-S7'.
  • Step S5' The terminal receives the user's touch operation, and takes a screenshot of the interface to obtain the target image frame.
  • the terminal takes a screenshot of the interface displayed on the screen to obtain the target image frame.
  • Step S6' The terminal uploads the target image frame to the server.
  • Step S7' The server extracts feature points from the target image frame, determines the feature data of each feature point and the relative position of the target point and each feature point, finds the coordinates of these feature points in the visual SLAM map from the feature data, and obtains the coordinates of the target point in the visual SLAM map based on the relative position of the target point and each feature point. That is, the coordinates of the target point in the first coordinate system are acquired. For the convenience of description, the coordinates are called the first coordinates.
  • FIG. 4B shows some extracted feature points (for example, point B1, point B2, ..., point B9).
  • the server receives the feature data of the feature points uploaded by the terminal, searches for the corresponding feature points in the visual SLAM map stored by itself, and determines the coordinates of the feature points in the visual SLAM map.
  • the coordinates of the feature points in the visual SLAM map can be three-dimensional coordinates.
  • the three dimensions are the abscissa, the ordinate, and the vertical coordinate.
  • the abscissa of a feature point's three-dimensional coordinates is taken as the abscissa of that feature point in the first coordinate system, and its ordinate is taken as the ordinate of that feature point in the first coordinate system.
  • the coordinates of the target point in the first coordinate system are determined according to the coordinates of the characteristic point in the first coordinate system and the relative position of the characteristic point and the target point, that is, the first coordinate is determined.
  • the robot control method shown in FIG. 5B further includes steps S8-S10.
  • the steps S8-S10 included in the robot control method shown in FIG. 5B are the same as the steps S8-S10 shown in FIG. 5A.
  • the user does not need to select the target point on an electronic map with a low degree of restoration, but selects it directly on the terminal interface showing the camera image. The user can therefore accurately select the position to which the robot should move, so the robot can accurately move to the target point.
  • in the method of FIG. 5A, the terminal extracts the feature points from the target image frame and uploads the feature data of the feature points to the server instead of uploading the target image frame itself.
  • the advantage of this method is that the server cannot reconstruct the target image frame from the feature-point data, which effectively protects user privacy.
  • the terminal uploads the target image frame to the server, and the server extracts the feature points in the target image frame.
  • the advantage of this method is that the calculation amount of the terminal is effectively reduced, and the occupation of the calculation resources of the terminal is reduced, so that the configuration requirements of the terminal are lower.
  • because the computing power of the server is much stronger than that of the terminal, this approach computes faster, so the robot responds faster.
  • FIG. 8 shows a flowchart of a robot control method provided by an embodiment of the application.
  • the embodiment shown in FIG. 8 takes a mobile phone and a robot cleaner as examples; the control method of this application can also be applied to other terminals, for example a tablet computer, and to other movable equipment, for example a mopping robot.
  • the first part mainly covers what the user does after purchasing the cleaning robot and before using it to clean: downloading an APP for controlling the cleaning robot on the mobile phone and associating the mobile phone with the cleaning robot, specifically steps S101 to S103.
  • Step S101 The user opens the APP installed on the mobile phone for controlling the sweeping robot.
  • Step S102 The user associates the mobile phone with the sweeping robot on the APP, and the mobile phone uploads the association relationship to the server.
  • the purpose of associating the mobile phone with the cleaning robot is to make the cleaning robot be controlled by the specific terminal associated with it.
  • Step S103 The server receives the association relationship between the mobile phone and the cleaning robot uploaded by the mobile phone, and stores the association relationship.
  • the second part mainly includes the construction of laser SLAM maps and visual SLAM maps for the indoor environment by the sweeping robot.
  • the server stores the visual SLAM maps uploaded by the sweeping robot, which specifically includes steps S201 to S206.
  • Step S201 The sweeping robot separately establishes a laser SLAM map and a visual SLAM map of the indoor environment.
  • Step S202 The sweeping robot uploads the visual SLAM map to the server.
  • Step S203 The cleaning robot uploads to the server the coordinates (X1, Y1) of a given point in the indoor environment in the first coordinate system and the coordinates (X1', Y1') of the same point in the second coordinate system.
  • Step S204 The server receives the visual SLAM map uploaded by the cleaning robot, and associates the visual SLAM map with the cleaning robot.
  • Step S205 The server receives the coordinates (X1, Y1) in the first coordinate system and the coordinates (X1', Y1') in the second coordinate system of the same point uploaded by the cleaning robot.
  • the first coordinate system is the projected coordinate system of the coordinate system of the visual SLAM map on the horizontal plane
  • the second coordinate system is the coordinate system of the laser SLAM map.
  • Step S206 The server calculates the included angle θ between the horizontal axis of the first coordinate system and the horizontal axis of the second coordinate system from the coordinates (X1, Y1) of the same point in the first coordinate system and its coordinates (X1', Y1') in the second coordinate system, and associates the included angle θ with the sweeping robot.
  • the angle between the horizontal axis of the first coordinate system and the horizontal axis of the second coordinate system is ⁇ .
  • the positive direction of the horizontal axis of the first coordinate system is determined by fixed parameters of the camera, and the positive direction of the horizontal axis of the second coordinate system is determined by fixed parameters of the lidar.
  • the purpose of associating the included angle ⁇ with the cleaning robot is to make the server know which included angle ⁇ converts the first coordinate into the second coordinate.
  • suppose that mobile phone P1 is associated with cleaning robot R1, mobile phone P2 with cleaning robot R2, and mobile phone P3 with cleaning robot R3, and that, as shown in Table 2, cleaning robot R1 is associated with included angle θ1, cleaning robot R2 with included angle θ2, and cleaning robot R3 with included angle θ3.
  • the server stores the above-mentioned association relationship.
  • when the server receives the screen coordinates of the feature points and of the target point uploaded by mobile phone P1, it looks up the association relationships it has stored, learns that P1 is associated with cleaning robot R1 and that R1 is associated with included angle θ1, and substitutes θ1 into formula (7), (8), or (9) when calculating the second coordinates.
  • likewise, when the server receives the screen coordinates of the feature points and of the target point uploaded by mobile phone P3, it looks up the stored association relationships, learns that P3 is associated with cleaning robot R3 and that R3 is associated with included angle θ3, and substitutes θ3 into formula (7), (8), or (9) when calculating the second coordinates. A minimal lookup sketch follows.
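  • The association lookup described above can be pictured as two small tables, as in the hypothetical sketch below; the identifiers and angle values are illustrative only.

```python
# A minimal sketch of the association lookup; the dictionary contents mirror
# the example of phones P1-P3, robots R1-R3 and angles theta1-theta3, and the
# names and values are illustrative only.
PHONE_TO_ROBOT = {"P1": "R1", "P2": "R2", "P3": "R3"}
ROBOT_TO_THETA = {"R1": 0.12, "R2": 0.34, "R3": 0.56}  # radians, made-up values

def theta_for_request(phone_id):
    """Given the phone that uploaded the feature and target data, return the
    robot it controls and the included angle theta to substitute into the
    first-to-second coordinate conversion."""
    robot_id = PHONE_TO_ROBOT[phone_id]
    return robot_id, ROBOT_TO_THETA[robot_id]

print(theta_for_request("P3"))  # -> ('R3', 0.56)
```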
  • the third part mainly includes the user using the mobile phone to control the sweeping robot to move to the target point for cleaning, specifically including step S301 to step S316.
  • Step S301 After the robot has been associated with the terminal, the user wants to use the cleaning robot to clean; for example, the user wants the cleaning robot to move to the target point and clean the location of the target point.
  • the user opens the APP on the mobile phone, and the APP calls the mobile phone's camera.
  • Step S302 The camera of the mobile phone collects an image of the current environment, and displays the image on the screen of the mobile phone.
  • Step S303 The user selects the target point by touching the screen.
  • the user touches a certain point on the phone screen (the point is the touch point).
  • the user can tilt the phone at an appropriate angle so that the camera can capture images containing the target point.
  • the current image captured by the camera is displayed on the phone interface.
  • the user touches a certain point on the phone interface to select the target point. Among them, the contact point between the user's finger and the mobile phone screen is called the touch point.
  • the mobile phone can obtain the target point in the actual scene that the user wants to select through the position of the touch point and the current tilt angle of the mobile phone.
  • Step S304 The mobile phone takes a screenshot of the current screen interface, and the obtained screenshot is the aforementioned target image frame.
  • Step S305 the mobile phone extracts N feature points in the target image frame.
  • N is a natural number greater than 1.
  • Step S306 the mobile phone separately determines the feature data of the N feature points and the relative positions of the target point and the N feature points.
  • Step S307 the mobile phone uploads the feature data of the N feature points and the relative positions of the target point and the N feature points to the server.
  • Step S308 The server receives the feature data of the N feature points and the relative positions of the target point and the N feature points uploaded by the mobile phone.
  • the mobile phone can also just upload the target image frame to the server, and the server extracts the characteristic data of the characteristic point, and calculates the relative position between the target point and the characteristic point.
  • Step S309 The server finds the coordinates of the N feature points from the visual SLAM map according to the feature data of the N feature points.
  • the coordinates of the N feature points refer to the coordinates of the N feature points in the first coordinate system.
  • Step S310 The server determines the coordinates (X, Y) of the target point in the first coordinate system according to the coordinates of the N feature points and the relative positions of the target point and the N feature points.
  • Step S312 The server sends the second coordinates (X', Y') to the cleaning robot.
  • Step S313 The cleaning robot receives the second coordinates (X', Y') sent by the server.
  • Step S314 The sweeping robot determines the coordinates of its current position in the second coordinate system.
  • Step S315 the cleaning robot plans a movement path according to the second coordinates (X′, Y′) and the coordinates of the current position of the cleaning robot in the second coordinate system.
  • Step S316 the sweeping robot controls itself to move to the target point according to the planned movement path.
  • the included angle θ between the horizontal axis of the first coordinate system and the horizontal axis of the second coordinate system is calculated from the coordinates (X1, Y1) of the same point in the first coordinate system and its coordinates (X1', Y1') in the second coordinate system.
  • this calculation can also be completed by the sweeping robot.
  • in that case, the sweeping robot calculates the included angle θ between the horizontal axis of the first coordinate system and the horizontal axis of the second coordinate system and uploads it to the server, and the server can directly associate the included angle θ with the sweeping robot and use it in the process of converting the first coordinates into the second coordinates.
  • the user does not need to select the target point on an electronic map with a low degree of restoration, but selects it directly on the terminal interface showing the camera image. The user can therefore accurately select the position to which the robot should move, so the robot can accurately move to the target point.
  • a robot control method provided by an embodiment of the present application involves interaction between a mobile phone, a server, and a cleaning robot, and includes the following steps S901 to S907.
  • S901 The sweeping robot collects environmental information and constructs a laser SLAM map and a visual SLAM map.
  • S902 The sweeping robot uploads the visual SLAM map to the server.
  • S903 The mobile phone takes a screenshot of the current interface to obtain a target image frame, and extracts feature points in the target image frame.
  • the user can tilt the phone at an appropriate angle so that the camera can capture images containing the target point.
  • the current image captured by the camera is displayed on the phone interface.
  • the user touches a certain point on the phone interface to select the target point. Among them, the contact point between the user's finger and the mobile phone screen is called the touch point.
  • the mobile phone can obtain the target point in the actual scene that the user wants to select through the position of the touch point and the current tilt angle of the mobile phone.
  • the mobile phone takes a screenshot of the current interface to obtain the target image frame.
  • the mobile phone extracts the feature points in the target image frame, and determines the feature data of each feature point and the relative position of the target point and each feature point.
  • S904 The mobile phone uploads the characteristic data of each characteristic point and the relative position of the target point and each characteristic point to the server.
  • S905 The server calculates the coordinates of the target point in the first coordinate system, and converts the coordinates of the target point in the first coordinate system into the coordinates of the target point in the second coordinate system.
  • the server finds the coordinates of each feature point from the visual SLAM map according to the feature data of each feature point.
  • the coordinates of each feature point refer to the coordinates of each feature point in the first coordinate system.
  • S906 The server delivers the coordinates of the target point in the second coordinate system to the cleaning robot.
  • S907 The cleaning robot plans a movement path according to the coordinates of the target point in the second coordinate system and the coordinates of its current position in the second coordinate system, and autonomously moves to the target point.
  • the embodiments of the present application also provide a computer-readable storage medium in which a computer program is stored; when the computer program is run on a computer, the computer executes the method described in the foregoing embodiments.
  • the embodiments of the present application also provide a computer program product, which includes a computer program that, when run on a computer, causes the computer to execute the method described in the foregoing embodiments.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be transmitted from a website, computer, server, or data center.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server or a data center integrated with one or more available media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, and a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk).

Landscapes

  • User Interface Of Digital Computer (AREA)
  • Manipulator (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The present invention relates to a robot control method, the robot (300), a terminal (100), a server (200), and a control system. The control method includes the following steps: the robot (300) uploads to the server (200) the coordinate conversion relationship between a first coordinate system and a second coordinate system, together with a visual SLAM map; the terminal (100) uploads a target image frame and feature data of a target point to the server (200); the server (200) determines the coordinates of the target point in the first coordinate system according to the target image frame, the feature data of the target point, and the visual SLAM map; the server (200) converts the first coordinates of the target point in the first coordinate system into second coordinates of the target point in the second coordinate system and sends the second coordinates to the robot (300); the robot (300) receives the second coordinates, determines a movement path according to the second coordinates and the coordinates of its current location in the second coordinate system, and moves to the target point along the movement path. A user can precisely select the position to which the robot (300) is expected to move, so that the robot (300) can move precisely to the target point.
PCT/CN2020/103859 2019-07-24 2020-07-23 Procédé de commande de robot, robot, terminal, serveur et système de commande WO2021013230A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910673025.0A CN110495819B (zh) 2019-07-24 2019-07-24 机器人的控制方法、机器人、终端、服务器及控制系统
CN201910673025.0 2019-07-24

Publications (1)

Publication Number Publication Date
WO2021013230A1 true WO2021013230A1 (fr) 2021-01-28

Family

ID=68586780

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/103859 WO2021013230A1 (fr) 2019-07-24 2020-07-23 Procédé de commande de robot, robot, terminal, serveur et système de commande

Country Status (2)

Country Link
CN (1) CN110495819B (fr)
WO (1) WO2021013230A1 (fr)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110495819B (zh) * 2019-07-24 2021-05-18 华为技术有限公司 机器人的控制方法、机器人、终端、服务器及控制系统
CN113063426B (zh) * 2020-01-02 2022-12-13 北京魔门塔科技有限公司 一种位置信息确定方法及装置
CN111367278A (zh) * 2020-03-04 2020-07-03 北京小狗智能机器人技术有限公司 机器人工作覆盖区域的确定方法及相关设备
CN113679289B (zh) * 2020-05-18 2023-02-21 云米互联科技(广东)有限公司 扫地机控制方法、控制设备及计算机可读存储介质
CN112578333A (zh) * 2020-12-24 2021-03-30 江苏新冠亿科技有限公司 一种智能小车初始坐标检测方法、智能小车及存储介质
CN112932338A (zh) * 2021-02-05 2021-06-11 深圳拓邦股份有限公司 一种扫地机器人定点清扫方法
CN112886670A (zh) * 2021-03-04 2021-06-01 武汉联一合立技术有限公司 机器人的充电控制方法、装置、机器人及存储介质
CN113329071A (zh) * 2021-05-26 2021-08-31 北京远度互联科技有限公司 无人机调度方法及服务系统、计算机存储介质
CN113504790B (zh) * 2021-07-08 2022-08-26 中国南方电网有限责任公司超高压输电公司大理局 一种无人机的飞行控制方法、装置及无人机
CN113569849B (zh) * 2021-08-04 2023-11-21 北京交通大学 基于计算机视觉的汽车充电桩界面检测智能交互系统
CN114504285B (zh) * 2022-04-21 2022-07-05 深圳市倍思科技有限公司 清洁位置确定方法、装置、设备及存储介质
CN117697769B (zh) * 2024-02-06 2024-04-30 成都威世通智能科技有限公司 一种基于深度学习的机器人控制系统和方法

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005346477A (ja) * 2004-06-03 2005-12-15 Toshiba Tec Corp 自律走行体
CN101650891A (zh) * 2008-08-12 2010-02-17 三星电子株式会社 创建3维网络地图的方法和控制自动行进设备的方法
EP2623010A2 (fr) * 2012-02-04 2013-08-07 LG Electronics, Inc. Robot nettoyeur
TW201444515A (zh) * 2013-05-17 2014-12-01 Lite On Electronics Guangzhou 清掃機器人及清掃機器人的定位方法
US20170203439A1 (en) * 2016-01-20 2017-07-20 Yujin Robot Co., Ltd. System for operating mobile robot based on complex map information and operating method thereof
CN107402567A (zh) * 2016-05-19 2017-11-28 科沃斯机器人股份有限公司 组合机器人及其巡航路径生成方法
CN107450561A (zh) * 2017-09-18 2017-12-08 河南科技学院 移动机器人的自主路径规划与避障系统及其使用方法
CN110495819A (zh) * 2019-07-24 2019-11-26 华为技术有限公司 机器人的控制方法、机器人、终端、服务器及控制系统

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06214639A (ja) * 1993-01-18 1994-08-05 Fujita Corp 移動体の走行制御装置
CN103092201B (zh) * 2012-08-10 2015-03-04 江苏科技大学 基于射频识别的多传感器语音导盲机器人及路径规划方法
CN106168805A (zh) * 2016-09-26 2016-11-30 湖南晖龙股份有限公司 基于云计算的机器人自主行走的方法

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005346477A (ja) * 2004-06-03 2005-12-15 Toshiba Tec Corp 自律走行体
CN101650891A (zh) * 2008-08-12 2010-02-17 三星电子株式会社 创建3维网络地图的方法和控制自动行进设备的方法
EP2623010A2 (fr) * 2012-02-04 2013-08-07 LG Electronics, Inc. Robot nettoyeur
TW201444515A (zh) * 2013-05-17 2014-12-01 Lite On Electronics Guangzhou 清掃機器人及清掃機器人的定位方法
US20170203439A1 (en) * 2016-01-20 2017-07-20 Yujin Robot Co., Ltd. System for operating mobile robot based on complex map information and operating method thereof
CN107402567A (zh) * 2016-05-19 2017-11-28 科沃斯机器人股份有限公司 组合机器人及其巡航路径生成方法
CN107450561A (zh) * 2017-09-18 2017-12-08 河南科技学院 移动机器人的自主路径规划与避障系统及其使用方法
CN110495819A (zh) * 2019-07-24 2019-11-26 华为技术有限公司 机器人的控制方法、机器人、终端、服务器及控制系统

Also Published As

Publication number Publication date
CN110495819A (zh) 2019-11-26
CN110495819B (zh) 2021-05-18

Similar Documents

Publication Publication Date Title
WO2021013230A1 (fr) Procédé de commande de robot, robot, terminal, serveur et système de commande
WO2020244497A1 (fr) Procédé d'affichage pour écran flexible et dispositif électronique
WO2020259452A1 (fr) Procédé d'affichage plein écran pour terminal mobile et appareil
WO2021213164A1 (fr) Procédé d'interaction entre des interfaces d'application, dispositif électronique et support de stockage lisible par ordinateur
CN112040361B (zh) 耳机控制方法、装置及存储介质
CN109766043A (zh) 电子设备的操作方法和电子设备
WO2021169394A1 (fr) Procédé d'embellissement d'une image du corps humain sur la base de la profondeur et dispositif électronique
CN108462818A (zh) 电子设备及用于在该电子设备中显示360度图像的方法
WO2022027972A1 (fr) Procédé de recherche de dispositif et dispositif électronique
WO2022007707A1 (fr) Procédé de commande de dispositif domestique, dispositif terminal et support de stockage lisible par ordinateur
CN112134995A (zh) 一种查找应用对象的方法、终端及计算机可读存储介质
WO2022161386A1 (fr) Procédé de détermination de pose et dispositif associé
WO2022206494A1 (fr) Procédé et dispositif de suivi de cible
WO2021170129A1 (fr) Procédé de détermination de pose et dispositif associé
WO2022152174A1 (fr) Procédé de projection d'écran et dispositif électronique
WO2022062902A1 (fr) Procédé de transfert de fichier et dispositif électronique
WO2021036562A1 (fr) Procédé d'invite pour un entraînement physique, et dispositif électronique
CN111294320B (zh) 数据转换的方法和装置
CN114812381A (zh) 电子设备的定位方法及电子设备
WO2022222705A1 (fr) Procédé de commande de dispositif et dispositif électronique
WO2022222702A1 (fr) Procédé de déverrouillage d'écran et dispositif électronique
WO2024114785A1 (fr) Procédé de traitement d'image, dispositif électronique et système
WO2024021691A1 (fr) Procédé d'affichage et dispositif électronique
WO2023216957A1 (fr) Procédé et système de positionnement de cible et dispositif électronique
CN113220935B (zh) 录像数据的存储、查询方法及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20844557

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20844557

Country of ref document: EP

Kind code of ref document: A1