WO2022227352A1 - Multi-robot multi-person cooperative control method, device and system - Google Patents

Multi-robot multi-person cooperative control method, device and system

Info

Publication number
WO2022227352A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
robot
positioning
location
user terminal
Prior art date
Application number
PCT/CN2021/114835
Other languages
English (en)
French (fr)
Inventor
俞捷
Original Assignee
来飞光通信有限公司
Priority date
Filing date
Publication date
Application filed by 来飞光通信有限公司
Publication of WO2022227352A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G01C21/206 Instruments for performing navigational calculations specially adapted for indoor navigation
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00 Manipulators not otherwise provided for
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29 Geographical information databases
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Definitions

  • The present invention relates to the technical field of multi-robot cooperation, and in particular to a multi-robot-multi-person cooperative control method, device, and system.
  • The robots in current multi-robot collaborative systems on the market generally move point-to-point along fixed routes with anti-collision functions added, such as food-delivery robots or sweeping robots; the server that controls the robots in such a system can obtain the location information of each robot, but cannot actively obtain the location information of the user terminals connected to the server.
  • When a staff member uses a user terminal to issue a collaboration request command, the location information of the requesting terminal must be obtained on the fly, so staff members using other user terminals who are close to the place where the collaboration request instruction was initiated cannot be notified in time to rush to the scene and perform collaborative work.
  • Embodiments of the present invention provide a multi-robot-multi-person cooperative control method, device, and system to solve the problem that, when a collaboration request instruction is initiated, robots and the staff using user terminals cannot be arranged in time to respond to the instruction and perform cooperative operations.
  • A first aspect of the embodiments of the present invention provides a multi-robot-multi-person cooperative control method, and the method is applied to a multi-robot-multi-person cooperative system;
  • the system includes N robots and M user terminals communicatively connected with a server; N ≥ 1; M ≥ 1; with the server as the execution subject, the method includes:
  • receiving positioning information; the positioning information includes geographic location information and a first device identifier; each of the user terminals and each of the robots has a unique device identifier;
  • updating the positioning mark of the first device identifier in the shared map to the location indicated by the geographic location information, and sending the updated shared map to all the robots and user terminals; the shared map includes positioning marks of the N robots and the M user terminals; the shared map is a three-dimensional coordinate map;
  • receiving assistance request information; the assistance request information includes an assistance request instruction and a second device identifier;
  • obtaining positioning coordinates according to the positioning mark of the second device identifier in the shared map; determining the robots and/or user terminals satisfying a preset condition in the shared map as target devices; generating a cooperative work instruction according to the positioning coordinates and the assistance request instruction;
  • sending the cooperative work instruction to the target devices, so that the robots and/or the users of the user terminals corresponding to the target devices reach the positioning coordinates to perform cooperative work. A server-side sketch follows.
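  • To make the flow above concrete, the following is a minimal Python sketch of the server-side logic (the embodiments do not prescribe any implementation language); SharedMap, handle_positioning, handle_assistance_request, and the message field names are illustrative assumptions rather than the claimed implementation:

      from dataclasses import dataclass, field

      @dataclass
      class SharedMap:
          # Three-dimensional coordinate map holding one positioning mark per device.
          marks: dict = field(default_factory=dict)  # device_id -> (x, y, z)

          def update_mark(self, device_id, coords):
              self.marks[device_id] = coords

          def coords_of(self, device_id):
              return self.marks[device_id]

      def handle_positioning(shared_map, positioning_info, broadcast):
          # The positioning information carries geographic location information
          # and the first device identifier.
          shared_map.update_mark(positioning_info["device_id"],
                                 positioning_info["geo_location"])
          broadcast(shared_map)  # send the updated shared map to all robots and terminals

      def handle_assistance_request(shared_map, request, select_targets, send):
          # Obtain the positioning coordinates from the positioning mark of the
          # second device identifier, determine the target devices, and send the
          # cooperative work instruction to each of them.
          positioning_coords = shared_map.coords_of(request["device_id"])
          for device_id in select_targets(shared_map, positioning_coords):
              send(device_id, {"goto": positioning_coords,
                               "instruction": request["instruction"]})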
  • Before receiving the positioning information, the method further includes:
  • the user terminal acquires the first position information of the location where it is located, and collects the first optical signal of the environment where it is located;
  • the first location information is three-dimensional location information;
  • the optical signal is a visible light communication signal;
  • the user terminal acquires the first site information mapped by the first optical signal from the server;
  • the first site information includes building floor information, room number information and area function division information;
  • the user terminal obtains first geographic location information according to the first location information and the first site information;
  • the user terminal generates the positioning information according to the first geographic location information and the device identifier of the local terminal, and sends the positioning information to the server.
  • Before receiving the positioning information, the method further includes:
  • the robot acquires second position information of its location and collects a second optical signal of its environment; the second position information is three-dimensional position information; the optical signal is a visible light communication signal;
  • the robot acquires the second site information mapped by the second optical signal from the server;
  • the second site information includes building floor information, room number information and regional function division information;
  • the robot obtains second geographic location information according to the second location information and the second site information;
  • the robot generates the positioning information according to the second geographic location information and the device identification of the local machine, and sends the positioning information to the server.
  • Before receiving the positioning information, the method further includes:
  • if the robot collects a third optical signal, it obtains the position coordinate information of its location; the position coordinate information includes the position coordinates and third site information; the third site information includes building floor information, room number information, and area function division information;
  • the robot constructs a mapping relationship between the encoded information of the third optical signal and the position coordinate information
  • the robot sends the mapping relationship to the server for storage.
  • Each visible light source has a unique coded identifier; the optical signal generated by each visible light source includes the coded identification information corresponding to that visible light source.
  • the determining that the robot and/or the user terminal satisfying the preset condition in the shared map is a target device includes:
  • a preset number of the robots and/or the user terminals that are closest to the positioning coordinates are determined as target devices.
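  • One plausible realization of this preset condition, sketched in Python under the assumption that positioning marks are 3-D coordinates keyed by device identifier (all names here are hypothetical):

      import heapq
      from math import dist

      def nearest_devices(marks, positioning_coords, preset_count, exclude=()):
          # Return the preset number of device IDs whose positioning marks lie
          # closest to the positioning coordinates of the assistance request.
          candidates = ((dist(coords, positioning_coords), device_id)
                        for device_id, coords in marks.items()
                        if device_id not in exclude)
          return [device_id for _, device_id in heapq.nsmallest(preset_count, candidates)]

      # Example: the two devices closest to a request at the origin.
      marks = {"R1": (1.0, 0.0, 0.0), "R2": (5.0, 5.0, 0.0), "U1": (0.5, 1.0, 0.0)}
      print(nearest_devices(marks, (0.0, 0.0, 0.0), 2))  # ['R1', 'U1']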
  • The user terminal acquiring the first location information of its location and collecting the first optical signal of its environment includes:
  • the user terminal obtains the first location information of its location through a simultaneous localization and mapping (SLAM) method;
  • the user terminal collects the first optical signal of its environment through a photodetector.
  • The robot acquiring the second position information of its location and collecting the second optical signal of its environment includes:
  • the robot obtains the second position information of its location through a simultaneous localization and mapping method;
  • the robot collects the second optical signal of its environment through a photodetector.
  • The robot obtaining the position coordinate information of its location includes:
  • if the robot collects the third optical signal, the robot receives the wireless signals of its environment to obtain a first wireless signal list, and obtains the position coordinate information of its location; the wireless signal list includes the signal strength of each received wireless signal;
  • the robot determines the wireless signal with the strongest signal strength in the first wireless signal list as the first target signal;
  • the robot constructing a mapping relationship between the encoded information of the third optical signal and the position coordinate information includes:
  • the robot constructs a mapping relationship among the encoded information of the third optical signal, the position coordinate information, and the first target signal. A sketch of the resulting record follows.
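  • A sketch of how the robot might assemble and report such a mapping record (the record layout and field names are assumptions; the claims do not fix a data format):

      def strongest_signal(wireless_signal_list):
          # The wireless signal list includes the signal strength ("rssi", assumed
          # field name) of each received wireless signal.
          return max(wireless_signal_list, key=lambda s: s["rssi"])

      def build_mapping(vlc_code, position_coord_info, wireless_signal_list):
          first_target = strongest_signal(wireless_signal_list)
          return {
              "vlc_code": vlc_code,                  # decoded code of the third optical signal
              "position": position_coord_info,      # position coordinates plus third site info
              "target_signal": first_target["mac"], # first target signal (strongest in list)
          }
      # The robot would then send this record to the server for storage.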
  • the user terminal acquires the first location information of the location where it is located, and collects the first optical signal of the environment where it is located, and further includes:
  • the user terminal acquires the first location information of the location, collects the first optical signal of the environment, and receives the wireless signal of the environment to obtain the second wireless signal list;
  • the user terminal obtains the first site information mapped by the first optical signal from the server, including:
  • the user terminal determines that the wireless signal with the strongest signal strength in the second wireless signal list is the second target signal
  • the user terminal acquires the site information mapped by the first optical signal and the second target signal from the server.
  • the robot acquires the second position information of its location and collects the second optical signal of its environment, further comprising:
  • the robot obtains the second position information of the location, collects the second optical signal of the environment, and receives the wireless signal of the environment to obtain a third wireless signal list;
  • the robot obtains the second site information mapped by the second optical signal from the server, including:
  • the robot determines that the wireless signal with the strongest signal strength in the third wireless signal list is the third target signal
  • the robot acquires the site information mapped by the second optical signal and the third target signal from the server.
  • Before updating the positioning mark of the first device identifier in the shared map to the location indicated by the geographic location information and sending the updated shared map to all the robots and the user terminals, the method further includes:
  • the parameter information includes building floor information, room number information and area function division information;
  • the parameter information is correspondingly marked in the building information model to obtain the shared map.
  • the method further includes: controlling a number of the robots to perform laser scanning on the target scene at regular intervals to obtain scene scan data;
  • the shared map is updated based on the scene scan data.
  • a second aspect of the embodiments of the present invention provides a multi-robot-multi-person cooperative control device, including:
  • a positioning information receiving module, configured to receive positioning information; the positioning information includes geographic location information and a first device identifier; each user terminal and each robot has a unique device identifier;
  • a positioning update module, configured to update the positioning mark of the first device identifier in the shared map to the location indicated by the geographic location information, and to send the updated shared map to all the robots and user terminals;
  • the shared map includes positioning marks of the N robots and the M user terminals; the shared map is a three-dimensional coordinate map; N ≥ 1; M ≥ 1;
  • an assistance request information receiving module configured to receive assistance request information;
  • the assistance request information includes an assistance request instruction and a second device identifier;
  • a positioning coordinate obtaining module configured to obtain positioning coordinates according to the positioning mark of the second device identification in the shared map
  • a target device determination module configured to determine the robot and/or the user terminal that meet the preset conditions in the shared map as a target device
  • a collaborative instruction generation module configured to generate a collaborative work instruction according to the positioning coordinates and the assistance request instruction
  • the instruction sending module is configured to send the cooperative work instruction to the target device, so that the user of the robot and/or the user terminal corresponding to the target device can reach the positioning coordinates to perform cooperative work.
  • a third aspect of the embodiments of the present invention provides a multi-robot-multi-person collaboration system; the system includes N robots and M user terminals that are communicatively connected to a server; N ≥ 1; M ≥ 1; wherein,
  • the multi-robot-multi-person cooperative system implements the steps of the multi-robot-multi-person cooperative control method described in the first aspect.
  • A multi-robot-multi-person cooperative control method, device, and system provided by the embodiments of the present invention are applied to a multi-robot-multi-person cooperative system;
  • the system includes N robots and M user terminals that are communicatively connected to a server; N ≥ 1; M ≥ 1; the server receives positioning information;
  • the positioning information includes geographic location information and a first device identifier; each of the user terminals and each of the robots has a unique device identifier;
  • based on the positioning information sent by each user terminal and each robot, the server can grasp the position changes of every user terminal and every robot in the collaborative system in real time;
  • the shared map includes positioning marks of the N robots and the M user terminals;
  • the assistance request information includes the assistance request instruction and the second device identifier; positioning coordinates are obtained according to the positioning mark of the second device identifier in the shared map. After the server receives assistance request information sent by any user terminal or robot, it can quickly find the requested location in the shared map to obtain the positioning coordinates, which improves the system response rate. The robots and/or user terminals satisfying the preset condition in the shared map are determined as target devices; a cooperative work instruction is generated according to the positioning coordinates and the assistance request instruction; the cooperative work instruction is sent to the target devices, so that the robots and/or the users of the user terminals corresponding to the target devices reach the positioning coordinates to perform cooperative work.
  • In this way, when a collaboration request instruction is initiated, robots and the staff using user terminals are arranged in time to respond to the instruction and perform collaborative operations.
  • FIG. 1 is a schematic flowchart of a multi-robot-multi-person collaborative control method provided in Embodiment 1 of the present invention.
  • FIG. 2 is a schematic structural diagram of a multi-robot-multi-person collaboration system provided by Embodiment 1 of the present invention.
  • FIG. 3 is a schematic diagram of a multi-robot-multi-person collaborative work scenario provided by Embodiment 1 of the present invention.
  • FIG. 4 is a schematic diagram of a robot detecting an optical signal and constructing a new map according to Embodiment 1 of the present invention.
  • FIG. 5 is a schematic diagram of the combination of an optical signal and a wireless signal according to Embodiment 1 of the present invention.
  • FIG. 6 is a schematic structural diagram of a multi-robot-multi-person cooperative control device provided in Embodiment 2 of the present invention.
  • FIG. 7 is a schematic structural diagram of a multi-robot-multi-person collaboration system provided by Embodiment 3 of the present invention.
  • FIG. 8 is a schematic structural diagram of a visible light communication system-on-chip VLCSOC according to Embodiment 3 of the present invention.
  • The robots in a multi-robot assistance system usually move point-to-point along fixed routes, and the server cannot know the location information of the user terminals connected to the multi-robot assistance system.
  • When a user terminal initiates an assistance request command to the system, the geographic location of the device initiating the command must be obtained on the fly, resulting in a delayed system response; the system is also unable to promptly notify the staff using other user terminals closer to the location where the collaboration request command was initiated to rush to the scene.
  • In the embodiments of the present invention, the server updates the positioning-mark positions of each user terminal and each robot on the shared map according to the received positioning information, enabling real-time tracking of the location of every robot and user terminal in the shared map. After the server receives assistance request information sent by any user terminal or robot, it can quickly find the requested location in the shared map to obtain the positioning coordinates, and promptly arrange the robots in the shared map and the staff using the user terminals to respond to the instruction together and perform collaborative work.
  • Referring to FIG. 1, a schematic flowchart of the multi-robot-multi-person cooperative control method provided by Embodiment 1 of the present invention is shown.
  • This embodiment can be applied to an application scenario in which a multi-robot-multi-person cooperative system performs cooperative operations, and the method is applied to such a system; the system includes N robots and M user terminals communicatively connected to the server; N ≥ 1; M ≥ 1. The method is executed with the server as the execution subject, and the server may be a network cloud platform built from multiple servers;
  • the method specifically includes the following steps:
  • S110 Receive positioning information; the positioning information includes geographic location information and a first device identifier; each of the user terminals and each of the robots has a unique device identifier.
  • The server 20 in the system serves as the main control server: it controls all the robots in the system, obtains each robot's current position and environment information, processes data and formulates strategies, and can also obtain the current position information of the user terminals connected to the server and process their data; the server can also communicate with the cloud 40 where data is stored.
  • The server may be a Coordination, Control, and Collaboration Server (C3S); the server maintains the position of each user terminal and robot, acting as a centralized gateway that receives the locations of all mobile and fixed terminals (including the robots and user terminals).
  • the server is able to send instructions to the robot to perform various actions, such as moving, turning, going to a specific location, manipulating the robotic manipulator, and collecting sensor data.
  • the C3S can also have a user interface (UI) for the operator to control the robot on the user terminal through the user interface, and to cooperate with the operators of other user terminals.
  • The multiple robots may be mobile robots of the same type, or mobile robots of different types with different functions; the user terminal may be a smart terminal such as a mobile smart terminal, a tablet, or a smart watch.
  • each robot and each user terminal in the system send positioning information to the server.
  • The positioning information received by the server can be generated by any robot or any user terminal in the system; the positioning information includes the geographic location information and the first device identifier of the device sending it, and each of the user terminals and each of the robots has a unique device identifier, so that when the server receives the positioning information it determines, from the first device identifier, the device to which the geographic location information belongs.
  • Each robot and each user terminal can also periodically send positioning information to the server at a preset time interval, so that the server continuously receives the positioning information of every robot and user terminal.
  • In order to obtain the geographic location information of the location of each robot and user terminal, both the robots and the user terminals are provided with a positioning and navigation device and the sensors required for positioning and navigation (e.g., an odometer and inertial sensors).
  • the preset time interval may be 0.1-0.5 seconds.
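  • A minimal periodic reporting loop in Python, assuming an interval of 0.2 seconds within that range (function and field names are hypothetical):

      import time

      REPORT_INTERVAL_S = 0.2  # within the 0.1-0.5 second range mentioned above

      def report_loop(device_id, get_geo_location, send_to_server):
          # Periodically send positioning information so the server can
          # continuously track this robot or user terminal.
          while True:
              send_to_server({"device_id": device_id,               # unique device identifier
                              "geo_location": get_geo_location()})  # 3-D coordinates plus site info
              time.sleep(REPORT_INTERVAL_S)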
  • the positioning information received by the server is generated and sent by any user terminal in the multi-robot-multi-person collaboration system.
  • In order to track the position of each user terminal in the multi-robot-multi-person collaboration system, each user terminal periodically locates its own position at the preset time interval;
  • more accurate three-dimensional position information is obtained with the help of the visible light communication signal collected by the device, and positioning information is generated from the three-dimensional position information and sent to the server to complete a position report. Therefore, before the server receives the positioning information, the process of generating the positioning information by each user terminal is also included.
  • the process of generating the positioning information by the user terminal includes steps 11 to 14:
  • Step 11 The user terminal acquires first location information of the location where it is located, and collects the first optical signal of the environment where it is located; the first location information is three-dimensional location information; the optical signal is a visible light communication signal;
  • A plurality of electronic marks carrying coded identifiers can be reasonably placed in the work scene in advance, and the coded identifier of each electronic mark is uniquely mapped to its location information (such as longitude, latitude, building name, floor number, floor direction, room number, floor location, and area function); the server, or the cloud connected to the server, pre-stores this mapping relationship list, and a robot or user terminal that moves to the location of an electronic mark can scan and decode it;
  • the site information corresponding to the scanned electronic mark can then be obtained through the server, and combining the obtained site information with the acquired three-dimensional positioning information yields more accurate three-dimensional information and increases positioning accuracy.
  • a preset intelligent installation program can be used to control multiple robots to complete the installation of multiple electronic markers (such as LED lights or beacons, etc.) carrying coded identifiers in the work scene.
  • The above electronic mark may take the form of a QR (Quick Response) code or of a visible light source (such as an LED light or a beacon) that transmits the coded identifier as a visible light communication (Visible Light Communication, VLC) signal.
  • the visible light source has a unique coded identifier (IDcode); the optical signal generated by each of the visible light sources includes the coded identification information corresponding to each of the visible light sources.
  • the user terminal obtains the first position information of the current location through the provided positioning device, and also collects the first optical signal of the environment where it is located through the signal collection device. Since the position information acquired by the positioning device is mostly three-dimensional positioning information, the first position information acquired by the user terminal is three-dimensional position information.
  • the first optical signal generated by the visible light source is a visible light communication (VLC) signal.
  • The user terminal may be provided with a simultaneous localization and mapping (SLAM) device and a photodetector, so that the user terminal can obtain the first position information of its location through the SLAM method and collect the first optical signal of its environment through the photodetector.
  • Specifically, the process by which the user terminal obtains the first position information of its location and collects the first optical signal of its environment includes: the user terminal obtains the first position information of its location through the SLAM method; the user terminal collects the first optical signal of its environment through the photodetector.
  • the SLAM device provided in the user terminal includes a main control chip and sensing and measurement devices such as an odometer and an inertial sensor connected to the main control chip.
  • The main control chip of the SLAM device takes the data measured by each sensing and measurement device, performs the SLAM computation, and obtains the positioning result together with a map of the environment where the local terminal is located; that is, the acquired first location information includes the positioning coordinates and the constructed map.
  • the created map can be a grid map (Occupancy Grid Map).
  • the user terminal collects the first optical signal in the environment through the photodetector.
  • Step 12 the user terminal obtains the first site information mapped by the first optical signal from the server;
  • the first site information includes building floor information, room number information and area function division information;
  • the server or the cloud to which the server is connected pre-stores the above-mentioned mapping relationship list.
  • The first site information includes information such as longitude, latitude, building name, floor number, floor direction, room number, and area function.
  • the function of this area can be divided according to users (for example, a staff-only area or a public area); or divided into functional areas (such as toilets, offices, etc.) according to the realization functions.
  • Step 13 the user terminal obtains first geographic location information according to the first location information and the first site information;
  • the user terminal performs information completion and position correction on the first location information according to the obtained first site information, and obtains first geographic location information including the site information, that is, three-dimensional coordinate information.
  • Step 14 The user terminal generates the positioning information according to the first geographic location information and the device identification of the local machine, and sends the positioning information to the server.
  • the user terminal generates positioning information according to the obtained first geographic position information and the device identification of the local machine, and sends the positioning information to the server to complete the reporting of the position information once.
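  • Steps 13 and 14 in sketch form, assuming dictionary-shaped inputs (the field names are assumptions for illustration):

      def generate_positioning_info(device_id, first_position_info, first_site_info):
          # Complete the SLAM position with the site information mapped from the
          # collected VLC signal, then attach the device identifier of the local terminal.
          first_geo_location = {
              "coords": first_position_info,  # three-dimensional position from SLAM
              "building": first_site_info["building"],
              "floor": first_site_info["floor"],
              "room": first_site_info["room"],
              "area_function": first_site_info["area_function"],
          }
          return {"device_id": device_id, "geo_location": first_geo_location}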
  • the positioning information received by the server is generated and sent by any robot in the multi-robot-multi-person collaborative system.
  • Each robot locates its own position at the preset time interval;
  • more accurate three-dimensional position information is obtained with the help of the collected visible light communication signal, and positioning information is generated from the three-dimensional position information and sent to the server to complete a position report. Therefore, before the server receives the positioning information, the process of generating the positioning information by each robot is also included.
  • the process of generating the positioning information by the robot includes steps 21 to 24:
  • Step 21 The robot obtains second position information of its location, and collects a second optical signal of its environment; the second location information is three-dimensional position information; the optical signal is a visible light communication signal;
  • Likewise, multiple electronic marks carrying coded identifiers are reasonably placed in the work scene, and the coded identifier of each electronic mark is uniquely mapped to its location information (such as floor direction, room number, floor location, and area function);
  • the server, or the cloud connected to the server, stores this mapping relationship list in advance, and a robot or user terminal that moves to the location of an electronic mark can scan and decode it;
  • the site information corresponding to the scanned electronic mark can then be obtained through the server, and combining the obtained site information with the acquired 3D positioning information yields more accurate 3D information and increases positioning accuracy.
  • The electronic mark may take the form of a QR (Quick Response) code or of a visible light source (such as an LED lamp, a VLC lamp, or a beacon) that transmits the coded identifier as a visible light communication (Visible Light Communication, VLC) signal.
  • As shown in FIG. 3, in this implementation example several visible light sources 303 are installed in the scene where the N robots 301 and the M user terminals 302 of the multi-robot-multi-person collaboration system are located;
  • the visible light source has a unique coded identifier (IDcode); the optical signal generated by each of the visible light sources includes the coded identification information corresponding to each of the visible light sources.
  • the robot obtains the second position information of the current position through the provided positioning device, and also collects the second light signal of the environment in which it is located through the signal collection device. Since the position information obtained by the positioning device is mostly three-dimensional positioning information, the second position information obtained by the robot is three-dimensional position information.
  • the second optical signal generated by the visible light source is a visible light communication (VLC) signal.
  • The robot may be provided with a simultaneous localization and mapping (SLAM) device and a photodetector, so that the robot can obtain the second position information of its location through the positioning device and collect the second optical signal of its environment through the photodetector.
  • Specifically, the process by which the robot acquires the second position information of its location and collects the second optical signal of its environment includes: the robot obtains the second position information of its location through the SLAM method; the robot collects the second optical signal of its environment through the photodetector.
  • the SLAM device provided in the robot includes a main control chip and sensing and measurement devices such as an odometer and an inertial sensor connected to the main control chip.
  • The main control chip of the SLAM device takes the data measured by each sensing and measurement device, performs the SLAM computation, and obtains the positioning result together with a map of the environment where the robot is located; that is, the acquired second location information includes the positioning coordinates and the constructed map.
  • the created map can be a grid map (Occupancy Grid Map).
  • the robot collects the second optical signal in the environment through the photodetector.
  • Step 22 the robot obtains the second site information mapped by the second optical signal from the server;
  • the second site information includes building floor information, room number information and area function division information;
  • the server or the cloud to which the server is connected pre-stores the above mapping relationship list.
  • the robot collects the second optical signal of the environment in which it is located
  • the second site information mapped by the second optical signal can be queried from the mapping relationship list stored in the server.
  • The second site information includes information such as longitude, latitude, building name, floor number, floor direction, room number, and area function.
  • the function of this area can be divided according to users (for example, a staff-only area or a public area); or divided into functional areas (such as toilets, offices, etc.) according to the realization functions.
  • Step 23 the robot obtains second geographic location information according to the second location information and the second site information;
  • the robot performs information completion and position correction on the second location information according to the obtained second site information, and obtains second geographic location information including the site information, that is, three-dimensional coordinate information.
  • Step 24 The robot generates the positioning information according to the second geographic location information and the device identification of the local machine, and sends the positioning information to the server.
  • the robot generates positioning information according to the obtained second geographic position information and the device identification of the machine, and sends the positioning information to the server to complete the reporting of the position information once.
  • The mapping relationship between the different optical (VLC) signals and the site information stored in the server, or stored in the cloud in advance, is extracted from the construction drawings of the work scene of the multi-robot-multi-person collaboration system.
  • the site information may correspond to the floor information, room number information, and area function information of the building.
  • Since the optical signals are all emitted by visible light sources (such as LED lights or beacons) installed in the building, and the construction drawings of the building contain detailed location information for each visible light source, including its floor number and its position coordinates within the floor,
  • the mapping relationship between the optical signal generated by each visible light source and the position information of that light source can be established, and the generated mapping relationship list is stored in the server or in the cloud; in this way, the mapping relationship between different optical signals and site information is obtained.
  • the VLC code identifier (ID) of each visible light source is stored in the cloud together with its mapped physical location information (including the longitude, latitude, building, floor, room number and area function information in the map of the building) .
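  • An illustrative entry of such a mapping list (the identifier and all values below are invented for the example):

      vlc_mapping = {
          "VLC-0017": {                   # unique coded identifier (IDcode) of one light source
              "longitude": 114.0579,
              "latitude": 22.5431,
              "building": "Tower A",
              "floor": 3,
              "room": "301",
              "area_function": "office",  # area function division information
          },
      }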
  • Alternatively, the robot's simultaneous localization and mapping function can be used to complete the surveying and mapping of a new map and to mark the locations of the electronic marks in that new map.
  • The mapping relationship between the different optical signals and the site information stored in the server can also be constructed by the robots in the multi-robot-multi-person collaborative system. Specifically, before any robot or user terminal in the system obtains the site information mapped by an optical signal from the server, the process of constructing the mapping relationship between the optical signal and the site information includes steps 31 to 33:
  • Step 31: If the robot collects a third optical signal, it obtains the position coordinate information of its location; the position coordinate information includes the position coordinates and third site information; the third site information includes building floor information, room number information, and area function division information;
  • When the electronic marks in the working scene of the system are installed for the first time and there is no drawing data record, recording and marking them manually would take a great deal of engineering time at high cost.
  • multiple robots in the system can cruise on the new map and detect light signals at the same time.
  • If the robot collects the third optical signal, it obtains the position coordinate information of its current position, so that the obtained position coordinate information is the location of the electronic mark generating the third optical signal.
  • the SLAM device provided in the robot includes a main control chip and sensing and measurement devices such as an odometer and an inertial sensor connected to the main control chip.
  • The main control chip of the SLAM device takes the data measured by each sensing and measurement device, performs the SLAM computation, and obtains the positioning result while building a map of the environment where the robot is located; that is, the obtained position coordinate information of the location includes the position coordinates and the newly constructed map.
  • the robot can also obtain the building floor information, room number information and area function division information of the location according to the design drawings of the scene to obtain the third site information, so that the location coordinate information also includes the third site information.
  • Step 32 the robot constructs a mapping relationship between the encoded information of the third optical signal and the position coordinate information
  • the robot decodes the collected third optical signal to obtain an encoded identification number corresponding to the electronic marker generating the third optical signal, and constructs a mapping relationship between the encoded identification number of the collected third optical signal and the obtained position coordinate information.
  • The established new map can be an occupancy grid map (Occupancy Grid Map) containing the detected electronic markers; different pixel values in the grid map represent obstacle areas (pixel value 1), free areas (pixel value 0), and electronic marker areas (pixel value -1).
  • The occupancy grid map with electronic markers is in .pgm picture format and records the starting-point location of the map, the orientation of the map, the pixel-to-physical-distance resolution of the image, the position of each electronic marker, the coded identifier ID corresponding to each electronic marker, and the positions of obstacles. A sketch of such a grid follows.
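  • A sketch of such an occupancy grid with marker cells, using the pixel values described above (the array size, cell positions, and the ID are invented):

      import numpy as np

      FREE, OBSTACLE, MARKER = 0, 1, -1       # pixel values described above

      grid = np.zeros((100, 100), dtype=np.int8)
      grid[40, :] = OBSTACLE                  # e.g. a wall detected while cruising
      grid[10, 25] = MARKER                   # a detected electronic marker (VLC lamp)

      map_metadata = {
          "origin": (0.0, 0.0, 0.0),          # starting-point location of the map
          "orientation_rad": 0.0,             # orientation of the map
          "resolution_m_per_px": 0.05,        # pixel-to-physical-distance resolution
          "markers": {(10, 25): "VLC-0017"},  # marker cell -> coded identifier ID
      }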
  • Step 33 The robot sends the mapping relationship to the server for storage.
  • After the robot constructs the mapping relationship between the encoded information of the third optical signal and the position coordinate information, the robot sends the mapping relationship to the server for storage.
  • FIG. 4 is a schematic diagram of a robot detecting optical signals to construct a new map, taking one of the robots in the system as an example.
  • the robot is placed on a digital map at a starting point with a known location and orientation.
  • The origin can be marked with an electronic marker (e.g., a VLC signal).
  • The receiver of the robot detects the third optical signal and decodes it to obtain the ID of the lamp generating the signal, and the robot's current position coordinate information is mapped to the received VLC lamp ID.
  • Once the robot has covered all areas, the position of each lamp in the area is mapped to its corresponding lamp ID; this mapping of lamp ID to position and orientation is stored in a database for the users of the user terminals and the robots to navigate by.
  • In order to avoid the complicated operation and long response times caused by overly complex encoding of the optical signals, the optical signals can also be combined with wireless signals so that the optical signal codes can be reused.
  • the mapping relationship between the optical signal and the site information can also be the mapping relationship between the optical signal and the site information and the wireless signal.
  • The method for constructing the mapping relationship between the combined optical and wireless signals and the site information specifically includes steps 41 to 44.
  • The wireless signal may be a Bluetooth or WiFi signal or the like. FIG. 5 is a schematic diagram of the combination of an optical signal and a wireless signal.
  • In FIG. 5 the wireless signal is a Bluetooth signal: several Bluetooth signals (e.g., Bluetooth signals A-I in the figure) each cover a region within which the VLC lamp code identification IDs (e.g., codes 1-9) are reused.
  • The VLC IDs are distributed within each region in such a way that they are kept far away from their twin IDs in adjacent regions; this special arrangement reduces the probability of false triggering due to undesired changes in the Bluetooth signal RSS. A lookup sketch follows.
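  • The lookup sketched below shows how reused VLC codes still resolve uniquely once combined with the strongest wireless signal (the MAC addresses, codes, and site values are invented):

      def strongest(wireless_signal_list):
          # wireless_signal_list: [{"mac": ..., "rssi": ...}, ...]; higher RSSI is stronger.
          return max(wireless_signal_list, key=lambda s: s["rssi"])["mac"]

      # The same short VLC codes are reused in every Bluetooth region, so the
      # site lookup is keyed by the pair (region signal, VLC code).
      site_table = {
          ("AA:BB:CC:01", 3): {"floor": 1, "room": "103"},
          ("AA:BB:CC:02", 3): {"floor": 2, "room": "203"},  # code 3 reused in another region
      }

      scans = [{"mac": "AA:BB:CC:01", "rssi": -48},
               {"mac": "AA:BB:CC:02", "rssi": -71}]
      print(site_table[(strongest(scans), 3)])  # {'floor': 1, 'room': '103'}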
  • Step 41: If the robot collects the third optical signal, it receives the wireless signals of its environment to obtain the first wireless signal list, and obtains the position coordinate information of its location; the wireless signal list includes the signal strength of each received wireless signal;
  • the robot collects the third light signal, it obtains the position coordinate information of the current position of the robot, and also needs to receive the wireless signal of the environment to obtain the first wireless signal list.
  • Step 42 the robot determines that the wireless signal with the strongest signal strength in the first wireless signal list is the first target signal
  • the wireless signal corresponding to the third optical signal should be the wireless signal with the strongest signal strength in the first wireless signal list.
  • the robot determines that the wireless signal with the strongest signal strength in the first wireless signal list is the first target signal, so as to find the wireless signal corresponding to the area where the third optical signal is located.
  • Step 43 the robot constructs a mapping relationship between the encoded information of the third optical signal, the position coordinate information, and the first target signal;
  • the robot constructs a mapping relationship between the encoded information of the third optical signal, the acquired position coordinate information, and the first target signal corresponding to the area where the VLC lamp that generates the third optical signal is located.
  • Step 44 The robot sends the mapping relationship to the server for storage.
  • The robot sends the mapping relationship to the server for storage, so that each lamp's VLC identifier (ID) and its radio-frequency (Bluetooth/WiFi) MAC address, together with its physical location information based on the building's map (longitude, latitude, building, floor, room number, area function information, etc.), are stored on the server or in the cloud.
  • Correspondingly, the mapping relationship between the optical signal and the site information can be a mapping relationship among the optical signal, the site information, and the wireless signal.
  • the process of generating the positioning information by the user terminal further includes steps 51 to 55:
  • Step 51 The user terminal acquires the first location information of the location where it is located, collects the first optical signal of the environment where it is located, and receives the wireless signal of the environment where it is located to obtain a second wireless signal list;
  • If the user terminal collects the first optical signal, it obtains the first location information of its current location, and also needs to receive the wireless signals of the environment to obtain the second wireless signal list.
  • Step 52 The user terminal determines that the wireless signal with the strongest signal strength in the second wireless signal list is the second target signal
  • the wireless signal corresponding to the first optical signal should be the wireless signal with the strongest signal strength in the second wireless signal list.
  • the user terminal determines that the wireless signal with the strongest signal strength in the second wireless signal list is the second target signal, so as to find the wireless signal corresponding to the area where the first optical signal is located.
  • Step 53 the user terminal obtains the site information mapped by the first optical signal and the second target signal from the server;
  • the server or the cloud to which the server is connected pre-stores the above mapping relationship list.
  • the site information corresponding to the mapping of the first optical signal and the second target signal can be queried from the mapping relationship list stored in the server.
  • The site information corresponds to the first site information, including information such as longitude, latitude, building name, floor number, floor direction, room number, and area function.
  • Step 54 the user terminal obtains first geographic location information according to the first location information and the first site information;
  • the user terminal performs information completion and position correction on the first location information according to the obtained first site information, and obtains first geographic location information including the site information, that is, three-dimensional coordinate information.
  • Step 55 The user terminal generates the positioning information according to the first geographic location information and the device identification of the local machine, and sends the positioning information to the server.
  • the user terminal generates positioning information according to the obtained first geographic position information and the device identification of the local machine, and sends the positioning information to the server to complete the reporting of the position information once.
  • Likewise, the mapping relationship between the optical signal and the site information can be a mapping relationship among the optical signal, the site information, and the wireless signal.
  • the process of generating the positioning information by the robot further includes steps 61 to 65:
  • Step 61 The robot obtains the second position information of the location, collects the second optical signal of the environment, and receives the wireless signal of the environment to obtain a third wireless signal list;
  • If the robot collects the second optical signal, it obtains the second position information of its current position, and also needs to receive the wireless signals of the environment to obtain the third wireless signal list.
  • Step 62 the robot determines that the wireless signal with the strongest signal strength in the third wireless signal list is the third target signal
  • the wireless signal corresponding to the second optical signal should be the wireless signal with the strongest signal strength in the third wireless signal list.
  • the robot determines that the wireless signal with the strongest signal strength in the third wireless signal list is the third target signal, so as to find the wireless signal corresponding to the area where the second optical signal is located.
  • Step 63 the robot obtains the site information mapped by the second optical signal and the third target signal from the server;
  • the server or the cloud to which the server is connected pre-stores the above mapping relationship list.
  • After the robot collects the second optical signal and the third target signal in the environment, it can query the site information mapped to the second optical signal and the third target signal from the mapping relationship list stored in the server;
  • the site information corresponds to the second site information, including information such as longitude, latitude, building name, floor number, floor direction, room number, and area function.
  • Step 64 the robot obtains second geographic location information according to the second location information and the second site information;
  • the robot performs information completion and position correction on the second location information according to the obtained second site information, and obtains second geographic location information including the site information, that is, three-dimensional coordinate information.
  • Step 65 The robot generates the positioning information according to the second geographic location information and the device identification of the local machine, and sends the positioning information to the server.
  • the robot generates positioning information according to the obtained second geographic position information and the device identification of the machine, and sends the positioning information to the server to complete the reporting of the position information once.
  • S120 Update the positioning mark of the first device identifier in the shared map to the location indicated by the geographic location information, and send the updated shared map to all the robots and user terminals; the shared map includes positioning marks of the N robots and the M user terminals; the shared map is a three-dimensional coordinate map.
  • The shared map in the server is a three-dimensional electronic map, including three-dimensional coordinate information and site information; it can be generated from the parameters in the construction drawings of the operation scene of the multi-robot-multi-person collaborative system together with the maps obtained by robot surveying and mapping. Since the positioning information received by the server can be generated by any robot or any user terminal in the system, the server updates the positioning-mark positions of each user terminal and each robot on the shared map according to the received positioning information, realizing real-time tracking of the positions of every robot and user terminal in the shared map.
  • In this way, each robot and each user terminal in the system can know the positions of all robots and user terminals in the system, realizing location sharing between the robots and the user terminals; furthermore, the positions of multiple robots and multiple user terminals can be displayed simultaneously on the map of a user terminal or the map of a robot.
  • The construction of the shared map may be completed by controlling several robots in the multi-robot-multi-person collaborative system to map and scan the work scene. Specifically, before updating the positioning mark of the first device identifier in the shared map to the location indicated by the geographic location information and sending the updated shared map to all the robots and user terminals, the method further includes steps 71 to 74:
  • Step 71 Control a number of the robots to perform map mapping on the target scene, and perform laser scanning on the target scene to obtain map data and scan data;
  • Each robot is provided with map surveying and mapping equipment, such as a SLAM device; the SLAM device includes a main control chip and sensing and measurement devices, such as an odometer and an inertial sensor, connected to the main control chip.
  • When controlling a plurality of the robots to map the target scene, each robot performs map drawing by the SLAM method; in detail, each robot is controlled to roam in the target scene, and the main control chip of its SLAM device takes the data measured by each sensing and measurement device during the cruise, performs the SLAM computation, and builds a map of the environment where the robot is located, thereby obtaining the map data.
  • the target scene may be a work scene of a multi-robot-multi-person collaboration system.
  • Each robot is also equipped with a three-dimensional laser scanning device (such as a laser or radar scanner); when controlling several of the robots to map the target scene, the three-dimensional laser scanning device of each robot is switched on synchronously, and while each robot cruises in the target scene its 3D laser scanning device scans the contours and boundaries in the scene, such as those of walls and furnishings, recording the 3D coordinates, reflectivity, texture, and other information of a large number of dense points on the surfaces of the measured objects to obtain the scan data.
  • Step 72 establishing a building information model of the target scene according to the map data and the scan data;
  • According to the map data and the scan data, a building information model (BIM) of the target scene is established, thereby obtaining a three-dimensional building model.
  • Step 73 Obtain parameter information of the target scene; the parameter information includes building floor information, room number information and regional function division information;
  • the target scene usually has artificially specified site information, such as building name, number of floors, floor direction, room number, floor location, and regional functions.
  • the parameter information of the target scene can be obtained from architectural drawings or design blueprints, that is, information such as building floor information, room number information, and regional function division information.
  • Step 74: Mark the parameter information correspondingly in the building information model to obtain the shared map.
  • The parameter information is marked into the building information model so that, for example, every room in the model of the target scene is labeled with building floor information, room number information and regional function division information, as in the sketch below.
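  • A sketch of step 74, under the assumption that the model is available as a room-keyed dictionary; real BIM tooling would differ, and the room identifiers and labels here are invented.

    # Hypothetical in-memory stand-in for the building information model:
    # rooms keyed by an ID, to be annotated from the architectural drawings.
    bim_rooms = {"r1": {}, "r2": {}}

    parameter_info = {  # assumed extracted from the design blueprints
        "r1": {"floor": 1, "room_no": "101", "zone": "office"},
        "r2": {"floor": 1, "room_no": "102", "zone": "restroom"},
    }

    def annotate(bim, params):
        """Step 74: mark floor, room number and functional-zone labels
        onto the corresponding rooms of the model."""
        for room_id, labels in params.items():
            bim.setdefault(room_id, {}).update(labels)
        return bim  # the annotated model serves as the shared-map layer

    shared_map_layer = annotate(bim_rooms, parameter_info)
    print(shared_map_layer["r2"])
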
  • In another construction approach, the SLAM map obtained from the robots' pre-survey of the site is used as the first draft of the shared map; laser scanning of the walls and facilities in the scene then supplies the scan information from which the contours and boundaries on the shared map, such as those of walls and furnishings, are generated in the draft. The shared map is finally revised and completed according to the architectural drawing parameters of the site, turning it into a three-dimensional electronic map containing longitude, latitude, building name, number of floors, floor direction, room number, floor location and similar information. Compared with a map drawn entirely by hand, this greatly reduces the time needed to build the human-machine shared map and improves its accuracy.
  • To cope with changes in the target scene, such as rearranged furniture or indoor renovation, the robots can also laser-scan the indoor environment of the target scene periodically and update the shared map from the scan data in time, keeping the shared map current and accurate. Specifically, after the parameter information is marked into the building information model to obtain the shared map, the method further includes: controlling a number of the robots to laser-scan the target scene at regular intervals to obtain scene scan data, and updating the shared map according to the scene scan data, as sketched below.
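  • One way to schedule the periodic rescan is sketched below; Robot.laser_scan and SharedMap.merge_contours are invented stand-ins for the real robot client and map store, and the interval is arbitrary.

    import threading

    class Robot:  # stub standing in for a real robot client
        def laser_scan(self):
            return [("wall", (0.0, 0.0), (5.0, 0.0))]  # made-up contour segments

    class SharedMap:  # stub that just accumulates merged contours
        def __init__(self):
            self.contours = []
        def merge_contours(self, scan):
            self.contours.extend(scan)

    def periodic_rescan(robots, shared_map, interval_s=3600.0):
        """Collect a fresh laser scan from each robot, fold it into the
        shared map, then re-arm a timer so the pass repeats regularly."""
        for robot in robots:
            shared_map.merge_contours(robot.laser_scan())
        t = threading.Timer(interval_s, periodic_rescan,
                            args=(robots, shared_map, interval_s))
        t.daemon = True  # don't keep the process alive just for rescans
        t.start()

    m = SharedMap()
    periodic_rescan([Robot()], m)
    print(len(m.contours), "contour segments after the first pass")
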
  • the assistance request information includes an assistance request instruction and a second device identifier.
  • In an application scenario, a worker can use a user terminal at any location to generate assistance request information and send it to the server, which then generates an assistance operation strategy.
  • For example, when the system is deployed in a shopping mall and a staff member on patrol discovers a fire somewhere or finds an injured person requiring assistance, the staff member can select the assistance request instruction matching the incident on the user terminal, combine it with the terminal's own device identifier to generate the assistance request information, and send it to the server, e.g. as in the sketch below.
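  • A sketch of how a terminal might serialize the assistance request information; the JSON field names are illustrative, not prescribed by the patent.

    import json
    import time

    def make_assistance_request(device_id, instruction, targets=None):
        """Build the assistance request information: an assistance request
        instruction plus the (second) device identifier of the requester."""
        msg = {
            "type": "assistance_request",
            "instruction": instruction,  # e.g. "fire" or "injured_person"
            "device_id": device_id,      # identifier of the requesting terminal
            "timestamp": time.time(),
        }
        if targets:                      # optional explicitly chosen helpers
            msg["target_devices"] = list(targets)
        return json.dumps(msg)

    print(make_assistance_request("terminal-07", "fire"))
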
  • Because the shared map in the server tracks the positions of every robot and every user terminal in real time, the positioning coordinates, i.e., the location from which the assistance request instruction originated, can be obtained immediately from the positioning marker of the second device identifier contained in the assistance request information, improving the system's response rate.
  • The shared map in the user terminal likewise tracks the positions of every robot and user terminal in real time, so when a staff member generates assistance request information at some location, the terminal can directly display those positions from its own copy of the shared map and let the user select the target devices needed for assistance, for example the robots and/or user terminals nearest to that terminal on the shared map; the generated assistance request information then also carries the device identification information of the designated target devices. In that case, determining the robots and/or user terminals in the shared map that satisfy the preset condition as the target devices simply means taking the robots and/or user terminals whose identifiers are listed in the assistance request information, which completes the determination of the target devices required by the assistance request instruction.
  • To respond to the cooperative work instruction as quickly as possible, workers using other user terminals near the origin of the collaboration request (the positioning coordinates) must be notified in time, so the preset condition can be set to select a preset number of robots and/or user terminals other than the requesting terminal that are closest to the positioning coordinates. Determining the robots and/or user terminals in the shared map that satisfy the preset condition as the target devices then means selecting, in the shared map, the preset number of robots and/or user terminals nearest to the positioning coordinates.
  • The preset number may be fixed in advance or varied according to the type of the assistance request instruction; a nearest-k selection in this spirit is sketched below.
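  • A sketch of the nearest-preset-number rule over the shared-map markers, using three-dimensional Euclidean distance; the patent does not fix a distance metric, so this choice is an assumption.

    import heapq
    import math

    def nearest_targets(markers, origin, k, exclude=()):
        """Select the k robots/terminals whose positioning markers lie closest
        to the positioning coordinates of the request."""
        candidates = {d: p for d, p in markers.items() if d not in exclude}
        return heapq.nsmallest(k, candidates,
                               key=lambda d: math.dist(candidates[d], origin))

    markers = {"robot-01": (1.0, 0.0, 0.0),
               "robot-02": (9.0, 9.0, 0.0),
               "terminal-03": (2.0, 1.0, 0.0)}
    # terminal-03 is the requester, so it is excluded from its own rescue party
    print(nearest_targets(markers, origin=(2.0, 1.0, 0.0), k=2,
                          exclude={"terminal-03"}))  # ['robot-01', 'robot-02']
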
  • After determining the target devices required by the assistance request instruction, the server generates the cooperative work instruction from the positioning coordinates and the assistance request instruction and sends it to the determined target devices, so that the robots and/or the users of the user terminals that receive it proceed to the positioning coordinates and cooperate according to the instruction's control strategy. Robots and staff using user terminals are thus arranged in time to respond to the instruction and work cooperatively, as in the dispatch sketch below.
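  • A sketch of generating and fanning out the cooperative work instruction; the schema and the send() transport are illustrative, since the patent leaves both unspecified.

    def make_cooperative_instruction(positioning_coordinates, assistance_instruction):
        """Combine the positioning coordinates with the assistance request
        instruction into one cooperative work instruction (illustrative schema)."""
        return {"goto": positioning_coordinates,  # where helpers must converge
                "task": assistance_instruction}   # what to do on arrival

    def dispatch(instruction, target_devices, send):
        """Send the cooperative work instruction to every target device;
        send() abstracts the server-to-device transport."""
        for device_id in target_devices:
            send(device_id, instruction)

    log = []
    dispatch(make_cooperative_instruction((12.5, 3.0, 2.0), "fire"),
             ["robot-01", "terminal-02"],
             send=lambda d, i: log.append((d, i)))
    print(log)
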
  • In summary, the multi-robot-multi-person cooperative control method provided by this embodiment is applied to a multi-robot-multi-person cooperative system comprising N robots and M user terminals communicatively connected to a server, with N ≥ 1 and M ≥ 1. The server receives positioning information comprising geographic location information and a first device identifier, every user terminal and every robot having a unique device identifier; by regularly receiving the positioning information sent by each user terminal and each robot, the server grasps their position changes in real time. It updates the positioning marker of the first device identifier in the shared map, which contains the positioning markers of the N robots and the M user terminals, to the location given by the geographic location information, and sends the updated shared map to all robots and user terminals. On receiving assistance request information comprising an assistance request instruction and a second device identifier, the server obtains the positioning coordinates from that identifier's positioning marker in the shared map, so the requested location is found quickly and the system response rate improves; it determines the robots and/or user terminals in the shared map satisfying the preset condition as target devices, generates a cooperative work instruction from the positioning coordinates and the assistance request instruction, and sends the instruction to the target devices so that the corresponding robots and/or user-terminal users reach the positioning coordinates and work together. When a collaboration request instruction is initiated, robots and staff using user terminals are thus arranged in time to respond to it cooperatively.
  • An embodiment of the present invention further provides a multi-robot-multi-person cooperative control device, which includes a processor 601, a memory 602, and a computer program 603 stored in the memory 602 and executable on the processor 601, for example a program for the multi-robot-multi-person cooperative control method. When the processor 601 executes the computer program 603, the steps of the above method embodiment are implemented, for example steps S110 to S170 shown in FIG. 1.
  • The computer program 603 may be divided into one or more modules, which are stored in the memory 602 and executed by the processor 601 to complete the present application. The one or more modules may be series of computer program instruction segments capable of performing specific functions, the segments describing the execution process of the computer program 603 in the cooperative control device.
  • The computer program 603 can be divided into a positioning information receiving module, a positioning update module, an assistance request information receiving module, a positioning coordinate obtaining module, a target device determination module, a collaborative instruction generation module and an instruction sending module, whose specific functions are as follows:
  • a positioning information receiving module, for receiving positioning information; the positioning information includes geographic location information and a first device identifier; each user terminal and each robot has a unique device identifier;
  • a positioning update module, for updating the positioning marker of the first device identifier in the shared map to the location where the geographic location information is, and sending the updated shared map to all the robots and the user terminals;
  • the shared map includes the positioning markers of the N robots and the M user terminals; the shared map is a three-dimensional coordinate map; N ≥ 1; M ≥ 1;
  • an assistance request information receiving module, for receiving assistance request information; the assistance request information includes an assistance request instruction and a second device identifier;
  • a positioning coordinate obtaining module, for obtaining positioning coordinates from the positioning marker of the second device identifier in the shared map;
  • a target device determination module, for determining the robots and/or the user terminals in the shared map that satisfy the preset condition as target devices;
  • a collaborative instruction generation module, for generating a cooperative work instruction from the positioning coordinates and the assistance request instruction;
  • an instruction sending module, for sending the cooperative work instruction to the target devices, so that the robots and/or the users of the user terminals corresponding to the target devices reach the positioning coordinates for cooperative work. One way to picture this module split is sketched below.
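  • For illustration only, the module division can be pictured as methods on one class sharing the map state; the method names mirror the modules above, while everything else (data shapes, demo values) is invented.

    class CooperativeControlDevice:
        """Illustrative wiring of the modules above as methods on one class."""

        def __init__(self):
            self.shared_map = {}  # device_id -> (x, y, z) positioning marker

        def receive_positioning(self, msg):            # positioning information receiving
            return msg["device_id"], msg["position"]

        def update_positioning(self, device_id, pos):  # positioning update
            self.shared_map[device_id] = pos

        def positioning_coordinates(self, device_id):  # positioning coordinate obtaining
            return self.shared_map[device_id]

        def determine_targets(self, origin, k=1):      # target device determination
            sq = lambda p: sum((a - b) ** 2 for a, b in zip(p, origin))
            return sorted(self.shared_map, key=lambda d: sq(self.shared_map[d]))[:k]

    dev = CooperativeControlDevice()
    dev.update_positioning(*dev.receive_positioning(
        {"device_id": "robot-01", "position": (1.0, 2.0, 0.0)}))
    dev.update_positioning("terminal-02", (4.0, 4.0, 0.0))
    print(dev.determine_targets(origin=(0.0, 0.0, 0.0), k=1))  # ['robot-01']
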
  • the multi-robot-multi-person cooperative control device may include, but is not limited to, a processor 601 , a memory 602 and a computer program 603 stored in the memory 602 .
  • Fig. 6 is only an example of the multi-robot-multi-person cooperative control device and does not limit it; the device may include more or fewer components than shown, combine certain components, or use different components; for example, it may also include input/output devices, network access devices, buses and the like.
  • the processor 601 may be a central processing unit (CPU) or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • The memory 602 may be an internal storage unit of the cooperative control device, such as its hard disk or main memory. The memory 602 can also be an external storage device fitted to the device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card. Further, the memory 602 may include both an internal storage unit and an external storage device. The memory 602 is used to store the computer program as well as the other programs and data required by the cooperative control method.
  • the memory 602 may also be used to temporarily store data that has been output or will be output.
  • FIG. 7 is a schematic structural diagram of a multi-robot-multi-person collaboration system provided by Embodiment 3 of the present invention.
  • the system includes N robots 72 and M user terminals 73 communicatively connected to the server 71; N ≥ 1; M ≥ 1; wherein:
  • the multi-robot-multi-person cooperative system executes the steps of the multi-robot-multi-person cooperative control method described in the first embodiment.
  • So that every user terminal and robot in the multi-robot-multi-person cooperative system can receive visible light communication (VLC) signals, each user terminal and robot also carries a visible-light-communication system-on-chip (VLC SOC).
  • The visible light communication system-on-chip VLC SOC 8 implements VLC signal reception and encoding/decoding, and may include a power supply (DC) 110, a wireless data source 130, light-emitting diodes 150, photosensors 170, a DC-DC power converter 250, a wireless communication unit 240, a security unit 180 and a VLC unit 140, where the VLC unit 140 comprises a baseband DSP unit 220, an analog signal processing unit 230, a transmitter 140A and a receiver 140B.
  • In detail, the DC-DC power converter 250 draws power from the DC supply 110 and powers the elements in the SOC 8; the wireless communication unit 240 is connected to the external data source 130 and communicates with it to transmit/receive data for backhaul connectivity and/or system control; the VLC unit 140 is connected to the LEDs 150 and to one or more photosensors 170; and the security unit 180 is connected to the wireless communication unit 240 and the VLC unit 140.
  • The VLC unit 140 modulates the LEDs 150 to transmit information and/or receives information over VLC via the one or more photosensors 170. In one embodiment the SOC 8 supports both VLC transmission and reception, so it can serve in VLC sources (such as LED lamps, signs or other devices) and in VLC receivers (such as handheld devices); the SOC 8 can also support bidirectional VLC devices.
  • the VLC unit 140 includes a transmitter (TX) circuit 140A and a receiver (RX) circuit 140B.
  • The TX circuit 140A may comprise the baseband digital signal processing unit (baseband DSP unit) 220 and the analog signal processing unit 230 that drives the LED lamp 150.
  • RX circuit 140B may receive signals from photosensors 170 and detect VLC data in the received light.
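  • The patent does not name a modulation or line code for these TX/RX paths; purely for illustration, the sketch below round-trips a luminaire ID code through Manchester-coded on-off keying, a common choice for flicker-free LED signalling, over an ideal channel.

    def manchester_encode(bits):
        """Map each data bit to a light-level pair: 1 -> (on, off), 0 -> (off, on),
        which keeps the LED's average brightness constant while signalling."""
        out = []
        for b in bits:
            out += [1, 0] if b else [0, 1]
        return out

    def manchester_decode(levels):
        """Recover data bits from sampled light levels (ideal channel assumed)."""
        return [1 if (hi, lo) == (1, 0) else 0
                for hi, lo in zip(levels[::2], levels[1::2])]

    id_code = [1, 0, 1, 1, 0, 0, 1, 0]            # an 8-bit luminaire ID (made up)
    waveform = manchester_encode(id_code)          # what the TX path drives onto the LED
    assert manchester_decode(waveform) == id_code  # what the RX path recovers
    print("round-trip OK:", waveform)
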
  • VLC unit 140 and wireless communication unit 240 may be used in conjunction to communicate information to a device.
  • The VLC unit 140, for example, can transmit a Quick Response code (QR code), a Uniform Resource Locator (URL) or other data that can be used to access a larger body of information, which is then carried via the wireless communication unit 240 over a wireless or wired network; one possible hand-off between the two paths is sketched below.
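  • A sketch, under the assumption that the short VLC payload is either a literal string or a URL; fetch is a placeholder for whatever network client the wireless side provides, and the URL and data are invented.

    def handle_vlc_payload(payload, fetch):
        """If the short VLC payload is a URL, pull the larger resource over
        the wireless/wired network; otherwise use the payload directly."""
        if payload.startswith(("http://", "https://")):
            return fetch(payload)  # bulk data arrives via the network side
        return payload             # small data (e.g. a QR string) used as-is

    fake_web = {"https://example.invalid/map": "<large floor-map blob>"}
    print(handle_vlc_payload("https://example.invalid/map", fetch=fake_web.get))
    print(handle_vlc_payload("HELLO-VLC", fetch=fake_web.get))
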
  • The security unit 180 may secure network and/or VLC data access; e.g., it may include cryptographic hardware for encrypting data transmitted over VLC and/or decrypting data received over VLC, so that sensitive data can be delivered only to specific users while other recipients of the same VLC transmission cannot decrypt it. Likewise, data to be transmitted over the network, and data received from the network (e.g., data to be retransmitted over VLC), may be encrypted by the security unit 180. A sketch of such payload protection follows below.
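  • The patent speaks only of cryptographic hardware, so the software sketch below, using AES-GCM from the widely available Python cryptography package, is purely illustrative of protecting a VLC payload.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

    key = AESGCM.generate_key(bit_length=128)  # shared only with authorized receivers
    aead = AESGCM(key)

    def protect(vlc_payload: bytes):
        """Encrypt data before it is modulated onto the light channel."""
        nonce = os.urandom(12)  # 96-bit nonce, fresh per message
        return nonce, aead.encrypt(nonce, vlc_payload, None)

    def unprotect(nonce: bytes, ciphertext: bytes) -> bytes:
        """Only receivers holding the key can recover the sensitive data."""
        return aead.decrypt(nonce, ciphertext, None)

    nonce, ct = protect(b"room 201: restricted telemetry")
    print(unprotect(nonce, ct))
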
  • the photosensor 170 may be any kind of photosensor, for example, the photosensor may include a photodetector or a CMOS image sensor. Photodetectors can be used for high bandwidth/data rate communications, while CMOS image sensors can be used for low bandwidth/data rate communications. A given system may include one or more types of photosensors 170 . Other photosensors may be used in other embodiments.
  • The LEDs 150 may be of any type. In one embodiment they are numerous low-cost standard LEDs; the combination of inexpensive LEDs 150 with the cost savings of the system-on-chip (compared with discrete components) makes VLC easier for the market to accept, and over time the SOC 8 can also ride Moore's law toward lower cost and higher performance. VLC can further be combined with low-cost wireless/wired networks. In one embodiment the LEDs 150 are organic LEDs (OLEDs).
  • The power supply may be direct current (DC) or alternating current (AC), and power may arrive over a dedicated power cord or together with data (e.g., via Power over Ethernet, PoE). Data can likewise be sent/received through wireless or wired communication systems.

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Automation & Control Theory (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Manipulator (AREA)

Abstract

A multi-robot-multi-person cooperative control method, device and system, belonging to the technical field of multi-robot cooperation. The method comprises: receiving positioning information; updating the positioning marker of a first device identifier in a shared map to the location given by the geographic location information, and sending the updated shared map to all robots and user terminals; receiving assistance request information; obtaining positioning coordinates from the positioning marker of a second device identifier in the shared map; determining the robots and/or user terminals in the shared map that satisfy a preset condition as target devices; generating a cooperative work instruction from the positioning coordinates and the assistance request instruction; and sending the cooperative work instruction to the target devices. This solves the problem that, when a collaboration request instruction is initiated, robots and staff using user terminals cannot be arranged in time to respond to the instruction and work cooperatively.

Description

多机器人-多人协作控制方法、装置及系统 技术领域
本发明涉及多机器人协作的技术领域,尤其涉及一种多机器人-多人协作控制方法、装置及系统。
背景技术
机器人技术的不断发展与进步,单个移动机器人已经难以通过自身完成复杂繁琐的工作任务,难以完成生产实践的工作指标,人们开始迫切需要研究新的方向来满足机械领域中的实际需要,于是多个机器人的团队进入了研究领域的视野中。多个移动机器人组成多机器人协作系统(Multi-Robot System,MRS)。相比较单个机器人而言,多机器人协作系统中的机器人可以通过中央控制器和自身的协调系统重新规划工作任务适应环境,因此具有较强的容错能力和鲁棒性。多个机器人同时工作,可以提高工作效率,增强了工作的配合能力与工作任务指标。
但市面上目前的多机器人协作系统中的机器人,一般是按照固定路线进行点对点的运动,加上防碰撞功能,例如送餐机器人或扫地机器人等;且控制多机器人协作系统中的机器人的服务器能获取系统中各个机器人的位置信息,却无法主动获取与服务器连接的用户终端的位置信息,导致当工作人员使用用户终端发布协作请求指令时,需要临时获取请求发起终端的位置信息,也无法及时通知距协作请求指令发起地点较近的使用其它用户终端的工作人员赶往现场进行协同工作。
发明内容
有鉴于此,本发明实施例提供了一种多机器人-多人协作控制方法、装置及系统,以解决协作请求指令发起时无法及时安排机器人与使用用户终端的工作人员一起响应指令进行协同作业的问题。
本发明实施例的第一方面提供了一种多机器人-多人协作控制方法,所述方法应用于多机器人-多人协作系统;所述系统包括与服务器通信连接的N个机器人和M个用户终端;N≥1;M≥1;以所述服务器为执行主体,所述方法包括:
接收定位信息;所述定位信息包括地理位置信息和第一设备标识;每一所述用户终端以及每一所述机器人均具有唯一的设备标识;
将所述第一设备标识在共享地图中的定位标记更新为所述地理位置信息所处的位置,并将更新后的共享地图发送给所有所述机器人和所述用户终端;所述共享地图包含N个所述机器人和M个所述用户终端的定位标记;所述共享地图为三维坐标地图;
接收协助请求信息;所述协助请求信息包括协助请求指令和第二设备标识;
根据所述共享地图中所述第二设备标识的定位标记,得到定位坐标;
确定所述共享地图中满足预设条件的所述机器人和/或所述用户终端为目标装置;
根据所述定位坐标和所述协助请求指令生成协同工作指令;
向所述目标装置发送所述协同工作指令,以使所述目标装置对应的机器人和/或用户终端的使用者到达所述定位坐标进行协同工作。
在一个实施示例中,在所述接收定位信息之前,还包括:
所述用户终端获取所在位置的第一位置信息,并采集所处环境的第一光信号;所述第一位置信息为三维位置信息;所述光信号为可见光通信信号;
所述用户终端从所述服务器中获取所述第一光信号映射的第一场地信息;所述第一场地信息包括建筑楼层信息、房间号信息和区域功能划分信息;
所述用户终端根据所述第一位置信息和所述第一场地信息得到第一地理位置信息;
所述用户终端根据所述第一地理位置信息和本机的设备标识生成所述定位信息,并将所述定位信息发送至所述服务器。
在一个实施示例中,在所述接收定位信息之前,还包括:
所述机器人获取所在位置的第二位置信息,并采集所处环境的第二光信号;所述第二位置信息为三维位置信息;所述光信号为可见光通信信号;
所述机器人从所述服务器中获取所述第二光信号映射的第二场地信息;所述第二场地信息包括建筑楼层信息、房间 号信息和区域功能划分信息;
所述机器人根据所述第二位置信息和所述第二场地信息得到第二地理位置信息;
所述机器人根据所述第二地理位置信息和本机的设备标识生成所述定位信息,并将所述定位信息发送至所述服务器。
在一个实施示例中,在所述接收定位信息之前,还包括:
所述机器人若采集到第三光信号,则获取所在位置的位置坐标信息;所述位置坐标信息包括位置坐标和第三场地信息;所述第三场地信息包括建筑楼层信息、房间号信息和区域功能划分信息;
所述机器人构建所述第三光信号的编码信息与所述位置坐标信息的映射关系;
所述机器人将所述映射关系发送至所述服务器进行存储。
在一个实施示例中,N个所述机器人和M个所述用户终端所处的场景中设有若干可见光光源;每一所述可见光光源具有唯一的编码标识;各个所述可见光光源产生光信号包含各个所述可见光光源对应的编码标识信息。
在一个实施示例中,所述确定所述共享地图中满足预设条件的所述机器人和/或所述用户终端为目标装置,包括:
在所述共享地图中确定距所述定位坐标最近的预设个数的所述机器人和/或所述用户终端为目标装置。
在一个实施示例中,所述用户终端获取所在位置的第一位置信息,并采集所处环境的第一光信号,包括:
所述用户终端通过同步定位与建图方法获取所在位置的第一位置信息;
所述用户终端通过光电探测器采集所处环境的第一光信号。
在一个实施示例中,所述机器人获取所在位置的第二位置信息,并采集所处环境的第二光信号,包括:
所述机器人通过同步定位与建图方法获取所在位置的第二位置信息;
所述机器人通过光电探测器采集所处环境的第二光信号。
在一个实施示例中,所述机器人若采集到第三光信号,则获取所在位置的位置坐标信息,包括:
所述机器人若采集到第三光信号,则接收所在环境的无线信号得到第一无线信号列表,并获取所在位置的位置坐标信息;所述无线信号列表包括接收到的各个无线信号的信号强度;
所述机器人确定所述第一无线信号列表中信号强度最强的无线信号为第一目标信号;
所述机器人构建所述第三光信号的编码信息与所述位置坐标信息的映射关系,包括:
所述机器人构建所述第三光信号的编码信息与所述位置坐标信息以及所述第一目标信号之间的映射关系。
在一个实施示例中,所述用户终端获取所在位置的第一位置信息,并采集所处环境的第一光信号,还包括:
所述用户终端获取所在位置的第一位置信息,采集所处环境的第一光信号,并接收所在环境的无线信号得到第二无线信号列表;
所述用户终端从所述服务器中获取所述第一光信号映射的第一场地信息,包括:
所述用户终端确定所述第二无线信号列表中信号强度最强的无线信号为第二目标信号;
所述用户终端从所述服务器中获取所述第一光信号和所述第二目标信号映射的场地信息。
在一个实施示例中,所述机器人获取所在位置的第二位置信息,并采集所处环境的第二光信号,还包括:
所述机器人获取所在位置的第二位置信息,采集所处环境的第二光信号,并接收所在环境的无线信号得到第三无线信号列表;
所述机器人从所述服务器中获取所述第二光信号映射的第二场地信息,包括:
所述机器人确定所述第三无线信号列表中信号强度最强的无线信号为第三目标信号;
所述机器人从所述服务器中获取所述第二光信号和所述第三目标信号映射的场地信息。
在一个实施示例中,在将所述第一设备标识在共享地图中的定位标记更新为所述地理位置信息所处的位置,并将更新后的共享地图发送给所有所述机器人和所述用户终端之前,还包括:
控制若干所述机器人对目标场景进行地图测绘,并对所述目标场景进行激光扫描,得到地图数据和扫描数据;
根据所述地图数据和所述扫描数据建立所述目标场景的建筑信息模型;
获取所述目标场景的参数信息;所述参数信息包括建筑楼层信息、房间号信息和区域功能划分信息;
将所述参数信息对应标注在所述建筑信息模型中,得到所述共享地图。
在一个实施示例中,在将所述参数信息对应标注在所述建筑信息模型中,得到所述共享地图之后,还包括:
控制若干所述机器人定时对所述目标场景进行激光扫描,得到场景扫描数据;
根据所述场景扫描数据更新所述共享地图。
本发明实施例的第二方面提供了一种多机器人-多人协作控制装置,包括:
定位信息接收模块,用于接收定位信息;所述定位信息包括地理位置信息和第一设备标识;每一用户终端以及每一 机器人均具有唯一的设备标识;
定位更新模块,用于将所述第一设备标识在共享地图中的定位标记更新为所述地理位置信息所处的位置,并将更新后的共享地图发送给所有所述机器人和所述用户终端;所述共享地图包含N个所述机器人和M个所述用户终端的定位标记;所述共享地图为三维坐标地图;N≥1;M≥1;
协助请求信息接收模块,用于接收协助请求信息;所述协助请求信息包括协助请求指令和第二设备标识;
定位坐标获取模块,用于根据所述共享地图中所述第二设备标识的定位标记,得到定位坐标;
目标装置确定模块,用于确定所述共享地图中满足预设条件的所述机器人和/或所述用户终端为目标装置;
协同指令生成模块,用于根据所述定位坐标和所述协助请求指令生成协同工作指令;
指令发送模块,用于向所述目标装置发送所述协同工作指令,以使所述目标装置对应的机器人和/或用户终端的使用者到达所述定位坐标进行协同工作。
本发明实施例的第三方面提供了一种多机器人-多人协作系统,所述系统包括与服务器通信连接的N个机器人和M个用户终端;N≥1;M≥1;其中,
所述多机器人-多人协作系统实现如第一方面所述的多机器人-多人协作控制方法的步骤。
本发明实施例提供的一种多机器人-多人协作控制方法、装置及系统,应用于多机器人-多人协作系统;所述系统包括与服务器通信连接的N个机器人和M个用户终端;N≥1;M≥1;服务器接收定位信息;所述定位信息包括地理位置信息和第一设备标识;每一所述用户终端以及每一所述机器人均具有唯一的设备标识;通过定时接收各个用户终端和各个机器人发送的定位信息,能够实时掌握协作系统中各个用户终端和各个机器人的位置变动。将所述第一设备标识在共享地图中的定位标记更新为所述地理位置信息所处的位置,并将更新后的共享地图发送给所有所述机器人和所述用户终端;服务器根据接收到的各个用户终端和各个机器人发送的定位信息后更新共享地图上各个用户终端和各个机器人的定位标记位置,实现在共享地图中各个机器人和各个用户终端的位置的实时跟踪。所述共享地图包含N个所述机器人和M个所述用户终端的定位标记。接收协助请求信息;所述协助请求信息包括协助请求指令和第二设备标识;根据所述共享地图中所述第二设备标识的定位标记,得到定位坐标;服务器接收到任一用户终端或机器人发送的协助请求信息后,能够从共享地图中快速查找到请求发送的位置得到定位坐标,提高系统响应速率;确定所述共享地图中满足预设条件的所述机器人和/或所述用户终端为目标装置;根据所述定位坐标和所述协助请求指令生成协同工作指令;向所述目标装置发送所述协同工作指令,以使所述目标装置对应的机器人和/或用户终端的使用者到达所述定位坐标进行协同工作。实现协作请求指令发起时,及时安排机器人与使用用户终端的工作人员一起响应指令进行协同作业。
附图说明
为了更清楚地说明本发明实施例中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本发明的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是本发明实施例一提供的多机器人-多人协作控制方法的流程示意图;
图2是本发明实施例一提供的多机器人-多人协作系统的结构示意图;
图3是本发明实施例一提供的多机器人-多人协作工作场景的示意图;
图4是本发明实施例一提供的机器人检测光信号构建新地图的示意图;
图5是本发明实施例一提供的光信号与无线信号结合的示意图;
图6是本发明实施例二提供的多机器人-多人协作控制装置的结构示意图;
图7是本发明实施例三提供的多机器人-多人协作系统的结构示意图;
图8是本发明实施例三提供的可见光通信系统级芯片VLCSOC的结构示意图。
具体实施方式
为了使本技术领域的人员更好地理解本发明方案,下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚地描述,显然,所描述的实施例是本发明一部分的实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都应当属于本发明保护的范围。
本发明的说明书和权利要求书及上述附图中的术语“包括”以及它们任何变形,意图在于覆盖不排他的包含。例如包含一系列步骤或单元的过程、方法或系统、产品或设备没有限定于已列出的步骤或单元,而是可选地还包括没有列出的步骤或单元,或可选地还包括对于这些过程、方法、产品或设备固有的其它步骤或单元。此外,术语“第一”、“第 二”和“第三”等是用于区别不同对象,而非用于描述特定顺序。
多机器人协助系统中的机器人通常以固定的路线进行点对点的运动,且服务器无法获知与多机器人协助系统连接的用户终端的位置信息。当用户终端向系统发起协助请求指令时,需要临时获取指令发起设备的地理位置,导致系统响应延迟;也无法及时通知距协作请求指令发起地点较近的使用其它用户终端的工作人员赶往现场进行协同工作。为解决这一问题,本实施例中服务器根据接收到的各个用户终端和各个机器人发送的定位信息后更新共享地图上各个用户终端和各个机器人的定位标记位置,实现在共享地图中各个机器人和各个用户终端的位置的实时跟踪;服务器接收到任一用户终端或机器人发送的协助请求信息后,能够从共享地图中快速查找到请求发送的位置得到定位坐标,及时安排共享地图中的机器人与使用用户终端的工作人员一起响应指令进行协同作业。
实施例一
如图1所示,是本发明实施例一提供的多机器人-多人协作控制方法的流程示意图。具体地,本实施例可适用于多机器人-多人协作系统进行协同作业的应用场景,该方法应用于多机器人-多人协作系统;所述系统包括与服务器通信连接的N个机器人和M个用户终端;N≥1;M≥1;该方法由服务器作为执行主体执行,该服务器可为由多个服务器架设的网络云端平台等;在本申请实施例中以服务器作为执行主体进行说明,该方法具体包括如下步骤:
S110、接收定位信息;所述定位信息包括地理位置信息和第一设备标识;每一所述用户终端以及每一所述机器人均具有唯一的设备标识。
如图2所示,在多机器人-多人(Multi-Robot Multi-Human(MRMH))协作系统中,除了至少一个机器人10与服务器20连接以外,至少一个用户终端30也与服务器20连接,实现人机共享控制,使得工作人员能够通过用户终端向服务器提出控制策略。系统中的服务器20为主控服务器用于控制系统中的所有机器人,获取机器人当前位置、环境信息、处理数据和制定策略等,同时还能够获取与服务器通信连接的用户终端的当前位置信息和处理数据等;该服务器还可与存储数据的云端40进行通信连接。详细地,该服务器可为协调、控制和协作服务器(Coordination,Control,and Collaboration Server(C3S));该服务器维护着每个用户终端和机器人的位置,是接收所有移动和固定终端(包括机器人、用户终端)位置的集中网关。服务器能够向机器人发送指令,以执行各种操作,如移动、转弯、到特定位置、操纵机器人机械手和收集传感器数据等。此外,C3S还可以有一个用户界面(UI),供操作人员在用户终端上通过用户界面来控制机器人,并与其它用户终端的操作人员进行协同作业。可选的,该系统中若包括多个与服务器通信连接的机器人,则多个机器人可为同类型的移动机器人或具有不同功能的不同类型的移动机器人;且用户终端可为移动智能终端、平板或智能手表等智能终端。
为获知与服务器通信连接的N个机器人和M个用户终端的位置,系统中各个机器人和各个用户终端均向服务器发送定位信息。服务器接收到的定位信息可由系统中任一机器人或任一用户终端生成;定位信息中包括发送该定位信息的设备的地理位置信息和第一设备标识,每一所述用户终端以及每一所述机器人均具有唯一的设备标识,使得服务器接收到定位信息时根据第一设备标识确定地理位置信息所属的设备。具体的,由于系统中机器人和携带用户终端的使用者在执行任务时位置会发生移动,为实现对与服务器通信连接的N个机器人和M个用户终端的位置的实时跟踪,系统中各个机器人和各个用户终端还可根据预设时间间隔定时向服务器发送定位信息,使得服务器能够持续接收到各个机器人和各个用户终端的定位信息。各个机器人和用户终端为实现所在位置的地理位置信息的获取,机器人和用户终端中均设有定位导航装置以及各种定位导航所需的传感器(例如里程计和惯性传感器等)。可选的,该预设时间间隔可为0.1-0.5秒。
在一个实施示例中,服务器接收到的定位信息由多机器人-多人协作系统中任一用户终端生成并发送。为实现对多机器人-多人协作系统中各个用户终端的位置跟踪,各个用户终端会以预设时间间隔定时对本机所处的位置进行定位,各个用户终端根据定位获得的位置信息并结合采集到的可见光通信信号获得更准确的三维位置信息,根据三维位置信息生成定位信息发送至服务器,完成位置的上报。因此,在服务器接收定位信息之前,还包括各个用户终端生成定位信息的过程。具体的,以多机器人-多人协作系统中任一个用户终端为例,该用户终端生成定位信息的过程包括步骤11至步骤14:
步骤11、所述用户终端获取所在位置的第一位置信息,并采集所处环境的第一光信号;所述第一位置信息为三维位置信息;所述光信号为可见光通信信号;
多机器人-多人协作系统的工作场景多数为复杂场景,例如多楼层的高楼内部或设有大型设备的区域等,其中,多楼层的高楼内部每层楼内的地形相同,而在大楼内工作的机器人或工作人员携带的用户终端获取的三维定位信息无法反映场地信息(例如楼层数、房间号或区域功能等),导致定位不清楚。为解决这一技术问题,可预先在工作场景中合理设置多个携带有编码标识的电子标记,各个电子标记对应的编码标识与其所在的位置信息(例如所在经度、纬度、建筑物名称、楼层数、所在楼层方向、房间号、楼层位置和区域功能等信息)唯一映射,且服务器或服务器所连接的云端预先存储有上述映射关系列表;使得机器人或用户终端移动到电子标记所在位置扫描电子标记解码得到编码标识时,就能通 过服务器获取到扫描的电子标记对应的场地信息,将得到的场地信息与获取到的三维定位信息相结合,就能得到更为准确的三维信息,增加定位的准确度。可选的,可通过预设的智能安装程序控制多个机器人完成工作场景中多个携带有编码标识的电子标记(例如LED灯或信标等)的安装。可选的,上述电子标记的存在形式可为QR(Quick Response)码或将编码标识以光通信(Visible Light Communication,VLC)信号发出的可见光光源(例如LED灯或信标等)。如图3所示,在本实施示例中,多机器人-多人协作系统中N个所述机器人301和M个所述用户终端302所处的场景中设有若干可见光光源303;每一所述可见光光源具有唯一的编码标识(IDcode);各个所述可见光光源产生光信号包含各个所述可见光光源对应的编码标识信息。
因此,为获得更准确的定位信息,用户终端通过设有的定位装置获取当前所在位置的第一位置信息,还通过信号采集装置采集所处环境的第一光信号。由于定位装置获取的位置信息多为三维定位信息,所以用户终端获取得到的第一位置信息为三维位置信息。由可见光光源产生的第一光信号为可见光通信(VLC)信号。
在一个实施示例中,用户终端内可设有同步定位与建图(simultaneous localization and mapping(SLAM))装置和光电探测器,以便于用户终端通过同步定位与建图方法实现所在位置的第一位置信息的获取,以及通过光电探测器实现所处环境的第一光信号的采集。具体的,以多机器人-多人协作系统中任一个用户终端为例,用户终端获取所在位置的第一位置信息,并采集所处环境的第一光信号的具体过程包括:所述用户终端通过同步定位与建图方法获取所在位置的第一位置信息;所述用户终端通过光电探测器采集所处环境的第一光信号。
详细的,用户终端内设有的SLAM装置包括主控芯片和与该主控芯片连接的里程计和惯性传感器等传感测量器件。在用户终端通过同步定位与建图方法获取所在位置的第一位置信息时,SLAM装置的主控芯片获取各个传感测量器件测得的数据进行同步定位与建图方法的计算,得到定位和建立本机所在环境的地图,即获取到的所在位置的第一位置信息包括定位坐标和构建的地图。可选的,建立的地图可为栅格地图(Occupancy Grid Map)。
由于,可见光光源产生的VLC信号为光信号,用户终端通过光电探测器采集到所处环境的第一光信号。
步骤12、所述用户终端从所述服务器中获取所述第一光信号映射的第一场地信息;所述第一场地信息包括建筑楼层信息、房间号信息和区域功能划分信息;
由于在工作场景中设置的多个携带有编码标识的可见光光源对应的编码标识与其所在的位置信息(例如所在经度、纬度、建筑物名称、楼层数,所在楼层方向、房间号、楼层位置、区域功能等信息)唯一映射,且服务器或服务器所连接的云端预先存储有上述映射关系列表。当用户终端采集到所处环境的第一光信号后,可从服务器中存储的映射关系列表中查询到第一光信号映射的第一场地信息。该第一场地信息包括纬度、建筑物名称、楼层数,所在楼层方向、房间号和区域功能等信息。详细的,该区域功能可根据使用者划分(例如员工专用区或公用区);或者根据实现功能划分为功能区(例如厕所、办公室等)。
步骤13、所述用户终端根据所述第一位置信息和所述第一场地信息得到第一地理位置信息;
用户终端根据得到的第一场地信息对第一位置信息进行信息补全和位置校正,得到包含场地信息的第一地理位置信息,即三维坐标信息。
步骤14、所述用户终端根据所述第一地理位置信息和本机的设备标识生成所述定位信息,并将所述定位信息发送至所述服务器。
用户终端根据得到的第一地理位置信息和本机的设备标识生成定位信息,并将定位信息发送至所述服务器,完成一次位置信息的上报。
在一个实施示例中,服务器接收到的定位信息由多机器人-多人协作系统中任一机器人生成并发送。为实现对多机器人-多人协作系统中各个机器人的位置跟踪,各个机器人会以预设时间间隔定时对本机所处的位置进行定位,各个机器人根据定位获得的位置信息并结合采集到的可见光通信信号获得更准确的三维位置信息,根据三维位置信息生成定位信息发送至服务器,完成位置的上报。因此,在服务器接收定位信息之前,还包括各个机器人生成定位信息的过程。具体的,以多机器人-多人协作系统中任一个机器人为例,该机器人生成定位信息的过程包括步骤21至步骤24:
步骤21、所述机器人获取所在位置的第二位置信息,并采集所处环境的第二光信号;所述第二位置信息为三维位置信息;所述光信号为可见光通信信号;
在多机器人-多人协作系统的工作场景中合理设置多个携带有编码标识的电子标记,各个电子标记对应的编码标识与其所在的位置信息(例如所在经度、纬度、建筑物名称、楼层数、所在楼层方向、房间号、楼层位置和区域功能等信息)唯一映射,且服务器或服务器所连接的云端预先存储有上述映射关系列表;使得机器人或用户终端移动到电子标记所在位置扫描电子标记解码得到编码标识时,就能通过服务器获取到扫描的电子标记对应的场地信息,将得到的场地信息与获取到的三维定位信息相结合,就能得到更为准确的三维信息,增加定位的准确度。可选的,上述电子标记的存在形式 可为QR(QuickResponse)码或将编码标识以光通信(Visible Light Communication,VLC)信号发出的可见光光源(例如LED灯、VLC灯或信标等)。如图3所示,在本实施示例中,多机器人-多人协作系统中N个所述机器人301和M个所述用户终端302所处的场景中设有若干可见光光源303;每一所述可见光光源具有唯一的编码标识(IDcode);各个所述可见光光源产生光信号包含各个所述可见光光源对应的编码标识信息。
因此,为获得更准确的定位信息,机器人通过设有的定位装置获取当前所在位置的第二位置信息,还通过信号采集装置采集所处环境的第二光信号。由于定位装置获取的位置信息多为三维定位信息,所以机器人获取得到的第二位置信息为三维位置信息。由可见光光源产生的第二光信号为可见光通信(VLC)信号。
在一个实施示例中,机器人内可设有同步定位与建图(simultaneous localization andmapping(SLAM))装置和光电探测器,以便于机器人通过定位装置实现所在位置的第二位置信息的获取,以及通过光电探测器实现所处环境的第二光信号的采集。具体的,以多机器人-多人协作系统中任一个机器人为例,机器人获取所在位置的第二位置信息,并采集所处环境的第二光信号的具体过程包括:所述机器人通过同步定位与建图方法获取所在位置的第二位置信息;所述机器人通过光电探测器采集所处环境的第二光信号。
详细的,机器人内设有的SLAM装置包括主控芯片和与该主控芯片连接的里程计和惯性传感器等传感测量器件。在机器人通过同步定位与建图方法获取所在位置的第二位置信息时,SLAM装置的主控芯片获取各个传感测量器件测得的数据进行同步定位与建图方法的计算,得到定位和建立本机所在环境的地图,即获取到的所在位置的第二位置信息包括定位坐标和构建的地图。可选的,建立的地图可为栅格地图(Occupancy Grid Map)。
由于,可见光光源产生的VLC信号为光信号,机器人通过光电探测器采集到所处环境的第二光信号。
步骤22、所述机器人从所述服务器中获取所述第二光信号映射的第二场地信息;所述第二场地信息包括建筑楼层信息、房间号信息和区域功能划分信息;
由于在工作场景中设置的多个携带有编码标识的可见光光源对应的编码标识与其所在的位置信息(例如所在经度、纬度、建筑物名称、楼层数,所在楼层方向、房间号、楼层位置和、区域功能等信息)唯一映射,且服务器或服务器所连接的云端预先存储有上述映射关系列表。当机器人采集到所处环境的第二光信号后,可从服务器中存储的映射关系列表中查询到第二光信号映射的第二场地信息。该第二场地信息包括纬度、建筑物名称、楼层数、所在楼层方向、房间号等信息和区域功能。详细的,该区域功能可根据使用者划分(例如员工专用区或公用区);或者根据实现功能划分为功能区(例如厕所、办公室等)。
步骤23、所述机器人根据所述第二位置信息和所述第二场地信息得到第二地理位置信息;
机器人根据得到的第二场地信息对第二位置信息进行信息补全和位置校正,得到包含场地信息的第二地理位置信息,即三维坐标信息。
步骤24、所述机器人根据所述第二地理位置信息和本机的设备标识生成所述定位信息,并将所述定位信息发送至所述服务器。
机器人根据得到的第二地理位置信息和本机的设备标识生成定位信息,并将定位信息发送至所述服务器,完成一次位置信息的上报。
在一种实施示例中,服务器内存储的不同光(VLC)信号与场地信息之间的映射关系为从多机器人-多人协作系统的作业场景的施工建设图纸中提取得到或提前存储在云端中。详细举例说明,若多机器人-多人协作系统的作业场景为整栋大楼,则场地信息可对应为这栋大楼的楼层信息、房间号信息和区域功能信息。若光信号均由安装在该大楼内的可见光光源(例如LED灯、信标等)发出,则该大楼对应的施工建设图纸中具有各个可见光光源的详细位置信息,该位置信息包括所在楼层数以及在楼层内的位置坐标等。因此,通过从作业场景的施工建设图纸中提取生成光信号的可见光光源的位置信息,建立各个可见光光源生成的光信号与各个可见光光源的位置信息之间的映射关系,并生成映射关系列表存储在服务器中或云端,就能够得到不同光信号与场地信息之间的映射关系。具体的,每盏可见光光源的VLC编码标识(ID)与它映射的物理位置信息(包括建筑物的地图中的经度、纬度、建筑物、楼层、房间号和区域功能信息等)一起存储在云端。
在另一种实施示例中,当系统的工作场景中的电子标记为首次安装,还无图纸数据记录时,可利用机器人的同步定位与建图功能完成新地图的测绘以及在新地图中标记电子标记的位置。服务器内存储的不同光信号与场地信息之间的映射关系可由多机器人-多人协作系统中的机器人构建。具体的,在系统中任一机器人或用户终端从所述服务器中获取光信号映射的场地信息之前,光信号与场地信息之间的映射关系的构建过程具体包括步骤31至步骤33:
步骤31、所述机器人若采集到第三光信号,则获取所在位置的位置坐标信息;所述位置坐标信息包括位置坐标和第三场地信息;所述第三场地信息包括建筑楼层信息、房间号信息和区域功能划分信息;
当系统的工作场景中的电子标记为首次安装,还无图纸数据记录时,若使用人工方式进行记录标记需要耗费大量的 工程时间,而且成本较高。为解决这个技术问题,可通过系统中多个机器人在新地图上巡游并同时检测光信号,当机器人采集到第三光信号时,获取当前机器人所在位置的位置坐标信息,使得获取到的位置坐标信息即为产生第三光信号的电子标记的位置。具体的,机器人内设有的SLAM装置包括主控芯片和与该主控芯片连接的里程计和惯性传感器等传感测量器件。若采集到第三光信号,机器人通过同步定位与建图方法获取当前所在位置的位置坐标信息时,SLAM装置的主控芯片获取各个传感测量器件测得的数据进行同步定位与建图方法的计算,得到定位和建立本机所在环境的地图,即获取到的所在位置的位置坐标信息包括位置坐标和构建的新地图。同时,机器人还能根据场景的设计图纸获取到所在位置的建筑楼层信息、房间号信息和区域功能划分信息得到第三场地信息,使得位置坐标信息还包括第三场地信息。
步骤32、所述机器人构建所述第三光信号的编码信息与所述位置坐标信息的映射关系;
机器人根据采集到的第三光信号解码得到产生第三光信号的电子标记对应的编码标识号,构建采集到的第三光信号的编码标识号与得到的位置坐标信息的映射关系。具体在新地图的呈现形式为,建立的新地图可为栅格地图(Occupancy Grid Map)包含检测到的电子标记;在栅格地图中不同的像数值来代表具有障碍物的区域(像数值为1),自由区域(像数值为0),以及电子标记区域(像素值为-1)。详细地,具有电子标记的占据栅格地图为pgm图片格式,并记录了地图的起始点的位置,地图的方向,图像的像素与物理距离的分辨率,电子标记的位置,每个电子标记位置对应的编码标识ID,障碍物的位置。
步骤33、所述机器人将所述映射关系发送至所述服务器进行存储。
机器人构建所述第三光信号的编码信息与所述位置坐标信息的映射关系后,将该映射关系发送至所述服务器进行存储。详细举例说明,如图4所示,以系统中其中一个机器人为例,机器人检测光信号构建新地图的示意图。机器人被放置在数字地图上一个已知位置和方向的起点。可选的,该起点可采用电子标记(例如VLC信号)进行标定。当机器人开始移动时,利用传感器(IMU/LiDAR/Time-of-Flight)为基础的SLAM装置从起点推算出其位置。当机器人在电子标记(例如VLC灯)下行走时,机器人的接收器检测第三光信号,从而根据采集到的第三光信号解码得到产生第三光信号的灯对应ID,并将机器人当前的位置坐标信息映射到接收到的VLC灯ID上。当机器人覆盖了所有的区域后,区域内每一盏灯的位置就会被映射到其对应的灯ID上。这个灯光ID到位置和方位的映射会存储在数据库中,供用户终端的使用者和机器人导航使用。
在另外一种实施示例中,为避免光信号的编码设置过于复杂,而造成的运算复杂和响应时间过长问题,还可将光信号与无线信号结合以使光信号编码能够被重复利用。光信号与场地信息之间的映射关系还可为光信号与场地信息以及无线信号之间的映射关系,则光信号和无线信号与场地信息之间的映射关系的构建方法具体包括步骤41至步骤44:
具体的,若系统的工作场景为一个具有巨大空间的室内环境的话,则为满足定位需求需要在室内环境设置大量的VLC灯。为确保场景中每个VLC灯的编码标识(ID code)不重复,会使VLC信号码长度很长,增加移动设备的计算复杂度和响应时间。为解决这一问题,还可将光信号与无线信号结合以使光信号编码能够被重复利用。具体的,在室内场景中布置若干个无线信号,以使每个无线信号的覆盖范围能够将该场景划分成不同的区域;在每个无线信号的覆盖范围内就能重复使用有限的VLC灯编码标识。可选的,该无线信号可为蓝牙或WIFI信号等。如图5所示,为光信号与无线信号结合的示意图。当无线信号为蓝牙信号时,在室内场景中布置若干个蓝牙信号(例如图中的蓝牙信号A-I),以使每个无线信号的覆盖范围能够将该场景划分成不同的区域。如图中所示,在每个无线信号的覆盖范围内就能重复使用有限的VLC灯编码标识ID(例如编码1-9)。VLC ID在一个区域中的分布方式是,它们与相邻区域中相同的孪生ID相距甚远。这种特殊的安排降低了由于蓝牙信号RSS的不希望的变化而导致的误触发的概率。
步骤41、所述机器人若采集到第三光信号,则接收所在环境的无线信号得到第一无线信号列表,并获取所在位置的位置坐标信息;所述无线信号列表包括接收到的各个无线信号的信号强度;
由于不同区域的VLC灯在不同的无线信号覆盖范围内,当机器人采集到第三光信号时,获取当前机器人所在位置的位置坐标信息,还需接收所在环境的无线信号得到第一无线信号列表。
步骤42、所述机器人确定所述第一无线信号列表中信号强度最强的无线信号为第一目标信号;
由于第三光信号所在位置为对应的无线信号覆盖区域,因此第三光信号对应的无线信号应为第一无线信号列表里信号强度最强的无线信号。机器人确定第一无线信号列表中信号强度最强的无线信号为第一目标信号,从而查找到第三光信号所在区域对应的无线信号。
步骤43、所述机器人构建所述第三光信号的编码信息与所述位置坐标信息以及所述第一目标信号之间的映射关系;
机器人构建第三光信号的编码信息与获取到的位置坐标信息以及产生第三光信号的VLC灯所在区域对应的第一目标信号之间的映射关系。
步骤44、所述机器人将所述映射关系发送至所述服务器进行存储。
机器人将所述映射关系发送至服务器进行存储,使得每盏灯的VLC标识符(ID)和它的射频(蓝牙/WiFi)MAC地址的信息与它的物理位置信息一起根据建筑物的地图(经度、纬度、建筑物、楼层、房间号和区域功能信息等)存储在服务器或云端。
由上述一个实施示例可知,光信号与场地信息之间的映射关系可为光信号与场地信息以及无线信号之间的映射关系,则以多机器人-多人协作系统中任一个用户终端为例,该用户终端生成定位信息的过程还包括步骤51至步骤55:
步骤51、所述用户终端获取所在位置的第一位置信息,采集所处环境的第一光信号,并接收所在环境的无线信号得到第二无线信号列表;
由于不同区域的VLC灯在不同的无线信号覆盖范围内,当用户终端采集到第一光信号时,获取当前用户终端所在位置的第一位置信息,还需接收所在环境的无线信号得到第二无线信号列表。
步骤52、所述用户终端确定所述第二无线信号列表中信号强度最强的无线信号为第二目标信号;
由于第一光信号所在位置为对应的无线信号覆盖区域,因此第一光信号对应的无线信号应为第二无线信号列表里信号强度最强的无线信号。用户终端确定第二无线信号列表中信号强度最强的无线信号为第二目标信号,从而查找到第一光信号所在区域对应的无线信号。
步骤53、所述用户终端从所述服务器中获取所述第一光信号和所述第二目标信号映射的场地信息;
由于在工作场景中设置的多个携带有编码标识的可见光光源对应的编码标识与其所在的位置信息(例如所在经度、纬度、建筑物名称、楼层数、所在楼层方向、房间号、楼层位置和区域功能等信息)和其所在区域的无线信号唯一映射,且服务器或服务器所连接的云端预先存储有上述映射关系列表。当用户终端采集到所处环境的第一光信号和第二目标信号后,可从服务器中存储的映射关系列表中查询到第一光信号和第二目标信号对应映射的场地信息。该场地信息即对应为第一场地信息包括纬度、建筑物名称、楼层数,所在楼层方向房间号和区域功能等信息。
步骤54、所述用户终端根据所述第一位置信息和所述第一场地信息得到第一地理位置信息;
用户终端根据得到的第一场地信息对第一位置信息进行信息补全和位置校正,得到包含场地信息的第一地理位置信息,即三维坐标信息。
步骤55、所述用户终端根据所述第一地理位置信息和本机的设备标识生成所述定位信息,并将所述定位信息发送至所述服务器。
用户终端根据得到的第一地理位置信息和本机的设备标识生成定位信息,并将定位信息发送至所述服务器,完成一次位置信息的上报。
由上述一个实施示例可知,光信号与场地信息之间的映射关系可为光信号与场地信息以及无线信号之间的映射关系,则以多机器人-多人协作系统中任一个机器人为例,该机器人生成定位信息的过程还包括步骤61至步骤65:
步骤61、所述机器人获取所在位置的第二位置信息,采集所处环境的第二光信号,并接收所在环境的无线信号得到第三无线信号列表;
由于不同区域的VLC灯在不同的无线信号覆盖范围内,当机器人采集到第二光信号时,获取当前机器人所在位置的第二位置信息,还需接收所在环境的无线信号得到第三无线信号列表。
步骤62、所述机器人确定所述第三无线信号列表中信号强度最强的无线信号为第三目标信号;
由于第二光信号所在位置为对应的无线信号覆盖区域,因此第二光信号对应的无线信号应为第三无线信号列表里信号强度最强的无线信号。机器人确定第三无线信号列表中信号强度最强的无线信号为第三目标信号,从而查找到第二光信号所在区域对应的无线信号。
步骤63、所述机器人从所述服务器中获取所述第二光信号和所述第三目标信号映射的场地信息;
由于在工作场景中设置的多个携带有编码标识的可见光光源对应的编码标识与其所在的位置信息(例如所在经度、纬度、建筑物名称、楼层数、所在楼层方向、房间号、楼层位置和区域功能等信息)和其所在区域的无线信号唯一映射,且服务器或服务器所连接的云端预先存储有上述映射关系列表。当机器人采集到所处环境的第二光信号和第三目标信号后,可从服务器中存储的映射关系列表中查询到第二光信号和第三目标信号对应映射的场地信息。该场地信息即对应为第二场地信息包括纬度、建筑物名称、楼层数、所在楼层方向、房间号等信息和区域功能等信息。
步骤64、所述机器人根据所述第二位置信息和所述第二场地信息得到第二地理位置信息;
机器人根据得到的第二场地信息对第二位置信息进行信息补全和位置校正,得到包含场地信息的第二地理位置信息,即三维坐标信息。
步骤65、所述机器人根据所述第二地理位置信息和本机的设备标识生成所述定位信息,并将所述定位信息发送至所述服务器。
机器人根据得到的第二地理位置信息和本机的设备标识生成定位信息,并将定位信息发送至所述服务器,完成一次位置信息的上报。
S120、将所述第一设备标识在共享地图中的定位标记更新为所述地理位置信息所处的位置,并将更新后的共享地图发送给所有所述机器人和所述用户终端;所述共享地图包含N个所述机器人和M个所述用户终端的定位标记;所述共享地图为三维坐标地图。
具体的,服务器中的共享地图为三维电子地图,包含三维坐标信息和场地信息,可根据多机器人-多人协作系统的作业场景的施工建设图纸对应的参数以及由机器人测绘得到的地图生成。由于服务器接收到的定位信息可由系统中任一机器人或任一用户终端生成,服务器根据接收到的各个用户终端和各个机器人发送的定位信息更新共享地图上各个用户终端和各个机器人的定位标记位置,实现在共享地图中各个机器人和各个用户终端的位置的实时跟踪。并且通过将每次更新后的共享地图发送给系统中所有机器人和所有用户终端,使得系统中的每一机器人和每一用户终端均能获知系统中各个机器人和各个用户终端的位置,实现机器人和用户终端之间的位置共享。进而可以实现在用户终端的地图上或者机器人的地图上同时显示多个机器人与多个用户终端的位置。
在一个实施示例中,可通过控制多机器人-多人协作系统中若干机器人对工作场景进行地图测绘和扫描完成共享地图的构建。具体的,在将所述第一设备标识在共享地图中的定位标记更新为所述地理位置信息所处的位置,并将更新后的共享地图发送给所有所述机器人和所述用户终端之前,还包括步骤71至步骤74:
步骤71、控制若干所述机器人对目标场景进行地图测绘,并对所述目标场景进行激光扫描,得到地图数据和扫描数据;
具体的,在各个机器人内设有地图测绘设备,例如SLAM装置,该SLAM装置包括主控芯片和与该主控芯片连接的里程计和惯性传感器等传感测量器件。控制若干所述机器人对目标场景进行地图测绘时,每个机器人通过同步定位与建图方法进行地图绘制。详细的,控制各个机器人在目标场景内巡游,机器人的SLAM装置的主控芯片获取各个传感测量器件在巡游过程中测得的数据进行同步定位与建图方法的计算,建立本机所在环境的地图,即得到地图数据。可选的,该目标场景可为多机器人-多人协作系统作业的工作场景。并且各个机器人内还设有三维激光扫描装置(例如激光、雷达扫描仪等);在控制若干所述机器人对目标场景进行地图测绘时,同步控制各个机器人的三维激光扫描装置开启,在各个机器人在目标场景内巡游时,各个机器人的三维激光扫描装置对目标场景内的轮廓和边界,例如墙壁和家居等设施的轮廓和边界进行扫描,记录被测物体表面大量的密集的点的三维坐标、反射率和纹理等信息,得到扫描数据。
步骤72、根据所述地图数据和所述扫描数据建立所述目标场景的建筑信息模型;
根据上述步骤得到的地图数据和扫描数据建立目标场景的建筑信息模型(Building Information Modeling),从而得到三维立体的建筑模型。
步骤73、获取所述目标场景的参数信息;所述参数信息包括建筑楼层信息、房间号信息和区域功能划分信息;
并且为实现建筑功用,目标场景通常会具有人为规定的场地信息,例如建筑物名称、楼层数、所在楼层方向、房间号、楼层位置和区域功能等信息。通过从建筑图纸或设计蓝图能够获取到目标场景的参数信息,即建筑楼层信息、房间号信息和区域功能划分信息等信息。
步骤74、将所述参数信息对应标注在所述建筑信息模型中,得到所述共享地图。
将目标场景的参数信息输入建筑信息模型中,使得目标场景的建筑楼层信息、房间号信息和区域功能划分信息等信息能够准确标注在建筑信息模型中,得到共享地图,从而使得构建的共享地图包括三维坐标信息和场地信息。详细举例说明,若目标场景具有若干个房间,则通过将参数信息对应标注在建筑信息模型中,使得目标场景的建筑信息模型中的每个房间都具有建筑楼层信息、房间号信息和区域功能划分信息标注。
在另一种共享地图建立的实施方式中,将机器人预先对场地测绘得到的SLAM地图作为该共享地图的初稿,然后通过激光扫扫描场景内墙壁以及设施得到扫描信息,根据得到的扫描信息在该初稿中生成共享地图上的轮廓和边界,例如墙壁和家居等设施的轮廓和边界;然后再根据场地对应的建筑图纸参数对共享地图进行修正和补全,从而完成共享地图的构建,使得共享地图成为三维电子地图,包含经度、纬度、建筑物名称、楼层数,所在楼层方向、房间号、楼层位置等信息。这与完全由人工绘制的构建相比,可以大大减少构建人机共享地图所需的时间,提高地图精度。
在一个实施示例中,为应对目标场景内发生家具等设施重新布置或室内改造等变化,还可以让机器人定期对目标场景的室内环境进行激光扫描,并根据扫描数据及时对共享地图进行更新,以确保共享地图的实时性和准确性。具体的,在将所述参数信息对应标注在所述建筑信息模型中,得到所述共享地图之后,还包括:控制若干所述机器人定时对所述目标场景进行激光扫描,得到场景扫描数据;根据所述场景扫描数据更新所述共享地图。
S130、接收协助请求信息;所述协助请求信息包括协助请求指令和第二设备标识。
在应用场景中,工作人员可以使用用户终端在任一地点生成协助请求信息并发送给服务器进行协助作业策略的生成。详细举例说明,当系统的应用场景为商场时,工作人员若巡逻该场景时发现某处着火或发现受伤人员等事故需要协助,则可通过用户终端选择需协助事项对应的协助请求指令并结合本机的设备标识生成协助请求信息并发送给服务器。
S140、根据所述共享地图中所述第二设备标识的定位标记,得到定位坐标。
由于服务器中的共享地图能够实时跟踪各个机器人和各个用户终端的位置,根据共享地图中协助请求信息包含的第二设备标识的定位标记,就能够快速获得定位坐标,即协助请求指令的发起地点的位置,提高系统的响应速率。
S150、确定所述共享地图中满足预设条件的所述机器人和/或所述用户终端为目标装置。
由于用户终端中的共享地图也能够实时跟踪各个机器人和各个用户终端的位置,在工作人员使用用户终端在任一地点生成协助请求信息时,还能够直接根据用户终端中的共享地图显示的各个机器人和各个用户终端的位置,选择协助所需的目标设备,例如在共享地图中距离该用户终端最近的若干机器人和/或用户终端,使得生成的协助请求信息还包括指定的目标设备的设备标识信息。当服务器接收到协助请求信息后确定共享地图中满足预设条件的机器人和/或用户终端为目标装置的过程具体为:确定协助请求信息包括的设备标识信息对应的机器人和/或用户终端为目标装置,完成协助请求指令所需的目标设备的确定。
在一个实施示例中,为尽快响应协同工作指令,需及时通知距协作请求指令发起地点(即定位坐标)较近的使用其它用户终端的工作人员赶往现场进行协同工作,因此预设条件可设为选取距定位坐标最近的预设个数的机器人和/或除所述用户终端以外的其它用户终端。确定所述共享地图中满足预设条件的所述机器人和/或所述用户终端为目标装置,具体可为:在所述共享地图中确定距所述定位坐标最近的预设个数的所述机器人和/或所述用户终端为目标装置。可选的,该预设个数可预先设定或根据协助请求指令的类型对应变化。
S160、根据所述定位坐标和所述协助请求指令生成协同工作指令。
确定协助请求指令所需的目标设备后,服务器根据定位坐标和协助请求指令生成协同工作指令。
S170、向所述目标装置发送所述协同工作指令,以使所述目标装置对应的机器人和/或用户终端的使用者到达所述定位坐标进行协同工作。
服务器向确定的目标装置发送生成的协同工作指令,使得接收到协同工作指令的机器人和/或用户终端的使用者到达定位坐标并根据协同工作指令的控制策略进行协同工作,实现协作请求指令发起时,及时安排机器人与使用用户终端的工作人员一起响应指令进行协同作业。
本发明实施例提供的一种多机器人-多人协作控制方法,应用于多机器人-多人协作系统;所述系统包括与服务器通信连接的N个机器人和M个用户终端;N≥1;M≥1;服务器接收定位信息;所述定位信息包括地理位置信息和第一设备标识;每一所述用户终端以及每一所述机器人均具有唯一的设备标识;通过定时接收各个用户终端和各个机器人发送的定位信息,能够实时掌握协作系统中各个用户终端和各个机器人的位置变动。将所述第一设备标识在共享地图中的定位标记更新为所述地理位置信息所处的位置,并将更新后的共享地图发送给所有所述机器人和所述用户终端;服务器根据接收到的各个用户终端和各个机器人发送的定位信息后更新共享地图上各个用户终端和各个机器人的定位标记位置,实现在共享地图中各个机器人和各个用户终端的位置的实时跟踪。所述共享地图包含N个所述机器人和M个所述用户终端的定位标记。接收协助请求信息;所述协助请求信息包括协助请求指令和第二设备标识;根据所述共享地图中所述第二设备标识的定位标记,得到定位坐标;服务器接收到任一用户终端或机器人发送的协助请求信息后,能够从共享地图中快速查找到请求发送的位置得到定位坐标,提高系统响应速率;确定所述共享地图中满足预设条件的所述机器人和/或所述用户终端为目标装置;根据所述定位坐标和所述协助请求指令生成协同工作指令;向所述目标装置发送所述协同工作指令,以使所述目标装置对应的机器人和/或用户终端的使用者到达所述定位坐标进行协同工作。实现协作请求指令发起时,及时安排机器人与使用用户终端的工作人员一起响应指令进行协同作业。
实施例二
如图6所示的是本发明实施例二提供的多机器人-多人协作控制装置。在实施例一的基础上,本发明实施例还提供了一种多机器人-多人协作控制装置,该多机器人-多人协作控制装置包括:处理器601、存储器602以及存储在所述存储器602中并可在所述处理器601上运行的计算机程序603,例如用于多机器人-多人协作控制方法的程序。所述处理器601执行所述计算机程序603时实现上述多机器人-多人协作控制方法实施例中的步骤,例如图1所示的步骤S110至S170。
示例性的,所述计算机程序603可以被分割成一个或多个模块,所述一个或者多个模块被存储在所述存储器602中,并由所述处理器601执行,以完成本申请。所述一个或多个模块可以是能够完成特定功能的一系列计算机程序指令段,该指令段用于描述所述计算机程序603在所述洗涤设备中的执行过程。例如,所述计算机程序603可以被分割成定位信 息接收模块、定位更新模块、协助请求信息接收模块、定位坐标获取模块、目标装置确定模块、协同指令生成模块和指令发送模块,各模块具体功能如下:
定位信息接收模块,用于接收定位信息;所述定位信息包括地理位置信息和第一设备标识;每一用户终端以及每一机器人均具有唯一的设备标识;
定位更新模块,用于将所述第一设备标识在共享地图中的定位标记更新为所述地理位置信息所处的位置,并将更新后的共享地图发送给所有所述机器人和所述用户终端;所述共享地图包含N个所述机器人和M个所述用户终端的定位标记;所述共享地图为三维坐标地图;N≥1;M≥1;
协助请求信息接收模块,用于接收协助请求信息;所述协助请求信息包括协助请求指令和第二设备标识;
定位坐标获取模块,用于根据所述共享地图中所述第二设备标识的定位标记,得到定位坐标;
目标装置确定模块,用于确定所述共享地图中满足预设条件的所述机器人和/或所述用户终端为目标装置;
协同指令生成模块,用于根据所述定位坐标和所述协助请求指令生成协同工作指令;
指令发送模块,用于向所述目标装置发送所述协同工作指令,以使所述目标装置对应的机器人和/或用户终端的使用者到达所述定位坐标进行协同工作。
所述多机器人-多人协作控制装置可包括,但不仅限于,处理器601、存储器602以及存储在所述存储器602中的计算机程序603。本领域技术人员可以理解,图6仅仅是多机器人-多人协作控制装置的示例,并不构成对多机器人-多人协作控制装置的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件,例如所述多机器人-多人协作控制装置还可以包括输入输出设备、网络接入设备、总线等。
所述处理器601可以是中央处理单元(Central Processing Unit,CPU),还可以是其他通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现成可编程门阵列(Field-Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。
所述存储器602可以是所述洗涤设备的内部存储单元,例如洗涤设备的硬盘或内存。所述存储器602也可以是外部存储设备,例如洗涤设备的排水装置上配备的插接式硬盘,智能存储卡(Smart Media Card,SMC),安全数字(Secure Digital,SD)卡,闪存卡(Flash Card)等。进一步地,所述存储器602还可以既包括洗涤设备的排水装置的内部存储单元也包括外部存储设备。所述存储器602用于存储所述计算机程序以及洗涤设备的排水方法所需的其他程序和数据。所述存储器602还可以用于暂时地存储已经输出或者将要输出的数据。
实施例三
图7是本发明实施例三提供的多机器人-多人协作系统的结构示意图。所述系统包括与服务器71通信连接的N个机器人72和M个用户终端73;N≥1;M≥1;其中,
所述多机器人-多人协作系统执行如实施例一中所述的多机器人-多人协作控制方法的步骤。
具体的,为实现多机器人-多人协作系统中每个用户终端和每个机器人均能够接收光通信信号(VLC),各个用户终端和机器人中还具有可见光通信系统级芯片VLCSOC。如图8所示,该可见光通信系统级芯片VLCSOC 8实现光通信信号(VLC)接收以及编解码功能,可包括电源(DC)110、无线数据源130、发光二极管150、光电传感器170、DC-DC电源转换器250、无线通信单元240、安全单元180以及VLC单元140;其中VLC单元140包括基带DSP单元220、模拟信号处理单元230、发射器140A和接收器140B。
详细的,使用DC-DC电源转换器250从电源(DC)110获取电能并供电至SOC8中的元件,该无线通信单元240连接至外部数据源130并且用于与数据源130通信以发送/接收用于回程连通的和/或系统控制的数据,该VLC单元140连接至LEDsl5以及一个或多个光电传感器170,该安全单元180与无线通信单元240和VLC单元140相连。
VLC单元140用于调制LEDsl5以传送信息和/或VLC单元140用于通过一个或多个光电传感器170利用VLC接收信息。在一个实施方式中,SOC8可同时支持VLC发送和接收,使得SOC8可用于VLC源(例如LED灯、符号或其他装置)以及VLC接收器(例如手持装置等),此外,SOC8也可支持双向VLC装置。在本实施例中,VLC单元140包括发射器(TX)电路140A和接收器(RX)电路140B。TX电路140A可以是基带数字信号处理单元(基带DSP单元)220和调节LED灯150的模拟信号处理单元230。RX电路140B可从光电传感器170接收信号并探测所接收到的光中的VLC数据。
VLC单元140和无线通信单元240可协同使用以将信息传送至装置。该VLC单元140,例如,可传送快速响应码(QR code)、统一资源定位符(URL)或其它可用于访问更大量信息的数据。更大量的信息可经由无线通信单元240通过无线或有线网络传送。
安全单元180可保证网络和/或VLC数据访问的安全性,例如,安全单元180可包括密码硬件,用于加密通过VLC传输的数据,和/或解密要通过VLC传输的数据。因此,敏感数据可只传输给特定的用户,而其他接收到这些VLC数据的接收者不能解密这些数据。同样地,要在网络上传输的数据可由安全单元180加密以及从网络接收到的数据(例如要通过VLC传输的数据)可由安全单元180加密。
光电传感器170可以是任意一种光电传感器,例如,该光电传感器可包括光电探测器或CMOS图像传感器。光电探测器可用于高带宽/数据速率通信,而CMOS图像传感器可用于低带宽/数据速率通信。一个给定的系统可包括一种或多种类型的光电传感器170。其他的实施方式中可使用其他的光电传感器。
LEDS15可以是任一种LED。在一个实施方式中,LEDS15可以是大量的低成本的标准LEDs。通过廉价的LEDsl5和SoC16的费用节省(与分立元件相比)的结合,VLC可更加容易地被市场接受。随着时间的推移,SOC8还可利用摩尔定律降低成本、增加性能等。VLC还可以与低成本的无线/有线网络结合使用。在一个实施方式中,LEDsl5可以是有机LEDs(OLEDs)。电源可以是直流(DC)电源或交流(AC)电源,或者可以通过专用电源线供电或与数据一起供电(例如通过以太网供电-PoE)。此外,数据可以通过无线或有线通信系统发送/接收。
所属领域的技术人员可以清楚地了解到,为了描述的方便和简洁,仅以上述各功能单元、模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能单元、模块完成,即将所述装置的内部结构划分成不同的功能单元或模块,以完成以上描述的全部或者部分功能。实施例中的各功能单元、模块可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中,上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。另外,各功能单元、模块的具体名称也只是为了便于相互区分,并不用于限制本申请的保护范围。上述系统中单元、模块的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述或记载的部分,可以参见其它实施例的相关描述。
以上所述实施例仅用以说明本发明的技术方案,而非对其限制;尽管参照前述实施例对本发明进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本发明各实施例技术方案的精神和范围,均应包含在本发明的保护范围之内。

Claims (15)

  1. 一种多机器人-多人协作控制方法,其特征在于,所述方法应用于多机器人-多人协作系统;所述系统包括与服务器通信连接的N个机器人和M个用户终端;N≥1;M≥1;以所述服务器为执行主体,所述方法包括:
    接收定位信息;所述定位信息包括地理位置信息和第一设备标识;每一所述用户终端以及每一所述机器人均具有唯一的设备标识;
    将所述第一设备标识在共享地图中的定位标记更新为所述地理位置信息所处的位置,并将更新后的共享地图发送给所有所述机器人和所述用户终端;所述共享地图包含N个所述机器人和M个所述用户终端的定位标记;所述共享地图为三维坐标地图;
    接收协助请求信息;所述协助请求信息包括协助请求指令和第二设备标识;
    根据所述共享地图中所述第二设备标识的定位标记,得到定位坐标;
    确定所述共享地图中满足预设条件的所述机器人和/或所述用户终端为目标装置;
    根据所述定位坐标和所述协助请求指令生成协同工作指令;
    向所述目标装置发送所述协同工作指令,以使所述目标装置对应的机器人和/或用户终端的使用者到达所述定位坐标进行协同工作。
  2. 如权利要求1所述的多机器人-多人协作控制方法,其特征在于,在所述接收定位信息之前,还包括:
    所述用户终端获取所在位置的第一位置信息,并采集所处环境的第一光信号;所述第一位置信息为三维位置信息;所述光信号为可见光通信信号;
    所述用户终端从所述服务器中获取所述第一光信号映射的第一场地信息;所述第一场地信息包括建筑楼层信息、房间号信息和区域功能划分信息;
    所述用户终端根据所述第一位置信息和所述第一场地信息得到第一地理位置信息;
    所述用户终端根据所述第一地理位置信息和本机的设备标识生成所述定位信息,并将所述定位信息发送至所述服务器。
  3. 如权利要求1所述的多机器人-多人协作控制方法,其特征在于,在所述接收定位信息之前,还包括:
    所述机器人获取所在位置的第二位置信息,并采集所处环境的第二光信号;所述第二位置信息为三维位置信息;所述光信号为可见光通信信号;
    所述机器人从所述服务器中获取所述第二光信号映射的第二场地信息;所述第二场地信息包括建筑楼层信息、房间号信息和区域功能划分信息;
    所述机器人根据所述第二位置信息和所述第二场地信息得到第二地理位置信息;
    所述机器人根据所述第二地理位置信息和本机的设备标识生成所述定位信息,并将所述定位信息发送至所述服务器。
  4. 如权利要求2或3任一项所述的多机器人-多人协作控制方法,其特征在于,在所述接收定位信息之前,还包括:
    所述机器人若采集到第三光信号,则获取所在位置的位置坐标信息;所述位置坐标信息包括位置坐标和第三场地信息;所述第三场地信息包括建筑楼层信息、房间号信息和区域功能划分信息;
    所述机器人构建所述第三光信号的编码信息与所述位置坐标信息的映射关系;
    所述机器人将所述映射关系发送至所述服务器进行存储。
  5. 如权利要求4所述的多机器人-多人协作控制方法,其特征在于,N个所述机器人和M个所述用户终端所处的场景中设有若干可见光光源;每一所述可见光光源具有唯一的编码标识;各个所述可见光光源产生光信号包含各个所述可见光光源对应的编码标识信息。
  6. 如权利要求1所述的多机器人-多人协作控制方法,其特征在于,所述确定所述共享地图中满足预设条件的所述机器人和/或所述用户终端为目标装置,包括:
    在所述共享地图中确定距所述定位坐标最近的预设个数的所述机器人和/或所述用户终端为目标装置。
  7. 如权利要求2所述的多机器人-多人协作控制方法,其特征在于,所述用户终端获取所在位置的第一位置信息,并采集所处环境的第一光信号,包括:
    所述用户终端通过同步定位与建图方法获取所在位置的第一位置信息;
    所述用户终端通过光电探测器采集所处环境的第一光信号。
  8. 如权利要求3所述的多机器人-多人协作控制方法,其特征在于,所述机器人获取所在位置的第二位置信息,并采集所处环境的第二光信号,包括:
    所述机器人通过同步定位与建图方法获取所在位置的第二位置信息;
    所述机器人通过光电探测器采集所处环境的第二光信号。
  9. 如权利要求4所述的多机器人-多人协作控制方法,其特征在于,所述机器人若采集到第三光信号,则获取所在位置的位置坐标信息,包括:
    所述机器人若采集到第三光信号,则接收所在环境的无线信号得到第一无线信号列表,并获取所在位置的位置坐标信息;所述无线信号列表包括接收到的各个无线信号的信号强度;
    所述机器人确定所述第一无线信号列表中信号强度最强的无线信号为第一目标信号;
    所述机器人构建所述第三光信号的编码信息与所述位置坐标信息的映射关系,包括:
    所述机器人构建所述第三光信号的编码信息与所述位置坐标信息以及所述第一目标信号之间的映射关系。
  10. 如权利要求9所述的多机器人-多人协作控制方法,其特征在于,所述用户终端获取所在位置的第一位置信息,并采集所处环境的第一光信号,还包括:
    所述用户终端获取所在位置的第一位置信息,采集所处环境的第一光信号,并接收所在环境的无线信号得到第二无线信号列表;
    所述用户终端从所述服务器中获取所述第一光信号映射的第一场地信息,包括:
    所述用户终端确定所述第二无线信号列表中信号强度最强的无线信号为第二目标信号;
    所述用户终端从所述服务器中获取所述第一光信号和所述第二目标信号映射的场地信息。
  11. 如权利要求9所述的多机器人-多人协作控制方法,其特征在于,所述机器人获取所在位置的第二位置信息,并采集所处环境的第二光信号,还包括:
    所述机器人获取所在位置的第二位置信息,采集所处环境的第二光信号,并接收所在环境的无线信号得到第三无线信号列表;
    所述机器人从所述服务器中获取所述第二光信号映射的第二场地信息,包括:
    所述机器人确定所述第三无线信号列表中信号强度最强的无线信号为第三目标信号;
    所述机器人从所述服务器中获取所述第二光信号和所述第三目标信号映射的场地信息。
  12. 如权利要求1所述的多机器人-多人协作控制方法,其特征在于,在将所述第一设备标识在共享地图中的定位标记更新为所述地理位置信息所处的位置,并将更新后的共享地图发送给所有所述机器人和所述用户终端之前,还包括:
    控制若干所述机器人对目标场景进行地图测绘,并对所述目标场景进行激光扫描,得到地图数据和扫描数据;
    根据所述地图数据和所述扫描数据建立所述目标场景的建筑信息模型;
    获取所述目标场景的参数信息;所述参数信息包括建筑楼层信息、房间号信息和区域功能划分信息;
    将所述参数信息对应标注在所述建筑信息模型中,得到所述共享地图。
  13. 如权利要求12所述的多机器人-多人协作控制方法,其特征在于,在将所述参数信息对应标注在所述建筑信息模型中,得到所述共享地图之后,还包括:
    控制若干所述机器人定时对所述目标场景进行激光扫描,得到场景扫描数据;
    根据所述场景扫描数据更新所述共享地图。
  14. 一种多机器人-多人协作控制装置,其特征在于,所述装置包括:
    定位信息接收模块,用于接收定位信息;所述定位信息包括地理位置信息和第一设备标识;每一用户终端以及每一机器人均具有唯一的设备标识;
    定位更新模块,用于将所述第一设备标识在共享地图中的定位标记更新为所述地理位置信息所处的位置,并将更新后的共享地图发送给所有所述机器人和所述用户终端;所述共享地图包含N个所述机器人和M个所述用户终端的定位标记;所述共享地图为三维坐标地图;N≥1;M≥1;
    协助请求信息接收模块,用于接收协助请求信息;所述协助请求信息包括协助请求指令和第二设备标识;
    定位坐标获取模块,用于根据所述共享地图中所述第二设备标识的定位标记,得到定位坐标;
    目标装置确定模块,用于确定所述共享地图中满足预设条件的所述机器人和/或所述用户终端为目标装置;
    协同指令生成模块,用于根据所述定位坐标和所述协助请求指令生成协同工作指令;
    指令发送模块,用于向所述目标装置发送所述协同工作指令,以使所述目标装置对应的机器人和/或用户终端的使用者到达所述定位坐标进行协同工作。
  15. 一种多机器人-多人协作系统,其特征在于,所述系统包括与服务器通信连接的N个机器人和M个用户终端;N≥1;M≥1;其中,
    所述多机器人-多人协作系统执行如权利要求1至13任一项所述的多机器人-多人协作控制方法的步骤。
PCT/CN2021/114835 2021-04-28 2021-08-26 多机器人-多人协作控制方法、装置及系统 WO2022227352A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110468727.2 2021-04-28
CN202110468727.2A CN115248039A (zh) 2021-04-28 2021-04-28 多机器人-多人协作控制方法、装置及系统

Publications (1)

Publication Number Publication Date
WO2022227352A1 true WO2022227352A1 (zh) 2022-11-03

Family

ID=83697463

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/114835 WO2022227352A1 (zh) 2021-04-28 2021-08-26 多机器人-多人协作控制方法、装置及系统

Country Status (2)

Country Link
CN (1) CN115248039A (zh)
WO (1) WO2022227352A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116582488B (zh) * 2023-07-14 2023-10-13 中创(深圳)物联网有限公司 数据传输方法、装置、设备及存储介质
CN117993871A (zh) * 2024-04-07 2024-05-07 中建八局西南建设工程有限公司 一种多机协同工程智能施工系统

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105751219A (zh) * 2016-05-07 2016-07-13 深圳市华科安测信息技术有限公司 用于挂号及拿药的医用机器人控制系统及方法
US20190240839A1 (en) * 2016-10-07 2019-08-08 Lg Electronics Inc. Robot and operation method therefor
CN106737687A (zh) * 2017-01-17 2017-05-31 暨南大学 基于可见光定位导航的室内机器人系统
CN108687783A (zh) * 2018-08-02 2018-10-23 合肥市徽马信息科技有限公司 一种带路式博物馆讲解导览机器人
CN109814556A (zh) * 2019-01-22 2019-05-28 东南大学 一种多机器人协作探索未知环境与地图构建的装置与方法
CN112207828A (zh) * 2020-09-30 2021-01-12 广东唯仁医疗科技有限公司 一种基于5g网络的零售机器人控制方法及系统

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116016511A (zh) * 2022-12-26 2023-04-25 广东职业技术学院 一种多机器人的数据传输方法
CN116016511B (zh) * 2022-12-26 2023-08-01 广东职业技术学院 一种多机器人的数据传输方法

Also Published As

Publication number Publication date
CN115248039A (zh) 2022-10-28

Similar Documents

Publication Publication Date Title
WO2022227352A1 (zh) 多机器人-多人协作控制方法、装置及系统
US10306419B2 (en) Device locating using angle of arrival measurements
US9819509B2 (en) Systems and methods for location-based control of equipment and facility resources
CN103823228B (zh) 定位系统、终端和定位方法
EP3092858B1 (en) Controlling beaconing in a positioning system
CN101661098B (zh) 机器人餐厅多机器人自动定位系统
KR101906329B1 (ko) 카메라 기반의 실내 위치 인식 장치 및 방법
JP5496096B2 (ja) 無線端末測位システム、環境計測システム並びに設備管理システム
CN106405605A (zh) 一种机器人基于ros和gps的室内外无缝定位方法和定位系统
US20140128093A1 (en) Portal transition parameters for use in mobile device positioning
CN105007566B (zh) 一种室内外定位快速切换方法及系统
US20140297090A1 (en) Autonomous Mobile Method and Autonomous Mobile Device
US8818721B2 (en) Method and system for exchanging data
CN104364610A (zh) 使用关注点进行室内结构推断
WO2015117477A1 (zh) 一种室内定位方法、装置及计算机存储介质
US20160345129A1 (en) Positioning system for indoor and surrounding areas, positioning method and route-planning method thereof and mobile apparatus
WO2015166337A2 (en) Augmented reality based management of a representation of a smart environment
CN105190241A (zh) 利用压力分布来确定位置背景识别符
KR101007608B1 (ko) 무선랜과 gps 및 rf 신호감지 기법을 이용한위치추적 솔루션
CN105929820B (zh) 一种智能机器人定位方法
CN106441306A (zh) 自主定位与地图构建的智能生命探测机器人
JP6160036B2 (ja) 移動通信装置、及び位置情報通知方法
CN207399518U (zh) 一种基于Wi-Fi物联网设备网络的定位系统
JP6624406B2 (ja) 位置検索システム、位置検索方法、発信装置、位置検出装置および設備機器
CN109089206B (zh) 一种基于LoRa SX1280的室内定位装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21938816

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 23.02.2024)