CN115248039A - Multi-robot-multi-person cooperation control method, device and system

Info

Publication number
CN115248039A
Authority
CN
China
Prior art keywords: information, robot, user terminal, positioning, optical signal
Legal status: Granted
Application number: CN202110468727.2A
Other languages: Chinese (zh)
Other versions: CN115248039B (en)
Inventor: 俞捷
Current Assignee: Shenzhen Hong Kong Collaborative Innovation Institute Of Hong Kong University Of Science And Technology Futian Shenzhen
Original Assignee: Liphy Communications Ltd
Application filed by Liphy Communications Ltd
Priority to CN202110468727.2A
Priority to PCT/CN2021/114835 (WO2022227352A1)
Publication of CN115248039A; application granted; publication of CN115248039B
Current legal status: Active

Classifications

    • G01C21/206 Instruments for performing navigational calculations specially adapted for indoor navigation
    • B25J11/00 Manipulators not otherwise provided for
    • B25J9/16 Programme controls (programme-controlled manipulators)
    • G01C21/20 Instruments for performing navigational calculations
    • G06F16/29 Geographical information databases
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • Y02P90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Automation & Control Theory (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Manipulator (AREA)

Abstract

The application belongs to the technical field of multi-robot cooperation and provides a multi-robot-multi-person cooperation control method, device and system. The method comprises the following steps: receiving positioning information; updating the positioning mark of the first equipment identifier in a shared map to the position indicated by the geographical position information, and sending the updated shared map to all robots and user terminals; receiving assistance request information; obtaining positioning coordinates from the positioning mark of the second equipment identifier in the shared map; determining the robots and/or user terminals in the shared map that meet preset conditions as target devices; generating a cooperative work instruction from the positioning coordinates and the assistance request instruction; and sending the cooperative work instruction to the target devices. The method, device and system solve the problem that, when a cooperation request instruction is initiated, a robot and a worker using a user terminal cannot be arranged in time to respond to the instruction together and perform the cooperative work.

Description

Multi-robot-multi-person cooperation control method, device and system
Technical Field
The invention relates to the technical field of multi-robot cooperation, in particular to a multi-robot-multi-person cooperation control method, device and system.
Background
With the continuous development and progress of robot technology, a single mobile robot can hardly complete the complex and tedious work tasks and meet the work indexes of production practice, so new research directions are urgently needed to satisfy the actual needs of the mechanical field, and teams of multiple robots have therefore entered the field of view of researchers. A plurality of mobile robots constitute a multi-robot cooperative system (MRS). Compared with a single robot, the robots in a multi-robot cooperative system can replan their work tasks to adapt to the environment through the central controller and their own coordination systems, and therefore have strong fault tolerance and robustness. A plurality of robots working simultaneously can improve work efficiency and strengthen the cooperation ability and the work task indexes.
However, in the multi-robot cooperation systems currently on the market, the robots generally move point to point along fixed routes with an added anti-collision function, as in meal-delivery robots or floor-sweeping robots. Moreover, the server controlling the robots in the multi-robot cooperative system can acquire the position information of each robot in the system but cannot actively acquire the position information of the user terminals connected to the server. Consequently, when a worker uses a user terminal to issue a cooperation request instruction, the position information of the request-initiating terminal has to be acquired on the spot, and workers using other user terminals close to the place where the cooperation request instruction was initiated cannot be informed in time to arrive at the site for cooperative work.
Disclosure of Invention
In view of this, embodiments of the present invention provide a method, an apparatus, and a system for multi-robot-multi-person cooperative control, so as to solve the problem that when a cooperation request instruction is initiated, a robot cannot be arranged in time to respond to the instruction together with a worker using a user terminal to perform cooperative work.
A first aspect of the embodiments of the invention provides a multi-robot-multi-person cooperative control method, which is applied to a multi-robot-multi-person cooperative system; the system comprises N robots and M user terminals in communication connection with a server, where N ≥ 1 and M ≥ 1. With the server as the execution subject, the method comprises the following steps (an illustrative sketch follows the listed steps):
receiving positioning information; the positioning information comprises geographic position information and a first equipment identifier; each user terminal and each robot have unique equipment identification;
updating the positioning mark of the first equipment identifier in the shared map to the position of the geographical position information, and sending the updated shared map to all the robots and the user terminal; the shared map comprises positioning marks of N robots and M user terminals; the shared map is a three-dimensional coordinate map;
receiving assistance request information; the assistance request information comprises an assistance request instruction and a second equipment identifier;
obtaining a positioning coordinate according to the positioning mark of the second equipment identifier in the shared map;
determining the robot and/or the user terminal meeting preset conditions in the shared map as a target device;
generating a cooperative work instruction according to the positioning coordinate and the assistance request instruction;
and sending the cooperative work instruction to the target device so as to enable the robot corresponding to the target device and/or a user of the user terminal to reach the positioning coordinate for cooperative work.
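Purely as a reading aid, the following minimal Python sketch illustrates the server-side flow of the steps listed above. It is not the patented implementation; the class names (SharedMap, CooperationServer), the dictionary of positioning marks, and the pluggable select_targets policy are assumptions introduced here for illustration.

    from dataclasses import dataclass, field

    @dataclass
    class SharedMap:
        marks: dict = field(default_factory=dict)  # device_id -> (x, y, z) positioning mark

    class CooperationServer:
        """Keeps the shared map current and turns an assistance request into a
        cooperative work instruction for the selected target devices."""

        def __init__(self, shared_map, select_targets):
            self.map = shared_map
            self.select_targets = select_targets  # policy implementing the "preset condition"

        def on_positioning(self, device_id, geo_position):
            self.map.marks[device_id] = geo_position   # update the device's positioning mark
            self.broadcast(self.map)                   # push updated map to all robots/terminals

        def on_assistance_request(self, requester_id, request):
            origin = self.map.marks[requester_id]      # positioning coordinates of the requester
            instruction = {"goto": origin, "task": request}   # cooperative work instruction
            for target_id in self.select_targets(self.map.marks, origin, exclude={requester_id}):
                self.send(target_id, instruction)

        def broadcast(self, shared_map):
            pass  # transport layer (network push to every device) is out of scope here

        def send(self, device_id, instruction):
            print(f"dispatch to {device_id}: {instruction}")

    # Trivial selection policy for the demo: every other device becomes a target.
    select_all = lambda marks, origin, exclude=(): [d for d in marks if d not in exclude]
    server = CooperationServer(SharedMap(), select_all)
    server.on_positioning("robot-1", (0.0, 1.0, 0.0))
    server.on_positioning("user-7", (5.0, 2.0, 0.0))
    server.on_assistance_request("user-7", "help move a pallet")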
In an implementation example, before receiving the positioning information, the method further includes:
the user terminal acquires first position information of the position of the user terminal and collects a first optical signal of the environment of the user terminal; the first position information is three-dimensional position information; the optical signal is a visible light communication signal;
the user terminal acquires first site information mapped by the first optical signal from the server; the first site information comprises building floor information, room number information and regional function division information;
the user terminal obtains first geographical position information according to the first position information and the first site information;
and the user terminal generates the positioning information according to the first geographical position information and the equipment identifier of the user terminal, and sends the positioning information to the server.
In an implementation example, before receiving the positioning information, the method further includes:
the robot acquires second position information of the position and acquires a second optical signal of the environment; the second position information is three-dimensional position information; the optical signal is a visible light communication signal;
the robot acquires second site information mapped by the second optical signal from the server; the second site information comprises building floor information, room number information and regional function division information;
the robot obtains second geographic position information according to the second position information and the second site information;
and the robot generates the positioning information according to the second geographic position information and the equipment identifier of the robot, and sends the positioning information to the server.
In an implementation example, before receiving the positioning information, the method further includes:
if the robot acquires the third optical signal, acquiring position coordinate information of the position where the robot is located; the position coordinate information comprises position coordinates and third site information; the third site information comprises building floor information, room number information and regional function division information;
the robot constructs a mapping relation between the coding information of the third optical signal and the position coordinate information;
and the robot sends the mapping relation to the server for storage.
In one implementation example, a plurality of visible light sources are arranged in a scene where the N robots and the M user terminals are located; each visible light source is provided with a unique coded identifier; and each visible light source generates a light signal containing coded identification information corresponding to each visible light source.
In one implementation example, determining the robots and/or user terminals in the shared map that meet the preset condition as target devices includes:
determining a preset number of robots and/or user terminals in the shared map that are closest to the positioning coordinates as the target devices, as sketched below.
In an implementation example, the acquiring, by the user terminal, first location information of a location where the user terminal is located, and collecting a first optical signal of an environment where the user terminal is located includes:
the user terminal acquires first position information of a position through a synchronous positioning and mapping method;
the user terminal collects a first optical signal of the environment through a photoelectric detector.
In one embodiment, the robot obtains second position information of the position and collects a second optical signal of the environment, and the method includes:
the robot acquires second position information of the position through a synchronous positioning and mapping method;
the robot collects a second optical signal of the environment through the photoelectric detector.
In an implementation example, if the robot acquires the third optical signal, acquiring position coordinate information of the position where the robot is located, including:
if the robot acquires a third optical signal, receiving a wireless signal of the environment to obtain a first wireless signal list, and acquiring position coordinate information of the position; the wireless signal list comprises the signal strength of each received wireless signal;
the robot determines the wireless signal with the strongest signal intensity in the first wireless signal list as a first target signal;
the constructing, by the robot, of the mapping relation between the coded information of the third optical signal and the position coordinate information comprises:
and the robot constructs a mapping relation between the coded information of the third optical signal, the position coordinate information and the first target signal.
In an implementation example, the user terminal obtains first location information of a location of the user terminal, and collects a first optical signal of an environment of the user terminal, further including:
the user terminal acquires first position information of the position, acquires a first optical signal of the environment and receives a wireless signal of the environment to obtain a second wireless signal list;
the user terminal obtains first site information mapped by the first optical signal from the server, and the method comprises the following steps:
the user terminal determines the wireless signal with the strongest signal strength in the second wireless signal list as a second target signal;
and the user terminal acquires the site information mapped by the first optical signal and the second target signal from the server.
In one embodiment, the robot obtains second position information of the position where the robot is located and collects a second optical signal of the environment where the robot is located, which further includes:
the robot acquires second position information of the position, acquires a second optical signal of the environment and receives a wireless signal of the environment to obtain a third wireless signal list;
the robot acquires second site information mapped by the second optical signal from the server, and the method comprises the following steps:
the robot determines the wireless signal with the strongest signal strength in the third wireless signal list as a third target signal;
and the robot acquires the site information mapped by the second optical signal and the third target signal from the server.
In one implementation example, before updating the positioning mark of the first device identifier in the shared map to the position where the geographic position information is located, and sending the updated shared map to all the robots and the user terminals, the method further includes:
controlling a plurality of robots to map a target scene, and scanning the target scene with laser to obtain map data and scanning data;
building a building information model of the target scene according to the map data and the scanning data;
acquiring parameter information of the target scene; the parameter information comprises building floor information, room number information and regional function division information;
and correspondingly marking the parameter information in the building information model to obtain the shared map (a minimal data-structure sketch of the shared map follows this example).
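Purely as an illustration, the shared map can be pictured as a three-dimensional model whose regions carry the marked parameter information and which additionally stores the positioning marks of all devices. The Region/SharedMap3D structures and the axis-aligned bounds below are assumptions, not the building information model format of the embodiment.

    from dataclasses import dataclass, field

    @dataclass
    class Region:
        """Parameter information marked onto the building information model."""
        building: str
        floor: int
        room: str
        function: str   # e.g. "staff-only", "public", "office"
        bounds: tuple   # ((xmin, ymin, zmin), (xmax, ymax, zmax)) in map coordinates

    @dataclass
    class SharedMap3D:
        regions: list = field(default_factory=list)   # annotated regions of the model
        marks: dict = field(default_factory=dict)     # device_id -> (x, y, z) positioning mark

        def region_of(self, point):
            """Find which marked region a 3-D point falls into, if any."""
            for r in self.regions:
                lo, hi = r.bounds
                if all(lo[i] <= point[i] <= hi[i] for i in range(3)):
                    return r
            return None

    shared = SharedMap3D(regions=[Region("A", 2, "201", "office", ((0, 0, 3), (10, 8, 6)))])
    shared.marks["robot-1"] = (4.0, 2.0, 3.5)
    print(shared.region_of(shared.marks["robot-1"]).room)   # -> 201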
In an implementation example, after the parameter information is correspondingly marked in the building information model, and the shared map is obtained, the method further includes:
controlling a plurality of robots to carry out laser scanning on the target scene at regular time to obtain scene scanning data;
and updating the shared map according to the scene scanning data.
A second aspect of an embodiment of the present invention provides a multi-robot-multi-person cooperative control apparatus, including:
the positioning information receiving module is used for receiving positioning information; the positioning information comprises geographic position information and a first equipment identifier; each user terminal and each robot have unique equipment identification;
the positioning updating module is used for updating the positioning mark of the first equipment identifier in the shared map to the position of the geographical position information and sending the updated shared map to all the robots and the user terminal; the shared map comprises positioning marks of N robots and M user terminals; the shared map is a three-dimensional coordinate map; n is more than or equal to 1; m is more than or equal to 1;
an assistance request information receiving module for receiving assistance request information; the assistance request information comprises an assistance request instruction and a second equipment identifier;
the positioning coordinate acquisition module is used for acquiring a positioning coordinate according to the positioning mark of the second equipment identifier in the shared map;
the target device determining module is used for determining the robot and/or the user terminal meeting the preset conditions in the shared map as a target device;
a cooperative instruction generating module, configured to generate a cooperative work instruction according to the positioning coordinate and the assistance request instruction;
and the instruction sending module is used for sending the cooperative work instruction to the target device so as to enable the robot corresponding to the target device and/or a user of the user terminal to reach the positioning coordinate for cooperative work.
A third aspect of the embodiments of the present invention provides a multi-robot-multi-person collaboration system, where the system includes N robots and M user terminals that are in communication connection with a server; n is more than or equal to 1; m is more than or equal to 1; wherein,
the multi-robot-multi-person cooperation system realizes the steps of the multi-robot-multi-person cooperation control method according to the first aspect.
The embodiments of the invention provide a multi-robot-multi-person cooperative control method, device and system applied to a multi-robot-multi-person cooperative system; the system comprises N robots and M user terminals in communication connection with a server, where N ≥ 1 and M ≥ 1. The server receives positioning information; the positioning information comprises geographical position information and a first equipment identifier, and each user terminal and each robot has a unique equipment identifier. By receiving the positioning information sent periodically by each user terminal and each robot, the position changes of every user terminal and robot in the cooperation system can be followed in real time. The server updates the positioning mark of the first equipment identifier in the shared map to the position indicated by the geographical position information and sends the updated shared map to all robots and user terminals; that is, the server updates the positioning mark positions of the user terminals and robots on the shared map according to the received positioning information, realizing real-time tracking of the positions of all robots and user terminals in the shared map. The shared map includes the positioning marks of the N robots and the M user terminals. The server then receives assistance request information comprising an assistance request instruction and a second equipment identifier, and obtains positioning coordinates from the positioning mark of the second equipment identifier in the shared map; after receiving assistance request information sent by any user terminal or robot, the server can quickly find on the shared map the position from which the request was sent, which improves the response rate of the system. The server determines the robots and/or user terminals in the shared map that meet preset conditions as target devices, generates a cooperative work instruction from the positioning coordinates and the assistance request instruction, and sends the cooperative work instruction to the target devices so that the robots and/or the users of the user terminals corresponding to the target devices go to the positioning coordinates for cooperative work. In this way, when a cooperation request instruction is initiated, robots and workers using user terminals are arranged in time to respond to the instruction together and carry out the cooperative work.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed for the description of the embodiments or of the prior art are briefly introduced below. It will be apparent that the drawings in the following description show only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a multi-robot-multi-person cooperative control method according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a multi-robot-multi-person collaboration system according to an embodiment of the invention;
FIG. 3 is a schematic diagram of a multi-robot-multi-person collaborative work scenario according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a robot detecting optical signals to construct a new map according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of the combination of optical signals and wireless signals provided by an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a multi-robot-multi-person cooperative control apparatus according to a second embodiment of the present invention;
fig. 7 is a schematic structural diagram of a multi-robot-multi-person collaboration system according to a third embodiment of the present invention;
fig. 8 is a schematic structural diagram of a visible light communication system-on-chip (VLC SoC) according to a third embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood by those skilled in the art, the technical solutions in the embodiments of the present invention will be clearly described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "comprises" and "comprising," and any variations thereof, in the description and claims of this invention and the above-described drawings are intended to cover non-exclusive inclusions. For example, a process, method, or system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements but may alternatively include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. Furthermore, the terms "first," "second," and "third," etc. are used to distinguish between different objects and are not used to describe a particular order.
The robots in a multi-robot assistance system usually move point to point along fixed routes, and the server cannot know the location information of the user terminals connected to the multi-robot assistance system. When a user terminal initiates an assistance request instruction to the system, the geographical position of the instruction-initiating device has to be acquired on the spot, which delays the system's response; and workers using other user terminals close to the place where the cooperation request instruction was initiated cannot be informed in time to proceed to the site for cooperative work. To solve this problem, in this embodiment the server updates the positioning mark positions of each user terminal and each robot on the shared map according to the positioning information they send, realizing real-time tracking of the positions of all robots and user terminals in the shared map; after receiving assistance request information sent by any user terminal or robot, the server can quickly find on the shared map the position from which the request was sent, obtain the positioning coordinates, and arrange in time for robots in the shared map and workers using user terminals to respond to the instruction together and perform the cooperative work.
Example one
Fig. 1 is a schematic flow chart of a multi-robot-multi-person cooperative control method according to an embodiment of the present invention. Specifically, the embodiment is applicable to an application scenario in which a multi-robot-multi-person cooperation system performs cooperative work, and the method is applied to the multi-robot-multi-person cooperation system; the system comprises N robots and M user terminals in communication connection with a server, where N ≥ 1 and M ≥ 1. The method is executed with the server as the execution subject, and the server may be a network cloud platform built from a plurality of servers; in the embodiments of the present application a single server is used as the execution subject for explanation. The method specifically comprises the following steps:
s110, receiving positioning information; the positioning information comprises geographical position information and a first equipment identifier; each user terminal and each robot have unique equipment identification.
As shown in fig. 2, in a Multi-Robot Multi-Human (MRMH) collaboration system, in addition to at least one robot 10 connected to a server 20, at least one user terminal 30 is also connected to the server 20, realizing shared human-machine control so that a worker can propose a control strategy to the server through the user terminal. The server 20 in the system is the main control server; it controls all robots in the system, acquires the robots' current positions and environmental information, processes data, formulates strategies and the like, and at the same time acquires the current position information and processed data of the user terminals in communication connection with it. The server may also be communicatively coupled to a cloud 40 that stores data. In detail, the server may be a Coordination, Control and Collaboration Server (C3S); the server maintains the location of every user terminal and robot, acting as a centralized gateway that receives the locations of all mobile and fixed terminals (including robots and user terminals). The server can send instructions to the robots to perform various operations, such as moving, turning, reaching a particular location, manoeuvring the robot manipulator, collecting sensor data, and the like. In addition, the C3S may further provide a User Interface (UI) through which an operator controls a robot from the user terminal and performs cooperative work with the operators of other user terminals. Optionally, if the system includes a plurality of robots communicatively connected to the server, the plurality of robots may be mobile robots of the same type, or of different types with different functions; and the user terminal may be an intelligent terminal such as a mobile intelligent terminal, a tablet or a smart watch.
In order to obtain the positions of the N robots and M user terminals in communication connection with the server, every robot and every user terminal in the system sends positioning information to the server. The positioning information received by the server can be generated by any robot or any user terminal in the system; it comprises the geographical position information of the device sending it and a first equipment identifier, and since each user terminal and each robot has a unique equipment identifier, the server can determine, on receiving the positioning information, which device the geographical position information belongs to. Specifically, because the robots, and the users carrying the user terminals, move while tasks are being executed, in order to track in real time the positions of the N robots and M user terminals in communication connection with the server, each robot and each user terminal in the system may also send positioning information to the server periodically at a preset time interval, so that the server continuously receives the positioning information of every robot and every user terminal. In order to acquire the geographical position information of its position, each robot and each user terminal is provided with a positioning and navigation device and the various sensors required for positioning and navigation (such as an odometer, an inertial sensor and the like). Optionally, the preset time interval may be 0.1-0.5 seconds. A minimal sketch of this periodic reporting follows.
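The sketch below illustrates one way a device could package and push its positioning information at a fixed interval; the function names, the message layout, and the simple loop standing in for a real scheduler are assumptions made purely for illustration.

    import time

    def report_positions(device_id, get_geo_position, send_to_server,
                         interval=0.2, reports=3):
        """Package the device's geographical position and its unique equipment
        identifier into positioning information and push it to the server."""
        for _ in range(reports):
            positioning_info = {
                "device_id": device_id,               # unique equipment identifier
                "geo_position": get_geo_position(),   # 3-D position plus site information
            }
            send_to_server(positioning_info)
            time.sleep(interval)                      # e.g. 0.1-0.5 s, as in the embodiment

    # Demo with a stubbed position source and transport.
    report_positions("robot-1", lambda: (4.0, 2.0, 3.5), print, interval=0.01)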
In one implementation example, the positioning information received by the server is generated and sent by any user terminal in the multi-robot-multi-person cooperative system. In order to track the position of each user terminal in the multi-robot-multi-person cooperative system, each user terminal can locate its own position at the preset time interval, obtain more accurate three-dimensional position information by combining the located position information with the collected visible light communication signal, generate positioning information from this three-dimensional position information, and send it to the server, completing one report of its position. Therefore, before the server receives the positioning information, the process in which each user terminal generates the positioning information is also included. Specifically, taking any user terminal in the multi-robot-multi-person cooperative system as an example, the process of generating the positioning information by the user terminal includes steps 11 to 14:
step 11, the user terminal acquires first position information of a position where the user terminal is located, and collects a first optical signal of an environment where the user terminal is located; the first position information is three-dimensional position information; the optical signal is a visible light communication signal;
the working scene of the multi-robot-multi-person cooperative system is mostly a complex scene, for example, the interior of a multi-floor high building or an area provided with large equipment, etc., wherein the terrain in each floor of the interior of the multi-floor high building is the same, and the three-dimensional positioning information acquired by the robot working in the building or the user terminal carried by the staff cannot reflect the site information (for example, the floor number, the room number or the area function, etc.), so that the positioning is unclear. In order to solve the technical problem, a plurality of electronic marks carrying code identifiers can be reasonably set in a working scene in advance, the code identifiers corresponding to the electronic marks are uniquely mapped with the position information (such as longitude, latitude, building name, floor number, floor direction, room number, floor position, regional function and other information) of the electronic marks, and the mapping relation list is stored in advance in a server or a cloud end connected with the server; when the robot or the user terminal moves to the position where the electronic marker is located, the electronic marker is scanned and decoded to obtain the coded identifier, the site information corresponding to the scanned electronic marker can be obtained through the server, the obtained site information is combined with the obtained three-dimensional positioning information, more accurate three-dimensional information can be obtained, and the positioning accuracy is improved. Optionally, a plurality of robots may be controlled by a preset intelligent installation program to complete installation of a plurality of electronic tags (e.g., LED lamps or beacons) carrying coded identifiers in a working scene. Alternatively, the electronic mark may be in the form of a QR (Quick Response) code or a Visible Light source (e.g., LED lamp or beacon) that sends out a coded mark as a Light Communication (VLC) signal. As shown in fig. 3, in the present embodiment, in a scene where N robots 301 and M user terminals 302 are located in the multi-robot-multi-person cooperative system, a plurality of visible light sources 303 are provided; each visible light source has a unique coded identification (IDcode); and each visible light source generates a light signal containing coded identification information corresponding to each visible light source.
Therefore, in order to obtain more accurate positioning information, the user terminal obtains the first position information of the current position through the positioning device, and also collects the first optical signal of the environment through the signal collecting device. Since most of the position information acquired by the positioning device is three-dimensional positioning information, the first position information acquired by the user terminal is three-dimensional position information. The first light signal generated by the visible light source is a Visible Light Communication (VLC) signal.
In an implementation example, a synchronous positioning and mapping (SLAM) device and a photodetector may be disposed in the user terminal, so that the user terminal obtains first position information of a location through a synchronous positioning and mapping method, and acquires a first optical signal of an environment through the photodetector. Specifically, taking any user terminal in the multi-robot-multi-person cooperative system as an example, the specific process of acquiring the first position information of the position where the user terminal is located and acquiring the first optical signal of the environment where the user terminal is located includes: the user terminal acquires first position information of a position through a synchronous positioning and mapping method; the user terminal collects a first optical signal of the environment through a photoelectric detector.
In detail, the SLAM device provided in the user terminal includes a main control chip and sensing measurement devices such as an odometer and an inertial sensor connected to the main control chip. When the user terminal acquires the first position information of the position through the synchronous positioning and mapping method, the main control chip of the SLAM device acquires data measured by each sensing measuring device to perform calculation of the synchronous positioning and mapping method, and a map of the environment where the local device is located is obtained and established, namely the acquired first position information of the position comprises positioning coordinates and the established map. Alternatively, the built Map may be a Grid Map (Occupancy Grid Map).
Because the VLC signal generated by the visible light source is a light signal, the user terminal collects the first light signal of the environment through the photodetector.
Step 12, the user terminal obtains first site information mapped by the first optical signal from the server; the first site information comprises building floor information, room number information and regional function division information;
because the code identifications corresponding to the visible light sources carrying the code identifications and the position information (such as longitude, latitude, building name, floor number, floor direction, room number, floor position, region function and the like) of the visible light sources are uniquely mapped in the working scene, and the mapping relation list is stored in advance in the server or a cloud end connected with the server. After the user terminal collects the first optical signal of the environment, the first site information mapped by the first optical signal can be inquired from the mapping relation list stored in the server. The first site information includes information such as latitude, building name, floor number, floor direction, room number, and regional function. In detail, the zone function may be divided according to users (e.g., a staff-dedicated zone or a public zone); or divided into functional areas (e.g., toilets, offices, etc.) according to the implemented functions.
Step 13, the user terminal obtains first geographical position information according to the first position information and the first site information;
and the user terminal performs information completion and position correction on the first position information according to the obtained first site information to obtain first geographical position information containing the site information, namely three-dimensional coordinate information.
And step 14, the user terminal generates the positioning information according to the first geographical position information and the equipment identifier of the user terminal, and sends the positioning information to the server.
And the user terminal generates the positioning information from the obtained first geographical position information and its own equipment identifier, and sends the positioning information to the server, completing one report of its position information. A minimal sketch of this reporting flow (steps 11 to 14) is given below.
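The following sketch is purely illustrative: it assumes a stubbed VLC decoder and hypothetical field names, and only shows how a SLAM position, a decoded light identifier and the server's site-information lookup could be combined into the positioning information.

    def decode_vlc(optical_samples):
        """Stub: decode the collected visible light communication signal into the
        coded identifier of the light source."""
        return "LAMP-0042"

    def build_positioning_info(device_id, slam_position, optical_samples, query_site_info):
        light_id = decode_vlc(optical_samples)
        site = query_site_info(light_id)        # step 12: site information from the server
        geo_position = {"coordinates": slam_position, **site}   # step 13: completion/correction
        return {"device_id": device_id, "geo_position": geo_position}   # step 14: report payload

    # Demo with a stubbed server-side mapping list.
    site_table = {"LAMP-0042": {"building": "A", "floor": 3, "room": "305", "function": "office"}}
    print(build_positioning_info("user-7", (12.4, 3.1, 9.0), b"samples", site_table.get))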
In one implementation example, the positioning information received by the server is generated and sent by any robot in the multi-robot-multi-person cooperative system. In order to track the position of each robot in the multi-robot-multi-person cooperative system, each robot can locate its own position at the preset time interval, obtain more accurate three-dimensional position information by combining the located position information with the collected visible light communication signal, generate positioning information from this three-dimensional position information, and send it to the server, completing one report of its position. Therefore, before the server receives the positioning information, the process in which each robot generates the positioning information is also included. Specifically, taking any robot in the multi-robot-multi-person cooperative system as an example, the process of generating the positioning information by the robot includes steps 21 to 24:
step 21, the robot acquires second position information of the position and collects a second optical signal of the environment; the second position information is three-dimensional position information; the optical signal is a visible light communication signal;
reasonably setting a plurality of electronic marks carrying code marks in a working scene of the multi-robot-multi-person cooperative system, wherein the code marks corresponding to the electronic marks are uniquely mapped with position information (such as longitude, latitude, building name, floor number, floor direction, room number, floor position, regional function and other information) of the electronic marks, and the mapping relation list is stored in advance in a server or a cloud end connected with the server; when the robot or the user terminal moves to the position where the electronic mark is located, the electronic mark is scanned and decoded to obtain the coded mark, the site information corresponding to the scanned electronic mark can be obtained through the server, the obtained site information is combined with the obtained three-dimensional positioning information, more accurate three-dimensional information can be obtained, and the positioning accuracy is improved. Alternatively, the electronic mark may be in the form of a QR (Quick Response) code or a Visible Light source (such as an LED lamp, a VLC lamp, or a beacon) that sends out a coded mark as a Visible Light Communication (VLC) signal. As shown in fig. 3, in the present embodiment, in a scene where N robots 301 and M user terminals 302 are located in the multi-robot-multi-person collaboration system, a plurality of visible light sources 303 are provided; each visible light source has a unique coded identification (IDcode); and each visible light source generates a light signal which comprises coded identification information corresponding to each visible light source.
Therefore, in order to obtain more accurate positioning information, the robot acquires second position information of the current position through the positioning device and acquires a second optical signal of the environment through the signal acquisition device. Since most of the position information acquired by the positioning device is three-dimensional position information, the second position information acquired by the robot is three-dimensional position information. The second light signal generated by the visible light source is a Visible Light Communication (VLC) signal.
In an implementation example, a synchronous positioning and mapping (SLAM) device and a photodetector may be disposed in the robot, so that the robot achieves acquisition of second position information of the position by the positioning device, and achieves acquisition of a second optical signal of the environment by the photodetector. Specifically, taking any robot in the multi-robot-multi-person cooperative system as an example, the specific process of acquiring the second position information of the position where the robot is located and acquiring the second optical signal of the environment where the robot is located includes: the robot acquires second position information of the position through a synchronous positioning and mapping method; the robot collects a second optical signal of the environment through the photoelectric detector.
In detail, the SLAM device provided in the robot includes a main control chip and sensing measurement devices such as an odometer and an inertial sensor connected to the main control chip. When the robot acquires the second position information of the position through the synchronous positioning and mapping method, the main control chip of the SLAM device acquires data measured by each sensing measuring device to perform calculation of the synchronous positioning and mapping method, and a map of the environment where the robot is located is obtained and established, namely the acquired second position information of the position comprises positioning coordinates and the established map. Alternatively, the built Map may be a Grid Map (Occupancy Grid Map).
Because the VLC signal generated by the visible light source is a light signal, the robot acquires a second light signal of the environment through the photoelectric detector.
Step 22, the robot acquires second site information mapped by the second optical signal from the server; the second site information comprises building floor information, room number information and regional function division information;
the code identifications corresponding to the visible light sources with the code identifications and the position information (such as longitude, latitude, building name, floor number, floor direction, room number, floor position, regional function and the like) of the visible light sources are uniquely mapped in a working scene, and the mapping relation list is stored in advance in a server or a cloud end connected with the server. After the robot collects the second optical signal of the environment, the second site information mapped by the second optical signal can be inquired from the mapping relation list stored in the server. The second site information includes information such as latitude, building name, floor number, floor direction, room number, and regional functions. In detail, the zone function may be divided according to users (e.g., a staff exclusive zone or a public zone); or divided into functional areas (e.g., toilets, offices, etc.) according to the implemented functions.
Step 23, the robot obtains second geographic position information according to the second position information and the second site information;
and the robot performs information completion and position correction on the second position information according to the obtained second site information to obtain second geographic position information containing the site information, namely three-dimensional coordinate information.
And step 24, the robot generates the positioning information according to the second geographic position information and the equipment identifier of the robot, and sends the positioning information to the server.
And the robot generates positioning information according to the obtained second geographic position information and the equipment identifier of the robot, and sends the positioning information to the server to finish the reporting of the position information once.
In one implementation example, the mapping relations between the different visible light (VLC) signals and the site information stored in the server are extracted from the construction drawings of the working scene of the multi-robot-multi-person cooperative system and stored in advance in the server or in the cloud. For example, if the working scene of the multi-robot-multi-person collaboration system is an entire building, the site information may correspond to the floor information, room number information and area function information of the building. If the optical signals are all sent by visible light sources (such as LED lamps, beacons, etc.) installed in the building, the construction drawing of the building records the detailed position of each visible light source, including the floor number and the position coordinates within the floor. Therefore, the mapping relations between the different optical signals and the site information can be obtained by extracting the position information of the visible light sources that generate the optical signals from the construction drawings of the working scene, establishing the mapping relation between the optical signal generated by each visible light source and the position information of that light source, generating a mapping relation list, and storing it in the server or in the cloud. Specifically, the VLC coded identification (ID) of each visible light source is stored in the cloud together with the physical location information it maps to, including longitude, latitude, building, floor, room number and area function information in the map of the building. A minimal sketch of building such a mapping list is given below.
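The sketch below illustrates building such a mapping list from hypothetical drawing records; the record fields and the two example entries are assumptions, standing in for whatever format the construction drawings actually use.

    # Hypothetical records extracted from the construction drawings: one entry per
    # installed visible light source.
    drawing_records = [
        {"light_id": "LAMP-0001", "longitude": 114.06, "latitude": 22.54, "building": "A",
         "floor": 1, "room": "101", "function": "public", "xyz": (2.0, 5.0, 3.0)},
        {"light_id": "LAMP-0002", "longitude": 114.06, "latitude": 22.54, "building": "A",
         "floor": 2, "room": "201", "function": "office", "xyz": (2.0, 5.0, 6.5)},
    ]

    def build_mapping_list(records):
        """Map each light's coded identification (ID) to its physical location
        information, ready to be stored in the server or the cloud."""
        return {r["light_id"]: {k: v for k, v in r.items() if k != "light_id"}
                for r in records}

    mapping = build_mapping_list(drawing_records)
    print(mapping["LAMP-0002"]["floor"])   # -> 2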
In another implementation example, when the electronic markers in the working scene of the system are installed for the first time and no drawing data record exists, the synchronous positioning and mapping function of the robots can be used to complete the construction of the new map and the marking of the positions of the electronic markers in the new map. The mapping relations between different optical signals and the site information stored in the server can thus be constructed by the robots in the multi-robot-multi-person cooperation system. Specifically, before any robot or user terminal in the system acquires the location information mapped by an optical signal from the server, the process of constructing the mapping relation between the optical signal and the location information specifically includes steps 31 to 33:
step 31, if the robot acquires a third optical signal, acquiring position coordinate information of the position where the robot is located; the position coordinate information comprises position coordinates and third site information; the third site information comprises building floor information, room number information and regional function division information;
when the electronic mark in the working scene of the system is installed for the first time and no drawing data is recorded, a large amount of engineering time is consumed if the electronic mark is recorded in a manual mode, and the cost is high. In order to solve the technical problem, a plurality of robots in the system can patrol on a new map and simultaneously detect optical signals, and when the robots acquire a third optical signal, position coordinate information of the current position of the robots is acquired, so that the acquired position coordinate information is the position of the electronic mark generating the third optical signal. Specifically, the SLAM device arranged in the robot comprises a main control chip and sensing measurement devices such as an odometer and an inertial sensor which are connected with the main control chip. If the third optical signal is acquired, when the robot acquires the position coordinate information of the current position by a synchronous positioning and mapping method, the main control chip of the SLAM device acquires data measured by each sensing measuring device to perform calculation of the synchronous positioning and mapping method, and a map of the environment where the robot is located is obtained and established, namely the acquired position coordinate information of the location includes position coordinates and a constructed new map. Meanwhile, the robot can also acquire building floor information, room number information and regional function division information of the position according to a design drawing of a scene to acquire third site information, so that the position coordinate information further comprises the third site information.
Step 32, the robot constructs a mapping relation between the coding information of the third optical signal and the position coordinate information;
and the robot decodes the acquired third optical signal to obtain a code identification number corresponding to the electronic mark generating the third optical signal, and constructs a mapping relation between the code identification number of the acquired third optical signal and the obtained position coordinate information. Specifically, the presentation form of the new Map is that the established new Map can be a Grid Map (Occupancy Grid Map) containing detected electronic tags; different pixel values in the grid map represent the area with the obstacle (pixel value is 1), the free area (pixel value is 0), and the emark area (pixel value is-1). In detail, the occupied grid map with the electronic marker is in a pgm picture format, and records the position of a starting point of the map, the direction of the map, the resolution of pixels and physical distances of an image, the positions of the electronic markers, a code Identification (ID) corresponding to each electronic marker position, and the position of an obstacle.
And step 33, the robot sends the mapping relation to the server for storage.
And after the robot has constructed the mapping relation between the coded information of the third optical signal and the position coordinate information, it sends the mapping relation to the server for storage. For example, fig. 4 is a schematic diagram, taking one of the robots in the system as an example, of a robot detecting optical signals to construct a new map. The robot is placed at a starting point of known position and orientation on the digital map. Optionally, the starting point may be calibrated using an electronic marker (e.g., a VLC signal). When the robot starts to move, its position is estimated from the starting point by a SLAM device based on sensors (IMU/LiDAR/Time-of-Flight). When the robot passes under an electronic marker (such as a VLC lamp), its receiver detects the third light signal, the ID of the lamp generating the third light signal is obtained by decoding the collected signal, and the robot's current position coordinate information is mapped onto the received VLC lamp ID. When the robot has covered all areas, the position of every lamp in the area has been mapped to its corresponding lamp ID. This mapping of light IDs to positions and orientations is stored in a database for use by the users of the user terminals and for navigation by the robots. A minimal sketch of such a survey pass is given below.
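The following sketch illustrates one such survey pass on a toy grid. The pixel values 1, 0 and -1 follow the occupancy-grid convention described above, while everything else (the function names, grid size, path and stubbed detection/pose functions) is an assumption made for illustration.

    FREE, OBSTACLE, MARKER = 0, 1, -1   # occupancy grid pixel values used above

    def survey(grid, path, detect_light, get_pose):
        """While the robot patrols `path`, map every detected lamp ID to the pose
        estimated by SLAM and mark the corresponding cell in the occupancy grid."""
        light_map = {}
        for cell in path:
            light_id = detect_light(cell)             # photodetector + VLC decoding (stub)
            if light_id is not None:
                x, y = cell
                grid[y][x] = MARKER                   # electronic-marker area (-1)
                light_map[light_id] = get_pose(cell)  # lamp position (and orientation)
        return light_map

    # Toy example: a 3x3 free grid with one lamp detected at cell (1, 1).
    grid = [[FREE] * 3 for _ in range(3)]
    path = [(x, y) for y in range(3) for x in range(3)]
    lights = survey(grid, path,
                    lambda c: "LAMP-0042" if c == (1, 1) else None,
                    lambda c: {"x": float(c[0]), "y": float(c[1]), "heading_deg": 0.0})
    print(lights, grid[1][1])   # -> {'LAMP-0042': {...}} -1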
In another embodiment, to avoid the complicated operation and long response time caused by an excessively complicated coding scheme for the optical signal, the optical signal may be combined with a wireless signal so that the optical signal codes can be reused. The mapping relation between the optical signal and the site information may then be a mapping relation between the optical signal, the wireless signal and the site information, and the method for constructing the mapping relation between the optical signal, the wireless signal and the site information specifically includes steps 41 to 44:
specifically, if the working scene of the system is an indoor environment with a huge space, a large number of VLC lamps need to be arranged in the indoor environment to meet the positioning requirement. To ensure that the code identification (ID code) of each VLC light in a scene is not repeated, the VLC signal code length may be long, increasing the computational complexity and response time of the mobile device. To address this issue, the optical signal may also be combined with a wireless signal to enable the optical signal encoding to be reused. Specifically, a plurality of wireless signals are arranged in an indoor scene, so that the coverage area of each wireless signal can divide the scene into different areas; the limited VLC light code identification can be reused within the coverage range of each wireless signal. Optionally, the wireless signal may be a bluetooth or WIFI signal, etc. Fig. 5 is a schematic diagram illustrating the combination of optical signals and wireless signals. When the wireless signals are bluetooth signals, several bluetooth signals (e.g., bluetooth signals a-I in the figure) are arranged in an indoor scene so that the coverage area of each wireless signal can divide the scene into different areas. As shown, a limited VLC light code identification ID (e.g., codes 1-9) can be reused within the coverage area of each wireless signal. VLC IDs are distributed in a region in such a way that they are far apart from the same twin ID in a neighboring region. This particular arrangement reduces the probability of false triggers due to unwanted changes in the bluetooth signal RSS.
Step 41, if the robot collects a third optical signal, receiving a wireless signal of the environment to obtain a first wireless signal list, and acquiring position coordinate information of the position; the wireless signal list comprises the signal strength of each received wireless signal;
Because VLC lamps in different regions lie within different wireless signal coverage ranges, when the robot collects the third optical signal it obtains the position coordinate information of its current position and also receives the wireless signals of its environment to obtain the first wireless signal list.
Step 42, the robot determines the wireless signal with the strongest signal strength in the first wireless signal list as a first target signal;
Because the third optical signal is located inside the coverage area of its corresponding wireless signal, that wireless signal should be the strongest one in the first wireless signal list. The robot therefore determines the wireless signal with the strongest signal strength in the first wireless signal list as the first target signal, thereby identifying the wireless signal corresponding to the region where the third optical signal is located.
Step 43, the robot constructs a mapping relation between the coded information of the third optical signal, the position coordinate information and the first target signal;
The robot constructs a mapping relation among the coded information of the third optical signal, the acquired position coordinate information, and the first target signal corresponding to the region where the VLC lamp that generated the third optical signal is located.
And step 44, the robot sends the mapping relation to the server for storage.
The robot sends the mapping relation to the server for storage, so that the VLC identifier (ID) of each lamp and the MAC address of its radio-frequency (Bluetooth/WiFi) signal are stored in the server or cloud together with the lamp's physical location information according to the map of the building (longitude, latitude, building, floor, room number, regional function information, etc.).
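One possible shape for the record uploaded in step 44 is sketched below; the field names and values are assumptions for illustration, not the schema actually used by the server.

lamp_record = {
    "vlc_id": 7,                          # reused code identification of the lamp
    "rf_mac": "AA:BB:CC:DD:EE:01",        # MAC address of the first target signal (Bluetooth/WiFi)
    "location": {
        "longitude": 114.06, "latitude": 22.54,
        "building": "Tower 1", "floor": 3, "room": "3-07",
        "region_function": "loading bay",
    },
}
# The server can index such records by the pair (rf_mac, vlc_id), which remains unique
# even though vlc_id alone is reused across regions.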
As can be seen from the foregoing embodiment, the mapping relationship between the optical signal and the site information may be a joint mapping among the optical signal, the wireless signal, and the site information. Taking any user terminal in the multi-robot-multi-person cooperative system as an example, the process of generating the positioning information by the user terminal further includes steps 51 to 55:
step 51, the user terminal acquires first position information of the position, collects a first optical signal of the environment and receives a wireless signal of the environment to obtain a second wireless signal list;
Because VLC lamps in different regions lie within different wireless signal coverage ranges, when the user terminal collects the first optical signal it obtains the first position information of its current position and also receives the wireless signals of its environment to obtain the second wireless signal list.
Step 52, the user terminal determines the radio signal with the strongest signal strength in the second radio signal list as a second target signal;
Because the first optical signal is located inside the coverage area of its corresponding wireless signal, that wireless signal should be the strongest one in the second wireless signal list. The user terminal therefore determines the wireless signal with the strongest signal strength in the second wireless signal list as the second target signal, thereby identifying the wireless signal corresponding to the region where the first optical signal is located.
Step 53, the user terminal obtains the site information mapped by the first optical signal and the second target signal from the server;
In the working scene, each visible light source carrying a code identification is uniquely mapped to its code identification, its position information (such as longitude, latitude, building name, floor number, floor direction, room number, floor position, regional function, and other information), and the wireless signal of the region where it is located, and this mapping relation list is stored in advance in the server or in the cloud connected to the server. After the user terminal collects the first optical signal and the second target signal of its environment, the site information correspondingly mapped by the first optical signal and the second target signal can be queried from the mapping relation list stored in the server. This site information corresponds to the first site information, including longitude, latitude, building name, floor number, floor direction, room number, regional function, and the like.
Step 54, the user terminal obtains first geographical location information according to the first location information and the first site information;
and the user terminal performs information completion and position correction on the first position information according to the obtained first site information to obtain first geographical position information containing the site information, namely three-dimensional coordinate information.
And step 55, the user terminal generates the positioning information according to the first geographical position information and the equipment identifier of the user terminal, and sends the positioning information to the server.
The user terminal generates the positioning information according to the obtained first geographical position information and its own device identifier, and sends the positioning information to the server, thereby completing one report of its position information.
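Steps 51 to 55 might look roughly like the following on the user terminal; query_site_info and report are hypothetical server interfaces assumed only for this sketch.

def report_position(terminal_id, server, vlc_frame, rss_by_beacon, local_pose):
    # Step 52: the strongest wireless signal is the second target signal
    target_mac = max(rss_by_beacon, key=rss_by_beacon.get)
    # Step 53: site information mapped by (first optical signal, second target signal)
    site = server.query_site_info(vlc_code=vlc_frame.decoded_id, rf_mac=target_mac)
    # Step 54: complete and correct the locally estimated position with the site information
    geo = {**local_pose, **site}  # first geographical position information
    # Step 55: positioning information = geographical position + device identifier
    server.report({"device_id": terminal_id, "geo": geo})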
As can be seen from the foregoing embodiment, the mapping relationship between the optical signal and the site information may be a joint mapping among the optical signal, the wireless signal, and the site information. Taking any robot in the multi-robot-multi-person cooperative system as an example, the process of generating the positioning information by the robot further includes steps 61 to 65:
step 61, the robot acquires second position information of the position, collects a second optical signal of the environment and receives a wireless signal of the environment to obtain a third wireless signal list;
Because VLC lamps in different regions lie within different wireless signal coverage ranges, when the robot collects the second optical signal it obtains the second position information of its current position and also receives the wireless signals of its environment to obtain the third wireless signal list.
Step 62, the robot determines the wireless signal with the strongest signal strength in the third wireless signal list as a third target signal;
Because the second optical signal is located inside the coverage area of its corresponding wireless signal, that wireless signal should be the strongest one in the third wireless signal list. The robot therefore determines the wireless signal with the strongest signal strength in the third wireless signal list as the third target signal, thereby identifying the wireless signal corresponding to the region where the second optical signal is located.
Step 63, the robot acquires the site information mapped by the second optical signal and the third target signal from the server;
In the working scene, each visible light source carrying a code identification is uniquely mapped to its code identification, its position information (such as longitude, latitude, building name, floor number, floor direction, room number, floor position, regional function, and other information), and the wireless signal of the region where it is located, and this mapping relation list is stored in advance in the server or in the cloud connected to the server. After the robot collects the second optical signal and the third target signal of its environment, the site information correspondingly mapped by the second optical signal and the third target signal can be queried from the mapping relation list stored in the server. This site information corresponds to the second site information, including longitude, latitude, building name, floor number, floor direction, room number, regional function, and the like.
Step 64, the robot obtains second geographic position information according to the second position information and the second site information;
and the robot performs information completion and position correction on the second position information according to the obtained second site information to obtain second geographic position information containing the site information, namely three-dimensional coordinate information.
And step 65, the robot generates the positioning information according to the second geographic position information and the equipment identifier of the robot, and sends the positioning information to the server.
The robot generates the positioning information according to the obtained second geographic position information and its own device identifier, and sends the positioning information to the server, thereby completing one report of its position information.
S120, updating the positioning mark of the first equipment identifier in the shared map to the position of the geographical position information, and sending the updated shared map to all the robots and the user terminal; the shared map comprises positioning marks of N robots and M user terminals; the shared map is a three-dimensional coordinate map.
Specifically, the shared map in the server is a three-dimensional electronic map that contains three-dimensional coordinate information and site information, and it can be generated from the parameters of the construction drawings of the operating scene of the multi-robot-multi-person cooperation system together with the map obtained by the robots' mapping. The positioning information received by the server can be generated by any robot or any user terminal in the system; the server updates the positions of the positioning marks of each user terminal and each robot on the shared map according to the positioning information they send, thereby tracking the positions of all robots and user terminals in the shared map in real time. Each updated shared map is sent to all robots and all user terminals in the system, so that every robot and every user terminal knows the positions of all devices in the system, realizing position sharing between the robots and the user terminals. Further, the positions of the plurality of robots and the plurality of user terminals can be displayed simultaneously on the map of a user terminal or of a robot.
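A sketch of the server side of S120 follows, under the assumption that the shared map can be reduced to a dictionary of device identifiers to positioning marks and that each connected device exposes a send method; neither assumption is defined by this application.

shared_map = {"marks": {}}  # device_id -> geographical position information

def on_positioning_info(msg, robots, user_terminals):
    device_id, geo = msg["device_id"], msg["geo"]
    shared_map["marks"][device_id] = geo            # move this device's positioning mark
    for peer in list(robots) + list(user_terminals):
        peer.send(shared_map)                       # broadcast the updated shared map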
In one implementation example, the construction of the shared map may be accomplished by controlling several robots in a multi-robot-multi-person collaborative system to map and scan the work scene. Specifically, before updating the positioning mark of the first device identifier in the shared map to the position of the geographic position information, and sending the updated shared map to all the robots and the user terminals, the method further includes steps 71 to 74:
step 71, controlling a plurality of robots to map and draw a target scene, and performing laser scanning on the target scene to obtain map data and scanning data;
Specifically, each robot is provided with a mapping device such as a SLAM device, which includes a main control chip and sensing and measuring devices, such as an odometer and an inertial sensor, connected to the main control chip. When a plurality of robots are controlled to map the target scene, each robot maps by the simultaneous localization and mapping method. In detail, each robot is controlled to patrol the target scene; the main control chip of the robot's SLAM device acquires the data measured by each sensing and measuring device during the patrol and processes them by the simultaneous localization and mapping method to build a map of the environment in which the robot is located, i.e., the map data. Optionally, the target scene may be the working scene in which the multi-robot-multi-person collaboration system operates. Each robot is also provided with a three-dimensional laser scanning device (such as a laser or radar scanner); when a plurality of robots are controlled to map the target scene, the three-dimensional laser scanning devices of all the robots are switched on synchronously. As the robots travel through the target scene, their three-dimensional laser scanning devices scan the contours and boundaries in it, such as the contours and boundaries of walls, rooms, and other facilities, and record the three-dimensional coordinates, reflectivity, texture, and other information of a large number of dense points on the surfaces of the measured objects to obtain the scanning data.
Step 72, building an architectural information model of the target scene according to the map data and the scanning data;
A building information model (Building Information Modeling, BIM) of the target scene is established according to the map data and the scanning data obtained in the above steps, thereby obtaining a three-dimensional building model.
Step 73, acquiring parameter information of the target scene; the parameter information comprises building floor information, room number information and regional function division information;
To serve its building functions, the target scene usually has manually defined site information, such as building name, floor number, floor direction, room number, floor position, and regional function. The parameter information of the target scene, namely building floor information, room number information, regional function division information, and the like, can be acquired from the construction drawings or design blueprints.
And step 74, correspondingly marking the parameter information in the building information model to obtain the shared map.
The parameter information of the target scene is entered into the building information model, so that the building floor information, room number information, regional function division information, and other information of the target scene are accurately marked in the building information model to obtain the shared map; the constructed shared map thus contains both three-dimensional coordinate information and site information. For example, if the target scene has several rooms, the parameter information is labeled correspondingly in the building information model, so that each room in the building information model of the target scene carries labels of building floor information, room number information, and regional function division information.
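Purely as an illustration of step 74 (the dictionary layout is an assumption, not the BIM format actually used), attaching the parameter information to the rooms of the building information model could look like:

# Geometry from the mapping and laser scanning of steps 71-72
bim = {"rooms": {"3-07": {"bbox": [(0.0, 0.0, 9.0), (5.0, 4.0, 12.0)]}}}

# Parameter information taken from the construction drawings (step 73)
room_parameters = {"3-07": {"floor": 3, "room_no": "3-07", "function": "storage"}}

for room_id, params in room_parameters.items():
    bim["rooms"].setdefault(room_id, {}).update(params)  # mark floor / room number / function

shared_map_draft = bim  # now carries three-dimensional coordinates plus site information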
In another implementation of establishing the shared map, a SLAM map obtained in advance by a robot mapping the site is used as a preliminary draft of the shared map; scanning information is then obtained by laser-scanning the walls and facilities in the scene, and the contours and boundaries on the shared map, such as the contours and boundaries of walls, rooms, and other facilities, are generated in the draft from the obtained scanning information. The shared map is then corrected and completed according to the construction drawing parameters of the site, thereby completing its construction, and the shared map becomes a three-dimensional electronic map that contains longitude, latitude, building name, floor number, floor direction, room number, floor position, and other information. Compared with drawing the map entirely by hand, this greatly reduces the time needed to construct the human-machine shared map and improves the map accuracy.
In one implementation example, to deal with changes in the target scene such as furniture rearrangement or indoor renovation, the robots can be used to laser-scan the indoor environment of the target scene at regular intervals and update the shared map in time according to the scanning data, so as to ensure the timeliness and accuracy of the shared map. Specifically, after the parameter information is correspondingly marked in the building information model to obtain the shared map, the method further includes: controlling a plurality of robots to laser-scan the target scene at regular intervals to obtain scene scanning data; and updating the shared map according to the scene scanning data.
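A minimal sketch of the periodic rescan, assuming scan and merge interfaces that are not defined by this application:

import time

def keep_map_fresh(robots, shared_map, merge, interval_s=24 * 3600, stop=lambda: False):
    while not stop():
        scans = [robot.scan() for robot in robots]  # scene scanning data from each robot
        merge(shared_map, scans)                    # update walls/furniture that have changed
        time.sleep(interval_s)                      # wait until the next scheduled scan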
S130, receiving assistance request information; the assistance request information comprises assistance request instructions and a second device identification.
In an application scenario, a worker may use a user terminal to generate assistance request information at any place and send it to the server, which then generates an assistance job policy. As a detailed example, when the application scene of the system is a shopping mall, if a worker patrolling the scene finds an incident that needs assistance, such as a fire or an injured person, the worker may select the assistance request instruction corresponding to the matter to be assisted on the user terminal, generate the assistance request information by combining it with the device identifier of the user terminal, and send the assistance request information to the server.
And S140, obtaining a positioning coordinate according to the positioning mark of the second equipment identifier in the shared map.
Because the shared map in the server tracks the positions of each robot and each user terminal in real time, the positioning coordinate, namely the place where the assistance request instruction was initiated, can be quickly obtained from the positioning mark of the second equipment identifier (contained in the assistance request information) in the shared map, which improves the response rate of the system.
S150, determining the robot and/or the user terminal meeting the preset conditions in the shared map as a target device.
Because the shared map in the user terminal can also track the positions of each robot and each user terminal in real time, when a worker uses the user terminal to generate assistance request information at any place, the worker can also select the target devices required for assistance, such as several robots and/or user terminals closest to that user terminal in the shared map, according to the positions of the robots and user terminals displayed by the shared map, so that the generated assistance request information also includes the equipment identification information of the specified target devices. In this case, after the server receives the assistance request information, the process of determining that the robot and/or the user terminal meeting the preset condition in the shared map is the target device is specifically: determining that the robot and/or the user terminal corresponding to the equipment identification information included in the assistance request information is the target device, thereby completing the determination of the target devices required by the assistance request instruction.
In an embodiment, in order to respond to the cooperative work instruction as soon as possible, workers using other user terminals that are close to the initiation location (i.e., the positioning coordinate) of the assistance request instruction need to be notified in time to arrive at the site for cooperative work, so the preset condition may be set to select a preset number of robots and/or user terminals, other than the requesting user terminal, that are closest to the positioning coordinate. Determining that the robot and/or the user terminal meeting the preset condition in the shared map is the target device may then specifically be: determining a preset number of robots and/or user terminals closest to the positioning coordinate in the shared map as the target devices. Optionally, the preset number may be fixed in advance or may vary with the type of the assistance request instruction.
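The nearest-device form of the preset condition could be evaluated as in the sketch below, which assumes (purely for illustration) that the positioning marks are plain (x, y, z) coordinates in the shared map.

import math

def nearest_targets(marks, origin, preset_number, exclude=()):
    # marks: device_id -> (x, y, z); origin: positioning coordinate of the assistance request
    candidates = {d: p for d, p in marks.items() if d not in exclude}
    ranked = sorted(candidates, key=lambda d: math.dist(candidates[d], origin))
    return ranked[:preset_number]

# e.g. the three devices closest to the request, excluding its initiator:
# nearest_targets(shared_map["marks"], request_coord, preset_number=3, exclude={requester_id})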
And S160, generating a cooperative work instruction according to the positioning coordinate and the assistance request instruction.
And after the target equipment required by the assistance request instruction is determined, the server generates a cooperative work instruction according to the positioning coordinate and the assistance request instruction.
S170, sending the cooperative work instruction to the target device so that the robot corresponding to the target device and/or a user of the user terminal can reach the positioning coordinate to perform cooperative work.
The server sends the generated cooperative work instruction to the determined target devices, so that the robots and/or the users of the user terminals that receive the cooperative work instruction go to the positioning coordinate and perform cooperative work according to the control strategy of the cooperative work instruction. In this way, when an assistance request instruction is initiated, robots and workers using user terminals are promptly arranged to respond to the instruction together and carry out the cooperative operation.
The embodiment of the invention provides a multi-robot-multi-person cooperative control method, which is applied to a multi-robot-multi-person cooperative system; the system comprises N robots and M user terminals which are in communication connection with a server; N is more than or equal to 1; M is more than or equal to 1. The server receives positioning information; the positioning information comprises geographical position information and a first equipment identifier; each user terminal and each robot has a unique equipment identifier. By receiving the positioning information sent periodically by each user terminal and each robot, the position changes of all user terminals and robots in the cooperative system can be tracked in real time. The server updates the positioning mark of the first equipment identifier in the shared map to the position of the geographical position information and sends the updated shared map to all the robots and user terminals; that is, the server updates the positions of the positioning marks of each user terminal and each robot on the shared map according to the positioning information they send, thereby tracking the positions of all robots and user terminals in the shared map in real time. The shared map includes the positioning marks of the N robots and the M user terminals. The server then receives assistance request information, which comprises an assistance request instruction and a second equipment identifier, and obtains a positioning coordinate from the positioning mark of the second equipment identifier in the shared map; after receiving assistance request information sent by any user terminal or robot, the server can quickly find, from the shared map, the position from which the request was sent and obtain the positioning coordinate, which improves the response rate of the system. The server determines the robots and/or user terminals meeting preset conditions in the shared map as the target devices, generates a cooperative work instruction according to the positioning coordinate and the assistance request instruction, and sends the cooperative work instruction to the target devices so that the robots corresponding to the target devices and/or the users of the user terminals reach the positioning coordinate for cooperative work. Thus, when an assistance request instruction is initiated, robots and workers using user terminals are promptly arranged to respond to the instruction together and carry out the cooperative operation.
Example two
Fig. 6 shows a multi-robot-multi-person cooperative control apparatus according to a second embodiment of the present invention. On the basis of the first embodiment, the embodiment of the present invention further provides a multi-robot-multi-person cooperative control apparatus, including: a processor 601, a memory 602, and a computer program 603, such as a program for a multi-robot-multi-person cooperative control method, stored in said memory 602 and executable on said processor 601. The processor 601, when executing the computer program 603, implements the steps in the above-described embodiment of the multi-robot-multi-person cooperative control method, such as steps S110 to S170 shown in fig. 1.
Illustratively, the computer program 603 may be partitioned into one or more modules that are stored in the memory 602 and executed by the processor 601 to complete the present application. The one or more modules may be a series of computer program instruction segments capable of performing specific functions, which describe the execution of the computer program 603 in the multi-robot-multi-person cooperative control apparatus. For example, the computer program 603 may be divided into a positioning information receiving module, a positioning updating module, an assistance request information receiving module, a positioning coordinate obtaining module, a target device determining module, a coordination instruction generating module, and an instruction sending module, where the specific functions of the modules are as follows:
the positioning information receiving module is used for receiving positioning information; the positioning information comprises geographical position information and a first equipment identifier; each user terminal and each robot have unique equipment identification;
the positioning updating module is used for updating the positioning mark of the first equipment identifier in the shared map to the position of the geographical position information and sending the updated shared map to all the robots and the user terminal; the shared map comprises positioning marks of N robots and M user terminals; the shared map is a three-dimensional coordinate map; n is more than or equal to 1; m is more than or equal to 1;
an assistance request information receiving module for receiving assistance request information; the assistance request information comprises an assistance request instruction and a second equipment identifier;
the positioning coordinate acquisition module is used for acquiring a positioning coordinate according to the positioning mark of the second equipment identifier in the shared map;
the target device determining module is used for determining the robot and/or the user terminal meeting preset conditions in the shared map as a target device;
a cooperative instruction generating module, configured to generate a cooperative work instruction according to the positioning coordinate and the assistance request instruction;
and the instruction sending module is used for sending the cooperative work instruction to the target device so as to enable the robot corresponding to the target device and/or a user of the user terminal to reach the positioning coordinate for cooperative work.
The multi-robot-multi-person cooperative control apparatus may include, but is not limited to, a processor 601, a memory 602, and a computer program 603 stored in the memory 602. Those skilled in the art will appreciate that fig. 6 is merely an example of a multi-robot-multi-person cooperative control apparatus and does not constitute a limitation on it: the apparatus may include more or fewer components than those shown, may combine some components, or may have different components; for example, the multi-robot-multi-person cooperative control apparatus may further include an input-output device, a network access device, a bus, and the like.
The Processor 601 may be a Central Processing Unit (CPU), other general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 602 may be an internal storage unit of the multi-robot-multi-person cooperative control apparatus, such as a hard disk or a memory of the apparatus. The memory 602 may also be an external storage device of the apparatus, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like. Further, the memory 602 may include both an internal storage unit and an external storage device of the apparatus. The memory 602 is used to store the computer program and the other programs and data required by the apparatus. The memory 602 may also be used to temporarily store data that has been output or is to be output.
EXAMPLE III
Fig. 7 is a schematic structural diagram of a multi-robot-multi-person collaboration system according to a third embodiment of the present invention. The system comprises N robots 72 and M user terminals 73 communicatively connected to a server 71; n is more than or equal to 1; m is more than or equal to 1; wherein,
the multi-robot-multi-person cooperative system executes the steps of the multi-robot-multi-person cooperative control method as described in the first embodiment.
Specifically, in order to realize that each user terminal and each robot in the multi-robot-multi-person cooperative system can receive optical communication signals (VLC), each user terminal and each robot are also provided with a visible light communication system level chip VLCSOC. As shown in fig. 8, the visible light communication system-on-chip VLCSOC 8, which implements the functions of receiving and encoding/decoding a light communication signal (VLC), may include a power supply (DC) 110, a wireless data source 130, a light emitting diode 150, a photosensor 170, a DC-DC power converter 250, a wireless communication unit 240, a security unit 180, and a VLC unit 140; where the VLC unit 140 includes a baseband DSP unit 220, an analog signal processing unit 230, a transmitter 140A, and a receiver 140B.
In detail, the wireless communication unit 240 is connected to the external data source 130 and is used to communicate with the data source 130 to transmit/receive data for backhaul communication and/or system control; the VLC unit 140 is connected to the LEDs 150 and the one or more photosensors 170; and the security unit 180 is connected to the wireless communication unit 240 and the VLC unit 140. The DC-DC power converter 250 is used to obtain power from the power supply (DC) 110 and supply it to the elements in the SOC 8.
The VLC unit 140 is operable to modulate the LEDs 150 to transmit information and/or to receive information via VLC through one or more photosensors 170. In one embodiment, SOC 8 may support VLC transmission and reception simultaneously, such that SOC 8 may be used for VLC sources (e.g., LED lights, signs, or other devices) as well as VLC receivers (e.g., handheld devices, etc.); furthermore, SOC 8 may also support two-way VLC devices. In the present embodiment, the VLC unit 140 includes a Transmitter (TX) circuit 140A and a Receiver (RX) circuit 140B. TX circuit 140A may comprise the baseband digital signal processing unit (baseband DSP unit) 220 and the analog signal processing unit 230 that drive the LED lamp 150. RX circuit 140B may receive signals from the photosensor 170 and detect VLC data in the received light.
VLC unit 140 and wireless communication unit 240 may be used in conjunction to communicate information to a device. The VLC unit 140 may, for example, transmit a quick response code (QR code), a Uniform Resource Locator (URL), or other data that may be used to access a greater amount of information. A larger amount of information may be transmitted through a wireless or wired network via the wireless communication unit 240.
The security unit 180 may ensure security of network and/or VLC data access, for example, the security unit 180 may include cryptographic hardware for encrypting data to be transmitted via VLC and/or decrypting data to be transmitted via VLC. Thus, sensitive data may be transmitted only to a particular user, while other recipients receiving such VLC data may not be able to decrypt the data. Likewise, data to be transmitted over the network may be encrypted by the security unit 180 and data received from the network (e.g., data to be transmitted via VLC) may be encrypted by the security unit 180.
The photosensor 170 can be any type of photosensor, for example, the photosensor can include a photodetector or a CMOS image sensor. The photodetector may be used for high bandwidth/data rate communications, while the CMOS image sensor may be used for low bandwidth/data rate communications. A given system may include one or more types of photosensors 170. Other photosensors may be used in other embodiments.
The LEDs 150 may be any kind of LED. In one embodiment, the LEDs 150 may be a large number of low-cost standard LEDs. VLC may be more readily accepted by the market through the combination of inexpensive LEDs 150 and the cost savings of the SOC 8 (as compared to discrete components). Over time, the SOC 8 may also benefit from Moore's law to reduce cost, increase performance, and the like. VLC may also be used in conjunction with low-cost wireless/wireline networks. In one embodiment, the LEDs 150 may be organic LEDs (OLEDs). The power supply may be a direct current (DC) power supply or an alternating current (AC) power supply, and power may be provided via a dedicated power cord or together with the data (e.g., via Power over Ethernet, PoE). In addition, data may be transmitted/received through a wireless or wired communication system.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only used for distinguishing one functional unit from another, and are not used for limiting the protection scope of the present application. For the specific working processes of the units and modules in the system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (15)

1. A multi-robot-multi-person cooperation control method is characterized in that the method is applied to a multi-robot-multi-person cooperation system; the system comprises N robots and M user terminals which are in communication connection with a server; n is more than or equal to 1; m is more than or equal to 1; taking the server as an execution subject, the method comprises the following steps:
receiving positioning information; the positioning information comprises geographical position information and a first equipment identifier; each user terminal and each robot have unique equipment identification;
updating the positioning mark of the first equipment identifier in the shared map to the position of the geographical position information, and sending the updated shared map to all the robots and the user terminal; the shared map comprises positioning marks of N robots and M user terminals; the shared map is a three-dimensional coordinate map;
receiving assistance request information; the assistance request information comprises an assistance request instruction and a second equipment identifier;
obtaining a positioning coordinate according to the positioning mark of the second equipment identifier in the shared map;
determining the robot and/or the user terminal meeting preset conditions in the shared map as a target device;
generating a cooperative work instruction according to the positioning coordinate and the assistance request instruction;
and sending the cooperative work instruction to the target device so as to enable the robot corresponding to the target device and/or a user of the user terminal to reach the positioning coordinate for cooperative work.
2. The multi-robot-multi-person cooperative control method according to claim 1, further comprising, before the receiving the positioning information:
the user terminal acquires first position information of a position where the user terminal is located and collects a first optical signal of an environment where the user terminal is located; the first position information is three-dimensional position information; the optical signal is a visible light communication signal;
the user terminal acquires first site information mapped by the first optical signal from the server; the first site information comprises building floor information, room number information and regional function division information;
the user terminal obtains first geographical position information according to the first position information and the first site information;
and the user terminal generates the positioning information according to the first geographical position information and the equipment identifier of the user terminal, and sends the positioning information to the server.
3. The multi-robot-multi-person cooperative control method according to claim 1, further comprising, before the receiving the positioning information:
the robot acquires second position information of the position and acquires a second optical signal of the environment; the second position information is three-dimensional position information; the optical signal is a visible light communication signal;
the robot acquires second site information mapped by the second optical signal from the server; the second site information comprises building floor information, room number information and regional function division information;
the robot obtains second geographical position information according to the second position information and the second site information;
and the robot generates the positioning information according to the second geographical position information and the equipment identifier of the robot, and sends the positioning information to the server.
4. A multi-robot-multi-person cooperative control method according to any one of claims 2 or 3, wherein before said receiving positioning information, further comprising:
if the robot acquires the third optical signal, acquiring position coordinate information of the position where the robot is located; the position coordinate information comprises position coordinates and third site information; the third site information comprises building floor information, room number information and regional function division information;
the robot constructs a mapping relation between the coding information of the third optical signal and the position coordinate information;
and the robot sends the mapping relation to the server for storage.
5. The multi-robot-multi-person cooperative control method according to claim 4, wherein a plurality of visible light sources are provided in a scene where N of the robots and M of the user terminals are located; each visible light source is provided with a unique coded identifier; and each visible light source generates a light signal containing coded identification information corresponding to each visible light source.
6. The multi-robot-multi-person cooperative control method according to claim 1, wherein the determining that the robot and/or the user terminal satisfying a preset condition in the shared map is a target device comprises:
and determining a preset number of robots and/or user terminals closest to the positioning coordinates in the shared map as target devices.
7. The multi-robot-multi-person cooperative control method according to claim 2, wherein the acquiring, by the user terminal, first location information of a location where the user terminal is located and collecting a first optical signal of an environment where the user terminal is located comprises:
the user terminal acquires first position information of a position through a synchronous positioning and mapping method;
the user terminal collects a first optical signal of the environment through a photoelectric detector.
8. The multi-robot-multi-person cooperative control method according to claim 3, wherein the robot acquires second position information of a location where the robot is located and collects a second optical signal of an environment where the robot is located, and the method comprises:
the robot acquires second position information of the position through a synchronous positioning and mapping method;
the robot collects a second optical signal of the environment through the photoelectric detector.
9. The multi-robot-multi-person cooperative control method of claim 4, wherein the acquiring position coordinate information of the position where the robot is located if the robot acquires the third optical signal comprises:
if the robot acquires a third optical signal, receiving a wireless signal of the environment to obtain a first wireless signal list, and acquiring position coordinate information of the position; the wireless signal list comprises the signal strength of each received wireless signal;
the robot determines the wireless signal with the strongest signal intensity in the first wireless signal list as a first target signal;
the robot constructs a mapping relation between the coded information of the third optical signal and the position coordinate information, and the mapping relation comprises the following steps:
and the robot constructs a mapping relation between the coded information of the third optical signal, the position coordinate information and the first target signal.
10. The multi-robot-multi-person cooperative control method according to claim 9, wherein the user terminal acquires first location information of a location where the user terminal is located and collects a first optical signal of an environment where the user terminal is located, further comprising:
the user terminal acquires first position information of the position, acquires a first optical signal of the environment and receives a wireless signal of the environment to obtain a second wireless signal list;
the user terminal obtains first site information mapped by the first optical signal from the server, and the method comprises the following steps:
the user terminal determines the wireless signal with the strongest signal strength in the second wireless signal list as a second target signal;
and the user terminal acquires the site information mapped by the first optical signal and the second target signal from the server.
11. The multi-robot-multi-person cooperative control method according to claim 9, wherein the robot acquires second position information of a position where the robot is located and collects a second optical signal of an environment where the robot is located, further comprising:
the robot acquires second position information of the position, acquires a second optical signal of the environment and receives a wireless signal of the environment to obtain a third wireless signal list;
the robot acquires second site information mapped by the second optical signal from the server, and the method comprises the following steps:
the robot determines the wireless signal with the strongest signal strength in the third wireless signal list as a third target signal;
and the robot acquires the site information mapped by the second optical signal and the third target signal from the server.
12. The multi-robot-multi-person cooperative control method according to claim 1, further comprising, before updating the positioning mark of the first device identifier in the shared map to a position where the geographic position information is located and sending the updated shared map to all the robots and the user terminals:
controlling a plurality of robots to map a target scene, and performing laser scanning on the target scene to obtain map data and scanning data;
building an architectural information model of the target scene according to the map data and the scanning data;
acquiring parameter information of the target scene; the parameter information comprises building floor information, room number information and regional function division information;
and correspondingly marking the parameter information in the building information model to obtain the shared map.
13. The multi-robot-multi-person cooperative control method according to claim 12, wherein after the parameter information is correspondingly labeled in the building information model to obtain the shared map, the method further comprises:
controlling a plurality of robots to perform laser scanning on the target scene at regular time to obtain scene scanning data;
and updating the shared map according to the scene scanning data.
14. A multi-robot-multi-person cooperative control apparatus, the apparatus comprising:
the positioning information receiving module is used for receiving positioning information; the positioning information comprises geographic position information and a first equipment identifier; each user terminal and each robot have unique equipment identification;
the positioning updating module is used for updating the positioning mark of the first equipment identifier in the shared map to the position of the geographic position information and sending the updated shared map to all the robots and the user terminals; the shared map comprises positioning marks of N robots and M user terminals; the shared map is a three-dimensional coordinate map; n is more than or equal to 1; m is more than or equal to 1;
an assistance request information receiving module for receiving assistance request information; the assistance request information comprises an assistance request instruction and a second equipment identifier;
the positioning coordinate acquisition module is used for acquiring a positioning coordinate according to the positioning mark of the second equipment identifier in the shared map;
the target device determining module is used for determining the robot and/or the user terminal meeting the preset conditions in the shared map as a target device;
the cooperative instruction generating module is used for generating a cooperative work instruction according to the positioning coordinate and the assistance request instruction;
and the instruction sending module is used for sending the cooperative work instruction to the target device so as to enable the robot corresponding to the target device and/or a user of the user terminal to reach the positioning coordinate for cooperative work.
15. A multi-robot-multi-person cooperation system is characterized by comprising N robots and M user terminals which are in communication connection with a server; n is more than or equal to 1; m is more than or equal to 1; wherein,
the multi-robot-multi-person cooperative system performs the steps of the multi-robot-multi-person cooperative control method according to any one of claims 1 to 13.
CN202110468727.2A 2021-04-28 2021-04-28 Multi-robot-multi-person cooperative control method, device and system Active CN115248039B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110468727.2A CN115248039B (en) 2021-04-28 2021-04-28 Multi-robot-multi-person cooperative control method, device and system
PCT/CN2021/114835 WO2022227352A1 (en) 2021-04-28 2021-08-26 Multi-robot multi-person collaboration control method, apparatus and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110468727.2A CN115248039B (en) 2021-04-28 2021-04-28 Multi-robot-multi-person cooperative control method, device and system

Publications (2)

Publication Number Publication Date
CN115248039A true CN115248039A (en) 2022-10-28
CN115248039B CN115248039B (en) 2024-08-02

Family

ID=83697463

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110468727.2A Active CN115248039B (en) 2021-04-28 2021-04-28 Multi-robot-multi-person cooperative control method, device and system

Country Status (2)

Country Link
CN (1) CN115248039B (en)
WO (1) WO2022227352A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116582488A (en) * 2023-07-14 2023-08-11 中创(深圳)物联网有限公司 Data transmission method, device, equipment and storage medium
CN117993871A (en) * 2024-04-07 2024-05-07 中建八局西南建设工程有限公司 Multi-machine collaborative engineering intelligent construction system
CN118219280A (en) * 2024-05-23 2024-06-21 北京大学 Multi-machine collaborative exploration system, method, robot and man-side

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116016511B (en) * 2022-12-26 2023-08-01 广东职业技术学院 Data transmission method for multiple robots

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3508938A2 (en) * 2018-01-05 2019-07-10 iRobot Corporation Mobile cleaning robot teaming and persistent mapping
CN110268225A (en) * 2019-05-09 2019-09-20 珊口(深圳)智能科技有限公司 The method of positioning device, server-side and mobile robot on map
CN111390904A (en) * 2020-03-16 2020-07-10 广州赛特智能科技有限公司 Method and device for realizing multi-robot cooperative operation with high operation efficiency
CN111504299A (en) * 2020-04-03 2020-08-07 小狗电器互联网科技(北京)股份有限公司 Map establishing method, map establishing device, readable medium and electronic equipment
CN111673755A (en) * 2020-06-24 2020-09-18 西北农林科技大学 Picking robot control system and method based on visual servo
CN112180769A (en) * 2020-09-29 2021-01-05 杭州海康威视数字技术股份有限公司 Cooperative movement method, electronic device, and computer storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105751219A (en) * 2016-05-07 2016-07-13 深圳市华科安测信息技术有限公司 Medical robot controlling system and method for registration and drug fetching
KR102631147B1 (en) * 2016-10-07 2024-01-31 엘지전자 주식회사 Robot for airport and method thereof
CN106737687A (en) * 2017-01-17 2017-05-31 暨南大学 Indoor Robot system based on visible ray location navigation
CN108687783A (en) * 2018-08-02 2018-10-23 合肥市徽马信息科技有限公司 One kind is led the way explanation guide to visitors robot of formula museum
CN109814556B (en) * 2019-01-22 2022-04-15 东南大学 Device and method for exploring unknown environment and map construction through cooperation of multiple robots
CN112207828A (en) * 2020-09-30 2021-01-12 广东唯仁医疗科技有限公司 Retail robot control method and system based on 5G network

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3508938A2 (en) * 2018-01-05 2019-07-10 iRobot Corporation Mobile cleaning robot teaming and persistent mapping
CN110268225A (en) * 2019-05-09 2019-09-20 珊口(深圳)智能科技有限公司 The method of positioning device, server-side and mobile robot on map
CN111390904A (en) * 2020-03-16 2020-07-10 广州赛特智能科技有限公司 Method and device for realizing multi-robot cooperative operation with high operation efficiency
CN111504299A (en) * 2020-04-03 2020-08-07 小狗电器互联网科技(北京)股份有限公司 Map establishing method, map establishing device, readable medium and electronic equipment
CN111673755A (en) * 2020-06-24 2020-09-18 西北农林科技大学 Picking robot control system and method based on visual servo
CN112180769A (en) * 2020-09-29 2021-01-05 杭州海康威视数字技术股份有限公司 Cooperative movement method, electronic device, and computer storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHANGYUAN YU 等: "Conceptual Design for Indoor Visible Light Communication and Positioning Cooperative Systems", 《2020 22ND INTERNATIONAL CONFERENCE ON TRANSPARENT OPTICAL NETWORKS (ICTON)》, 22 September 2020 (2020-09-22), pages 1 - 4 *
PEI Ling; LIU Donghui; QIAN Jiuchao: "A Survey of Indoor Positioning Technologies and Applications", Navigation Positioning and Timing, vol. 4, no. 03, 31 May 2017 (2017-05-31), pages 5 - 14 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116582488A (en) * 2023-07-14 2023-08-11 中创(深圳)物联网有限公司 Data transmission method, device, equipment and storage medium
CN116582488B (en) * 2023-07-14 2023-10-13 中创(深圳)物联网有限公司 Data transmission method, device, equipment and storage medium
CN117993871A (en) * 2024-04-07 2024-05-07 中建八局西南建设工程有限公司 Multi-machine collaborative engineering intelligent construction system
CN117993871B (en) * 2024-04-07 2024-07-09 中建八局西南建设工程有限公司 Multi-machine collaborative engineering intelligent construction system
CN118219280A (en) * 2024-05-23 2024-06-21 北京大学 Multi-machine collaborative exploration system, method, robot and man-side

Also Published As

Publication number Publication date
WO2022227352A1 (en) 2022-11-03
CN115248039B (en) 2024-08-02

Similar Documents

Publication Publication Date Title
CN115248039B (en) Multi-robot-multi-person cooperative control method, device and system
Rao et al. Real-time monitoring of construction sites: Sensors, methods, and applications
CN101661098B (en) Multi-robot automatic locating system for robot restaurant
US10771935B2 (en) Device locating using angle of arrival measurements
CN105554878B (en) Parking management method, vehicle-mounted Bluetooth label, base station and system
CN106405605B (en) A kind of indoor and outdoor seamless positioning method and positioning system of the robot based on ROS and GPS
US10062178B2 (en) Locating a portable device based on coded light
EP3048747B1 (en) Positioning method based on visible light source, mobile terminal and controller
JP5496096B2 (en) Wireless terminal positioning system, environmental measurement system and equipment management system
KR101906329B1 (en) Apparatus and method for indoor localization based on camera
US20140297090A1 (en) Autonomous Mobile Method and Autonomous Mobile Device
US20150198447A1 (en) Method and apparatus for indoor position tagging
US20160345129A1 (en) Positioning system for indoor and surrounding areas, positioning method and route-planning method thereof and mobile apparatus
TWI593988B (en) Indoor positioning system and method
WO2011144967A1 (en) Extended fingerprint generation
CN109975758A (en) Wi-Fi blue tooth integrated base station location system
US20120310529A1 (en) Method and system for exchanging data
JP6160036B2 (en) Mobile communication device and position information notification method
CN104200702B (en) System and method for looking up parking space
CN105929820B (en) A kind of intelligent robot localization method
CN106871889B (en) Control method capable of switching navigation modes
CN108495251B (en) Combined positioning method based on Bluetooth and LED optical positioning
Sohn et al. Localization system for mobile robot using wireless communication with IR landmark
CN108924735A (en) A kind of intelligent shoe guidance system and method
CN106154223B (en) Indoor navigation method and indoor navigation system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20240708

Address after: 41st Floor, Changfu Jinmao Building, Shihua Road, Futian Free Trade Zone, Fubao Street, Futian District, Shenzhen, Guangdong Province

Applicant after: Shenzhen Hong Kong Collaborative Innovation Institute of Hong Kong University of science and Technology (Futian Shenzhen)

Country or region after: China

Address before: Room 604, Ka Lok House, 720 Nathan Road, Mong Kok, Kowloon, Hong Kong, China

Applicant before: LIPHY COMMUNICATIONS LTD.

Country or region before: Hong-Kong

GR01 Patent grant
GR01 Patent grant