Disclosure of Invention
In view of this, embodiments of the present invention provide a method, an apparatus, and a system for multi-robot-multi-person cooperative control, so as to solve the problem that, when a cooperation request instruction is initiated, a robot cannot be arranged in time to respond to the instruction together with a worker using a user terminal and perform the cooperative work.
A first aspect of the embodiments of the present invention provides a multi-robot-multi-person cooperative control method, which is applied to a multi-robot-multi-person cooperative system; the system comprises N robots and M user terminals in communication connection with a server, where N ≥ 1 and M ≥ 1. With the server as the execution subject, the method comprises the following steps:
receiving positioning information; the positioning information comprises geographic position information and a first equipment identifier; each user terminal and each robot has a unique equipment identifier;
updating the positioning mark of the first equipment identifier in the shared map to the position indicated by the geographic position information, and sending the updated shared map to all the robots and user terminals; the shared map comprises the positioning marks of the N robots and the M user terminals; the shared map is a three-dimensional coordinate map;
receiving assistance request information; the assistance request information comprises an assistance request instruction and a second equipment identifier;
obtaining a positioning coordinate according to the positioning mark of the second equipment identifier in the shared map;
determining a robot and/or user terminal that meets a preset condition in the shared map as a target device;
generating a cooperative work instruction according to the positioning coordinate and the assistance request instruction;
and sending the cooperative work instruction to the target device, so that the robot corresponding to the target device and/or the user of the corresponding user terminal reaches the positioning coordinate to perform cooperative work.
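For illustration only, the overall server-side flow of the above steps can be sketched as follows. All names here (CooperativeControlServer, broadcast_shared_map, and so on) are hypothetical stand-ins rather than the claimed implementation, and the transport layer and the preset target-selection condition are deliberately left abstract:

```python
# Minimal sketch of the server-side flow; all names are hypothetical and
# not taken from the original disclosure.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Coordinate = Tuple[float, float, float]  # (x, y, z) in the shared 3-D map

@dataclass
class CooperativeControlServer:
    shared_map: Dict[str, Coordinate] = field(default_factory=dict)

    def on_positioning_info(self, device_id: str, position: Coordinate) -> None:
        """Update the positioning mark of the reporting device, then broadcast."""
        self.shared_map[device_id] = position
        self.broadcast_shared_map()

    def on_assistance_request(self, device_id: str, request: str) -> None:
        """Resolve the requester's coordinate and dispatch a work instruction."""
        coord = self.shared_map[device_id]        # positioning coordinate
        targets = self.select_targets(coord)      # devices meeting the preset condition
        instruction = {"goal": coord, "task": request}
        for target in targets:
            self.send(target, instruction)

    def select_targets(self, coord: Coordinate) -> List[str]:
        raise NotImplementedError                 # see the nearest-k sketch below

    def broadcast_shared_map(self) -> None: ...   # push map to all devices, omitted
    def send(self, target: str, message: dict) -> None: ...  # transport, omitted
```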
In an implementation example, before the receiving the positioning information, the method further includes:
the user terminal acquires first position information of the position where the user terminal is located and collects a first optical signal of the environment where the user terminal is located; the first position information is three-dimensional position information; the first optical signal is a visible light communication signal;
the user terminal acquires first site information mapped by the first optical signal from the server; the first site information comprises building floor information, room number information and regional function division information;
the user terminal obtains first geographical position information according to the first position information and the first site information;
and the user terminal generates the positioning information according to the first geographical position information and the equipment identifier of the user terminal, and sends the positioning information to the server.
In an implementation example, before the receiving the positioning information, the method further includes:
the robot acquires second position information of the position where the robot is located and collects a second optical signal of the environment where the robot is located; the second position information is three-dimensional position information; the second optical signal is a visible light communication signal;
the robot acquires second site information mapped by the second optical signal from the server; the second site information comprises building floor information, room number information and regional function division information;
the robot obtains second geographic position information according to the second position information and the second site information;
and the robot generates the positioning information according to the second geographic position information and the equipment identifier of the robot, and sends the positioning information to the server.
In an implementation example, before the receiving the positioning information, the method further includes:
if the robot collects a third optical signal, acquiring position coordinate information of the position where the robot is located; the position coordinate information comprises position coordinates and third site information; the third site information comprises building floor information, room number information and regional function division information;
the robot constructs a mapping relation between the coding information of the third optical signal and the position coordinate information;
and the robot sends the mapping relation to the server for storage.
In one implementation example, a plurality of visible light sources are arranged in a scene where the N robots and the M user terminals are located; each visible light source is provided with a unique coded identifier; and each visible light source generates a light signal containing coded identification information corresponding to each visible light source.
In one implementation example, the determining of a robot and/or user terminal that meets the preset condition in the shared map as a target device includes:
determining a preset number of robots and/or user terminals closest to the positioning coordinate in the shared map as the target devices.
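Continuing the hypothetical sketch above, one plausible reading of this nearest-device condition is a plain Euclidean nearest-k selection over the positioning marks; a minimal sketch:

```python
import math
from typing import Dict, Iterable, List, Tuple

Coordinate = Tuple[float, float, float]

def select_nearest(shared_map: Dict[str, Coordinate], coord: Coordinate,
                   k: int, exclude: Iterable[str] = ()) -> List[str]:
    """Return the k device identifiers whose positioning marks are closest to coord."""
    candidates = (d for d in shared_map if d not in exclude)
    return sorted(candidates, key=lambda d: math.dist(shared_map[d], coord))[:k]

# Example: pick the two devices nearest the request origin, excluding the requester.
marks = {"R1": (0, 0, 0), "R2": (5, 0, 0), "UT7": (1, 1, 0), "UT9": (9, 9, 3)}
assert select_nearest(marks, (0, 0, 0), k=2, exclude=("R1",)) == ["UT7", "R2"]
```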
In an implementation example, the acquiring, by the user terminal, of the first position information of the position where the user terminal is located and the collecting of the first optical signal of the environment where the user terminal is located include:
the user terminal acquires the first position information of its position through a simultaneous localization and mapping (SLAM) method;
the user terminal collects the first optical signal of its environment through a photodetector.
In one implementation example, the acquiring, by the robot, of the second position information of the position where the robot is located and the collecting of the second optical signal of the environment where the robot is located include:
the robot acquires the second position information of its position through a simultaneous localization and mapping (SLAM) method;
the robot collects the second optical signal of its environment through a photodetector.
In an implementation example, the acquiring of the position coordinate information of the position where the robot is located if the robot collects the third optical signal includes:
if the robot collects a third optical signal, receiving wireless signals of the environment to obtain a first wireless signal list, and acquiring position coordinate information of the position where the robot is located; the wireless signal list comprises the signal strength of each received wireless signal;
the robot determines the wireless signal with the strongest signal strength in the first wireless signal list as a first target signal;
the constructing, by the robot, of the mapping relation between the coding information of the third optical signal and the position coordinate information includes:
and the robot constructs a mapping relation among the coding information of the third optical signal, the position coordinate information and the first target signal.
In an implementation example, the obtaining, by the user terminal, of the first position information of the position where the user terminal is located and the collecting of the first optical signal of the environment where the user terminal is located further include:
the user terminal acquires first position information of the position, acquires a first optical signal of the environment and receives a wireless signal of the environment to obtain a second wireless signal list;
the user terminal obtains first site information mapped by the first optical signal from the server, and the method comprises the following steps:
the user terminal determines the wireless signal with the strongest signal strength in the second wireless signal list as a second target signal;
and the user terminal acquires the site information mapped by the first optical signal and the second target signal from the server.
In one implementation example, the acquiring, by the robot, of the second position information of the position where the robot is located and the collecting of the second optical signal of the environment where the robot is located further include:
the robot acquires second position information of the position, acquires a second optical signal of the environment and receives a wireless signal of the environment to obtain a third wireless signal list;
the robot acquires second site information mapped by the second optical signal from the server, and the method comprises the following steps:
the robot determines the wireless signal with the strongest signal strength in the third wireless signal list as a third target signal;
and the robot acquires the site information mapped by the second optical signal and the third target signal from the server.
In one implementation example, before updating the positioning mark of the first device identifier in the shared map to the position where the geographic position information is located, and sending the updated shared map to all the robots and the user terminals, the method further includes:
controlling a plurality of robots to map a target scene and to perform laser scanning on the target scene, so as to obtain map data and scanning data;
constructing a building information model (BIM) of the target scene according to the map data and the scanning data;
acquiring parameter information of the target scene; the parameter information comprises building floor information, room number information and regional function division information;
and correspondingly marking the parameter information in the building information model to obtain the shared map.
In an implementation example, after the parameter information is correspondingly marked in the building information model to obtain the shared map, the method further includes:
controlling a plurality of robots to perform laser scanning on the target scene at regular intervals to obtain scene scanning data;
and updating the shared map according to the scene scanning data.
A second aspect of an embodiment of the present invention provides a multi-robot-multi-person cooperative control apparatus, including:
the positioning information receiving module is used for receiving positioning information; the positioning information comprises geographic position information and a first equipment identifier; each user terminal and each robot has a unique equipment identifier;
the positioning updating module is used for updating the positioning mark of the first equipment identifier in the shared map to the position indicated by the geographic position information and sending the updated shared map to all the robots and user terminals; the shared map comprises the positioning marks of N robots and M user terminals; the shared map is a three-dimensional coordinate map; N ≥ 1; M ≥ 1;
an assistance request information receiving module for receiving assistance request information; the assistance request information comprises an assistance request instruction and a second equipment identifier;
the positioning coordinate acquisition module is used for acquiring a positioning coordinate according to the positioning mark of the second equipment identifier in the shared map;
the target device determining module is used for determining the robot and/or the user terminal meeting the preset conditions in the shared map as a target device;
a cooperative instruction generating module, configured to generate a cooperative work instruction according to the positioning coordinate and the assistance request instruction;
and the instruction sending module is used for sending the cooperative work instruction to the target device so as to enable the robot corresponding to the target device and/or a user of the user terminal to reach the positioning coordinate for cooperative work.
A third aspect of the embodiments of the present invention provides a multi-robot-multi-person collaboration system, where the system includes N robots and M user terminals in communication connection with a server, N ≥ 1 and M ≥ 1; wherein,
the multi-robot-multi-person cooperation system realizes the steps of the multi-robot-multi-person cooperation control method according to the first aspect.
The embodiments of the invention provide a multi-robot-multi-person cooperative control method, apparatus and system, which are applied to a multi-robot-multi-person cooperative system; the system comprises N robots and M user terminals in communication connection with a server, where N ≥ 1 and M ≥ 1. The server receives positioning information; the positioning information comprises geographic position information and a first equipment identifier; each user terminal and each robot has a unique equipment identifier. By receiving the positioning information sent periodically by each user terminal and each robot, the position changes of each user terminal and each robot in the cooperative system can be tracked in real time. The server updates the positioning mark of the first equipment identifier in the shared map to the position indicated by the geographic position information, and sends the updated shared map to all the robots and user terminals; that is, the server updates the positioning mark positions of the user terminals and robots on the shared map according to the received positioning information, realizing real-time tracking of the positions of the robots and user terminals in the shared map. The shared map includes the positioning marks of the N robots and the M user terminals. The server then receives assistance request information; the assistance request information comprises an assistance request instruction and a second equipment identifier; and a positioning coordinate is obtained according to the positioning mark of the second equipment identifier in the shared map. After receiving the assistance request information sent by any user terminal or robot, the server can quickly find the position from which the request was sent on the shared map to obtain the positioning coordinate, which improves the response rate of the system. The server determines a robot and/or user terminal that meets the preset condition in the shared map as a target device, generates a cooperative work instruction according to the positioning coordinate and the assistance request instruction, and sends the cooperative work instruction to the target device, so that the robot corresponding to the target device and/or the user of the corresponding user terminal reaches the positioning coordinate for cooperative work. In this way, when a cooperation request instruction is initiated, a robot and a worker using a user terminal are arranged in time to respond to the instruction together and carry out the cooperative operation.
Detailed Description
In order to make the technical solutions of the present invention better understood by those skilled in the art, the technical solutions in the embodiments of the present invention will be clearly described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "comprises" and "comprising," and any variations thereof, in the description and claims of this invention and the above-described drawings are intended to cover non-exclusive inclusions. For example, a process, method, or system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements but may alternatively include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. Furthermore, the terms "first," "second," and "third," etc. are used to distinguish between different objects and are not used to describe a particular order.
The robots in a conventional multi-robot assistance system usually move point to point along fixed routes, and the server cannot obtain the location information of the user terminals connected to the system. When a user terminal initiates an assistance request instruction to the system, the geographic position of the device initiating the instruction has to be acquired on the fly, which delays the system response; moreover, workers using other user terminals near the place where the cooperation request instruction was initiated cannot be informed in time to go to the site for cooperative work. To solve this problem, in this embodiment the server updates the positioning mark positions of each user terminal and each robot on the shared map according to the received positioning information sent by each user terminal and each robot, realizing real-time tracking of their positions in the shared map; after receiving assistance request information sent by any user terminal or robot, the server can quickly find the position from which the request was sent on the shared map to obtain the positioning coordinate, and arrange in time for a robot in the shared map to respond to the instruction together with a worker using a user terminal and perform the cooperative operation.
Example one
Fig. 1 is a schematic flow chart of a multi-robot-multi-person cooperative control method according to an embodiment of the present invention. Specifically, the embodiment is applicable to an application scenario in which a multi-robot-multi-person cooperation system performs cooperative work; the method is applied to the multi-robot-multi-person cooperation system, which comprises N robots and M user terminals in communication connection with a server, where N ≥ 1 and M ≥ 1. The method is executed with the server as the execution subject, where the server may be a network cloud platform built from a plurality of servers; in the embodiments of the present application, a server is used as the execution subject for explanation. The method specifically includes the following steps:
s110, receiving positioning information; the positioning information comprises geographical position information and a first equipment identifier; each user terminal and each robot have unique equipment identification.
As shown in fig. 2, in a Multi-Robot Multi-Human (MRMH) collaboration system, in addition to at least one robot 10 connected to a server 20, at least one user terminal 30 is also connected to the server 20, realizing human-machine shared control, so that a worker can propose a control strategy to the server through the user terminal. The server 20 in the system is the main control server and is used for controlling all robots in the system, acquiring the current positions, environmental information and processing data of the robots, formulating strategies, and the like, and meanwhile acquiring the current position information, processing data, and the like of the user terminals in communication connection with it; the server may also be communicatively coupled to a cloud 40 that stores data. In detail, the server may be a Coordination, Control, and Collaboration Server (C3S); as a centralized gateway that receives the locations of all mobile and fixed terminals (including robots and user terminals), the server maintains the location of each user terminal and robot. The server can send instructions to the robots to perform various operations, such as moving, turning, reaching a particular location, maneuvering the robot manipulator, collecting sensor data, and the like. In addition, the C3S may provide a User Interface (UI) through which an operator controls a robot from the user terminal and performs cooperative work with the operators of other user terminals. Optionally, if the system includes a plurality of robots communicatively connected to the server, the robots may be mobile robots of the same type or of different types with different functions; and the user terminal may be an intelligent terminal such as a mobile intelligent terminal, a tablet, or a smart watch.
In order to obtain the positions of the N robots and M user terminals in communication connection with the server, all the robots and all the user terminals in the system send positioning information to the server. The positioning information received by the server may be generated by any robot or any user terminal in the system; the positioning information comprises the geographic position information of the device sending it and a first equipment identifier, and each user terminal and each robot has a unique equipment identifier, so that on receiving positioning information the server determines, from the first equipment identifier, the device to which the geographic position information belongs. Specifically, because the robots and the users carrying the user terminals move while tasks are executed, in order to track the positions of the N robots and M user terminals in real time, each robot and each user terminal in the system may also send positioning information to the server periodically at a preset time interval, so that the server continuously receives the positioning information of each robot and each user terminal. In order to acquire the geographic position information of the position of each robot and each user terminal, the robots and the user terminals are respectively provided with a positioning navigation device and the various sensors required for positioning navigation (such as an odometer, an inertial sensor and the like). Optionally, the preset time interval may be 0.1-0.5 seconds.
In one implementation example, the positioning information received by the server is generated and sent by any user terminal in the multi-robot-multi-person cooperative system. In order to track the position of each user terminal in the system, each user terminal can locate its own position at the preset time interval, refine the position information obtained by positioning with the collected visible light communication signal to obtain more accurate three-dimensional position information, generate positioning information from it, and send the positioning information to the server, completing one position report. Therefore, before the server receives the positioning information, the method further includes the process in which each user terminal generates the positioning information. Specifically, taking any user terminal in the multi-robot-multi-person cooperative system as an example, the process of generating the positioning information by the user terminal includes steps 11 to 14:
Step 11, the user terminal acquires first position information of the position where the user terminal is located, and collects a first optical signal of the environment where the user terminal is located; the first position information is three-dimensional position information; the first optical signal is a visible light communication signal;
the working scene of the multi-robot-multi-person cooperative system is mostly a complex scene, for example, the interior of a multi-floor high building or an area provided with large equipment, etc., wherein the terrain in each floor of the interior of the multi-floor high building is the same, and the three-dimensional positioning information acquired by the robot working in the building or the user terminal carried by the staff cannot reflect the site information (for example, the floor number, the room number or the area function, etc.), so that the positioning is unclear. In order to solve the technical problem, a plurality of electronic marks carrying code identifiers can be reasonably set in a working scene in advance, the code identifiers corresponding to the electronic marks are uniquely mapped with the position information (such as longitude, latitude, building name, floor number, floor direction, room number, floor position, regional function and other information) of the electronic marks, and the mapping relation list is stored in advance in a server or a cloud end connected with the server; when the robot or the user terminal moves to the position where the electronic marker is located, the electronic marker is scanned and decoded to obtain the coded identifier, the site information corresponding to the scanned electronic marker can be obtained through the server, the obtained site information is combined with the obtained three-dimensional positioning information, more accurate three-dimensional information can be obtained, and the positioning accuracy is improved. Optionally, a plurality of robots may be controlled by a preset intelligent installation program to complete installation of a plurality of electronic tags (e.g., LED lamps or beacons) carrying coded identifiers in a working scene. Alternatively, the electronic mark may be in the form of a QR (Quick Response) code or a Visible Light source (e.g., LED lamp or beacon) that sends out a coded mark as a Light Communication (VLC) signal. As shown in fig. 3, in the present embodiment, in a scene where N robots 301 and M user terminals 302 are located in the multi-robot-multi-person cooperative system, a plurality of visible light sources 303 are provided; each visible light source has a unique coded identification (IDcode); and each visible light source generates a light signal containing coded identification information corresponding to each visible light source.
Therefore, in order to obtain more accurate positioning information, the user terminal obtains the first position information of the current position through the positioning device, and also collects the first optical signal of the environment through the signal collecting device. Since most of the position information acquired by the positioning device is three-dimensional positioning information, the first position information acquired by the user terminal is three-dimensional position information. The first light signal generated by the visible light source is a Visible Light Communication (VLC) signal.
In an implementation example, a simultaneous localization and mapping (SLAM) device and a photodetector may be disposed in the user terminal, so that the user terminal obtains the first position information of its position through the SLAM method and collects the first optical signal of its environment through the photodetector. Specifically, taking any user terminal in the multi-robot-multi-person cooperative system as an example, the specific process of acquiring the first position information of the position where the user terminal is located and collecting the first optical signal of the environment where the user terminal is located includes: the user terminal acquires the first position information of its position through the simultaneous localization and mapping method; and the user terminal collects the first optical signal of its environment through the photodetector.
In detail, the SLAM device provided in the user terminal includes a main control chip and sensing measurement devices, such as an odometer and an inertial sensor, connected to the main control chip. When the user terminal acquires the first position information of its position through the simultaneous localization and mapping method, the main control chip of the SLAM device acquires the data measured by each sensing measurement device and performs the SLAM computation, thereby building a map of the environment where the terminal is located; that is, the acquired first position information of the position comprises the positioning coordinates and the built map. Optionally, the built map may be an occupancy grid map.
Because the VLC signal generated by a visible light source is an optical signal, the user terminal collects the first optical signal of its environment through the photodetector.
Step 12, the user terminal obtains first site information mapped by the first optical signal from the server; the first site information comprises building floor information, room number information and regional function division information;
In the working scene, the coded identifier carried by each visible light source is uniquely mapped to the position information of that light source (such as longitude, latitude, building name, floor number, floor direction, room number, position within the floor, regional function, and the like), and the mapping relation list is stored in advance in the server or a cloud connected to the server. After the user terminal collects the first optical signal of its environment, the first site information mapped by the first optical signal can be queried from the mapping relation list stored in the server. The first site information includes information such as latitude, building name, floor number, floor direction, room number, and regional function. In detail, the regional function may be divided according to users (e.g., a staff-only area or a public area), or divided into functional areas according to the functions implemented (e.g., toilets, offices, etc.).
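Conceptually, the mapping relation list behaves as a key-value lookup from the decoded identifier to the stored site information. A toy illustration, in which every field name and value is an assumption made for demonstration:

```python
# Toy mapping relation list: decoded VLC identifier -> site information.
# All keys, field names and values here are illustrative assumptions.
SITE_MAP = {
    "VLC-0017": {
        "building": "Tower A", "floor": 3, "room": "305",
        "zone": "staff-only",          # regional function division
        "lon": 113.93, "lat": 22.53,
    },
}

def lookup_site(vlc_id: str) -> dict:
    """Query the site information mapped by a decoded VLC identifier."""
    return SITE_MAP[vlc_id]

assert lookup_site("VLC-0017")["floor"] == 3
```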
Step 13, the user terminal obtains first geographical position information according to the first position information and the first site information;
and the user terminal performs information completion and position correction on the first position information according to the obtained first site information to obtain first geographical position information containing the site information, namely three-dimensional coordinate information.
And step 14, the user terminal generates the positioning information according to the first geographical position information and the equipment identifier of the user terminal, and sends the positioning information to the server.
And the user terminal generates positioning information according to the obtained first geographical position information and the equipment identifier of the local machine, and sends the positioning information to the server to finish the reporting of the position information once.
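Putting steps 11 to 14 together, the terminal-side reporting cycle might look like the following sketch; every function here is a hypothetical stub standing in for hardware and network calls the disclosure does not specify:

```python
import time

REPORT_INTERVAL = 0.2  # seconds; the disclosure suggests 0.1-0.5 s

def slam_localize() -> dict:                 # step 11: 3-D position via SLAM (stub)
    return {"x": 0.0, "y": 0.0, "z": 0.0}

def read_vlc_id() -> str:                    # step 11: decode collected optical signal (stub)
    return "VLC-0017"

def query_server_site(vlc_id: str) -> dict:  # step 12: mapped site information (stub)
    return {"building": "Tower A", "floor": 3, "room": "305"}

def send_to_server(message: dict) -> None:   # step 14: report positioning info (stub)
    print(message)

def report_position(terminal_id: str) -> None:
    """One reporting cycle: steps 11-14 for a user terminal."""
    position = slam_localize()
    site = query_server_site(read_vlc_id())
    geo = {**position, **site}               # step 13: completion and correction
    send_to_server({"device_id": terminal_id, "geo": geo})

if __name__ == "__main__":
    for _ in range(3):                       # three reporting cycles, for demonstration
        report_position("UT-01")
        time.sleep(REPORT_INTERVAL)
```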
In one implementation example, the positioning information received by the server is generated and sent by any robot in the multi-robot-multi-person cooperative system. In order to track the position of each robot in the system, each robot can locate its own position at the preset time interval, refine the position information obtained by positioning with the collected visible light communication signal to obtain more accurate three-dimensional position information, generate positioning information from it, and send the positioning information to the server, completing one position report. Therefore, before the server receives the positioning information, the method further includes the process in which each robot generates the positioning information. Specifically, taking any robot in the multi-robot-multi-person cooperative system as an example, the process of generating the positioning information by the robot includes steps 21 to 24:
Step 21, the robot acquires second position information of the position where the robot is located, and collects a second optical signal of the environment where the robot is located; the second position information is three-dimensional position information; the second optical signal is a visible light communication signal;
A plurality of electronic markers carrying coded identifiers are reasonably arranged in the working scene of the multi-robot-multi-person cooperative system; the coded identifier of each electronic marker is uniquely mapped to the position information of that marker (such as longitude, latitude, building name, floor number, floor direction, room number, position within the floor, regional function, and other information), and the mapping relation list is stored in advance in the server or a cloud connected to the server. When a robot or user terminal moves to the position of an electronic marker, it scans and decodes the marker to obtain the coded identifier, obtains from the server the site information corresponding to the scanned marker, and combines the obtained site information with the acquired three-dimensional positioning information, yielding more accurate three-dimensional information and improving the positioning accuracy. Optionally, an electronic marker may take the form of a QR (Quick Response) code, or of a visible light source (such as an LED lamp or beacon) that sends out its coded identifier as a Visible Light Communication (VLC) signal. As shown in fig. 3, in the present embodiment, a plurality of visible light sources 303 are provided in the scene where the N robots 301 and M user terminals 302 of the multi-robot-multi-person collaboration system are located; each visible light source has a unique coded identifier (ID code); and each visible light source generates an optical signal containing its corresponding coded identification information.
Therefore, in order to obtain more accurate positioning information, the robot acquires second position information of the current position through the positioning device and acquires a second optical signal of the environment through the signal acquisition device. Since most of the position information acquired by the positioning device is three-dimensional position information, the second position information acquired by the robot is three-dimensional position information. The second light signal generated by the visible light source is a Visible Light Communication (VLC) signal.
In an implementation example, a simultaneous localization and mapping (SLAM) device and a photodetector may be disposed in the robot, so that the robot acquires the second position information of its position through the SLAM device and collects the second optical signal of its environment through the photodetector. Specifically, taking any robot in the multi-robot-multi-person cooperative system as an example, the specific process of acquiring the second position information of the position where the robot is located and collecting the second optical signal of the environment where the robot is located includes: the robot acquires the second position information of its position through the simultaneous localization and mapping method; and the robot collects the second optical signal of its environment through the photodetector.
In detail, the SLAM device provided in the robot includes a main control chip and sensing measurement devices, such as an odometer and an inertial sensor, connected to the main control chip. When the robot acquires the second position information of its position through the simultaneous localization and mapping method, the main control chip of the SLAM device acquires the data measured by each sensing measurement device and performs the SLAM computation, thereby building a map of the environment where the robot is located; that is, the acquired second position information of the position comprises the positioning coordinates and the built map. Optionally, the built map may be an occupancy grid map.
Because the VLC signal generated by a visible light source is an optical signal, the robot collects the second optical signal of its environment through the photodetector.
Step 22, the robot acquires second site information mapped by the second optical signal from the server; the second site information comprises building floor information, room number information and regional function division information;
In the working scene, the coded identifier carried by each visible light source is uniquely mapped to the position information of that light source (such as longitude, latitude, building name, floor number, floor direction, room number, position within the floor, regional function, and the like), and the mapping relation list is stored in advance in the server or a cloud connected to the server. After the robot collects the second optical signal of its environment, the second site information mapped by the second optical signal can be queried from the mapping relation list stored in the server. The second site information includes information such as latitude, building name, floor number, floor direction, room number, and regional function. In detail, the regional function may be divided according to users (e.g., a staff-only area or a public area), or divided into functional areas according to the functions implemented (e.g., toilets, offices, etc.).
Step 23, the robot obtains second geographic position information according to the second position information and the second site information;
and the robot performs information completion and position correction on the second position information according to the obtained second site information to obtain second geographic position information containing the site information, namely three-dimensional coordinate information.
And 24, the robot generates the positioning information according to the second geographical position information and the equipment identification of the robot, and sends the positioning information to the server.
And the robot generates positioning information according to the obtained second geographic position information and the equipment identifier of the robot, and sends the positioning information to the server to finish the reporting of the position information once.
In one implementation example, the mapping relationship between the different optical (VLC) signals and the site information stored in the server can be extracted in advance from a construction drawing of the working scene of the multi-robot-multi-person cooperative system and stored in the server or the cloud. For example, if the operating scene of the system is an entire building, the site information may correspond to the floor information, room number information, and regional function information of the building. If the optical signals are all sent by visible light sources (such as LED lamps, beacons, etc.) installed in the building, the construction drawing of the building contains the detailed position information of each visible light source, including the floor number and the position coordinates within the floor. Therefore, the mapping relation between the different optical signals and the site information can be obtained by extracting, from the construction drawing of the operating scene, the position information of the visible light source generating each optical signal, establishing the mapping relation between the optical signal generated by each visible light source and the position information of that light source, generating a mapping relation list, and storing it in the server or in the cloud. Specifically, the VLC coded identifier (ID) of each visible light source is stored in the cloud together with the physical location information it maps to, including longitude, latitude, building, floor, room number, and regional function information in the map of the building.
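A sketch of building such a mapping relation list offline from drawing records; the record layout and values are assumptions for illustration, not the patent's storage format:

```python
# Build the VLC-ID -> physical-location mapping list from construction-drawing
# records. The record layout below is an assumption for illustration only.
drawing_records = [
    # (vlc_id, longitude, latitude, building, floor, room, zone)
    ("VLC-0001", 113.93, 22.53, "Tower A", 1, "101", "public"),
    ("VLC-0002", 113.93, 22.53, "Tower A", 1, "102", "office"),
]

mapping_list = {
    vlc_id: {"lon": lon, "lat": lat, "building": b, "floor": f,
             "room": room, "zone": zone}
    for vlc_id, lon, lat, b, f, room, zone in drawing_records
}
# mapping_list would then be stored in the server or cloud for later queries.
```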
In another implementation example, when the electronic markers in the working scene of the system are installed for the first time and no drawing data record exists, the simultaneous localization and mapping function of the robots can be used to complete the building of a new map and the marking of the positions of the electronic markers in it. That is, the mapping relation between the different optical signals and the site information stored in the server can be constructed by the robots in the multi-robot-multi-person cooperation system. Specifically, before any robot or user terminal in the system acquires the site information mapped by an optical signal from the server, the process of constructing the mapping relation between the optical signal and the site information includes steps 31 to 33:
Step 31, if the robot collects a third optical signal, acquiring position coordinate information of the position where the robot is located; the position coordinate information comprises position coordinates and third site information; the third site information comprises building floor information, room number information and regional function division information;
When the electronic markers in the working scene of the system are installed for the first time and no drawing data is recorded, recording them manually would consume a great deal of engineering time at a high cost. To solve this technical problem, the robots in the system can patrol on the new map while detecting optical signals; when a robot collects a third optical signal, it acquires the position coordinate information of its current position, so that the acquired position coordinate information is the position of the electronic marker generating the third optical signal. Specifically, the SLAM device arranged in the robot comprises a main control chip and sensing measurement devices, such as an odometer and an inertial sensor, connected to the main control chip. If the third optical signal is collected, when the robot acquires the position coordinate information of its current position through the simultaneous localization and mapping method, the main control chip of the SLAM device acquires the data measured by each sensing measurement device and performs the SLAM computation, thereby building a map of the environment where the robot is located; that is, the acquired position coordinate information includes the position coordinates and the newly constructed map. Meanwhile, the robot can also acquire the building floor information, room number information, and regional function division information of its position according to the design drawing of the scene, obtaining the third site information, so that the position coordinate information further comprises the third site information.
Step 32, the robot constructs a mapping relation between the coding information of the third optical signal and the position coordinate information;
and the robot decodes the acquired third optical signal to obtain a code identification number corresponding to the electronic mark generating the third optical signal, and constructs a mapping relation between the code identification number of the acquired third optical signal and the obtained position coordinate information. Specifically, the presentation form of the new Map is that the established new Map can be a Grid Map (Occupancy Grid Map) containing detected electronic tags; different pixel values in the grid map represent the area with the obstacle (pixel value is 1), the free area (pixel value is 0), and the emark area (pixel value is-1). In detail, the occupied grid map with the electronic marker is in a pgm picture format, and records the position of a starting point of the map, the direction of the map, the resolution of pixels and physical distances of an image, the positions of the electronic markers, a code Identification (ID) corresponding to each electronic marker position, and the position of an obstacle.
Step 33, the robot sends the mapping relation to the server for storage.
After the robot constructs the mapping relation between the coding information of the third optical signal and the position coordinate information, the mapping relation is sent to the server for storage. For example, fig. 4 is a schematic diagram of one of the robots in the system detecting optical signals to construct a new map. The robot is placed at a starting point of known position and orientation on the digital map. Optionally, the starting point may be calibrated using an electronic marker (e.g., a VLC signal). When the robot starts to move, its position is estimated from the starting point by a SLAM device based on sensors (IMU/LiDAR/Time-of-Flight). When the robot passes under an electronic marker (such as a VLC lamp), its receiver detects the third optical signal; the collected signal is decoded to obtain the ID of the lamp generating it, and the robot's current position coordinate information is mapped onto the received VLC lamp ID. When the robot has covered all areas, the position of each lamp in the area has been mapped to its corresponding lamp ID. This mapping of lamp IDs to position and orientation is stored in a database for use by the users of the user terminals and for navigation by the robots.
In another embodiment, to avoid the complicated operation and long response time caused by overly complex coding of the optical signals, an optical signal may be combined with a wireless signal so that optical signal codes can be reused. The mapping relationship may thus be between the optical signal plus the wireless signal on one side and the site information on the other, and the method for constructing the mapping relation between the optical signal, the wireless signal, and the site information specifically includes steps 41 to 44:
specifically, if the working scene of the system is an indoor environment with a huge space, a large number of VLC lamps need to be arranged in the indoor environment to meet the positioning requirement. To ensure that the code identification (ID code) of each VLC light in a scene is not repeated, the VLC signal code length may be long, increasing the computational complexity and response time of the mobile device. To address this issue, the optical signal may also be combined with a wireless signal to enable the optical signal encoding to be reused. Specifically, a plurality of wireless signals are arranged in an indoor scene, so that the coverage area of each wireless signal can divide the scene into different areas; the limited VLC light code identification can be reused within the coverage range of each wireless signal. Optionally, the wireless signal may be a bluetooth or WIFI signal, etc. Fig. 5 is a schematic diagram illustrating the combination of optical signals and wireless signals. When the wireless signals are bluetooth signals, several bluetooth signals (e.g., bluetooth signals a-I in the figure) are arranged in an indoor scene so that the coverage area of each wireless signal can divide the scene into different areas. As shown, a limited VLC light code identification ID (e.g., codes 1-9) can be reused within the coverage area of each wireless signal. VLC IDs are distributed in a region in such a way that they are far apart from the same twin ID in a neighboring region. This particular arrangement reduces the probability of false triggers due to unwanted changes in the bluetooth signal RSS.
Step 41, if the robot collects a third optical signal, receiving a wireless signal of the environment to obtain a first wireless signal list, and acquiring position coordinate information of the position; the wireless signal list comprises the signal strength of each received wireless signal;
Because the VLC lamps in different regions lie within different wireless signal coverage ranges, when the robot collects the third optical signal and acquires the position coordinate information of its current position, it also needs to receive the wireless signals of its environment to obtain the first wireless signal list.
Step 42, the robot determines the wireless signal with the strongest signal strength in the first wireless signal list as a first target signal;
since the position of the third optical signal is the coverage area of the corresponding wireless signal, the wireless signal corresponding to the third optical signal should be the wireless signal with the strongest signal intensity in the first wireless signal list. The robot determines the wireless signal with the strongest signal intensity in the first wireless signal list as a first target signal, and therefore the wireless signal corresponding to the area where the third optical signal is located is found.
Step 43, the robot constructs a mapping relation between the coded information of the third optical signal, the position coordinate information and the first target signal;
and the robot constructs a mapping relation between the coded information of the third light signal and the acquired position coordinate information and a first target signal corresponding to the region where the VLC lamp generating the third light signal is located.
And step 44, the robot sends the mapping relation to the server for storage.
The robot sends the mapping to the server for storage, so that information of VLC Identifier (ID) of each lamp and its radio frequency (bluetooth/WiFi) MAC address is stored together with its physical location information in the server or cloud according to the map of the building (longitude, latitude, building, floor, room number, and regional function information, etc.).
As can be seen from the foregoing embodiment, the mapping relationship may be between the optical signal plus the wireless signal and the site information; taking any user terminal in the multi-robot-multi-person cooperative system as an example, the process of generating the positioning information by the user terminal then further includes steps 51 to 55:
Step 51, the user terminal acquires first position information of the position, collects a first optical signal of the environment and receives wireless signals of the environment to obtain a second wireless signal list;
Because the VLC lamps in different regions lie within different wireless signal coverage ranges, when the user terminal collects the first optical signal and acquires the first position information of its current position, it also receives the wireless signals of its environment to obtain the second wireless signal list.
Step 52, the user terminal determines the wireless signal with the strongest signal strength in the second wireless signal list as a second target signal;
Since the position of the first optical signal lies within the coverage area of its corresponding wireless signal, the wireless signal corresponding to the first optical signal should be the one with the strongest signal strength in the second wireless signal list. The user terminal therefore determines the wireless signal with the strongest signal strength in the second wireless signal list as the second target signal, thereby finding the wireless signal corresponding to the region where the first optical signal is located.
Step 53, the user terminal obtains the site information mapped by the first optical signal and the second target signal from the server;
In the working scene, the coded identifier carried by each visible light source, together with the wireless signal of the region where that light source is located, is uniquely mapped to the position information of the light source (such as longitude, latitude, building name, floor number, floor direction, room number, position within the floor, regional function, and the like), and the mapping relation list is stored in advance in the server or a cloud connected to the server. After the user terminal collects the first optical signal and determines the second target signal, the site information mapped by the first optical signal and the second target signal can be queried from the mapping relation list stored in the server. This site information corresponds to the first site information and includes latitude, building name, floor number, floor direction, room number, regional function, and the like.
Step 54, the user terminal obtains first geographical location information according to the first location information and the first site information;
and the user terminal performs information completion and position correction on the first position information according to the obtained first site information to obtain first geographical position information containing the site information, namely three-dimensional coordinate information.
And step 55, the user terminal generates the positioning information according to the first geographical position information and the equipment identifier of the user terminal, and sends the positioning information to the server.
And the user terminal generates positioning information according to the obtained first geographical position information and the equipment identifier of the local machine, and sends the positioning information to the server to finish the reporting of the position information once.
As can be seen from the foregoing embodiment, the mapping relationship between the optical signal and the site information may be a mapping relationship between the optical signal and the site information as well as between the optical signal and the wireless signal, and taking any one robot in a multi-robot-multi-person cooperative system as an example, the process of generating the positioning information by the robot further includes steps 61 to 65:
Step 61, the robot acquires second position information of the position, collects a second optical signal of the environment and receives wireless signals of the environment to obtain a third wireless signal list;
Because the VLC lamps in different regions lie within different wireless signal coverage ranges, when the robot collects the second optical signal and acquires the second position information of its current position, it also receives the wireless signals of its environment to obtain the third wireless signal list.
Step 62, the robot determines the wireless signal with the strongest signal strength in the third wireless signal list as a third target signal;
Since the position of the second optical signal lies within the coverage area of its corresponding wireless signal, the wireless signal corresponding to the second optical signal should be the one with the strongest signal strength in the third wireless signal list. The robot therefore determines the wireless signal with the strongest signal strength in the third wireless signal list as the third target signal, thereby finding the wireless signal corresponding to the region where the second optical signal is located.
Step 63, the robot acquires the site information mapped by the second optical signal and the third target signal from the server;
In the working scene, the coded identifier carried by each visible light source, together with the wireless signal of the region where that light source is located, is uniquely mapped to the position information of the light source (such as longitude, latitude, building name, floor number, floor direction, room number, position within the floor, regional function, and the like), and the mapping relation list is stored in advance in the server or a cloud connected to the server. After the robot collects the second optical signal and determines the third target signal, the site information mapped by the second optical signal and the third target signal can be queried from the mapping relation list stored in the server. This site information corresponds to the second site information and includes latitude, building name, floor number, floor direction, room number, regional function, and the like.
Step 64, the robot obtains second geographic position information according to the second position information and the second site information;
The robot performs information completion and position correction on the second position information according to the obtained second site information, so as to obtain second geographic position information, namely three-dimensional coordinate information that contains the site information.
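The completion and correction of step 64 might look like the following sketch; the dictionary layout and the fixed storey height are assumptions made only for illustration.

```python
def build_geo_position(position_xyz, site_info, storey_height_m=4.0):
    """Complete and correct raw 3-D coordinates with the queried site information."""
    x, y, _z = position_xyz
    # Correction: replace the noisy height estimate with the floor elevation
    # implied by the mapped floor number (the storey height is an assumed constant).
    z = (site_info["floor"] - 1) * storey_height_m
    # Completion: attach the site fields so the result is full geographic position information.
    return {**site_info, "xyz": (x, y, z)}
```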
Step 65, the robot generates the positioning information according to the second geographic position information and its own device identifier, and sends the positioning information to the server.
The robot generates the positioning information from the obtained second geographic position information and its own device identifier, and sends it to the server, thereby completing one report of its position information.
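Step 65 reduces to assembling a payload and posting it to the server; the endpoint URL and JSON schema below are hypothetical, chosen only to make the sketch concrete.

```python
import json
import urllib.request

SERVER_URL = "http://server.local/api/positioning"  # hypothetical endpoint

def report_position(device_id: str, geo_position: dict) -> None:
    """Generate the positioning information and send it to the server once."""
    payload = json.dumps({"device_id": device_id, "geo": geo_position}).encode("utf-8")
    req = urllib.request.Request(SERVER_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)
```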
S120, updating the positioning mark of the first equipment identifier in the shared map to the position of the geographical position information, and sending the updated shared map to all the robots and the user terminal; the shared map comprises positioning marks of N robots and M user terminals; the shared map is a three-dimensional coordinate map.
Specifically, the shared map in the server is a three-dimensional electronic map containing three-dimensional coordinate information and site information; it can be generated from the parameters of the construction drawing of the operating scene of the multi-robot-multi-person cooperation system together with the maps obtained by the robots' mapping. The positioning information received by the server may be generated by any robot or any user terminal in the system. The server updates the positioning mark of each user terminal and each robot on the shared map according to the positioning information they send, thereby tracking the position of every robot and user terminal in the shared map in real time. Each updated shared map is sent to all robots and all user terminals in the system, so that every device in the system knows the positions of all the others, realizing position sharing between the robots and the user terminals. Further, the positions of the plurality of robots and the plurality of user terminals can be displayed simultaneously on the map of a user terminal or of a robot.
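The server side of S120 might be sketched as below, assuming one positioning mark per device identifier and a hypothetical send_map transport method on each connected device.

```python
class SharedMap:
    """Three-dimensional shared map holding one positioning mark per device."""
    def __init__(self):
        self.marks = {}  # device_id -> geographic position information

    def update_mark(self, device_id: str, geo_position: dict) -> None:
        self.marks[device_id] = geo_position

def on_positioning_info(shared_map: SharedMap, info: dict, devices: list) -> None:
    """Update the first device identifier's mark, then push the map to every device."""
    shared_map.update_mark(info["device_id"], info["geo"])
    for device in devices:                 # all N robots and M user terminals
        device.send_map(shared_map.marks)  # hypothetical transport method
```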
In one implementation example, the construction of the shared map may be accomplished by controlling several robots in a multi-robot-multi-person collaborative system to map and scan the work scene. Specifically, before updating the positioning mark of the first device identifier in the shared map to the position of the geographic position information, and sending the updated shared map to all the robots and the user terminals, the method further includes steps 71 to 74:
Step 71, controlling a plurality of robots to map a target scene and to perform laser scanning on the target scene, so as to obtain map data and scanning data;
Specifically, each robot is provided with a mapping device, such as a SLAM device, which includes a main control chip and, connected to it, sensing and measuring devices such as an odometer and an inertial sensor. When a plurality of robots are controlled to map a target scene, each robot performs simultaneous localization and mapping: each robot is controlled to patrol the target scene, and the main control chip of its SLAM device acquires the data measured by the sensing devices during the patrol, performs synchronous localization and map-building calculations on these data, and builds a map of the robot's environment, thereby obtaining the map data. Optionally, the target scene may be the working scene in which the multi-robot-multi-person cooperation system operates. Each robot is also provided with a three-dimensional laser scanning device (such as a laser or radar scanner); when the robots are controlled to map the target scene, their three-dimensional laser scanning devices are switched on synchronously, so that as the robots travel through the target scene the devices scan the contours and boundaries within it, such as those of walls and rooms, and record the three-dimensional coordinates, reflectivity, texture and other information of a large number of dense points on the surfaces of the measured objects, thereby obtaining the scanning data.
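A rough orchestration sketch of step 71, under the assumption that each robot object exposes patrol, SLAM and scanning interfaces; every method name here is hypothetical.

```python
def map_target_scene(robots):
    """Have each robot build a SLAM map and laser-scan the scene while patrolling."""
    map_data, scan_data = [], []
    for robot in robots:
        robot.start_laser_scanner()   # switch on the 3-D laser scanning device
    for robot in robots:
        robot.patrol()                # travel through the target scene
        map_data.append(robot.slam_map())      # map built by the SLAM device
        scan_data.append(robot.scan_points())  # dense points: coordinates,
                                               # reflectivity, texture, ...
    return map_data, scan_data
```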
Step 72, building an architectural information model of the target scene according to the map data and the scanning data;
A building information model (Building Information Modeling, BIM) of the target scene is established according to the map data and the scanning data obtained in the preceding step, thereby obtaining a three-dimensional building model.
Step 73, acquiring parameter information of the target scene; the parameter information comprises building floor information, room number information and regional function division information;
To realize its building functions, the target scene usually carries artificially defined site information, such as building name, floor number, floor direction, room number, floor position, and regional function. The parameter information of the target scene, namely the building floor information, room number information and regional function division information, can be acquired from a construction drawing or a design blueprint.
Step 74, correspondingly marking the parameter information in the building information model to obtain the shared map.
The parameter information of the target scene is input into the building information model, so that the building floor information, room number information, regional function division information and other information of the target scene are accurately marked in the model; the constructed shared map thus contains both three-dimensional coordinate information and site information. For example, if the target scene has several rooms, marking the parameter information correspondingly in the building information model gives each room in the model its labels of building floor information, room number information, and regional function division information.
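Step 74 amounts to joining the model's room geometry with the parameter information; the data layout in this sketch is an assumption for illustration only.

```python
def annotate_model(bim_rooms: dict, parameter_info: dict) -> dict:
    """Label each room of the building information model to obtain the shared map."""
    shared_map = {}
    for room_id, geometry in bim_rooms.items():
        labels = parameter_info.get(room_id, {})
        shared_map[room_id] = {
            "geometry": geometry,  # 3-D coordinates taken from the model
            "floor": labels.get("floor"),
            "room_number": labels.get("room_number"),
            "region_function": labels.get("region_function"),
        }
    return shared_map
```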
In another implementation of establishing the shared map, a SLAM map obtained in advance by a robot mapping the site is used as the initial draft of the shared map; scanning information is then obtained by laser-scanning the walls and facilities in the scene, and the contours and boundaries on the shared map, such as those of walls and rooms, are generated in the draft according to this scanning information; finally, the shared map is corrected and completed according to the construction drawing parameters of the site. The shared map so constructed is a three-dimensional electronic map containing information such as longitude, latitude, building name, floor number, floor direction, room number and floor position. Compared with a map drawn entirely by hand, this construction greatly reduces the time needed to build the human-machine shared map and improves its accuracy.
In one implementation example, in order to cope with changes in the target scene such as rearranged furniture or indoor reconstruction, the robots can be used to laser-scan the indoor environment of the target scene at regular intervals, and the shared map is updated in time according to the scanning data, so as to keep the shared map real-time and accurate. Specifically, after the parameter information is correspondingly marked in the building information model to obtain the shared map, the method further includes: controlling a plurality of robots to perform laser scanning on the target scene at regular intervals to obtain scene scanning data; and updating the shared map according to the scene scanning data.
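The timed rescan could be as simple as the loop below; the robot scanning interface, the map's merge method and the one-hour interval are all assumptions of this sketch.

```python
import time

def periodic_rescan(robots, shared_map, interval_s=3600):
    """Laser-scan the target scene at regular intervals and refresh the shared map."""
    while True:
        scene_scan_data = [robot.scan_points() for robot in robots]  # laser scanning
        shared_map.update_from_scans(scene_scan_data)  # hypothetical merge step
        time.sleep(interval_s)
```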
S130, receiving assistance request information; the assistance request information comprises assistance request instructions and a second device identification.
In an application scenario, a worker may use a user terminal to generate assistance request information at any place and send it to the server so that an assistance job policy is generated. As a detailed example, when the application scene of the system is a shopping mall, if a worker on patrol finds an accident, such as a fire or an injured person, that needs assistance, the worker may select on the user terminal the assistance request instruction corresponding to the item to be assisted; the terminal generates the assistance request information by combining the instruction with its device identifier and sends it to the server.
And S140, obtaining a positioning coordinate according to the positioning mark of the second equipment identifier in the shared map.
Because the shared map in the server tracks the position of each robot and each user terminal in real time, the positioning coordinate, namely the location where the assistance request instruction was initiated, can be obtained quickly from the positioning mark of the second equipment identifier in the shared map, which improves the response rate of the system.
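In code, S140 is a single lookup; the shared map is represented here as a plain dictionary of positioning marks, matching the earlier sketches.

```python
def locate_request(marks: dict, second_device_id: str):
    """Return the positioning coordinate of the device that sent the request."""
    geo = marks[second_device_id]  # positioning mark of the requesting device
    return geo["xyz"]              # initiation place of the assistance request
```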
S150, determining the robot and/or the user terminal meeting the preset conditions in the shared map as a target device.
Because the shared map in the user terminal also tracks the position of each robot and each user terminal in real time, a worker who generates assistance request information at any place can also select the target devices required for assistance, for example several robots and/or user terminals closest to the worker's own terminal in the shared map, according to the positions displayed by the shared map in the user terminal; the generated assistance request information then also includes the device identification information of the specified target devices. In this case, after the server receives the assistance request information, determining the robot and/or the user terminal meeting the preset condition in the shared map as the target device specifically comprises: determining the robot and/or the user terminal corresponding to the device identification information included in the assistance request information as the target device, thereby completing the determination of the target devices required by the assistance request instruction.
In an embodiment, in order to respond to the cooperative work instruction as soon as possible, workers using other user terminals close to the initiation location (i.e., the positioning coordinate) of the cooperation request instruction need to be notified in time to arrive at the site for cooperative work; the preset condition may therefore be set to select a preset number of robots and/or user terminals, other than the requesting terminal, that are closest to the positioning coordinate. Determining the robot and/or the user terminal meeting the preset condition in the shared map as the target device may then specifically be: determining the preset number of robots and/or user terminals closest to the positioning coordinate in the shared map as the target devices. Optionally, the preset number may be fixed in advance or may vary with the type of the assistance request instruction.
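Under the distance-based preset condition, S150 might be sketched as a nearest-k selection; using straight-line distance in map coordinates is an assumption of this sketch.

```python
import math

def select_targets(marks: dict, coord, requester_id: str, preset_count: int):
    """Pick the preset number of devices nearest to the positioning coordinate."""
    def dist(device_id: str) -> float:
        return math.dist(marks[device_id]["xyz"], coord)
    # Exclude the requesting device itself, then sort the rest by distance.
    candidates = [d for d in marks if d != requester_id]
    return sorted(candidates, key=dist)[:preset_count]
```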
And S160, generating a cooperative work instruction according to the positioning coordinate and the assistance request instruction.
After the target devices required by the assistance request instruction are determined, the server generates a cooperative work instruction according to the positioning coordinate and the assistance request instruction.
S170, sending the cooperative work instruction to the target device so that the robot corresponding to the target device and/or a user of the user terminal can reach the positioning coordinate to perform cooperative work.
The server sends the generated cooperative work instruction to the determined target devices, so that the robots and/or the users of the user terminals that receive the instruction travel to the positioning coordinate and perform cooperative work according to the control strategy of the instruction. In this way, when a cooperation request instruction is initiated, robots and workers using user terminals are arranged in time to respond to the instruction together and carry out the cooperative operation.
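S160 and S170 together might look like the sketch below; the instruction schema and the per-device send method are hypothetical.

```python
def dispatch_cooperation(coord, assist_instruction: str, targets: list) -> None:
    """Generate the cooperative work instruction and send it to every target device."""
    cooperative_instruction = {
        "type": assist_instruction,  # e.g. "fire" or "injured person"
        "coordinate": coord,         # where the cooperative work takes place
    }
    for device in targets:
        device.send(cooperative_instruction)  # robot navigates there / user is notified
```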
The embodiment of the invention provides a multi-robot-multi-person cooperative control method applied to a multi-robot-multi-person cooperative system comprising N robots and M user terminals communicatively connected to a server, where N is more than or equal to 1 and M is more than or equal to 1. The server receives positioning information comprising geographical position information and a first device identifier, each user terminal and each robot having a unique device identifier; by receiving the positioning information sent regularly by each user terminal and each robot, the position changes of every device in the cooperation system can be grasped in real time. The server updates the positioning mark of the first device identifier in the shared map to the position given by the geographical position information and sends the updated shared map, which includes the positioning marks of the N robots and the M user terminals, to all the robots and user terminals, thereby tracking the position of every robot and user terminal in the shared map in real time. The server then receives assistance request information comprising an assistance request instruction and a second device identifier, and obtains a positioning coordinate from the positioning mark of the second device identifier in the shared map; having received the assistance request information sent by any user terminal or robot, the server can quickly find on the shared map the position from which the request was sent, improving the response rate of the system. The server determines the robots and/or user terminals meeting the preset condition in the shared map as the target devices, generates a cooperative work instruction according to the positioning coordinate and the assistance request instruction, and sends the instruction to the target devices, so that the corresponding robots and/or users of the user terminals reach the positioning coordinate for cooperative work. When a cooperation request instruction is initiated, robots and workers using user terminals are thus arranged in time to respond to the instruction together and carry out the cooperative operation.
Example Two
Fig. 6 shows a multi-robot-multi-person cooperative control apparatus according to a second embodiment of the present invention. On the basis of the first embodiment, the embodiment of the present invention further provides a multi-robot-multi-person cooperative control apparatus, including: a processor 601, a memory 602, and a computer program 603, such as a program for a multi-robot-multi-person cooperative control method, stored in said memory 602 and executable on said processor 601. The processor 601, when executing the computer program 603, implements the steps in the above-described embodiment of the multi-robot-multi-person cooperative control method, such as steps S110 to S170 shown in fig. 1.
Illustratively, the computer program 603 may be partitioned into one or more modules that are stored in the memory 602 and executed by the processor 601 to complete the present application. The one or more modules may be a series of computer program instruction segments capable of performing specific functions, which describe the execution of the computer program 603 in the multi-robot-multi-person cooperative control apparatus. For example, the computer program 603 may be divided into a positioning information receiving module, a positioning updating module, an assistance request information receiving module, a positioning coordinate obtaining module, a target device determining module, a cooperative instruction generating module, and an instruction sending module, whose specific functions are as follows (a minimal code skeleton of this division is given after the list):
the positioning information receiving module is used for receiving positioning information; the positioning information comprises geographical position information and a first equipment identifier; each user terminal and each robot have unique equipment identification;
the positioning updating module is used for updating the positioning mark of the first equipment identifier in the shared map to the position of the geographical position information and sending the updated shared map to all the robots and the user terminal; the shared map comprises positioning marks of N robots and M user terminals; the shared map is a three-dimensional coordinate map; n is more than or equal to 1; m is more than or equal to 1;
the assistance request information receiving module is used for receiving assistance request information; the assistance request information comprises an assistance request instruction and a second equipment identifier;
the positioning coordinate obtaining module is used for obtaining a positioning coordinate according to the positioning mark of the second equipment identifier in the shared map;
the target device determining module is used for determining the robot and/or the user terminal meeting preset conditions in the shared map as a target device;
the cooperative instruction generating module is used for generating a cooperative work instruction according to the positioning coordinate and the assistance request instruction;
and the instruction sending module is used for sending the cooperative work instruction to the target device so as to enable the robot corresponding to the target device and/or a user of the user terminal to reach the positioning coordinate for cooperative work.
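A minimal skeleton of how the seven modules above might be wired into one server class; the method names simply mirror the module names and are assumptions, not the disclosed implementation.

```python
class CooperativeControlServer:
    """Skeleton grouping the seven functional modules of the apparatus."""
    def __init__(self, shared_map):
        self.shared_map = shared_map

    def receive_positioning_info(self, info): ...        # positioning information receiving module
    def update_positioning(self, info): ...              # positioning updating module
    def receive_assistance_request(self, request): ...   # assistance request information receiving module
    def obtain_positioning_coordinate(self, device_id): ...  # positioning coordinate obtaining module
    def determine_target_devices(self, coord): ...       # target device determining module
    def generate_cooperative_instruction(self, coord, request): ...  # cooperative instruction generating module
    def send_instruction(self, instruction, targets): ...  # instruction sending module
```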
The multi-robot-multi-person cooperative control apparatus may include, but is not limited to, a processor 601, a memory 602, and a computer program 603 stored in the memory 602. Those skilled in the art will appreciate that fig. 6 is merely an example of a multi-robot-multi-person cooperative control apparatus, and does not constitute a limitation on the multi-robot-multi-person cooperative control apparatus, and may include more or less components than those shown, or some components may be combined, or different components, for example, the multi-robot-multi-person cooperative control apparatus may further include an input-output device, a network access device, a bus, and the like.
The Processor 601 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 602 may be an internal storage unit of the multi-robot-multi-person cooperative control apparatus, such as a hard disk or a memory of the apparatus. The memory 602 may also be an external storage device of the apparatus, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card. Further, the memory 602 may include both an internal storage unit of the apparatus and an external storage device. The memory 602 is used to store the computer program and the other programs and data required by the multi-robot-multi-person cooperative control method. The memory 602 may also be used to temporarily store data that has been output or is to be output.
Example Three
Fig. 7 is a schematic structural diagram of a multi-robot-multi-person collaboration system according to a third embodiment of the present invention. The system comprises N robots 72 and M user terminals 73 communicatively connected to a server 71; n is more than or equal to 1; m is more than or equal to 1; wherein,
the multi-robot-multi-person cooperative system executes the steps of the multi-robot-multi-person cooperative control method as described in the first embodiment.
Specifically, in order that each user terminal and each robot in the multi-robot-multi-person cooperative system can receive visible light communication (VLC) signals, each user terminal and each robot is also provided with a visible light communication system-on-chip (VLC SOC). As shown in fig. 8, the VLC SOC 8, which implements the functions of receiving and encoding/decoding VLC signals, may include a power supply (DC) 110, a wireless data source 130, a light emitting diode (LED) 150, a photosensor 170, a DC-DC power converter 250, a wireless communication unit 240, a security unit 180, and a VLC unit 140; the VLC unit 140 includes a baseband DSP unit 220, an analog signal processing unit 230, a transmitter 140A, and a receiver 140B.
In detail, the wireless communication unit 240 is connected to the external data source 130 and communicates with it to transmit/receive data for backhaul communication and/or system control; the VLC unit 140 is connected to the LED 150 and to the one or more photosensors 170; and the security unit 180 is connected to the wireless communication unit 240 and the VLC unit 140. The DC-DC power converter 250 obtains power from the power supply (DC) 110 and supplies it to the elements of the SOC 8.
The VLC unit 140 is operable to modulate the LED 150 to transmit information, and/or to receive information via VLC through the one or more photosensors 170. In one embodiment, the SOC 8 may support VLC transmission and reception simultaneously, so that it can serve VLC sources (e.g., LED lights, signs, or other devices) as well as VLC receivers (e.g., handheld devices); further, the SOC 8 may also support two-way VLC devices. In the present embodiment, the VLC unit 140 includes a transmitter (TX) circuit 140A and a receiver (RX) circuit 140B. The TX circuit 140A may comprise the baseband digital signal processing (baseband DSP) unit 220 and the analog signal processing unit 230 that drives the LED 150. The RX circuit 140B may receive signals from the photosensor 170 and detect the VLC data in the received light.
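To make the TX/RX split concrete, here is a toy on-off-keying sketch: bits become LED intensity samples and the receiver thresholds the averaged photosensor samples. Real VLC modulation in the SOC is considerably more involved; this only illustrates the principle.

```python
def ook_modulate(bits, samples_per_bit=4):
    """Baseband side: map each bit to LED-on/LED-off intensity samples."""
    return [1.0 if b else 0.0 for b in bits for _ in range(samples_per_bit)]

def ook_demodulate(samples, samples_per_bit=4, threshold=0.5):
    """Receiver side: average each bit period from the photosensor and threshold it."""
    bits = []
    for i in range(0, len(samples), samples_per_bit):
        period = samples[i:i + samples_per_bit]
        bits.append(1 if sum(period) / len(period) > threshold else 0)
    return bits

assert ook_demodulate(ook_modulate([1, 0, 1, 1])) == [1, 0, 1, 1]
```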
VLC unit 140 and wireless communication unit 240 may be used in conjunction to communicate information to a device. The VLC unit 140 may, for example, transmit a quick response code (QR code), a Uniform Resource Locator (URL), or other data that may be used to access a greater amount of information. A larger amount of information may be transmitted through a wireless or wired network via the wireless communication unit 240.
The security unit 180 may ensure the security of network and/or VLC data access; for example, it may include cryptographic hardware for encrypting data to be transmitted via VLC and/or decrypting data received via VLC. Thus, sensitive data may be delivered only to particular users, while other recipients of the same VLC data cannot decrypt it. Likewise, data to be transmitted over the network may be encrypted by the security unit 180, and data received from the network (e.g., data to be forwarded via VLC) may be decrypted by the security unit 180.
The photosensor 170 can be any type of photosensor, for example, the photosensor can include a photodetector or a CMOS image sensor. The photodetector may be used for high bandwidth/data rate communications, while the CMOS image sensor may be used for low bandwidth/data rate communications. A given system may include one or more types of photosensors 170. Other photosensors may be used in other embodiments.
The LED 150 may be any kind of LED. In one embodiment, the LED 150 may be a large number of low-cost standard LEDs. VLC may gain market acceptance more readily through the combination of inexpensive LEDs and the cost savings of the SOC 8 (as compared to discrete components). Over time, the SOC 8 may also benefit from Moore's law to reduce cost, increase performance, and so on. VLC may also be used in conjunction with low-cost wireless/wireline networks. In one embodiment, the LED 150 may be an organic LED (OLED). The power supply may be a direct current (DC) or alternating current (AC) supply, and power may be delivered via a dedicated power cord or together with the data (e.g., via Power over Ethernet, PoE). In addition, data may be transmitted/received through a wireless or wired communication system.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only used for distinguishing one functional unit from another, and are not used for limiting the protection scope of the present application. For the specific working processes of the units and modules in the system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.