CN115482712A - Programmable group robot framework based on 5G network - Google Patents

Programmable group robot framework based on 5G network Download PDF

Info

Publication number
CN115482712A
CN115482712A (application CN202211345103.2A)
Authority
CN
China
Prior art keywords
main control
intelligent
robot
control board
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211345103.2A
Other languages
Chinese (zh)
Inventor
杨小莉
范衠
任鹏翔
林鹏
黄书山
赵雷
黄伟鑫
董湘渝
洪峻操
王诏君
黎焕林
吴奕润
李伟杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shantou Kuaichang Robot Technology Co ltd
Shantou University
Original Assignee
Shantou Kuaichang Robot Technology Co ltd
Shantou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shantou Kuaichang Robot Technology Co ltd, Shantou University filed Critical Shantou Kuaichang Robot Technology Co ltd
Priority to CN202211345103.2A priority Critical patent/CN115482712A/en
Publication of CN115482712A publication Critical patent/CN115482712A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B25/00 Models for purposes not provided for in G09B23/00, e.g. full-sized devices for demonstration purposes
    • G09B25/02 Models for purposes not provided for in G09B23/00, e.g. full-sized devices for demonstration purposes, of industrial processes; of machinery
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 Complex mathematical operations
    • G06F17/11 Complex mathematical operations for solving equations, e.g. nonlinear equations, general mathematical optimization problems
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B19/00 Teaching not covered by other main groups of this subclass
    • G09B19/0053 Computers, e.g. programming

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Business, Economics & Management (AREA)
  • Mathematical Optimization (AREA)
  • Educational Administration (AREA)
  • Pure & Applied Mathematics (AREA)
  • Educational Technology (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Operations Research (AREA)
  • Algebra (AREA)
  • Computer Hardware Design (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses a programmable swarm robot architecture based on a 5G network. The architecture comprises an upper PC main control end, a plurality of middle-layer control boards, a plurality of bottom-layer main control boards and a plurality of intelligent robots; the numbers of middle-layer control boards and bottom-layer main control boards equal the number of intelligent robots and correspond to them one to one, with one middle-layer control board and one bottom-layer main control board mounted on each corresponding intelligent robot. Each bottom-layer main control board carries a sensor and a plurality of expansion modules, and each intelligent robot carries a UWB positioning tag. The invention uses 5G network technology to improve communication efficiency among the swarm robots, offers strong real-time performance and is convenient to demonstrate. In hardware design it adopts a bottom-to-top hierarchical structure comprising the bottom-layer main control board, the middle-layer control board and the upper-layer PC main control end; in software design it adopts visual programming and algorithm programming, covering a complete structure from hardware to software. A user can set different group behaviors according to actual requirements, so that an innovative swarm robot can be constructed quickly and conveniently.

Description

Programmable group robot framework based on 5G network
Technical Field
The invention relates to the technical field of swarm robot control, in particular to a programmable swarm robot framework based on a 5G network.
Background
In recent years, with the rapid development of the internet and artificial intelligence, the robot industry has drawn wide attention. Most swarm robots are applied in scientific research, civil use and military use, such as warfare and drone-swarm performances, while robots are rarely applied in the field of education. As the education industry receives growing attention from the state and society, robot education has become a new branch of that industry.
Although educational robots have appeared on the market, most existing educational robots are single robots, whose field of view and load capacity are limited; there is no real-machine cluster educational robot system that simulates a real scene, so more complex tasks cannot be simulated. Moreover, single robots are designed in a one-to-one fixed mode, so the design difficulty rises once the number of robots increases, and when a single robot or part of the robots fail, the whole robot cluster is easily paralyzed. In addition, most methods for controlling swarm robots require mutual communication among the robots, and once communication is blocked the swarm is difficult to control. Furthermore, the motion trajectory of an educational robot is set in advance by the designer so that the robot moves along a prescribed path toward a required destination. From the above analysis, the current approach of using several single robots for swarm robot control is hard to apply to introductory education on robot cluster control, and the principles and methods of swarm robot control are difficult to demonstrate to students in real time in class. This is inconvenient both for students' hands-on practice and for swarm robot control itself, which reduces students' interest in learning about swarm robots and hinders their understanding of the core technology of swarm robots.
With the rapid development of 5G technology, 5G endows networked unmanned aerial vehicles and unmanned ships with important capabilities such as real-time ultra-high-definition image transmission, remote low-latency control and permanent online connectivity, improving operational efficiency. A 5G-connected robot cloud brain can reduce cost and assist large-scale robot deployment by sharing computation, storage, data and intelligence. 5G offers large bandwidth and low latency, many service scenarios are shifting from local processing to the cloud, and the volume of collected data is growing rapidly; based on big-data analysis and machine learning, this accelerates the development of artificial intelligence and opens a vast space of imagination for new products, services and modes.
Therefore, applying 5G technology to the architecture design of swarm robots improves the efficiency with which the swarm robots execute tasks.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a programmable swarm robot architecture based on a 5G network, which enhances the communication efficiency among swarm robots through 5G network technology, offers strong real-time performance, is convenient to demonstrate, is simple and easy to understand, and is convenient for students to use.
In order to solve the technical problems, the technical scheme adopted by the invention is as follows:
a programmable group robot architecture based on a 5G network is characterized in that:
(1) The intelligent robot system comprises an upper PC main control end, a plurality of middle-layer control boards, a plurality of bottom-layer main control boards and a plurality of intelligent robots, wherein the number of the middle-layer control boards and the number of the bottom-layer main control boards are the same as that of the intelligent robots and are in one-to-one correspondence with the intelligent robots, and one middle-layer control board and one bottom-layer main control board are arranged on the corresponding intelligent robots; each bottom layer main control board is provided with a sensor and a plurality of expansion modules, and each intelligent robot is provided with a UWB positioning tag;
(2) The upper PC main control end is connected with a middle control panel on each intelligent robot through a 5G network, the middle control panel is connected with the bottom main control panel, the signal input ends of the sensor and the plurality of expansion modules are connected with the bottom main control panel, and the control output ends of the sensor and the plurality of expansion modules are connected with each intelligent robot;
(3) The upper-layer PC main control end formulates the tasks of all the intelligent robots and plans their running routes; the middle-layer control board receives the tasks and running routes of the intelligent robots, performs operation simulation, and then transmits the next data instruction to the bottom-layer main control board; the bottom-layer main control board generates basic movement instructions according to the data obtained by the sensor and the data instruction, and executes them, thereby producing the swarm-intelligence effect;
(4) Setting a plurality of positioning base stations with known positions in the running route range of each intelligent robot, wherein one positioning base station is connected with the upper PC main control end, a UWB positioning tag on each intelligent robot emits pulses according to a certain frequency, continuously carries out distance measurement with each positioning base station, and obtains the position of each intelligent robot through calculation;
(5) The bottom-layer main control board sends basic motion instructions to each intelligent robot through the sensor and each expansion module; after receiving the basic motion instructions, each intelligent robot autonomously performs path planning and speed optimization, coordinates with other intelligent robots, monitors emergencies while planning, and coordinates potential conflicts or failed tasks that the upper-layer PC main control end cannot handle;
(6) The upper-layer PC main control end formulates the tasks of all the intelligent robots and plans their running routes through PyQt visual programming; the middle-layer control board is programmed in the Python programming language, and the tasks together with the sensor data from step (3) act on the middle-layer control board to generate a raw data model, from which a realizable task plan is established and transmitted to the bottom-layer main control board to execute the corresponding behavior actions; the bottom-layer main control board is programmed in a visual programming language or the Arduino programming language to obtain the basic behaviors and extended behaviors of each intelligent robot.
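For orientation only, the following is a minimal Python sketch of the three-layer message flow described in (3) and (6); the class names and fields are assumptions introduced for illustration and are not the patent's actual data formats.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Task:
    """Upper-layer PC main control end -> middle-layer control board (over the 5G network)."""
    robot_id: int
    waypoints: List[Tuple[float, float]]  # planned running route as (x, y) points

@dataclass
class MotionInstruction:
    """Middle-layer control board -> bottom-layer main control board (over the serial link)."""
    linear_speed: float   # m/s
    angular_speed: float  # rad/s

def plan_to_instructions(task: Task) -> List[MotionInstruction]:
    # Placeholder for the middle layer's operation simulation / task planning step.
    return [MotionInstruction(linear_speed=0.2, angular_speed=0.0) for _ in task.waypoints]
```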
In the above (2), the middle-layer control board is connected with the bottom-layer main control board so as to establish a communication connection between them, thereby allowing information to be relayed between the layers.
In the above (6), the basic behaviors of each intelligent robot include moving, braking, and the like; the expansion behaviors of the intelligent robots comprise fire extinguishment, obstacle elimination and the like.
The UWB positioning of the intelligent robot adopts a LinkTrack UWB positioning system, the specific position of each intelligent robot is determined by a TOF (time of flight) distance measurement technology, and the distance between nodes is measured by mainly utilizing the flight time of a pulse signal between two transceivers.
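As a numeric illustration of the TOF principle mentioned above (not the LinkTrack system's actual API), a two-way ranging sketch assuming a known, symmetric reply delay:

```python
C = 299_792_458.0  # speed of light, m/s

def tof_distance(round_trip_s: float, reply_delay_s: float = 0.0) -> float:
    """Two-way time-of-flight ranging: distance = c * (round trip - reply delay) / 2."""
    return C * (round_trip_s - reply_delay_s) / 2.0

# Example: a 35 ns round trip with a 1 ns reply delay gives roughly 5.1 m between the two transceivers.
print(round(tof_distance(35e-9, 1e-9), 2))
```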
In a preferred scheme, in (1), the UWB positioning tag on each intelligent robot is given a preset serial number; in (4), the upper-layer PC main control end identifies and locates the position of each intelligent robot's UWB positioning tag, and then transmits the basic motion instructions of (5) to each intelligent robot over WIFI.
In a further preferred scheme, in step (4), after the distances between the UWB positioning tag of each intelligent robot and the positioning base stations are measured, the coordinates of each intelligent robot are obtained by calculation and output in real time through the upper-layer PC main control end, yielding the positioning information of each UWB positioning tag; this positioning information is then fed to the corresponding intelligent robot for navigation. The positioning base stations and the upper-layer PC main control end output the positioning information of all UWB positioning tags synchronously in real time, realizing the positioning of multiple intelligent robots.
In a further preferable scheme, in step (4), let t_i (i = 1, 2, 3, 4, ..., n) be the time at which the i-th positioning base station receives the UWB signal transmitted by the UWB positioning tag of an intelligent robot, and let r_i be the distance from the UWB positioning tag to the i-th positioning base station. With the positioning base stations completely synchronized, the distance differences of the UWB positioning tag relative to the n pairs of positioning base stations (taking 1,2 as the first pair, 2,3 as the second pair, 3,4 as the third pair, ..., n,1 as the n-th pair) are:
d_12 = r_1 − r_2 = (t_1 − t_2) × c,
d_23 = r_2 − r_3 = (t_2 − t_3) × c,
d_34 = r_3 − r_4 = (t_3 − t_4) × c,
……
d_1n = r_1 − r_n = (t_1 − t_n) × c, where c is the speed of light;
with the coordinates of the n positioning base stations correspondingly set to (x_1, y_1, z_1), (x_2, y_2, z_2), (x_3, y_3, z_3), (x_4, y_4, z_4), ..., (x_n, y_n, z_n), the TDOA measurements form a set of hyperbolic equations in the position of the UWB positioning tag, and solving this set gives the coordinates (x, y, z) of the UWB positioning tag:
√((x − x_1)² + (y − y_1)² + (z − z_1)²) − √((x − x_2)² + (y − y_2)² + (z − z_2)²) = d_12
√((x − x_2)² + (y − y_2)² + (z − z_2)²) − √((x − x_3)² + (y − y_3)² + (z − z_3)²) = d_23
……
√((x − x_1)² + (y − y_1)² + (z − z_1)²) − √((x − x_n)² + (y − y_n)² + (z − z_n)²) = d_1n
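The hyperbolic system above can be solved numerically; the following is a minimal sketch of a least-squares fit in Python, assuming synchronized base stations and the consecutive-pair grouping used in the text. The function name and the use of scipy are illustrative assumptions, not part of the patent.

```python
import numpy as np
from scipy.optimize import least_squares

C = 299_792_458.0  # speed of light, m/s

def solve_tdoa(anchors, arrival_times, guess=(0.0, 0.0, 0.0)):
    """Estimate the tag position (x, y, z) from TDOA measurements.

    anchors       -- (n, 3) array of known base-station coordinates
    arrival_times -- length-n array of reception times t_i (synchronized clocks)
    """
    anchors = np.asarray(anchors, dtype=float)
    t = np.asarray(arrival_times, dtype=float)
    n = len(anchors)
    pairs = [(i, (i + 1) % n) for i in range(n)]  # (1,2), (2,3), ..., (n,1)

    def residuals(p):
        r = np.linalg.norm(anchors - p, axis=1)           # r_i = distance from tag to base station i
        return [(r[i] - r[j]) - (t[i] - t[j]) * C for i, j in pairs]

    return least_squares(residuals, np.asarray(guess, dtype=float)).x
```

A rough initial guess somewhere inside the working area helps the solver converge.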
In a preferred scheme, in step (2), the UART data bus with TTL-level signals built into the bottom-layer main control board can be connected to the middle-layer control board through digital ports 0 (RX) and 1 (TX) (5 V) for serial-port communication; a plurality of expansion interfaces are built into the bottom-layer main control board, and the signal input end of each expansion module is connected to the corresponding expansion interface. The bottom-layer main control board controls the basic motion behaviors of each intelligent robot, and its built-in expansion interfaces facilitate extending the functions of the intelligent robot. The middle-layer control board mainly receives task commands from the upper-layer PC main control end, performs task planning by itself, and then sends motion instructions to the bottom-layer main control board.
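As an illustration of the serial link described above between the middle-layer control board and the bottom-layer main control board, a minimal pyserial sketch; the device path, baud rate and the text command format are assumptions, since the patent does not specify the framing.

```python
import serial  # pyserial

# Assumed UART device and baud rate on the middle-layer control board side.
with serial.Serial("/dev/ttyAMA0", 9600, timeout=1) as link:
    link.write(b"MOVE 0.20 0.00\n")   # hypothetical "linear 0.2 m/s, angular 0.0 rad/s" instruction
    reply = link.readline()           # the bottom-layer board may answer with an acknowledgement
    print(reply.decode(errors="replace").strip())
```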
In a preferred scheme, in step (2), the signal input ends of the sensor and the plurality of expansion modules are directly stacked onto the bottom-layer main control board. The bottom-layer main control board serves as an interaction medium that helps the sensor and the expansion modules exchange information and cooperate.
In a further preferred scheme, in step (1), the sensor is a flame sensor, a magnetic sensor, or a combination of the two; each expansion module is one or a combination of an LED array module, an obstacle-removal module and a fire-extinguishing module. These expansion modules enable the intelligent robots to realize more behavior functions.
In a further preferred scheme, in step (1), when the sensor is a flame sensor and the expansion module is a fire-extinguishing module, the flame sensor detects flame by infrared, converts the flame brightness into a level signal of varying amplitude, and inputs the level signal into the bottom-layer main control board, which processes it and outputs a control signal, in turn driving the fire-extinguishing module to carry out the extinguishing action.
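The decision step described above (flame brightness converted to a level signal, then processed before the fire-extinguishing module is driven) can be pictured with a small threshold sketch; in the patent this processing runs on the bottom-layer main control board, and the ADC scale and threshold value below are assumptions.

```python
FLAME_LEVEL_THRESHOLD = 600  # hypothetical threshold on a 10-bit ADC reading (0-1023)

def should_extinguish(flame_level: int) -> bool:
    """Return True when the flame-sensor level signal indicates a flame to put out."""
    return flame_level >= FLAME_LEVEL_THRESHOLD

# Example: a bright flame reading of 750 would trigger the fire-extinguishing module.
print(should_extinguish(750))
```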
In a preferred scheme, in step (5), when one of the intelligent robots fails to detect that a collision with surrounding intelligent robots is imminent, the other intelligent robots can be coordinated through human intervention to avoid the collision; when emergencies such as unknown terrain or environmental factors are detected, the upper-layer PC main control end can intervene to prevent potential conflicts or failed tasks.
In the preferable scheme, the upper PC main control end is a computer, and in the step (4), one of the positioning base stations is connected with a USB interface of the computer, so that the specific position of the intelligent robot is visualized.
In a preferred scheme, in (1), the bottom-layer main control board is an Arduino Nano control board.
In a preferred scheme, in (1), the middle-layer control board is a Raspberry Pi Zero W control board.
In a preferred scheme, a swarm intelligence algorithm written in the Python programming language is built into the middle-layer control board. All group behaviors of the swarm robots need the support of swarm intelligence algorithms and of raw data collected from the outside. Generally, the swarm intelligence algorithm includes a target-trapping algorithm, an area-coverage algorithm and an anti-collision algorithm.
The target-trapping algorithm and the area-coverage algorithm may adopt the control method of the life-search cluster educational robot disclosed in the invention patent application with application publication number CN114446121A, published on 2022-05-06.
The anti-collision algorithm may adopt the robot anti-collision algorithm disclosed in the invention patent application with application publication number CN114347041A, published on 2022-04-15.
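As a rough picture of what an anti-collision check over the UWB positions might look like (the actual algorithm is the one disclosed in CN114347041A), a minimal pairwise-distance sketch; the safety radius and the data layout are assumptions.

```python
import math

SAFE_DISTANCE_M = 0.30  # hypothetical safety radius between robot centres, in metres

def colliding_pairs(positions):
    """positions: dict {robot_id: (x, y)} taken from the UWB positioning tags."""
    ids = sorted(positions)
    pairs = []
    for idx, a in enumerate(ids):
        for b in ids[idx + 1:]:
            (xa, ya), (xb, yb) = positions[a], positions[b]
            if math.hypot(xa - xb, ya - yb) < SAFE_DISTANCE_M:
                pairs.append((a, b))
    return pairs

# Example: robots 1 and 2 are 0.1 m apart and would be flagged.
print(colliding_pairs({1: (0.0, 0.0), 2: (0.1, 0.0), 3: (2.0, 2.0)}))
```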
Compared with the prior art, the invention has the following advantages:
1. The invention applies advanced 5G network communication technology not only to communication among the swarm robots but also to their remote control. It provides the swarm robot system with stable data transmission and lower communication latency, speeds up the robots' reaction to the environment and to control commands, and maximizes the operational capability of the swarm robot system. It effectively improves the efficiency of information exchange among the swarm robots, strengthens efficient transmission between the upper-layer PC main control end and the swarm robots, raises the execution efficiency of group behaviors, reduces the probability of channel collisions in inter-robot communication, improves the fairness of channel access and the network throughput, and lowers the latency of data transmission.
2. In hardware design, the invention adopts a bottom-to-top hierarchical structure comprising the bottom-layer main control board, the middle-layer control board and the upper-layer PC main control end; in software design, it adopts visual programming and algorithm programming, covering a complete structure from hardware to software. A user can set different group behaviors according to actual requirements, so that an innovative swarm robot can be constructed quickly and conveniently.
3. The invention has a simple swarm intelligence algorithm built in, which avoids mutual collision and interference among the swarm robots and lowers the required level of software/hardware algorithm development.
4. The invention is convenient to demonstrate, simple and easy to understand, and well suited to students' learning; it has significant educational value, can stimulate students' imagination and independent creativity, and helps students better understand swarm robots and the basic working principles of swarm robot control.
Drawings
FIG. 1 is an architectural diagram of a preferred embodiment of the present invention;
fig. 2 is a schematic diagram of the distance of one of the UWB positioning tags to each positioning base station in the preferred embodiment of the present invention.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
As shown in fig. 1-2, the programmable swarm robot architecture based on the 5G network in the present embodiment:
(1) The intelligent robot system comprises an upper PC main control end, a plurality of middle-layer control panels, a plurality of bottom-layer main control panels and a plurality of intelligent robots, wherein the number of the middle-layer control panels and the number of the bottom-layer main control panels are the same as that of the intelligent robots and are in one-to-one correspondence with the intelligent robots, and one middle-layer control panel and one bottom-layer main control panel are arranged on the corresponding intelligent robots; each bottom layer main control board is provided with a sensor and a plurality of expansion modules, and each intelligent robot is provided with a UWB positioning tag;
(2) The upper PC main control end is connected with a middle control panel on each intelligent robot through a 5G network, the middle control panel is connected with the bottom main control panel, the signal input ends of the sensor and the plurality of expansion modules are connected with the bottom main control panel, and the control output ends of the sensor and the plurality of expansion modules are connected with each intelligent robot;
(3) The upper-layer PC main control end formulates the tasks of all the intelligent robots and plans their running routes; the middle-layer control board receives the tasks and running routes of the intelligent robots, performs operation simulation, and then transmits the next data instruction to the bottom-layer main control board; the bottom-layer main control board generates basic movement instructions according to the data obtained by the sensor and the data instruction, and executes them, thereby producing the swarm-intelligence effect;
(4) Setting a plurality of positioning base stations with known positions in the running route range of each intelligent robot, wherein one positioning base station is connected with the upper PC main control end, a UWB positioning tag on each intelligent robot emits pulses according to a certain frequency, continuously carries out distance measurement with each positioning base station, and obtains the position of each intelligent robot through calculation;
(5) The bottom-layer main control board sends basic motion instructions to each intelligent robot through the sensor and each expansion module; after receiving the basic motion instructions, each intelligent robot autonomously performs path planning and speed optimization, coordinates with other intelligent robots, monitors emergencies while planning, and coordinates potential conflicts or failed tasks that the upper-layer PC main control end cannot handle;
(6) The upper-layer PC main control end formulates the tasks of all the intelligent robots and plans their running routes through PyQt visual programming; the middle-layer control board is programmed in the Python programming language, and the tasks together with the sensor data from step (3) act on the middle-layer control board to generate a raw data model, from which a realizable task plan is established and transmitted to the bottom-layer main control board to execute the corresponding behavior actions; the bottom-layer main control board is programmed in a visual programming language or the Arduino programming language to obtain the basic behaviors and extended behaviors of each intelligent robot.
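To give a feel for the upper-layer PyQt side described in (6), the following is a minimal PyQt5 sketch with one dispatch button per robot; the window layout, the task content and the send_task stand-in (which would be replaced by the 5G transport to the middle-layer control boards) are assumptions for illustration only.

```python
import sys
from PyQt5.QtWidgets import QApplication, QWidget, QVBoxLayout, QPushButton

def send_task(robot_id: int) -> None:
    # Placeholder: in the real system this would push the task and planned route
    # to the robot's middle-layer control board over the 5G network.
    print(f"dispatch task to intelligent robot {robot_id}")

app = QApplication(sys.argv)
window = QWidget()
window.setWindowTitle("Swarm task dispatch (sketch)")
layout = QVBoxLayout(window)
for rid in (1, 2, 3):
    button = QPushButton(f"Send task to robot {rid}")
    button.clicked.connect(lambda _checked, r=rid: send_task(r))
    layout.addWidget(button)
window.show()
sys.exit(app.exec_())
```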
In the above (2), the middle-layer control board is connected with the bottom-layer main control board so as to establish a communication connection between them, thereby allowing information to be relayed between the layers.
In the above (6), the basic behaviors of each intelligent robot include moving, braking, and the like; the expansion behaviors of the intelligent robots comprise fire extinguishment, obstacle elimination and the like.
The UWB positioning of the intelligent robot adopts a LinkTrack UWB positioning system, the specific position of each intelligent robot is determined by a TOF (time of flight) distance measuring technology, and the distance between nodes is measured by mainly utilizing the flight time of a pulse signal between two transceivers.
In the above (1), the UWB positioning tag on each intelligent robot is given a preset serial number; in (4), the upper-layer PC main control end identifies and locates the position of each intelligent robot's UWB positioning tag, and then transmits the basic motion instructions of (5) to each intelligent robot over WIFI.
In the above (4), after the distances between the UWB positioning tag of each intelligent robot and the positioning base stations are measured, the coordinates of each intelligent robot are obtained by calculation and output in real time through the upper-layer PC main control end, yielding the positioning information of each UWB positioning tag; this positioning information is then fed to the corresponding intelligent robot for navigation. The positioning base stations and the upper-layer PC main control end output the positioning information of all UWB positioning tags synchronously in real time, realizing the positioning of multiple intelligent robots.
In the above (4), let t_i (i = 1, 2, 3, 4) be the time at which the i-th positioning base station receives the UWB signal transmitted by the UWB positioning tag of an intelligent robot, and let r_i be the distance from the UWB positioning tag to the i-th positioning base station. With the positioning base stations completely synchronized, the distance differences of the UWB positioning tag relative to the 4 pairs of positioning base stations (taking 1,2 as the first pair, 2,3 as the second pair, 3,4 as the third pair and 4,1 as the fourth pair) are:
d_12 = r_1 − r_2 = (t_1 − t_2) × c,
d_23 = r_2 − r_3 = (t_2 − t_3) × c,
d_34 = r_3 − r_4 = (t_3 − t_4) × c,
d_14 = r_1 − r_4 = (t_1 − t_4) × c, where c is the speed of light;
with the coordinates of the 4 positioning base stations correspondingly set to (x_1, y_1, z_1), (x_2, y_2, z_2), (x_3, y_3, z_3) and (x_4, y_4, z_4), the TDOA measurements form a set of hyperbolic equations in the position of the UWB positioning tag, and solving this set gives the coordinates (x, y, z) of the UWB positioning tag:
√((x − x_1)² + (y − y_1)² + (z − z_1)²) − √((x − x_2)² + (y − y_2)² + (z − z_2)²) = d_12
√((x − x_2)² + (y − y_2)² + (z − z_2)²) − √((x − x_3)² + (y − y_3)² + (z − z_3)²) = d_23
√((x − x_3)² + (y − y_3)² + (z − z_3)²) − √((x − x_4)² + (y − y_4)² + (z − z_4)²) = d_34
√((x − x_1)² + (y − y_1)² + (z − z_1)²) − √((x − x_4)² + (y − y_4)² + (z − z_4)²) = d_14
In step (2), the UART data bus with TTL-level signals built into the bottom-layer main control board can be connected to the middle-layer control board through digital ports 0 (RX) and 1 (TX) (5 V) for serial-port communication; a plurality of expansion interfaces are built into the bottom-layer main control board, and the signal input end of each expansion module is connected to the corresponding expansion interface. The bottom-layer main control board controls the basic motion behaviors of each intelligent robot, and its built-in expansion interfaces facilitate extending the functions of the intelligent robot. The middle-layer control board mainly receives task commands from the upper-layer PC main control end, performs task planning by itself, and then sends motion instructions to the bottom-layer main control board.
In the above step (2), the signal input ends of the sensor and the plurality of expansion modules are directly stacked onto the bottom-layer main control board. The bottom-layer main control board serves as an interaction medium that helps the sensor and the expansion modules exchange information and cooperate.
In the above (1), when the sensor is a flame sensor and the expansion module is a fire-extinguishing module, the flame sensor detects flame by infrared, converts the flame brightness into a level signal of varying amplitude, and inputs the level signal into the bottom-layer main control board, which processes it and outputs a control signal, in turn driving the fire-extinguishing module to carry out the extinguishing action.
In the above (5), when one of the intelligent robots fails to detect that a collision with surrounding intelligent robots is imminent, the other intelligent robots can be coordinated through human intervention to avoid the collision; when emergencies such as unknown terrain or environmental factors are detected, the upper-layer PC main control end can intervene to prevent potential conflicts or failed tasks.
The upper PC main control end is a computer, and in the step (4), one positioning base station is connected with a USB interface of the computer, so that the specific position of the intelligent robot is visualized.
In the above step (1), the bottom-layer main control board is an Arduino Nano control board.
In the above (1), the middle-layer control board is a Raspberry Pi Zero W control board.
A swarm intelligence algorithm written in the Python programming language is built into the middle-layer control board. All group behaviors of the swarm robots need the support of swarm intelligence algorithms and of raw data collected from the outside. Generally, the swarm intelligence algorithm includes a target-trapping algorithm, an area-coverage algorithm and an anti-collision algorithm.
The target-trapping algorithm and the area-coverage algorithm may adopt the control method of the life-search cluster educational robot disclosed in the invention patent application with application publication number CN114446121A, published on 2022-05-06.
The anti-collision algorithm may adopt the robot anti-collision algorithm disclosed in the invention patent application with application publication number CN114347041A, published on 2022-04-15.
In addition, it should be noted that the parts of the embodiments described in this specification may be named differently; equivalent or simple changes to the structures, features and principles described within the concept of this patent are included in the protection scope of this patent. Those skilled in the art may make various modifications, additions and substitutions to the specific embodiments described without departing from the scope of the invention as defined in the accompanying claims.

Claims (10)

1. A programmable group robot architecture based on a 5G network is characterized in that:
(1) The intelligent robot system comprises an upper PC main control end, a plurality of middle-layer control panels, a plurality of bottom-layer main control panels and a plurality of intelligent robots, wherein the number of the middle-layer control panels and the number of the bottom-layer main control panels are the same as that of the intelligent robots and are in one-to-one correspondence with the intelligent robots, and one middle-layer control panel and one bottom-layer main control panel are arranged on the corresponding intelligent robots; each bottom layer main control board is provided with a sensor and a plurality of expansion modules, and each intelligent robot is provided with a UWB positioning tag;
(2) The upper PC main control end is connected with the middle control panel on each intelligent robot through a 5G network, the middle control panel is connected with the bottom main control panel, the signal input ends of the sensor and the plurality of expansion modules are connected with the bottom main control panel, and the control output ends of the sensor and the plurality of expansion modules are connected with each intelligent robot;
(3) The upper-layer PC main control end formulates the tasks of all the intelligent robots and plans their running routes; the middle-layer control board receives the tasks and running routes of the intelligent robots, performs operation simulation, and then transmits the next data instruction to the bottom-layer main control board; the bottom-layer main control board generates basic movement instructions according to the data obtained by the sensor and the data instruction, and executes them, thereby producing the swarm-intelligence effect;
(4) Setting a plurality of positioning base stations with known positions in the running route range of each intelligent robot, wherein one positioning base station is connected with the upper PC main control end, a UWB positioning tag on each intelligent robot emits pulses according to a certain frequency, continuously carries out distance measurement with each positioning base station, and obtains the position of each intelligent robot through calculation;
(5) The bottom-layer main control board sends basic motion instructions to each intelligent robot through the sensor and each expansion module; after receiving the basic motion instructions, each intelligent robot autonomously performs path planning and speed optimization, coordinates with other intelligent robots, monitors emergencies while planning, and coordinates potential conflicts or failed tasks that the upper-layer PC main control end cannot handle;
(6) The upper-layer PC main control end formulates the tasks of all the intelligent robots and plans their running routes through PyQt visual programming; the middle-layer control board is programmed in the Python programming language, and the tasks together with the sensor data from step (3) act on the middle-layer control board to generate a raw data model, from which a realizable task plan is established and transmitted to the bottom-layer main control board to execute the corresponding behavior actions; the bottom-layer main control board is programmed in a visual programming language or the Arduino programming language to obtain the basic behaviors and extended behaviors of each intelligent robot.
2. The programmable swarm robot architecture based on a 5G network of claim 1, wherein: in (1), the UWB positioning tag on each intelligent robot is given a preset serial number; in (4), the upper-layer PC main control end identifies and locates the position of each intelligent robot's UWB positioning tag, and then transmits the basic motion instructions of (5) to each intelligent robot over WIFI.
3. The programmable swarm robot architecture based on a 5G network as claimed in claim 2, characterized by: in the step (4), after the distance between the UWB positioning tags of each intelligent robot and the positioning base station is measured, the coordinates of each intelligent robot are obtained through calculation, and then the coordinates of each intelligent robot are output in real time through the upper PC main control terminal, so that the positioning information of each UWB positioning tag is obtained; and then the positioning information of each UWB positioning label is accessed into the corresponding intelligent robot, and each intelligent robot is navigated.
4. The programmable swarm robot architecture based on a 5G network as recited in claim 3, characterized by: in step (4), let t_i (i = 1, 2, 3, 4, ..., n) be the time at which the i-th positioning base station receives the UWB signal transmitted by the UWB positioning tag of an intelligent robot, and let r_i be the distance from the UWB positioning tag to the i-th positioning base station; with the positioning base stations completely synchronized, the distance differences of the UWB positioning tag relative to the n pairs of positioning base stations (taking 1,2 as the first pair, 2,3 as the second pair, 3,4 as the third pair, ..., n,1 as the n-th pair) are:
d_12 = r_1 − r_2 = (t_1 − t_2) × c,
d_23 = r_2 − r_3 = (t_2 − t_3) × c,
d_34 = r_3 − r_4 = (t_3 − t_4) × c,
……
d_1n = r_1 − r_n = (t_1 − t_n) × c, where c is the speed of light;
with the coordinates of the n positioning base stations correspondingly set to (x_1, y_1, z_1), (x_2, y_2, z_2), (x_3, y_3, z_3), (x_4, y_4, z_4), ..., (x_n, y_n, z_n), a set of hyperbolic equations in the position of the UWB positioning tag is formed from the TDOA measurements, and solving this set gives the coordinates (x, y, z) of the UWB positioning tag:
√((x − x_1)² + (y − y_1)² + (z − z_1)²) − √((x − x_2)² + (y − y_2)² + (z − z_2)²) = d_12
√((x − x_2)² + (y − y_2)² + (z − z_2)²) − √((x − x_3)² + (y − y_3)² + (z − z_3)²) = d_23
……
√((x − x_1)² + (y − y_1)² + (z − z_1)²) − √((x − x_n)² + (y − y_n)² + (z − z_n)²) = d_1n
5. The programmable swarm robot architecture based on a 5G network of claim 1, wherein: in step (2), the UART data bus with TTL-level signals built into the bottom-layer main control board can be connected to the middle-layer control board through digital ports 0 (RX) and 1 (TX) (5 V) for serial-port communication; a plurality of expansion interfaces are built into the bottom-layer main control board, and the signal input end of each expansion module is connected to the corresponding expansion interface.
6. The programmable swarm robot architecture based on a 5G network of claim 1, wherein: in the step (2), the signal input ends of the sensor and the plurality of expansion modules are directly connected with the bottom layer main control board in a stacking manner; in the step (1), the sensor is one or the combination of two of a flame sensor and a magnetic sensor; each expansion module is one or a combination of more of an LED array module, a barrier removal module and a fire extinguishing module.
7. The programmable swarm robot architecture based on a 5G network of claim 6, wherein: in (1), when the sensor is the flame sensor and the expansion module is the fire-extinguishing module, the flame sensor detects flame by infrared, converts the flame brightness into a level signal of varying amplitude, and inputs the level signal into the bottom-layer main control board, which processes it and outputs a control signal, in turn driving the fire-extinguishing module to carry out the extinguishing action.
8. The programmable swarm robot architecture based on a 5G network of claim 1, wherein: in step (5), when one of the intelligent robots fails to detect that a collision with surrounding intelligent robots is imminent, the other intelligent robots can be coordinated through human intervention to avoid the collision; when emergencies such as unknown terrain or environmental factors are detected, the upper-layer PC main control end can intervene to prevent potential conflicts or failed tasks.
9. The programmable swarm robot architecture based on a 5G network as claimed in claim 1, characterized by: the upper PC main control end is a computer, and in the step (4), one positioning base station is connected with a USB interface of the computer, so that the specific position of the intelligent robot is visualized.
10. The programmable swarm robot architecture based on a 5G network of claim 1, wherein:
in step (1), the bottom-layer main control board is an Arduino Nano control board;
in step (1), the middle-layer control board is a Raspberry Pi Zero W control board;
a swarm intelligence algorithm written in the Python programming language is built into the middle-layer control board.
CN202211345103.2A 2022-10-31 2022-10-31 Programmable group robot framework based on 5G network Pending CN115482712A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211345103.2A CN115482712A (en) 2022-10-31 2022-10-31 Programmable group robot framework based on 5G network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211345103.2A CN115482712A (en) 2022-10-31 2022-10-31 Programmable group robot framework based on 5G network

Publications (1)

Publication Number Publication Date
CN115482712A true CN115482712A (en) 2022-12-16

Family

ID=84396026

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211345103.2A Pending CN115482712A (en) 2022-10-31 2022-10-31 Programmable group robot framework based on 5G network

Country Status (1)

Country Link
CN (1) CN115482712A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117741568A (en) * 2024-02-20 2024-03-22 沈阳格熙科技有限公司 Positioning system suitable for intelligent robot


Similar Documents

Publication Publication Date Title
Chung et al. A survey on aerial swarm robotics
Tan et al. Research advance in swarm robotics
Hauert et al. Evolved swarming without positioning information: an application in aerial communication relay
CN106444423A (en) Indoor multi unmanned aerial vehicle formation flight simulation verification platform and achieving method thereof
Echeverria et al. Modular open robots simulation engine: Morse
Yaoming et al. A newly bio-inspired path planning algorithm for autonomous obstacle avoidance of UAV
CN115951598B (en) Virtual-real combination simulation method, device and system for multiple unmanned aerial vehicles
Doering et al. Design and optimization of a heterogeneous platform for multiple uav use in precision agriculture applications
CN112766595B (en) Command control device, method, system, computer equipment and medium
CN115482712A (en) Programmable group robot framework based on 5G network
Mathews et al. Supervised morphogenesis: Exploiting morphological flexibility of self-assembling multirobot systems through cooperation with aerial robots
Wang et al. [Retracted] Virtual Reality Technology of Multi UAVEarthquake Disaster Path Optimization
Hentati et al. Cooperative UAVs framework for mobile target search and tracking
CN114518772B (en) Unmanned aerial vehicle swarm self-organization method in rejection environment
Modares et al. Simulating Unmanned Aerial Vehicle swarms with the UB-ANC emulator
Madey et al. Design and evaluation of UAV swarm command and control strategies
Schlecht et al. Decentralized Search by Unmanned Air Vehicles Using Local Communication.
Hardes et al. Towards an open source fully modular multi unmanned aerial vehicle simulation framework
CN116861779A (en) Intelligent anti-unmanned aerial vehicle simulation system and method based on digital twinning
CN111552294A (en) Outdoor robot path-finding simulation system and method based on time dependence
Leong et al. Integrated perception and tactical behaviours in an auto-organizing aerial sensor network
CN113741461B (en) Multi-robot obstacle avoidance method oriented to limited communication under complex scene
CN114115363A (en) Multi-unmanned aerial vehicle unknown indoor space exploration method based on dynamic target tracking
CN113485435A (en) Heterogeneous multi-unmanned aerial vehicle monitoring system and method
CN113848757A (en) Intelligent unmanned aerial vehicle cluster software in-loop simulation system with variable communication topology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination