CN114208258A - Intelligent controller and sensor network bus and system and method including message retransmission mechanism - Google Patents


Info

Publication number
CN114208258A
Authority
CN
China
Prior art keywords
message
data
root
leaf
leaf nodes
Prior art date
Legal status
Granted
Application number
CN202180004902.0A
Other languages
Chinese (zh)
Other versions
CN114208258B
Inventor
Li Weijian (李伟坚)
Current Assignee
Pengyan Technology Shanghai Co ltd
Original Assignee
Pengyan Technology Shanghai Co ltd
Priority date
Filing date
Publication date
Priority claimed from U.S. patent application No. 16/863,898 (US11156987B2)
Application filed by Pengyan Technology Shanghai Co ltd filed Critical Pengyan Technology Shanghai Co ltd
Publication of CN114208258A
Application granted
Publication of CN114208258B
Legal status: Active



Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 - Data switching networks
    • H04L 12/28 - Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L 12/40 - Bus networks
    • H04L 12/40052 - High-speed IEEE 1394 serial bus
    • H04L 12/40104 - Security; Encryption; Content protection

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Small-Scale Networks (AREA)

Abstract

A machine automation system for controlling and operating an automation machine is disclosed. The system includes a controller and sensor bus comprising a central processing core and a multi-media transport intranet that implements a dynamic burst-to-broadcast transmission scheme, in which messages are burst from the nodes to the central processing core and broadcast from the central processing core to all nodes.

Description

Intelligent controller and sensor network bus and system and method including message retransmission mechanism
Cross-referencing
The present application is a continuation-in-part of co-pending U.S. patent application No. 16/741,332, "Intelligent controller and sensor network bus, and system and method including a multi-tiered platform security architecture," filed January 13, 2020; a continuation-in-part of co-pending U.S. patent application No. 16/653,558, "Intelligent controller and sensor network bus, and system and method including an intelligent flexible actuator module," filed October 15, 2019; a continuation-in-part of co-pending U.S. patent application No. 16/572,358, "Intelligent controller and sensor network bus, and system and method including a generic encapsulation mode," filed September 16, 2019; and a continuation-in-part of U.S. patent application No. 16/529,682, "Intelligent controller and sensor network bus, and system and method," filed August 1, 2019, all of which are incorporated herein by reference.
Technical Field
The present application relates to the field of buses. More particularly, the present application relates to controller and sensor network bus architectures.
Background
With the development of self-driving vehicles, intelligent robots, and factory automation, the field of machine automation is expanding rapidly. However, because of the diversity and high speed these emerging technologies require, no existing bus or network architecture can effectively handle all of their requirements. Instead, current networks suffer from high latency, low bandwidth, complex wiring, large electromagnetic interference (EMI), high cost, insecure data, and complex system integration. For example, existing networks do not have sufficient speed and throughput to transmit sensor data, such as camera and light detection and ranging (LIDAR) data, across the network to the CPU core. Furthermore, existing cable systems are complex, limited to short distances, and, because they rely on copper cabling, cannot handle EMI without expensive shielding. Currently there is no integrated "controller and sensor network" system bus solution that can support and transport internet L2/L3 Ethernet packets, motor and motion control messages, sensor data, and CPU-CMD from edge node to edge node in the system.
Disclosure of Invention
A machine automation system for controlling and operating an automation machine is provided. The system includes a controller and sensor bus comprising a central processing core and a multi-media transport intranet that implements a dynamic burst-to-broadcast transmission scheme, in which messages are burst from the nodes to the central processing core and broadcast from the central processing core to all nodes.
In a first aspect, embodiments of the present application disclose a machine automation system for controlling and operating an automation machine. The system comprises: a controller and sensor bus, the controller and sensor bus comprising: at least one central processing core comprising one or more root ports, each root port having a root acknowledgement engine; one or more transport networks, each transport network comprising a plurality of leaf nodes and being directly coupled to the core via a different one of the root ports, each leaf node comprising a leaf acknowledgement engine and a leaf node memory; and a plurality of input/output ports, each input/output port coupled to one of the leaf nodes; and a plurality of external machine automation devices, each external machine automation device coupled to one of the leaf nodes via one or more of the input/output ports coupled to the one of the leaf nodes; wherein: one of the root ports sends an authorization message indicating a transmission window to one of the leaf nodes coupled to the one of the root ports; the one of the leaf nodes sends a data message comprising a plurality of data packets having destination information and an acknowledgement request indicator to the one of the root ports within the transmission window, and the leaf acknowledgement engine of the one of the leaf nodes stores a leaf copy of the data message comprising the plurality of data packets; and if the data message received by the one of the root ports does not have any uncorrectable errors, the root acknowledgement engine of the one of the root ports sends a data receipt message to the one of the leaf nodes, which removes the leaf copy based on receiving the data receipt message.
In some embodiments, if one of the root ports does not receive the data message within the transmission window, the root acknowledgement engine of the one of the root ports sends a data loss message to one of the leaf nodes, which resends the data message including all of the data packets using the leaf copy. In some embodiments, if one of the root ports receives the data message having an uncorrectable error in the subset of the data packets, the root acknowledgement engine of the one of the root ports sends a data portion receipt message to one of the leaf nodes, the data portion receipt message including data packet loss/receipt information identifying the subset of the data packets that need to be retransmitted. In some embodiments, in response to receiving the data portion receive message, one of the leaf nodes removes from the leaf copy data packets that are not part of the subset based on the loss/receive information. In some embodiments, the loss/receive information comprises a bitmap comprising a bit for each data packet of the data message, the value of the bit identifying whether the data packet needs to be retransmitted. In some embodiments, the loss/reception information includes a start of sequence value and an end of sequence value, the start of sequence value and the end of sequence value identifying a range of the data packets that need not be retransmitted. In some embodiments, after expiration of a timer associated with each subset, one of the leaf nodes resends the subset to one of the root ports in a new data message in the subsequent transmission window granted to the one of the leaf nodes. In some embodiments, the new data message includes one or more other data packets that are not part of the data message other than the subset.
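The bitmap and start/end-of-sequence encodings described above can be illustrated in a few lines. The sketch below shows only the two loss/receipt encodings; the function names and the assumption that sequence numbers start at zero are illustrative choices, not language from the claims.

```python
def build_loss_bitmap(total_packets, received_ok):
    """One bit per data packet of the message; a set bit marks a packet to resend."""
    bitmap = 0
    for seq in range(total_packets):
        if seq not in received_ok:
            bitmap |= 1 << seq
    return bitmap

def packets_to_resend(total_packets, bitmap):
    """Leaf-side decoding: sequence numbers to keep in the leaf copy for resending."""
    return [seq for seq in range(total_packets) if bitmap & (1 << seq)]

def received_range(received_ok):
    """Start/end-of-sequence form: usable when the correctly received packets
    form one contiguous run; everything outside the run is retransmitted."""
    lo, hi = min(received_ok), max(received_ok)
    assert received_ok == set(range(lo, hi + 1)), "not a contiguous run"
    return lo, hi

# Packets 2 and 5 of an 8-packet burst arrived with uncorrectable errors.
bitmap = build_loss_bitmap(8, {0, 1, 3, 4, 6, 7})
assert packets_to_resend(8, bitmap) == [2, 5]

# The tail of the burst (packets 6 and 7) was lost; the range form suffices.
assert received_range({0, 1, 2, 3, 4, 5}) == (0, 5)
```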
In some embodiments, one of the leaf nodes is part of a plurality of nodes of a first one of the networks, and wherein the root acknowledgement engine of one of the root ports does not send the data receive message to one of the leaf nodes if the one of the root ports receives the data message without any uncorrectable errors and the destination information of the data message indicates that the data message needs to be broadcast by the one of the root ports to all leaf nodes in the first network. In some embodiments, in response to receiving the data message broadcast from one of the root ports within the first network, one of the leaf nodes determines that one of the leaf nodes is the source of the data message and removes any data packets found in the data message from the leaf copy such that the any data packets are not retransmitted to one of the root ports. In some embodiments, when the destination information indicates that the data message is to be sent to one or more of the other leaf nodes, one of the root ports sends the data message to the other leaf nodes, and if one of the root ports does not receive a leaf data receipt message from one or more of the other leaf nodes, the root acknowledgement engine stores a root copy of the data message including the plurality of data packets and resends one or more of the data packets to the other leaf nodes in a subsequent data message, the leaf data receipt message indicating that the received one or more data packets do not have any uncorrectable errors. In some embodiments, one of the root ports selects a target node of the leaf nodes of the first network, and only the target node is for responding with the leaf data reception message when the other leaf nodes are a plurality of the leaf nodes of the first network. In some embodiments, the target node is selected based on which of the leaf nodes of the first network was the last one to receive a message broadcast by one of the root ports over the first network.
In a second aspect, embodiments of the present application disclose a controller and sensor bus. The bus includes: at least one central processing core comprising one or more root ports, each root port having a root acknowledgement engine; one or more transport networks, each transport network comprising a plurality of leaf nodes and being directly coupled to the core via a different one of the root ports, each leaf node comprising a leaf acknowledgement engine and a leaf node memory; and a plurality of input/output ports, each input/output port coupled to one of the leaf nodes; wherein: one of the root ports sends an authorization message indicating a transmission window to one of the leaf nodes coupled to the one of the root ports; the one of the leaf nodes sends a data message comprising a plurality of data packets having destination information and an acknowledgement request indicator to the one of the root ports within the transmission window, and the leaf acknowledgement engine of the one of the leaf nodes stores a leaf copy of the data message comprising the plurality of data packets; and if the data message received by the one of the root ports does not have any uncorrectable errors, the root acknowledgement engine of the one of the root ports sends a data receipt message to the one of the leaf nodes, which removes the leaf copy based on receiving the data receipt message.
In some embodiments, if one of the root ports does not receive the data message within the transmission window, the root acknowledgement engine of the one of the root ports sends a data loss message to one of the leaf nodes, which resends the data message including all of the data packets using the leaf copy. In some embodiments, if one of the root ports receives the data message having an uncorrectable error in the subset of the data packets, the root acknowledgement engine of the one of the root ports sends a data portion receipt message to one of the leaf nodes, the data portion receipt message including data packet loss/receipt information identifying the subset of the data packets that need to be retransmitted. In some embodiments, in response to receiving the data portion receive message, one of the leaf nodes removes from the leaf copy data packets that are not part of the subset based on the loss/receive information. In some embodiments, the loss/receive information comprises a bitmap comprising a bit for each data packet of the data message, the value of the bit identifying whether the data packet needs to be retransmitted. In some embodiments, the loss/reception information includes a start of sequence value and an end of sequence value, the start of sequence value and the end of sequence value identifying a range of the data packets that need not be retransmitted. In some embodiments, one of the leaf nodes resends the subset to one of the root ports in a new data message in the subsequent transmission window granted to the one of the leaf nodes after expiration of a timer associated with each subset. In some embodiments, the new data message includes one or more other data packets that are not part of the data message other than the subset.
In some embodiments, one of the leaf nodes is part of a plurality of nodes of a first one of the networks, and wherein the root acknowledgement engine of one of the root ports does not send the data receive message to one of the leaf nodes if the one of the root ports receives the data message without any uncorrectable errors and the destination information of the data message indicates that the data message needs to be broadcast by the one of the root ports to all leaf nodes in the first network. In some embodiments, in response to receiving the data message broadcast from one of the root ports within the first network, one of the leaf nodes determines that one of the leaf nodes is the source of the data message and removes any data packets found in the data message from the leaf copy such that the any data packets are not retransmitted to one of the root ports. In some embodiments, when the destination information indicates that the data message is to be sent to one or more of the other leaf nodes, one of the root ports sends the data message to the other leaf nodes, and if one of the root ports does not receive a leaf data receipt message from one or more of the other leaf nodes, the root acknowledgement engine stores a root copy of the data message including the plurality of data packets and resends one or more of the data packets to the other leaf nodes in a subsequent data message, the leaf data receipt message indicating that the received one or more data packets do not have any uncorrectable errors. In some embodiments, one of the root ports selects a target node of the leaf nodes of the first network, and only the target node is for responding with the leaf data reception message when the other leaf nodes are a plurality of the leaf nodes of the first network. In some embodiments, the target node is selected based on which of the leaf nodes of the first network was the last one to receive a message broadcast by one of the root ports over the first network.
In a third aspect, embodiments of the present application disclose a central core of a controller and sensor bus, the controller and sensor bus comprising one or more transport networks, each transport network comprising a plurality of leaf nodes and being directly coupled to the core, each leaf node comprising a leaf acknowledgement engine and a leaf node memory, the central core comprising: at least one central processing unit; and a non-transitory computer readable memory storing at least one root port coupled with the central processing unit and having a root acknowledgement engine, wherein the root port is to: send an authorization message indicating a transmission window to one of the leaf nodes coupled to the root port; and if a data message received by the root port from the one of the leaf nodes within the transmission window does not have any uncorrectable errors, cause the root acknowledgement engine to send a data reception message to the one of the leaf nodes, wherein the data message includes a plurality of data packets having destination information and an acknowledgement request indicator.
In some embodiments, the root acknowledgement engine of the root port is to send a data loss message to one of the leaf nodes if the root port does not receive the data message within the transmission window. In some embodiments, if the root port receives the data message having an uncorrectable error in the subset of the data packets, the root acknowledgement engine of the root port is to send a data portion receive message to one of the leaf nodes, the data portion receive message including data packet loss/receipt information identifying the subset of the data packets that need to be resent. In some embodiments, the loss/receive information comprises a bitmap comprising a bit for each data packet of the data message, the value of the bit identifying whether the data packet needs to be retransmitted. In some embodiments, the loss/reception information includes a start of sequence value and an end of sequence value, the start of sequence value and the end of sequence value identifying a range of the data packets that need not be retransmitted. In some embodiments, one of the leaf nodes is part of a plurality of nodes of a first one of the networks, and wherein the root acknowledgement engine of the root port is to avoid sending the data receive message to one of the leaf nodes if the root port receives the data message without any uncorrectable errors and the destination information of the data message indicates that the data message needs to be broadcast to all leaf nodes in the first network.
In some embodiments, when the destination information indicates that the data message is to be sent to one or more of the other leaf nodes, the root port sends the data message to the other leaf nodes, and if the root port does not receive a leaf data receive message from one or more of the other leaf nodes, the root acknowledgement engine stores a root copy of the data message including the plurality of data packets and resends one or more of the data packets to the other leaf nodes in a subsequent data message, the leaf data receive message indicating that the received one or more data packets do not have any uncorrectable errors. In some embodiments, the root port selects a target node of the leaf nodes of the first network, and only the target node is to respond with the leaf data reception message when the other leaf nodes are a plurality of the leaf nodes of the first network. In some embodiments, the target node is selected based on which of the leaf nodes of the first network was the last one to receive a message broadcast by one of the root ports over the first network.
In a fourth aspect, a controller and sensor bus is disclosed. The bus includes: one or more transport networks, each transport network comprising a root port and a plurality of leaf nodes, each leaf node comprising a leaf acknowledgement engine and a leaf node memory; and a plurality of input/output ports, each input/output port coupled with one of the leaf nodes, wherein one of the leaf nodes of a first one of the networks is to: receive an authorization message from the root port of the first network indicating a transmission window assigned to the one of the leaf nodes; send a data message comprising a plurality of data packets having destination information and an acknowledgement request indicator to the root port within the transmission window, and store a leaf copy of the data message comprising the plurality of data packets; receive a data receipt message from the root port, wherein the data receipt message indicates whether the root port received the data message without any uncorrectable errors; and remove at least a portion of the leaf copy based on the data receipt message.
In some embodiments, one of said leaf nodes is configured to retransmit said data message including all of said data packets using said leaf copy if said leaf node receives a data loss message from said root port, wherein said data loss message indicates that said root port did not receive said data message within said transmission window. In some embodiments, one of the leaf nodes is to receive a data portion receive message from the root port indicating that the root port received the data message with uncorrectable errors in the subset of data packets, and the data portion receive message includes data packet loss/receipt information identifying the subset of data packets that need to be retransmitted. In some embodiments, in response to receiving the data portion receive message, one of the leaf nodes is to remove from the leaf copy data packets that are not part of the subset based on the loss/receive information. In some embodiments, the loss/receive information comprises a bitmap comprising a bit for each data packet of the data message, the value of the bit identifying whether the data packet needs to be retransmitted. In some embodiments, the loss/reception information includes a start of sequence value and an end of sequence value, the start of sequence value and the end of sequence value identifying a range of the data packets that need not be retransmitted.
In some embodiments, one of the leaf nodes is for: retransmitting each subset to one of the root ports in a new data message after expiration of a timer associated with the subset and in the subsequent transmission window granted to one of the leaf nodes. In some embodiments, the new data message includes one or more other data packets that are not part of the data message other than the subset. In some embodiments, in response to receiving the data message broadcast from one of the root ports within the first network, one of the leaf nodes is to determine that one of the leaf nodes is the source of the data message, and remove any data packets found in the data message from the leaf copy such that the any data packets are not retransmitted to one of the root ports. In some embodiments, one of the leaf nodes is configured to not respond to other data messages broadcast by the root port to all of the leaf nodes on the first network unless the one of the leaf nodes receives a target message from the root port indicating that the leaf node is a target node that must respond with a leaf data receive message on behalf of all nodes of the first network upon receiving the other data messages broadcast by the root port.
In a fifth aspect, a method of operating a controller and sensor bus is disclosed. The controller and sensor bus comprises at least one central processing core comprising one or more root ports each having a root acknowledgement engine, and one or more transport networks each comprising a plurality of leaf nodes and directly coupled to the core via a different one of the root ports, each leaf node comprising a leaf acknowledgement engine and a leaf node memory. The method comprises: one of the root ports sending an authorization message indicating a transmission window to one of the leaf nodes coupled to the one of the root ports; the one of the leaf nodes sending a data message to the one of the root ports within the transmission window, the data message including a plurality of data packets having destination information and an acknowledgement request indicator; the leaf acknowledgement engine of the one of the leaf nodes storing a leaf copy of the data message comprising the plurality of data packets; if the data message received by the one of the root ports does not have any uncorrectable errors, the root acknowledgement engine of the one of the root ports sending a data reception message to the one of the leaf nodes; and the one of the leaf nodes removing the leaf copy based on receiving the data reception message.
In some embodiments, the method further comprises: if one of the root ports does not receive the data message within the transmission window, the root acknowledgement engine of the one of the root ports sends a data loss message to one of the leaf nodes; and one of said leaf nodes resending said data message including all of said data packets using said leaf copy. In some embodiments, the method further comprises: if one of the root ports receives the data message having an uncorrectable error in the subset of the data packets, the root acknowledgement engine of the one of the root ports sends a data portion receipt message to one of the leaf nodes, the data portion receipt message including packet loss/receipt information identifying the subset of the data packets that need to be resent. In some embodiments, the method further comprises: in response to receiving the data portion reception message, one of the leaf nodes removes from the leaf replica data packets that are not part of the subset based on the loss/reception information. In some embodiments, the loss/receive information comprises a bitmap comprising a bit for each data packet of the data message, the value of the bit identifying whether the data packet needs to be retransmitted.
In some embodiments, the loss/reception information includes a start of sequence value and an end of sequence value, the start of sequence value and the end of sequence value identifying a range of the data packets that need not be retransmitted. In some embodiments, the method further comprises: after expiration of a timer associated with each subset and in the subsequent transmission window granted to one of the leaf nodes, the one of the leaf nodes resends the subset to one of the root ports in a new data message. In some embodiments, the new data message includes one or more other data packets that are not part of the data message other than the subset. In some embodiments, one of the leaf nodes is part of a plurality of nodes of a first one of the networks, the method further comprising: if one of the root ports receives the data message without any uncorrectable errors and the destination information of the data message indicates that the data message needs to be broadcast by one of the root ports to all leaf nodes in the first network, the root acknowledgement engine of the one of the root ports does not send the data receive message to the one of the leaf nodes. In some embodiments, the method further comprises: in response to receiving the data message broadcast from one of the root ports within the first network, one of the leaf nodes determines that it is the source of the data message and removes any data packets found in the data message from the leaf copy so that those data packets are not resent to the one of the root ports.
In some embodiments, the method further comprises: when the destination information indicates that the data message is to be sent to one or more of the other leaf nodes, one of the root ports sends the data message to the other leaf nodes; and if one of the root ports does not receive a leaf data receipt message from one or more of the other leaf nodes, the root acknowledgement engine storing a root copy of the data message including the plurality of data packets and resending one or more of the data packets to the other leaf nodes in a subsequent data message, the leaf data receipt message indicating that the received one or more data packets do not have any uncorrectable errors. In some embodiments, the method further comprises: one of the root ports selects a target node of the leaf nodes of the first network, wherein only the target node is to respond with the leaf data reception message when the other leaf nodes are a plurality of the leaf nodes of the first network. In some embodiments, the target node is selected based on which of the leaf nodes of the first network was the last one to receive a message broadcast by one of the root ports over the first network.
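As a rough end-to-end illustration of the method above, the sketch below models a leaf node that keeps a copy of every burst until the root acknowledges it, and a root acknowledgement engine that answers with either a full or a partial receipt. Class, method, and message names are invented for illustration and are not the claimed apparatus.

```python
class LeafNode:
    def __init__(self, node_id):
        self.node_id = node_id
        self.leaf_copy = {}                   # seq -> packet, kept until acknowledged

    def burst(self, packets):
        """Send within the granted window and retain a leaf copy for retransmission."""
        self.leaf_copy = dict(enumerate(packets))
        return {"src": self.node_id, "ack_required": True,
                "packets": dict(self.leaf_copy)}

    def on_data_receipt(self):
        """Full receipt: the whole leaf copy can be dropped."""
        self.leaf_copy.clear()

    def on_partial_receipt(self, resend_seqs):
        """Partial receipt: keep only the packets flagged for retransmission."""
        self.leaf_copy = {s: p for s, p in self.leaf_copy.items() if s in resend_seqs}

class RootPort:
    def acknowledge(self, message, uncorrectable_seqs):
        """Answer a burst with a full or partial receipt message."""
        if not uncorrectable_seqs:
            return ("DATA_RECEIPT", None)
        return ("DATA_PARTIAL_RECEIPT", set(uncorrectable_seqs))

leaf, root = LeafNode(node_id=7), RootPort()
message = leaf.burst(["p0", "p1", "p2", "p3"])
kind, resend = root.acknowledge(message, uncorrectable_seqs=[2])
if kind == "DATA_PARTIAL_RECEIPT":
    leaf.on_partial_receipt(resend)
assert list(leaf.leaf_copy) == [2]            # only the errored packet awaits a new window
```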
Drawings
Fig. 1 illustrates a machine automation system according to some embodiments.
FIG. 2 illustrates an intelligent controller and sensor intranet bus according to some embodiments.
FIG. 3 illustrates a tree topology for an intelligent controller and sensor intranet bus in accordance with some embodiments.
FIG. 4 illustrates a block diagram of an exemplary computing device for implementing the system, in accordance with some embodiments.
Fig. 5 illustrates a method of operating a machine automation system including an intelligent controller and a sensor intranet bus, in accordance with some embodiments.
Fig. 6A illustrates an exemplary GEM packet format according to some embodiments.
FIG. 6B illustrates a detailed view of a GEM packet header format in accordance with some embodiments.
Fig. 6C illustrates a detailed view of a GEM header format of a node report message, according to some embodiments.
Fig. 6D illustrates a detailed view of a first variation of the GEM header format of a root port bandwidth grant message, in accordance with some embodiments.
Fig. 6E illustrates a detailed view of a second variation of the GEM header format of the root port bandwidth grant message, in accordance with some embodiments.
Fig. 6F illustrates a detailed view of a GEM header format for a control message, according to some embodiments.
FIG. 7A illustrates a Broadcast PHY Frame (Broadcast-PHY-Frame) according to some embodiments.
Fig. 7B illustrates a Burst PHY Frame (Burst-PHY-Frame) according to some embodiments.
Fig. 7C illustrates a gate Burst PHY Frame (gate Burst-PHY-Frame) according to some embodiments.
FIG. 8 illustrates a method of operating an intelligent controller and sensor intranet bus, in accordance with some embodiments.
FIG. 9 illustrates a smart flexible actuator (SCA) and sensor module according to some embodiments.
Fig. 10A illustrates a first variation of an SCA and a control board of a sensor module according to some embodiments.
Fig. 10B illustrates a second variation of an SCA and a control board of a sensor module according to some embodiments.
Fig. 10C illustrates a third variation of an SCA and a control board of a sensor module according to some embodiments.
Fig. 11A and 11B illustrate a machine automation system including coupled SCAs and sensor modules, in accordance with some embodiments.
FIG. 12 illustrates a method of operating a controller and sensor bus according to some embodiments.
Figure 13 illustrates a bus including a multi-layer security architecture, in accordance with some embodiments.
FIG. 14 illustrates a security module of a bus according to some embodiments.
FIG. 15 illustrates a bus including multiple subsystems divided into multiple cascade manager levels, according to some embodiments.
Figure 16 illustrates a method of implementing a bidirectional node/kernel authentication protocol, in accordance with some embodiments.
FIG. 17 illustrates a method of operating an intelligent controller and sensor intranet bus, according to some embodiments.
Fig. 18 illustrates a message retransmission mechanism for a bus according to some embodiments.
Fig. 19 illustrates an exemplary acknowledgement message in accordance with some embodiments.
FIG. 20 illustrates a method of implementing a guaranteed message delivery mechanism on a controller and sensor bus, in accordance with some embodiments.
Detailed Description
Embodiments described herein relate to machine automation systems, methods, and devices for controlling and operating an automation machine. The system, method, and apparatus include a controller and sensor bus comprising a central processing core and a multi-media transport intranet that implements a dynamic burst-to-broadcast transmission scheme, in which messages are burst from the nodes to the central processing core and broadcast from the central processing core to all nodes. The system, method, and apparatus therefore achieve high-speed performance despite incorporating lower-speed network media, and provide a unified software image for the entire intranet system, including all gates, nodes, and root ports. This simplifies the software architecture, shortens the product development cycle, and facilitates system-level remote debugging, monitoring, and troubleshooting. In particular, the system, method, and apparatus provide a unique intranet system architecture specifically defined and optimized for machine automation applications.
Fig. 1 illustrates a machine automation system 100 according to some embodiments. As shown in FIG. 1, system 100 includes one or more external devices 102 operatively coupled with an intelligent controller and sensor intranet bus 104. In some embodiments, the system 100 may be part of an automated device such as an autonomous vehicle, an automated industrial machine, or an automated robot. Alternatively, the system 100 may be part of other machine automation applications. The devices 102 may include one or more of the following: sensor devices (e.g., ultrasound, infrared, camera, light detection and ranging (LIDAR), sound navigation and ranging (SONAR), magnetic, radio detection and ranging (RADAR)), internet appliances, motors, actuators, lights, displays (e.g., screens, user interfaces), speakers, graphics processing units, central processing units, memory (e.g., solid state drives, hard drives), controllers/microcontrollers, or combinations thereof. Each of the devices 102 can be operatively coupled with the bus 104, wired and/or wirelessly, via one or more bus input/output (I/O) ports (see fig. 2). Although system 100 includes a discrete number of external devices 102 and buses 104, as shown in fig. 1, it is contemplated that more or fewer devices 102 and/or buses 104 may be provided.
Fig. 2 illustrates an intelligent controller and sensor intranet bus 104 according to some embodiments. As shown in fig. 2, bus 104 includes an intranet formed by a central core 200, which central core 200 is coupled to one or more gates 202 and a plurality of edge nodes 204 (each edge node 204 having one or more external IO ports 99) via one or more central transport networks 206, and to one or more edge sub-nodes 208 (each edge sub-node 208 having one or more external IO ports 99) via one or more sub-networks 210 extending from gates 202. Thus, as shown in FIG. 3, bus 104 forms a network tree topology in which central network 206 branches from core 200 (e.g., root port 230 of the core) to edge node 204 and gate 202, and sub-network 210 branches from gate 202 to child node 208 and/or child gate 202'. In this way, core 200 may see all of nodes 204 and child nodes 208 (because gates 202 and child gates 202' are transparent to core 200). In some embodiments, one or more of the gates 202 are coupled directly to the I/O port 99 without a node (e.g., coupled to an external CPU, GPU, AI kernel, and/or Solid State Drive (SSD)).
The port 99 may be any type of interface port, such as peripheral component interconnect express (PCIe), Mobile Industry Processor Interface (MIPI), Ethernet, Universal Serial Bus (USB), General Purpose Input Output (GPIO), universal asynchronous receiver/transmitter (UART), inter-integrated circuit (I2C), and/or other types of ports. Although the bus 104 includes a discrete number of ports 99, cores 200, nodes 204, 208, gates 202, networks 206, 210, other elements, and components thereof, as shown in FIG. 2, it is contemplated that more or fewer ports 99, cores 200, nodes 204, 208, gates 202, networks 206, 210, other elements, and components thereof may be provided.
The central transport network 206 may include a faster/lower latency connection medium than the connection medium of the sub-network 210 coupled to the gate 202 of the central transport network 206. Similarly, for each iterative subnet, the subnet 210 may include a faster/lower latency connection medium than the connection medium of the subnet 210 'coupled to the gate 202' of the subnet 210. This network/subnet connection medium speed/latency relationship can enable the bus 104 to avoid slowing down the overall processing of the bus 104, although it still includes the slower connection medium described in detail below. Alternatively, one or more of the subnetworks 210, 210' and/or the central network 206 may have the same or other connection medium speed/delay relationship.
In some embodiments, the connection medium of the central transport network 206 includes a fiber optic cable 212 split using an optical splitter 214 (e.g., a 2-1 splitter) and having an optical transceiver 216 to couple to the nodes 204,208 and receive data from the nodes 204, 208. In some embodiments, the connection medium of the sub-network 210 includes an optical connection medium (e.g., similar to the central transmission network 206, but may be slower), a wireless connection (e.g., a radio frequency transceiver 218), a copper connection (e.g., a twisted pair copper wire 220, optionally split using an analog splitter 222 (e.g., a fan-out/multiplexer) and having a serializer/deserializer (SERDES)224 coupled to the nodes 204,208 and receiving data from the nodes 204, 208), and/or combinations thereof (e.g., hybrid fiber, copper, and/or wireless connection media). Thus, bus 104 supports multi-rate traffic transmission, and depending on latency/speed, connectivity, and/or distance requirements of data/traffic/external devices 102, different nodes/networks can be used to couple to bus 104 while still providing the required throughput. For example, for high speed, low latency, and long distance requirements, the optical connection medium of the central network can be used by coupling to node 204. Other networks 210 can also be used depending on cost, speed, connectivity, and/or distance requirements. In some embodiments, central network 206 is a passive optical network and/or copper sub-network 210 is an active network. In some embodiments shown in FIG. 2, one or more of the nodes 204 are coupled to a Controller Area Network (CAN)226 such that the nodes input data from each controller coupled to the CAN. Alternatively, as shown in FIG. 3, one or more subnetworks 210 may be CAN coupled to core 200 via one of gates 202.
Multi-layer bus addressing
The bus 104 may utilize a multi-tiered addressing scheme in which the root port 230, I/O ports 99, nodes 204,208, 234, and/or gates 202 are able to direct messages through the bus 104 using node, epoch (epoch), and GEM identification addresses. In particular, each of root port 230, nodes 204,208, 234, and gate 202 may be assigned a node identifier (node-ID), wherein nodes 204,208, and gate 202 are also assigned at least one epoch identifier (epoch-ID) and at least one GEM identifier (GEM-ID). The epoch-ID can be used to identify the source/destination of the message in the network 206, 210 (e.g., the node/gate devices and their I/O ports, embedded CPUs, and/or other types of services), while the GEM-ID can be used to identify the destination of the message (e.g., the node/gate devices and their sets and subsets of I/O ports, embedded CPUs, and/or other types of services). Thus, the epoch-ID can be used to transmit/route messages throughout the network 206, 210, while the GEM-ID can be used by the device itself (via port 99) to determine whether to capture the received/broadcast message as a target.
A node/gate may be assigned multiple epoch-IDs and multiple GEM-IDs according to the node/gate's Service Level Agreement (SLA) profile, which can correspond to the devices coupled to the node/gate's ports 99. Thus, the node-ID of each of the nodes 204, 208 and the gates 202 can be mapped to one or more epoch-IDs, which can in turn be mapped to one or more GEM-IDs. For example, a node 204, 208 coupled with two IO ports 99 may have a single node-ID, two epoch-IDs (one epoch-ID per port 99), and ten GEM-IDs (one GEM-ID associated with the first epoch-ID and the first port 99, and nine GEM-IDs associated with the second epoch-ID and the second port 99). Furthermore, while the node-ID and the epoch-ID are unique for each node/gate/port, a GEM-ID may be shared between nodes/gates/ports. For example, ports 99 of the same node 204, 208 or different ports 99 of different nodes 204, 208 can be associated with matching or overlapping sets of GEM-IDs.
Gate 202 may also be assigned one or more virtual node-IDs to port 99 to which gate 202 is directly coupled. As with conventional nodes, these virtual nodes represented by the doors 202 can be assigned multiple epoch-IDs and multiple GEM-IDs according to the SLA profile of the doors 202 (which can correspond to the devices coupled to the ports 99 of the virtual node/door).
Other nodes 234 and kernels 232 (which are directly coupled to core 200, such as IO devices and embedded CPU cores) can each have one or more GEM-IDs and a global node-ID, but need not be assigned epoch-IDs; an epoch-ID is unnecessary because messages between these nodes 234 and core 200 remain entirely within core 200. Similar to nodes 204, 208, the number of GEM-IDs assigned to each node 234 and kernel 232 can be determined based on the SLA profile of that node 234 or kernel 232 (which can correspond to the devices coupled to the ports 99 of node 234). Each of the kernel switch 220, root port 230, nodes 204, 208, 234 and/or gates 202 can maintain and update a local SLA table indicating the mapping between each node-ID, epoch-ID, and GEM-ID. Bus addressing thus provides the advantage of using the epoch-ID and/or node-ID to facilitate simplified burst/broadcast messaging between nodes, gates, and cores within network 100, while using the GEM-ID to facilitate any desired more complex messaging between devices/IO ports 99 and/or the kernels themselves.
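To make the mapping concrete, the toy table below mirrors the earlier two-port example: one node-ID, one epoch-ID per I/O port, and GEM-IDs grouped under each epoch. All identifier values are invented for illustration; the patent does not prescribe this data structure.

```python
# Hypothetical SLA table: node-ID -> { epoch-ID -> [GEM-IDs] }.
sla_table = {
    0x012: {                                   # node-ID of a two-port leaf node
        0x101: [0x300],                        # epoch-ID of port 0 -> one GEM-ID
        0x102: [0x301 + i for i in range(9)],  # epoch-ID of port 1 -> nine GEM-IDs
    },
}

def capture_targets(node_id, gem_id):
    """Which epochs (ports/services) on a node should capture a broadcast GEM-ID."""
    return [epoch for epoch, gems in sla_table.get(node_id, {}).items()
            if gem_id in gems]

assert capture_targets(0x012, 0x300) == [0x101]   # port 0 captures this GEM-ID
assert capture_targets(0x012, 0x305) == [0x102]   # port 1 captures this GEM-ID
```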
General encapsulation mode
The bus 104 is capable of encapsulating all input data and internally generated data (e.g., control, operation, and management messages) into a Generic Encapsulation Mode (GEM) for transmission across the bus 104 intranet. Thus, the GEM acts as a unique standardized data and message container for transferring data between nodes and/or to the core 200 via the bus 104 intranet. As a result, as input data enters bus 104, the input data may be encapsulated into a GEM format at each node and routed through core 200 (where the input data is decapsulated for processing and repackaged for transmission) onto its destination node, which decapsulates the data back to the original format for egress to the target external device 102 or other destination. The input data may come from various sources (e.g., device 102, CAN 226), input via port 99 and/or embedded CPU core 232 at node 204,208, 234 or gate 202.
The GEM format comes in two variants: GEM packet and GEM control. The GEM packet format includes a GEM header plus a GEM payload (e.g., from 8 bytes to 4 kilobytes in length). Generally, the GEM packet format is used to encapsulate input port data, packets, and messages at an ingress (e.g., a node or port). The following are some examples of IO port data, packets, and messages that can utilize the GEM packet format:
carry Ethernet packets using the GEM packet format, transported from a local gate 202 and/or node 204, 208 to a remote gate 202 and/or node 204 over the bus 104 after GEM encapsulation (this can be used, for example, for internet and Wi-Fi interfaces over an Ethernet port or PCIe port);
carry sensor data using GEM packet format, transmitted from local gate 202 and/or node 204 to remote gate 202 and/or node 204 over bus 104 after GEM encapsulation (e.g., CAN bus data, camera (MIPI) frame data, lidar (ethernet) data, magnetic encoder data, and other types of sensor data);
carry jumbo data and packets using the GEM packet format, transmitted from local nodes 204, 208 to remote nodes 204, 208 through a fragmentation and de-fragmentation scheme; this can include fragmentation, de-fragmentation, and re-ordering/re-transmission functions (a sketch follows this list);
network control, operation and management messages, including physical layer operations, administration and maintenance (PLOAM), Node Management Control Interface (NMCI) and operations, administration and maintenance (OAM) messages, are communicated between core 200 and nodes 204,208 (and/or gates) using GEM packet format;
carry CPU/PCIe access CMD/DATA using the GEM packet format, transferred from the core 200 and a local gate 202 and/or node 204 to a remote gate 202 and/or node 204 over the bus 104 after GEM encapsulation (e.g., CPU 232 accessing a target device 102 node-to-node over PCIe, USB, I2C, UART, and GPIO interfaces); and
finally, carry VPN tunnel traffic between local nodes 204, 208 and remote nodes 204, 208 over the bus 104 using the GEM packet format.
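As noted in the jumbo-data item above, large payloads are carried through a fragmentation and de-fragmentation scheme. The sketch below is a minimal illustration of such a scheme using a last-fragment flag like the GEM header's last fragment indication; the 4-kilobyte limit follows the GEM payload range mentioned earlier, and the dictionary fields are placeholders rather than the actual GEM header.

```python
MAX_PAYLOAD = 4096  # assumed per-GEM payload ceiling (bytes)

def fragment(payload: bytes):
    """Split a jumbo payload into GEM-sized fragments with a last-fragment flag."""
    chunks = [payload[i:i + MAX_PAYLOAD] for i in range(0, len(payload), MAX_PAYLOAD)]
    return [{"seq": n, "last": n == len(chunks) - 1, "data": chunk}
            for n, chunk in enumerate(chunks)]

def reassemble(fragments):
    """Reorder fragments by sequence number and concatenate through the last one."""
    ordered = sorted(fragments, key=lambda f: f["seq"])
    assert ordered[-1]["last"], "last fragment missing: wait or request retransmission"
    return b"".join(f["data"] for f in ordered)

jumbo = bytes(10_000)
assert reassemble(fragment(jumbo)) == jumbo
```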
The GEM control message format includes a message plus an extension message (e.g., length 8 bytes +8 bytes). GEM control message formats may be used in bus 104 for internal network management and control purposes, including Dynamic Bandwidth Allocation (DBA) report messages, DBA grants, GEM Reception (RX) acknowledgements, GEM flow control, GEM power management, GEM sensing, GEM remote messages, and/or other types of control messages. As described above, node 204 is responsible for encapsulating data into GEM packets and decapsulating data from GEM control message formats. This scheme enables the extension of the PCIe interface protocol from a point-to-point topology to a point-to-multipoint topology, and the extension of the interface distance from short distances to long distances.
Fig. 6A-F illustrate exemplary GEM packet formats and GEM header formats, according to some embodiments. As shown in fig. 6A, GEM packet 600 may include a header 602 and a corresponding payload 604. As described above, for message packets, the header may be a set size (e.g., 8 bytes) and the payload may vary in length (e.g., from 8 bytes to 4 kilobytes in length), and for control packets, the header may be 8 bytes with or without one or more 8 byte extensions, for example.
FIG. 6B illustrates a detailed view of a GEM packet header format in accordance with some embodiments. As shown in fig. 6B, the header 602 includes a GEM type field 606, a payload length indication field 608, an encryption key index field 610 (e.g., AES key index), a node/epoch ID field 612, a GEM-ID field 614, a GEM packet type field 616, a transmission sequence identifier field 618, an acknowledgement required field 620, a last fragment indication field 622, and a header error correction/check (HEC) field 624. Alternatively, one or more of these fields may be omitted and/or one or more additional fields may be added. In some embodiments, the GEM type field 606 is 2 bits, the payload length indication field 608 is 12 bits, the encryption key index field 610 is 2 bits, the node/epoch ID field 612 is 12 bits, the GEM-ID field 614 is 12 bits, the GEM packet type field 616 is 3 bits, the transmission sequence identifier field 618 is 6 bits, the acknowledgement required field 620 is 1 bit, the last fragment indication field 622 is 1 bit, and the header error correction/check field 624 is 13 bits. Alternatively, one or more of these fields may be larger or smaller.
The GEM type field 606 indicates which type of header 602 the GEM packet 600 is (and thus which type of packet). For example, the GEM type field may indicate that the header 602 is one or more of a packet header, a bandwidth grant message header (e.g., transmitted from the root port 230 to the gate/node), a bandwidth report message header (e.g., transmitted from the gate/node to the root port 230), and/or a control message (e.g., transmitted between the root port 230, the gate 202, and/or one or more of the nodes 204,208, 234). The payload length indication field 608 indicates the length of the payload 604 of the data packet 600. Encryption key index field 610 indicates the type of encryption to be used on data packet 600. For example, the encryption key index field 610 may be used as an index value in an encryption table to identify one or more of: whether to encrypt the data packet, which key to use to encrypt the data packet, and/or which encryption method to use.
The node/epoch ID field 612 can identify the source node or destination node of the packet 600. For example, for a GEM packet 600 that is burst from a node to the core, field 612 may be or represent the epoch-ID of the node to indicate the source of packet 600. As another example, for GEM packets 600 broadcast from the root port 230 to nodes/gates within its network 206, 210, the field 612 may be or represent a node-ID for the destination (including unicast node-IDs, multicast node-IDs, and/or broadcast node-IDs). The GEM-ID field 614 may be or represent a data/packet/message identifier for a source node of a point-to-point message, or may be or represent a GEM-ID for a destination node of a point-to-multipoint message (e.g., including a CAN message GEM-ID, sensor data GEM-ID, and/or ethernet packet GEM-ID). Thus, the GEM format provides the advantage of enabling the bus 104 to identify the direct source and/or destination nodes through the node/epoch ID field 612, while also enabling the target devices/ports/services to be identified through the use of the GEM-ID field 614.
GEM packet type field 616 may indicate the type and format of the header of a message encapsulated within the GEM format (e.g., received from device 102 and/or received through port 99). For example, field 616 may indicate that the header is a PLOAM message, a Node Management and Control Interface (NMCI) message, a CAN command message, sensor data, an ethernet packet, a CPU-IO (e.g., PCIe/USB) message, and/or a Node Operations and Control Report (NOCR) message. The acknowledgement required field 620 may indicate whether an acknowledgement message in response to the message is required, and (for data packets burst from the node to core 200) the transmission sequence identifier field 618 may identify the transmission sequence number and/or the epoch-ID of data packet 600 within the set of data packets from the source node. In some embodiments, an acknowledgement message from the receiving root port 230 is required when indicated by the acknowledgement required field 620. For packets broadcast from the root port 230 to the nodes/gates, the transmission sequence identifier field 618 may identify the transmission sequence number of the unicast/broadcast/multicast GEM-ID (e.g., CAN message GEM-ID, sensor data GEM-ID, ethernet packet GEM-ID, and CPU/PCIe/USB data message GEM-ID). In some embodiments, an acknowledgement from the receiving root port 230 and/or node is required, as indicated by the acknowledgement required field 620. The last fragment indication field 622 may indicate whether the data packet 600 is the last fragment of a series of fragments of a large data packet, and the header error correction/check field 624 may be used to check the header 602 for errors.
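Because the fields listed for fig. 6B total 64 bits (2 + 12 + 2 + 12 + 12 + 3 + 6 + 1 + 1 + 13), the 8-byte header can be packed and unpacked with plain bit arithmetic, as sketched below. The field ordering and the untouched HEC value are assumptions for illustration; the patent does not specify the wire layout used here.

```python
FIELDS = [  # (name, width in bits), assumed most-significant first
    ("gem_type", 2), ("payload_len", 12), ("key_index", 2),
    ("node_epoch_id", 12), ("gem_id", 12), ("packet_type", 3),
    ("tx_seq", 6), ("ack_required", 1), ("last_fragment", 1), ("hec", 13),
]

def pack_header(**values):
    """Pack the header fields into 8 bytes, most-significant field first."""
    word = 0
    for name, width in FIELDS:
        value = values.get(name, 0)
        assert value < (1 << width), f"{name} does not fit in {width} bits"
        word = (word << width) | value
    return word.to_bytes(8, "big")

def unpack_header(raw: bytes):
    """Recover the field values from an 8-byte header."""
    word, fields = int.from_bytes(raw, "big"), {}
    for name, width in reversed(FIELDS):
        fields[name] = word & ((1 << width) - 1)
        word >>= width
    return fields

header = pack_header(gem_type=0, payload_len=1500, node_epoch_id=0x101,
                     gem_id=0x305, ack_required=1)
assert len(header) == 8 and unpack_header(header)["payload_len"] == 1500
```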
Fig. 6C illustrates a detailed view of a GEM header format of a node report message, according to some embodiments. As shown in fig. 6C, the header 602 includes a GEM type field 606, a report message type field 624, a source epoch-ID field 626, a total report size field 628, a report threshold size field 630, a report sequence number field 632, one or more source node Virtual Output Queue (VOQ) status fields 634 (e.g., CPU-IO, PLOAM, NMCI, CAN, sensor, ethernet, or other types), a report priority field 636, and a header error correction/check (HEC) field 622. Alternatively, one or more of these fields may be omitted and/or one or more additional fields may be added. In some embodiments, the GEM type field 606 is 2 bits, the report message type field 624 is 2 bits, the source epoch-ID field 626 is 12 bits, the report total size field 628 is 14 bits, the report threshold size field 630 is 8 bits, the report sequence number field 632 is 5 bits, the one or more source node virtual output queue status fields 634 are each 1 bit (or a single field of 6 bits), the report priority field 636 is 2 bits, and the header error correction/check (HEC) field 622 is 13 bits. Alternatively, one or more of these fields may be larger or smaller.
The report message type field 624 indicates which type of report header 602 (and thus which type of report message) the GEM packet 600 is. For example, the report message type field 624 may indicate that the header 602 is one or more of an invalid report message, a node report message of its own (e.g., where the epoch-ID of the source of the data packet maps to the node-ID of the source of the data packet), a node report message of another node (e.g., where the epoch-ID of the source of the data packet does not map to the node-ID of the source of the data packet), and/or a fatal fault report message (e.g., a message requiring/requesting the highest priority). The source epoch-ID field 626 may be or indicate: an epoch-ID of the source node (e.g., for the reports of PLOAM and NMCI plus CAN/sensor/ethernet queue flag), an epoch-ID of the CAN (e.g., for the reports of the CAN), an epoch-ID of one of the sensors/nodes (e.g., for the reports of the sensor), an ethernet epoch-ID (e.g., for the reports of ethernet packets), and/or a PCIe/USB epoch-ID (e.g., for PCIe/USB report messages). The report total size field 628 may indicate the total size of GEM data within the VOQ (for the epoch-ID and/or Node-ID), while the report threshold size field 630 may indicate the GEM packet boundary within the VOQ (e.g., for determining the size of the burst window authorized for the epoch and/or Node).
The report sequence number field 632 may indicate which number of messages in the sequence (e.g., whether there is a sequence of related report messages to determine if a message is missing or out of sequence). One or more source node Virtual Output Queue (VOQ) status fields 634 may each indicate the status of the source node with respect to a particular data function/type (e.g., CPU/IO, PLOAM, NMCI, CAN, sensor, ethernet). The report priority field 636 may indicate the priority given to the message (e.g., best effort, normal bandwidth request priority, CAN message request priority, fatal fault request priority).
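The sketch below illustrates how a leaf might fill the report fields above from its virtual output queues so the root can size a burst window. The choice of the first queued packet as the threshold boundary and the field names are assumptions, not the defined report format.

```python
def build_report(source_epoch_id, voq_packet_sizes, priority=1, seq=0):
    """Summarize a VOQ into a node report: total queued bytes plus a packet boundary."""
    total = sum(voq_packet_sizes)                                # report total size
    threshold = voq_packet_sizes[0] if voq_packet_sizes else 0   # first GEM boundary
    return {"gem_type": "report", "source_epoch_id": source_epoch_id,
            "total_size": total, "threshold_size": threshold,
            "sequence": seq, "priority": priority}

report = build_report(0x102, voq_packet_sizes=[256, 1500, 1500])
assert report["total_size"] == 3256 and report["threshold_size"] == 256
```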
Fig. 6D and 6E illustrate detailed views of two variations of the GEM header format of a root port bandwidth grant message, in accordance with some embodiments. As shown in fig. 6D, for a node grant message with a node-ID that is the same as the epoch-ID, the header 602 may include a GEM type field 606, an epoch-ID field 638, a start time field 640, a grant size field 642, a grant flag field 644, a report command field 646, a grant command field 648, a forced wake-up indicator (FWI) field 650, a burst profile field 652, and a header error correction/check field 622. Alternatively, one or more of these fields may be omitted and/or one or more additional fields may be added. In some embodiments, the GEM type field 606 is 2 bits, the epoch-ID field 638 is 12 bits, the start time field 640 is 14 bits, the grant size field 642 is 14 bits, the grant flag field 644 is 1 bit, the report command field 646 is 3 bits, the grant command field 648 is 2 bits, the forced wake-up indicator field 650 is 1 bit, the burst profile field 652 is 2 bits, and the header error correction/check field 622 is 13 bits. Alternatively, one or more of these fields may be larger or smaller.
The epoch-ID field 638 may be or indicate the epoch-ID of the node, or the node-ID, for which the message is intended. The start time field 640 may indicate the start time of the grant window granted to the target node (e.g., an epoch of the node), and the grant size field 642 may indicate the size/duration of the grant window. The authorization flag field 644 may indicate whether the window is authorized. The report command field 646 may indicate what report is requested from a node/epoch/port. For example, the report command field 646 may indicate one or more of the following: no node Request To Send (RTS) status report, or forcing the node to report RTS messages to the port for black-box and diagnostic testing; mandatory reporting of PLOAM and NMCI only; mandatory reporting of CPU-IO messages, CAN messages, and sensor data plus PLOAM/NMCI; mandatory reporting of Ethernet packets plus CPU-IO/CAN/sensor and PLOAM/NMCI; and/or mandatory full reporting of PLOAM/NMCI/CPU-IO/CAN/sensor/Ethernet plus the Node Operations and Control Report (NOCR). The grant command field 648 may indicate which type of message/data is granted the burst window. For example, the grant command field 648 may indicate one or more of the following: the window is not used for PLOAM and NMCI messages; the grant window is for PLOAM messages only; the grant window is for NMCI messages only; and/or the grant window is for PLOAM, NMCI, and NOCR messages. The FWI field 650 indicates whether to force a sleeping node to wake up, and the burst profile field 652 may indicate the burst configuration (e.g., the length, pattern, and/or other characteristics of the SOB delimiter, EOB delimiter, and/or preamble).
As shown in fig. 6E, for GEM grant messages with a node-ID that is not the same as the epoch-ID, the header 602 can be substantially the same as that of fig. 6D, except that there is no report command field 646 or FWI field 650. Further, unlike in fig. 6D, the grant command field 648 may be 6 bits. Alternatively, the grant command field 648 may be larger or smaller. Also unlike in fig. 6D, the grant command field 648 may indicate a different type of GEM bandwidth grant. For example, field 648 may indicate bandwidth grants for all VOQs/CoS (class of service) based on the output schedule settings of the node, for CoS messages only, for sensor data only, for fatal failure messages only, and/or for both CoS messages and sensor data. In addition, field 648 may force power saving for the node-ID, to which the node replies with an acknowledgement message.
Fig. 6F illustrates a detailed view of a GEM header format for a control message, according to some embodiments. As shown in fig. 6F, the header 602 includes a GEM type field 606, a control message type field 654, one or more control message fields 656, and a header error correction/check field 622. Alternatively, one or more of these fields may be omitted and/or one or more additional fields may be added. In some embodiments, GEM type field 606 is 2 bits, control message type field 654 is 4 bits, one or more control message fields together are 45 bits, and header error correction/check field 622 is 13 bits. Alternatively, one or more of these fields may be larger or smaller.
The control message type field 654 may indicate what type of control message the message is (e.g., so that the control message fields 656 and their offsets are known for processing). In some embodiments, the control message type field 654 indicates one or more of the following: report acknowledgement messages, CAN acknowledgement messages, flow control messages, power save messages, IO event messages (e.g., fatal failure), runtime status messages, and/or timestamp updates (e.g., from port to node). The control message fields 656 may include various fields depending on the type of control message (as indicated in the control message type field 654).
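As a purely illustrative sketch, the snippet below shows how a receiver might use the 4-bit control message type field to decide how to interpret the control message fields; the numeric type codes and the GEM-type value used for "control message" are assumptions, not values taken from this disclosure.

```python
# Hypothetical sketch: dispatch a GEM control message on its 4-bit type field.
# The numeric type codes below are illustrative only.
CTRL_HANDLERS = {
    0x1: "report_ack",       # report acknowledgement
    0x2: "can_ack",          # CAN acknowledgement
    0x3: "flow_control",
    0x4: "power_save",
    0x5: "io_event",         # e.g., fatal failure
    0x6: "runtime_status",
    0x7: "timestamp_update", # e.g., from port to node
}

def dispatch_control(gem_type, ctrl_type, ctrl_fields):
    """Route a control message to a handler name based on its type field."""
    if gem_type != 0x0:      # assumed code for a 'control message' GEM type
        raise ValueError("not a control message GEM")
    handler = CTRL_HANDLERS.get(ctrl_type)
    if handler is None:
        raise ValueError(f"unknown control message type {ctrl_type:#x}")
    # The meaning and offsets of ctrl_fields depend on the message type.
    return handler, ctrl_fields

print(dispatch_control(0x0, 0x3, 0b101))   # ('flow_control', 5)
```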
Thus, the GEM format provides the advantage of enabling the bus 104 to encapsulate different input data and messages from significantly different types of networks (e.g., controller area network, optical network, sensor device broadcast network, wireless network, CPU access network) into one unique format (GEM). This unique format can facilitate high-speed standardized processing and transmission of disparate data inputs in burst and broadcast messages, enabling efficient operation of the multi-network, multi-device bus architecture required by modern machine automation applications.
Burst/broadcast frame format
In some embodiments, the broadcast message is formatted as a broadcast PHY frame defined by: preamble + start of frame delimiter + frame payload, wherein the frame payload comprises a plurality of GEM packet data and GEM control messages. The broadcast PHY frame may be a fixed frame size (e.g., between 25 and 125 μs). Alternatively, larger or smaller frame sizes may be used. For example, the frame size may be smaller (e.g., 25 μs or 50 μs) for a central network 206 and a sub-network 210 with fewer node devices 204, 208. In some embodiments, the broadcast PHY frame is structured to carry GEM packets and GEM control messages to be transmitted from the root port 230 to the gate 202 and/or nodes 204, 208, 234 over the networks 206, 210, including optical, copper, and wireless networks.
In some embodiments, the burst message is formatted as a burst PHY frame defined by: preamble + start of frame delimiter + frame payload + end of frame delimiter, wherein the frame payload comprises one or more GEM packet data and GEM control messages. The burst PHY frame size may vary depending on the total burst window size granted to the nodes/gates by the root port HDBA and/or gate DBA. In some embodiments, the burst PHY frame (from the gate 202 or nodes 204, 208, 234) cannot exceed the maximum broadcast PHY frame size (e.g., between 25 and 125 μs). In some embodiments, the burst PHY frame is structured to carry GEM packets and GEM control messages to be transmitted from the gate 202 and/or nodes 204, 208, 234 to the root port 230 and/or gate 202 via the networks 206, 210, including optical, copper, and wireless networks.
Fig. 7A illustrates a broadcast PHY frame 700 according to some embodiments. As shown in fig. 7A, a broadcast PHY frame 700 includes a physical synchronization block for broadcast (PSBbc) 702 and a broadcast framing sublayer frame 704, the broadcast framing sublayer frame 704 including a GEM control message 706, one or more GEM packets 600, and a Framing Sublayer (FS) tail 708. As described above, each GEM packet 600 includes a header 602 and a payload 604. In some embodiments, the broadcast FS frame is Forward Error Correction (FEC) protected. Fig. 7B illustrates a burst PHY frame 710 according to some embodiments. As shown in fig. 7B, the burst PHY frame 710 includes a physical sync block unicast start-of-burst delimiter (PSBuc_sd) 712, a burst Framing Sublayer (FS) 714, and a physical sync block unicast end-of-burst delimiter (PSBuc_ed) 716. PSBuc_sd 712 may include a preamble 718 and a start of burst (SOB) delimiter 720, and PSBuc_ed 716 may include an end of burst (EOB) delimiter 722. Burst FS 714 may include an FS header 724, one or more EPOCHs 726, and an FS trailer 708. Each epoch 726 may include one or more GEM packets 600 having a header 602 and payload 604 as described above. In some embodiments, the burst FS frame is FEC protected. In particular, by including an EOB delimiter (in addition to the SOB delimiter and the frame size), the structure 710 enables a sniffer, analysis engine, or other element to monitor traffic within the bus 104: even if the frame size is not known or accessible to such an element, the element can determine the end of each burst frame based on the EOB delimiter.
Fig. 7C illustrates a gate burst PHY frame 728 according to some embodiments. As shown in fig. 7C, the gate burst PHY frame 728 may include one or more burst PHY frames 710 combined together into a single combined burst PHY frame having a single preamble 729 and one or more gaps 730. In particular, as described in detail below, the gate 202 may receive burst PHY frames 710 from one or more child nodes 208 and one or more IO ports 99 (as virtual nodes) and combine these frames into a combined gate burst PHY frame 728 as shown in fig. 7C. Thus, the system 100 provides the advantage of more efficient message communication by combining burst frames and reducing per-frame overhead: only a single preamble is used for the combined frame as a whole, rather than a separate preamble (which may be up to 256 bytes or more) for each constituent burst frame.
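The overhead saving from the single shared preamble can be illustrated with the rough sketch below; the preamble, SOB, and EOB byte counts are assumed values used only to show the shape of the calculation.

```python
# Hypothetical sketch: overhead saved when a gate merges several node burst
# frames into one gate burst frame carrying a single preamble.
PREAMBLE_BYTES = 256   # illustrative upper bound mentioned in the text
SOB_BYTES = 8          # assumed delimiter sizes
EOB_BYTES = 8

def standalone_overhead(n_frames):
    """Each frame carries its own preamble plus SOB/EOB delimiters."""
    return n_frames * (PREAMBLE_BYTES + SOB_BYTES + EOB_BYTES)

def gate_combined_overhead(n_frames):
    """One preamble for the combined frame; per-frame delimiters remain."""
    return PREAMBLE_BYTES + n_frames * (SOB_BYTES + EOB_BYTES)

for n in (2, 8, 32):
    saved = standalone_overhead(n) - gate_combined_overhead(n)
    print(f"{n:2d} frames: {saved} bytes of preamble overhead removed")
```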
Fig. 8 illustrates a method of operating the intelligent controller and sensor intranet bus 104, according to some embodiments. As shown in fig. 8, at step 802, one or more nodes 204, 208 input one or more messages from one or more devices 102 coupled to one or more ports 99. At step 804, the nodes 204, 208 encapsulate the messages into a Generic Encapsulation Mode (GEM) format for transmission to the central processing core 200. At step 806, if the destination of an input message is a node 234 within the core 200, the core decapsulates, processes, and sends the message to its destination without re-encapsulation. Otherwise, at step 808, if the destination of an input message is one or more other nodes 204, 208 (outside the core 200), the core 200 decapsulates, processes, and re-encapsulates the message back into the GEM format for broadcast to its destination. At step 810, the nodes 204, 208 decapsulate the messages received from the core 200 from the GEM format back into the original format of the input data received from one of the devices 102. Alternatively, if messages are input from a node 234 internal to the core 200, they can be input and processed by the core 200 (without being encapsulated), and if their destination is one or more nodes 204, 208 external to the core 200, they are simply encapsulated by the core 200 for broadcast. Thus, the method provides the advantages of enabling communication of many different types of data (e.g., sensor, controller bus, ethernet, or other types of data), more efficient communication of messages via combined burst frames, and reduced per-frame overhead by using only a single preamble for the combined frame as a whole rather than separate preambles for each combined burst frame.
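A minimal sketch of the fig. 8 flow is shown below, assuming a trivial dictionary-based stand-in for the GEM wrapper; real GEM messages carry the header fields described earlier, so the helper names and structure here are purely illustrative.

```python
# Minimal sketch of the fig. 8 flow, assuming a trivial dict-based "GEM"
# wrapper; real GEM headers carry the fields described earlier.
def node_encapsulate(raw, src_node, dst_node):
    """Step 804: wrap device input data in a GEM-format message."""
    return {"src": src_node, "dst": dst_node, "payload": raw}

def core_process(gem, core_node_ids):
    """Steps 806/808: terminate messages for core nodes, otherwise
    re-encapsulate for broadcast to external nodes."""
    if gem["dst"] in core_node_ids:
        return None                      # consumed inside the core
    return dict(gem, rebroadcast=True)   # re-encapsulated for broadcast

def node_decapsulate(gem):
    """Step 810: recover the original device data format."""
    return gem["payload"]

gem = node_encapsulate(b"\x01\x02", src_node=5, dst_node=9)
out = core_process(gem, core_node_ids={1, 2})
assert out is not None and node_decapsulate(out) == b"\x01\x02"
```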
Core
The core 200 may include a core switch 228, one or more root ports 230 (internal ports), a central processing unit 232, and one or more core nodes 234 having IO ports 99 (external ports). In some embodiments, the core 200 also includes a secure memory (e.g., Secure Digital (SD) memory) node 236 for storing data in black box memory 238. Alternatively, the SD node 236 and/or the memory 238 may be omitted. The core node 234 allows a user to couple user plug-in modules (e.g., a CPU core, WiFi/LTE/5G, user applications) directly to the core 200, bypassing the networks 206, 210.
The core switch 228 includes forwarding engine elements, a queue buffer manager, and a traffic manager. The forwarding engine element may include a plurality of forwarding engines. For example, the forwarding engine element may include one engine for L2/L3/L4 ethernet header parsing, lookup, and classification/Access Control List (ACL) functions, including L2 Media Access Control (MAC) address learning and forwarding functions and L3 Internet Protocol (IP) address to GEM-ID routing/mapping. Additionally, one engine may be used for GEM header message parsing, lookup, ACL, and forwarding, and/or another engine may be used to support DOS attack functionality to protect the bus 104 from external internet DOS attacks. The GEM queue buffer manager may be a centralized buffer architecture that employs a linked-list based buffering and queue storage approach combining store-N-forward and cut-through forwarding schemes. A cut-through forwarding scheme may be used for delay-sensitive GEM packets and GEM messages, and a store-N-forward scheme may be used for congested GEM packets. Both schemes can be dynamically mixed together and dynamically switched between each other depending on the runtime traffic congestion situation. The GEM traffic manager supports dual token policing, single token rate limiting, and output shaping functions based on GEM-ID and NODE-ID, including related Management Information Base (MIB) counters. GEM-ID Weighted Random Early Detection (WRED) and tail drop functionality may be supported, as well as early traffic congestion detection, indication, and feedback mechanisms to inform the hybrid dynamic bandwidth allocation mechanism (HDBA), the root port 230, the gate 202, and the nodes 204, 208, 234 to slow down traffic transmission to avoid traffic congestion.
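For illustration, the snippet below sketches a WRED-style early-drop decision of the kind the traffic manager is described as supporting for GEM-ID queues; the thresholds and maximum drop probability are assumed values, not parameters from this disclosure.

```python
import random

# Hypothetical WRED-style early-drop decision for a GEM-ID queue.  The
# min/max thresholds and drop probability are illustrative values only.
def wred_accept(queue_depth, min_th=64, max_th=256, max_drop_p=0.1):
    """Return True to enqueue the packet, False to early-drop it."""
    if queue_depth < min_th:
        return True                      # no congestion: always accept
    if queue_depth >= max_th:
        return False                     # tail-drop region: always drop
    # Linear ramp of drop probability between the two thresholds.
    drop_p = max_drop_p * (queue_depth - min_th) / (max_th - min_th)
    return random.random() >= drop_p

accepted = sum(wred_accept(depth) for depth in range(0, 300, 10))
print(f"accepted {accepted} of 30 sample packets")
```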
Thus, on ingress, the core switch 228 may receive GEM from one or more of the root ports 230, local nodes 234, CPU 232, and/or other IO ports and process the GEM; on egress, it may forward and transmit the received GEM to one or more of the root ports 230, local nodes 234, CPU 232, and/or other IO ports. In other words, the switch 228 may accept GEM packets from multiple sources; perform GEM and ethernet L2/L3/L4 header parsing, L2 MAC lookup and learning, GEM message and five-tuple ACL and classification; modify (if needed) the GEM header and the ethernet header in the GEM payload; and store-and-forward (or express-buffer) the GEM packets to one or more hybrid automatic repeat request (HARQ) functional blocks and one or more root ports 230 for broadcast.
In performing these processing and/or forwarding functions, the switch 228 may support a hybrid store-and-forward and cut-through forwarding scheme in order to reduce propagation delay for delay-sensitive GEM and provide sufficient buffering for overly bursty GEM traffic. In addition, the switch 228 may support immediate flow control mechanisms within the bus 104, including hybrid dynamic bandwidth allocation and granting, to ensure overall quality of service (QoS) on the bus 104. In addition, the switch 228 may support L2/L3/L4 ACL and classification, L2 MAC address learning and forwarding, L3 IP address to GEM-ID routing/mapping, and DOS attack protection. Finally, the switch 228 may support QoS scheduling, GEM buffer WRED/tail drop, node and/or GEM policing, and output shaping functions.
Root port
Root port 230 may include a root transmit MAC, a root receive MAC, a security engine (e.g., Advanced Encryption Standard (AES)), a Forward Error Correction (FEC) engine, a Hybrid Dynamic Bandwidth Allocation (HDBA) engine, an activation processor (e.g., activation state machine), and a burst mode SERDES IP. Alternatively, one or more of the above elements may be omitted. The transmit MAC of each root port 230 is responsible for accepting the GEM to be output from the switch 228 and/or HARQ; mapping and packing the GEM into a broadcast frame format (e.g., a broadcast PHY frame structure); and broadcasting the GEM to all of the gates 202 and/or nodes 204 on the central transport network 206 to which the root port 230 is coupled (e.g., through the root SERDES and the optical/copper network broadcast domain). Conversely, the receive MAC of each root port 230 is responsible for receiving GEM in burst frame format (e.g., burst PHY frame structure) from the burst mode SERDES and the gate 202 and/or nodes 204, 208; extracting the GEM from the burst frame format; parsing the GEM header of each GEM; accepting the GEM sent to it (e.g., based on the GEM header and system Service Level Agreement (SLA) profile settings); and then outputting the GEM/data to the switch 228 for further processing and forwarding. In other words, each root port 230 may receive burst traffic from nodes 204 and/or gates 202 (forwarded from nodes 208 in the subnet 210 of the gate 202), convert the burst traffic to the correct format for processing by the switch 228, and then reformat and broadcast the output traffic to all nodes 204 and 208 (via the gate 202) as directed by the switch 228.
The Hybrid Dynamic Bandwidth Allocation (HDBA) engine is responsible for receiving reports (e.g., NODE-DBA reports) regarding bandwidth usage, traffic congestion, and other factors; performing HDBA analysis based on the SLA profile of the node/port/device associated with each report, the DBA report data itself, and Committed Information Rate (CIR)/Peak Information Rate (PIR) feedback; and granting a burst window to each node device and assigning a port/EPOCH-ID. In other words, the HDBA engine inputs data from each of the nodes 204, 208 (of the network 206 associated with the root port 230 and its sub-networks 210) and/or other sources regarding bandwidth usage/traffic congestion and dynamically allocates a burst transmission window start time and/or size to each of these nodes 204, 208. When performing this assignment to a node 208 within a subnet 210, the gate 202 providing access to the node 208 is transparent to the HDBA engine. Thus, as described in detail below, the gate 202 receives the desired data and performs burst transfers within the window of each node 208 of the subnet 210 assigned to the gate 202. The HDBA engine may also issue a report acknowledgement message (GEM-Report-ACK message) to the nodes 204, 208 to acknowledge receipt of a report message (GEM-DBA report).
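The following sketch illustrates, under stated assumptions, how an HDBA pass might turn per-node backlog reports into non-overlapping burst windows; the SLA weighting, frame length, minimum grant, and proportional scaling rule are all assumptions rather than the actual HDBA algorithm.

```python
# Hypothetical sketch of an HDBA pass: turn per-node backlog reports into
# non-overlapping burst windows within one frame period.  SLA weights,
# frame length, and the minimum grant are illustrative assumptions.
FRAME_UNITS = 10_000     # schedulable units in one broadcast frame period
MIN_GRANT = 16

def hdba_allocate(reports, sla_weight):
    """reports: node_id -> queued units; returns node_id -> (start, size)."""
    demand = {n: max(q, MIN_GRANT) for n, q in reports.items() if q > 0}
    weighted = {n: demand[n] * sla_weight.get(n, 1.0) for n in demand}
    scale = min(1.0, FRAME_UNITS / max(sum(weighted.values()), 1))
    grants, cursor = {}, 0
    for node in sorted(demand):                 # deterministic ordering
        size = max(MIN_GRANT, int(weighted[node] * scale))
        grants[node] = (cursor, size)           # window start time and size
        cursor += size
    return grants

print(hdba_allocate({5: 4000, 7: 9000, 9: 500}, sla_weight={7: 2.0}))
```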
The root activation state machine is responsible for performing and completing node 204, 208, 234 device activation and registration through the activation processes and procedures that exchange physical layer operations, administration and maintenance (PLOAM) GEM messages between the nodes 204, 208, 234 and the root port 230. The security engine may be an AES-128/256 encryption and decryption function for the receive and transmit MACs. Alternatively, other encryption may be used. A Forward Error Correction (FEC) engine is used to control errors in data transmission over unreliable or noisy communication channels. In some embodiments, the FEC engine uses Reed-Solomon FEC encoding schemes RS(255,216) and RS(255,232) for 10G and 2.5G data rates, respectively. Alternatively, the FEC engine may use a Low Density Parity Check (LDPC) scheme and/or other FEC algorithms. Burst mode SERDES uses a fast Clock and Data Recovery (CDR) lock mode to ensure that burst messages (e.g., burst PHY frames) are received correctly. In some embodiments, the fast locking function of the CDR is required for fiber cut recovery, fast failover, and protection switching restoration.
Finally, after the registration process, the root port 230 receives a broadcast Data Distribution Service (DDS) message from a node 204, 208 informing the root port 230 that a new node/device has joined and registered with the bus 104. Thus, the root port 230 always listens for and accepts these DDS messages announcing that a new node 204, 208 has joined the bus 104 (received from the switch 228 and from the new node 204, 208), and updates the root port SLA profile database and settings to reflect the newly added node/device.
Node
The edge nodes 204, 208, 234 provide a bridging function within the bus 104: they interface with external devices 102 via the IO ports 99 on one side and connect to the bus intranet 104 on the other side. To provide data from a device 102 coupled to its port 99, a node 204, 208, 234 constructs and sends a burst message (e.g., a burst PHY frame of data encapsulated as GEM) over the bus 104 to another node 204, 208 via the root port 230. Further, to provide data to a device 102 coupled to a port 99 of a node 204, 208, the node 204, 208, 234 receives broadcast messages (e.g., broadcast PHY frames encapsulating data as GEM) from other nodes 204, 208 via the root port 230 (which is part of the network 206 or a sub-network 210 thereof), extracts data from the broadcast messages (e.g., GEM from RX BC-PHY frames), and filters and accepts the data belonging to (destined for) the node 204, 208.
To perform these and other functions, the edge nodes 204, 208 may include one or more IO ports 99, an encapsulation/decapsulation engine, a HARQ block, and a node MAC. Each port 99 may be a CPU interface (e.g., PCIe, USB, and UART), a sensor interface (e.g., MIPI, analog-to-digital converter (ADC), GPIO), an internet interface (e.g., Ethernet, EtherCAT, and CAN-Bus), or an electromechanical module interface (e.g., Pulse Width Modulation (PWM), I2C, ADC, and GPIO). At the ingress, the encapsulation/decapsulation engine accepts input data from the ports 99 and encapsulates data packets, commands (CMD), and messages received from internet ports (e.g., ethernet, Wi-Fi), sensor interfaces, electromechanical module interfaces, and CPUs (e.g., PCIe and USB) into the GEM format. The nodes 204, 208 may then output the encapsulated messages (e.g., GEM) to the HARQ and/or node transmit MAC (described below). At the egress, GEM packets from the node receive MAC (received from the root port 230 and/or another node 204, 208, 234) are accepted and decapsulated back into the original data format (as received from the coupled device 102) for output to the device 102 via one of the ports 99. As in the root port 230, the HARQ of the nodes 204, 208 performs a hybrid automatic repeat request function to ensure that GEM packets are successfully transmitted to their destination node or nodes 204, 208, 234. Specifically, the HARQ may incorporate a retransmission timer, a transmitted GEM list flag table, and a receive acknowledgement check function (e.g., GEM RX-acknowledgement) that trigger GEM retransmission when the timer expires without an acknowledgement having been received.
The node MAC includes a transmit MAC (tx MAC), a receive MAC (rx MAC), a security engine (e.g., AES), a Forward Error Correction (FEC) engine, a DBA reporting engine, and a SERDES IP. The TX MAC is responsible for mapping/packing the GEM into a burst structure (e.g., a burst PHY frame structure) and for sending burst messages to root port 230 and/or nodes 204,208, 234 during the burst window of the node authorized by the dynamic burst allocation engine of root port 230. The RX MAC is responsible for receiving and terminating broadcast messages (e.g., broadcast PHY frames) from the root port 230 and/or nodes 204,208, 234, extracting the GEM from the broadcast message format, parsing and accepting the GEM destined to it (e.g., to one of its ports 99) based on the node's SLA profile settings, and then outputting the data to the encapsulation/decapsulation engine.
The DBA reporting engine reports all data packets and messages held in a queue (e.g., an EPOCH queue) to the HDBA engine of the associated root port 230 via burst reports (as described above). In addition, the DBA reporting engine accepts GEM grant messages from the HDBA of the associated root port 230 and/or the DBA of the associated gate 202 and prepares the node transmit MAC to construct a burst message (e.g., a burst PHY frame) with the GEM stored in the queue (e.g., the EPOCH queue).
The node activation processor is responsible for performing and completing the node 204, 208, 234 activation processes and procedures between the nodes 204, 208, 234 and the root port 230. The security engine may be an AES-128/256 encryption and decryption function for the receive and transmit MACs. Alternatively, other encryption may be used. The FEC engine is used to control data transmission errors over unreliable or noisy communication channels. In some embodiments, the FEC engine uses Reed-Solomon FEC encoding schemes RS(255,216) and RS(255,232) for 10G and 2.5G data rates, respectively. Burst mode SERDES uses a fast Clock and Data Recovery (CDR) locking mode to ensure fast recovery from fiber cuts, fast failover, and protection switching.
Finally, after the activation process (e.g., after the registration process is complete), the nodes 204, 208, 234 may broadcast a DDS message to the entire bus 104 to notify and advertise to the root port 230, switch 228, gate 202, and/or other nodes 204, 208, 234 that a new device has joined and registered with the bus 104 at that node 204, 208, 234. In addition, the nodes 204, 208, 234 may listen for DDS messages from the switch 228 and other new nodes 204, 208, 234 announcing that they have joined the bus 104, and update their global SLA profile databases and settings based on these DDS messages.
Gate
Gate 202 may include a node MAC (with multiple virtual node state machines and buffering), an Adaptive Domain Bridge (ADB), a root port MAC (with a built-in gate DBA function/gate DBA), a gate SLA profile database, and a burst mode SERDES. The node MAC includes one or more of: multiple sets of transmit MAC, receive MAC, security engine (e.g., AES), FEC engine, DBA reporting function, SERDES function and/or virtual node processor (e.g., one set for each node within the subnet 210), virtual node configuration files and settings, and associated MIB counters and reporting logic. The transmit MAC receives GEM from the gate ADB and maps and packs the GEM into its associated virtual node burst structure (e.g., burst PHY frame structure) based on the gate's virtual node SLA profile database settings. In addition, the transmit MAC aggregates multiple virtual node burst structures (e.g., burst PHY frames) into one gate burst structure (e.g., a GATE/Turbo burst PHY frame) and sends the burst message to the root port 230 over the network 206 based on the granted burst windows for those nodes 208 received from the HDBA of the root port 230. The node receive MAC receives broadcast messages (e.g., broadcast PHY frames) from the root port 230, extracts the GEM from the messages, parses the GEM headers, determines which messages are for nodes 208 within the sub-network 210 of the gate 202 based on the GEM headers and virtual node SLA profile database settings, and outputs the messages to the ADB.
The ADB performs a bridging function between the node MAC and the root MAC of gate 202. Specifically, in the broadcast direction (from root port 230 to node 208), the ADB receives the GEM from the node receiving MAC and performs GEM header lookup, checking and filtering functions based on the gate virtual node profile database in order to accept the GEM of node 208 belonging to the subnet 210 of the gate 202. The ADB may then output these GEM to the root port of gate 202 to transmit the MAC. In the burst direction (from node 208 to root port 230), the ADB receives the GEMs from the root receive MAC, stores them in their associated virtual node buffer memory, and outputs them to the virtual node transmit MAC when their burst window start time arrives.
The root port MAC of gate 202 includes a transmit MAC, a receive MAC, a security engine (e.g., AES), an FEC engine, a gate DBA, and a burst mode SERDES module. The transport MAC is responsible for accepting the GEM from the ADB, mapping and packetizing the GEM into a broadcast format (e.g., a broadcast PHY frame structure), and outputting the broadcast format frame to the burst mode SERDES. The receive MAC is responsible for receiving burst messages (e.g., burst PHY frames) from burst mode SERDES (e.g., remote nodes), extracting the GEM from the message, parsing and accepting (as indicated based on the parsed GEM header and SLA profile settings) only the GEM for node 208 within the subnet 210 of gate 202, and then outputting the GEM to the ADB of gate 202. The DBA of gate 202 is an extended HDBA for root port 230. The gate DBA authorizes and allocates a node burst window according to the gate DBA SLA profile setting (which is a subset of the root HDBA). The portal SLA profile database includes a list of node identifiers belonging to the portal 202 (e.g., located within the subnet 210 of the portal 202), a list of SLA profiles for node identifiers of the portal DBA function, and GEM forwarding information. Burst mode SERDES accepts a broadcast message (e.g., a broadcast PHY frame) from a root transport MAC and transmits in a broadcast transmission direction to node 208 in subnet 210. In the receive direction, burst mode SERDES receives burst messages (e.g., burst PHY frames) from node 208 through subnet 210 and outputs them to the root receive MAC for message/frame termination and GEM extraction.
The primary function of the gate 202 is to extend the central transport network 206 of a root port 230 by bridging the central transport network 206 to one or more subnetworks 210 (and nodes 208 therein) by adaptive bridging. In particular, gate 202 may burst messages from node 208 and/or other gates 202' within its subnet 210 to the root port 230 of the network 206 where they reside, as if the burst traffic came from a node within the central transport network 206. Similarly, the gate 202 may broadcast messages received from other nodes 204,208, 234, switches 228, and/or root ports 230 to the node 208 and/or other gates 202 'within the subnet 210 in which they are located as if the node 208 and/or other gates 202' were within the central transport network 206. Thus, the gate 202 may extend the central transport network 206 to additional nodes 208 and/or different types of subnetworks 210 while maintaining the burst/broadcast communication method within the central transport network 206.
In more detail, in the burst transmission direction (e.g., from node/gate to root port/switch/core), the burst window grant mechanism from node 208 to gate 202 to root port 230 may include the following steps. First, the DBA of the gate 202 is a subset of the HDBA of the root port 230 (of the network 206 of which the gate 202 is a part), and thus the DBA of the gate 202 is transparent to the root port 230 and the node 208. Second, when the gate 202 receives a burst window grant message (e.g., a GEM grant message) broadcast from its root port 230, the gate 202 uses the message header (e.g., GEM header) to look up GEM forwarding information in the gate SLA profile database. In other words, the gate 202 uses the header data to determine whether the grant message is for any node 208 within its subnet 210, as indicated in the gate SLA profile database. If the grant message is not for any node 208 of its subnet 210, the gate 202 discards the grant message; otherwise, the gate 202 stores the message in its virtual node database, updates the database, and broadcasts a new window grant message (e.g., a GEM grant message) to all nodes/gates in its subnet 210, which leads to the node 208 to which the original grant message was directed. In response, the node 208 provides a burst message to the gate 202, and the gate 202 formats and/or otherwise prepares a message for bursting to the root port 230 at the beginning of the burst window indicated in the window grant message received for that node 208.
Third, to achieve optimal throughput bandwidth, high burst bandwidth efficiency, and/or low transmission delay, the gate 202 may adjust the grant window indicated in the new grant message to be at least a predetermined amount of time earlier than the grant window indicated in the original grant message. In particular, this amount of time gives the gate 202 time to receive and format the burst data from the node 208 before bursting the data from the gate 202 to the root port 230 at the time indicated by the original window grant message. In fact, by doing so for multiple nodes 208 simultaneously, the gate 202 can aggregate messages from multiple different nodes (e.g., multiple burst PHY frames) into a single larger burst message (e.g., a GATE burst PHY frame).
Fourth, due to the protocol between gate traffic DBA reports and root port 230 window grants, the root port 230 and the gates 202 can maintain a list of group memberships and recognize that the virtual nodes 208 under each gate 202 form a group. Thus, when a node 208 issues a report message (e.g., a GEM report) to the HDBA of the root port 230, the gate 202 may intercept the report message, modify it to include GEM data temporarily stored in the virtual node cache of the gate 202 (if present), and issue a new report message to the HDBA of the root port 230. In other words, the gate 202 may combine the report messages from the nodes in its subnet 210 in order to make reporting more efficient.
In addition, when the HDBA of the root port 230 issues grant messages (e.g., GEM grant messages) to nodes 208 in a subnet 210, because it is aware of all nodes 208 in that subnet 210 (e.g., via the virtual node database), the HDBA of the root port 230 can ensure that grant windows for nodes 208 belonging to the same gate 202 and/or subnet 210 are arranged in sequential/contiguous order, so that the gate 202 can combine and/or burst messages (e.g., burst PHY frames) for all of its virtual nodes without each burst message needing its own preamble, except for the first burst message, as illustrated in the sketch below. This provides the benefits of reduced preamble overhead and improved burst bandwidth efficiency, particularly for small bursts of GEM control messages.
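The sketch below illustrates the window-offset behavior described above: a gate re-issuing grants to its subnet nodes earlier than the root-granted windows so that their data is ready to be aggregated when the root windows (arranged contiguously) arrive. The time units and lead time are assumptions.

```python
# Hypothetical sketch: a gate re-issues grant windows to its subnet nodes a
# fixed lead time earlier than the root-granted window, so aggregated data
# is ready when the root window arrives.  LEAD_TIME is an assumption.
LEAD_TIME = 200   # illustrative units the gate needs to collect and repack

def gate_regrant(root_grants, subnet_nodes):
    """root_grants: node_id -> (start, size) from the root HDBA.
    Returns the adjusted grants the gate broadcasts into its subnet."""
    adjusted = {}
    for node, (start, size) in root_grants.items():
        if node not in subnet_nodes:
            continue                       # not ours: discard the grant
        adjusted[node] = (max(0, start - LEAD_TIME), size)
    return adjusted

root_grants = {5: (1000, 300), 6: (1300, 300), 9: (5000, 200)}
print(gate_regrant(root_grants, subnet_nodes={5, 6}))
# {5: (800, 300), 6: (1100, 300)} -- contiguous root windows let the gate
# burst both nodes' data back-to-back behind a single preamble.
```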
In other words, for the data path, the gate 202 receives burst messages (e.g., burst PHY frames) from the burst mode SERDES and the remote nodes 208, extracts the GEM from the messages in the root receive MAC of the gate 202, stores the GEM in the associated virtual node buffer memory, and waits for the virtual node burst window grants for these virtual nodes 208 to arrive from the root port 230. The gate 202 may then map and pack the stored GEM for that node 208 and the other nodes 208 back into the burst message format, aggregating the multiple burst messages together to form one larger burst message in the node transmit MAC of the gate 202. Finally, the gate 202 may send the larger burst message to the SERDES and the root port 230 over the network 206 based on the granted burst window (e.g., multiple consecutive virtual node burst windows of the gate 202).
Looking now at the broadcast direction (e.g., from root port/switch/kernel to node/gate), the gate 202 may likewise extend the central network 206 to the sub-network 210, while also being transparent to both the root port 230 of its network 206 and the node 208 in its sub-network 210. To accomplish this, the gate 202 may act like a virtual node, receiving a broadcast message (e.g., a broadcast PHY frame) from the root port 230, extracting a GEM from the message, discarding GEM not directed to one of the nodes 208/gates 202' in its subnet 210 (e.g., as indicated by the message header and the gate SLA profile database). Otherwise, gate 202 may use a store-N-forward and/or pass-through scheme to package and map the GEM back into the root port broadcast message structure (e.g., broadcast PHY frame structure) in the root transport MAC of gate 202 and broadcast the new broadcast message to all nodes 208 and/or gates 202' in its subnet 210.
Data transfer operation
In operation, the bus 104 operates using a burst/broadcast communication scheme in which all data messages from the nodes 204, 208, 234 (and gates 202) are aggregated to the core 200 using a burst transfer method, in which a transmission window of dynamically adjustable size is granted (by the core 200) to the nodes 204, 208, 234 so that they (or a gate 202 on their behalf) can send their data messages as "bursts" within the granted window. If the sending node is in a subnet 210, the gate 202 (acting as the root port of the network 210) receives a burst message from the node 208 through the subnet 210 and then subsequently bursts the message to the core 200 through the central network 206 (as if the node 208 were part of the central network 206). In conducting such burst communications, the gate 202 may aggregate burst messages from multiple nodes 208 within the subnet 210, thereby increasing efficiency and reducing the impact of the potentially higher delay of the subnet 210 relative to the central network 206. In fact, the above operations may also be repeated by a gate 202' within the subnet 210 that provides gate access to a further subnet 210', to support any number of "chained/gated" networks. Further, in this process, the gate 202 may be transparent to the core 200 and the node 208, such that messages need not be addressed to the gate 202.
The core 200 receives these messages (from the one or more root ports 230 coupling the core 200 to each central network 206), processes the messages (including modifying them and/or determining their target destinations), and broadcasts them (and any messages originating from the core 200) onto any central transport network 206 on which the target node 204, 208, 234 of the message (or the gate 202 representing the target node 208) is located. Similar to the burst communication above, if the target node 208 is within a subnet 210, the gate 202 bridging to that subnet 210 may receive/intercept the message from the core and rebroadcast the message to all nodes 208 (and/or gates 202') on the subnet 210. To improve efficiency, any broadcast message whose target node is not on the subnet 210 (or its sub-networks) may be discarded by the gate 202. Again, this process is transparent and may be repeated by gates 202' within the subnet 210, so that messages can be broadcast over any number of chained networks. Thus, all nodes 204, 208, 234 (and gates 202) on each of the networks 206 (and the subnets 210 coupled thereto) receive all messages broadcast on that network 206 from the core 200, and need only look for messages directed to them, discarding the other messages.
In more detail, when the nodes 204, 208, 234 receive data from one or more external devices 102 through their one or more IO ports 99, they store the data in a GEM-ID queue buffer memory, burst a report message (e.g., a GEM report) to the root port 230 of the central network 206 where they reside (either directly or through one or more gates 202 if they reside in a sub-network 210 of the central network 206), and wait to be granted a burst window in which to send the input data. As described above, the gates 202 may collect and aggregate report messages from multiple nodes 208 (and/or gates 202') in their subnet 210 into a single larger report message so that the gates 202 can more efficiently burst the single larger report message to the root port 230 during the burst windows for these nodes 208.
Meanwhile, the nodes 204, 208, 234 may encapsulate the input data into the GEM format (splitting GEM that exceed a predefined size into smaller GEM), encrypt the GEM with the security key of the node 204, 208, 234, update the HARQ table, map and pack the GEM into a burst format (e.g., burst PHY frame format), and encode it (e.g., FEC RS(255,216) encoding). Then, when the granted burst window of each node arrives, the node bursts the GEM including the input data to the associated root port 230.
The HDBA of the root port 230 receives all report messages from the nodes 204, 208 (and/or gates 202) and performs DBA analysis for each of the nodes 204, 208 based on the SLA profile database, delay sensitivity level, traffic congestion feedback, Committed Information Rate (CIR)/Peak Information Rate (PIR) feedback, and/or other factors to determine a granted burst window size and start time for each of the nodes 204, 208. Once the granted burst window has been determined for one or more of the nodes 204, 208, the root port 230 broadcasts the window for each node in a broadcast grant message (e.g., GEM grant) to all nodes 204, 208 in the associated central network 206 and/or any sub-networks 210 (via the gates 202). As described above, the broadcast messages from the root port 230 have a fixed size, while the burst windows from the nodes 204, 208 to the root port 230 may vary in size with the HDBA dynamic allocation.
The gates 202, upon receiving a broadcast grant message targeting a node 208 within their subnet 210 (or its sub-networks), broadcast a new grant message to all nodes 208 within the subnet 210. In particular, these new grant messages may specify a burst window that occurs before the time indicated by the original/root port grant window. This ensures that the gate 202 receives (e.g., as bursts) the input data/GEM from the node 208 before the original/root port grant window, giving the gate 202 time to aggregate data/GEM from multiple nodes 208 and/or ports 99 into a single larger message for bursting to the root port 230 when the original/root port grant window arrives. Thus, the gates 202 may offset the inefficiencies and/or slowness of the sub-networks 210 so that they do not reduce the efficiency of the central transport network 206.
Upon receiving a burst message including GEM (including the input data from the external device 102), the root port 230 may decode (e.g., FEC RS(255,216) decode) and error correct the burst message to correct any transmission errors. The root port 230 may then extract the GEM from the burst message (e.g., transport frame format), decrypt the extracted GEM (e.g., using AES-128/256 and the source node security key), bypass the GEM section block, and pass the GEM to the switch 228. For each GEM, the switch 228 may then perform a GEM header lookup, parse and classify the ethernet L2/L3 address and header, process the GEM forwarding flow graph and determine GEM forwarding destination information, store the GEM in a (cut-through) buffer memory, and output the GEM to the HARQ and destination root port 230 (e.g., the root port 230 whose network 206 or sub-network 210 includes the target node 204, 208) based on the SLA database QoS output scheduler.
The root port 230 receives the GEM, performs GEM encryption (e.g., AES-GEM/256 encryption) using the destination node's (or broadcast GEM) security key, packages and maps the GEM into a broadcast message structure (e.g., broadcast frame structure), encodes the message (e.g., FEC RS (255,216) encoding), and finally broadcasts the broadcast message to all nodes 204,208 in the root port's network 206 and its sub-network 210. If node 208 is within subnet 210, gate 202 to that subnet receives the broadcast message and broadcasts the message to all nodes 208 within subnet 210. In some embodiments, gate 202 filters out any broadcast messages that are not directed to nodes 208 within its subnet 210 (or its subnet), and broadcasts only broadcast messages directed to one of those nodes 208. Alternatively, the gate 202 may rebroadcast all broadcast messages to nodes 208 within its subnet 210 without determining whether the message is relevant to one of these nodes 208.
All nodes 204,208 monitor the received broadcast messages, process those broadcast messages for the relevant nodes 204,208 and discard the other broadcast messages. Specifically, for messages that are not discarded, the nodes 204,208 decode (e.g., FEC RS (255,216) decode) and error correct the messages, extract the GEM from the broadcast message format (e.g., BC PHY frame), decrypt the extracted GEM (e.g., using AES-128/256 and the security keys of the destination node), decapsulate the data from the GEM format back to the original IO Port (IO-Port) data format, and output the data to the external device 102 through the designated IO Port 99. Thus, the bus 104 and system 100 provide the following advantages: a plurality of different networks with varying input data, varying processing speeds and data constraints can be combined while still maintaining the low latency and high throughput required by the machine automation system. This is a unique intranet system architecture that is specifically defined and optimized for such machine automation applications.
FIG. 4 illustrates a block diagram of an exemplary computing device 400 for implementing system 100, in accordance with some embodiments. In addition to the features described above, the external device 102 may include some or all of the features of the device 400 described below. In general, a hardware architecture suitable for implementing the computing device 400 includes a network interface 402, memory 404, a processor 406, I/O devices 408 (e.g., readers), a bus 410, and a storage device 412. Optionally, one or more of the illustrated components may be removed or replaced by other components known in the art. The choice of processor is not critical as long as a suitable processor with sufficient speed is selected. The memory 404 may be any conventional computer memory known in the art. The storage device 412 may include a hard drive, CDROM, CDRW, DVD, DVDRW, flash memory card, or any other storage device. Computing device 400 may include one or more network interfaces 402. Examples of network interfaces include a network card connected to an ethernet or other type of LAN. I/O devices 408 may include one or more of the following: keyboard, mouse, monitor, display, printer, modem, touch screen, button interface, and other devices. Operating software/applications 430 or functions/modules thereof may be stored in the storage device 412 and memory 404 and processed as applications are typically processed. More or fewer components than shown in fig. 4 may be included in computing device 400. In some embodiments, robotic automation system hardware 420 is included. Although computing device 400 in fig. 4 includes application 430 and hardware 420 for system 100, system 100 may be implemented on a computing device in hardware, firmware, software, or any combination thereof.
Fig. 5 illustrates a method of operating a machine automation system 100 including an intelligent controller and sensor intranet bus 104, in accordance with some embodiments. As shown in fig. 5, in step 502, the nodes 204, 208 receive input data from a plurality of external devices 102 via one or more ports 99 of the bus 104. In step 504, the nodes 204, 208 burst the input data as burst messages to the core 200 in variable-size burst windows. In some embodiments, for each of the nodes 204, 208, the HDBA of the root port 230 dynamically adjusts the burst window start time and the size of the variable burst window based on the data traffic parameters reported by that node 204, 208, and allocates the adjusted window to the corresponding node 204, 208 in a broadcast grant window message. In some embodiments, the gate 202 aggregates two or more burst messages received from nodes 208, including input data and/or traffic reports, into a single larger burst report or input data message for bursting to the core 200. In such embodiments, the gate 202 may omit portions of the received burst messages (e.g., preambles) in order to improve the efficiency of the bus 104. In some embodiments, upon receiving the broadcast window grant message from the core 200, the gate 202 adjusts the original time of the burst window to an earlier time and broadcasts the adjusted broadcast window grant message to the nodes 208. Thus, the nodes 208 burst their data to the gate 202 before the window granted by the root port 230, so that the gate 202 can combine multiple burst messages together and burst them during the later, original time window. In step 506, the core 200 processes the input data and broadcasts it as broadcast messages to each of the nodes 204, 208 within the central networks 206 and sub-networks 210 required to reach the target nodes 204, 208 of the messages. In step 508, the target nodes 204, 208 convert the data of the broadcast messages to a format accepted by the devices 102 coupled to the nodes 204, 208 and output the data to the devices 102. Thus, the method provides the advantage that the bus 104 can remain high speed despite the use of lower-speed network media.
Message retransmission mechanism
When a node 204, 208 transmits burst PHY frames to a root port 230 of the core 200 (e.g., destined for the core 200 and/or one or more other nodes/devices coupled to the bus 104), or vice versa for broadcast PHY frames, there is no guarantee that each burst/broadcast PHY frame will be successfully delivered to the root/node. Accordingly, the system 100 employs a message retransmission mechanism implemented by the nodes 204, 208 and the root port 230 and/or core 200 to compensate for errors in message transmission.
Fig. 18 illustrates a message retransmission mechanism for the bus 104 according to some embodiments. As shown in fig. 18, each of the nodes 204, 208 includes a node initiator 1802 and a node acknowledger 1804, each gate 202 includes a gate initiator 1806 and a gate acknowledger 1808 (which may implement one or more virtual initiators/acknowledgers for any virtual nodes implemented by the gate 202), and the core 200 includes a core initiator 1810 and a core acknowledger 1812 shared by each root port 230 (which may implement a virtual initiator/acknowledger for each node 204, 208 under each root port 230). In particular, the core acknowledger 1812 may implement a virtual acknowledger for each of the nodes 204, 208 that is dedicated to acknowledging messages received from that node. Similarly, the core initiator 1810 may implement a virtual initiator for each of the nodes 204, 208 that is dedicated to initiating the retransmission mechanism for messages (e.g., unicast messages) sent to that node. In addition, the core initiator 1810 may also implement a virtual broadcast initiator for each root port 230 that is dedicated to initiating the retransmission mechanism for messages (e.g., broadcast messages) broadcast to all nodes 204, 208 of that root port 230. Optionally, one or more of the root ports 230 may have a separate initiator 1810 and/or acknowledger 1812. Each of the node, gate, and root initiators and/or acknowledgers may include access to one or more processors for performing the mechanisms described herein and/or one or more memories (e.g., SRAM) for storing retransmission tables and local copies of transmitted GEM packets 600.
In a node-to-root transfer operation, as described in the data transfer operation section above, the root port 230 sends a grant window message (see figs. 6D and 6E) to one of the nodes 204, 208 (and/or a gate 202) identifying the grant window assigned to that node. The node 204, 208 (and/or gate 202) then bursts a message to the root port 230 of the core using the grant window (e.g., using burst PHY frames), where the message includes one or more GEM packets 600. Each data packet may include a source node identifier 612 (including a transmission sequence group identifier), a GEM identifier 614, a transmission sequence identifier 618, an acknowledgement request, and/or other data as described above. In particular, the node identifier 612 may include a portion (e.g., two bits) that identifies a sequence group of data packets, with the remaining portion identifying the source node. For each GEM packet 600 in the burst message that requests an acknowledgement (e.g., as indicated via the acknowledgement request field 620), the node initiator 1802 (and/or gate virtual initiator 1806) creates a new retransmit flow comprising a local copy of the packet 600 in the node initiator 1802's local memory and a new entry in the retransmission table. This data may be used to retransmit one or more of the data packets, if needed.
The new entry in the retransmission table may include one or more of a port identifier, a node identifier, an epoch identifier, a sequence group identifier, a GEM packet header, a GEM pointer, a GEM retransmission timer, a GEM retransmission timeout threshold, a GEM retransmission counter, and a maximum GEM retransmission threshold. For nodes 204, 208, the port identifier may identify a port 99 of the node 204, 208; for a gate 202, the port identifier may identify the root 230 and a node identifier; and for the core/root, the port identifier may identify one of the roots 230. The node identifier may identify the source node 204, 208 that originated the message. The epoch identifier may identify the GEM packet (from the port of the source node). The sequence group identifier and sequence identifier identify the sequence group to which the data packet 600 is assigned and the sequence number within that group assigned to the data packet 600. The GEM packet header may be a copy of the header of the GEM packet. The GEM pointer may point to the associated local copy of the packet in local memory. The GEM retransmission timer may time the elapsed time since the data packet 600 was transmitted, and the GEM retransmission timeout threshold may be a configurable value that the retransmission timer needs to reach to trigger automatic retransmission of the data packet. The GEM retransmission counter may indicate the number of times the data packet 600 has been retransmitted, and the maximum GEM retransmission threshold may be a configurable value that the retransmission counter needs to reach to prevent further retransmissions of the data packet (e.g., by clearing the associated entry and local copy and/or sending an interrupt to the core 200 to identify the transmission problem). Alternatively, the table may include more or fewer fields.
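A minimal sketch of such a retransmission-flow entry is shown below; the field types, default timeout, and retry limit are illustrative assumptions rather than values from this disclosure.

```python
from dataclasses import dataclass, field
import time

# Hypothetical sketch of one retransmission-flow entry; field choices and
# threshold defaults are illustrative only.
@dataclass
class RetransmitEntry:
    port_id: int
    node_id: int
    epoch_id: int
    seq_group: int
    seq_num: int
    gem_header: bytes
    gem_copy: bytes                      # local copy used for retransmission
    sent_at: float = field(default_factory=time.monotonic)
    timeout_s: float = 0.005             # retransmission timeout threshold
    retries: int = 0
    max_retries: int = 3                 # maximum retransmission threshold

    def timed_out(self, now):
        return (now - self.sent_at) >= self.timeout_s

def create_flow(table, entry):
    """Index the flow by (sequence group, sequence number)."""
    table[(entry.seq_group, entry.seq_num)] = entry

table = {}
create_flow(table, RetransmitEntry(1, 5, 2, 0, 17, b"\x00" * 8, b"payload"))
```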
After sending the grant window message, the core acknowledger 1812 monitors the grant window to receive the burst message from the node 204, 208 granted the window. If no message is received within that time, the core acknowledger 1812 sends a lost burst acknowledgement message to the node 204, 208 indicating that the root port 230 did not receive a message during the grant window, and the root port 230 re-grants the same grant window to the node 204, 208 during the next period (e.g., via another grant message indicating the same time slot and/or size). In some embodiments, the lost burst acknowledgement message is broadcast to all nodes 204, 208 in the networks 206, 210 of the root port 230. Alternatively, the lost burst acknowledgement message may be unicast or multicast to one node or a subset of the networks 206, 210. Upon receiving the lost burst acknowledgement message (and the subsequent grant message), the node initiator 1802 recreates the burst message using the retransmission table and the stored local copies of the GEM packets 600 and retransmits the recreated burst message to the root port 230 during the re-granted (and optionally higher priority) grant window. At the same time, the node initiator 1802 resets the retransmission timer and increments the retransmission counter. However, if incrementing the retransmission counter would cause its value to exceed the retransmission threshold, the node initiator 1802 performs an action to diagnose the cause of the persistent failure in message delivery. For example, the initiator 1802 may send an interrupt to the core CPU to perform a link diagnostic test and clear the retransmission flow including the stored local copies and/or entries, and the root port 230 may extend the length of the preamble of the burst message, select a stronger FEC algorithm for future burst messages, and/or take other diagnostic actions.
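Continuing the sketch above, the snippet below illustrates one possible initiator reaction to a timer expiry (or a lost burst acknowledgement): retransmit, reset the timer, bump the counter, and escalate once the counter would exceed its threshold. The callback names are hypothetical.

```python
import time

# Hypothetical continuation of the RetransmitEntry sketch above: what a node
# initiator might do when a burst is reported lost or a timer expires.
def handle_timeouts(table, send_burst, escalate):
    """send_burst(entries) retransmits; escalate(entry) is the action taken
    once the retry count would exceed its threshold (e.g., raise an interrupt
    to the core CPU and clear the flow)."""
    now = time.monotonic()
    due = [e for e in table.values() if e.timed_out(now)]
    resend = []
    for entry in due:
        if entry.retries + 1 > entry.max_retries:
            escalate(entry)                              # diagnose the link
            del table[(entry.seq_group, entry.seq_num)]  # clear the flow
            continue
        entry.retries += 1
        entry.sent_at = now                              # reset the timer
        resend.append(entry)
    if resend:
        send_burst(resend)   # recreated burst sent in the re-granted window
```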
When/if the root port 230 receives the burst message, the root port 230 unpacks the burst PHY frame and parses the received GEM packets 600. For each GEM packet 600, the core acknowledger 1812 validates the burst message that includes the packet 600. For example, the core acknowledger 1812 may determine whether there are any uncorrectable errors in any of the packets 600, including whether the source of a packet 600 cannot be determined due to errors in the header of the GEM packet 600. In some embodiments, validating each GEM packet 600 includes performing one or more of Forward Error Correction (FEC), Cyclic Redundancy Check (CRC) validation, Bose-Chaudhuri-Hocquenghem (BCH) code validation, and/or other types of packet error correction.
If a packet is to be broadcast or multicast (non-unicast) and the destination nodes 204, 208 are part of the same network 206, 210 as the source node 204, 208 (e.g., coupled to the core 200 via the same root port 230 as the nodes 204, 208 that are to receive the broadcast or multicast), then no acknowledgement is required for those packets 600 (even if requested according to the request field 620). Instead, after the core 200 processes the packets 600 as needed, the root 230 broadcasts or multicasts the packets without any uncorrectable errors (e.g., in a broadcast PHY frame) to all nodes 204, 208, or a selected subset, on the networks 206, 210. As a result, when the source node 204, 208 receives the broadcast/multicast message including the data packets, it identifies itself as the source of the data packets, and the node initiator 1802 removes the data packets from the retransmission flow. When the retransmission timer associated with any data packet not included in the broadcast/multicast message reaches the retransmission timeout threshold (e.g., due to uncorrectable errors as described above), that data packet is automatically retransmitted in a subsequent burst message. These retransmitted data packets 600 may be combined with other data packets 600 in the burst message to fill the granted transmission window of the node 204, 208. As a result, when the destination of a data packet is a node 204, 208 in the same network 206, 210 as the source node 204, 208, the message retransmission mechanism provides the advantage of reducing network congestion by not requiring acknowledgement messages.
If the destination of a data packet is a node 204, 208 that is not in the same network 206, 210 as the source node 204, 208, then that data packet 600 requires an acknowledgement if one is requested according to the request field 620. The core acknowledger 1812 constructs and sends a received GEM acknowledgement message (RX-GEM-ACK) to the source node 204, 208 indicating which packets 600 are valid and which, if any, have uncorrectable errors and therefore need to be retransmitted. The RX-GEM-ACK may include a start of sequence identifier, an end of sequence identifier, a sequence group identifier, a source/destination node identifier, and/or other fields described herein.
For example, as shown in fig. 19, the RX-GEM-ACK may include a header TYPE field 1902 (similar to the GEM header TYPE (GEM-HD-TYPE) 606 shown in figs. 6C-F), a control message TYPE 1904 (similar to the control message TYPE 654 in fig. 6F), a destination node state 1906 indicating whether the destination node is sleeping, powered off, or awake, a sequence group identifier 1908 identifying which sequence group the packets belong to, a start of sequence identifier 1910 identifying the first sequence number (of the group), a source/destination node identifier 1912 identifying the source or destination node, a GEM source/destination identifier 1914 (similar to the GEM packet ID (GEM-PKT-ID) 614 of fig. 6B), a GEM packet TYPE 1916 (similar to the GEM packet TYPE (GEM-PKT-TYPE) 616 of fig. 6B), an end of sequence identifier 1918 identifying the second sequence number (of the group), a receive acknowledgement request indicator 1920, a HEC 1922 (similar to the HEC 624 of figs. 6B-F), and an optional bitmap 1924. In particular, the source/destination node identifier 1912 may identify the destination node, and the GEM source/destination identifier 1914 may identify the GEM destination, as generated by the core acknowledger 1812.
The receive acknowledgement request indicator 1920 may indicate: whether the acknowledgement is invalid; whether the range of sequence numbers from the value of the start of sequence identifier 1910 to the value of the end of sequence identifier 1918 is valid; whether only the sequence numbers equal to the values of the start and end of sequence identifiers 1910, 1918 are valid (not necessarily those in between); or whether a bitmap 1924 is included, where each bit of the bitmap 1924 represents one sequence identifier and indicates whether the packet assigned to that identifier is validated (e.g., received without any uncorrectable errors). Alternatively, more or fewer fields may be used. In some embodiments, the bitmap 1924 includes a bit or cell/portion for each sequence number in the sequence group. Optionally, the bitmap 1924 may include fewer than one bit or cell/portion per sequence number. For example, the bitmap 1924 may include only enough bits/cells to identify the sequence numbers that are not within the range of sequence numbers from the value of the start of sequence identifier 1910 to the value of the end of sequence identifier 1918. As a result, the overhead space required to transmit the bitmap 1924 may be reduced by utilizing the start/end of sequence identifiers 1910, 1918.
If there are no uncorrectable errors, the RX-GEM-ACK may indicate that all packets identified by the sequence group identifier and the start and end of sequence identifiers are valid. If there are one or more data packets 600 with uncorrectable errors, the RX-GEM-ACK may indicate which data packets in the burst message are valid using a bitmap comprising a bit for each sequence number in the sequence group, where each bit represents one of the data packets/sequence numbers and indicates whether that data packet/sequence number is valid or invalid. Alternatively or additionally, the RX-GEM-ACK may identify a range of all valid or invalid sequence numbers (e.g., using the start of sequence and end of sequence fields as range markers) so that the bitmap may exclude that range of the group's sequence numbers (making the bitmap and the RX-GEM-ACK smaller).
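A sketch of how a receiver might populate such an acknowledgement is shown below, building on the rx_gem_ack structure sketched above. The indicator values and helper names are assumptions chosen for illustration.

```c
/* Sketch: if every packet in the group is valid, only the start/end range is
 * used; otherwise a per-sequence-number bitmap marks the valid packets. */
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define ACK_RANGE_VALID  0x1   /* all sequence numbers in [start, end] valid */
#define ACK_BITMAP       0x2   /* per-sequence-number bitmap included        */

void build_ack(struct rx_gem_ack *ack, const bool *pkt_valid,
               uint16_t seq_start, uint16_t count)
{
    bool all_valid = true;
    for (uint16_t i = 0; i < count; i++)
        if (!pkt_valid[i])
            all_valid = false;

    ack->seq_start = seq_start;
    ack->seq_end   = (uint16_t)(seq_start + count - 1);
    if (all_valid) {
        ack->ack_req_indicator = ACK_RANGE_VALID;   /* no bitmap needed       */
    } else {
        ack->ack_req_indicator = ACK_BITMAP;
        memset(ack->bitmap, 0, sizeof ack->bitmap);
        /* one bit per sequence number in the group (bounded by bitmap size)  */
        for (uint16_t i = 0; i < count && i / 8 < sizeof ack->bitmap; i++)
            if (pkt_valid[i])
                ack->bitmap[i / 8] |= (uint8_t)(1u << (i % 8));
    }
}
```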
When the source node 204, 208 receives the RX-GEM-ACK, the node initiator 1802 identifies which data packets 600 were successfully transmitted and removes their associated retransmission flows (e.g., removes the retransmission table entries and/or local copies). The retransmission flows for all remaining packets (those with uncorrectable errors) remain in the node initiator 1802, which continuously updates their retransmission timers and, after each retransmission timer passes the retransmission threshold, retransmits the corresponding packets in subsequent burst messages in subsequent grant windows (while updating their retransmission counter values). This process is repeated until all of the packets are successfully transmitted (and thus their flows are removed) or a retransmission counter value reaches the retransmission threshold, at which point action must be taken as described above. Furthermore, as described above, these retransmitted data packets 600 may be combined with other data packets in subsequent burst messages to efficiently fill the grant window.
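The fragment below sketches how the node initiator might consume the acknowledgement, reusing the pending_flow and rx_gem_ack sketches above. The decode of the indicator and bitmap is an assumption consistent with the description, not a normative encoding.

```c
/* Sketch: flows whose sequence numbers are reported valid are removed; the
 * rest stay armed for retransmission in a later grant window. */
#include <stdbool.h>
#include <stdint.h>

static bool seq_acked(const struct rx_gem_ack *ack, uint16_t seq)
{
    if (seq < ack->seq_start || seq > ack->seq_end)
        return false;
    if (ack->ack_req_indicator == ACK_RANGE_VALID)
        return true;                                  /* whole range valid    */
    uint16_t i = (uint16_t)(seq - ack->seq_start);    /* bitmap case          */
    if (i / 8 >= sizeof ack->bitmap)
        return false;
    return (ack->bitmap[i / 8] >> (i % 8)) & 1u;
}

void on_rx_gem_ack(struct pending_flow *flows, int n_flows,
                   const struct rx_gem_ack *ack)
{
    for (int i = 0; i < n_flows; i++) {
        if (flows[i].active && seq_acked(ack, flows[i].seq_id))
            flows[i].active = false;  /* free table entry and local copy      */
        /* flows left active are re-sent once their timers pass the threshold */
    }
}
```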
If for any reason the source node 204, 208 does not receive a missing burst acknowledgement message, an RX-GEM-ACK, or a rebroadcast/multicast of the burst message (sourced by the source node 204, 208), the node initiator 1802 continuously (e.g., every cycle) updates the retransmission timer of each data packet 600 and, when a timer reaches the threshold, initiates retransmission as if a missing burst acknowledgement message had been received. This process continues until all data packets 600 are successfully delivered or a retransmission counter value passes the retransmission threshold and action is taken as described above.
If the destination of a message (or of a message originating within the core 200) is one or more other nodes, the core 200 needs to process the message and forward it from one of the root ports 230 to the destination nodes 204, 208. As described below, this transmission from the root port 230 to the nodes 204, 208 implements its own instance of the message retransmission mechanism, operating in parallel with the mechanisms described above.
In this root-to-node transfer operation, as described in the data transfer operation section above, the core 200 processes the message (e.g., lookup, header modification, or other packet processing functions), determines the destination nodes of the message, and passes the message to the root ports 230 coupled to those destination nodes. The root port 230 then broadcasts, multicasts, or unicasts a message (e.g., using a broadcast PHY frame) using the next broadcast window to some or all of the nodes 204, 208 within the networks 206, 210 coupled to the root port 230, where the message includes one or more GEM packets 600. As described above, each data packet may include a node identifier field 612 (e.g., the destination node), a GEM identifier 614, a transmission sequence identifier 618, an acknowledgement request, and/or other data as described above. In particular, the node identifier 612 may include a portion (e.g., two bits) that identifies the sequence group of the data packet, while the remaining portion identifies the destination node.
Similar to the node initiator 1802, the kernel initiator 1810 creates a new retransmission flow, which includes a local copy of the data packet 600 in the kernel initiator 1810 local memory and a new entry in the retransmission table, for each GEM data packet 600 in a broadcast/multicast/unicast message for which an acknowledgement is requested. As described above, these retransmission flows may be used to retransmit one or more data packets 600 if needed. The new entry may be the same as the entries of the node/gate initiators 1802, 1806 described above, including, for example, one or more of a port identifier, a node identifier, an epoch identifier, a sequence group identifier, a GEM packet header, a GEM pointer, a GEM retransmission timer, a GEM retransmission timeout threshold, a GEM retransmission counter, and a maximum GEM retransmission threshold.
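An illustrative retransmission-table entry corresponding to the fields just listed is sketched below; the field widths and the pointer-based representation of the local copy are assumptions.

```c
/* Illustrative retransmission-table entry built by the kernel initiator 1810
 * (one per acknowledged GEM packet). */
#include <stddef.h>
#include <stdint.h>

struct retrans_entry {
    uint8_t   port_id;            /* root port the packet was sent from       */
    uint16_t  node_id;            /* destination (or acknowledging) node      */
    uint32_t  epoch_id;           /* epoch in which the packet was sent       */
    uint8_t   seq_group_id;       /* sequence group of the packet             */
    uint8_t   gem_header[8];      /* copy of the GEM packet header            */
    void     *gem_copy;           /* pointer to the local copy of the packet  */
    size_t    gem_len;
    uint32_t  retrans_timer;      /* time since (re)transmission              */
    uint32_t  retrans_timeout;    /* threshold that triggers a retransmit     */
    uint8_t   retrans_counter;    /* retransmissions attempted so far         */
    uint8_t   max_retrans;        /* limit before diagnostics are invoked     */
};
```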
For unicast messages, the retransmission flow may be operated by a virtual initiator (implemented by the core 200) dedicated to the node 204, 208 that is the destination of the unicast message/packets 600. As described above, the kernel initiator 1810 may implement a separate virtual initiator for each node 204, 208, which handles the retransmission flows for packets unicast to that node 204, 208. For broadcast or multicast messages, the retransmission flow may be operated by a broadcast- or multicast-specific virtual initiator corresponding to all nodes 204, 208 included in the broadcast (e.g., all nodes of the network 206, 210) or all nodes 204, 208 included in the multicast (e.g., a subset of all nodes of the network 206, 210). In such an embodiment, the core 200 may designate one of the nodes 204, 208 of the broadcast or multicast group as an acknowledgement node, where that node 204, 208 acknowledges all messages/packets broadcast/multicast on the network 206, 210 (even if a message/packet is not intended for it), while the other nodes 204, 208 do not respond (even if a message/packet is intended for them). Thus, rather than multiple individual virtual initiators (one per node) each creating a retransmission flow for each data packet sent to its node, the broadcast- or multicast-specific virtual initiator may create a single retransmission flow for an entire broadcast/multicast message, which corresponds only to the acknowledgement node but may represent the entire broadcast/multicast group of nodes 204, 208. Alternatively, the core 200 may designate an acknowledgement subset of the nodes 204, 208 of the networks 206, 210 as acknowledgement nodes, where a separate broadcast- or multicast-specific virtual initiator is implemented by the kernel initiator 1810 for each node of the acknowledgement subset (which still requires fewer separate virtual initiators than one for every node in the broadcast/multicast group).
In some embodiments, the acknowledgement node is selected based on the order in which broadcast messages are received by the nodes 204, 208 (e.g., the last node in the order may be selected because it is the most likely to receive errors). Alternatively, the broadcast- or multicast-specific virtual initiator may be omitted, and the "unicast" virtual initiator for each node 204, 208 may create a retransmission flow if that node is the destination of one or more data packets of the broadcast/multicast message. In such embodiments, each node 204, 208 may send an acknowledgement message back to the root port 230 (not just the selected one or subset). It should be noted that the following discussion describes a single destination or acknowledgement node for the sake of brevity. However, it should be understood that in the case of multiple destination or acknowledgement nodes, each destination or acknowledgement node will perform the actions described herein.
Subsequently or concurrently, the root port 230 (notified by the kernel initiator 1810) may send a grant window message (see figs. 6D and 6E) to the destination or acknowledgement node 204, 208 identifying the grant window assigned to that node for transmitting its acknowledgement message. After sending the grant window message, the kernel acknowledger 1812 monitors the grant window to receive burst acknowledgement messages from the destination or acknowledgement node 204, 208.
If the acknowledgement message RX-GEM-ACK is not received within the retransmission timer period, the kernel initiator 1810 (via the virtual initiator associated with the packet whose retransmission timer has expired) recreates the unicast/broadcast/multicast message using the retransmission table and the copy of the GEM packet 600, and retransmits the recreated message to the same nodes 204, 208 as the original message using the (optionally higher priority) next broadcast window, in the same manner as the original message. At the same time, the kernel initiator 1810 resets the retransmission timer and increments the retransmission counter of each retransmitted packet's flow (e.g., in each unicast virtual initiator or broadcast/multicast virtual initiator of the associated node). However, if incrementing the retransmission counter would cause its value to exceed the retransmission threshold, the kernel initiator 1810 takes action to diagnose the cause of the persistent failure in message delivery. For example, the initiator 1810 may send an interrupt to the core CPU to perform a link diagnostic test and clear the retransmission flow including the stored local copy and/or entry, and/or the root port 230 may extend the length of the preamble of the burst message, select a stronger FEC algorithm for future burst messages, and/or take other diagnostic actions.
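The per-cycle timer sweep implied by this behavior might look like the sketch below, reusing the retrans_entry layout sketched earlier. The enum of outcomes and the exact ordering of the checks are assumptions.

```c
/* Sketch of one timer-sweep step in a (virtual) initiator: an expired entry is
 * re-queued for the next window; an entry that exhausts its retry budget
 * triggers diagnostic action instead. */
#include <stdint.h>

enum sweep_action { SWEEP_NONE, SWEEP_RETRANSMIT, SWEEP_DIAGNOSE };

enum sweep_action sweep_entry(struct retrans_entry *e, uint32_t elapsed)
{
    e->retrans_timer += elapsed;
    if (e->retrans_timer < e->retrans_timeout)
        return SWEEP_NONE;                 /* still waiting for an ACK        */
    if (e->retrans_counter + 1 > e->max_retrans)
        return SWEEP_DIAGNOSE;             /* e.g., interrupt the core CPU,   */
                                           /* lengthen preamble, stronger FEC */
    e->retrans_timer = 0;                  /* re-arm and count the retry      */
    e->retrans_counter++;
    return SWEEP_RETRANSMIT;               /* re-send in the next window      */
}
```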
For each of the nodes 204, 208 that receives the broadcast/multicast/unicast message but is neither the destination node of a unicast nor the acknowledgement node of a multicast/broadcast, the node 204, 208 may accept any data packets destined for it, but it will not send an acknowledgement to the root port 230 (because it is not a destination or acknowledgement node).
For each of the nodes 204, 208 that receives the broadcast/multicast/unicast message and is the destination node of a unicast or the acknowledgement node of a multicast/broadcast, the node 204, 208 may accept any data packets destined for it, and even if none are, the node 204, 208 will send an acknowledgement to the root port 230 (since it is a destination or acknowledgement node). Specifically, when/if the destination or acknowledgement node 204, 208 receives the broadcast/multicast/unicast message, it unpacks the message (e.g., the broadcast PHY frame) and parses the received GEM packets 600. For each GEM packet 600, the node acknowledger 1804 validates the message that includes the packet 600. For example, the node acknowledger 1804 may determine whether there are any uncorrectable errors in any of the packets 600, including whether the source of a packet 600 cannot be determined due to errors in the header of the GEM packet 600. In some embodiments, validating each GEM packet 600 includes performing one or more of Forward Error Correction (FEC), Cyclic Redundancy Check (CRC) validation, Bose-Chaudhuri-Hocquenghem (BCH) code validation, and/or other types of packet error correction.
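A per-packet validation step consistent with this description is sketched below. The FEC and CRC routines are placeholders, not the negotiated algorithms of the bus, and the order of checks is an assumption.

```c
/* Sketch: try the integrity check first, attempt FEC repair on failure, and
 * declare the packet uncorrectable only if both fail. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Placeholder implementations; a real design would use the negotiated FEC
 * (e.g., BCH) and the HEC/CRC defined for GEM packets. */
static bool fec_correct(uint8_t *buf, size_t len) { (void)buf; (void)len; return true; }
static bool crc_ok(const uint8_t *buf, size_t len) { (void)buf; (void)len; return true; }

bool validate_gem_packet(uint8_t *pkt, size_t len)
{
    if (crc_ok(pkt, len))
        return true;               /* received without error                  */
    if (fec_correct(pkt, len))     /* attempt repair using the FEC parity     */
        return crc_ok(pkt, len);   /* correctable error: treat as valid       */
    return false;                  /* uncorrectable: must be retransmitted    */
}
```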
If one or more of the packets 600 requests an acknowledgement according to the request field 620, the node acknowledger 1804 constructs and sends a received GEM acknowledgement message (RX-GEM-ACK) to the root port 230 (of the network 206, 210) indicating which acknowledgement-requesting packets 600 are valid and which, if any, have uncorrectable errors and therefore need to be retransmitted. The RX-GEM-ACK may be substantially similar to the RX-GEM-ACK sent by the kernel acknowledger 1812 described above with respect to fig. 19. However, in some embodiments, when generated by the node acknowledger 1804, the source/destination node identifier 1912 may identify the source node and the GEM source/destination identifier 1914 may identify the GEM source.
If there are no uncorrectable errors, the RX-GEM-ACK may indicate that all packets identified by the sequence group identifier and the start and end of sequence identifiers are valid. Conversely, if there are one or more data packets 600 with uncorrectable errors, the RX-GEM-ACK may indicate which data packets in the broadcast/multicast/unicast message are valid using a bitmap that includes a bit for each sequence number in the sequence group, where each bit represents one of the data packets/sequence numbers and indicates whether that data packet/sequence number is valid or invalid. Alternatively or additionally, the RX-GEM-ACK may identify a range of all valid or invalid sequence numbers (e.g., using the start of sequence and end of sequence fields as range markers) so that the bitmap may exclude that range of the group's sequence numbers (making the bitmap and the RX-GEM-ACK smaller).
When the source root port 230 receives an RX-GEM-ACK from the destination or acknowledgement node, the corresponding virtual initiator of the kernel initiator 1810 identifies which data packets 600 were successfully transmitted and removes their associated retransmission flows (e.g., removes the retransmission table entries and/or local copies). The retransmission flows of all remaining data packets (those with uncorrectable errors) remain in the corresponding virtual initiators, which continuously update their retransmission timers and, after each retransmission timer passes the retransmission threshold, retransmit the corresponding packets in subsequent broadcast/multicast/unicast messages in subsequent broadcast windows (while updating their retransmission counter values). These retransmitted packets 600 may be combined with other packets 600 in subsequent broadcast/multicast/unicast messages in order to fill the transmission window of the root port 230.
As described above, if the root port 230 does not receive an RX-GEM-ACK for any reason, the corresponding virtual initiator of the kernel initiator 1810 continuously (e.g., every cycle) updates the retransmission timer of each data packet 600 and initiates retransmission when a timer reaches the threshold. This process is repeated until all data packets 600 are successfully transmitted (and thus their flows are removed) or a retransmission counter value reaches the retransmission threshold, at which point action must be taken as described above. Thus, the system 100 provides the advantage that each message transmission (e.g., node-to-gate, node-to-root, gate-to-root, root-to-gate, root-to-node) within the bus 104 can implement its own parallel message retransmission mechanism, such that these mechanisms together provide a robust message delivery guarantee over the bus 104.
Although the description herein focuses on messages between the nodes 204, 208 and the root port 230, it should be understood that messages may be forwarded between the nodes 204, 208 and the root port 230 through one or more gates 202. In such embodiments, when receiving messages from or sending messages to the nodes 204, 208, the gate 202 may interact with the nodes 204, 208 in the same manner as the root port 230. Further, when receiving messages from or sending messages to the root port 230, the gate 202 may interact with the root port 230 in the same manner as the nodes 204, 208. In other words, as a message travels from the node 204, 208 to the gate 202 to the root port 230 (and back), the gate 202 provides acknowledgements to the node and receives acknowledgements from the root port 230, and vice versa. Thus, the gate 202 provides another layer of the message retransmission mechanism, which keeps acknowledgement response times low so that the mechanism does not interfere with high speed communications on the bus 104. Additionally, when acting on behalf of the virtual nodes represented by the gate 202, one or more gates 202 may act in the same manner as the nodes 204, 208, where the gate 202 implements a virtual gate initiator and acknowledger for each virtual node.
Further, it should be noted that, where functionality is described with respect to the kernel initiator 1810 and the kernel acknowledger 1812, such functionality may be implemented via virtual initiators and acknowledgers operated by the core 200. In particular, each root port 230 has a virtual initiator and acknowledger (implemented by the core 200) for each node 204, 208 within its network 206, 210, which perform the described functions when those functions relate to messages for which that node 204, 208 is a source and/or destination. Additionally, the core 200 may implement, for each root port 230, an additional virtual initiator dedicated to messages multicast or broadcast from that root port 230 to the plurality of nodes 204, 208 in its network.
Further, the system 100 may acknowledge when a message with an error is received, rather than acknowledging when a message without an error is received. In such embodiments, the operation of the system 100 is substantially similar to that described herein, except that an acknowledgement is transmitted when a message with uncorrectable errors is received (rather than when a correct or correctable message is received), and the nodes/roots assume that a message was transmitted correctly, and release the stored retransmission data, if no such acknowledgement is received within the retransmission time period.
FIG. 20 illustrates a method of implementing a guaranteed message delivery mechanism on a control and sensor bus, in accordance with some embodiments. As shown in fig. 20, at step 2002, one of the root ports 230 sends a window grant message to one of the nodes 204, 208. At step 2004, the one of the nodes 204, 208 bursts a message (e.g., a burst PHY frame message) to the root port 230 (destined for the core 200 or one or more other nodes 204, 208) within the transmission window. As described above, such a burst message may include a plurality of GEM packets 600, where each packet includes destination information 612 and an acknowledgement request indicator 620, as well as other data. At step 2006, the one of the nodes 204, 208 stores a copy of the message using its validation engine. At step 2008, if the root port 230 receives the burst message without any uncorrectable errors (e.g., no errors in the burst PHY header and/or only errors that can be corrected using the FEC data), the root port 230 sends a data reception acknowledgement message to the one of the nodes 204, 208. As a result, at step 2010, the one of the nodes 204, 208 may remove the local copy of the burst message.
In some embodiments, if the root port 230 does not receive a data message within the transmission window, the root port 230 sends a missing burst message to the one of the nodes 204, 208. Upon receiving the missing burst message, the one of the nodes 204, 208 may retransmit the burst PHY frame message using the local copy. In some embodiments, if the root port 230 receives a burst PHY frame message with uncorrectable errors in a subset of the GEM packets 600 (e.g., some GEM packets 600 have errors that cannot be corrected using the FEC data), the root port 230 sends a data portion receive message to the one of the nodes 204, 208. As described above, the data portion receive message may include packet loss/receipt information identifying the subset of packets 600 that need to be retransmitted. In some embodiments, in response to receiving the data portion receive message, the one of the nodes 204, 208 removes from the copy, based on the loss/receipt information, the data packets 600 that are not part of the subset (e.g., the data packets of the burst message that do not have uncorrectable errors), as these data packets 600 no longer need to be sent. As described above, the root port 230 may construct one or more of a bitmap and start and end pointers indicating consecutive data packets that are correctable/correct (or uncorrectable/incorrect), where each bit corresponds to whether a data packet is normal or needs to be resent.
In some embodiments, the one of the nodes 204, 208 resends the subset (e.g., the packets with uncorrectable errors) to the root port 230 in a new burst message (e.g., after expiration of the timer associated with each packet of the subset) in a subsequent transmission window granted to the one of the nodes 204, 208. In such embodiments, if there is room in the subsequent transmission window, the node 204, 208 may add additional data (e.g., new GEM packets 600) to the new burst message to increase the throughput of the bus 104. In some embodiments, if the destination of the burst message is a node 204, 208 within the same network 206, 210 (e.g., the broadcast network associated with the root port 230) as the one of the nodes 204, 208 that sent the burst message, the root port 230 may omit sending the data reception message because the broadcast of the burst message may serve as the acknowledgement. In particular, when the one of the nodes 204, 208 receives the burst message (broadcast from the root port 230 to all nodes in its broadcast network 206, 210) and is itself indicated as the source, the one of the nodes 204, 208 may treat this as a data reception message acknowledging the burst message and clear the local copy and associated data.
In some embodiments, the root port 230 passes the burst message to another root port 230, and that other root port 230 forwards/broadcasts the burst message from the core 200 to the nodes of its network 206/210. In doing so, that root port 230 may store a local copy of the message (in the same manner as the one of the nodes 204, 208 described above), which may be used to replay some or all of the message if transmission of the message is not acknowledged by the destination node 204, 208. In some embodiments, for each network 206, 210 associated with a root port 230, the core 200 may select one or a subset of the nodes 204, 208 as target acknowledgement nodes. As a result, when a message is broadcast to the nodes 204, 208 of one of the networks 206, 210, only the target acknowledgement nodes 204, 208 (not all nodes in the broadcast or multicast) respond/acknowledge whether they received the message without any uncorrectable errors (and/or which data packets 600 need to be retransmitted). Thus, the system 100 provides the advantage of reducing the cost/congestion incurred by this mechanism by reducing the number of nodes that need to send data reception acknowledgement messages back to the root port 230. In some embodiments, the node 204, 208 furthest from the root port 230 (such that it is the last node to receive any broadcast message) is the selected node 204, 208.
In some embodiments, the missing burst acknowledgement message or the received GEM acknowledgement message may be combined into a single message with a subsequent grant message authorizing a window for resending the missing subset of data and/or the entire missing message. In some embodiments, the root port resizes one or more transmission windows granted to the leaf node based on the size of the data with uncorrectable errors received by the root port (in the original message from the leaf node), so that the leaf node can resend that data.
Multi-layer security
Figure 13 illustrates the bus 104 including a multi-layered security architecture comprising a component layer, a network layer, and a behavior layer, in accordance with some embodiments. Optionally, one or more of these layers may be omitted. Thus, the bus 104 of FIG. 13 may be substantially similar to the bus of FIG. 2, except for the differences described herein. As shown in fig. 13, the bus 104 may include a security module 1302, a dedicated security module management Central Processing Unit (CPU) 1304, and one or more behavior monitoring nodes 1306. In some embodiments, there are one or more separate behavior monitoring nodes 1306 in each network 206 and/or sub-network 210 for monitoring the behavior of the nodes 204, 208, 234 of those networks 206/210. Optionally, one or more behavior monitoring nodes 1306 may monitor the behavior of multiple or all of the nodes 204, 208, 234 of the networks 206 and/or sub-networks 210. In some embodiments, each core 200 includes a separate security module 1302 and dedicated security module management CPU 1304 within the core 200. Alternatively, one or more cores 200 may not have a separate security module 1302 and dedicated security module management CPU 1304, and/or the security module 1302 and the dedicated security module management CPU 1304 may be located outside of the core 200 within the bus 104. In some embodiments, each security module 1302 has a separate dedicated security module management CPU 1304 that operates with it. Alternatively, one or more of the dedicated security module management CPUs 1304 may operate with a plurality of different security modules 1302.
The component layer may include the security module 1302, the dedicated security module management CPU 1304, and a debug element 1306. As shown in fig. 14, the security module 1302 may include a memory 1402 (e.g., a non-volatile memory), a one-time programmable (OTP) memory 1404, a random number generator 1406 (e.g., a True Random Number Generator (TRNG)), a key generator 1408 (e.g., a hardware encryption key generation engine), a boot Read Only Memory (ROM) 1410, a Random Access Memory (RAM) 1412, one or more CPUs 1414, and a security module interface 1416. In some embodiments, the module 1302 may include external memory in the form of additional memory 1402' (e.g., additional non-volatile memory) and/or additional RAM 1412'. In such embodiments, the module 1302 may access, read from, or write to the external memory via the interface 1416. The external memory may be located in one or more of the cores 200 and/or elsewhere on the bus 104. In some embodiments, only the key generator 1408 has access to the OTP memory 1404, such that the OTP memory 1404 is isolated from external access. In some embodiments, one or more of the elements of the module 1302 may be omitted or duplicated, and/or different elements may be added.
The OTP memory 1404 is a memory that cannot be reprogrammed or read without damaging the memory, and thus can only be programmed once. Within the module 1302, the OTP memory 1404 is programmed to store one or more master seeds (primary seeds) and/or unique master keys (e.g., an endorsement master key, a storage key, and a platform key) derived from the one or more master seeds, for each core 200 and node 204, 208, 234 of the bus 104. These master seeds and master keys are never shared outside of the module 1302 and may be used within the module 1302 to derive all other security keys (e.g., forming a hierarchical tree of keys) for the nodes/cores to which they have been assigned/associated. In particular, the key generator 1408 may access the master keys to generate secondary keys for one or more nodes and/or cores, which may then be stored in the memory 1402 (and in the additional memory 1402' if the memory 1402 is full). In some embodiments, the master platform key is used to derive one or more of a platform key for each node/core (for network credentials) and a network encryption key (e.g., an AES key) for each node/core for encrypting messages on the bus 104. In some embodiments, the network encryption keys may be originated in each core 200 (and distributed to the nodes coupled to that core). These keys may be changed periodically and/or after the core 200 is restarted. Further, during operation of the core 200, the core 200 and/or the system 100 may change the network encryption keys and assign new keys to the nodes (optionally excluding nodes exhibiting suspicious behavior as indicated by the behavior layer described below). In some embodiments, the network encryption keys belong to a temporary key hierarchy in the module 1302. In some embodiments, the master storage key may be used to derive one or more of a memory 1402, 1402' encryption key for each node/core and a file system encryption key for each node/core. In some embodiments, the master birth/endorsement key may be used to derive one or more identity keys for each node/core for use in the identification/authentication process.
For example, a Root Security Key (RSK) for a node/core may be an RSA key generated (e.g., by the key generator 1408) for the node/core based on one or more master keys (e.g., the birth key) of the node/core; the Storage Key (SK) for a node/core may be an RSA key generated (e.g., by the key generator 1408) for the node/core based on the RSK of the node/core; the signing key (SignK) used to digitally sign messages for the node/core may be a key generated (e.g., by the key generator 1408) for the node/core based on the SK of the node/core; the Root Network Key (RNK) of the node/core may be an RSA key generated (e.g., by the key generator 1408) for the node/core based on the RSK of the node/core; and the Network AES Key (NAK) used to encrypt/decrypt messages for the node/core may be transmitted to the node/core along with the RNK. Other types of secondary keys may alternatively be used and/or derived from the master keys. Each secondary key of each node/core may be stored in the memory 1402, 1402' of the module 1302 in encrypted form, along with its hierarchical relationship to the other keys and/or to its master key. One or more of these keys (other than the master seeds and/or master keys) for each node/core may be reset, redistributed, and/or recalculated by the dedicated security module 1302 periodically and/or in response to the current state (e.g., a detected behavioral state determined by the behavior layer as described below). In some embodiments, one or more of the master and secondary keys may only be used within the security module 1302. In some embodiments, an encryption key may be loaded into the module 1302, decrypted, and saved for later use.
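The parent/child relationships of that hierarchy are sketched below. The kdf() helper, labels, and 32-byte key size are placeholders; the patent describes RSA keys generated by the hardware key generator 1408, which this toy symmetric sketch does not attempt to model.

```c
/* Sketch of the key hierarchy only: each secondary key is derived from its
 * parent, ultimately rooted in a master seed/key held in OTP memory 1404. */
#include <stdint.h>
#include <string.h>

struct seckey { uint8_t bytes[32]; };

/* Placeholder KDF; a real module would use its hardware key generator. */
static struct seckey kdf(const struct seckey *parent, const char *label)
{
    struct seckey out;
    memset(out.bytes, 0, sizeof out.bytes);
    for (size_t i = 0; label[i] != '\0' && i < sizeof out.bytes; i++)
        out.bytes[i] = parent->bytes[i] ^ (uint8_t)label[i];
    return out;
}

void derive_node_keys(const struct seckey *birth_master_key)
{
    struct seckey rsk   = kdf(birth_master_key, "root-security-key"); /* RSK  */
    struct seckey sk    = kdf(&rsk, "storage-key");                   /* SK   */
    struct seckey signk = kdf(&sk,  "signing-key");                   /* SignK*/
    struct seckey rnk   = kdf(&rsk, "root-network-key");              /* RNK  */
    (void)signk; (void)rnk;  /* the NAK is generated separately and delivered
                              * to the node together with the RNK             */
}
```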
Further, the master and/or secondary keys may be used to provide credentials to each node and/or core. In particular, each core may be provided with a certificate authority (e.g., stored in the memory 1402, 1402') for verifying/authenticating the valid cores to which a node may connect (see the mutual authentication procedure below). Similarly, each node may be provided with a network certificate and a birth certificate (e.g., stored in the memory 1402, 1402') for joining one of the networks 206, 210 of the bus 104 and certifying the identity of the node on the bus 104, respectively. Further, the origin software certificate authority may be stored in the OTP memory 1404. The certificate authority and its authentication code may be provided by the original owner of the system 100 (e.g., along with the seeds) and may be used to authenticate software that can be loaded and used on the bus 104 (see the trusted boot process below).
The random number generator 1406 may generate random numbers and/or strings that may be used by the key generator 1408, along with a master seed and/or key, to generate the secondary keys of the key tree for each node 204, 208, 234 and/or core 200. In some embodiments, the key generator 1408 may also generate authentication codes for messages to enable secure communication within the networks 206, 210, and/or may be used to generate hash-based keys for the nodes and/or cores. The security module interface 1416 may provide an interface for communicating with the dedicated security module management CPU 1304 for receiving and responding to system 100 requests.
In some embodiments, the module 1302 includes a reset function that can reset the settings of the security module such that all of the memories 1402, 1402' are erased, thereby removing all security keys stored therein. However, even during a reset, the data (e.g., the master seeds/keys) stored in the OTP memory 1404 is not affected. In some embodiments, the reset function cannot be activated remotely, thus requiring the physical presence of an administrator to reset the security module 1302.
The dedicated security module management CPU 1304 may be isolated from all other CPU subsystems within the system 100 and dedicated to operation with the security module 1302. Thus, the dedicated security module management CPU 1304 provides the only access to the security module 1302 within the system 100. In order for any of the operative elements of the bus 104 to access the security module 1302, they must interface with the security module management CPU 1304, which then communicates with the module 1302 in order to retrieve the desired data.
The component layer may also implement a cascade manager infrastructure and a trusted boot process. In particular, fig. 15 shows that the bus 104 includes a plurality of subsystems divided into a plurality of cascade manager levels, according to some embodiments.
As shown in fig. 15, the highest level may include one or more of the following: the dedicated security module management CPU 1304, the security module 1302, one or more controllers (e.g., microcontroller units (MCUs)) 1502 for performing real-time control of the devices 102, and one or more converters 1504 (e.g., analog-to-digital converters (ADCs), digital-to-analog converters (DACs)). In some embodiments, the controller unit 1502 may incorporate one or more computer system applications or user applications. The second level may include one or more network engines 1506. In some embodiments, one or more additional levels may be added. Each component of each level is provided with access to lower-level resources/services, but a lower-level component cannot directly access/use upper-level resources/services. Instead, if an upper-level resource/service is needed, the lower-level component may send a request (e.g., an interrupt signal) for the needed resource/service to the higher-level component. Thus, the higher-level component may enforce security protocols on the lower-level component by applying those protocols when the lower-level component's requests are granted, executed, or denied. Meanwhile, only the dedicated security module management CPU 1304 can access the security module 1302 (in which the encryption keys and certificates are stored). Alternatively, more or fewer levels and/or components may be used.
The trusted boot process is a secure boot process in which each boot program (e.g., the boot loader of a node or other element of the system 100 and/or the operating system images of the management CPU 1304, the controllers 1502, drivers, user applications, and/or other programs) is authenticated prior to booting the next level of the system, such that programs that cannot be authenticated are prevented from running until authentication can be established. In particular, the memory 1402 of the security module 1302 may store a measurement set (e.g., a hash or other measurement metric) for each program (e.g., each image of the program and/or the boot loader) to be booted on the system 100, as well as an origin certificate authority that may verify the credentials of the booted programs. During manufacture or startup of the bus 104, the origin certificate authority (e.g., provided by the original owner) may be stored in the OTP memory 1404. The measurement set for each program may include: a golden measurement set (e.g., the factory/initial setup); a last measurement set recorded from the last boot attempt; and a current measurement set recorded from the boot of the program currently running on the system 100. Furthermore, instead of overwriting the existing measurement entries each time a program is updated, new entries for the golden, last, and current measurement sets are stored (so that the system can return to a previous measurement set if a user wishes to roll back a subsequent update). In some embodiments, each boot program includes a certificate (e.g., a manufacturer certificate), the boot program itself, and a measurement of the boot program (e.g., a signed code hash). As described below, the certificate and measurement of each boot program need to be verified before the boot program can be executed/booted.
In operation, while stopping the booting of all other programs, the system 100 first uses the certificate authority stored in the OTP memory 1404 to determine whether the boot loader certificate of the boot loader software of the dedicated security module management CPU 1304 is authentic. For example, the certificate may be a signature that can be decrypted using a key verified by the certificate authority. If the certificate is not authentic, the boot is aborted and corrective action is taken (e.g., using a previously stored version, issuing a management alert, etc.). If the certificate is authentic, the system measures the boot software image of the dedicated security module management CPU 1304, stores the result as the last measurement set of the associated entry in the security module 1302, and compares the result with the stored golden measurement set of the entry. If the measurements match (or substantially match within a defined range of inconsistency), the system boots the dedicated security module management CPU 1304 and records the result as the current measurement set of the associated entry. The system may then repeat this pattern to boot each subsequent program (while stopping the booting of other programs): measuring the program in the same manner, storing the result, comparing it to the stored golden measurement set, and booting the program if the results match (or substantially match within a defined range of inconsistency). If any program's measurements do not match (or do not substantially match within a defined range of inconsistency), the measurements may be recalculated and/or the booting of the program may be stopped and/or skipped until the administrator approves the inconsistency or approves booting from a previously stored (e.g., previous version) entry.
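The verify-measure-compare step of that flow might look like the sketch below. verify_cert() and measure_image() are placeholders for the certificate-authority check and the hashing performed by the security module; the exact-match comparison is a simplification of the "substantially match" language above.

```c
/* Sketch of one trusted-boot step: verify the boot program's certificate,
 * hash its image, record the attempt, and boot only on a golden match. */
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define MEAS_LEN 32

struct boot_entry {
    uint8_t golden[MEAS_LEN];   /* factory/initial measurement                */
    uint8_t last[MEAS_LEN];     /* measurement from the last boot attempt     */
    uint8_t current[MEAS_LEN];  /* measurement recorded when program runs     */
};

static bool verify_cert(const void *cert) { (void)cert; return true; }  /* placeholder */
static void measure_image(const void *img, size_t len, uint8_t out[MEAS_LEN])
{ (void)img; (void)len; memset(out, 0, MEAS_LEN); }                     /* placeholder */

bool try_boot(struct boot_entry *e, const void *cert,
              const void *image, size_t len)
{
    if (!verify_cert(cert))
        return false;                      /* abort, take corrective action   */
    measure_image(image, len, e->last);    /* record the attempt              */
    if (memcmp(e->last, e->golden, MEAS_LEN) != 0)
        return false;                      /* mismatch: hold for the admin    */
    memcpy(e->current, e->last, MEAS_LEN); /* program is allowed to boot      */
    return true;
}
```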
In some embodiments, if a subsequent user wants to add additional software that does not have credentials from the original certificate authority, there may be multiple stages of boot loaders, each using a subsequent certificate authority (authorized by the previous certificate authority) to authenticate the credentials of its boot software. Specifically, in such a multi-stage boot process, after the first-stage boot loader software certificate and software measurement (e.g., hash) are authenticated as described above, the first-stage boot loader software is executed and the first-stage certificate authority (e.g., provided by the original bus 104 owner and stored in the OTP memory 1404) generates and loads a new certificate authority into the RAM 1412, 1412' of the security module 1302. This new certificate authority is signed by the original certificate authority and issues a second-stage boot loader software certificate. This second-stage boot loader software certificate may be provided with the second-stage boot loader software so that it can be authenticated by the security module 1302 (using the new certificate authority rather than the original certificate authority) in the same manner as the first-stage boot loader software certificate is verified as described above.
If the second-stage boot loader software certificate is verified, a software measurement (e.g., a hash) is made of the second-stage boot loader software to determine whether it substantially matches the second-stage golden measurement (or, if this is the first boot, the measurement is stored as the golden measurement). If the measurements substantially match, the second-stage boot loader software is executed. If any of the authentications fail, the boot of the boot loader software may be aborted or retried. This pattern can then continue for any subsequent stages, with each previous stage generating a new certificate authority and software certificate for the next stage in the chain. Thus, the system may ensure that each program running on the bus 104 is authenticated.
The debug element 1306 may be implemented via one or more debug access ports (e.g., Joint Test Action Group (JTAG) ports) and/or remotely via the network 210, along with a debug control interface (IF) and a debug controller. The debug element requires authentication before access to the bus 104 is enabled. In particular, the debug element requires debug credentials issued by the component manufacturer (e.g., a node manufacturer) to enable the debug control interface within the SoC (e.g., the core 200). With respect to debugging of the security module 1302, the debug control IF may be enabled via the dedicated security module management CPU 1304 and may only be valid for a predetermined period of time and/or in other particular pre-programmed states. In some embodiments, the debug element 1306 is disabled at runtime (e.g., to prevent runtime hacking).
Thus, the component layer provides the advantage of preventing unknown or unauthorized components from communicating on, or otherwise disrupting the operation of, the bus 104, including physical and software tampering attempts. Further, the component layer may help prevent power rail attacks by masking the power consumption that could otherwise be analyzed to recover security keys.
The network layer includes implementations of bidirectional node/kernel authentication and/or message encryption protocols. Bidirectional node/kernel authentication may be performed on the bus 104 whenever a node 204, 208, 234 joins the bus 104 (e.g., when a device 102 is coupled to the node 204, 208, 234), as needed, and/or in response to behavior patterns detected by the behavior layer. Before the process begins, the identifiers (e.g., networking certificates) of the new nodes are stored in a database in the memory of the kernels 200 with which the nodes 204, 208, 234 wish to communicate, and the identifiers and/or certificates (e.g., certificate authorities) of those kernels 200 are stored on the nodes 204, 208, 234. After the node/kernel is authenticated, the kernel 200 credentials are stored on the nodes 204, 208, 234 for future communication/authentication. These certificates may be kernel/node manufacturer certificates that are provided to the security module 1302, and the security module 1302 may then provide them (or derivatives thereof generated using one or more master seeds and/or keys of the kernel/node) to the kernel/node. In particular, each kernel 200 may store the identifiers and/or certificates of all nodes 204, 208, 234 within the networks 206, 210 to which the kernel 200 belongs, and each node 204, 208, 234 may store the identifiers and/or certificates of all kernels 200 within the networks 206, 210 to which the node 204, 208, 234 belongs.
Figure 16 illustrates a method of implementing a bidirectional node/kernel authentication protocol, in accordance with some embodiments. As shown in fig. 16, in step 1602, a node 204,208, 234 requests to join (or reestablish) communication with the kernel 200 under a policy (e.g., public, private, or otherwise) by sending a request message to the kernel 200 that includes an identifier of the node 204,208, 234. The policy may define the privilege level provided to the node 204,208, 234 and/or the encryption level required for the node 204,208, 234 to communicate. In step 1604, the kernel 200 verifies the identity of the node 204,208, 234 by comparing the received identifier with identifiers stored in an identifier database of the kernel 200. In step 1606, if the identifier of the node 204,208, 234 is verified, the kernel 200 sends a certificate request message to the node 204,208, 234. In step 1608, the node 204,208, 234 sends the node certificate to the kernel 200. In some embodiments, the node 204,208, 234 selects which of the stored certificates to send based on the policy requested in the request message of step 1602.
In step 1610, the kernel 200 verifies the node certificate (and the node can prove its ownership of the certificate) by comparing the received certificate with the certificate of the node stored in the certificate database of the kernel 200. In step 1612, if the credentials of the nodes 204,208, 234 are verified, the kernel 200 sends the kernel credentials to the nodes 204,208, 234. In some embodiments, core 200 selects which of the stored certificates to send based on the policy requested in the request message of step 1602. In step 1614, the node 204,208, 234 verifies the kernel certificate (and the kernel can prove its ownership of the certificate) by comparing the received certificate with the kernel certificate of the kernel 200 stored in the certificate database of the node 204,208, 234. In step 1616, if the certificate of the core 200 is verified, the node 204,208, 234 sends a message encryption key request message to the core 200. In some embodiments, the credential request message and its verification are based on policies, such that different policies are associated with different credentials, and authentication of different credentials requires submission of credentials associated with the correct policy.
In step 1618, the kernel 200 generates a new encryption key or retrieves an encryption key (e.g., a NAK) stored in the security module 1302 (e.g., via a request to the security module management CPU 1304). In step 1620, the kernel 200 sends the encryption key to the node 204, 208, 234. In step 1622, the node 204, 208, 234 receives and stores the encryption key and sends the encryption key to the security module 1302. In some embodiments, the kernel 200 encrypts the encryption key (via the security module management CPU 1304) using the Root Network Key (RNK) of the kernel 200 and the node 204, 208, 234 before transmitting it to the node 204, 208, 234, so that other nodes cannot read the encryption key during transmission. In step 1624, the node 204, 208, 234 sends an acknowledgement to the kernel 200 that the encryption key was received. Thus, the system 100 enables each kernel/node pair to establish (and recreate) an encryption key (either for use only by that kernel/node pair, or shared by a set of one or more nodes and/or kernels) for use in encrypting/decrypting communications between the kernel 200 and the node 204, 208, 234 over the bus 104.
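A condensed view of the fig. 16 exchange, from the kernel's side, is sketched below. All helpers are placeholders standing in for the security-module operations described above; the control flow simply mirrors the numbered steps.

```c
/* Sketch of the mutual-authentication and key-delivery sequence of fig. 16. */
#include <stdbool.h>
#include <stdint.h>

struct auth_ctx { uint16_t node_id; uint8_t policy; };

static bool id_known(uint16_t node_id)           { (void)node_id; return true; }
static bool node_cert_valid(const void *cert)    { (void)cert;    return true; }
static const void *core_cert_for(uint8_t policy) { (void)policy;  return "";   }
static void send_wrapped_nak(uint16_t node_id)   { (void)node_id; } /* NAK wrapped with the RNK */

bool authenticate_node(struct auth_ctx *ctx, const void *node_cert)
{
    if (!id_known(ctx->node_id))        /* step 1604: identifier lookup        */
        return false;
    if (!node_cert_valid(node_cert))    /* step 1610: verify the node cert     */
        return false;
    (void)core_cert_for(ctx->policy);   /* step 1612: send the kernel cert     */
    send_wrapped_nak(ctx->node_id);     /* steps 1618-1620: deliver the key    */
    return true;                        /* node acknowledges in step 1624      */
}
```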
Prior to this authentication process, a new node 204, 208, 234 joining the bus 104 may listen to broadcast messages from the core 200, but is restricted from sending/bursting messages on the bus 104 until it is authenticated. When listening, the new node 204, 208, 234 will not be able to decrypt (e.g., AES-decrypt) the encrypted security policy messages, but may learn from the unencrypted public policy messages. Furthermore, the authentication process described above may require system administrator privileges to perform.
The message encryption protocol enables the nodes 204, 208, 234 and/or cores 200 of the system 100 to encrypt all communications over the bus 104 (if subject to a security policy) using the encryption key (e.g., an AES key) assigned to the node 204, 208, 234 and/or core 200 by the management CPU 1304 and/or the security module 1302 during the mutual authentication procedure. Alternatively, if the communications are not sensitive, they may be subject to a public policy in which encryption is omitted. The encryption key used to encrypt messages may be unique for each communicating node/core pair, such that different node/core pairs may use different encryption keys to encrypt their communications. Thus, the core 200 may store multiple encryption keys, each associated with one or more different nodes 204, 208, 234 and used to encrypt/decrypt messages from those nodes 204, 208, 234. Similarly, a node 204, 208, 234 may store multiple encryption keys, each associated with one or more different cores 200 and used to encrypt/decrypt messages from those cores 200. Thus, even if a decryption key is compromised, an intruder can only decrypt the messages of the nodes 204, 208, 234 and/or cores 200 using that key, not messages encrypted using other keys. The network layer of the system 100 therefore provides the advantage that a separate key may be used for each node/core communication combination and/or that encryption keys may be shared by some or all of the nodes/cores, thereby customizing the security level of the system 100. In addition, the network layer provides the advantage of mutual authentication, ensuring that nodes and cores are authenticated before joining the network and that subsequent communications are encrypted to avoid unwanted snooping.
The behavior layer includes one or more behavior monitoring nodes (or cores) 1308 that may monitor the behavior of the nodes 204,208, 234 and/or cores 200 within the bus 104 (or a subset thereof) to detect and/or respond to anomalous behavior. In some embodiments, monitoring node 1308 is located within one or more of nodes 204,208, 234 and/or core 200. Alternatively or additionally, monitoring node 1308 may be separate from nodes 204,208, 234 and/or core 200.
In operation, the monitoring node 1308 monitors and stores the behavior of one or more of the nodes 204, 208, 234 (and thus of the devices 102 coupled to them) and/or cores 200 within the bus 104. The monitoring node 1308 then compares the monitored behavior over a period to a set of stored behavior parameters or patterns to determine whether the monitored behavior over that period is within the acceptable values of the behavior parameters (of that node/core). If the monitored behavior is not within the acceptable values of the behavior parameters, the monitoring node 1308 may take one or more security actions with respect to the node/core. These actions may include sending an alert or error message indicating the detected behavior, suspending operation of the node/core, requiring the node/core to re-authenticate with the system (e.g., via the authentication process of fig. 16), changing the encryption keys used by all other nodes/cores (so that the "misbehaving" node/core can no longer encrypt/decrypt messages on the system), and suspending operation of all or part of the bus 104, the devices 102, and/or the system. The monitoring node 1308 may include a table that associates one or more actions with a node/core and its behavior parameters, such that the action taken by the monitoring node 1308 may be based on how the monitored behavior deviates from the behavior parameters as indicated by the table. In some embodiments, one or more actions are taken only if a predetermined number or percentage of the monitoring nodes 1308 all indicate that the subject node/core's behavior (as independently monitored by those individual monitoring nodes 1308) is outside of the node/core's behavior parameters.
The monitored behavior may include message frequency, message type, power usage, message destination, message time, message size, congestion level, and/or other characteristics of the behavior of the nodes and/or cores described herein. Accordingly, the stored behavior parameters may include values, ranges, thresholds, ratios, or other metrics of one or more monitored behavior features and/or combinations thereof. The stored behavior parameters may be pre-programmed for each monitoring node 1308 (or shared by multiple monitoring nodes 1308) such that each type of node 204, 208, 234 and/or core 200 that it monitors has an associated set of behavior parameters. Alternatively or additionally, one or more monitoring nodes 1308 may include artificial intelligence or self-learning functionality, wherein the node 1308 generates and/or adjusts the behavior parameters for each type of node 204, 208, 234 and/or core 200 that it monitors based on its behavior. For example, default behavior parameters may be pre-programmed and then periodically adjusted based on the behavior monitored during a time period.
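A minimal check of one such parameter is sketched below. The simple min/max range model, the enum of actions, and the names are assumptions; a real table could key any combination of the monitored features listed above.

```c
/* Sketch: compare an observed metric (e.g., messages per second) against the
 * stored parameter range for that node/core type and return the associated
 * security action when the range is violated. */
enum sec_action { ACT_OK, ACT_ALERT, ACT_REAUTH, ACT_ROTATE_KEYS, ACT_SUSPEND };

struct behavior_param {
    double          min;      /* acceptable lower bound                       */
    double          max;      /* acceptable upper bound                       */
    enum sec_action action;   /* action taken when the bound is violated      */
};

enum sec_action check_behavior(const struct behavior_param *p, double observed)
{
    if (observed >= p->min && observed <= p->max)
        return ACT_OK;        /* within the node/core's behavior parameters   */
    return p->action;         /* e.g., alert, re-authenticate, rotate keys    */
}
```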
Thus, the behavior layer provides the advantage of detecting when a node and/or core has been compromised due to key/certificate leaks (e.g., illegitimate software running on it using a legitimate certificate) or due to errors or other failures that result in misbehavior.
FIG. 17 illustrates a method of operating an intelligent controller and sensor intranet bus, according to some embodiments. As shown in fig. 17, in step 1702, for each subsystem of the bus 104, the bus 104 performs a trusted boot process that includes measuring the current boot image of the subsystem and not booting the subsystem unless the measurement of the current boot image matches the measurement of the subsystem's boot image stored in the security module. In step 1704, the nodes 204, 208, 234 and the kernel 200 perform a mutual authentication process: one of the nodes 204, 208, 234 verifies the identity of the kernel 200 based on derivatives of one or more master seeds and/or keys of the kernel 200, and the kernel 200 verifies the identity of one of the devices 102 coupled to that node 204, 208, 234 based on derivatives of one or more master seeds and/or keys of that node 204, 208, 234. In step 1706, the behavior monitoring node 1308 stores a set of behavior parameters and actions corresponding to a group of one or more of the nodes 204, 208, 234 and the kernel 200, and, for each member of the group: monitors and records the behavior of that member; compares the monitored behavior with the behavior parameters of the set that correspond to that member; and, if the monitored behavior does not satisfy the behavior parameters, performs one or more of the actions of the set associated with those behavior parameters. Thus, this method provides the advantage of ensuring security of the system 100 at the component, network, and behavior levels.
In some embodiments, after one of the devices 102 is made available to transmit messages, the node/kernel periodically re-performs the mutual authentication process and disables operation of that device 102 on the bus 104 if the mutual authentication process fails. In some embodiments, if the mutual authentication process is successful, the kernel 200 determines an encryption key for the device 102 and its node, and the kernel and node/device use the encryption key to encrypt and decrypt messages. In some embodiments, each time a periodic re-execution of the mutual authentication process succeeds, the kernel 200 determines a new encryption key for the device/node, and messages are encrypted and decrypted using the new encryption key.
Device module
In some embodiments, the device 102 may be a device module. Fig. 9 illustrates a smart flexible actuator (SCA) and sensor module 900 according to some embodiments. The SCA and sensor module 900 may be one or more of the devices 102 of the machine automation system 100 described herein. In some embodiments, the smart flexible actuator (SCA) and sensor module 900 is allowed to deviate from its own equilibrium position depending on an applied external force, where the equilibrium position of a flexible actuator is defined as the actuator position at which the actuator produces zero force or zero torque. As shown in fig. 9, the SCA and sensor module may include one or more motors 902, one or more sensors 904, and/or a control board 906 (for controlling the motors 902 and/or sensors 904) coupled together via a device network 908. In particular, this type of module 900 may perform machine automation tasks requiring high bandwidth and/or low latency (e.g., coupled with one or more controller devices 102 via the bus 104). The motors 902 may include drive motors to control actuation of the module 900 (e.g., movement of a robotic arm), and the sensors 904 may include image and/or magnetic sensors to input image data and/or detect a position of the module 900 (e.g., a current position of a robotic arm, a position of an image sensor, a sensed image from the front of an autonomous vehicle, or other sensed data).
FIGS. 10A-C illustrate variations of the control boards 906, 906', 906", according to some embodiments. As shown in FIG. 10A, the control board 906 for a multi-connection mode module 900 may include a system on chip (SoC) 1002, a transimpedance amplifier (TIA) and/or laser driver (LD) 1004, a bi-directional optical sub-assembly (BOSA) 1006, a power regulator 1008, a motor driver 1010, a compliant actuator motor and power control connector 1012, a motor control signal transceiver 1014, one or more sensors 1016, an optical splitter 1018, an input power connector 1020, one or more output power connectors 1022, a first fiber optic connector 1024, and one or more second fiber optic connectors 1026, all operatively coupled together. In particular, the BOSA 1006, the optical splitter 1018, and the fiber optic connectors 1024, 1026 are coupled together via fiber optic cables. Optionally, one or more of the above elements may be omitted, their number may be increased or decreased, and/or other elements may be added.
The control board 906 may be a flexible printed circuit board. The BOSA 1006 may include a transmitter optical sub-assembly (TOSA), a receiver optical sub-assembly (ROSA), and a wavelength division multiplexing (WDM) filter so that it can support two wavelengths on each fiber using bi-directional techniques. In some embodiments, the BOSA 1006 is a hybrid silicon photonics BOSA. The motor driver 1010 may be a pre-driver, a gate driver, or another type of driver. The compliant actuator motor and power control connector 1012 may transmit control and/or power signals to the motor 902. The motor control signal transceiver 1014 may receive motor control signals via the bus 104 and/or transmit motor, sensor, and/or other data to one or more controller devices 102. The sensors 1016 may include magnetic sensors and/or other types of sensors. For example, the sensors 1016 may sense the position and/or orientation of the module 900 and provide the position data as feedback to the SoC 1002 and/or to a controller device 102 coupled with the module 900 via the bus 104. The optical splitter 1018 may be built into the control board 906. The input power connector 1020 receives input power for the control board 906. The output power connectors 1022 are used to supply, transfer, and/or forward power to one or more other boards/modules 900.
The first fiber optic connector 1024 is coupled to the optical splitter 1018, which separates the cable into two or more cables. One cable is coupled to the BOSA 1006 for transmitting signals to and from other elements of the board 906, and each of the remaining cables is coupled to a different one of the one or more second fiber optic connectors 1026. The first fiber optic connector 1024 and/or the second fiber optic connectors 1026 can be pigtail fiber connection points and/or connectors. In particular, a pigtail fiber connection point and/or connector may comprise a single, short, typically tight-buffered optical fiber having a fiber optic connector pre-installed at one end and a length of bare fiber at the other end. The stripped end of the pigtail can be fusion spliced to a fiber of a multi-fiber cable. Other types of optical connection points and/or connectors may alternatively be used.
In controlling operations within the boards 906, 906', 906", the motor driver 1010 may receive pulse width modulated (PWM) control signals generated by the SoC 1002 (and/or by a controller device 102 via the SoC 1002) for controlling the torque, speed, and/or other operations of the motor 902 of the module 900 (via the compliant actuator motor and power control connector 1012). Additionally, the sensors 1016, the sensors 904, and/or the driver 1010 may provide motor and/or sensor state feedback to the SoC 1002, so that the SoC 1002 (and/or a controller device 102 via the SoC 1002) can adjust the control signals based on the feedback in order to control operation of the motor 902 and/or the sensors 904. For example, the driver 1010 may provide motor current sensor feedback including an A-phase current value, a B-phase current value, and a C-phase current value, wherein an internal analog-to-digital converter (ADC) of the SoC 1002 converts these values to digital values, and the SoC 1002 (and/or a controller device 102 via the SoC 1002) adjusts the PWM control signals transmitted to the driver 1010 based on the motor current sensor feedback received from the driver 1010, thereby adjusting the speed, torque, and/or other characteristics of the motor 902.
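A compressed C sketch of this feedback path follows; the pretend ADC reads, the proportional gain, and the duty-cycle scaling are invented for illustration and stand in for the SoC's actual ADC peripheral and PWM generator.

    #include <stdint.h>
    #include <stdio.h>

    /* Stand-in for reading one digitized phase-current value (A, B or C)
     * from the motor driver's current-sense feedback via the SoC's ADC. */
    static int32_t read_phase_current_ma(int phase) { return 100 * (phase + 1); }

    /* One iteration of the loop described above: digitize the three phase
     * currents, compare against a current target, and nudge the PWM duty
     * cycle with a bare proportional step (duty expressed in 0.1 %). */
    static uint16_t update_pwm(uint16_t duty, int32_t target_ma)
    {
        int32_t ia = read_phase_current_ma(0);
        int32_t ib = read_phase_current_ma(1);
        int32_t ic = read_phase_current_ma(2);
        int32_t measured = (ia + ib + ic) / 3;       /* crude magnitude estimate */
        int32_t error = target_ma - measured;

        int32_t next = (int32_t)duty + error / 8;    /* proportional gain of 1/8 */
        if (next < 0)    next = 0;
        if (next > 1000) next = 1000;
        return (uint16_t)next;
    }

    int main(void)
    {
        uint16_t duty = 500;
        for (int i = 0; i < 3; i++) {
            duty = update_pwm(duty, 250);
            printf("cycle %d: duty = %u\n", i, duty);
        }
        return 0;
    }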
In operation within the system 100, the first fiber optic connector 1024 enables the board/module 900 to be coupled to the bus 104 via a fiber optic cable, while the splitter 1018 and the second fiber optic connectors 1026 enable the board/module 900 to be coupled to one or more additional boards/modules 900 via additional fiber optic cables (e.g., for receiving control signals from, and/or transmitting data signals to, one or more controller devices 102 coupled to other ports 99 of the bus 104). Thus, as shown in FIG. 11A, the boards/modules 900 may be coupled to a port 99 of the bus 104 as a serial cascade, such that only a single port 99 is needed to couple a plurality of boards/modules 900. Specifically, as shown in FIG. 11A, one board 906 is optically coupled (via fiber optic cable) from its first fiber optic connector 1024 to the port 99, and the first fiber optic connector 1024 of each subsequent board 906 is coupled (again via fiber optic cable) to a second fiber optic connector 1026 of another board 906. Indeed, as shown in FIG. 11A, if a board 906 includes a plurality of second fiber optic connectors 1026, the cascade may branch into a tree-like structure in which a single board/module 900 is optically coupled to a plurality of other boards/modules 900. The boards/modules 900 optically coupled in this manner may likewise share power, with one or more modules 900 receiving power from a power source and one or more other modules 900 receiving power by coupling their input power connectors 1020 to the output power connectors 1022 of another module 900.
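The cascade/tree wiring described above can be pictured as a small tree data structure with one record per board/module, as in the sketch below; the structure and field names are illustrative only and are not part of this disclosure.

    #include <stdio.h>
    #include <stddef.h>

    /* Each module has one upstream link (its first fiber connector 1024)
     * and up to MAX_DOWNSTREAM downstream links (second fiber connectors 1026). */
    #define MAX_DOWNSTREAM 2

    struct module {
        const char *name;
        struct module *downstream[MAX_DOWNSTREAM];
    };

    /* Count every module reachable from the module attached to a port,
     * i.e. the whole cascade/tree served by that single port 99. */
    static int count_reachable(const struct module *m)
    {
        if (m == NULL) return 0;
        int n = 1;
        for (int i = 0; i < MAX_DOWNSTREAM; i++)
            n += count_reachable(m->downstream[i]);
        return n;
    }

    int main(void)
    {
        struct module c = { "module-C", { NULL, NULL } };
        struct module b = { "module-B", { NULL, NULL } };
        struct module a = { "module-A", { &b, &c } };   /* A branches to B and C */
        printf("modules served by this port: %d\n", count_reachable(&a));
        return 0;
    }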
Alternatively, as shown in FIG. 10B, the control board 906' for a single-connection mode module 900 may omit the one or more second fiber optic connectors 1026 and/or the one or more output power connectors 1022. In some embodiments, as shown in FIG. 10C, a control board 906" for a single-connection mode image sensor module 900 may further include one or more compliant actuator motors 1028 and one or more image or other types of sensors 1030 (e.g., camera, lidar, magnetic, ultrasonic, infrared, radio frequency). In such embodiments, the motor 1028 can control the movement of the sensor 1030 while the sensor 1016 detects the position and/or orientation of the motor 1028 and/or the sensor 1030. Optionally, the control board 906" may be for a multi-connection mode image sensor module 900 that further includes one or more second fiber optic connectors 1026 and/or one or more output power connectors 1022.
As shown in FIG. 11A, these single-connection mode modules 900 and/or boards 906' and 906" may be coupled to a cascade or tree formed by multi-connection mode modules 900 and/or coupled in parallel with the bus 104. Additionally, as shown in FIG. 11B, the system 100 may include one or more external optical splitters 1102, wherein boards/modules 906, 906', 906" configured in serial cascades, trees, and/or in parallel may be further connected in parallel and/or in series when coupled to the bus 104 using the external optical splitters 1102. For example, as shown in FIG. 11B, an optical splitter 1102 is used to couple the cascaded output of several modules 900, one or more individual modules 900, and another splitter 1102 to a single port 99. Although the splitters 1102 are shown as 1:4 splitters in FIG. 11B, they can have any 1:N ratio as desired. Also, although only the modules 906, 906', 906" are shown coupled to the bus 104 in FIGS. 11A and 11B, it should be understood that any combination of other devices 102 may be coupled to the bus 104 along with the modules. For example, one or more controller devices 102 may be coupled to the bus 104 for receiving data from and issuing commands to the modules.
Thus, the module 900 provides the following advantages: it achieves ultra-high throughput and data bandwidth and can support 10 to 100 times the bandwidth and longer distances compared to other modules. In particular, the use of optical communication and serial cascade coupling enables the module 900 to provide fast data transfer and ultra-low latency without interference from electromagnetic interference (EMI). Furthermore, the module 900 is particularly advantageous in the robotics, industrial automation, and autonomous vehicle fields, because it can handle the high bandwidth and low latency requirements of sensor data in these fields.
FIG. 12 illustrates a method of operating a controller and sensor bus including a plurality of ports for coupling with a plurality of external machine automation devices of a machine automation system, in accordance with some embodiments. As shown in FIG. 12, in step 1202, one or more controller devices 102 are coupled to one or more ports 99 of the bus 104. In step 1204, the first fiber optic connectors 1024 of one or more SCA and sensor modules 900 are coupled with one or more of the ports 99. In step 1206, messages are relayed over the bus 104, via the one or more central transmission networks 206, between the controller devices 102 and the SCA and sensor modules 900. In step 1208, the control board 906 adjusts the operation of the SCA and sensor module 900 based on the messages received from the controller devices 102. In some embodiments, each of the SCA and sensor modules 900 is coupled directly in parallel to one of the ports 99 via a fiber optic cable. In some embodiments, coupling the SCA and sensor modules 900 includes coupling the SCA and sensor modules 900 in parallel to an optical splitter 1102 and coupling the optical splitter 1102 to a port 99 via a fiber optic cable. In some embodiments, coupling the SCA and sensor modules 900 includes coupling the first fiber optic connector 1024 of a first SCA and sensor module 900 to one of the ports 99 via a fiber optic cable, and coupling a second fiber optic connector 1026 of the first SCA and sensor module 900 to the first fiber optic connector 1024 of a second SCA and sensor module 900.
The system 100 and the machine automation controller and sensor bus 104 implementing a dynamic burst-to-broadcast transmission network have many advantages. In particular, they provide: a simple cable system and connections; elimination of significant electromagnetic interference (EMI) effects through the use of fiber optic cable; guaranteed low latency for node-to-node communications; high throughput bandwidth (10, 25, 100 Gbps or more) for node-to-node transmission; node-to-node distances that can be extended up to 20 km; low power consumption through a passive optical network structure; industrial QoS without traffic congestion through a centralized DBA scheduling mechanism; a built-in HARQ mechanism that ensures successful GEM transmissions between nodes; and a unified software image for the entire Ethernet system (including all the gates, nodes, and root ports), which simplifies the software architecture, shortens the product development cycle, and enables system-level debugging, monitoring, and troubleshooting to be performed remotely.
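For readers unfamiliar with the retransmission mechanism referenced here (and set out in detail in the claims below), the sketch that follows shows the general bitmap-driven selective-resend idea in simplified form; the packet count, field names, and bitmap width are illustrative assumptions and do not reflect the disclosed message formats.

    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define NPKT 8

    /* The leaf keeps a copy of every packet it bursts; the root replies with
     * one bit per packet, and only packets flagged as lost are resent in the
     * next granted transmission window. */
    struct leaf_copy {
        bool pending[NPKT];   /* true while a packet still awaits acknowledgement */
    };

    /* Apply the root's loss/reception bitmap: bit i set = packet i received. */
    static void apply_receipt_bitmap(struct leaf_copy *c, uint8_t bitmap)
    {
        for (int i = 0; i < NPKT; i++)
            if (bitmap & (1u << i))
                c->pending[i] = false;   /* received: remove it from the leaf copy */
    }

    /* Resend whatever is still pending in the next transmission window. */
    static void resend_pending(const struct leaf_copy *c)
    {
        for (int i = 0; i < NPKT; i++)
            if (c->pending[i])
                printf("resending packet %d\n", i);
    }

    int main(void)
    {
        struct leaf_copy c;
        for (int i = 0; i < NPKT; i++) c.pending[i] = true;   /* initial burst sent */
        apply_receipt_bitmap(&c, 0xEB);   /* root reports packets 2 and 4 lost */
        resend_pending(&c);
        return 0;
    }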
The present invention has been described in terms of specific embodiments incorporating details to facilitate understanding of the principles of construction and operation of the invention. Reference herein to specific embodiments and details thereof is not intended to limit the scope of the claims appended hereto. It will be apparent to those skilled in the art that various other modifications may be made in the embodiments chosen for illustration without departing from the spirit and scope of the invention as defined in the claims. For example, although the bus is described herein as operating within a machine automation system, it should be understood that the bus may operate with other types of systems and their devices to facilitate communication between those devices. In addition, the discussion herein regarding a particular type of node may refer to any type of node discussed herein, including virtual nodes and gates acting on behalf of a node. Further, it should be understood that operations described herein as performed by or for the nodes 204, 208, 234 may be operations performed by or for the devices 102 coupled to (e.g., corresponding to) the nodes 204, 208, 234.

Claims (58)

1. A machine automation system for controlling and operating an automation machine, the system comprising:
a controller and sensor bus, the controller and sensor bus comprising:
at least one central processing core comprising one or more root ports, each root port having a root validation engine;
one or more transport networks, each transport network comprising a plurality of leaf nodes and being directly coupled to the core via a different one of the root ports, each leaf node comprising a leaf validation engine and a leaf node memory; and
a plurality of input/output ports, each input/output port coupled to one of the leaf nodes; and a plurality of external machine automation devices, each external machine automation device coupled to one of the leaf nodes via one or more of the input/output ports coupled to the one of the leaf nodes;
wherein:
one of the root ports sending an authorization message to one of the leaf nodes coupled to the one of the root ports indicating a transmission window;
one of the leaf nodes sending a data message comprising a plurality of data packets having destination information and an acknowledgement request indicator to one of the root ports within the transmission window, and the leaf acknowledgement engine of the one of the leaf nodes storing a leaf copy of the data message comprising the plurality of data packets; and
if the data message received by one of the root ports does not have any uncorrectable errors, the root validation engine of the one of the root ports sends a data receipt message to one of the leaf nodes that removes the leaf copy based on receiving the data receipt message.
2. The system according to claim 1, wherein said root validation engine of one of said root ports sends a data loss message to one of said leaf nodes if said data message is not received by said one of said root ports within said transmission window, said one of said leaf nodes resending said data message including all of said data packets using said leaf copy in a subsequent transmission window granted to said one of said leaf nodes by said one of said root ports in a subsequent grant message.
3. The system of claim 1, wherein if one of the root ports receives the data message having an uncorrectable error in the subset of data packets, the root acknowledgement engine of the one of the root ports sends a data portion receipt message to one of the leaf nodes, the data portion receipt message including packet loss/receipt information identifying the subset of data packets that need to be resent, and a subsequent transmission window authorized for the one of the leaf nodes to resend the subset of data packets.
4. The system of claim 3, wherein in response to receiving the data portion receive message, one of the leaf nodes removes from the leaf copy data packets that are not part of the subset based on the loss/receive information.
5. The system of claim 4, wherein the loss/reception information comprises a bitmap comprising a bit for each packet of the data message, the value of the bit identifying whether the packet needs to be retransmitted.
6. The system of claim 5, wherein the loss/reception information comprises a start of sequence value and an end of sequence value, the start of sequence value and the end of sequence value identifying a range of the data packets that do not need to be retransmitted.
7. The system according to claim 6, wherein after expiration of a timer associated with each subset, said one of said leaf nodes resends said subset in a new data message to said one of said root ports in said subsequent transmission window granted to said one of said leaf nodes by said one of said root ports in a subsequent grant message.
8. The system of claim 7, wherein the new data message comprises one or more other data packets that are not part of the data message other than the subset.
9. The system according to claim 1, wherein one of the leaf nodes is part of a plurality of nodes of a first one of the networks, and wherein the root validation engine of one of the root ports does not send the data receive message to one of the leaf nodes if the one of the root ports receives the data message without any uncorrectable errors and the destination information of the data message indicates that the data message needs to be broadcast by the one of the root ports to all leaf nodes in the first network.
10. The system of claim 9, wherein in response to receiving the data message broadcast from one of the root ports within the first network, one of the leaf nodes determines that one of the leaf nodes is the source of the data message and removes any packets found in the data message from the leaf copy such that the any packets are not resent to one of the root ports.
11. The system of claim 1, wherein one of the root ports sends the data message to one or more of the other leaf nodes when the destination information indicates that the data message is to be sent to the other leaf nodes, and wherein if one of the root ports does not receive a leaf data receipt message from one or more of the other leaf nodes, the root acknowledgement engine stores a root copy of the data message including the plurality of data packets and resends one or more of the data packets to the other leaf nodes in a subsequent data message, the leaf data receipt message indicating that the received one or more data packets do not have any uncorrectable errors.
12. The system of claim 11, wherein one of the root ports selects a target node of the leaf nodes of the first network, and wherein only the target node is configured to respond with the leaf data reception message when the other leaf nodes are a plurality of the leaf nodes of the first network.
13. The system according to claim 12, wherein said target node is selected based on which of said leaf nodes of said first network is the last one to receive a message broadcast by one of said root ports over said first network.
14. A controller and sensor bus, the bus comprising:
at least one central processing core comprising one or more root ports, each root port having a root validation engine;
one or more transport networks, each transport network comprising a plurality of leaf nodes and being directly coupled to the core via a different one of the root ports, each leaf node comprising a leaf validation engine and a leaf node memory; and
a plurality of input/output ports, each input/output port coupled to one of the leaf nodes;
wherein:
one of the root ports sending an authorization message to one of the leaf nodes coupled to the one of the root ports indicating a transmission window;
one of the leaf nodes sending a data message comprising a plurality of data packets having destination information and an acknowledgement request indicator to one of the root ports within the transmission window, and the leaf acknowledgement engine of the one of the leaf nodes storing a leaf copy of the data message comprising the plurality of data packets; and
if the data message received by one of the root ports does not have any uncorrectable errors, the root validation engine of the one of the root ports sends a data receipt message to one of the leaf nodes that removes the leaf copy based on receiving the data receipt message.
15. The bus of claim 14, wherein if one of said root ports does not receive said data message within said transmission window, said root acknowledgement engine of one of said root ports sends a data loss message to one of said leaf nodes, wherein one of said leaf nodes resends said data message including all of said data packets using said leaf copy in a subsequent transmission window granted to one of said leaf nodes by one of said root ports in a subsequent grant message.
16. The bus of claim 14, wherein if one of the root ports receives the data message having an uncorrectable error in the subset of the data packets, the root acknowledgement engine of the one of the root ports sends a data portion receipt message to one of the leaf nodes, the data portion receipt message including packet loss/receipt information identifying the subset of the data packets that need to be resent, and a subsequent transmission window granted to the one of the leaf nodes for resending the subset of the data packets.
17. The bus of claim 16, wherein in response to receiving the data portion receive message, one of the leaf nodes removes from the leaf copy data packets that are not part of the subset based on the loss/receive information.
18. The bus of claim 17, wherein the loss/receive information comprises a bitmap comprising a bit for each packet of the data message, the value of the bit identifying whether the packet needs to be resent.
19. A bus as claimed in claim 18, wherein the loss/reception information comprises a start of sequence value and an end of sequence value, the start of sequence value and the end of sequence value identifying a range of the data packets that do not need to be retransmitted.
20. The bus of claim 19, wherein one of the leaf nodes is configured to: after expiration of a timer associated with each subset and in the subsequent transmission window, resending the subset to one of the root ports in a new data message, wherein the subsequent transmission window is granted by the one of the root ports to the one of the leaf nodes in a subsequent grant message to resend the subset.
21. The bus of claim 20, wherein the new data message comprises one or more other data packets that are not part of the data message other than the subset.
22. The bus of claim 14, wherein one of the leaf nodes is part of a plurality of nodes of a first one of the networks, and wherein the root acknowledgement engine of one of the root ports does not send the data receive message to one of the leaf nodes if the one of the root ports receives the data message without any uncorrectable errors and the destination information of the data message indicates that the data message needs to be broadcast by the one of the root ports to all leaf nodes in the first network.
23. The bus of claim 22, wherein in response to receiving the data message broadcast from one of the root ports within the first network, one of the leaf nodes determines that one of the leaf nodes is the source of the data message and removes any packets found in the data message from the leaf copy such that the any packets are not re-sent to one of the root ports.
24. The bus of claim 14, wherein one of the root ports sends the data message to one or more of the other leaf nodes when the destination information indicates that the data message is to be sent to the other leaf nodes, and wherein if one of the root ports does not receive a leaf data receipt message from one or more of the other leaf nodes, the root acknowledgement engine stores a root copy of the data message including the plurality of data packets and resends one or more of the data packets to the other leaf nodes in a subsequent data message, the leaf data receipt message indicating that the received one or more data packets do not have any uncorrectable errors.
25. The bus of claim 24, wherein one of said root ports selects a target node of said leaf node of said first network, and wherein only said target node is configured to respond with said leaf data reception message when said other leaf node is a plurality of said leaf nodes of said first network.
26. The bus of claim 25, wherein the target node is selected based on which of the leaf nodes of the first network is the last one to receive a message broadcast by one of the root ports over the first network.
27. A central core of a controller and sensor bus, the controller and sensor bus comprising one or more transport networks, each transport network comprising a plurality of leaf nodes and being directly coupled to the core, each leaf node comprising a leaf validation engine and a leaf node memory, the central core comprising:
at least one central processing unit; and
a non-transitory computer readable memory storing at least one root port coupled with the central processing unit and having a root validation engine, wherein the root port is to:
sending an authorization message indicating a transmission window to one of the leaf nodes coupled to the root port; and
the root acknowledgement engine sends a data reception message to one of the leaf nodes if a data message received by the root port from the one of the leaf nodes within the transmission window does not have any uncorrectable errors, wherein the data message comprises a plurality of data packets having destination information and an acknowledgement request indicator.
28. The central core as recited in claim 27, wherein if the root port does not receive the data message within the transmission window, the root validation engine of the root port is configured to send a data loss message to one of the leaf nodes, the data loss message indicating a subsequent transmission window authorized for one of the leaf nodes to resend the data message.
29. The central core as recited in claim 27, wherein if the root port receives the data message having an uncorrectable error in the subset of data packets, the root acknowledgement engine of the root port is configured to send a data portion receipt message to one of the leaf nodes, the data portion receipt message including packet loss/receipt information identifying the subset of data packets that need to be resent and a subsequent transmission window authorized for one of the leaf nodes to resend the subset of data packets.
30. The central core as recited in claim 29, wherein the loss/receive information comprises a bitmap comprising a bit for each packet of the data message, the value of the bit identifying whether the packet needs to be resent.
31. The central core according to claim 30, wherein the loss/receipt information comprises a start of sequence value and an end of sequence value, the start of sequence value and the end of sequence value identifying a range of the data packets that do not need to be retransmitted.
32. The central core as recited in claim 27, wherein one of the leaf nodes is part of a plurality of nodes of a first one of the networks, and wherein the root acknowledgement engine of the root port is configured to refrain from sending the data receive message to one of the leaf nodes if the root port receives the data message without any uncorrectable errors and the destination information of the data message indicates that the data message needs to be broadcast to all leaf nodes in the first network.
33. The central core as recited in claim 27, wherein when the destination information indicates that the data message is to be sent to one or more of the other leaf nodes, the root port sends the data message to the other leaf nodes, and if the root port does not receive a leaf data receipt message from one or more of the other leaf nodes, the root acknowledgement engine stores a root copy of the data message including the plurality of data packets and resends one or more of the data packets to the other leaf nodes in a subsequent data message, the leaf data receipt message indicating that the received one or more data packets do not have any uncorrectable errors.
34. The central core as recited in claim 33, wherein the root port selects a target node for the leaf node of the first network, and only the target node is configured to respond with the leaf data receive message when the other leaf node is a plurality of the leaf nodes of the first network.
35. The central core as recited in claim 34, wherein the target node is selected based on which of the leaf nodes of the first network was last to receive a message broadcast by one of the root ports over the first network.
36. A controller and sensor bus, the bus comprising:
one or more transport networks, each transport network comprising a root port and a plurality of leaf nodes, each leaf node comprising a leaf validation engine and a leaf node memory; and
a plurality of input/output ports, each input/output port coupled with one of the leaf nodes, wherein one of the leaf nodes of a first one of the networks is to:
receiving an authorization message from the root port of the first network indicating a transmission window assigned to one of the leaf nodes;
sending a data message comprising a plurality of data packets having destination information and an acknowledgement request indicator to the root port within the transmission window and storing a leaf copy of the data message comprising the plurality of data packets; and
receiving a data receipt message from the root port, wherein the data receipt message indicates whether the root port received the data message without any uncorrectable errors; and
removing at least a portion of the leaf copy based on the data receipt message.
37. The bus of claim 36, wherein one of said leaf nodes is configured to retransmit said data message including all of said data packets using said leaf copy if said leaf node receives a data loss message from said root port and a subsequent transmission window is granted to said leaf node by said root port for retransmitting said data message, wherein said data loss message indicates that said root port did not receive said data message within said transmission window.
38. The bus of claim 36, wherein one of the leaf nodes is configured to receive a data portion receive message from the root port indicating that the root port received the data message with the uncorrectable error in the subset of data packets, and wherein the data portion receive message includes packet loss/reception information identifying the subset of data packets that need to be retransmitted, and a subsequent transmission window granted to the one of the leaf nodes for retransmitting the subset of data packets.
39. The bus of claim 38, wherein in response to receiving the data portion receive message, one of the leaf nodes is configured to remove from the leaf copy data packets that are not part of the subset based on the loss/receive information.
40. The bus of claim 39, wherein the loss/receive information comprises a bitmap comprising a bit for each packet of the data message, the value of the bit identifying whether the packet needs to be resent.
41. A bus as defined in claim 40, wherein the loss/reception information comprises a start of sequence value and an end of sequence value, the start of sequence value and the end of sequence value identifying a range of the data packets that do not need to be retransmitted.
42. The bus as recited in claim 41, wherein one of said leaf nodes is configured to: after expiration of a timer associated with each subset and in the subsequent transmission window, resending the subset to one of the root ports in a new data message, wherein the subsequent transmission window is granted by the one of the root ports to the one of the leaf nodes in a subsequent grant message to resend the subset.
43. The bus of claim 42, wherein the new data message comprises one or more other data packets that are not part of the data message other than the subset.
44. The bus of claim 36, wherein in response to receiving the data message broadcast from one of the root ports within the first network, one of the leaf nodes is configured to determine that one of the leaf nodes is the source of the data message and remove any packets found in the data message from the leaf copy such that the any packets are not re-sent to one of the root ports.
45. The bus as set forth in claim 36 wherein one of said leaf nodes is configured to not respond to other data messages broadcast by said root port to all of said leaf nodes on said first network unless said one of said leaf nodes receives a target message from said root port indicating that said leaf node is a target node that must respond with a leaf data receive message on behalf of all nodes of said first network upon receiving said other data messages broadcast by said root port.
46. A method of operating a controller and sensor bus, the controller and sensor bus comprising at least one central processing core comprising one or more root ports each having a root validation engine, and one or more transport networks each comprising a plurality of leaf nodes and directly coupled to the core via different ones of the root ports, each leaf node comprising a leaf validation engine and a leaf node memory, the method comprising:
one of the root ports sending an authorization message to one of the leaf nodes coupled to the one of the root ports indicating a transmission window;
one of the leaf nodes sending a data message to one of the root ports within the transmission window, the data message including a plurality of data packets having destination information and an acknowledgement request indicator;
the leaf validation engine of one of the leaf nodes storing a leaf copy of the data message comprising the plurality of data packets;
if the data message received by one of the root ports does not have any uncorrectable errors, the root acknowledgement engine of one of the root ports sends a data reception message to one of the leaf nodes; and
one of the leaf nodes removes the leaf copy based on receiving the data reception message.
47. The method of claim 46, further comprising:
if one of the root ports does not receive the data message within the transmission window, the root acknowledgement engine of the one of the root ports sends a data loss message to one of the leaf nodes; and
one of the leaf nodes resends the data message including all of the data packets using the leaf copy in a subsequent transmission window granted to the one of the leaf nodes by the one of the root ports in a subsequent grant message.
48. The method of claim 46, further comprising: if one of the root ports receives the data message having an uncorrectable error in the subset of data packets, the root acknowledgement engine of the one of the root ports sends a data portion receipt message to one of the leaf nodes, the data portion receipt message including packet loss/receipt information identifying the subset of data packets that need to be resent, and a subsequent transmission window authorized for the one of the leaf nodes to resend the subset of data packets.
49. The method of claim 48, further comprising: in response to receiving the data portion reception message, one of the leaf nodes removes from the leaf copy data packets that are not part of the subset based on the loss/reception information.
50. The method of claim 49, wherein the loss/reception information comprises a bitmap, wherein the bitmap comprises a bit for each packet of the data message, and wherein a value of the bit identifies whether the packet needs to be retransmitted.
51. The method of claim 50, wherein the loss/reception information comprises a start of sequence value and an end of sequence value, wherein the start of sequence value and the end of sequence value identify a range of the data packets that do not need to be retransmitted.
52. The method of claim 51, further comprising: after expiration of the timer associated with each subset and in the subsequent transmission window, the one of the leaf nodes resends the subset to the one of the root ports in a new data message, wherein the subsequent transmission window is granted to the one of the leaf nodes by the one of the root ports in a subsequent grant message to resend the subset.
53. The method of claim 52, wherein the new data message comprises one or more other data packets that are not part of the data message other than the subset.
54. The method according to claim 46, wherein one of said leaf nodes is part of a plurality of nodes of a first one of said networks, said method further comprising: if one of the root ports receives the data message without any uncorrectable errors and the destination information of the data message indicates that the data message needs to be broadcast by one of the root ports to all leaf nodes in the first network, the root validation engine of one of the root ports does not send the data receive message to one of the leaf nodes.
55. The method of claim 54, further comprising: in response to receiving the data message broadcast from one of the root ports within the first network, one of the leaf nodes determines that one of the leaf nodes is the source of the data message and removes any data packets found in the data message from the leaf copy such that the any data packets are not resent to one of the root ports.
56. The method of claim 46, further comprising:
when the destination information indicates that the data message is to be sent to one or more of the other leaf nodes, one of the root ports sends the data message to the other leaf nodes; and
if one of the root ports does not receive a leaf data receipt message from one or more of the other leaf nodes, the root acknowledgement engine stores a root copy of the data message including the plurality of data packets and resends one or more of the data packets to the other leaf nodes in a subsequent data message, the leaf data receipt message indicating that the received one or more data packets do not have any uncorrectable errors.
57. The method of claim 56, further comprising: one of the root ports selects a target node of the leaf nodes of the first network, wherein only the target node is to respond with the leaf data reception message when the other leaf nodes are a plurality of the leaf nodes of the first network.
58. The method according to claim 57, wherein said target node is selected based on which of said leaf nodes of said first network is the last one to receive a message broadcast by one of said root ports over said first network.
CN202180004902.0A 2020-04-30 2021-04-29 Intelligent controller and sensor network bus and system and method including message retransmission mechanism Active CN114208258B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US16/863,898 US11156987B2 (en) 2019-08-01 2020-04-30 Intelligent controller and sensor network bus, system and method including a message retransmission mechanism
US16/863,898 2020-04-30
PCT/US2021/029990 WO2021222641A1 (en) 2020-04-30 2021-04-29 Intelligent controller and sensor network bus, system and method including a message retransmission mechanism

Publications (2)

Publication Number Publication Date
CN114208258A true CN114208258A (en) 2022-03-18
CN114208258B CN114208258B (en) 2024-05-28

Family

ID=78332226

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180004902.0A Active CN114208258B (en) 2020-04-30 2021-04-29 Intelligent controller and sensor network bus and system and method including message retransmission mechanism

Country Status (2)

Country Link
CN (1) CN114208258B (en)
WO (1) WO2021222641A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11809163B2 (en) 2019-08-01 2023-11-07 Vulcan Technologies Shanghai Co., Ltd. Intelligent controller and sensor network bus, system and method including a message retransmission mechanism

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104904160A (en) * 2012-11-09 2015-09-09 思杰系统有限公司 Systems and methods for appflow for datastream
US20170359128A1 (en) * 2016-06-14 2017-12-14 Teledyne Instruments, Inc. Long distance subsea can bus distribution system
CN108282416A (en) * 2017-12-29 2018-07-13 北京华为数字技术有限公司 A kind of dispatching method and device based on data frame
CN109257422A (en) * 2018-09-06 2019-01-22 广州知弘科技有限公司 Sensing network signal reconstruct method
CN110073301A (en) * 2017-08-02 2019-07-30 强力物联网投资组合2016有限公司 The detection method and system under data collection environment in industrial Internet of Things with large data sets
CN110915244A (en) * 2017-07-17 2020-03-24 庄卫华 Method for managing single-hop and multi-hop broadcasting in vehicle communication network and apparatus therefor

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001313646A (en) * 2000-04-27 2001-11-09 Sony Corp Electronic device and method for controlling state of its physical layer circuit
US20020109883A1 (en) * 2001-02-12 2002-08-15 Teradvance Communications, Llc Method and apparatus for optical data transmission at high data rates with enhanced capacity using polarization multiplexing
DE102009050170B4 (en) * 2009-10-21 2013-08-01 Diehl Ako Stiftung & Co. Kg Home automation and home information system

Also Published As

Publication number Publication date
WO2021222641A1 (en) 2021-11-04
CN114208258B (en) 2024-05-28

Similar Documents

Publication Publication Date Title
US11269795B2 (en) Intelligent controller and sensor network bus, system and method including a link media expansion and conversion mechanism
US11689386B2 (en) Intelligent controller and sensor network bus, system and method for controlling and operating an automated machine including a failover mechanism for multi-core architectures
US11156987B2 (en) Intelligent controller and sensor network bus, system and method including a message retransmission mechanism
US11258538B2 (en) Intelligent controller and sensor network bus, system and method including an error avoidance and correction mechanism
US11263157B2 (en) Intelligent controller and sensor network bus, system and method including a dynamic bandwidth allocation mechanism
US11086810B2 (en) Intelligent controller and sensor network bus, system and method including multi-layer platform security architecture
US11134100B2 (en) Network device and network system
CN114270328B (en) Intelligent controller and sensor network bus and system and method including multi-layered platform security architecture
US11809163B2 (en) Intelligent controller and sensor network bus, system and method including a message retransmission mechanism
CN111988369B (en) Intelligent controller and sensor network bus, system and method
US11269316B2 (en) Intelligent controller and sensor network bus, system and method including smart compliant actuator module
US11089140B2 (en) Intelligent controller and sensor network bus, system and method including generic encapsulation mode
KR102234210B1 (en) Security method for ethernet based network
US20170180397A1 (en) Thin Client Unit apparatus to transport intra-vehicular data on a communication network
JP2017529033A (en) Ethernet interface module
CN114208258B (en) Intelligent controller and sensor network bus and system and method including message retransmission mechanism
WO2022086723A1 (en) Intelligent controller and sensor network bus, system and method including a link media expansion and conversion mechanism
WO2022076730A1 (en) Intelligent controller and sensor network bus, system and method including a dynamic bandwidth allocation mechanism
CN114731292A (en) Low latency medium access control security authentication
WO2022076727A1 (en) Intelligent controller and sensor network bus, system and method including an error avoidance and correction mechanism
CN112912809A (en) Intelligent controller including universal packaging mode and sensor network bus, system and method
CN112867997A (en) Intelligent controller including intelligent flexible actuator module, and sensor network bus, system and method
KR20230096755A (en) Network test device, network test system and method for testing thereof
WO2012107074A1 (en) Device and method for securing ethernet communication
JP2024500544A (en) Data transmission method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
Effective date of registration: 20230103
Address after: Room 607, Block Y1, No. 112, Liangxiu Road, Pudong New Area, Shanghai
Applicant after: Pengyan Technology (Shanghai) Co.,Ltd.
Address before: Room 607, block Y1, 112 liangxiu Road, Pudong New Area, Shanghai 201203
Applicant before: Pengyan Technology (Shanghai) Co.,Ltd.
Applicant before: Li Weijian
GR01 Patent grant