CN116149861A - High-speed communication method based on VPX structure - Google Patents

High-speed communication method based on VPX structure

Info

Publication number
CN116149861A
CN116149861A (application CN202310211171.8A)
Authority
CN
China
Prior art keywords
vpx
subtasks
data
pcie
card
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310211171.8A
Other languages
Chinese (zh)
Other versions
CN116149861B (en)
Inventor
杨东鑫
段勃
李浩澜
涂朝仕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Western Research Institute Of China Science And Technology Computing Technology
Original Assignee
Western Research Institute Of China Science And Technology Computing Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Western Research Institute Of China Science And Technology Computing Technology filed Critical Western Research Institute Of China Science And Technology Computing Technology
Priority to CN202310211171.8A priority Critical patent/CN116149861B/en
Publication of CN116149861A publication Critical patent/CN116149861A/en
Application granted granted Critical
Publication of CN116149861B publication Critical patent/CN116149861B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061 - Partitioning or combining of resources
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 13/00 - Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/38 - Information transfer, e.g. on bus
    • G06F 13/40 - Bus structure
    • G06F 13/4063 - Device-to-bus coupling
    • G06F 13/4068 - Electrical coupling
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 13/00 - Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/38 - Information transfer, e.g. on bus
    • G06F 13/42 - Bus transfer protocol, e.g. handshake; Synchronisation
    • G06F 13/4282 - Bus transfer protocol on a serial bus, e.g. I2C bus, SPI bus
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/52 - Program synchronisation; Mutual exclusion, e.g. by means of semaphores
    • G06F 9/526 - Mutual exclusion algorithms
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/54 - Interprogram communication
    • G06F 9/544 - Buffers; Shared memory; Pipes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2213/00 - Indexing scheme relating to interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 2213/0026 - PCI express
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Bus Control (AREA)

Abstract

The invention relates to the technical field of computers, and in particular discloses a high-speed communication method based on a VPX structure, comprising the following steps: S1, a CPU receives a task, splits it into a plurality of subtasks, and sends the subtasks to a PCIE converter over the PCIE bus; S2, the PCIE converter forwards the subtasks over the PCIE bus to each VPX board connected to it; S3, each VPX board processes the subtasks it receives; S4, when the VPX boards need to communicate with each other, they transfer data between themselves over the PCIE bus and process the subtasks cooperatively; S5, once a VPX board finishes its subtask, it sends the processing result to the PCIE converter over the PCIE bus; S6, the PCIE converter sends the processing results to the CPU. With this scheme, the VPX boards in a VPX computing platform can be made to communicate directly with one another, which raises the system bandwidth and effectively reduces data-exchange latency.

Description

High-speed communication method based on VPX structure
Technical Field
The invention relates to the technical field of computers, in particular to a high-speed communication method based on a VPX structure.
Background
VPX is an open computing platform that provides excellent processing power and reliability for high-performance computing and communication applications. Based on standards from VITA (the VMEbus International Trade Association), the VPX platform uses high-speed serial interconnect technology to achieve high-bandwidth, low-latency data transmission. The VPX platform supports a variety of processor architectures, such as PowerPC, RISC-V, x86 and ARM, and provides a wide range of I/O options, including Ethernet, Fibre Channel, radar interfaces and high-speed memory. The VPX platform also supports hot plugging and multi-layer physical security functions, and can meet the requirements of various high-performance computing and communication applications.
On a VPX computing platform, as the amount of information to be processed, such as voice, data, images and graphics, keeps growing, the requirements on communication latency also rise. As shown in fig. 1, in the conventional scheme, the main data must traverse the CPU's own PCIE lanes to be exchanged over the VPX backplane with the other VPX boards, such as GPU/NPU/network/NVMe cards. Because the number and bandwidth of the CPU's own PCIE lanes are limited, the overall latency and bandwidth are bounded by those lanes, and high latency easily occurs.
For this reason, there is a need for a high-speed communication method based on a VPX structure capable of reducing data exchange delay.
Disclosure of Invention
The invention provides a high-speed communication method based on a VPX structure, which can effectively reduce the exchange delay of data in a VPX computing platform.
In order to solve the technical problems, the application provides the following technical scheme:
a high-speed communication method based on a VPX structure comprises the following steps:
s1, a CPU receives a task, splits the task into a plurality of subtasks, and sends the subtasks to a PCIE converter through a PCIE bus;
s2, the PCIE converter forwards the subtasks to each VPX board card connected with the subtasks through a PCIE bus;
s3, after receiving the subtasks, the VPX board card processes the subtasks;
s4, when the VPX boards need to communicate with each other, data are transmitted between the VPX boards through the PCIE bus, and the subtasks are cooperatively processed;
s5, after the VPX board card finishes subtask processing, sending a processing result to a PCIE converter through a PCIE bus;
and S6, the PCIE converter sends the processing result to the CPU.
The basic principle and beneficial effects of the scheme are as follows:
In this scheme, the CPU splits the task into subtasks and offloads them to the VPX boards, and while processing the subtasks the VPX boards can communicate directly over an independent PCIE bus, rather than routing all traffic through the CPU's own PCIE lanes.
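As an illustration only, the S1-S6 flow above can be sketched in a few lines of Python. Every name here (`Board`, `PcieSwitch`, `run_task`) is invented for the sketch, and the squaring step merely stands in for real subtask processing on a board.

```python
class Board:
    """Stands in for one VPX board reached through the PCIE converter."""
    def __init__(self, board_id):
        self.board_id = board_id

    def process(self, subtask):
        # Placeholder processing: square the payload.
        return subtask ** 2

class PcieSwitch:
    """Stands in for the PCIE converter that fans subtasks out to boards."""
    def __init__(self, boards):
        self.boards = boards

    def dispatch(self, subtasks):
        # Round-robin forwarding of subtasks to the connected boards (S2-S3).
        return [self.boards[i % len(self.boards)].process(st)
                for i, st in enumerate(subtasks)]

def run_task(task, switch):
    subtasks = list(task)                # S1: split the task
    results = switch.dispatch(subtasks)  # S2-S5: forward, process, return
    return sum(results)                  # S6: CPU aggregates the results

switch = PcieSwitch([Board(i) for i in range(4)])
print(run_task([1, 2, 3, 4], switch))  # 1 + 4 + 9 + 16 = 30
```

The sketch only shows the division of labour; in the real system the dispatch and return paths are PCIE transfers through the converter rather than function calls.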
Further, in the step S1, the CPU further obtains parameter information of each VPX board card, where the parameter information includes a processing capability and a current load; and the CPU also distributes VPX board cards for the subtasks according to the parameter information.
In this way the subtasks handled by each VPX board can be balanced as far as possible, avoiding any VPX board becoming overloaded or under-loaded and ensuring efficient operation of the whole system.
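One simple way to realise allocation "according to processing capability and current load" is a greedy rule that always hands the next subtask to the board with the lowest load-to-capability ratio. This rule and all names in it are illustrative, not taken from the patent.

```python
def assign_subtasks(subtask_costs, capability):
    """Greedy load balancing: each subtask goes to the board whose
    resulting load, relative to its processing capability, is smallest."""
    load = [0.0] * len(capability)
    assignment = []
    for cost in subtask_costs:
        best = min(range(len(capability)),
                   key=lambda b: (load[b] + cost) / capability[b])
        load[best] += cost
        assignment.append(best)
    return assignment, load

# Board 0 is twice as capable as board 1, so it absorbs more work.
assignment, load = assign_subtasks([4, 2, 2], [2, 1])
print(assignment, load)  # [0, 1, 0] [6.0, 2.0]
```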
Further, in the step S1, a synchronization point is also created in the data of the subtask;
in step S4, the VPX board synchronizes data in the subtasks at the synchronization point.
This ensures that the data are processed cooperatively and consistently across the VPX boards.
Further, in the step S4, the data synchronization specifically comprises:
when the data of a VPX board's subtask reach the synchronization point, the board sends a signal to the other VPX boards;
the board then waits for the signals of the other VPX boards until all the VPX boards have reached the synchronization point;
and the data of all the VPX boards are merged.
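The three steps above (signal, wait for all, merge) are exactly what a barrier primitive provides. The following sketch uses Python threads as stand-ins for VPX boards; `threading.Barrier` plays the role of "send a signal and wait until every board has reached the synchronization point". The workload and names are invented for illustration.

```python
import threading

results = {}
barrier = threading.Barrier(3)   # three "boards" must all arrive
merged = []
lock = threading.Lock()

def board_worker(board_id, chunk):
    partial = sum(chunk)          # process the subtask up to the sync point
    with lock:
        results[board_id] = partial
    barrier.wait()                # signal + wait until every board arrives
    if board_id == 0:             # one board merges the synchronized data
        merged.extend(results[b] for b in sorted(results))

threads = [threading.Thread(target=board_worker, args=(i, [i, i + 1]))
           for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(merged)  # [1, 3, 5]
```

On real hardware the "signal" would be a message over the PCIE bus rather than an in-process barrier, but the ordering guarantee is the same: no board merges until all have arrived.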
Further, in the step S1, the CPU further presets shared data in the subtask;
in step S4, the VPX board stores preset shared data into a shared memory; when the VPX board accesses the shared data, locking the shared data; and unlocking the shared data after the VPX board access is completed.
Locking and unlocking the shared data avoids data races and conflicts when multiple VPX computing cards need to share the same data.
Further, the VPX board card comprises one or more of an NPU card, a GPU card, a DCU card, an NVME card and an FPGA card.
This supports both homogeneous and heterogeneous configurations such as CPU/DCU/NPU/GPU, making it easy to adjust the proportions of the various computing resources flexibly and to balance performance against power consumption.
Drawings
FIG. 1 is a schematic diagram of a prior art VPX fabric interconnect communication;
FIG. 2 is a flow chart of a high-speed communication method based on a VPX structure according to an embodiment;
fig. 3 is a logic block diagram of a high-speed communication system based on a VPX structure according to an embodiment.
Detailed Description
The following is a further detailed description of the embodiments:
example 1
As shown in fig. 2, a high-speed communication method based on a VPX structure of the present embodiment includes the following:
s1, a CPU receives a task, splits the task into a plurality of subtasks, creates a synchronization point in the data of the subtasks, namely, defines the position of the synchronization point in the data, such as a function or a code block, and presets shared data in the subtasks; the CPU also acquires parameter information of each VPX board card, wherein the parameter information comprises processing capacity and current load; the CPU also distributes VPX board cards for the subtasks according to the parameter information; the subtasks are sent to a PCIE converter through a PCIE bus; in this embodiment, the task allocation manner is load balancing, that is, the processing tasks of each VPX card are balanced as much as possible, for example, when the load of a certain VPX card is too high, the CPU may allocate a part of tasks to an idle VPX card, so as to achieve the effect of load balancing.
S2, the PCIE converter forwards the subtasks to each VPX board card connected with the subtasks through a PCIE bus; the VPX board card comprises one or more of an NPU card, a GPU card, a DCU card, an NVME card and an FPGA card.
S3, after receiving the subtasks, the VPX board card processes the subtasks;
S4, when the VPX boards need to communicate with each other, data are transferred between them over the PCIE bus and the subtasks are processed cooperatively. Specifically, each VPX board reads and writes the shared data in its shared memory; a mutex is defined for locking and unlocking the shared data and is initialized; when a VPX board accesses the shared data in the shared memory it locks them, and it unlocks them once the access is complete.
The VPX boards synchronize the data in the subtasks at the synchronization point. The synchronization process is specifically as follows:
when the data of a VPX board's subtask reach the synchronization point, the board sends a signal to the other VPX boards;
the board then waits for the signals of the other VPX boards until all the VPX boards have reached the synchronization point;
and the data of all the VPX boards are merged.
S5, after the VPX board card finishes subtask processing, sending a processing result to a PCIE converter through a PCIE bus;
S6, the PCIE converter sends the processing results to the CPU, and the CPU processes and stores them in a unified way.
Taking image processing as an example, the VPX boards are GPU cards. The CPU divides the whole image into several small blocks, each of which is a subtask. The CPU then dynamically allocates these subtasks to the VPX boards according to each board's processing capability and current load (including the number of tasks already allocated and the processing speed). During task allocation, the CPU adjusts dynamically to these parameters so as to improve the overall processing efficiency of the system and the utilization of each computing resource as much as possible.
After receiving a subtask, a VPX board passes it to its own main processor module for processing. While processing, the VPX boards need to communicate with one another to share data and complete the task cooperatively; this communication data is transferred between the VPX boards over the PCIE bus.
After a VPX board finishes its task, it returns the processing result to the CPU over the PCIE bus, and the CPU processes and stores it. After receiving the results from all VPX boards, the CPU gathers, merges and post-processes them. Finally, the CPU outputs the processing result of the whole image, completing the image-processing flow.
Specifically, in image processing, if a large image is divided into several small blocks for processing, the boundary of each block must exchange data with the neighbouring blocks. To complete the task cooperatively, each VPX board needs to know the data of the neighbouring blocks and to pass its results on to them for subsequent processing. This requires data transmission and interaction between the VPX boards so as to guarantee data consistency and cooperation between the individual tiles. To ensure data consistency, this embodiment uses a synchronization mechanism: synchronization points are established between the VPX boards, at which the data are synchronized. To avoid data races and conflicts, this embodiment also uses a mutual-exclusion mechanism to lock and unlock the data, guaranteeing the correctness and validity of the transfers.
Based on the above method, the present embodiment further provides a high-speed communication system based on a VPX structure, as shown in fig. 3, including a CPU, a PCIE converter, and a plurality of VPX boards.
The VPX boards are all connected to the PCIE converter through PCIE buses, and the CPU is connected to the PCIE converter through a PCIE bus; in this embodiment, the PCIE converter is a PCIE Switch. The CPU is also connected to the shared memory through the PCIE bus.
The VPX board card is also used for sending the data to the CPU through the PCIE converter; the CPU is also used for distributing the received data and sending the data to the target VPX board card through the PCIE converter.
The VPX boards are also connected with each other through PCIE buses, and the VPX boards are also used for directly sending data to other VPX boards through PCIE buses.
The VPX board card includes one or more of NPU card, GPU card, DCU card, NVME card, FPGA card, and in this embodiment, all of the above.
Specifically, the VPX board card includes a memory module, a main processor module, a data transmission module, a port module, a power module, and a cooling module (a radiator or a fan, etc.).
When the CPU and a VPX board transfer data: the CPU sends the data to an input port of the PCIE Switch, the PCIE Switch forwards them to the output port to which the target VPX board is attached, and the port module of the target VPX board receives them; the port module likewise returns response data to the CPU. The PCIE Switch decides how to forward data by means of a stored routing table containing the connection information of the VPX boards. During transmission, the PCIE Switch can also perform flow control, packet retransmission and similar functions to guarantee reliable transfer.
When data are transferred between two VPX boards: the main processor module of the source VPX board fetches the data from the board's storage module and sends them to the board's data transmission module; the data transmission module encapsulates the data into packets according to a preset protocol, for example into RDMA packets according to the RDMA protocol, and sends the packets to the port module of the source VPX board; the port module puts the packets onto the PCIE bus. In this embodiment, the port module is an I/O port.
The port module of the target VPX board receives the packets sent by the source VPX board from the PCIE bus; the data transmission module of the target board decapsulates each packet received from the port module and sends the decapsulated data to the main processor module of the target VPX board.
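The encapsulate/decapsulate step can be illustrated with a minimal framing sketch. The header layout here (source board id, destination board id, payload length) is invented purely for illustration; a real RDMA packet follows the InfiniBand/RoCE wire format, which is far richer than this.

```python
import struct

# Hypothetical header: source board (u16), destination board (u16),
# payload length (u32), all big-endian.
HEADER = struct.Struct("!HHI")

def encapsulate(src, dst, payload: bytes) -> bytes:
    """Sketch of the source board's data transmission module."""
    return HEADER.pack(src, dst, len(payload)) + payload

def decapsulate(packet: bytes):
    """Sketch of the target board's data transmission module."""
    src, dst, length = HEADER.unpack_from(packet)
    payload = packet[HEADER.size:HEADER.size + length]
    return src, dst, payload

pkt = encapsulate(1, 2, b"block-edge-pixels")
print(decapsulate(pkt))  # (1, 2, b'block-edge-pixels')
```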
Example 2
This embodiment differs from the first in how an image task is processed, specifically a surveillance-video task in a traffic-monitoring scene. The method of this embodiment comprises the following:
s1, a CPU continuously receives a processing task of a monitoring video, an image frame of the monitoring video is split into a single-lane image according to lane lines, the single-lane image is used as a subtask, and the subtask is sent to a PCIE converter through a PCIE bus; for example, there are 4 VPX boards, an image frame contains 4 lanes, and each lane is split into one single lane image; a single lane image is sent for each VPX board.
S2, the PCIE converter forwards the subtasks to each VPX board card connected with the subtasks through a PCIE bus;
s3, after receiving the subtask, the VPX board card identifies and tracks the vehicle in the single-lane image;
S4, when the VPX boards need to communicate with each other, data are transferred between them over the PCIE bus and the subtasks are processed cooperatively. Specifically, when a vehicle changes lane, processing data are exchanged with the corresponding VPX board, i.e. the board that processes the single-lane image of the lane into which the vehicle has moved.
During processing, a VPX board records any vehicle whose lane-change count exceeds a first threshold as an abnormal vehicle, logs its license plate and total lane-change count, and treats the logged license plate and total lane-change count as lane-change data. The lane-change data and the recognition-and-tracking data, which include the total number of vehicles in the current lane, are sent to the PCIE converter.
S5, after a VPX board finishes its subtask, it sends the processing results, namely the lane-change data and the recognition-and-tracking data, to the PCIE converter over the PCIE bus;
s6 specifically comprises the following steps:
s601, the PCIE converter sends a processing result to the CPU;
s602, the CPU judges whether the total number of the variable passes of the vehicles in the same lane is larger than a second threshold value, and if so, the vehicles are marked as abnormal lanes; the CPU establishes a vehicle lane change database according to the abnormal vehicle and the abnormal lane data; and screening vehicles with frequent lane changes through a first threshold value, and screening lanes with multiple total lane changes through a second threshold value. The first threshold and the second threshold may be set according to actual conditions.
S603, the CPU redistributes the single-lane images to the VPX boards according to the total number of vehicles per lane, the processing capability, the current load and the existing vehicle lane-change database; in other words, the CPU allocates the subtasks once, and after the VPX boards have processed some data the subtasks are allocated again.
Specifically, the CPU first judges whether a single-lane image to be allocated meets either condition: the corresponding lane is an abnormal lane, or the number of abnormal vehicles in the lane exceeds a third threshold.
If neither condition is met, the single-lane images are allocated according to the total number of vehicles, the processing capability and the current load: a VPX board with strong processing capability and a small current load handles the single-lane images of the lanes with more vehicles, achieving load balancing. If either condition is met, the single-lane image of the qualifying lane and the single-lane images of its adjacent lanes are allocated to VPX boards whose PCIE-bus connection positions are adjacent; in other words, when single-lane images of adjacent lanes are allocated, adjacency of connection position takes priority over load balancing, so that the physical distance between the VPX boards is shorter and the transmission latency is further reduced. In other embodiments, the processing times of the two allocation strategies can be estimated: when the difference between them is within a set range (i.e. the processing times are close), allocation by adjacent connection position is done first and load-balanced allocation second, i.e. adjacency has priority; when load balancing alone yields a shorter processing time, the load-balancing strategy is adopted.
The CPU marks lanes meeting both conditions as high-load lanes. When the CPU sends single-lane images, it checks the bandwidth usage of the PCIE bus from the CPU to the VPX boards; if the utilization exceeds 80%, the single-lane image of a high-load lane is sent to the VPX board with the lowest load, and that board forwards the received image on to the designated VPX board, saving bandwidth on the CPU-to-board PCIE bus. Steps S2 to S5 are repeated until all task computation is complete.
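The bandwidth rule at the end of S603 amounts to a routing decision: above 80% CPU-link utilization, send once to the least-loaded board and let it relay. The sketch below is a hypothetical rendering of that rule; the board names and return format are invented for illustration.

```python
def route_high_load_image(bandwidth_utilization, board_loads, target_board):
    """Return the list of (sender, receiver) hops for one high-load image.

    When the CPU-to-board PCIE link is over 80% utilized, the CPU sends
    the image to the least-loaded board, which relays it to the target;
    otherwise the CPU sends directly.
    """
    least_loaded = min(board_loads, key=board_loads.get)
    if bandwidth_utilization > 0.8 and least_loaded != target_board:
        return [("cpu", least_loaded), (least_loaded, target_board)]
    return [("cpu", target_board)]  # direct send when bandwidth allows

print(route_high_load_image(0.9, {"b0": 5, "b1": 1}, "b0"))
# [('cpu', 'b1'), ('b1', 'b0')]
```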
The foregoing is merely an embodiment of the present invention, and the invention is not limited to the field of this embodiment. Specific structures and features well known in such schemes are not described here in detail, since those skilled in the art know the prior art in the field as of the application or priority date, can apply conventional experimental means, and can, in light of the teaching of this application, complete and implement the scheme with their own abilities; certain typical known structures or methods should not become an obstacle to practising this application. It should be noted that those skilled in the art can make modifications and improvements without departing from the structure of the present invention, and these should also be regarded as falling within the scope of the invention without affecting the effect of its implementation or the utility of the patent. The scope of protection of this application is defined by the claims, and the detailed description in the specification may be used to interpret the content of the claims.

Claims (6)

1. A high-speed communication method based on a VPX structure, comprising the following steps:
s1, a CPU receives a task, splits the task into a plurality of subtasks, and sends the subtasks to a PCIE converter through a PCIE bus;
s2, the PCIE converter forwards the subtasks to each VPX board card connected with the subtasks through a PCIE bus;
s3, after receiving the subtasks, the VPX board card processes the subtasks;
s4, when the VPX boards need to communicate with each other, data are transmitted between the VPX boards through the PCIE bus, and the subtasks are cooperatively processed;
s5, after the VPX board card finishes subtask processing, sending a processing result to a PCIE converter through a PCIE bus;
and S6, the PCIE converter sends the processing result to the CPU.
2. The VPX structure-based high-speed communication method according to claim 1, wherein: in the step S1, the CPU further obtains parameter information of each VPX board card, where the parameter information includes a processing capability and a current load; and the CPU also distributes VPX board cards for the subtasks according to the parameter information.
3. The VPX structure-based high-speed communication method according to claim 2, wherein: in the step S1, a synchronization point is also created in the data of the subtask;
in step S4, the VPX board synchronizes data in the subtasks at the synchronization point.
4. A VPX structure-based high-speed communication method according to claim 3, characterized in that: in the step S4, the data synchronization specifically comprises:
when the data of a VPX board's subtask reach the synchronization point, the board sends a signal to the other VPX boards;
the board waits for the signals of the other VPX boards until all the VPX boards have reached the synchronization point;
and the data of all the VPX boards are merged.
5. The VPX structure-based high-speed communication method according to claim 4, wherein: in the step S1, the CPU also presets shared data in the subtasks;
in step S4, the VPX board stores preset shared data into a shared memory; when the VPX board accesses the shared data, locking the shared data; and unlocking the shared data after the VPX board access is completed.
6. The VPX structure-based high-speed communication method according to claim 5, wherein: the VPX board card comprises one or more of an NPU card, a GPU card, a DCU card, an NVME card and an FPGA card.
CN202310211171.8A 2023-03-07 2023-03-07 High-speed communication method based on VPX structure Active CN116149861B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310211171.8A CN116149861B (en) 2023-03-07 2023-03-07 High-speed communication method based on VPX structure

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310211171.8A CN116149861B (en) 2023-03-07 2023-03-07 High-speed communication method based on VPX structure

Publications (2)

Publication Number Publication Date
CN116149861A true CN116149861A (en) 2023-05-23
CN116149861B CN116149861B (en) 2023-10-20

Family

ID=86360038

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310211171.8A Active CN116149861B (en) 2023-03-07 2023-03-07 High-speed communication method based on VPX structure

Country Status (1)

Country Link
CN (1) CN116149861B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120284712A1 (en) * 2011-05-04 2012-11-08 Chitti Nimmagadda Systems and methods for sr-iov pass-thru via an intermediary device
CN106708169A (en) * 2016-12-31 2017-05-24 中国舰船研究设计中心 Multicomputer system time synchronization method based on VPX framework and device
US20180082133A1 (en) * 2016-09-20 2018-03-22 Stmicroelectronics S.R.L. Method of detecting an overtaking vehicle, related processing system, overtaking vehicle detection system and vehicle
WO2018072240A1 (en) * 2016-10-20 2018-04-26 中国科学院深圳先进技术研究院 Direction-variable lane control method for tidal traffic flow on road network
CN109242754A (en) * 2018-07-17 2019-01-18 北京理工大学 A kind of more GPU High performance processing systems based on OpenVPX platform
CN209388308U (en) * 2019-03-12 2019-09-13 博微太赫兹信息科技有限公司 Universal data collection and signal processing system based on GPU and FPGA
CN111324558A (en) * 2020-02-05 2020-06-23 苏州浪潮智能科技有限公司 Data processing method and device, distributed data stream programming framework and related components
CN113515483A (en) * 2020-04-10 2021-10-19 华为技术有限公司 Data transmission method and device
CN113836058A (en) * 2021-09-13 2021-12-24 南京南瑞继保电气有限公司 Method, device, equipment and storage medium for data exchange between board cards
CN113946537A (en) * 2021-10-14 2022-01-18 浪潮商用机器有限公司 Accelerating device and server
WO2022083466A1 (en) * 2020-10-19 2022-04-28 华为技术有限公司 Method and device for data processing

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHEN SHANZHI et al.: "Integrated control strategies for ATM VPX survivable network", ACTA ELECTRONICA SINICA, vol. 26, no. 4, pages 55 - 79 *
李浩澜 (LI Haolan): "Real-time detection system for highway abnormal events based on video images", China Master's Theses Full-text Database, Engineering Science and Technology II (Monthly), no. 08, pages 034 - 182 *

Also Published As

Publication number Publication date
CN116149861B (en) 2023-10-20

Similar Documents

Publication Publication Date Title
EP3575972B1 (en) Inter-processor communication method for access latency between system-in-package (sip) dies
US11750418B2 (en) Cross network bridging
CN108345555B (en) Interface bridge circuit based on high-speed serial communication and method thereof
CN101667169A (en) Multi-processor parallel processing system for digital signals
WO2022094771A1 (en) Network chip and network device
US6493784B1 (en) Communication device, multiple bus control device and LSI for controlling multiple bus
CN106844263B (en) Configurable multiprocessor-based computer system and implementation method
EP3008608B1 (en) Collaboration server
CN109885526A (en) Information processing platform based on OpenVPX bus
US20220114132A1 (en) Data Switch Chip and Server
US9253121B2 (en) Universal network interface controller
CN117493237B (en) Computing device, server, data processing method, and storage medium
WO2023123905A1 (en) Data transmission processing method in chip system and related apparatus
CN112867998B (en) Operation accelerator, switch, task scheduling method and processing system
CN116149861B (en) High-speed communication method based on VPX structure
US20010054124A1 (en) Parallel processor system
CN116302522A (en) VPX-based image processing system, method and storage medium
US10614026B2 (en) Switch with data and control path systolic array
US20150178092A1 (en) Hierarchical and parallel partition networks
US8473966B2 (en) Iterative exchange communication
CN113556242B (en) Method and equipment for performing inter-node communication based on multi-processing nodes
CN103217681A (en) Tree-shaped topological mechanism multiprocessor sonar signal processing device and method
CN114445260A (en) Distributed GPU communication method and device based on FPGA
CN117632825B (en) Multiplexing communication system
CN117914808A (en) Data transmission system, method and switch

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant