WO2017152733A1 - Simulation method and system for a large-scale complex wireless communication system
- Publication number: WO2017152733A1 (application PCT/CN2017/073356)
- Authority: WIPO (PCT)
Classifications
- H04W 24/06 (Wireless communication networks; supervisory, monitoring or testing arrangements): Testing, supervising or monitoring using simulated traffic
- H04L 41/145 (Maintenance, administration or management of data switching networks; network analysis or design): Network analysis or design involving simulating, designing, planning or modelling of a network

Description
- This document relates to, but is not limited to, the field of wireless communication and high performance computing technologies, and in particular relates to a simulation method and system for a large-scale complex wireless communication system.
- One such scenario is the mobile network scenario, which includes mobile relays and nomadic nodes.
- MTC: Machine Type Communication
- MMC: Massive Machine Communication
- MME: Mobility Management Entity
- D2D: Device-to-Device
- Proposed techniques include 256QAM high-order modulation, macro-cell dual connectivity, vertical beamforming, virtual cells, wireless backhaul, Massive MIMO (massive multiple-input multiple-output), pencil beamforming, fast cell switching, and cell discovery.
- The network scale keeps growing, and the network size changes dynamically with service demand.
- The network structure is increasingly complex: multi-standard, multi-frequency coexistence and interoperability; multiple coverage modes coexisting; complex and irregular topologies that change dynamically with the movement of terminals, machines, mobile relays and nomadic nodes. The types of network nodes keep multiplying, leading to the coexistence of multiple wireless link types and multiple service requirements, with increasingly frequent cooperation between network nodes. Multiple physical-layer processing technologies and more antenna-related technologies will be introduced, requiring accurate modelling of the wireless channel, for example ray-tracing, 3D channel and 3D scene modelling techniques.
- The traditional single-core/single-machine simulation platform can no longer complete the simulation and evaluation of the above technologies: it cannot provide sufficient memory to support large-scale scene simulation or store the large volumes of channel and other data generated during simulation; it cannot provide a sufficient computation rate to support large amounts of complex physical-layer processing, large-scale antenna-array modelling and high-level logic processing; and it cannot provide a flexible architecture to support simulation of multi-system coexistence, multi-mode coverage, multi-service coexistence, and the like.
- A scalable structure is also needed to support protocol evolution: as the protocol evolves, the addition and deletion of different types of network nodes should not force the simulation platform to be rebuilt.
- A single core is currently used for the simulation of 64-antenna ultra-large-scale antenna-array technology and ray tracing. It runs very slowly and takes several days to complete one simulation. In ray-tracing simulation for accurate channel modelling, memory consumption reaches 100 GB and the running speed is very slow, let alone the simulation of subsequent 5G technologies.
- This document provides a simulation method and system for large-scale complex wireless communication systems, which can relieve the memory and computational pressure faced when simulating such systems and provide a flexible, scalable parallel architecture for the simulation platform.
- the embodiment of the invention provides a simulation method for a large-scale complex wireless communication system, and the method includes:
- the client reads the simulation configuration parameters, determines the function types of the CPUs working in parallel, and creates corresponding simulation tasks, and sends the simulation tasks to the CPUs working in parallel through the task manager;
- The CPUs working in parallel receive the simulation tasks delivered by the task manager and, according to the function type configured by the client, perform data interaction and synchronization operations with other CPUs and run the simulation code.
- the client reads the simulation configuration parameters and processes the data, determines the function types of the CPUs working in parallel, and creates corresponding simulation tasks, including:
- the interference relationship between the network nodes is calculated locally or by the CPU working in parallel, and the data interaction relationship between the network nodes is determined according to the calculation result.
- Communication timing is set for each pair of CPUs that communicate with each other.
- the function types of the CPU include: timing control, data relay, and communication system simulation.
- the allocating the network node and the user equipment to the CPU responsible for the communication system simulation according to the data interaction relationship between the network node and the user equipment including performing at least one of the following processing:
- the CPU working in parallel performs data interaction and synchronization operations with other CPUs according to the function type configured by the client, and runs the simulation code, including:
- The CPU whose function is timing control broadcasts synchronization messages to the other CPUs while running the simulation code.
- The CPUs working in parallel perform data interaction and synchronization operations with other CPUs according to the function type configured by the client and run the simulation code, including:
- A CPU working in parallel determines from the obtained simulation task that its function is data relay. It receives the synchronization message broadcast by the timing control CPU, obtains the information of the CPU pair currently in point-to-point communication, and thereby determines which CPUs responsible for communication system simulation are currently communicating. It then forwards to each CPU of the current pair the buffered data that other CPUs previously sent to it (delayed transmission), and receives and caches the data that the current pair sends to other CPUs.
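The relay behaviour described above (cache data destined for a CPU until that CPU becomes an active end of the current point-to-point communication, then forward it) can be sketched as follows. This is a minimal illustration only; the class and method names are hypothetical, since the patent specifies no API.

```python
from collections import defaultdict

class DataRelay:
    """Sketch of a data-relay CPU's buffering behaviour (hypothetical names).

    Data addressed to a CPU is cached until that CPU is named as an active
    end of the current point-to-point communication, then flushed to it
    (the "delayed transmission" of the text).
    """

    def __init__(self):
        # dest_cpu -> list of buffered (src_cpu, payload) messages, in arrival order
        self._buffers = defaultdict(list)

    def cache(self, src_cpu, dest_cpu, payload):
        """Receive and cache data sent toward a CPU that is not currently active."""
        self._buffers[dest_cpu].append((src_cpu, payload))

    def flush_for(self, active_cpu):
        """On a synchronization message naming an active CPU, forward everything
        buffered for it and clear its buffer."""
        return self._buffers.pop(active_cpu, [])
```

For example, signalling cached for CPU 5 from CPUs 2 and 3 is delivered together, in order, once CPU 5 enters its communication slot.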
- the CPU working in parallel performs data interaction and synchronization operations with other CPUs according to the function type configured by the client, and runs the simulation code, including:
- A CPU working in parallel determines from the obtained simulation task that its function is communication system simulation. It receives the synchronization message broadcast by the timing control CPU, obtains the information of the CPU pair currently in point-to-point communication, and, when it determines that it is currently in point-to-point communication with another CPU, exchanges data with that CPU, runs the network node protocol stack code and the user equipment protocol stack code, and calculates uplink and/or downlink interference.
- the parallel working CPU calculates uplink and/or downlink interference, including:
- When calculating the uplink interference to a target network node residing in the CPU, acquire the positional relationship and channel model between each user equipment residing in the CPU and the target network node, and between each strongly interfering user equipment residing in other CPUs and the target network node; calculate the slow fading and fast fading of the signals from the acquired information; and determine, from the signal fading results, the uplink interference of the user equipment to the target network node.
- When calculating the downlink interference to a target user equipment residing in the CPU, acquire the positional relationship and channel model between each network node residing in the CPU and the target user equipment, and between each strongly interfering network node residing in other CPUs and the target user equipment; calculate the slow fading and fast fading of the signals from the acquired information; and determine, from the signal fading results, the downlink interference of the network nodes to the target user equipment.
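The uplink interference calculation described above can be sketched as a sum over interfering user equipments. This is a simplified illustration, not the patent's method: the log-distance path-loss constants (a 3GPP-style macro model) and the Gaussian fast-fading term are assumptions, and the function name is hypothetical.

```python
import math
import random

def uplink_interference_dbm(target_node, interferers, shadowing_db=None):
    """Illustrative uplink interference at a target network node.

    target_node: (x, y) position of the target network node.
    interferers: list of (x, y, tx_power_dbm) for interfering UEs, both those
        residing on this CPU and strongly interfering UEs residing on others.
    Slow fading is modelled by log-distance path loss; fast fading by a random
    dB term unless a fixed shadowing_db is supplied for reproducibility.
    Returns the total interference power in dBm (summed in the linear domain).
    """
    total_mw = 0.0
    for (x, y, tx_dbm) in interferers:
        d = max(math.hypot(x - target_node[0], y - target_node[1]), 1.0)
        path_loss_db = 128.1 + 37.6 * math.log10(d / 1000.0)  # assumed macro model
        fading_db = shadowing_db if shadowing_db is not None else random.gauss(0, 8)
        rx_dbm = tx_dbm - path_loss_db + fading_db
        total_mw += 10 ** (rx_dbm / 10.0)  # powers add linearly, not in dB
    return 10 * math.log10(total_mw) if total_mw > 0 else float("-inf")
```

The same structure applies to downlink interference with network nodes as the interferers and a user equipment as the target.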
- Embodiments of the present invention provide a simulation system for a large-scale complex wireless communication system, including:
- The client is configured to read and process the simulation configuration parameters, determine the function types of the respective CPUs working in parallel, create corresponding simulation tasks, and send the simulation tasks to the CPUs working in parallel through the task manager.
- the CPU working in parallel is set to receive the simulation task delivered by the task manager, and performs data interaction and synchronization operations with other CPUs according to the function type configured by the client, and runs the simulation code;
- the task manager is configured to receive the simulation task submitted by the client and deliver it to the CPU working in parallel.
- the client is configured to read and process the simulation configuration parameters in the following manner, determine function types of respective CPUs working in parallel, and create corresponding simulation tasks:
- the interference relationship between the network nodes is calculated locally or by the CPU working in parallel, and the data interaction relationship between the network nodes is determined according to the calculation result.
- Communication timing is set for each pair of CPUs that communicate with each other.
- the function types of the CPU include: timing control, data relay, and communication system simulation.
- the client is configured to allocate a network node and a user equipment to the CPU responsible for the communication system simulation according to the data interaction relationship between the network node and the user equipment in the following manner:
- The CPU working in parallel is configured to perform data interaction and synchronization operations according to the function type configured by the client in the following manner: the CPU whose function is timing control broadcasts synchronization messages to the other CPUs while running the simulation code.
- the CPU working in parallel is configured to perform data interaction and synchronization operations with other CPUs according to the function type configured by the client in the following manner, and run the simulation code:
- When its function is data relay, the CPU receives the synchronization message broadcast by the timing control CPU, obtains the information of the CPU pair currently in point-to-point communication, and determines which CPUs responsible for communication system simulation are currently in point-to-point communication. It then forwards to each CPU of the current pair the buffered data that other CPUs previously sent to it (delayed transmission), and receives and caches the data that the current pair sends to other CPUs.
- the CPU working in parallel is configured to perform data interaction and synchronization operations with other CPUs according to the function type configured by the client in the following manner, and run the simulation code:
- When the CPU determines from the obtained simulation task that its function is communication system simulation, it receives the synchronization message broadcast by the timing control CPU, obtains the information of the CPU pair currently in point-to-point communication, and, when it determines that it is in point-to-point communication with another CPU, exchanges data with that CPU, runs the network node protocol stack code and the user equipment protocol stack code, and calculates uplink and/or downlink interference.
- the CPUs operating in parallel are configured to calculate uplink and/or downlink interference in the following manner:
- When calculating the uplink interference to a target network node residing in the CPU, acquire the positional relationship and channel model between each user equipment residing in the CPU and the target network node, and between each strongly interfering user equipment residing in other CPUs and the target network node; calculate the slow fading and fast fading of the signals from the acquired information; and determine, from the signal fading results, the uplink interference of the user equipment to the target network node.
- When calculating the downlink interference to a target user equipment residing in the CPU, acquire the positional relationship and channel model between each network node residing in the CPU and the target user equipment, and between each strongly interfering network node residing in other CPUs and the target user equipment; calculate the slow fading and fast fading of the signals from the acquired information; and determine, from the signal fading results, the downlink interference of the network nodes to the target user equipment.
- An embodiment of the invention further provides a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the simulation method described above.
- Compared with the related art, in the simulation method and system for a large-scale complex wireless communication system provided by an embodiment of the present invention, the client reads and processes the simulation configuration parameters, determines the function type of each CPU working in parallel, creates corresponding simulation tasks, and sends the simulation tasks to each CPU working in parallel through the task manager. After receiving the simulation task delivered by the task manager, each parallel CPU performs data interaction and synchronization operations with the other CPUs according to the function type configured by the client and runs the simulation code.
- The invention can relieve the memory and computational pressure faced when simulating large-scale complex wireless communication systems, and provides a flexible, scalable parallel architecture for the simulation platform to support simulation of various requirements, ideas and scenarios.
- FIG. 1 is a schematic structural diagram of a system of a distributed parallel simulation platform used in an embodiment of the present invention.
- FIG. 2 is a schematic diagram of functions of a client, a task manager, and a parallel CPU in a distributed parallel simulation platform according to an embodiment of the present invention.
- FIG. 3 is a flowchart of a simulation method of a large-scale complex wireless communication system according to an embodiment of the present invention.
- FIG. 4 is a schematic diagram of a simulation system of a large-scale complex wireless communication system according to an embodiment of the present invention.
- FIG. 5 is a flowchart of a client process (client implementation example) of Example 1 of the present invention.
- FIG. 6 is a schematic diagram showing the CPU configuration and mutual relationship (example 1) of the single communication system simulation platform according to the second example of the present invention.
- FIG. 7 is a schematic diagram of a CPU configuration and a mutual relationship (example 2) of a single communication system simulation platform according to Example 3 of the present invention.
- FIG. 8 is a schematic diagram of a CPU configuration and a mutual relationship (example 3) of a multi-system coexistence simulation platform according to Example 4 of the present invention.
- FIG. 9 is a schematic diagram showing the CPU configuration and mutual relationship (example 4) of the multi-system coexistence simulation platform according to the fifth example of the present invention.
- FIG. 10 is a schematic flowchart showing the implementation of the parallel CPU of Example 6 of the present invention.
- FIG. 11 is a schematic flowchart showing the implementation of a timing control node according to Example 7 of the present invention.
- FIG. 12 is a schematic diagram of a parallel CPU data interaction implementation process (Example 1) according to Example 8 of the present invention.
- FIG. 13 is a schematic diagram of a parallel CPU data interaction implementation process (example 2) according to Example 9 of the present invention.
- FIG. 14 is a schematic diagram of a parallel CPU data interaction implementation process (Example 3) according to Example 10 of the present invention.
- FIG. 15 is a schematic diagram of determining the simulated network range of a communication system simulation CPU according to Example 11 of the present invention.
- FIG. 16 is a flowchart of the simulation implementation of the communication system simulation CPU of Example 13 of the present invention.
- FIG. 17 is a flowchart of implementing a relay node CPU according to Example 14 of the present invention.
- the embodiment of the invention adopts a distributed parallel simulation platform, and the architecture of the simulation platform mainly comprises three parts: a client, a task manager and a parallel CPU.
- The client consists of one or more remote PCs. These PCs are operated by users and run independently of one another.
- The task manager is a CPU in the parallel simulator cluster. This CPU can share a simulator with other parallel CPUs or occupy a simulator exclusively. There are multiple parallel simulators, each with one or more CPUs; these CPUs constitute the parallel CPUs, which run independently in parallel and each have their own independent memory.
- Figure 1 shows the architecture of a parallel system.
- the client is connected to the simulation laboratory through the network.
- the simulation lab has multiple high-performance simulators.
- The high-performance simulators are connected through switches and exchange messages via MPI (Message Passing Interface). One CPU of one simulator is set as the task manager, and the CPUs of the remaining simulators constitute the parallel CPUs.
- the main functions and information interactions of the client, the task manager, and the parallel CPU included in the distributed parallel simulation platform are as follows:
- the client is one or more PCs that are independent of each other.
- the simulation can be started by a person.
- Its main work is to: start the simulation and read the simulation configuration parameters; allocate the simulated network nodes to each parallel CPU according to the configured number of parallel CPUs and the type and number of network nodes in the simulation (the allocation aims to minimize data interaction between different parallel CPUs and to balance computation across them as far as possible); set the type of each parallel CPU; construct the data interaction relationships between different parallel CPUs from the node assignment results and the data interaction relationships of the network nodes; submit the read and pre-processed simulation data and the simulation platform code to the task manager; issue commands to the task manager and receive the command responses it returns; and comprehensively process the simulation results returned by the parallel CPUs.
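The allocation criterion above (computation between parallel CPUs as balanced as possible) can be sketched as a greedy load-balancing pass. This is an illustration only, with hypothetical names; the patent's allocation also minimizes inter-CPU data interaction, which this simple version does not model.

```python
import heapq

def allocate_nodes(node_costs, n_cpus):
    """Greedy sketch of node-to-CPU allocation (illustrative only).

    node_costs: {node_id: estimated computation cost}
    Returns {cpu_index: [node_id, ...]} with computation roughly balanced:
    nodes are taken in descending cost order and each one is placed on the
    currently least-loaded CPU (longest-processing-time heuristic).
    """
    heap = [(0.0, cpu) for cpu in range(n_cpus)]  # (current load, cpu index)
    heapq.heapify(heap)
    assignment = {cpu: [] for cpu in range(n_cpus)}
    for node, cost in sorted(node_costs.items(), key=lambda kv: -kv[1]):
        load, cpu = heapq.heappop(heap)      # least-loaded CPU so far
        assignment[cpu].append(node)
        heapq.heappush(heap, (load + cost, cpu))
    return assignment
```

With costs {a:5, b:4, c:3, d:3, e:1} and two CPUs, both CPUs end up with a load of 8.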
- The task manager can be set on one simulator. Its main work is: receiving data and code submitted by the client and distributing them to the parallel CPUs; monitoring the running status of each parallel CPU; and performing operations according to the client's commands, such as deleting or cancelling a parallel simulation task and retrieving the running results of simulation tasks whose status is finished.
- Each parallel CPU receives the tasks distributed by the task manager, runs the corresponding simulation code according to its network node type and network node IDs, and performs data interaction, synchronization, and similar operations with the related parallel CPUs.
- Parallel CPUs can include four functional types: timing control, data relay, mobile node simulation, and communication system simulation.
- Timing control: broadcasts inter-CPU synchronization messages to all parallel CPUs and drives the simulation.
- Data relay: receives, buffers and forwards the signalling of network nodes belonging to different parallel CPUs, including the signalling of user handover procedures between different parallel CPUs and the interaction and cooperation signalling between cells residing in different CPUs.
- Mobile node simulation: simulates constantly moving network nodes; grouping them onto one CPU avoids the load imbalance between parallel CPUs that their movement would otherwise cause.
- Communication system simulation: runs the simulations of the network nodes of the different systems. One CPU runs only one type of system simulation; CPUs simulating the same communication system exchange data directly point-to-point, communication between CPU pairs is performed in turn, and the exchanged content includes real-time information such as air-interface information and the location information of mobile users and mobile network nodes.
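The phrase "communication between CPU pairs is performed in turn" suggests a schedule in which every pair of communication-system CPUs gets a point-to-point slot and no CPU is in two pairs at once. A standard round-robin (circle-method) schedule, sketched below with a hypothetical function name, has exactly this property; the patent does not specify the scheduling algorithm.

```python
def round_robin_pairs(cpu_ids):
    """Round-robin schedule of point-to-point CPU pairs (circle method).

    Returns a list of rounds; each round is a list of disjoint pairs, and
    over all rounds every pair of CPUs communicates exactly once.
    """
    ids = list(cpu_ids)
    if len(ids) % 2:
        ids.append(None)  # bye slot when the CPU count is odd
    n = len(ids)
    rounds = []
    for _ in range(n - 1):
        pairs = [(ids[i], ids[n - 1 - i]) for i in range(n // 2)
                 if ids[i] is not None and ids[n - 1 - i] is not None]
        rounds.append(pairs)
        ids = [ids[0]] + [ids[-1]] + ids[1:-1]  # rotate all but the first
    return rounds
```

For four CPUs this yields three rounds of two disjoint pairs each, covering all six possible pairs.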
- The distributed parallel simulation platform can perform ultra-large-scale network simulation without being limited by memory; it can perform complex physical-layer computation, ray-tracing simulation, complex channel modelling and the like without sacrificing speed; it can simulate the coexistence and mutual interference of multiple modes, as well as their collaboration and interoperability, without increasing code complexity; it supports the evolution of communication protocols well, since when the protocol changes only the changed part needs to be modified rather than rebuilding the platform, and when a new kind of communication node needs to be simulated it is only necessary to add a parallel CPU and perform the simulation modelling on it; and it can greatly improve the computational efficiency of simulation, especially for simulations that require fast turnaround.
- an embodiment of the present invention provides a simulation method for a large-scale complex wireless communication system, where the method includes:
- S101: The client reads the simulation configuration parameters, determines the function type of each CPU working in parallel, creates corresponding simulation tasks, and sends the simulation tasks to each CPU working in parallel through the task manager;
- S102: The CPUs working in parallel receive the simulation tasks delivered by the task manager, perform data interaction and synchronization operations with other CPUs according to the function type configured by the client, and run the simulation code.
- the client reads the simulation configuration parameters and processes the data, determines the function types of the CPUs working in parallel, and creates corresponding simulation tasks, including:
- the interference relationship between the network nodes is calculated locally or by the CPU working in parallel, and the data interaction relationship between the network nodes is determined according to the calculation result.
- Communication timing is set for each pair of CPUs that communicate with each other.
- the function types of the CPU include: timing control, data relay, and communication system simulation.
- the allocating the network node and the user equipment to the CPU responsible for the communication system simulation according to the data interaction relationship between the network node and the user equipment includes performing at least one of the following processing:
- the CPU working in parallel performs data interaction and synchronization operations with other CPUs according to the function type configured by the client, and runs the simulation code, including:
- The CPU whose function is timing control broadcasts synchronization messages to the other CPUs while running the simulation code.
- the CPU working in parallel performs data interaction and synchronization operations with other CPUs according to the function type configured by the client, and runs the simulation code, including:
- A CPU working in parallel determines from the obtained simulation task that its function is data relay. It receives the synchronization message broadcast by the timing control CPU, obtains the information of the CPU pair currently in point-to-point communication, and thereby determines which CPUs responsible for communication system simulation are currently communicating. It then forwards to each CPU of the current pair the buffered data that other CPUs previously sent to it (delayed transmission), and receives and caches the data that the current pair sends to other CPUs.
- the CPU working in parallel performs data interaction and synchronization operations with other CPUs according to the function type configured by the client, and runs the simulation code, including:
- A CPU working in parallel determines from the obtained simulation task that its function is communication system simulation. It receives the synchronization message broadcast by the timing control CPU, obtains the information of the CPU pair currently in point-to-point communication, and, when it determines that it is currently in point-to-point communication with another CPU, exchanges data with that CPU, runs the network node protocol stack code and the user equipment protocol stack code, and calculates uplink and/or downlink interference.
- When calculating the downlink interference to a target user equipment residing in the CPU, acquire the positional relationship and channel model between each network node residing in the CPU and the target user equipment, and between each strongly interfering network node residing in other CPUs and the target user equipment; calculate the slow fading and fast fading of the signals from the acquired information; and determine, from the signal fading results, the downlink interference of the network nodes to the target user equipment.
- an embodiment of the present invention provides a simulation system for a large-scale complex wireless communication system, including:
- the client is configured to read the simulation configuration parameter, determine the function type of each CPU working in parallel, and create a corresponding simulation task, and send the simulation task to each CPU working in parallel through the task manager;
- the CPU working in parallel is set to receive the simulation task delivered by the task manager, and performs data interaction and synchronization operations with other CPUs according to the function type configured by the client, and runs the simulation code;
- the task manager is configured to receive the simulation task submitted by the client and deliver it to the CPU working in parallel.
- the client is configured to read and process the simulation configuration parameters in the following manner, determine the function types of the respective CPUs working in parallel, and create corresponding simulation tasks:
- the interference relationship between the network nodes is calculated locally or by the CPU working in parallel, and the data interaction relationship between the network nodes is determined according to the calculation result.
- Communication timing is set for each pair of CPUs that communicate with each other.
- the function types of the CPU include: timing control, data relay, and communication system simulation.
- the client is configured to allocate a network node and a user equipment to the CPU responsible for the communication system simulation according to the data interaction relationship between the network node and the user equipment in the following manner:
- the CPU working in parallel is set to perform data interaction and synchronization operation with other CPUs according to the function type configured by the client in the following manner, and run the simulation code:
- The CPU whose function is timing control broadcasts synchronization messages to the other CPUs while running the simulation code.
- the CPU working in parallel is set to perform data interaction and synchronization operation with other CPUs according to the function type configured by the client in the following manner, and run the simulation code:
- When its function is data relay, the CPU receives the synchronization message broadcast by the timing control CPU, obtains the information of the CPU pair currently in point-to-point communication, and determines which CPUs responsible for communication system simulation are currently in point-to-point communication. It then forwards to each CPU of the current pair the buffered data that other CPUs previously sent to it (delayed transmission), and receives and caches the data that the current pair sends to other CPUs.
- the CPU working in parallel is set to perform data interaction and synchronization operation with other CPUs according to the function type configured by the client in the following manner, and run the simulation code:
- When its function is communication system simulation, the CPU receives the synchronization message broadcast by the timing control CPU and obtains the information of the CPU pair currently in point-to-point communication. When it determines that it is in point-to-point communication with another CPU, it exchanges data with that CPU, runs the network node protocol stack code and the user equipment protocol stack code, and calculates uplink and/or downlink interference.
- the CPU working in parallel is set to calculate uplink and/or downlink interference in the following manner:
- When calculating the uplink interference to a target network node residing in the CPU, acquire the positional relationship and channel model between each user equipment residing in the CPU and the target network node, and between each strongly interfering user equipment residing in other CPUs and the target network node; calculate the slow fading and fast fading of the signals from the acquired information; and determine, from the signal fading results, the uplink interference of the user equipment to the target network node.
- When calculating the downlink interference to a target user equipment residing in the CPU, acquire the positional relationship and channel model between each network node residing in the CPU and the target user equipment, and between each strongly interfering network node residing in other CPUs and the target user equipment; calculate the slow fading and fast fading of the signals from the acquired information; and determine, from the signal fading results, the downlink interference of the network nodes to the target user equipment.
- This example mainly describes the processing of the client during simulation of multiple simulation cases. It reads the simulation data of each case in turn, creates separate parallel jobs for each case, and submits them.
- the flow chart is shown in Figure 5.
- The simulation configuration data includes the simulation duration, the number of parallel CPUs, the path loss model, the fast fading model, the slow fading model, and the network scale.
- Step S201: select whether to calculate the power of the network nodes at the grid points on the client or on the remote parallel computer. If calculation on the client is selected, step S205 is performed; if calculation on the remote parallel computer is selected, step S202 is performed.
- Parallel computing divides all network nodes into several parts, which can effectively alleviate memory pressure and may improve computational efficiency.
- a single parallel CPU only calculates the power of some of the network nodes at all grid points, whereas the client calculates the power of all network nodes at all grid points.
- memory is evaluated from the client memory and the network scale.
- efficiency is evaluated as follows: let t trans be the time for parallel task submission and result recovery, t ex the parallel computing time, and t pro_client the client computing time; these times can be obtained through a pre-test.
- t trans +t ex ≥t pro_client indicates that the client is more efficient and S205 is executed, which generally occurs when the network scale is small; if t trans +t ex <t pro_client , parallel computing is more efficient and S202 is executed. When the client memory is insufficient, S202 may be executed directly without evaluating the computing efficiency.
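The decision above can be sketched as a small helper. This is illustrative only: the function name and the memory flag are assumptions; the inequality is the one stated in the text.

```python
def choose_compute_site(t_trans, t_ex, t_pro_client, client_mem_ok=True):
    """Decide where to compute grid-point powers (hypothetical helper).

    t_trans: parallel job submission + result recovery time
    t_ex: parallel computing time
    t_pro_client: client computing time
    client_mem_ok: whether the client memory suffices for the network scale
    """
    # Insufficient client memory forces the remote parallel computer (S202)
    # without evaluating efficiency.
    if not client_mem_ok:
        return "parallel"
    # t_trans + t_ex < t_pro_client: parallel computing is faster (S202);
    # otherwise the client computes locally (S205).
    return "parallel" if t_trans + t_ex < t_pro_client else "client"
```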
- the entire simulation area is discretized by a grid
- the grid may be a square, a rectangle or a hexagon
- the grid point may be configured as a center point of the grid
- the grid point represents all points in the grid area
- the network node includes: a base station, a relay point, a nomadic node, and the like;
- the power of a network node at a grid point refers to the power of the network node after fast fading and slow fading over the path to that grid point; the powers of the network nodes at the grid points are used to determine the interference relationships between network nodes, which facilitates the subsequent division of the network nodes;
- S202 Create a parallel job in the task manager, set the number of parallel CPUs, the job name, the user name, the code required to submit the calculation, and the path of the data.
- the parallel job is used to calculate the power of all network nodes at all grid points.
- S205 The client calculates the power of all network nodes on all grid points.
- S206 Set each parallel CPU function, determine an interference relationship of the network node according to the calculation result, divide the network node and the UE according to the interference relationship, and divide the network node and the UE into the parallel CPU.
- the division principles are: network nodes of different standards or different frequency points are divided into different CPUs; network nodes with data interaction are divided into the same CPU as far as possible; and each UE is divided into the CPU where its access network node resides.
- for this, the reference signal power from all network nodes to all UEs is calculated over the whole network, and each UE is divided into the CPU hosting the network node whose reference signal is strongest at that UE.
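The strongest-reference-signal division rule can be sketched as follows. This is a minimal illustration; the dictionary-based data layout is an assumption, not the platform's actual structure.

```python
def assign_ues_to_cpus(rsrp, node_cpu):
    """rsrp[ue][node]: reference signal power received by `ue` from `node`;
    node_cpu[node]: CPU hosting `node`. Each UE is divided into the CPU
    hosting the network node whose reference signal is strongest at it."""
    return {ue: node_cpu[max(powers, key=powers.get)]
            for ue, powers in rsrp.items()}
```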
- S209 Determine whether all the simulation use cases have been processed; if yes, execute S211, otherwise execute S210.
- the timer is used to periodically query the job status and display the simulation progress.
- the simulation results are recovered and processed, and the jobs in the completed state are deleted.
- Figure 6 corresponds to a simulation platform in an LTE single communication system simulation scenario.
- the simulation platform includes: a client, a task manager, and a parallel CPU.
- the client exchanges simulation code, simulation data, commands, and simulation result data with the parallel CPUs; the client controls and operates the parallel CPUs through the task manager, and the communication is bidirectional.
- the parallel CPU may include the following types of CPUs: a timing control node CPU, a communication system emulation node CPU;
- the timing control node CPU broadcasts the inter-CPU synchronization messages, the current time, and the simulation end flag to all CPU nodes.
- the LTE communication system simulation node CPUs can perform direct point-to-point communication with each other; the exchanged data comprises air interface information, the location information of mobile network nodes and mobile UEs, and control signaling between network elements. The air interface information is used to calculate interference from neighboring cells that do not reside in the same CPU and must be exchanged in real time; the location information is used to calculate real-time channel information; the control signaling carries a delay and must be buffered in the sending CPU for a certain time to implement operations such as cooperation and handover.
- the parallel CPUs may also include a mobile node emulation CPU, such as the LTE mobile node CPU in Figure 6.
- the mobile node emulation CPU is separated out to avoid the load imbalance between CPUs caused by the movement of the sites; if there are no mobile network nodes, this CPU can be omitted.
- Figure 7 corresponds to a simulation platform in another LTE single communication system simulation scenario.
- the LTE single communication system simulation platform of Example 3 also includes a client, a task manager, and parallel CPUs; the difference between Example 3 and Example 2 is that a transit node CPU type is added to the parallel CPU types;
- the transit node CPU is used for buffering and forwarding the signaling of network nodes residing in different CPUs, including handover user information, handover commands, cooperation commands, etc.; the buffering is used to simulate the signaling interaction delay.
- FIG. 8 is a CPU configuration and relationship of a multi-communication system simulation platform.
- the multi-communication system simulation platform of Example 4 also includes: a client, a task manager, and a parallel CPU;
- the parallel CPU may include the following types of CPUs: a timing control node CPU, a communication system simulation node CPU;
- the communication system simulation node CPUs cover multiple standards; each parallel CPU runs the simulation code of only one standard, and communication system simulation node CPUs of the same standard perform direct point-to-point data communication.
- the exchanged data comprises air interface information, the location information of mobile network nodes and UEs, and signaling.
- when inter-system interference is not considered, only signaling is exchanged between different standards; the air interface information must be exchanged in real time, while the signaling must be buffered at the sending side for a certain time.
- FIG. 9 is a diagram showing the CPU configuration and mutual relationship of another multi-communication system simulation platform.
- the multi-communication system simulation platform of Example 5 also includes: a client, a task manager, and a parallel CPU;
- the difference between Example 5 and Example 4 is that the transit node type is added to the parallel CPUs;
- the transit node CPU receives signaling interactions of different standard nodes and buffers and forwards the signaling.
- the communication system simulation node CPU runs the simulation code; nodes of the same standard exchange data directly, mainly air interface information used for the interference calculation between neighboring cells in different CPUs. When mutual interference between systems is not considered, the data exchanged between different standards is signaling only, and it is generally not exchanged directly but through the transit node.
- the parallel CPU implementation process is shown in Figure 10 and includes the following steps:
- S401 Read the data of this CPU. The task manager distributes the same code and data to all parallel CPUs, and each CPU reads the data related to itself according to its own index, such as the common simulation data, the parameter configuration of the network nodes and UEs residing in this CPU, and the parameter configuration of the network nodes and UEs interfering with this CPU.
- S402 Set the function of the CPU; the function configured for this CPU is known from the read data, and the CPU executes that function after it is set.
- S403 Send or receive a broadcast message.
- the timing control node CPU keeps time and decides when a broadcast message is sent; the other node CPUs wait to receive the broadcast message sent by the timing control node CPU.
- step S405 determines whether the simulation ends by reading the simulation end flag from the broadcast message; if the simulation end flag is read, the flow ends; if not, the timing control node CPU executes step S407, and the transit node CPU and the communication system simulation node CPU execute step S406;
- S406 Data exchange between the different CPUs (communication system simulation node CPUs, transit node CPU) according to the communication order, exchanging the air interface information at the current moment and the location information and signaling of mobile stations and mobile users;
- step S407 waits for the data interaction of the other parallel CPUs to complete, achieving communication synchronization among the parallel CPUs; after performing step S407, the timing control node CPU and the transit node CPU jump to S411, while the communication system simulation node CPU proceeds to step S408;
- the communication standard simulation node CPU determines whether the inter-CPU synchronization message broadcast by the timing control node CPU is a synchronization frame of its own standard; if yes, execute S409, otherwise execute S411.
- the timing control node CPU and the transit node CPU may be in a waiting state here to ensure that the code of the parallel CPUs executes synchronously, and then jump to S403.
- the timing control node generates and broadcasts three kinds of messages: the current time, the frame arrival messages of the different systems, and the simulation end flag. It is the heart of the whole system and drives the simulation forward. It accumulates time in units of the minimum simulation time granularity; at each accumulation it performs a modulo operation against the minimum simulation time unit of every standard, and broadcasts a frame arrival message for each standard whose result is 0.
- Figure 11 shows the implementation of the timing control node in the LTE, UMTS, and GSM coexistence simulations.
- the minimum simulation time units of the three modes are: 1ms, 0.667ms, and 0.577ms.
- the minimum simulation time unit of each standard is set in advance, which makes it convenient to judge when to broadcast the frame arrival messages.
- the minimum simulation time units of the three systems are 1ms, 0.667ms, and 0.577ms respectively, and the time accumulation granularity is 1us.
- S505 Perform the modulo operation between iCounter and the minimum simulation time unit of each system;
- S507 Broadcast the frame arrival message of each system whose modulo result is 0, the current time iCounter, and the simulation end flag (set to 0), and then jump to S502.
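The modulo-based frame-arrival test can be sketched as below. Assumptions: iCounter accumulates in 1 us steps, and the UMTS and GSM units (0.667 ms, 0.577 ms) are rounded to whole microseconds for illustration.

```python
def frame_arrivals(icounter_us):
    """Return the standards whose frame boundary falls on the current tick.

    icounter_us: accumulated simulation time in microseconds (1 us steps).
    0.667 ms and 0.577 ms are rounded to 667 us and 577 us here.
    """
    units_us = {"LTE": 1000, "UMTS": 667, "GSM": 577}
    # Broadcast a frame arrival message for every standard whose
    # minimum simulation time unit divides the current counter exactly.
    return [std for std, unit in units_us.items() if icounter_us % unit == 0]
```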
- Parallel CPUs use two communication methods for data interaction: broadcast and peer-to-peer. Frame arrival messages are broadcast to all parallel CPUs.
- the transit node CPU and the communication system emulation node CPUs perform point-to-point communication in turn, and only one pair of CPUs communicates at a time; while one CPU pair is communicating, the other CPUs wait. The CPU pair communication order is generated on the client side, and the signaling interaction between network nodes residing in different parallel CPUs is implemented through the transit node CPU.
- broadcast communication is first performed, and then peer-to-peer data interaction communication is performed.
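The patent fixes only that the CPU pairs take turns (one pair active at a time) and that the order is generated client-side; the sketch below uses a simple lexicographic enumeration as one possible schedule, which is an assumption.

```python
from itertools import combinations

def pairwise_schedule(cpu_ids):
    """One possible client-side communication order: every CPU pair gets
    its own slot, so only one pair communicates per slot while the
    remaining CPUs wait."""
    return list(combinations(sorted(cpu_ids), 2))
```

Each parallel CPU can walk this list and act only in the slots where its own index appears, waiting otherwise.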
- the flow chart of the point-to-point data interaction of the communication system simulation node CPU is shown in Figure 12.
- S601 Read information of a transceiver pair that is currently performing point-to-point communication.
- S603 Determine whether the communication system simulation node CPU communicates with the transit node CPU, if yes, execute S605, otherwise execute S604.
- the data that needs to be sent in real time includes the air interface information of the network nodes in this CPU and the location information of the UEs and mobile stations in this CPU; it is used by the network nodes in other CPUs to calculate the interference at the current time; then jump to S606.
- S606 Execute a data sending and receiving command labsendreceive.
- S609 Determine whether all parallel CPU data interactions are completed, and then directly terminate; otherwise, execute S610.
- S610 Read the next pair of transceiver CPU pair information for performing point-to-point communication, and jump to S602.
- the difference between this example (instance two) and instance one (Example 8) is that there is no transit node to forward signaling.
- broadcast communication is first performed, and then peer-to-peer data interaction communication is performed.
- peer-to-peer data interaction flow chart is shown in Figure 13.
- S701 Read information of a transceiver pair that is currently performing peer-to-peer communication.
- S702 Determine whether the index of this node's CPU is in the current transceiver CPU pair; if yes, execute S703; otherwise, execute S709.
- the real-time interaction data includes the data that needs to be exchanged with the peer CPU, including air interface information, the location information of mobile UEs and mobile stations, and the like; it is used to calculate the interference at the current simulation time and to generate the fast fading information of the interfering UEs and network nodes.
- S704 Determine whether there is signaling, if yes, execute S705; otherwise, jump to S707.
- S705 Determine whether the signaling delay is up, if yes, execute S706, otherwise execute S707.
- S707 Execute the data sending and receiving command labsendreceive.
- S710 Determine whether all parallel CPU data interactions are completed, and then directly terminate; otherwise, execute S711.
- S711 Read the next pair of transceiver CPU pair information, and jump to S702.
- instance three, unlike the previous instances, does not perform point-to-point communication in turn; instead all parallel CPUs communicate simultaneously, there is no transit node CPU, and signaling and data are sent together.
- the flow chart is shown in Figure 14.
- S801 Read real-time interaction data that needs to be sent, and prepare to send;
- the real-time interaction data that needs to be sent includes air interface information, the location information of the UEs and mobile stations, and the like; it is used to calculate the interference at the current simulation time and to generate the fast fading information of the interfering UEs and network nodes.
- S802 Determine whether there is interaction signaling, if there is signaling, execute S803; otherwise, execute S805.
- S803 Determine whether the signaling delay is up, if yes, execute S804; otherwise, execute S805.
- S807 Execute the wait command, wait for other parallel CPUs to complete the data interaction, realize synchronization, and end.
- This part mainly introduces the method and specific simulation implementation of the communication standard simulation node CPU to determine the simulation range.
- the simulation scope of a communication system emulation node includes: the network nodes residing in the CPU and the UEs accessing these network nodes; other network nodes and UEs that strongly interfere with the resident network nodes and UEs of this CPU; and the network nodes in CPUs of other standards.
- the network scope emulated by a single-system CPU should include its network nodes and the UEs accessing these network nodes. Because uplink and downlink interference differ, the simulation scope differs between uplink-only and downlink-only simulation:
- for uplink simulation, the scope includes: the network nodes and UEs residing in the CPU, the UEs residing in other CPUs that strongly interfere with the resident network nodes of this CPU, and the network nodes in CPUs of other standards;
- for downlink simulation, the scope includes: the network nodes and UEs residing in the CPU, the network nodes residing in other CPUs that strongly interfere with the resident UEs, and the network nodes in CPUs of other standards;
- Simultaneous uplink and downlink simulation is the union of the uplink and downlink simulation ranges.
- the method for determining the simulation scope of the network nodes and UEs residing in the CPU is: find the farthest nodes in the four directions (up, down, left, right); take these four points as points on the sides of a rectangle, which determines a rectangular area; expand this rectangle by a certain margin (to prevent UEs from moving out of bounds); the enlarged rectangular area is the simulation scope.
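The expanded bounding rectangle can be sketched as follows; the coordinate-tuple representation and the uniform margin are assumptions for illustration.

```python
def simulation_range(node_positions, margin):
    """Axis-aligned bounding rectangle of the resident nodes, expanded by
    `margin` in every direction so that moving UEs stay inside.
    Returns (x_min, y_min, x_max, y_max)."""
    xs = [x for x, _ in node_positions]
    ys = [y for _, y in node_positions]
    return (min(xs) - margin, min(ys) - margin,
            max(xs) + margin, max(ys) + margin)
```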
- where necessary, another rectangular area is determined by the same method, such as simulation network range 2 of CPU_N in Figure 15.
- if the geographic location of a node (a UE or network node) that interferes with the resident nodes of this CPU is not within the simulation scope determined by the above process, its three-dimensional geographic location information is added to the simulation scope of this CPU separately; this greatly reduces the number of simulation grid points and also makes it possible to simulate three-dimensional scenes.
- this example describes the simulation implementation of the communication system simulation node, taking UE movement and handover simulation as an example.
- the flow is shown in Figure 16.
- S900 Receive a broadcast message.
- S901 Determine whether the simulation is completed, and if yes, end directly, otherwise execute S902.
- S902 Determine whether it is the standard frame arrival message, if yes, execute S903, otherwise execute S912.
- the network node and UE data in the CPU are updated according to the received information.
- S904 Determine whether it is the first subframe, if it is the first subframe, execute S905, otherwise execute S906.
- the UE measures the RSRP (Reference Signal Received Power) of the candidate network nodes and reselects its serving network node. Because the slow fading and fast fading calculated in a single parallel CPU may differ from those calculated at the client, the network node the UE selects may differ from the one selected at the client, so reselection is performed here.
- the candidate network nodes are resident network nodes, interference network nodes and other standard network nodes in the parallel CPU.
- for UEs whose finally selected network node resides in another CPU, the UE_ID, the target network node ID and the target CPU ID are saved, so that the UE data of the CPUs can be updated in S910.
- since downlink interference comes from the signals sent by the network nodes, only the signals and interference from the network nodes to the UEs need to be considered.
- the fast fading and slow fading to be calculated are mainly the signal fading between all the resident network nodes of this CPU plus the interfering network nodes, and the UEs residing in this CPU.
- since uplink interference comes from the uplink signals sent by the UEs, only the signals and interference from the UEs to the network nodes need to be considered.
- the fast fading and slow fading to be calculated are mainly the signal fading between all the resident UEs of this CPU plus the interfering UEs, and the network nodes residing in this CPU.
- for simultaneous uplink and downlink simulation, the fast fading and slow fading to be calculated are the union of the network node and UE combinations of the two one-way simulations above.
- the parallel CPUs do not exchange fast fading and slow fading data; the purpose is to reduce the amount of exchanged data and thus the communication delay. The random seed of the fast fading is derived from the locations of the UE and the network node, which guarantees that the fast fading and slow fading of a given UE and network node are consistent across different CPUs and ensures the integrity of the simulation.
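The location-derived seeding can be sketched as below. The patent fixes only the principle (seed from the UE and node locations so all CPUs reproduce identical fading without exchanging it); the hashing scheme, the time index, and the Gaussian draw are illustrative assumptions.

```python
import hashlib
import random

def fast_fading_sample(ue_pos, node_pos, t):
    """Deterministic fast-fading draw seeded from the UE and network node
    locations (plus a time index), so every CPU holding the same pair
    reproduces the same fading value without any data exchange."""
    key = repr((ue_pos, node_pos, t)).encode()
    # Hash the positions into a reproducible 64-bit seed.
    seed = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return random.Random(seed).gauss(0.0, 1.0)
```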
- when the physical layer of a network node calculates the uplink interference of a single network node, for each resource the resident UEs and interfering UEs allocated the same resource are found; the N1 UEs with the strongest interference to the network node are treated as interference, the N2 UEs with weaker interference are treated as noise, and the interference of the remaining UEs may be ignored.
- correspondingly, for the downlink interference at a UE, the N1 resident and interfering network nodes with the strongest interference are treated as interference, the N2 resident and interfering network nodes with weaker interference are treated as noise, and the remaining interference may be ignored.
- the main processing here is of the commands generated by S905 and S909 that cause UEs in this CPU to switch to other CPUs, and the corresponding data updates: for the reselection in S905 and S909 and the handover admission commands, when a UE reselects or hands over to a network node in another CPU, the information of this UE is deleted from this CPU.
- the transit node acts as an intermediary for signaling interaction; it buffers the signaling to simulate the signaling interaction delay, and it simplifies the code design of the communication standard simulation node CPUs, enabling simulations such as handover, cooperation, and interoperation.
- the data interaction between different CPUs includes: source CPU ID, target CPU ID, cache delay, and message entity.
- the process of CPU data exchange of the transit node is shown in Figure 17.
- S1000 Receive a synchronization frame message, assuming that the currently received synchronization frame message is system A.
- S1003 Determine whether the cache holds any signaling sent by CPUs of this standard to CPU_i; if yes, execute S1004, otherwise execute S1008.
- S1009 Determine whether all the communications are completed; if yes, end directly; otherwise, execute S1010.
- S1010 Read the next pair of transceiver CPU pair information for point-to-point communication, and jump to S1002.
- in the simulation method and system for a large-scale complex wireless communication system provided by the above embodiments, the client reads and processes the simulation configuration parameters, determines the function type of each CPU working in parallel, creates the corresponding simulation tasks, and delivers the simulation tasks to the CPUs working in parallel through the task manager.
- after receiving the simulation task delivered by the task manager, each CPU working in parallel performs data interaction and synchronization with the other CPUs according to the function type configured by the client and runs the simulation code.
- the embodiments of the present invention can relieve the memory pressure and computational pressure faced in the simulation of a large-scale complex wireless communication system, and provide the simulation platform with a flexible and scalable parallel architecture to support the simulation of various requirements, ideas and scenarios.
Abstract
This document discloses a simulation method and system for a large-scale complex wireless communication system. The simulation method includes: a client reads the simulation configuration parameters, determines the function type of each CPU working in parallel, creates corresponding simulation tasks, and delivers the simulation tasks to the CPUs working in parallel through a task manager; a CPU working in parallel receives the simulation task delivered by the task manager, performs data interaction and synchronization with other CPUs according to the function type configured by the client, and runs the simulation code.
Description
This document relates to, but is not limited to, the fields of wireless communication and high-performance computing, and in particular to a simulation method and system for a large-scale complex wireless communication system.
With the development of society, new applications and requirements keep emerging, such as the Internet of Things, public safety and emergency response, which in turn place higher demands on wireless communication technology, mainly: higher data rates, lower latency, more reliable network coverage, meeting the traffic requirements of hotspot areas, and providing high-quality service under mobility.
To meet these requirements, a series of technologies have been proposed: higher frequency bands, larger bandwidths, antenna enhancement techniques and large-scale antenna arrays to increase data rates; relay technology to guarantee cell-edge coverage and increase system capacity; for mobility scenarios, mobile network scenarios including mobile relays and nomadic nodes; to meet the development of the Internet of Things, MTC (Machine Type Communication) technology and the research direction of Massive Machine Communication (MMC); for public safety and coverage enhancement, Device-to-Device (D2D) communication; since 80%-90% of future system throughput will come from indoor and hotspot scenarios, dense cell deployment and 256QAM high-order modulation; to solve the inter-cell interference and frequent handover caused by dense networking, macro-micro cell dual connectivity, vertical beamforming, virtual cells, wireless backhaul, MM (Massive MIMO, massive multiple-input multiple-output) and pencil beamforming; and, to save energy, fast cell on/off and cell discovery techniques.
The introduction of the above technologies gives wireless networks the following development trends: the network scale grows ever larger and changes dynamically with traffic; the network structure becomes ever more complex, mainly reflected in the coexistence and interoperation of multiple standards and frequency points, the coexistence of multiple coverage modes, complex topologies, and irregular networks that change dynamically with the movement of terminals, machines, mobile relays and nomadic nodes; the types of network nodes keep increasing, leading to the coexistence of multiple kinds of wireless links and services and ever more frequent cooperation between network nodes; a variety of physical layer processing techniques will be introduced; and the introduction of more antenna-related technologies inevitably requires accurate modeling of the wireless channel, for example using ray tracing and 3D channel and 3D scene modeling techniques.
Traditional single-core/single-machine simulation platforms can no longer simulate and evaluate the above technologies: they cannot provide enough memory to support large-scale scenario simulation or to store the large amounts of channel and other data produced during simulation; they cannot provide the computing speed needed for large amounts of complex physical layer processing, large-scale antenna array modeling and large amounts of higher-layer logic processing; they cannot provide a flexible architecture to support the simulation of multi-standard coexistence, multi-mode coverage and multi-service coexistence; and they cannot provide a well-extensible structure to support protocol evolution, since adding or deleting network node types as the protocol evolves may force the simulation platform to be rebuilt from scratch. On a single core, simulations of 64-antenna very-large-scale antenna array technology with ray tracing currently run extremely slowly, taking several days to complete one simulation; in ray tracing simulation with accurate channel modeling the memory footprint reaches 100 GB and the computation is very slow, to say nothing of simulating subsequent 5G technologies.
The lag of simulation technology will seriously hinder the research and advancement of 5G standards; therefore, new simulation methods and platforms suitable for evaluating the performance of large-scale complex wireless communication systems need to be studied.
Summary of the Invention
The following is an overview of the subject matter described in detail herein. This overview is not intended to limit the scope of the claims.
This document provides a simulation method and system for a large-scale complex wireless communication system, which can relieve the memory pressure and computational pressure faced in the simulation of large-scale complex wireless communication systems and provide the simulation platform with a flexible and scalable parallel architecture.
An embodiment of the present invention provides a simulation method for a large-scale complex wireless communication system, the method including:
a client reading simulation configuration parameters, determining the function type of each CPU working in parallel, creating corresponding simulation tasks, and delivering the simulation tasks to the CPUs working in parallel through a task manager;
a CPU working in parallel receiving the simulation task delivered by the task manager, performing data interaction and synchronization with other CPUs according to the function type configured by the client, and running the simulation code.
Optionally, the client reading and processing the simulation configuration parameters, determining the function type of each CPU working in parallel and creating the corresponding simulation tasks includes:
according to the number of CPUs working in parallel and the number and parameters of network nodes configured in the simulation configuration parameters, computing the interference relationships between network nodes locally or cooperatively on the CPUs working in parallel, and determining the data interaction relationships between network nodes from the computation results;
determining the function type of each CPU working in parallel, allocating network nodes and user equipment to the CPUs responsible for communication standard simulation according to the data interaction relationships of the network nodes and user equipment, constructing the data interaction relationships between the CPUs working in parallel, and setting the communication order for the CPU pairs that need point-to-point communication;
creating a corresponding simulation task for every CPU working in parallel, the simulation task including data and simulation code;
wherein the function types of the CPUs include: timing control, data transit, and communication standard simulation.
Optionally, allocating network nodes and user equipment to the CPUs responsible for communication standard simulation according to the data interaction relationships of the network nodes and user equipment includes performing at least one of the following:
a) dividing network nodes of different standards or different frequency points into different communication standard simulation CPUs;
b) dividing network nodes of the same standard and frequency point that have data interaction into the same communication standard simulation CPU, and dividing network nodes of the same standard and frequency point into different CPUs according to their interference relationships;
c) dividing each user equipment into the communication standard simulation CPU where its access network node resides;
d) minimizing the data interaction between different parallel CPUs;
e) balancing the computational load between different parallel CPUs.
Optionally, a CPU working in parallel performing data interaction and synchronization with other CPUs according to the function type configured by the client and running the simulation code includes:
when a CPU working in parallel determines from the obtained simulation task that its function is timing control, broadcasting inter-CPU synchronization messages to the other CPUs by running the simulation code.
Optionally, a CPU working in parallel performing data interaction and synchronization with other CPUs according to the function type configured by the client and running the simulation code includes:
when a CPU working in parallel determines from the obtained simulation task that its function is data transit, receiving the synchronization messages broadcast by the timing control CPU and obtaining the transceiver CPU pair information for the current point-to-point communication; when determining that this CPU is currently performing point-to-point communication with a CPU responsible for communication standard simulation, sending, after the delay, the buffered data sent by other CPUs to the current communication peer of this CPU, and receiving and buffering the data sent by the current communication peer of this CPU to other CPUs.
Optionally, a CPU working in parallel performing data interaction and synchronization with other CPUs according to the function type configured by the client and running the simulation code includes:
when a CPU working in parallel determines from the obtained simulation task that its function is communication standard simulation, receiving the synchronization messages broadcast by the timing control CPU and obtaining the transceiver CPU pair information for the current point-to-point communication; when determining that this CPU is currently performing point-to-point communication with another CPU, exchanging data with that CPU, running the network node protocol stack code and the user equipment protocol stack code, and calculating uplink and/or downlink interference.
Optionally, the CPU working in parallel calculating uplink and/or downlink interference includes:
when calculating uplink interference at a target network node residing in this CPU, acquiring the positional relationships and channel models between the target network node and both the user equipment residing in this CPU and the user equipment residing in other CPUs that strongly interferes with the target network node, calculating the slow fading and fast fading of the signals from the acquired information, and determining the uplink interference of the user equipment at the target network node from the fading results;
when calculating downlink interference at a target user equipment residing in this CPU, acquiring the positional relationships and channel models between the target user equipment and both the network nodes residing in this CPU and the network nodes residing in other CPUs that strongly interfere with the target user equipment, calculating the slow fading and fast fading of the signals from the acquired information, and determining the downlink interference of the network nodes at the target user equipment from the fading results.
An embodiment of the present invention provides a simulation system for a large-scale complex wireless communication system, including:
a client, configured to read and process simulation configuration parameters, determine the function type of each CPU working in parallel, create corresponding simulation tasks, and deliver the simulation tasks to the CPUs working in parallel through a task manager;
CPUs working in parallel, configured to receive the simulation tasks delivered by the task manager, perform data interaction and synchronization with other CPUs according to the function type configured by the client, and run the simulation code;
a task manager, configured to receive the simulation tasks submitted by the client and deliver them to the CPUs working in parallel.
Optionally, the client is configured to read and process the simulation configuration parameters, determine the function type of each CPU working in parallel and create the corresponding simulation tasks in the following manner:
according to the number of CPUs working in parallel and the number and parameters of network nodes configured in the simulation configuration parameters, computing the interference relationships between network nodes locally or cooperatively on the CPUs working in parallel, and determining the data interaction relationships between network nodes from the computation results;
determining the function type of each CPU working in parallel, allocating network nodes and user equipment to the CPUs responsible for communication standard simulation according to the data interaction relationships of the network nodes and user equipment, constructing the data interaction relationships between the CPUs working in parallel, and setting the communication order for the CPU pairs that need point-to-point communication;
creating a corresponding simulation task for every CPU working in parallel, the simulation task including data and simulation code;
wherein the function types of the CPUs include: timing control, data transit, and communication standard simulation.
Optionally, the client is configured to allocate network nodes and user equipment to the CPUs responsible for communication standard simulation according to the data interaction relationships of the network nodes and user equipment in the following manner:
performing at least one of the following:
a) dividing network nodes of different standards or different frequency points into different communication standard simulation CPUs;
b) dividing network nodes of the same standard and frequency point that have data interaction into the same communication standard simulation CPU, and dividing network nodes of the same standard and frequency point into different CPUs according to their interference relationships;
c) dividing each user equipment into the communication standard simulation CPU where its access network node resides;
d) minimizing the data interaction between different parallel CPUs;
e) balancing the computational load between different parallel CPUs.
Optionally, the CPUs working in parallel are configured to perform data interaction and synchronization with other CPUs according to the function type configured by the client and run the simulation code in the following manner:
when determining from the obtained simulation task that its function is timing control, a CPU broadcasts inter-CPU synchronization messages to the other CPUs by running the simulation code.
Optionally, the CPUs working in parallel are configured to perform data interaction and synchronization with other CPUs according to the function type configured by the client and run the simulation code in the following manner:
when determining from the obtained simulation task that its function is data transit, a CPU receives the synchronization messages broadcast by the timing control CPU and obtains the transceiver CPU pair information for the current point-to-point communication; when determining that this CPU is currently performing point-to-point communication with a CPU responsible for communication standard simulation, it sends, after the delay, the buffered data sent by other CPUs to the current communication peer of this CPU, and receives and buffers the data sent by the current communication peer of this CPU to other CPUs.
Optionally, the CPUs working in parallel are configured to perform data interaction and synchronization with other CPUs according to the function type configured by the client and run the simulation code in the following manner:
when determining from the obtained simulation task that its function is communication standard simulation, a CPU receives the synchronization messages broadcast by the timing control CPU and obtains the transceiver CPU pair information for the current point-to-point communication; when determining that this CPU is currently performing point-to-point communication with another CPU, it exchanges data with that CPU, runs the network node protocol stack code and the user equipment protocol stack code, and calculates uplink and/or downlink interference.
Optionally, the CPUs working in parallel are configured to calculate uplink and/or downlink interference in the following manner:
when calculating uplink interference at a target network node residing in this CPU, acquiring the positional relationships and channel models between the target network node and both the user equipment residing in this CPU and the user equipment residing in other CPUs that strongly interferes with the target network node, calculating the slow fading and fast fading of the signals from the acquired information, and determining the uplink interference of the user equipment at the target network node from the fading results;
when calculating downlink interference at a target user equipment residing in this CPU, acquiring the positional relationships and channel models between the target user equipment and both the network nodes residing in this CPU and the network nodes residing in other CPUs that strongly interfere with the target user equipment, calculating the slow fading and fast fading of the signals from the acquired information, and determining the downlink interference of the network nodes at the target user equipment from the fading results.
An embodiment of the present invention further provides a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the above method.
Compared with the related art, in the simulation method and system for a large-scale complex wireless communication system provided by the embodiments of the present invention, the client reads and processes the simulation configuration parameters, determines the function type of each CPU working in parallel, creates the corresponding simulation tasks, and delivers the simulation tasks to the CPUs working in parallel through the task manager; after receiving the simulation task delivered by the task manager, each CPU working in parallel performs data interaction and synchronization with the other CPUs according to the function type configured by the client and runs the simulation code. The present invention can relieve the memory pressure and computational pressure faced in the simulation of large-scale complex wireless communication systems, and provides the simulation platform with a flexible and scalable parallel architecture to support the simulation of various requirements, ideas and scenarios.
Other aspects will become apparent upon reading and understanding the drawings and detailed description.
Brief Description of the Drawings
Figure 1 is a schematic diagram of the system architecture of the distributed parallel simulation platform used in embodiments of the present invention.
Figure 2 is a functional schematic of the client, task manager and parallel CPUs in the distributed parallel simulation platform of an embodiment of the present invention.
Figure 3 is a flow chart of a simulation method for a large-scale complex wireless communication system according to an embodiment of the present invention.
Figure 4 is a schematic diagram of a simulation system for a large-scale complex wireless communication system according to an embodiment of the present invention.
Figure 5 is a flow chart of client processing in Example 1 of the present invention (client implementation instance).
Figure 6 is a schematic diagram of the CPU composition and relationships of the single communication standard simulation platform in Example 2 of the present invention (instance one).
Figure 7 is a schematic diagram of the CPU composition and relationships of the single communication standard simulation platform in Example 3 of the present invention (instance two).
Figure 8 is a schematic diagram of the CPU composition and relationships of the multi-standard coexistence simulation platform in Example 4 of the present invention (instance three).
Figure 9 is a schematic diagram of the CPU composition and relationships of the multi-standard coexistence simulation platform in Example 5 of the present invention (instance four).
Figure 10 is a schematic flow chart of the parallel CPU implementation in Example 6 of the present invention.
Figure 11 is a schematic flow chart of the timing control node implementation in Example 7 of the present invention.
Figure 12 is a schematic flow chart of the parallel CPU data interaction implementation in Example 8 of the present invention (instance one).
Figure 13 is a schematic flow chart of the parallel CPU data interaction implementation in Example 9 of the present invention (instance two).
Figure 14 is a schematic flow chart of the parallel CPU data interaction implementation in Example 10 of the present invention (instance three).
Figure 15 is a schematic diagram of determining the simulation network scope of the communication standard simulation CPU in Example 11 of the present invention.
Figure 16 is a flow chart of the communication standard simulation CPU implementation in Example 13 of the present invention.
Figure 17 is a flow chart of the transit node CPU implementation in Example 14 of the present invention.
Detailed Description
To make the objectives, technical solutions and advantages of the present invention clearer, embodiments of the present invention are described in detail below with reference to the drawings. It should be noted that, where there is no conflict, the embodiments in this application and the features in the embodiments may be combined with each other arbitrarily.
The embodiments of the present invention use a distributed parallel simulation platform whose architecture consists of three main parts: a client, a task manager, and parallel CPUs. The client is one or more remote PCs operated by humans, and multiple PCs run independently of each other. The task manager is one CPU in a parallel simulation machine; this CPU may share a simulation machine with other parallel CPUs or occupy one exclusively. There are multiple parallel simulation machines, each with one or more CPUs; these CPUs constitute the parallel CPUs, run in parallel and independently, and each has its own independent memory.
Figure 1 shows the architecture of the parallel system. The client is connected to the simulation laboratory through a network; the simulation laboratory has multiple high-performance simulation machines connected through switches, one of which is set as the task manager via MPI (Message Passing Interface), while the CPUs of the remaining simulation machines constitute the parallel CPUs.
As shown in Figure 2, the main functions and information interactions of the client, task manager and parallel CPUs in the distributed parallel simulation platform are as follows:
The client is one or more mutually independent PCs, possibly in a remote office, from which a human starts the simulation. Its main work is: starting the simulation and reading the configured simulation parameters; allocating simulation network nodes to each parallel CPU according to the configured number of parallel CPUs and the types and number of network nodes in the simulation, the allocation criteria being minimal data interaction between different parallel CPUs and as balanced a computational load between them as possible; setting the type of each parallel CPU; constructing the data interaction relationships between the parallel CPUs from the node allocation results and the data interaction relationships of the different network nodes; submitting the read and pre-processed simulation data and the simulation platform code to the task manager; issuing commands to the task manager and receiving the command responses it returns; and comprehensively processing the simulation results returned by the parallel CPUs.
The task manager may be set up on one simulation machine. Its main work is: receiving the data and code submitted by the client and distributing them to the parallel CPUs; monitoring the running state of each parallel CPU; and executing operations according to the client's commands, such as deleting or canceling parallel simulation tasks and recovering the results of simulation tasks whose state is finished.
The parallel CPUs receive what the task manager distributes and, according to the network node types and network node IDs of each CPU, run the corresponding simulation code, and perform data interaction, synchronization and other operations with the related parallel CPUs.
The parallel CPUs may have four function types: timing control, data transit, mobile node simulation, and communication standard simulation. The timing control function broadcasts inter-CPU synchronization messages to all parallel CPUs and drives the simulation forward. The transit function receives, buffers and forwards the signaling of network nodes belonging to different parallel CPUs, including the signaling of user handover procedures between parallel CPUs and the interaction and cooperation signaling between cells in different CPUs. The mobile node simulation function simulates network nodes that keep moving; they are gathered into one node to prevent their movement from unbalancing the load between parallel CPUs. The communication standard simulation function runs the simulation of network nodes of the different standards; one CPU runs the simulation of only one standard, simulation CPUs of the same communication standard exchange data directly point-to-point, the CPU pairs communicate in turn, and the exchanged content includes real-time information such as air interface information and the location information of mobile users and mobile network nodes.
Compared with a single-core simulation platform, the distributed parallel simulation platform can run very-large-scale network simulations without being limited by memory; it can perform complex physical layer computation, ray tracing simulation and complex channel modeling without sacrificing speed; it can simulate multi-standard coexistence with mutual interference, cooperation and interoperation without increasing code complexity; it supports the evolution of communication protocols well, since when the protocol changes only the changed parts need to be modified rather than rebuilding the platform from scratch, for example when a communication mode needs to be designed for some application, it suffices to add one parallel CPU and code the simulation model in it; and it greatly improves the computational efficiency of simulation, especially for simulations that must produce results quickly.
As shown in Figure 3, an embodiment of the present invention provides a simulation method for a large-scale complex wireless communication system, the method including:
S101: the client reads the simulation configuration parameters, determines the function type of each CPU working in parallel, creates the corresponding simulation tasks, and delivers the simulation tasks to the CPUs working in parallel through the task manager;
S102: a CPU working in parallel receives the simulation task delivered by the task manager, performs data interaction and synchronization with other CPUs according to the function type configured by the client, and runs the simulation code.
Herein, the client reading and processing the simulation configuration parameters, determining the function type of each CPU working in parallel and creating the corresponding simulation tasks includes:
according to the number of CPUs working in parallel and the number and parameters of network nodes configured in the simulation configuration parameters, computing the interference relationships between network nodes locally or cooperatively on the CPUs working in parallel, and determining the data interaction relationships between network nodes from the computation results;
determining the function type of each CPU working in parallel, allocating network nodes and user equipment to the CPUs responsible for communication standard simulation according to the data interaction relationships of the network nodes and user equipment, constructing the data interaction relationships between the CPUs working in parallel, and setting the communication order for the CPU pairs that need point-to-point communication;
creating a corresponding simulation task for every CPU working in parallel, the simulation task including data and simulation code;
wherein the function types of the CPUs include: timing control, data transit, and communication standard simulation.
其中,所述根据网络节点、用户设备的数据交互关系为负责通信制式仿真的CPU分配网络节点和用户设备,包括进行以下至少一种处理:
a)将异制式异频点的网络节点划分在不同的通信制式仿真CPU中;
b)将存在数据交互的同制式同频点的网络节点划分在同一种通信制式仿真CPU中,同制式同频点的网络节点,根据网络节点的干扰关系划分到不同的CPU中;
c)将用户设备划分到其接入网络节点所在的通信制式仿真CPU中。
d)不同并行CPU之间数据交互最少;
e)不同并行CPU之间计算量均衡。
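Rules a)–e) above can be sketched as a simple greedy partition: nodes of different (format, frequency) never share a CPU group, and within one group the heaviest nodes are placed first on the currently lightest CPU to keep the load balanced. This is an illustrative sketch, not the patent's algorithm; the node fields (`id`, `format`, `freq`, `load`) and the heaviest-first heuristic are assumptions.

```python
from collections import defaultdict

def partition_nodes(nodes, cpus_per_group):
    """Assign network nodes to format-simulation CPUs (illustrative).

    Nodes of different (format, frequency) go to different CPU groups;
    within one group, nodes are spread greedily so the per-CPU load
    stays balanced. `nodes` is a list of dicts with hypothetical keys
    'id', 'format', 'freq', 'load'.
    Returns a mapping: node id -> ((format, freq) group, cpu index).
    """
    assignment = {}
    groups = defaultdict(list)
    for n in nodes:
        groups[(n["format"], n["freq"])].append(n)

    for key, members in groups.items():
        loads = [0.0] * cpus_per_group
        # heaviest nodes first, each placed on the currently lightest CPU
        for n in sorted(members, key=lambda x: -x["load"]):
            i = loads.index(min(loads))
            loads[i] += n["load"]
            assignment[n["id"]] = (key, i)
    return assignment
```

A real assignment would additionally weigh the cross-CPU data-exchange volume of rule d); the sketch covers only the frequency/format separation and load balancing.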
Here, a parallel CPU exchanging data and synchronizing with other CPUs according to the functional type configured by the client, and running the simulation code, includes: when a parallel CPU determines from the obtained simulation task that its function is timing control, it broadcasts the inter-CPU synchronization messages to the other CPUs by running the simulation code.
Here, a parallel CPU exchanging data and synchronizing with other CPUs according to the functional type configured by the client, and running the simulation code, includes: when a parallel CPU determines from the obtained simulation task that its function is data relay, it receives the synchronization message broadcast by the timing-control CPU, obtains the information on the transmit/receive CPU pair currently performing point-to-point communication, and, upon determining that this CPU is currently communicating point-to-point with a communication-format-simulation CPU, sends out, after the configured delay, the buffered data that other CPUs addressed to this CPU's current peer, and receives and buffers the data that this CPU's current peer sends to other CPUs.
Here, a parallel CPU exchanging data and synchronizing with other CPUs according to the functional type configured by the client, and running the simulation code, includes: when a parallel CPU determines from the obtained simulation task that its function is communication-format simulation, it receives the synchronization message broadcast by the timing-control CPU, obtains the information on the transmit/receive CPU pair currently performing point-to-point communication, and, upon determining that this CPU is currently communicating point-to-point with another CPU, exchanges data with that CPU, runs the network-node protocol stack code and the UE protocol stack code, and computes the uplink and/or downlink interference.
Here, the parallel CPU computing the uplink and/or downlink interference includes:
when computing the uplink interference for a target network node resident on this CPU, obtaining the positional relationships and channel models between the UEs resident on this CPU and the target network node, and between the UEs resident on other CPUs that strongly interfere with the target network node and the target network node; computing the slow and fast fading of the signals from the obtained information; and determining the uplink interference of the UEs to the target network node from the computed signal fading;
when computing the downlink interference for a target UE resident on this CPU, obtaining the positional relationships and channel models between the network nodes resident on this CPU and the target UE, and between the network nodes resident on other CPUs that strongly interfere with the target UE and the target UE; computing the slow and fast fading of the signals from the obtained information; and determining the downlink interference of the network nodes to the target UE from the computed signal fading.
As shown in Fig. 4, an embodiment of the present invention provides a simulation system for a large-scale complex wireless communication system, including:
a client, configured to read the simulation configuration parameters, determine the functional type of each CPU working in parallel, create the corresponding simulation tasks, and deliver the simulation tasks to the parallel CPUs through the task manager;
parallel CPUs, configured to receive the simulation tasks delivered by the task manager, exchange data and synchronize with other CPUs according to the functional types configured by the client, and run the simulation code;
a task manager, configured to receive the simulation tasks submitted by the client and deliver them to the parallel CPUs.
Here, the client is configured to read and process the simulation configuration parameters, determine the functional type of each parallel CPU, and create the corresponding simulation tasks as follows:
according to the number of parallel CPUs and the number and parameters of network nodes given in the configuration, computing the interference relationships among the network nodes either locally or cooperatively on the parallel CPUs, and determining the data-exchange relationships among the network nodes from the result;
determining the functional type of each parallel CPU; assigning network nodes and UEs to the CPUs responsible for communication-format simulation according to the data-exchange relationships of the network nodes and UEs; constructing the data-exchange relationships among the parallel CPUs; and setting the communication order for the CPU pairs that need point-to-point communication;
creating a simulation task for every parallel CPU, the task including data and simulation code;
where the functional types of the CPUs include timing control, data relay, and communication-format simulation.
Here, the client is configured to assign network nodes and UEs to the communication-format-simulation CPUs according to their data-exchange relationships as follows:
performing at least one of the following:
a) placing network nodes of different formats or different carrier frequencies on different communication-format-simulation CPUs;
b) placing same-format, same-frequency network nodes that exchange data on the same communication-format-simulation CPU, while dividing same-format, same-frequency network nodes among different CPUs according to their interference relationships;
c) placing each UE on the communication-format-simulation CPU that hosts its access network node;
d) minimizing data exchange between different parallel CPUs;
e) balancing the computational load across the parallel CPUs.
Here, a parallel CPU is configured to exchange data and synchronize with other CPUs according to the functional type configured by the client and run the simulation code as follows:
when it determines from the obtained simulation task that its function is timing control, it broadcasts the inter-CPU synchronization messages to the other CPUs by running the simulation code.
Here, a parallel CPU is configured to exchange data and synchronize with other CPUs according to the functional type configured by the client and run the simulation code as follows:
when it determines from the obtained simulation task that its function is data relay, it receives the synchronization message broadcast by the timing-control CPU, obtains the information on the transmit/receive CPU pair currently performing point-to-point communication, and, upon determining that this CPU is currently communicating point-to-point with a communication-format-simulation CPU, sends out, after the configured delay, the buffered data that other CPUs addressed to this CPU's current peer, and receives and buffers the data that this CPU's current peer sends to other CPUs.
Here, a parallel CPU is configured to exchange data and synchronize with other CPUs according to the functional type configured by the client and run the simulation code as follows:
when it determines from the obtained simulation task that its function is communication-format simulation, it receives the synchronization message broadcast by the timing-control CPU, obtains the information on the transmit/receive CPU pair currently performing point-to-point communication, and, upon determining that this CPU is currently communicating point-to-point with another CPU, exchanges data with that CPU, runs the network-node protocol stack code and the UE protocol stack code, and computes the uplink and/or downlink interference.
Here, a parallel CPU is configured to compute the uplink and/or downlink interference as follows:
when computing the uplink interference for a target network node resident on this CPU, obtaining the positional relationships and channel models between the UEs resident on this CPU and the target network node, and between the UEs resident on other CPUs that strongly interfere with the target network node and the target network node; computing the slow and fast fading of the signals from the obtained information; and determining the uplink interference of the UEs to the target network node from the computed signal fading;
when computing the downlink interference for a target UE resident on this CPU, obtaining the positional relationships and channel models between the network nodes resident on this CPU and the target UE, and between the network nodes resident on other CPUs that strongly interfere with the target UE and the target UE; computing the slow and fast fading of the signals from the obtained information; and determining the downlink interference of the network nodes to the target UE from the computed signal fading.
Example 1
Client implementation example
This example describes the client's processing when simulating multiple simulation cases: the client reads each case's simulation data in turn, creates a separate parallel Job for each case, and submits it. The flow is shown in Fig. 5.
S200: read the simulation configuration data from the parameter table;
where the simulation configuration data include the simulation duration, the number of parallel CPUs, the path-loss model, the fast-fading model, the slow-fading model, the network scale, and so on.
S201: choose whether to compute the power of the network nodes at the grid points on the client or on the remote parallel machines. If computation on the client is chosen, execute step S205; if computation on the remote parallel machines is chosen, execute step S202;
The memory pressure and computational efficiency of parallel computation versus the client are evaluated. In parallel computation all the network nodes are divided into several parts, which effectively relieves memory pressure and may improve efficiency: a single parallel CPU computes the power of only its part of the network nodes at all grid points, whereas the client computes the power of all network nodes at all grid points.
Memory is assessed from the client's memory and the network scale. Efficiency is assessed as follows: let ttrans be the time for submitting the parallel task and collecting its results, tex the parallel computation time, and tpro_client the client's computation time; all of these can be measured in preliminary tests. If the client has enough memory and ttrans + tex ≥ tpro_client, the client is more efficient and S205 is executed — this typically occurs when the network is small; if ttrans + tex < tpro_client, parallel computation is more efficient and S202 is executed. If the client does not have enough memory, the efficiency assessment may be skipped and S202 executed directly.
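The memory/efficiency decision just described reduces to a one-line rule. A minimal sketch (function and parameter names are ours, not the patent's; the times are assumed to come from the preliminary test runs the text mentions):

```python
def choose_compute_site(t_trans, t_ex, t_pro_client, client_mem_ok):
    """Decide where to compute node powers on the grid.

    With enough client memory, compare t_trans + t_ex against
    t_pro_client; without enough memory, always go parallel.
    Returns 'client' (step S205) or 'parallel' (step S202).
    """
    if not client_mem_ok:
        return "parallel"
    # ttrans + tex >= tpro_client means the client is at least as fast
    return "client" if t_trans + t_ex >= t_pro_client else "parallel"
```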
Here, the whole simulation area is discretized into a grid; the grid cells may be square, rectangular, or hexagonal; a grid point may be configured as the center of its cell and represents all points within that cell; the size and shape of the cells are configurable;
Here, network nodes include base stations, relay points (Relay), nomadic nodes, and so on;
Here, the power of a network node at a grid point is the node's transmit power as received at the grid point after fast and slow fading; the interference among network nodes is determined by computing each node's power at the grid points, which facilitates the subsequent partitioning of the network nodes;
S202: create a parallel Job in the task manager, setting the number of parallel CPUs, the Job name, the user name, the paths of the code and data needed for the computation, and so on; this parallel Job computes the power of all network nodes at all grid points.
S203: submit the parallel Job.
S204: wait for the computation to finish, collect the results, and jump to S206.
S205: the client computes the power of all network nodes at all grid points.
S206: set the function of every parallel CPU; determine the interference relationships of the network nodes from the computed results; partition the network nodes and UEs according to the interference relationships and assign them to the parallel CPUs.
The partitioning principles are: network nodes of different formats or frequencies go to different CPUs; when partitioning same-format, same-frequency network nodes, nodes that exchange data are placed on the same CPU as far as possible; and each UE is placed on the CPU of its access network node. Across the whole network, the reference-signal power from every network node to every UE is computed, the network node with the strongest reference signal is found for each UE, and the UE is placed on the CPU hosting that network node.
The data-exchange relationships among the CPUs and the communication order of the point-to-point CPU pairs are then constructed;
S207: create a parallel Job for the formal simulation.
S208: submit the parallel Job.
S209: check whether all simulation cases have been processed; if so, execute S211, otherwise execute S210.
S210: switch to the next simulation case and execute S200.
S211: start a timer;
where the timer periodically queries the Job states and displays the simulation progress; when a Job reaches the finished state, its simulation results are collected and processed and the finished Job is deleted.
Example 2
Simulation platform CPU composition and relationships, instance 1
Fig. 6 shows the simulation platform for an LTE single-format simulation scenario.
The platform comprises the client, the task manager, and the parallel CPUs, which exchange simulation code, simulation data, commands, and simulation result data; the client controls and operates the parallel CPUs through the task manager. Communication among them may be bidirectional.
The parallel CPUs may include the following types: timing-control node CPU and communication-format-simulation node CPU;
where the timing-control node CPU broadcasts the inter-CPU synchronization message, the current time, and the simulation-end flag to all CPU nodes.
LTE communication-format-simulation node CPUs can communicate with one another directly point-to-point; the exchanged data are air-interface information, the positions of mobile network nodes and mobile UEs, control signaling between network elements, and so on. The air-interface information is used to compute neighbor-cell interference for cells on different CPUs and must be exchanged in real time; the position information is used to compute real-time channel information; control signaling is subject to delay and must be buffered on the sending CPU for a period before transmission, in order to carry out coordination, handover, and similar operations.
The parallel CPUs may also include a mobile-node-simulation CPU, such as the LTE mobile-node CPU in Fig. 6. A dedicated mobile-node-simulation CPU avoids the load imbalance among CPUs that site movement would otherwise cause; if there are no mobile network nodes, this CPU can be removed.
Example 3
Simulation platform CPU composition and relationships, instance 2
Fig. 7 shows the simulation platform for another LTE single-format simulation scenario.
Similar to the LTE single-format platform of Example 2, the platform of Example 3 also comprises the client, the task manager, and the parallel CPUs; the difference from Example 2 is that a relay node type is added to the parallel CPU types;
The relay node CPU buffers and forwards signaling between network nodes on different CPU nodes, including handover user information, handover commands, coordination commands, and so on; the buffering models the signaling-exchange delay.
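The buffer-then-forward behavior of the relay node can be illustrated with a small tick-driven buffer. This is a sketch under assumed message fields (`src_cpu`, `dst_cpu`, `delay_ticks`, `payload`), not the patent's implementation; it only shows how holding a message for a number of simulation ticks models the signaling delay.

```python
from collections import deque

class RelayBuffer:
    """Minimal sketch of the relay node's delayed signaling forwarding."""

    def __init__(self):
        self.pending = deque()

    def push(self, src_cpu, dst_cpu, delay_ticks, payload):
        """Accept signaling from src_cpu addressed to dst_cpu."""
        self.pending.append([src_cpu, dst_cpu, delay_ticks, payload])

    def tick(self):
        """Advance one simulation tick; return (dst_cpu, payload) pairs
        whose modeled delay has expired and are now due for forwarding."""
        due, still = [], deque()
        for msg in self.pending:
            msg[2] -= 1
            if msg[2] <= 0:
                due.append((msg[1], msg[3]))
            else:
                still.append(msg)
        self.pending = still
        return due
```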
Example 4
Simulation platform CPU composition and relationships, instance 3
Fig. 8 shows the CPU composition and relationships of a multi-format simulation platform.
Similar to the LTE single-format platform of Example 2, the multi-format platform of Example 4 also comprises the client, the task manager, and the parallel CPUs;
where the parallel CPUs may include the following types: timing-control node CPU and communication-format-simulation node CPU;
Example 4 differs from Example 2 in that the communication-format-simulation node CPUs cover multiple formats. Each parallel CPU runs the simulation code of only one format, and the communication-format-simulation node CPUs communicate directly point-to-point. Same-format CPUs exchange air-interface information, the positions of mobile network nodes and UEs, and signaling; when inter-system interference is not considered, different-format CPUs exchange only signaling. Air-interface information must be exchanged in real time, while signaling must be buffered at the sender for a period before transmission.
Example 5
Simulation platform CPU composition and relationships, instance 4
Fig. 9 shows the CPU composition and relationships of another multi-format simulation platform.
Similar to the multi-format platform of Example 4, the platform of Example 5 also comprises the client, the task manager, and the parallel CPUs;
Example 5 differs from Example 4 in that a relay node type is added to the parallel CPUs;
The relay node CPU receives the signaling exchanged between nodes of different formats and buffers and forwards it.
The communication-format-simulation node CPUs run the simulation code; same-format nodes exchange data directly, mainly air-interface information used to compute the interference of neighbor cells on different CPUs. When inter-system interference is not considered, the only data exchanged between different formats is signaling, which generally is not exchanged directly but via the relay node.
Example 6
Parallel CPU running flow, instance 1
The parallel CPU implementation flow, shown in Fig. 10, includes the following steps:
S400: read the simulation data; after the task manager finishes distributing the data, all the parallel CPUs load the simulation configuration parameters it distributed.
S401: read this CPU's own data. The task manager distributes the same code and data to all parallel CPUs, and each CPU reads the data relevant to itself according to its own index, such as the common simulation data, the parameter configuration of the network nodes and UEs resident on this CPU, and the parameter configuration of the interfering network nodes and UEs of this CPU.
S402: set this CPU's function; the function configured for this CPU is known from the data just read, and once the function is set, the CPU performs it.
S403: send or receive the broadcast message;
The timing-control node CPU keeps time and decides when to send the broadcast message; the other node CPUs wait to receive the broadcast sent by the timing-control node CPU.
S404: wait until the other parallel CPUs have received the broadcast message, achieving communication synchronization among the parallel CPUs.
S405: check whether the simulation has ended by reading the simulation-end flag from the broadcast message. If the flag is read, end the flow; otherwise the timing-control node CPU executes step S407, and the relay node CPUs and communication-format-simulation node CPUs execute step S406;
S406: the different CPUs (communication-format-simulation node CPUs and relay node CPUs) exchange data in the configured communication order: the air-interface information of the current instant, the positions of mobile sites and mobile users, and signaling;
S407: wait until the other parallel CPUs have completed their data exchange, achieving communication synchronization among the parallel CPUs; after executing step S407, the timing-control node CPU and relay node CPUs jump to S411, while the communication-format-simulation node CPUs proceed to step S408;
S408: a communication-format-simulation node CPU checks whether the inter-CPU synchronization message broadcast by the timing-control node CPU is a synchronization frame of its own format; if so it executes S409, otherwise S411.
S409: run the simulation code of this communication format.
S410: generate the data to exchange.
S411: wait for the other CPUs to finish; once the parallel CPUs' code execution is synchronized, jump to S403;
Here, while the communication-format-simulation node CPUs execute S408–S410, the timing-control node CPU and relay node CPUs may remain in a waiting state, ensuring that the parallel CPUs' code execution stays synchronized, and then jump to S403.
Example 7
Timing-control node CPU driving the simulation platform, instance 1
The timing-control node generates and broadcasts three kinds of messages — the current time, frame-arrival messages for the different formats, and the simulation-end flag — and is the heart of the whole system, driving the simulation. It accumulates time in units of the minimum simulation time granularity; after each increment it takes the accumulated value modulo the minimum simulation time unit of every format and broadcasts a frame-arrival message for each format whose remainder is 0. Fig. 11 illustrates the timing-control node implementation with a coexistence simulation of the LTE, UMTS, and GSM formats, whose minimum simulation time units are 1 ms, 0.667 ms, and 0.577 ms respectively.
S500: set the simulation duration;
Setting the simulation duration makes it possible to decide whether the simulation has ended.
S501: set the minimum simulation time unit of each format in the coexistence simulation;
Setting the minimum simulation time units of the formats makes it possible to decide later at which instants to broadcast the frame-arrival messages.
S502: increment the counter iCounter; the smallest increment is the minimum time granularity;
With the three formats' minimum simulation time units of 1 ms, 0.667 ms, and 0.577 ms, the minimum accumulation granularity is 1 µs.
S503: check whether the simulation has ended; if so execute S504, otherwise S505.
S504: broadcast a message with the simulation-end flag set to 1;
S505: take iCounter modulo the minimum simulation time granularity of each format;
S506: check whether the remainder is 0 for at least one format; if so execute S507, otherwise return to S502;
S507: broadcast the frame-arrival messages of the formats whose remainder is 0, together with the current time iCounter and the simulation-end flag (set to 0), then jump to S502.
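Steps S500–S507 can be sketched as a generator that replaces the actual broadcast with yielded events. Assumptions made here: an integer microsecond granularity, and rounding the 0.667 ms and 0.577 ms units to 667 µs and 577 µs so the modulo test is exact.

```python
def timing_control(sim_len_us, frame_units_us, tick_us=1):
    """Sketch of the timing-control loop of steps S500-S507.

    `frame_units_us` maps a format name to its minimum simulation time
    unit in microseconds (e.g. LTE 1 ms -> 1000, UMTS 0.667 ms -> 667,
    GSM 0.577 ms -> 577). Instead of broadcasting, yields
    (iCounter, formats_due) for every tick at which at least one
    format's frame arrives.
    """
    i_counter = 0
    while i_counter < sim_len_us:
        i_counter += tick_us                      # S502: accumulate
        due = [fmt for fmt, unit in frame_units_us.items()
               if i_counter % unit == 0]          # S505/S506: modulo test
        if due:
            yield i_counter, due                  # S507: end flag = 0
    # S503/S504: here a real node would broadcast with end flag = 1
```

For example, with LTE (1000 µs) and UMTS (667 µs) the first events fall at 667 µs (UMTS), 1000 µs (LTE), 1334 µs (UMTS), and so on.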
Example 8
Communication-format-simulation node CPU communication, instance 1
The parallel CPUs exchange data in two communication modes: broadcast and point-to-point. Frame-arrival messages are broadcast to all parallel CPUs. The relay node CPU and the communication-format-simulation node CPUs communicate point-to-point in turn: only one CPU pair communicates at a time, the other CPUs wait while a pair is communicating, the order of the CPU pairs is generated at the client, and the signaling between network nodes on different parallel CPUs passes through the relay node CPU.
On the platform, broadcast communication takes place first, followed by the point-to-point data exchange. The point-to-point data-exchange flow of a communication-format-simulation node CPU is shown in Fig. 12.
S601: read the information on the transmit/receive CPU pair currently performing point-to-point communication.
S602: check whether this node CPU's index is in the current transmit/receive CPU pair; if so execute S603, otherwise S608.
S603: check whether this communication-format-simulation node CPU is communicating with the relay node CPU; if so execute S605, otherwise S604.
S604: read the data that must be sent in real time;
where the data to be sent in real time include the air-interface information of the network nodes on this CPU, the positions of the UEs and mobile sites on this CPU, and so on, used by network nodes on other CPUs to compute the interference of the current instant; then jump to S606.
S605: read the signaling data.
S606: execute the send/receive command labsendreceive.
S607: save the received data.
S608: execute the wait command, achieving communication synchronization among the parallel CPUs.
S609: check whether all parallel CPUs have completed their data exchange; if so end directly, otherwise execute S610.
S610: read the information on the next point-to-point transmit/receive CPU pair and jump to S602.
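The patent says the client generates the pair order and only one pair communicates at a time, but does not specify the order itself. As an assumption, a simple schedule visits every unordered CPU pair once, in index order; each CPU then filters out its own turns, mirroring steps S601/S602.

```python
from itertools import combinations

def pair_schedule(cpu_indices):
    """Illustrative point-to-point pair order: every unordered pair
    of CPU indices once, sorted. The real order is produced at the
    client and may differ."""
    return list(combinations(sorted(cpu_indices), 2))

def my_turns(schedule, my_index):
    """Steps S601/S602: the pairs in which a given CPU participates;
    for all other pairs the CPU executes the wait command (S608)."""
    return [pair for pair in schedule if my_index in pair]
```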
Example 9
Communication-format-simulation node CPU communication, instance 2
The difference from instance 1 of Example 8 is that there is no relay node forwarding the signaling.
On the platform, broadcast communication takes place first, followed by the point-to-point data exchange. The point-to-point data-exchange flow is shown in Fig. 13.
S701: read the information on the transmit/receive CPU pair currently performing point-to-point communication.
S702: check whether this node CPU's index is in the current transmit/receive CPU pair; if so execute S703, otherwise S709.
S703: read the real-time exchange data;
where the real-time exchange data are the data to be exchanged with the peer CPU, including air-interface information, the positions of mobile UEs and mobile sites, and so on, used at the current simulation instant to compute interference and to generate the fast-fading information of the interfering UEs and network nodes.
S704: check whether there is signaling to send; if so execute S705, otherwise jump to S707.
S705: check whether the signaling delay has expired; if so execute S706, otherwise S707.
S706: read the signaling data.
S707: execute the send/receive command labsendreceive.
S708: save the received data.
S709: execute the wait command, achieving synchronization among the parallel CPUs.
S710: check whether all parallel CPUs have completed their data exchange; if so end directly, otherwise execute S711.
S711: read the information on the next transmit/receive CPU pair and jump to S702.
Example 10
Communication-format-simulation node CPU communication, instance 3
Instance 3 does not use point-to-point communication in turn; instead all parallel CPUs communicate simultaneously, there is no relay node CPU, and signaling and data are sent together. The flow is shown in Fig. 14.
S800: check whether there are real-time data to exchange; if so execute S801, otherwise S802.
S801: read the real-time exchange data to be sent out and prepare the transmission;
where the real-time exchange data to be sent include air-interface information, the positions of UEs and mobile sites, and so on, used at the current simulation instant to compute interference and to generate the fast-fading information of the interfering UEs and network nodes.
S802: check whether there is exchange signaling to send; if so execute S803, otherwise S805.
S803: check whether the signaling delay has expired; if so execute S804, otherwise S805.
S804: read the signaling data.
S805: execute the gcat data-exchange command.
S806: read and save the data sent to this CPU.
S807: execute the wait command, wait for the other parallel CPUs to complete their data exchange so as to achieve synchronization, and end.
Example 11
Communication-format-simulation node CPU simulation, instance 1
This part mainly describes how a communication-format-simulation node CPU determines its simulation range, and the concrete simulation implementation.
From the client implementation it follows that the simulation scope of a communication-format-simulation node covers: the network nodes resident on this CPU and the UEs accessing them; the network nodes and UEs on other CPUs that strongly interfere with the network nodes and UEs resident on this CPU; and the network nodes on CPUs of other formats.
The network range simulated by a single-format CPU should cover the network nodes and the UEs accessing them; because uplink and downlink interference differ, the simulation range differs when uplink and downlink are simulated separately:
Uplink: the range covers the network nodes and UEs resident on this CPU, the UEs resident on other CPUs that strongly interfere with this CPU's resident network nodes, and the network nodes on other-format CPUs;
Downlink: the range covers the network nodes and UEs resident on this CPU, the network nodes resident on other CPUs that strongly interfere with this CPU's resident UEs, and the network nodes on other-format CPUs;
When uplink and downlink are simulated together, the range is the union of the uplink and downlink ranges.
As shown in Fig. 15, the simulation range of the network nodes and UEs resident on this CPU is determined as follows: among these nodes, find the node farthest in each of the four directions — up, down, left, and right; take these four points as points on the sides of a rectangle to determine a rectangular area; then enlarge this rectangle by a margin (to prevent UEs from moving out of bounds); the enlarged rectangle is the simulation range. In a 3D scenario simulation, if the resident network nodes are not contiguous, another rectangle is determined in the same way, for example simulation network range 2 of CPU_N in Fig. 15.
For nodes (both UEs and network nodes) that interfere with the resident nodes of this CPU but whose geographical position lies outside the range determined above, their 3D position information is added individually to this CPU's simulation range; this greatly reduces the number of simulated grid points and also enables 3D scenario simulation.
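The rectangle construction of Fig. 15 can be sketched in a few lines. This is an illustrative 2D sketch; the four extreme nodes define the rectangle and a uniform `margin` on every side keeps moving UEs inside.

```python
def simulation_range(positions, margin):
    """Enlarged bounding rectangle of the nodes resident on one CPU.

    `positions` is a list of (x, y) node coordinates; the extreme
    nodes in the up/down/left/right directions define the rectangle,
    which is then enlarged by `margin` on every side.
    Returns (x_min, y_min, x_max, y_max).
    """
    xs = [p[0] for p in positions]
    ys = [p[1] for p in positions]
    return (min(xs) - margin, min(ys) - margin,
            max(xs) + margin, max(ys) + margin)
```

Interfering nodes outside this rectangle would, per the text, be added as individual 3D positions rather than by enlarging the rectangle further.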
UE movement and handover simulation are used to illustrate how a communication-format-simulation node runs the simulation; the flow is shown in Fig. 16.
S900: receive the broadcast message.
S901: check whether the simulation is complete; if so end directly, otherwise execute S902.
S902: check whether the message is a frame-arrival message of this format; if so execute S903, otherwise S912.
S903: data update;
the network node and UE data on this CPU are updated according to the received information.
S904: check whether this is the first subframe; if so execute S905, otherwise S906.
S905: cell reselection, then jump to S910;
When the client partitions the UEs, the network scale used for the RSRP (Reference Signal Receiving Power) computation may differ from the network scale used for the RSRP computation on a single parallel CPU, so the computed slow and fast fading may differ, and the network node finally selected by a UE on a single parallel CPU may differ from the one selected at the client. Reselection is therefore performed here, with the candidate network nodes being the resident network nodes, the interfering network nodes, and the other-format network nodes on the parallel CPU. The UE_ID, target network node ID, and target CPU ID of each UE that finally selects a network node on another CPU are saved, so that this CPU's UE data can be updated in S910.
S906: generate channel data for the protocol-stack simulation;
the channel data generated for uplink and downlink differ;
For the downlink, since interference comes from signals transmitted by network nodes, only the signals and interference from network nodes to UEs need to be considered; the fast and slow fading are mainly the signal fading between all resident and interfering network nodes of this CPU and the UEs resident on this CPU.
For the uplink, since interference comes from uplink signals transmitted by UEs, only the signals and interference from UEs to network nodes need to be considered; the fast and slow fading are mainly the signal fading between all resident and interfering UEs of this CPU and the network nodes resident on this CPU.
When uplink and downlink are simulated together, the fast and slow fading to be computed are the combination of the network nodes and UEs of the two unidirectional cases above.
The parallel CPUs need not exchange fast- and slow-fading data, which reduces the amount of data exchanged and lowers the communication delay: the random numbers of the fast fading are seeded with the positions of the UE and the network node, which guarantees that the fast and slow fading of a UE and a network node are identical on different CPUs and preserves the integrity of the simulation.
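The position-seeded consistency trick can be illustrated as follows. The patent only states that the positions serve as the random seed; hashing the positions into a seed, including the subframe index, and drawing a Rayleigh amplitude are our assumptions, chosen to show that any CPU holding the same pair of positions regenerates the identical fading value without exchanging it.

```python
import hashlib
import math
import random

def fast_fading_sample(ue_pos, node_pos, subframe):
    """Deterministic fast-fading draw seeded by node/UE positions.

    Any CPU that knows (ue_pos, node_pos, subframe) reproduces the
    same value locally, so fading data never has to cross CPUs.
    Returns a Rayleigh-distributed amplitude (illustrative model).
    """
    key = repr((ue_pos, node_pos, subframe)).encode()
    seed = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    rng = random.Random(seed)
    # Rayleigh amplitude from two independent Gaussian components
    return math.hypot(rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0))
```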
S907: run the network-node-side protocol stack code;
When the network-node physical layer computes the uplink interference of a single network node, for a single resource it finds the resident and interfering UEs allocated the same resource, treats the N1 UEs that interfere most strongly with that network node as interference and the N2 somewhat weaker UEs as noise, and may ignore the interference of the remaining UEs.
S908: run the UE-side protocol stack code;
When the UE physical layer computes the downlink interference of a single UE, the N1 strongest interfering resident and interfering network nodes are computed as interference, the N2 somewhat weaker resident and interfering network nodes are computed as noise, and the remaining interference may be ignored.
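The N1/N2 truncation used in both S907 (uplink) and S908 (downlink) is a simple ranking. A minimal sketch; the dict-of-powers input format is an assumption:

```python
def classify_interferers(powers, n1, n2):
    """Sketch of the N1/N2 truncation of steps S907/S908.

    `powers` maps an interferer id (UE or network node) to its
    received power at the victim. The N1 strongest are treated as
    explicit interference, the next N2 as noise, and the rest are
    ignored. Returns (interference_ids, noise_ids).
    """
    ranked = sorted(powers, key=powers.get, reverse=True)
    return ranked[:n1], ranked[n1:n1 + n2]
```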
S909: dynamic simulation flow;
the positions of the mobile network nodes and UEs resident on this CPU are changed;
the RSRP (Reference Signal Receiving Power) from the resident network nodes, the interfering network nodes, and the other-format network nodes of this CPU to the resident UEs is mainly computed, handover decisions and similar operations are made, and handover request commands and the like are generated.
S910: data update;
mainly processing the data changes caused by the commands generated in S905 and S909 when a UE of this CPU is handed over to another CPU: if S905 or S909 produced a reselection or handover-admission command and the corresponding UE reselects or is handed over to a network node on another CPU, the UE's information on this CPU is deleted.
S911: pack the data to exchange.
S912: wait for the other CPUs to finish, achieving synchronization, then jump to S900.
Example 12
Relay node CPU data exchange, instance 1
CPUs of different formats exchange signaling through the relay node as an intermediary; the relay node's buffering of the signaling models the signaling-exchange delay and also simplifies the code design of the communication-format-simulation node CPUs, enabling handover, coordination, and interworking simulations. The content exchanged between CPUs of different formats includes the source CPU ID, the target CPU ID, the buffering delay, and the message body. The relay node CPU data-exchange flow is shown in Fig. 17.
S1000: receive the synchronization-frame message; assume the synchronization frame currently received belongs to format A.
S1001: read the information on the transmit/receive CPU pair currently performing point-to-point communication.
S1002: check whether the current point-to-point transmit/receive pair is this CPU and one CPU of format A (say, CPU_i); if so execute S1003, otherwise S1008.
S1003: check whether the buffer holds signaling sent to CPU_i by CPUs of other formats; if so execute S1004, otherwise S1008.
S1004: check whether the delay of the signaling sent to CPU_i by the other-format CPUs has expired; if so execute S1005, otherwise S1008.
S1005: read the signaling sent to CPU_i by the other-format CPUs.
S1006: execute the data-exchange command labsendreceive with CPU_i.
S1007: save the signaling that CPU_i sends to CPUs of other formats.
S1008: wait for the other CPUs to finish communicating, achieving communication synchronization among the parallel CPUs.
S1009: check whether all the communication is complete; if so end directly, otherwise execute S1010.
S1010: read the information on the next point-to-point transmit/receive CPU pair and jump to S1002.
In the simulation method and system for a large-scale complex wireless communication system provided by the embodiments above, the client reads and processes the simulation configuration parameters, determines the functional type of each CPU working in parallel, creates the corresponding simulation tasks, and delivers the simulation tasks to the parallel CPUs through the task manager; after receiving its simulation task from the task manager, each parallel CPU exchanges data and synchronizes with the other CPUs according to the functional type configured by the client and runs the simulation code. The embodiments of the present invention relieve the memory and computational pressure encountered when simulating large-scale complex wireless communication systems, and provide the simulation platform with a flexible and highly extensible parallel architecture that supports the simulation of diverse requirements, ideas, and scenarios.
Claims (14)
- A simulation method for a large-scale complex wireless communication system, the method comprising: a client reading simulation configuration parameters, determining the functional type of each CPU working in parallel, creating corresponding simulation tasks, and delivering the simulation tasks to the parallel CPUs through a task manager; and each parallel CPU receiving the simulation task delivered by the task manager, exchanging data and synchronizing with other CPUs according to the functional type configured by the client, and running simulation code.
- The method of claim 1, wherein the client reading and processing the simulation configuration parameters, determining the functional type of each parallel CPU, and creating the corresponding simulation tasks comprises: according to the number of parallel CPUs and the number and parameters of network nodes given in the simulation configuration parameters, computing the interference relationships among the network nodes locally or cooperatively on the parallel CPUs, and determining the data-exchange relationships among the network nodes from the computation result; determining the functional type of each parallel CPU, assigning network nodes and user equipments to the CPUs responsible for communication-format simulation according to the data-exchange relationships of the network nodes and user equipments, constructing the data-exchange relationships among the parallel CPUs, and setting the communication order for the CPU pairs that need point-to-point communication; and creating a simulation task for every parallel CPU, the simulation task comprising data and simulation code; wherein the functional types of the CPUs comprise timing control, data relay, and communication-format simulation.
- The method of claim 2, wherein assigning network nodes and user equipments to the communication-format-simulation CPUs according to the data-exchange relationships of the network nodes and user equipments comprises at least one of the following: a) placing network nodes of different formats or different carrier frequencies on different communication-format-simulation CPUs; b) placing same-format, same-frequency network nodes that exchange data on the same communication-format-simulation CPU, while dividing same-format, same-frequency network nodes among different CPUs according to their interference relationships; c) placing each user equipment on the communication-format-simulation CPU hosting its access network node; d) minimizing data exchange between different parallel CPUs; e) balancing the computational load across the parallel CPUs.
- The method of claim 2, wherein each parallel CPU exchanging data and synchronizing with other CPUs according to the functional type configured by the client and running simulation code comprises: when a parallel CPU determines from the obtained simulation task that its function is timing control, broadcasting the inter-CPU synchronization messages to the other CPUs by running the simulation code.
- The method of claim 2, wherein each parallel CPU exchanging data and synchronizing with other CPUs according to the functional type configured by the client and running simulation code comprises: when a parallel CPU determines from the obtained simulation task that its function is data relay, receiving the synchronization message broadcast by the timing-control CPU, obtaining the information on the transmit/receive CPU pair currently performing point-to-point communication, and, upon determining that this CPU is currently communicating point-to-point with a communication-format-simulation CPU, sending out, after the delay, the buffered data that other CPUs addressed to this CPU's current peer, and receiving and buffering the data that this CPU's current peer sends to other CPUs.
- The method of claim 2, wherein each parallel CPU exchanging data and synchronizing with other CPUs according to the functional type configured by the client and running simulation code comprises: when a parallel CPU determines from the obtained simulation task that its function is communication-format simulation, receiving the synchronization message broadcast by the timing-control CPU, obtaining the information on the transmit/receive CPU pair currently performing point-to-point communication, and, upon determining that this CPU is currently communicating point-to-point with another CPU, exchanging data with that CPU, running the network-node protocol stack code and the user-equipment protocol stack code, and computing the uplink and/or downlink interference.
- The method of claim 6, wherein the parallel CPU computing the uplink and/or downlink interference comprises: when computing the uplink interference for a target network node resident on this CPU, obtaining the positional relationships and channel models between the user equipments resident on this CPU and the target network node, and between the user equipments resident on other CPUs that strongly interfere with the target network node and the target network node, computing the slow and fast fading of the signals from the obtained information, and determining the uplink interference of the user equipments to the target network node from the computed signal fading; when computing the downlink interference for a target user equipment resident on this CPU, obtaining the positional relationships and channel models between the network nodes resident on this CPU and the target user equipment, and between the network nodes resident on other CPUs that strongly interfere with the target user equipment and the target user equipment, computing the slow and fast fading of the signals from the obtained information, and determining the downlink interference of the network nodes to the target user equipment from the computed signal fading.
- A simulation system for a large-scale complex wireless communication system, comprising: a client, configured to read and process simulation configuration parameters, determine the functional type of each CPU working in parallel, create corresponding simulation tasks, and deliver the simulation tasks to the parallel CPUs through a task manager; parallel CPUs, configured to receive the simulation tasks delivered by the task manager, exchange data and synchronize with other CPUs according to the functional types configured by the client, and run simulation code; and the task manager, configured to receive the simulation tasks submitted by the client and deliver them to the parallel CPUs.
- The system of claim 8, wherein the client is configured to read and process the simulation configuration parameters, determine the functional type of each parallel CPU, and create the corresponding simulation tasks in the following way: according to the number of parallel CPUs and the number and parameters of network nodes given in the simulation configuration parameters, computing the interference relationships among the network nodes locally or cooperatively on the parallel CPUs, and determining the data-exchange relationships among the network nodes from the computation result; determining the functional type of each parallel CPU, assigning network nodes and user equipments to the CPUs responsible for communication-format simulation according to the data-exchange relationships of the network nodes and user equipments, constructing the data-exchange relationships among the parallel CPUs, and setting the communication order for the CPU pairs that need point-to-point communication; and creating a simulation task for every parallel CPU, the simulation task comprising data and simulation code; wherein the functional types of the CPUs comprise timing control, data relay, and communication-format simulation.
- The system of claim 9, wherein the client is configured to assign network nodes and user equipments to the communication-format-simulation CPUs according to the data-exchange relationships of the network nodes and user equipments in the following way: performing at least one of the following: a) placing network nodes of different formats or different carrier frequencies on different communication-format-simulation CPUs; b) placing same-format, same-frequency network nodes that exchange data on the same communication-format-simulation CPU, while dividing same-format, same-frequency network nodes among different CPUs according to their interference relationships; c) placing each user equipment on the communication-format-simulation CPU hosting its access network node; d) minimizing data exchange between different parallel CPUs; e) balancing the computational load across the parallel CPUs.
- The system of claim 9, wherein the parallel CPUs are configured to exchange data and synchronize with other CPUs according to the functional types configured by the client and run simulation code in the following way: when a parallel CPU determines from the obtained simulation task that its function is timing control, broadcasting the inter-CPU synchronization messages to the other CPUs by running the simulation code.
- The system of claim 9, wherein the parallel CPUs are configured to exchange data and synchronize with other CPUs according to the functional types configured by the client and run simulation code in the following way: when a parallel CPU determines from the obtained simulation task that its function is data relay, receiving the synchronization message broadcast by the timing-control CPU, obtaining the information on the transmit/receive CPU pair currently performing point-to-point communication, and, upon determining that this CPU is currently communicating point-to-point with a communication-format-simulation CPU, sending out, after the delay, the buffered data that other CPUs addressed to this CPU's current peer, and receiving and buffering the data that this CPU's current peer sends to other CPUs.
- The system of claim 9, wherein the parallel CPUs are configured to exchange data and synchronize with other CPUs according to the functional types configured by the client and run simulation code in the following way: when a parallel CPU determines from the obtained simulation task that its function is communication-format simulation, receiving the synchronization message broadcast by the timing-control CPU, obtaining the information on the transmit/receive CPU pair currently performing point-to-point communication, and, upon determining that this CPU is currently communicating point-to-point with another CPU, exchanging data with that CPU, running the network-node protocol stack code and the user-equipment protocol stack code, and computing the uplink and/or downlink interference.
- The system of claim 13, wherein the parallel CPUs are configured to compute the uplink and/or downlink interference in the following way: when computing the uplink interference for a target network node resident on this CPU, obtaining the positional relationships and channel models between the user equipments resident on this CPU and the target network node, and between the user equipments resident on other CPUs that strongly interfere with the target network node and the target network node, computing the slow and fast fading of the signals from the obtained information, and determining the uplink interference of the user equipments to the target network node from the computed signal fading; when computing the downlink interference for a target user equipment resident on this CPU, obtaining the positional relationships and channel models between the network nodes resident on this CPU and the target user equipment, and between the network nodes resident on other CPUs that strongly interfere with the target user equipment and the target user equipment, computing the slow and fast fading of the signals from the obtained information, and determining the downlink interference of the network nodes to the target user equipment from the computed signal fading.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610130298.7A CN107172650B (zh) | 2016-03-08 | 2016-03-08 | 一种大规模复杂无线通信系统的仿真方法和系统 |
CN201610130298.7 | 2016-03-08 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017152733A1 true WO2017152733A1 (zh) | 2017-09-14 |
Family
ID=59790054
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2017/073356 WO2017152733A1 (zh) | 2016-03-08 | 2017-02-13 | 一种大规模复杂无线通信系统的仿真方法和系统 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN107172650B (zh) |
WO (1) | WO2017152733A1 (zh) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110769416A (zh) * | 2018-07-27 | 2020-02-07 | 上海华为技术有限公司 | 一种通信方法、装置、系统及可读存储介质 |
CN111132122A (zh) * | 2019-12-18 | 2020-05-08 | 南京熊猫电子股份有限公司 | 基于近距识别多制式终端用户信息的方法及移动终端感知系统 |
CN112199842A (zh) * | 2020-11-11 | 2021-01-08 | 中国电子科技集团公司第二十八研究所 | 一种基于任务导向的复杂仿真系统可信度评估方法 |
CN116149794A (zh) * | 2023-03-07 | 2023-05-23 | 北京创奇视界科技有限公司 | 一种基于容器架构的云仿真方法 |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107819645B (zh) * | 2017-10-16 | 2021-02-09 | 南京网元通信技术有限公司 | 一种基于软件仿真的物联网测试方法 |
CN111506401B (zh) * | 2020-03-27 | 2023-11-21 | 北京百度网讯科技有限公司 | 自动驾驶仿真任务调度方法、装置、电子设备及存储介质 |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110191092A1 (en) * | 2011-04-12 | 2011-08-04 | Rocketick Technologies Ltd. | Parallel simulation using multiple co-simulators |
CN102880517A (zh) * | 2012-09-29 | 2013-01-16 | 中国人民解放军国防科学技术大学 | 一种基于超级计算机的hla仿真程序的对象调度方法 |
CN105335215A (zh) * | 2015-12-05 | 2016-02-17 | 中国科学院苏州生物医学工程技术研究所 | 一种基于云计算的蒙特卡洛仿真加速方法及系统 |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000137690A (ja) * | 1998-10-29 | 2000-05-16 | Uerubiin:Kk | マルチcpuシステム |
WO2009008007A2 (en) * | 2007-07-09 | 2009-01-15 | Hewlett-Packard Development Company L.P. | Data packet processing method for a multi core processor |
CN101741627B (zh) * | 2008-11-14 | 2012-06-27 | 电子科技大学 | 一种双引擎分布式对等网络仿真系统体系结构 |
CN101788919B (zh) * | 2010-01-29 | 2013-08-14 | 中国科学技术大学苏州研究院 | 片上多核处理器时钟精确并行仿真系统及仿真方法 |
US8775152B2 (en) * | 2010-07-30 | 2014-07-08 | Ciena Corporation | Multi-core, multi-blade, and multi-node network environment simulation |
CN202231751U (zh) * | 2011-10-21 | 2012-05-23 | 孟宝宏 | 一种分布获取仿真参数的复杂电磁环境仿真平台 |
CN102591759B (zh) * | 2011-12-29 | 2014-08-13 | 中国科学技术大学苏州研究院 | 片上众核处理器时钟精确并行仿真系统 |
CN103686818B (zh) * | 2012-08-30 | 2017-05-24 | 电信科学技术研究院 | 一种仿真测试方法及设备 |
CN103781107B (zh) * | 2012-10-22 | 2018-03-23 | 中兴通讯股份有限公司 | 无线通信网络的仿真、仿真处理方法及装置 |
CN103092080B (zh) * | 2012-12-12 | 2015-01-07 | 北京交控科技有限公司 | 一种cbtc无线信号仿真系统及其仿真方法 |
CN104951349A (zh) * | 2014-03-24 | 2015-09-30 | 昆山耐特康托软件科技有限公司 | 一种网络化控制算法实时仿真器NetSimulator |
CN104053179B (zh) * | 2014-05-07 | 2017-06-23 | 重庆邮电大学 | 一种c‑ran系统级仿真平台 |
CN104734915B (zh) * | 2015-03-05 | 2018-02-27 | 重庆邮电大学 | 一种复合多进程多线程的多网络并发动态仿真方法 |
CN105207726B (zh) * | 2015-04-09 | 2018-11-27 | 北京交通大学 | 无线信道综合测试装置 |
-
2016
- 2016-03-08 CN CN201610130298.7A patent/CN107172650B/zh active Active
-
2017
- 2017-02-13 WO PCT/CN2017/073356 patent/WO2017152733A1/zh active Application Filing
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110191092A1 (en) * | 2011-04-12 | 2011-08-04 | Rocketick Technologies Ltd. | Parallel simulation using multiple co-simulators |
CN102880517A (zh) * | 2012-09-29 | 2013-01-16 | 中国人民解放军国防科学技术大学 | 一种基于超级计算机的hla仿真程序的对象调度方法 |
CN105335215A (zh) * | 2015-12-05 | 2016-02-17 | 中国科学院苏州生物医学工程技术研究所 | 一种基于云计算的蒙特卡洛仿真加速方法及系统 |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110769416A (zh) * | 2018-07-27 | 2020-02-07 | 上海华为技术有限公司 | 一种通信方法、装置、系统及可读存储介质 |
CN110769416B (zh) * | 2018-07-27 | 2023-06-20 | 上海华为技术有限公司 | 一种通信方法、装置、系统及可读存储介质 |
CN111132122A (zh) * | 2019-12-18 | 2020-05-08 | 南京熊猫电子股份有限公司 | 基于近距识别多制式终端用户信息的方法及移动终端感知系统 |
CN112199842A (zh) * | 2020-11-11 | 2021-01-08 | 中国电子科技集团公司第二十八研究所 | 一种基于任务导向的复杂仿真系统可信度评估方法 |
CN112199842B (zh) * | 2020-11-11 | 2022-10-04 | 中国电子科技集团公司第二十八研究所 | 一种基于任务导向的复杂仿真系统可信度评估方法 |
CN116149794A (zh) * | 2023-03-07 | 2023-05-23 | 北京创奇视界科技有限公司 | 一种基于容器架构的云仿真方法 |
CN116149794B (zh) * | 2023-03-07 | 2023-09-08 | 北京创奇视界科技有限公司 | 一种基于容器架构的云仿真方法 |
Also Published As
Publication number | Publication date |
---|---|
CN107172650A (zh) | 2017-09-15 |
CN107172650B (zh) | 2022-03-25 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17762419 Country of ref document: EP Kind code of ref document: A1 |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 17762419 Country of ref document: EP Kind code of ref document: A1 |