WO2021000264A1 - Terminal and Base Station - Google Patents

Terminal and Base Station

Info

Publication number
WO2021000264A1
Authority
WO
WIPO (PCT)
Prior art keywords
neural network
task
signal
interference
base station
Prior art date
Application number
PCT/CN2019/094432
Other languages
English (en)
French (fr)
Inventor
叶能
李祥明
潘健雄
刘文佳
侯晓林
Original Assignee
株式会社Ntt都科摩
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社Ntt都科摩
Priority to PCT/CN2019/094432 priority Critical patent/WO2021000264A1/zh
Priority to CN201980097943.1A priority patent/CN114026804B/zh
Priority to US17/597,258 priority patent/US20220312424A1/en
Publication of WO2021000264A1 publication Critical patent/WO2021000264A1/zh

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00 Routing or path finding of packets in data switching networks
    • H04L45/02 Topology update or discovery
    • H04L45/08 Learning-based routing, e.g. using neural networks or artificial intelligence
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W72/00 Local resource management
    • H04W72/50 Allocation or scheduling criteria for wireless resources
    • H04W72/54 Allocation or scheduling criteria for wireless resources based on quality criteria
    • H04W72/541 Allocation or scheduling criteria for wireless resources based on quality criteria using the level of interference
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0499 Feedforward networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/096 Transfer learning
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L1/00 Arrangements for detecting or preventing errors in the information received
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L1/00 Arrangements for detecting or preventing errors in the information received
    • H04L1/12 Arrangements for detecting or preventing errors in the information received by using return channel
    • H04L1/16 Arrangements for detecting or preventing errors in the information received by using return channel in which the return channel carries supervisory signals, e.g. repetition request signals
    • H04L1/1607 Details of the supervisory signal
    • H04L1/1614 Details of the supervisory signal using bitmaps

Definitions

  • the present disclosure relates to the field of wireless communication, and more specifically to terminals and base stations in the field of wireless communication.
  • NOMA non-orthogonal multiple access
  • 5G future wireless communication systems
  • NOMA uses non-orthogonal transmission at the sending end to allocate one wireless resource to multiple users, which makes it more suitable for wireless communication services with large connection capacity such as the Internet of Things (IoT) and massive machine-type communications (mMTC).
  • in NOMA technology, different users perform non-orthogonal transmission on the same sub-channel, so interference is introduced at the transmitting side. Therefore, in order to correctly demodulate the received information, the receiving side needs to use serial interference cancellation (SIC) technology to cancel the interference, which increases the complexity of the receiver.
  • SIC serial interference cancellation
  • AI artificial intelligence
  • multi-task deep learning technology can perform multiple related tasks at the same time. It has a certain duality with non-orthogonal multiple access technology, which transmits multiple signals non-orthogonally at the same time, so it is conceivable to apply multi-task deep learning technology to base stations or terminals that adopt non-orthogonal multiple access technology, in order to optimize the non-orthogonal multiple access technology.
  • a terminal including: a processing unit that uses a neural network to map a bit sequence to be transmitted into a complex symbol sequence, wherein the neural network is configured to map the bit sequence into a complex symbol sequence within a predetermined range of the complex plane.
  • a receiving unit is further included; the receiving unit receives network configuration information sent by the base station, which contains at least one of information indicating the network configuration of the neural network adopted by the base station and information indicating the network configuration of the neural network of the terminal.
  • the processing unit configures the neural network of the terminal based on the network configuration information.
  • the network configuration information includes network structure and network parameter information.
  • a base station including: a receiving unit that receives a multi-channel signal formed by superimposing signals sent by multiple terminals; and a processing unit that restores the multi-channel signal through a multi-task neural network.
  • the multiple tasks of the multi-task neural network respectively determine preliminary estimated values of the multi-channel signal, and in the first task of the multi-task neural network, the interference caused by the other signals in the multi-channel signal is deleted from the preliminary estimated value of the first signal determined by the first task, so as to determine the estimated value of the first signal after interference deletion, wherein the interference caused by the other signals in the multi-channel signal is obtained based on the preliminary estimated values determined by the tasks other than the first task.
  • the multi-task neural network includes a common part and a plurality of specific parts; each task in the multi-task neural network shares the common part, which is used to determine the common features of the signals in the multi-channel signal, and each task in the multi-task neural network corresponds to a specific part, which is used to determine the specific features of each signal.
  • the multi-task neural network includes multiple layers, the multi-task neural network includes multiple interference removal stages, and each interference removal stage includes one or more layers of neural networks.
  • in a first interference cancellation stage, the preliminary estimated values of the first interference cancellation stage of the multi-channel signal are respectively determined through the multiple tasks, and the interference obtained based on the preliminary estimated values of the first interference cancellation stage of the other signals is deleted from the preliminary estimated value of the first interference cancellation stage of the first signal determined by the first task, so as to determine the estimated value after interference cancellation in the first interference cancellation stage of the first signal.
  • the multiple tasks are further used to determine the preliminary estimated values of the second interference cancellation stage of the multi-channel signal based on the estimated values after interference cancellation in the first interference cancellation stage of the multi-channel signal, and the interference obtained based on the preliminary estimated values of the second interference cancellation stage of the other signals is deleted from the preliminary estimated value of the second interference cancellation stage of the first signal.
  • the above-mentioned base station further includes a sending unit that sends information related to the structure and parameters of the multi-task neural network.
  • the multi-task neural network is configured to balance the loss of each of the multiple tasks, where the loss is the difference between the value of the signal restored by each task and the true value of the signal.
  • a terminal includes: a receiving unit, which receives a superimposed multi-channel signal sent by a base station; and a processing unit, which restores the multi-channel signal and determines preliminary estimated values of the multi-channel signal through multiple tasks in a multi-task neural network.
  • in the first task of the multi-task neural network, the interference caused by the other signals in the multi-channel signal is deleted from the preliminary estimated value of the first signal determined by the first task, thereby determining the estimated value of the first signal after interference deletion, wherein the interference caused by the other signals in the multi-channel signal is obtained based on the preliminary estimated values determined by the tasks other than the first task among the multiple tasks.
  • the multi-task neural network includes a common part and multiple specific parts; each task in the multi-task neural network shares the common part, which is used to determine the common features of the signals in the multi-channel signal, and each task in the multi-task neural network corresponds to a specific part, which is used to determine the specific features of each signal.
  • the multi-task neural network includes multiple layers, the multi-task neural network includes multiple interference removal stages, and each interference removal stage includes one or more layers of neural networks.
  • in a first interference cancellation stage, the preliminary estimated values of the first interference cancellation stage of the multi-channel signal are respectively determined through the multiple tasks, and the interference obtained based on the preliminary estimated values of the first interference cancellation stage of the other signals is deleted from the preliminary estimated value of the first interference cancellation stage of the first signal determined by the first task, so as to determine the estimated value after interference cancellation in the first interference cancellation stage of the first signal.
  • the multiple tasks are further used to determine the preliminary estimated values of the second interference cancellation stage of the multi-channel signal based on the estimated values after interference cancellation in the first interference cancellation stage of the multi-channel signal, and the interference obtained based on the preliminary estimated values of the second interference cancellation stage of the other signals is deleted from the preliminary estimated value of the second interference cancellation stage of the first signal.
  • the receiving unit receives network configuration information sent by the base station, which contains at least one of information indicating the network configuration of the neural network adopted by the base station and information indicating the network configuration of the multi-task neural network of the terminal.
  • the processing unit configures the multi-task neural network based on the network configuration information.
  • the network configuration information includes network structure and network parameter information.
  • the multi-task neural network is configured to balance the loss of each task in the multiple tasks, where the loss is the difference between the value of the signal restored by each task and the true value of the signal.
  • a base station includes a processing unit that uses a neural network to map the bit sequence to be transmitted into a complex symbol sequence, wherein the neural network is configured to map the bit sequence into a complex symbol sequence within a predetermined range of the complex plane.
  • the above-mentioned base station further includes a sending unit that sends the bit sequence subjected to the mapping processing by the processing unit, and sends information related to the structure and parameters of the neural network.
  • a transmission method for a terminal includes: using a neural network to map a bit sequence to be transmitted into a complex symbol sequence, wherein the neural network is configured to map the bit sequence into a complex symbol sequence within a predetermined range of the complex plane.
  • the method further includes receiving network configuration information that contains at least one of information indicating the network configuration of the neural network adopted by the base station and information indicating the network configuration of the neural network of the terminal.
  • the neural network of the terminal is configured based on the network configuration information.
  • the network configuration information includes network structure and network parameter information.
  • a receiving method for a base station includes: receiving a multi-channel signal superimposed by signals sent by a plurality of terminals; and restoring the multi-channel signal through a multi-task neural network.
  • the multiple tasks in the multi-task neural network respectively determine preliminary estimated values of the multi-channel signal, and in the first task of the multi-task neural network, the interference caused by the other signals in the multi-channel signal is deleted from the preliminary estimated value of the first signal determined by the first task, so as to determine the estimated value of the first signal after interference deletion, wherein the interference caused by the other signals in the multi-channel signal is obtained based on the preliminary estimated values determined by the tasks other than the first task among the multiple tasks.
  • the multi-task neural network includes a common part and a plurality of specific parts; each task in the multi-task neural network shares the common part, which is used to determine the common features of the signals in the multi-channel signal, and each task in the multi-task neural network corresponds to a specific part, which is used to determine the specific features of each signal.
  • the multi-task neural network includes multiple layers, the multi-task neural network includes multiple interference removal stages, and each interference removal stage includes one or more layers of neural networks.
  • in the first interference cancellation stage, the preliminary estimated values of the first interference cancellation stage of the multi-channel signal are respectively determined through the multiple tasks, and the interference obtained based on the preliminary estimated values of the first interference cancellation stage of the other signals is deleted from the preliminary estimated value of the first interference cancellation stage of the first signal determined by the first task, so as to determine the estimated value after interference cancellation in the first interference cancellation stage of the first signal.
  • in the second interference cancellation stage, the multiple tasks are used to respectively determine the preliminary estimated values of the second interference cancellation stage of the multi-channel signal based on the estimated values after interference cancellation in the first interference cancellation stage of the multi-channel signal, and the interference obtained based on the preliminary estimated values of the second interference cancellation stage of the other signals is deleted from the preliminary estimated value of the second interference cancellation stage of the first signal.
  • the above receiving method further includes sending information related to the structure and parameters of the multi-task neural network.
  • the multi-task neural network is configured to balance the loss of each of the multiple tasks, where the loss is the difference between the value of the signal restored by each task and the true value of the signal.
  • a receiving method for a terminal includes: receiving a superimposed multi-channel signal sent by a base station; determining preliminary estimated values of the multi-channel signal through multiple tasks in a multi-task neural network; and, in the first task of the multi-task neural network, deleting from the preliminary estimated value of the first signal determined by the first task the interference caused by the other signals in the multi-channel signal, so as to determine the estimated value of the first signal after interference deletion, wherein the interference caused by the other signals in the multi-channel signal is obtained based on the preliminary estimated values determined by the tasks other than the first task among the multiple tasks.
  • the multi-task neural network includes a common part and a plurality of specific parts; each task in the multi-task neural network shares the common part, which is used to determine the common features of the signals in the multi-channel signal, and each task in the multi-task neural network corresponds to a specific part, which is used to determine the specific features of each signal.
  • the multi-task neural network includes multiple layers, the multi-task neural network includes multiple interference removal stages, and each interference removal stage includes one or more layers of neural networks.
  • in the first interference cancellation stage, the preliminary estimated values of the first interference cancellation stage of the multi-channel signal are respectively determined through the multiple tasks, and the interference obtained based on the preliminary estimated values of the first interference cancellation stage of the other signals is deleted from the preliminary estimated value of the first interference cancellation stage of the first signal determined by the first task, so as to determine the estimated value after interference cancellation in the first interference cancellation stage of the first signal.
  • in the second interference cancellation stage, the multiple tasks are used to respectively determine the preliminary estimated values of the second interference cancellation stage of the multi-channel signal based on the estimated values after interference cancellation in the first interference cancellation stage of the multi-channel signal, and the interference obtained based on the preliminary estimated values of the second interference cancellation stage of the other signals is deleted from the preliminary estimated value of the second interference cancellation stage of the first signal.
  • network configuration information is received that contains at least one of information indicating the network configuration of the neural network used by the base station and information indicating the network configuration of the multi-task neural network of the terminal.
  • the multi-task neural network is configured based on the network configuration information.
  • the network configuration information includes network structure and network parameter information.
  • the multi-task neural network is configured to balance the loss of each of the multiple tasks, where the loss is the difference between the value of the signal restored by each task and the true value of the signal.
  • a transmission method for a base station includes: using a neural network to map a bit sequence to be transmitted into a complex symbol sequence, wherein the neural network is configured to map the bit sequence into a complex symbol sequence within a predetermined range of the complex plane.
  • the method further includes: superimposing and sending the bit sequence that has been mapped by the processing unit, and sending information related to the structure and parameters of the neural network.
  • FIG. 1 is a schematic diagram of a wireless communication system in which an embodiment of the present disclosure can be applied.
  • Fig. 2 is a schematic structural diagram of a terminal according to an embodiment of the present disclosure.
  • Fig. 3 is a schematic structural diagram of a base station according to an embodiment of the present disclosure.
  • Fig. 4 is a schematic structural diagram of a base station according to another embodiment of the present disclosure.
  • Fig. 5 is a schematic structural diagram of a terminal according to another embodiment of the present disclosure.
  • Fig. 6 is a flowchart of a sending method according to an embodiment of the present disclosure.
  • Fig. 7 is a flowchart of a receiving method according to an embodiment of the present disclosure.
  • Fig. 8 is a schematic diagram of the hardware structure of a device according to an embodiment of the present disclosure.
  • the terminals described herein may include various types of terminals, such as User Equipment (UE), mobile terminals (also referred to as mobile stations), or fixed terminals. For convenience, the terms terminal and UE are sometimes used interchangeably hereinafter.
  • the neural network is an artificial neural network used in the AI function module. For brevity, it may sometimes be referred to as a neural network in the following description.
  • the wireless communication system may be a 5G system, or any other type of wireless communication system, such as a Long Term Evolution (LTE) system or an LTE-A (advanced) system, or a future wireless communication system.
  • LTE: Long Term Evolution
  • LTE-A: LTE-Advanced
  • the 5G system is taken as an example to describe the embodiments of the present disclosure, but it should be recognized that the following description can also be applied to other types of wireless communication systems.
  • the uplink transmission from the terminal to the base station is taken as an example for description.
  • a wireless communication system 100 applying non-orthogonal multiple access technologies such as NOMA or MIMO (Multiple-Input Multiple-Output) includes a base station 110, a terminal 120, a terminal 130, and a terminal 140.
  • the base station 110 includes a multi-user detection module 111.
  • the terminal 120, the terminal 130, and the terminal 140 include multi-user signature modules 121, 131, and 141. Assuming that multiple user terminals including terminals 120 to 140 send multiple signals to the base station 110, the bit sequence of each signal is sent to the multi-user signature modules 121, 131, and 141 in each terminal, respectively.
  • the bit sequence input to the multi-user signature modules 121, 131, and 141 may be the original bit sequence to be sent, or the bit sequence after operations such as encoding, spreading, interleaving, and scrambling. In other words, operations such as encoding, interleaving, spreading, and scrambling can also be performed in the multi-user signature modules 121, 131, and 141.
  • the input bit sequence is mapped in the multi-user signature modules 121, 131, and 141, and a complex symbol sequence is output.
  • the mapped complex symbol sequence is non-orthogonally mapped to the physical resource block and sent to the base station 110.
  • the superimposed multi-channel signals are received and sent to the multi-user detection module 111.
  • in order to correctly demodulate the signal from each terminal from the received multi-channel signal, the multi-user detection module 111 needs to remove the interference caused by non-orthogonal transmission and restore the effective signal of each user from the multi-channel signal. It can be seen that in non-orthogonal multiple access technology, the complexity of the receiver increases because interference must be cancelled at the receiving end, the receiver hardware needs to be configured separately for different transmission schemes, and its flexibility is therefore limited.
  • Fig. 2 is a schematic diagram of a terminal according to an embodiment of the present disclosure.
  • the terminal 200 includes a processing unit 210.
  • a multi-user signature (multiple access signature) processing and resource mapping processing are performed on a bit sequence composed of bit data to be sent to the base station.
  • a neural network is used to implement multi-user signature processing, that is, the neural network performs mapping processing on the bit sequence to be sent, and outputs a complex symbol sequence.
  • the bit sequence input to the neural network in the processing unit 210 may be a bit sequence that has undergone at least one of encoding, spreading, interleaving, and scrambling, or it may be an unprocessed original bit sequence.
  • the processing performed in the neural network may include one or more of encoding, spreading, interleaving, scrambling, etc., in addition to mapping the bit sequence into a complex symbol sequence.
  • the neural network of the terminal can map the bit sequence input to the neural network into a complex symbol sequence.
  • the processing unit 210 maps the bit sequence into a complex symbol sequence within a predetermined range of the complex plane.
  • the predetermined range can be expressed as a prescribed shape on a complex plane.
  • the prescribed shape may be any shape, as long as it is a subset of the complex plane.
  • for example, by configuring the parameters of the neural network, the processing unit 210 confines the complex symbol sequence obtained by the mapping within a parallelogram on the complex plane.
  • the specific implementation is as follows.
  • the terminal 200 is the n-th terminal that transmits a bit sequence to the base station.
  • the bit sequence to be transmitted is mapped into a complex symbol sequence, and the parameter set of the neural network that performs this mapping is configured as W n . Since the complex symbol sequence is limited to the parallelogram on the complex plane, the parameter set W n needs to include the length of the long side of the parallelogram, the length of the short side, and the two included angles.
  • the parameter set W n can be expressed as follows:
  • W n = {L n , S n , θ L,n , θ S,n }  (1)
  • where L n denotes the length of the long side of the parallelogram, S n the length of the short side, and θ L,n and θ S,n the two included angles of the parallelogram.
  • R can be regarded as the structure of the neural network, and the form of R is agreed so that the complex symbol sequence obtained by the neural network mapping is limited to the parallelogram on the complex plane .
  • R can be represented as follows:
  • the parameter set W n can be mapped into a codebook of complex symbol sequences.
  • according to the input form of the bit sequence to be sent (for example, a form that satisfies a one-hot code), a codeword can be selected from the codebook generated above, thereby determining the complex symbol sequence onto which the bit sequence is mapped.
  • the codebook about the n-th signal obtained by mapping can be expressed as a sequence:
  • for example, if the bit sequence to be sent satisfies the form of a one-hot code and the n-th signal satisfies [0,0,1,0], the corresponding codeword is selected from the above sequence, which determines the complex symbol sequence onto which the n-th signal is mapped.
  • the position of the determined complex symbol sequence on the complex plane must fall on the parallelogram satisfying the parameters of the parameter set W n .
  • the parameter set W n is the parameter used to characterize the shape
  • R is the mapping rule corresponding to the shape.
  • the complex symbol sequence obtained by the mapping is limited to a subset of the entire complex plane, so that the complexity of the system is reduced when the neural network is applied to the multi-user signature processing.
  • since the parameter set of the neural network is set as parameters representing a certain predetermined shape, the number of parameters of the neural network is reduced. For example, in the training of the neural network, it is mainly necessary to optimize the parameter set W n , which reduces the complexity of training.
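  • As a rough illustration of this constrained mapping (a minimal sketch only; the function names, the random choice of codeword coefficients, and the interpretation of the two angle parameters as edge orientations are assumptions, not taken from this disclosure), the parameter set W n can be turned into a codebook of complex symbols confined to a parallelogram, and a one-hot bit sequence then selects one codeword:

```python
import numpy as np

def parallelogram_codebook(L_n, S_n, theta_L, theta_S, size):
    """Build a codebook of complex symbols confined to a parallelogram.

    The parallelogram is spanned by a long edge of length L_n at angle
    theta_L and a short edge of length S_n at angle theta_S (radians).
    Each codeword is a*e_long + b*e_short with a, b in [0, 1], so every
    codeword lies inside the parallelogram, a subset of the complex plane.
    """
    e_long = L_n * np.exp(1j * theta_L)
    e_short = S_n * np.exp(1j * theta_S)
    rng = np.random.default_rng(0)
    a = rng.uniform(0.0, 1.0, size)  # placeholder coefficients; in the
    b = rng.uniform(0.0, 1.0, size)  # disclosure they come from the trained network
    return a * e_long + b * e_short

def map_one_hot(bit_sequence, codebook):
    """Select the codeword indicated by a one-hot bit sequence."""
    index = int(np.argmax(bit_sequence))
    return codebook[index]

# Example: W_n = {L_n, S_n, theta_L, theta_S}; the one-hot input [0,0,1,0]
# selects the third codeword of the codebook.
codebook = parallelogram_codebook(L_n=2.0, S_n=1.0,
                                  theta_L=0.0, theta_S=np.pi / 3, size=4)
symbol = map_one_hot([0, 0, 1, 0], codebook)
print(symbol)
```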
  • the complex symbol sequence obtained through the above processing is mapped to the physical resource block.
  • neural network technology can be used for resource mapping.
  • the complex symbol sequence is input into the neural network for resource mapping, and the physical resource mapping is realized through the processing of the neural network.
  • the mapping of resources can be adjusted and learned.
  • the terminal 200 transmits, in a non-orthogonal multiple access mode such as NOMA or MIMO, the bit sequence that has been mapped by the processing unit 210 and has undergone resource mapping.
  • in the resource mapping, a physical resource block is allocated to the data of more than one terminal, so the signal received by the base station is a superimposed multi-channel signal from multiple terminals.
  • the structure and parameters of the neural network adopted by the processing unit 210 can be specified by the base station according to the non-orthogonal multiple access scheme to be adopted.
  • the terminal 200 further includes a receiving unit 220, which receives network configuration information sent by the base station.
  • the network configuration information is used to specify the network configuration of the neural network.
  • the network configuration information can directly specify the network structure adopted by the terminal and configure the network parameters.
  • the terminal 200 configures a neural network based on the received network configuration information. When used online, the terminal can also perform online training and optimization of the neural network based on the received network configuration information.
  • the network configuration information may also be pre-defined precoding information, transmission scheme information, etc., for example, it may be a NOMA codebook or a MIMO codebook used in non-orthogonal communication.
  • the network configuration information may be exchanged between the base station and the terminal 200 through high-level signaling or physical layer signaling.
  • the terminal 200 may also determine the communication scheme to be adopted by the base station through a blind detection method, thereby determining the network parameters and network structure of the neural network used for the user signature. In this case, the process of signaling interaction with the base station can be omitted.
  • FIG. 3 is a schematic diagram of a base station according to an embodiment of the present disclosure.
  • the base station 300 includes a receiving unit 310 and a processing unit 320.
  • the receiving unit 310 receives multiple signals formed by superimposing signals from multiple terminals.
  • the processing unit 320 needs to process the received multiple signals to restore the signals of each terminal. That is, the processing unit 320 performs multi-user detection processing on the received multiple signals.
  • a multi-task neural network is used to perform multi-user detection processing.
  • multiple tasks in the multi-task neural network are used to restore the multiple signals received by the receiving unit 310.
  • a multi-task neural network applied to multi-user detection processing includes a common part and multiple specific parts. Each task in the multi-task neural network shares the common part. Each task corresponds to a specific part.
  • the processing unit 320 first inputs the received multi-channel signal into the common part of the multi-task neural network for preprocessing, to determine the common characteristics of the signals and to extract the effective hidden features of the input signal.
  • the multiple signals processed by the common part are sent to each specific part of the multi-task neural network.
  • Each task is processed in each specific part to determine the specific characteristics of each signal.
  • the multiple signals sent to each specific part are all the same signal.
  • the multi-task neural network applied to multi-user detection may not include a common part, and the steps of extracting the effective hidden features of the input signal may also be processed in each specific part.
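  • A minimal sketch of this shared/specific structure (PyTorch-style; the layer types, sizes, and names are assumptions, not the actual architecture of this disclosure): the common part extracts hidden features shared by all signals, and each task-specific head produces a preliminary estimate of one signal.

```python
import torch
import torch.nn as nn

class MultiTaskDetector(nn.Module):
    """Multi-task network: one common part, one specific part per task."""

    def __init__(self, in_dim, hidden_dim, out_dim, num_tasks):
        super().__init__()
        # Common part, shared by every task: extracts hidden features
        # common to all signals in the superimposed input.
        self.common = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
        )
        # One specific part per task: extracts the features of one signal
        # and outputs its preliminary estimate.
        self.specific = nn.ModuleList(
            nn.Sequential(nn.Linear(hidden_dim, hidden_dim),
                          nn.ReLU(),
                          nn.Linear(hidden_dim, out_dim))
            for _ in range(num_tasks)
        )

    def forward(self, superimposed):
        shared = self.common(superimposed)      # same input to every task
        return [head(shared) for head in self.specific]

# The received multi-channel signal (a random placeholder here) goes into
# every task; each head returns a preliminary estimate of one user's signal.
detector = MultiTaskDetector(in_dim=16, hidden_dim=64, out_dim=8, num_tasks=3)
estimates = detector(torch.randn(4, 16))        # list of 3 tensors, each (4, 8)
```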
  • the processing unit 320 inputs the received multi-channel signal into the multi-task neural network, where the received multi-channel signal is processed; the input to each task is the same.
  • in each task, a network configured with different parameters is used to restore one of the multiple signals: the preliminary estimated value of the signal is first determined, interference cancellation is then performed, and the interference caused by the other signals is removed from the preliminary estimated value, so as to determine the estimated value of the signal after interference cancellation.
  • the specific method is as follows.
  • taking the i-th signal M i of the multi-channel signal received by the base station 300 and the corresponding task T i as an example, in the task T i the multi-channel signal is input into the multi-task neural network, and the preliminary estimated value M i ′ of the i-th signal is obtained after restoration processing; interference cancellation processing is then performed on the preliminary estimated value M i ′.
  • in the interference cancellation processing, interference is removed based on the preliminary estimated values of the other signals determined by the other tasks. Specifically, the task T i also receives the preliminary estimated values of the other signals from the other tasks, and in the task T i the preliminary estimated values of the other signals are subtracted from the preliminary estimated value M i ′, thereby obtaining an estimated value M i after interference cancellation.
  • the estimated value M i after interference cancellation is an estimate from which the interference caused by the superposition of the multi-channel signal has been removed, and it therefore has higher accuracy than the preliminary estimated value M i ′.
  • in addition, the preliminary estimated value M i ′ is also sent to the other tasks, so that the other tasks can perform their own interference cancellation processing.
  • for a task T i of the multi-task neural network, in the interference deletion processing of that task, the processing unit 320 may linearly subtract the preliminary estimated values of the other tasks from the preliminary estimated value M i ′.
  • for example, the preliminary estimated values of the other tasks may each be multiplied by a coefficient k, summed, and then subtracted from the preliminary estimated value M i ′.
  • this can be represented by the following formula:
  • M i = M i ′ − Σ j≠i k j · M j ′ , j = 1, …, N
  • where N is the number of signals in the multi-channel signal (that is, the number of tasks processed by the multi-task neural network), M j ′ is the preliminary estimated value determined by another task, and k j is the coefficient corresponding to the preliminary estimated value M j ′.
  • the coefficient k j can be pre-specified or obtained by training the neural network.
  • a neural network dedicated to the deletion step can also be used to perform the above subtraction processing.
  • in this case, the preliminary estimated value M i ′ of the i-th signal and the preliminary estimated values of the other signals obtained in the other tasks are input into this neural network, which non-linearly subtracts the preliminary estimated values of the other signals from the preliminary estimated value M i ′ and outputs the estimated value M i after interference cancellation, so as to delete the interference caused by the superposition of the multiple signals.
  • the multi-task neural network used by the processing unit 320 for multi-user detection may be a multi-layer neural network.
  • the multi-layer multi-task neural network can be divided into multiple interference cancellation stages, and the number of interference cancellation stages and the number of neural network layers contained in each stage are arbitrary.
  • each interference cancellation stage can contain one or more layers of the neural network, the interference deletion processing described above is performed once after each interference cancellation stage, and the estimated value after interference cancellation obtained through this processing is input into the next interference cancellation stage.
  • in the next interference cancellation stage, the multiple tasks determine, based on the interference-cancelled estimated values obtained in the previous stage, the preliminary estimated value of each signal of the multi-channel signal for that stage, and in each task the interference determined based on the preliminary estimated values of the other tasks in that stage is deleted from the preliminary estimated value of the current task for that stage. Therefore, after multiple interference cancellation stages, interference can be removed more thoroughly.
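  • To make the staged interference cancellation concrete, the sketch below chains several stages and applies the linear deletion M i = M i ′ − Σ j≠i k j · M j ′ after each stage; the dimensions, the identity "stage networks", and the coefficient values are placeholders, since in this disclosure the per-stage networks and coefficients would be learned.

```python
import numpy as np

def interference_cancellation_step(preliminary, k):
    """One interference-deletion step over all tasks.

    preliminary: array (num_tasks, dim) of preliminary estimates M_i'
                 produced by each task in the current stage.
    k:           array (num_tasks,) of coefficients k_j applied to the
                 other tasks' estimates before subtraction.
    Returns the interference-cancelled estimates M_i.
    """
    weighted = k[:, None] * preliminary
    total = weighted.sum(axis=0)
    # For task i, subtract the weighted estimates of all *other* tasks.
    return preliminary - (total - weighted)

def multi_stage_detection(received, stage_nets, k):
    """Chain several interference-cancellation stages.

    stage_nets: one callable per stage; each maps the current estimates
                (num_tasks, dim) to that stage's preliminary estimates.
    """
    estimates = received
    for net in stage_nets:
        preliminary = net(estimates)                   # per-task estimates
        estimates = interference_cancellation_step(preliminary, k)
    return estimates

# Toy run: 3 tasks, 8-dimensional signals, 2 stages with identity "networks".
received = np.tile(np.random.randn(8), (3, 1))         # same input to every task
k = np.array([0.3, 0.3, 0.3])
restored = multi_stage_detection(received, [lambda x: x, lambda x: x], k)
```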
  • when the processing unit 320 uses the multi-task neural network to perform multi-user detection, in addition to restoring the received multi-channel signal to obtain the effective data or control signals from each terminal, user activity detection, PAPR (peak-to-average power ratio) reduction, and other processing can also be performed in one or more tasks.
  • in addition, the following processing is performed with respect to the loss of the neural network.
  • the loss characterizes the difference between the value of the signal restored by the neural network and the true value of the signal, for example, it can be mean square error, cross entropy, etc.
  • the balance loss between each task represents the difference between the loss of each task.
  • through training, the neural network is configured not only to minimize the loss of each task, but also to minimize the difference between the losses of the tasks.
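  • One possible way to realize such balancing (a hedged sketch; the pairwise-difference penalty and its weight lam are assumptions rather than values given in this disclosure) is to add, to the sum of per-task losses, a term that penalizes the differences between the task losses:

```python
import torch

def balanced_multitask_loss(restored, targets, lam=0.1):
    """Sum of per-task losses plus a balance term.

    restored: list of per-task restored signals.
    targets:  list of the corresponding true signals.
    lam:      weight of the balance term (assumed hyperparameter).
    """
    # Per-task loss: here mean squared error between restored and true signal.
    task_losses = [torch.mean((r - t) ** 2) for r, t in zip(restored, targets)]
    total = sum(task_losses)
    # Balance term: pairwise differences between the task losses, so training
    # also minimizes the gap between tasks, not only each task's own loss.
    balance = sum(torch.abs(a - b)
                  for i, a in enumerate(task_losses)
                  for b in task_losses[i + 1:])
    return total + lam * balance
```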
  • in this way, the complexity of the receiving end in multi-user communication is reduced, since the base station only needs to configure the multi-user detection neural network according to the adopted transmission scheme in order to be used for reception under that scheme. Therefore, for a variety of different transmission schemes, the hardware at the receiving end is universal, and its flexibility is improved.
  • the bit error rate in the receiving process can be reduced.
  • the terminal and the base station according to the embodiment of the present invention are respectively described in conjunction with FIG. 2 and FIG. 3.
  • the terminal 200 shown in FIG. 2 is used at the transmitting end
  • the base station shown in FIG. 3 is used at the receiving end.
  • an end-to-end optimization method may be adopted to jointly optimize the neural network adopted by the terminal 200 and the base station 300.
  • the base station 300 further includes a sending unit 330.
  • the base station 300 determines the network structure and network parameters of the multi-task neural network for multi-user detection on the base station side, and the sending unit 330 sends network configuration information; the network configuration information indicates the network configuration on the base station side and may be configured dynamically, statically, or quasi-statically.
  • the receiving unit 220 of the terminal 200 receives this information, and the terminal configures its neural network based on it, so that the neural network of the terminal 200 and the neural network of the base station 300 can be jointly optimized and trained end to end.
  • the network configuration information sent by the sending unit 330 may be pre-defined precoding information, transmission scheme information, and so on, for example the adopted NOMA codebook or MIMO codebook; this information may be exchanged between the terminal 200 and the base station 300 through higher-layer signaling or physical layer signaling.
  • the network configuration information sent by the base station 300 may include at least one of information indicating the network configuration of the multi-task neural network adopted by the base station side and information directly indicating the network configuration of the neural network on the terminal side.
  • the terminal 200 may also send the aforementioned network configuration information to the base station 300, and the base station configures the neural network of the base station according to the network configuration information sent by the terminal.
  • the objective function of the neural network is also defined to include both the loss of each task and the balance loss between the tasks, so that the neural network is trained with the goal of minimizing the difference between the losses of the tasks, thereby reducing the bit error rate.
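  • A high-level sketch of how such end-to-end joint optimization could proceed (illustrative only; the linear stand-in networks, the additive-noise channel, and the plain mean-squared-error objective are assumptions): the terminal-side mapping network and the base-station-side detection network are chained and their parameters are updated together.

```python
import torch
import torch.nn as nn

# Assumed stand-ins for the terminal-side signature network and the
# base-station-side multi-task detection network described above.
mapper = nn.Linear(8, 16)                      # terminal: bits -> symbols
detector = nn.Linear(16, 8)                    # base station: symbols -> bits
optimizer = torch.optim.Adam(
    list(mapper.parameters()) + list(detector.parameters()), lr=1e-3)

for step in range(100):
    bits = torch.randint(0, 2, (32, 8)).float()            # random training bits
    symbols = mapper(bits)                                  # multi-user signature
    received = symbols + 0.1 * torch.randn_like(symbols)    # toy noisy channel
    restored = detector(received)                           # multi-user detection
    loss = torch.mean((restored - bits) ** 2)               # end-to-end objective
    optimizer.zero_grad()
    loss.backward()                    # gradients flow through both networks
    optimizer.step()
```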
  • the uplink transmission with the terminal as the transmitting end and the base station as the receiving end has been described as an example, but it is not limited to this.
  • below, downlink transmission from the base station to the terminal is taken as an example for description.
  • FIG. 4 is a schematic diagram of a base station according to another embodiment of the present disclosure.
  • the base station 400 includes a processing unit 410.
  • a multi-user signature (multiple access signature) processing and resource mapping processing are performed on a bit sequence composed of bit data to be sent to multiple users.
  • a neural network is used to implement multi-user signature processing, that is, a bit sequence to be sent is mapped through the neural network, and a complex symbol sequence is output.
  • the bit sequence input to the neural network in the processing unit 410 may be a bit sequence that has undergone at least one of encoding, spreading, interleaving, and scrambling, or it may be an unprocessed original bit sequence.
  • the processing performed in the neural network may include one or more of encoding, spreading, interleaving, scrambling, etc., in addition to mapping the bit sequence into a complex symbol sequence.
  • the neural network of the base station can map the bit sequence input to the neural network into a complex symbol sequence.
  • the processing unit 410 maps the bit sequence into a complex symbol sequence within a predetermined range of the complex plane.
  • the predetermined range can be expressed as a prescribed shape on a complex plane.
  • the prescribed shape may be any shape, as long as it is a subset of the complex plane.
  • for example, by configuring the parameters of the neural network, the processing unit 410 confines the complex symbol sequence obtained by the mapping within a parallelogram on the complex plane.
  • the specific implementation is as follows.
  • the parameter set of the neural network that performs the mapping is configured as W n . Since the sequence of complex symbols is limited to the parallelogram on the complex plane, the parameter set W n needs to include the length of the long side of the parallelogram, the length of the short side, and the degrees of the two included angles.
  • the parameter set W n can also be expressed in the form of the above formula (1).
  • R can be regarded as the structure of the neural network, and the form of R is agreed so that the complex symbol sequence obtained by the neural network mapping is limited to the parallelogram on the complex plane .
  • R can also be expressed as the above formula (2).
  • the parameter set W n can be mapped into a codebook of complex symbol sequences.
  • according to the input form of the bit sequence to be sent (for example, a form that satisfies a one-hot code), a codeword can be selected from the codebook generated above, thereby determining the complex symbol sequence onto which the bit sequence is mapped.
  • the codebook about the n-th signal obtained by mapping can be expressed as a sequence:
  • for example, if the bit sequence to be sent satisfies the form of a one-hot code and the n-th signal satisfies [0,0,1,0], the corresponding codeword is selected from the above sequence, which determines the complex symbol sequence onto which the n-th signal is mapped.
  • the position of the determined complex symbol sequence on the complex plane must fall on the parallelogram satisfying the parameters of the parameter set W n .
  • the parameter set W n is the parameter used to characterize the shape
  • R is the mapping rule corresponding to the shape.
  • the complex symbol sequence obtained by the mapping is limited to a subset of the entire complex plane, thereby reducing the complexity of the system when the neural network is applied to the multi-user signature processing.
  • since the parameter set of the neural network is set as parameters representing a certain predetermined shape, the number of parameters of the neural network is reduced. For example, in the training of the neural network, it is mainly necessary to optimize the parameter set W n , which reduces the complexity of training.
  • the complex symbol sequence obtained through the above processing is mapped onto the physical resource block.
  • neural network technology can be used for resource mapping.
  • the complex symbol sequence is input into the neural network for resource mapping, and the physical resource mapping is realized through the processing of the neural network.
  • the mapping of resources can be adjusted and learned.
  • the base station 400 transmits, in a non-orthogonal multiple access mode such as NOMA or MIMO, the bit sequence that has been mapped by the processing unit 410 and has undergone resource mapping.
  • in the resource mapping, a physical resource block is allocated to more than one bit sequence, that is, to the data of multiple users, so the signal sent to the terminals is a multi-channel signal containing data sent to multiple users.
  • FIG. 5 is a schematic diagram of a terminal according to another embodiment of the present disclosure.
  • the terminal 500 includes a receiving unit 510 and a processing unit 520.
  • the receiving unit 510 receives multiple signals from the base station, and the multiple signals include valid signals for multiple users.
  • the processing unit 520 processes the received multiple signals to restore one or more signals effective to the terminal 500. That is, the processing unit 520 performs multi-user detection processing on the received multiple signals.
  • a multi-task neural network is used to perform multi-user detection processing.
  • the multiple signals received by the receiving unit 510 are restored through multiple tasks in the multi-task neural network.
  • a multi-task neural network applied to multi-user detection processing includes a common part and multiple specific parts. Each task in the multi-task neural network shares the common part. Each task corresponds to a specific part.
  • the processing unit 520 first inputs the received multi-channel signal into the common part of the multi-task neural network for preprocessing, to determine the common characteristics of the signals and to extract the effective hidden features of the input signal.
  • the multiple signals processed by the common part are sent to each specific part of the multi-task neural network.
  • Each task is processed in each specific part to determine the specific characteristics of each signal.
  • the multiple signals sent to each specific part are all the same signal.
  • the multi-task neural network applied to multi-user detection may not include a common part, and the steps of extracting the effective hidden features of the input signal may also be processed in each specific part.
  • the processing unit 520 inputs the received multi-channel signal into the multi-task neural network, where the received multi-channel signal is processed; the input to each task is the same.
  • in each task, a network configured with different parameters is used to restore one of the multiple signals: the preliminary estimated value of the signal is first determined, interference cancellation is then performed, and the interference caused by the other signals is removed from the preliminary estimated value, so as to determine the estimated value of the signal after interference cancellation.
  • the specific method is as follows.
  • suppose that the i-th signal of the multi-channel signal is a valid signal for the terminal 500. Taking the i-th signal M i and the corresponding task T i as an example, in the task T i the multi-channel signal is input into the multi-task neural network, and the preliminary estimated value M i ′ of the i-th signal is obtained after restoration processing; interference cancellation processing is then performed on the preliminary estimated value M i ′.
  • in the interference cancellation processing, interference is removed based on the preliminary estimated values of the other signals determined by the other tasks.
  • specifically, the task T i also receives the preliminary estimated values of the other signals from the other tasks, and in the task T i the preliminary estimated values of the other signals are subtracted from the preliminary estimated value M i ′, thereby obtaining an estimated value M i after interference cancellation. Therefore, the estimated value M i after interference cancellation is an estimate from which the interference caused by the superposition of the multi-channel signal has been removed, and it has higher accuracy than the preliminary estimated value M i ′.
  • in addition, the preliminary estimated value M i ′ is also sent to the other tasks, so that the other tasks can perform their own interference cancellation processing.
  • for a task T i of the multi-task neural network, in the interference deletion processing of that task, the processing unit 520 may linearly subtract the preliminary estimated values of the other tasks from the preliminary estimated value M i ′.
  • for example, the preliminary estimated values of the other tasks may each be multiplied by a coefficient k, summed, and then subtracted from the preliminary estimated value M i ′.
  • the coefficient k can be pre-specified or obtained by training the neural network.
  • a neural network dedicated to the deletion step can also be used to perform the above subtraction processing.
  • in this case, the preliminary estimated value M i ′ of the i-th signal and the preliminary estimated values of the other signals obtained in the other tasks are input into this neural network, which non-linearly subtracts the preliminary estimated values of the other signals from the preliminary estimated value M i ′ and outputs the estimated value M i after interference cancellation, so as to delete the interference caused by the superposition of the multiple signals.
  • the multi-task neural network used by the processing unit 520 for multi-user detection may be a multi-layer neural network.
  • the multi-layer multi-task neural network can be divided into multiple interference cancellation stages, and the number of interference cancellation stages and the number of neural network layers contained in each stage are arbitrary.
  • each interference cancellation stage can contain one or more layers of the neural network, the interference deletion processing described above is performed once after each interference cancellation stage, and the estimated value after interference cancellation obtained through this processing is input into the next interference cancellation stage.
  • in the next interference cancellation stage, the multiple tasks determine, based on the interference-cancelled estimated values obtained in the previous stage, the preliminary estimated value of each signal of the multi-channel signal for that stage, and in each task the interference determined based on the preliminary estimated values of the other tasks in that stage is deleted from the preliminary estimated value of the current task for that stage. Therefore, after multiple interference cancellation stages, interference can be removed more thoroughly.
  • when the processing unit 520 uses the multi-task neural network to perform multi-user detection, in addition to restoring the received multi-channel signal to obtain the effective data or control signals sent to the terminal, user activity detection, PAPR (peak-to-average power ratio) reduction, and other processing can also be performed in one or more tasks.
  • in addition, the following processing is performed with respect to the loss of the neural network.
  • the loss characterizes the difference between the value of the signal restored by the neural network and the true value of the signal, for example, it can be mean square error, cross entropy, etc.
  • the balance loss between each task represents the difference between the loss of each task.
  • through training, the neural network is configured not only to minimize the loss of each task, but also to minimize the difference between the losses of the tasks.
  • the structure and parameters of the multi-task neural network used by the processing unit 520 can be specified by the base station according to its transmission scheme.
  • the receiving unit 510 of the terminal 500 receives the network configuration information sent by the base station.
  • the network configuration information is used to specify the network configuration of the multi-task neural network.
  • the network configuration information includes the network structure and network parameter information of the multi-task neural network.
  • the terminal 500 configures a multi-task neural network based on the received network configuration information. When used in an online manner, the terminal 500 may also perform online training and optimization of the multi-task neural network based on the received network configuration information.
  • the network configuration information may also be pre-defined precoding information, transmission scheme information, etc., for example, it may be a NOMA codebook or a MIMO codebook used by the base station.
  • the network configuration information may be exchanged between the base station and the terminal 500 through high-level signaling or physical layer signaling.
  • the terminal 500 may also determine the transmission scheme of the base station through a blind detection method, thereby determining the network parameters and network structure of the multi-task neural network for multi-user detection. In this case, the process of signaling interaction with the base station can be omitted.
  • in this way, the complexity of the receiving end in multi-user communication is reduced, since the terminal only needs to configure the multi-user detection neural network according to the transmission scheme on the base station side in order to be used for reception under that scheme. Therefore, for a variety of different transmission schemes, the hardware of the receiving end is universal, and its flexibility is improved.
  • the bit error rate in the receiving process can be reduced.
  • the base station and the terminal according to the embodiment of the present invention are respectively described in conjunction with FIG. 4 and FIG. 5.
  • the base station 400 shown in FIG. 4 is used at the transmitting end
  • the terminal shown in FIG. 5 is used at the receiving end
  • an end-to-end optimization method may be adopted to jointly optimize the neural network adopted by the base station 400 and the terminal 500.
  • the base station 400 further includes a sending unit 420.
  • the base station 400 determines the network structure and network parameters of the neural network used for the multi-user signature on the base station side (for example, the above-mentioned R and W n ), and the sending unit 420 sends network configuration information indicating the network configuration on the base station side, which may be configured dynamically, statically, or quasi-statically.
  • the receiving unit 510 of the terminal 500 receives this information, and the terminal configures the multi-task neural network for multi-user detection based on it (for example, setting the number of interference cancellation stages, adopting a linear or non-linear interference cancellation method, and so on). In this way, the neural network of the base station 400 and the neural network of the terminal 500 can be jointly optimized and trained end to end.
  • the network configuration information sent by the sending unit 420 may be pre-defined precoding information, transmission scheme information, and so on, for example the NOMA codebook or MIMO codebook used by the base station; this information may be exchanged between the base station 400 and the terminal 500 through higher-layer signaling or physical layer signaling.
  • the network configuration information sent by the base station 400 may include at least one of information indicating the network configuration of the neural network adopted by the base station 400 and information directly indicating the network configuration of the multi-task neural network on the terminal side.
  • when end-to-end joint optimization is adopted, the objective function of the neural network is also defined to include the loss of each task and the balance loss between the tasks, and the neural network is trained with the aim of minimizing the difference between the losses of the tasks, so as to reduce the bit error rate.
  • any training method, such as gradient descent, can be used for the optimization training of the neural networks involved in the above description.
  • Fig. 6 is a flowchart of a method executed by a terminal or a base station as a transmitting end according to an embodiment of the present disclosure.
  • the method 600 includes step S610.
  • in step S610, a neural network is used to perform multiple access signature processing on a bit sequence composed of bit data to be sent to multiple users; that is, the bit sequence to be sent is mapped through the neural network, and a complex symbol sequence is output.
  • the bit sequence input to the neural network in step S610 may be a bit sequence that has undergone at least one of encoding, spreading, interleaving, and scrambling, or it may be an unprocessed original bit sequence.
  • the processing performed in the neural network may include one or more of encoding, spreading, interleaving, scrambling, etc., in addition to mapping the bit sequence into a complex symbol sequence.
  • a multi-user signature mapping model can be used to map the bit sequence input to the neural network into a complex symbol sequence.
  • the bit sequence is mapped into a complex symbol sequence within a predetermined range of the complex plane.
  • the predetermined range can be expressed as a prescribed shape on a complex plane.
  • the prescribed shape may be any shape, as long as it is a subset of the complex plane.
  • step S610 by configuring the parameters of the neural network, the complex symbol sequence obtained by the mapping is confined in a parallelogram on the complex plane.
  • the parameter set of the neural network performing the mapping is configured as W_n; since the complex symbol sequence is to be confined to a parallelogram on the complex plane, the parameter set W_n needs to include the length of the long side of the parallelogram, the length of the short side, and the degrees of the two included angles.
  • assuming a function R is used to represent the mapping rule of the neural network, R can be regarded as the structure of the neural network, and the form of R is agreed upon so that the complex symbol sequence obtained by the neural network mapping is confined to the parallelogram on the complex plane.
  • the specific mapping method has been described above, and will not be repeated here.
  • step S610 the complex symbol sequence obtained by the mapping is limited to a subset of the entire complex plane, so that the complexity of the system is reduced when the neural network is applied to the multi-user signature processing.
  • since the parameter set of the neural network is set as the parameters representing a certain prescribed shape, the number of parameters of the neural network is reduced; for example, in the training of the neural network it is only necessary to optimize mainly the parameter set W_n, which reduces the complexity of training (an illustrative sketch follows below).
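  • as an illustration of the above, the following Python sketch builds a codebook of complex symbols confined to a parallelogram described by the parameter set W_n = {L_n, S_n, θ_L, θ_S} and selects a codeword by a one-hot input; since the exact form of the mapping rule R is given in the disclosure only as an equation image, this sketch is an assumed interpretation (here the two angles are read, for illustration, as the directions of the two edge vectors), and all function and variable names are hypothetical.

```python
import numpy as np

def parallelogram_codebook(L_n, S_n, theta_L, theta_S,
                           codebook_size=4, resource_elements=2, seed=0):
    """Build a codebook whose complex symbols all lie inside a parallelogram.

    The parallelogram is spanned by two edge vectors whose lengths are the
    long/short side lengths L_n and S_n and whose directions are, in this
    sketch, taken from the angles theta_L and theta_S (in radians).
    Each codeword is a sequence of `resource_elements` complex symbols.
    """
    rng = np.random.default_rng(seed)
    e_long = L_n * np.exp(1j * theta_L)    # edge vector of the long side
    e_short = S_n * np.exp(1j * theta_S)   # edge vector of the short side
    # Points a*e_long + b*e_short with a, b in [0, 1] always fall inside
    # the parallelogram spanned by the two edge vectors.
    a = rng.uniform(0.0, 1.0, size=(codebook_size, resource_elements))
    b = rng.uniform(0.0, 1.0, size=(codebook_size, resource_elements))
    return a * e_long + b * e_short        # shape: (codebook_size, resource_elements)

def map_one_hot(bit_sequence, codebook):
    """Map a one-hot bit sequence to its codeword (a complex symbol sequence)."""
    return codebook[int(np.argmax(bit_sequence))]

# W_n = {L_n, S_n, theta_L, theta_S}; a one-hot input [0, 0, 1, 0] selects
# the third codeword of the parallelogram-confined codebook.
codebook = parallelogram_codebook(L_n=2.0, S_n=1.0, theta_L=0.2, theta_S=1.1)
symbols = map_one_hot([0, 0, 1, 0], codebook)
```
  • because every codeword is generated inside the prescribed region, training only has to adjust the few parameters in W_n rather than an unconstrained mapping, which is the complexity reduction described above.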
  • the method 600 may further include step S620.
  • step S620 the complex symbol sequence obtained through the foregoing processing is mapped onto the physical resource block.
  • neural network technology may be used to perform resource mapping in step S620.
  • the complex symbol sequence is input into the neural network for resource mapping, and the physical resource mapping is realized through the processing of the neural network.
  • the mapping of resources can be adjusted and learned.
  • the terminal or base station using method 600 transmits the bit sequence that has been mapped in step S610 and has been resource mapped in step S620 in a non-orthogonal multiple access mode.
  • in resource mapping, data for multiple users is allocated to one physical resource block, as illustrated by the sketch below.
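  • as a minimal sketch of this non-orthogonal resource mapping, the Python snippet below superimposes the complex symbol sequences of several users onto the same resource elements of one physical resource block; in an actual system this mapping could itself be a trainable neural network layer, which is not shown here, and all names are illustrative.

```python
import numpy as np

def superimpose_on_resource_block(user_symbols):
    """Non-orthogonally map several users' symbol sequences onto one resource block.

    `user_symbols` is a list of equal-length complex arrays, one per user.
    Because the users share the same resource elements, the transmitted value
    on each element is the superposition (sum) of all users' symbols; the
    receiver must later separate them again by multi-user detection.
    """
    stacked = np.stack(user_symbols, axis=0)   # (num_users, num_resource_elements)
    return stacked.sum(axis=0)                 # one superimposed symbol per element

# Three users occupying the same two resource elements of one resource block.
tx_signal = superimpose_on_resource_block([
    np.array([0.3 + 0.1j, -0.2 + 0.4j]),
    np.array([0.1 - 0.3j,  0.5 + 0.0j]),
    np.array([-0.4 + 0.2j, 0.1 - 0.1j]),
])
```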
  • Fig. 7 is a flowchart of a method executed by a base station or a terminal as a receiving end according to an embodiment of the present disclosure.
  • the method 700 includes step S710, step S720, and step S730.
  • in step S710, multi-channel signals in which multiple valid signals are superimposed are received from the sending end.
  • step S720 and step S730 the received multiple signals are processed to restore the effective information of each signal. That is, step S720 and step S730 perform multi-user detection processing on the received multiple signals.
  • a multi-task neural network is used to perform multi-user detection processing.
  • step S720 and step S730 the multi-channel signal received in step S710 is restored through multiple tasks in the multi-task neural network.
  • a multi-task neural network applied to multi-user detection processing includes a common part and multiple specific parts. Each task in the multi-task neural network shares the common part. Each task corresponds to a specific part.
  • the common part of the multi-task neural network is used for preprocessing, to determine the common characteristics of each signal (that is, the characteristics the signals share) and to extract the effective hidden features of the input signals.
  • Each task is processed in each specific part to determine the specific characteristics of each signal.
  • the input signal of each specific part is the same signal.
  • the multi-task neural network applied to multi-user detection may not include a common part, and the steps of extracting the effective hidden features of the input signal may also be processed in each specific part.
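  • the original text does not fix the layer types or sizes of the multi-task detection network, so the PyTorch sketch below is one assumed interpretation of the structure just described: a shared common part followed by one task-specific head per signal, with every head receiving the same shared features; all layer sizes and names are assumptions.

```python
import torch
import torch.nn as nn

class MultiTaskDetector(nn.Module):
    """Multi-task detection network: a shared common part plus one specific part per task."""

    def __init__(self, in_dim, hidden_dim, out_dim, num_tasks):
        super().__init__()
        # Common part: shared by every task, extracts hidden features of the input signal.
        self.common = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        # Specific parts: one head per task (per signal), all fed the same features.
        self.heads = nn.ModuleList([
            nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
                          nn.Linear(hidden_dim, out_dim))
            for _ in range(num_tasks)
        ])

    def forward(self, received):
        shared = self.common(received)                  # common features of the superimposed signal
        return [head(shared) for head in self.heads]    # one preliminary estimate per task

# The received superimposed signal (real/imaginary parts flattened into a vector)
# is fed once; each task outputs a preliminary estimate of "its" user's signal.
detector = MultiTaskDetector(in_dim=8, hidden_dim=32, out_dim=4, num_tasks=3)
estimates = detector(torch.randn(16, 8))   # a batch of 16 received vectors
```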
  • in step S720, the received multi-channel signals are input into the multi-task neural network, and each task of the multi-task neural network processes the received multi-channel signals; that is, the input of every task in the multi-task neural network is the same.
  • a network configured with different parameters is used to restore one of the multiple signals.
  • in step S720, the preliminary estimated value of the signal is first determined; interference cancellation is then performed in step S730, in which the interference caused by the other signals is removed from the preliminary estimated value, thereby determining the estimated value of the signal after interference cancellation.
  • the specific method is as follows.
  • taking the task T_i corresponding to the i-th signal M_i among the multi-channel signals as an example: in step S720, in task T_i, the multi-channel signals input into the multi-task neural network are subjected to restoration processing to obtain the preliminary estimated value M_i' of the i-th signal.
  • in step S730, interference cancellation processing is performed on the preliminary estimated value M_i', removing interference based on the preliminary estimated values of the other signals determined by the other tasks.
  • specifically, in step S730, task T_i also receives the preliminary estimated values of the other signals from the other tasks, and the preliminary estimated values of the other signals are subtracted from the preliminary estimated value M_i', thereby obtaining the estimated value M_i'' after interference cancellation.
  • the estimated value M_i'' after interference cancellation is an estimate from which the interference caused by the superposition of the multi-channel signals has been removed, and it has higher accuracy than the preliminary estimated value M_i'.
  • similarly, the preliminary estimated value M_i' is also sent to the other tasks, so that the other tasks can perform their own interference cancellation processing.
  • for a task T_i of the multi-task neural network, in the interference cancellation processing of this task, the preliminary estimated values of the other tasks may be linearly subtracted from the preliminary estimated value M_i'.
  • for example, the preliminary estimated values of the other tasks may each be multiplied by a coefficient k and summed, and this sum may be subtracted from the preliminary estimated value M_i'.
  • the coefficient k for each task can be pre-specified or obtained by training the neural network.
  • alternatively, a neural network dedicated to the cancellation step can be used to perform the above subtraction.
  • in task T_i, the preliminary estimated value M_i' of the i-th signal and the preliminary estimated values of the other signals obtained in the other tasks are input into this dedicated neural network, which non-linearly subtracts the preliminary estimated values of the other signals from M_i' and outputs the estimated value M_i'' after interference cancellation, thereby removing the interference caused by the superposition of the multi-channel signals.
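  • as a sketch of the linear cancellation just described, the function below computes M_i'' = M_i' − Σ_{j≠i} k_j·M_j' for every task at once; the coefficients could equally be learnable parameters of the network, and the names used here are illustrative.

```python
import numpy as np

def cancel_interference(preliminary, coefficients):
    """Linear interference cancellation across the tasks of the multi-task network.

    `preliminary[i]` is task T_i's preliminary estimate M_i' and
    `coefficients[i]` is the coefficient k_i associated with it. For each
    task, the coefficient-weighted estimates of all *other* tasks are summed
    and subtracted, giving the interference-cancelled estimate M_i''.
    """
    preliminary = np.asarray(preliminary)
    coefficients = np.asarray(coefficients)
    weighted = coefficients[:, None] * preliminary   # k_j * M_j' for every task j
    total = weighted.sum(axis=0)                     # sum over all tasks
    # total - weighted[i] leaves only the other tasks' contributions for task i.
    return preliminary - (total - weighted)

# Three tasks with short estimate vectors; each k_j may be pre-specified or trained.
cleaned = cancel_interference(
    preliminary=[[0.9, 0.1], [0.2, 0.7], [0.4, 0.3]],
    coefficients=[0.5, 0.5, 0.5],
)
```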
  • the multi-task neural network used for multi-user detection is a multi-layer neural network.
  • the multi-layer multi-task neural network can be divided into multiple interference removal stages; the number of interference removal stages and the number of neural network layers included in each stage are arbitrary.
  • each interference removal stage can contain one or more layers of the neural network.
  • the interference removal processing described above is performed after each interference removal stage, and the interference-removed estimated values obtained by that processing are input into the next interference removal stage.
  • in the next interference removal stage, step S720 is applied to determine, for each signal in the multi-channel signals, the preliminary estimated value of that stage based on the interference-removed estimated values obtained in the previous interference removal stage.
  • in each task, step S730 is applied to delete, from the preliminary estimated value of that stage in this task, the interference determined based on the preliminary estimated values of that stage in the other tasks; therefore, after multiple interference removal stages, interference removal can be performed more thoroughly.
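  • the number of interference removal stages and the layers per stage are left open above, so the sketch below only shows the control flow: each stage produces new preliminary estimates from the previous stage's interference-cancelled estimates and then applies the cancellation step again; `stage_networks` stands in for whatever per-stage networks are actually configured, and `cancel_interference` is the function sketched earlier.

```python
def multi_stage_detection(received, stage_networks, coefficients):
    """Run several interference removal stages over the received multi-channel signals.

    `stage_networks[s]` maps (received signal, previous cleaned estimates) to the
    per-task preliminary estimates of stage s; the cancelled estimates of each
    stage are fed into the next stage, so cancellation becomes more thorough.
    """
    cleaned = None                                   # no estimates exist before the first stage
    for stage in stage_networks:
        preliminary = stage(received, cleaned)       # per-task preliminary estimates of this stage
        cleaned = cancel_interference(preliminary, coefficients)
    return cleaned                                   # estimates after the final cancellation
```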
  • since a multi-task neural network is used to perform multi-user detection, in addition to restoring the received multi-channel signals to obtain the valid data or control signals sent to the terminal, user activity detection, PAPR (peak-to-average power ratio) reduction, etc. can also be performed in one or more of the tasks.
  • when training and optimizing the neural network used for multi-user detection, the following processing is also performed to reduce the loss of the neural network processing.
  • the loss characterizes the difference between the value of the signal restored by the neural network and the true value of the signal, for example, it can be mean square error, cross entropy, etc.
  • in the optimization training of the multi-task neural network, its objective function is assumed to include the loss of each task and the balance loss between the tasks, where the balance loss between the tasks represents the degree of difference between the losses of the tasks.
  • the neural network is trained so that it is configured not only to minimize the loss of each task but also to minimize the difference between the losses of the tasks.
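  • the disclosure does not give a formula for the balance loss, only that it represents the difference between the per-task losses; one assumed reading, sketched in PyTorch below, adds the variance of the per-task losses to their sum, so that training minimizes both each task's loss and the spread between the losses; the weighting factor is also an assumption.

```python
import torch

def multi_task_objective(per_task_losses, balance_weight=1.0):
    """Objective = sum of each task's loss + a balance loss between the tasks.

    `per_task_losses` holds one scalar loss per task (e.g. mean square error
    or cross entropy). The balance term used here is the variance of those
    losses, so minimizing the objective also pushes the tasks' losses toward
    one another.
    """
    losses = torch.stack(list(per_task_losses))
    task_loss = losses.sum()                      # loss of each task
    balance_loss = losses.var(unbiased=False)     # difference between the tasks' losses
    return task_loss + balance_weight * balance_loss

# Three tasks whose losses were computed elsewhere in the detection network.
objective = multi_task_objective([torch.tensor(0.42), torch.tensor(0.35), torch.tensor(0.58)])
```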
  • the structure and parameters of the neural network applied to the terminal can be specified by the base station according to the sending scheme.
  • the terminal applying the method 600 and the method 700 also receives the network configuration information sent by the base station.
  • the network configuration information is used to specify the network configuration of the neural network of the terminal.
  • the network configuration information includes network structure and network parameter information.
  • based on the received network configuration information, the terminal configures its neural network; when used in an online manner, the terminal can also perform online training and optimization of its neural network based on the received network configuration information.
  • the network configuration information may also be pre-defined precoding information, transmission scheme information, etc., for example, may be the NOMA codebook or MIMO codebook used.
  • Network configuration information can be exchanged between the base station and the terminal through high-level signaling or physical layer signaling.
  • the terminal may also send the above-mentioned network configuration information to the base station to specify the neural network configuration of the base station or help the base station determine the neural network configuration to be used.
  • the terminal applying the method 600 and the method 700 may also determine the transmission scheme of the base station through the blind detection method, thereby determining the network parameters and network structure of the multi-task neural network for multi-user detection. In this case, the process of signaling interaction with the base station can be omitted.
  • an end-to-end optimization method may be adopted to jointly optimize the neural networks adopted by the sending end and the receiving end.
  • the base station adopting the above method 600 and method 700 determines the network configuration and network parameters of the neural network it adopts, and sends network configuration information to the terminal adopting the above method 600 and method 700
  • the network configuration information indicates the network configuration on the base station side, which may be dynamically configured, or statically or quasi-statically configured.
  • the terminal configures the multi-task neural network of the terminal based on the information, so that the neural networks used by the sending end and the receiving end can be jointly optimized training from end to end.
  • the network configuration information sent by the base station may be pre-defined precoding information, transmission scheme information, etc., for example the NOMA codebook or MIMO codebook used by the base station, and this information may be exchanged between the sending end and the receiving end through higher layer signaling or physical layer signaling.
  • the transmitted network configuration information may include at least one of information indicating the network configuration of the neural network adopted by the base station and information directly indicating the network configuration of the multi-task neural network on the terminal side.
  • when end-to-end joint optimization is adopted, the objective function of the neural network is also defined to include the loss of each task and the balance loss between the tasks, and the neural network is trained with the aim of minimizing the difference between the losses of the tasks, so as to reduce the bit error rate.
  • optimization training of the neural network involved in the above description can adopt any training method, such as a gradient descent training method.
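  • as a hedged sketch of such end-to-end joint optimization, the loop below treats the transmitter's signature network and the receiver's multi-task detection network as one differentiable chain through a channel model and updates both by gradient descent; the channel model, the batch format, and the `receiver.losses` interface are assumptions made only for illustration, and `multi_task_objective` is the loss sketched earlier.

```python
import torch

def train_end_to_end(transmitter, receiver, channel, data_loader, epochs=10, lr=1e-3):
    """Jointly train the transmitter and receiver networks end to end with gradient descent."""
    params = list(transmitter.parameters()) + list(receiver.parameters())
    optimizer = torch.optim.SGD(params, lr=lr)          # any optimizer could be used here
    for _ in range(epochs):
        for bits in data_loader:                        # bit sequences of all users
            symbols = transmitter(bits)                 # multi-user signature mapping
            received = channel(symbols)                 # superposition plus noise (assumed model)
            per_task_losses = receiver.losses(received, bits)   # one loss per task (assumed API)
            objective = multi_task_objective(per_task_losses)   # per-task losses + balance loss
            optimizer.zero_grad()
            objective.backward()
            optimizer.step()
    return transmitter, receiver
```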
  • each functional block can be realized by one device that is physically and/or logically combined, or by two or more devices that are physically and/or logically separate and connected directly and/or indirectly (for example, by wire and/or wirelessly), with the functional block realized by these multiple devices.
  • a device according to an embodiment of the present disclosure (for example, a first communication device, a second communication device, or a flying user terminal) may function as a computer that executes the processing of the wireless communication method of the present disclosure.
  • FIG. 8 is a schematic diagram of the hardware structure of the involved device 800 (base station or user terminal) according to an embodiment of the present disclosure.
  • the aforementioned device 800 may be constituted as a computer device physically including a processor 810, a memory 820, a storage 830, a communication device 840, an input device 850, an output device 860, a bus 870, and the like.
  • in the following description, the word "device" may be replaced with circuit, device, unit, etc.
  • the hardware structure of the user terminal and the base station may include one or more of the devices shown in the figure, or may not include some of the devices.
  • for example, only one processor 810 is shown in the figure, but there may be multiple processors.
  • processing may be executed by one processor, or may be executed by more than one processor simultaneously, sequentially, or by other methods.
  • the processor 810 may be implemented by more than one chip.
  • each function of the device 800 is realized, for example, in the following way: by reading predetermined software (programs) into hardware such as the processor 810 and the memory 820, the processor 810 performs calculations, controls the communication performed by the communication device 840, and controls the reading and/or writing of data in the memory 820 and the storage 830.
  • the processor 810 operates, for example, an operating system to control the entire computer.
  • the processor 810 may be constituted by a central processing unit (CPU, Central Processing Unit) including an interface with peripheral devices, a control device, a computing device, and a register.
  • the aforementioned processing unit and the like may be implemented by the processor 810.
  • the processor 810 reads programs (program codes), software modules, data, etc. from the storage 830 and/or the communication device 840 into the memory 820, and executes various processes according to them.
  • as the program, a program that causes a computer to execute at least a part of the operations described in the above embodiments can be adopted.
  • the processing unit of the aforementioned terminal or base station may be implemented by a control program that is stored in the memory 820 and operated by the processor 810, and the other functional blocks may be implemented in the same way.
  • the memory 820 is a computer-readable recording medium, and may be composed of, for example, at least one of Read Only Memory (ROM), Erasable Programmable ROM (EPROM), Electrically EPROM (EEPROM), Random Access Memory (RAM), and other suitable storage media.
  • the memory 820 may also be called a register, a cache, a main memory (main storage device), and the like.
  • the memory 820 may store executable programs (program codes), software modules, etc. used to implement the methods involved in an embodiment of the present disclosure.
  • the storage 830 is a computer-readable recording medium, and may be composed of, for example, at least one of a flexible disk, a floppy (registered trademark) disk, a magneto-optical disk (for example, a CD-ROM (Compact Disc ROM), a digital versatile disc, or a Blu-ray (registered trademark) disc), a removable disk, a hard disk drive, a smart card, a flash memory device (for example, a card, a stick, or a key driver), a magnetic strip, a database, a server, and other appropriate storage media.
  • the storage 830 may also be referred to as an auxiliary storage device.
  • the communication device 840 is hardware (transmitting and receiving equipment) used for communication between computers via a wired and/or wireless network, and is also referred to as a network device, a network controller, a network card, a communication module, etc., for example.
  • in order to realize, for example, frequency division duplex (FDD) and/or time division duplex (TDD), the communication device 840 may include a high-frequency switch, a duplexer, a filter, a frequency synthesizer, and the like.
  • the aforementioned sending unit, receiving unit, etc. may be implemented by the communication device 840.
  • the input device 850 is an input device (for example, keyboard, mouse, microphone, switch, button, sensor, etc.) that accepts input from the outside.
  • the output device 860 is an output device that implements output to the outside (for example, a display, a speaker, a light emitting diode (LED, Light Emitting Diode) lamp, etc.).
  • the input device 850 and the output device 860 may also be an integrated structure (for example, a touch panel).
  • the devices such as the processor 810 and the memory 820 are connected by a bus 870 for communicating information.
  • the bus 870 may be composed of a single bus, or may be composed of different buses between devices.
  • the base station and the user terminal may include hardware such as a microprocessor, a digital signal processor (DSP, Digital Signal Processor), an application specific integrated circuit (ASIC, Application Specific Integrated Circuit), a programmable logic device (PLD, Programmable Logic Device), and a field programmable gate array (FPGA, Field Programmable Gate Array), and part or all of each functional block may be realized by this hardware; for example, the processor 810 may be implemented by at least one of these pieces of hardware.
  • the channel and/or symbol may also be a signal (signaling).
  • the signal can also be a message.
  • the reference signal may also be referred to as RS (Reference Signal) for short, and may also be referred to as pilot (Pilot), pilot signal, etc., according to applicable standards.
  • the component carrier (CC, Component Carrier) may also be referred to as a cell, a frequency carrier, a carrier frequency, etc.
  • the information, parameters, etc. described in this specification can be represented by absolute values, can be represented by relative values to predetermined values, or can be represented by corresponding other information.
  • the wireless resource can be indicated by a prescribed index.
  • the formulas etc. using these parameters may also be different from those explicitly disclosed in this specification.
  • the information, signals, etc. described in this specification can be expressed using any of a variety of different technologies.
  • the data, commands, instructions, information, signals, bits, symbols, chips, etc. that may be mentioned throughout the above description can be represented by voltages, currents, electromagnetic waves, magnetic fields or magnetic particles, optical fields or photons, or any combination thereof.
  • information, signals, etc. can be output from the upper layer to the lower layer, and/or from the lower layer to the upper layer.
  • Information, signals, etc. can be input or output via multiple network nodes.
  • the input or output information, signals, etc. can be stored in a specific place (such as memory), or can be managed through a management table.
  • the input or output information, signals, etc. can be overwritten, updated or supplemented.
  • the output information, signals, etc. can be deleted.
  • the input information, signals, etc. can be sent to other devices.
  • the notification of information is not limited to the mode/implementation described in this specification, and may be performed by other methods.
  • the notification of information may be through physical layer signaling (for example, Downlink Control Information (DCI), Uplink Control Information (UCI)), upper layer signaling (for example, radio resource control (RRC, Radio Resource Control) signaling, broadcast information (Master Information Block (MIB, Master Information Block), System Information Block (SIB, System Information Block), etc.), media access control (MAC, Medium Access Control) signaling ), other signals or a combination of them.
  • the physical layer signaling may also be referred to as L1/L2 (layer 1/layer 2) control information (L1/L2 control signal), L1 control information (L1 control signal), or the like.
  • the RRC signaling may also be referred to as an RRC message, for example, it may be an RRC Connection Setup (RRC Connection Setup) message, an RRC Connection Reconfiguration (RRC Connection Reconfiguration) message, and so on.
  • the MAC signaling may be notified by, for example, a MAC control element (MAC CE (Control Element)).
  • the notification of prescribed information is not limited to being explicitly performed, and may also be done implicitly (for example, by not performing notification of the prescribed information, or by notification of other information).
  • the judgment can be made by a value (0 or 1) represented by one bit, by a true/false value (Boolean) represented by true or false, or by a comparison of numerical values (for example, a comparison with a predetermined value).
  • whether software is called software, firmware, middleware, microcode, or hardware description language, or goes by another name, it should be broadly interpreted to mean commands, command sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executable files, threads of execution, steps, functions, and the like.
  • software, commands, information, etc. may be transmitted or received via a transmission medium.
  • for example, when software is sent from a website, a server, or other remote sources using wired technology (coaxial cable, optical cable, twisted pair, digital subscriber line (DSL, Digital Subscriber Line), etc.) and/or wireless technology (infrared, microwave, etc.), these wired and/or wireless technologies are included within the definition of a transmission medium.
  • the terms "system" and "network" used in this specification can be used interchangeably.
  • in this specification, the terms "base station (BS, Base Station)", "radio base station", "eNB", "gNB", "cell", "sector", "cell group", "carrier", and "component carrier" can be used interchangeably.
  • a base station is sometimes also referred to by terms such as fixed station, NodeB, eNodeB (eNB), access point, transmission point, reception point, femto cell, or small cell.
  • the base station can accommodate one or more (for example, three) cells (also called sectors); when the base station accommodates multiple cells, the entire coverage area of the base station can be divided into multiple smaller areas, and each smaller area can also provide communication services through a base station subsystem (for example, an indoor small base station or a remote radio head (RRH, Remote Radio Head)).
  • the term "cell" or "sector" refers to part or the whole of the coverage area of the base station and/or base station subsystem that provides communication services in that coverage.
  • in this specification, the terms "mobile station (MS, Mobile Station)", "user terminal", "user equipment (UE, User Equipment)", and "terminal" can be used interchangeably.
  • a mobile station is sometimes also referred to by those skilled in the art as a subscriber station, mobile unit, subscriber unit, wireless unit, remote unit, mobile device, wireless device, wireless communication device, remote device, mobile subscriber station, access terminal, mobile terminal, wireless terminal, remote terminal, handset, user agent, mobile client, client, or some other appropriate term.
  • the wireless base station in this specification can also be replaced with a user terminal.
  • for example, the various modes/embodiments of the present disclosure can also be applied to a configuration in which communication between a radio base station and user terminals is replaced with communication between multiple user terminals (D2D, Device-to-Device).
  • the functions of the first communication device or the second communication device in the aforementioned device 800 can be regarded as functions of the user terminal.
  • words such as "uplink" and "downlink" can also be replaced with "side".
  • the uplink channel can also be replaced with a side channel.
  • the user terminal in this specification can also be replaced with a wireless base station.
  • the above-mentioned functions of the user terminal can be regarded as functions of the first communication device or the second communication device.
  • a specific operation performed by a base station may also be performed by its upper node depending on the situation.
  • in a network composed of one or more network nodes including a base station, the various actions performed for communication with a terminal can obviously be performed by the base station, by one or more network nodes other than the base station (for example, a Mobility Management Entity (MME), a Serving-Gateway (S-GW), etc., although not limited to these), or by a combination of them.
  • the modes/embodiments described in this specification may be applied to systems using LTE (Long Term Evolution), LTE-A (LTE-Advanced), LTE-B (LTE-Beyond), SUPER 3G, IMT-Advanced, 4G (4th generation mobile communication system), 5G (5th generation mobile communication system), FRA (Future Radio Access), New-RAT (Radio Access Technology), NR (New Radio), NX (New radio access), FX (Future generation radio access), GSM (registered trademark) (Global System for Mobile communications), CDMA2000, UMB (Ultra Mobile Broadband), IEEE 802.11 (Wi-Fi (registered trademark)), IEEE 802.16 (WiMAX (registered trademark)), and/or to next-generation systems extended based on them.
  • any reference to units using names such as "first" and "second" used in this specification does not comprehensively limit the number or order of these units; these names can be used in this specification as a convenient way to distinguish two or more units, so a reference to a first unit and a second unit does not mean that only two units can be adopted or that the first unit must precede the second unit in some form.
  • the term "determining" used in this specification may include a wide variety of actions; for example, calculating, computing, processing, deriving, investigating, looking up (for example, looking up a table, a database, or another data structure), and ascertaining may be regarded as "determining".
  • in addition, receiving (for example, receiving information), transmitting (for example, transmitting information), input, output, and accessing (for example, accessing data in a memory) may also be regarded as "determining".
  • furthermore, resolving, selecting, choosing, establishing, comparing, and the like may also be regarded as "determining"; in other words, several kinds of actions may be regarded as "determining".
  • the terms "connected" and "coupled", or any variation of them, mean any direct or indirect connection or coupling between two or more units, and can include the case where one or more intermediate units exist between two units that are "connected" or "coupled" with each other.
  • the coupling or connection between units may be physical, logical, or a combination of the two; for example, "connect" can also be replaced with "access".
  • as used in this specification, two units can be considered to be "connected" or "coupled" with each other by using one or more wires, cables, and/or printed electrical connections, and, as some non-limiting and non-exhaustive examples, by using electromagnetic energy having wavelengths in the radio frequency region, the microwave region, and/or the light (both visible and invisible) region.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Medical Informatics (AREA)
  • Quality & Reliability (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The present disclosure provides a terminal and a base station. The terminal includes a processing unit that uses a neural network to map a bit sequence to be sent into a complex symbol sequence, where the neural network is configured to map the bit sequence into a complex symbol sequence within a predetermined range of the complex plane.

Description

终端和基站 技术领域
本公开涉及无线通信领域,并且更具体地涉及无线通信领域中的终端和基站。
背景技术
目前,已提出将非正交多址接入(NOMA)技术应用于5G等未来的无线通信系统中,以提高通信系统的频谱利用率。相对于传统的正交多址接入技术,NOMA在发送端采用非正交发送,将一个无线资源分配给多个用户,更适用于通信容量较大的物联网(IoT)、大规模机器类通信(mMTC)等无线通信服务。在应用了NOMA技术的通信传输中,不同的用户在相同的子信道上进行非正交传输,在发送侧引入了干扰信息,因此,为了对接收到的信息进行正确地解调,在接收侧需要采用串行干扰删除(SIC)技术等对干扰信息进行干扰删除,从而提高了接收机的复杂度。并且,针对不同的NOMA方案需要设计不同类型的接收机,对于接收机的灵活性有一定的限制。
另一方面,随着科技的发展,人工智能(AI)技术被用于很多不同的领域,并且已经提出了将AI技术应用于无线通信系统中以满足用户的需求。在AI技术中,多任务深度学习技术能够同时执行多个互相具有关联的任务,其与同时非正交地传输多路信号的非正交多址接入技术具有一定的对偶性,因此可以设想将多任务深度学习技术应用于采用非正交多址接入技术的基站或终端中,以实现对非正交多址接入技术的优化。
发明内容
根据本公开的一个方面,提供了一种终端,包括:处理单元,使用神经网络将要发送的比特序列映射为复数符号序列,其中所述神经网络被配置为将比特序列在复平面的预定范围内映射为复数符号序列。
根据本公开的一个示例,在上述终端中,还包括接收单元,所述接收单元接收由基站发送的包含用于表示所述基站采用的神经网络的网络配置的信息和用于指示所述终端的神经网络的网络配置的信息中的至少一个的网络配置信息。
根据本公开的一个示例,在上述终端中,所述处理单元基于所述网络配置信息来配置所述终端的神经网络。
根据本公开的一个示例,在上述终端中,所述网络配置信息包含网络结构和网络参数信息。
根据本公开的另一个方面,提供了一种基站,包括:接收单元,接收由多个终端发送的信号叠加的多路信号;以及处理单元,还原所述多路信号,通过多任务神经网络中的多个任务分别确定所述多路信号的初步估计值,并在所述多任务神经网络的第一任务中,从由所述第一任务确定的第一路信号的初步估计值中删除由所述多路信号中其他路信号造成的干扰,从而确定所述第一路信号的干扰删除后的估计值,其中由所述多路信号中其他路信号造成的干扰是基于由所述多个任务中除了所述第一任务以外的其他任务确定的初步估计值获得的。
根据本公开的一个示例,在上述基站中,所述多任务神经网络包含一个公共部分和多个特定部分,所述多任务神经网络中的每个任务共用所述公共部分,其用于确定所述多路信号中每路信号的公共特征,所述多任务神经网络中的每个任务分别对应于一个所述特定部分,其用于分别确定每路信号的特定特征。
根据本公开的一个示例,在上述基站中,所述多任务神经网络包含多层,所述多任务神经网络包含多个干扰删除阶段,每个干扰删除阶段包含一或多层神经网络,在第一干扰删除阶段中,通过所述多个任务分别确定所述多路信号的第一干扰删除阶段的初步估计值,并从由所述第一任务确定的第一路信号第一干扰删除阶段的初步估计值中删除基于所述其他路信号的第一干扰删除阶段的初步估计值而获得的干扰,从而确定所述第一路信号的第一干扰删除阶段的干扰删除后的估计值,在第二干扰删除阶段中,通过所述多个任务分别基于所述多路信号的第一干扰删除阶段的干扰删除后的估计值确定所述多路信号的第二干扰删除阶段的初步估计值,并从所述第一路信号第二干扰删除阶段的初步估计值中删除基于所述其他路信号的第二干扰删除阶段的初步估计值而获得的干扰。
根据本公开的一个示例,在上述基站中还包括发送单元,发送与所述多任务神经网络的结构和参数有关的信息。
根据本公开的一个示例,在上述基站中,所述多任务神经网络被配置为 对所述多个任务中每个任务的损失进行平衡,所述损失是每个任务还原的一路信号的值与该路信号的真实值之间的差异。
根据本公开的另一个方面,提供了一种终端。该终端包括:接收单元,接收由基站发送的叠加的多路信号;以及处理单元,还原所述多路信号,通过多任务神经网络中的多个任务分别确定所述多路信号的初步估计值,并在所述多任务神经网络的第一任务中,从由所述第一任务确定的第一路信号的初步估计值中删除由所述多路信号中其他路信号造成的干扰,从而确定所述第一路信号的干扰删除后的估计值,其中由所述多路信号中其他路信号造成的干扰是基于由所述多个任务中除了所述第一任务以外的其他任务确定的初步估计值获得的。
根据本公开的一个示例,在上述终端中,所述多任务神经网络包含一个公共部分和多个特定部分,所述多任务神经网络中的每个任务共用所述公共部分,其用于确定所述多路信号中每路信号的公共特征,所述多任务神经网络中的每个任务分别对应于一个所述特定部分,其用于分别确定每路信号的特定特征。
根据本公开的一个示例,在上述终端中,所述多任务神经网络包含多层,所述多任务神经网络包含多个干扰删除阶段,每个干扰删除阶段包含一或多层神经网络,在第一干扰删除阶段中,通过所述多个任务分别确定所述多路信号的第一干扰删除阶段的初步估计值,并从由所述第一任务确定的第一路信号第一干扰删除阶段的初步估计值中删除基于所述其他路信号的第一干扰删除阶段的初步估计值而获得的干扰,从而确定所述第一路信号的第一干扰删除阶段的干扰删除后的估计值,在第二干扰删除阶段中,通过所述多个任务分别基于所述多路信号的第一干扰删除阶段的干扰删除后的估计值确定所述多路信号的第二干扰删除阶段的初步估计值,并从所述第一路信号第二干扰删除阶段的初步估计值中删除基于所述其他路信号的第二干扰删除阶段的初步估计值而获得的干扰。
根据本公开的一个示例,在上述终端中,所述接收单元接收由基站发送的包含用于表示所述基站采用的神经网络的网络配置的信息和用于指示所述终端的所述多任务神经网络的网络配置的信息中的至少一个的网络配置信息。
根据本公开的一个示例,在上述终端中,所述处理单元基于所述网络配 置信息来配置所述多任务神经网络。
根据本公开的一个示例,在上述终端中,所述网络配置信息包含网络结构和网络参数信息。
根据本公开的一个示例,在上述终端中,所述多任务神经网络被配置为对所述多个任务中每个任务的损失进行平衡,所述损失是每个任务还原的一路信号的值与该路信号的真实值之间的差异。
根据本公开的另一方面,提供了一种基站。该基站包括:处理单元,使用神经网络将要发送的比特序列映射为复数符号序列,其中所述神经网络被配置为将比特序列在复平面的预定范围内映射为复数符号序列。
根据本公开的一个示例,在上述基站中,还包括:发送单元,发送由所述处理单元进行了映射处理的所述比特序列,并发送与所述神经网络的结构和参数有关的信息。
根据本公开的另一方面,提供了一种用于终端的发送方法,该发送方法包括:使用神经网络将要发送的比特序列映射为复数符号序列,其中所述神经网络被配置为将比特序列在复平面的预定范围内映射为复数符号序列。
根据本公开的一个示例,在上述发送方法中,接收由基站发送的包含用于表示所述基站采用的神经网络的网络配置的信息和用于指示所述终端的神经网络的网络配置的信息中的至少一个的网络配置信息。
根据本公开的一个示例,在上述发送方法中,基于所述网络配置信息来配置所述终端的神经网络。
根据本公开的一个示例,在上述发送方法中,所述网络配置信息包含网络结构和网络参数信息。
根据本公开的另一方面,提供了一种用于基站的接收方法,该接收方法包括:接收由多个终端发送的信号叠加的多路信号;以及还原所述多路信号,通过多任务神经网络中的多个任务分别确定所述多路信号的初步估计值,并在所述多任务神经网络的第一任务中,从由所述第一任务确定的第一路信号的初步估计值中删除由所述多路信号中其他路信号造成的干扰,从而确定所述第一路信号的干扰删除后的估计值,其中由所述多路信号中其他路信号造成的干扰是基于由所述多个任务中除了所述第一任务以外的其他任务确定的初步估计值获得的。
根据本公开的一个示例,在上述接收方法中,所述多任务神经网络包含 一个公共部分和多个特定部分,所述多任务神经网络中的每个任务共用所述公共部分,其用于确定所述多路信号中每路信号的公共特征,所述多任务神经网络中的每个任务分别对应于一个所述特定部分,其用于分别确定每路信号的特定特征。
根据本公开的一个示例,在上述接收方法中,所述多任务神经网络包含多层,所述多任务神经网络包含多个干扰删除阶段,每个干扰删除阶段包含一或多层神经网络,在第一干扰删除阶段中,通过所述多个任务分别确定所述多路信号的第一干扰删除阶段的初步估计值,并从由所述第一任务确定的第一路信号第一干扰删除阶段的初步估计值中删除基于所述其他路信号的第一干扰删除阶段的初步估计值而获得的干扰,从而确定所述第一路信号的第一干扰删除阶段的干扰删除后的估计值,在第二干扰删除阶段中,通过所述多个任务分别基于所述多路信号的第一干扰删除阶段的干扰删除后的估计值确定所述多路信号的第二干扰删除阶段的初步估计值,并从所述第一路信号第二干扰删除阶段的初步估计值中删除基于所述其他路信号的第二干扰删除阶段的初步估计值而获得的干扰。
根据本公开的一个示例,在上述接收方法中,还包括发送与所述多任务神经网络的结构和参数有关的信息。
根据本公开的一个示例,在上述接收方法中,所述多任务神经网络被配置为对所述多个任务中每个任务的损失进行平衡,所述损失是每个任务还原的一路信号的值与该路信号的真实值之间的差异。
根据本公开的另一方面,提供了一种用于终端的接收方法,该接收方法包括:接收由基站发送的叠加的多路信号;通过多任务神经网络中的多个任务分别确定所述多路信号的初步估计值;以及在所述多任务神经网络的第一任务中,从由所述第一任务确定的第一路信号的初步估计值中删除由所述多路信号中其他路信号造成的干扰,从而确定所述第一路信号的干扰删除后的估计值,其中由所述多路信号中其他路信号造成的干扰是基于由所述多个任务中除了所述第一任务以外的其他任务确定的初步估计值获得的。
根据本公开的一个示例,在上述接收方法中,所述多任务神经网络包含一个公共部分和多个特定部分,所述多任务神经网络中的每个任务共用所述公共部分,其用于确定所述多路信号中每路信号的公共特征,所述多任务神经网络中的每个任务分别对应于一个所述特定部分,其用于分别确定每路信 号的特定特征。
根据本公开的一个示例,在上述接收方法中,所述多任务神经网络包含多层,所述多任务神经网络包含多个干扰删除阶段,每个干扰删除阶段包含一或多层神经网络,在第一干扰删除阶段中,通过所述多个任务分别确定所述多路信号的第一干扰删除阶段的初步估计值,并从由所述第一任务确定的第一路信号第一干扰删除阶段的初步估计值中删除基于所述其他路信号的第一干扰删除阶段的初步估计值而获得的干扰,从而确定所述第一路信号的第一干扰删除阶段的干扰删除后的估计值,在第二干扰删除阶段中,通过所述多个任务分别基于所述多路信号的第一干扰删除阶段的干扰删除后的估计值确定所述多路信号的第二干扰删除阶段的初步估计值,并从所述第一路信号第二干扰删除阶段的初步估计值中删除基于所述其他路信号的第二干扰删除阶段的初步估计值而获得的干扰。
根据本公开的一个示例,在上述接收方法中,接收由基站发送的包含用于表示所述基站采用的神经网络的网络配置和所述终端的所述多任务神经网络的网络配置中的至少一个的网络配置信息。
根据本公开的一个示例,在上述接收方法中,基于所述网络配置信息来配置所述多任务神经网络。
根据本公开的一个示例,在上述接收方法中,所述网络配置信息包含网络结构和网络参数信息。
根据本公开的一个示例,在上述接收方法中,所述多任务神经网络被配置为对所述多个任务中每个任务的损失进行平衡,所述损失是每个任务还原的一路信号的值与该路信号的真实值之间的差异。
根据本公开的另一方面,提供了一种用于基站的发送方法,该发送方法包括:使用神经网络将要发送的比特序列映射为复数符号序列,其中所述神经网络被配置为将比特序列在复平面的预定范围内映射为复数符号序列。
根据本公开的一个示例,在上述发送方法中,还包括:叠加发送由所述处理单元进行了映射处理的所述比特序列,并发送与所述神经网络的结构和参数有关的信息。
附图说明
通过结合附图对本公开实施例进行更详细的描述,本公开的上述以及其 它目的、特征和优势将变得更加明显。附图用来提供对本公开实施例的进一步理解,并且构成说明书的一部分,与本公开实施例一起用于解释本公开,并不构成对本公开的限制。在附图中,相同的参考标号通常代表相同部件或步骤。
图1是可在其中应用本公开的实施例的无线通信系统的示意图。
图2是根据本公开的一个实施例的终端的结构示意图。
图3是根据本公开的一个实施例的基站的结构示意图。
图4是根据本公开的另一个实施例的基站的结构示意图。
图5是根据本公开的另一个实施例的终端的结构示意图。
图6是根据本公开的一个实施例的发送方法的流程图。
图7是根据本公开的一个实施例的接收方法的流程图。
图8是根据本公开实施例所涉及的设备的硬件结构的示意图。
具体实施方式
为了使得本公开的目的、技术方案和优点更为明显,下面将参照附图详细描述根据本公开的示例实施例。在附图中,相同的参考标号自始至终表示相同的元件。应当理解:这里描述的实施例仅仅是说明性的,而不应被解释为限制本公开的范围。此外,这里所述的终端可以包括各种类型的终端,例如用户终端(User Equipment,UE)、移动终端(或称为移动台)或者固定终端,然而,为方便起见,在下文中有时候可互换地使用终端和UE。此外,在本公开的实施例,神经网络是在AI功能模块中使用的人工神经网络。为了简洁,在以下描述中有时候可称为神经网络。
首先,参照图1说明可在其中应用本公开的实施例的无线通信系统。该无线通信系统可以是5G系统,也可以是任何其他类型的无线通信系统,比如长期演进(Long Term Evolution,LTE)系统或者LTE-A(advanced)系统,或者未来的无线通信系统等。在下文中,以5G系统为例来描述本公开的实施例,但应当认识到,以下描述也可以适用于其他类型的无线通信系统,以下,以从终端到基站的上行发送为例进行说明。
如图1所示,应用NOMA或者MIMO(多输入多输出(Multiple-Input Multiple-Output))等非正交多址接入技术的无线通信系统100包含基站110、终端120、终端130、以及终端140,基站110中包含多用户检测模块111。 终端120、终端130、以及终端140中包含对多用户签名模块121、131以及141。假设包含终端120~140的多个用户终端向基站110发送多路信号,则每一路信号的比特序列在各个终端中分别被送入多用户签名模块121、131以及141。输入多用户签名模块121、131以及141的比特序列可以是要发送的原始比特序列,也可以是经过编码、扩频、交织、加扰等操作之后的比特序列。换言之,编码、交织、扩频、加扰等操作也可以在多用户签名模块121、131以及141中进行。输入的比特序列在多用户签名模块121、131以及141中被进行映射,输出复数符号序列。经过映射的复数符号序列被非正交地映射到物理资源块,并发送给基站110。
在基站110中,接收到叠加的多路信号并发送至多用户检测模块111。为了从接收到的多路信号中正确地解调出来自各个终端的信号,在多用户检测模块111中,需要删除非正交发送导致的干扰,从多路信号中还原出用于各用户的有效信号。可见,在非正交多址接入技术中,由于需要在接收端进行干扰删除,导致接收机的复杂度提高,并且需要针对不同的发送方案分别配置接收机的硬件,其灵活性也受到限制。
在现有技术中,已有人提出将神经网络技术与非正交多址接入技术进行结合,但由于来自多个用户的信号之间存在非正交的复杂的关系,所以难以对神经网络进行训练和优化,例如,提出了在发送端采用全连接的深度神经网络(FC-DNN)将比特序列映射为复数符号序列的方法,由于采用这种方法得到的复数符号序列在复数平面上出现的位置是没有规律的,导致训练过程涉及到大量的参数,难以进行优化。此外,也并未提出基于神经网络而降低接收端的复杂度、提高灵活性的技术方案。
为了解决上述问题,本公开提出了一种终端以及基站。下面,参照图2来说明根据本公开的一个实施例的终端。图2是根据本公开的一个实施例的终端的示意图。
如图2所示,终端200包括处理单元210。在处理单元210中,基于非正交多址接入技术,对由要发送给基站的比特数据组成的比特序列进行多用户签名(multiple access signature)处理以及资源映射处理。根据本实施例,在处理单元210中,采用神经网络来实现多用户签名处理,即,通过神经网络对要发送的比特序列进行映射处理,输出复数符号序列。
根据本发明的一个示例,在处理单元210中输入神经网络的比特序列可以是经过了编码、扩频、交织、加扰等处理中的至少一个的比特序列,也可以是未经过处理的原始比特序列。换言之,在神经网络中进行的处理,除了将比特序列映射为复数符号序列之外,还可以包括编码、扩频、交织、加扰等中的一个或多个。
例如,终端的神经网络可将输入到神经网络的比特序列映射为复数符号序列。并且根据本公开的实施例,通过配置神经网络的结构和参数,使得处理单元210将比特序列在复平面的预定范围内映射为复数符号序列。该预定范围在复数平面上可以表示为一个规定的形状。可选择地,该规定的形状可以是任意的形状,只要是复平面上的一个子集即可。此外,也可以结合通信领域的知识,将该形状设定为最有利于传输通信的形状。由于限定了比特序列在复数平面上的映射范围,从而与采用FC-DNN等的映射方式相比,减少了神经网络的参数的数量,降低了对神经网络进行优化训练的复杂度。
根据本发明的一个示例,在处理单元210中,通过配置神经网络的参数,使得映射得到的复数符号序列在复数平面上被限定在一个平行四边形中。其具体实现的方式如下。
假设在非正交多址接入的上行发送中,终端200是第n个向基站发送比特序列的终端,在处理单元210中,将要发送的比特序列映射为复数符号序列,则将进行该映射的神经网络的参数集配置为W n。由于要将复数符号序列在复数平面上限定于平行四边形中,因此参数集W n需要包含平行四边形的长边边长、短边边长、两个夹角的度数等参数。例如,可以将参数集W n表示如下:
W_n = {L_n, S_n, θ_{L,n}, θ_{S,n}}......式(1)
其中,L_n表示平行四边形长边的边长,S_n表示短边的边长,θ_{L,n}和θ_{S,n}分别表示平行四边形的两个夹角。
另外,假设采用函数R来表示神经网络的映射规则,则R可以视为是神经网络的结构,约定R的形式以使经过神经网络映射得到的复数符号序列在复数平面上被限定于平行四边形中。例如,假设非正交多址接入中可映射的物理资源元素(Resource element,CE)的最大数目为4,终端200所发送的第n路信号使用2个物理资源元素,则当采用上述式(1)所表示的参数集W n时,可以将R表示如下:
Figure PCTCN2019094432-appb-000001
通过式(2)中的R,可以将参数集W n映射成复数符号序列的码本。在此基础上,输入神经网络的所要发送的比特序列可以根据其输入的形式(例如,可以是满足独热码(one-hot)的形式等)而从上述生成的码本中选择对应的码字,从而就确定了该比特序列对应的复数符号序列的映射。例如,在采用式(1)和式子(2)的W n和R(W n)时,映射得到的关于第n路信号的码本可以表示为序列:
Figure PCTCN2019094432-appb-000002
当所要发送的比特序列满足独热码的形式,且设第n路信号满足[0,0,1,0]时,则从上述序列中选择
Figure PCTCN2019094432-appb-000003
作为码字,以确定第n路信号对应的复数符号序列的映射。
由于网络结构R被约定为对应于平行四边形的映射规则,因此所确定的复数符号序列在复数平面上的位置一定会落在满足参数集W n的参数的平行四边形上。
根据以上示例,当将复数符号序列在复数平面的形状限定为平行四边形之外的其他形状时,参数集W n就是用于表征该形状的参数,R就是与该形状对应的映射规则。
通过处理单元210的上述处理,映射得到的复数符号序列被限定在整个复平面的一个子集中,从而使得在将神经网络应用于多用户签名处理时系统的复杂度降低。并且,由于将神经网络的参数集设为用于表征某个规定的形状的参数,减少了神经网络的参数的数量。例如,在神经网络的训练中仅需主要针对参数集W n进行优化训练,降低了训练的复杂度。
在处理单元210中,将经过上述处理得到的复数符号序列映射到物理资源块上。根据本发明的一个示例,可以采用神经网络技术来进行资源映射。将复数符号序列输入到用于进行资源映射的神经网络中,通过该神经网络的处理实现物理资源映射。此时,由于采用了神经网络,资源的映射可以进行调整和学习。在NOMA或MIMO等之中,终端200以非正交多址接入的方式发送由处理单元210进行了映射处理并经过了资源映射的比特序列,在资源映射中,对一个物理资源块分配多个终端的数据,基站所接收到的信号是来自多个终端的叠加了的多路信号。
根据本发明的一个示例,对于处理单元210所采用的神经网络的结构和参数(例如,上述的W n和R),可以由基站根据所要采用的非正交多址接入的方案来指定。在该情况下,终端200还包括接收单元220,其接收由基站发送的网络配置信息,该网络配置信息用于指定神经网络的网络配置,例如,网络配置信息中可以直接指定终端采用的网络结构和网络参数。终端200基于接收到的网络配置信息,配置神经网络。当以在线的方式使用时,终端也可以基于接收到的网络配置信息对神经网络进行在线的训练优化。在一个示例中,网络配置信息也可以是预先规定好的预编码信息、发射方案信息等,例如,可以是非正交通信中所采用的NOMA码本、或者MIMO码本等。网络配置信息可以通过高层信令或者物理层信令在基站与终端200之间进行交互。
根据本发明的另一个示例,终端200也可以通过盲检测的方法来确定基站所要采用的通信方案,从而确定用于用户签名的神经网络的网络参数和网络结构。在该情况下,可以省略掉与基站进行信令交互的过程。
以上结合图2说明了将神经网络应用于以非正交多址接入的方式进行发送的终端,基于同样的出发点,也可以将神经网络应用于非正交多址接入技术中的接收端。下面,参照图3来说明根据本公开的一个实施例的基站。图3是根据本公开的一个实施例的基站的示意图。
如图3所示,基站300包括接收单元310以及处理单元320。接收单元310接收由多个终端的信号叠加而成的多路信号。处理单元320需要对接收到的多路信号进行处理,以还原出各个终端的信号。即,处理单元320对接收到的多路信号进行多用户检测处理。
根据本实施例,采用多任务神经网络来进行多用户检测处理。在处理单元320中,通过多任务神经网络中的多个任务来还原接收单元310所接收的多路信号。
根据本发明的一个示例,应用于多用户检测处理的多任务神经网络包含一个公共部分和多个特定部分,多任务神经网络中的每个任务共用公共部分,所述多任务神经网络中的每个任务分别对应于一个特定部分。在处理单元320中,首先将接收到的多路信号输入多任务神经网络的公共部分中进行预处理,以确定每路信号的公共特征(即,具有共性的特征),提取输入信号的有效隐 含特征。经过公共部分处理的多路信号被送入多任务神经网络的各个特定部分。在各个特定部分中分别进行各个任务的处理,以分别确定每路信号的特定特征,这里,送入各个特定部分的多路信号均为相同的信号。可选择地,应用于多用户检测的多任务神经网络也可以不包含公共部分,提取输入信号的有效隐含特征的步骤也可以分别在各个特定部分中进行处理。
根据本实施例,处理单元320将接收到的多路信号输入多任务神经网络,在多任务神经网络的每一个任务中,均对接收到的多路信号进行处理,即,多任务神经网络中每一个任务的输入都是相同的。在多任务神经网络的各个任务中,使用配置了不同参数的网络来分别对多路信号的其中一路进行还原处理。首先确定该路信号的初步估计值,然后进行干扰删除,从该初步估计值中删除其他路信号造成的干扰,从而确定该路信号的干扰删除后的估计值。具体方式如下所述。
以下以基站300接收到的多路信号中的第i路信号M i对应的任务T i为例进行说明,在任务T i中,向多任务神经网络输入的多路信号,经过还原处理后得到第i路信号的初步估计值M i’,接下来,对该初步估计值M i’进行干扰删除处理。在干扰删除处理中,基于其他任务确定的其他路信号的初步估计值来删除干扰。具体而言,在任务T i中,还接收到来自其他任务的关于其他路信号的初步估计值,在任务T i中,将初步估计值M i’减去其他路信号的初步估计值,从而得到一个干扰删除后的估计值M i”。由此,干扰删除后的估计值M i”是删除了多路信号叠加带来的干扰的估计值,其相对于初步估计估计值M i’具有更高的精确度。同理,为了在其他任务中还原来自其他终端的信号,则在任务T i中,也将初步估计值M i’送入该其它任务中,以便于该其他任务进行干扰删除处理。
根据本发明的一个示例,在处理单元320中,对于多任务神经网络中的一个任务T i而言,在该任务的干扰删除处理中,可以从初步估计值M i’中线性地减去其它任务的初步估计值。例如,可以从初步估计值M i’中减去将其它任务的初步估计值乘以系数k之后进行相加的和。例如,可以以下式进行表示:
M_i” = M_i’ − Σ_{j≠i} k_j·M_j’
其中,N为多路信号的路数,即多任务神经网络所处理的任务数量,Mj’为其它任务的初步估计值,kj是与初步估计值Mj’对应的系数。可选择地,对于每一个系数kj,可以预先指定,也可以通过训练神经网络而得到。
根据本发明的另一个示例,也可以使用一个专门用于进行删除步骤的神经网络来进行上述相减的处理。在任务T i中,向该神经网络输入第i路信号的初步估计值M i’以及在其他任务中得到的其他路信号的初步估计值,通过该神经网络从初步估计值M i’中非线性地减去其他路信号的初步估计值,输出干扰删除后的估计值M i”,从而删除多路信号叠加带来的干扰。
根据本发明的一个示例,处理单元320进行多用户检测所采用的多任务神经网络为多层神经网络,可以将该多层的多任务神经网络划分为多个干扰删除阶段,干扰删除阶段的个数以及每个干扰删除阶段所包含的神经网络层数是任意的,例如每个干扰删除阶段可以包含一层或多层神经网络,每经过一个干扰删除阶段就进行一次上述的干扰删除处理,并将经过干扰删除处理得到的干扰删除后的估计值输入下一个干扰删除阶段。在下一个干扰删除阶段中,在多个任务中,基于上一个干扰删除阶段中得到的干扰删除后的估计值确定所述多路信号中每路信号在该干扰删除阶段的初步估计值,并在各个任务中,从本任务的该干扰删除阶段的初步估计值中删除基于其他任务的该干扰删除阶段的初步估计值而确定的干扰。由此,经过多个干扰删除阶段,能够更为彻底地进行干扰删除。
根据本发明的一个示例,在基站300中,由于处理单元320采用多任务神经网络来进行多用户检测,因此除了对接收到的多路信号进行还原以得到来自各个终端的有效数据或控制信号之外,还可以在其中的一个或多个任务中进行用户活跃性检测、PAPR(信号峰均比(Peak-to-average ratio))降低等。
根据本发明的一个示例,在处理单元320中,在对用于多用户检测的神经网络进行训练优化时,还进行以下处理以降低神经网络处理的损失(loss)。损失表征了由神经网络还原得到的信号的值与信号的真实值之间的差异,例如,可以是均方误差、交叉熵等。在进行多任务神经网络的优化训练时,设其目标函数包含各个任务的损失以及各个任务之间的平衡损失,其中,各个任务之间的平衡损失代表了各个任务的损失之间的差异度。通过对神经网络进行训练,以将其配置为不仅最小化各个任务的损失,还使各个任务的损失之间的差异最小化。采用如此训练的多任务神经网络进行多路信号的还原时, 能够使神经网络处理整体的损失降低,优化对接收到的多路信号的还原结果。
根据本公开,通过将多任务神经网络引入处理单元320的多用户检测中,降低了在多用户通信中接收端的复杂度,由于只需要根据所采用的发送方案来对多用户检测的神经网络的网络结构和/或参数进行微小的调整,就能够将基站用于该发送方案下的接收,从而对于多种不同的发送方案而言,接收端的硬件等是通用的,其灵活性得到了提高。此外,由于在多任务神经网络中引入了干扰删除,并且在神经网络的目标函数中引入了各个任务之间的平衡损失,能够降低接收过程中的误比特率。
以上,结合图2和图3分别说明了根据本发明的实施例的终端和基站,根据本发明的一个示例,在发送端采用图2所示的终端200,接收端采用图3所示的基站300的情况下,可以采用端到端的优化方式对终端200和基站300所采用的神经网络进行联合优化。
具体而言,在该情况下,基站300还包括发送单元330,首先由基站300确定在基站侧用于多用户检测的多任务神经网络的网络结构和网络参数等网络配置,发送单元330发送网络配置信息,该网络配置信息表示基站侧的网络配置,其可以是动态配置的,或者也可以是静态或准静态地配置的。终端200的接收单元220在接收到上述网络配置信息之后,基于该信息配置用于多用户检测的多任务神经网络,从而能够从端到端对终端200的神经网络和基站300的神经网络进行联合的优化训练。在一个示例中,由发送单元330发送的网络配置信息可以是预先规定好的预编码信息、发射方案信息等,例如,可以是采用的NOMA码本、或者MIMO码本等,可以通过高层信令或者物理层信令在终端200与基站300之间进行上述信息的交互。在一个示例中,基站300发送的网络配置信息可以包含表示基站侧采用的多任务神经网络的网络配置的信息和直接表示终端侧的神经网络的网络配置的信息中的至少一个。
根据本发明的一个示例,也可以是由终端200向基站300发送上述网络配置信息,基站根据终端发送的网络配置信息配置基站的神经网络。
根据本发明的一个示例,当采用端到端的方式进行联合优化时,也将神经网络的目标函数定义为包含各个任务的损失以及各个任务之间的平衡损失,并以使各个任务的损失之间的差异最小化为目的进行神经网络的训练, 以降低误比特率。
以上,以终端作为发送端,基站作为接收端的上行发送为例进行了说明,但并不限定于此,对于从基站向终端的下行发送,或者设备与设备之间的D2D传输,以下,以基站向终端进行发送的下行传输为例进行说明。
参照图4来说明根据本公开的另一个实施例的基站。图4是根据本公开的另一个实施例的基站的示意图。
如图4所示,基站400包括处理单元410。在处理单元410中,基于非正交多址接入技术,对由要发送给多个用户的比特数据组成的比特序列进行多用户签名(multiple access signature)处理以及资源映射处理。根据本实施例,在处理单元410中,采用神经网络来实现多用户签名处理,即,通过神经网络对要发送的比特序列进行映射处理,输出复数符号序列。
根据本发明的一个示例,在处理单元410中输入神经网络的比特序列可以是经过了编码、扩频、交织、加扰等处理中的至少一个的比特序列,也可以是未经过处理的原始比特序列。换言之,在神经网络中进行的处理,除了将比特序列映射为复数符号序列之外,还可以包括编码、扩频、交织、加扰等中的一个或多个。
例如,基站的神经网络可将输入到神经网络的比特序列映射为复数符号序列。并且根据本公开的实施例,通过配置神经网络的结构和参数,使得处理单元410将比特序列在复平面的预定范围内映射为复数符号序列。该预定范围在复数平面上可以表示为一个规定的形状。可选择地,该规定的形状可以是任意的形状,只要是复平面上的一个子集即可。此外,也可以结合通信领域的知识,将该形状设定为最有利于传输通信的形状。由于限定了比特序列在复数平面上的映射范围,从而与采用FC-DNN等的映射方式相比,减少了神经网络的参数的数量,降低了对神经网络进行优化训练的复杂度。
根据本发明的一个示例,在处理单元410中,通过配置神经网络的参数,使得映射得到的复数符号序列在复数平面上被限定在一个平行四边形中。其具体实现的方式如下。
假设需要将发送给n个终端的比特数据组成的比特序列映射为复数符号序列,则将进行该映射的神经网络的参数集配置为W n。由于要将复数符号序列在复数平面上限定于平行四边形中,因此参数集W n需要包含平行四边形的 长边边长、短边边长、两个夹角的度数等参数。例如,同样可以将参数集W n表示为上式(1)的形式。
另外,假设采用函数R来表示神经网络的映射规则,则R可以视为是神经网络的结构,约定R的形式以使经过神经网络映射得到的复数符号序列在复数平面上被限定于平行四边形中。例如,假设非正交多址接入可映射的物理资源元素的最大数目为4,发送给n个终端的每路信号使用2个物理资源元素,则当采用上述式(1)所表示的参数集W n时,同样可以将R表示为上式(2)。
通过式(2)中的R,可以将参数集W n映射成复数符号序列的码本。在此基础上,输入神经网络的所要发送的比特序列可以根据其输入的形式(例如,可以是满足独热码(one-hot)的形式等)而从上述生成的码本中选择对应的码字,从而就确定了该比特序列对应的复数符号序列的映射。例如,在采用式(1)和式子(2)的W n和R(W n)时,映射得到的关于第n路信号的码本可以表示为序列:
Figure PCTCN2019094432-appb-000005
当所要发送的比特序列满足独热码的形式,且设第n路信号满足[0,0,1,0]时,则从上述序列中选择
Figure PCTCN2019094432-appb-000006
作为码字,以确定第n路信号对应的复数符号序列的映射。
由于网络结构R被约定为对应于平行四边形的映射规则,因此所确定的复数符号序列在复数平面上的位置一定会落在满足参数集W n的参数的平行四边形上。
根据以上示例,当将复数符号序列在复数平面的形状限定为平行四边形之外的其他形状时,参数集W n就是用于表征该形状的参数,R就是与该形状对应的映射规则。
通过处理单元410的上述处理,映射得到的复数符号序列被限定在整个复平面的一个子集中,从而使得在将神经网络应用于多用户签名处理时系统的复杂度降低。并且,由于将神经网络的参数集设为用于表征某个规定的形状的参数,减少了神经网络的参数的数量。例如,在神经网络的训练中仅需主要针对参数集W n进行优化训练,降低了训练的复杂度。
在处理单元410中,将经过上述处理得到的复数符号序列映射到物理资源块上。根据本发明的一个示例,可以采用神经网络技术来进行资源映射。将复数符号序列输入到用于进行资源映射的神经网络中,通过该神经网络的处理实现物理资源映射。此时,由于采用了神经网络,资源的映射可以进行 调整和学习。在NOMA或MIMO等之中,基站400以非正交多址接入的方式发送由处理单元410进行了映射处理并经过了资源映射的比特序列,在资源映射中,对一个物理资源块分配多个用户的数据,发送给终端的信号是包含了发送给多个用户的数据的多路信号。
下面,参照图5来说明根据本公开的另一个实施例的终端。图5是根据本公开的另一个实施例的终端的示意图。
如图5所示,终端500包括接收单元510以及处理单元520。接收单元510接收来自基站的多路信号,该多路信号中包含对于多个用户的有效信号。处理单元520对接收到的多路信号进行处理,以还原出对终端500有效的一路或多路信号。即,处理单元520对接收到的多路信号进行多用户检测处理。
根据本实施例,采用多任务神经网络来进行多用户检测处理。在处理单元520中,通过多任务神经网络中的多个任务来还原接收单元510所接收的多路信号。
根据本发明的一个示例,应用于多用户检测处理的多任务神经网络包含一个公共部分和多个特定部分,多任务神经网络中的每个任务共用公共部分,所述多任务神经网络中的每个任务分别对应于一个特定部分。在处理单元520中,首先将接收到的多路信号输入多任务神经网络的公共部分中进行预处理,以确定每路信号的公共特征(即,具有共性的特征),提取输入信号的有效隐含特征。经过公共部分处理的多路信号被送入多任务神经网络的各个特定部分。在各个特定部分中分别进行各个任务的处理,以分别确定每路信号的特定特征,这里,送入各个特定部分的多路信号均为相同的信号。可选择地,应用于多用户检测的多任务神经网络也可以不包含公共部分,提取输入信号的有效隐含特征的步骤也可以分别在各个特定部分中进行处理。
根据本实施例,处理单元520将接收到的多路信号输入多任务神经网络,在多任务神经网络的每一个任务中,均对接收到的多路信号进行处理,即,多任务神经网络中每一个任务的输入都是相同的。在多任务神经网络的各个任务中,使用配置了不同参数的网络来分别对多路信号的其中一路进行还原处理。首先确定该路信号的初步估计值,然后进行干扰删除,从该初步估计值中删除其他路信号造成的干扰,从而确定该路信号的干扰删除后的估计值。具体方式如下所述。
假设多路信号中的第i路信号是对于终端500的有效信号,以下以与第i路信号M i对应的任务T i为例进行说明,在任务T i中,向多任务神经网络输入的多路信号,经过还原处理后得到第i路信号的初步估计值M i’,接下来,对该初步估计值M i’进行干扰删除处理。在干扰删除处理中,基于其他任务确定的其他路信号的初步估计值来删除干扰。具体而言,在任务T i中,还接收到来自其他任务的关于其他路信号的初步估计值,在任务T i中,将初步估计值M i’减去其他路信号的初步估计值,从而得到一个干扰删除后的估计值M i”。由此,干扰删除后的估计值M i”是删除了多路信号叠加带来的干扰的估计值,其相对于初步估计估计值M i’具有更高的精确度。同理,如果在其他任务中也需要删除干扰,则在任务T i中,也将初步估计值M i’送入该其它任务中,以便于该其他任务进行干扰删除处理。
根据本发明的一个示例,在处理单元520中,对于多任务神经网络中的一个任务T i而言,在该任务的干扰删除处理中,可以从初步估计值M i’中线性地减去其它任务的初步估计值。例如,可以从初步估计值M i’中减去将其它任务的初步估计值乘以系数k之后进行相加的和。可选择地,对于每一个系数k,可以预先指定,也可以通过训练神经网络而得到。
根据本发明的另一个示例,也可以使用一个专门用于进行删除步骤的神经网络来进行上述相减的处理。在任务T i中,向该神经网络输入第i路信号的初步估计值M i’以及在其他任务中得到的其他路信号的初步估计值,通过该神经网络从初步估计值M i’中非线性地减去其他路信号的初步估计值,输出干扰删除后的估计值M i”,从而删除多路信号叠加带来的干扰。
根据本发明的一个示例,处理单元520进行多用户检测所采用的多任务神经网络为多层神经网络,可以将该多层的多任务神经网络划分为多个干扰删除阶段,干扰删除阶段的个数以及每个干扰删除阶段所包含的神经网络层数是任意的,例如每个干扰删除阶段可以包含一层或多层神经网络,每经过一个干扰删除阶段就进行一次上述的干扰删除处理,并将经过干扰删除处理得到的干扰删除后的估计值输入下一个干扰删除阶段。在下一个干扰删除阶段中,在多个任务中,基于上一个干扰删除阶段中得到的干扰删除后的估计值确定所述多路信号中每路信号在该干扰删除阶段的初步估计值,并在各个任务中,从本任务的该干扰删除阶段的初步估计值中删除基于其他任务的该干扰删除阶段的初步估计值而确定的干扰。由此,经过多个干扰删除阶段, 能够更为彻底地进行干扰删除。
根据本发明的一个示例,在终端500中,由于处理单元520采用多任务神经网络来进行多用户检测,因此除了对接收到的多路信号进行还原以得到发送给本终端的有效数据或控制信号之外,还可以在其中的一个或多个任务中进行用户活跃性检测、PAPR(信号峰均比(Peak-to-average ratio)降低等。
根据本发明的一个示例,在处理单元520中,在对用于多用户检测的神经网络进行训练优化时,还进行以下处理以降低神经网络处理的损失(loss)。损失表征了由神经网络还原得到的信号的值与信号的真实值之间的差异,例如,可以是均方误差、交叉熵等。在进行多任务神经网络的优化训练时,设其目标函数包含各个任务的损失以及各个任务之间的平衡损失,其中,各个任务之间的平衡损失代表了各个任务的损失之间的差异度。通过对神经网络进行训练,以将其配置为不仅最小化各个任务的损失,还使各个任务的损失之间的差异最小化。采用如此训练的多任务神经网络进行多路信号的还原时,能够使神经网络处理整体的损失降低,优化对接收到的多路信号的还原结果。
根据本发明的一个示例,对于处理单元520所采用的多任务神经网络的结构和参数(例如,当神经网络为多层时,其各层之间的权重矩阵、偏置向量等),可以由基站根据其发送方案来指定。在该情况下,终端500的接收单元320接收由基站发送的网络配置信息,该网络配置信息用于指定多任务神经网络的网络配置,例如,网络配置信息包含多任务神经网络的网络结构和网络参数信息。终端500基于接收到的网络配置信息,配置多任务神经网络。当以在线的方式使用时,终端500也可以基于接收到的网络配置信息对多任务神经网络进行在线的训练优化。在一个示例中,网络配置信息也可以是预先规定好的预编码信息、发射方案信息等,例如,可以是基站所采用的NOMA码本、或者MIMO码本等。网络配置信息可以通过高层信令或者物理层信令在基站与终端500之间进行交互。
根据本发明的另一个示例,终端500也可以通过盲检测的方法来确定基站的发送方案,从而确定用于多用户检测的多任务神经网络的网络参数和网络结构。在该情况下,可以省略掉与基站进行信令交互的过程。
根据本公开,通过将多任务神经网络引入处理单元520的多用户检测中,降低了在多用户通信中接收端的复杂度,由于只需要根据基站侧的发送方案来对多用户检测的神经网络的网络结构和/或参数进行微小的调整,就能够将 终端用于该发送方案下的接收,从而对于多种不同的发送方案而言,接收端的硬件等是通用的,其灵活性得到了提高。此外,由于在多任务神经网络中引入了干扰删除,并且在神经网络的目标函数中引入了各个任务之间的平衡损失,能够降低接收过程中的误比特率。
以上,结合图4和图5分别说明了根据本发明的实施例的基站和终端,根据本发明的一个示例,在发送端采用图4所示的基站400,接收端采用图5所示的终端500的情况下,可以采用端到端的优化方式对基站400和终端500所采用的神经网络进行联合优化。
具体而言,在该情况下,基站400还包括发送单元420,首先由基站400确定在基站侧用于多用户签名的神经网络的网络结构和网络参数等网络配置(例如,上述的R和W n),发送单元420发送网络配置信息,该网络配置信息表示基站侧的网络配置,其可以是动态配置的,或者也可以是静态或准静态地配置的。终端500的接收单元510在接收到上述网络配置信息之后,基于该信息配置用于多用户检测的多任务神经网络(例如,设置几个干扰删除阶段,采用线性还是非线性的干扰删除方式等),从而能够从端到端对基站400的神经网络和终端500的神经网络进行联合的优化训练。在一个示例中,由发送单元420发送的网络配置信息可以是预先规定好的预编码信息、发射方案信息等,例如,可以是基站所采用的NOMA码本、或者MIMO码本等,可以通过高层信令或者物理层信令在基站400与终端500之间进行上述信息的交互。在一个示例中,基站400发送的网络配置信息可以包含表示基站400采用的神经网络的网络配置的信息和直接表示终端侧的多任务神经网络的网络配置的信息中的至少一个。
根据本发明的一个示例,当采用端到端的方式进行联合优化时,也将神经网络的目标函数定义为包含各个任务的损失以及各个任务之间的平衡损失,并以使各个任务的损失之间的差异最小化为目的进行神经网络的训练,以降低误比特率。
无论是在上行发送还是下行发送中,在上述说明中涉及的对神经网络的优化训练,可以采用任意的训练方法,例如梯度下降的训练方法等。
接下来,参照图6说明由终端或基站执行的发送方法。图6是根据本公 开的一个实施例的由作为发送端的终端或基站执行的方法的流程图。
如图6所示,方法600包括步骤S610。根据本实施例,在步骤S610中,采用神经网络来对由要发送给多个用户的比特数据组成的比特序列进行多用户签名(multiple access signature)处理,即,通过神经网络对要发送的比特序列进行映射处理,输出复数符号序列。
根据本发明的一个示例,在步骤S610中输入神经网络的比特序列可以是经过了编码、扩频、交织、加扰等处理中的至少一个的比特序列,也可以是未经过处理的原始比特序列。换言之,在神经网络中进行的处理,除了将比特序列映射为复数符号序列之外,还可以包括编码、扩频、交织、加扰等中的一个或多个。
例如,可使用多用户签名映射模型来将输入到神经网络的比特序列映射为复数符号序列。并且根据本公开的实施例,在步骤S610中,通过配置神经网络的结构和参数,将比特序列在复平面的预定范围内映射为复数符号序列。该预定范围在复数平面上可以表示为一个规定的形状。可选择地,该规定的形状可以是任意的形状,只要是复平面上的一个子集即可。此外,也可以结合通信领域的知识,将该形状设定为最有利于传输通信的形状。由于限定了比特序列在复数平面上的映射范围,从而与采用FC-DNN等的映射方式相比,减少了神经网络的参数的数量,降低了对神经网络进行优化训练的复杂度。
根据本发明的一个示例,在步骤S610中,通过配置神经网络的参数,使得映射得到的复数符号序列在复数平面上被限定在一个平行四边形中。
具体地,假设要发送的n路信号的比特数据组成的比特序列映射为复数符号序列,则将进行该映射的神经网络的参数集配置为W n。由于要将复数符号序列在复数平面上限定于平行四边形中,因此参数集W n需要包含平行四边形的长边边长、短边边长、两个夹角的度数等参数。
另外,假设采用函数R来表示神经网络的映射规则,则R可以视为是神经网络的结构,约定R的形式以使经过神经网络映射得到的复数符号序列在复数平面上被限定于平行四边形中。具体的映射方式在上文中已经进行了说明,在此不再赘述。
通过步骤S610的上述处理,映射得到的复数符号序列被限定在整个复平面的一个子集中,从而使得在将神经网络应用于多用户签名处理时系统的复杂度降低。并且,由于将神经网络的参数集设为用于表征某个规定的形状的 参数,减少了神经网络的参数的数量。例如,在神经网络的训练中仅需主要针对参数集W n进行优化训练,降低了训练的复杂度。
方法600还可以包括步骤S620,在步骤S620中,将经过上述处理得到的复数符号序列映射到物理资源块上。根据本发明的一个示例,步骤S620可以采用神经网络技术来进行资源映射。将复数符号序列输入到用于进行资源映射的神经网络中,通过该神经网络的处理实现物理资源映射。此时,由于采用了神经网络,资源的映射可以进行调整和学习。采用方法600的终端或基站以非正交多址接入的方式发送在步骤S610中进行了映射处理并在步骤S620中经过了资源映射的比特序列,在资源映射中,对一个物理资源块分配多个用户的数据。
图7是根据本公开的一个实施例的由作为接收端的基站或终端执行的方法的流程图。
如图7所示,方法700包括步骤S710、步骤S720以及步骤S730。步骤S710接收来自发送端的多路信号,该多路信号中叠加了多个有效信号。在步骤S720和步骤S730中对接收到的多路信号进行处理,以还原出各路信号的有效信息。即,步骤S720和步骤S730对接收到的多路信号进行多用户检测处理。
根据本实施例,采用多任务神经网络来进行多用户检测处理。在步骤S720和步骤S730中,通过多任务神经网络中的多个任务来还原在步骤S710中接收到的多路信号。
根据本发明的一个示例,应用于多用户检测处理的多任务神经网络包含一个公共部分和多个特定部分,多任务神经网络中的每个任务共用公共部分,所述多任务神经网络中的每个任务分别对应于一个特定部分。多任务神经网络的公共部分用于进行预处理,以确定每路信号的公共特征(即,具有共性的特征),提取输入信号的有效隐含特征。在各个特定部分中分别进行各个任务的处理,以分别确定每路信号的特定特征,这里,各个特定部分的输入信号均为相同的信号。可选择地,应用于多用户检测的多任务神经网络也可以不包含公共部分,提取输入信号的有效隐含特征的步骤也可以分别在各个特定部分中进行处理。
根据本实施例,在步骤S720中,将接收到的多路信号输入多任务神经网 络,在多任务神经网络的每一个任务中,均对接收到的多路信号进行处理,即,多任务神经网络中每一个任务的输入都是相同的。在多任务神经网络的各个任务中,使用配置了不同参数的网络来分别对多路信号的其中一路进行还原处理。在步骤S720中,首先确定该路信号的初步估计值,然后在步骤S730中进行干扰删除,从该初步估计值中删除其他路信号造成的干扰,从而确定该路信号的干扰删除后的估计值。具体方式如下所述。
以下以与多路信号中的第i路信号M i对应的任务T i为例进行说明,在步骤S720中,在任务T i中,向多任务神经网络输入的多路信号,经过还原处理后得到第i路信号的初步估计值M i’,接下来,在步骤S730中,对该初步估计值M i’进行干扰删除处理,基于其他任务确定的其他路信号的初步估计值来删除干扰。具体而言,在步骤S730中,在任务T i中,还接收到来自其他任务的关于其他路信号的初步估计值,在任务T i中,将初步估计值M i’减去其他路信号的初步估计值,从而得到一个干扰删除后的估计值M i”。由此,干扰删除后的估计值M i”是删除了多路信号叠加带来的干扰的估计值,其相对于初步估计估计值M i’具有更高的精确度。同理,为了还原其他任务的有效信号,则在任务T i中,也将初步估计值M i’送入该其它任务中,以便于该其他任务进行干扰删除处理。
根据本发明的一个示例,在步骤S730中,对于多任务神经网络中的一个任务T i而言,在该任务的干扰删除处理中,可以从初步估计值M i’中线性地减去其它任务的初步估计值。例如,可以从初步估计值M i’中减去将其它任务的初步估计值乘以系数k之后进行相加的和。对于每一个任务的系数k,可以预先指定,也可以通过训练神经网络而得到。
根据本发明的另一个示例,也可以使用一个专门用于进行删除步骤的神经网络来进行上述相减的处理。在步骤S730中,在任务T i中,向该神经网络输入第i路信号的初步估计值M i’以及在其他任务中得到的其他路信号的初步估计值,通过该神经网络从初步估计值M i’中非线性地减去其他路信号的初步估计值,输出干扰删除后的估计值M i”,从而删除多路信号叠加带来的干扰。
根据本发明的一个示例,进行多用户检测所采用的多任务神经网络为多层神经网络,可以将该多层的多任务神经网络划分为多个干扰删除阶段,干扰删除阶段的个数以及每个干扰删除阶段所包含的神经网络层数是任意的, 例如每个干扰删除阶段可以包含一层或多层神经网络,每经过一个干扰删除阶段就进行一次上述的干扰删除处理,并将经过干扰删除处理得到的干扰删除后的估计值输入下一个干扰删除阶段。在下一个干扰删除阶段中,在多个任务中,应用步骤S720,基于上一个干扰删除阶段中得到的干扰删除后的估计值确定所述多路信号中每路信号在该干扰删除阶段的初步估计值,并在各个任务中,应用步骤S730,从本任务的该干扰删除阶段的初步估计值中删除基于其他任务的该干扰删除阶段的初步估计值而确定的干扰。由此,经过多个干扰删除阶段,能够更为彻底地进行干扰删除。
根据本发明的一个示例,在方法700中,采用多任务神经网络来进行多用户检测,因此除了对接收到的多路信号进行还原以得到发送给本终端的有效数据或控制信号之外,还可以在其中的一个或多个任务中进行用户活跃性检测、PAPR(信号峰均比(Peak-to-average ratio)降低等。
根据本发明的一个示例,在对用于多用户检测的神经网络进行训练优化时,还进行以下处理以降低神经网络处理的损失(loss)。损失表征了由神经网络还原得到的信号的值与信号的真实值之间的差异,例如,可以是均方误差、交叉熵等。在进行多任务神经网络的优化训练时,设其目标函数包含各个任务的损失以及各个任务之间的平衡损失,其中,各个任务之间的平衡损失代表了各个任务的损失之间的差异度。通过对神经网络进行训练,以将其配置为不仅最小化各个任务的损失,还使各个任务的损失之间的差异最小化。采用如此训练的多任务神经网络进行多路信号的还原时,能够使神经网络处理整体的损失降低,优化对接收到的多路信号的还原结果。
根据本发明的一个示例,对于上述方法600及方法700而言,无论终端侧采用的是发送方法还是接受方法,应用于终端的神经网络的结构和参数都可以由基站根据发送方案来指定。在该情况下,应用方法600及方法700的终端还接收由基站发送的网络配置信息,该网络配置信息用于指定终端的神经网络的网络配置,例如,网络配置信息包含网络结构和网络参数信息。基于接收到的网络配置信息,终端配置其神经网络。当以在线的方式使用时,终端也可以基于接收到的网络配置信息对其神经网络进行在线的训练优化。在一个示例中,网络配置信息也可以是预先规定好的预编码信息、发射方案信息等,例如,可以是所采用的NOMA码本、或者MIMO码本等。网络配 置信息可以通过高层信令或者物理层信令在基站与终端之间进行交互。
根据本发明的另一个示例,也可由终端向基站发送上述网络配置信息,以指定基站的神经网络配置或帮助基站确定要使用的神经网络配置。
根据本发明的另一个示例,应用方法600及方法700的终端也可以通过盲检测的方法来确定基站的发送方案,从而确定用于多用户检测的多任务神经网络的网络参数和网络结构。在该情况下,可以省略掉与基站进行信令交互的过程。
根据本发明的一个示例,在发送端和接收端分别为采用上述方法600和方法700时,可以采用端到端的优化方式对发送端和接收端采用的神经网络进行联合优化。
具体而言,在该情况下,采用上述方法600和方法700的基站确定其所采用的神经网络的网络结构和网络参数等网络配置,并向采用上述方法600和方法700的终端发送网络配置信息,该网络配置信息表示基站侧的网络配置,其可以是动态配置的,或者也可以是静态或准静态地配置的。终端在接收到上述网络配置信息之后,基于该信息配置终端的多任务神经网络,从而能够从端到端对发送端和接收端所采用的神经网络进行联合的优化训练。在一个示例中,基站发送的网络配置信息可以是预先规定好的预编码信息、发射方案信息等,例如,可以是基站所采用的NOMA码本、或者MIMO码本等,可以通过高层信令或者物理层信令在发送端与接收端之间进行上述信息的交互。在一个示例中,发送的网络配置信息可以包含表示基站采用的神经网络的网络配置的信息和直接表示终端侧的多任务神经网络的网络配置的信息中的至少一个。
根据本发明的一个示例,当采用端到端的方式进行联合优化时,也将神经网络的目标函数定义为包含各个任务的损失以及各个任务之间的平衡损失,并以使各个任务的损失之间的差异最小化为目的进行神经网络的训练,以降低误比特率。
此外,在上述说明中涉及的对神经网络的优化训练,可以采用任意的训练方法,例如梯度下降的训练方法等。
<硬件结构>
The block diagrams used in the description of the above embodiments show blocks in units of functions. These functional blocks (structural units) are implemented by any combination of hardware and/or software. The means for implementing each functional block is not particularly limited. That is, each functional block may be implemented by one device that is physically and/or logically combined, or may be implemented by two or more physically and/or logically separated devices that are connected directly and/or indirectly (for example, by wire and/or wirelessly).
For example, a device according to an embodiment of the present disclosure (such as a first communication device, a second communication device, or a flying user terminal) may function as a computer that executes the processing of the radio communication method of the present disclosure. FIG. 6 is a schematic diagram of the hardware configuration of a device 800 (base station or user terminal) according to an embodiment of the present disclosure. The above device 800 (base station or user terminal) may be configured as a computer apparatus that physically includes a processor 810, a memory 820, a storage 830, a communication device 840, an input device 850, an output device 860, a bus 870, and the like.
In the following description, the word "device" may be replaced with circuit, apparatus, unit, or the like. The hardware configuration of the user terminal and the base station may include one or more of the devices shown in the figure, or may not include some of the devices.
For example, only one processor 810 is illustrated, but there may be a plurality of processors. Processing may be executed by one processor, or may be executed by more than one processor simultaneously, sequentially, or by other methods. The processor 810 may be implemented by one or more chips.
Each function of the device 800 is implemented, for example, as follows: prescribed software (a program) is read into hardware such as the processor 810 and the memory 820, whereby the processor 810 performs computation, controls the communication performed by the communication device 840, and controls the reading and/or writing of data in the memory 820 and the storage 830.
The processor 810 controls the computer as a whole, for example by running an operating system. The processor 810 may be constituted by a central processing unit (CPU) including an interface with peripheral devices, a control device, an arithmetic device, registers, and the like. For example, the above-described processing unit and the like may be implemented by the processor 810.
Furthermore, the processor 810 reads programs (program code), software modules, data, and the like from the storage 830 and/or the communication device 840 into the memory 820, and executes various kinds of processing in accordance with them. As the program, a program that causes a computer to execute at least part of the operations described in the above embodiments may be used. For example, the processing unit of the above terminal or base station may be implemented by a control program that is stored in the memory 820 and runs on the processor 810, and the other functional blocks may be implemented in the same way.
The memory 820 is a computer-readable recording medium and may be constituted by, for example, at least one of a read only memory (ROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), a random access memory (RAM), and other appropriate storage media. The memory 820 may also be called a register, a cache, a main memory (main storage device), or the like. The memory 820 can store executable programs (program code), software modules, and the like for implementing the method according to an embodiment of the present disclosure.
The storage 830 is a computer-readable recording medium and may be constituted by, for example, at least one of a flexible disk, a floppy (registered trademark) disk, a magneto-optical disk (for example, a compact disc (CD-ROM (Compact Disc ROM)), a digital versatile disc, or a Blu-ray (registered trademark) disc), a removable disk, a hard disk drive, a smart card, a flash memory device (for example, a card, a stick, or a key driver), a magnetic stripe, a database, a server, and other appropriate storage media. The storage 830 may also be called an auxiliary storage device.
The communication device 840 is hardware (a transmitting and receiving device) for performing communication between computers via a wired and/or wireless network, and is also called, for example, a network device, a network controller, a network card, a communication module, or the like. The communication device 840 may include a high-frequency switch, a duplexer, a filter, a frequency synthesizer, and the like in order to implement, for example, frequency division duplex (FDD) and/or time division duplex (TDD). For example, the above-described transmitting unit, receiving unit, and the like may be implemented by the communication device 840.
The input device 850 is an input device that accepts input from the outside (for example, a keyboard, a mouse, a microphone, a switch, a button, a sensor, or the like). The output device 860 is an output device that performs output to the outside (for example, a display, a speaker, a light emitting diode (LED) lamp, or the like). The input device 850 and the output device 860 may also be integrated into one structure (for example, a touch panel).
The devices such as the processor 810 and the memory 820 are connected by a bus 870 for communicating information. The bus 870 may be constituted by a single bus, or may be constituted by buses that differ between the devices.
The base station and the user terminal may include hardware such as a microprocessor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a programmable logic device (PLD), and a field programmable gate array (FPGA), and part or all of each functional block may be implemented by such hardware. For example, the processor 810 may be implemented by at least one of these pieces of hardware.
(Variations)
The terms described in this specification and/or the terms necessary for understanding this specification may be interchanged with terms having the same or similar meanings. For example, a channel and/or a symbol may be a signal (signaling). A signal may also be a message. A reference signal may be abbreviated as RS (Reference Signal) and, depending on the applied standard, may also be called a pilot, a pilot signal, or the like. A component carrier (CC) may also be called a cell, a frequency carrier, a carrier frequency, or the like.
The information, parameters, and the like described in this specification may be represented by absolute values, by values relative to prescribed values, or by other corresponding information. For example, a radio resource may be indicated by a prescribed index. Furthermore, the formulas and the like that use these parameters may differ from those explicitly disclosed in this specification.
The names used for parameters and the like in this specification are not limiting in any respect. For example, the various channels (physical uplink control channel (PUCCH), physical downlink control channel (PDCCH), and so on) and information elements can be identified by any appropriate names, and therefore the various names assigned to these various channels and information elements are not limiting in any respect.
The information, signals, and the like described in this specification may be represented using any of a variety of different technologies. For example, the data, commands, instructions, information, signals, bits, symbols, chips, and the like that may be mentioned throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or magnetic particles, optical fields or photons, or any combination thereof.
Information, signals, and the like may be output from a higher layer to a lower layer and/or from a lower layer to a higher layer. Information, signals, and the like may be input or output via a plurality of network nodes.
Input or output information, signals, and the like may be stored in a specific location (for example, a memory), or may be managed by a management table. Input or output information, signals, and the like may be overwritten, updated, or supplemented. Output information, signals, and the like may be deleted. Input information, signals, and the like may be transmitted to other devices.
Notification of information is not limited to the aspects/embodiments described in this specification, and may be performed by other methods. For example, notification of information may be carried out by physical-layer signaling (for example, downlink control information (DCI), uplink control information (UCI)), higher-layer signaling (for example, radio resource control (RRC) signaling, broadcast information (master information block (MIB), system information block (SIB), etc.), medium access control (MAC) signaling), other signals, or combinations thereof.
Physical-layer signaling may also be called L1/L2 (layer 1/layer 2) control information (L1/L2 control signals), L1 control information (L1 control signals), or the like. RRC signaling may also be called an RRC message, and may be, for example, an RRC Connection Setup message, an RRC Connection Reconfiguration message, or the like. MAC signaling may be notified, for example, by a MAC control element (MAC CE (Control Element)).
Notification of prescribed information (for example, notification of "being X") is not limited to being performed explicitly, and may be performed implicitly (for example, by not notifying the prescribed information, or by notifying other information).
A decision may be made by a value represented by one bit (0 or 1), by a true/false value (Boolean) represented by true or false, or by a numerical comparison (for example, comparison with a prescribed value).
Software, whether called software, firmware, middleware, microcode, hardware description language, or by another name, should be interpreted broadly to mean commands, command sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executable files, execution threads, steps, functions, and the like.
Software, commands, information, and the like may be transmitted or received via a transmission medium. For example, when software is transmitted from a website, a server, or another remote source using wired technology (coaxial cable, optical cable, twisted pair, digital subscriber line (DSL), etc.) and/or wireless technology (infrared, microwave, etc.), these wired and/or wireless technologies are included within the definition of transmission medium.
The terms "system" and "network" used in this specification may be used interchangeably.
In this specification, the terms "base station (BS)", "radio base station", "eNB", "gNB", "cell", "sector", "cell group", "carrier", and "component carrier" may be used interchangeably. A base station is sometimes also referred to by terms such as fixed station, NodeB, eNodeB (eNB), access point, transmission point, reception point, femtocell, and small cell.
A base station can accommodate one or more (for example, three) cells (also called sectors). When a base station accommodates a plurality of cells, the entire coverage area of the base station can be divided into a plurality of smaller areas, and each smaller area can also provide communication services through a base station subsystem (for example, a small indoor base station (remote radio head (RRH))). The term "cell" or "sector" refers to part or all of the coverage area of the base station and/or base station subsystem that provides communication services in that coverage.
In this specification, the terms "mobile station (MS)", "user terminal", "user equipment (UE)", and "terminal" may be used interchangeably. A mobile station is sometimes also referred to by those skilled in the art as a subscriber station, mobile unit, subscriber unit, wireless unit, remote unit, mobile device, wireless device, wireless communication device, remote device, mobile subscriber station, access terminal, mobile terminal, wireless terminal, remote terminal, handset, user agent, mobile client, client, or several other appropriate terms.
The radio base station in this specification may also be replaced with a user terminal. For example, the aspects/embodiments of the present disclosure may also be applied to a configuration in which communication between a radio base station and a user terminal is replaced with communication between a plurality of user terminals (D2D, Device-to-Device). In this case, the functions of the first communication device or the second communication device in the above device 800 may be regarded as functions of the user terminal. In addition, words such as "uplink" and "downlink" may also be replaced with "side". For example, an uplink channel may be replaced with a sidelink channel.
Similarly, the user terminal in this specification may be replaced with a radio base station. In this case, the functions of the above user terminal may be regarded as functions of the first communication device or the second communication device.
In this specification, specific operations that are described as being performed by the base station may, depending on the situation, be performed by its upper node. Obviously, in a network composed of one or more network nodes including a base station, the various operations performed for communication with a terminal may be performed by the base station, by one or more network nodes other than the base station (for example, a mobility management entity (MME) or a serving gateway (S-GW) may be considered, but the nodes are not limited to these), or by combinations thereof.
The aspects/embodiments described in this specification may be used individually, may be used in combination, or may be switched during execution. In addition, the processing steps, sequences, flowcharts, and the like of the aspects/embodiments described in this specification may be reordered as long as there is no contradiction. For example, regarding the methods described in this specification, the various step elements are presented in an exemplary order, and the methods are not limited to the specific order presented.
The aspects/embodiments described in this specification may be applied to systems using Long Term Evolution (LTE), LTE-Advanced (LTE-A), LTE-Beyond (LTE-B), SUPER 3G, IMT-Advanced, the 4th generation mobile communication system (4G), the 5th generation mobile communication system (5G), Future Radio Access (FRA), New Radio Access Technology (New-RAT), New Radio (NR), New radio access (NX), Future generation radio access (FX), the Global System for Mobile communications (GSM (registered trademark)), CDMA2000, Ultra Mobile Broadband (UMB), IEEE 802.11 (Wi-Fi (registered trademark)), IEEE 802.16 (WiMAX (registered trademark)), IEEE 802.20, Ultra-WideBand (UWB), Bluetooth (registered trademark), and other appropriate radio communication methods, and/or to next-generation systems extended on the basis of these.
The phrase "based on" used in this specification does not mean "based only on" unless explicitly stated otherwise. In other words, the phrase "based on" means both "based only on" and "based at least on".
Any reference in this specification to elements using designations such as "first" and "second" does not comprehensively limit the quantity or order of those elements. These designations may be used in this specification as a convenient way of distinguishing between two or more elements. Therefore, a reference to first and second elements does not mean that only two elements may be employed, or that the first element must precede the second element in some form.
The term "determining" used in this specification sometimes encompasses a wide variety of operations. For example, "determining" may be regarded as including calculating, computing, processing, deriving, investigating, looking up (for example, looking up in a table, a database, or another data structure), ascertaining, and the like. In addition, "determining" may be regarded as including receiving (for example, receiving information), transmitting (for example, transmitting information), input, output, accessing (for example, accessing data in a memory), and the like. Furthermore, "determining" may be regarded as including resolving, selecting, choosing, establishing, comparing, and the like. That is, "determining" may be regarded as including several kinds of operations.
The terms "connected" and "coupled" used in this specification, or any variations thereof, mean any direct or indirect connection or coupling between two or more elements, and may include the case where one or more intermediate elements exist between two elements that are "connected" or "coupled" to each other. The coupling or connection between elements may be physical, logical, or a combination of the two. For example, "connected" may also be replaced with "access". As used in this specification, two elements may be considered to be "connected" or "coupled" to each other through the use of one or more wires, cables, and/or printed electrical connections, and, as several non-limiting and non-exhaustive examples, through the use of electromagnetic energy having wavelengths in the radio frequency region, the microwave region, and/or the optical (both visible and invisible) region, and the like.
When "including", "comprising", and variations thereof are used in this specification or the claims, these terms, like the term "having", are open-ended. Furthermore, the term "or" used in this specification or the claims is not an exclusive OR.
The present disclosure has been described in detail above, but it is obvious to those skilled in the art that the present disclosure is not limited to the embodiments described in this specification. The present disclosure can be implemented with modifications and changes without departing from the spirit and scope of the present disclosure as determined by the recitations of the claims. Therefore, the description in this specification is for the purpose of illustration and does not have any limiting meaning with respect to the present disclosure.

Claims (10)

  1. A terminal, comprising:
    a processing unit that uses a neural network to map a bit sequence to be transmitted into a complex symbol sequence, wherein the neural network is configured to map the bit sequence into the complex symbol sequence within a predetermined range of the complex plane.
  2. The terminal according to claim 1, wherein
    the terminal further comprises a receiving unit that receives network configuration information transmitted by a base station, the network configuration information containing at least one of information indicating a network configuration of a neural network used by the base station and information indicating a network configuration of the neural network of the terminal.
  3. The terminal according to claim 2, wherein
    the processing unit configures the neural network of the terminal based on the network configuration information.
  4. The terminal according to claim 2 or 3, wherein
    the network configuration information contains network structure and network parameter information.
  5. A base station, comprising:
    a receiving unit that receives multiple signals in which signals transmitted by a plurality of terminals are superposed; and
    a processing unit that recovers the multiple signals, determines preliminary estimates of the multiple signals respectively through a plurality of tasks in a multi-task neural network, and, in a first task of the multi-task neural network, removes interference caused by other signals among the multiple signals from the preliminary estimate of a first signal determined by the first task, thereby determining an estimate of the first signal after interference cancellation, wherein the interference caused by the other signals among the multiple signals is obtained based on preliminary estimates determined by tasks other than the first task among the plurality of tasks.
  6. The base station according to claim 5, wherein
    the multi-task neural network contains one common part and a plurality of specific parts, each task in the multi-task neural network shares the common part, which is used to determine common features of each of the multiple signals, and each task in the multi-task neural network corresponds to a respective one of the specific parts, which is used to determine specific features of the corresponding signal.
  7. The base station according to claim 5 or 6, wherein
    the multi-task neural network contains a plurality of layers,
    the multi-task neural network contains a plurality of interference cancellation stages, each interference cancellation stage containing one or more neural network layers,
    in a first interference cancellation stage, preliminary estimates of the multiple signals for the first interference cancellation stage are determined respectively through the plurality of tasks, and interference obtained based on the preliminary estimates of the other signals for the first interference cancellation stage is removed from the preliminary estimate, for the first interference cancellation stage, of the first signal determined by the first task, thereby determining an estimate of the first signal for the first interference cancellation stage after interference cancellation,
    in a second interference cancellation stage, preliminary estimates of the multiple signals for the second interference cancellation stage are determined respectively through the plurality of tasks based on the estimates of the multiple signals for the first interference cancellation stage after interference cancellation, and interference obtained based on the preliminary estimates of the other signals for the second interference cancellation stage is removed from the preliminary estimate of the first signal for the second interference cancellation stage.
  8. The base station according to claim 5 or 6, further comprising:
    a transmitting unit that transmits information relating to the structure and parameters of the multi-task neural network.
  9. The base station according to claim 5 or 6, wherein
    the multi-task neural network is configured to balance the loss of each of the plurality of tasks,
    the loss being the difference between the value of one signal recovered by each task and the true value of that signal.
  10. A receiving method, comprising:
    receiving superposed multiple signals;
    determining preliminary estimates of the multiple signals respectively through a plurality of tasks in a multi-task neural network; and
    in a first task of the multi-task neural network, removing interference caused by other signals among the multiple signals from the preliminary estimate of a first signal determined by the first task, thereby determining an estimate of the first signal after interference cancellation,
    wherein the interference caused by the other signals among the multiple signals is obtained based on preliminary estimates determined by tasks other than the first task among the plurality of tasks.
PCT/CN2019/094432 2019-07-02 2019-07-02 Terminal and base station WO2021000264A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/CN2019/094432 WO2021000264A1 (zh) 2019-07-02 2019-07-02 Terminal and base station
CN201980097943.1A CN114026804B (zh) 2019-07-02 2019-07-02 Terminal and base station
US17/597,258 US20220312424A1 (en) 2019-07-02 2019-07-02 Terminal and base station

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/094432 WO2021000264A1 (zh) 2019-07-02 2019-07-02 Terminal and base station

Publications (1)

Publication Number Publication Date
WO2021000264A1 true WO2021000264A1 (zh) 2021-01-07

Family

ID=74100289

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/094432 WO2021000264A1 (zh) 2019-07-02 2019-07-02 终端和基站

Country Status (3)

Country Link
US (1) US20220312424A1 (zh)
CN (1) CN114026804B (zh)
WO (1) WO2021000264A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114880130A (zh) * 2022-07-11 2022-08-09 University of Science and Technology of China Method, system, device and storage medium for breaking through the memory limit in parallel training

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101867437B (zh) * 2009-04-20 2013-04-17 Huawei Technologies Co., Ltd. Baseband mapping method, mapper and transmitter for a communication system
US10397039B2 (en) * 2012-12-05 2019-08-27 Origin Wireless, Inc. Apparatus, systems and methods for fall-down detection based on a wireless signal
WO2018066924A1 (ko) * 2016-10-06 2018-04-12 LG Electronics Inc. Method for transmitting or receiving downlink signal in wireless communication system and device therefor
EP3474280B1 (en) * 2017-10-19 2021-07-07 Goodix Technology (HK) Company Limited Signal processor for speech signal enhancement
CN109246038B (zh) * 2018-09-10 2021-04-20 Southeast University Data and model dual-driven GFDM receiver and method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109660325A (zh) * 2017-10-12 2019-04-19 ZTE Corporation Data processing method and apparatus
CN108540267A (zh) * 2018-04-13 2018-09-14 Beijing University of Posts and Telecommunications Deep-learning-based multi-user data information detection method and apparatus
CN109246048A (zh) * 2018-10-30 2019-01-18 Guangzhou Haige Communications Group Incorporated Company Deep-learning-based physical-layer secure communication method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HUAWEI ET AL.: "Discussion on the design of NoMA receiver", 3GPP TSG RAN WG1 Meeting #93, R1-1805908, 25 May 2018 (2018-05-25) *

Also Published As

Publication number Publication date
US20220312424A1 (en) 2022-09-29
CN114026804A (zh) 2022-02-08
CN114026804B (zh) 2023-12-05


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19935966; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 19935966; Country of ref document: EP; Kind code of ref document: A1)