CN112262369A - Method, apparatus and computer readable medium for data processing


Info

Publication number
CN112262369A
CN112262369A
Authority
CN
China
Prior art keywords
reference data
output reference
output
input
generating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201880094548.3A
Other languages
Chinese (zh)
Other versions
CN112262369B
Inventor
Zhao Guangling (赵光玲)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nokia Shanghai Bell Co Ltd
Nokia Oyj
Original Assignee
Nokia Shanghai Bell Co Ltd
Nokia Networks Oy
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nokia Shanghai Bell Co Ltd, Nokia Networks Oy
Publication of CN112262369A
Application granted
Publication of CN112262369B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03F AMPLIFIERS
    • H03F1/00 Details of amplifiers with only discharge tubes, only semiconductor devices or only unspecified devices as amplifying elements
    • H03F1/32 Modifications of amplifiers to reduce non-linear distortion
    • H03F1/3241 Modifications of amplifiers to reduce non-linear distortion using predistortion circuits
    • H03F1/3247 Modifications of amplifiers to reduce non-linear distortion using predistortion circuits using feedback acting on predistortion circuits
    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03F AMPLIFIERS
    • H03F3/00 Amplifiers with only discharge tubes or only semiconductor devices as amplifying elements
    • H03F3/20 Power amplifiers, e.g. Class B amplifiers, Class C amplifiers
    • H03F3/24 Power amplifiers, e.g. Class B amplifiers, Class C amplifiers of transmitter output stages
    • H03F3/245 Power amplifiers, e.g. Class B amplifiers, Class C amplifiers of transmitter output stages with semiconductor devices only
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L27/00 Modulated-carrier systems
    • H04L27/32 Carrier systems characterised by combinations of two or more of the types covered by groups H04L27/02, H04L27/10, H04L27/18 or H04L27/26
    • H04L27/34 Amplitude- and phase-modulated carrier systems, e.g. quadrature-amplitude modulated carrier systems
    • H04L27/36 Modulator circuits; Transmitter circuits
    • H04L27/366 Arrangements for compensating undesirable properties of the transmission path between the modulator and the demodulator
    • H04L27/367 Arrangements for compensating undesirable properties of the transmission path between the modulator and the demodulator using predistortion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00 Computing arrangements using knowledge-based models
    • G06N5/01 Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound
    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03F AMPLIFIERS
    • H03F2200/00 Indexing scheme relating to amplifiers
    • H03F2200/336 A I/Q, i.e. phase quadrature, modulator or demodulator being used in an amplifying circuit
    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03F AMPLIFIERS
    • H03F2200/00 Indexing scheme relating to amplifiers
    • H03F2200/451 Indexing scheme relating to amplifiers the amplifier being a radio frequency amplifier
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/70 Services for machine-to-machine communication [M2M] or machine type communication [MTC]

Abstract

Embodiments of the present disclosure relate to methods, apparatuses, and computer program products for data processing. One method comprises the following steps: obtaining input reference data and first output reference data for training an Artificial Neural Network (ANN); generating second output reference data by suppressing noise in the first output reference data; and training the ANN based on the input reference data and the second output reference data. In some embodiments, the trained ANN may be used to accurately determine the configuration of Digital Predistortion (DPD) in a Power Amplifier (PA) system.

Description

Method, apparatus and computer readable medium for data processing
Technical Field
Non-limiting and example embodiments of the present disclosure relate generally to the field of data processing technology and, in particular, to a method, apparatus and computer program product for training an Artificial Neural Network (ANN).
Background
This section introduces aspects that may help provide a better understanding of the disclosure. Accordingly, the statements of this section are to be read in this light and are not to be understood as admissions about what is in the prior art or what is not in the prior art.
Modern wireless services require efficient and linear transmission of a Radio Frequency (RF) carrier modulated in amplitude as well as in phase by an envelope signal. The conflicting requirements of power efficiency and linearity place very stringent requirements on the transmitter, and in particular on its Power Amplifier (PA).
Although class A PAs are the best in terms of linearity, their efficiency is quite poor compared with other amplification classes, such as class AB, class C and Doherty amplifiers. Higher efficiency, however, comes with higher non-linearity, and the PA output will be distorted, often to the point where system performance requirements are not met. Therefore, class AB power amplifiers or other variants are often used together with some suitable form of linearization scheme.
Digital Predistortion (DPD) has been considered a popular method to compensate for the non-linearity of the PA. In a PA system with DPD, the transmission characteristics of the PA can be modeled by sampling the output of the PA and calculating its inverse characteristics. The digital baseband signal is then multiplied by the inverse of the nonlinear transmission characteristic of the PA, up-converted to RF frequency, and applied to the PA input. In this way, the DPD engine can correct output distortion of the PA and obtain higher efficiency.
The challenge of DPD techniques is that the distortion (i.e., non-linearity) characteristics of the PA may vary with time, temperature and bias, so it is not easy to design a correct predistortion algorithm.
Disclosure of Invention
Various embodiments of the present disclosure are generally directed to methods, apparatuses, and computer storage media for data processing.
In a first aspect of the disclosure, a method of data processing is provided. The method comprises the following steps: obtaining input reference data and first output reference data for training an ANN; generating second output reference data by suppressing noise in the first output reference data; and training the ANN based on the input reference data and the second output reference data.
In some embodiments, generating the second output reference data may further comprise: generating the second output reference data by polynomial fitting based on the input reference data and the first output reference data. In some embodiments, generating the second output reference data may further comprise: generating the second output reference data based on a Least Squares (LS) criterion.
In some embodiments, generating the second output reference data may comprise: determining an amplitude and a phase of the second output reference data based on the input reference data and the first output reference data, respectively; and generating the second output reference data based on the determined amplitude and phase. In some other embodiments, determining the amplitude and phase of the second output reference data may comprise: determining the amplitude by polynomial fitting based on the amplitude of the first output reference data relative to the input reference data; and determining the phase by polynomial fitting based on the phase of the first output reference data relative to the input reference data.
In some embodiments, generating the second output reference data may comprise: determining an in-phase component and a quadrature component of second output reference data based on the input reference data and the first output reference data, respectively; and generating second output reference data based on the determined in-phase and quadrature components.
In some embodiments, the method may further comprise: parameters to be applied to the DPD of the PA are determined based on the trained ANN. In some embodiments, obtaining the input reference data and the first output reference data may comprise: obtaining training data input to the PA as input reference data; and obtaining feedback data output from the PA in response to the training data as first output reference data.
In a second aspect of the disclosure, an apparatus for data processing is provided. The device includes: at least one processor; and at least one memory including computer program code; the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to: obtaining input reference data and first output reference data for training an ANN; generating second output reference data by suppressing noise in the first output reference data; and training the ANN based on the input reference data and the second output reference data.
In a third aspect of the present disclosure, another apparatus for data processing is provided. The device includes: means for obtaining input reference data and first output reference data for training an ANN; means for generating second output reference data by suppressing noise in the first output reference data; and means for training the ANN based on the input reference data and the second output reference data.
In a fourth aspect of the disclosure, a computer program is provided. The computer program comprises instructions which, when executed by an apparatus, cause the apparatus to perform a method according to the first aspect of the present disclosure.
In a fifth aspect of the present disclosure, a computer-readable medium is provided, having stored thereon a computer program which, when executed by at least one processor of an apparatus, causes the apparatus to perform the method of the first aspect of the present disclosure.
In a sixth aspect of the disclosure, an apparatus for communication is provided. The apparatus comprises: a PA; and a DPD coupled to an input of the PA; wherein the parameters of the DPD are obtained based on an ANN, the ANN being trained using input reference data and output reference data; and wherein the output reference data is generated by suppressing noise in feedback data output from the PA.
Drawings
The above and other aspects, features and benefits of various embodiments of the present disclosure will become more fully apparent from the following detailed description, which proceeds with reference to the accompanying drawings, wherein like reference numerals are used to refer to like or equivalent elements throughout. The accompanying drawings are shown to facilitate a better understanding of embodiments of the disclosure and are not necessarily drawn to scale, in which:
fig. 1 illustrates a wireless communication network in which embodiments of the present disclosure may be implemented;
FIG. 2 shows a flow diagram of a data processing method according to an example embodiment of the present disclosure;
FIG. 3 schematically illustrates a diagram of an ANN;
FIG. 4 illustrates an example of reconstructing clean training data via polynomial fitting in accordance with an embodiment of the present disclosure;
fig. 5 to 6 show another example of reconstructing clean training data according to an embodiment of the present disclosure;
FIG. 7 shows a flow diagram of a method of reconstructing clean training data in accordance with an embodiment of the present disclosure;
fig. 8 illustrates an example of configuring DPD in an ANN-based PA system according to an embodiment of the present disclosure;
fig. 9 shows a plot of the amplitude-to-amplitude characteristic of DPD configured based on an ANN trained with clean data, in accordance with an embodiment of the present disclosure;
fig. 10 shows the original spectrum of a PA system without DPD;
fig. 11 shows the spectrum of a PA system with conventional DPD;
fig. 12 shows a frequency spectrum of a PA system with DPD designed according to an embodiment of this disclosure; and
fig. 13 shows a simplified block diagram of an apparatus that may be used for data processing according to an example embodiment of the present disclosure.
Detailed Description
Hereinafter, the principle and spirit of the present disclosure will be described with reference to illustrative embodiments. It is to be understood that all such embodiments are presented solely to enable those skilled in the art to better understand and further practice the present disclosure, and not to limit the scope of the present disclosure. For instance, features illustrated or described as part of one embodiment can be used with another embodiment to yield a still further embodiment. In the interest of clarity, not all features of an actual implementation are described in this specification.
References in the specification to "one embodiment," "an example embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element may be termed a second element, and, similarly, a second element may be termed a first element, without departing from the scope of example embodiments. As used herein, the term "and/or" includes any and all combinations of one or more of the listed terms.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes," "including," "has," and/or "having," when used herein, specify the presence of stated features, elements, and/or components, etc., but do not preclude the presence or addition of one or more other features, elements, components, and/or groups thereof.
As used in this application, the term "circuitry" may refer to one or more or all of the following:
(a) a purely hardware circuit implementation (such as an implementation in analog and/or digital circuitry only); and
(b) a combination of hardware circuitry and software, such as (as applicable):
(i) combinations of analog and/or digital hardware circuit(s) and software/firmware, and
(ii) any portions of hardware processor(s) with software (including digital signal processor(s)), software, and memory(ies) that work together to cause an apparatus (such as a mobile phone or server) to perform various functions; and
(c) hardware circuit(s) and/or processor(s), such as microprocessor(s) or a portion of a microprocessor(s), that require software (e.g., firmware) for operation, but where the software may not be present when it is not needed for operation.
This definition of circuitry applies to all uses of the term in this application, including in any claims. As another example, as used in this application, the term "circuitry" also covers an implementation of merely a hardware circuit or processor (or multiple processors) or a portion of a hardware circuit or processor and its (or their) accompanying software and/or firmware. The term circuitry also covers (e.g., and if applicable to the particular claim element) a baseband integrated circuit or processor integrated circuit for a computing device.
As used herein, the term "communication network" refers to a network that conforms to any suitable communication standard, such as 5G, New Radio (NR), Long Term Evolution (LTE), LTE-advanced (LTE-a), Wideband Code Division Multiple Access (WCDMA), High Speed Packet Access (HSPA), and the like. A "communication network" may also be referred to as a "communication system". Further, communication between network devices, between a network device and a terminal device, or between terminal devices in a communication network may be performed according to any suitable communication protocol, including but not limited to global system for mobile communications (GSM), Universal Mobile Telecommunications System (UMTS), Long Term Evolution (LTE), New Radio (NR), 5G, Wireless Local Area Network (WLAN) standards such as the IEEE 802.11 standard, and/or any other suitable communication standard currently known or to be developed in the future.
As used herein, the term "network device" refers to a node in a communication network via which a terminal device receives services. For example, the network devices may include, but are not limited to, Base Stations (BSs) and Node BS (NBs), evolved NBs (enbs), 5G NBs (gnbs) or Access Points (APs), and the like.
The term "terminal device" refers to any terminal device that may be capable of communicating. By way of example, and not limitation, a terminal device may also be referred to as a communication device, UE, Subscriber Station (SS), portable subscriber station, Mobile Station (MS), or Access Terminal (AT). The terminal devices may include, but are not limited to, mobile phones, cellular phones, smart phones, voice over IP (VoIP) phones, wireless local loop phones, tablets, wearable terminal devices, Personal Digital Assistants (PDAs), portable computers, desktop computers, image capture terminal devices such as digital cameras, gaming terminal devices, music storage and playback devices, in-vehicle wireless terminal devices, wireless endpoints, mobile stations, notebook embedded equipment (LEE), notebook installation equipment (LME), USB dongles, smart devices, wireless Customer Premises Equipment (CPE), and the like. In the following description, the terms "terminal device", "communication device", "terminal", "user equipment" and "UE" may be used interchangeably.
As yet another example, in an Internet of Things (IoT) scenario, a terminal device may represent a machine or other device that performs monitoring and/or measurements and transmits the results of such monitoring and/or measurements to another terminal device and/or network device. In this case, the terminal device may be a machine-to-machine (M2M) device, which may be referred to as a Machine Type Communication (MTC) device in the 3GPP context. As one particular example, the terminal device may be a UE implementing the 3GPP narrowband internet of things (NB-IoT) standard. Examples of such machines or devices are sensors, metering devices (such as electric meters), industrial machinery, or household or personal appliances (e.g., refrigerators, televisions, personal wearable devices (such as watches), etc.). In other cases, the terminal device may represent a vehicle or other device capable of monitoring and/or reporting its operational status or other functionality associated with its operation.
Fig. 1 illustrates an example wireless communication network 100 in which embodiments of the present disclosure may be implemented. As shown in fig. 1, the wireless communication network 100 may include one or more network devices (also referred to as network nodes), e.g., network device 101, and the network device 101 may be in the form of an eNB or a gNB. It will be appreciated that the network device 101 may also be in the form of an NB, Base Transceiver Station (BTS) and/or Base Station Subsystem (BSS), AP, etc. Network device 101 provides radio connectivity to a group of terminal devices (e.g., terminal device 102). Both the network device 101 and the terminal device 102 are equipped with a transmitter and a receiver (or transceiver) to enable communication therebetween.
Power Amplifiers (PAs) are important components in transmitters (or transceivers) and must be carefully designed to achieve efficient communication. Due to their high efficiency, class AB and class C PAs have been widely used in transmitters/transceivers. However, the high efficiency is accompanied by high non-linearity, which may degrade system performance and is undesirable.
By compensating for the non-linearity of the PA, system performance may be improved. DPD has been considered as a candidate for compensation. In a PA system with DPD, the input signal can be pre-distorted before entering the PA, and in this way the distortion at the output of the PA can be corrected.
The challenge of DPD techniques is that the distortion (i.e., non-linearity) characteristics of the PA may change (e.g., over time, temperature, and bias), and thus, determining the appropriate parameters/algorithms for DPD operation may not be easy.
In the DPD field, the traditional approach to designing DPD parameters/algorithms is to use the Volterra series, some variants thereof, or combinations of the Volterra series with other techniques (e.g., orthogonal processing). However, these methods are often very complex and have only a limited ability to solve the non-linearity fitting problem.
Another way to determine the DPD parameters is to use a feedback mechanism, i.e., to sample the output signal of the PA and use it to correct the parameters of the DPD. The mechanism utilizes input training reference data and output reference data. The output reference data may be collected from the feedback of the PA and is therefore both noisy and non-linear. With this feedback mechanism, noise in the feedback may cause an incorrect estimate of the transmission characteristics of the PA and lead to an improper DPD design.
In some embodiments of the present disclosure, it is proposed to configure the DPD based on an ANN. Both ANNs and the Volterra series are of particular interest in the microwave domain for low-pass equivalent behavioral modeling of wireless transmitters. The inventors of the present disclosure have observed that an ANN has a much stronger fitting capability than the Volterra series, but performs poorly in noisy situations. Although certain techniques (e.g., the regularization used in ANNs) can be used to suppress the sensitivity of the ANN to noise, their performance does not meet the requirements of DPD applications.
To address this and other similar issues, in some embodiments of the present disclosure, clean training data is proposed for the ANN. With clean training data, ANN-based DPD is well suited to wideband non-linear applications.
In some embodiments, the raw noisy training data (which may be obtained from the feedback of the PA) may be pre-processed (e.g., via polynomial fitting) to construct new, clean training data. This scheme overcomes the weakness of the ANN under noisy conditions while retaining the advantage of its strong non-linear fitting capability. In some further embodiments, the new training data may be reconstructed based on some optimization criterion (e.g., the LS criterion).
By way of example and not limitation, in some embodiments, the amplitude-to-amplitude (AM-AM) and amplitude-to-phase (AM-PM) curves of the output reference data relative to the input training reference data are first calculated. Regression methods (e.g., LS-based polynomial fits) can then be used to fit these curves. The fitted polynomials can be used to reconstruct new output reference data that is noise-free or noise-suppressed while maintaining the non-linearity characteristics. Note that in some other embodiments, many other fitting functions (e.g., piecewise fitting) may be used for this purpose.
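For concreteness, the fitting-and-reconstruction step just described can be sketched in Python as follows. This is a minimal illustration under stated assumptions, not this disclosure's implementation: the polynomial order (7) and the helper names fit_gain_curves and reconstruct_clean_output are chosen for the example, and numpy.polyfit supplies the LS-based polynomial fit.

    import numpy as np

    def fit_gain_curves(x, y, order=7):
        """LS polynomial fits of the AM-AM and AM-PM curves.

        x, y: complex baseband arrays holding the input reference data and
        the noisy first output reference data. Returns two numpy.poly1d fits.
        """
        a_in = np.abs(x)
        am_am = np.poly1d(np.polyfit(a_in, np.abs(y), order))        # output amplitude vs. input amplitude
        am_pm = np.poly1d(np.polyfit(a_in, np.angle(y / x), order))  # phase shift vs. input amplitude
        return am_am, am_pm

    def reconstruct_clean_output(x, am_am, am_pm):
        """Rebuild noise-suppressed second output reference data from the fits."""
        a_in = np.abs(x)
        return am_am(a_in) * np.exp(1j * (np.angle(x) + am_pm(a_in)))

The reconstructed samples keep the fitted non-linear AM-AM/AM-PM behavior while discarding the scatter caused by feedback noise.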
The reconstructed new output reference data is used to train an ANN (e.g., a delay-tap back-propagation (BP) based ANN) to determine the appropriate parameters for the DPD. Owing to the suppressed noise in the reconstructed new output reference data, the number of neurons in the ANN can be chosen to be larger to achieve better performance without causing overfitting.
To facilitate an understanding of the solution presented herein, some embodiments will be described below with reference to fig. 2 to 13.
Fig. 2 illustrates an example method 200 in accordance with an embodiment of the disclosure. The method may be implemented by a training apparatus, which may be implemented, for example, in a transceiver of the network device 101 or the terminal device 102 in fig. 1, or may provide an input to such a transceiver. However, it should be understood that the method 200 may also be implemented by other data processing devices, apparatuses, or clouds. For purposes of illustration only and not limitation, the method 200 will be described below with reference to a training apparatus.
As shown in fig. 2, at block 210, the training apparatus obtains input reference data and first output reference data for training the ANN. Note that embodiments are not limited to any particular application of ANN. For purposes of illustration only and not limitation, the ANN may be used to determine configurations/parameters for DPD in the PA. In such embodiments, at block 210, the training apparatus may obtain training data input to the PA as input reference data; and obtaining feedback data output from the PA in response to the training data as first output reference data.
Additionally, embodiments are not limited to any particular structure of the ANN. For illustration only, fig. 3 schematically shows a diagram of a delay-tap BP ANN; however, it should be understood that embodiments of the present disclosure are not limited thereto. The example ANN shown in fig. 3 includes a plurality of neurons (represented as small circles in fig. 3). In addition, a tapped delay line (denoted by symbol v in fig. 3) is employed at the input neurons to model the memory effect of the PA. In fig. 3, I_in and Q_in are the inputs to the ANN, and I_out and Q_out are the outputs of the ANN. Although only one hidden layer is shown in the example ANN in fig. 3, it should be understood that in some embodiments the ANN may include multiple hidden layers. Further, symbol b in fig. 3 denotes a threshold value, f denotes an activation function (for which a sigmoid function can be used), and w denotes the coefficients of the ANN model to be learned via training.
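As a hedged illustration only, a structure of this kind can be approximated with off-the-shelf tools: the sketch below builds the tapped-delay I/Q features and a one-hidden-layer sigmoid network trained by a back-propagation-based solver. scikit-learn's MLPRegressor is used purely for convenience, and the tap count (4) and neuron count (20) are assumptions, not values given by this disclosure.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def tapped_delay_features(i_in, q_in, taps=4):
        """Stack the current I/Q sample with `taps` delayed copies, emulating
        the tapped delay line of fig. 3 that models the PA memory effect."""
        n = len(i_in)
        cols = []
        for d in range(taps + 1):
            cols.append(np.concatenate([np.zeros(d), i_in[:n - d]]))
            cols.append(np.concatenate([np.zeros(d), q_in[:n - d]]))
        return np.column_stack(cols)

    # One hidden layer of sigmoid ("logistic") neurons trained with a
    # gradient-based BP solver, mirroring the BP ANN sketched in fig. 3.
    ann = MLPRegressor(hidden_layer_sizes=(20,), activation="logistic",
                       solver="adam", max_iter=2000)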
At block 220, the training apparatus generates second output reference data by suppressing noise in the first output reference data. In some embodiments, the first output reference data may be collected from feedback of the PA and may include noise. In this case, the relationship between the input reference data and the first output reference data cannot accurately reflect the transmission characteristics of the PA. By suppressing noise in the first output reference data, the second output reference data generated at block 220 is cleaner and better suited for training the ANN.
Embodiments are not limited to any particular manner of suppressing noise in the first output reference data to obtain the clean second output reference data at block 220. In other words, any suitable preprocessing now known or developed in the future may be used for this purpose. In some embodiments, for illustration only and not limitation, the training apparatus may generate the second output reference data by polynomial fitting based on the input reference data and the first output reference data. For example, at block 220, the training apparatus may generate the second output reference data by polynomial fitting based on the LS criterion.
Fig. 4 shows an example of reconstructing the second output reference data via polynomial fitting. Specifically, it shows an AM-AM curve 410 of the first output reference data (obtained at block 210 and which may be referred to as the raw output) relative to the input reference data (also obtained at block 210 and which may be referred to as the raw input), and an AM-AM curve 420 reconstructed via polynomial fitting to the curve 410. In fig. 4, the horizontal axis represents the amplitude of the input reference data (denoted A_I herein), and the vertical axis represents the amplitude of the output reference data (denoted A_O herein). As shown in fig. 4, the black dots forming the AM-AM curve 410 are scattered due to noise in the first output reference data. In contrast, the AM-AM curve 420 reconstructed by polynomial fitting to the curve 410 is narrow, which means that the noise is suppressed. The second output reference data may be derived directly from the AM-AM curve 420.
Alternatively or additionally, in some embodiments, the training apparatus may determine an amplitude and a phase of the second output reference data based on the input reference data and the first output reference data, respectively, and generate the second output reference data based on the determined amplitude and phase.
By way of example and not limitation, at block 220, the training apparatus may determine the amplitude of the second output reference data by polynomial fitting based on the amplitude of the first output reference data relative to the input reference data (e.g., based on the AM-AM gain curve of the first output reference data). Likewise, the training apparatus may determine the phase of the second output reference data by polynomial fitting based on the phase of the first output reference data relative to the input reference data (e.g., based on the AM-PM gain curve of the first output reference data).
Fig. 5 to 6 show examples of reconstructing the second output reference data via polynomial fitting based on the AM-AM gain curve and the AM-PM gain curve of the first output reference data.
In particular, fig. 5 shows a plot 510 of the amplitude gain of the first output reference data relative to the input reference data, and a plot 520 of the amplitude gain of the second output reference data (which is obtained at block 220 and may be referred to as the reconstructed training data) relative to the input reference data. In fig. 5, the horizontal axis represents the amplitude of the input reference data (i.e., A_I), and the vertical axis represents the amplitude gain, which may be expressed as G_A = |A_O / A_I|. By polynomial fitting to the curve 510, the curve 520, and correspondingly the amplitude of the second output reference data, is obtained. It is clear that the amplitude gain shown by curve 520 is much less scattered than that shown by curve 510, which indicates suppressed noise in the reconstructed second output reference data.
Likewise, fig. 6 shows a plot 610 of the phase gain of the first output reference data relative to the input reference data, and a plot 620 of the phase gain of the reconstructed training data relative to the input reference data. In fig. 6, the horizontal axis represents the amplitude of the input reference data (i.e., A_I), and the vertical axis represents the phase gain, which may be expressed as G_P = phase(A_O / A_I). By polynomial fitting to the curve 610, the curve 620, and correspondingly the phase of the second output reference data, is obtained. It is clear that the phase gain shown by curve 620 is much less scattered than that shown by curve 610, which likewise indicates suppressed noise in the reconstructed second output reference data. The second output reference data, i.e., the clean training data, is then determined based on the amplitude of the second output reference data from fig. 5 and the phase of the second output reference data from fig. 6.
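Assuming, as in the earlier sketch, complex arrays x (input reference data) and y (noisy first output reference data), the gain-curve variant of figs. 5 and 6 can be written compactly; the polynomial order is again an arbitrary choice for the example:

    # Fit the amplitude gain G_A = |A_O / A_I| and the phase gain
    # G_P = phase(A_O / A_I) against the input amplitude, then rebuild the
    # clean output by applying both fitted gains to the input samples.
    a_in = np.abs(x)
    g_a = np.poly1d(np.polyfit(a_in, np.abs(y / x), 7))
    g_p = np.poly1d(np.polyfit(a_in, np.angle(y / x), 7))
    y_clean = x * g_a(a_in) * np.exp(1j * g_p(a_in))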
As another alternative, at block 220, the training apparatus may generate the second output reference data via operation 700 shown in fig. 7. Specifically, in the example shown in fig. 7, at block 710 the training apparatus may determine an in-phase (I) component of the second output reference data based on the input reference data and the first output reference data, and at block 720 a quadrature (Q) component of the second output reference data based on the input reference data and the first output reference data; and at block 730, second output reference data is generated based on the determined I and Q components. Note that in some embodiments, each of the I and Q components may be generated in a manner similar to that described with reference to fig. 4-6.
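One plausible reading of the I/Q variant of fig. 7, and only an assumption since the text leaves the exact fitting target open, is to fit the in-phase and quadrature parts of the complex gain separately and then recombine them:

    # Fit the real and imaginary parts of the complex gain y/x separately
    # against the input amplitude (blocks 710 and 720), then recombine the
    # two fitted components into clean output data (block 730).
    g_i = np.poly1d(np.polyfit(a_in, np.real(y / x), 7))
    g_q = np.poly1d(np.polyfit(a_in, np.imag(y / x), 7))
    y_clean_iq = x * (g_i(a_in) + 1j * g_q(a_in))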
Reference is now made back to fig. 2. At block 230, the training apparatus trains the ANN based on the input reference data and the second output reference data, which is generated at block 220 and is cleaner than the original first output reference data. For purposes of illustration and not limitation, the criterion used to train the ANN may include minimizing the sum of squared errors between the target data and the output of the ANN.
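Continuing the earlier sketches, the training step then reduces to fitting the network on the reconstructed targets; MLPRegressor minimizes exactly such a squared-error loss. Note that this disclosure leaves open whether the network maps the PA input to its output or the reverse, so the forward direction is assumed here purely for illustration:

    # Train the ANN on the clean targets; the squared-error criterion used
    # by MLPRegressor matches the training criterion described above.
    X = tapped_delay_features(np.real(x), np.imag(x))
    Y = np.column_stack([np.real(y_clean), np.imag(y_clean)])
    ann.fit(X, Y)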
In some embodiments, the trained ANN may be used to determine configurations/parameters for a DPD that may be applied to the PA. That is, in some example embodiments, the method 200 may also include block 240, where the training apparatus determines the configuration/parameters for the DPD based on the trained ANN.
Fig. 8 illustrates an example of configuring a DPD in a PA system based on an ANN according to an embodiment of the present disclosure. For example, the method 200 may be used to train an ANN for configuring the DPD. As shown in fig. 8, data collected from the feedback chain of the PA 801 (which may include the attenuator 802, IQ modulator 803, and ADCs 804 and 805) is input to a pre-processing module 806 to generate clean training data before entering the ANN 807. The feedback data input to the pre-processing module 806 may be represented by an I component I_out and a Q component Q_out. For example, the pre-processing module 806 may generate clean training data having an I component I_out_cln and a Q component Q_out_cln via the operations described with reference to block 220 of the method 200, using the feedback data I_out and Q_out as the first output reference data. As shown in fig. 8, the clean training data output from the pre-processing module 806 is input to the ANN 807 together with input reference data having an I component I_in and a Q component Q_in to train the ANN 807. Any suitable criteria known or to be developed in the future may be used for training, and embodiments are not limited to any particular training algorithm. In some embodiments, operations similar to those described with reference to block 230 of the method 200 may be used for training.
The trained ANN 807 may then be used to determine parameters/coefficients for the DPD 808 based on the input reference data (I_in and Q_in), which may be obtained from the input side of the PA 801, e.g., before the IQ modulator 809. As shown in fig. 8, a copy of the coefficients (Coeff) determined by the ANN 807 is applied to the DPD 808.
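In terms of the scikit-learn sketch above, the "copy of the coefficients (Coeff)" step amounts to reading out the learned weights and thresholds; the attribute names below are scikit-learn's, not terms used by this disclosure:

    # Weight matrices (w in fig. 3) and thresholds (b in fig. 3) of the
    # trained network; these are the coefficients loaded into the DPD 808.
    dpd_weights = ann.coefs_          # list of per-layer weight matrices
    dpd_thresholds = ann.intercepts_  # list of per-layer threshold vectors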
Fig. 9 illustrates AM-AM characteristics of DPD configured based on ANN trained with clean data, according to an embodiment of the present disclosure. The AM-AM characteristic of the DPD shown in fig. 9 is more accurate than that of the conventional DPD.
The accurate transmission characteristics of a DPD designed according to the embodiments of the present disclosure result in better performance of the PA system, as shown in figs. 10 to 12. For comparison, fig. 10 shows the original spectrum of a PA system without DPD. As can be seen from fig. 10, the out-of-band level is about -70 dBm, only about 25 dB lower than the in-band response, which implies strong out-of-band interference.
Fig. 11 shows the spectrum of a PA system with conventional DPD. It can be seen that the out-of-band attenuation is about-90 dBm, which means that the out-of-band interference is reduced compared to figure 10. Fig. 12 shows a spectrum of a PA system with DPD designed according to an embodiment of this disclosure. In this case the out-of-band attenuation is reduced to-100 dBm, which means that the out-of-band interference is even lower compared to the PA system with conventional DPD as shown in fig. 11.
Although some embodiments are described with reference to DPD and PA systems, it should be understood that the embodiments presented herein are not limited to such specific application scenarios. Instead, the proposed solution for obtaining clean training data for an ANN via preprocessing may be applied to any application where similar problems exist and/or clean training data is needed.
Note that in some embodiments, the training apparatus implementing method 200 may be part of the ANN. In another embodiment, the training device may be a separate device that may be connected to the ANN when needed.
Alternatively or additionally, the ANN and/or the training device may be part of the DPD module. In another embodiment, the ANN and/or the training device may be connected to the DPD module only when needed.
In some embodiments, the ANN, the training device, and/or the DPD module may be part of a PA system. In another embodiment, the ANN, the training device and/or the DPD module may be connected to the PA system only when needed.
Some embodiments of the present disclosure also provide an apparatus for communication, which may include a network device (e.g., network device 101 in fig. 1) or a terminal device (e.g., terminal device 102 in fig. 1). The apparatus for communication includes a PA and a DPD coupled to an input of the PA. In addition, the parameters of the DPD are obtained based on an ANN that is trained with input reference data and output reference data, and the output reference data is generated by suppressing noise in the feedback data output from the PA, e.g., according to method 200.
Fig. 13 shows a simplified block diagram of an apparatus 1300, which apparatus 1300 may be embodied in/as a communication device, which may include, but is not limited to, a network device or a terminal device. In some embodiments, the apparatus 1300 may be separate from the communication device and may be connected to the communication device when needed.
As shown in the example of fig. 13, the apparatus 1300 includes a processor 1310 that controls the operation and functionality of the apparatus 1300. For example, in some embodiments, the processor 1310 may implement various operations by way of instructions 1330 stored in a memory 1320 coupled thereto. The memory 1320 may be of any suitable type suitable to the local technical environment and may be implemented using any suitable data storage technology, such as semiconductor-based memory devices, magnetic memory devices and systems, optical memory devices and systems, fixed memory and removable memory, as non-limiting examples. In some example embodiments, the memory 1320 may be a non-transitory computer-readable medium. Although only one memory unit is shown in fig. 13, in some embodiments, multiple physically distinct memory units may be present in the apparatus 1300.
The processor 1310 may be of any suitable type suitable to the local technical environment, and may include, as non-limiting examples, one or more of the following: general purpose computers, special purpose computers, microprocessors, Digital Signal Processors (DSPs), Central Processing Units (CPUs), Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Graphics Processing Units (GPUs), Neural network Processing Units (NPUs), Artificial Intelligence (AI) accelerators, and processors based on a multi-core processor architecture. The apparatus 1300 may also include multiple processors 1310 in any combination thereof.
The processor 1310 may also be coupled with one or more transceivers 1340, which transceivers 1340 enable communication with other apparatuses, modules, or devices. In some embodiments, the processor 1310 and the memory 1320 may cooperate to implement the method 200 described with reference to fig. 2-7. It should be understood that all of the features described above with reference to fig. 2-12 may also be applicable to the apparatus 1300, and therefore will not be described in detail herein.
Various embodiments of the disclosure may be implemented by a computer program or computer program product executable by one or more of: a processor (e.g., processor 1310 in fig. 13), software, firmware, hardware, or a combination thereof.
Although some embodiments are described in the context of DPD and PA, they should not be construed as limiting the spirit and scope of the present disclosure. The principles and concepts of the present disclosure may be more generally applicable to other application scenarios.
Additionally, the present disclosure may also provide a carrier (e.g., computer instructions/program 1330 in fig. 13) containing a computer program as described above. The carrier includes a computer readable storage medium. The computer-readable storage medium may include, for example, an optical or electronic memory device, such as a RAM (random access memory), ROM (read only memory), flash memory, magnetic tape, CD-ROM, DVD, blu-ray disc, and so forth.
The techniques described herein may be implemented by various means, such that an apparatus implementing one or more functions of a corresponding apparatus described with an embodiment comprises not only prior-art means, but also means for implementing the one or more functions of the corresponding apparatus; and it may comprise separate means for each separate function, or means that may be configured to perform two or more functions. For example, these techniques may be implemented in hardware (e.g., a circuit or a processor), firmware, software, or a combination thereof. For firmware or software, implementation can be through modules (e.g., procedures, functions, and so on) that perform the functions described herein.
Some example embodiments herein have been described above with reference to block diagrams and flowchart illustrations of methods and apparatus. It will be understood that each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations, respectively, can be implemented by various means including computer program instructions. These computer program instructions may be loaded onto a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions which execute on the computer or other programmable data processing apparatus create means for implementing the functions specified in the flowchart block or blocks.
While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any implementation or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular implementations. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Furthermore, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
It is clear to a person skilled in the art that with the advancement of technology, the inventive concept may be implemented in various ways. The above-described embodiments are given for the purpose of illustration and not limitation of the present disclosure, and it is to be understood that modifications and variations may be made without departing from the spirit and scope of the disclosure, as will be readily understood by those skilled in the art. Such modifications and variations are considered to be within the scope of the disclosure and the appended claims. The scope of the disclosure is defined by the appended claims.

Claims (20)

1. An apparatus for data processing, comprising:
at least one processor; and
at least one memory including computer program code;
the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to:
obtaining input reference data and first output reference data for training an Artificial Neural Network (ANN);
generating second output reference data by suppressing noise in the first output reference data; and
training the ANN based on the input reference data and the second output reference data.
2. The apparatus of claim 1, wherein the at least one memory and the computer program code are configured to, with the at least one processor, further cause the apparatus to:
determining parameters to be applied to a Digital Predistortion (DPD) of a Power Amplifier (PA) based on the trained ANN.
3. The apparatus of claim 2, wherein the at least one memory and the computer program code are configured to, with the at least one processor, further cause the apparatus to:
obtaining the input reference data and the first output reference data further comprises:
obtaining training data input to the PA as the input reference data; and
obtaining feedback data output from the PA in response to the training data as the first output reference data.
4. The apparatus of claim 1, wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus to:
generating the second output reference data further comprises:
generating the second output reference data by polynomial fitting based on the input reference data and the first output reference data.
5. The apparatus of claim 4, wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus to:
generating the second output reference data further comprises:
generating the second output reference data based on a Least Squares (LS) criterion.
6. The apparatus of any of claims 1-5, wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus to:
generating the second output reference data further comprises:
determining an amplitude and a phase of the second output reference data based on the input reference data and the first output reference data, respectively; and
generating the second output reference data based on the determined amplitude and phase.
7. The apparatus of claim 6, wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus to:
determining the amplitude and the phase of the second output reference data further comprises:
determining the amplitude by polynomial fitting based on the amplitude of the first output reference data relative to the input reference data; and
determining the phase by polynomial fitting based on the phase of the first output reference data relative to the input reference data.
8. The apparatus of any of claims 1-5, wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus to:
generating the second output reference data further comprises:
determining an in-phase component and a quadrature component of the second output reference data based on the input reference data and the first output reference data, respectively; and
generating the second output reference data based on the determined in-phase and quadrature components.
9. A method of data processing, comprising:
obtaining input reference data and first output reference data for training an Artificial Neural Network (ANN);
generating second output reference data by suppressing noise in the first output reference data; and
training the ANN based on the input reference data and the second output reference data.
10. The method of claim 9, further comprising:
determining parameters to be applied to a Digital Predistortion (DPD) of a Power Amplifier (PA) based on the trained ANN.
11. The method of claim 10, wherein obtaining the input reference data and the first output reference data comprises:
obtaining training data input to the PA as the input reference data; and
obtaining feedback data output from the PA in response to the training data as the first output reference data.
12. The method of claim 9, wherein generating the second output reference data further comprises:
generating the second output reference data by polynomial fitting based on the input reference data and the first output reference data.
13. The method of claim 12, wherein generating the second output reference data further comprises:
generating the second output reference data by polynomial fitting based on a Least Squares (LS) criterion.
14. The method of any of claims 9 to 13, wherein generating the second output reference data comprises:
determining an amplitude and a phase of the second output reference data based on the input reference data and the first output reference data, respectively; and
generating the second output reference data based on the determined amplitude and phase.
15. The method of claim 14, wherein determining the amplitude and the phase of the second output reference data comprises:
determining the amplitude by polynomial fitting based on the amplitude of the first output reference data relative to the input reference data; and
determining the phase by polynomial fitting based on the phase of the first output reference data relative to the input reference data.
16. The method of any of claims 9 to 13, wherein generating the second output reference data comprises:
determining an in-phase component and a quadrature component of the second output reference data based on the input reference data and the first output reference data, respectively; and
generating the second output reference data based on the determined in-phase and quadrature components.
17. An apparatus for data processing, comprising:
means for obtaining input reference data and first output reference data for training an Artificial Neural Network (ANN);
means for generating second output reference data by suppressing noise in the first output reference data; and
means for training the ANN based on the input reference data and the second output reference data.
18. The apparatus of claim 17, wherein the means comprises:
at least one processor; and
at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the performance of the apparatus.
19. A computer-readable medium, on which a computer program is stored which, when executed by at least one processor of an apparatus, causes the apparatus to perform the method according to any one of claims 9 to 16.
20. An apparatus for communication, comprising:
a power amplifier PA; and
a digital pre-distortion (DPD) coupled to an input of the PA;
wherein the parameters of the DPD are obtained based on an Artificial Neural Network (ANN) trained with input reference data and output reference data; and
wherein the output reference data is generated by suppressing noise in feedback data output from the PA.
CN201880094548.3A 2018-07-26 2018-07-26 Method, apparatus and computer readable medium for data processing Active CN112262369B

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/097217 WO2020019240A1 (en) 2018-07-26 2018-07-26 Method, apparatus and computer readable media for data processing

Publications (2)

Publication Number Publication Date
CN112262369A 2021-01-22
CN112262369B 2024-04-02

Family

ID=69180322

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880094548.3A Active CN112262369B (en) 2018-07-26 2018-07-26 Method, apparatus and computer readable medium for data processing

Country Status (2)

Country Link
CN (1) CN112262369B
WO (1) WO2020019240A1

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11431300B2 (en) 2020-06-12 2022-08-30 Nokia Technologies Oy Machine learning based digital pre-distortion for power amplifiers

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1288341A (en) * 1999-09-14 2001-03-21 朗迅科技公司 Method and device for reducing adjacent channel power in radio communication system
CN1453968A (en) * 2002-04-23 2003-11-05 华为技术有限公司 Method of raising efficiency of RF power amplifier based on base band digital predistortion technology
CN101320960A (en) * 2008-07-18 2008-12-10 东南大学 Power amplifier predistortion method of Hammerstein model based on fuzzy neural network
CN101686069A (en) * 2008-09-24 2010-03-31 大唐移动通信设备有限公司 Device and method for calibrating predistortion in time division mobile communication system
US20100159856A1 (en) * 2008-12-22 2010-06-24 Kabushiki Kaisha Toshiba Distortion compensator, distortion compensation method, and transmitter
CN102082751A (en) * 2009-11-27 2011-06-01 电子科技大学 Neural network pre-distortion method based on improved MLBP (Levenberg-Marquardt back propagation) algorithm
CN101764577A (en) * 2009-12-16 2010-06-30 电子科技大学 Baseband pre-distortion power amplifier linearization method based on one-way feedback and non-iterative technique
KR20110105318A (en) * 2010-03-18 2011-09-26 한국방송공사 Apparatus and method for digital predistortion using adaptive noise cancelation
CN102055696A (en) * 2010-12-06 2011-05-11 西安电子科技大学 Digital predistortion system for inhibiting noise of feedback signal
CN102427336A (en) * 2011-11-30 2012-04-25 上海瑞和安琦通信科技有限公司 Radio frequency power amplification system with function of adaptive digital predistortion linearization
CN103685110A (en) * 2013-12-17 2014-03-26 京信通信系统(中国)有限公司 Predistortion processing method and system and predistortion factor arithmetic unit
CN103731105A (en) * 2014-01-03 2014-04-16 东南大学 Amplifier digital pre-distortion device and method based on dynamic fuzzy neural network
US20180040333A1 (en) * 2016-08-03 2018-02-08 Apple Inc. System and method for performing speech enhancement using a deep neural network-based signal
CN107834983A (en) * 2017-10-18 2018-03-23 宁波大学 A kind of digital pre-distortion linearization parameter extracting method based on cloud platform

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Huichu Liu, "Tunnel FET-based ultra-low power, low-noise amplifier design for bio-signal acquisition", Proceedings of the 2014 International Symposium on Low Power Electronics and Design, 31 August 2014.
Li Qing, "Research on the GSM-R digital optical fiber repeater scheme for the Baoxi Railway", Railway Standard Design (铁道标准设计), no. 12, 25 November 2013.

Also Published As

Publication number Publication date
WO2020019240A1 2020-01-30
CN112262369B 2024-04-02

Similar Documents

Publication Publication Date Title
CN107437927B (en) Method and apparatus for signal predistortion
KR102605423B1 (en) System and method for frequency-domain weighted least square for aclr optimization
US11394412B2 (en) Predistortion circuit, method for generating a predistorted baseband signal, control circuit for a predistortion circuit, method to determine parameters for a predistortion circuit, and apparatus and method for predistorting a baseband signal
JP2017509179A (en) Method for obtaining digital predistortion parameter and predistortion system
US20180331662A1 (en) Method of reducing memory effect of power amplifier
US9755583B2 (en) Using fractional delay computations to improve intermodulation performance
JP6554265B2 (en) Baseband digital predistortion architecture
WO2021092633A2 (en) Apparatus and method of harmonic interference cancellation
US20180123622A1 (en) System for and method of reducing transmit signal distortion
Singla et al. Digital predistortion of power amplifiers using look-up table method with memory effects for LTE wireless systems
CN110720201A (en) Output power adjusting method and related product
US20140250309A1 (en) Predictive self calibrated power control
CN112262369B (en) Method, apparatus and computer readable medium for data processing
Anttila et al. Recursive learning-based joint digital predistorter for power amplifier and I/Q modulator impairments
US8824984B2 (en) Outphasing power combining by antenna
WO2018191967A1 (en) Non-linear distortion mitigation for power amplifier
US20200220564A1 (en) Transceivers for a wireless communication system, mobile device, and method for improving transceiver loopback calibration accuracy
US9432062B2 (en) Polar noise shaping
WO2022262991A1 (en) Systems and methods for multiband linearization using kernel regression
WO2021042088A2 (en) Switched envelope tracking
WO2019174051A1 (en) Method and arrangement for compensating memory effects in power amplifier
WO2017167354A1 (en) Digital predistortion for dual-band power amplifiers
WO2023230819A1 (en) Digital predistortion method and apparatus
Liu et al. A digital predistortion method for multi-band aggregation
US20140210549A1 (en) Method and apparatus for using a processor controlled switcher with a power amplifier

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant