CN112262369B - Method, apparatus and computer readable medium for data processing

Info

Publication number
CN112262369B
Authority
CN
China
Prior art keywords
reference data
output reference
output
input
data
Prior art date
Legal status
Active
Application number
CN201880094548.3A
Other languages
Chinese (zh)
Other versions
CN112262369A (en)
Inventor
赵光玲
Current Assignee
Nokia Shanghai Bell Co Ltd
Nokia Solutions and Networks Oy
Original Assignee
Nokia Shanghai Bell Co Ltd
Nokia Solutions and Networks Oy
Priority date
Filing date
Publication date
Application filed by Nokia Shanghai Bell Co Ltd and Nokia Solutions and Networks Oy
Publication of CN112262369A
Application granted granted Critical
Publication of CN112262369B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03F AMPLIFIERS
    • H03F1/00 Details of amplifiers with only discharge tubes, only semiconductor devices or only unspecified devices as amplifying elements
    • H03F1/32 Modifications of amplifiers to reduce non-linear distortion
    • H03F1/3241 Modifications of amplifiers to reduce non-linear distortion using predistortion circuits
    • H03F1/3247 Modifications of amplifiers to reduce non-linear distortion using predistortion circuits using feedback acting on predistortion circuits
    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03F AMPLIFIERS
    • H03F3/00 Amplifiers with only discharge tubes or only semiconductor devices as amplifying elements
    • H03F3/20 Power amplifiers, e.g. Class B amplifiers, Class C amplifiers
    • H03F3/24 Power amplifiers, e.g. Class B amplifiers, Class C amplifiers of transmitter output stages
    • H03F3/245 Power amplifiers, e.g. Class B amplifiers, Class C amplifiers of transmitter output stages with semiconductor devices only
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L27/00 Modulated-carrier systems
    • H04L27/32 Carrier systems characterised by combinations of two or more of the types covered by groups H04L27/02, H04L27/10, H04L27/18 or H04L27/26
    • H04L27/34 Amplitude- and phase-modulated carrier systems, e.g. quadrature-amplitude modulated carrier systems
    • H04L27/36 Modulator circuits; Transmitter circuits
    • H04L27/366 Arrangements for compensating undesirable properties of the transmission path between the modulator and the demodulator
    • H04L27/367 Arrangements for compensating undesirable properties of the transmission path between the modulator and the demodulator using predistortion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00 Computing arrangements using knowledge-based models
    • G06N5/01 Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound
    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03F AMPLIFIERS
    • H03F2200/00 Indexing scheme relating to amplifiers
    • H03F2200/336 A I/Q, i.e. phase quadrature, modulator or demodulator being used in an amplifying circuit
    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03F AMPLIFIERS
    • H03F2200/00 Indexing scheme relating to amplifiers
    • H03F2200/451 Indexing scheme relating to amplifiers the amplifier being a radio frequency amplifier
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/70 Services for machine-to-machine communication [M2M] or machine type communication [MTC]


Abstract

Embodiments of the present disclosure relate to methods, apparatuses, and computer program products for data processing. A method comprising: obtaining input reference data and first output reference data for training an Artificial Neural Network (ANN); generating second output reference data by suppressing noise in the first output reference data; and training the ANN based on the input reference data and the second output reference data. In some embodiments, the trained ANN may be used to accurately determine the configuration of Digital Predistortion (DPD) in a Power Amplifier (PA) system.

Description

Method, apparatus and computer readable medium for data processing
Technical Field
Non-limiting and example embodiments of the present disclosure relate generally to the field of data processing technology and, in particular, relate to methods, apparatuses, and computer program products for training an Artificial Neural Network (ANN).
Background
This section introduces aspects that may facilitate a better understanding of the disclosure. Accordingly, the statements in this section are to be read in this light and are not to be understood as admissions about what is or is not in the prior art.
Modern wireless services require efficient and linear transmission of Radio Frequency (RF) carriers modulated in amplitude as well as in phase by an envelope signal. The contradictory requirements of power efficiency and linearity place very stringent demands on the transmitter, in particular on its Power Amplifier (PA).
Although Class A PAs are best in terms of linearity, they are quite inefficient compared with other amplification classes (such as Class AB, Class C, and Doherty amplifiers). Higher efficiency, however, comes with higher nonlinearity, and the PA output will be distorted, often to the point that system performance requirements are not met. Class AB power amplifiers or other variants are therefore typically used with some suitable form of linearization scheme.
Digital Predistortion (DPD) is a popular method of compensating for PA nonlinearity. In a PA system with DPD, the transmission characteristic of the PA can be modeled by sampling the output of the PA and calculating its inverse. The digital baseband signal is then multiplied by the inverse of the nonlinear transmission characteristic of the PA, up-converted to the RF frequency, and applied to the PA input. In this way, the DPD engine can correct the output distortion of the PA while the PA operates at higher efficiency.
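For illustration only and not limitation, the following minimal Python sketch shows how a predistorter of this kind could apply a complex inverse-gain characteristic to a baseband signal. The memoryless lookup-table model and all names here (apply_dpd, inv_gain_lut) are assumptions for exposition, not the implementation claimed by this disclosure:

    import numpy as np

    def apply_dpd(x, inv_gain_lut, max_amp):
        # x: complex baseband samples.
        # inv_gain_lut: complex inverse-gain table indexed by input
        # amplitude (a memoryless toy stand-in for the DPD engine).
        amp = np.abs(x)
        idx = np.minimum(amp / max_amp * (len(inv_gain_lut) - 1),
                         len(inv_gain_lut) - 1).astype(int)
        # Multiplying by the complex inverse gain pre-compensates both
        # the AM-AM (amplitude) and AM-PM (phase) distortion of the PA.
        return x * inv_gain_lut[idx]

In this sketch, inv_gain_lut would be derived from the modeled PA characteristic; in the embodiments described below, the corresponding coefficients are instead produced by a trained ANN.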
The challenge of DPD techniques is that the distortion (i.e., nonlinearity) characteristics of the PA can vary with time, temperature, and bias, so it is not easy to design a correct predistortion algorithm.
Disclosure of Invention
Various embodiments of the present disclosure are generally directed to methods, apparatuses, and computer storage media for data processing.
In a first aspect of the present disclosure, a method of data processing is provided. The method comprises the following steps: obtaining input reference data and first output reference data for training the ANN; generating second output reference data by suppressing noise in the first output reference data; and training the ANN based on the input reference data and the second output reference data.
In some embodiments, generating the second output reference data may further comprise: second output reference data is generated by polynomial fitting based on the input reference data and the first output reference data. In some embodiments, generating the second output reference data may further comprise: the second output reference data is generated based on a Least Squares (LS) criterion.
In some embodiments, generating the second output reference data may include: determining an amplitude and a phase of the second output reference data based on the input reference data and the first output reference data, respectively; and generating second output reference data based on the determined amplitude and phase. In some other embodiments, determining the amplitude and phase of the second output reference data may include: determining an amplitude by polynomial fitting based on the amplitude of the first output reference data relative to the input reference data; and determining a phase by polynomial fitting based on the phase of the first output reference data relative to the input reference data.
In some embodiments, generating the second output reference data may include: determining an in-phase component and a quadrature component of the second output reference data based on the input reference data and the first output reference data, respectively; and generating second output reference data based on the determined in-phase component and quadrature component.
In some embodiments, the method may further comprise: determining, based on the trained ANN, parameters of the DPD to be applied to the PA. In some embodiments, obtaining the input reference data and the first output reference data may include: obtaining training data input to the PA as the input reference data; and obtaining feedback data output from the PA in response to the training data as the first output reference data.
In a second aspect of the present disclosure, an apparatus for data processing is provided. The device comprises: at least one processor; and at least one memory including computer program code; the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to: obtaining input reference data and first output reference data for training the ANN; generating second output reference data by suppressing noise in the first output reference data; and training the ANN based on the input reference data and the second output reference data.
In a third aspect of the present disclosure, another apparatus for data processing is provided. The device comprises: means for obtaining input reference data and first output reference data for training the ANN; means for generating second output reference data by suppressing noise in the first output reference data; and means for training the ANN based on the input reference data and the second output reference data.
In a fourth aspect of the present disclosure, a computer program is provided. The computer program comprises instructions which, when executed by an apparatus, cause the apparatus to perform a method according to the first aspect of the present disclosure.
In a fifth aspect of the present disclosure, there is provided a computer readable medium having stored thereon a computer program which, when executed by at least one processor of a device, causes the device to perform the method of the first aspect of the present disclosure.
In a sixth aspect of the present disclosure, an apparatus for communication is provided. The apparatus includes: a PA; and a DPD coupled to an input of the PA; wherein the parameters of the DPD are obtained based on an ANN trained with the input reference data and the output reference data; and wherein the output reference data is generated by suppressing noise in the feedback data output from the PA.
Drawings
The above and other aspects, features, and advantages of various embodiments of the present disclosure will become more fully apparent from the following detailed description with reference to the accompanying drawings, in which like reference numerals designate the same or equivalent elements. The accompanying drawings are included to provide a better understanding of embodiments of the disclosure and are not necessarily drawn to scale, in which:
fig. 1 illustrates a wireless communication network in which embodiments of the present disclosure may be implemented;
FIG. 2 illustrates a flow chart of a data processing method according to an example embodiment of the present disclosure;
FIG. 3 schematically illustrates a diagram of an ANN;
FIG. 4 illustrates an example of reconstructing clean training data via polynomial fitting according to embodiments of the present disclosure;
fig. 5 to 6 illustrate another example of reconstructing clean training data according to an embodiment of the present disclosure;
FIG. 7 illustrates a flow chart of a method of reconstructing clean training data in accordance with an embodiment of the present disclosure;
fig. 8 illustrates an example of configuring DPD in an ANN-based PA system according to an embodiment of the present disclosure;
FIG. 9 illustrates a plot of amplitude-to-amplitude characteristics of DPD configured based on an ANN trained with clean data, in accordance with an embodiment of the disclosure;
fig. 10 shows the original spectrum of a PA system without DPD;
fig. 11 shows the spectrum of a PA system with a conventional DPD;
fig. 12 shows the spectrum of a PA system with DPD designed according to embodiments of this disclosure; and
fig. 13 shows a simplified block diagram of an apparatus that may be used for data processing according to an example embodiment of the present disclosure.
Detailed Description
Hereinafter, the principles and spirit of the present disclosure will be described with reference to illustrative embodiments. It should be understood that all of these embodiments are presented merely to enable those skilled in the art to better understand and further practice the present disclosure, and not to limit its scope. For example, features illustrated or described as part of one embodiment can be used with another embodiment to yield still a further embodiment. In the interest of clarity, not all features of an actual implementation are described in this specification.
References in the specification to "one embodiment," "an example embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Furthermore, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
It will be understood that, although the terms "first" and "second," etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another element. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of example embodiments. As used herein, the term "and/or" includes any and all combinations of one or more of the listed terms.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit example embodiments. As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes," "including," "has," and "having," when used herein, specify the presence of stated features, elements, components, etc., but do not preclude the presence or addition of one or more other features, elements, components, and/or groups thereof.
As used in this application, the term "circuitry" may refer to one or more or all of the following:
(a) Pure hardware circuit implementations (such as implementations in analog and/or digital circuitry only); and
(b) A combination of hardware circuitry and software, such as (as applicable):
(i) Combination of analog and/or digital hardware circuit(s) and software/firmware, and
(ii) Any portions of hardware processor(s) with software (including digital signal processor(s)), software, and memory(ies) that work together to cause a device (such as a mobile phone or server) to perform various functions; and
(c) Hardware circuit(s) and/or processor(s), such as microprocessor(s) or a portion of microprocessor(s), that require software (e.g., firmware) for operation, but the software may not be present when it is not needed for operation.
This definition of circuitry applies to all uses of this term in this application, including in any claims. As another example, as used in this application, the term "circuitry" also covers an implementation of only a hardware circuit or processor (or processors) or a portion of a hardware circuit or processor and its (or their) accompanying software and/or firmware. The term circuitry also covers (e.g., and if applicable to the particular claim element) baseband integrated circuits or processor integrated circuits for a computing device.
As used herein, the term "communication network" refers to a network that conforms to any suitable communication standard, such as 5G, new Radio (NR), long Term Evolution (LTE), LTE-advanced (LTE-a), wideband Code Division Multiple Access (WCDMA), high Speed Packet Access (HSPA), etc. The "communication network" may also be referred to as a "communication system". Furthermore, communication between network devices, between network devices and terminal devices, or between terminal devices in a communication network may be performed according to any suitable communication protocol, including but not limited to global system for mobile communications (GSM), universal Mobile Telecommunications System (UMTS), long Term Evolution (LTE), new Radio (NR), 5G, wireless Local Area Network (WLAN) standards, such as the IEEE 802.11 standard, and/or any other suitable communication standard currently known or to be developed in the future.
As used herein, the term "network device" refers to a node in a communication network via which a terminal device receives services. For example, network devices may include, but are not limited to, base Stations (BS) and Node BS (NB), evolved NB (eNB), 5G NB (gNB), or Access Points (AP), etc.
The term "terminal device" refers to any terminal device that may be capable of communicating. By way of example, and not limitation, a terminal device may also be referred to as a communication device, a UE, a Subscriber Station (SS), a portable subscriber station, a Mobile Station (MS), or an Access Terminal (AT). The terminal devices may include, but are not limited to, mobile phones, cellular phones, smart phones, voice over IP (VoIP) phones, wireless local loop phones, tablets, wearable terminal devices, personal Digital Assistants (PDAs), portable computers, desktop computers, image capture terminal devices (such as digital cameras), gaming terminal devices, music storage and playback appliances, in-vehicle wireless terminal devices, wireless endpoints, mobile stations, notebook computer embedded appliances (LEEs), notebook computer mounted appliances (LMEs), USB dongles, smart devices, wireless Customer Premises Equipment (CPE), and the like. In the following description, the terms "terminal device", "communication device", "terminal", "user equipment" and "UE" may be used interchangeably.
As yet another example, in an Internet of Things (IoT) scenario, a terminal device may represent a machine or other device that performs monitoring and/or measurements and transmits the results of such monitoring and/or measurements to another terminal device and/or network device. In this case, the terminal device may be a machine-to-machine (M2M) device, which may be referred to as a Machine Type Communication (MTC) device in the 3GPP context. As one particular example, the terminal device may be a UE implementing the 3GPP narrowband Internet of Things (NB-IoT) standard. Examples of such machines or devices are sensors, metering devices (such as electricity meters), industrial machines, or household or personal appliances (e.g., refrigerators, televisions, and personal wearable devices such as watches). In other cases, the terminal device may represent a vehicle or other device capable of monitoring and/or reporting its operational status or other functions associated with its operation.
Fig. 1 illustrates an example wireless communication network 100 in which embodiments of the present disclosure may be implemented. As shown in fig. 1, a wireless communication network 100 may include one or more network devices (also referred to as network nodes), e.g., network device 101, where network device 101 may be in the form of an eNB or a gNB. It will be appreciated that the network device 101 may also be in the form of an NB, a Base Transceiver Station (BTS) and/or a Base Station Subsystem (BSS), an AP, etc. The network device 101 provides radio connectivity to a group of terminal devices (e.g., terminal device 102). Both the network device 101 and the terminal device 102 are equipped with a transmitter and a receiver (or transceiver) to enable communication therebetween.
The Power Amplifier (PA) is an important component in the transmitter (or transceiver) and must be carefully designed to achieve efficient communication. Class AB and class C PAs have been widely used in transmitters/transceivers due to their high efficiency. However, high efficiency is accompanied by high nonlinearity, which may cause system performance degradation and is undesirable.
By compensating for PA nonlinearity, system performance may be improved. DPD has been considered a candidate for such compensation. In a PA system with DPD, the input signal may be predistorted before entering the PA, and in this way the distortion at the PA output may be corrected.
The challenge of DPD techniques is that the distortion (i.e., nonlinearity) characteristics of the PA may change (e.g., over time, temperature, and bias), and thus, it may not be easy to determine the appropriate parameters/algorithms for DPD operation.
In the DPD field, the traditional approach to designing DPD parameters/algorithms is to use a Volterra series, some variant thereof, or a combination of a Volterra series and other techniques (e.g., orthogonal processing). However, these methods are often very complex and offer only limited capability for fitting strong nonlinearities.
Another way to determine the DPD parameters is to use a feedback mechanism, i.e., to sample the output signal of the PA and use it to correct the parameters of the DPD. This mechanism utilizes input training reference data and output reference data, where the output reference data may be collected from the feedback of the noisy and nonlinear PA. With this feedback mechanism, noise in the feedback may lead to an incorrect estimate of the transmission characteristics of the PA and hence to an improper DPD design.
In some embodiments of the present disclosure, it is proposed to configure the DPD based on an ANN. For low-pass equivalent behavioral modeling of wireless transmitters, both ANNs and Volterra series are of particular interest in the microwave field. The inventors of the present disclosure have observed that an ANN has a much stronger fitting ability than a Volterra series but performs poorly in noisy situations. While certain techniques (e.g., regularization within the ANN) may be used to reduce the sensitivity of an ANN to noise, their performance may not meet the requirements of DPD applications.
To address this and other similar issues, in some embodiments of the present disclosure, clean training data is constructed for the ANN. With clean training data, ANN-based DPD becomes better suited to wideband nonlinear applications.
In some embodiments, the original noisy training data (which may be obtained from the feedback of the PA) may be preprocessed (e.g., via polynomial fitting) to construct new, clean training data. This scheme overcomes the drawbacks of ANNs in noisy situations while retaining their strong nonlinearity-fitting capability. In some further embodiments, the new training data may be reconstructed based on some optimization criterion (e.g., the LS criterion).
By way of example and not limitation, in some embodiments, the amplitude-to-amplitude (AM-AM) and amplitude-to-phase (AM-PM) curves of the output reference data relative to the input training reference data are first calculated. A regression method (e.g., LS-based polynomial fitting) can then be used to fit these curves. The fitted polynomials may be used to reconstruct new output reference data that is noise-free or noise-suppressed while preserving the nonlinearity characteristics. Note that in some other embodiments, many other fitting functions (e.g., piecewise fitting) may be used for this purpose.
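As a concrete illustration of this preprocessing step, the sketch below reconstructs noise-suppressed output reference data from least-squares polynomial fits of the AM-AM and AM-PM curves (numpy.polyfit minimizes squared error, i.e., the LS criterion). The polynomial order and the small-amplitude guard are illustrative assumptions:

    import numpy as np

    def reconstruct_clean_output(x_in, y_noisy, order=7):
        # AM-AM curve: output amplitude versus input amplitude.
        a_i = np.abs(x_in)
        a_o = np.abs(y_noisy)
        # AM-PM curve: phase shift of the complex gain versus input
        # amplitude (guard against division by near-zero samples).
        safe_x = np.where(a_i > 1e-12, x_in, 1.0)
        pm = np.angle(y_noisy / safe_x)
        p_am = np.polyfit(a_i, a_o, order)  # LS fit of the AM-AM curve
        p_pm = np.polyfit(a_i, pm, order)   # LS fit of the AM-PM curve
        # Rebuild clean output samples from the fitted amplitude and
        # phase while keeping the phase of the input signal.
        a_clean = np.polyval(p_am, a_i)
        pm_clean = np.polyval(p_pm, a_i)
        return a_clean * np.exp(1j * (np.angle(x_in) + pm_clean))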
The reconstructed new output reference data is then used to train an ANN (e.g., a delay-tap backpropagation (BP) based ANN) to determine appropriate parameters for the DPD. Because noise in the reconstructed new output reference data is suppressed, the number of neurons in the ANN can be chosen to be higher to achieve better performance without causing overfitting.
To facilitate an understanding of the solutions presented herein, some embodiments will be described below with reference to fig. 2 to 13.
Fig. 2 illustrates an example method 200 according to an embodiment of this disclosure. The method may be implemented by a training apparatus, which may be implemented, for example, in a transceiver of the network device 101 or the terminal device 102 in fig. 1, or may provide an input to such a transceiver. However, it should be understood that the method 200 may also be implemented by other data-processing devices, apparatuses, or clouds. For purposes of illustration only and not limitation, the method 200 will be described below with reference to a training apparatus.
As shown in fig. 2, at block 210, the training apparatus obtains input reference data and first output reference data for training an ANN. Note that embodiments are not limited to any particular application of ANN. For purposes of illustration only and not limitation, ANN may be used to determine the configuration/parameters for DPD in PA. In such an embodiment, at block 210, the training device may obtain training data input to the PA as input reference data; and obtaining feedback data output from the PA in response to the training data as first output reference data.
In addition, embodiments are not limited to any particular structure of an ANN. For illustration only, fig. 3 schematically shows a diagram of a delay-tap BP ANN; however, it should be understood that embodiments of the present disclosure are not limited thereto. The example ANN shown in fig. 3 includes a plurality of neurons (represented as small circles in fig. 3). In addition, a tapped delay line (denoted by the symbol v in fig. 3) is employed at the input neurons to model the memory effect of the PA. In fig. 3, I_in and Q_in are the inputs to the ANN, and I_out and Q_out are its outputs. Although only one hidden layer is shown in the example ANN in fig. 3, it should be understood that in some embodiments the ANN may include multiple hidden layers. Furthermore, the symbol b in fig. 3 represents a threshold (bias), f represents an activation function, for which a sigmoid (S-shaped) function may be used, and w represents the coefficients of the ANN model to be learned via training.
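To make this structure concrete, here is a minimal numpy sketch of a forward pass through such a network. The tap count, layer sizes, and the wrap-around delay line are simplifying assumptions rather than the exact model of fig. 3:

    import numpy as np

    def ann_forward(i_in, q_in, w1, b1, w2, b2, taps=3):
        # Tapped delay line: the current and delayed I/Q samples form
        # the input vector (np.roll wraps around; fine for a sketch).
        feats = []
        for d in range(taps):
            feats.append(np.roll(i_in, d))
            feats.append(np.roll(q_in, d))
        x = np.stack(feats, axis=1)               # shape (n, 2 * taps)
        h = 1.0 / (1.0 + np.exp(-(x @ w1 + b1)))  # sigmoid activation f
        y = h @ w2 + b2                           # linear output layer
        return y[:, 0], y[:, 1]                   # I_out and Q_out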
At block 220, the training device generates second output reference data by suppressing noise in the first output reference data. In some embodiments, the first output reference data may be collected from feedback of the PA and may include noise. In this case, the relationship between the input reference data and the first output reference data cannot accurately reflect the transmission characteristics of the PA. By suppressing noise in the first output reference data, the second output reference data generated at block 220 is cleaner and more suitable for training the ANN.
Embodiments are not limited to any particular manner of suppressing noise in the first output reference data in order to obtain clean second output reference data at block 220. In other words, any suitable preprocessing technique now known or developed in the future may be used for this purpose. In some embodiments, for illustration only and not limitation, the training device may generate the second output reference data by polynomial fitting based on the input reference data and the first output reference data. For example, at block 220, the training device may generate the second output reference data by polynomial fitting based on the LS criterion.
Fig. 4 shows an example of reconstructing the second output reference data via a polynomial fit. In particular, it shows an AM-AM curve 410 of the first output reference data (obtained at block 210 and referred to as the original output) relative to the input reference data (also obtained at block 210 and referred to as the original input), and an AM-AM curve 420 reconstructed via a polynomial fit to curve 410. In fig. 4, the horizontal axis represents the amplitude of the input reference data (denoted herein as a_i), and the vertical axis represents the amplitude of the output reference data (denoted herein as a_o). As shown in fig. 4, the black dots forming the AM-AM curve 410 are scattered due to noise in the first output reference data. In contrast, the AM-AM curve 420 reconstructed by the polynomial fit to curve 410 is much thinner, which means the noise has been suppressed. The second output reference data may be derived directly from the AM-AM curve 420.
Alternatively or additionally, in some embodiments, the training device may determine an amplitude and a phase of the second output reference data based on the input reference data and the first output reference data, respectively, and generate the second output reference data based on the determined amplitude and phase.
By way of example and not limitation, at block 220, the training device may determine the magnitude of the second output reference data by polynomial fitting based on the magnitude of the first output reference data relative to the input reference data (e.g., based on an AM-AM gain curve of the first output reference data). Likewise, the training device may determine the phase of the second output reference data by polynomial fitting based on the phase of the first output reference data relative to the input reference data (e.g., based on an AM-PM gain curve of the first output reference data).
Fig. 5 to 6 show examples of reconstructing second output reference data via polynomial fitting based on AM-AM gain curves and AM-PM gain curves of the first output reference data.
In particular, fig. 5 shows a curve 510 of the AM gain of the first output reference data relative to the input reference data, and a curve 520 of the AM gain of the second output reference data (which is obtained at block 220 and may be referred to as reconstructed training data) relative to the input reference data. In fig. 5, the horizontal axis represents the amplitude of the input reference data (denoted herein as a_i), and the vertical axis represents the amplitude gain, which may be expressed as g_a = |a_o/a_i|. Curve 520 is obtained by a polynomial fit to curve 510, and the amplitude of the second output reference data is obtained accordingly. Clearly, the amplitude gain shown by curve 520 is much cleaner than that shown by curve 510, which indicates that the noise in the reconstructed second output reference data has been suppressed.
Likewise, fig. 6 shows a curve 610 of the PM gain of the first output reference data relative to the input reference data, and a curve 620 of the PM gain of the reconstructed training data relative to the input reference data. In fig. 6, the horizontal axis represents the amplitude of the input reference data (i.e., a_i), and the vertical axis represents the phase gain, which may be expressed as g_p = phase(a_o/a_i). Curve 620 is obtained by a polynomial fit to curve 610, and the phase of the second output reference data is obtained accordingly. Clearly, the phase gain shown by curve 620 is much cleaner than that shown by curve 610, which again indicates that the noise in the reconstructed second output reference data has been suppressed. The second output reference data, i.e., the clean training data, is then determined based on the amplitude obtained in fig. 5 and the phase obtained in fig. 6.
As another alternative, at block 220, the training device may generate second output reference data via operation 700 shown in fig. 7. Specifically, in the example shown in fig. 7, at block 710, the training device may determine an in-phase (I) component of the second output reference data based on the input reference data and the first output reference data, and at block 720, determine a quadrature (Q) component of the second output reference data based on the input reference data and the first output reference data; and at block 730, second output reference data is generated based on the determined I and Q components. Note that in some embodiments, each of the I and Q components may be generated in a manner similar to that described with reference to fig. 4-6.
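A sketch of this I/Q variant follows, under the assumption (one plausible reading of blocks 710 to 730) that the in-phase and quadrature parts of the complex gain are each fitted against the input amplitude:

    import numpy as np

    def reconstruct_clean_output_iq(x_in, y_noisy, order=7):
        a_i = np.abs(x_in)
        safe_x = np.where(a_i > 1e-12, x_in, 1.0)
        g = y_noisy / safe_x                   # noisy complex gain samples
        p_i = np.polyfit(a_i, g.real, order)   # fit of the I component
        p_q = np.polyfit(a_i, g.imag, order)   # fit of the Q component
        g_clean = np.polyval(p_i, a_i) + 1j * np.polyval(p_q, a_i)
        return x_in * g_clean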
Reference is now made back to fig. 2. At block 230, the training device trains the ANN based on the input reference data and the second output reference data, which was generated at block 220 and is cleaner than the original first output reference data. For purposes of illustration and not limitation, the criterion for training the ANN may include minimizing the sum of squared errors between the target data and the output of the ANN.
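The training criterion mentioned above can be written compactly; a sketch follows (the optimizer itself, e.g., backpropagation, is omitted):

    import numpy as np

    def sse(i_pred, q_pred, i_target, q_target):
        # Sum of squared errors over both output components; training
        # adjusts the ANN coefficients w to minimize this quantity.
        return np.sum((i_pred - i_target) ** 2 + (q_pred - q_target) ** 2)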
In some embodiments, the trained ANN may be used to determine the configuration/parameters for DPD that may be applied to the PA. That is, in some example embodiments, the method 200 may further include block 240, wherein the training device determines the configuration/parameters for the DPD based on the trained ANN.
Fig. 8 illustrates an example of configuring DPD in a PA system based on ANN in accordance with an embodiment of the present disclosure. For example, method 200 may be used to train an ANN for configuring DPD. As shown in fig. 8, data collected from the feedback chain of PA 801 (which may include attenuator 802, IQ modulator 803, and ADCs 804 and 805) is input to a preprocessing module 806 to generate clean training data prior to entering ANN 807. The feedback data input to the preprocessing module 806 may be represented by an I component i_out and a Q component q_out. For example, the preprocessing module 806 may use the operations described with reference to block 220 of the method 200 to generate clean training data having an I component i_out_cln and a Q component q_out_cln using the feedback data i_out and q_out as first output reference data. As shown in fig. 8, clean training data output from pre-processing module 806 is input to ANN 807 along with input reference data having I component i_in and Q component q_in to train ANN 807. Any suitable criteria, known or to be developed in the future, may be used for training, and embodiments are not limited to any particular training algorithm. In some embodiments, operations similar to those described with reference to block 230 of method 200 may be used for training.
The trained ANN 807 may then be used to determine parameters/coefficients for DPD 808 based on input reference data (i_in and q_in) that may be obtained from the input side of PA 801, e.g., prior to IQ modulator 809. As shown in fig. 8, a copy of the coefficient (Coeff) determined by ANN 807 is applied to DPD 808.
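Putting the pieces of fig. 8 together, a toy end-to-end flow might look as follows. The PA model, the noise level, and the reuse of the reconstruct_clean_output sketch from above are illustrative assumptions, not the patented system:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 4096
    x = 0.3 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

    # Toy memoryless PA: compressive gain, plus feedback-chain noise.
    y = x * (1.0 - 0.2 * np.abs(x) ** 2)
    y = y + 0.01 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

    # Preprocessing module 806: suppress noise in the feedback data.
    y_clean = reconstruct_clean_output(x, y)

    # ANN 807 would now be trained on (x, y_clean), and a copy of its
    # coefficients (Coeff) would be applied to DPD 808.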
Fig. 9 illustrates AM-AM characteristics of DPD configured based on ANNs trained with clean data according to embodiments of the present disclosure. The AM-AM characteristics of the DPD shown in fig. 9 are more accurate than those of the conventional DPD.
The accurate transmission characteristics of a DPD designed according to embodiments of the present disclosure result in better performance of the PA system, as shown in figs. 10 to 12. For comparison, fig. 10 shows the original spectrum of a PA system without DPD. As can be seen from fig. 10, the out-of-band level is about -70 dBm, which is only about 25 dB below the in-band response, indicating strong out-of-band interference.
Fig. 11 shows the spectrum of a PA system with a conventional DPD. It can be seen that the out-of-band level is about -90 dBm, which means that the out-of-band interference is reduced compared with fig. 10. Fig. 12 shows the spectrum of a PA system with a DPD designed according to an embodiment of this disclosure. In this case the out-of-band level is reduced to about -100 dBm, which means that the out-of-band interference is even lower than in the PA system with the conventional DPD shown in fig. 11.
Although some embodiments are described with reference to DPD and PA systems, it should be understood that the embodiments presented herein are not limited to such specific application scenarios. Instead, the proposed solution for obtaining clean training data for an ANN via preprocessing may be applied to any application where similar problems exist and/or where clean training data is required.
Note that in some embodiments, the training apparatus implementing method 200 may be part of an ANN. In another embodiment, the training device may be a separate device that may be connected to the ANN when desired.
Alternatively or additionally, the ANN and/or training device may be part of a DPD module. In another embodiment, the ANN and/or training apparatus may be connected to the DPD module only when needed.
In some embodiments, the ANN, training apparatus, and/or DPD module may be part of a PA system. In another embodiment, the ANN, training apparatus and/or DPD module may be connected to the PA system only when needed.
Some embodiments of the present disclosure also propose a device for communication, which may comprise a network device (e.g., network device 101 in fig. 1) or a terminal device (e.g., terminal device 102 in fig. 1). The device for communicating comprises a PA and a DPD coupled to an input of the PA. In addition, parameters of the DPD are obtained based on an ANN trained with the input reference data and the output reference data, and the output reference data is generated by suppressing noise in feedback data output from the PA, e.g., according to method 200.
Fig. 13 shows a simplified block diagram of an apparatus 1300, which apparatus 1300 may be embodied in/as a communication device, which may include, but is not limited to, a network device or a terminal device. In some embodiments, apparatus 1300 may be separate from the communication device and may be connected to the communication device when desired.
As shown in the example of fig. 13, the apparatus 1300 includes a processor 1310 that controls the operation and functions of the apparatus 1300. For example, in some embodiments, the processor 1310 may implement various operations by means of instructions 1330 stored in a memory 1320 coupled thereto. The memory 1320 may be of any type suited to the local technical environment and may be implemented using any suitable data storage technology, such as semiconductor-based memory devices, magnetic memory devices and systems, optical memory devices and systems, fixed memory, and removable memory, as non-limiting examples. In some example embodiments, the memory 1320 may be a non-transitory computer readable medium. Although only one memory unit is shown in fig. 13, in some embodiments there may be multiple physically distinct memory units in the apparatus 1300.
The processor 1310 may be of any type suited to the local technical environment and may include, as non-limiting examples, one or more of the following: general purpose computers, special purpose computers, microprocessors, Digital Signal Processors (DSPs), Central Processing Units (CPUs), Field Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), GPUs (graphics processing units), NPUs (neural network processing units), AI (artificial intelligence) accelerators, and processors based on a multi-core processor architecture. The apparatus 1300 may also include multiple processors 1310 in any combination.
The processor 1310 may also be coupled to one or more transceivers 1340, which transceivers 1340 enable communication with other apparatuses, modules, or devices. In some embodiments, processor 1310 and memory 1320 may cooperate to implement method 200 described with reference to fig. 2-7. It should be appreciated that all of the features described above with reference to fig. 2 to 12 may also be applicable to the apparatus 1300 and will therefore not be described in detail here.
Various embodiments of the present disclosure may be implemented by a computer program or computer program product executable by one or more of the following: a processor (e.g., processor 1310 in fig. 13), software, firmware, hardware, or a combination thereof.
Although some embodiments are described in the context of DPD and PA, they should not be construed to limit the spirit and scope of the present disclosure. The principles and concepts of the present disclosure may be more generally applied to other application scenarios.
In addition, the present disclosure may also provide a carrier (e.g., computer instructions/programs 1330 in fig. 13) containing a computer program as described above. The carrier includes a computer-readable storage medium. The computer readable storage medium may include, for example, an optical disk or an electronic memory device such as RAM (random access memory), ROM (read only memory), flash memory, magnetic tape, CD-ROM, DVD, blu-ray disk, etc.
The techniques described herein may be implemented by various means, such that an apparatus implementing one or more functions of a corresponding apparatus described with an embodiment comprises not only prior-art means but also means for implementing the one or more functions of the corresponding apparatus; the apparatus may comprise separate means for each separate function, or means that are configured to perform two or more functions. For example, the techniques may be implemented in hardware (e.g., circuitry or a processor), firmware, software, or a combination thereof. For firmware or software, implementation can be through modules (e.g., procedures, functions, and so on) that perform the functions described herein.
Some example embodiments herein have been described above with reference to block diagrams and flowchart illustrations of methods and apparatus. It will be understood that each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations, respectively, can be implemented by various means including computer program instructions. These computer program instructions may be loaded onto a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions which execute on the computer or other programmable data processing apparatus create means for implementing the functions specified in the flowchart block or blocks.
While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any implementations or of what may be claimed, but rather as descriptions of features of particular embodiments that may be specific to particular implementations. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Furthermore, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
It is obvious to a person skilled in the art that as technology advances, the inventive concept can be implemented in various ways. The above-described embodiments are given for illustration and not limitation of the present disclosure, and it is to be understood that modifications and variations may be made without departing from the spirit and scope of the disclosure, as will be readily appreciated by those skilled in the art. Such modifications and variations are considered to be within the purview of this disclosure and the appended claims. The scope of the present disclosure is defined by the appended claims.

Claims (18)

1. An apparatus for data processing, comprising:
at least one processor; and
at least one memory including computer program code;
the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to:
obtain input reference data and first output reference data for training an artificial neural network (ANN);
generate second output reference data by suppressing noise in the first output reference data;
train the ANN based on the input reference data and the second output reference data; and
determine, based on the trained ANN, parameters of a digital predistortion (DPD) to be applied to a power amplifier (PA).
2. The apparatus of claim 1, wherein the at least one memory and the computer program code are configured to, with the at least one processor, further cause the apparatus to:
obtain, as the input reference data, training data input to the PA; and
obtain, as the first output reference data, feedback data output from the PA in response to the training data.
3. The apparatus of claim 1, wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus to:
generate the second output reference data by polynomial fitting based on the input reference data and the first output reference data.
4. An apparatus according to claim 3, wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus to:
generate the second output reference data based on a least squares (LS) criterion.
5. The apparatus according to any of claims 1 to 4, wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus to:
determine an amplitude and a phase of the second output reference data based on the input reference data and the first output reference data, respectively; and
generate the second output reference data based on the determined amplitude and phase.
6. The apparatus of claim 5, wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus to:
determine the amplitude by polynomial fitting based on the amplitude of the first output reference data relative to the input reference data; and
determine the phase by polynomial fitting based on the phase of the first output reference data relative to the input reference data.
7. The apparatus according to any of claims 1 to 4, wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus to:
determine an in-phase component and a quadrature component of the second output reference data based on the input reference data and the first output reference data, respectively; and
generate the second output reference data based on the determined in-phase and quadrature components.
8. A method of data processing, comprising:
obtaining input reference data and first output reference data for training an artificial neural network (ANN);
generating second output reference data by suppressing noise in the first output reference data;
training the ANN based on the input reference data and the second output reference data; and
determining, based on the trained ANN, parameters of a digital predistortion (DPD) to be applied to a power amplifier (PA).
9. The method of claim 8, wherein obtaining the input reference data and the first output reference data comprises:
obtaining training data input to the PA as the input reference data; and
obtaining feedback data output from the PA in response to the training data as the first output reference data.
10. The method of claim 8, wherein generating the second output reference data further comprises:
generating the second output reference data by polynomial fitting based on the input reference data and the first output reference data.
11. The method of claim 10, wherein generating the second output reference data further comprises:
generating the second output reference data by polynomial fitting based on a least squares (LS) criterion.
12. The method of any of claims 8 to 11, wherein generating the second output reference data comprises:
determining an amplitude and a phase of the second output reference data based on the input reference data and the first output reference data, respectively; and
generating the second output reference data based on the determined amplitude and phase.
13. The method of claim 12, wherein determining the amplitude and the phase of the second output reference data comprises:
determining the amplitude by polynomial fitting based on the amplitude of the first output reference data relative to the input reference data; and
determining the phase by polynomial fitting based on the phase of the first output reference data relative to the input reference data.
14. The method of any of claims 8 to 11, wherein generating the second output reference data comprises:
determining an in-phase component and a quadrature component of the second output reference data based on the input reference data and the first output reference data, respectively; and
generating the second output reference data based on the determined in-phase and quadrature components.
15. An apparatus for data processing, comprising:
means for obtaining input reference data and first output reference data for training an artificial neural network (ANN);
means for generating second output reference data by suppressing noise in the first output reference data;
means for training the ANN based on the input reference data and the second output reference data; and
means for determining, based on the trained ANN, parameters of a digital predistortion (DPD) to be applied to a power amplifier (PA).
16. The apparatus of claim 15, wherein the means comprises:
at least one processor; and
at least one memory including computer program code, the at least one memory and the computer program code being configured to, with the at least one processor, cause the performance of the apparatus.
17. A computer readable medium having stored thereon a computer program which, when executed by at least one processor of a device, causes the device to perform the method according to any of claims 8 to 14.
18. An apparatus for communication, comprising:
a power amplifier (PA); and
a digital predistortion (DPD) module coupled to an input of the PA;
wherein parameters of the DPD are obtained based on an artificial neural network (ANN) trained with input reference data and output reference data; and
wherein the output reference data is generated by suppressing noise in feedback data output from the PA.
CN201880094548.3A 2018-07-26 2018-07-26 Method, apparatus and computer readable medium for data processing Active CN112262369B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/097217 WO2020019240A1 (en) 2018-07-26 2018-07-26 Method, apparatus and computer readable media for data processing

Publications (2)

Publication Number Publication Date
CN112262369A 2021-01-22
CN112262369B 2024-04-02

Family

Family ID: 69180322

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880094548.3A Active CN112262369B (en) 2018-07-26 2018-07-26 Method, apparatus and computer readable medium for data processing

Country Status (2)

Country Link
CN (1) CN112262369B (en)
WO (1) WO2020019240A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11431300B2 (en) 2020-06-12 2022-08-30 Nokia Technologies Oy Machine learning based digital pre-distortion for power amplifiers

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5121691B2 (en) * 2008-12-22 2013-01-16 株式会社東芝 Distortion compensator, transmitter, distortion compensation method
US10074380B2 (en) * 2016-08-03 2018-09-11 Apple Inc. System and method for performing speech enhancement using a deep neural network-based signal

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1288341A (en) * 1999-09-14 2001-03-21 朗迅科技公司 Method and device for reducing adjacent channel power in radio communication system
CN1453968A (en) * 2002-04-23 2003-11-05 华为技术有限公司 Method of raising efficiency of RF power amplifier based on base band digital predistortion technology
CN101320960A (en) * 2008-07-18 2008-12-10 东南大学 Power amplifier predistortion method of Hammerstein model based on fuzzy neural network
CN101686069A (en) * 2008-09-24 2010-03-31 大唐移动通信设备有限公司 Device and method for calibrating predistortion in time division mobile communication system
CN102082751A (en) * 2009-11-27 2011-06-01 电子科技大学 Neural network pre-distortion method based on improved MLBP (Levenberg-Marquardt back propagation) algorithm
CN101764577A (en) * 2009-12-16 2010-06-30 电子科技大学 Baseband pre-distortion power amplifier linearization method based on one-way feedback and non-iterative technique
KR20110105318A (en) * 2010-03-18 2011-09-26 한국방송공사 Apparatus and method for digital predistortion using adaptive noise cancelation
CN102055696A (en) * 2010-12-06 2011-05-11 西安电子科技大学 Digital predistortion system for inhibiting noise of feedback signal
CN102427336A (en) * 2011-11-30 2012-04-25 上海瑞和安琦通信科技有限公司 Radio frequency power amplification system with function of adaptive digital predistortion linearization
CN103685110A (en) * 2013-12-17 2014-03-26 京信通信系统(中国)有限公司 Predistortion processing method and system and predistortion factor arithmetic unit
CN103731105A (en) * 2014-01-03 2014-04-16 东南大学 Amplifier digital pre-distortion device and method based on dynamic fuzzy neural network
CN107834983A (en) * 2017-10-18 2018-03-23 宁波大学 A kind of digital pre-distortion linearization parameter extracting method based on cloud platform

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Huichu Liu; "Tunnel FET-based ultra-low power, low-noise amplifier design for bio-signal acquisition"; Proceedings of the 2014 International Symposium on Low Power Electronics and Design; 2014-08-31; full text *
Li Qing (李庆); "Research on a GSM-R digital fiber-optic repeater scheme for the Baoxi (Baotou-Xi'an) Railway" (包西铁路GSM-R数字光纤直放站方案研究); Railway Standard Design (铁道标准设计); 2013-11-25 (12); full text *

Also Published As

Publication number Publication date
CN112262369A (en) 2021-01-22
WO2020019240A1 (en) 2020-01-30

Similar Documents

Publication Publication Date Title
KR102605423B1 (en) System and method for frequency-domain weighted least square for aclr optimization
CN107437927B (en) Method and apparatus for signal predistortion
US11394412B2 (en) Predistortion circuit, method for generating a predistorted baseband signal, control circuit for a predistortion circuit, method to determine parameters for a predistortion circuit, and apparatus and method for predistorting a baseband signal
US8385391B2 (en) Closed-loop receiver feedback pre-distortion
US11082013B2 (en) Method of reducing memory effect of power amplifier
JP2017509179A (en) Method for obtaining digital predistortion parameter and predistortion system
US20220200540A1 (en) Model trainer for digital pre-distorter of power amplifiers
US9755583B2 (en) Using fractional delay computations to improve intermodulation performance
JP6554265B2 (en) Baseband digital predistortion architecture
EP2907236A1 (en) Method and apparatus for predicting signal characteristics for a nonlinear power amplifier
US11424773B2 (en) Low complexity transmitter structure for active antenna systems
CN106470018B (en) Frequency error factor in time-domain digital predistortion
CN110720201A (en) Output power adjusting method and related product
CN112262369B (en) Method, apparatus and computer readable medium for data processing
US20140250309A1 (en) Predictive self calibrated power control
US20140218107A1 (en) Method and apparatus for applying predistortion to an input signal for a nonlinear power amplifier
US8824984B2 (en) Outphasing power combining by antenna
US20150055731A1 (en) Digital Transmitter With Sample Rate Digital Predistortion
CN111373706B (en) Output power adjusting method and related product
US20200220564A1 (en) Transceivers for a wireless communication system, mobile device, and method for improving transceiver loopback calibration accuracy
WO2018191967A1 (en) Non-linear distortion mitigation for power amplifier
Zeleny et al. Receiver-aided predistortion of power amplifier non-linearities in cellular networks
US9432062B2 (en) Polar noise shaping
US20200412305A1 (en) Method and arrangement for compensating memory effects in power amplifier
WO2023230819A1 (en) Digital predistortion method and apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant