WO2022018487A1 - State estimation for aerial user equipment (UEs) operating in a wireless network - Google Patents

State estimation for aerial user equipment (UEs) operating in a wireless network

Info

Publication number
WO2022018487A1
Authority
WO
WIPO (PCT)
Prior art keywords
time
model
aerial
wireless network
models
Application number
PCT/IB2020/056861
Other languages
French (fr)
Inventor
Sholeh YASINI
Torbjörn WIGREN
Original Assignee
Telefonaktiebolaget Lm Ericsson (Publ)
Application filed by Telefonaktiebolaget Lm Ericsson (Publ)
Priority to PCT/IB2020/056861
Publication of WO2022018487A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W64/00Locating users or terminals or network equipment for network management purposes, e.g. mobility management
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S11/00Systems for determining distance or velocity not using reflection or reradiation
    • G01S11/02Systems for determining distance or velocity not using reflection or reradiation using radio waves
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/66Radar-tracking systems; Analogous systems
    • G01S13/72Radar-tracking systems; Analogous systems for two-dimensional tracking, e.g. combination of angle and range tracking, track-while-scan radar
    • G01S13/723Radar-tracking systems; Analogous systems for two-dimensional tracking, e.g. combination of angle and range tracking, track-while-scan radar by using numerical data
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/74Systems using reradiation of radio waves, e.g. secondary radar systems; Analogous systems
    • G01S13/76Systems using reradiation of radio waves, e.g. secondary radar systems; Analogous systems wherein pulse-type signals are transmitted
    • G01S13/765Systems using reradiation of radio waves, e.g. secondary radar systems; Analogous systems wherein pulse-type signals are transmitted with exchange of information between interrogator and responder
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S5/00Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
    • G01S5/0009Transmission of position information to remote stations
    • G01S5/0018Transmission from mobile station to base station
    • G01S5/0027Transmission from mobile station to base station of actual mobile position, i.e. position determined on mobile
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S5/00Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
    • G01S5/02Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using radio waves
    • G01S5/0278Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using radio waves involving statistical or probabilistic considerations
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G5/00Traffic control systems for aircraft, e.g. air-traffic control [ATC]
    • G08G5/0017Arrangements for implementing traffic-related aircraft activities, e.g. arrangements for generating, displaying, acquiring or managing traffic information
    • G08G5/0026Arrangements for implementing traffic-related aircraft activities, e.g. arrangements for generating, displaying, acquiring or managing traffic information located on the ground
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G5/00Traffic control systems for aircraft, e.g. air-traffic control [ATC]
    • G08G5/0047Navigation or guidance aids for a single aircraft
    • G08G5/0069Navigation or guidance aids for a single aircraft specially adapted for an unmanned aircraft
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G5/00Traffic control systems for aircraft, e.g. air-traffic control [ATC]
    • G08G5/0073Surveillance aids
    • G08G5/0082Surveillance aids for monitoring traffic from a ground station
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W64/00Locating users or terminals or network equipment for network management purposes, e.g. mobility management
    • H04W64/006Locating users or terminals or network equipment for network management purposes, e.g. mobility management with additional information processing, e.g. for direction or speed determination
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S2205/00Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
    • G01S2205/01Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations specially adapted for specific applications
    • G01S2205/03Airborne

Definitions

  • DL-AoD: DL angle of departure
  • gNB or LMF calculates the UE angular position based upon UE DL RSRP measurement results (e.g., of PRS transmitted by network nodes).
  • Positioning servers e.g., E-SMLC and SLP, over standardized or proprietary interfaces
  • Another exemplary technique is to apply a model of the ground altitude within each cell of the network.
  • An example of this technique is disclosed in “Wireless hybrid positioning based on surface modeling with polygon support”, which was published in Proc. VTC 2018 Spring, June 2018 and is incorporated by reference in its entirety.
  • Figure 10 shows a topography intended to describe a coastal region, with zero altitude representing mean sea level. The coverage areas of the cells are depicted with overlaid contour lines.
  • This exemplary technique was used to compute a 3D surface estimate for each cell, with the result depicted in Figure 11.
  • the exemplary technique can reduce the maximum vertical errors by approximately 80%.
  • the vertical minimum mean-square error (MMSE) is less than 3 m in almost 50% of the coverage areas of the cells shown in Figures 10-11.
  • the matrices F, H, Q, and R are assumed known and possibly time varying. In other words, the system can be time varying and the noises nonstationary.
  • the initial state x(0), in general unknown, is modeled as a random variable, Gaussian distributed with known mean and covariance.
  • the two noise sequences and the initial state are assumed mutually independent, which is also referred to as a “Linear-Gaussian (LG) assumption.”
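Read together, these assumptions describe the familiar linear-Gaussian state-space model, written out below for reference; the notation is conventional tracking notation rather than text quoted from the application.

    x(t_{k+1}) = F(t_k) x(t_k) + v(t_k),    v(t_k) ~ N(0, Q(t_k))
    y(t_k)     = H(t_k) x(t_k) + e(t_k),    e(t_k) ~ N(0, R(t_k))
    x(0)       ~ N(x_0, P_0),   with {v(t_k)}, {e(t_k)}, and x(0) mutually independent.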
  • Design of an IMM filter requires three primary choices or decisions.
  • the first choice is selection and definition of movement modes for the object whose state is to be estimated. This amounts to definition of a state space model for each movement mode. This can include a vector difference equation that defines the model dynamics and a static vector equation that defines the measurement relation, thereby mapping states to measurements.
  • the inaccuracies or uncertainties of the measurement relation and the model dynamics need to be specified in terms of respective covariance matrices.
  • the measurement may be integrated by augmentation of the measurement matrix, or it can be handled in a separate measurement update. In case the OTDOA measurements do not contain altitude information, the barometric measurement is needed.
  • the measurement is one-dimensional and expressed with the linear measurement matrix (assuming it has been transformed to altitude)
  • This exemplary scenario involves the following operations by a drone whose state needs to be estimated.
  • Figure 18 also shows a third option in which state estimator 1530 is in an external “cloud” together with external client 1880.
  • state estimator 1530 can communicate with CN 1895 via AMF 1830 and/or MME 1860, essentially operating as another external LCS client.
  • the state estimator can also function as a “server” to external client 1880, providing drone state estimate information upon request from external client 1880.
  • Various signaling protocols can be defined for this purpose.
  • virtualization layer 2050 can be used to provide VMs that are abstracted from the underlying hardware nodes 2030.
  • processing circuitry 2060 executes software 2095 to instantiate virtualization layer 2050, which can sometimes be referred to as a virtual machine monitor (VMM).
  • virtualization layer 2050 can present a virtual operating platform that appears like networking hardware to containers and/or pods hosted by environment 2000.
  • each VM e.g., as facilitated by virtualization layer 2050
  • Each VM can have dedicated hardware nodes 2030 or can share resources of one or more hardware nodes 2030 with other VMs.
  • each application 2040 can be arranged in a pod, which can include one or more containers 2041, such as 2041a-b shown for a particular application 2040 in Figure 20.
  • Containers 2041a-b can encapsulate respective services 2042a-b of the particular application 2040.
  • a “pod” (e.g., a Kubernetes pod)
  • Each pod can include a plurality of resources shared by containers within the pod (e.g., resources 2043 shared by containers 2041a-b).

Abstract

Embodiments include methods for estimating movement of an aerial user equipment (UE) in a wireless network based on a plurality of movement models. Such methods include obtaining, from the wireless network, position measurements for the aerial UE at a second time subsequent to a first time, and determining a state of an interacting multiple-model (IMM) for movement of the aerial UE at the second time based on the position measurements. The IMM model includes a first constant-velocity model, a second constant-acceleration model, a third constant-position model, and estimated probabilities associated with the respective models. Such methods also include determining a movement state for the aerial UE at the second time based on combining the models according to their estimated probabilities. Other embodiments include positioning nodes configured to perform such methods.

Description

STATE ESTIMATION FOR AERIAL USER EQUIPMENT (UEs) OPERATING IN A WIRELESS NETWORK
TECHNICAL FIELD
The present disclosure generally relates to wireless networks, and particularly relates to estimating movement states for airborne user equipment (also referred to as “aerial UEs” or “drones”) that communicate with their operators via a wireless network (e.g., cellular network).
BACKGROUND
Long Term Evolution (LTE) is an umbrella term for so-called fourth generation (4G) radio access technologies developed within the Third-Generation Partnership Project (3GPP) and initially standardized in Release 8 (Rel-8) and Release 9 (Rel-9), also known as Evolved Universal Mobile Telecommunications System (UMTS) Terrestrial Radio Access Network (UTRAN) (E-UTRAN). LTE is targeted at various licensed frequency bands and is accompanied by improvements to non-radio aspects commonly referred to as System Architecture Evolution (SAE), which includes the Evolved Packet Core (EPC) network. LTE continues to evolve through subsequent releases that are developed according to standards-setting processes within 3GPP and its working groups (WGs), including the Radio Access Network (RAN) WG, and sub-working groups (e.g., RAN1, RAN2, etc.).
LTE Rel-10 supports bandwidths larger than 20 MHz. One important requirement on Rel-10 is backward compatibility with LTE Rel-8. This also includes spectrum compatibility, in which a wideband LTE Rel-10 carrier (e.g., wider than 20 MHz) should appear as multiple carriers to an LTE Rel-8 (“legacy”) terminal (“user equipment” or UE). Each such carrier can be referred to as a Component Carrier (CC). For efficient usage, legacy terminals can be scheduled in all parts of the wideband LTE Rel-10 carrier. This can be done by Carrier Aggregation (CA), in which a Rel-10 terminal receives multiple CCs, each having the same structure as a Rel-8 carrier. LTE Rel-12 introduced dual connectivity (DC) whereby a UE can be connected to two network nodes simultaneously, thereby improving connection robustness and/or capacity.
Communications between an LTE network and user equipment (UEs) are based on a multi-layer protocol stack that includes Physical (PHY), Medium Access Control (MAC), Radio Link Control (RLC), Packet Data Convergence Protocol (PDCP), and Radio Resource Control (RRC) layers. The multiple access scheme for the LTE PHY is based on Orthogonal Frequency Division Multiplexing (OFDM) with a cyclic prefix (CP) in the downlink (DL), e.g., E-UTRAN to user equipment (UE), and on Single-Carrier Frequency Division Multiple Access (SC-FDMA) with a cyclic prefix in the uplink (UL), e.g., UE to E-UTRAN. An LTE E-UTRAN comprises a plurality of evolved Node Bs (eNBs), each of which communicates with UEs via one or more cells.
Currently, the fifth generation (“5G”) of cellular systems, also referred to as New Radio (NR), is being standardized within the Third-Generation Partnership Project (3GPP). NR is developed for maximum flexibility to support many different use cases. These include mobile broadband, machine type communication (MTC), ultra-reliable low-latency communication (URLLC), side-link device-to-device (D2D), and several other use cases. Fifth-generation NR technology shares many similarities with fourth-generation LTE, particularly in relation to the protocol layers and radio interface. In addition to providing coverage via cells, as in LTE, NR networks also provide coverage via “beams.” In general, a DL “beam” is a coverage area of a network-transmitted reference signal (RS) that may be measured or monitored by a UE. UL beams are transmitted by UEs in a similar manner.
3GPP standards provide various ways for positioning (e.g., determining the position of, locating, and/or determining the location of) UEs operating in LTE networks. In general, an LTE positioning node (referred to as Evolved Serving Mobile Location Center, “E-SMLC” or “location server”) configures a target device (e.g., UE), an eNB, and/or a radio network node dedicated for positioning measurements (e.g., a “location measurement unit” or “LMU”) to perform one or more positioning measurements according to one or more positioning methods. For example, the positioning measurements can include timing (and/or timing difference) measurements on UE, network, and/or satellite transmissions. The positioning measurements are used by the target device (e.g., UE), the measuring node, and/or the E-SMLC to determine the location of the target device. UE positioning (also referred to as “location services” or LCS) is also expected to be an important feature for NR networks.
Airborne radio-controlled drones (i.e., unmanned aerial vehicles, or UAVs for short) are becoming more and more common. Conventionally, drones have been limited to operating within the propagation range of radio signals from the dedicated or associated controllers used by drone operators. A recent trend, however, is to extend drone operational range by attaching an LTE UE to the drone and coupling the UE to the drone’s navigation system, thereby creating an “airborne UE” or “aerial UE”. With this arrangement, the drone can be remotely controlled over the cellular network across a much wider range covering multiple cells, limited primarily by the drone’s battery capacity. In some markets, this is already being regulated, such as by requiring UEs attached to drones in this manner to be registered as aerial UEs. Even so, many operators fail (or refuse) to register their aerial UEs, such that these drones become “rogue drones”. In the following, the terms “aerial UE” and “drone” are used interchangeably unless otherwise noted.
Aerial UEs need to be restricted in flight for various reasons. For example, aerial UEs may experience radio propagation conditions that are different than those experienced by a conventional UE on or close to the ground. When an aerial UE is flying at a low altitude relative to a base station antenna height, the aerial UE behaves like a conventional UE. When the aerial UE is flying well above the base station antenna height, however, the uplink signal from the aerial UE can be received by multiple (e.g., many) cells since the lack of obstructions at this height creates highly favorable (e.g., line-of-sight) propagation conditions.
As such, the uplink signal from the aerial UE can increase interference in neighbor cells. Increased interference negatively impacts conventional UEs (e.g., smartphones, Internet-of-Things (IoT) devices, etc.) on or near the ground. Thus, the network may need to limit the admission of aerial UEs in the network to restrict the impact on the performance of conventional UEs. Furthermore, because base station antenna beam patterns are typically down-tilted (e.g., negative elevation angle) to serve UEs at or near ground level, conventional UEs typically receive from/transmit to the antenna pattern’s main lobe. However, aerial UEs flying significantly above antenna height are likely served by the antenna pattern’s side lobes, which can differ significantly within a small area. Accordingly, aerial UEs may experience sudden signal loss that can cause the operator to lose control of the drone.
Furthermore, aerial UEs can create hazardous situations when flying illegally in certain parts of the airspace. For example, rogue drones have endangered commercial air traffic by flying in restricted airspace near major airports, with many such events reported in both Europe and the U.S. In fact, in early 2019 there were several such events that temporarily closed Heathrow, Gatwick, and Newark international airports. Other hazardous situations include entry into military restricted areas and airspace over densely populated areas where a crash would likely cause human injuries.
Accordingly, it can be beneficial to restrict and/or limit aerial UEs operating as “rogue drones” in such scenarios. Possible solutions include limiting, releasing, and/or disconnecting aerial UE communications with the network (e.g., E-UTRAN) and/or alerting relevant government authorities who can take appropriate action against rogue drone operation. A prerequisite for these and other solutions is network knowledge of the current position, speed, and directional bearing (collectively “state”) of aerial UEs. Although conventional UEs can provide such information based on 3GPP LCS techniques, operators of aerial UEs often disable such features, leaving it up to the network to determine or estimate the current state of aerial UEs based on measurements made by the network.
Although there are existing techniques for estimating the state of a moving object based on measurements, these have various problems, issues, and/or difficulties when applied to state estimation for aerial UEs or drones, particularly in relation to a drone’s unique movement patterns.
SUMMARY
Embodiments of the present disclosure provide specific improvements to controlling aerial UEs engaged in unauthorized aerial operation (e.g., as “rogue drones”), such as by providing, enabling, and/or facilitating solutions to overcome exemplary problems summarized above and described in more detail below.
Some embodiments include methods (e.g., procedures) for estimating movement of an aerial user equipment (UE, e.g., drone) in a wireless network based on a plurality of movement models. These exemplary methods can be implemented in a positioning node (e.g., E-SMLC, Secure User Plane Location (SUPL) Location Platform (SLP), Location Management Function (LMF), etc.) in a wireless network (e.g., EPC, 5G Core (5GC)), a different type of node in the wireless network, or in a cloud environment outside of the wireless network.
These exemplary methods can include obtaining, from the wireless network, position measurements for the aerial UE at a second time subsequent to a first time. These exemplary methods can also include determining a state of an interacting multiple-model (IMM) for movement of the aerial UE at the second time based on the position measurements. The IMM model can include a first constant-velocity model, a second constant-acceleration model, a third constant-position model, and estimated probabilities associated with the respective models. These exemplary methods can also include determining a movement state for the aerial UE at the second time based on combining the models according to their estimated probabilities.
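For concreteness, one conventional realization of these three movement modes (an illustrative choice of dynamics under assumed notation, not matrices taken from this disclosure) uses a sampling period T, three-dimensional position p, velocity v, and acceleration a:

    constant velocity:      x = [p; v],      F_CV = [ I_3   T*I_3 ;  0   I_3 ]
    constant acceleration:  x = [p; v; a],   F_CA = [ I_3   T*I_3   (T^2/2)*I_3 ;  0   I_3   T*I_3 ;  0   0   I_3 ]
    constant position:      x = [p],         F_CP = I_3   (hovering modeled as a position random walk)

with state propagation x(t_{k+1}) = F x(t_k) + v(t_k), process noise covariances Q_CV, Q_CA, Q_CP expressing how far each mode may deviate from its nominal motion, and a measurement relation y(t_k) = H x(t_k) + e(t_k) mapping the state to the position measurements.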
In some embodiments, the IMM can also include a Hidden Markov Model (HMM) comprising respective probabilities of transition from any of the models at the first time to any of the models at the second time. In some of these embodiments, the following probabilities of transition can be much smaller than one:
• from the first model at the first time to the third model at the second time, and
• from the third model at the first time to the first model at the second time.
Furthermore, in some of these embodiments, the above-listed probabilities of transition can be much smaller than all other probabilities of transition comprising the HMM model. In some embodiments, determining the state of the IMM can include determining a mixed initial condition associated with each model based on the states of the models at the first time and the HMM; determining a likelihood function associated with each model based on the associated mixed initial condition and the position measurements; and determining the associated estimated probability at the second time based on the associated likelihood function and an associated estimated probability at the first time.
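In the standard IMM notation (a sketch of the well-known recursion, with p_ij denoting the HMM probability of transition from model i to model j and mu_i(k) the estimated probability of model i at time k; this notation is an assumption for illustration, not quoted from the claims), these three steps read:

    mixing probabilities:     mu_(i|j)(k-1) = p_ij * mu_i(k-1) / c_j,   where c_j = sum_i p_ij * mu_i(k-1)
    mixed initial condition:  x_0j(k-1) = sum_i mu_(i|j)(k-1) * x_i(k-1)   (with the correspondingly mixed covariance)
    probability update:       mu_j(k) = L_j(k) * c_j / sum_l L_l(k) * c_l

where L_j(k) is the likelihood of the new position measurements under model j, evaluated from the innovation of the j-th filter started at the mixed initial condition.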
In some embodiments, determining the state of the IMM can also include determining states for the respective models at the second time based on the position measurements and corresponding states for the respective models at the first time. In some embodiments, the states of the respective models can be determined using respective Kalman filters. Furthermore, determining the movement state for the aerial UE can include combining the states of the respective models at the second time according to their respective estimated probabilities at the second time.
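The sketch below pulls these steps together into one IMM cycle, using NumPy and per-model Kalman filters. It is an illustration under stated assumptions rather than an implementation from this disclosure: the two-dimensional horizontal state vector, the noise levels, the transition matrix P_TRANS, and all function and variable names are chosen for the example. The models are ordered as in the summary (constant velocity, constant acceleration, constant position), and the transition probabilities between the first and third models are kept much smaller than the others.

    # Illustrative IMM cycle (assumed models and parameters, not from the application text).
    import numpy as np

    T = 1.0                                   # sampling period between first and second time [s]
    I2, Z2 = np.eye(2), np.zeros((2, 2))

    def make_models():
        """(F, Q) pairs for constant-velocity, constant-acceleration and constant-position
        modes, all on a common 6-state vector [x, y, vx, vy, ax, ay]."""
        F_cv = np.block([[I2, T*I2, Z2], [Z2, I2, Z2], [Z2, Z2, Z2]])
        F_ca = np.block([[I2, T*I2, 0.5*T*T*I2], [Z2, I2, T*I2], [Z2, Z2, I2]])
        F_cp = np.block([[I2, Z2, Z2], [Z2, Z2, Z2], [Z2, Z2, Z2]])   # hovering: position random walk
        Q_cv = np.diag([0.5, 0.5, 1.0, 1.0, 1e-3, 1e-3])
        Q_ca = np.diag([0.5, 0.5, 1.0, 1.0, 2.0, 2.0])
        Q_cp = np.diag([0.5, 0.5, 1e-3, 1e-3, 1e-3, 1e-3])
        return [(F_cv, Q_cv), (F_ca, Q_ca), (F_cp, Q_cp)]

    H = np.hstack([I2, np.zeros((2, 4))])     # horizontal position measurement only
    R = 25.0 * I2                             # assumed measurement covariance [m^2]

    # HMM transition probabilities; CV<->CP entries are much smaller than one (and than the rest)
    P_TRANS = np.array([[0.94, 0.05, 0.01],
                        [0.05, 0.90, 0.05],
                        [0.01, 0.05, 0.94]])

    def kf_step(F, Q, x, P, z):
        """One Kalman predict + update; returns posterior state, covariance and likelihood."""
        x_pred, P_pred = F @ x, F @ P @ F.T + Q
        y = z - H @ x_pred                                        # innovation
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        x_new = x_pred + K @ y
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        lik = np.exp(-0.5 * y @ np.linalg.solve(S, y)) / np.sqrt(np.linalg.det(2*np.pi*S))
        return x_new, P_new, lik

    def imm_cycle(models, xs, Ps, mu, z):
        """One IMM cycle: mix, run each model's filter, update probabilities, combine."""
        r = len(models)
        c_bar = P_TRANS.T @ mu                                    # normalizing constants c_j
        mix = (P_TRANS * mu[:, None]) / c_bar[None, :]            # mix[i, j] = mu_{i|j}
        xs_new, Ps_new, lik = [], [], np.zeros(r)
        for j, (F, Q) in enumerate(models):
            x0 = sum(mix[i, j] * xs[i] for i in range(r))         # mixed initial condition
            P0 = sum(mix[i, j] * (Ps[i] + np.outer(xs[i] - x0, xs[i] - x0)) for i in range(r))
            xj, Pj, lik[j] = kf_step(F, Q, x0, P0, z)
            xs_new.append(xj)
            Ps_new.append(Pj)
        mu_new = lik * c_bar
        mu_new /= mu_new.sum()                                    # updated model probabilities
        x_comb = sum(mu_new[j] * xs_new[j] for j in range(r))     # combined movement state
        return xs_new, Ps_new, mu_new, x_comb

    # Example use with a single assumed 2D position fix at the "second time"
    models = make_models()
    xs = [np.zeros(6) for _ in range(3)]
    Ps = [100.0 * np.eye(6) for _ in range(3)]
    mu = np.array([1/3, 1/3, 1/3])
    z = np.array([12.0, -3.0])
    xs, Ps, mu, x_comb = imm_cycle(models, xs, Ps, mu, z)
    print("model probabilities:", mu, "combined state:", x_comb)

Repeating imm_cycle for each new position measurement yields the per-model probabilities and the combined movement state described above; reporting velocity or altitude then amounts to reading the corresponding components of the combined state vector (or of a three-dimensional variant of it).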
In various embodiments, the position measurements can include any of the following:
• two-dimensional position measurement based on signals transmitted by the wireless network, by the aerial UE, or by a global navigation satellite system (GNSS);
• three-dimensional position measurement based on signals transmitted by the wireless network, by the aerial UE, or by the GNSS;
• altitude measurement based on a topographical model of a region covered by the wireless network; and
• altitude measurement based on barometric pressure.
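As one concrete illustration of the last item, barometric pressure reported by an aerial UE can be converted to an approximate altitude using the International Standard Atmosphere relation. The sketch below is an assumption for illustration (the function name and default sea-level pressure are ours), not a conversion prescribed by this disclosure.

    def baro_altitude_m(pressure_hpa, sea_level_hpa=1013.25):
        """Approximate altitude above mean sea level from static pressure, using the
        International Standard Atmosphere relation (valid below roughly 11 km)."""
        return 44330.0 * (1.0 - (pressure_hpa / sea_level_hpa) ** (1.0 / 5.255))

    # Example: a reported pressure of 950 hPa corresponds to roughly 540 m above mean sea level
    print(round(baro_altitude_m(950.0)))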
In various embodiments, the determined movement state for the aerial UE can include one or more of the following:
• three-dimensional position relative to mean sea level;
• three-dimensional position relative to local ground level;
• two-dimensional position;
• three-dimensional velocity; and
• two-dimensional velocity.
In some embodiments, these exemplary methods can be performed by one of the following positioning nodes associated with the wireless network: E-SMLC, SLP, or location management function (LMF). In other embodiments, these exemplary methods can be performed by a location services (LCS) client external to the wireless network. In these embodiments, obtaining the position measurements can include sending, to a positioning node associated with the wireless network, a request for the position measurements (i.e., of the aerial UE); and receiving the requested position measurements from the positioning node.
In some embodiments, these exemplary methods can also include sending the determined movement state to one or more of the following: a base station serving the aerial UE in the wireless network, or a location services (LCS) client external to the wireless network. In some embodiments, the movement state can be sent together with an identifier of the movement state, an identifier of the aerial UE, and an identifier of the second time.
Other embodiments include positioning nodes (e.g., E-SMLCs, LMFs, SLPs, etc. or components thereof) configured to perform operations corresponding to any of the exemplary methods described herein. Other embodiments include non-transitory, computer-readable media storing program instructions that, when executed by processing circuitry, configure such positioning nodes (or external LCS clients, e.g., in a cloud) to perform operations corresponding to any of the exemplary methods described herein.
These and other objects, features, and advantages of embodiments disclosed herein will become apparent upon reading the following Detailed Description in view of the Drawings briefly described below.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 is a high-level block diagram of an exemplary architecture of the Long-Term Evolution (LTE) Evolved UTRAN (E-UTRAN) and Evolved Packet Core (EPC) network, as standardized by 3GPP.
Figure 2A is a high-level block diagram of an exemplary E-UTRAN architecture in terms of its constituent components, protocols, and interfaces.
Figure 2B is a block diagram of exemplary protocol layers of the control-plane portion of the radio (Uu) interface between a user equipment (UE) and the E-UTRAN.
Figure 3 illustrates a high-level architecture for supporting UE positioning in LTE networks.
Figure 4 shows a more detailed network diagram of an LTE positioning architecture.
Figure 5 shows a high-level view of an exemplary 5G network architecture, according to various exemplary embodiments of the present disclosure.
Figure 6 shows an exemplary non-roaming 5G reference architecture with service-based interfaces and various network functions (NFs), as further described in 3GPP Technical Specification (TS) 23.501.
Figure 7 shows an exemplary high-level positioning architecture within an NR network, which includes the Next Generation Radio Access Network (NG-RAN) and the 5GC.
Figure 8 shows an exemplary reference signal time difference (RSTD) measurement arrangement for observed time difference of arrival (OTDOA) UE positioning.
Figure 9 shows an exemplary resource block (RB) defined for the LTE downlink (DL) radio interface.
Figures 10-11 show two views of a topography model that can be used to estimate ground altitude in a coverage area of a wireless network.
Figure 12 shows an exemplary conventional air vehicle state estimation system.
Figure 13 shows an exemplary flow diagram of a Kalman filter.
Figure 14 shows a block diagram that illustrates one operation cycle of an interacting multiple-model (IMM) algorithm that includes r interacting filters operating in parallel.
Figure 15 is a block diagram illustrating the architecture of an exemplary aerial UE (or drone) state estimation system, according to various exemplary embodiments of the present disclosure.
Figures 16-17 show various results of a simulation of a drone state estimator configured according to exemplary embodiments described herein.
Figure 18 illustrates various exemplary embodiments related to 3GPP LCS network architecture and signaling associated with drone state estimation.
Figure 19 is a flow diagram illustrating an exemplary method (e.g., procedure) for estimating movement of an aerial user equipment (UE, e.g., drone) in a wireless network based on a plurality of movement models, according to various exemplary embodiments of the present disclosure.
Figure 20 is a schematic block diagram illustrating a virtualization environment suitable for implementation of various embodiments of an aerial UE (or drone) state estimator described herein.
DETAILED DESCRIPTION
Some of the embodiments contemplated herein will now be described more fully with reference to the accompanying drawings. Other embodiments, however, are contained within the scope of the subject matter disclosed herein; the disclosed subject matter should not be construed as limited to only the embodiments set forth herein. Rather, these embodiments are provided as examples to convey the scope of the subject matter to those skilled in the art. Generally, all terms used herein are to be interpreted according to their ordinary meaning in the relevant technical field, unless a different meaning is clearly given and/or is implied from the context in which it is used. All references to a/an/the element, apparatus, component, means, step, etc. are to be interpreted openly as referring to at least one instance of the element, apparatus, component, means, step, etc., unless explicitly stated otherwise. The steps of any methods and/or procedures disclosed herein do not have to be performed in the exact order disclosed, unless a step is explicitly described as following or preceding another step and/or where it is implicit that a step must follow or precede another step. Any feature of any of the embodiments disclosed herein can be applied to any other embodiment, wherever appropriate. Likewise, any advantage of any of the embodiments can apply to any other embodiments, and vice versa. Other objects, features, and advantages of the enclosed embodiments will be apparent from the following description. Furthermore, the following terms are used throughout the description given below:
• Radio Node: As used herein, a “radio node” can be either a “radio access node” or a “wireless device.”
• Radio Access Node: As used herein, a “radio access node” (or equivalently “radio network node,” “radio access network node,” or “RAN node”) can be any node in a radio access network (RAN) of a cellular communications network that operates to wirelessly transmit and/or receive signals. Some examples of a radio access node include, but are not limited to, a base station (e.g., a New Radio (NR) base station (gNB) in a 3GPP Fifth Generation (5G) NR network or an enhanced or evolved Node B (eNB) in a 3GPP LTE network), base station distributed components (e.g., Centralized Unit (CU) and Distributed Unit (DU)), a high-power or macro base station, a low-power base station (e.g., micro, pico, femto, or home base station, or the like), an integrated access backhaul (IAB) node, a transmission point (TP), a transmission reception point (TRP), a remote radio unit (RRU) or Remote Radio Head (RRH), and a relay node.
• Core Network Node: As used herein, a “core network node” is any type of node in a core network. Some examples of a core network node include, e.g., a Mobility Management Entity (MME), a serving gateway (SGW), a Packet Data Network (PDN) Gateway (P-GW), a Policy and Charging Rules Function (PCRF), an access and mobility management function (AMF), a session management function (SMF), a user plane function (UPF), a Charging Function (CHF), a Policy Control Function (PCF), an Authentication Server Function (AUSF), a location management function (LMF), or the like.
• Wireless Device: As used herein, a “wireless device” (or “WD” for short) is any type of device that has access to (i.e., is served by) a cellular communications network by communicating wirelessly with network nodes and/or other wireless devices. Communicating wirelessly can involve transmitting and/or receiving wireless signals using electromagnetic waves, radio waves, infrared waves, and/or other types of signals suitable for conveying information through air. Unless otherwise noted, the term “wireless device” is used interchangeably herein with “user equipment” (or “UE” for short). Some examples of a wireless device include, but are not limited to, smart phones, mobile phones, cell phones, voice over IP (VoIP) phones, wireless local loop phones, desktop computers, personal digital assistants (PDAs), wireless cameras, gaming consoles or devices, music storage devices, playback appliances, wearable devices, wireless endpoints, mobile stations, tablets, laptops, laptop-embedded equipment (LEE), laptop-mounted equipment (LME), smart devices, wireless customer-premise equipment (CPE), machine-type communication (MTC) devices, Internet-of-Things (IoT) devices, vehicle-mounted wireless terminal devices, etc.
• Network Node: As used herein, a “network node” is any node that is either part of the radio access network (e.g., a radio access node or equivalent name discussed above) or of the core network (e.g., a core network node discussed above) of a cellular communications network. Functionally, a network node is equipment capable, configured, arranged, and/or operable to communicate directly or indirectly with a wireless device and/or with other network nodes or equipment in the cellular communications network, to enable and/or provide wireless access to the wireless device, and/or to perform other functions (e.g., administration) in the cellular communications network.
• Base station: As used herein, a “base station” may comprise a physical or a logical node transmitting or controlling the transmission of radio signals, e.g., eNB, gNB, ng-eNB, en-gNB, centralized unit (CU)/distributed unit (DU), transmitting radio network node, transmission point (TP), transmission reception point (TRP), remote radio head (RRH), remote radio unit (RRU), Distributed Antenna System (DAS), relay, etc.
• Positioning node: As used herein, “positioning node” can refer to a network node with positioning functionality, e.g., ability to provide assistance data, request positioning measurements, calculate a location based on positioning measurements, and/or provide a calculated location to other network nodes or to an external client.
• Positioning signals: As used herein, “positioning signals” may include any signal or channel to be received by a UE or a network node for performing a positioning measurement such as a DL reference signal, Positioning Reference Signals (PRS), Synchronization Signal Block (SSB), synchronization signal, Demodulation Reference Signal (DM-RS), Channel State Information Reference Signal (CSI-RS), Sounding Reference Signal (SRS), satellite signal, etc.
• Positioning measurements: As used herein, “positioning measurements” may include timing measurements (e.g., time difference of arrival, TDOA, RSTD, time-of-arrival, TOA, Rx-Tx time difference, Round Trip Time (RTT), etc.), power-based measurements (e.g., reference signal received power, RSRP, reference signal received quality, RSRQ, signal-to-interference-plus-noise ratio, SINR, etc.), identifier detection/measurement (e.g., cell ID, beam ID, etc.), and/or other sensor measurement (e.g., barometric pressure) that are configured for a positioning method (e.g., OTDOA, Enhanced Cell ID, E-CID, Assisted GNSS, A-GNSS, etc.). UE positioning measurements may be reported to a network node or may be used for positioning purposes by the UE.
• Position measurement: As used herein, “position measurement” (or equivalent “location result”) is an estimated position or location of an entity (e.g., UE) that is computed from positioning measurements.
• Geometric Dilution of Precision (GDOP) information: As used herein, “GDOP information” (or more simply, “GDOP”) refers to a ratio of position determination error to range measurement error, whereby lower GDOP indicates better positioning accuracy. For example, GDOP can be computed based on the locations of the positioning signal sources (e.g., base-stations, TPs, etc.) and the UE performing the positioning measurements.
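As a sketch of how such a figure of merit can be computed (an illustration under assumed geometry with range-type measurements and clock terms omitted; the function below is ours, not a definition from this disclosure), GDOP can be taken as sqrt(trace((G^T G)^{-1})), where the rows of G are unit line-of-sight vectors from the UE to the positioning signal sources:

    import numpy as np

    def gdop(anchor_positions, ue_position):
        """GDOP from unit line-of-sight vectors between a UE and positioning anchors."""
        diffs = np.asarray(anchor_positions, float) - np.asarray(ue_position, float)
        G = diffs / np.linalg.norm(diffs, axis=1, keepdims=True)   # one unit row per anchor
        return float(np.sqrt(np.trace(np.linalg.inv(G.T @ G))))

    # Example: three base stations around a UE at the origin (2D positions in metres)
    print(gdop([[1000, 0], [0, 1000], [-800, -600]], [0, 0]))      # ~1.22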
The above definitions are not meant to be exclusive. In other words, various ones of the above terms may be explained and/or described elsewhere in the present disclosure using the same or similar terminology. Nevertheless, to the extent that such other explanations and/or descriptions conflict with the above definitions, the above definitions should control.
Note that the description given herein focuses on a 3GPP cellular communications system and, as such, 3GPP terminology or terminology similar to 3GPP terminology is oftentimes used. However, the concepts disclosed herein are not limited to a 3GPP system. Furthermore, although the term “cell” is used herein, it should be understood that (particularly with respect to 5G NR) beams may be used instead of cells and, as such, concepts described herein apply equally to both cells and beams.
As briefly mentioned above, there are existing techniques for estimating the state of a moving object based on measurements; these have various problems, issues, and/or difficulties when applied to state estimation for aerial UEs or drones, particularly in relation to a drone’s unique movement patterns. This is discussed in more detail after the following discussion of LTE and 5G/NR network architectures and LTE and NR positioning architectures.
An overall exemplary architecture of a network comprising LTE and SAE is shown in Figure 1. E-UTRAN 100 includes one or more evolved Node Bs (eNBs), such as eNBs 105, 110, and 115, and one or more user equipment (UE), such as UE 120. As used within the 3GPP standards, “user equipment” or “UE” means any wireless communication device (e.g., smartphone or computing device) that is capable of communicating with 3GPP-standard-compliant network equipment, including E-UTRAN as well as UTRAN and/or Global System for Mobile Communications (GSM) Enhanced Data rates for GSM Evolution (EDGE) Radio Access Network (GERAN), as the third-generation (“3G”) and second-generation (“2G”) 3GPP RANs are commonly known.
As specified by 3GPP, E-UTRAN 100 is responsible for all radio-related functions in the network, including radio bearer control, radio admission control, radio mobility control, scheduling, and dynamic allocation of resources to UEs (e.g., UE 120) in uplink and downlink, as well as security of the communications with UEs. These functions reside in the eNBs, such as eNBs 105, 110, and 115. Each of the eNBs can serve a geographic coverage area including one or more cells, including cells 106, 111, and 116 served by eNBs 105, 110, and 115, respectively.
The eNBs in the E-UTRAN communicate with each other via the X2 interface, as shown in Figure 1. The eNBs also are responsible for the E-UTRAN interface to the EPC 130, specifically the S1 interface to the Mobility Management Entity (MME) and the Serving Gateway (SGW), shown collectively as MME/S-GWs 134 and 138 in Figure 1. Generally speaking, the MME/S-GW handles both the overall control of the UE and data flow between the UE and the rest of the EPC. More specifically, the MME processes the signaling (e.g., control plane) protocols between the UE and the EPC, which are known as the Non-Access Stratum (NAS) protocols. The SGW handles all Internet Protocol (IP) data packets (e.g., data or user plane) between the UE and the EPC and serves as the local mobility anchor for the data bearers when UE 120 moves between eNBs, such as eNBs 105, 110, and 115.
EPC 130 can also include a Home Subscriber Server (HSS) 131, which manages user- and subscriber-related information. HSS 131 can also provide support functions in mobility management, call and session setup, user authentication and access authorization. The functions of HSS 131 can be related to the functions of legacy Home Location Register (HLR) and Authentication Centre (AuC) functions or operations. HSS 131 can also communicate with MME/S-GWs 134 and 138 via respective S6a interfaces. In some embodiments, HSS 131 can communicate with a user data repository (UDR) - labelled EPC-UDR 135 in Figure 1 - via a Ud interface. EPC-UDR 135 can store user credentials after they have been encrypted by AuC algorithms. These algorithms are not standardized (i.e., vendor-specific), such that encrypted credentials stored in EPC-UDR 135 are inaccessible by any other vendor than the vendor of HSS 131.
Figure 2A shows a high-level block diagram of an exemplary LTE architecture in terms of its constituent entities - UE, E-UTRAN, and EPC - and high-level functional division into the Access Stratum (AS) and the Non-Access Stratum (NAS). Figure 2A also illustrates two particular interface points, namely Uu (UE/E-UTRAN Radio Interface) and S1 (E-UTRAN/EPC interface), each using a specific set of protocols, i.e., Radio Protocols and S1 Protocols.
Figure 2B illustrates a block diagram of an exemplary Control (C)-plane protocol stack between a UE, an eNB, and an MME. The exemplary protocol stack includes Physical (PHY), Medium Access Control (MAC), Radio Link Control (RLC), Packet Data Convergence Protocol (PDCP), and Radio Resource Control (RRC) layers between the UE and eNB. The PHY layer is concerned with how and what characteristics are used to transfer data over transport channels on the LTE radio interface. The MAC layer provides data transfer services on logical channels, maps logical channels to PHY transport channels, and reallocates PHY resources to support these services. The RLC layer provides error detection and/or correction, concatenation, segmentation, reassembly, and reordering of data transferred to or from the upper layers. The PDCP layer provides ciphering/deciphering and integrity protection for both U-plane and C-plane, as well as other functions for the U-plane such as header compression. The exemplary protocol stack also includes non-access stratum (NAS) signaling between the UE and the MME.
The RRC layer controls communications between a UE and an eNB at the radio interface, as well as the mobility of a UE between cells in the E-UTRAN. After a UE is powered ON it will be in the RRC_IDLE state until an RRC connection is established with the network, at which time the UE will transition to RRC_CONNECTED state (e.g., where data transfer can occur). The UE returns to RRC_IDLE after the connection with the network is released. In RRC_IDLE state, the UE’s radio is active on a discontinuous reception (DRX) schedule configured by upper layers. During DRX active periods (also referred to as “DRX On durations”), an RRC_IDLE UE receives system information (SI) broadcast by a serving cell, performs measurements of neighbor cells to support cell reselection, and monitors a paging channel on PDCCH for pages from the EPC via eNB. A UE in RRC_IDLE state is known in the EPC and has an assigned IP address, but is not known to the serving eNB (e.g., there is no stored context).
Logical channel communications between a UE and an eNB are via radio bearers. Since LTE Rel-8, signaling radio bearers (SRBs) SRB0, SRB1, and SRB2 have been available for the transport of control plane (CP) messages including RRC and NAS. SRB0 is used for RRC connection setup, RRC connection resume, and RRC connection re-establishment. Once any of these operations has succeeded, SRB1 is used for handling RRC messages (which may include a piggybacked NAS message) and for NAS messages prior to establishment of SRB2. SRB2 is used for NAS messages and lower-priority RRC messages (e.g., logged measurement information). SRB0 and SRB1 are also used for establishment and modification of data radio bearers (DRBs) for carrying user plane (UP) data between the UE and eNB.
Figure 3 shows an exemplary positioning architecture within an LTE network. Three important functional elements of the LTE positioning architecture are LCS Client, LCS target, and LCS Server. The LCS Server is a physical or logical entity (e.g., as embodied by the E-SMLC or SLP in Figure 3) that manages positioning for an LCS target (e.g., as embodied by the UE in Figure 3) by collecting positioning measurements and other location information, assisting the terminal in positioning measurements when necessary, and estimating the LCS target location.
In general, LCS Servers are located in a Core Network (CN, e.g., EPC) and communicate with and/or via other CN nodes and/or functions such as MME, S-GW, and Packet Data Network Gateway (P-GW). The E-SMLC is responsible for control-plane (CP) positioning and communicates with various entities using different protocols. For example, E-SMLC communicates with MME via LCS Application Protocol (LCS-AP), with the RAN (e.g., E-UTRAN) via LTE Positioning Protocol A (LPPa) (which can be transparent to MME), and with the LCS target via LTE Positioning Protocol (LPP) (which can be transparent to both RAN and MME). In contrast, the SLP is responsible for user-plane (UP) positioning procedures. The SLP communicates with the UE via LPP and/or secure user plane location (SUPL) protocols, which can be transparent to other UP entities including RAN, S-GW, and P-GW. The LTE radio interface between RAN and UE is also referred to as LTE-Uu.
An LCS Client is a software and/or hardware entity that interacts with an LCS Server for the purpose of obtaining location information for one or more LCS targets (i.e., the entities being positioned) such as the UE in Figure 3. LCS Clients may also reside in the LCS targets themselves. An LCS Client sends a request to an LCS Server to obtain location information, and the LCS Server processes and serves the received requests and sends the positioning result and optionally a velocity estimate to the LCS Client. A positioning request can originate from the terminal or a network node or external client. For example, an external LCS client can communicate with SLP via SUPL and with E-SMLC via Gateway Mobile Location Centre (GMLC) and MME. In the LTE architecture shown in Figure 3, position calculation can be conducted, for example, by the LCS Server (e.g., E-SMLC or SLP) or by the LCS target (e.g., a UE). The former approach corresponds to the UE-assisted positioning mode when it is based on UE positioning measurements, whilst the latter corresponds to the UE-based positioning mode. The following positioning methods are supported in LTE:
• Enhanced Cell ID (E-CID). Utilizes information to associate the UE with the geographical area of a serving cell, and then additional information to determine a finer granularity position. The following positioning measurements are supported for E-CID: Angle of Arrival (AoA) (base station only), UE Rx-Tx time difference, timing advance (TA) types 1 and 2, reference signal received power (RSRP), and reference signal received quality (RSRQ).
• Assisted GNSS. The UE receives and measures Global Navigation Satellite System (GNSS) signals, supported by assistance information provided to the UE from E-SMLC.
• OTDOA (Observed Time Difference of Arrival). The UE receives and measures LTE signals transmitted by the RAN (including eNBs and radio beacons), supported by assistance information provided to the UE from E-SMLC.
• UTDOA (Uplink TDOA). The UE is requested to transmit a specific waveform that is detected by multiple location measurement units (LMUs, which may be standalone, co-located, or integrated into an eNB) at known positions. These positioning measurements are forwarded to the E-SMLC for multilateration.
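Both OTDOA and UTDOA ultimately reduce to multilateration from time-difference measurements. The following Gauss-Newton sketch illustrates that step under assumed geometry and noise-free measurements; it is not the E-SMLC's actual algorithm, and all names and values are chosen for the example.

    import numpy as np

    C = 299792458.0                                     # speed of light [m/s]

    def tdoa_solve(anchors, rstd_s, x0, iters=10):
        """Gauss-Newton solution of 2D TDOA equations; rstd_s[i] is the time difference
        of arrival between anchor i+1 and reference anchor 0, in seconds."""
        anchors = np.asarray(anchors, float)
        x = np.asarray(x0, float)
        d_meas = C * np.asarray(rstd_s, float)          # measured range differences [m]
        for _ in range(iters):
            r = np.linalg.norm(anchors - x, axis=1)     # ranges from candidate position
            h = r[1:] - r[0]                            # predicted range differences
            J = (x - anchors[1:]) / r[1:, None] - (x - anchors[0]) / r[0]
            x = x + np.linalg.lstsq(J, d_meas - h, rcond=None)[0]
        return x

    # Example: UE at (200, 300) observed by four eNBs; simulate exact RSTDs and re-estimate
    enbs = [[0.0, 0.0], [1500.0, 0.0], [0.0, 1500.0], [1500.0, 1500.0]]
    true_ue = np.array([200.0, 300.0])
    ranges = np.linalg.norm(np.asarray(enbs) - true_ue, axis=1)
    rstd = (ranges[1:] - ranges[0]) / C
    print(tdoa_solve(enbs, rstd, x0=[700.0, 700.0]))    # converges to approximately (200, 300)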
A Terrestrial Beacon System (TBS) may be used to further enhance the positioning methods based on radio signals received by an LCS target (e.g., UE). The TBS can include a network of ground-based transmitters that broadcast signals only for positioning purposes. These can include the (non-LTE) Metropolitan Beacon System (MBS) signals as well as LTE Positioning Reference Signals (PRS), discussed in more detail below.
In addition, one or more of the following positioning modes can be utilized in each of the positioning methods listed above:
• UE-Assisted: The UE performs positioning measurements with or without assistance from the network and sends these measurements to the E-SMLC where the position calculation may take place.
• UE-Based: The UE performs positioning measurements and calculates its own position with assistance from the network.
• Standalone: The UE performs positioning measurements and calculates its own position without network assistance. The detailed assistance data may include information about network node locations, beam directions, etc. The assistance data can be provided to the UE via unicast or via broadcast.
Figure 4 shows another view of an exemplary positioning architecture in an LTE network. For example, Figure 4 illustrates how secure user plane location (SUPL) techniques can be supported in LTE networks. In general, SUPL is run on top of the generic LTE UP protocol stack. SUPL supports and complements CP protocols to facilitate LBS support with a minimum and/or reduced impact on the CP and the deployed network. SUPL includes a location server - known as SUPL location platform (SLP) - that communicates via a SUPL bearer with a SUPL-enabled terminal (SET), which can be software and/or hardware components of a UE. The SLP also may have a proprietary interface to the E-SMLC, which is the location server for control-plane positioning in LTE.
SUPL occupies the application layer with LPP transported on a layer above SUPL. After establishing a Transmission Control Protocol (TCP)/Internet Protocol (IP) connection and initiating SUPL and LPP sessions, the flow of LPP messages can be the same as in the CP version of LPP, just with the SUPL Enabled Terminal (SET) as the LCS target and the SUPL Location Platform (SLP) as the LCS Server. The SLP implements the SUPL Location Center (SLC) and the SUPL Positioning Center (SPC) functions, with the latter either being integrated in the E-SMLC or attached to it with a proprietary interface.
The SLC system coordinates the operations of SUPL in the network and implements the following SUPL functions as it interacts with the SET over the user-plane bearer: privacy function, initiation function, security function, roaming support, charging function, service management, and position calculation. The SPC supports the following SUPL functions: security function, assistance delivery function, SUPL reference retrieval function (e.g., retrieving data from a GPS reference network), and SUPL position calculation function.
E-SMLC can also communicate with location measurement units (LMUs) via SLm interfaces. As shown in Figure 4, LMUs can be standalone or integrated with an eNB. An eNB also may include, or be associated with, one or more transmission points (TPs). The E-SMLC communicates with UEs via the serving MME and eNB, using the respective SLs, S1, and LTE-Uu interfaces shown in Figure 4. Although not shown, the RRC protocol is used to carry positioning-related information (e.g., to/from E-SMLC) between the UE and the eNB. In addition, the SLm interface Application Protocol (SLmAP) over the SLm interface between LMU and E-SMLC was introduced in Rel-11 to support UTDOA.
As mentioned above, positioning is also expected to be an important application for 5G networks. Figure 5 shows a high-level view of an exemplary 5G network architecture, including a Next Generation Radio Access Network (NG-RAN) 599 and a 5G Core (5GC) 598. As shown in the figure, NG-RAN 599 can include gNBs 510 (e.g., 510a,b) and ng-eNBs 520 (e.g., 520a,b) that are interconnected with each other via respective Xn interfaces. The gNBs and ng-eNBs are also connected via the NG interfaces to 5GC 598, more specifically to the AMF (Access and Mobility Management Function) 530 (e.g., AMFs 530a,b) via respective NG-C interfaces and to the UPF (User Plane Function) 540 (e.g., UPFs 540a,b) via respective NG-U interfaces. Moreover, the AMFs 530a,b can communicate with one or more location management functions (LMFs, e.g., LMFs 550a,b) and network exposure functions (NEFs, e.g., NEFs 560a,b). The AMFs, UPFs, LMFs, and NEFs are described further below.
Each of the gNBs 510 can support the NR radio interface including frequency division duplexing (FDD), time division duplexing (TDD), or a combination thereof. In contrast, each of ng-eNBs 520 can support the LTE radio interface but, unlike conventional LTE eNBs (such as shown in Figure 1), can also connect to the 5GC via the NG interface. Each of the gNBs and ng-eNBs can serve a geographic coverage area including one or more cells, including cells 511a-b and 521a-b shown as exemplary in Figure 5. As mentioned above, the gNBs and ng-eNBs can also use various directional beams to provide coverage in the respective cells. Depending on the particular cell in which it is located, a UE 505 can communicate with the gNB or ng-eNB serving that particular cell via the NR or LTE radio interface, respectively.
Each of the gNBs 510 may include and/or be associated with a plurality of Transmission Reception Points (TRPs). Each TRP is typically an antenna array with one or more antenna elements and is located at a specific geographical location. In this manner, a gNB associated with multiple TRPs can transmit the same or different signals from each of the TRPs. For example, a gNB can transmit different versions of the same signal on multiple TRPs to a single UE. Each of the TRPs can also employ beams for transmission and reception towards the UEs served by the gNB, as discussed above.
Deployments based on different 3GPP architecture options (e.g., EPC-based or 5GC-based) and UEs with different capabilities (e.g., EPC and 5GC) may coexist at the same time within one network (e.g., Public Land Mobile Network, PLMN). It is generally assumed that a UE that can support 5GC NAS procedures can also support EPC NAS procedures (e.g., as defined in 3GPP TS 24.301) to operate in legacy networks, such as when roaming. As such, the UE will use EPC NAS or 5GC NAS procedures depending on the core network (CN) by which it is served.
Another change in 5G networks (e.g., in 5GC) is that traditional peer-to-peer interfaces and protocols (e.g., those found in LTE/EPC networks) are modified by a so-called Service Based Architecture (SBA) in which Network Functions (NFs) provide one or more services to one or more service consumers. In general, each service is a self-contained functionality that can be changed and modified in an isolated manner without affecting other services. Also, the services are composed of various “service operations”, which are more granular divisions of the overall service functionality. In the 5G SBA, network repository functions (NRF) allow every network function to discover the services offered by other network functions, and Data Storage Functions (DSF) allow every network function to store its context.
Figure 6 shows an exemplary non-roaming 5G reference architecture with service-based interfaces and various 3GPP-defined NFs within the Control Plane (CP). These include the following NFs, with additional details provided for those most relevant to the present disclosure:
• Application Function (AF, with Naf interface) interacts with the 5GC to provision information to the network operator and to subscribe to certain events happening in the operator's network. An AF offers applications for which service is delivered in a different layer (i.e., the transport layer) than the one in which the service has been requested (i.e., the signaling layer), with the control of flow resources performed according to what has been negotiated with the network. An AF communicates dynamic session information to the PCF (via the N5 interface), including a description of the media to be delivered by the transport layer. AFs also communicate with UEs via the Ua* reference point.
• Policy Control Function (PCF, with Npcf interface) supports a unified policy framework to govern the network behavior, by providing PCC rules (e.g., on the treatment of each service data flow that is under PCC control) to the SMF via the N7 reference point. PCF provides policy control decisions and flow-based charging control, including service data flow detection, gating, QoS, and flow-based charging (except credit management) towards the SMF. The PCF receives session and media related information from the AF and informs the AF of traffic (or user) plane events.
• User Plane Function (UPF) supports handling of user plane traffic based on the rules received from SMF, including packet inspection and different enforcement actions (e.g., event detection and reporting). UPFs communicate with the RAN (e.g., NG-RAN) via the N3 reference point, with SMFs (discussed below) via the N4 reference point, and with an external packet data network (PDN) via the N6 reference point. The N9 reference point is for communication between two UPFs.
• Session Management Function (SMF, with Nsmf interface) interacts with the decoupled traffic (or user) plane, including creating, updating, and removing Protocol Data Unit (PDU) sessions and managing session context with the User Plane Function (UPF), e.g., for event reporting. For example, SMF performs data flow detection (based on filter definitions included in Policy and Charging Control (PCC) rules), online and offline charging interactions, and policy enforcement.
• Access and Mobility Management Function (AMF, with Namf interface) terminates the RAN CP interface and handles all mobility and connection management of UEs (similar to MME in EPC). AMFs communicate with UEs via the N1 reference point and with the RAN (e.g., NG-RAN) via the N2 reference point.
• Network Exposure Function (NEF) with Nnef interface - acts as the entry point into the operator's network, by securely exposing to AFs the network capabilities and events provided by 3GPP NFs and by providing ways for the AF to securely provide information to the 3GPP network.
• Network Repository Function (NRF) with Nnrf interface - provides service registration and discovery, enabling NFs to identify appropriate services available from other NFs.
• Authentication Server Function (AUSF) with Nausf interface - based in a user’s home network (HPLMN), it performs user authentication and computes security key materials for various purposes.
• Location Management Function (LMF) with Nlmf interface (labelled 650 in Figure 6) - supports various functions related to determination of UE locations, including location determination for a UE and obtaining any of the following: DL positioning measurements or a location estimate from the UE; UL positioning measurements from the NG RAN; and non-UE associated assistance data from the NG RAN.
The Unified Data Management (UDM) function shown in Figure 6 is similar to the HSS in LTE/EPC networks discussed above. UDM supports generation of 3GPP authentication credentials, user identification handling, access authorization based on subscription data, and other subscriber-related functions. To provide this functionality, the UDM uses subscription data (including authentication data) stored in the 5GC unified data repository (UDR). In addition to the UDM, the UDR supports storage and retrieval of policy data by the PCF, as well as storage and retrieval of application data by NEF.
Figure 7 is a block diagram illustrating a high-level architecture for supporting UE positioning in 5G networks. As shown in Figure 7, the NG-RAN 720 can include nodes such as gNB 722 and ng-eNB 721, similar to the architecture shown in Figure 5. Each ng-eNB may control several transmission points (TPs), such as remote radio heads. Moreover, some TPs can be “PRS-only” for supporting positioning reference signal (PRS)-based E-UTRAN operation.
According to NR principles, gNB 722 communicates with UE 710 via RRC over the NR radio interface, also referred to as NR-Uu. Likewise, according to LTE principles, ng-eNB 721 communicates with UE 710 via RRC over the LTE radio interface, also referred to as LTE-Uu. As discussed above in relation to Figure 4, UE 710 may include a SET.
In addition, the NG-RAN nodes communicate with an AMF 730 in 5GC 770 via respective NG-C interfaces (both of which may or may not be present), while AMF 730 and LMF 740 communicate via an NLs interface 741. In addition, positioning-related communication between UE 710 and the NG-RAN nodes occurs via the RRC protocol, while positioning-related communication between NG-RAN nodes and LMF occurs via NR Positioning Protocol A (NRPPa). Optionally, the LMF can also communicate with an E-SMLC 750 and an SLP 760 in an LTE network via communication interfaces 751 and 761, respectively. Communication interfaces 751 and 761 can utilize and/or be based on standardized protocols, proprietary protocols, or a combination thereof.
LMF 740 can also include, or be associated with, various processing circuitry 742, by which the LMF performs various operations described herein. Processing circuitry 742 can include similar types of processing circuitry as described herein in relation to other network nodes. LMF 740 can also include, or be associated with, a non-transitory computer-readable medium 743 storing instructions (also referred to as a computer program product) that can facilitate the operations of processing circuitry 742. Medium 743 can include similar types of computer memory as described herein in relation to other network nodes.
Similarly, E-SMLC 750 can also include, or be associated with, various processing circuitry 752, by which the E-SMLC performs various operations described herein. Processing circuitry 752 can include similar types of processing circuitry as described herein in relation to other network nodes. E-SMLC 750 can also include, or be associated with, a non-transitory computer-readable medium 753 storing instructions (also referred to as a computer program product) that can facilitate the operations of processing circuitry 752. Medium 753 can include similar types of computer memory as described herein in relation to other network nodes.
Similarly, SLP 760 can also include, or be associated with, various processing circuitry 762, by which the SLP performs various operations described herein. Processing circuitry 762 can include similar types of processing circuitry as described herein in relation to other network nodes. SLP 760 can also include, or be associated with, a non-transitory computer-readable medium 763 storing instructions (also referred to as a computer program product) that can facilitate the operations of processing circuitry 762. Medium 763 can include similar types of computer memory as described herein in relation to other network nodes.
In a typical operation, the AMF can receive a request for a location service associated with a particular target UE from another entity (e.g., a gateway mobile location center (GMLC)), or the AMF itself can initiate some location service on behalf of a particular target UE (e.g., for an emergency call from the UE). The AMF then sends a location services (LS) request to the LMF. The LMF processes the LS request, which may include transferring assistance data to the target UE to assist with UE-based and/or UE-assisted positioning; and/or positioning of the target UE. The LMF then returns the result of the LS (e.g., a position estimate for the UE and/or an indication of any assistance data transferred to the UE) to the AMF or to another entity (e.g., GMLC) that requested the LS.
An LMF may have a signaling connection to an E-SMLC, enabling the LMF to access information from E-UTRAN, e.g., to support E-UTRA OTDOA positioning using downlink positioning measurements obtained by a target UE. An LMF can also have a signaling connection to an SLP, the LTE entity responsible for user-plane positioning.
Various interfaces and protocols are used for, or involved in, NR positioning. The LTE Positioning Protocol (LPP) is used between a target device (e.g., UE in the control-plane, or SET in the user-plane) and a positioning server (e.g., LMF in the control-plane, SLP in the user-plane). LPP can use either the control- or user-plane protocols as underlying transport. NR Positioning Protocol (NRPP) is terminated between a target device and the LMF. RRC protocol is used between UE and gNB (via NR radio interface) and between UE and ng-eNB (via LTE radio interface).
Furthermore, the NR Positioning Protocol A (NRPPa) carries information between the NG-RAN Node and the LMF and is transparent to the AMF. As such, the AMF routes the NRPPa PDUs transparently (e.g., without knowledge of the involved NRPPa transaction) over NG-C interface based on a Routing ID corresponding to the involved LMF. More specifically, the AMF carries the NRPPa PDUs over NG-C interface either in UE associated mode or non-UE associated mode. The NGAP protocol between the AMF and an NG-RAN node (e.g., gNB or ng-eNB) is used as transport for LPP and NRPPa messages over the NG-C interface. NGAP is also used to instigate and terminate NG-RAN-related positioning procedures.
LPP/NRPP are used to deliver messages such as positioning capability request, OTDOA positioning measurements request, and OTDOA assistance data to the UE from a positioning node (e.g., location server). LPP/NRPP are also used to deliver messages from the UE to the positioning node including, e.g., UE capability, UE positioning measurements for UE-assisted OTDOA positioning, UE request for additional assistance data, UE configuration parameter(s) to be used to create UE-specific OTDOA assistance data, etc. NRPPa is used to deliver the information between ng-eNB/gNB and LMF in both directions. This can include LMF requesting some information from ng-eNB/gNB, and ng-eNB/gNB providing some information to LMF. For example, this can include information about PRS transmitted by ng-eNB/gNB that are to be used for OTDOA positioning measurements by the UE.
NR networks will support positioning methods similar to LTE E-CID, OTDOA, and UTDOA but based on NR positioning measurements. NR may also support one or more of the following position methods:
• Multi-RTT: The device (e.g. UE) computes UE Rx-Tx time difference and gNBs compute gNB Rx-Tx time difference. The results are combined to find the UE position based upon round trip time (RTT) calculation.
• DL angle of departure (DL-AoD): gNB or LMF calculates the UE angular position based upon UE DL RSRP measurement results (e.g., of PRS transmitted by network nodes).
• UL angle of arrival (UL-AoA): gNB calculates the UL AoA based upon positioning measurements of a UE’s UL SRS transmissions.
Each of the NR positioning methods can be supported in UE-assisted, UE-based or UE-standalone modes, similar to LTE discussed above.
In OTDOA positioning, a UE measures the reference signal time difference (RSTD) between RS transmitted by a reference cell and RS transmitted by at least two neighbor cells. Figure 8 shows an exemplary RSTD measurement arrangement for OTDOA positioning with three cells, i.e., the UE's serving cell and two neighbor cells. The time-of-arrival (TOA) of the RS transmitted by each of these cells is measured in the terminal. Each measurement depends on the time when the cell (e.g., eNB or gNB) transmitted the measured RS and the propagation distance of the RS between the cell and UE antennas. For example, the TOA for cell1 can be expressed as:

$$t_{TOA,1} = T_1 + b_{clock} + \frac{1}{c}\left\| \mathbf{r}_1 - \mathbf{r}_{terminal} \right\|,$$

where $T_1$ denotes the RS transmission time from cell1, $c$ is the speed of light, and $b_{clock}$ denotes the unknown clock offset of the UE with respect to network time. The boldface quantities $\mathbf{r}$ are vector locations of the transmitting antenna and the UE (or terminal). The UE can then determine differences (i.e., RSTD) between the respective measurements, such as illustrated below for cell2 and cell1:

$$t_{TOA,2} - t_{TOA,1} = T_2 - T_1 + \frac{1}{c}\left( \left\| \mathbf{r}_2 - \mathbf{r}_{terminal} \right\| - \left\| \mathbf{r}_1 - \mathbf{r}_{terminal} \right\| \right).$$
A similar RSTD can be formed between TOA measurements for cell3 and cell2 or between cell3 and cell1 (both shown in Figure 8). Note that the RSTD formation eliminates the common $b_{clock}$. Furthermore, the left-hand side can be considered known provided that any differences between transmission times (denoted "real time differences") can be measured, such as by LMUs, or can be fixed. The three-dimensional locations $\mathbf{r}_i$ of the transmitters (e.g., base station antennas) are generally known to within a few meters. As such, the only remaining unknown is the UE location, $\mathbf{r}_{terminal}$. In the case shown in Figure 8, the two-dimensional UE location:
$$\mathbf{r}_{terminal} = \begin{pmatrix} x_{terminal} & y_{terminal} \end{pmatrix}^T$$
can be determined from the two RSTDs. Alternately, a three-dimensional UE location:
$$\mathbf{r}_{terminal} = \begin{pmatrix} x_{terminal} & y_{terminal} & z_{terminal} \end{pmatrix}^T$$
can be determined if a fourth cell and a third RSTD are used. In practice, accuracy can be improved if more positioning measurements are collected and a maximum likelihood solution is introduced. There may also be multiple (false) solutions in cases where only a minimum number of sites are detected. OTDOA produces position estimates that can be related to the Cartesian position in the local earth tangential coordinate system, e.g., used for state estimation as discussed further below.
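To make the RSTD relations above concrete, the following is a minimal sketch (not part of the disclosed embodiments) of recovering a two-dimensional UE position from two RSTDs by nonlinear least squares. The transmitter coordinates, measurement values, and function names are hypothetical placeholders, and the transmitters are assumed synchronized so that the real time differences are zero.

```python
import numpy as np
from scipy.optimize import least_squares

# Known transmitter (antenna) positions in a local earth tangential frame [m].
# These coordinates are illustrative placeholders, not values from the disclosure.
r_tx = np.array([[0.0, 0.0], [1500.0, 0.0], [0.0, 1500.0]])

c = 299_792_458.0  # speed of light [m/s]

def rstd_residuals(r_terminal, rstd_meas, tx_time_diffs):
    """Residuals between measured RSTDs (cell i vs. cell 1) and those predicted
    for a candidate UE position. tx_time_diffs holds the known 'real time
    differences' T_i - T_1 [s]."""
    d = np.linalg.norm(r_tx - r_terminal, axis=1)   # distances to each cell
    predicted = tx_time_diffs + (d[1:] - d[0]) / c  # RSTD for cells 2 and 3 vs. cell 1
    return predicted - rstd_meas

# Hypothetical measured RSTDs [s] and known transmit-time differences.
rstd_meas = np.array([1.2e-6, -0.8e-6])
tx_time_diffs = np.zeros(2)   # synchronized transmitters assumed

sol = least_squares(rstd_residuals, x0=np.array([500.0, 500.0]),
                    args=(rstd_meas, tx_time_diffs))
print("Estimated 2-D UE position [m]:", sol.x)
```

With more cells, the same residual formulation extends naturally to the weighted (maximum likelihood) solution noted above.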
Although OTDOA can be relatively accurate, its inaccuracy is significantly larger than that of assisted GNSS. A primary advantage of OTDOA is that it provides high precision positioning indoors, where the availability of A-GNSS is very limited.
Figure 9 shows an exemplary resource block (RB) as defined for the LTE DL radio interface. This exemplary RB is a time-frequency grid covering 180 kHz of frequency bandwidth for a 1-ms duration of a subframe. Furthermore, the 180 kHz includes 12 sub-carriers spaced 15 kHz apart, and the 1-ms subframe is further divided into 14 equal-duration symbols. A single sub-carrier during a single symbol is known as a resource element (RE).
Figure 9 illustrates that control signaling (e.g., physical downlink control channel, PDCCH) can be transmitted during the first three symbols while user data (e.g., physical downlink shared channel, PDSCH) can be transmitted during the remaining 11 symbols. In addition, various reference signals can be transmitted in various REs during all 14 symbols.
Cell-specific reference signals were specified for UE RSTD measurements in 3GPP Rel-8. These reference signals were determined to be insufficient for OTDOA positioning, so Positioning Reference Signals (PRS) were introduced in Rel-9. PRS are pseudo-random, quadrature phase-shift keyed (QPSK) sequences that are mapped in diagonal patterns with shifts in frequency and time, thereby avoiding collision with Cell-specific Reference Signals (CRS). As shown in Figure 9, PRS also do not overlap with PDCCH. It is expected that NR will use PRS transmission in positioning beams.
In contrast, for UTDOA positioning, the network (e.g., eNBs or gNBs) measures TOA of sounding reference signals (SRS) transmitted by a UE and forms RSTDs in a similar manner as the UE does in OTDOA. Assisted GNSS is an aggregation of several national or regional navigation systems including the U.S. Global Positioning System (GPS), the Russian GLObal NAvigation Satellite System (GLONASS), the European Galileo system, and the Chinese BeiDou (Compass) system. Each includes a relatively large number of satellites that transmit positioning signals with properties that facilitate timing measurements. Each also provides highly accurate satellite orbital parameters such that receivers can accurately determine satellite positions and transmission timing associated with any measured signal. Given this information, receivers can determine a "pseudorange" to each satellite measured, which includes the receiver's unknown time offset from GNSS time. Given enough pseudoranges, the receiver can determine its own position and time offset with very high accuracy. In general, GNSS receivers produce position results that can be easily translated to Cartesian position in the local earth tangential coordinate system, e.g., used for state estimation as discussed further below.
Although conventional UEs can provide position measurements based on 3GPP A-GNSS techniques, operators of aerial UEs often disable such features. In such cases, the network (e.g., E-SMLC, LMF) must determine a position measurement and/or estimate the current state of the aerial UE based on positioning measurements made by RAN nodes.
A positioning result is a result of processing of obtained positioning measurements, including Cell IDs, power levels, received signal strengths, etc., and it may be exchanged among nodes in one of the pre-defined formats. The signaled positioning result is represented in a pre-defined format corresponding to one of the so-called geographical area description (GAD) shapes. In LTE and NR networks, a positioning result may be signaled between:
• LCS target and LCS server, e.g. using LPP protocol;
• Positioning servers (e.g., E-SMLC and SLP), over standardized or proprietary interfaces;
• Positioning server and other network nodes (e.g., E-SMLC and MME/Mobile Switching Centre (MSC)/GMLC/Operation and Maintenance (O&M)/Self-Organizing Network (SON));
• Positioning node and LCS Client (e.g., E-SMLC and Public Safety Answering Point (PSAP), SLP and External LCS Client, or E-SMLC and UE).
Furthermore, the positioning result can be represented by any of the following GAD shapes:
• Polygon: described by a list of 3-15 ellipsoid points (defined below). The first point shall be the same as the last point and no line segment between any two points is allowed to intersect another line segment between any other two points.
• Ellipsoid arc: described by a center point (e.g., eNB antenna position), encoded as latitude, longitude in World Geodetic System 1984 (WGS-84) co-ordinates. Furthermore, the format contains an inner radius of the arc, a thickness of the arc as well as the offset angle (clockwise from north) and the included angle (opening angle). Together, these parameters define a circular sector, with a thickness and with left and right angles. The ellipsoid arc does carry confidence information. This format is, for example, produced by cell ID + TA positioning in LTE.
• Ellipsoid point: described by a center point, encoded as latitude, longitude in WGS-84 co-ordinates. The format neither carries uncertainty, nor confidence information.
• Ellipsoid point with uncertainty circle: described by a center point, encoded as latitude, longitude in WGS-84 co-ordinates, in combination with a radial uncertainty radius. The format does not carry confidence information.
• Ellipsoid point with uncertainty ellipse: described by a center point, encoded as latitude, longitude in WGS-84 co-ordinates. The uncertainty ellipse is encoded as a semi-major axis, a semi-minor axis, and an angle relative to north counted clockwise from the semi-major axis. The format carries confidence information. This format is typically produced by OTDOA and A-GPS positioning in LTE.
• Ellipsoid point with altitude: encoded as an ellipsoid point, together with an encoded altitude. The format neither carries uncertainty, nor confidence information.
• Ellipsoid point with altitude and uncertainty ellipsoid: commonly received from A-GPS capable UEs. It consists of an ellipsoid point with altitude and an uncertainty ellipsoid, the latter encoded with a semi-major axis, a semi-minor axis, an angle relative to north, counted clockwise from the semi-major axis, together with an uncertainty altitude. The format carries confidence information.
In general, both TDOA methods produce relatively poor estimates of UE altitude relative to local topography. This is due to inter-site measurement geometry, specifically that the base station transmitting/receiving antennas are all located at similar altitudes. Furthermore, aerial UEs are often flying at approximately the same altitudes as the antennas. Since all entities involved in these TDOA measurements are roughly in one plane, small variations in aerial UE altitude are obscured by noise or uncertainty of the TOA measurements, resulting in poor altitude accuracy. This effect is also known as a high vertical geographical dilution of precision (GDOP).
However, other positioning measurements can be used to improve altitude accuracy. One possibility is to augment TDOA measurements with barometric measurements, which can indicate altitude variation. Such measurements are standardized in LTE and NR and available in many UE brands. Barometric measurements can also be used to augment other positioning methods such as assisted GNSS.
In general, position estimates based on assisted GNSS, OTDOA, UTDOA, and barometric measurements are all typically expressed with respect to the mean sea level. For the case of aerial UEs, however, a more relevant value is altitude with respect to local ground level. Various techniques can be used to convert a position estimate between these two altitude references. One exemplary technique is to apply a complete geographical information system (GIS) that includes ground altitude maps covering a region of interest, e.g., a coverage area of the LTE or NR network in which an aerial UE can operate. Another exemplary technique is to use a configured altitude above ground for each antenna site in the network.
Another exemplary technique is to apply a model of the ground altitude within each cell of the network. An example of this technique is disclosed in "Wireless hybrid positioning based on surface modeling with polygon support", which was published in Proc. VTC 2018 Spring, June 2018 and is incorporated by reference in its entirety. Figure 10 shows a topography intended to describe a coastal region, with zero altitude representing mean sea level. The coverage areas of cells are depicted with overlaid contour lines. This exemplary technique was used to compute a 3D surface estimate for each cell, with the result depicted in Figure 11. When applied to UE position estimates, the exemplary technique can reduce the maximum vertical errors by approximately 80%. Furthermore, the vertical minimum mean-square error (MMSE) is less than 3 m in almost 50% of the coverage areas of the cells shown in Figures 10-11.
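As a simple illustration of converting between the two altitude references, the sketch below reduces the per-cell ground altitude model to a single configured value per cell; the cell identities and altitude values are hypothetical placeholders.

```python
# Minimal sketch (illustrative only) of converting an altitude given relative
# to mean sea level into an altitude above local ground level, using a
# configured ground altitude per serving cell.
GROUND_ALTITUDE_BY_CELL = {  # meters above mean sea level (placeholder values)
    "cell-A": 12.0,
    "cell-B": 57.5,
}

def altitude_above_ground(altitude_msl_m: float, serving_cell: str) -> float:
    """Subtract the modeled ground altitude of the serving cell from an
    altitude expressed relative to mean sea level."""
    ground = GROUND_ALTITUDE_BY_CELL[serving_cell]
    return altitude_msl_m - ground

print(altitude_above_ground(132.0, "cell-B"))  # -> 74.5 m above local ground
```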
As briefly mentioned above, it can be beneficial to restrict and/or limit aerial UEs operating as "rogue drones". Possible solutions include limiting, releasing, and/or disconnecting aerial UE communications with the network (e.g., E-UTRAN) and/or alerting relevant government authorities who can take appropriate action against rogue drone operation. A prerequisite for these and other solutions is network knowledge of the current position, speed, and directional bearing (collectively "state") of aerial UEs. Although conventional UEs can provide such information based on 3GPP LCS techniques, such as assisted-GNSS, operators of aerial UEs often disable such features, leaving it up to the network to determine or estimate the current state of aerial UEs based on positioning measurements made by the network.
Figure 12 shows an exemplary conventional air vehicle state estimation system, also referred to as a "multi-sensor state estimation system". Strobes (i.e., angle-only measurements) and plots (i.e., Cartesian position measurements) are first collected from the sensors attached to the state estimation system. The plots and strobes are input for association with existing 3-D state estimates, i.e., to determine which measurements belong to each state estimate. Measurements may be accompanied by identifiers (e.g., UE ID) that are also associated with the corresponding state estimate.
The state estimates can be transformed from an earth tangential Cartesian coordinate system to the measurement space of each sensor. The state estimates can be updated in the respective sensor measurement spaces with Kalman filtering-based techniques discussed below. Plots and strobes that are not associated may originate from new objects, and they are sent to the plot handler or the strobe handler for initiation of new state estimates. Plots and strobes that are associated with high quality state estimates are also used for computation of sensor bias parameters in the sensor registration block.
Kalman filters can be used to estimate the state of a discrete-time linear dynamic system described by a vector difference equation with additive white Gaussian noise that models unpredictable disturbances. The dynamic model of a Kalman filter is given by:

$$x(k + 1) = F(k)x(k) + v(k),$$

where $x(k)$ is the $n_x$-dimensional state vector and $v(k)$, $k = 0, 1, \ldots$, is the sequence of zero-mean white Gaussian process noise (also $n_x$-vectors) with covariance

$$E\left[ v(k)v(k)^T \right] = Q(k).$$

The measurement equation is

$$z(k) = H(k)x(k) + w(k), \qquad k = 1, 2, \ldots,$$

with $w(k)$ the sequence of zero-mean white Gaussian measurement noise with covariance

$$E\left[ w(k)w(k)^T \right] = R(k).$$
The matrices F, H, Q, and R are assumed known and possibly time varying. In other words, the system can be time varying and the noises nonstationary. The initial state x(0), in general unknown, is modeled as a random variable, Gaussian distributed with known mean and covariance. The two noise sequences and the initial state are assumed mutually independent, which is also referred to as a “Linear-Gaussian (LG) assumption.”
The conditional mean is defined as:

$$\hat{x}(j|k) = E\left[ x(j) \mid Z^k \right],$$

where $Z^k = \{ z(i),\, i \le k \}$ denotes the sequence of observations available at time $k$, and $\hat{x}(j|k)$ is the estimate of the state if $j = k$ and the predicted value of the state if $j > k$. The conditional covariance matrix of $x(j)$ given the data $Z^k$, or the covariance associated with the estimate, is

$$P(j|k) = E\left[ \left( x(j) - \hat{x}(j|k) \right)\left( x(j) - \hat{x}(j|k) \right)^T \mid Z^k \right].$$
Figure 13 shows an exemplary flow diagram of a Kalman filter. The estimation algorithm starts with the initial estimate of $x(0)$ and the associated initial covariance $P(0|0)$,

$$\hat{x}(0|0) = E\left[ x(0) \mid Z^0 \right],$$

assumed to be available. The second (conditioning) index 0 stands for $Z^0$, the initial information. One cycle of the dynamic estimation algorithm - the Kalman filter (KF) - will thus consist of the computations to obtain the estimate

$$\hat{x}(k|k) = E\left[ x(k) \mid Z^k \right],$$

which is the conditional mean of the state at time $k$ (the current stage) given the observations up to and including time $k$, and the associated covariance matrix

$$P(k|k) = E\left[ \left( x(k) - \hat{x}(k|k) \right)\left( x(k) - \hat{x}(k|k) \right)^T \mid Z^k \right].$$
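For illustration, one cycle of the KF described above can be written as the following minimal numpy sketch. The matrices F, Q, H, and R are assumed to be supplied by whatever movement model is in use; this is a generic textbook implementation rather than the disclosed estimator itself.

```python
import numpy as np

def kalman_cycle(x_est, P_est, z, F, Q, H, R):
    """One cycle of the Kalman filter: predict x(k|k-1), P(k|k-1) from the
    dynamic model, then update to x(k|k), P(k|k) with the measurement z(k)."""
    # Prediction (time update)
    x_pred = F @ x_est
    P_pred = F @ P_est @ F.T + Q

    # Measurement update
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    innovation = z - H @ x_pred
    x_new = x_pred + K @ innovation
    P_new = (np.eye(len(x_est)) - K @ H) @ P_pred
    return x_new, P_new
```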
Although Kalman filters may be used to model a movement state of some vehicles, such as surface vehicles and aircraft, a conventional Kalman filter is inadequate to model the state of an aerial UE such as a drone. More specifically, drones have very specific modes of movement that need to be reflected by the optimal estimator applied for measurement processing. Even so, there are various methods for state estimation of an object, such as a drone, that has multiple dynamic movement modes.
A general technique for performing such estimation is based on the joint probability distribution of the object's state. In general, propagation of the object's state forward in time is governed by the Fokker-Planck partial differential equation. The measurement processing is performed by a multi-dimensional integration to obtain the posterior probability state distribution from the likelihood of the measurement and the prior probability distribution. This process is more generally referred to as Bayesian inference. In general, however, implementation can be very complex in terms of computational and memory requirements. Bayesian inference methods can be simplified to some degree by approximation as "particle filters" in which the probability density functions are discretized as "particles". Even so, implementation of particle filtering can be very complex.
As an extreme simplification, each of the object’s movement modes can be modeled and estimated separately, with ad-hoc logic used to select the movement mode applicable at any given time. For example, two movement modes can be used for estimating the state of a conventional air vehicle: a constant velocity mode (i.e., straight line movement) and a maneuver mode, which can respond to measurements with much higher agility than the constant velocity mode. A maneuver detector can select the maneuver mode if it is deemed to better match incoming measurements than the constant velocity mode. After a maneuver is terminated, a re-initialized constant velocity mode can be used for state estimation. One problem, issue, and/or difficulty with this approach is selection of appropriate threshold values for the maneuver detector.
Another approach to the multi-movement-mode state estimation problem is the interacting multiple model (IMM) filter. The IMM algorithm assumes that the system behaves according to one of a finite number of models. These models can differ in noise levels and/or structure, such as having different state dimensions and unknown inputs. In the IMM approach, at time k the state estimate is computed for each possible model using r filters, with each filter using a different combination of the previous model-conditioned estimates, so-called "mixed initial conditions".
Figure 14 shows a block diagram that illustrates one cycle of operation of an IMM algorithm that includes r interacting filters operating in parallel. The mixing is done at the input of the filters with the probabilities, conditioned on data Zk-1. The structure of the IMM algorithm is given by:
(Ne; Nf) = (r; r), where Ne is the number of estimates at the start of the cycle of the algorithm and Nf is the number of filters. One cycle of the algorithm includes the following operations:
1. Calculation of the mixing probabilities ($i,j = 1, \ldots, r$). The probability that mode $M_i$ was in effect at time $k-1$, given that $M_j$ is in effect at $k$, conditioned on $Z^{k-1}$, is:

$$\mu_{i|j}(k-1 \mid k-1) = \frac{1}{\bar{c}_j}\, p_{ij}\, \mu_i(k-1),$$

where the normalizing constants are given by the below equation, which uses the mode transition probabilities $p_{ij}$, i.e., the probability that the state-estimated object is in mode $j$ at time $k$, conditioned on being in mode $i$ at time $k-1$. The expression for the normalizing constant is:

$$\bar{c}_j = \sum_{i=1}^{r} p_{ij}\, \mu_i(k-1).$$

2. Mixing ($j = 1, \ldots, r$). Starting with $\hat{x}^i(k-1 \mid k-1)$, one computes the mixed initial condition for the filter matched to $M_j(k)$ as:

$$\hat{x}^{0j}(k-1 \mid k-1) = \sum_{i=1}^{r} \hat{x}^i(k-1 \mid k-1)\, \mu_{i|j}(k-1 \mid k-1).$$

The covariance corresponding to the above is given by:

$$P^{0j}(k-1 \mid k-1) = \sum_{i=1}^{r} \mu_{i|j}(k-1 \mid k-1) \left\{ P^i(k-1 \mid k-1) + \left[ \hat{x}^i(k-1 \mid k-1) - \hat{x}^{0j}(k-1 \mid k-1) \right] \left[ \hat{x}^i(k-1 \mid k-1) - \hat{x}^{0j}(k-1 \mid k-1) \right]^T \right\}.$$

3. Mode-matched filtering ($j = 1, \ldots, r$). The estimate $\hat{x}^{0j}(k-1 \mid k-1)$ and the covariance $P^{0j}(k-1 \mid k-1)$ obtained in step 2 are used as input to the filter matched to $M_j(k)$, which uses $z(k)$ to yield $\hat{x}^j(k \mid k)$ and $P^j(k \mid k)$. The likelihood function corresponding to the $r$ filters,

$$\Lambda_j(k) = p\left[ z(k) \mid M_j(k),\, Z^{k-1} \right],$$

is computed using the mixed initial condition and the associated covariance as:

$$\Lambda_j(k) = p\left[ z(k) \mid M_j(k),\, \hat{x}^{0j}(k-1 \mid k-1),\, P^{0j}(k-1 \mid k-1) \right].$$

4. Model probability update ($j = 1, \ldots, r$). This is done according to:

$$\mu_j(k) = \frac{1}{c}\, \Lambda_j(k)\, \bar{c}_j,$$

where $\bar{c}_j$ is given above and

$$c = \sum_{j=1}^{r} \Lambda_j(k)\, \bar{c}_j$$

is the normalization factor.

5. Estimate and covariance combination. Combination of the model-conditioned estimates and covariances is done according to the mixture equations:

$$\hat{x}(k \mid k) = \sum_{j=1}^{r} \hat{x}^j(k \mid k)\, \mu_j(k),$$

$$P(k \mid k) = \sum_{j=1}^{r} \mu_j(k) \left\{ P^j(k \mid k) + \left[ \hat{x}^j(k \mid k) - \hat{x}(k \mid k) \right] \left[ \hat{x}^j(k \mid k) - \hat{x}(k \mid k) \right]^T \right\}.$$
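For illustration, the five steps above can be collected into the following schematic numpy sketch of one IMM cycle. It embeds a standard Kalman measurement update in step 3, assumes Gaussian likelihoods, and assumes (for simplicity) that all r models share a common state dimension, so it illustrates the standard IMM recursion rather than the exact disclosed implementation.

```python
import numpy as np

def gaussian_likelihood(innovation, S):
    """Likelihood of a zero-mean Gaussian innovation with covariance S."""
    d = len(innovation)
    norm = np.sqrt((2 * np.pi) ** d * np.linalg.det(S))
    return float(np.exp(-0.5 * innovation @ np.linalg.solve(S, innovation)) / norm)

def imm_cycle(x_list, P_list, mu, p_trans, z, models):
    """One IMM cycle. x_list/P_list hold the r model-conditioned estimates and
    covariances at time k-1, mu the model probabilities, p_trans the mode
    transition matrix, z the measurement, and models a list of (F, Q, H, R)."""
    r = len(models)
    # Step 1: mixing probabilities mu_{i|j} and normalizing constants c_bar_j
    c_bar = mu @ p_trans
    mix = (p_trans * mu[:, None]) / c_bar[None, :]

    # Step 2: mixed initial conditions for each filter
    x0, P0 = [], []
    for j in range(r):
        xj = sum(mix[i, j] * x_list[i] for i in range(r))
        Pj = sum(mix[i, j] * (P_list[i] + np.outer(x_list[i] - xj, x_list[i] - xj))
                 for i in range(r))
        x0.append(xj)
        P0.append(Pj)

    # Steps 3-4: mode-matched filtering, likelihoods, probability update
    x_new, P_new, lik = [], [], np.zeros(r)
    for j, (F, Q, H, R) in enumerate(models):
        x_pred = F @ x0[j]
        P_pred = F @ P0[j] @ F.T + Q
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        nu = z - H @ x_pred
        x_new.append(x_pred + K @ nu)
        P_new.append((np.eye(len(x_pred)) - K @ H) @ P_pred)
        lik[j] = gaussian_likelihood(nu, S)
    mu_new = lik * c_bar
    mu_new /= mu_new.sum()

    # Step 5: combined estimate and covariance
    x_comb = sum(mu_new[j] * x_new[j] for j in range(r))
    P_comb = sum(mu_new[j] * (P_new[j] + np.outer(x_new[j] - x_comb, x_new[j] - x_comb))
                 for j in range(r))
    return x_new, P_new, mu_new, x_comb, P_comb
```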
Although the above-described IMM algorithm can be useful for state estimation of systems that behave according to a finite number of movement modes, existing applications of it have several problems, issues, and/or difficulties with respect to state estimation for aerial UEs or drones. For example, IMM has not been applied to handle the unique drone movement mode of hovering, together with conventional flight modes like constant-velocity or straight-line motion and maneuvering. As another example, conventional state estimation techniques, including IMM, do not consider that drones are not capable of instant deceleration to hovering in place, such as by limiting switching between constant velocity and hovering models.
Furthermore, in the context of UTDOA/OTDOA positioning, conventional state estimation techniques do not fuse Cartesian TDOA measurements and barometric altitude measurements to address the poor vertical GDOP caused by all involved entities - drone and base stations - being approximately in the same horizontal plane. Likewise, the 3GPP LCS architecture currently lacks signaling functionality needed to distribute estimated drone state (e.g., position, velocity, etc.) and information derived therefrom to CN and RAN nodes, in order to facilitate corrective solutions to rogue drone behavior.
Exemplary embodiments of the present disclosure can address these and other problems, issues, and/or difficulties by providing a novel technique for estimating movement of an aerial UE (e.g., drone), in a wireless network, based on a plurality of movement models. Other embodiments include signaling techniques for distribution of drone state estimate information and information derived therefrom, to RAN nodes, CN nodes, and/or external LCS clients. Other embodiments include a network architecture for drone state estimation, such that the state estimator can reside in and/or be associated with a positioning node or function in a 3GPP network (e.g., E-SMLC, SLP, LMF), or reside in a node or function external to the 3GPP network.
Design of an IMM filter requires three primary choices or decisions. The first choice is selection and definition of movement modes for the object whose state is to be estimated. This amounts to definition of a state space model for each movement mode. This can include a vector difference equation that defines the model dynamics and a static vector equation that defines the measurement relation, thereby mapping states to measurements. In addition, the inaccuracies or uncertainties of the measurement relation and the model dynamics need to be specified in terms of respective covariance matrices.
The second choice is the definition of transition probabilities between the movement modes and/or models. This can be based on a hidden Markov model that describes how the modes interact, expressed in terms of the probabilities of a mode transition of the object between two discrete instances of time. The third choice is selection of initial conditions of the IMM filter, e.g., the expected initial state and covariance of each model.
Exemplary embodiments of the present disclosure include novel approaches to these choices. In particular, embodiments include an IMM filter with a new combination of movement models and a new restricted mode transition probability, both of which are specifically adapted to the hovering capabilities of drones. Furthermore, some embodiments can include fusion of Cartesian positioning measurements (e.g., OTDOA or UTDOA) with a one-dimensional altitude measurement obtained from a barometric pressure sensor in the drone or from a topographical model of a region covered by the wireless network in which the drone is operating.
Figure 15 is a block diagram illustrating the architecture of an exemplary aerial UE (or drone) state estimation system, according to various exemplary embodiments of the present disclosure. In particular, Figure 15 shows a drone 1510 that communicates with a RAN 1520 that includes three network nodes (labelled 1521-1523). The three network nodes may make positioning measurements on UL signals (e.g., SRS) transmitted by drone 1510, and/or transmit PRS or other signals to facilitate positioning measurements by drone 1510. In either case, noisy measurements are provided by (or via) RAN 1520 to a state estimator 1530, which operates on them according to techniques described in more detail below to produce state estimates at time instance $k$ (denoted $\hat{x}^j(k \mid k)$, $j = 1, \ldots, r$) that correspond to multiple drone movement modes. These are input to drone detector 1540, which operates on this information to produce a conditional drone probability metric.
Some embodiments include a multi-mode movement model that includes individual movement models for the following three modes:
1. 3D (almost) constant velocity movement Wiener process;
2. 3D (almost) constant acceleration movement Wiener process; and
3. 3D (almost hovering) constant position Wiener process.
The following discussion of these models assumes continuous time values. In practice, however, the models need to be discretized for computerized implementation. Accordingly, given a continuous time Wiener process

$$\dot{x}(t) = A x(t) + B w(t),$$

the corresponding discrete time state equation after sampling with the period $T$ is given by

$$x(kT + T) = F x(kT) + v(kT), \qquad F = e^{AT},$$

where the discretized process noise covariance is given by

$$Q = \int_{0}^{T} e^{A\tau} B\, Q_c\, B^T e^{A^T \tau}\, d\tau,$$

with $Q_c$ denoting the covariance (spectral density) of the continuous time process noise $w(t)$.
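One common way to carry out this discretization numerically is Van Loan's matrix-exponential construction. The following sketch (an implementation assumption, not text from the disclosure) computes F and the discretized Q from the continuous-time triple (A, B, Qc) using scipy.

```python
import numpy as np
from scipy.linalg import expm

def discretize(A, B, Qc, T):
    """Discretize dx/dt = A x + B w, with process noise spectral density Qc,
    to x(k+1) = F x(k) + v(k), cov(v) = Q, via Van Loan's method."""
    n = A.shape[0]
    # Build the Van Loan block matrix and exponentiate it.
    M = np.zeros((2 * n, 2 * n))
    M[:n, :n] = -A
    M[:n, n:] = B @ Qc @ B.T
    M[n:, n:] = A.T
    E = expm(M * T)
    F = E[n:, n:].T                # state transition matrix e^{A T}
    Q = F @ E[:n, n:]              # discretized process noise covariance
    return F, Q
```

For the constant velocity model described next, A and B are the block matrices shown there and Qc is the corresponding process noise covariance.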
It is assumed below that all continuous time equations are discretized in this manner before applying the IMM filter. In some embodiments, the continuous-time state space constant velocity model can be described using the states

$$x^{(1)}(t) = \begin{pmatrix} p_1(t) & p_2(t) & p_3(t) & v_1(t) & v_2(t) & v_3(t) \end{pmatrix}^T,$$

where the subscripts are associated with the three respective Cartesian coordinate directions. In such embodiments, the model is given by:

$$\dot{x}^{(1)}(t) = \begin{pmatrix} 0_{3\times3} & I_3 \\ 0_{3\times3} & 0_{3\times3} \end{pmatrix} x^{(1)}(t) + \begin{pmatrix} 0_{3\times3} \\ I_3 \end{pmatrix} w^{(1)}(t).$$

The process noise covariance matrix is given by:

$$Q_{c1} = \mathrm{diag}\left( \begin{bmatrix} q_{1,1} & q_{1,2} & q_{1,3} \end{bmatrix} \right),$$

where $q_{1,i}$, $i = 1, \ldots, 3$, are the process noise variances. Furthermore, the measurement model is assumed to be linear for TDOA measurements, i.e.:

$$z(k) = H_1 x^{(1)}(k) + e_1(k), \qquad H_1 = \begin{pmatrix} I_3 & 0_{3\times3} \end{pmatrix}.$$

Similarly, the measurement noise covariance is given by:

$$R_1 = \mathrm{diag}\left( \begin{bmatrix} r_1 & r_1 & r_1 \end{bmatrix} \right),$$

where $r_1$ is the measurement noise variance, assumed to be constant in all dimensions. In other embodiments, the third element of $R_1$ may be selected larger than the others to reflect that altitude inaccuracy is larger than the horizontal inaccuracies.
In case of barometric pressure, the measurement is one-dimensional and expressed with the linear measurement matrix (assuming the measurement has been transformed to altitude)

$$H_1 = \begin{pmatrix} 0 & 0 & 1 & 0 & 0 & 0 \end{pmatrix}.$$

There is also a small variance associated with this positioning measurement. The measurement may be integrated by augmentation of the measurement matrix $H_1$ and covariance $R_1$, or it can be handled in a separate measurement update. In case the OTDOA measurements do not contain altitude information, the barometric measurement is needed.
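As an illustration of the augmentation option just mentioned, the barometric altitude row can be stacked onto the TDOA measurement model of the constant velocity state. The following sketch uses placeholder variance values and is not a normative definition.

```python
import numpy as np

# TDOA measurement model for the constant velocity state [p1 p2 p3 v1 v2 v3]^T
H1 = np.hstack([np.eye(3), np.zeros((3, 3))])   # picks out the 3-D position
r1 = 100.0                                       # placeholder variance [m^2]
R1 = np.diag([r1, r1, r1])

# Barometric altitude (already converted to meters) observes only p3.
H1_baro = np.array([[0, 0, 1, 0, 0, 0]], dtype=float)
r_baro = 4.0                                     # placeholder variance [m^2]

# Augmented measurement model: z = [z_TDOA; z_baro]
H1_aug = np.vstack([H1, H1_baro])
R1_aug = np.block([[R1, np.zeros((3, 1))],
                   [np.zeros((1, 3)), np.array([[r_baro]])]])
print(H1_aug.shape, R1_aug.shape)   # (4, 6) (4, 4)
```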
In some embodiments, the continuous time state space constant acceleration model can be defined using the states

$$x^{(2)}(t) = \begin{pmatrix} p_1(t) & p_2(t) & p_3(t) & v_1(t) & v_2(t) & v_3(t) & a_1(t) & a_2(t) & a_3(t) \end{pmatrix}^T,$$

where the subscripts are associated with the three respective Cartesian coordinate directions. The constant acceleration model can be expressed as:

$$\dot{x}^{(2)}(t) = \begin{pmatrix} 0_{3\times3} & I_3 & 0_{3\times3} \\ 0_{3\times3} & 0_{3\times3} & I_3 \\ 0_{3\times3} & 0_{3\times3} & 0_{3\times3} \end{pmatrix} x^{(2)}(t) + \begin{pmatrix} 0_{3\times3} \\ 0_{3\times3} \\ I_3 \end{pmatrix} w^{(2)}(t),$$

and the process noise covariance is given by:

$$Q_{c2} = \mathrm{diag}\left( \begin{bmatrix} q_{2,1} & q_{2,2} & q_{2,3} \end{bmatrix} \right),$$

where $q_{2,i}$, $i = 1, \ldots, 3$, are the process noise variances. Furthermore, the measurement model can be assumed to be linear for TDOA measurements, such that:

$$z(k) = H_2 x^{(2)}(k) + e_2(k),$$

with the state measurement matrix given by:

$$H_2 = \begin{pmatrix} I_3 & 0_{3\times3} & 0_{3\times3} \end{pmatrix}$$
and measurement covariance given by:
R2 = diag([r2 r2 r2]).
Although the covariance above is assumed to be constant in all dimensions, the third element of
R2 may be selected larger than the others to reflect that altitude inaccuracy is larger than the horizontal inaccuracies.
In case of barometric pressure, the measurement is one-dimensional and expressed with the linear measurement matrix (assuming the measurement has been transformed to altitude)

$$H_2 = \begin{pmatrix} 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix}.$$

There is also a small variance associated with this measurement. The measurement may be integrated by augmentation of the measurement matrix $H_2$ and covariance $R_2$, or it could be handled in a separate measurement update. In case the OTDOA measurements do not contain altitude information, the barometric measurement is needed.
In some embodiments, the continuous-time state space constant position (or hovering) model is defined by the states:

$$x^{(3)}(t) = \begin{pmatrix} p_1(t) & p_2(t) & p_3(t) \end{pmatrix}^T,$$

where the subscripts are associated with the three respective Cartesian coordinate directions. The constant position model can be expressed as:

$$\dot{x}^{(3)}(t) = w^{(3)}(t),$$

and the process noise covariance can be expressed as:

$$Q_{c3} = \mathrm{diag}\left( \begin{bmatrix} q_{3,1} & q_{3,2} & q_{3,3} \end{bmatrix} \right),$$

where $q_{3,i}$, $i = 1, \ldots, 3$, are the process noise variances. Similar to the other models discussed above, the measurement model can be assumed to be linear for TDOA measurements, such that

$$z(k) = H_3 x^{(3)}(k) + e_3(k), \qquad H_3 = I_3,$$

and the measurement noise covariance is given by:

$$R_3 = \mathrm{diag}\left( \begin{bmatrix} r_3 & r_3 & r_3 \end{bmatrix} \right),$$

where $r_3$ is the measurement noise variance that is assumed constant across all dimensions but can be adjusted such that the third element is larger than the others to reflect that altitude inaccuracy is larger than the horizontal inaccuracies. Any available barometric pressure measurements can be handled in a similar manner as discussed above in relation to the other models.
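To tie the three models together, the sketch below constructs illustrative continuous-time matrices (A, B, Qc, H, R) for the constant velocity, constant acceleration, and constant position models. The noise variances are placeholders, and each (A, B, Qc) triple can then be passed to the discretization routine sketched earlier to obtain the (F, Q) pairs used by the IMM filter.

```python
import numpy as np

I3, Z3 = np.eye(3), np.zeros((3, 3))

def constant_velocity_model(q=1.0, r=100.0):
    A = np.block([[Z3, I3], [Z3, Z3]])     # positions driven by velocities
    B = np.vstack([Z3, I3])                # noise enters on the velocities
    Qc = q * I3
    H = np.hstack([I3, Z3])                # TDOA observes the 3-D position
    R = r * I3
    return A, B, Qc, H, R

def constant_acceleration_model(q=1.0, r=100.0):
    A = np.block([[Z3, I3, Z3], [Z3, Z3, I3], [Z3, Z3, Z3]])
    B = np.vstack([Z3, Z3, I3])            # noise enters on the accelerations
    Qc = q * I3
    H = np.hstack([I3, Z3, Z3])
    R = r * I3
    return A, B, Qc, H, R

def constant_position_model(q=1.0, r=100.0):
    A = Z3                                 # hovering: position driven only by noise
    B = I3
    Qc = q * I3
    H = I3
    R = r * I3
    return A, B, Qc, H, R
```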
Another novel aspect of embodiments disclosed herein is related to the physics of drone movement. When a drone is in constant velocity movement, it cannot stop immediately but must decelerate for a finite duration. In this case, the sequence of mode transitions is from mode 1 to mode 2 to mode 3. In other words, a direct transition from mode 1 to mode 3 is forbidden. This is reflected by new constraints in the mode transition probability matrix, namely in

$$P = \begin{pmatrix} p_{11} & p_{12} & p_{13} \\ p_{21} & p_{22} & p_{23} \\ p_{31} & p_{32} & p_{33} \end{pmatrix}.$$

The new restrictions are selected as $p_{13} \le \varepsilon_{13}$ and $p_{31} \le \varepsilon_{31}$, where $\varepsilon_{13}$ and $\varepsilon_{31}$ are both much smaller than 1.
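The restriction can be encoded directly when the mode transition probability matrix is constructed, as in the following sketch; the numerical values are placeholders chosen only so that p13 and p31 are much smaller than 1 and each row sums to one.

```python
import numpy as np

eps13, eps31 = 1e-3, 1e-3   # placeholder values, both much smaller than 1

# Rows: current mode (1 = constant velocity, 2 = constant acceleration,
# 3 = constant position / hovering); columns: next mode. Each row sums to 1,
# and direct switching between modes 1 and 3 is (almost) forbidden.
P_trans = np.array([
    [0.90,  0.10 - eps13, eps13],
    [0.10,  0.80,         0.10],
    [eps31, 0.10 - eps31, 0.90],
])
assert np.allclose(P_trans.sum(axis=1), 1.0)
```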
The following example scenario is provided to illustrate operation of the embodiments described above. This exemplary scenario involves the following operations by a drone whose state needs to be estimated.
• Drone starts at initial position [0 0 0] with the initial velocity [0 0 -10];
• Drone continues with constant velocity for 50s (mode 1);
• Drone performs coordinated turn for 20s (mode 2);
• Drone hovers for 20s (mode 3);
• Drone does a second coordinated turn for 10s (mode 2);
• Drone continues with a constant velocity for 40s (mode 1);
• Drone does a coordinated turn for 20s (mode 2); and
• Drone hovers for 40s (mode 3).
Furthermore, the following model parameters are assumed:
• Time interval T = 1 second and n = 200 discrete time steps;
• Measurement noise variances r1 = r2 = r3 (Figure imgf000036_0001);
• Process noise variances for the constant velocity, acceleration, and position models (Figure imgf000036_0002);
• IMM transition probability matrix (Figure imgf000036_0003); and
• Initial conditions (Figure imgf000036_0004).
Figures 16-17 show various results of a simulation of a drone state estimator configured according to the exemplary scenario described above. In particular, Figure 16 shows position measurements at intervals T=1 second overlaid with true and estimated (i.e., filtered) states (i.e., positions) of the drone for the duration of interest. Note that the estimated positions are very accurate, such that it is difficult to distinguish between true and estimated positions in the graph. Figure 17 illustrates the estimated (i.e., filtered) probabilities for the three models over the duration of interest. These are overlaid with the actual (i.e., true) model probabilities, which at any given time are 1.0 for one model (i.e., for the actual drone movement) and 0.0 for the other two models. Figure 17 shows that the estimated probabilities track the actual drone movement patterns very well.
Figure 18 illustrates various exemplary embodiments related to 3GPP LCS network architecture and signaling associated with drone state estimation. In particular, Figure 18 shows conventional interfaces and protocols between a UE 1810, a RAN 1890 (which includes a gNB 1820 and an eNB 1850), and a CN 1895 (which includes AMF 1830, LMF 1840, MME 1860, and E-SMLC 1870). All of these are discussed above in relation to Figures 3 and 7. Figure 18 also shows an external client 1880 that can communicate with the network via AMF 1830 and/or MME 1860. In addition, Figure 18 shows various alternatives for placement of a drone state estimator within the exemplary architecture. In this case, the drone state estimator is labelled with the same reference number (1530) as in Figure 15.
In some embodiments, state estimator 1530 can be included within a positioning node or function in the network, such as LMF 1840 and/or E-SMLC 1870. The positioning node can receive positioning measurement requests from the drone state estimator and set up repeated measurements of OTDOA or UTDOA by communication over LPPa (to eNB) or NRPPa (to gNB). The positioning node can also set up barometric measurements using LPP communication. The positioning node receives OTDOA-based positions or positioning measurements (and optionally barometric pressure measurements) from the UE over LPP. The positioning node may also receive UTDOA positions or positioning measurements over LPPa (from eNB) or NRPPa (from gNB). The positioning node may then, if not done in the UE, compute OTDOA and UTDOA positions and provide these to the drone state estimator. These computed positions are referred to as "position measurements." All of this signaling can be handled by conventional LCS protocols, except for any new intra-node signaling for the drone state estimator within the positioning node.
The drone state estimator can produce an estimated state vector based on the received position measurements and the respective multi -mode movement models. This state information can be useful for example for interference mitigation, since aerial UEs create more interference at higher altitudes than conventional ground based UEs. Therefore, a new STATE ESTIMATE INFORMATION message can be defined for LPPa and/or NRPPa, whereby the drone state estimator can send the estimated state information to eNBs and/or gNBs for use in interference mitigation.
The STATE ESTIMATE INFORMATION may also be used to prevent UE penetration of restricted airspace. That would require that STATE ESTIMATE INFORMATION be sent to an external client, e.g., via AMF 1830 or MME 1860. A new STATE ESTIMATE INFORMATION message can be defined for LCS-AP and/or NLs interface for this purpose, along with any corresponding new message(s) required between AMF/MME and the external client.
An exemplary STATE ESTIMATE INFORMATION can include a state estimate identifier, a drone identity, a time (or duration) when the STATE ESTIMATE INFORMATION is valid, and a state estimate. The state estimate can include a 3D position and, in some embodiments, a 3D velocity. The information may also include ground altitude information, as discussed above. Alternatively, the ground altitude information could be subtracted from the state estimate information to obtain the altitude above ground, signaled in the STATE ESTIMATE INFORMATION. That may require an additional indicator of whether ground altitude information is subtracted from the state estimate included in the message.
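Purely for illustration, the contents of such a message could be collected in a structure like the following; the field names and types are hypothetical and do not correspond to any standardized information elements.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class StateEstimateInformation:
    """Illustrative container for the STATE ESTIMATE INFORMATION contents
    described above; field names are hypothetical, not standardized IEs."""
    state_estimate_id: int
    drone_identity: str
    valid_at: float                                   # time (or start of duration) of validity
    position_m: Tuple[float, float, float]            # 3-D position estimate
    velocity_mps: Optional[Tuple[float, float, float]] = None  # optional 3-D velocity
    ground_altitude_m: Optional[float] = None         # optional ground altitude information
    altitude_above_ground: bool = False               # True if ground altitude was subtracted
```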
Figure 18 also shows a second option in which state estimator 1530 is located within CN 1895 but is separate from LMF 1840 and/or E-SMLC 1870. For example, a new network function can be defined for drone state estimation. Other than a new interface between state estimator 1530 and LMF 1840 and/or E-SMLC 1870, this option can operate in a substantially identical manner as the first option discussed above.
Figure 18 also shows a third option in which state estimator 1530 is in an external "cloud" together with external client 1880. In this option, state estimator 1530 can communicate with CN 1895 via AMF 1830 and/or MME 1860, essentially operating as another external LCS client. Furthermore, the state estimator can also function as a "server" to external client 1880, providing drone state estimate information upon request from external client 1880. Various signaling protocols can be defined for this purpose.
In this third option, the positioning node (e.g., LMF 1840 and/or E-SMLC 1870) receives a positioning request from the external drone state estimator and sets up OTDOA and/or UTDOA positioning measurements via LPPa/NRPPa, and/or barometric pressure measurements via LPP. The positioning node receives positions or measurements in a similar manner as described above for the first option and computes a drone position measurement (if not provided by the UE). The positioning node can provide the position measurement to the external drone state estimator to use as a measurement input for computing the drone state estimate, which it can provide to the requesting external client, e.g., via the STATE ESTIMATE INFORMATION message discussed above. Communication between the drone state estimator and the external client can utilize various well-known protocols such as IP, TCP, Hypertext Transfer Protocol (HTTP), etc.
In some embodiments of the third option, the external drone state estimator can provide the drone state estimate to the RAN to facilitate interference reduction, as discussed above. In such embodiments, new messages can be defined to carry the drone state estimate to the LMF via the AMF and NLs, and/or to the E-SMLC via the MME and LCS-AP. The LPPa and/or NRPPa protocols can also be updated to carry such information to the gNBs and/or eNBs, in a similar manner as described above in relation to the first option.
The embodiments described above can be further illustrated with reference to Figure 19, which depicts an exemplary method (e.g., procedure) for estimating movement of an aerial user equipment (UE, e.g., drone) in a wireless network based on a plurality of movement models, according to various exemplary embodiments of the present disclosure. The exemplary method shown in Figure 19 can be implemented in a positioning node (e.g., E-SMLC, SLP, LMF, etc.) in a wireless network (e.g., EPC, 5GC). Alternately, the exemplary method can be implemented in a different type of node within the wireless network, or in a cloud environment outside of the wireless network. In the following description, the term “node” is used generically to refer to any computing entity that can perform the operations of the method. Although Figure 19 shows specific blocks in a particular order, the operations corresponding to the blocks can be performed in a different order than shown and can be combined and/or divided into blocks having different functionality than shown. Optional operations are indicated by dashed lines.
The exemplary method can include the operations of block 1910, in which the node can obtain, from the wireless network, position measurements for the aerial UE at a second time subsequent to a first time. For example, the first and second times can be periodic (e.g., one-second) sampling intervals for the position measurements. The exemplary method can also include the operations of block 1920, in which the node can determine a state of an interacting multiple-model (IMM) for movement of the aerial UE at the second time based on the position measurements. The IMM model can include a first constant-velocity model, a second constant-acceleration model, a third constant-position model, and estimated probabilities associated with the respective models. The exemplary method can also include the operations of block 1930, in which the node can determine a movement state for the aerial UE at the second time based on combining the models according to their estimated probabilities. In some embodiments, the IMM can also include a Hidden Markov Model (HMM) comprising respective probabilities of transition from any of the models at the first time to any of the models at the second time. Exemplary HMMs are discussed in more detail above. In some of these embodiments, the following probabilities of transition can be much smaller than one:
• from the first model at the first time to the third model at the second time, and
• from the third model at the first time to the first model at the second time.
Furthermore, in some of these embodiments, the above-listed probabilities of transition can be much smaller than all other probabilities of transition comprising the HMM model.
In some embodiments, determining the state of the IMM (e.g., block 1920) can include the operations of sub-blocks 1921-1923. In sub-block 1921, the node can determine a mixed initial condition associated with each model based on the states of the models at the first time and the HMM. In sub-block 1922, the node can determine a likelihood function associated with each model based on the associated mixed initial condition and the position measurements. In sub-block 1923, the node can, for each model (i.e., first, second, and third models), determine the associated estimated probability at the second time based on the associated likelihood function and an associated estimated probability at the first time. Exemplary operations corresponding to sub-blocks 1921-1923 are described in detail above with reference to Figures 14-16.
In some embodiments, determining the state of the IMM (e.g., block 1920) can also include the operations of sub-block 1924, where the node can determine states for the respective models at the second time based on the position measurements and corresponding states for the respective models at the first time. In some embodiments, the states of the respective models can be determined in block 1924 using respective Kalman filters. Furthermore, determining the movement state for the aerial UE (e.g., block 1930) can include the operations of sub-block 1931, where the node can combine the states of the respective models at the second time according to their respective estimated probabilities at the second time. Exemplary operations corresponding to sub-blocks 1924 and 1931 are described in detail above with reference to Figures 14-16.
In various embodiments, the position measurements obtained in block 1910 can include any of the following:
• two-dimensional position measurement based on signals transmitted by the wireless network, by the aerial UE, or by a global navigation satellite system (GNSS);
• three-dimensional position measurement based on signals transmitted by the wireless network, by the aerial UE, or by the GNSS;
• altitude measurement based on a topographical model of a region covered by the wireless network; and
• altitude measurement based on barometric pressure.
In various embodiments, the movement state for the aerial UE (e.g., determined in block 1930) can include one or more of the following:
• three-dimensional position relative to mean sea level;
• three-dimensional position relative to local ground level;
• two-dimensional position;
• three-dimensional velocity; and
• two-dimensional velocity.
In some embodiments, the exemplary method can be performed by one of the following positioning nodes associated with the wireless network: E-SMLC, secure user plane location (SUPL) location platform (SLP), or location management function (LMF). In other embodiments, the exemplary method can be performed by a location services (LCS) client external to the wireless network. In these embodiments, obtaining the position measurements (e.g., in block 1910) can include the operations of sub-blocks 1911-1912. In sub-block 1911, the external LCS client can send, to a positioning node associated with the wireless network, a request for the position measurements (i.e., of the aerial UE). For example, the positioning node can be an E-SMLC, SLP, or LMF. In sub-block 1912, the external LCS client can receive the requested position measurements from the positioning node.
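For the external-client case, the interaction of sub-blocks 1911-1912 can be pictured roughly as follows; the class, its methods, and the message contents are entirely hypothetical placeholders and do not correspond to any standardized LCS or positioning-protocol interface.

class ExternalLcsClient:
    """Hypothetical external LCS client; the real interface towards an E-SMLC, SLP,
    or LMF is defined by the applicable positioning protocols and is only sketched here."""

    def __init__(self, positioning_node):
        self.positioning_node = positioning_node   # stub representing an E-SMLC, SLP, or LMF

    def obtain_position_measurements(self, ue_id):
        request = {"ue": ue_id, "quantity": "position"}   # sub-block 1911: send the request
        self.positioning_node.send(request)
        return self.positioning_node.receive()            # sub-block 1912: receive the measurements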
In some embodiments, the exemplary method can also include the operations of block 1940, where the node can send the determined movement state to one or more of the following: a base station serving the aerial UE in the wireless network, or a location services (LCS) client external to the wireless network. In some embodiments, the movement state can be sent together with an identifier of the movement state, an identifier of the aerial UE, and an identifier of the second time. The STATE ESTIMATE INFORMATION message discussed above is an example of such embodiments.
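As one way of visualising such a report, the following dataclass sketch bundles a movement state with the three identifiers mentioned above; the field names and types are hypothetical and are not taken from the STATE ESTIMATE INFORMATION message definition.

from dataclasses import dataclass
import numpy as np

@dataclass
class MovementStateReport:
    """Hypothetical container mirroring the information elements described above."""
    movement_state_id: str    # identifier of the movement state (e.g., which quantities are included)
    ue_id: str                # identifier of the aerial UE
    time_id: float            # identifier of the second time (e.g., a timestamp)
    position: np.ndarray      # e.g., 3D position relative to mean sea level or local ground level
    velocity: np.ndarray      # e.g., 3D or 2D velocity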
Although various embodiments are described above in terms of methods, techniques, and/or procedures, the person of ordinary skill will readily comprehend that such methods, techniques, and/or procedures can be embodied by various combinations of hardware and software in various systems, communication devices, computing devices, control devices, apparatuses, non-transitory computer-readable media, computer program products, etc.
Figure 20 is a schematic block diagram illustrating a virtualization environment 2000 suitable for implementing various embodiments of an aerial UE (or drone) state estimator described herein. In other words, the functions described herein as being performed by an aerial UE (or drone) state estimator can be performed by the hardware and software of virtualization environment 2000, described below. For example, virtualization environment 2000 can be used to implement a positioning node such as an E-SMLC, SLP, or LMF, or it can be deployed as “cloud” infrastructure that hosts one or more aerial UE state estimator applications.
For example, some or all of the functions described herein can be implemented as virtual components executed by one or more virtual machines implemented in virtualization environment 2000 hosted by one or more of hardware nodes 2030. Such hardware nodes can be computing machines arranged in a cluster (e.g., in a data center or in customer premises equipment, CPE) where many hardware nodes work together and are managed via management and orchestration (MANO) 20100, which, among other functions, oversees lifecycle management of applications 2040. In some embodiments, however, such virtual components can be executed by one or more physical computing machines, e.g., without (or with less) virtualization of the underlying resources of hardware nodes 2030.
Hardware nodes 2030 can include processing circuitry 2060 and memory 2090. Memory 2090 contains instructions 2095 executable by processing circuitry 2060, whereby application 2040 can be operative for various features, functions, procedures, etc. of the embodiments disclosed herein. Processing circuitry 2060 can include general-purpose or special-purpose hardware devices such as one or more processors (e.g., custom and/or commercial off-the-shelf), dedicated Application Specific Integrated Circuits (ASICs), or any other type of processing circuitry including digital or analog hardware components or special purpose processors. Each hardware node can comprise memory 2090-1, which can be non-persistent memory for temporarily storing instructions 2095 or software executed by processing circuitry 2060. For example, instructions 2095 can include program instructions (also referred to as a computer program product) that, when executed by processing circuitry 2060, can configure hardware node 2030 to perform operations corresponding to the methods/procedures described herein.
Each hardware node can include various communication interface circuitry 2070 such as network interface controllers (NICs), network interface cards, etc. These can include respective physical (layer) network interfaces 2080. Each hardware node can also include non-transitory, persistent, machine-readable storage media 2090-2 having stored therein software 2095 and/or instructions executable by processing circuitry 2060. Software 2095 can include any type of software, including software for instantiating one or more optional virtualization layers/hypervisors 2050, and software to execute applications 2040.
In some embodiments, virtualization layer 2050 can be used to provide VMs that are abstracted from the underlying hardware nodes 2030. In such embodiments, processing circuitry 2060 executes software 2095 to instantiate virtualization layer 2050, which can sometimes be referred to as a virtual machine monitor (VMM). For example, virtualization layer 2050 can present a virtual operating platform that appears like networking hardware to containers and/or pods hosted by environment 2000. Moreover, each VM (e.g., as facilitated by virtualization layer 2050) can manifest itself as a software implementation of a physical machine that runs programs as if they were executing on a physical, non-virtualized machine. Each VM can have dedicated hardware nodes 2030 or can share resources of one or more hardware nodes 2030 with other VMs.
Various applications 2040 (which can alternatively be called software instances, virtual appliances, network functions, virtual nodes, virtual network functions, containers, pods, etc.) can be hosted and/or run by environment 2000. Such applications can implement various features, functions, procedures, etc. of various embodiments disclosed herein. For example, applications 2040 can include one or more instances of an aerial UE state estimator.
In some embodiments, each application 2040 can be arranged in a pod, which can include one or more containers 2041, such as 2041a-b shown for a particular application 2040 in Figure 20. Containers 2041a-b can encapsulate respective services 2042a-b of the particular application 2040. For example, a “pod” (e.g., a Kubernetes pod) can be a basic execution unit of an application, i.e., the smallest and simplest unit that can be created and deployed in environment 2000. Each pod can include a plurality of resources shared by containers within the pod (e.g., resources 2043 shared by containers 2041a-b). For example, a pod can represent processes running on a cluster and can encapsulate an application’s containers (including services therein), storage resources, a unique network IP address, and options that govern how the container(s) should run. In general, containers can be relatively decoupled from underlying physical or virtual computing infrastructure.
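By way of illustration only, a pod hosting an aerial UE state estimator application with two containers (mirroring containers 2041a-b and their services 2042a-b) could be described by a manifest along the following lines, expressed here as a plain Python dictionary; all names, images, and resource values are hypothetical placeholders.

# Hypothetical pod description for an aerial UE state estimator application.
state_estimator_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "aerial-ue-state-estimator"},
    "spec": {
        "containers": [
            {   # e.g., container 2041a encapsulating service 2042a
                "name": "imm-filter",
                "image": "example.org/imm-filter:latest",
                "resources": {"requests": {"cpu": "500m", "memory": "256Mi"}},
            },
            {   # e.g., container 2041b encapsulating service 2042b
                "name": "measurement-collector",
                "image": "example.org/measurement-collector:latest",
                "resources": {"requests": {"cpu": "250m", "memory": "128Mi"}},
            },
        ],
    },
}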
In addition to the applications 2040, a traffic controller 2045 can also be run in the virtualization environment 2000 shown in Figure 20. Traffic controller 2045 can include an ingress controller that controls external traffic into the application pods (e.g., from application clients, such as wireless devices), and an egress controller that controls traffic from the application pods towards external destinations (e.g., to application clients). For example, ingress traffic can include requests from application client(s) to an application service, and egress traffic can include responses from the application service to the application client(s).
The foregoing merely illustrates the principles of the disclosure. Various modifications and alterations to the described embodiments will be apparent to those skilled in the art in view of the teachings herein. It will thus be appreciated that those skilled in the art will be able to devise numerous systems, arrangements, and procedures that, although not explicitly shown or described herein, embody the principles of the disclosure and can thus be within the spirit and scope of the disclosure. Various exemplary embodiments can be used together with one another, as well as interchangeably therewith, as should be understood by those having ordinary skill in the art.
The term unit, as used herein, can have conventional meaning in the field of electronics, electrical devices and/or electronic devices and can include, for example, electrical and/or electronic circuitry, devices, modules, processors, memories, logic, solid state and/or discrete devices, computer programs or instructions for carrying out respective tasks, procedures, computations, outputs, and/or displaying functions, and so on, such as those that are described herein.
Any appropriate steps, methods, features, functions, or benefits disclosed herein may be performed through one or more functional units or modules of one or more virtual apparatuses. Each virtual apparatus may comprise a number of these functional units. These functional units may be implemented via processing circuitry, which may include one or more microprocessors or microcontrollers, as well as other digital hardware, which may include Digital Signal Processors (DSPs), special-purpose digital logic, and the like. The processing circuitry may be configured to execute program code stored in memory, which may include one or several types of memory such as Read Only Memory (ROM), Random Access Memory (RAM), cache memory, flash memory devices, optical storage devices, etc. Program code stored in memory includes program instructions for executing one or more telecommunications and/or data communications protocols as well as instructions for carrying out one or more of the techniques described herein. In some implementations, the processing circuitry may be used to cause the respective functional unit to perform corresponding functions according to one or more embodiments of the present disclosure.
As described herein, device and/or apparatus can be represented by a semiconductor chip, a chipset, or a (hardware) module comprising such chip or chipset; this, however, does not exclude the possibility that a functionality of a device or apparatus, instead of being hardware implemented, be implemented as a software module such as a computer program or a computer program product comprising executable software code portions for execution or being run on a processor. Furthermore, functionality of a device or apparatus can be implemented by any combination of hardware and software. A device or apparatus can also be regarded as an assembly of multiple devices and/or apparatuses, whether functionally in cooperation with or independently of each other. Moreover, devices and apparatuses can be implemented in a distributed fashion throughout a system, so long as the functionality of the device or apparatus is preserved. Such and similar principles are considered as known to a skilled person.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. It will be further understood that terms used herein should be interpreted as having a meaning that is consistent with their meaning in the context of this specification and the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein. In addition, certain terms used in the present disclosure, including the specification and drawings, can be used synonymously in certain instances (e.g., “data” and “information”). It should be understood that, although these terms (and/or other terms that can be synonymous to one another) can be used synonymously herein, there can be instances when such words can be intended not to be used synonymously. Further, to the extent that prior art knowledge has not been explicitly incorporated by reference herein above, it is explicitly incorporated herein in its entirety. All publications referenced are incorporated herein by reference in their entireties.

Claims

1. A method for estimating movement of an aerial user equipment, UE, in a wireless network based on a plurality of movement models, the method comprising: obtaining (1910), from the wireless network, position measurements for the aerial UE at a second time subsequent to a first time; determining (1920) a state of an interacting multiple-model, IMM, for movement of the aerial UE at the second time based on the position measurements, wherein the IMM model includes: a first constant-velocity model, a second constant-acceleration model, a third constant-position model, and estimated probabilities associated with the respective models; and determining (1930) a movement state for the aerial UE at the second time based on combining the models according to their estimated probabilities.
2. The method of claim 1, wherein the IMM also includes a Hidden Markov Model, HMM, comprising respective probabilities of transition from any of the models at the first time to any of the models at the second time.
3. The method of claim 2, wherein the following probabilities of transition are much smaller than one: from the first model at the first time to the third model at the second time, and from the third model at the first time to the first model at the second time.
4. The method of any of claims 2-3, wherein the following probabilities of transition are at least an order of magnitude smaller than all other probabilities of transition comprising the HMM model: from the first model at the first time to the third model at the second time; and from the third model at the first time to the first model at the second time.
5. The method of any of claims 2-4, wherein determining (1920) the state of the IMM at the second time comprises: determining (1921) a mixed initial condition associated with each model based on the states of the models at the first time and the HMM; determining (1922) a likelihood function associated with each model based on the associated mixed initial condition and the position measurements; and for each model, determining (1923) the associated estimated probability at the second time based on the associated likelihood function and an associated estimated probability at the first time.
6. The method of claim 5, wherein: determining (1920) the state of the IMM at the second time further comprises determining (1924) states for the respective models at the second time based on the position measurements and corresponding states for the respective models at the first time; and determining (1930) the movement state for the aerial UE at the second time comprises combining (1931) the states of the respective models at the second time according to their respective estimated probabilities at the second time.
7. The method of claim 6, wherein the states of the respective models are determined using respective Kalman filters.
8. The method of any of claims 1-7, wherein the position measurements include one or more of the following: two-dimensional position measurement based on signals transmitted by the wireless network, by the aerial UE, or by a global navigation satellite system, GNSS; three-dimensional position measurement based on signals transmitted by the wireless network, by the aerial UE, or by the GNSS; altitude measurement based on a topographical model of a region covered by the wireless network; and altitude measurement based on barometric pressure.
9. The method of any of claims 1-8, wherein the movement state for the aerial UE includes one or more of the following: three-dimensional position relative to mean sea level; three-dimensional position relative to local ground level; two-dimensional position; three-dimensional velocity; and two-dimensional velocity.
10. The method of any of claims 1-9, wherein the method is performed by one of the following positioning nodes associated with the wireless network: E-SMLC; secure user plane location, SUPL, location platform, SLP; or location management function, LMF.
11. The method of any of claims 1-9, wherein: the method is performed by a location services, LCS, client external to the wireless network; obtaining (1910) the position measurements comprises: sending (1911), to a positioning node associated with the wireless network, a request for the position measurements, and receiving (1912) the requested position measurements from the positioning node; and the positioning node is one of the following: E-SMLC; secure user plane location, SUPL, location platform, SLP; or location management function, LMF.
12. The method of any of claims 1-11, further comprising sending (1940) the determined movement state to one or more of the following: a base station serving the aerial UE in the wireless network; or a location services, LCS, client external to the wireless network.
13. The method of claim 12, wherein the movement state is sent together with an identifier of the movement state, an identifier of the aerial UE, and an identifier of the second time.
14. A positioning node (740, 750, 760, 1840, 1870, 2030) associated with a wireless network (770, 1895), the positioning node being configured to estimate movement of an aerial user equipment, UE (710, 1510, 1810) in the wireless network based on a plurality of movement models, the positioning node comprising: communication interface circuitry (741, 751, 761, 2070, 2080) configured to communicate with one or more nodes in the wireless network and with a client external to the wireless network; and processing circuitry (742, 752, 762, 2060) operably coupled to the communication interface circuitry, whereby the processing circuitry and the communication interface circuitry are configured to: obtain, from the wireless network, position measurements for the aerial UE at a second time subsequent to a first time; determine a state of an interacting multiple-model, IMM, for movement of the aerial UE at the second time based on the position measurements, wherein the IMM model includes: a first constant-velocity model, a second constant-acceleration model, a third constant-position model, and estimated probabilities associated with the respective models; and determine a movement state for the aerial UE at the second time based on combining the models according to their estimated probabilities.
15. The positioning node of claim 14, wherein the IMM also includes a Hidden Markov Model, HMM, comprising respective probabilities of transition from any of the models at the first time to any of the models at the second time.
16. The positioning node of claim 15, wherein the following probabilities of transition are much smaller than one: from the first model at the first time to the third model at the second time, and from the third model at the first time to the first model at the second time.
17. The positioning node of any of claims 15-16, wherein the following probabilities of transition are at least an order of magnitude smaller than all other probabilities of transition comprising the HMM model: from the first model at the first time to the third model at the second time; and from the third model at the first time to the first model at the second time.
18. The positioning node of any of claims 15-17, wherein the processing circuitry is configured to determine the state of the IMM at the second time based on: determining a mixed initial condition associated with each model based on the states of the models at the first time and the HMM; determining a likelihood function associated with each model based on the associated mixed initial condition and the position measurements; and for each model, determining the associated estimated probability at the second time based on the associated likelihood function and an associated estimated probability at the first time.
19. The positioning node of claim 18, wherein the processing circuitry is configured to: determine the state of the IMM at the second time by determining states for the respective models at the second time based on the position measurements and corresponding states for the respective models at the first time; and determine the movement state for the aerial UE at the second time by combining the states of the respective models at the second time according to their respective estimated probabilities at the second time.
20. The positioning node of claim 19, wherein the states of the respective models are determined using respective Kalman filters.
21. The positioning node of any of claims 14-20, wherein the position measurements include one or more of the following: two-dimensional position measurement based on signals transmitted by the wireless network, by the aerial UE, or by a global navigation satellite system, GNSS; three-dimensional position measurement based on signals transmitted by the wireless network, by the aerial UE, or by the GNSS; altitude measurement based on a topographical model of a region covered by the wireless network; and altitude measurement based on barometric pressure.
22. The positioning node of any of claims 14-21, wherein the movement state for the aerial UE includes one or more of the following: three-dimensional position relative to mean sea level; three-dimensional position relative to local ground level; two-dimensional position; three-dimensional velocity; and two-dimensional velocity.
23. The positioning node of any of claims 14-22, wherein the positioning node is one of the following: E-SMLC; secure user plane location, SUPL, location platform, SLP; or location management function, LMF.
24. The positioning node of any of claims 14-23, wherein the processing circuitry and the communication interface circuitry are further configured to send the determined movement state to one or more of the following: a base station serving the aerial UE in the wireless network; or a location services, LCS, client external to the wireless network.
25. The positioning node of claim 24, wherein the movement state is sent together with an identifier of the movement state, an identifier of the aerial UE, and an identifier of the second time.
26. A positioning node (740, 750, 760, 1840, 1870, 2030) associated with a wireless network (770, 1895), the positioning node being configured to estimate movement of an aerial user equipment, UE (710, 1510, 1810) in the wireless network based on a plurality of movement models, the positioning node being further arranged to perform operations corresponding to any of the methods of claims 1-13.
27. A non-transitory, computer-readable medium (743, 753, 763, 2090) storing computer-executable instructions (2095) that, when executed by processing circuitry (742, 752, 762, 2070), configure a positioning node (740, 750, 760, 1840, 1870, 2030) to perform operations corresponding to any of the methods of claims 1-13.
28. A computer program product comprising computer-executable instructions (2095) that, when executed by processing circuitry (742, 752, 762, 2070), configure a positioning node (740, 750, 760, 1840, 1870, 2030) to perform operations corresponding to any of the methods of claims 1-13.
PCT/IB2020/056861 2020-07-22 2020-07-22 State estimation for aerial user equipment (ues) operating in a wireless network WO2022018487A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/IB2020/056861 WO2022018487A1 (en) 2020-07-22 2020-07-22 State estimation for aerial user equipment (ues) operating in a wireless network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/IB2020/056861 WO2022018487A1 (en) 2020-07-22 2020-07-22 State estimation for aerial user equipment (ues) operating in a wireless network

Publications (1)

Publication Number Publication Date
WO2022018487A1

Family

ID=71842716

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2020/056861 WO2022018487A1 (en) 2020-07-22 2020-07-22 State estimation for aerial user equipment (ues) operating in a wireless network

Country Status (1)

Country Link
WO (1) WO2022018487A1 (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090243920A1 (en) * 2008-04-01 2009-10-01 Seiko Epson Corporation Positioning method, program, and positioning apparatus
US20180262868A1 (en) * 2016-05-13 2018-09-13 Qualcomm Incorporated Method and/or system for positioning of a mobile device
US20190310637A1 (en) * 2017-08-10 2019-10-10 Patroness, LLC Systems and Methods for Enhanced Autonomous Operations of A Motorized Mobile System
WO2020190184A1 (en) * 2019-03-19 2020-09-24 Telefonaktiebolaget Lm Ericsson (Publ) User equipment state estimation
WO2020218958A1 (en) * 2019-04-26 2020-10-29 Telefonaktiebolaget Lm Ericsson (Publ) Sharing of user equipment states
WO2020218957A1 (en) * 2019-04-26 2020-10-29 Telefonaktiebolaget Lm Ericsson (Publ) Device type state estimation
WO2020256603A1 (en) * 2019-06-18 2020-12-24 Telefonaktiebolaget Lm Ericsson (Publ) User equipment kinematic state estimation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Wireless hybrid positioning based on surface modeling with polygon support", Proc. VTC 2018 Spring, June 2018

Similar Documents

Publication Publication Date Title
US20230324541A1 (en) Radio frequency sensing communication
US20230273287A1 (en) Initializing State Estimation for Aerial User Equipment (UES) Operating in a Wireless Network
EP4260620A1 (en) Ue-to-ue positioning
US20230176163A1 (en) Positioning reference signal configuration and management
US20230175865A1 (en) User equipment sensor calibration
US20230341502A1 (en) User Equipment (UE) Movement State Estimation based on Measurements for Two or More Sites in a Wireless Network
US11647480B2 (en) Positioning with geographically-similar anchors including a mobile anchor
WO2022018487A1 (en) State estimation for aerial user equipment (ues) operating in a wireless network
WO2021216388A1 (en) Ue receive-transmit time difference measurement reporting
US20240096212A1 (en) Virtual traffic light via c-v2x
US20240061063A1 (en) Joint network entity/user equipment-and-user equipment/user equipment ranging
US20230413026A1 (en) Vehicle nudge via c-v2x
US20230353312A1 (en) Low-layer positioning measurement reporting
US20240049168A1 (en) Prs measurement sharing for virtual ue
WO2022271281A1 (en) Ue flight path reporting
EP4342242A1 (en) On-demand positioning reference signal scheduling
EP4342194A1 (en) Network edge centralized location determination
WO2023141004A1 (en) Radio frequency sensing using positioning reference signals (prs)
WO2023129791A1 (en) Network entity and method for cooperative mono-static radio frequency sensing in cellular network
WO2024025643A1 (en) Reported mobile device location assessment
WO2023048900A1 (en) Detection of radio frequency signal transfer anomalies
EP4275417A1 (en) Multi-measurement reporting per reference signal

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20747122

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20747122

Country of ref document: EP

Kind code of ref document: A1