CN117440501A - Positioning model training method and device, positioning method and device - Google Patents

Positioning model training method and device, positioning method and device Download PDF

Info

Publication number
CN117440501A
CN117440501A (application number CN202210815550.3A)
Authority
CN
China
Prior art keywords
training
positioning model
data sample
channel state
positioning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210815550.3A
Other languages
Chinese (zh)
Inventor
贾承璐
杨昂
孙鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202210815550.3A priority Critical patent/CN117440501A/en
Publication of CN117440501A publication Critical patent/CN117440501A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W64/00Locating users or terminals or network equipment for network management purposes, e.g. mobility management
    • H04W64/006Locating users or terminals or network equipment for network management purposes, e.g. mobility management with additional information processing, e.g. for direction or speed determination
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02Services making use of location information
    • H04W4/023Services making use of location information using mutual or relative location information between multiple location based services [LBS] targets or of distance thresholds
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02Services making use of location information
    • H04W4/029Location-based management or tracking services


Abstract

The application discloses a positioning model training method and apparatus, and a positioning method and apparatus, which belong to the technical field of communication. The positioning model training method in the embodiments of the application comprises the following steps: a first device acquires data samples, where the data samples comprise data samples with position labels and data samples without position labels; the first device performs semi-supervised learning based on the data samples to obtain a positioning model; the input of the positioning model comprises channel state data of a device to be positioned, and the output of the positioning model comprises position prediction information and/or position prediction auxiliary information of the device to be positioned.

Description

Positioning model training method and device, positioning method and device
Technical Field
The application belongs to the technical field of communication, and particularly relates to a positioning model training method, a positioning model training device, a positioning method and a positioning device.
Background
In use cases of artificial intelligence (Artificial Intelligence, AI) based positioning enhancement, the accuracy of the AI model depends largely on the size and quality of the data set.
However, acquiring data carrying accurate position labels consumes excessive resources and is difficult, so the training sample scale is small and the positioning accuracy of the trained AI model is low.
Disclosure of Invention
The embodiment of the application provides a positioning model training method, a positioning model training device, a positioning method and a positioning device, which can solve the problem of low positioning accuracy of an AI model.
In a first aspect, a positioning model training method is provided, the method comprising:
the method comprises the steps that first equipment obtains data samples, wherein the data samples comprise data samples with position labels and data samples without position labels;
the first equipment performs semi-supervised learning based on the data sample to obtain a positioning model;
the input of the positioning model comprises channel state data of equipment to be positioned, and the output of the positioning model comprises position prediction information and/or position prediction auxiliary information of the equipment to be positioned.
In a second aspect, there is provided a positioning method comprising:
the first device inputs channel state data of a target device to be positioned into a positioning model, and obtains position prediction information and/or position prediction auxiliary information of the target device output from the positioning model;
wherein the positioning model is obtained based on semi-supervised learning.
In a third aspect, there is provided a positioning model training apparatus, the apparatus comprising:
a data sample acquisition module, configured to acquire data samples, where the data samples comprise data samples with position labels and data samples without position labels;
the positioning model acquisition module is used for performing semi-supervised learning based on the data samples to acquire a positioning model;
the input of the positioning model comprises channel state data of equipment to be positioned, and the output of the positioning model comprises position prediction information and/or position prediction auxiliary information of the equipment to be positioned.
In a fourth aspect, there is provided a positioning device comprising:
the position information acquisition module is used for inputting channel state data of target equipment to be positioned into the positioning model and acquiring position prediction information and/or position prediction auxiliary information of the target equipment, which are output from the positioning model;
wherein the positioning model is obtained based on semi-supervised learning.
In a fifth aspect, there is provided a first device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the method as described in the first aspect.
In a sixth aspect, a first device is provided, including a processor and a communication interface, where the processor is configured to:
acquiring a data sample, wherein the data sample comprises a data sample with a position tag and a data sample without a position tag;
based on the data sample, semi-supervised learning is carried out to obtain a positioning model;
the input of the positioning model comprises channel state data of equipment to be positioned, and the output of the positioning model comprises position prediction information and/or position prediction auxiliary information of the equipment to be positioned.
In a seventh aspect, there is provided a first device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the method as described in the second aspect.
In an eighth aspect, a first device is provided, including a processor and a communication interface, where the processor is configured to:
inputting channel state data of target equipment to be positioned into a positioning model, and obtaining position prediction information and/or position prediction auxiliary information of the target equipment output from the positioning model;
wherein the positioning model is obtained based on semi-supervised learning.
In a ninth aspect, there is provided a positioning model training system comprising: a first device operable to perform a positioning model training method as described in the first aspect.
In a tenth aspect, a positioning system is provided, including: a first device operable to perform the positioning method as described in the second aspect.
In an eleventh aspect, there is provided a readable storage medium having stored thereon a program or instructions which when executed by a processor implements the positioning model training method according to the first aspect.
In a twelfth aspect, there is provided a readable storage medium having stored thereon a program or instructions which when executed by a processor implements the positioning method according to the second aspect.
In a thirteenth aspect, a chip is provided, the chip comprising a processor and a communication interface, the communication interface and the processor being coupled, the processor being configured to execute programs or instructions to implement the positioning model training method according to the first aspect.
In a fourteenth aspect, there is provided a chip comprising a processor and a communication interface, the communication interface and the processor being coupled, the processor being for running a program or instructions to implement the positioning method according to the second aspect.
In a fifteenth aspect, a computer program/program product is provided, stored in a storage medium, for execution by at least one processor to implement the positioning model training method according to the first aspect.
In a sixteenth aspect, there is provided a computer program/program product stored in a storage medium, the computer program/program product being executable by at least one processor to implement a positioning method as described in the second aspect.
In the embodiments of the application, in the AI-based positioning enhancement use case, AI model training is performed based on semi-supervised learning: both data samples with position labels and easily obtained data samples without position labels are utilized, which increases the training sample scale, improves the positioning accuracy of the trained AI model, and thereby improves positioning accuracy overall.
Drawings
Fig. 1 shows a block diagram of a wireless communication system to which embodiments of the present application are applicable;
FIG. 2 is a schematic diagram of a neural network provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of a neuron provided in an embodiment of the present application;
FIG. 4 is a flow chart of a training method for positioning model according to an embodiment of the present application;
Fig. 5 is a schematic structural diagram of a ternary network provided in an embodiment of the present application;
FIG. 6 is a flow chart of positioning model training provided by an embodiment of the present application;
fig. 7 is a schematic flow chart of a positioning method according to an embodiment of the present application;
FIG. 8 is a schematic diagram of positioning accuracy provided by an embodiment of the present application;
FIG. 9 is a schematic structural diagram of a positioning model training device according to an embodiment of the present disclosure;
FIG. 10 is a schematic structural view of a positioning device according to an embodiment of the present disclosure;
fig. 11 is one of schematic structural diagrams of a communication device according to an embodiment of the present application;
FIG. 12 is a second schematic diagram of a communication device according to an embodiment of the present application;
fig. 13 is a schematic diagram of a hardware structure of a first device implementing an embodiment of the present application;
fig. 14 is a second schematic hardware structure of a first device implementing an embodiment of the present application.
Detailed Description
Technical solutions in the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application are within the scope of the protection of the present application.
The terms "first", "second", and the like in the description and in the claims are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances, such that the embodiments of the application are capable of operation in sequences other than those illustrated or otherwise described herein, and that the objects distinguished by "first" and "second" are generally of one type without limiting their number; for example, the first object may be one or more. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
It is noted that the techniques described in embodiments of the present application are not limited to long term evolution (Long Term Evolution, LTE)/LTE evolution (LTE-Advanced, LTE-A) systems, but may also be used in other wireless communication systems, such as code division multiple access (Code Division Multiple Access, CDMA), time division multiple access (Time Division Multiple Access, TDMA), frequency division multiple access (Frequency Division Multiple Access, FDMA), orthogonal frequency division multiple access (Orthogonal Frequency Division Multiple Access, OFDMA), single carrier frequency division multiple access (Single-carrier Frequency Division Multiple Access, SC-FDMA), and other systems. The terms "system" and "network" in embodiments of the present application are often used interchangeably, and the techniques described may be used for the above-mentioned systems and radio technologies as well as other systems and radio technologies. The following description describes a New Radio (NR) system for purposes of example and uses NR terminology in much of the description that follows, but these techniques are also applicable to applications other than NR systems, such as 6th Generation (6G) communication systems.
Fig. 1 shows a block diagram of a wireless communication system to which embodiments of the present application are applicable. The wireless communication system includes a terminal 11 and a network-side device 12. The terminal 11 may be a terminal-side device such as a mobile phone, a tablet (Tablet Personal Computer), a laptop computer (Laptop Computer, also called a notebook), a personal digital assistant (Personal Digital Assistant, PDA), a palmtop computer, a netbook, an ultra-mobile personal computer (Ultra-Mobile Personal Computer, UMPC), a mobile internet device (Mobile Internet Device, MID), an augmented reality (Augmented Reality, AR)/virtual reality (Virtual Reality, VR) device, a robot, a wearable device (Wearable Device), a vehicle-mounted device (VUE), a pedestrian terminal (PUE), a smart home device (a home device with a wireless communication function, such as a refrigerator, a television, a washing machine, or furniture), a game machine, a personal computer (Personal Computer, PC), a teller machine, or a self-service machine. The wearable device includes: a smart watch, smart bracelet, smart earphones, smart glasses, smart jewelry (smart bangle, smart ring, smart necklace, smart anklet, etc.), smart wristband, smart clothing, and the like. Note that the specific type of the terminal 11 is not limited in the embodiments of the present application. The network-side device 12 may comprise an access network device or a core network device, where the access network device 12 may also be referred to as a radio access network device, a radio access network (Radio Access Network, RAN), a radio access network function, or a radio access network element.
Access network device 12 may include a base station, a WLAN access point, a WiFi node, or the like. The base station may be referred to as a node B, an evolved node B (eNB), an access point, a base transceiver station (Base Transceiver Station, BTS), a radio base station, a radio transceiver, a basic service set (Basic Service Set, BSS), an extended service set (Extended Service Set, ESS), a home node B, a home evolved node B, a transmission and reception point (Transmitting Receiving Point, TRP), or some other suitable term in the art; the base station is not limited to a particular technical vocabulary so long as the same technical effect is achieved. It should be noted that in the embodiments of the present application, only a base station in an NR system is described as an example, and the specific type of the base station is not limited. The core network device may include, but is not limited to, at least one of: core network nodes, core network functions, mobility management entities (Mobility Management Entity, MME), access and mobility management functions (Access and Mobility Management Function, AMF), session management functions (Session Management Function, SMF), user plane functions (User Plane Function, UPF), policy control functions (Policy Control Function, PCF), policy and charging rules functions (Policy and Charging Rules Function, PCRF), edge application server discovery functions (Edge Application Server Discovery Function, EASDF), unified data management (Unified Data Management, UDM), unified data repository (Unified Data Repository, UDR), home subscriber server (Home Subscriber Server, HSS), centralized network configuration (Centralized Network Configuration, CNC), network repository functions (Network Repository Function, NRF), network exposure functions (Network Exposure Function, NEF), local NEF (or L-NEF), binding support functions (Binding Support Function, BSF), application functions (Application Function, AF), and the like.
In the embodiment of the present application, only the core network device in the NR system is described as an example, and the specific type of the core network device is not limited.
The following will be described first:
artificial Intelligence (AI) is widely used in various fields at present, and is integrated into a wireless communication network, so that the technical indexes such as throughput, time delay, user capacity and the like are remarkably improved, which is an important task of the future wireless communication network. There are various implementations of AI modules, such as neural networks, decision trees, support vector machines, or bayesian classifiers. The present application is described with reference to neural networks, but is not limited to a particular type of AI module.
Fig. 2 is a schematic diagram of a neural network provided in an embodiment of the present application, and fig. 3 is a schematic diagram of a neuron provided in an embodiment of the present application. As shown in fig. 2 and 3, the neural network is composed of neurons, where a1, a2, …, aK are the inputs, w is a weight (multiplicative coefficient), b is a bias (additive coefficient), and σ(·) is an activation function. Common activation functions include Sigmoid, tanh, ReLU (Rectified Linear Unit), and so forth.
The parameters of the neural network are optimized through a gradient optimization algorithm. Gradient optimization algorithms are a class of algorithms that minimize or maximize an objective function (sometimes called a loss function), which is often a mathematical combination of the model parameters and the data. For example, given data X and its corresponding label Y, and a neural network model f(·), a predicted output f(X) can be obtained from the input X, and the difference (f(X) - Y) between the predicted value and the actual value can be computed; this is the loss function. Appropriate values of w and b can then be found that minimize the value of the loss function; the smaller the loss value, the closer the model is to the real situation.
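The loss computation described above can be sketched in a few lines (an illustrative toy example, not part of the patent: a one-neuron model and a mean-squared-error loss are assumed):

```python
def predict(x, w, b):
    """A one-neuron 'model' f(x) = w*x + b (toy stand-in for a neural network)."""
    return w * x + b

def mse_loss(xs, ys, w, b):
    """Mean squared error between predicted values f(X) and labels Y."""
    return sum((predict(x, w, b) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Labels generated by y = 2x + 1, so (w, b) = (2, 1) should give zero loss.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [2 * x + 1 for x in xs]
best = mse_loss(xs, ys, 2.0, 1.0)
worse = mse_loss(xs, ys, 0.0, 0.0)
```

Parameters that fit the data yield a smaller loss, matching the "smaller loss, closer to reality" statement above.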
The most common optimization algorithms are basically based on the error back propagation (error Back Propagation, BP) algorithm. The basic idea of the BP algorithm is that the learning process consists of two stages: forward propagation of the signal and backward propagation of the error. In forward propagation, an input sample enters at the input layer, is processed layer by layer through the hidden layers, and reaches the output layer. If the actual output of the output layer does not match the desired output, the process shifts to the error back-propagation stage. In error back propagation, the output error is passed back through the hidden layers to the input layer in some form, and the error is distributed to all units of each layer, so that an error signal is obtained for each unit; this signal serves as the basis for correcting the weights of that unit. This cycle of forward signal propagation and error back propagation, with the weights of each layer adjusted in between, is performed repeatedly. The continual weight-adjustment process is the learning (training) process of the network, and it continues until the error in the network output is reduced to an acceptable level or a preset number of learning iterations is reached.
Common optimization algorithms include gradient descent (Gradient Descent), stochastic gradient descent (Stochastic Gradient Descent, SGD), mini-batch gradient descent (Mini-Batch Gradient Descent), the momentum method (Momentum), Nesterov momentum (stochastic gradient descent with Nesterov momentum, named after its inventor), AdaGrad (ADAptive GRADient descent), AdaDelta, RMSProp (Root Mean Square Propagation), Adam (Adaptive Moment Estimation), etc.
When the error is back-propagated, these optimization algorithms all obtain the error/loss from the loss function, compute the derivatives/partial derivatives of the current neurons, incorporate influences such as the learning rate and previous gradients/derivatives/partial derivatives to obtain the gradient, and pass the gradient on to the layer above.
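As an illustration of the gradient-based update described above (a hypothetical one-neuron model with analytically derived gradients; the learning rate, step count, and data are arbitrary choices, not values from the patent):

```python
def grad_step(w, b, xs, ys, lr):
    """One gradient-descent update for the toy model f(x) = w*x + b
    with mean-squared-error loss (gradients derived analytically)."""
    n = len(xs)
    dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
    db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
    return w - lr * dw, b - lr * db

# Data generated by y = 2x + 1; repeated updates should recover w=2, b=1.
xs = [0.0, 1.0, 2.0]
ys = [1.0, 3.0, 5.0]
w, b = 0.0, 0.0
for _ in range(500):
    w, b = grad_step(w, b, xs, ys, lr=0.05)
```

In a multi-layer network the same idea applies per layer, with the gradient propagated backwards as described.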
The positioning model training method, the positioning model training device, the positioning method and the positioning device provided by the embodiment of the application are described in detail below through some embodiments and application scenes of the embodiments with reference to the accompanying drawings.
Fig. 4 is a schematic flow chart of a positioning model training method provided in an embodiment of the present application, as shown in fig. 4, the flow chart includes the following steps:
step 400, a first device acquires a data sample, wherein the data sample comprises a data sample with a position tag and a data sample without a position tag;
step 410, the first device performs semi-supervised learning based on the data sample to obtain a positioning model;
the input of the positioning model comprises channel state data of equipment to be positioned, and the output of the positioning model comprises position prediction information and/or position prediction auxiliary information of the equipment to be positioned.
Optionally, high-dimensional channel state data may be mapped to a low-dimensional manifold space with the same dimension as the position (e.g., two-dimensional). Such a mapping can be considered to implement a nearness-preservation principle: channel state data at nearby positions in real space are similar, and their mappings in the low-dimensional manifold space are also close to each other. Subsequent location-based services may then use positions in the manifold space instead of real-space positions.
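The nearness-preservation principle can be illustrated with a toy sketch (the `csi` feature function below is a hypothetical stand-in for real channel state data, chosen only because it varies smoothly with position):

```python
import math

def csi(pos):
    """Hypothetical high-dimensional 'channel state data' for a 2-D position:
    a smooth function of position, so nearby positions yield similar features."""
    x, y = pos
    return [math.sin(0.1 * (x + k)) + math.cos(0.1 * (y - k)) for k in range(8)]

def dist(u, v):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# b is near a in real space and c is far; the same ordering
# holds between their channel-state feature vectors.
a, b, c = (0.0, 0.0), (1.0, 0.0), (10.0, 10.0)
near, far = dist(csi(a), csi(b)), dist(csi(a), csi(c))
```

A dimension-reducing map that preserves such orderings lets manifold-space positions substitute for real ones.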
Thus, in training the AI model for positioning, channel state data may be taken as training data samples;
optionally, AI technology can significantly improve positioning accuracy. In a wireless communication network, the input of the AI model may be channel state data of the device to be positioned, such as channel impulse responses from a plurality of TRPs, and the output of the AI model is position prediction information of the device to be positioned, such as a position prediction result, or an intermediate feature quantity of the device to be positioned that can assist in position calculation (i.e., position prediction information and/or position prediction auxiliary information of the device to be positioned). However, AI-based positioning accuracy enhancement requires a large amount of available, truly labeled training data, i.e., data samples with position labels, whose acquisition consumes significant resources. By comparison, data samples without position labels are easier to obtain; for example, only the user's channel state data may be collected, without collecting position labels.
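A minimal sketch of such a positioning model's forward pass (the two-layer architecture, layer sizes, and weights below are illustrative assumptions, not the patent's model):

```python
def relu(v):
    """Elementwise rectified linear activation."""
    return [max(0.0, x) for x in v]

def linear(v, weights, biases):
    """Fully connected layer: one output per weight row."""
    return [sum(wi * xi for wi, xi in zip(row, v)) + bi
            for row, bi in zip(weights, biases)]

def positioning_model(cir_features, W1, b1, W2, b2):
    """Map channel-impulse-response features from several TRPs to an (x, y)
    position estimate. A tiny two-layer network; sizes are illustrative."""
    return linear(relu(linear(cir_features, W1, b1)), W2, b2)

# 6 input features (e.g. 3 TRPs x 2 CIR taps), 4 hidden units, 2-D output.
W1 = [[0.1] * 6 for _ in range(4)]; b1 = [0.0] * 4
W2 = [[0.5] * 4, [-0.5] * 4];       b2 = [0.0, 0.0]
pos = positioning_model([1.0, 0.5, 0.2, 0.8, 0.3, 0.1], W1, b1, W2, b2)
```

The output could equally be an intermediate feature quantity used as position prediction auxiliary information rather than coordinates.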
Alternatively, semi-supervised learning is a method of training AI models using both partially position-tagged data samples and partially non-position-tagged data samples.
Alternatively, semi-supervised learning in embodiments of the present application may include a method of training AI models based on a small number of position-tagged data samples and a large number of non-position-tagged data samples;
Alternatively, semi-supervised learning may be performed based on the position-tagged data samples and the non-position-tagged data samples to obtain the positioning model.
The embodiments of the application provide a positioning model training method based on semi-supervised learning in the AI-based positioning enhancement use case, which, by utilizing data samples without position labels, can remarkably improve positioning accuracy compared with a purely supervised learning method.
In the embodiments of the application, in the AI-based positioning enhancement use case, AI model training is performed based on semi-supervised learning: both data samples with position labels and easily obtained data samples without position labels are utilized, which increases the training sample scale, improves the positioning accuracy of the trained AI model, and thereby improves positioning accuracy overall.
Optionally, the first device performs semi-supervised learning based on the data samples to obtain a positioning model, including:
the first device executes an iterative training process based on the data sample to obtain an output parameter after iterative training, wherein the iterative training process comprises: at least one first training based on the data samples without the position tags, and at least one second training based on the data samples with the position tags; the first training is a training process of a first model, wherein the first model is a ternary network and comprises three branches with the same structure;
The first device determines the positioning model based on the iteratively trained output parameters.
Alternatively, the first training may comprise a training process for a first model, which may be a ternary network, i.e. the first training may be a training process for a ternary network based on data samples without location tags;
optionally, the first model may be a ternary network, the learning rate may be L1, the training number may be E1, and the number of samples for each training may be S1, where L1, E1, and S1 may be preset or predefined by a protocol or indicated by a higher layer;
optionally, fig. 5 is a schematic structural diagram of a ternary network provided in an embodiment of the present application, where, as shown in fig. 5, the ternary network structure includes three branches, and each branch has the same structure and parameters;
alternatively, since the AI model needs to be trained with data samples without position tags, i.e. the physical positions of the channel state data samples are unknown, training can be performed with the relative distances corresponding to the channel state data samples of the same device (second device);
alternatively, the second device may be a device that provides channel state data samples;
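Training on relative distances without position labels is commonly done with a triplet-style loss over the three identical branches of such a network; the sketch below is one hedged interpretation of this idea (the `embed` function and margin value are illustrative assumptions, not the patent's definitions):

```python
import math

def embed(x):
    """Shared branch of the ternary network: all three inputs pass through
    the same mapping. A fixed toy embedding here; a real branch would be a
    neural network with learned parameters."""
    return [x[0] + x[1], x[0] - x[1]]

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Encourage the positive sample (smaller relative distance to the anchor)
    to embed closer than the negative one, using no absolute position labels."""
    ea, ep, en = embed(anchor), embed(positive), embed(negative)
    return max(0.0, dist(ea, ep) - dist(ea, en) + margin)

zero = triplet_loss([0.0, 0.0], [0.1, 0.0], [5.0, 5.0])   # negative already far
hard = triplet_loss([0.0, 0.0], [1.0, 1.0], [1.0, 0.0])   # ordering violated
```

Only the relative ordering of samples from the same second device is needed, which is why unlabeled channel state data suffices for the first training.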
alternatively, the first training may be a training process on data samples without location tags;
Alternatively, the second training may be a training process on the data samples with position tags;
optionally, in order to perform semi-supervised learning based on the data samples, multiple alternating training may be performed on the data samples without the position tags and the data samples with the position tags, that is, multiple alternating iterations may be performed on the first training and the second training;
alternatively, when the first training and the second training are alternately iterated for a plurality of times, any alternate iteration sequence may be adopted; for example, the first training is performed once every two times, or the second training is performed once every three times, or the second training is performed once every two times; or randomly determining the next training process when the training process is executed once; or based on a preset random alternate iteration sequence, performing multiple alternate iterations on the first training and the second training; the embodiments of the present application are not limited in this regard.
Optionally, any of the training processes adjacent to the first training may be a previous training and/or a subsequent training process of the first training;
alternatively, any of the second training-adjacent training processes may be a previous training and/or a subsequent training process of the second training.
Optionally, any one of the first training adjacent training is the first training or the second training, and any one of the second training adjacent training is the first training or the second training.
Optionally, the first device performs an iterative training process based on the data samples, to obtain output parameters after iterative training, including:
the first device executes the iterative training process based on the data samples until it determines that a first condition is met, at which point the iterative training ends and the output parameters after iterative training are obtained;
wherein the first device determines that a first condition is satisfied, comprising at least one of:
the first device determines that the total number of first training in the iterative training process reaches a first training number threshold;
the first device determines that the total number of second trainings in the iterative training process reaches a second training number threshold;
the first device determines that the total number of first trainings and second trainings in the iterative training process reaches a training total number threshold; or
The first device determines that the position prediction information obtained after the last training in the iterative training process meets the position prediction information precision requirement and/or the position prediction auxiliary information meets the position prediction auxiliary information precision requirement, wherein the last training is the first training or the second training.
Optionally, the first device may perform multiple alternating iterative training on the first training and the second training until the first device determines that the total number of first training times reaches a first training time threshold;
optionally, the first device may perform multiple alternating iterative training on the first training and the second training until the first device determines that the total number of second training times reaches the second training time threshold;
optionally, the first device may perform multiple alternating iterative training on the first training and the second training until the total number of training of the first training and the second training reaches a total number of training threshold;
optionally, the first device may perform multiple alternating iterative trainings of the first training and the second training until the position prediction information obtained by the model on the test samples after the last training meets the position prediction information accuracy requirement.
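The four alternative forms of the first condition can be sketched as a single check (function and parameter names are illustrative assumptions; any one satisfied criterion ends the iterative training):

```python
def first_condition_met(n_first, n_second, last_error=None,
                        thr_first=None, thr_second=None,
                        thr_total=None, error_target=None):
    # Criterion 1: total number of first trainings reaches a threshold.
    if thr_first is not None and n_first >= thr_first:
        return True
    # Criterion 2: total number of second trainings reaches a threshold.
    if thr_second is not None and n_second >= thr_second:
        return True
    # Criterion 3: total number of trainings of both kinds reaches a threshold.
    if thr_total is not None and n_first + n_second >= thr_total:
        return True
    # Criterion 4: the last training's prediction error meets the
    # accuracy requirement.
    if error_target is not None and last_error is not None \
            and last_error <= error_target:
        return True
    return False
```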
Optionally, the location tag-free data samples include a first channel state data sample of a second device in a first location, a second channel state data sample of the second device in a second location, and a third channel state data sample of the second device in a third location;
the first channel state data sample is an input of a first branch in the first training in the three branches with the same structure, and an output of the first branch comprises a mapping of the first channel state data sample in a low-dimensional manifold space of the first channel state data sample;
The second channel state data sample is an input of a second branch in the first training in the three branches with the same structure, and an output of the second branch comprises a mapping of the second channel state data sample in a low-dimensional manifold space of the second channel state data sample;
the third channel state data sample is an input of a third branch in the first training in the three branches with the same structure, and an output of the third branch comprises a mapping of the third channel state data sample in a low-dimensional manifold space of the third channel state data sample;
the distance between the first position and the third position is less than or equal to a first threshold; a distance between the first location and the second location is greater than or equal to a second threshold, location information of the first location is unknown, and the first location is determined randomly or based on an indication;
the second training is a training process of a second model, the second model being a unary network whose structure is the same as that of any branch in the first model; the input of the unary network is the data sample with the position tag, and the output of the unary network is a mapping of the data sample with the position tag in its low-dimensional manifold space.
Optionally, the three input samples of the ternary network are an anchor sample (a first channel state data sample of the second device at the first position), a near sample (a third channel state data sample of the second device at the third position) and a far sample (a second channel state data sample of the second device at the second position), which are input to the three branches respectively;
optionally, the ternary network has three outputs, corresponding to the anchor sample, the near sample and the far sample respectively; the physical meaning of each output is the mapping of the corresponding high-dimensional input sample into the low-dimensional space;
optionally, the anchor point sample (a first channel state data sample of the second device at the first location) is an input of a first branch in a first training, an output of the first branch comprising a mapping of the first channel state data sample in a low-dimensional manifold space of the first channel state data sample;
optionally, the far samples (second channel state data samples of the second device in the second position) are inputs of a second branch in the first training, the outputs of the second branch comprising a mapping of the second channel state data samples in a low-dimensional manifold space of the second channel state data samples;
Optionally, the near samples (third channel state data samples of the second device in a third position) are inputs of a third branch in the first training, the output of the third branch comprising a mapping of the third channel state data samples in a low-dimensional manifold space of the third channel state data samples.
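A minimal sketch of the three-branch arrangement above; a single linear map stands in for the actual branch network (an assumption for illustration), and the key point is that the three branches of identical structure share one set of parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

class Branch:
    # One branch of the ternary network: maps a high-dimensional channel
    # state sample to a point in the low-dimensional manifold space.
    # A single linear layer stands in for the real network here.
    def __init__(self, in_dim, out_dim):
        self.W = rng.standard_normal((out_dim, in_dim)) * 0.1

    def __call__(self, x):
        return self.W @ x

# One Branch object is applied to all three inputs, so the three
# branches of identical structure share one set of parameters.
branch = Branch(in_dim=64, out_dim=2)
anchor, near, far = (rng.standard_normal(64) for _ in range(3))
y_anchor, y_near, y_far = branch(anchor), branch(near), branch(far)
```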
Alternatively, the second training may comprise a training process of a second model, which may be a unary network, i.e., the second training may be a training process of a unary network based on channel state data samples with location tags;
optionally, the second model may be a unary network, the learning rate may be L2, the number of trainings may be E2, and the number of samples for each training may be S2, where L2, E2, and S2 may be preset or protocol-predefined or indicated by higher layers;
alternatively, the structure of the unary network may be the same as the structure of any branch in the ternary network comprised by the first model.
Alternatively, a data sample set may be obtained in advance, including data samples with position tags and data samples without position tags;
the data samples without position tags may comprise Y1 groups of data and the data samples with position tags Y2 groups of data; the data samples without position tags may be used for training the first model in the first training stage, each group of data having three samples:
1) The anchor sample is a reference sample;
2) A near sample is a sample whose distance from the anchor sample is less than N1 meters, or less than or equal to N2 meters;
3) A far sample is a sample whose distance from the anchor sample is greater than M1 meters and less than M2 meters, or greater than M3 meters, where M1 and M3 are each greater than or equal to N1 and N2;
the data samples with position tags are used for training the second model in the second training stage, each group of data having one sample (x, p), where x is the CIR and p is the position tag.
Wherein Y1, Y2, N1, N2, M1, M2, M3 are positive integers, and may be preset or protocol-predefined or indicated by higher layers;
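Under the assumption that the collecting device can measure relative distances between sample positions (but holds no absolute position tags), the triplet grouping above can be sketched as follows; the function name is illustrative, and a single near threshold and a single far threshold stand in for the fuller N1/N2 and M1/M2/M3 conditions:

```python
import numpy as np

def build_triplets(samples, rel_positions, n1=2.0, m1=10.0):
    # Group samples into (anchor, near, far) triplets: a near sample
    # lies within n1 meters of the anchor, a far sample at least m1
    # meters away. rel_positions carries only relative geometry and is
    # never used as a position label.
    samples = np.asarray(samples)
    rel_positions = np.asarray(rel_positions, dtype=float)
    triplets = []
    for i in range(len(samples)):
        d = np.linalg.norm(rel_positions - rel_positions[i], axis=1)
        near = [j for j in range(len(samples)) if j != i and d[j] <= n1]
        far = [j for j in range(len(samples)) if d[j] >= m1]
        if near and far:
            triplets.append((samples[i], samples[near[0]], samples[far[0]]))
    return triplets
```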
optionally, the third training in the iterative training process includes:
in the case that the third training is not the first-executed training in the iterative training process, the first device determines the initial parameters in the third training based on the parameters obtained at the end of the training preceding the third training;
in the case that the third training is the first-executed training in the iterative training process, the first device determines the initial parameters in the third training based on a protocol-predefined or preconfigured or preset initial parameter configuration;
Wherein the third training is any one training in the iterative training process.
Optionally, the initial parameters of any training in the iterative training process other than the first-executed one are determined based on the parameters of the model obtained at the end of the training immediately preceding it;
optionally, the initial parameters for the first training process in the iterative training process are determined based on a protocol pre-defined or pre-configured or preset initial parameter configuration.
Optionally, in the case that the previous training of the third training is the first training, the parameter obtained by ending the previous training is a parameter obtained by ending any one of the three branches of the same structure at the previous training.
Optionally, when a certain first training process ends and the parameters of the model obtained by the three branches in the first training process are all θ, the initial parameters of the subsequent training process of the first training process may be determined based on θ.
Optionally, in the case that the previous training of the third training is the second training, the parameter obtained by ending the previous training is a parameter obtained by the unary network at the end of the previous training.
Optionally, when a certain second training process ends, the parameters of the model obtained by the unary network in the second training process are θ, and then the initial parameters of the training process after the second training process can be determined based on θ.
Optionally, in the case that the third training is the first training, the determining, based on a parameter obtained at the end of a previous training of the third training, an initial parameter in the third training includes:
the first device uses the parameters obtained by ending the previous training of the third training as initial parameters of each of three branches of the same structure of the ternary network in the third training.
Optionally, the next training process of a certain first training process A1 is the first training process A2, and when the first training process A1 ends, parameters of models obtained by the three branches in the first training process A1 are all θ1, and then initial parameters of the three branches in the first training process A2 can be determined to be all θ1.
Optionally, the next training process after a certain second training B2 is the first training A4; when the second training B2 ends, the parameters of the model obtained by the unary network in the second training B2 are θ2, and then it can be determined that the initial parameters of the three branches of the first training A4 are all θ2.
Optionally, in the case that the third training is the second training, the determining, based on a parameter obtained by ending a previous training of the third training, an initial parameter in the third training includes:
the first device uses the parameters obtained at the end of the training preceding the third training as the initial parameters of the unary network in the third training.
Optionally, the subsequent training process of a certain first training process A3 is the second training process B1, and when the first training process A3 ends, the parameters of the models obtained by the three branches in the first training process A3 are all θ3, and then it may be determined that the initial parameters of the unary network in the second training process B1 are θ3.
Optionally, the next training process of a certain second training process B3 is the second training process B4, and when the second training process B3 ends, the parameters of the model obtained by the unary network in the second training process B3 are θ4, and then it may be determined that the initial parameters of the unary network in the second training process B4 are θ4.
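Because the unary network and each ternary branch have one and the same structure, the parameter hand-off illustrated by the A1/A2, B2/A4, A3/B1 and B3/B4 examples reduces to a single rule (a sketch; the names are assumptions):

```python
def initial_params(is_first_executed, prev_params, preset_params):
    # The first-executed training starts from the protocol-predefined /
    # preconfigured / preset initialization; every later training starts
    # from the parameters left by the training immediately before it,
    # regardless of whether that was a first or a second training.
    return preset_params if is_first_executed else prev_params
```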
Optionally, the determining the positioning model based on the output parameters after the iterative training includes at least one of:
under the condition that the last training of the iterative training is the first training, the first device takes the parameters obtained by any one of the three branches with the same structure at the end of the last training as the parameters of the positioning model; or
under the condition that the last training of the iterative training is the second training, the first device takes the parameters obtained by the unary network at the end of the last training as the parameters of the positioning model.
Optionally, if the last training in the iterative training process is the first training A5, and the parameters of the model obtained by the three branches of the first training A5 at the end of training are θ5, the parameters of the trained positioning model can be determined to be θ5;
optionally, if the last training in the iterative training process is the second training B5, and the parameter of the model obtained by the second training B5 through the unary network at the end of training is θ6, it may be determined that the parameter of the trained positioning model is θ6.
Optionally, the loss function of the ternary network is used to increase the distance D1 between the output of the first branch and the output of the second branch and to decrease the distance D2 between the output of the first branch and the output of the third branch, a loss being incurred while D1 − D2 is less than M, where M is preset or protocol-predefined or determined based on an indication.
Alternatively, the loss function of the ternary network may comprise any function satisfying the following conditions: in the low-dimensional space Y, the distance D1 between the far sample yk (output of the second branch) and the anchor sample yi (output of the first branch) is as large as possible, the distance D2 between the near sample yj (output of the third branch) and the anchor sample yi (output of the first branch) is as small as possible, and a loss is incurred while D1 − D2 is less than M.
Alternatively, the loss function of the ternary network may be:

L = (1/N) · Σ_{n=1}^{N} max(0, ‖f(x_j^n) − f(x_i^n)‖₂ − ‖f(x_k^n) − f(x_i^n)‖₂ + M)

where M is the margin hyperparameter and N is the number of sample groups in one batch; x_i^n, x_j^n and x_k^n respectively denote the CIRs of the anchor sample, the near sample and the far sample of the n-th group of samples in the batch, and f(x_i^n), f(x_j^n) and f(x_k^n) denote the corresponding outputs: the anchor output yi (output of the first branch), the near output yj (output of the third branch) and the far output yk (output of the second branch). Like other hyperparameters, an optimal value of M can be obtained through ablation experiments.
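A sketch of this batch-averaged hinge loss in NumPy (the choice of the Euclidean norm is an assumption):

```python
import numpy as np

def ternary_loss(y_anchor, y_near, y_far, margin):
    # y_*: (N, d) batches of branch outputs in the low-dimensional space.
    # Each group is penalized while its far-anchor distance D1 does not
    # exceed its near-anchor distance D2 by the margin M.
    d_near = np.linalg.norm(y_anchor - y_near, axis=1)  # D2
    d_far = np.linalg.norm(y_anchor - y_far, axis=1)    # D1
    return float(np.mean(np.maximum(0.0, d_near - d_far + margin)))
```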
Optionally, the loss function of the unary network is used to reduce the distance between the mapping of the position-tagged data samples in a low-dimensional manifold space and the position tags.
Alternatively, the loss function of the unary network may comprise any function satisfying the following condition: the distance between the model prediction (the mapping of the position-tagged data sample in the low-dimensional manifold space, i.e., the output of the unary network) and the real label of the channel state data sample is as small as possible; the distance may be, for example, the mean absolute distance or the Euclidean distance.
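For instance, with the two distances named above, the unary-network loss can be sketched as (names are illustrative):

```python
import numpy as np

def unary_loss(pred, label, metric="euclidean"):
    # pred: (N, d) mappings of position-tagged samples in the manifold
    # space (the unary network's output); label: (N, d) position tags.
    diff = np.asarray(pred, dtype=float) - np.asarray(label, dtype=float)
    if metric == "euclidean":
        return float(np.mean(np.linalg.norm(diff, axis=1)))
    return float(np.mean(np.abs(diff)))  # mean absolute distance
```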
Optionally, the channel state data samples include at least one of:
channel impulse response (Channel Impulse Response, CIR);
Parameters of the channel;
wherein the parameters of the channel include at least one of:
first path delay, first path power, first path phase, first path angle, maximum N path delay, maximum N path power, maximum N path phase, maximum N path angle, time of arrival (Time of Arrival, TOA), time difference of arrival (Time Difference of Arrival, TDOA), or reference signal received power (Reference Signal Receiving Power, RSRP).
In one embodiment, fig. 6 is a schematic flow chart of positioning model training provided in the embodiment of the present application; as shown in fig. 6, the following may be performed:
a) A first training phase:
training a first model through data samples without position labels, wherein the first model is a ternary network, the learning rate is L1, the number of trainings is E1, and the number of samples for each training is S1; further, the model parameters θ of any one of the three branches of the first model may be saved;
b) A second training phase:
taking the model parameters θ obtained in the first training stage as the initial parameters of a second model, where the second model has the same structure as one branch of the first model; training the second model through data samples with position labels, where the second model is a unary network, the learning rate is L2, the number of trainings is E2, and the number of samples for each training is S2; saving the model parameters θ' of the second model; taking the model parameters θ' obtained in the second stage as the initial parameters of each branch of the first model, and returning to step a);
c) And sequentially iterating until the second model reaches the target positioning accuracy on the test set.
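The a)/b)/c) flow above can be sketched as one loop; the three callables are placeholders (assumptions) for the actual training and test routines:

```python
def train_positioning_model(theta0, unlabeled, labeled, target_error,
                            train_ternary, train_unary, test_error):
    # Phase a): train the ternary network on the unlabeled data and keep
    # the shared branch parameters. Phase b): fine-tune the same
    # parameters as a unary network on the labeled data. Phase c):
    # repeat until the unary model reaches the target accuracy on the
    # test set.
    theta = theta0
    while True:
        theta = train_ternary(theta, unlabeled)   # phase a)
        theta = train_unary(theta, labeled)       # phase b)
        if test_error(theta) <= target_error:     # phase c)
            return theta
```

With stub routines, the loop shows the back-and-forth parameter hand-off: each phase receives the parameters the previous phase produced.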
In the embodiment of the application, in the use case of positioning enhancement based on AI, AI model training is performed based on semi-supervised learning, and the data sample with the position label and the data sample without the position label which is easy to obtain are utilized to increase the scale of the training sample, improve the positioning accuracy of the trained AI model, and further improve the positioning accuracy.
Fig. 7 is a flow chart of a positioning method provided in an embodiment of the present application, as shown in fig. 7, including the following steps:
step 700, the first device inputs channel state data of a target device to be positioned into a positioning model, and obtains position prediction information and/or position prediction auxiliary information of the target device output from the positioning model;
wherein the positioning model is obtained based on semi-supervised learning.
Alternatively, the high-dimensional channel state data may be mapped to a low-dimensional manifold space (e.g., two-dimensional) with the same dimensionality as the position. Such a mapping can be considered to implement a neighborhood-preserving principle: channel state data collected at nearby locations in real space are similar, and their mappings in the low-dimensional manifold space are also close; subsequent location-based services can then use positions in the manifold space in place of real positions.
Thus, in training the AI model for positioning, channel state data may be used as training samples;
optionally, AI technology can significantly improve positioning accuracy. In a wireless communication network, the input of the AI model may be channel state data of the device to be positioned, such as the channel impulse responses between the device and a plurality of TRPs, and the output of the AI model is position prediction information of the device to be positioned, such as a position prediction result, or an intermediate feature quantity that can assist in position calculation. However, AI-based positioning accuracy enhancement requires a large amount of available training data with real labels, i.e., data samples with position tags, whose acquisition consumes significant resources. In contrast, data samples without position tags are easier to obtain; for example, only the user's channel state data may be collected, without collecting position tags.
Alternatively, semi-supervised learning is a method of training an AI model using both data samples with position tags and data samples without position tags.
Alternatively, semi-supervised learning in embodiments of the present application may include a method of training an AI model based on data of a small number of data samples with position tags and a large number of data samples without position tags;
Alternatively, semi-supervised learning may be performed based on the position-tagged data samples and the non-position-tagged data samples to obtain the positioning model.
Alternatively, for the positioning model obtained by training in any of the foregoing embodiments, channel state data of the target device to be positioned may be input into the positioning model, and position prediction information and/or position prediction assistance information for the target device output from the positioning model may be obtained.
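Inference with the trained positioning model then reduces to a single forward pass; in this sketch a fixed linear map stands in for the trained network (an assumption for illustration):

```python
import numpy as np

def locate(model, channel_state_data):
    # model: any callable implementing the trained positioning model;
    # returns position prediction information (or position prediction
    # assistance information) for the target device.
    return model(np.asarray(channel_state_data, dtype=float))

# Illustrative stand-in for a trained model: project a 4-dim channel
# state feature vector onto two position coordinates.
W = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0]])
pos = locate(lambda x: W @ x, [3.0, 4.0, 0.0, 0.0])
```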
In one embodiment, fig. 8 is a schematic diagram of positioning accuracy provided in the embodiment of the present application. As shown in fig. 8, a positioning model obtained by semi-supervised learning may be compared with a positioning model obtained by supervised learning, and it can be seen that the positioning accuracy of the positioning model obtained by semi-supervised learning is higher, as shown in Table 1 below:
TABLE 1
Wherein 50%, 67%, 80%, 90% are points of the cumulative probability density distribution of the positioning error; according to the simulation results, with this data set acquisition method, the semi-supervised learning method of this scheme can significantly improve positioning accuracy.
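The 50%/67%/80%/90% figures are points on the empirical CDF of the positioning error; a sketch of how such statistics are computed (percentile interpolation here follows NumPy's default):

```python
import numpy as np

def error_percentiles(errors, qs=(50, 67, 80, 90)):
    # Returns, for each q, the positioning error not exceeded with
    # probability q%, i.e. the q-th percentile of the error samples.
    e = np.asarray(errors, dtype=float)
    return {q: float(np.percentile(e, q)) for q in qs}
```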
In the embodiment of the application, in the use case of positioning enhancement based on AI, AI model training is performed based on semi-supervised learning, and the data sample with the position label and the data sample without the position label which is easy to obtain are utilized to increase the scale of the training sample, improve the positioning accuracy of the trained AI model, and further improve the positioning accuracy.
According to the positioning model training method provided in the embodiments of the present application, the execution subject may be a positioning model training device. In the embodiments of the present application, the positioning model training device is described by taking as an example the case in which the positioning model training device executes the positioning model training method.
Fig. 9 is a schematic structural diagram of a positioning model training device according to an embodiment of the present application, as shown in fig. 9, the device 900 includes: a data sample acquisition module 910 and a positioning model acquisition module 920; wherein:
the data sample acquisition module 910 is configured to acquire data samples, where the data samples include a data sample with a location tag and a data sample without a location tag;
the positioning model obtaining module 920 is configured to perform semi-supervised learning based on the data samples to obtain a positioning model;
the input of the positioning model comprises channel state data of equipment to be positioned, and the output of the positioning model comprises position prediction information and/or position prediction auxiliary information of the equipment to be positioned.
The positioning model training device provided by the embodiment of the application can realize each process realized by each method embodiment and achieve the same technical effect, and in order to avoid repetition, the description is omitted here.
In the embodiment of the application, in the use case of positioning enhancement based on AI, AI model training is performed based on semi-supervised learning, and the data sample with the position label and the data sample without the position label which is easy to obtain are utilized to increase the scale of the training sample, improve the positioning accuracy of the trained AI model, and further improve the positioning accuracy.
Optionally, the positioning model acquisition module is configured to:
performing an iterative training process based on the data samples to obtain output parameters after the iterative training, wherein the iterative training process comprises: at least one first training based on the data samples without the position tags, and at least one second training based on the data samples with the position tags; the first training is a training process of a first model, wherein the first model is a ternary network and comprises three branches with the same structure;
and determining the positioning model based on the output parameters after the iterative training.
Optionally, the positioning model acquisition module is configured to:
executing an iterative training process based on the data sample until the first condition is determined to be met, ending the iterative training to obtain output parameters after the iterative training;
Wherein the determining meets a first condition comprising at least one of:
determining that the total number of first training in the iterative training process reaches a first training number threshold;
determining that the total number of second training in the iterative training process reaches a second training number threshold;
determining that the total number of first trainings and second trainings in the iterative training process reaches a training total number threshold; or
And determining that the position prediction information obtained after the last training in the iterative training process meets the position prediction information precision requirement and/or the position prediction auxiliary information meets the position prediction auxiliary information precision requirement, wherein the last training is the first training or the second training.
Optionally, the location tag-free data samples include a first channel state data sample of a second device in a first location, a second channel state data sample of the second device in a second location, and a third channel state data sample of the second device in a third location;
the first channel state data sample is an input of a first branch in the first training in the three branches with the same structure, and an output of the first branch comprises a mapping of the first channel state data sample in a low-dimensional manifold space of the first channel state data sample;
The second channel state data sample is an input of a second branch in the first training in the three branches with the same structure, and an output of the second branch comprises a mapping of the second channel state data sample in a low-dimensional manifold space of the second channel state data sample;
the third channel state data sample is an input of a third branch in the first training in the three branches with the same structure, and an output of the third branch comprises a mapping of the third channel state data sample in a low-dimensional manifold space of the third channel state data sample;
the distance between the first position and the third position is less than or equal to a first threshold; a distance between the first location and the second location is greater than or equal to a second threshold, location information of the first location is unknown, and the first location is determined randomly or based on an indication;
the second training is a training process of a second model, the second model being a unary network whose structure is the same as that of any branch in the first model; the input of the unary network is the data sample with the position tag, and the output of the unary network is a mapping of the data sample with the position tag in its low-dimensional manifold space.
Optionally, the positioning model acquisition module is configured to:
determining the initial parameters in the third training based on the parameters obtained at the end of the training preceding the third training, in the case that the third training is not the first-executed training in the iterative training process;
determining the initial parameters in the third training based on a protocol-predefined or preconfigured or preset initial parameter configuration, in the case that the third training is the first-executed training in the iterative training process;
wherein the third training is any one training in the iterative training process.
Optionally, in the case that the previous training of the third training is the first training, the parameter obtained by ending the previous training is a parameter obtained by ending any one of the three branches of the same structure at the previous training.
Optionally, in the case that the previous training of the third training is the second training, the parameter obtained by ending the previous training is a parameter obtained by the unary network at the end of the previous training.
Optionally, in the case that the third training is the first training, the positioning model obtaining module is configured to:
And taking the parameters obtained after the previous training of the third training is finished as initial parameters of each of three branches of the same structure of the ternary network in the third training.
Optionally, in the case that the third training is the second training, the positioning model obtaining module is configured to:
and taking the parameters obtained at the end of the training preceding the third training as the initial parameters of the unary network in the third training.
Optionally, the positioning model acquisition module is configured to perform at least one of:
under the condition that the last training of the iterative training is the first training, taking the parameters obtained by any one of the three branches with the same structure at the end of the last training as the parameters of the positioning model; or
under the condition that the last training of the iterative training is the second training, taking the parameters obtained by the unary network at the end of the last training as the parameters of the positioning model.
Optionally, the loss function of the ternary network is used to increase the distance D1 between the output of the first branch and the output of the second branch and to decrease the distance D2 between the output of the first branch and the output of the third branch, a loss being incurred while D1 − D2 is less than M, where M is preset or protocol-predefined or determined based on an indication.
Optionally, the loss function of the unary network is used to reduce the distance between the mapping of the position-tagged data samples in a low-dimensional manifold space and the position tags.
Optionally, the channel state data samples include at least one of:
channel impulse response, CIR;
parameters of the channel;
wherein the parameters of the channel include at least one of:
the first path delay, the first path power, the first path phase, the first path angle, the maximum N path delay, the maximum N path power, the maximum N path phase, the maximum N path angle, the time of arrival TOA, the time difference of arrival TDOA, or the reference signal received power RSRP.
In the embodiment of the application, in the use case of positioning enhancement based on AI, AI model training is performed based on semi-supervised learning, and the data sample with the position label and the data sample without the position label which is easy to obtain are utilized to increase the scale of the training sample, improve the positioning accuracy of the trained AI model, and further improve the positioning accuracy.
The positioning model training device in the embodiment of the application may be an electronic device, for example, an electronic device with an operating system, or may be a component in the electronic device, for example, an integrated circuit or a chip. The electronic device may be a terminal, or may be other devices than a terminal. By way of example, terminals may include, but are not limited to, the types of terminals 11 listed above, other devices may be servers, network attached storage (Network Attached Storage, NAS), etc., and embodiments of the application are not specifically limited.
The positioning model training device provided in the embodiments of the present application can implement each process implemented by the method embodiments of fig. 4 to fig. 6 and achieve the same technical effects. To avoid repetition, details are not repeated here.
In the positioning method provided by the embodiments of the application, the execution subject may be a positioning device. In the embodiments of the present application, the positioning device provided herein is described by taking a positioning method performed by a positioning device as an example.
Fig. 10 is a schematic structural diagram of a positioning device according to an embodiment of the present application. As shown in fig. 10, the device 1000 includes a position information acquisition module 1010, wherein:
the position information acquisition module 1010 is configured to input channel state data of a target device to be positioned into a positioning model, and obtain position prediction information and/or position prediction auxiliary information of the target device output by the positioning model;
wherein the positioning model is obtained based on semi-supervised learning.
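By way of illustration, the inference step described above can be sketched as follows. The `PositioningModel` class and its single linear map are assumptions of the sketch, standing in for whatever semi-supervised model was actually trained.

```python
import numpy as np

class PositioningModel:
    """Minimal stand-in for the trained positioning model: one linear
    map from channel-state features to a 2-D position prediction.
    The class name and linear form are illustrative assumptions."""
    def __init__(self, weights, bias):
        self.weights = np.asarray(weights, dtype=float)
        self.bias = np.asarray(bias, dtype=float)

    def predict(self, channel_state_data):
        x = np.asarray(channel_state_data, dtype=float)
        return x @ self.weights + self.bias  # predicted (x, y) position

# The positioning device feeds channel state data of the target device
# into the model and reads out the position prediction information.
model = PositioningModel(weights=[[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]],
                         bias=[0.0, 0.0])
position = model.predict([2.0, 3.0, 4.0])
```

A real deployment would replace the hand-set weights with the parameters obtained from the semi-supervised training described earlier.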
The positioning device provided in the embodiments of the present application can implement each process implemented by the method embodiments described above and achieve the same technical effects. To avoid repetition, details are not repeated here.
In the embodiments of the application, in the use case of AI-based positioning enhancement, AI model training is performed based on semi-supervised learning: the data samples with position tags and the easily obtained data samples without position tags are used together, which increases the scale of the training samples, improves the accuracy of the trained AI model, and thereby improves positioning accuracy.
The positioning device in the embodiments of the application may be an electronic device, for example an electronic device with an operating system, or may be a component in an electronic device, for example an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. By way of example, the terminal may include, but is not limited to, the types of terminal 11 listed above, and the other device may be a server, a network attached storage (Network Attached Storage, NAS), or the like; the embodiments of the application do not specifically limit this.
The positioning device provided in the embodiments of the present application can implement each process implemented by the method embodiment of fig. 7 and achieve the same technical effects. To avoid repetition, details are not repeated here.
Optionally, fig. 11 is a schematic structural diagram of a communication device according to an embodiment of the present application. As shown in fig. 11, a communication device 1100 is further provided, including a processor 1101 and a memory 1102, wherein the memory 1102 stores a program or instructions executable on the processor 1101. For example, when the communication device 1100 is a first device, the program or instructions, when executed by the processor 1101, implement the steps of the foregoing positioning model training method embodiment and achieve the same technical effects.
Optionally, fig. 12 is a second schematic structural diagram of a communication device according to an embodiment of the present application. As shown in fig. 12, a communication device 1200 is further provided, including a processor 1201 and a memory 1202, wherein the memory 1202 stores a program or instructions executable on the processor 1201. For example, when the communication device 1200 is a first device, the program or instructions, when executed by the processor 1201, implement the steps of the foregoing positioning method embodiment and achieve the same technical effects.
The embodiment of the application also provides a first device, which comprises a processor and a communication interface, wherein the processor is used for:
acquiring data samples, wherein the data samples comprise data samples with position tags and data samples without position tags;
performing semi-supervised learning based on the data samples to obtain a positioning model;
wherein the input of the positioning model comprises channel state data of a device to be positioned, and the output of the positioning model comprises position prediction information and/or position prediction auxiliary information of the device to be positioned.
This first device embodiment corresponds to the first device-side method embodiment; each implementation process and implementation manner of the method embodiment is applicable to this first device embodiment and can achieve the same technical effects. Specifically, fig. 13 is one of the schematic hardware structure diagrams of a first device implementing an embodiment of the present application.
Specifically, the embodiments of the application further provide a first device. As shown in fig. 13, the first device 1300 includes: an antenna 1301, a radio frequency device 1302, a baseband device 1303, a processor 1304, and a memory 1305. The antenna 1301 is connected to the radio frequency device 1302. In the uplink direction, the radio frequency device 1302 receives information via the antenna 1301 and sends the received information to the baseband device 1303 for processing. In the downlink direction, the baseband device 1303 processes the information to be transmitted and sends it to the radio frequency device 1302, and the radio frequency device 1302 processes the received information and transmits it through the antenna 1301.
The method performed by the first device in the above embodiments may be implemented in the baseband device 1303, which includes a baseband processor.
The baseband device 1303 may, for example, include at least one baseband board on which a plurality of chips are disposed. As shown in fig. 13, one of the chips, for example a baseband processor, is connected to the memory 1305 through a bus interface to call a program in the memory 1305 and perform the first device operations shown in the above method embodiments.
The first device may also include a network interface 1306, such as a common public radio interface (common public radio interface, CPRI).
Specifically, the first device 1300 of the embodiments of the present application further includes instructions or a program stored in the memory 1305 and executable on the processor 1304; the processor 1304 calls the instructions or program in the memory 1305 to perform the methods performed by the modules shown in fig. 9 and achieve the same technical effects. To avoid repetition, details are not repeated here.
Wherein the processor 1304 is configured to:
acquiring data samples, wherein the data samples comprise data samples with position tags and data samples without position tags;
performing semi-supervised learning based on the data samples to obtain a positioning model;
wherein the input of the positioning model comprises channel state data of a device to be positioned, and the output of the positioning model comprises position prediction information and/or position prediction auxiliary information of the device to be positioned.
In the embodiments of the application, in the use case of AI-based positioning enhancement, AI model training is performed based on semi-supervised learning: the data samples with position tags and the easily obtained data samples without position tags are used together, which increases the scale of the training samples, improves the accuracy of the trained AI model, and thereby improves positioning accuracy.
Optionally, the processor 1304 is configured to:
performing an iterative training process based on the data samples to obtain output parameters after the iterative training, wherein the iterative training process comprises: at least one first training based on the data samples without position tags, and at least one second training based on the data samples with position tags; the first training is a training process of a first model, the first model is a ternary network, and the ternary network comprises three branches with the same structure;
and determining the positioning model based on the output parameters after the iterative training.
Optionally, the processor 1304 is configured to:
executing an iterative training process based on the data samples until it is determined that a first condition is met, and ending the iterative training to obtain the output parameters after the iterative training;
wherein the determining meets a first condition comprising at least one of:
determining that the total number of first trainings in the iterative training process reaches a first training number threshold;
determining that the total number of second trainings in the iterative training process reaches a second training number threshold;
determining that the total number of first trainings and second trainings in the iterative training process reaches a threshold on the total number of trainings; or
determining that the position prediction information obtained after the last training in the iterative training process meets the position prediction information accuracy requirement and/or that the position prediction auxiliary information meets the position prediction auxiliary information accuracy requirement, wherein the last training is the first training or the second training.
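The alternation of first and second trainings under a stop condition can be sketched as follows. The strict round-robin schedule, the helper names (`train_first`, `train_second`), and the use of only the total-trainings threshold as the stop condition are assumptions of the sketch; the embodiments admit other schedules and any of the other stop conditions listed above.

```python
def run_iterative_training(unlabeled_batches, labeled_batches,
                           train_first, train_second, init_params,
                           max_total=10):
    """Alternate first training (ternary network, unlabeled samples)
    and second training (unary network, labeled samples), carrying the
    parameters of each round into the next, until the threshold on the
    total number of trainings is reached."""
    params, total = init_params, 0
    n_first = n_second = 0
    while total < max_total:            # stop condition: total trainings
        if total % 2 == 0:
            params = train_first(params, unlabeled_batches)
            n_first += 1
        else:
            params = train_second(params, labeled_batches)
            n_second += 1
        total += 1
    return params, n_first, n_second

# Dummy trainers (stand-ins that just bump a scalar "parameter"),
# purely to show the parameter handover between rounds.
params, n_first, n_second = run_iterative_training(
    unlabeled_batches=[], labeled_batches=[],
    train_first=lambda p, b: p + 1,
    train_second=lambda p, b: p + 10,
    init_params=0, max_total=4)
```

After the loop ends, the returned parameters are those of the last training, matching the rule below for deriving the positioning model's parameters.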
Optionally, the data samples without position tags comprise a first channel state data sample of a second device at a first position, a second channel state data sample of the second device at a second position, and a third channel state data sample of the second device at a third position;
the first channel state data sample is the input, in the first training, of the first of the three branches with the same structure, and the output of the first branch comprises a mapping of the first channel state data sample in a low-dimensional manifold space of the first channel state data sample;
the second channel state data sample is the input, in the first training, of the second of the three branches with the same structure, and the output of the second branch comprises a mapping of the second channel state data sample in a low-dimensional manifold space of the second channel state data sample;
the third channel state data sample is the input, in the first training, of the third of the three branches with the same structure, and the output of the third branch comprises a mapping of the third channel state data sample in a low-dimensional manifold space of the third channel state data sample;
the distance between the first position and the third position is less than or equal to a first threshold; the distance between the first position and the second position is greater than or equal to a second threshold; the position information of the first position is unknown, and the first position is determined randomly or based on an indication;
the second training is a training process of a second model, the second model is a unary network, and the structure of the unary network is the same as that of any branch in the first model; the input of the unary network is the data sample with the position tag, and the output of the unary network is a mapping of the data sample with the position tag in a low-dimensional manifold space of the data sample with the position tag.
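By way of illustration, the weight sharing described above can be sketched as follows: the three branches of the ternary network and the unary network all have the same structure, here reduced to a single linear map into the low-dimensional manifold space. The `Encoder` class and the anchor/negative/positive naming (for the first, second, and third channel state data samples respectively) are assumptions of the sketch.

```python
import numpy as np

class Encoder:
    """One branch: a linear map into the low-dimensional manifold
    space. A single shared instance plays all three triplet branches
    and the unary network, since they have the same structure."""
    def __init__(self, weights):
        self.weights = np.asarray(weights, dtype=float)

    def __call__(self, x):
        return np.asarray(x, dtype=float) @ self.weights

def triplet_forward(encoder, anchor, negative, positive):
    # First training: the same encoder is applied, as three
    # weight-sharing branches, to the anchor (first position), the
    # far-away negative (second position), and the nearby positive
    # (third position).
    return encoder(anchor), encoder(negative), encoder(positive)

# Shared weights mapping 3-D channel features into a 2-D manifold space
encoder = Encoder(np.eye(3)[:, :2])
z_a, z_n, z_p = triplet_forward(encoder, [1, 0, 0], [0, 1, 0], [1, 0, 1])
```

In the second training, the same `encoder` would be applied alone to each position-tagged sample, and its output compared against the position tag.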
Optionally, the processor 1304 is configured to:
determining initial parameters in a third training based on the parameters obtained at the end of the previous training of the third training, in the case that the third training is not the first training in the iterative training process;
determining initial parameters in the third training based on a protocol-predefined, preconfigured, or preset initial parameter configuration manner, in the case that the third training is the first training in the iterative training process;
wherein the third training is any one training in the iterative training process.
Optionally, in the case that the previous training of the third training is the first training, the parameters obtained at the end of the previous training are the parameters obtained by any one of the three branches with the same structure at the end of the previous training.
Optionally, in the case that the previous training of the third training is the second training, the parameters obtained at the end of the previous training are the parameters obtained by the unary network at the end of the previous training.
Optionally, in the case that the third training is the first training, the processor 1304 is configured to:
taking the parameters obtained at the end of the previous training of the third training as the initial parameters of each of the three branches with the same structure of the ternary network in the third training.
Optionally, in the case that the third training is the second training, the processor 1304 is configured to:
taking the parameters obtained at the end of the previous training of the third training as the initial parameters of the unary network in the third training.
Optionally, the processor 1304 is configured to perform at least one of the following:
in the case that the last training of the iterative training is the first training, taking the parameters obtained by any one of the three branches with the same structure at the end of the last training as the parameters of the positioning model; or
in the case that the last training of the iterative training is the second training, taking the parameters obtained by the unary network at the end of the last training as the parameters of the positioning model.
Optionally, the loss function of the ternary network is used to increase the distance D1 between the output of the first branch and the output of the second branch, to decrease the distance D2 between the output of the first branch and the output of the third branch, and to ensure that D1-D2 is less than M, where M is preset, predefined by the protocol, or determined based on an indication.
Optionally, the loss function of the unary network is used to reduce the distance between the mapping of the position-tagged data samples in a low-dimensional manifold space and the position tags.
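The two loss functions can be sketched as follows. The hinge form of the ternary loss is an assumption of the sketch (one standard way to combine the stated roles of D1, D2, and the margin M), as is the squared-distance form of the unary loss; the embodiments do not fix either functional form.

```python
import numpy as np

def ternary_loss(z1, z2, z3, margin):
    """Loss for the ternary network: pushes D1 = ||z1 - z2|| up and
    D2 = ||z1 - z3|| down. The hinge is positive while D1 - D2 is
    below the margin M (illustrative hinge form)."""
    d1 = np.linalg.norm(np.asarray(z1, dtype=float) - np.asarray(z2, dtype=float))
    d2 = np.linalg.norm(np.asarray(z1, dtype=float) - np.asarray(z3, dtype=float))
    return max(d2 - d1 + margin, 0.0)

def unary_loss(mapping, position_tag):
    """Loss for the unary network: squared distance between the
    sample's low-dimensional mapping and its position tag."""
    diff = np.asarray(mapping, dtype=float) - np.asarray(position_tag, dtype=float)
    return float(diff @ diff)
```

With a well-separated triplet (negative far, positive close) the ternary loss vanishes; the unary loss drives the mapping of labeled samples toward their tags.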
Optionally, the channel state data samples include at least one of:
channel impulse response, CIR;
parameters of the channel;
wherein the parameters of the channel include at least one of:
the first path delay, the first path power, the first path phase, the first path angle, the delays, powers, phases, and angles of the strongest N paths, the time of arrival TOA, the time difference of arrival TDOA, or the reference signal received power RSRP.
In the embodiments of the application, in the use case of AI-based positioning enhancement, AI model training is performed based on semi-supervised learning: the data samples with position tags and the easily obtained data samples without position tags are used together, which increases the scale of the training samples, improves the accuracy of the trained AI model, and thereby improves positioning accuracy.
The embodiment of the application also provides a first device, which comprises a processor and a communication interface, wherein the processor is used for:
inputting channel state data of a target device to be positioned into a positioning model, and obtaining position prediction information and/or position prediction auxiliary information of the target device output by the positioning model;
wherein the positioning model is obtained based on semi-supervised learning.
This first device embodiment corresponds to the first device-side method embodiment; each implementation process and implementation manner of the method embodiment is applicable to this first device embodiment and can achieve the same technical effects. Specifically, fig. 14 is a second schematic hardware structure diagram of a first device implementing an embodiment of the present application.
Specifically, the embodiments of the application further provide a first device. As shown in fig. 14, the first device 1400 includes: an antenna 1401, a radio frequency device 1402, a baseband device 1403, a processor 1404, and a memory 1405. The antenna 1401 is connected to the radio frequency device 1402. In the uplink direction, the radio frequency device 1402 receives information via the antenna 1401 and sends the received information to the baseband device 1403 for processing. In the downlink direction, the baseband device 1403 processes the information to be transmitted and sends it to the radio frequency device 1402, and the radio frequency device 1402 processes the received information and transmits it through the antenna 1401.
The method performed by the first device in the above embodiments may be implemented in the baseband device 1403, which includes a baseband processor.
The baseband device 1403 may, for example, include at least one baseband board on which a plurality of chips are disposed. As shown in fig. 14, one of the chips, for example a baseband processor, is connected to the memory 1405 through a bus interface to call a program in the memory 1405 and perform the first device operations shown in the above method embodiments.
The first device may also include a network interface 1406, such as a common public radio interface (common public radio interface, CPRI).
Specifically, the first device 1400 of the embodiments of the present application further includes instructions or a program stored in the memory 1405 and executable on the processor 1404; the processor 1404 calls the instructions or program in the memory 1405 to perform the method performed by the modules shown in fig. 10 and achieve the same technical effects. To avoid repetition, details are not repeated here.
Wherein the processor 1404 is configured to:
inputting channel state data of a target device to be positioned into a positioning model, and obtaining position prediction information and/or position prediction auxiliary information of the target device output by the positioning model;
wherein the positioning model is obtained based on semi-supervised learning.
In the embodiments of the application, in the use case of AI-based positioning enhancement, AI model training is performed based on semi-supervised learning: the data samples with position tags and the easily obtained data samples without position tags are used together, which increases the scale of the training samples, improves the accuracy of the trained AI model, and thereby improves positioning accuracy.
The embodiments of the present application further provide a readable storage medium storing a program or instructions which, when executed by a processor, implement each process of the positioning model training method embodiment above and can achieve the same technical effects. To avoid repetition, details are not repeated here.
The processor is the processor in the terminal described in the above embodiments. The readable storage medium includes a computer-readable storage medium, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or the like.
The embodiments of the present application further provide a readable storage medium storing a program or instructions which, when executed by a processor, implement each process of the positioning method embodiment above and can achieve the same technical effects. To avoid repetition, details are not repeated here.
The processor is the processor in the terminal described in the above embodiments. The readable storage medium includes a computer-readable storage medium, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or the like.
The embodiments of the application further provide a chip, the chip including a processor and a communication interface, the communication interface being coupled with the processor, and the processor being configured to run a program or instructions to implement each process of the positioning model training method embodiment above and achieve the same technical effects. To avoid repetition, details are not repeated here.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-on-chip, or the like.
The embodiments of the application further provide a chip, the chip including a processor and a communication interface, the communication interface being coupled with the processor, and the processor being configured to run a program or instructions to implement each process of the positioning method embodiment above and achieve the same technical effects. To avoid repetition, details are not repeated here.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-on-chip, or the like.
The embodiments of the present application further provide a computer program/program product stored in a storage medium, the computer program/program product being executed by at least one processor to implement each process of the positioning model training method embodiment above, with the same technical effects. To avoid repetition, details are not repeated here.
The embodiments of the present application further provide a computer program/program product stored in a storage medium, the computer program/program product being executed by at least one processor to implement each process of the positioning method embodiment above, with the same technical effects. To avoid repetition, details are not repeated here.
The embodiment of the application also provides a positioning model training system, which comprises: a first device operable to perform the steps of the positioning model training method as described above.
The embodiment of the application also provides a positioning system, which comprises: a first device operable to perform the steps of the positioning method as described above.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, and may also include performing the functions in a substantially simultaneous manner or in the reverse order, depending on the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the method of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, or by hardware, though in many cases the former is the preferred implementation. Based on such understanding, the technical solutions of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a computer software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk), including several instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive. In light of the present application, those of ordinary skill in the art may make many other forms without departing from the spirit of the present application and the scope of protection of the claims, all of which fall within the protection of the present application.

Claims (32)

1. A positioning model training method, comprising:
the method comprises the steps that first equipment obtains data samples, wherein the data samples comprise data samples with position labels and data samples without position labels;
the first equipment performs semi-supervised learning based on the data sample to obtain a positioning model;
the input of the positioning model comprises channel state data of equipment to be positioned, and the output of the positioning model comprises position prediction information and/or position prediction auxiliary information of the equipment to be positioned.
2. The positioning model training method according to claim 1, wherein the first device performs semi-supervised learning based on the data samples to obtain a positioning model, comprising:
the first device executes an iterative training process based on the data sample to obtain an output parameter after iterative training, wherein the iterative training process comprises: at least one first training based on the data samples without the position tags, and at least one second training based on the data samples with the position tags; the first training is a training process of a first model, wherein the first model is a ternary network and comprises three branches with the same structure;
The first device determines the positioning model based on the iteratively trained output parameters.
3. The positioning model training method according to claim 2, wherein the first device performs an iterative training process based on the data samples, to obtain iteratively trained output parameters, comprising:
the first device executes an iterative training process based on the data sample until the first device determines that a first condition is met, ending the iterative training to obtain output parameters after the iterative training;
wherein the first device determines that a first condition is satisfied, comprising at least one of:
the first device determines that the total number of first training in the iterative training process reaches a first training number threshold;
the first device determines that the total number of second trainings in the iterative training process reaches a second training number threshold;
the first device determines that the total number of first trainings and second trainings in the iterative training process reaches a threshold on the total number of trainings; or
the first device determines that the position prediction information obtained after the last training in the iterative training process meets the position prediction information accuracy requirement and/or that the position prediction auxiliary information meets the position prediction auxiliary information accuracy requirement, wherein the last training is the first training or the second training.
4. A positioning model training method as claimed in claim 2 or 3, wherein the location tag-free data samples comprise a first channel state data sample of a second device in a first location, a second channel state data sample of the second device in a second location, and a third channel state data sample of the second device in a third location;
the first channel state data sample is an input of a first branch in the first training in the three branches with the same structure, and an output of the first branch comprises a mapping of the first channel state data sample in a low-dimensional manifold space of the first channel state data sample;
the second channel state data sample is an input of a second branch in the first training in the three branches with the same structure, and an output of the second branch comprises a mapping of the second channel state data sample in a low-dimensional manifold space of the second channel state data sample;
the third channel state data sample is an input of a third branch in the first training in the three branches with the same structure, and an output of the third branch comprises a mapping of the third channel state data sample in a low-dimensional manifold space of the third channel state data sample;
the distance between the first position and the third position is less than or equal to a first threshold; the distance between the first position and the second position is greater than or equal to a second threshold; the position information of the first position is unknown, and the first position is determined randomly or based on an indication;
the second training is a training process of a second model, the second model is a unary network, and the structure of the unary network is the same as that of any branch in the first model; the input of the unary network is the data sample with the position tag, and the output of the unary network is a mapping of the data sample with the position tag in a low-dimensional manifold space of the data sample with the position tag.
5. The positioning model training method of claim 4, wherein the third training in the iterative training process comprises:
in the case that the third training is not the first training in the iterative training process, the first device determines initial parameters in the third training based on parameters obtained from the end of the previous training of the third training;
in the case that the third training is the first training in the iterative training process, the first device determines initial parameters in the third training based on a protocol predefined or preconfigured or preset initial parameter configuration mode;
Wherein the third training is any one training in the iterative training process.
6. The positioning model training method according to claim 5, wherein, in the case where the previous training of the third training is the first training, the parameter obtained by the end of the previous training is the parameter obtained by any one of the three branches of the same structure at the end of the previous training.
7. The positioning model training method according to claim 5, wherein in the case where the previous training of the third training is the second training, the parameter obtained by the end of the previous training is the parameter obtained by the unary network at the end of the previous training.
8. The positioning model training method according to any one of claims 5-7, wherein, in the case where the third training is the first training, the determining initial parameters in the third training based on parameters obtained at the end of a previous training of the third training includes:
the first device uses the parameters obtained by ending the previous training of the third training as initial parameters of each of three branches of the same structure of the ternary network in the third training.
9. The positioning model training method according to any one of claims 5-7, wherein, in the case where the third training is the second training, the determining initial parameters in the third training based on parameters obtained at the end of a previous training of the third training includes:
the first device uses the parameters obtained at the end of the previous training of the third training as the initial parameters of the unary network in the third training.
10. The positioning model training method of claim 4, wherein the determining the positioning model based on the iteratively trained output parameters comprises at least one of:
in the case that the last training of the iterative training is the first training, the first device takes the parameters obtained by any one of the three branches of the same structure at the end of the last training as the parameters of the positioning model; or
in the case that the last training of the iterative training is the second training, the first device takes the parameters obtained by the unary network at the end of the last training as the parameters of the positioning model.
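Claims 5-10 describe how parameters are handed off between the alternating trainings: because the unary network and each ternary branch share one structure, the weights produced by one training seed the next, and the final training's weights become the positioning model. A hedged sketch of that handoff; the parameter shapes and helper names are illustrative assumptions.

```python
import numpy as np

def end_of_training_params(seed):
    # Stand-in for the parameters produced by one training round.
    rng = np.random.default_rng(seed)
    return {"w1": rng.standard_normal((8, 4)), "w2": rng.standard_normal((4, 2))}

def init_ternary_from(params):
    # Claim 8: the same parameters initialise *each* of the three branches.
    return [dict(params) for _ in range(3)]

def init_unary_from(params):
    # Claim 9: the parameters initialise the single unary network.
    return dict(params)

prev = end_of_training_params(seed=0)   # e.g. end of a second training
branches = init_ternary_from(prev)      # seeding the next first training
assert all(np.allclose(b["w1"], prev["w1"]) for b in branches)
unary = init_unary_from(branches[0])    # claim 6: any branch's parameters
print(np.allclose(unary["w2"], prev["w2"]))  # True
```

Claim 10 then simply reads the final parameters out of whichever network ran last, ternary branch or unary.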
11. The positioning model training method according to any one of claims 2-10, wherein the loss function of the ternary network is used to increase the distance D1 between the output of the first branch and the output of the second branch, to decrease the distance D2 between the output of the first branch and the output of the third branch, and to ensure that D1 − D2 is smaller than M, where M is preset, predefined by a protocol, or determined based on an indication.
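A common realisation of such a loss is the hinge (triplet-style) form below, which pushes D1 up and pulls D2 down, with a margin M clamping the contribution. This is one plausible reading of the claim for illustration only; the patent's exact formula, and its exact use of the bound on D1 − D2, may differ.

```python
import numpy as np

def ternary_loss(out1, out2, out3, margin):
    """Hinge-style loss over the three branch outputs.
    d1 = distance(first branch, second branch) -- to be increased;
    d2 = distance(first branch, third branch)  -- to be decreased.
    The loss is zero once d1 exceeds d2 by at least `margin`."""
    d1 = np.linalg.norm(np.asarray(out1) - np.asarray(out2))
    d2 = np.linalg.norm(np.asarray(out1) - np.asarray(out3))
    return max(0.0, d2 - d1 + margin)

anchor = np.array([0.0, 0.0])
far = np.array([5.0, 0.0])    # sample from a distant position
near = np.array([0.1, 0.0])   # sample from a nearby position
print(ternary_loss(anchor, far, near, margin=1.0))  # 0.0 (well separated)
print(ternary_loss(anchor, near, far, margin=1.0))  # positive (violation)
```

Zero loss in the first call reflects embeddings already ordered as the claim requires: far positions map far apart, near positions map close together.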
12. The positioning model training method according to any one of claims 4-10, wherein the loss function of the unary network is used to reduce the distance between the mapping of the position-tagged data sample in the low-dimensional manifold space and the position label.
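For the supervised stage, the unary network's loss penalises the gap between a sample's low-dimensional mapping and its position label. A minimal sketch using squared Euclidean distance; the claim does not fix the distance metric, so this choice is an assumption.

```python
import numpy as np

def unary_loss(mapped, position_label):
    """Distance between the embedding of a position-tagged sample
    and its ground-truth position (here: squared Euclidean)."""
    mapped = np.asarray(mapped, dtype=float)
    position_label = np.asarray(position_label, dtype=float)
    return float(np.sum((mapped - position_label) ** 2))

print(unary_loss([1.0, 2.0], [1.0, 2.0]))  # 0.0 (perfect prediction)
print(unary_loss([0.0, 0.0], [3.0, 4.0]))  # 25.0
```

Driving this loss down is what anchors the otherwise label-free manifold learned by the ternary network to actual physical coordinates.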
13. The positioning model training method of any of claims 1-12, wherein the channel state data samples comprise at least one of:
channel impulse response (CIR);
parameters of the channel;
wherein the parameters of the channel include at least one of:
first-path delay, first-path power, first-path phase, first-path angle, delays of the N strongest paths, powers of the N strongest paths, phases of the N strongest paths, angles of the N strongest paths, time of arrival (TOA), time difference of arrival (TDOA), or reference signal received power (RSRP).
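Several of the listed channel parameters can be read directly off a channel impulse response. A hedged NumPy sketch extracting the first-path delay/power and the N strongest paths; the noise floor, the use of tap index as delay, and the toy real-valued CIR are illustrative assumptions.

```python
import numpy as np

def cir_features(cir, n_strongest=3, noise_floor=1e-3):
    """Extract simple path features from a CIR vector.
    Tap index stands in for delay; |tap|^2 for power."""
    power = np.abs(np.asarray(cir)) ** 2
    valid = np.flatnonzero(power > noise_floor)   # taps above the noise floor
    first = int(valid[0])                         # first-path delay (tap index)
    strongest = np.argsort(power)[::-1][:n_strongest]
    return {
        "first_path_delay": first,
        "first_path_power": float(power[first]),
        "strongest_delays": sorted(int(i) for i in strongest),
        "strongest_powers": [float(power[i]) for i in sorted(strongest)],
    }

cir = np.array([0.0, 0.0, 0.8, 0.3, 1.0, 0.05, 0.0])  # toy CIR
f = cir_features(cir)
print(f["first_path_delay"])   # 2
print(f["strongest_delays"])   # [2, 3, 4]
```

Either the raw CIR or such derived parameters can serve as the channel state data samples fed to the model.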
14. A positioning method, comprising:
a first device inputs channel state data of a target device to be positioned into a positioning model, and obtains position prediction information and/or position prediction auxiliary information of the target device output by the positioning model;
wherein the positioning model is obtained based on semi-supervised learning.
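At inference time the trained model is simply applied to fresh channel state data. A minimal sketch, assuming for illustration that the trained model reduces to a stored weight matrix; the class name, weight shape, and toy inputs are assumptions, not the patent's implementation.

```python
import numpy as np

class PositioningModel:
    """Stand-in for the trained positioning model: maps a
    channel-state feature vector to a predicted 2-D position."""
    def __init__(self, weights):
        self.weights = weights  # shape (feature_dim, 2), from training

    def predict(self, channel_state):
        return np.asarray(channel_state) @ self.weights

w = np.zeros((4, 2)); w[0, 0] = 1.0; w[1, 1] = 1.0  # toy trained weights
model = PositioningModel(w)
pos = model.predict([3.0, 4.0, 0.5, 0.5])
print(pos.tolist())  # [3.0, 4.0]
```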
15. A positioning model training device, comprising:
a data sample acquisition module, configured to acquire data samples, wherein the data samples comprise data samples with position labels and data samples without position labels;
the positioning model acquisition module is used for performing semi-supervised learning based on the data samples to acquire a positioning model;
the input of the positioning model comprises channel state data of equipment to be positioned, and the output of the positioning model comprises position prediction information and/or position prediction auxiliary information of the equipment to be positioned.
16. The positioning model training device of claim 15, wherein the positioning model acquisition module is configured to:
performing an iterative training process based on the data samples to obtain output parameters after the iterative training, wherein the iterative training process comprises: at least one first training based on the data samples without the position tags, and at least one second training based on the data samples with the position tags; the first training is a training process of a first model, wherein the first model is a ternary network and comprises three branches with the same structure;
and determining the positioning model based on the output parameters after the iterative training.
17. The positioning model training device of claim 16, wherein the positioning model acquisition module is configured to:
performing an iterative training process based on the data samples until it is determined that a first condition is met, and ending the iterative training to obtain the output parameters after the iterative training;
wherein the determining meets a first condition comprising at least one of:
determining that the total number of first trainings in the iterative training process reaches a first training-number threshold;
determining that the total number of second trainings in the iterative training process reaches a second training-number threshold;
determining that the total number of first trainings and second trainings in the iterative training process reaches a training total-number threshold; or
determining that the position prediction information obtained after the last training in the iterative training process meets a position prediction information accuracy requirement and/or that the position prediction auxiliary information meets a position prediction auxiliary information accuracy requirement, wherein the last training is the first training or the second training.
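The first-condition checks above can be sketched as a simple stopping rule over the alternating training loop. All thresholds and the strict first/second alternation below are illustrative assumptions; the claim allows any mix of first and second trainings.

```python
def first_condition_met(n_first, n_second, accuracy_ok,
                        max_first=100, max_second=100, max_total=150):
    """True once any of the claim's stopping criteria holds."""
    return (n_first >= max_first          # first-training count threshold
            or n_second >= max_second     # second-training count threshold
            or n_first + n_second >= max_total  # total-count threshold
            or accuracy_ok)               # accuracy requirement met

# Alternate first (untagged) and second (tagged) trainings until done.
n_first = n_second = 0
while not first_condition_met(n_first, n_second, accuracy_ok=False):
    if (n_first + n_second) % 2 == 0:
        n_first += 1    # one first training on untagged samples
    else:
        n_second += 1   # one second training on tagged samples
print(n_first, n_second)  # 75 75
```

Here the loop terminates on the total-count threshold; in practice the accuracy check would usually fire first once the model converges.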
18. The positioning model training apparatus according to claim 16 or 17, wherein the data samples without position labels comprise a first channel state data sample of a second device at a first position, a second channel state data sample of the second device at a second position, and a third channel state data sample of the second device at a third position;
The first channel state data sample is an input of a first branch in the first training in the three branches with the same structure, and an output of the first branch comprises a mapping of the first channel state data sample in a low-dimensional manifold space of the first channel state data sample;
the second channel state data sample is an input of a second branch in the first training in the three branches with the same structure, and an output of the second branch comprises a mapping of the second channel state data sample in a low-dimensional manifold space of the second channel state data sample;
the third channel state data sample is an input of a third branch in the first training in the three branches with the same structure, and an output of the third branch comprises a mapping of the third channel state data sample in a low-dimensional manifold space of the third channel state data sample;
the distance between the first position and the third position is less than or equal to a first threshold; a distance between the first location and the second location is greater than or equal to a second threshold, location information of the first location is unknown, and the first location is determined randomly or based on an indication;
the second training is a training process of a second model, the second model being a unary network whose structure is the same as that of any branch of the first model; the input of the unary network is the position-tagged data sample, and the output of the unary network is a mapping of the position-tagged data sample into its low-dimensional manifold space.
19. The positioning model training device of claim 18, wherein the positioning model acquisition module is configured to:
determining the initial parameters of a third training based on the parameters obtained at the end of the previous training of the third training, in the case that the third training is not the first training in the iterative training process;
determining the initial parameters of the third training based on an initial parameter configuration mode that is predefined by a protocol, preconfigured, or preset, in the case that the third training is the first training in the iterative training process;
wherein the third training is any one training in the iterative training process.
20. The positioning model training device according to claim 19, wherein, in the case where the previous training of the third training is the first training, the parameters obtained at the end of the previous training are the parameters obtained by any one of the three branches of the same structure at the end of the previous training.
21. The positioning model training device according to claim 19, wherein, in the case where the previous training of the third training is the second training, the parameters obtained at the end of the previous training are the parameters obtained by the unary network at the end of the previous training.
22. Positioning model training apparatus according to any of the claims 19-21, characterized in that in case the third training is the first training, the positioning model acquisition module is adapted to:
and taking the parameters obtained at the end of the previous training of the third training as the initial parameters of each of the three branches of the same structure of the ternary network in the third training.
23. Positioning model training apparatus according to any of the claims 19-21, characterized in that in case the third training is the second training, the positioning model acquisition module is adapted to:
and taking the parameters obtained at the end of the previous training of the third training as the initial parameters of the unary network in the third training.
24. The positioning model training device of claim 18, wherein the positioning model acquisition module is configured to at least one of:
in the case that the last training of the iterative training is the first training, taking the parameters obtained by any one of the three branches of the same structure at the end of the last training as the parameters of the positioning model; or
in the case that the last training of the iterative training is the second training, taking the parameters obtained by the unary network at the end of the last training as the parameters of the positioning model.
25. The positioning model training device according to any one of claims 16-24, wherein the loss function of the ternary network is used to increase the distance D1 between the output of the first branch and the output of the second branch, to decrease the distance D2 between the output of the first branch and the output of the third branch, and to ensure that D1 − D2 is smaller than M, where M is preset, predefined by a protocol, or determined based on an indication.
26. The positioning model training apparatus according to any one of claims 18-24, wherein the loss function of the unary network is used to reduce the distance between the mapping of the position-tagged data sample in the low-dimensional manifold space and the position label.
27. The positioning model training apparatus according to any one of claims 15-26, wherein the channel state data samples comprise at least one of the following:
channel impulse response (CIR);
parameters of the channel;
wherein the parameters of the channel include at least one of:
first-path delay, first-path power, first-path phase, first-path angle, delays of the N strongest paths, powers of the N strongest paths, phases of the N strongest paths, angles of the N strongest paths, time of arrival (TOA), time difference of arrival (TDOA), or reference signal received power (RSRP).
28. A positioning device, comprising:
the position information acquisition module is used for inputting channel state data of target equipment to be positioned into the positioning model and acquiring position prediction information and/or position prediction auxiliary information of the target equipment, which are output from the positioning model;
wherein the positioning model is obtained based on semi-supervised learning.
29. A first device, comprising a processor and a memory storing a program or instructions executable on the processor, wherein the program or instructions, when executed by the processor, implement the positioning model training method according to any one of claims 1 to 13.
30. A first device, comprising a processor and a memory storing a program or instructions executable on the processor, wherein the program or instructions, when executed by the processor, implement the positioning method according to claim 14.
31. A readable storage medium, wherein a program or instructions are stored on the readable storage medium, and the program or instructions, when executed by a processor, implement the positioning model training method according to any one of claims 1 to 13.
32. A readable storage medium, wherein a program or instructions are stored on the readable storage medium, and the program or instructions, when executed by a processor, implement the positioning method according to claim 14.
CN202210815550.3A 2022-07-08 2022-07-08 Positioning model training method and device, positioning method and device Pending CN117440501A (en)

Publications (1)

Publication Number Publication Date
CN117440501A (en) 2024-01-23



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination