CN118042399A - Model supervision method, terminal and network side equipment

Model supervision method, terminal and network side equipment

Info

Publication number
CN118042399A
Authority
CN
China
Prior art keywords
model
terminal
motion state
state information
supervision
Legal status
Pending
Application number
CN202211426144.4A
Other languages
Chinese (zh)
Inventor
贾承璐
孙鹏
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202211426144.4A
Publication of CN118042399A


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/02 Services making use of location information
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 24/00 Supervisory, monitoring or testing arrangements
    • H04W 24/02 Arrangements for optimising operational condition
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 24/00 Supervisory, monitoring or testing arrangements
    • H04W 24/10 Scheduling measurement reports; Arrangements for measurement reports

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The application discloses a model supervision method, a terminal and network side equipment, belonging to the technical field of communication. The model supervision method of the embodiment of the application comprises the following steps: the terminal acquires motion state information; the terminal performs a transmission operation, the transmission operation including one of: sending a supervision result to network side equipment, wherein the supervision result is obtained by supervising an artificial intelligence (AI) model based on the motion state information; and sending the motion state information to network side equipment, wherein the motion state information is used for supervising an AI model.

Description

Model supervision method, terminal and network side equipment
Technical Field
The application belongs to the technical field of communication, and particularly relates to a model supervision method, a terminal and network side equipment.
Background
In some communication systems, communication-related information is obtained through an artificial intelligence (Artificial Intelligence, AI) model; for example, position information is acquired through an AI model. In some related technologies, the AI model is mainly supervised by pre-configured label information. Since the pre-configured label information is often fixed, it is difficult to adapt to a terminal whose position changes, and the supervision effect on the AI model is therefore poor.
Disclosure of Invention
The embodiment of the application provides a model supervision method, a terminal and network side equipment, which can solve the problem of relatively poor supervision effect on an AI model.
In a first aspect, a method for supervising a model is provided, which is characterized by comprising:
The terminal acquires motion state information;
The terminal performs a transmission operation, the transmission operation including one of:
Sending a supervision result to network side equipment, wherein the supervision result is obtained by supervising an artificial intelligence (AI) model based on the motion state information;
and sending the motion state information to network side equipment, wherein the motion state information is used for supervising an AI model.
In a second aspect, a method for model supervision is provided, including:
The network side equipment executes a receiving operation, wherein the receiving operation comprises the following steps:
Receiving a supervision result sent by a terminal, wherein the supervision result is obtained by supervising an artificial intelligence (AI) model based on the motion state information;
and receiving the motion state information sent by the terminal, wherein the motion state information is used for supervising the AI model.
In a third aspect, a model supervision apparatus is provided, comprising:
The first acquisition module is used for acquiring motion state information;
the execution module is used for executing a sending operation, and the sending operation comprises the following steps:
Sending a supervision result to network side equipment, wherein the supervision result is obtained by supervising an artificial intelligence (AI) model based on the motion state information;
and sending the motion state information to network side equipment, wherein the motion state information is used for supervising an AI model.
In a fourth aspect, there is provided a model supervision apparatus comprising:
The execution module is used for executing a receiving operation, and the receiving operation comprises the following steps:
Receiving a supervision result sent by a terminal, wherein the supervision result is obtained by supervising an artificial intelligence (AI) model based on the motion state information;
and receiving the motion state information sent by the terminal, wherein the motion state information is used for supervising the AI model.
In a fifth aspect, a terminal is provided, which includes a processor and a memory, where the memory stores a program or instructions executable on the processor, and the program or instructions implement steps of a terminal-side model supervision method as provided in an embodiment of the present application when executed by the processor.
In a sixth aspect, a terminal is provided, including a processor and a communication interface, where the processor is configured to obtain motion state information; the communication interface is configured to perform a transmission operation, where the transmission operation includes one of: sending a supervision result to network side equipment, wherein the supervision result is obtained by supervising an artificial intelligence (AI) model based on the motion state information; and sending the motion state information to network side equipment, wherein the motion state information is used for supervising an AI model.
In a seventh aspect, a network side device is provided, where the network side device includes a processor and a memory, where the memory stores a program or an instruction that can be executed by the processor, where the program or the instruction implements the steps of the model supervision method of the network side provided by the embodiment of the present application.
In an eighth aspect, a network side device is provided, including a processor and a communication interface, where the communication interface is configured to perform a receiving operation, and the receiving operation includes one of: receiving a supervision result sent by a terminal, wherein the supervision result is obtained by supervising an artificial intelligence (AI) model based on the motion state information; and receiving the motion state information sent by the terminal, wherein the motion state information is used for supervising the AI model.
In a ninth aspect, there is provided a model supervision system comprising: the terminal can be used for executing the steps of the model supervision method of the terminal side, and the network side device can be used for executing the steps of the model supervision method of the network side.
In a tenth aspect, a readable storage medium is provided, where a program or an instruction is stored, and the program or the instruction, when executed by a processor, implements the steps of the terminal-side model supervision method provided by the embodiment of the present application, or implements the steps of the network-side model supervision method provided by the embodiment of the present application.
In an eleventh aspect, a chip is provided, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions, implement a model supervision method on a terminal side provided by an embodiment of the present application, or implement a model supervision method on a network side provided by an embodiment of the present application.
In a twelfth aspect, a computer program/program product is provided, which is stored in a storage medium, and which is executed by at least one processor to implement the steps of the model supervision method at the terminal side as provided by the embodiment of the present application, or which is executed by at least one processor to implement the steps of the model supervision method at the network side as provided by the embodiment of the present application.
In the embodiment of the application, a terminal acquires motion state information; the terminal performs a transmission operation, the transmission operation including one of: sending a supervision result to network side equipment, wherein the supervision result is obtained by supervising an artificial intelligence (AI) model based on the motion state information; and sending the motion state information to network side equipment, wherein the motion state information is used for supervising an AI model. In this way, the motion state information is used for supervising the AI model, that is, supervising the AI model based on the motion state information is supported; compared with supervising the AI model by using pre-configured label information, the embodiment of the application can improve the supervision effect on the AI model.
Drawings
Fig. 1 is a block diagram of a wireless communication system to which embodiments of the present application are applicable;
FIG. 2 is a simplified schematic diagram of an AI model provided by an embodiment of the application;
FIG. 3 is a flow chart of a method of model supervision provided by an embodiment of the present application;
FIG. 4 is a flow chart of another method of model supervision provided by an embodiment of the present application;
FIG. 5 is a block diagram of a model supervision apparatus according to an embodiment of the present application;
FIG. 6 is a block diagram of another model supervision apparatus according to an embodiment of the application;
Fig. 7 is a block diagram of a communication device according to an embodiment of the present application;
fig. 8 is a block diagram of another terminal according to an embodiment of the present application;
fig. 9 is a block diagram of another network side device according to an embodiment of the present application.
Detailed Description
The technical solutions of the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which are derived by a person skilled in the art based on the embodiments of the application, fall within the scope of protection of the application.
The terms "first", "second" and the like in the description and in the claims are used to distinguish between similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that the terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can be implemented in orders other than those illustrated or described herein. In addition, the objects distinguished by "first" and "second" are usually of one type, and the number of objects is not limited; for example, the first object may be one or more. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
It should be noted that the techniques described in the embodiments of the present application are not limited to long term evolution (Long Term Evolution, LTE)/LTE evolution (LTE-Advanced, LTE-A) systems, but may also be used in other wireless communication systems, such as code division multiple access (Code Division Multiple Access, CDMA), time division multiple access (Time Division Multiple Access, TDMA), frequency division multiple access (Frequency Division Multiple Access, FDMA), orthogonal frequency division multiple access (Orthogonal Frequency Division Multiple Access, OFDMA), single carrier frequency division multiple access (Single-carrier Frequency Division Multiple Access, SC-FDMA), and other systems. The terms "system" and "network" in embodiments of the application are often used interchangeably, and the techniques described may be used for both the above-mentioned systems and radio technologies, as well as other systems and radio technologies. The following description describes a New Radio (NR) system for exemplary purposes and NR terminology is used in much of the following description, but these techniques may also be applied to applications other than NR system applications, such as 6th Generation (6G) communication systems.
Fig. 1 shows a block diagram of a wireless communication system to which an embodiment of the present application is applicable. The wireless communication system includes a terminal 11 and a network side device 12. The terminal 11 may be a mobile phone, a tablet computer (Tablet Personal Computer), a laptop computer (Laptop Computer, also called a notebook computer), a personal digital assistant (Personal Digital Assistant, PDA), a palmtop computer, a netbook, an ultra-mobile personal computer (Ultra-Mobile Personal Computer, UMPC), a mobile internet device (Mobile Internet Device, MID), an augmented reality (Augmented Reality, AR)/virtual reality (Virtual Reality, VR) device, a robot, a wearable device (Wearable Device), a vehicle-mounted device (VUE), a pedestrian terminal (PUE), a smart home device (a home device with a wireless communication function, such as a refrigerator, a television, a washing machine or furniture), a game machine, a personal computer (Personal Computer, PC), a teller machine, a self-service machine, or another terminal-side device, where the wearable device includes: a smart watch, a smart bracelet, smart earphones, smart glasses, smart jewelry (a smart bangle, a smart ring, a smart necklace, a smart anklet, a smart ankle chain, etc.), a smart wristband, smart clothing, and the like. It should be noted that the specific type of the terminal 11 is not limited in the embodiment of the present application. The network side device 12 may include an access network device or a core network device, where the access network device may also be referred to as a radio access network device, a radio access network (Radio Access Network, RAN), a radio access network function, or a radio access network element. The access network device may include a base station, a WLAN access point, a WiFi node, or the like, where the base station may be referred to as a node B, an evolved node B (eNB), an access point, a base transceiver station (Base Transceiver Station, BTS), a radio base station, a radio transceiver, a basic service set (Basic Service Set, BSS), an extended service set (Extended Service Set, ESS), a home node B, a home evolved node B, a transmission and reception point (Transmitting Receiving Point, TRP), or by some other suitable term in the art; as long as the same technical effect is achieved, the base station is not limited to a specific technical vocabulary. It should be noted that, in the embodiment of the present application, only the base station in the NR system is described by way of example, and the specific type of the base station is not limited.
The core network device may include, but is not limited to, at least one of: a core network node, a core network function, a mobility management entity (Mobility Management Entity, MME), an access and mobility management function (Access and Mobility Management Function, AMF), a session management function (Session Management Function, SMF), a user plane function (User Plane Function, UPF), a policy control function (Policy Control Function, PCF), a policy and charging rules function (Policy and Charging Rules Function, PCRF), an edge application server discovery function (Edge Application Server Discovery Function, EASDF), unified data management (Unified Data Management, UDM), a unified data repository (Unified Data Repository, UDR), a home subscriber server (Home Subscriber Server, HSS), centralized network configuration (Centralized Network Configuration, CNC), a network repository function (Network Repository Function, NRF), a network exposure function (Network Exposure Function, NEF), a local NEF (Local NEF, or L-NEF), a binding support function (Binding Support Function, BSF), an application function (Application Function, AF), and the like. It should be noted that, in the embodiment of the present application, only the core network device in the NR system is described as an example, and the specific type of the core network device is not limited.
1. Artificial intelligence.
Artificial intelligence is currently in widespread use in various fields. Integrating artificial intelligence into wireless communication networks can significantly improve technical indicators such as throughput, latency and user capacity, and is an important task for future wireless communication networks. There are various implementations of AI modules, such as neural networks, decision trees, support vector machines, Bayesian classifiers, and the like. The present application is described by taking a neural network as an example, but the specific type of AI module is not limited.
The neural network is composed of neurons, and a schematic diagram of a neuron is shown in fig. 2. Here z = a1w1 + a2w2 + ··· + aKwK + b, where a1, a2, …, aK are the inputs, w1, …, wK are the weights (multiplicative coefficients), b is the bias (additive coefficient), and σ(·) is the activation function. Common activation functions include the Sigmoid function, the hyperbolic tangent (tanh), the rectified linear unit (Rectified Linear Unit, ReLU), and the like.
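For illustration only (this sketch is not part of the application), the neuron computation described above can be written in Python as follows; the input values, weights and bias are hypothetical:

```python
import math

def neuron(a, w, b):
    """Single neuron: z = a1*w1 + ... + aK*wK + b, output = sigma(z)."""
    z = sum(ai * wi for ai, wi in zip(a, w)) + b
    return 1.0 / (1.0 + math.exp(-z))  # Sigmoid chosen here as the example activation

# Hypothetical inputs, weights and bias for illustration
a = [0.5, -1.2, 0.3]
w = [0.8, 0.1, -0.4]
b = 0.05
print(neuron(a, w, b))
```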
The parameters of the neural network are optimized by an optimization algorithm. An optimization algorithm is a class of algorithms that help minimize or maximize an objective function (sometimes called a loss function). The objective function is usually a mathematical combination of the model parameters and the data. For example, given data X and its corresponding label Y, a neural network model f() is constructed; with this model, the predicted output f(X) can be obtained from the input X, and the difference (f(X) - Y) between the predicted value and the true value, which is the loss function, can be calculated. The aim is to find suitable values of W and b that minimize the value of the above loss function; the smaller the loss value, the closer the model is to reality.
The most common optimization algorithms are basically based on the error back propagation (error Back Propagation, BP) algorithm. The basic idea of the BP algorithm is that the learning process consists of two stages: forward propagation of the signal and backward propagation of the error. In forward propagation, an input sample is fed in at the input layer, processed layer by layer by the hidden layers, and passed to the output layer. If the actual output of the output layer does not match the desired output, the process shifts to the error back propagation stage. In error back propagation, the output error is passed back through the hidden layers to the input layer in a certain form, and the error is distributed to all units of each layer so as to obtain the error signal of each unit, which is used as the basis for correcting the weight of each unit. The forward propagation of the signal and the back propagation of the error, with the weights of each layer being adjusted, are performed repeatedly. This continual weight adjustment is the learning and training process of the network. The process continues until the error of the network output is reduced to an acceptable level or a preset number of learning iterations is reached.
Common optimization algorithms include gradient descent (Gradient Descent), stochastic gradient descent (Stochastic Gradient Descent, SGD), mini-batch gradient descent (mini-batch Gradient Descent), the momentum method (Momentum), stochastic gradient descent with momentum (also known as Nesterov), adaptive gradient descent (Adaptive Gradient Descent, Adagrad), Adadelta, root mean square propagation (root mean square prop, RMSprop), and adaptive moment estimation (Adaptive Moment Estimation, Adam).
During error back propagation, these optimization algorithms all obtain the error/loss according to the loss function, compute the derivatives/partial derivatives of the current neurons, incorporate factors such as the learning rate and previous gradients/derivatives/partial derivatives to obtain the gradients, and pass the gradients to the previous layer.
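For illustration only, the following Python sketch trains a single sigmoid neuron with stochastic gradient descent, showing the forward propagation, error back propagation and weight update described above; the toy data, learning rate and loop structure are assumptions and are not part of the application:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy data: label y = 1 if x1 + x2 > 1 else 0 (assumption for illustration)
data = [([x1, x2], 1.0 if x1 + x2 > 1 else 0.0)
        for x1 in (0.0, 0.4, 0.8, 1.2) for x2 in (0.0, 0.4, 0.8, 1.2)]

w, b, lr = [0.0, 0.0], 0.0, 0.5
for epoch in range(200):
    random.shuffle(data)
    for x, y in data:
        # Forward propagation
        z = sum(wi * xi for wi, xi in zip(w, x)) + b
        f = sigmoid(z)
        # Loss: squared error L = (f - y)^2; back-propagate its gradient
        dL_df = 2.0 * (f - y)
        df_dz = f * (1.0 - f)          # derivative of the sigmoid
        grad_z = dL_df * df_dz
        # Gradient descent update of weights and bias
        w = [wi - lr * grad_z * xi for wi, xi in zip(w, x)]
        b -= lr * grad_z
print("trained weights:", w, "bias:", b)
```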
It should be noted that the foregoing is only a simple description of the AI technology, and the types of the AI models, the optimization algorithm, and the like are not limited in the embodiment of the present application.
The model supervision method, the terminal and the network side device provided by the embodiment of the application are described in detail below through some embodiments and application scenes thereof with reference to the accompanying drawings.
Referring to fig. 3, fig. 3 is a flowchart of a model supervision method according to an embodiment of the application, as shown in fig. 3, including the following steps:
step 301, the terminal acquires motion state information.
Acquiring the motion state information may mean that the terminal acquires its own motion state information, and motion state information at one or more moments may be acquired.
In an embodiment of the present application, the motion state information includes, but is not limited to, at least one of the following:
Motion speed, motion direction, acceleration, motion displacement.
The above steps may be triggered by the terminal or by the network side device.
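Purely as an illustration (the class and field names below are hypothetical and not defined by the application), motion state information acquired at one or more moments could be represented as follows:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MotionStateSample:
    """Motion state information of the terminal at one moment (illustrative only)."""
    timestamp: float                       # moment the sample refers to, e.g. in seconds
    speed: Optional[float] = None          # motion speed, m/s
    direction: Optional[float] = None      # motion direction, e.g. heading in degrees
    acceleration: Optional[float] = None   # acceleration, m/s^2
    displacement: Optional[float] = None   # motion displacement since the previous sample, metres

# A report may carry samples for one or more moments
report = [MotionStateSample(timestamp=0.0, speed=1.4, direction=90.0),
          MotionStateSample(timestamp=1.0, speed=1.5, direction=92.0)]
```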
Step 302, the terminal executes a sending operation, where the sending operation includes one of the following:
sending a supervision result to network side equipment, wherein the supervision result is obtained by supervising an AI model based on the motion state information;
and sending the motion state information to network side equipment, wherein the motion state information is used for supervising an AI model.
The AI model may be an AI model deployed in the terminal, or may be an AI model deployed on the network-side device.
In some embodiments, the AI model may be an AI model for predicting or outputting location information, such as location information of a terminal, or location-related feature information, such as time of arrival (TOA). In addition, in embodiments of the present application, AI models include, but are not limited to: convolutional neural networks, fully-connected neural networks, transformer (Transformer) networks, and the like.
The supervision result sent to the network side device may be a supervision result obtained by the terminal supervising the AI model based on the motion state information.
The sending of the motion state information to the network side device may enable the network side device to monitor the AI model after receiving the motion state information.
In the embodiment of the application, the motion state information is used for supervising the AI model, that is, supervising the AI model based on the motion state information is supported. Because the motion state information can reflect the position change of the terminal, compared with supervising the AI model by using pre-configured label information, the embodiment of the application can improve the supervision effect on the AI model. In addition, the AI model can be supervised even when a real position label is difficult to acquire, which further improves the supervision effect on the AI model.
It should also be noted that, because the embodiment of the present application supports supervision of the AI model based on the motion state information, it is beneficial to improve the reliability of the AI model. In addition, in the case where the AI model is used to predict or output the positioning information, the reliability of the positioning information can be improved.
As an alternative embodiment, the method further comprises:
The terminal receives first indication information sent by network side equipment, wherein the first indication information is used for indicating the terminal to acquire the motion state information.
In this embodiment, the terminal may obtain the motion state information based on the first indication information, so that the terminal can avoid obtaining motion state information when the AI model does not need to be supervised, thereby saving power consumption of the terminal and reducing the overhead of data storage.
In some embodiments, the terminal acquiring the motion state information may also be triggered by the terminal.
As an optional implementation manner, the supervision result is used for indicating at least one of the following:
Model validity information;
confidence of the model supervision result;
Model supervision indexes;
the difference between the model supervision index and a preset threshold value.
The model validity information may indicate that the AI model is valid or invalid.
The confidence level of the supervision result of the model can indicate the confidence level of the supervision result acquired by the terminal.
In some embodiments, the confidence level may be determined according to the reliability of the acquired result of the terminal sensor, and if the reliability of the acquired motion state information of the sensor is higher, the confidence level of the model supervision result is higher.
The model supervision index indicates an index adopted by the terminal to supervise the AI model.
In some embodiments, the model supervision index may be as follows:
|√((x1 - x2)² + (y1 - y2)²) - S|, where (x1, y1) denotes the position coordinates at one moment, (x2, y2) denotes the position coordinates at another moment, and S denotes the movement displacement determined based on the motion state information.
It should be noted that the above model supervision index is only an example, for example: in some embodiments, the model supervision indicator may be a difference between a terminal movement direction represented by position coordinates of two moments and a terminal movement direction determined based on the motion state information, or the model supervision indicator may be a difference between a terminal acceleration represented by position coordinates of a plurality of moments and a terminal acceleration determined based on the motion state information, or the model supervision indicator may be a difference between a terminal motion speed represented by position coordinates of a plurality of moments and a terminal motion speed determined based on the motion state information.
The preset threshold value can be configured by the network side, agreed by the protocol, or determined by the terminal.
The difference between the model supervision index and the preset threshold value may be the following difference:
|√((x1 - x2)² + (y1 - y2)²) - S| - R, where R is the preset threshold value.
It should be noted that the above difference is only an example of the model supervision index, for example: in some embodiments, the difference may be a difference in the direction of movement of the terminal, or a difference in the acceleration of the terminal, or a difference in the velocity of movement of the terminal.
In this embodiment, indicating at least one of the above items enables the supervision result reported by the terminal to the network side device to be more flexible.
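As an illustrative sketch only (the positions, displacement and threshold values are hypothetical), the displacement-based supervision index and its difference from the preset threshold described above could be computed as follows:

```python
import math

def supervision_index(p1, p2, s):
    """Supervision index: |distance between two predicted positions - sensed displacement S|."""
    dist = math.hypot(p1[0] - p2[0], p1[1] - p2[1])
    return abs(dist - s)

# Hypothetical values for illustration
p1, p2 = (10.0, 20.0), (13.0, 24.0)   # positions (x1, y1), (x2, y2) predicted by the AI model
s = 4.6                                # displacement S obtained from the motion state information
r = 1.0                                # preset threshold R
index = supervision_index(p1, p2, s)
print("index:", index, "index - R:", index - r)
```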
As an alternative embodiment, the method further comprises:
And the terminal reports the capability information for acquiring the motion state information to the network side equipment.
The capability information may indicate which specific motion state information the terminal can obtain; for example, it may indicate that the terminal can obtain at least one of:
Motion speed, motion direction, acceleration, motion displacement.
Or the capability information may indicate that the terminal is capable of acquiring motion state information.
Therefore, through the reporting of the capability information, the terminal and the network side device can perform the corresponding interaction only when the terminal capability matches the supervision of the AI model, thereby avoiding a waste of resources.
As an optional implementation manner, the sending operation includes sending a supervision result to the network side device, and the method further includes:
the terminal receives second indication information sent by the network side equipment, wherein the second indication information is used for indicating to supervise the AI model.
In this embodiment, the terminal can supervise the AI model based on the second indication information, so that the terminal supervises the AI model only when the network side needs it, which saves power consumption of the terminal. For example, after the network side sends the positioning result, it sends the second indication information to instruct the terminal to supervise the AI model.
As an alternative embodiment, the supervising result includes:
and supervising results obtained by supervising the AI model based on the motion state information and the positioning results of the AI model.
The positioning result of the AI model may refer to a positioning result predicted or obtained by the AI model.
The positioning result of the AI model may be predicted by the terminal using the AI model, or may be predicted by the network side device based on the AI model and sent to the terminal. For example, the method further comprises at least one of:
the terminal obtains a positioning result through the AI model;
And the terminal receives the positioning result of the AI model sent by the network side equipment.
In this embodiment, since the AI model is supervised based on the motion state information and the positioning result of the AI model, the supervision effect of the AI model can be further improved.
Optionally, the time information corresponding to the motion state information is matched with the time information corresponding to the positioning result.
The time information corresponding to the motion state information and the time information corresponding to the positioning result may be matched, that is, the time stamp corresponding to the motion state information is consistent with the time stamp corresponding to the positioning result.
In some embodiments, the network side device may indicate, by using the first indication information, that time information corresponding to the motion state information matches with time information corresponding to the positioning result, and in addition, the first indication information may further indicate that the terminal stores the motion state information in advance, for example, the terminal extracts and obtains the motion state information; or the terminal determines that the time information corresponding to the motion state information is matched with the time information corresponding to the positioning result by itself.
In this way, the time information corresponding to the motion state information is matched with the time information corresponding to the positioning result, so that the AI model can be better supervised.
Optionally, the terminal obtains motion state information, including:
And the terminal acquires motion state information of future time, wherein the future time comprises time corresponding to the positioning result.
The motion state information of the future time may be motion state information for one or more future time units.
For example: the future time includes one of:
N consecutive time units in the future, wherein N is an integer greater than 1;
Future Mth time unit, M is a positive integer;
P time stamps in the future, wherein P is a positive integer;
A first time period in the future.
Where the time units may be units of seconds, milliseconds, frames, time slots, sub-slots, or symbols, etc.
The aforementioned Mth time unit in the future may be the Mth time unit determined from the current time.
The N, M and P may be indicated by the network side, may be agreed by a protocol, or may be determined automatically by the terminal.
The first time period in the future may be a period from time T1 to time T2.
In this embodiment, the terminal may acquire motion state information of N consecutive time units in the future;
The terminal can acquire the motion state information of the Mth time unit in the future;
the terminal can acquire motion state information of P time stamps in the future;
the terminal may acquire motion state information at the times T1 to T2.
It should be noted that any one of the above, or a combination thereof, may be used alone to supervise the AI model.
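Only as an illustration of how the future-time options listed above might be represented (the type and field names are assumptions, not defined by the application):

```python
from dataclasses import dataclass
from enum import Enum
from typing import List, Optional, Tuple

class FutureTimeType(Enum):
    NEXT_N_TIME_UNITS = 1     # N consecutive time units in the future, N > 1
    MTH_TIME_UNIT = 2         # the Mth time unit in the future
    P_TIMESTAMPS = 3          # P timestamps in the future
    TIME_PERIOD = 4           # a period from time T1 to time T2

@dataclass
class FutureTimeIndication:
    """Indicates for which future times the terminal should acquire motion state information."""
    kind: FutureTimeType
    n: Optional[int] = None                       # used with NEXT_N_TIME_UNITS
    m: Optional[int] = None                       # used with MTH_TIME_UNIT
    timestamps: Optional[List[float]] = None      # used with P_TIMESTAMPS
    period: Optional[Tuple[float, float]] = None  # (T1, T2), used with TIME_PERIOD

# Example: ask the terminal to collect motion state for the next 4 time units
indication = FutureTimeIndication(kind=FutureTimeType.NEXT_N_TIME_UNITS, n=4)
```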
Optionally, the method further comprises:
the terminal receives third indication information sent by the network side equipment, wherein the third indication information is used for indicating the terminal to acquire the motion state information of the future time.
In the embodiment, the terminal can acquire the motion state information of the future time based on the indication of the network side, so that the terminal can be prevented from acquiring the motion state information of excessive time, and the power consumption of the terminal is saved.
It should be noted that, in some embodiments, the terminal may also determine by itself, or the protocol may specify, that the terminal obtains the motion state information of the future time.
Optionally, the positioning result includes time information of the positioning result, and further includes at least one of:
Location information, signal TOA.
The time information of the positioning result may be a specific time corresponding to the positioning result or a time stamp of the positioning result.
The signal TOA may represent position-related information such as the straight-line distance between the network side device and the terminal, or that straight-line distance divided by the speed of light.
The terminal may calculate the position information based on the signal TOA and then supervise the AI model based on the position information and the motion state information.
Optionally, the terminal receiving network side device sends a positioning result of the AI model, including:
and the terminal receives a plurality of positioning results of the AI model sent by the network side device, wherein the positioning results are sent separately or in a combined manner.
Separate transmission may mean transmitting the positioning result of one moment at a time, or transmitting after the positioning results of S moments have been acquired, where S < Z and Z is the total number of positioning results; combined transmission may mean transmitting the positioning results of multiple moments together after they have been acquired, for example, transmitting all Z positioning results together after all of them have been acquired.
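As a rough sketch of the separate versus combined transmission described above (the batching helper and the values are illustrative assumptions):

```python
from typing import Iterable, List

def batches(results: List[dict], s: int) -> Iterable[List[dict]]:
    """Separate transmission: send the positioning results S at a time (S < Z)."""
    for i in range(0, len(results), s):
        yield results[i:i + s]

z_results = [{"t": t, "pos": (t * 1.0, t * 2.0)} for t in range(6)]  # Z = 6 hypothetical results
for batch in batches(z_results, s=2):   # separate transmission, two results per message
    print("send", batch)
print("send", z_results)                 # combined transmission: all Z results together
```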
Optionally, the supervising result includes:
And supervising results obtained by supervising the AI model based on the motion state information and the positioning results of the AI model.
The plurality of positioning results of the AI model may be positioning results at a plurality of times.
The AI model can be better and more conveniently supervised based on the motion state information and the positioning results. In addition, since the AI model is supervised based on the motion information and the positioning results of the terminal, other auxiliary information is not needed, so that the privacy of the terminal can be protected, and the configuration of the network side is protected from exposure.
Optionally, the plurality of positioning results include: the first position information of the first time and the second position information of the second time, and the supervision result comprises a supervision result obtained by the following method:
Determining a displacement of the terminal from the first time to the second time based on the motion state information, calculating a distance between the first position information and the second position information, and supervising the AI model based on the displacement and the distance.
Wherein the AI model may be determined to be valid if the difference between the displacement and the distance is less than or equal to a preset threshold.
In the case where the difference between the displacement and the distance is greater than the preset threshold, it may be determined that the AI model fails.
For example, the terminal obtains the movement displacement from time T1 to time T2 as S meters, the position at time T1 indicated by the positioning result is (x1, y1), and the position at time T2 is (x2, y2), in meters. If the following formula is satisfied, the model is judged to be invalid:
|√((x1 - x2)² + (y1 - y2)²) - S| > R
That is, if the difference between the displacement derived from the AI model inference result and the displacement obtained by the terminal sensor is greater than the threshold R, the model is judged to be invalid.
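The above judgement can be sketched as follows (a simplified illustration; deriving the sensed displacement S by integrating hypothetical speed samples is only one possible way to obtain it and is an assumption of this sketch):

```python
import math

def displacement_from_speed(samples, dt):
    """Approximate the displacement S over [T1, T2] by integrating sensed speed (assumption)."""
    return sum(v * dt for v in samples)

def ai_model_valid(pos_t1, pos_t2, s, r):
    """Valid if |distance between predicted positions - sensed displacement S| <= threshold R."""
    dist = math.hypot(pos_t1[0] - pos_t2[0], pos_t1[1] - pos_t2[1])
    return abs(dist - s) <= r

# Hypothetical values for illustration
speed_samples = [1.2, 1.3, 1.1, 1.4]         # m/s, sampled every dt seconds between T1 and T2
s = displacement_from_speed(speed_samples, dt=1.0)
pos_t1, pos_t2 = (0.0, 0.0), (3.0, 4.0)      # positions at T1 and T2 predicted by the AI model
print("model valid:", ai_model_valid(pos_t1, pos_t2, s, r=1.0))
```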
In the above embodiment, the accuracy of supervising the AI model can be improved through the above distance.
Note that supervising the AI model through the above distance is only one implementation. In some embodiments, the AI model may be supervised by the difference between the terminal moving direction represented by the position coordinates at two moments and the terminal moving direction determined based on the motion state information, or by the difference between the terminal acceleration represented by the position coordinates at a plurality of moments and the terminal acceleration determined based on the motion state information, or by the difference between the terminal movement speed represented by the position coordinates at a plurality of moments and the terminal movement speed determined based on the motion state information.
It should be noted that, in the embodiment of the present application, the above-described supervision process may also be performed by the network side device.
In the embodiment of the application, a terminal acquires motion state information; the terminal performs a transmission operation, the transmission operation including one of: sending a supervision result to network side equipment, wherein the supervision result is obtained by supervising an artificial intelligence (AI) model based on the motion state information; and sending the motion state information to network side equipment, wherein the motion state information is used for supervising an AI model. In this way, the motion state information is used for supervising the AI model, that is, supervising the AI model based on the motion state information is supported; compared with supervising the AI model by using pre-configured label information, the embodiment of the application can improve the supervision effect on the AI model.
Referring to fig. 4, fig. 4 is a flowchart of another model supervision method according to an embodiment of the application, as shown in fig. 4, including the following steps:
step 401, the network side device executes a receiving operation, where the receiving operation includes one of the following:
Receiving a supervision result sent by a terminal, wherein the supervision result is obtained by supervising an artificial intelligence (AI) model based on the motion state information;
and receiving the motion state information sent by the terminal, wherein the motion state information is used for supervising the AI model.
Optionally, the method further comprises:
The network side equipment sends first indication information to the terminal, wherein the first indication information is used for indicating the terminal to acquire the motion state information.
Optionally, the supervision result is used to indicate at least one of the following:
Model validity information;
confidence of the model supervision result;
Model supervision indexes;
the difference between the model supervision index and a preset threshold value.
Optionally, the motion state information includes at least one of:
Motion speed, motion direction, acceleration, motion displacement.
Optionally, the method further comprises:
and the network side equipment receives the capability information which is reported by the terminal and used for acquiring the motion state information.
Optionally, the receiving operation includes receiving a supervision result sent by the terminal, and the method further includes:
and the network side equipment sends second indicating information to the terminal, wherein the second indicating information is used for indicating to supervise the AI model.
Optionally, the supervising result includes:
and supervising results obtained by supervising the AI model based on the motion state information and the positioning results of the AI model.
Optionally, the method further comprises:
and the network side equipment sends the positioning result of the AI model to the terminal.
Optionally, the time information corresponding to the motion state information is matched with the time information corresponding to the positioning result.
Optionally, the method further comprises:
The network side equipment sends third indication information to the terminal equipment, wherein the third indication information is used for indicating the terminal to acquire motion state information of future time.
Optionally, the future time includes one of:
N consecutive time units in the future, wherein N is an integer greater than 1;
Future Mth time unit, M is a positive integer;
P time stamps in the future, wherein P is a positive integer;
A first time period in the future.
Optionally, the positioning result includes time information of the positioning result, and further includes at least one of:
location information, signal arrival time TOA.
Optionally, the network side device sends a positioning result of the AI model to the terminal, including:
and the network side equipment sends a plurality of positioning results of the AI model to the terminal, wherein the positioning results are sent separately or in a combined mode.
Optionally, the supervising result includes:
And supervising results obtained by supervising the AI model based on the motion state information and the positioning results of the AI model.
Optionally, the plurality of positioning results include: the first position information of the first time and the second position information of the second time, and the supervision result comprises a supervision result obtained by the following method:
Determining a displacement of the terminal from the first time to the second time based on the motion state information, calculating a distance between the first position information and the second position information, and supervising the AI model based on the displacement and the distance.
Optionally, the AI model is valid if the difference between the displacement and the distance is less than or equal to a preset threshold; and/or
And under the condition that the difference value between the displacement and the distance is larger than the preset threshold value, the AI model fails.
It should be noted that this embodiment is an implementation on the network side corresponding to the embodiment shown in fig. 3. For its specific implementation, reference may be made to the related description of the embodiment shown in fig. 3, and details are not repeated here in order to avoid repetition.
Referring to fig. 5, fig. 5 is a block diagram of a model supervision apparatus according to an embodiment of the application, and as shown in fig. 5, a model supervision apparatus 500 includes:
a first obtaining module 501, configured to obtain motion state information;
An execution module 502, configured to execute a sending operation, where the sending operation includes one of:
Sending a supervision result to network side equipment, wherein the supervision result is obtained by supervising an artificial intelligence (AI) model based on the motion state information;
and sending the motion state information to network side equipment, wherein the motion state information is used for supervising an AI model.
Optionally, the apparatus further includes:
the first receiving module is used for receiving first indication information sent by the network side equipment, and the first indication information is used for indicating the terminal to acquire the motion state information.
Optionally, the supervision result is used to indicate at least one of the following:
Model validity information;
confidence of the model supervision result;
Model supervision indexes;
the difference between the model supervision index and a preset threshold value.
Optionally, the motion state information includes at least one of:
Motion speed, motion direction, acceleration, motion displacement.
Optionally, the apparatus further comprises:
And the reporting module is used for reporting the capability information for acquiring the motion state information to the network side equipment.
Optionally, the sending operation includes sending a supervision result to the network side device, and the apparatus further includes:
The second receiving module is used for receiving second indication information sent by the network side equipment, and the second indication information is used for indicating to supervise the AI model.
Optionally, the supervising result includes:
and supervising results obtained by supervising the AI model based on the motion state information and the positioning results of the AI model.
Optionally, the apparatus further comprises at least one of:
The second acquisition module is used for acquiring a positioning result through the AI model;
And the third receiving module is used for receiving the positioning result of the AI model sent by the network side equipment.
Optionally, the time information corresponding to the motion state information is matched with the time information corresponding to the positioning result.
Optionally, the first obtaining module is configured to obtain motion state information of a future time, where the future time includes a time corresponding to the positioning result.
Optionally, the future time includes one of:
N consecutive time units in the future, wherein N is an integer greater than 1;
Future Mth time unit, M is a positive integer;
P time stamps in the future, wherein P is a positive integer;
A first time period in the future.
Optionally, the apparatus further includes:
And the fourth receiving module is used for receiving third indication information sent by the network side equipment, wherein the third indication information is used for indicating the terminal to acquire the motion state information of the future time.
Optionally, the positioning result includes time information of the positioning result, and further includes at least one of:
location information, signal arrival time TOA.
Optionally, the third receiving module is configured to:
Receiving a plurality of positioning results of the AI model sent by the network side device, wherein the positioning results are sent separately or in a combined manner.
Optionally, the supervising result includes:
And supervising results obtained by supervising the AI model based on the motion state information and the positioning results of the AI model.
Optionally, the plurality of positioning results include: the first position information of the first time and the second position information of the second time, and the supervision result comprises a supervision result obtained by the following method:
Determining a displacement of the terminal from the first time to the second time based on the motion state information, calculating a distance between the first position information and the second position information, and supervising the AI model based on the displacement and the distance.
Optionally, determining that the AI model is valid if the difference between the displacement and the distance is less than or equal to a preset threshold; and/or
And determining that the AI model fails under the condition that the difference value between the displacement and the distance is larger than the preset threshold value.
The model supervision device can improve the supervision effect on the AI model.
The model supervision device in the embodiment of the application can be an electronic device, such as an electronic device with an operating system, or can be a component in the electronic device, such as an integrated circuit or a chip. For example: the electronic device may be a terminal, or may be other devices than a terminal. By way of example, the terminals may include, but are not limited to, the types of terminals listed in the embodiments of the present application, and the other devices may be servers, network attached storage (Network Attached Storage, NAS), etc., and the embodiments of the present application are not limited in detail.
The model supervision device provided by the embodiment of the application can realize each process realized by the method embodiment shown in fig. 3 and achieve the same technical effects, and in order to avoid repetition, the description is omitted here.
Referring to fig. 6, fig. 6 is a block diagram of a model supervision apparatus according to an embodiment of the application, and as shown in fig. 6, a model supervision apparatus 600 includes:
an execution module 601, configured to execute a receiving operation, where the receiving operation includes one of:
Receiving a supervision result sent by a terminal, wherein the supervision result is obtained by supervising an artificial intelligence (AI) model based on the motion state information;
and receiving the motion state information sent by the terminal, wherein the motion state information is used for supervising the AI model.
Optionally, the apparatus further includes:
the first sending module is used for sending first indication information to the terminal, wherein the first indication information is used for indicating the terminal to acquire the motion state information.
Optionally, the supervision result is used to indicate at least one of the following:
Model validity information;
confidence of the model supervision result;
Model supervision indexes;
the difference between the model supervision index and a preset threshold value.
Optionally, the motion state information includes at least one of:
Motion speed, motion direction, acceleration, motion displacement.
Optionally, the apparatus further includes:
And the first receiving module is used for receiving the capability information, reported by the terminal, for acquiring the motion state information.
Optionally, the receiving operation includes receiving a supervision result sent by the terminal, and the apparatus further includes:
and the second sending module is used for sending second indicating information to the terminal, wherein the second indicating information is used for indicating to supervise the AI model.
Optionally, the supervising result includes:
and supervising results obtained by supervising the AI model based on the motion state information and the positioning results of the AI model.
Optionally, the apparatus further includes:
And the third sending module is used for sending the positioning result of the AI model to the terminal.
Optionally, the time information corresponding to the motion state information is matched with the time information corresponding to the positioning result.
Optionally, the apparatus further includes:
and the fourth sending module is used for sending third indication information to the terminal equipment, wherein the third indication information is used for indicating the terminal to acquire motion state information of future time.
Optionally, the future time includes one of:
N consecutive time units in the future, wherein N is an integer greater than 1;
Future Mth time unit, M is a positive integer;
P time stamps in the future, wherein P is a positive integer;
A first time period in the future.
Optionally, the positioning result includes time information of the positioning result, and further includes at least one of:
location information, signal arrival time TOA.
Optionally, the third sending module is configured to:
and sending a plurality of positioning results of the AI model to the terminal, wherein the positioning results are sent separately or combined.
Optionally, the supervising result includes:
And supervising results obtained by supervising the AI model based on the motion state information and the positioning results of the AI model.
Optionally, the plurality of positioning results include: the first position information of the first time and the second position information of the second time, and the supervision result comprises a supervision result obtained by the following method:
Determining a displacement of the terminal from the first time to the second time based on the motion state information, calculating a distance between the first position information and the second position information, and supervising the AI model based on the displacement and the distance.
Optionally, the AI model is valid if the difference between the displacement and the distance is less than or equal to a preset threshold; and/or
And under the condition that the difference value between the displacement and the distance is larger than the preset threshold value, the AI model fails.
The model supervision device can improve the supervision effect on the AI model.
The model supervision device in the embodiment of the application can be an electronic device, such as an electronic device with an operating system, or can be a component in the electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal or a network side device.
The model supervision device provided by the embodiment of the present application can implement each process implemented by the method embodiment shown in fig. 4, and achieve the same technical effects; to avoid repetition, details are not described here again.
Optionally, as shown in fig. 7, the embodiment of the present application further provides a communication device 700, including a processor 701 and a memory 702, where the memory 702 stores a program or an instruction that can be executed on the processor 701. For example, when the communication device 700 is a terminal, the program or the instruction, when executed by the processor 701, implements the steps of the above-mentioned terminal-side model supervision method embodiment, and the same technical effects can be achieved. When the communication device 700 is a network side device, the program or the instruction, when executed by the processor 701, implements the steps of the above-mentioned network-side model supervision method embodiment, and the same technical effects can be achieved; to avoid repetition, details are not described here again.
The embodiment of the application also provides communication equipment which comprises a processor and a communication interface, wherein the processor is used for acquiring the motion state information; the communication interface is configured to perform a transmission operation, where the transmission operation includes one of: sending a supervision result to network side equipment, wherein the supervision result is obtained by supervising an artificial intelligent AI model based on the motion state information; and sending the motion state information to network side equipment, wherein the motion state information is used for supervising an AI model. The communication equipment embodiment corresponds to the terminal-side model supervision method embodiment, and each implementation process and implementation manner of the method embodiment can be applied to the communication equipment embodiment and can achieve the same technical effect.
Specifically, fig. 8 is a schematic diagram of a hardware structure of a terminal for implementing an embodiment of the present application.
The terminal 800 includes, but is not limited to, at least some of the following components: a radio frequency unit 801, a network module 802, an audio output unit 803, an input unit 804, a sensor 805, a display unit 806, a user input unit 807, an interface unit 808, a memory 809, a processor 810, and the like.
Those skilled in the art will appreciate that the terminal 800 may further include a power source (e.g., a battery) for powering the various components, and that the power source may be logically coupled to the processor 810 by a power management system, so that functions such as charging, discharging, and power consumption management are performed by the power management system. The terminal structure shown in fig. 8 does not constitute a limitation of the terminal, and the terminal may include more or fewer components than shown, or combine certain components, or have a different arrangement of components, which will not be described in detail herein.
It should be appreciated that in embodiments of the present application, the input unit 804 may include a graphics processing unit (Graphics Processing Unit, GPU) 8041 and a microphone 8042, with the graphics processing unit 8041 processing image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 806 may include a display panel 8061, and the display panel 8061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 807 includes at least one of a touch panel 8071 and other input devices 8072. Touch panel 8071, also referred to as a touch screen. The touch panel 8071 may include two parts, a touch detection device and a touch controller. Other input devices 8072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein.
In the embodiment of the present application, after receiving downlink data from the network side device, the radio frequency unit 801 may transmit the downlink data to the processor 810 for processing; in addition, the radio frequency unit 801 may send uplink data to the network side device. In general, the radio frequency unit 801 includes, but is not limited to, an antenna, an amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
The memory 809 may be used to store software programs or instructions and various data. The memory 809 may mainly include a first storage area storing programs or instructions and a second storage area storing data, wherein the first storage area may store an operating system, application programs or instructions required for at least one function (such as a sound playing function and an image playing function), and the like. Further, the memory 809 may include volatile memory or nonvolatile memory, or the memory 809 may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), a static RAM (SRAM), a dynamic RAM (DRAM), a synchronous DRAM (SDRAM), a double data rate SDRAM (DDR SDRAM), an enhanced SDRAM (ESDRAM), a synch-link DRAM (SLDRAM), or a direct random access memory (DRRAM). The memory 809 in embodiments of the application includes, but is not limited to, these and any other suitable types of memory.
The processor 810 may include one or more processing units; optionally, the processor 810 integrates an application processor that primarily processes operations involving an operating system, user interface, application programs, etc., and a modem processor that primarily processes wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 810.
Wherein the processor 810 is configured to obtain motion state information;
a radio frequency unit 801, configured to perform a transmission operation, where the transmission operation includes one of:
Sending a supervision result to network side equipment, wherein the supervision result is obtained by supervising an artificial intelligent AI model based on the motion state information;
and sending the motion state information to network side equipment, wherein the motion state information is used for supervising an AI model.
Optionally, the radio frequency unit 801 is further configured to:
and receiving first indication information sent by network side equipment, wherein the first indication information is used for indicating the terminal to acquire the motion state information.
Optionally, the supervision result is used to indicate at least one of the following:
Model validity information;
confidence of the model supervision result;
Model supervision indexes;
the difference between the model supervision index and a preset threshold value.
Optionally, the motion state information includes at least one of the following (a data-structure sketch of these report fields follows this list):
Motion speed, motion direction, acceleration, motion displacement.
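The supervision-result indicators and motion state fields listed above can be read as the two payloads a terminal may report. The following is a minimal sketch of such report structures; every field name and type is an assumption made for illustration, since the embodiment does not define a concrete encoding.

from dataclasses import dataclass
from typing import Optional


@dataclass
class MotionStateReport:
    # Motion state information reported by the terminal (field names assumed).
    timestamp: float                      # time the sample refers to
    speed: Optional[float] = None         # motion speed
    direction: Optional[float] = None     # motion direction
    acceleration: Optional[float] = None  # acceleration
    displacement: Optional[float] = None  # motion displacement


@dataclass
class SupervisionReport:
    # Supervision result reported by the terminal (field names assumed).
    model_valid: Optional[bool] = None            # model validity information
    confidence: Optional[float] = None            # confidence of the supervision result
    supervision_metric: Optional[float] = None    # model supervision index
    metric_threshold_gap: Optional[float] = None  # difference between the index and a preset threshold

Which of the two structures is sent, and which fields are populated, would depend on the transmission operation the terminal performs.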
Optionally, the radio frequency unit 801 is further configured to:
And reporting the capability information for acquiring the motion state information to the network side equipment.
Optionally, the sending operation includes sending a supervision result to the network side device, and the radio frequency unit 801 is further configured to:
And receiving second indication information sent by the network side equipment, wherein the second indication information is used for indicating to supervise the AI model.
Optionally, the supervising result includes:
and supervising results obtained by supervising the AI model based on the motion state information and the positioning results of the AI model.
Optionally, the processor 810 is further configured to:
obtain a positioning result through the AI model;
optionally, the radio frequency unit 801 is further configured to:
receive the positioning result of the AI model sent by the network side equipment.
Optionally, the time information corresponding to the motion state information is matched with the time information corresponding to the positioning result.
Optionally, the acquiring motion state information includes:
And acquiring motion state information of a future time, wherein the future time comprises the time corresponding to the positioning result.
Optionally, the future time includes one of the following (a configuration sketch follows this list):
N consecutive time units in the future, wherein N is an integer greater than 1;
the Mth time unit in the future, wherein M is a positive integer;
P time stamps in the future, wherein P is a positive integer;
A first time period in the future.
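The options above describe when the terminal should sample its motion state so that the samples can be matched against the times of the positioning results. A small sketch of that bookkeeping follows; the enumeration names, the fixed time unit and the expansion into concrete sampling instants are assumptions for illustration only.

from dataclasses import dataclass
from enum import Enum, auto
from typing import List, Optional


class FutureTimeKind(Enum):
    # Ways the future time may be described (names assumed).
    NEXT_N_TIME_UNITS = auto()  # N consecutive future time units, N > 1
    MTH_TIME_UNIT = auto()      # the Mth future time unit
    P_TIMESTAMPS = auto()       # P explicit future timestamps
    TIME_WINDOW = auto()        # a first future time period


@dataclass
class FutureTimeConfig:
    kind: FutureTimeKind
    count: int = 0                            # N, M or P, depending on kind
    timestamps: Optional[List[float]] = None  # used for P_TIMESTAMPS
    window: Optional[tuple] = None            # (start, end), used for TIME_WINDOW


def sampling_times(cfg: FutureTimeConfig, now: float, unit: float = 1.0) -> List[float]:
    # Expand the configuration into the instants at which motion state should be
    # sampled, so the samples can later be matched to positioning-result times.
    if cfg.kind is FutureTimeKind.NEXT_N_TIME_UNITS:
        return [now + i * unit for i in range(1, cfg.count + 1)]
    if cfg.kind is FutureTimeKind.MTH_TIME_UNIT:
        return [now + cfg.count * unit]
    if cfg.kind is FutureTimeKind.P_TIMESTAMPS:
        return list(cfg.timestamps or [])
    start, end = cfg.window  # TIME_WINDOW: sample at the endpoints of the period
    return [start, end]

How a time unit maps onto the radio frame structure, and whether a window would be sampled continuously rather than at its endpoints, is not specified here.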
Optionally, the radio frequency unit 801 is further configured to:
And receiving third indication information sent by the network side equipment, wherein the third indication information is used for indicating the terminal to acquire the motion state information of the future time.
Optionally, the positioning result includes time information of the positioning result, and further includes at least one of:
location information, signal arrival time TOA.
Optionally, receiving the positioning result of the AI model sent by the network side equipment includes:
receiving a plurality of positioning results of the AI model sent by the network side equipment, wherein the plurality of positioning results are sent separately or in a combined manner.
Optionally, the supervising result includes:
And supervising results obtained by supervising the AI model based on the motion state information and the positioning results of the AI model.
Optionally, the plurality of positioning results include: the first position information of the first time and the second position information of the second time, and the supervision result comprises a supervision result obtained by the following method:
Determining a displacement of the terminal from the first time to the second time based on the motion state information, calculating a distance between the first position information and the second position information, and supervising the AI model based on the displacement and the distance.
Optionally, determining that the AI model is valid if the difference between the displacement and the distance is less than or equal to a preset threshold; and/or
And determining that the AI model fails under the condition that the difference value between the displacement and the distance is larger than the preset threshold value.
The terminal can improve the supervision effect on the AI model.
The embodiment of the application also provides communication equipment, which comprises a processor and a communication interface, wherein the communication interface is used for executing a receiving operation, and the receiving operation comprises the following: receiving a supervision result sent by a terminal, wherein the supervision result is obtained by supervising an artificial intelligent AI model based on the motion state information; and receiving the motion state information sent by the terminal, wherein the motion state information is used for supervising the AI model. The communication device embodiment corresponds to the network-side model supervision method embodiment, and each implementation process and implementation manner of the method embodiment can be applied to the communication device embodiment, and the same technical effects can be achieved.
Specifically, the embodiment of the application also provides network side equipment. As shown in fig. 9, the network side device 900 includes: an antenna 901, a radio frequency device 902, a baseband device 903, a processor 904, and a memory 905. The antenna 901 is connected to a radio frequency device 902. In the uplink direction, the radio frequency device 902 receives information via the antenna 901, and transmits the received information to the baseband device 903 for processing. In the downlink direction, the baseband device 903 processes information to be transmitted, and transmits the processed information to the radio frequency device 902, and the radio frequency device 902 processes the received information and transmits the processed information through the antenna 901.
The method performed by the network side device in the above embodiment may be implemented in the baseband apparatus 903, where the baseband apparatus 903 includes a baseband processor.
The baseband apparatus 903 may, for example, include at least one baseband board, where a plurality of chips are disposed, as shown in fig. 9, where one chip, for example, a baseband processor, is connected to the memory 905 through a bus interface, so as to call a program in the memory 905 to perform the network device operation shown in the above method embodiment.
The network-side device may also include a network interface 906, such as a common public radio interface (common public radio interface, CPRI).
Specifically, the network side device 900 of the embodiment of the present application further includes: instructions or programs stored in the memory 905 and executable on the processor 904, and the processor 904 calls the instructions or programs in the memory 905 to perform the method performed by the modules shown in fig. 6, and achieve the same technical effects, so that repetition is avoided and a description thereof is omitted here.
The radio frequency device 902 is configured to perform a receiving operation, where the receiving operation includes one of:
Receiving a supervision result sent by a terminal, wherein the supervision result is obtained by supervising an artificial intelligent AI model based on the motion state information;
and receiving the motion state information sent by the terminal, wherein the motion state information is used for supervising the AI model.
Optionally, the radio frequency device 902 is further configured to:
and sending first indication information to the terminal, wherein the first indication information is used for indicating the terminal to acquire the motion state information.
Optionally, the supervision result is used to indicate at least one of the following:
Model validity information;
confidence of the model supervision result;
Model supervision indexes;
the difference between the model supervision index and a preset threshold value.
Optionally, the motion state information includes at least one of:
Motion speed, motion direction, acceleration, motion displacement.
Optionally, the radio frequency device 902 is further configured to:
And receiving the capability information which is reported by the terminal and used for acquiring the motion state information.
Optionally, the receiving operation includes receiving a supervision result sent by the terminal, and the radio frequency device 902 is further configured to:
and sending second indication information to the terminal, wherein the second indication information is used for indicating to supervise the AI model.
Optionally, the supervising result includes:
and supervising results obtained by supervising the AI model based on the motion state information and the positioning results of the AI model.
Optionally, the radio frequency device 902 is further configured to:
And sending the positioning result of the AI model to the terminal.
Optionally, the time information corresponding to the motion state information is matched with the time information corresponding to the positioning result.
Optionally, the radio frequency device 902 is further configured to:
And sending third indication information to the terminal, wherein the third indication information is used for indicating the terminal to acquire motion state information of future time.
Optionally, the future time includes one of:
N consecutive time units in the future, wherein N is an integer greater than 1;
the Mth time unit in the future, wherein M is a positive integer;
P time stamps in the future, wherein P is a positive integer;
A first time period in the future.
Optionally, the positioning result includes time information of the positioning result, and further includes at least one of:
location information, signal arrival time TOA.
Optionally, sending the positioning result of the AI model to the terminal includes:
and sending a plurality of positioning results of the AI model to the terminal, wherein the positioning results are sent separately or combined.
Optionally, the supervising result includes:
And supervising results obtained by supervising the AI model based on the motion state information and the positioning results of the AI model.
Optionally, the plurality of positioning results include: the first position information of the first time and the second position information of the second time, and the supervision result comprises a supervision result obtained by the following method:
Determining a displacement of the terminal from the first time to the second time based on the motion state information, calculating a distance between the first position information and the second position information, and supervising the AI model based on the displacement and the distance.
Optionally, the AI model is valid if the difference between the displacement and the distance is less than or equal to a preset threshold; and/or
And under the condition that the difference value between the displacement and the distance is larger than the preset threshold value, the AI model fails.
The communication device can improve the supervision effect on the AI model.
The embodiment of the application also provides a readable storage medium, and the readable storage medium stores a program or instructions, which when executed by a processor, implement the steps of the model supervision method provided by the embodiment of the application.
Wherein the processor is a processor in the terminal described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a computer read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or the like.
The embodiment of the application further provides a chip, which comprises a processor and a communication interface, wherein the communication interface is coupled with the processor, and the processor is used for running programs or instructions to realize the processes of the embodiment of the model supervision method, and can achieve the same technical effects, so that repetition is avoided, and the description is omitted here.
It should be understood that the chip referred to in the embodiments of the present application may also be referred to as a system-on-chip, a system chip, or the like.
The embodiment of the present application further provides a computer program/program product, where the computer program/program product is stored in a storage medium, and the computer program/program product is executed by at least one processor to implement each process of the above-mentioned embodiment of the model supervision method, and the same technical effects can be achieved, so that repetition is avoided, and details are not repeated here.
The embodiment of the application also provides a model supervision system, which comprises a terminal and a network side device, wherein the terminal can be used for executing the steps of the terminal-side model supervision method described above, and the network side device can be used for executing the steps of the network-side model supervision method described above.
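As a compact illustration of how the two sides of such a system could interact under the two transmission operations described above, the following Python sketch abstracts the signalling (indication information, capability reporting, the AI model itself) into plain method calls; all class and method names are assumptions made for the example.

class NetworkSide:
    # Network-side roles: receive either a supervision result or raw motion state;
    # in the latter case the network side would perform the supervision itself.
    def receive_supervision_result(self, result: dict) -> None:
        print("supervision result received:", result)

    def receive_motion_state(self, state: dict) -> None:
        # would be combined with the network side's positioning results of the AI model
        print("motion state received for supervision:", state)


class Terminal:
    # Terminal-side roles: acquire motion state, then either supervise the AI model
    # locally and report the result, or forward the motion state to the network side.
    def __init__(self, supervise_locally: bool):
        self.supervise_locally = supervise_locally

    def acquire_motion_state(self) -> dict:
        # placeholder for a sensor readout (speed, direction, acceleration, displacement)
        return {"timestamp": 0.0, "speed": 1.2, "direction": 0.3}

    def run(self, network: NetworkSide) -> None:
        state = self.acquire_motion_state()
        if self.supervise_locally:
            result = {"model_valid": True, "confidence": 0.9}  # derived from state and AI positioning
            network.receive_supervision_result(result)
        else:
            network.receive_motion_state(state)


# One exchange under each of the two transmission operations:
Terminal(supervise_locally=True).run(NetworkSide())
Terminal(supervise_locally=False).run(NetworkSide())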
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, but may also include performing the functions in a substantially simultaneous manner or in an opposite order depending on the functions involved, e.g., the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, or of course by means of hardware, although in many cases the former is a preferred implementation. Based on such understanding, the technical solution of the present application may be embodied essentially, or in the part contributing to the prior art, in the form of a computer software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk), comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the method according to the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive. Many other forms may be made by those of ordinary skill in the art without departing from the spirit of the present application and the scope of protection of the claims, and all of these fall within the protection of the present application.

Claims (38)

1. A method of model supervision, comprising:
The terminal acquires motion state information;
The terminal performs a transmission operation, the transmission operation including one of:
Sending a supervision result to network side equipment, wherein the supervision result is obtained by supervising an artificial intelligent AI model based on the motion state information;
and sending the motion state information to network side equipment, wherein the motion state information is used for supervising an AI model.
2. The method of claim 1, wherein the method further comprises:
The terminal receives first indication information sent by network side equipment, wherein the first indication information is used for indicating the terminal to acquire the motion state information.
3. The method of claim 1, wherein the supervision result is used to indicate at least one of:
Model validity information;
confidence of the model supervision result;
Model supervision indexes;
the difference between the model supervision index and a preset threshold value.
4. The method of claim 1, wherein the motion state information comprises at least one of:
Motion speed, motion direction, acceleration, motion displacement.
5. The method of claim 1, wherein the method further comprises:
And the terminal reports the capability information for acquiring the motion state information to the network side equipment.
6. The method of claim 1, wherein the transmission operation comprises sending a supervision result to a network side device, the method further comprising:
the terminal receives second indication information sent by the network side equipment, wherein the second indication information is used for indicating to supervise the AI model.
7. The method of claim 1, wherein the supervising result comprises:
and supervising results obtained by supervising the AI model based on the motion state information and the positioning results of the AI model.
8. The method of claim 7, wherein the method further comprises at least one of:
the terminal obtains a positioning result through the AI model;
And the terminal receives the positioning result of the AI model sent by the network side equipment.
9. The method of claim 7, wherein the time information corresponding to the motion state information matches the time information corresponding to the positioning result.
10. The method of claim 9, wherein the terminal acquiring motion state information comprises:
And the terminal acquires motion state information of future time, wherein the future time comprises time corresponding to the positioning result.
11. The method of claim 10, wherein the future time comprises one of:
N consecutive time units in the future, wherein N is an integer greater than 1;
the Mth time unit in the future, wherein M is a positive integer;
P time stamps in the future, wherein P is a positive integer;
A first time period in the future.
12. The method of claim 10, wherein the method further comprises:
the terminal receives third indication information sent by the network side equipment, wherein the third indication information is used for indicating the terminal to acquire the motion state information of the future time.
13. The method of claim 7, wherein the positioning result comprises time information of a positioning result, and further comprising at least one of:
location information, signal arrival time TOA.
14. The method of claim 8, wherein the terminal receiving the positioning result of the AI model sent by the network side device comprises:
and the terminal receives a plurality of positioning results of the AI model sent by the network side equipment, wherein the plurality of positioning results are sent separately or in a combined manner.
15. The method of claim 7, wherein the supervising result comprises:
And supervising results obtained by supervising the AI model based on the motion state information and the positioning results of the AI model.
16. The method of claim 15, wherein the plurality of positioning results comprises: the first position information of the first time and the second position information of the second time, and the supervision result comprises a supervision result obtained by the following method:
Determining a displacement of the terminal from the first time to the second time based on the motion state information, calculating a distance between the first position information and the second position information, and supervising the AI model based on the displacement and the distance.
17. The method of claim 16, wherein the AI model is determined to be valid if a difference between the displacement and the distance is less than or equal to a preset threshold; and/or
And determining that the AI model fails under the condition that the difference value between the displacement and the distance is larger than the preset threshold value.
18. A method of model supervision, comprising:
The network side equipment executes a receiving operation, wherein the receiving operation comprises the following steps:
Receiving a supervision result sent by a terminal, wherein the supervision result is obtained by supervising an artificial intelligent AI model based on motion state information;
and receiving the motion state information sent by the terminal, wherein the motion state information is used for supervising the AI model.
19. The method of claim 18, wherein the method further comprises:
The network side equipment sends first indication information to the terminal, wherein the first indication information is used for indicating the terminal to acquire the motion state information.
20. The method of claim 18, wherein the supervision result is used to indicate at least one of:
Model validity information;
confidence of the model supervision result;
Model supervision indexes;
the difference between the model supervision index and a preset threshold value.
21. The method of claim 18, wherein the motion state information comprises at least one of:
Motion speed, motion direction, acceleration, motion displacement.
22. The method of claim 18, wherein the method further comprises:
and the network side equipment receives the capability information which is reported by the terminal and used for acquiring the motion state information.
23. The method of claim 18, wherein the receiving operation comprises receiving a supervision result sent by a terminal, the method further comprising:
and the network side equipment sends second indication information to the terminal, wherein the second indication information is used for indicating to supervise the AI model.
24. The method of claim 18, wherein the supervising result comprises:
and supervising results obtained by supervising the AI model based on the motion state information and the positioning results of the AI model.
25. The method of claim 24, wherein the method further comprises:
and the network side equipment sends the positioning result of the AI model to the terminal.
26. The method of claim 25, wherein the time information corresponding to the motion state information matches the time information corresponding to the positioning result.
27. The method of claim 26, wherein the method further comprises:
The network side equipment sends third indication information to the terminal, wherein the third indication information is used for indicating the terminal to acquire motion state information of a future time.
28. The method of claim 27, wherein the future time comprises one of:
N consecutive time units in the future, wherein N is an integer greater than 1;
the Mth time unit in the future, wherein M is a positive integer;
P time stamps in the future, wherein P is a positive integer;
A first time period in the future.
29. The method of claim 24, wherein the positioning result comprises time information of a positioning result, and further comprising at least one of:
location information, signal arrival time TOA.
30. The method of claim 25, wherein the network side device sending the positioning result of the AI model to the terminal comprises:
and the network side equipment sends a plurality of positioning results of the AI model to the terminal, wherein the positioning results are sent separately or in a combined mode.
31. The method of claim 25, wherein the supervising results comprise:
And supervising results obtained by supervising the AI model based on the motion state information and the positioning results of the AI model.
32. The method of claim 31, wherein the plurality of positioning results comprises: the first position information of the first time and the second position information of the second time, and the supervision result comprises a supervision result obtained by the following method:
Determining a displacement of the terminal from the first time to the second time based on the motion state information, calculating a distance between the first position information and the second position information, and supervising the AI model based on the displacement and the distance.
33. The method of claim 32, wherein the AI model is valid if a difference between the displacement and the distance is less than or equal to a preset threshold; and/or
And under the condition that the difference value between the displacement and the distance is larger than the preset threshold value, the AI model fails.
34. A model supervision apparatus, comprising:
The first acquisition module is used for acquiring motion state information;
the execution module is used for executing a sending operation, and the sending operation comprises the following steps:
Sending a supervision result to network side equipment, wherein the supervision result is obtained by supervising an artificial intelligent AI model based on the motion state information;
and sending the motion state information to network side equipment, wherein the motion state information is used for supervising an AI model.
35. A model supervision apparatus, comprising:
The execution module is used for executing a receiving operation, and the receiving operation comprises the following steps:
Receiving a supervision result sent by a terminal, wherein the supervision result is obtained by supervising an artificial intelligent AI model based on motion state information;
and receiving the motion state information sent by the terminal, wherein the motion state information is used for supervising the AI model.
36. A terminal comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the model supervision method according to any one of claims 1 to 17.
37. A network side device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the model supervision method according to any one of claims 18 to 33.
38. A readable storage medium, characterized in that the readable storage medium has stored thereon a program or instructions which, when executed by a processor, implement the steps of the model supervision method according to any one of claims 1 to 17, or which, when executed by a processor, implement the steps of the model supervision method according to any one of claims 18 to 33.