WO2021062740A1 - Method, apparatus, and system for determining device information - Google Patents

Method, apparatus, and system for determining device information

Info

Publication number
WO2021062740A1
WO2021062740A1 (PCT/CN2019/109657)
Authority
WO
WIPO (PCT)
Prior art keywords
information
algorithm
training
model
request
Prior art date
Application number
PCT/CN2019/109657
Other languages
English (en)
French (fr)
Inventor
辛阳
吴晓波
崇卫微
韦庆
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司
Priority to CN201980100484.8A (CN114402341A)
Priority to PCT/CN2019/109657 (WO2021062740A1)
Priority to EP19947578.1A (EP4027584A4)
Publication of WO2021062740A1
Priority to US17/707,589 (US12040947B2)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/12 Discovery or management of network topologies
    • H04L41/08 Configuration management of networks or network elements
    • H04L41/085 Retrieval of network configuration; Tracking network configuration history
    • H04L41/0853 Retrieval of network configuration; Tracking network configuration history by actively collecting configuration information or by backing up configuration information
    • H04L41/50 Network service management, e.g. ensuring proper service fulfilment according to agreements
    • H04L41/5058 Service discovery by the service manager
    • H04L41/16 Arrangements for maintenance, administration or management of data switching networks using machine learning or artificial intelligence
    • H04L41/40 Arrangements for maintenance, administration or management of data switching networks using virtualisation of network functions or resources, e.g. SDN or NFV entities
    • H04L43/00 Arrangements for monitoring or testing data switching networks
    • H04L43/20 Arrangements for monitoring or testing data switching networks, the monitoring system or the monitored elements being virtualised, abstracted or software-defined entities, e.g. SDN or NFV

Definitions

  • the embodiments of the present application relate to the field of communication technologies, and in particular, to a method, device, and system for determining device information.
  • the Network Data Analytics Function integrates a training module and an inference module.
  • the training module is used for training models based on training data.
  • the inference module is used for generating data analysis results based on the trained models and inference data.
  • because the deployment cost of the training module is much higher than that of the inference module, the training module and the inference module can be separated and deployed respectively as training devices and inference devices to reduce the deployment cost.
  • for example, the training devices with high deployment cost are deployed centrally in the cloud, while the inference devices with low deployment cost are deployed in a distributed manner.
  • in this case, how an inference device addresses a training device is a technical problem that urgently needs to be solved.
  • the embodiments of the present application provide a method, a device, and a system for determining device information, so that the inference device can correctly address the training device.
  • an embodiment of the present application provides a method for determining device information, including: an inference device sends a first request for information of one or more training devices to a service discovery entity.
  • the first request carries first information, where the first information includes the algorithm type or the algorithm identifier of the first model requested by the inference device.
  • the inference device receives the information of one or more training devices from the service discovery entity.
  • the information of the training devices includes: capability information.
  • the inference device determines the first training device from the information of one or more training devices according to preset conditions.
  • the embodiment of the application thus provides a method for determining device information: the inference device provides the service discovery entity with the algorithm type (Algorithm Type) or algorithm identifier (Algorithm ID) of the first model it requests, so that the inference device can obtain the information of one or more training devices from the service discovery entity and thereby correctly address one or more training devices.
  • this subsequently assists the inference device in selecting a suitable first training device for model training, helping to ensure that the model training process proceeds smoothly, the model training speed is fast, and the generalization ability of the model is as strong as possible.
  • the separate deployment of training equipment and inference equipment can also reduce deployment costs.
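The discovery exchange above can be sketched with illustrative message structures. All field and class names below are assumptions for illustration; they are not taken from the application or from any 3GPP specification:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class FirstRequest:
    """First request: sent by the inference device to the service discovery entity."""
    algorithm_type: Optional[str] = None   # algorithm type of the requested first model
    algorithm_id: Optional[str] = None     # or the identifier of a concrete algorithm
    location: Optional[str] = None         # optional location information of the inference device
    address: Optional[str] = None          # optional address information of the inference device

@dataclass
class TrainingDeviceInfo:
    """Returned by the service discovery entity for each candidate training device."""
    address: str
    capability: dict = field(default_factory=dict)  # capability information
    location: Optional[str] = None
    load: Optional[float] = None

req = FirstRequest(algorithm_type="neural_network", location="area-1")
dev = TrainingDeviceInfo(address="10.0.0.5", capability={"algorithm_types": ["neural_network"]})
```

The capability dictionary would carry the fields enumerated in the next bullet (algorithm type, identifier, performance evaluation index, convergence time, and so on).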
  • the capability information provided in the embodiments of the present application includes any one or more of the following information: algorithm type, algorithm identification, algorithm performance evaluation index, algorithm convergence time, algorithm convergence speed, algorithm confidence degree.
  • among the capability information, the algorithm performance evaluation index is the information required to evaluate the performance of an algorithm.
  • the first information further includes any one or more of the following information: location information of the inference device, and address information of the inference device.
  • the preset conditions include any one or more of the following: the algorithm convergence time corresponding to the algorithm of the first training device is the shortest among the algorithm convergence times corresponding to the algorithms of the one or more training devices; or, the algorithm performance evaluation index corresponding to the algorithm of the first training device is the highest among the algorithm performance evaluation indexes corresponding to the algorithms of the one or more training devices. The accuracy of training the first model by the first training device can thus be improved.
  • the information of the training device further includes any one of the following information: location information, load information; and the preset condition may also include: the load of the first training device is the lowest among the one or more training devices. By selecting the training device with the lowest load as the first training device, the processing burden of the first training device can be reduced.
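A minimal sketch of applying such preset conditions: among the candidates returned by the service discovery entity, prefer the shortest convergence time, break ties by the highest performance evaluation index, then by the lowest load. The exact ordering of criteria is an assumption; the text only requires that one or more of the conditions be applied:

```python
def select_first_training_device(candidates):
    """Pick the first training device from candidate info dicts by preset conditions."""
    return min(
        candidates,
        key=lambda c: (
            c.get("convergence_time", float("inf")),   # shorter is better
            -c.get("performance_index", 0.0),          # higher is better
            c.get("load", float("inf")),               # lower is better
        ),
    )

candidates = [
    {"id": "t1", "convergence_time": 30, "performance_index": 0.92, "load": 0.7},
    {"id": "t2", "convergence_time": 30, "performance_index": 0.95, "load": 0.2},
    {"id": "t3", "convergence_time": 60, "performance_index": 0.99, "load": 0.1},
]
best = select_first_training_device(candidates)
```

Here t1 and t2 tie on convergence time, so the higher performance index of t2 decides the selection.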
  • the method provided in the embodiment of the present application further includes: the inference device sends a fourth request for requesting information of the first model to the first training device.
  • the fourth request includes second information, and the second information includes any one or more of the following information corresponding to the first model: algorithm type, algorithm identifier, algorithm performance requirement, data type, and data address.
  • the inference device receives the information of the first model from the first training device, and the information of the first model includes any one or more of the following information: model identification, model input, and model parameters. In this way, it is convenient for the first training device to determine, according to the fourth request, the requirements to be satisfied by the first model required by the inference device.
  • the information of the first model further includes any one or more of the following information: model output, model additional information, and model evaluation index results.
  • the result of the model evaluation index may be the optimal value.
  • the inference device sends a third request for registering or updating the information of the first model to the service discovery entity.
  • the third request includes any one or more of the following information: the analytics ID corresponding to the analysis result of the first model, and the effective area, service area, or coverage area of the analysis result corresponding to the first model.
  • the inference device further subscribes to online inference data from the network element that provides data for generating data analysis results. The inference device determines the data analysis result based on the model and the obtained online inference data, and sends the data analysis result to the consumer network element.
  • the third request further includes location information of the inference device.
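The third request described above can be sketched as a simple message builder. The field names and the example analytics ID are illustrative assumptions, not values defined by the application:

```python
def build_third_request(analytics_id, serving_area, location=None):
    """Build the third request registering/updating the first model's information
    at the service discovery entity."""
    request = {
        "analytics_id": analytics_id,    # analytics ID of the model's analysis result
        "serving_area": serving_area,    # effective/service/coverage area of the result
    }
    if location is not None:
        request["location"] = location   # optional location of the inference device
    return request

req = build_third_request("load_analytics", ["TA-1", "TA-2"], location="site-7")
```

A consumer network element can then use the registered analytics ID and area to locate the inference device that produces the corresponding analysis results.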
  • an embodiment of the present application provides a method for determining device information, including: a service discovery entity receives a first request from an inference device for information about one or more training devices.
  • the first request carries first information including the algorithm type or the algorithm identifier of the first model requested by the inference device.
  • the service discovery entity determines the information of one or more training devices according to the first information, and the information of the training devices includes: capability information.
  • the service discovery entity sends the information of one or more training devices to the inference device.
  • the one or more training devices are training devices that support the algorithm type required by the inference device, and/or training devices whose distance from the inference device meets preset requirements.
  • the load of one or more training devices is lower than a preset load threshold.
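The service discovery entity's filtering described in the last two bullets might look as follows. The distance metric, threshold values, and record layout are assumptions for illustration:

```python
def filter_training_devices(registry, algorithm_type, inference_pos,
                            max_distance=100.0, load_threshold=0.8):
    """Keep registered training devices that support the requested algorithm type,
    are close enough to the inference device, and have load below the threshold."""
    result = []
    for dev in registry:
        if algorithm_type not in dev["capability"]["algorithm_types"]:
            continue  # does not support the required algorithm type
        dx = dev["pos"][0] - inference_pos[0]
        dy = dev["pos"][1] - inference_pos[1]
        if (dx * dx + dy * dy) ** 0.5 > max_distance:
            continue  # distance does not meet the preset requirement
        if dev["load"] >= load_threshold:
            continue  # load not below the preset load threshold
        result.append(dev)
    return result

registry = [
    {"id": "t1", "capability": {"algorithm_types": ["nn", "svm"]}, "pos": (0, 0), "load": 0.3},
    {"id": "t2", "capability": {"algorithm_types": ["nn"]}, "pos": (500, 0), "load": 0.1},
    {"id": "t3", "capability": {"algorithm_types": ["svm"]}, "pos": (10, 10), "load": 0.2},
]
matches = filter_training_devices(registry, "nn", (0, 0))
```

In this example only t1 survives: t2 is too far away and t3 does not support the requested algorithm type.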
  • the method provided in the embodiment of the present application further includes: the service discovery entity receives a second request from the first training device, where the second request is used to request registration or update of the information of the first training device at the service discovery entity. The second request includes any one or more of the following information of the first training device: address information, location information, load, and capability information; the first training device is any one of the one or more training devices.
  • the method provided in the embodiment of the present application further includes: the service discovery entity sends a response message to the first training device, where the response message is used to indicate that the information of the first training device is successfully registered or updated.
  • the capability information includes any one or more of the following information: algorithm type, algorithm identification, algorithm performance evaluation index, algorithm convergence time, algorithm convergence speed, and algorithm confidence.
  • the method provided in the embodiment of the present application further includes: the service discovery entity receives a third request from the inference device, where the third request is used to register or update the information of the first model, and the third request includes any one or more of the following information: the analytics ID corresponding to the analysis result of the first model, and the effective area, service area, or coverage area of the analysis result corresponding to the first model.
  • an embodiment of the present application provides a method for determining device information, including: a first training device receives a fourth request from an inference device.
  • the fourth request includes second information, and the second information includes any one or more of the following information corresponding to the first model: algorithm type, algorithm identifier, algorithm performance requirement, data type, and data address.
  • the first training device determines the information of the first model according to the second information.
  • the information of the first model includes any one or more of the following information: model identification, model input, model parameters; the first training device sends the first model to the inference device Information about a model.
  • determining the information of the first model by the first training device according to the second information includes: the first training device collects training data according to the data type and the data address, and performs model training on the training data according to the algorithm determined by the algorithm identifier to obtain the information of the first model, where the performance index of the first model meets the algorithm performance requirement.
  • the information of the first model further includes any one or more of the following information: model output, model additional information, and model evaluation index results.
  • the method provided in the embodiment of the present application further includes: the first training device sends a second request of the first training device to the service discovery entity, where the second request is used to request registration or update of the information of the first training device at the service discovery entity; the second request includes any one or more of the following information of the first training device: address information, location information, load, and capability information, and the first training device is any one of the one or more training devices.
  • the method provided in the embodiment of the present application further includes: the first training device receives a response message from the service discovery entity, where the response message is used to indicate that the information of the first training device is successfully registered or updated.
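The first training device's handling of the fourth request (collect data by type and address, train with the identified algorithm, answer only if the performance requirement is met) can be sketched as below. The function names, request fields, and stub collaborators are all assumptions for illustration:

```python
def handle_fourth_request(request, collect_data, train):
    """Process the inference device's fourth request and return the first model's info,
    or None if the trained model misses the performance requirement."""
    # Collect training data of the requested type from the given data address.
    data = collect_data(request["data_type"], request["data_address"])
    # Train a model with the algorithm selected by the algorithm identifier.
    model_id, performance = train(request["algorithm_id"], data)
    # Only return model information if the performance index meets the requirement.
    if performance < request["performance_requirement"]:
        return None
    return {"model_id": model_id,
            "model_input": request["data_type"],
            "evaluation_result": performance}

info = handle_fourth_request(
    {"algorithm_id": "alg-1", "data_type": "load_series",
     "data_address": "http://example/data", "performance_requirement": 0.9},
    collect_data=lambda dtype, addr: [1, 2, 3],          # stub data collector
    train=lambda alg, data: ("model-42", 0.95),          # stub trainer
)
```

Real implementations would replace the stubs with actual data collection and training; the shape of the exchange is what matters here.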
  • an embodiment of the present application provides an apparatus for determining device information.
  • the apparatus for determining device information can implement the method in the first aspect or any possible implementation of the first aspect, and therefore can also achieve the beneficial effects of the first aspect or any of its possible implementations.
  • the device for determining device information may be an inference device, or a device that can support the inference device to implement the method in the first aspect or any possible implementation of the first aspect, for example, a chip applied to the inference device.
  • the device can implement the above method by software, hardware, or by hardware executing corresponding software.
  • the apparatus for determining device information includes: a communication unit configured to send a first request for requesting information of one or more training devices to a service discovery entity.
  • the first request carries first information, and the first information includes the algorithm type or the algorithm identifier of the first model requested by the device.
  • the communication unit is used to receive the information of one or more training devices from the service discovery entity, and the information of the training devices includes: capability information.
  • the processing unit is configured to determine the first training device from the information of one or more training devices according to preset conditions.
  • the capability information provided in the embodiments of the present application includes any one or more of the following information: algorithm type, algorithm identification, algorithm performance evaluation index, algorithm convergence time, algorithm convergence speed, algorithm confidence degree.
  • the first information further includes any one or more of the following information: location information of the device, and address information of the device.
  • the preset conditions include any one or more of the following: the algorithm convergence time corresponding to the algorithm of the first training device is the shortest among the algorithm convergence times corresponding to the algorithms of the one or more training devices; or, the algorithm performance evaluation index corresponding to the algorithm of the first training device is the highest among the algorithm performance evaluation indexes corresponding to the algorithms of the one or more training devices.
  • the information of the training device further includes any one of the following information: location information, load information; and the preset condition may also include: the load of the first training device is the lowest among the one or more training devices.
  • the communication unit is further configured to send a fourth request for requesting information of the first model to the first training device.
  • the fourth request includes the second information.
  • the second information includes any one or more of the following information corresponding to the first model: algorithm type, algorithm identification, algorithm performance requirement, data type, and data address.
  • the communication unit is further configured to receive information about the first model from the first training device, where the information about the first model includes any one or more of the following information: model identification, model input, and model parameters.
  • the information of the first model further includes any one or more of the following information: model output, model additional information, and model evaluation index results.
  • the communication unit is configured to send a third request to the service discovery entity, where the third request is used to register or update the information of the first model, and the third request includes any one or more of the following information: the analytics ID corresponding to the analysis result of the first model, and the effective area, service area, or coverage area of the analysis result corresponding to the first model.
  • the third request further includes location information of the inference device.
  • an embodiment of the present application provides a device for determining device information.
  • the device for determining device information may be an inference device or a chip in the inference device.
  • the communication unit may be a communication interface.
  • the processing unit may be a processor.
  • the apparatus for determining device information may further include a storage unit.
  • the storage unit may be a memory.
  • the storage unit is used to store computer program code, and the computer program code includes instructions.
  • the processing unit executes the instructions stored in the storage unit, so that the inference device implements the method for determining device information described in the first aspect or any one of the possible implementation manners of the first aspect.
  • the processing unit may be a processor, and the communication unit may be referred to as a communication interface.
  • the communication interface may be an input/output interface, pin or circuit, and so on.
  • the processing unit executes the computer program code stored in the storage unit to enable the inference device to implement the method for determining device information described in the first aspect or any one of the possible implementations of the first aspect.
  • the storage unit may be a storage unit (for example, a register, a cache, etc.) in the chip, or may be a storage unit (for example, a read-only memory, a random access memory, etc.) located outside the chip in the inference device.
  • the processor, the communication interface and the memory are coupled with each other.
  • an embodiment of the present application provides a device for determining device information.
  • the device for determining device information can implement the method in the second aspect or any possible implementation of the second aspect, and therefore can also achieve the beneficial effects of the second aspect or any of its possible implementations.
  • the device for determining device information may be a service discovery entity, or a device that can support the service discovery entity to implement the method in the second aspect or any possible implementation manner of the second aspect, for example, a chip applied to the service discovery entity.
  • the device can implement the above method by software, hardware, or by hardware executing corresponding software.
  • the apparatus for determining device information includes: a communication unit configured to receive a first request for information of one or more training devices from an inference device.
  • the first request carries first information, and the first information includes the algorithm type or the algorithm identifier of the first model requested by the inference device.
  • the processing unit is configured to determine information of one or more training devices according to the first information, and the information of the training devices includes: capability information.
  • the communication unit is also used to send information about one or more training devices to the inference device.
  • the one or more training devices are training devices that support the algorithm type required by the inference device, and/or training devices whose distance from the inference device meets preset requirements.
  • the load of one or more training devices is lower than a preset load threshold.
  • the communication unit is further configured to receive a second request from the first training device, where the second request is used to request registration or update of the information of the first training device at the apparatus; the second request includes any one or more of the following information of the first training device: address information, location information, load, and capability information, and the first training device is any one of the one or more training devices.
  • the capability information includes any one or more of the following information: algorithm type, algorithm identification, algorithm performance evaluation index, algorithm convergence time, algorithm convergence speed, and algorithm confidence.
  • the communication unit is further configured to receive a third request from the inference device, where the third request is used to register or update the information of the first model, and the third request includes any one or more of the following information: the analytics ID corresponding to the analysis result of the first model, and the effective area, service area, or coverage area of the analysis result corresponding to the first model.
  • an embodiment of the present application provides an apparatus for determining device information.
  • the apparatus for determining device information may be a service discovery entity or a chip in the service discovery entity.
  • the communication unit may be a communication interface.
  • the processing unit may be a processor.
  • the communication device may also include a storage unit.
  • the storage unit may be a memory.
  • the storage unit is used to store computer program code, and the computer program code includes instructions.
  • the processing unit executes the instructions stored in the storage unit, so that the service discovery entity implements the method for determining device information described in the second aspect or any one of the possible implementation manners of the second aspect.
  • the processing unit may be a processor, and the communication unit may be referred to as a communication interface.
  • the communication interface may be an input/output interface, pin or circuit, and so on.
  • the processing unit executes the computer program code stored in the storage unit, so that the service discovery entity implements the method for determining device information described in the second aspect or any one of the possible implementations of the second aspect. The storage unit may be a storage unit (for example, a register, a cache, etc.) in the chip, or a storage unit (for example, a read-only memory, a random access memory, etc.) located outside the chip in the service discovery entity.
  • the processor, the communication interface and the memory are coupled with each other.
  • an embodiment of the present application provides an apparatus for determining device information.
  • the apparatus for determining device information can implement the method in the third aspect or any possible implementation manner of the third aspect, and therefore can also achieve the beneficial effects of the third aspect or any of its possible implementations.
  • the communication device may be a first training device, or a device that can support the first training device to implement the method in the third aspect or any possible implementation manner of the third aspect, for example, a chip applied to the first training device.
  • the device can implement the above method by software, hardware, or by hardware executing corresponding software.
  • an apparatus for determining device information includes: a communication unit configured to receive a fourth request including second information from an inference device.
  • the second information includes any one or more of the following information corresponding to the first model: algorithm type, algorithm identifier, algorithm performance requirement, data type, and data address.
  • the processing unit is configured to determine the information of the first model according to the second information.
  • the information of the first model includes any one or more of the following information: model identification, model input, model parameters; the communication unit is also used to send the information of the first model to the inference device.
  • determining the information of the first model by the first training device according to the second information includes: the first training device collects training data according to the data type and the data address, and performs model training on the training data according to the algorithm determined by the algorithm identifier to obtain the information of the first model, where the performance index of the first model meets the algorithm performance requirement.
  • the information of the first model further includes any one or more of the following information: model output, model additional information, and model evaluation index results.
  • the communication unit is further configured to send a second request of the first training device to the service discovery entity, where the second request is used to request registration or update of the information of the first training device at the service discovery entity; the second request includes any one or more of the following information of the first training device: address information, location information, load, and capability information, and the first training device is any one of the one or more training devices.
  • the communication unit is further configured to receive a response message from the service discovery entity, where the response message is used to indicate that the information of the first training device is successfully registered or updated.
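The registration exchange carried by the second request and its response message can be sketched as a small registry kept by the service discovery entity. The class, method, and status values are illustrative assumptions:

```python
class ServiceDiscoveryEntity:
    """Toy service discovery entity keeping a registry of training devices."""

    def __init__(self):
        self._registry = {}  # keyed by training device address

    def handle_second_request(self, device_info):
        # Register or update the training device's record.
        self._registry[device_info["address"]] = device_info
        # Response message indicating successful registration or update.
        return {"status": "registered"}

sde = ServiceDiscoveryEntity()
resp = sde.handle_second_request(
    {"address": "10.0.0.5", "location": "area-2", "load": 0.4,
     "capability": {"algorithm_types": ["nn"]}}
)
```

Keying the registry by address lets a later second request from the same device overwrite its earlier record, which is exactly the register-or-update semantics described above.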
  • the embodiments of the present application provide a computer-readable storage medium, and the computer-readable storage medium stores a computer program or instruction. When the computer program or instruction runs on a computer, the computer executes the method for determining device information described in the first aspect or any one of the possible implementations of the first aspect.
  • an embodiment of the present application provides a computer-readable storage medium, and a computer program or instruction is stored in the computer-readable storage medium. When the computer program or instruction runs on a computer, the computer executes the method for determining device information described in the second aspect or any one of the possible implementations of the second aspect.
  • the embodiments of the present application provide a computer-readable storage medium. The computer-readable storage medium stores a computer program or instruction. When the computer program or instruction runs on a computer, the computer executes the method for determining device information described in the third aspect or any one of the possible implementations of the third aspect.
  • the embodiments of the present application provide a computer program product that includes instructions.
  • when the instructions run on a computer, the computer executes the method for determining device information described in the first aspect or the various possible implementations of the first aspect.
  • this application provides a computer program product that includes instructions; when the instructions run on a computer, the computer executes the method for determining device information described in the second aspect or the various possible implementations of the second aspect.
  • this application provides a computer program product that includes instructions.
  • when the instructions run on a computer, the computer executes the method for determining device information described in the third aspect or the various possible implementations of the third aspect.
  • the embodiments of the present application provide a communication device for implementing various possible designs in any of the foregoing first to third aspects.
  • the communication device may be the above-mentioned inference device, or a device containing the above-mentioned inference device.
  • the communication device may be the above-mentioned service discovery entity, or a device including the above-mentioned service discovery entity.
  • the communication device may be the above-mentioned first training device, or a device containing the above-mentioned first training device.
  • the communication device includes a module, unit, or means corresponding to the foregoing method, and the module, unit, or means can be implemented by hardware, by software, or by hardware executing corresponding software.
  • the hardware or software includes one or more modules or units corresponding to the above-mentioned functions.
  • an embodiment of the present application provides a communication device, which includes: at least one processor and a communication interface.
  • the processor executes the computer-executable instructions stored in the communication device, so that the communication device executes the method described in any one of the possible designs of the first aspect to the third aspect.
  • the communication device may be an inference device or a chip applied in an inference device.
  • the communication device may be a service discovery entity or a chip applied to the service discovery entity.
  • the communication device may be the first training device or a chip applied in the first training device.
  • the communication device described in the fourteenth aspect may further include a bus and a memory, and the memory is used to store code and data.
  • the at least one processor, the communication interface, and the memory are coupled to each other.
  • an embodiment of the present application provides a communication device.
  • the communication device includes a processor and a storage medium.
  • the storage medium stores instructions; when the instructions are executed by the processor, the method for determining device information described in the first aspect or the various possible implementations of the first aspect is implemented.
  • an embodiment of the present application provides a communication device.
  • the communication device includes a processor and a storage medium.
  • the storage medium stores instructions; when the instructions are executed by the processor, the method for determining device information described in the second aspect or the various possible implementations of the second aspect is implemented.
  • an embodiment of the present application provides a communication device.
  • the communication device includes a processor and a storage medium.
  • the storage medium stores instructions; when the instructions are executed by the processor, the method for determining device information described in the third aspect or the various possible implementations of the third aspect is implemented.
  • an embodiment of the present application provides a communication device that includes a processor coupled to a memory, and the memory is used to store instructions or computer programs; when the instructions or computer programs are run by the processor, the method for determining device information described in the first aspect or the various possible implementation manners of the first aspect is implemented.
  • an embodiment of the present application provides a communication device.
  • the communication device includes a processor and a memory coupled to the processor.
  • the memory is used to store instructions or computer programs; when the instructions or computer programs are run by the processor, the method for determining device information described in the second aspect or the various possible implementation manners of the second aspect is implemented.
  • an embodiment of the present application provides a communication device.
  • the communication device includes a processor and a memory coupled to the processor, and the memory is used to store instructions or computer programs.
  • when the instructions or computer programs are run by the processor, the method for determining device information described in the third aspect or the various possible implementation manners of the third aspect is implemented.
  • an embodiment of the present application provides a communication device.
  • the communication device includes one or more modules for implementing the methods of the first, second, and third aspects described above, and the one or more modules may correspond to the steps in the methods of the first, second, and third aspects described above.
  • an embodiment of the present application provides a chip that includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is used to run a computer program or instruction to implement the method for determining device information described in the first aspect or the various possible implementations of the first aspect.
  • the communication interface is used to communicate with other modules outside the chip.
  • an embodiment of the present application provides a chip that includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is used to run a computer program or instruction to implement the method for determining device information described in the second aspect or the various possible implementations of the second aspect.
  • the communication interface is used to communicate with other modules outside the chip.
  • an embodiment of the present application provides a chip that includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is used to run a computer program or instruction to implement the third aspect or the third aspect A method for determining device information described in various possible implementations.
  • the communication interface is used to communicate with other modules outside the chip.
  • the chip provided in the embodiment of the present application further includes a memory for storing computer programs or instructions.
  • an embodiment of the present application provides a communication system.
  • the communication system includes: the device described in the fourth aspect or any one of the possible implementation manners of the fourth aspect, and the device described in the fifth aspect or any one of the possible implementation manners of the fifth aspect; each may be any of the devices provided above.
  • the communication system may further include: the device described in the sixth aspect or any one of the possible implementation manners of the sixth aspect.
  • any device, computer storage medium, computer program product, chip, or communication system provided above is used to execute the corresponding method provided above; therefore, for the beneficial effects that can be achieved, reference may be made to the beneficial effects of the corresponding solutions in the corresponding method provided above, which will not be repeated here.
  • FIG. 1 is an architecture diagram of a current data analysis network element;
  • FIG. 2 is a scene diagram provided by an embodiment of the application
  • FIG. 3 is a schematic diagram of a communication system architecture provided by an embodiment of this application.
  • FIG. 4 is a schematic structural diagram of a communication device provided by an embodiment of this application.
  • FIG. 5 is a schematic flowchart of a method for determining device information according to an embodiment of this application.
  • FIG. 6 is a schematic flowchart of another method for determining device information according to an embodiment of the application.
  • FIG. 7 is a schematic structural diagram of a device provided by an embodiment of this application.
  • FIG. 8 is a schematic structural diagram of another device provided by an embodiment of this application.
  • FIG. 9 is a schematic structural diagram of a chip provided by an embodiment of the application.
  • words such as “first” and “second” are used to distinguish the same items or similar items that have substantially the same function and effect.
  • the first training device and the second training device are only used to distinguish different training devices, and the sequence of the training devices is not limited.
  • words such as “first” and “second” do not limit the quantity or the order of execution, and do not necessarily indicate that the distinguished objects are different.
  • At least one refers to one or more, and “multiple” refers to two or more.
  • “And/or” describes the association relationship of the associated objects, indicating that there can be three relationships, for example, A and/or B, which can mean: A alone exists, A and B exist at the same time, and B exists alone, where A, B can be singular or plural.
  • the character “/” generally indicates that the associated objects before and after are in an “or” relationship.
  • “at least one of the following items (a)” or similar expressions refers to any combination of these items, including a single item (a) or any combination of a plurality of items (a).
  • at least one of a, b, or c can mean: a, b, c, a and b, a and c, b and c, or a, b, and c, where a, b, and c can be single or multiple.
  • CDMA code division multiple access
  • TDMA time division multiple access
  • FDMA frequency division multiple access
  • OFDMA orthogonal frequency-division multiple access
  • SC-FDMA single carrier frequency-division multiple access
  • “System” can be used interchangeably with “network”.
  • in 3GPP, long term evolution (LTE) and various versions based on LTE evolution are new versions of UMTS that use E-UTRA.
  • LTE long term evolution
  • NR new radio
  • the communication system may also be applicable to future-oriented communication technologies, all of which are applicable to the technical solutions provided in the embodiments of the present application.
  • NWDAF is black-boxed, integrating the following three components into one, with a single deployment form: data lake, inference module, and training module.
  • the data lake includes functions such as data collection, data preprocessing, and feature engineering.
  • the training module includes functions such as model training and model storage.
  • the inference module includes functions such as model deployment and model application.
  • the deployment cost of the training module is much higher than that of the inference module.
  • the training module has higher hardware requirements, such as artificial intelligence (AI) chips, large clusters, and graphics processing unit (GPU) computing capability, while the inference module can use a general-purpose central processing unit (CPU).
  • AI artificial intelligence
  • CPU Central Processing Unit
  • the training module has higher requirements for AI engineering talent, especially professional domain experts for feature selection and model training, while the designers of the inference module often only need to obtain the model from the training module and deploy it locally.
  • small and medium operators and large operators have put forward their own demands on the deployment form of NWDAF network elements.
  • Small and medium operators (such as small operators in some countries in Africa) hope that, in Rel-17, the AI model training function in NWDAF network elements will be stripped from the existing Rel-16 architecture, and then an AI public cloud (such as SoftCOM AI) will be used as a training device to implement AI model training.
  • AI public cloud such as SoftCOM AI
  • operators then only deploy inference devices and provide data to the training device to implement model training; the inference device obtains the model from the training device for local deployment, and finally generates data analysis results based on the model and local network data.
  • the application function (AF) network element can provide service data to the NWDAF network element, and the operation, administration and maintenance (OAM) network element (which may also be called the operation management and maintenance network element) and 5G network elements (a base station or core network elements, for example, the access and mobility management function (AMF) network element, the policy control function (PCF) network element, the user plane function (UPF) network element, or the session management function (SMF) network element) can provide network data (Network Data) to the NWDAF network element.
  • FIG. 2 shows a network architecture after separation of a training device and an inference device provided by an embodiment of the application.
  • Training data collection: obtain training data from inference devices, database network elements (for example, application function (AF) network elements), or 5G network elements;
  • Model training: train the optimal model based on training data, algorithms, and training platforms (such as Huawei's Network Artificial Intelligence Engine (NAIE), Google's TensorFlow, Facebook's Caffe, or Amazon's MXNet).
  • Model storage: the optimal model obtained from training is serialized and stored in preparation for model invocation.
  • Inference equipment includes: data lake, model application and model deployment.
  • the main functions of the inference device are as follows: model deployment: request and receive models from the training device, and install and deploy them locally; model input data collection: collect model input data from network elements, and obtain model output based on the model; model application: after verification of the model output and data restoration, the data analysis result (data analytics) is determined, and the corresponding Analytics ID is prepared for requests from the policy network element.
  • a lightweight model training module may be reserved in the inference device for local training of the inference device.
  • the lightweight model in the inference device does not require massive distributed data storage or distributed data processing, and does not require the participation of a large number of AI experts.
  • FIG. 3 shows a communication system to which a method for determining device information provided in an embodiment of the present application is applied.
  • the communication system includes: a service discovery entity 10 and an inference device 20 communicating with the service discovery entity 10.
  • the communication system may further include: one or more training devices (for example, training device 301 and training device 302 to training device 30n), where n is an integer greater than or equal to 1.
  • the inference device 20 and one or more training devices can communicate with each other.
  • the service discovery entity 10 supports network functions or network service discovery, registration, and authentication functions.
  • the service discovery entity 10 may be a network repository function (NRF) network element.
  • the service discovery entity 10 may be a Domain Name System (DNS) server.
  • NRF network repository function
  • DNS Domain Name System
  • the service discovery entity 10 is an NRF network element as an example.
  • the service discovery entity 10 may be an NRF network element or have other names, which is not limited in this application.
  • the training device has any one of the following functions: registering capability information with the service discovery entity 10, model training, and model storage. Specifically, the training device is used to register with the service discovery entity 10 the address information of the training device, the load of the training device, and the list of algorithms supported by the training device (including each algorithm's type, identification, additional information, evaluation index, convergence time, confidence, etc.); to receive model requests from the inference device 20 (including requirements on the specific algorithm identification used by the model and on the model evaluation index); and to feed back the model to the inference device 20 (including the model identification, the algorithm identification used by the model, the model input, the model output, the parameter list, the optimal value of the model evaluation index result, and the algorithm additional information).
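The registration information above can be pictured as a simple data structure. The sketch below is illustrative only: the patent does not define an encoding, and every field name and value here is an assumption for illustration.

```python
# Hypothetical shape of the capability information a training device might
# register with the service discovery entity (all keys are assumptions).
training_device_registration = {
    "address": "10.0.0.5",    # address information of the training device
    "location": "area-1",     # location information of the training device
    "load": 0.35,             # current load of the training device
    "supported_algorithms": [
        {
            "algorithm_type": "classification",
            "algorithm_id": "alg-001",
            "evaluation_index": {"accuracy": 0.95},
            "convergence_time_s": 120,
            "confidence": 0.9,
            "additional_info": None,
        },
        {
            "algorithm_type": "regression",
            "algorithm_id": "alg-002",
            "evaluation_index": {"mse": 0.02},
            "convergence_time_s": 300,
            "confidence": 0.85,
            "additional_info": None,
        },
    ],
}
```

A structure of this kind would also naturally support the update case described below, since the frequently changing fields (load, evaluation index, convergence time) are ordinary values that can be re-registered.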
  • the inference device may be a module with model deployment and model application in a data analysis network element, or the inference device may be a data analysis network element.
  • the data analysis network element may be a NWDAF network element.
  • the service discovery entity 10 has any one of the following functions: receiving registration information of the training device from the training device and saving it.
  • the training device can update the initial registration information, especially frequently changing variables such as the load of the training device, algorithm performance indicators, and algorithm convergence time.
  • the request can carry the type of algorithm to be used and the location information of the inference device;
  • the response addressed by the training device is sent to the inference device, including the address of the training device, the algorithm identification of the specific algorithm supported by the training device, the evaluation index of the algorithm, the algorithm convergence time, the confidence level, etc.
  • the inference device determines the algorithm type of the algorithm corresponding to the model to be requested, and then addresses a suitable training device through the NRF; the inference device then requests the model from the training device, and for the request parameters and the corresponding model parameters, refer to the descriptions of the training device above and in the following embodiments.
  • FIG. 4 shows a schematic diagram of the hardware structure of a communication device provided by an embodiment of the application.
  • the communication device includes a processor 41, a communication line 44, and at least one communication interface (the communication interface 43 is exemplarily described in FIG. 4).
  • the processor 41 can be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the programs of this application.
  • CPU central processing unit
  • ASIC application-specific integrated circuit
  • the communication line 44 may include a path to transmit information between the aforementioned components.
  • the communication interface 43 uses any device such as a transceiver to communicate with other devices or communication networks, such as Ethernet, a radio access network (RAN), a wireless local area network (WLAN), etc.
  • RAN radio access network
  • WLAN wireless local area networks
  • the communication device may further include a memory 42.
  • the memory 42 may be, but is not limited to, a read-only memory (ROM) or other types of static storage devices that can store static information and instructions, a random access memory (RAM) or other types of dynamic storage devices that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
  • the memory can exist independently and be connected to the processor through the communication line 44, or the memory can be integrated with the processor.
  • the memory 42 is used to store computer execution instructions for executing the solution of the application, and the processor 41 controls the execution.
  • the processor 41 is configured to execute computer-executable instructions stored in the memory 42 to implement a method for determining device information provided in the following embodiments of the present application.
  • the computer-executable instructions in the embodiments of the present application may also be referred to as application program codes, which are not specifically limited in the embodiments of the present application.
  • the processor 41 may include one or more CPUs, such as CPU0 and CPU1 in FIG. 4.
  • the communication device may include multiple processors, such as the processor 41 and the processor 45 in FIG. 4.
  • processors can be a single-CPU (single-CPU) processor or a multi-core (multi-CPU) processor.
  • the processor here may refer to one or more devices, circuits, and/or processing cores for processing data (for example, computer program instructions).
  • the specific structure of the execution subject of the method for determining device information is not particularly limited in the embodiments of this application, as long as it can communicate by running a program that records the code of the method for determining device information of the embodiments of this application.
  • the execution subject of the method for determining device information provided in an embodiment of the present application may be a service discovery entity, or a communication device applied in the service discovery entity, for example, a chip, which is not limited in this application.
  • the execution subject of the method for determining device information provided by the embodiment of the present application may be an inference device, or a communication device applied to the inference device, such as a chip, which is not limited in this application.
  • the execution subject of the method for determining device information provided in the embodiment of the present application may be a training device, or a communication device, such as a chip, applied to the training device, which is not limited in this application.
  • the following embodiments are described with an example in which the execution subject of a method for determining device information is an inference device, a service discovery entity, and a training device respectively.
  • an embodiment of the present application provides a method for determining device information, and the method includes:
  • Step 501 The inference device sends a first request to the service discovery entity, so that the service discovery entity receives the first request from the inference device.
  • the first request is used to request information of one or more training devices, and the first request includes the algorithm type (Algorithm Type) or the algorithm ID (Algorithm ID) of the first model requested by the inference device.
  • the training device set includes one or more training devices.
  • for the process by which the service discovery entity obtains the training device set, reference may be made to the description of the following embodiments, which will not be repeated here.
  • the first request may be a service-oriented network function discovery request (Nnrf_NFDiscovery_Request).
  • the first request may also include any one or more of the following information: location information of the inference device, address information of the inference device, or identification of the inference device.
  • the first request carries demand information, and the demand information is used to request information about one or more training devices that meet the requirements of the inference device.
  • the requirement information includes the algorithm type of the first model requested by the inference device, the algorithm identification of the first model that the inference device needs to request, the location information of the inference device, the address information of the inference device, or the identification of the inference device.
  • the address information of the inference device is used to determine the IP address of the inference device, and the location information of the inference device is used to determine the location or geographic area of the inference device.
  • the address information of the inference device is used to index or find the inference device, and the location information of the inference device is used to index the area covered by the inference device.
  • the algorithm type of the first model requested by the inference device is any one or more of the following algorithm types: regression, clustering, association analysis, classification, and recommendation.
  • Each specific algorithm type can correspond to one or more algorithms.
  • the inference equipment is deployed by the operator.
  • the operator decides to trigger a specific data analysis task, preliminarily determines the type of algorithm required for the data analysis task (such as a classification task or a regression task), and then addresses training devices that meet the requirements through its own inference device.
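As a sketch of the first request in step 501, the hypothetical helper below assembles the request payload. The service operation name Nnrf_NFDiscovery_Request comes from the text above; the payload keys and the helper itself are illustrative assumptions, not a normative 3GPP encoding.

```python
# Illustrative builder for the first request (step 501). The request must
# carry the algorithm type or algorithm ID of the first model, and may also
# carry the inference device's location, address, or identification.
def build_first_request(algorithm_type=None, algorithm_id=None,
                        inference_location=None, inference_address=None,
                        inference_id=None):
    if algorithm_type is None and algorithm_id is None:
        raise ValueError("request must carry an algorithm type or algorithm ID")
    request = {"service": "Nnrf_NFDiscovery_Request"}  # name from the patent text
    if algorithm_type is not None:
        request["algorithm_type"] = algorithm_type      # Algorithm Type
    if algorithm_id is not None:
        request["algorithm_id"] = algorithm_id          # Algorithm ID
    if inference_location is not None:
        request["inference_device_location"] = inference_location
    if inference_address is not None:
        request["inference_device_address"] = inference_address
    if inference_id is not None:
        request["inference_device_id"] = inference_id
    return request

# Example: an operator triggering a classification task from area-1.
req = build_first_request(algorithm_type="classification",
                          inference_location="area-1")
```

The optional parameters mirror the "any one or more of the following information" wording above: only the fields the inference device actually supplies appear in the request.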
  • Step 502 The service discovery entity determines the information of the one or more training devices according to the algorithm type of the first model requested by the inference device, and the information of the training devices includes: capability information.
  • the capability information includes any one or more of the following information corresponding to any algorithm: algorithm type, algorithm identification, algorithm performance evaluation index, algorithm convergence time, algorithm convergence speed, and algorithm confidence.
  • the convergence speed of an algorithm is expressed as the number of iterations used to achieve the same algorithm performance evaluation index requirement.
  • algorithm performance evaluation indicators may include: square error, accuracy rate, recall rate, and F-Score (the harmonic mean of precision and recall).
  • Table 2 shows the algorithm performance index measurement methods corresponding to different algorithms.
  • TP true positive
  • TN true negative
  • FP false positive
  • FN false negative
  • the classifier evaluation metrics include accuracy rate (also called recognition rate), sensitivity or recall, specificity, precision, F1, and Fβ.
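These classifier metrics can all be computed from the TP, TN, FP, and FN counts. The sketch below uses the standard textbook formulas; it does not reproduce the patent's Table 2, and the function name is an illustration.

```python
# Standard classifier evaluation metrics from a confusion matrix.
def classifier_metrics(tp, tn, fp, fn, beta=1.0):
    total = tp + tn + fp + fn
    accuracy = (tp + tn) / total          # accuracy rate (recognition rate)
    recall = tp / (tp + fn)               # sensitivity / recall
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    # F-beta: weighted harmonic mean of precision and recall (beta=1 gives F1)
    f_beta = ((1 + beta**2) * precision * recall) / (beta**2 * precision + recall)
    return {"accuracy": accuracy, "recall": recall,
            "specificity": specificity, "precision": precision,
            "f_score": f_beta}

m = classifier_metrics(tp=80, tn=90, fp=10, fn=20)
# precision = 80/90 ≈ 0.889, recall = 80/100 = 0.8, accuracy = 170/200 = 0.85
```

A training device could report metrics of this kind as its algorithm performance evaluation index, with Fβ (beta ≠ 1) weighting recall more or less heavily than precision.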
  • Step 503 The service discovery entity sends the information of one or more training devices to the inference device, so that the inference device receives the information of one or more training devices from the service discovery entity, and the information of the training device includes: capability information.
  • the service discovery entity may send a first response to the inference device.
  • the first response includes: information about one or more training devices.
  • the first response may be: a service-oriented network function discovery request response (Nnrf_NFDiscovery_Request response).
  • the information of the training device may further include any one or more of the location information of the training device and the load of the training device.
  • Step 504 The inference device determines the first training device from the information of one or more training devices according to preset conditions.
  • Step 504 in the embodiment of the present application may also be implemented in the following manner: the inference device selects, based on the information of the one or more training devices, one of them as the first training device.
  • the embodiment of the application provides a method for determining device information.
  • the inference device can provide the service discovery entity with the algorithm type (Algorithm Type) or algorithm ID (Algorithm ID) of the first model requested by the inference device, so that the inference device can obtain the information of one or more training devices from the service discovery entity, which enables the inference device to correctly address one or more training devices.
  • this subsequently assists the inference device in selecting a suitable first training device for model training, which can ensure that the model training process is smooth, the model training speed is fast, and the model generalization ability is as strong as possible.
  • the separate deployment of training equipment and inference equipment can also reduce deployment costs.
  • the method provided in the embodiment of the present application may further include:
  • Step 505 The first training device or the second training device sends a second request to the service discovery entity, so that the service discovery entity receives the second request from the first training device or the second training device.
  • the second request from the first training device is used to request to register or update the information of the first training device at the service discovery entity, and the second request includes any one or more of the following information of the first training device: address information, location information, load, or capability information. The first training device is any training device in the training device set, or the first training device or the second training device is any one of the one or more training devices, and the one or more training devices belong to the training device set.
  • the second request from the second training device is used to request to register or update the information of the second training device at the service discovery entity.
  • any training device in the embodiments of the present application can register or update its information with the service discovery entity through step 505; step 505 is described by taking the first training device registering or updating the information of the first training device with the service discovery entity as an example.
  • the information of any training device may be referred to as artificial intelligence (AI) information.
  • the process by which any training device registers with the service discovery entity or updates its information is referred to as the AI capability registration process.
  • the address information of any training device in the embodiments of this application is used to determine the IP address of the training device.
  • the location information of any training device is used to determine the location of the training device. For example, the area where the training equipment is located.
  • the second request itself has the function of requesting to register or update the information of the first training device at the service discovery entity, or the second request carries indication information for requesting to register or update the information of the first training device at the service discovery entity.
  • after each training device in the training device set registers or updates its information with the service discovery entity, the information of each training device is stored at the service discovery entity.
  • step 502 in the embodiment of the present application can be implemented in the following manner: the service discovery entity determines the information of the one or more training devices from the training device set according to the algorithm type or algorithm identification of the first model requested by the inference device.
  • the one or more training devices are training devices that support the algorithm type required by the inference device, and/or training devices whose distance from the inference device meets preset requirements. For example, the one or more training devices are the training devices closest to the inference device.
  • alternatively, the one or more training devices are training devices whose load is lower than a preset load threshold.
  • one or more training devices are training devices that support the algorithm identification required by the inference device.
  • for example, the service discovery entity first determines, from the training device information stored at the service discovery entity, one or more training devices whose distance to the inference device meets the preset requirements (for example, the training devices closest to the inference device); then, the service discovery entity filters out, from the training device set, training devices that do not support the required algorithm type according to the algorithm type requirement of the inference device; in addition, the service discovery entity may further filter out training devices with a heavy load, and finally obtain one or more training devices that meet the requirements of the inference device.
  • the preset requirements for example, the closest to the inference device. Or multiple training devices.
  • the service discovery entity filters out training devices that do not support the algorithm type requirements from the training device set according to the algorithm type requirements of the inference device.
  • the service discovery entity may further filter out training devices with a heavy load, and finally obtain one or more training devices that meet the requirements of the inference device.
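The three-stage filtering above (proximity, algorithm type, then load) can be sketched as follows. This is a minimal illustration; the data structure, field names, and thresholds are assumptions made for the example, not part of the embodiment.

```python
from dataclasses import dataclass

@dataclass
class TrainingDevice:
    device_id: str
    algorithm_types: set   # algorithm types the device supports, e.g. {"regression"}
    distance_km: float     # distance from the inference device
    load: float            # current load in [0.0, 1.0]

def filter_training_devices(devices, required_type, max_distance_km, load_threshold):
    """Apply the filters in the order described: distance, algorithm type, load."""
    near = [d for d in devices if d.distance_km <= max_distance_km]
    capable = [d for d in near if required_type in d.algorithm_types]
    return [d for d in capable if d.load < load_threshold]
```

A device survives only if it passes all three filters, mirroring the successive elimination the text describes.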
  • step 504 in the embodiment of the present application can be implemented in the following manner:
  • Step 5041: The inference device determines, as the first training device, the training device whose algorithm convergence time among the one or more training devices meets a preset time requirement.
  • The preset time requirement may be that the algorithm convergence time is the shortest, or that the algorithm convergence time meets a preset convergence time threshold.
  • For example, training device 1 corresponds to algorithm convergence time T1, training device 2 corresponds to algorithm convergence time T2, and training device 3 corresponds to algorithm convergence time T3, where T1 > T2 > T3. The inference device can then determine that the first training device is training device 3.
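Step 5041's shortest-convergence-time selection can be sketched as below; representing the per-device convergence times as a dictionary is an illustrative assumption.

```python
def pick_shortest_convergence(convergence_times):
    """Return the device ID whose algorithm convergence time is shortest."""
    return min(convergence_times, key=convergence_times.get)
```

With T1 > T2 > T3 as in the example above, training device 3 is selected.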
  • Step 5042: The inference device determines, as the first training device, the training device whose algorithm performance evaluation index among the one or more training devices meets the evaluation index requirement.
  • The evaluation index requirement may be that the algorithm performance evaluation index is the highest, or that the algorithm performance evaluation index meets a preset algorithm performance evaluation index threshold.
  • The preset conditions provided in the embodiments of the present application include any one or more of the following: the algorithm convergence time corresponding to the algorithm of the first training device meets the preset time requirement among the algorithm convergence times corresponding to the one or more algorithms of the one or more training devices; or, the algorithm performance evaluation index corresponding to the algorithm of the first training device meets the evaluation index requirement among the algorithm performance evaluation indexes corresponding to the one or more algorithms of the one or more training devices.
  • For example, the algorithm convergence time corresponding to the algorithm of the first training device is the shortest among the algorithm convergence times corresponding to the one or more algorithms of the one or more training devices; or, the algorithm performance evaluation index corresponding to the algorithm of the first training device is the highest among the algorithm performance evaluation indexes corresponding to the one or more algorithms of the one or more training devices.
  • The preset condition provided in the embodiments of the present application may further include: the load of the first training device meets the load requirement among the one or more training devices.
  • Meeting the load requirement may mean that the load is the lowest, or that the load is lower than or equal to a preset load threshold.
  • the load of the first training device is the lowest among the one or more training devices.
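One way to combine the preset conditions (shortest convergence time, highest performance evaluation index, lowest load) is a lexicographic ranking, sketched below. Treating the conditions as ordered tie-breakers is an assumption made for the example; the embodiment allows any one or more of them to be applied.

```python
def select_first_training_device(candidates):
    """candidates: list of dicts with 'id', 'convergence_time', 'eval_index', 'load'.
    Prefer shorter convergence time, then higher evaluation index, then lower load."""
    best = min(candidates,
               key=lambda d: (d["convergence_time"], -d["eval_index"], d["load"]))
    return best["id"]
```

The negated evaluation index turns "highest index wins" into a minimization, so one `min` call ranks all three criteria.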
  • the method provided in the embodiment of the present application further includes:
  • Step 506 The inference device sends a fourth request to the first training device, so that the first training device receives the fourth request from the inference device.
  • The fourth request is used to request information of the first model, and includes any one or more of the following information corresponding to the first model: algorithm type, algorithm identifier, algorithm performance requirement, data type (Event ID list), and data address.
  • the fourth request carries second information
  • the second information is used to request information of the first model.
  • The second information includes any one or more of the following information corresponding to the first model: algorithm type, algorithm identifier, algorithm performance requirement, data type (Event ID list), and data address.
  • The algorithm identifier carried in the fourth request may indicate, for example, linear regression, support vector machine, or logistic regression.
  • The algorithm performance requirement may be, for example, that the sum of squared errors is less than or equal to 0.001.
  • the algorithm performance requirement is used to indicate the algorithm performance that the first model needs to meet.
  • The data type field can represent one or more data types.
  • the data type is used by the first training device to determine the type of data that needs to be collected when training the first model.
  • the data address is used to determine the address of the data that needs to be collected when training the first model.
  • The data address may be an address, IP address, or identifier (ID) of a network function (NF) capable of providing the data.
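The fourth request's information elements could be assembled, for example, as the following structure. All field names and values here are hypothetical; the embodiment names the information elements but not an encoding.

```python
# Hypothetical encoding of the fourth request's information elements.
fourth_request = {
    "algorithm_type": "regression",               # algorithm type
    "algorithm_id": "linear_regression",          # algorithm identifier
    "performance_requirement": {                  # algorithm performance requirement
        "metric": "sum_squared_error",
        "max_value": 0.001,
    },
    "event_id_list": ["event_1", "event_2"],      # data type(s) to collect
    "data_address": "192.0.2.10",                 # address/IP/ID of the NF providing the data
}
```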
  • Step 507: The first training device determines the information of the first model according to the fourth request, where the information of the first model includes any one or more of the following information: model identifier, model input, and model parameters.
  • Step 507 in this embodiment of the present application may be implemented in the following manner: the first training device collects training data according to the data type and the data address. The first training device then performs model training on the collected data using the algorithm determined by the algorithm identifier to obtain the information of the first model, such that the performance index of the first model meets the algorithm performance requirement.
  • For example, h(x) = w_0x_0 + w_1x_1 + w_2x_2 + w_3x_3 + w_4x_4 + w_5x_5 + ... + w_Dx_D.
  • h(x) represents the label data, which is also the Model Output list.
  • the model output list, model output, and Pre-Process Function ID in Table 3 correspond to h(x).
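As an illustration of step 507, the sketch below fits the linear model h(x) = w_0x_0 + ... + w_Dx_D by plain batch gradient descent and checks the result against an assumed performance requirement (sum of squared errors ≤ 0.001). The training procedure and hyperparameters are assumptions; the embodiment does not prescribe a particular optimizer.

```python
def train_linear_model(samples, labels, lr=0.01, epochs=2000):
    """Fit h(x) = w_0*x_0 + w_1*x_1 + ... + w_D*x_D by batch gradient descent."""
    dim = len(samples[0])
    n = len(samples)
    w = [0.0] * dim
    for _ in range(epochs):
        grad = [0.0] * dim
        for x, y in zip(samples, labels):
            err = sum(wj * xj for wj, xj in zip(w, x)) - y
            for j in range(dim):
                grad[j] += 2.0 * err * x[j] / n
        w = [wj - lr * gj for wj, gj in zip(w, grad)]
    return w

def sum_squared_error(w, samples, labels):
    """Performance check against a requirement such as SSE <= 0.001."""
    return sum(
        (sum(wj * xj for wj, xj in zip(w, x)) - y) ** 2
        for x, y in zip(samples, labels)
    )
```

Here x_0 can be a constant 1 feature so that w_0 acts as the intercept, matching the formula in the text.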
  • Table 4 shows the model representations listed in the embodiments of this application.
  • Table 5 shows the preprocessing methods listed in the embodiments of this application.
  • Step 508 The first training device sends the information of the first model to the inference device, so that the inference device receives the information of the first model from the first training device.
  • the information of the first model further includes any one or more of the following information: model output, model additional information, and model evaluation index results.
  • The result of the model evaluation index is the optimal value of the model evaluation index, for example, 0.0003.
  • The model additional information is, for a neural network, for example, the number of hidden layers and the activation function used by each layer.
  • the inference device may deploy the first model according to the information of the first model.
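The information of the first model returned in step 508 might be represented, for instance, as below; the field names and values are hypothetical placeholders for the information elements listed in the text (model identifier, input, parameters, output, additional information, evaluation result).

```python
# Hypothetical representation of the first model's information.
first_model_info = {
    "model_id": "model_001",
    "model_input": ["x_0", "x_1"],                       # Model Input list
    "model_parameters": {"weights": [1.0, 2.0]},
    "model_output": "h(x)",                              # Model Output list
    "additional_info": {                                 # e.g. for a neural network
        "hidden_layers": 2,
        "activation_per_layer": ["relu", "relu"],
    },
    "evaluation_result": {"sum_squared_error": 0.0003},  # optimal value of the index
}
```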
  • the method provided in this embodiment of the present application may further include after step 508:
  • Step 509 The inference device sends a third request to the service discovery entity, so that the service discovery entity receives the third request from the inference device.
  • the third request is used to register or update the information of the first model.
  • The third request includes any one or more of the following information: the Analytics ID corresponding to the analysis result of the first model, and the valid area, service area, or coverage area of the analysis result corresponding to the first model.
  • The third request may also include the address information, location information, or identifier of the inference device.
  • the third request carries a registration instruction or an update instruction.
  • the registration instruction is used to indicate the registration of the first model information.
  • the update instruction is used to instruct to update the information of the first model.
  • The network element that needs to use the data analysis result corresponding to the Analytics ID queries the service discovery entity for the address of the inference device according to the Analytics ID, and then requests the data analysis result from the inference device according to the Analytics ID.
  • The inference device further subscribes to online inference data from the network element that provides data for generating the data analysis result.
  • The inference device determines the data analysis result based on the model and the obtained online inference data, and sends the data analysis result to the consumer network element.
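The consumer flow above (look up the inference device by Analytics ID, then request the data analysis result, which the inference device computes from its model and the online inference data) can be sketched as follows; the class and method names are invented for illustration.

```python
class ServiceDiscoveryEntity:
    def __init__(self):
        self._registry = {}  # Analytics ID -> inference device address

    def register(self, analytics_id, address):
        self._registry[analytics_id] = address

    def lookup(self, analytics_id):
        return self._registry[analytics_id]

class InferenceDevice:
    def __init__(self, model_weights):
        self.model_weights = model_weights

    def analyze(self, online_data):
        # data analysis result from the model and the online inference data
        return sum(w * x for w, x in zip(self.model_weights, online_data))

def consumer_request(discovery, devices_by_address, analytics_id, online_data):
    address = discovery.lookup(analytics_id)   # query the inference device's address
    device = devices_by_address[address]
    return device.analyze(online_data)         # request the data analysis result
```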
  • Each network element, such as the first training device, the inference device, and the service discovery entity, includes corresponding hardware structures and/or software modules for performing each function in order to realize the above-mentioned functions.
  • The present application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a certain function is executed by hardware or by computer-software-driven hardware depends on the specific application and the design constraints of the technical solution. Professionals may use different methods for each specific application to implement the described functions, but such implementations should not be considered beyond the scope of this application.
  • The embodiments of the present application can divide the first training device, the inference device, and the service discovery entity into functional units according to the above-mentioned method examples.
  • For example, each functional unit can be divided corresponding to each function, or two or more functions can be integrated into one processing unit.
  • The above-mentioned integrated unit can be implemented in the form of hardware or a software functional unit. It should be noted that the division of units in the embodiments of the present application is illustrative and is only a logical function division; there may be other division methods in actual implementation.
  • The method of the embodiments of the present application is described above with reference to FIGS. 1 to 6; the apparatus for determining device information that executes the foregoing method, provided by the embodiments of the present application, is described below. Those skilled in the art can understand that the methods and apparatuses can be combined and cross-referenced.
  • The apparatus for determining device information provided in the embodiments of the present application can execute the steps of the above method for determining the device where the model is located.
  • The steps of the method for determining the device where the model is located are executed by the inference device, the service discovery entity, or the first training device.
  • FIG. 7 shows a device for determining device information involved in the foregoing embodiment.
  • the device for determining device information may include: a processing unit 101 and a communication unit 102.
  • the device for determining device information is an inference device, or a chip applied in an inference device.
  • The processing unit 101 is configured to support the apparatus for determining device information in executing step 504 performed by the inference device in the foregoing embodiment.
  • the communication unit 102 is configured to support the device for determining device information to execute the sending action performed by the inference device in step 501 of the foregoing embodiment.
  • The communication unit 102 is configured to support the apparatus for determining device information in executing the receiving action performed by the inference device in step 503 of the foregoing embodiment.
  • The processing unit 101 is specifically configured to support the apparatus for determining device information in executing step 5041 and step 5042 in the foregoing embodiment.
  • the communication unit 102 is further configured to support the device for determining device information to execute the sending actions performed by the inference device in step 506 and step 509 of the foregoing embodiment.
  • the communication unit 102 is further configured to support the device for determining device information to execute the receiving action performed by the inference device in step 508 of the foregoing embodiment.
  • the device for determining device information is a service discovery entity, or a chip applied to the service discovery entity.
  • the processing unit 101 is configured to support the device for determining device information to execute step 502 performed by the service discovery entity in the foregoing embodiment.
  • the communication unit 102 is configured to support the device for determining device information to execute the receiving action performed by the service discovery entity in step 501 of the foregoing embodiment.
  • the communication unit 102 is configured to support the device for determining device information to execute the sending action performed by the service discovery entity in step 503 of the foregoing embodiment.
  • the communication unit 102 is configured to support the device for determining device information to perform the receiving action performed by the service discovery entity in step 505 of the foregoing embodiment.
  • the device for determining device information is a first training device, or a chip applied to the first training device.
  • The communication unit 102 is configured to support the apparatus for determining device information in executing the receiving action performed by the first training device in step 506 of the above-mentioned embodiment, and the sending action performed by the first training device in step 508.
  • the processing unit 101 is configured to support the device for determining device information to execute step 507 of the foregoing embodiment.
  • the communication unit 102 is configured to support the device for determining device information to execute the sending action performed by the first training device in step 505 of the foregoing embodiment.
  • the communication unit 102 is configured to support the device for determining device information to execute the receiving action performed by the first training device in step 506 of the foregoing embodiment.
  • FIG. 8 shows a schematic diagram of a possible logical structure of the apparatus for determining device information involved in the foregoing embodiment.
  • the device for determining device information includes: a processing module 112 and a communication module 113.
  • the processing module 112 is configured to control and manage the actions of the device for determining device information.
  • the processing module 112 is configured to perform information/data processing steps on the device for determining device information.
  • the communication module 113 is used to support the steps of information/data sending or receiving by the device for determining device information.
  • the apparatus for determining device information may further include a storage module 111 for storing program codes and data that can be used by the apparatus for determining device information.
  • the device for determining device information is an inference device, or a chip applied in an inference device.
  • the processing module 112 is configured to support the device for determining device information to execute step 504 by the inference device in the foregoing embodiment.
  • the communication module 113 is configured to support the device for determining device information to execute the sending action performed by the inference device in step 501 of the foregoing embodiment.
  • The communication module 113 is configured to support the apparatus for determining device information in executing the receiving action performed by the inference device in step 503 of the foregoing embodiment.
  • the processing module 112 is specifically configured to support an apparatus for determining device information to perform step 5041 and step 5042 in the foregoing embodiment.
  • the communication module 113 is also used to support the device for determining device information to execute the sending actions performed by the inference device in step 506 and step 509 of the foregoing embodiment.
  • the communication module 113 is further configured to support the device for determining device information to perform the receiving action performed by the inference device in step 508 of the foregoing embodiment.
  • the device for determining device information is a service discovery entity, or a chip applied to the service discovery entity.
  • the processing module 112 is configured to support the device for determining device information to execute step 502 performed by the service discovery entity in the foregoing embodiment.
  • the communication module 113 is configured to support the device for determining device information to execute the receiving action performed by the service discovery entity in step 501 of the foregoing embodiment.
  • the communication module 113 is configured to support the device for determining device information to execute the sending action performed by the service discovery entity in step 503 of the foregoing embodiment.
  • the communication module 113 is configured to support the device for determining device information to execute the receiving action performed by the service discovery entity in step 505 of the foregoing embodiment.
  • the device for determining device information is a first training device, or a chip applied to the first training device.
  • The communication module 113 is configured to support the apparatus for determining device information in executing the receiving action performed by the first training device in step 506 of the foregoing embodiment, and the sending action performed by the first training device in step 508.
  • the processing module 112 is configured to support the device for determining device information to execute step 507 of the foregoing embodiment.
  • the communication module 113 is configured to support the device for determining device information to execute the sending action performed by the first training device in step 505 of the foregoing embodiment.
  • the communication module 113 is configured to support the device for determining device information to execute the receiving action performed by the first training device in step 506 of the foregoing embodiment.
  • The processing module 112 may be a processor or a controller, for example, a central processing unit, a general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It can implement or execute the various exemplary logical blocks, modules, and circuits described in conjunction with the disclosure of this application.
  • the processor may also be a combination that implements computing functions, for example, a combination of one or more microprocessors, a combination of a digital signal processor and a microprocessor, and so on.
  • the communication module 113 may be a transceiver, a transceiver circuit, or a communication interface.
  • the storage module 111 may be a memory.
  • the apparatus for determining device information involved in this application may be the communication device shown in FIG. 4.
  • the device for determining device information is an inference device, or a chip applied in an inference device.
  • the processor 41 or the processor 45 is configured to support the device for determining device information to execute step 504 by the inference device in the foregoing embodiment.
  • the communication interface 43 is configured to support the device for determining device information to execute the sending action performed by the inference device in step 501 of the foregoing embodiment.
  • The communication interface 43 is configured to support the apparatus for determining device information in executing the receiving action performed by the inference device in step 503 of the foregoing embodiment.
  • the processor 41 or the processor 45 is specifically configured to support an apparatus for determining device information to perform step 5041 and step 5042 in the foregoing embodiment.
  • the communication interface 43 is also used to support the device for determining device information to execute the sending actions performed by the inference device in step 506 and step 509 of the foregoing embodiment.
  • the communication interface 43 is also used to support the device for determining device information to execute the receiving action performed by the inference device in step 508 of the foregoing embodiment.
  • the device for determining device information is a service discovery entity, or a chip applied to the service discovery entity.
  • the processor 41 or the processor 45 is configured to support the device for determining device information to execute step 502 performed by the service discovery entity in the foregoing embodiment.
  • the communication interface 43 is configured to support the device for determining device information to execute the receiving action performed by the service discovery entity in step 501 of the foregoing embodiment.
  • the communication interface 43 is configured to support the device for determining device information to execute the sending action performed by the service discovery entity in step 503 of the foregoing embodiment.
  • the communication interface 43 is configured to support the device for determining device information to execute the receiving action performed by the service discovery entity in step 505 of the foregoing embodiment.
  • the device for determining device information is a first training device, or a chip applied to the first training device.
  • the communication interface 43 is used to support the device for determining device information to execute the receiving action performed by the first training device in step 506 and the sending action performed by the first training device in step 508.
  • the processor 41 or the processor 45 is configured to support the device for determining device information to execute step 507 of the foregoing embodiment.
  • the communication interface 43 is configured to support the device for determining device information to execute the sending action performed by the first training device in step 505 of the foregoing embodiment.
  • the communication interface 43 is configured to support the device for determining device information to execute the receiving action performed by the first training device in step 506 of the foregoing embodiment.
  • FIG. 9 is a schematic diagram of the structure of a chip 150 provided by an embodiment of the present application.
  • the chip 150 includes one or more (including two) processors 1510 and a communication interface 1530.
  • the chip 150 further includes a memory 1540.
  • the memory 1540 may include a read-only memory and a random access memory, and provides operation instructions and data to the processor 1510.
  • a part of the memory 1540 may also include a non-volatile random access memory (NVRAM).
  • NVRAM non-volatile random access memory
  • the memory 1540 stores the following elements, execution modules or data structures, or their subsets, or their extended sets.
  • the corresponding operation is executed by calling the operation instruction stored in the memory 1540 (the operation instruction may be stored in the operating system).
  • One possible implementation manner is that the chips used by the first training device, the inference device, and the service discovery entity have similar structures, and different devices can use different chips to achieve their respective functions.
  • the processor 1510 controls the processing operations of any one of the first training device, the inference device, and the service discovery entity.
  • the processor 1510 may also be referred to as a central processing unit (CPU).
  • the memory 1540 may include a read-only memory and a random access memory, and provides instructions and data to the processor 1510. A part of the memory 1540 may also include NVRAM.
  • The processor 1510, the communication interface 1530, and the memory 1540 are coupled together by a bus system 1520, where the bus system 1520 may include a power bus, a control bus, and a status signal bus in addition to a data bus.
  • various buses are marked as the bus system 1520 in FIG. 9.
  • the method disclosed in the foregoing embodiments of the present application may be applied to the processor 1510 or implemented by the processor 1510.
  • The processor 1510 may be an integrated circuit chip with signal processing capabilities. In the implementation process, each step of the above method can be completed by an integrated logic circuit of hardware in the processor 1510 or by instructions in the form of software.
  • The above-mentioned processor 1510 may be a general-purpose processor, a digital signal processor (digital signal processing, DSP), an ASIC, a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the steps of the method disclosed in the embodiments of the present application can be directly embodied as being executed and completed by a hardware decoding processor, or executed and completed by a combination of hardware and software modules in the decoding processor.
  • The software module can be located in a mature storage medium in the field, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or registers.
  • the storage medium is located in the memory 1540, and the processor 1510 reads the information in the memory 1540, and completes the steps of the foregoing method in combination with its hardware.
  • the communication interface 1530 is used to perform the steps of receiving and sending the first training device, the inference device, and the service discovery entity in the embodiment shown in FIGS. 5-6.
  • the processor 1510 is configured to execute the processing steps of the first training device, the inference device, and the service discovery entity in the embodiments shown in FIGS. 5-6.
  • the above communication unit may be a communication interface of the device for receiving signals from other devices.
  • the communication unit is a communication interface used by the chip to receive signals or send signals from other chips or devices.
  • The embodiments of the present application may provide a computer-readable storage medium that stores instructions.
  • When the instructions are executed, the functions of the service discovery entity as shown in FIGS. 5 to 6 are realized.
  • the embodiment of the present application provides a computer-readable storage medium.
  • the computer-readable storage medium stores instructions. When the instructions are executed, the functions of the inference device shown in FIGS. 5 to 6 are realized.
  • the embodiment of the present application provides a computer-readable storage medium.
  • the computer-readable storage medium stores instructions. When the instructions are executed, the functions of the first training device shown in FIGS. 5 to 6 are realized.
  • the embodiment of the present application provides a computer program product including instructions.
  • the computer program product includes instructions. When the instructions are executed, the functions of the service discovery entity as shown in FIGS. 5 to 6 are realized.
  • the embodiments of the present application provide a computer program product including instructions.
  • the computer program product includes instructions. When the instructions are executed, the functions of the inference device shown in FIGS. 5 to 6 are realized.
  • the embodiments of the present application provide a computer program product including instructions.
  • the computer program product includes instructions. When the instructions are executed, the functions of the first training device shown in FIGS. 5 to 6 are realized.
  • the embodiment of the present application provides a chip, which is used in a first training device.
  • the chip includes at least one processor and a communication interface.
  • The communication interface is coupled to the at least one processor, and the processor is configured to execute instructions so as to realize the functions of the first training device shown in FIGS. 5 to 6.
  • the embodiment of the present application provides a chip, which is used in an inference device.
  • the chip includes at least one processor and a communication interface.
  • The communication interface is coupled with the at least one processor, and the processor is configured to execute instructions so as to realize the functions of the inference device shown in FIGS. 5 to 6.
  • The embodiment of the present application provides a chip, which is applied to a first terminal. The chip includes at least one processor and a communication interface, the communication interface is coupled with the at least one processor, and the processor is configured to execute instructions so as to realize the functions of the service discovery entity shown in FIGS. 5 to 6.
  • An embodiment of the present application provides a communication system, which includes an inference device and a service discovery entity.
  • the service discovery entity is used to perform the steps performed by the service discovery entity in FIGS. 5 to 7
  • the inference device is used to perform the steps performed by the inference device in FIGS. 5 to 6.
  • the communication system may further include: the first training device is configured to perform the steps performed by the first training device in FIG. 5 to FIG. 6.
  • The above-mentioned embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof.
  • When software is used, they may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer programs or instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, network equipment, user equipment, or other programmable devices.
  • The computer program or instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium.
  • For example, the computer program or instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center through wired or wireless means.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server or a data center that integrates one or more available media.
  • The usable medium may be a magnetic medium, such as a floppy disk, a hard disk, or a magnetic tape; an optical medium, such as a digital video disc (DVD); or a semiconductor medium, such as a solid state drive (SSD).


Abstract

A method, apparatus, and system for determining device information, relating to the field of communications technology, for enabling an inference device to correctly address a training device. The method includes: the inference device sends a first request to a service discovery entity, where the first request is used to request information of one or more training devices and includes the algorithm type or algorithm identifier of the first model requested by the inference device. The inference device receives the information of the one or more training devices from the service discovery entity, where the information of a training device includes capability information. The inference device determines a first training device from the information of the one or more training devices according to a preset condition.

Description

一种确定设备信息的方法、装置以及系统 技术领域
本申请实施例涉及通信技术领域,尤其涉及一种确定设备信息的方法、装置以及系统。
背景技术
目前,网络数据分析功能网元(Network Data Analytics Function,NWDAF)集训练模块和推理模块于一体。其中,训练模块用于基于训练数据训练模型,推理模块用于基于训练好的模型以及推理数据生成数据分析结果。
但是,由于训练模块的部署代价远远高于推理模块,为了降低部署成本,可以考虑将训练模块和推理模块分离,分别作为训练设备和推理设备单独部署。其中,部署成本高的训练设备集中云化部署,部署成本低的推理设备分布式部署。当训练设备和推理设备分别部署时,推理设备如何寻址训练设备是亟需解决的技术问题。
发明内容
本申请实施例提供一种确定设备信息的方法、装置以及系统,用以实现推理设备正确寻址训练设备。
为了达到上述目的,本申请实施例提供如下技术方案:
第一方面,本申请实施例提供一种确定设备信息的方法,包括:推理设备向服务发现实体发送用于请求一个或者多个训练设备的信息的第一请求。该第一请求携带包括推理设备请求的第一模型的算法类型、或算法的标识的第一信息。推理设备接收来自服务发现实体的一个或多个训练设备的信息。训练设备的信息包括:能力信息。推理设备根据预设条件,从一个或多个训练设备的信息中确定第一训练设备。
本申请实施例提供一种确定设备信息的方法。当训练设备和推理设备分离部署之后,不同的训练设备所在位置、负载以及支持的AI能力不同,而一个或多个训练设备的信息可以注册在服务发现实体处。因此,推理设备可以向服务发现实体提供推理设备请求的第一模型的算法类型(Algorithm Type)或者算法标识(Algorithm ID),以便于推理设备从服务发现实体处获取一个或多个训练设备的信息,从而实现推理设备正确寻址一个或多个训练设备,并在后续辅助推理设备选择合适的第一训练设备进行模型训练。这样可以保障模型训练过程畅通、模型训练速度快、模型泛化能力尽可能强。此外,训练设备和推理设备分离部署还可以降低部署成本。
在一种可能的实现方式中,本申请实施例提供的能力信息包括以下信息中的任一个或多个:算法类型、算法标识、算法性能评价指标、算法的收敛时间、算法收敛速度、算法置信度。这样便于推理设备根据能力信息选择满足推理设备需求的第一训练设备。
在一种可能的实现方式中,第一信息还包括以下信息中的任一个或多个:推理设备的位置信息、推理设备的地址信息。这样便于服务发现实体根据推理设备的位置信息、推理设备的地址信息,选择距离推理设备较近的一个或多个训练设备的信息。
在一种可能的实现方式中,预设条件包括以下任一个或多个:第一训练设备的算法对应的算法收敛时间为一个或多个训练设备中的一个或者多个算法对应的算法收敛时间中最快的;或,第一训练设备的一个或者多个算法对应的算法性能评价指标为一个或多个训练设备中的一个或者多个算法对应的算法性能评价指标最高的。可以提高第一训练设备训练第一模型的精度。
在一种可能的实现方式中,训练设备的信息,还包括以下信息中的任一个:位置信息、负载信息,预设条件还可以包括:第一训练设备的负载为一个或者多个训练设备中的负载最低的。通过选择负载最低的训练设备作为第一训练设备可以降低第一训练设备的处理负担。
在一种可能的实现方式中,本申请实施例提供的方法还包括:推理设备向第一训练设备发送用于请求第一模型的信息的第四请求。该第四请求包括第二信息,第二信息包括第一模型对应的以下信息中的任一个或多个:算法类型、算法标识、算法性能要求、数据类型、数据地址。推理设备接收来自第一训练设备的第一模型的信息,第一模型的信息包括以下信息中的任一个或多个:模型标识、模型输入、模型参数。这样便于第一训练设备根据第四请求确定推理设备所需要的第一模型满足的要求。
在一种可能的实现方式中,第一模型的信息还包括以下信息中的任一个或多个:模型输出、模型附加信息、模型评价指标的结果。例如,模型评价指标的结果可以为最优值。
在一种可能的实现方式中,推理设备向服务发现实体发送用于注册或者更新第一模型的信息的第三请求。第三请求包括以下信息中的任一个或者多个:第一模型对应的分析结果标识Analytics ID,第一模型对应的分析结果的有效区域或者服务区域或者覆盖区域。这样便于后续需要使用Analytics ID对应的数据分析结果的网元根据Analytics ID向服务发现实体查询推理设备的地址,然后该网元根据Analytics ID向推理设备请求数据分析结果。推理设备进一步向提供数据用于产生数据分析结果的网元订阅在线推理数据,推理设备基于模型以及获取的在线推理数据确定数据分析结果,并向消费者网元发送数据分析结果。
在一种可能的实现方式中,第三请求还包括推理设备的位置信息。
第二方面,本申请实施例提供一种确定设备信息的方法,包括:服务发现实体接收来自推理设备的用于请求一个或者多个训练设备的信息的第一请求。第一请求携带包括推理设备请求的第一模型的算法类型或算法的标识的第一信息。服务发现实体根据第一信息确定一个或多个训练设备的信息,训练设备的信息包括:能力信息。服务发现实体向推理设备发送一个或多个训练设备的信息。
在一种可能的实现方式中,一个或多个训练设备为支持推理设备需要的算法类型的训练设备,和/或,一个或多个训练设备为与推理设备之间的距离满足预设要求的训练设备。
在一种可能的实现方式中,一个或多个训练设备的负载低于预设负载阈值。
在一种可能的实现方式中,本申请实施例提供的方法还包括:服务发现实体接收来自第一训练设备的第二请求,第二请求用于请求在服务发现实体处注册或者更新第一训练设备的信息,第二请求包括第一训练设备的以下信息中的任一个或多个:地址信息、位置信息、负载、能力信息,第一训练设备为一个或者多个训练设备中任一个训练设备。
在一种可能的实现方式中,本申请实施例提供的方法还包括:服务发现实体向第一训练设备发送响应消息,该响应消息用于表示已成功注册或更新第一训练设备的信息。
在一种可能的实现方式中,能力信息包括以下信息中的任一个或多个:算法类型、算法标识、算法性能评价指标、算法的收敛时间、算法收敛速度、算法置信度。
在一种可能的实现方式中,本申请实施例提供的方法还包括:服务发现实体接收来自推理设备的第三请求,第三请求用于注册或者更新第一模型的信息,第三请求包括以下信息中的任一个或者多个:第一模型对应的分析结果标识Analytics ID,第一模型对应的分析结果的有效区域或者服务区域或者覆盖区域。
第三方面,本申请实施例提供一种确定设备信息的方法,包括:第一训练设备接收来自推理设备的第四请求。该第四请求包括第二信息,第二信息包括第一模型对应的以下信息中的任一个或多个:算法类型、算法标识、算法性能要求、数据类型、数据地址。第一训练设备根据第二信息,确定第一模型的信息,第一模型的信息包括以下信息中的任一个或多个:模型标识、模型输入、模型参数;第一训练设备向推理设备发送第一模型的信息。
在一种可能的实现方式中,第一训练设备根据第二信息,确定第一模型的信息,包括:第一训练设备根据数据类型、数据地址收集训练数据。第一训练设备根据算法标识确定的算法对数据进行模型训练,得到第一模型的信息,第一模型的性能指标满足算法性能要求。
在一种可能的实现方式中,第一模型的信息还包括以下信息中的任一个或多个:模型输出、模型附加信息、模型评价指标的结果。
在一种可能的实现方式中,本申请实施例提供的方法还包括:第一训练设备向服务发现实体发送第一训练设备的第二请求,第二请求用于请求在服务发现实体处注册或者更新第一训练设备的信息,第二请求包括第一训练设备的以下信息中的任一个或多个:地址信息、位置信息、负载、能力信息,第一训练设备为一个或者多个训练设备中任一个训练设备。
在一种可能的实现方式中,本申请实施例提供的方法还包括:第一训练设备接收来自服务发现实体的响应消息,该响应消息用于表示已成功注册或更新第一训练设备的信息。
第四方面,本申请实施例提供一种确定设备信息的装置,该确定设备信息的装置可以实现第一方面或第一方面的任意可能的实现方式中的方法,因此也能实现第一方面或第一方面任意可能的实现方式中的有益效果。该确定设备信息的装置可以为推理设备,也可以为可以支持推理设备实现第一方面或第一方面的任意可能的实现方式中的方法的装置,例如应用于推理设备中的芯片。该装置可以通过软件、硬件、或者通过硬件执行相应的软件实现上述方法。
一种示例,该确定设备信息的装置,包括:通信单元,用于向服务发现实体发送用于请求一个或者多个训练设备的信息的第一请求。该第一请求携带第一信息,第一信息包括该装置请求的第一模型的算法类型或算法的标识。通信单元,用于接收来自服务发现实体的一个或多个训练设备的信息,训练设备的信息包括:能力信息。处理单元,用于根据预设条件,从一个或多个训练设备的信息中确定第一训练设备。
在一种可能的实现方式中,本申请实施例提供的能力信息包括以下信息中的任一个或多个:算法类型、算法标识、算法性能评价指标、算法的收敛时间、算法收敛速度、算法置信度。
在一种可能的实现方式中,第一信息还包括以下信息中的任一个或多个:该装置的位置信息、该装置的地址信息。
在一种可能的实现方式中,预设条件包括以下任一个或多个:第一训练设备的算法对应的算法收敛时间为一个或多个训练设备中的一个或者多个算法对应的算法收敛时间中最快的;或,第一训练设备的一个或者多个算法对应的算法性能评价指标为一个或多个训练设备中的一个或者多个算法对应的算法性能评价指标最高的。
在一种可能的实现方式中,训练设备的信息,还包括以下信息中的任一个:位置信息、负载信息,预设条件还可以包括:第一训练设备的负载为一个或者多个训练设备中的负载最低的。
在一种可能的实现方式中,通信单元,还用于向第一训练设备发送用于请求第一模型的信息的第四请求。第四请求包括第二信息。其中,第二信息包括第一模型对应的以下信息中的任一个或多个:算法类型、算法标识、算法性能要求、数据类型、数据地址。通信单元,还用于接收来自第一训练设备的第一模型的信息,第一模型的信息包括以下信息中的任一个或多个:模型标识、模型输入、模型参数。
在一种可能的实现方式中,第一模型的信息还包括以下信息中的任一个或多个:模型输出、模型附加信息、模型评价指标的结果。
在一种可能的实现方式中,通信单元,用于向服务发现实体发送第三请求,第三请求用于注册或者更新第一模型的信息,第三请求包括以下信息中的任一个或者多个:第一模型对应的分析结果标识Analytics ID,第一模型对应的分析结果的有效区域或者服务区域或者覆盖区域。
在一种可能的实现方式中,第三请求还包括推理设备的位置信息。
另一种示例,本申请实施例提供一种确定设备信息的装置,该确定设备信息的装置可以是推理设备,也可以是推理设备内的芯片。当该确定设备信息的装置是推理设备时,该通信单元可以为通信接口。该处理单元可以是处理器。该确定设备信息的装置还可以包括存储单元。该存储单元可以是存储器。该存储单元,用于存储计算机程序代码,计算机程序代码包括指令。该处理单元执行该存储单元所存储的指令,以使该推理设备实现第一方面或第一方面的任意一种可能的实现方式中描述的一种确定设备信息的方法。当该确定设备信息的装置是推理设备内的芯片时,该处理单元可以是处理器,该通信单元可以统称为:通信接口。例如,通信接口可以为输入/输出接口、管脚或电路等。该处理单元执行存储单元所存储的计算机程序代码,以使该推理设备实现第一方面或第一方面的任意一种可能的实现方式中描述的一种确定设备信息的方法,该存储单元可以是该芯片内的存储单元(例如,寄存器、缓存等),也可以是该管理网元内的位于该芯片外部的存储单元(例如,只读存储器、随机存取存储器等)。
可选的,处理器、通信接口和存储器相互耦合。
第五方面,本申请实施例提供一种确定设备信息的装置,该确定设备信息的装置可以实现第二方面或第二方面的任意可能的实现方式中的方法,因此也能实现第二方面或第二方面任意可能的实现方式中的有益效果。该确定设备信息的装置可以为服务发现实体,也可以为可以支持服务发现实体实现第二方面或第二方面的任意可能的实现方式中的方法的装置,例如应用于服务发现实体中的芯片。该装置可以通过软件、硬件、或者通过硬件执行相应的软件实现上述方法。
一种示例,该确定设备信息的装置,包括:通信单元,用于接收来自推理设备的用于请求一个或者多个训练设备的信息的第一请求。该第一请求携带第一信息,第一信息包括推理设备请求的第一模型的算法类型或算法的标识。处理单元,用于根据第一信息确定一个或多个训练设备的信息,训练设备的信息包括:能力信息。通信单元,还用于向推理设备发送一个或多个训练设备的信息。
在一种可能的实现方式中,一个或多个训练设备为支持推理设备需要的算法类型的训练设备,和/或,一个或多个训练设备为与推理设备之间的距离满足预设要求的训练设备。
在一种可能的实现方式中,一个或多个训练设备的负载低于预设负载阈值。
在一种可能的实现方式中,通信单元,还用于接收来自第一训练设备的第二请求,第二请求用于请求在该装置处注册或者更新第一训练设备的信息,第二请求包括第一训练设备的以下信息中的任一个或多个:地址信息、位置信息、负载、能力信息,第一训练设备为一个或者多个训练设备中任一个训练设备。
在一种可能的实现方式中,能力信息包括以下信息中的任一个或多个:算法类型、算法标识、算法性能评价指标、算法的收敛时间、算法收敛速度、算法置信度。
在一种可能的实现方式中,通信单元,还用于接收来自推理设备的第三请求,第三请求用于注册或者更新第一模型的信息,第三请求包括以下信息中的任一个或者多个:第一模型对应的分析结果标识Analytics ID,第一模型对应的分析结果的有效区域或者服务区域或者覆盖区域。
另一种示例,本申请实施例提供一种确定设备信息的装置,该确定设备信息的装置可以是服务发现实体,也可以是服务发现实体内的芯片。当该确定设备信息的装置是服务发现实体时,该通信单元可以为通信接口。该处理单元可以是处理器。该通信装置还可以包括存储单元。该存储单元可以是存储器。该存储单元,用于存储计算机程序代码,计算机程序代码包括指令。该处理单元执行该存储单元所存储的指令,以使该服务发现实体实现第二方面或第二方面的任意一种可能的实现方式中描述的一种确定设备信息的方法。当该确定设备信息的装置是服务发现实体内的芯片时,该处理单元可以是处理器,该通信单元可以统称为:通信接口。例如,通信接口可以为输入/输出接口、管脚或电路等。该处理单元执行存储单元所存储的计算机程序代码,以使该服务发现实体实现第二方面或第二方面的任意一种可能的实现方式中描述的一种确定设备信息的方法,该存储单元可以是该芯片内的存储单元(例如,寄存器、缓存等),也可以是该服务发现实体内的位于该芯片外部的存储单元(例如,只读存储器、随机存取存储器等)。
可选的,处理器、通信接口和存储器相互耦合。
第六方面,本申请实施例提供一种确定设备信息的装置,该确定设备信息的装置可以实现第三方面或第三方面的任意可能的实现方式中的方法,因此也能实现第三方面或第三方面任意可能的实现方式中的有益效果。该通信装置可以为第一训练设备,也可以为可以支持第一训练设备实现第三方面或第三方面的任意可能的实现方式中的方法的装置,例如应用于第一训练设备中的芯片。该装置可以通过软件、硬件、或者通过硬件执行相应的软件实现上述方法。
一种示例,本申请实施例提供的一种确定设备信息的装置,包括:通信单元,用于接收来自推理设备的包括第二信息的第四请求。其中第二信息包括第一模型对应的以下信息中的任一个或多个:算法类型、算法标识、算法性能要求、数据类型、数据地址。处理单元,用于根据第二信息,确定第一模型的信息。其中,第一模型的信息包括以下信息中的任一个或多个:模型标识、模型输入、模型参数;通信单元,还用于向推理设备发送第一模型的信息。
在一种可能的实现方式中,第一训练设备根据第二信息,确定第一模型的信息,包括:第一训练设备根据数据类型、数据地址收集训练数据。第一训练设备根据算法标识确定的算法对数据进行模型训练,得到第一模型的信息,第一模型的性能指标满足算法性能要求。
在一种可能的实现方式中,第一模型的信息还包括以下信息中的任一个或多个:模型输出、模型附加信息、模型评价指标的结果。
在一种可能的实现方式中,通信单元,还用于向服务发现实体发送第一训练设备的第二请求,第二请求用于请求在服务发现实体处注册或者更新第一训练设备的信息,第二请求包括第一训练设备的以下信息中的任一个或多个:地址信息、位置信息、负载、能力信息,第一训练设备为一个或者多个训练设备中任一个训练设备。
在一种可能的实现方式中,通信单元,还用于接收来自服务发现实体的响应消息,该响应消息用于表示已成功注册或更新第一训练设备的信息。
第七方面,本申请实施例提供一种计算机可读存储介质,计算机可读存储介质中存储有计算机程序或指令,当计算机程序或指令在计算机上运行时,使得计算机执行如第一方面至第一方面的任意一种可能的实现方式中描述的一种确定设备信息的方法。
第八方面,本申请实施例提供一种计算机可读存储介质,计算机可读存储介质中存储有计算机程序或指令,当计算机程序或指令在计算机上运行时,使得计算机执行如第二方面至第二方面的任意一种可能的实现方式中描述的一种确定设备信息的方法。
第九方面,本申请实施例提供一种计算机可读存储介质,计算机可读存储介质中存储有计算机程序或指令,当计算机程序或指令在计算机上运行时,使得计算机执行如第三方面至第三方面的任意一种可能的实现方式中描述的一种确定设备信息的方法。
第十方面,本申请实施例提供一种包括指令的计算机程序产品,当指令在计算机上运行时,使得计算机执行第一方面或第一方面的各种可能的实现方式中描述的一种确定设备信息的方法。
第十一方面,本申请提供一种包括指令的计算机程序产品,当指令在计算机上运行时,使得计算机执行第二方面或第二方面的各种可能的实现方式中描述的一种确定设备信息的方法。
第十二方面,本申请提供一种包括指令的计算机程序产品,当指令在计算机上运行时,使得计算机执行第三方面或第三方面的各种可能的实现方式中描述的一种确定设备信息的方法。
第十三方面,本申请实施例提供了一种通信装置用于实现上述第一方面至第三方面中任一方面的各种可能的设计中的各种方法。该通信装置可以为上述推理设备,或者包含上述推理设备的装置。或者,该通信装置可以为上述服务发现实体,或者包含上述服务发现实体的装置。或者,该通信装置可以为上述第一训练设备,或者包含上述第一训练设备的装置。该通信装置包括实现上述方法相应的模块、单元、或手段(means),该模块、单元、或means可以通过硬件实现,软件实现,或者通过硬件执行相应的软件实现。该硬件或软件包括一个或多个与上述功能相对应的模块或单元。
第十四方面,本申请实施例提供了一种通信装置,该通信装置包括:至少一个处理器和通信接口。其中,当该通信装置运行时,该处理器执行该通信装置中存储的该计算机执行指令,以使该通信装置执行如上述第一方面至第三方面中任一方面的各种可能的设计中的任一项的方法。例如,该通信装置可以为推理设备,或者为应用于推理设备中的芯片。例如,该通信装置可以为服务发现实体,或者为应用于服务发现实体中的芯片。例如,该通信装置可以为第一训练设备,或者为应用于第一训练设备中的芯片。
应理解,上述第十四方面中描述的通信装置中还可以包括:总线和存储器,存储器用于存储代码和数据。可选的,至少一个处理器通信接口和存储器相互耦合。
第十五方面,本申请实施例提供一种通信装置,该通信装置包括处理器和存储介质,存储介质存储有指令,指令被处理器运行时,实现如第一方面或第一方面的各种可能的实现方式描述的一种确定设备信息的方法。
第十六方面,本申请实施例提供一种通信装置,该通信装置包括处理器和存储介质,存储介质存储有指令,指令被处理器运行时,实现如第二方面或第二方面的各种可能的实现方式描述的一种确定设备信息的方法。
第十七方面,本申请实施例提供一种通信装置,该通信装置包括处理器和存储介质,存储介质存储有指令,指令被处理器运行时,实现如第三方面或第三方面的各种可能的实现方式描述的一种确定设备信息的方法。
第十八方面,本申请实施例提供一种通信装置,该通信装置包括处理器,处理器和存储器耦合,存储器用于存储指令或计算机程序,指令或计算机程序被处理器运行时,用以实现如第一方面或第一方面的各种可能的实现方式描述的一种确定设备信息的方法。
第十九方面,本申请实施例提供一种通信装置,该通信装置包括处理器,存储器和处理器耦合,存储器用于存储指令或计算机程序,指令或计算机程序被处理器运行时,用以实现如第二方面或第二方面的各种可能的实现方式描述的一种确定设备信息的方法。
第二十方面,本申请实施例提供一种通信装置,该通信装置包括处理器,存储器和处理器耦合,存储器用于存储指令或计算机程序,指令或计算机程序被处理器运行时,实现如第三方面或第三方面的各种可能的实现方式描述的一种确定设备信息的方 法。
第二十一方面,本申请实施例提供了一种通信装置,该通信装置包括一个或者多个模块,用于实现上述第一方面、第二方面、第三方面的方法,该一个或者多个模块可以与上述第一方面、第二方面、第三方面的方法中的各个步骤相对应。
第二十二方面,本申请实施例提供一种芯片,该芯片包括处理器和通信接口,通信接口和处理器耦合,处理器用于运行计算机程序或指令,以实现第一方面或第一方面的各种可能的实现方式中所描述的一种确定设备信息的方法。通信接口用于与芯片之外的其它模块进行通信。
第二十三方面,本申请实施例提供一种芯片,该芯片包括处理器和通信接口,通信接口和处理器耦合,处理器用于运行计算机程序或指令,以实现第二方面或第二方面的各种可能的实现方式中所描述的一种确定设备信息的方法。通信接口用于与芯片之外的其它模块进行通信。
第二十四方面,本申请实施例提供一种芯片,该芯片包括处理器和通信接口,通信接口和处理器耦合,处理器用于运行计算机程序或指令,以实现第三方面或第三方面的各种可能的实现方式中所描述的一种确定设备信息的方法。通信接口用于与芯片之外的其它模块进行通信。
具体的,本申请实施例中提供的芯片还包括存储器,用于存储计算机程序或指令。
第二十五方面,本申请实施例提供一种通信系统,该通信系统包括:第四方面或第四方面的任一个可能的实现方式描述的装置,以及第五方面或第五方面的任一个可能的实现方式描述的装置。
在一种可选的实现方式中,该通信系统还可以包括:第六方面或第六方面的任一个可能的实现方式描述的装置。
上述提供的任一种装置或计算机存储介质或计算机程序产品或芯片或通信系统均用于执行上文所提供的对应的方法,因此,其所能达到的有益效果可参考上文提供的对应的方法中对应方案的有益效果,此处不再赘述。
附图说明
图1为目前数据分析网元的架构图;
图2为本申请实施例提供的一种场景图;
图3为本申请实施例提供的一种通信系统架构示意图;
图4为本申请实施例提供的一种通信设备的结构示意图;
图5为本申请实施例提供的一种确定设备信息的方法的流程示意图;
图6为本申请实施例提供的另一种确定设备信息的方法的流程示意图;
图7为本申请实施例提供的一种装置的结构示意图;
图8为本申请实施例提供的另一种装置的结构示意图;
图9为本申请实施例提供的一种芯片的结构示意图。
具体实施方式
为了便于清楚描述本申请实施例的技术方案,在本申请的实施例中,采用了“第一”、“第二”等字样对功能和作用基本相同的相同项或相似项进行区分。例如,第一训练设备和第二训练设备仅仅是为了区分不同的训练设备,并不对其先后顺序进行限定。本领域技术人员可以理解“第一”、“第二”等字样并不对数量和执行次序进行限定,并且“第一”、“第二”等字样也并不限定一定不同。
需要说明的是,本申请中,“示例性的”或者“例如”等词用于表示作例子、例证或说明。本申请中被描述为“示例性的”或者“例如”的任何实施例或设计方案不应被解释为比其他实施例或设计方案更优选或更具优势。确切而言,使用“示例性的”或者“例如”等词旨在以具体方式呈现相关概念。
本申请中,“至少一个”是指一个或者多个,“多个”是指两个或两个以上。“和/或”,描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B的情况,其中A,B可以是单数或者复数。字符“/”一般表示前后关联对象是一种“或”的关系。“以下至少一项(个)”或其类似表达,是指的这些项中的任意组合,包括单项(个)或复数项(个)的任意组合。例如,a,b,或c中的至少一项(个),可以表示:a,b,c,a-b,a-c,b-c,或a-b-c,其中a,b,c可以是单个,也可以是多个。
本申请实施例的技术方案可以应用于各种通信系统,例如:码分多址(code division multiple access,CDMA)、时分多址(time division multiple access,TDMA)、频分多址(frequency division multiple access,FDMA)、正交频分多址(orthogonal frequency-division multiple access,OFDMA)、单载波频分多址(single carrier FDMA,SC-FDMA)和其它系统等。术语“系统”可以和“网络”相互替换。3GPP的长期演进(long term evolution,LTE)以及基于LTE演进的各种版本是使用E-UTRA的UMTS的新版本。5G通信系统、新空口(new radio,NR)是正在研究当中的下一代通信系统。此外,本申请实施例提供的技术方案同样适用于面向未来的通信技术。
本申请实施例描述的系统架构以及业务场景是为了更加清楚的说明本申请实施例的技术方案,并不构成对于本申请实施例提供的技术方案的限定,本领域普通技术人员可知,随着网络架构的演变和新业务场景的出现,本申请实施例提供的技术方案对于类似的技术问题,同样适用。
如图1所示,NWDAF黑盒化,集中以下三个部件于一体,部署形态单一:数据湖(Data lake)、推理模块、训练模块。其中,数据湖,包括数据收集、数据预处理、特征工程等功能;训练模块包括模型训练、模型存储等功能。推理模块包括模型部署、模型应用等功能。
通常情况下,训练模块的部署代价远远高于推理模块,主要考虑以下两个方面:一方面,训练模块对硬件要求较高,比如要求人工智能(Artificial Intelligence,AI)芯片、大型集群、具备图形处理单元(Graphics Processing Unit,GPU)计算能力,而推理模块则使用一般的中央处理单元(Central Processing Unit,CPU)即可。另一方面,训练模块对AI工程设计人员的要求较高,特别是特征选择、模型训练需要较多的专业领域专家经验,而推理模块的设计人员往往只需要从训练模块获取模型的说明书实现模型本地部署即可。
因此,中小运营商、大运营商基于部署成本的考虑,对NWDAF网元的部署形态提出了各自的诉求。中小运营商(比如非洲某些国家的小型运营商)希望在Rel-17将 NWDAF网元中的AI模型训练功能从现有的Rel-16架构中剥离,然后通过AI公有云(比如SoftCOM AI)作为训练设备实现AI模型训练。在实际部署过程中,运营商仅部署推理设备,然后提供数据给训练设备实现模型的训练,进而推理设备从训练设备获取模型在本地部署,最后基于模型以及本网数据生成数据分析结果。
同样出于成本的考虑,大型运营商希望在Rel-17将NWDAF网元在同一个运营商内部分布式部署,即,将NWDAF的推理功能下沉到地市,而集中部署NWDAF网元的训练设备。比如,针对中国移动,可以考虑部署大区级NWDAF训练设备,然后省级或者市级或者县级部署具有推理模块的NWDAF网元,从而降低成本。
在图1中应用功能(application function,AF)网元可以向NWDAF网元提供业务数据(service data),例如,管理和维护(运维)(operation,administration,and maintenance,OAM)网元(也可以称为运行管理维护网元)、5G网元(基站或核心网网元(例如,接入与移动性管理功能(access and mobility management function,AMF)网元/策略控制功能(Policy Control Function)网元/用户面功能(User plane function,UPF)网元/会话管理功能(Session Management Function,SMF)网元))可以向NWDAF网元提供网络数据(Network Data)。
为此,如图2所示,图2示出了本申请实施例提供的一种训练设备与推理设备分离后的网络架构。
其中,从Rel-16NWDAF网元中剥离的训练设备的主要功能如下:训练数据收集:从推理设备或者数据库网元(例如,应用功能(application function,AF)网元)或者5G网元获取训练数据;模型训练:基于训练数据、算法以及训练平台(比如华为的网络人工智能引擎(Network Artificial Intelligence Engine,NAIE)、Google的Tensorflow、Facebook的Caffe、Amazon的MXnet)训练得到最优模型。模型存储:将训练所得最优模型序列化进行存储,以备调用请求该模型。
推理设备包括:数据湖、模型应用(Model application)和模型部署(Model deployment)。推理设备的主要功能如下:模型部署:从训练设备请求并接收模型,并在本地安装部署;模型输入数据收集:从网络网元收集模型输入数据,并基于模型得到模型输出;模型应用:基于模型输出校验、数据还原后确定数据分析结果(data Analytics),并准备对应的Analytic ID以备策略网元请求调用。
可选的,推理设备中可以保留一个轻量级模型训练模块,用于推理设备的本地训练。比如,与训练设备的重量级模型训练相比,推理设备中的轻量级模型不需要海量分布式数据存储、分布式数据处理,不需要大量的AI专家参与。
如图3所示,图3示出了本申请实施例提供一种确定设备信息的方法所应用的通信系统,该通信系统包括:服务发现实体10、与服务发现实体10通信的推理设备20。
在一种可选的实现方式中,该通信系统还可以包括:一个或多个训练设备(例如,训练设备301、训练设备302~训练设备30n)。其中,n为大于或等于1的整数。其中,推理设备20以及一个或多个训练设备之间可以通信。其中,服务发现实体10支持网络功能或网络服务的发现、注册、认证功能,在5GC中,服务发现实体10可以是网络存储功能(network repository function,NRF)网元。例如,服务发现实体10可以为域名系统(Domain Name System,DNS)服务器。
需要说明的是,本申请实施例以服务发现实体10为NRF网元为例,在未来网络中,服务发现实体10可以是NRF网元或有其他名称,本申请不作限定。
其中,训练设备具有以下任一个功能:向服务发现实体10注册能力信息、模型训练、模型存储。具体的,训练设备用于将训练设备的地址信息、训练设备的负载、训练设备所支持的算法列表(包括每个算法的类型、标识、附加信息、评价指标、算法收敛时间、置信度等)注册到服务发现实体10;从推理设备20接收模型请求(包括模型所使用的具体算法标识要求、模型的评价指标要求);向推理设备20反馈模型(包括模型标识、模型所用的算法标识、模型输入、模型输出、参数列表、模型评价指标的结果即最优值、算法附加信息)。
例如,推理设备可以为数据分析网元中具有模型部署、模型应用的模块,或者该推理设备即为数据分析网元。数据分析网元可以为NWDAF网元。
服务发现实体10具有以下任一个功能:从训练设备接收训练设备的注册信息,并保存。
注意:训练设备可以更新初始注册信息,特别是训练设备的负载(load)、算法性能指标、算法收敛时间等这些常见的可变量。从推理设备接收训练设备寻址请求,以寻找合适的训练设备的地址。其中,请求中可以携带想要使用的算法类型、推理设备的位置信息;向推理设备发送训练设备寻址的响应,包括训练设备的地址、训练设备支持的具体的算法的算法标识、算法的评价指标、算法收敛时间、置信度等。
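训练设备向服务发现实体注册、以及后续更新负载等可变量的过程,可以用如下Python代码示意。其中registry、load等名称均为本文为举例而假设的字段,并非本申请或3GPP接口的定义:

```python
# 示意性草图:训练设备信息在服务发现实体处的注册与更新。
# registry 用简单字典模拟服务发现实体保存的训练设备信息,字段名为假设示例。

def register_training_device(registry, device_id, info):
    """第二请求:注册训练设备的地址信息、位置信息、负载、能力信息等。"""
    registry[device_id] = dict(info)
    return {"result": "SUCCESS"}  # 响应消息:已成功注册训练设备的信息


def update_training_device(registry, device_id, **changes):
    """更新负载、算法性能指标、算法收敛时间等常见可变量。"""
    registry[device_id].update(changes)
    return {"result": "SUCCESS"}  # 响应消息:已成功更新训练设备的信息
```

例如,训练设备初次注册之后,后续更新时可以只携带发生变化的负载字段。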
推理设备,用于模型部署、模型应用。具体的,首先,推理设备确定要请求的模型对应算法的算法类型,并进一步向NRF寻址合适的训练设备;然后,推理设备向训练设备请求模型,请求参数和模型相应参数参见训练设备以及下述实施例中的描述。
图4所示为本申请实施例提供的通信设备的硬件结构示意图。本申请实施例中的服务发现实体10、推理设备20、训练设备的硬件结构均可以参考如图4所示的通信设备的硬件结构示意图。该通信设备包括处理器41,通信线路44以及至少一个通信接口(图4中示例性的以通信接口43为例进行说明)。
处理器41可以是一个通用中央处理器(central processing unit,CPU),微处理器,特定应用集成电路(application-specific integrated circuit,ASIC),或一个或多个用于控制本申请方案程序执行的集成电路。
通信线路44可包括一通路,在上述组件之间传送信息。
通信接口43,使用任何收发器一类的装置,用于与其他设备或通信网络通信,如以太网,无线接入网(radio access network,RAN),无线局域网(wireless local area networks,WLAN)等。
可选的,该通信设备还可以包括存储器42。
存储器42可以是只读存储器(read-only memory,ROM)或可存储静态信息和指令的其他类型的静态存储设备,随机存取存储器(random access memory,RAM)或者可存储信息和指令的其他类型的动态存储设备,也可以是电可擦可编程只读存储器(electrically erasable programmable read-only memory,EEPROM)、只读光盘(compact disc read-only memory,CD-ROM)或其他光盘存储、光碟存储(包括压缩光碟、激光碟、光碟、数字通用光碟、蓝光光碟等)、磁盘存储介质或者其他磁存储设备、或者 能够用于携带或存储具有指令或数据结构形式的期望的程序代码并能够由计算机存取的任何其他介质,但不限于此。存储器可以是独立存在,通过通信线路44与处理器相连接。存储器也可以和处理器集成在一起。
其中,存储器42用于存储执行本申请方案的计算机执行指令,并由处理器41来控制执行。处理器41用于执行存储器42中存储的计算机执行指令,从而实现本申请下述实施例提供的一种确定设备信息的方法。
可选的,本申请实施例中的计算机执行指令也可以称之为应用程序代码,本申请实施例对此不作具体限定。
在具体实现中,作为一种实施例,处理器41可以包括一个或多个CPU,例如图4中的CPU0和CPU1。
在具体实现中,作为一种实施例,通信设备可以包括多个处理器,例如图4中的处理器41和处理器45。这些处理器中的每一个可以是一个单核(single-CPU)处理器,也可以是一个多核(multi-CPU)处理器。这里的处理器可以指一个或多个设备、电路、和/或用于处理数据(例如计算机程序指令)的处理核。
在本申请实施例中,一种确定设备信息的方法的执行主体的具体结构,本申请实施例并未特别限定,只要可以通过运行记录有本申请实施例的一种确定设备信息的方法的代码的程序,以根据本申请实施例的一种确定设备信息的方法进行通信即可,例如,本申请实施例提供的一种确定设备信息的方法的执行主体可以是服务发现实体,或者为应用于服务发现实体中的通信装置,例如,芯片,本申请对此不进行限定。或者,本申请实施例提供的一种确定设备信息的方法的执行主体可以是推理设备,或者为应用于推理设备中的通信装置,例如,芯片,本申请对此不进行限定。或者,本申请实施例提供的一种确定设备信息的方法的执行主体可以是训练设备,或者为应用于训练设备中的通信装置,例如,芯片,本申请对此不进行限定。下述实施例以一种确定设备信息的方法的执行主体分别为推理设备、服务发现实体以及训练设备为例进行描述。
如图5所示,结合图3所示的通信系统,本申请实施例提供一种确定设备信息的方法,该方法包括:
步骤501、推理设备向服务发现实体发送第一请求,以使得服务发现实体接收来自推理设备的第一请求。该第一请求用于请求一个或者多个训练设备的信息,第一请求包括推理设备请求的第一模型的算法类型(Algorithm Type)或者算法标识(Algorithm ID)。
应理解,服务发现实体处具有训练设备集合。该训练设备集合中至少包括一个或多个训练设备。关于服务发现实体中具有训练设备集合的过程可以参考下述实施例的描述,此处不再赘述。
示例性的,第一请求可以为服务化网络功能发现请求(Nnrf_NFDiscovery_Request)。例如,第一请求还可以包括以下信息中的任一个或多个:推理设备的位置信息、所述推理设备的地址信息、或推理设备的标识。具体的,第一请求中携带需求信息,该需求信息用于请求符合推理设备要求的一个或者多个训练设备的信息。例如,需求信息包括推理设备请求的第一模型的算法类型、推理设备需要请求的第一模型的算法标识、推理设备的位置信息、推理设备的地址信息、或推理设备的标识。
其中,推理设备的地址信息用于确定推理设备的IP地址,推理设备的位置信息用于确定推理设备所在的位置,或者地理区域。推理设备的地址信息用于索引或查找推理设备,推理设备的位置信息用于索引推理设备所覆盖的区域。
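第一请求所携带的需求信息可以用如下Python代码示意地构造;其中algorithm_type、inference_location等键名是本文为说明而假设的,并非Nnrf_NFDiscovery_Request的规范参数:

```python
# 示意:推理设备构造第一请求的需求信息。键名均为示例性假设。
def build_discovery_request(algorithm_type=None, algorithm_id=None,
                            location=None, address=None, device_id=None):
    """构造第一请求:算法类型与算法标识至少提供其一。"""
    if algorithm_type is None and algorithm_id is None:
        raise ValueError("第一请求必须包含算法类型或算法标识")
    request = {"request_type": "Nnrf_NFDiscovery_Request"}
    if algorithm_type is not None:
        request["algorithm_type"] = algorithm_type   # 例如 "classification"
    if algorithm_id is not None:
        request["algorithm_id"] = algorithm_id       # 例如 "logistic_regression"
    if location is not None:
        request["inference_location"] = location     # 推理设备的位置信息
    if address is not None:
        request["inference_address"] = address       # 推理设备的地址信息
    if device_id is not None:
        request["inference_id"] = device_id          # 推理设备的标识
    return request
```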
例如,推理设备请求的第一模型的算法类型为:以下算法类型中的任一个或多个:回归、聚类、关联分析、分类、推荐。具体的各个算法类型可以对应一个或多个算法。具体每个算法类型对应的算法可以参见表1中的描述,此处不再赘述。
表1 算法类型和算法标识对应的算法罗列
通常情况下,推理设备由运营商自己部署。运营商决定触发某个具体的数据分析任务,初步判定数据分析任务所需要的算法类型(比如分类任务或者回归任务),然后通过自身的推理设备寻址满足要求的训练设备训练数据。
步骤502、服务发现实体根据推理设备请求的第一模型的算法类型,确定该一个或多个训练设备的信息,训练设备的信息包括:能力信息。
示例性的,能力信息包括任一个算法对应的以下信息中的任一个或多个:算法类型、算法标识、算法性能评价指标、算法的收敛时间、算法的收敛速度、算法置信度。
例如,算法的收敛速度表示为达到同一个算法性能评价指标要求时所用的迭代次数。
例如,算法的置信度表示输出结果与真实结果的逼近程度。例如,算法性能评价指标可以包括:和方误差、准确率、召回率、F-Score(准确率和召回率的调和平均值)。
示例性的,如表2所示,表2示出了不同算法对应的算法性能指标度量方式。
表2 不同算法对应的算法性能指标度量方式
其中,TP(True Positive,真正例)表示被分类器正确分类的正元组;TN(True Negative,真负例)表示被分类器正确分类的负元组;FP(False Positive,假正例)表示被错误地标记为正元组的负元组;FN(False Negative,假负例)表示被错误地标记为负元组的正元组。其中,分类器评估度量包括准确率(又称为:识别率)、敏感度或称为召回率(recall)、特效性、精度(precision)、F1和FB。
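基于上述TP、TN、FP、FN的定义,表2中的几种分类器评估度量可以按如下Python代码示意地计算(仅为通用公式的最简实现,并非本申请规定的计算方式):

```python
# 示意:根据混淆矩阵计数 TP/TN/FP/FN 计算常见的算法性能评价指标。
def classifier_metrics(tp, tn, fp, fn):
    total = tp + tn + fp + fn
    accuracy = (tp + tn) / total                # 准确率(识别率)
    recall = tp / (tp + fn)                     # 召回率(敏感度)
    precision = tp / (tp + fp)                  # 精度
    specificity = tn / (tn + fp)                # 特效性
    f1 = 2 * precision * recall / (precision + recall)  # F1:精度与召回率的调和平均
    return {"accuracy": accuracy, "recall": recall, "precision": precision,
            "specificity": specificity, "f1": f1}
```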
步骤503、服务发现实体向推理设备发送一个或多个训练设备的信息,以使得推理设备接收来自服务发现实体的一个或多个训练设备的信息,训练设备的信息包括:能力信息。
示例性的,服务发现实体可以向推理设备发送第一响应。该第一响应包括:一个或多个训练设备的信息。例如,第一响应可以为:服务化网络功能发现请求响应(Nnrf_NFDiscovery_Request response)。
示例性的,训练设备的信息还可以包括:训练设备的位置信息、训练设备的负载中的任一个或多个。
步骤504、推理设备根据预设条件,从一个或多个训练设备的信息中确定第一训练设备。
本申请实施例中步骤504还可以通过以下方式实现:推理设备从一个或多个训练设备中任选一个作为第一训练设备。
本申请实施例提供一种确定设备信息的方法。当训练设备和推理设备分离部署之后,不同的训练设备所在位置、负载以及支持的AI能力不同,而一个或多个训练设备的信息可以注册在服务发现实体处。因此,推理设备可以向服务发现实体提供推理设备请求的第一模型的算法类型(Algorithm Type)或者算法标识(Algorithm ID),以便于推理设备从服务发现实体处获取一个或多个训练设备的信息,从而实现推理设备正确寻址一个或多个训练设备,并在后续辅助推理设备选择合适的第一训练设备进行模型训练。这样可以保障模型训练过程畅通、模型训练速度快、模型泛化能力尽可能强。此外,训练设备和推理设备分离部署还可以降低部署成本。
作为再一种可能的实施例,如图6所示,本申请实施例提供的方法在步骤502之前还可以包括:
步骤505、第一训练设备或第二训练设备向服务发现实体发送第二请求,以使得服务发现实体接收来自第一训练设备或第二训练设备的第二请求。
其中,该来自第一训练设备的第二请求用于请求在服务发现实体处注册或者更新第一训练设备的信息,第二请求包括第一训练设备的以下信息中的任一个或多个:地址信息、位置信息、负载、能力信息,第一训练设备为训练设备集合中的任一个,或者第一训练设备或第二训练设备为一个或者多个训练设备中任一个训练设备。一个或者多个训练设备属于该训练设备集合中的训练设备。该来自第二训练设备的第二请求用于请求在服务发现实体处注册或者更新第二训练设备的信息。
由于本申请实施例中任一个训练设备均可以通过步骤505向服务发现实体注册或者更新第一训练设备的信息,因此步骤505以第一训练设备向服务发现实体注册或者更新第一训练设备的信息为例。本申请实施例可以将任一个训练设备的信息称为人工智能(Artificial Intelligence,AI)信息,将任一个训练设备向服务发现实体注册或者更新第一训练设备的信息过程称为AI能力注册过程。
本申请实施例中任一个训练设备的地址信息用于确定该训练设备的IP地址。任一个训练设备的位置信息用于确定该训练设备的位置。例如,训练设备所处的区域。
示例性的,第二请求本身具有请求在服务发现实体处注册或者更新第一训练设备的信息的功能,或者第二请求中具有指示信息,该指示信息用于请求在服务发现实体处注册或者更新第一训练设备的信息。
可以理解的是,当训练设备集合中的每个训练设备向服务发现实体注册或者更新第一训练设备的信息之后,每个训练设备的信息便存储在服务发现实体处。
结合图6,本申请实施例中的步骤502可以通过以下方式实现:服务发现实体根据推理设备请求的第一模型的算法类型或者算法标识,从训练设备集合中确定该一个或多个训练设备的信息。
作为一种可能的实现方式,一个或多个训练设备为支持推理设备需要的算法类型的训练设备,和/或,一个或多个训练设备为与推理设备之间的距离满足预设要求的训练设备。例如,一个或多个训练设备为距离推理设备最近的训练设备。
再一种可能的实现方式,该一个或多个训练设备为负载低于预设负载阈值的训练设备。
作为一种可能的实现方式,一个或多个训练设备为支持推理设备需要的算法标识的训练设备。
举例说明,服务发现实体根据推理设备的位置,从服务发现实体中存储的训练设备信息中确定与推理设备之间的距离满足预设要求(例如,距离推理设备最近)的训练设备集合(包括一个或者多个训练设备)。然后,服务发现实体根据推理设备的算法类型要求从训练设备集合中筛选掉不支持算法类型要求的训练设备。此外服务发现实体还可以进一步筛选掉负载(load)很重的训练设备,最终筛选得到符合推理设备要求的一个或者多个训练设备。
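上述服务发现实体的筛选过程可以用如下Python代码示意;训练设备信息的字段名(algorithm_types、distance、load)为本文假设的示例,并非注册信息的规范定义:

```python
# 示意:服务发现实体按"支持的算法类型 + 与推理设备的距离 + 负载阈值"筛选训练设备。
def filter_training_devices(devices, required_type, max_distance, load_threshold):
    result = []
    for dev in devices:
        if required_type not in dev["algorithm_types"]:
            continue                  # 不支持推理设备需要的算法类型,筛掉
        if dev["distance"] > max_distance:
            continue                  # 与推理设备之间的距离不满足预设要求,筛掉
        if dev["load"] >= load_threshold:
            continue                  # 负载过重,筛掉
        result.append(dev)
    return result
```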
结合图6,作为一种可能的实现方式,本申请实施例中的步骤504可以通过以下方式实现:
步骤5041、推理设备将一个或多个训练设备中算法对应的算法收敛时间中满足预设时间要求的训练设备确定为第一训练设备。
例如,预设时间要求可以为算法收敛时间最快,也即算法收敛时间最短,或者算法收敛时间满足预设收敛时间阈值。训练设备1对应算法收敛时间T1、训练设备2对应算法收敛时间T2、训练设备3对应算法收敛时间T3,其中,T1>T2>T3,则推理设备可以确定第一训练设备为训练设备3。
步骤5042、推理设备将一个或多个训练设备中算法对应的算法性能评价指标中满足评价指标要求的训练设备确定为第一训练设备。
例如,评价指标要求可以为算法性能评价指标最高,或者算法性能评价指标满足预设算法性能评价指标阈值。
也即,作为一种可能的实现方式,本申请实施例提供的预设条件包括以下任一个或多个:第一训练设备的算法对应的算法收敛时间为一个或多个训练设备中的一个或者多个算法对应的算法收敛时间中满足预设时间要求的;或,第一训练设备的一个或者多个算法对应的算法性能评价指标为一个或多个训练设备中的一个或者多个算法对应的算法性能评价指标中满足评价指标要求的。
示例性的,第一训练设备的算法对应的算法收敛时间为一个或多个训练设备中的一个或者多个算法对应的算法收敛时间中最快的;或,第一训练设备的一个或者多个算法对应的算法性能评价指标为一个或多个训练设备中的一个或者多个算法对应的算法性能评价指标最高的。
进一步的,本申请实施例提供的预设条件还可以包括:第一训练设备的负载为一个或者多个训练设备中的负载满足负载要求的。例如,满足负载要求可以为负载最低,或者负载低于或等于预设负载阈值。
例如,第一训练设备的负载为一个或者多个训练设备中的负载最低的。
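推理设备按预设条件确定第一训练设备的过程可以示意如下:先比较算法收敛时间(越短越好),收敛时间相同时选负载最低者。convergence_time、load等字段名为本文假设的示例:

```python
# 示意:按(算法收敛时间, 负载)的字典序从候选训练设备中确定第一训练设备。
def select_first_training_device(devices):
    return min(devices, key=lambda d: (d["convergence_time"], d["load"]))
```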
作为再一种可能的实施例,如图6所示,本申请实施例提供的方法还包括:
步骤506、推理设备向第一训练设备发送第四请求,以使得第一训练设备接收来自推理设备的第四请求。该第四请求用于请求第一模型的信息,第四请求包括第一模型对应的以下信息中的任一个或多个:算法类型、算法标识、算法性能要求、数据类型(Event ID列表)、数据地址。
示例性的,第四请求中携带第二信息,该第二信息用于请求第一模型的信息。第二信息包括第一模型对应的以下信息中的任一个或多个:算法类型、算法标识、算法性能要求、数据类型(Event ID列表)、数据地址。
例如,第四请求携带的算法标识可以为线性回归或支持向量机、或逻辑斯特回归。
算法性能要求:比如,和方误差小于或等于0.001。
例如,算法性能要求用于表示该第一模型所需要满足的算法性能。
例如,数据类型可以代表一个或多个数据类型。数据类型用于第一训练设备确定训练第一模型时需要收集的数据类型。数据地址用于确定训练第一模型时需要收集的数据的地址。例如,数据地址可以为能够提供该数据的网络功能(network function,NF)的Address/IP/ID。
步骤507、第一训练设备根据第四请求,确定第一模型的信息,该第一模型的信息包括以下信息中的任一个或多个:模型标识、模型输入、模型参数。
作为一种可能的实现方式,本申请实施例中的步骤506可以通过以下方式实现:第一训练设备根据数据类型、数据地址收集训练数据。第一训练设备根据算法标识确定的算法对数据进行模型训练,得到第一模型的信息,第一模型的性能指标满足算法性能要求。
以语音业务平均意见评分(mean opinion score,MOS)模型为例,一个简单的线性回归模型(通过一个Model ID进行标识),对应的算法为线性回归(用Algorithm ID标识),如下:h(x)=w_0·x_0+w_1·x_1+w_2·x_2+w_3·x_3+w_4·x_4+w_5·x_5+…+w_D·x_D。具体的:h(x)表示标签数据,也是模型输出(Model Output list),具体的,可以是语音MOS分;x_i(i=0,1,2,…,D)为训练数据,也就是模型输入数据(Model Input list),包括QoS flow bit rate、QoS flow packet delay、QoS flow packet error rate、无线信号接收功率(radio signal received power,RSRP)、无线信号接收质量(radio signal received quality,RSRQ)、功率余量(Power Headroom)、接收信号强度指示(received signal strength indicator,RSSI)、信号与干扰加噪声比(Signal to Interference plus Noise Ratio,SINR)等;w_i(i=0,1,2,…,D)为权重,也就是模型参数(Model Parameter list)。
综上所述,一个模型可以通过如下表3进行表征:
表3
其中,表3中的模型参数列表、权重、Pre-Process Function ID对应w_i(i=0,1,2,…,D)。表3中的模型输出列表、模型输出、Pre-Process Function ID对应h(x)。表3中的模型输入列表、Pre-Process Function ID、Event ID对应x_i(i=0,1,2,…,D)。
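上述线性回归模型的推理计算,即由模型参数列表(权重w_i)与模型输入列表(x_i)得到模型输出h(x),可以用如下Python代码示意;其中的权重与输入值均为虚构的示例数据:

```python
# 示意:基于模型参数列表(权重 w_i)与模型输入列表(x_i)计算模型输出 h(x)。
def linear_model_output(weights, inputs):
    assert len(weights) == len(inputs), "模型参数与模型输入的维度需一致"
    return sum(w * x for w, x in zip(weights, inputs))
```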
如表4所示,表4示出了本申请实施例列举的模型表征:
表4
如表5所示,表5示出了本申请实施例列举的预处理方法。
表5 预处理方法
步骤508、第一训练设备向推理设备发送第一模型的信息,以使得推理设备接收来自第一训练设备的第一模型的信息。
示例性的,第一模型的信息还包括以下信息中的任一个或多个:模型输出、模型附加信息、模型评价指标的结果。
模型评价指标的结果即为模型评价指标的最优值,例如0.0003。模型附加信息例如针对神经网络,可以包括隐层层数、每一层使用哪种激活函数。
应理解,在步骤508之后,推理设备便可以根据第一模型的信息部署第一模型。
结合图6,作为一种可能的实施例,本申请实施例提供的方法在步骤508之后还可以包括:
步骤509、推理设备向服务发现实体发送第三请求,以使得服务发现实体接收来自推理设备的第三请求。该第三请求用于注册或者更新第一模型的信息,第三请求包括以下信息中的任一个或者多个:第一模型对应的分析结果标识Analytics ID,第一模型对应的分析结果的有效区域或者服务区域或者覆盖区域。
在一种可能的实施例中,该第三请求还可以包括推理设备的地址信息或者位置信息或者标识。在一种可能的实施例中,第三请求中携带注册指示或更新指示。其中,注册指示用于指示注册第一模型的信息。更新指示用于指示更新第一模型的信息。
后续,需要使用Analytics ID对应的数据分析结果的网元根据Analytics ID向服务发现实体查询推理设备的地址,然后该网元根据Analytics ID向推理设备请求数据分析结果。推理设备进一步向提供数据用于产生数据分析结果的网元订阅在线推理数据,推理设备基于模型以及获取的在线推理数据确定数据分析结果,并向消费者网元发送数据分析结果。
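服务发现实体维护Analytics ID与推理设备地址的对应关系、并供消费者网元查询的过程,可以用如下Python代码示意;类名、方法名与字段均为本文假设,并非协议定义:

```python
# 示意:服务发现实体维护 Analytics ID 到推理设备地址的注册表,
# 消费者网元按 Analytics ID 查询对应推理设备的地址。
class AnalyticsRegistry:
    def __init__(self):
        self._entries = {}

    def register(self, analytics_id, inference_address, serving_area):
        # 第三请求:注册或更新第一模型对应的 Analytics ID 及其服务区域
        self._entries[analytics_id] = {"address": inference_address,
                                       "serving_area": serving_area}

    def lookup(self, analytics_id):
        # 消费者网元按 Analytics ID 查询推理设备地址;未注册则返回 None
        entry = self._entries.get(analytics_id)
        return entry["address"] if entry else None
```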
上述主要从各个网元之间交互的角度对本申请实施例的方案进行了介绍。可以理解的是,各个网元,例如第一训练设备、推理设备、服务发现实体等为了实现上述功能,其包括了执行各个功能相应的硬件结构和/或软件模块。本领域技术人员应该很容易意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,本申请能够以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
本申请实施例可以根据上述方法示例第一训练设备、推理设备、服务发现实体进行功能单元的划分,例如,可以对应各个功能划分各个功能单元,也可以将两个或两个以上的功能集成在一个处理单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。需要说明的是,本申请实施例中对单元的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。
上面结合图1至图6,对本申请实施例的方法进行了说明,下面对本申请实施例提供的执行上述方法的确定设备信息的装置进行描述。本领域技术人员可以理解,方法和装置可以相互结合和引用,本申请实施例提供的确定设备信息的装置可以执行上述确定设备信息的方法中由推理设备、服务发现实体或第一训练设备执行的步骤。
在采用集成的单元的情况下,图7示出了上述实施例中所涉及的一种确定设备信息的装置,该确定设备信息的装置可以包括:处理单元101以及通信单元102。
一种示例,该确定设备信息的装置为推理设备,或者为应用于推理设备中的芯片。在这种情况下,处理单元101,用于支持该确定设备信息的装置执行上述实施例中由推理设备执行的步骤504。通信单元102,用于支持该确定设备信息的装置执行上述实施例的步骤501中由推理设备执行的发送的动作。通信单元102,用于支持该确定设备信息的装置执行上述实施例的步骤503中由推理设备执行的接收的动作。
在一种可能的实施例中,处理单元101,具体用于支持确定设备信息的装置执行上述实施例中的步骤5041、步骤5042。通信单元102,还用于支持确定设备信息的装置执行上述实施例的步骤506、步骤509中由推理设备执行的发送的动作。
在一种可能的实施例中,通信单元102,还用于支持确定设备信息的装置执行上述实施例的步骤508中由推理设备执行的接收的动作。
另一种示例,该确定设备信息的装置为服务发现实体,或者为应用于服务发现实体中的芯片。在这种情况下,处理单元101,用于支持该确定设备信息的装置执行上述实施例中由服务发现实体执行的步骤502。通信单元102,用于支持该确定设备信息的装置执行上述实施例的步骤501中由服务发现实体执行的接收的动作。通信单元102,用于支持该确定设备信息的装置执行上述实施例的步骤503中由服务发现实体执行的发送的动作。
在一种可能的实施例中,通信单元102,用于支持该确定设备信息的装置执行上述实施例的步骤505中由服务发现实体执行的接收的动作。
再一种示例,该确定设备信息的装置为第一训练设备,或者为应用于第一训练设备中的芯片。在这种情况下,通信单元102用于支持该确定设备信息的装置执行上述实施例中的步骤506中由第一训练设备执行的接收的动作、步骤508中由第一训练设备执行的发送的动作。处理单元101,用于支持该确定设备信息的装置执行上述实施例的步骤507。
在一种可能的实施例中,通信单元102,用于支持该确定设备信息的装置执行上述实施例的步骤505中由第一训练设备执行的发送的动作。通信单元102,用于支持该确定设备信息的装置执行上述实施例的步骤506中由第一训练设备执行的接收的动作。
在采用集成的单元的情况下,图8示出了上述实施例中所涉及的确定设备信息的装置的一种可能的逻辑结构示意图。该确定设备信息的装置包括:处理模块112和通信模块113。处理模块112用于对确定设备信息的装置的动作进行控制管理,例如,处理模块112用于执行在确定设备信息的装置进行信息/数据处理的步骤。通信模块113用于支持确定设备信息的装置进行信息/数据发送或者接收的步骤。
在一种可能的实施例中,确定设备信息的装置还可以包括存储模块111,用于存储确定设备信息的装置的程序代码和数据。
一种示例,该确定设备信息的装置为推理设备,或者为应用于推理设备中的芯片。在这种情况下,处理模块112,用于支持该确定设备信息的装置执行上述实施例中由推理设备执行的步骤504。通信模块113,用于支持该确定设备信息的装置执行上述实施例的步骤501中由推理设备执行的发送的动作。通信模块113,用于支持该确定设备信息的装置执行上述实施例的步骤503中由推理设备执行的接收的动作。
在一种可能的实施例中,处理模块112,具体用于支持确定设备信息的装置执行上述实施例中的步骤5041、步骤5042。通信模块113,还用于支持确定设备信息的装置执行上述实施例的步骤506、步骤509中由推理设备执行的发送的动作。
在一种可能的实施例中,通信模块113,还用于支持确定设备信息的装置执行上述实施例的步骤508中由推理设备执行的接收的动作。
另一种示例,该确定设备信息的装置为服务发现实体,或者为应用于服务发现实体中的芯片。在这种情况下,处理模块112,用于支持该确定设备信息的装置执行上述实施例中由服务发现实体执行的步骤502。通信模块113,用于支持该确定设备信息的装置执行上述实施例的步骤501中由服务发现实体执行的接收的动作。通信模块113,用于支持该确定设备信息的装置执行上述实施例的步骤503中由服务发现实体执行的发送的动作。
在一种可能的实施例中,通信模块113,用于支持该确定设备信息的装置执行上述实施例的步骤505中由服务发现实体执行的接收的动作。
再一种示例,该确定设备信息的装置为第一训练设备,或者为应用于第一训练设备中的芯片。在这种情况下,通信模块113用于支持该确定设备信息的装置执行上述实施例中的步骤506中由第一训练设备执行的接收的动作、步骤508中由第一训练设备执行的发送的动作。处理模块112,用于支持该确定设备信息的装置执行上述实施例的步骤507。
在一种可能的实施例中,通信模块113,用于支持该确定设备信息的装置执行上述实施例的步骤505中由第一训练设备执行的发送的动作。通信模块113,用于支持该确定设备信息的装置执行上述实施例的步骤506中由第一训练设备执行的接收的动作。
其中,处理模块112可以是处理器或控制器,例如可以是中央处理器单元,通用处理器,数字信号处理器,专用集成电路,现场可编程门阵列或者其他可编程逻辑器件、晶体管逻辑器件、硬件部件或者其任意组合。其可以实现或执行结合本申请公开内容所描述的各种示例性的逻辑方框,模块和电路。处理器也可以是实现计算功能的组合,例如包含一个或多个微处理器组合,数字信号处理器和微处理器的组合等等。通信模块113可以是收发器、收发电路或通信接口等。存储模块111可以是存储器。
当处理模块112为处理器41或处理器45,通信模块113为通信接口43时,存储模块111为存储器42时,本申请所涉及的确定设备信息的装置可以为图4所示的通信设备。
一种示例,该确定设备信息的装置为推理设备,或者为应用于推理设备中的芯片。在这种情况下,处理器41或处理器45,用于支持该确定设备信息的装置执行上述实施例中由推理设备执行的步骤504。通信接口43,用于支持该确定设备信息的装置执行上述实施例的步骤501中由推理设备执行的发送的动作。通信接口43,用于支持该确定设备信息的装置执行上述实施例的步骤503中由推理设备执行的接收的动作。
在一种可能的实施例中,处理器41或处理器45,具体用于支持确定设备信息的装置执行上述实施例中的步骤5041、步骤5042。通信接口43,还用于支持确定设备信息的装置执行上述实施例的步骤506、步骤509中由推理设备执行的发送的动作。
在一种可能的实施例中,通信接口43,还用于支持确定设备信息的装置执行上述实施例的步骤508中由推理设备执行的接收的动作。
另一种示例,该确定设备信息的装置为服务发现实体,或者为应用于服务发现实体中的芯片。在这种情况下,处理器41或处理器45,用于支持该确定设备信息的装置执行上述实施例中由服务发现实体执行的步骤502。通信接口43,用于支持该确定设备信息的装置执行上述实施例的步骤501中由服务发现实体执行的接收的动作。通信接口43,用于支持该确定设备信息的装置执行上述实施例的步骤503中由服务发现实体执行的发送的动作。
在一种可能的实施例中,通信接口43,用于支持该确定设备信息的装置执行上述实施例的步骤505中由服务发现实体执行的接收的动作。
再一种示例,该确定设备信息的装置为第一训练设备,或者为应用于第一训练设备中的芯片。在这种情况下,通信接口43用于支持该确定设备信息的装置执行上述实施例中的步骤506中由第一训练设备执行的接收的动作、步骤508中由第一训练设备执行的发送的动作。处理器41或处理器45,用于支持该确定设备信息的装置执行上述实施例的步骤507。
在一种可能的实施例中,通信接口43,用于支持该确定设备信息的装置执行上述实施例的步骤505中由第一训练设备执行的发送的动作。通信接口43,用于支持该确定设备信息的装置执行上述实施例的步骤506中由第一训练设备执行的接收的动作。
图9是本申请实施例提供的芯片150的结构示意图。芯片150包括一个或两个以上(包括两个)处理器1510和通信接口1530。
可选的,该芯片150还包括存储器1540,存储器1540可以包括只读存储器和随机存取存储器,并向处理器1510提供操作指令和数据。存储器1540的一部分还可以包括非易失性随机存取存储器(non-volatile random access memory,NVRAM)。
在一些实施方式中,存储器1540存储了如下的元素,执行模块或者数据结构,或者他们的子集,或者他们的扩展集。
在本申请实施例中,通过调用存储器1540存储的操作指令(该操作指令可存储在操作系统中),执行相应的操作。
一种可能的实现方式中,第一训练设备、推理设备、服务发现实体所用的芯片的结构类似,不同的装置可以使用不同的芯片以实现各自的功能。
处理器1510控制第一训练设备、推理设备、服务发现实体中任一个的处理操作,处理器1510还可以称为中央处理单元(central processing unit,CPU)。
存储器1540可以包括只读存储器和随机存取存储器,并向处理器1510提供指令和数据。存储器1540的一部分还可以包括NVRAM。例如,处理器1510、通信接口1530以及存储器1540通过总线系统1520耦合在一起,其中总线系统1520除包括数据总线之外,还可以包括电源总线、控制总线和状态信号总线等。但是为了清楚说明起见,在图9中将各种总线都标为总线系统1520。
上述本申请实施例揭示的方法可以应用于处理器1510中,或者由处理器1510实现。处理器1510可能是一种集成电路芯片,具有信号的处理能力。在实现过程中,上述方法的各步骤可以通过处理器1510中的硬件的集成逻辑电路或者软件形式的指令完成。上述的处理器1510可以是通用处理器、数字信号处理器(digital signal processing,DSP)、ASIC、现成可编程门阵列(field-programmable gate array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。可以实现或者执行本申请实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件译码处理器执行完成,或者用译码处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器 1540,处理器1510读取存储器1540中的信息,结合其硬件完成上述方法的步骤。
一种可能的实现方式中,通信接口1530用于执行图5-图6所示的实施例中的第一训练设备、推理设备、服务发现实体的接收和发送的步骤。处理器1510用于执行图5-图6所示的实施例中的第一训练设备、推理设备、服务发现实体的处理的步骤。
以上通信单元可以是该装置的一种通信接口,用于从其它装置接收信号。例如,当该装置以芯片的方式实现时,该通信单元是该芯片用于从其它芯片或装置接收信号或发送信号的通信接口。
此外,本申请实施例可以提供一种计算机可读存储介质,计算机可读存储介质中存储有指令,当指令被运行时,实现如图5-图6中服务发现实体的功能。
本申请实施例提供一种计算机可读存储介质,计算机可读存储介质中存储有指令,当指令被运行时,实现如图5-图6中推理设备的功能。
本申请实施例提供一种计算机可读存储介质,计算机可读存储介质中存储有指令,当指令被运行时,实现如图5-图6中第一训练设备的功能。
本申请实施例提供一种包括指令的计算机程序产品,计算机程序产品中包括指令,当指令被运行时,实现如图5-图6中服务发现实体的功能。
本申请实施例提供一种包括指令的计算机程序产品,计算机程序产品中包括指令,当指令被运行时,实现如图5-图6中推理设备的功能。
本申请实施例提供一种包括指令的计算机程序产品,计算机程序产品中包括指令,当指令被运行时,实现如图5-图6中第一训练设备的功能。
本申请实施例提供一种芯片,该芯片应用于第一训练设备中,芯片包括至少一个处理器和通信接口,通信接口和至少一个处理器耦合,处理器用于运行指令,以实现如图5-图6中第一训练设备的功能。
本申请实施例提供一种芯片,该芯片应用于推理设备中,芯片包括至少一个处理器和通信接口,通信接口和至少一个处理器耦合,处理器用于运行指令,以实现如图5-图6中推理设备的功能。
本申请实施例提供一种芯片,该芯片应用于服务发现实体中,芯片包括至少一个处理器和通信接口,通信接口和至少一个处理器耦合,处理器用于运行指令,以实现如图5或图6中服务发现实体的功能。
本申请实施例提供一种通信系统,该通信系统包括推理设备、以及服务发现实体。其中,服务发现实体用于执行图5~图6中由服务发现实体执行的步骤,推理设备用于执行图5~图6中由推理设备执行的步骤。
在一种可选的实现方式中,该通信系统还可以包括:第一训练设备用于执行图5~图6中由第一训练设备执行的步骤。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机程序或指令。在计算机上加载和执行所述计算机程序或指令时,全部或部分地执行本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、网络设备、用户设备或者其它可编程装置。所述计算机程序或指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机程序或指令可以从一个网站站点、计算机、服务器或数据中心通过有线或无线方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是集成一个或多个可用介质的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质,例如,软盘、硬盘、磁带;也可以是光介质,例如,数字视频光盘(digital video disc,DVD);还可以是半导体介质,例如,固态硬盘(solid state drive,SSD)。
尽管在此结合各实施例对本申请进行了描述,然而,在实施所要求保护的本申请过程中,本领域技术人员通过查看附图、公开内容、以及所附权利要求书,可理解并实现公开实施例的其他变化。在权利要求中,“包括”(comprising)一词不排除其他组成部分或步骤,“一”或“一个”不排除多个的情况。单个处理器或其他单元可以实现权利要求中列举的若干项功能。相互不同的从属权利要求中记载了某些措施,但这并不表示这些措施不能组合起来产生良好的效果。
尽管结合具体特征及其实施例对本申请进行了描述,显而易见的,在不脱离本申请的精神和范围的情况下,可对其进行各种修改和组合。相应地,本说明书和附图仅仅是所附权利要求所界定的本申请的示例性说明,且视为已覆盖本申请范围内的任意和所有修改、变化、组合或等同物。显然,本领域的技术人员可以对本申请进行各种改动和变型而不脱离本申请的精神和范围。这样,倘若本申请的这些修改和变型属于本申请权利要求及其等同技术的范围之内,则本申请也意图包括这些改动和变型在内。

Claims (30)

  1. 一种确定设备信息的方法,其特征在于,包括:
    推理设备向服务发现实体发送第一请求,所述第一请求用于请求一个或者多个训练设备的信息,所述第一请求包括所述推理设备请求的第一模型的算法类型或者算法标识;
    所述推理设备接收来自所述服务发现实体的所述一个或多个训练设备的信息,所述训练设备的信息包括:能力信息;
    所述推理设备根据预设条件,从所述一个或多个训练设备的信息中确定第一训练设备。
  2. 根据权利要求1所述的方法,其特征在于,所述能力信息包括以下信息中的任一个或多个:算法类型、算法标识、算法性能评价指标、算法的收敛时间、算法收敛速度、算法置信度。
  3. 根据权利要求1或2所述的方法,其特征在于,所述预设条件包括以下任一个或多个:
    所述第一训练设备的算法对应的算法收敛时间为所述一个或多个训练设备中任一个训练设备的算法对应的算法收敛时间中最快的;或,
    所述第一训练设备的算法对应的算法性能评价指标为所述任一个训练设备的算法对应的算法性能评价指标最高的。
  4. 根据权利要求3所述的方法,其特征在于,所述训练设备的信息,还包括以下信息中的任一个:位置信息、负载信息,所述预设条件还可以包括:
    所述第一训练设备的负载为所述一个或者多个训练设备中的负载最低的。
  5. 根据权利要求1-4任一项所述的方法,其特征在于,所述方法还包括:
    所述推理设备向所述第一训练设备发送第四请求,所述第四请求用于请求所述第一模型的信息,所述第四请求包括所述第一模型对应的以下信息中的任一个或多个:算法类型、算法标识、算法性能要求、数据类型、数据地址;
    所述推理设备接收来自所述第一训练设备的所述第一模型的信息,所述第一模型的信息包括以下信息中的任一个或多个:模型标识、模型输入、模型参数。
  6. 根据权利要求5所述的方法,其特征在于,所述第一模型的信息还包括以下信息中的任一个或多个:模型输出、模型附加信息、模型评价指标的结果。
  7. 根据权利要求1-6任一项所述的方法,其特征在于,
    所述推理设备向所述服务发现实体发送第三请求,所述第三请求用于注册或者更新所述第一模型的信息,所述第三请求包括以下信息中的任一个或者多个:
    所述第一模型对应的分析结果标识Analytics ID,所述第一模型对应的分析结果的有效区域或者服务区域或者覆盖区域。
  8. 一种确定设备信息的方法,其特征在于,包括:
    服务发现实体接收来自推理设备的第一请求,所述第一请求用于请求一个或者多个训练设备的信息,所述第一请求包括所述推理设备请求的第一模型的算法类型或者算法标识;
    所述服务发现实体根据所述推理设备请求的第一模型的算法类型或者算法标识,确定所述一个或多个训练设备的信息,所述训练设备的信息包括:能力信息;
    所述服务发现实体向所述推理设备发送所述一个或多个训练设备的信息。
  9. 根据权利要求8所述的方法,其特征在于,
    所述一个或多个训练设备为支持所述推理设备需要的算法类型的训练设备,和/或,
    所述一个或多个训练设备为与所述推理设备之间的距离满足预设要求的训练设备。
  10. 根据权利要求9所述的方法,其特征在于,所述一个或多个训练设备的负载低于预设负载阈值。
  11. 根据权利要求8-10任一项所述的方法,其特征在于,所述方法还包括:
    所述服务发现实体接收来自第二训练设备的第二请求,所述第二请求用于请求在所述服务发现实体处注册或者更新所述第二训练设备的信息,所述第二请求包括所述第二训练设备的以下信息中的任一个或多个:地址信息、位置信息、负载、能力信息,所述第二训练设备为所述一个或者多个训练设备中任一个训练设备。
  12. 根据权利要求8-11任一项所述的方法,其特征在于,所述能力信息包括以下信息中的任一个或多个:算法类型、算法标识、算法性能评价指标、算法的收敛时间、算法收敛速度、算法置信度。
  13. 根据权利要求8-12任一项所述的方法,其特征在于,所述方法还包括:
    所述服务发现实体接收来自所述推理设备的第三请求,所述第三请求用于注册或者更新所述第一模型的信息,所述第三请求包括以下信息中的任一个或者多个:
    所述第一模型对应的分析结果标识Analytics ID,所述第一模型对应的分析结果的有效区域或者服务区域或者覆盖区域。
  14. 一种确定设备信息的装置,其特征在于,包括:
    通信单元,用于向服务发现实体发送第一请求,所述第一请求用于请求一个或者多个训练设备的信息,所述第一请求包括所述推理设备请求的第一模型的算法类型或者算法标识;
    所述通信单元,还用于接收来自所述服务发现实体的所述一个或多个训练设备的信息,所述训练设备的信息包括:能力信息;
    处理单元,用于根据预设条件,从所述一个或多个训练设备的信息中确定第一训练设备。
  15. 根据权利要求14所述的装置,其特征在于,所述能力信息包括以下信息中的任一个或多个:算法类型、算法标识、算法性能评价指标、算法的收敛时间、算法收敛速度、算法置信度。
  16. 根据权利要求14或15所述的装置,其特征在于,所述预设条件包括以下任一个或多个:
    所述第一训练设备的算法对应的算法收敛时间为所述一个或多个训练设备中任一个训练设备的算法对应的算法收敛时间中最快的;或,
    所述第一训练设备的算法对应的算法性能评价指标为所述任一个训练设备的算法对应的算法性能评价指标最高的。
  17. 根据权利要求16所述的装置,其特征在于,所述训练设备的信息,还包括以下信息中的任一个:位置信息、负载信息,所述预设条件还可以包括:
    所述第一训练设备的负载为所述一个或者多个训练设备中的负载最低的。
  18. 根据权利要求14-17任一项所述的装置,其特征在于,所述通信单元,还用于向所述第一训练设备发送第四请求,所述第四请求用于请求所述第一模型的信息,所述第四请求包括所述第一模型对应的以下信息中的任一个或多个:算法类型、算法标识、算法性能要求、数据类型、数据地址;
    所述通信单元,还用于接收来自所述第一训练设备的所述第一模型的信息,所述第一模型的信息包括以下信息中的任一个或多个:模型标识、模型输入、模型参数。
  19. 根据权利要求18所述的装置,其特征在于,所述第一模型的信息还包括以下信息中的任一个或多个:模型输出、模型附加信息、模型评价指标的结果。
  20. 根据权利要求14-19任一项所述的装置,其特征在于,
    所述通信单元,还用于向所述服务发现实体发送第三请求,所述第三请求用于注册或者更新所述第一模型的信息,所述第三请求包括以下信息中的任一个或者多个:
    所述第一模型对应的分析结果标识Analytics ID,所述第一模型对应的分析结果的有效区域或者服务区域或者覆盖区域。
  21. 一种确定设备信息的装置,其特征在于,包括:
    通信单元,用于接收来自推理设备的第一请求,所述第一请求用于请求一个或者多个训练设备的信息,所述第一请求包括所述推理设备请求的第一模型的算法类型或者算法标识;
    处理单元,用于根据所述推理设备请求的第一模型的算法类型,确定所述一个或多个训练设备的信息,所述训练设备的信息包括:能力信息;
    所述通信单元,用于向所述推理设备发送所述一个或多个训练设备的信息。
  22. 根据权利要求21所述的装置,其特征在于,
    所述一个或多个训练设备为支持所述推理设备需要的算法类型的训练设备,和/或,
    所述一个或多个训练设备为与所述推理设备之间的距离满足预设要求的训练设备。
  23. 根据权利要求22所述的装置,其特征在于,所述一个或多个训练设备的负载低于预设负载阈值。
  24. 根据权利要求21-23任一项所述的装置,其特征在于,所述通信单元,还用于接收来自第二训练设备的第二请求,所述第二请求用于请求在所述服务发现实体处注册或者更新所述第二训练设备的信息,所述第二请求包括所述第二训练设备的以下信息中的任一个或多个:地址信息、位置信息、负载、能力信息,所述第二训练设备为所述一个或者多个训练设备中任一个训练设备。
  25. The apparatus according to any one of claims 21 to 24, wherein the capability information comprises any one or more of the following: an algorithm type, an algorithm identifier, an algorithm performance evaluation metric, an algorithm convergence time, an algorithm convergence speed, and an algorithm confidence level.
  26. The apparatus according to any one of claims 21 to 25, wherein the communication unit is further configured to receive a third request from the inference device, wherein the third request is used to register or update information about the first model, and the third request comprises any one or more of the following information:
    an analytics result identifier (Analytics ID) corresponding to the first model, and a valid area, serving area, or coverage area of the analytics result corresponding to the first model.
  27. A communication system, comprising the apparatus according to any one of claims 14 to 20 and the apparatus according to any one of claims 21 to 26.
  28. A readable storage medium, wherein the readable storage medium stores instructions, and when the instructions are executed, the method according to any one of claims 1 to 7 or the method according to any one of claims 8 to 13 is implemented.
  29. A chip, comprising a processor and a communication interface, wherein the communication interface is coupled to the processor, and the processor is configured to run a computer program or instructions to implement the method according to any one of claims 1 to 7 or the method according to any one of claims 8 to 13.
  30. A communication apparatus, comprising a processor, wherein the processor is coupled to a memory, and the processor is configured to execute a computer program or instructions stored in the memory to implement the method according to any one of claims 1 to 7 or the method according to any one of claims 8 to 13.
PCT/CN2019/109657 2019-09-30 2019-09-30 Method and apparatus for determining device information, and system WO2021062740A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN201980100484.8A CN114402341A (zh) 2019-09-30 2019-09-30 Method and apparatus for determining device information, and system
PCT/CN2019/109657 WO2021062740A1 (zh) 2019-09-30 2019-09-30 Method and apparatus for determining device information, and system
EP19947578.1A EP4027584A4 (en) 2019-09-30 2019-09-30 METHOD AND APPARATUS FOR DETERMINING DEVICE INFORMATION, AND SYSTEM
US17/707,589 US12040947B2 (en) 2019-09-30 2022-03-29 Method and apparatus for determining device information, and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/109657 WO2021062740A1 (zh) 2019-09-30 2019-09-30 Method and apparatus for determining device information, and system

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/707,589 Continuation US12040947B2 (en) 2019-09-30 2022-03-29 Method and apparatus for determining device information, and system

Publications (1)

Publication Number Publication Date
WO2021062740A1 true WO2021062740A1 (zh) 2021-04-08

Family

ID=75336658

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/109657 WO2021062740A1 (zh) 2019-09-30 2019-09-30 一种确定设备信息的方法、装置以及系统

Country Status (4)

Country Link
US (1) US12040947B2 (zh)
EP (1) EP4027584A4 (zh)
CN (1) CN114402341A (zh)
WO (1) WO2021062740A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023039728A1 (zh) * 2021-09-14 2023-03-23 Beijing Xiaomi Mobile Software Co., Ltd. Model processing method and apparatus based on user equipment (UE) capability, user equipment, base station, and storage medium
WO2023093320A1 (zh) * 2021-11-29 2023-06-01 China Telecom Corporation Limited Customer self-service analysis method, apparatus, and medium
WO2023213288A1 (zh) * 2022-05-05 2023-11-09 Vivo Mobile Communication Co., Ltd. Model acquisition method and communication device
US12081410B2 (en) 2020-01-03 2024-09-03 Huawei Technologies Co., Ltd. Network entity for determining a model for digitally analyzing input data

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112784992A (zh) * 2019-11-08 2021-05-11 China Mobile Communications Research Institute Network data analysis method, functional entity, and electronic device
CN116828587A (zh) * 2022-03-28 2023-09-29 Vivo Mobile Communication Co., Ltd. Network element registration method, model request method, apparatus, network element, communication system, and storage medium
WO2024036454A1 (zh) * 2022-08-15 2024-02-22 Huawei Technologies Co., Ltd. Data feature measurement method and related apparatus

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108510082A (zh) * 2018-03-27 2018-09-07 Suning.com Group Co., Ltd. Method and apparatus for processing machine learning model
CN108762768A (zh) * 2018-05-17 2018-11-06 FiberHome Telecommunication Technologies Co., Ltd. Intelligent network service deployment method and system
CN109600243A (zh) * 2017-09-30 2019-04-09 Huawei Technologies Co., Ltd. Data analysis method and apparatus
US20190222489A1 (en) * 2018-04-09 2019-07-18 Intel Corporation NETWORK DATA ANALYTICS FUNCTION (NWDAF) INFLUENCING FIFTH GENERATION (5G) QUALITY OF SERVICE (QoS) CONFIGURATION AND ADJUSTMENT

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7991864B2 (en) * 2006-05-04 2011-08-02 Cisco Technology, Inc. Network element discovery using a network routing protocol
EP3007369B1 (en) * 2014-06-10 2019-11-06 Huawei Technologies Co., Ltd. Method, device and system of prompting communication event
US10367696B2 (en) * 2016-05-23 2019-07-30 Telefonaktiebolaget Lm Ericsson (Publ) Automatic network management system and methods
KR102269320B1 (ko) * 2017-04-28 2021-06-25 삼성전자주식회사 전자 장치 및 전자 장치의 근접 디스커버리 방법
US11423254B2 (en) * 2019-03-28 2022-08-23 Intel Corporation Technologies for distributing iterative computations in heterogeneous computing environments
US10980028B2 (en) * 2019-03-29 2021-04-13 At&T Intellectual Property I, L.P. Adaptive beam sweeping for 5G or other next generation network


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4027584A4 *


Also Published As

Publication number Publication date
EP4027584A4 (en) 2022-08-31
EP4027584A1 (en) 2022-07-13
US20220224603A1 (en) 2022-07-14
CN114402341A (zh) 2022-04-26
US12040947B2 (en) 2024-07-16


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 19947578
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
ENP Entry into the national phase
    Ref document number: 2019947578
    Country of ref document: EP
    Effective date: 20220407