WO2023143080A1 - Data processing method and related device - Google Patents

Data processing method and related device

Info

Publication number
WO2023143080A1
WO2023143080A1 (PCT/CN2023/071725)
Authority
WO
WIPO (PCT)
Prior art keywords
neural network
terminal device
server
intermediate result
data processing
Prior art date
Application number
PCT/CN2023/071725
Other languages
English (en)
French (fr)
Inventor
王仁宇
杨宇庭
张胜涛
钱莉
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Publication of WO2023143080A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G06N3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • The present application relates to the field of artificial intelligence, and in particular to a data processing method and related devices.
  • Artificial intelligence is a theory, method, technology, and application system that uses digital computers, or machines controlled by digital computers, to simulate, extend, and expand human intelligence: to perceive the environment, acquire knowledge, and use that knowledge to obtain the best results.
  • In other words, artificial intelligence is the branch of computer science that attempts to understand the nature of intelligence and to produce a new class of intelligent machines that respond in ways similar to human intelligence.
  • Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning, and decision-making.
  • The model parameter counts of existing applications based on deep neural networks often reach 10M-100M.
  • For terminal devices, the available computing resources are often insufficient to complete the calculation of the entire neural network.
  • Therefore, the user's data to be processed can be collected on the terminal device side and sent to the server; after the server processes the data through the neural network, the prediction result corresponding to the data can be obtained and returned to the terminal device.
  • In this approach, however, the degree of privacy protection for user data is weak, because the raw user data leaves the device.
  • The embodiments of the present application provide a data processing method and related devices. Since the calculation of the second neural network is completed by the server, the computing resources of the terminal device occupied in the calculation of the entire neural network are reduced. The terminal device inputs the data to be processed into the first neural network, performs the calculation, and sends only the first intermediate result to the server, which avoids leaking the original data to be processed and improves the degree of privacy protection of user data. The calculation of the third neural network, the last part of the entire neural network, is also performed on the terminal device side, which further improves the degree of privacy protection of user data.
  • In a first aspect, the embodiments of the present application provide a data processing method that can be used in the field of artificial intelligence.
  • The method is applied to a data processing system that includes a first terminal device and a server. A first neural network and a third neural network are deployed on the first terminal device, and a second neural network is deployed on the server. The first neural network, the second neural network, and the third neural network together form the target neural network: the first neural network is located before the second neural network, the third neural network is located after the second neural network, and the second neural network is located between the first neural network and the third neural network.
  • "The first neural network is located before the second neural network" means that the data to be processed first passes through the first neural network in the target neural network and then passes through the second neural network in the target neural network.
  • The concept of "the third neural network is located after the second neural network" can be understood analogously and is not repeated here.
  • The data processing method includes: the first terminal device inputs the data to be processed into the first neural network, obtains the first intermediate result generated by the first neural network, and sends the first intermediate result to the server. "The first intermediate result generated by the first neural network" can also be referred to as "the first hidden vector generated by the first neural network", and it includes the data required by the second neural network for data processing. Further, it includes the data generated by the last neural network layer in the first neural network, or the data generated by the last neural network layer together with the data generated by other neural network layers in the first neural network.
  • The server inputs the first intermediate result into the second neural network, obtains the second intermediate result generated by the second neural network, and sends the second intermediate result to the first terminal device. The meaning of "second intermediate result" can be understood by reference to the meaning of "first intermediate result" and is not detailed here.
  • the first terminal device inputs the second intermediate result into the third neural network to obtain a prediction result generated by the third neural network corresponding to the data to be processed, and the type of information indicated by the prediction result corresponds to the type of the target task.
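The three-step flow described above can be sketched in plain Python, using simple callables in place of real neural network layers. The layer construction, function names, and split points below are illustrative assumptions for clarity, not details taken from the patent text.

```python
# Sketch of the split-inference flow: the terminal device runs the first
# neural network, the server runs the second, and the device runs the third.

def make_layer(weight):
    """A toy 'neural network layer': multiplies each input value by a weight."""
    return lambda x: [v * weight for v in x]

def run_network(layers, data):
    """Pass data through a sequence of layers and return the final output."""
    for layer in layers:
        data = layer(data)
    return data

# The target neural network, split into three consecutive parts.
target_network = [make_layer(w) for w in (2, 3, 5, 7, 11)]
first_nn  = target_network[:2]   # deployed on the first terminal device
second_nn = target_network[2:4]  # deployed on the server
third_nn  = target_network[4:]   # deployed on the first terminal device

data_to_be_processed = [1.0, 2.0]

# Step 1: the device computes the first intermediate result and sends it.
first_intermediate = run_network(first_nn, data_to_be_processed)

# Step 2: the server computes the second intermediate result and returns it.
second_intermediate = run_network(second_nn, first_intermediate)

# Step 3: the device produces the final prediction result locally.
prediction = run_network(third_nn, second_intermediate)

# The raw data never leaves the device; only intermediate results do, and the
# split computation matches running the whole target neural network.
assert prediction == run_network(target_network, data_to_be_processed)
```

Note that only `first_intermediate` and `second_intermediate` cross the network; the original data and the final prediction stay on the device.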
  • At a first moment and a second moment, which are two different moments, the neural network deployed on the first terminal device changes as follows: the number of neural network layers in the first neural network changes, and/or the number of neural network layers in the third neural network changes.
  • In the embodiments of the present application, since the calculation of the second neural network is completed by the server, the computing resources of the first terminal device occupied in the calculation of the entire target neural network can be reduced. After the first terminal device inputs the data to be processed into the first neural network and performs the calculation, it sends only the first intermediate result to the server, which avoids leaking the original data to be processed and improves the privacy protection of user data. The calculation of the last, third neural network in the entire target neural network is also performed on the first terminal device side, which further improves the degree of privacy protection of user data.
  • An attacker may acquire the intermediate results sent between the first terminal device and the server and attempt to invert them to recover the original data to be processed. Because the number of neural network layers deployed on the first terminal device changes between the two moments, different intermediate results are sent between the first terminal device and the server at different moments, which further increases the difficulty for the attacker to recover the original data to be processed and thereby further improves the degree of privacy protection of user data.
  • In one possible implementation of the first aspect, at the first moment the first neural network includes N neural network layers and the third neural network includes S neural network layers; at the second moment the first neural network includes n neural network layers and the third neural network includes s neural network layers, where N and n are different and/or S and s are different. The method also includes: the server sends the n neural network layers and the s neural network layers to the first terminal device.
  • In this implementation, the server can send the updated first neural network and the updated third neural network to the first terminal device, which makes it harder for an attacker to determine the neural network deployed on the first terminal device, and therefore harder to deduce the original data to be processed from an intermediate result, further improving the degree of privacy protection of user data.
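The re-splitting behaviour can be sketched as follows: at a second moment the server picks new split points (N front layers becomes n, S back layers becomes s) and sends the updated front and back segments to the device. The helper name and the concrete layer counts are assumptions made for illustration.

```python
# Illustrative re-splitting of the target neural network at two moments.

def split(target_network, front, back):
    """Split a layer list into device-front, server-middle, and device-back."""
    total = len(target_network)
    first_nn = target_network[:front]
    second_nn = target_network[front:total - back]
    third_nn = target_network[total - back:]
    return first_nn, second_nn, third_nn

target_network = ["L0", "L1", "L2", "L3", "L4", "L5"]

# First moment: N = 2 layers in front and S = 1 layer behind, on the device.
first_nn, second_nn, third_nn = split(target_network, front=2, back=1)
assert (len(first_nn), len(third_nn)) == (2, 1)

# Second moment: the server re-splits with n = 1 and s = 2 (N != n, S != s)
# and sends the n front layers and s back layers to the terminal device.
first_nn, second_nn, third_nn = split(target_network, front=1, back=2)
assert (len(first_nn), len(third_nn)) == (1, 2)

# Different splits mean different intermediate results cross the network,
# which makes inverting them back to the original data harder.
```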
  • In one possible implementation of the first aspect, the method further includes: the server determines the first neural network and the third neural network from the target neural network, where the target neural network is a neural network that performs the target task. The determining factors of the first neural network and the third neural network include: the processor resource occupancy of the first terminal device and/or the memory resource occupancy of the first terminal device while the target task is not yet being executed.
  • The determining factors may also include any one or more of the following: the number of processes currently running on the first terminal device, the time each process has run on the first terminal device, the running state of each process on the first terminal device, or other possible factors, which are not exhaustively listed here.
  • The evaluation indexes of "the memory resource occupancy of the first terminal device" may include any one or more of the following: the size of the total memory resources of the first terminal device, the size of the occupied memory resources of the first terminal device, the occupancy rate of the memory resources of the first terminal device, or other evaluation indexes.
  • The evaluation indexes of "the processor resource occupancy of the first terminal device" may include any one or more of the following: the occupancy rate of the processor resources of the first terminal device, the occupancy time of each processor, the load of the processor on the first terminal device that is assigned to execute the target task, the performance of that processor, or other indexes that can reflect the processor resources the first terminal device can use to execute the target task, which are not exhaustively listed here.
  • Since the computing resources that the first terminal device can allocate to the target task may differ over time, taking the processor resource occupancy and/or memory resource occupancy of the first terminal device as determining factors helps ensure that the neural network deployed on the first terminal device matches the computing power of the first terminal device, avoiding additional computing pressure on the first terminal device while performing the target task.
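One way a server might turn these occupancy figures into a split decision can be sketched as below. The thresholding and the linear scaling rule are invented for illustration; the patent only states that processor and memory occupancy are determining factors, not how they are combined.

```python
# Hedged sketch: map a device's measured occupancy to a number of layers
# to deploy on it, so the split matches the device's spare computing power.

def choose_device_layers(cpu_occupancy, mem_occupancy, max_device_layers):
    """Return how many layers to deploy on the terminal device.

    cpu_occupancy / mem_occupancy: fractions in [0, 1], measured while the
    target task is not yet executing. Higher occupancy means less spare
    capacity, so fewer layers are placed on the device (at least one).
    """
    spare = 1.0 - max(cpu_occupancy, mem_occupancy)
    return max(1, round(spare * max_device_layers))

# A lightly loaded device gets more layers than a heavily loaded one.
assert choose_device_layers(0.2, 0.3, 8) > choose_device_layers(0.9, 0.8, 8)
```

Using the larger of the two occupancies is a deliberately conservative choice here: whichever resource is scarcer bounds what the device can take on.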
  • In one possible implementation of the first aspect, the data processing system further includes a second terminal device. The number of neural network layers in the first neural network deployed on the first terminal device differs from that in the first neural network deployed on the second terminal device, and/or the number of neural network layers in the third neural network deployed on the first terminal device differs from that in the third neural network deployed on the second terminal device. The first terminal device and the second terminal device are terminal devices of different types, and/or the first terminal device and the second terminal device are terminal devices of different models of the same type.
  • The first neural network and the second neural network may be obtained by the server splitting the target neural network.
  • A first mapping relationship may be stored on the server, which records the number of neural network layers deployed on each type of terminal device.
  • The two split nodes corresponding to a first terminal device of a target type may then be determined according to the target type of the new first terminal device and the first mapping relationship.
  • A second mapping relationship may also be stored on the server, which records the number of neural network layers corresponding to at least one model of each type of terminal device.
  • The two split nodes corresponding to the first terminal device may then be determined according to the target type and target model of the new first terminal device and the second mapping relationship.
  • The determining factors of the first neural network and the third neural network in the first mapping relationship may include any one or a combination of the following: the estimated amount of processor resources the first terminal device allocates when executing the target task, the estimated amount of memory resources the first terminal device allocates, or other types of factors.
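The two mapping relationships described above can be sketched as simple server-side lookup tables. The device types, models, and split numbers stored here are made-up examples; the model-level fallback to the type-level entry is an assumption about how the two relationships would interact.

```python
# Sketch of the first and second mapping relationships as dictionaries.

# First mapping relationship: device type -> (front layers, back layers),
# i.e. the two split nodes for that type of terminal device.
first_mapping = {
    "phone":  (2, 1),
    "tablet": (3, 2),
    "watch":  (1, 1),
}

# Second mapping relationship: (device type, model) -> (front, back).
second_mapping = {
    ("phone", "model-A"): (4, 2),
}

def split_nodes(device_type, device_model=None):
    """Look up the two split nodes for a given terminal device.

    Prefers a model-specific entry in the second mapping relationship and
    falls back to the type-level entry in the first mapping relationship.
    """
    if device_model is not None:
        key = (device_type, device_model)
        if key in second_mapping:
            return second_mapping[key]
    return first_mapping[device_type]

assert split_nodes("phone", "model-A") == (4, 2)   # model-specific entry
assert split_nodes("phone", "model-B") == (2, 1)   # falls back to type level
assert split_nodes("tablet") == (3, 2)             # type-only lookup
```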
  • In one possible implementation of the first aspect, the processor resources occupied by the first terminal device during data processing through the first neural network and the third neural network are smaller than the processor resources occupied by the server during data processing through the second neural network, and the memory resources occupied by the first terminal device during data processing through the first neural network and the third neural network are smaller than the memory resources occupied by the server during data processing through the second neural network.
  • Since the second neural network is deployed on the server and occupies more processor resources and more memory resources during its data processing, the computing resources of the first terminal device occupied in the calculation of the entire neural network can be further reduced, which helps reduce the computing pressure of the first terminal device while performing the target task. Because most of the calculation of the entire neural network is performed by the server, a deep neural network with more parameters can be used to generate the prediction results corresponding to the data to be processed, which helps improve the accuracy of the prediction results generated by the entire neural network.
  • In one possible implementation of the first aspect, the data to be processed may specifically be any of the following: voice data, image data, fingerprint data, ear contour data, sequence data that reflects user habits, text data, point cloud data, or other types of data.
  • Multiple representation forms of the data to be processed are provided, which expands the application scenarios of the solution and improves its implementation flexibility.
  • In a second aspect, the embodiments of the present application provide a data processing method that can be used in the field of artificial intelligence.
  • The method is applied to a data processing system that includes a first terminal device and a server; a first neural network is deployed on the first terminal device and a second neural network is deployed on the server. The method includes: the first terminal device inputs the data to be processed into the first neural network, obtains the first intermediate result generated by the first neural network, and sends the first intermediate result to the server; the server inputs the first intermediate result into the second neural network and obtains the prediction result corresponding to the data to be processed generated by the second neural network. The first neural network and the second neural network form the target neural network, and at a first moment and a second moment, which are two different moments, the number of neural network layers in the first neural network deployed on the first terminal device changes.
  • In one possible implementation of the second aspect, at the first moment the first neural network includes N neural network layers, and at the second moment the first neural network includes n neural network layers, where N and n are different. The method also includes: the server sends the n neural network layers to the first terminal device.
  • In one possible implementation of the second aspect, the data processing system further includes a second terminal device, and the number of neural network layers in the first neural network deployed on the first terminal device differs from that in the first neural network deployed on the second terminal device. The first terminal device and the second terminal device are terminal devices of different types, and/or terminal devices of different models of the same type.
  • The data processing system provided in the second aspect can also perform the steps performed by the data processing system in the various possible implementations of the first aspect. For the specific implementation steps of the second aspect and its various possible implementations, the meanings of the terms, and the beneficial effects of each possible implementation, reference may be made to the descriptions of the various possible implementations of the first aspect, which are not repeated here.
  • In a third aspect, the embodiments of the present application provide a data processing method that can be used in the field of artificial intelligence.
  • The method is applied to a first terminal device included in a data processing system that further includes a server. A first neural network and a third neural network are deployed on the first terminal device, and a second neural network is deployed on the server. The method includes: inputting the data to be processed into the first neural network to obtain the first intermediate result generated by the first neural network; sending the first intermediate result to the server, where the first intermediate result is used by the server, with the second neural network, to obtain the second intermediate result; receiving the second intermediate result sent by the server; and inputting the second intermediate result into the third neural network to obtain the prediction result corresponding to the data to be processed generated by the third neural network. The first neural network, the second neural network, and the third neural network form the target neural network.
  • At a first moment and a second moment, which are two different moments, the neural network deployed on the first terminal device changes as follows: the number of neural network layers in the first neural network changes, and/or the number of neural network layers in the third neural network changes.
  • The data processing method provided in the third aspect can also execute the steps performed by the first terminal device in each possible implementation of the first aspect. For the specific implementation steps of the third aspect and its various possible implementations, and the beneficial effects of each possible implementation, reference may be made to the descriptions of the various possible implementations of the first aspect, which are not repeated here.
  • In a fourth aspect, the embodiments of the present application provide a data processing method that can be used in the field of artificial intelligence.
  • The method is applied to a server included in a data processing system that further includes a first terminal device. A first neural network and a third neural network are deployed on the first terminal device, and a second neural network is deployed on the server. The method includes: receiving the first intermediate result sent by the first terminal device, where the first intermediate result is obtained based on the data to be processed and the first neural network; inputting the first intermediate result into the second neural network to obtain the second intermediate result generated by the second neural network; and sending the second intermediate result to the first terminal device, where the second intermediate result is used by the first terminal device, with the third neural network, to obtain the prediction result corresponding to the data to be processed. The first neural network, the second neural network, and the third neural network form the target neural network, and at a first moment and a second moment, which are two different moments, the neural network deployed on the first terminal device changes as follows: the number of neural network layers in the first neural network changes, and/or the number of neural network layers in the third neural network changes.
  • The data processing method provided in the fourth aspect can also execute the steps performed by the server in each possible implementation of the first aspect. For the specific implementation steps of the fourth aspect and its various possible implementations, and the beneficial effects of each possible implementation, reference may be made to the descriptions of the various possible implementations of the first aspect, which are not repeated here.
  • In a fifth aspect, the embodiments of the present application provide a data processing method that can be used in the field of artificial intelligence.
  • the method is applied to a first terminal device.
  • the first terminal device is included in a data processing system, and the data processing system further includes a server.
  • the first neural network is deployed on the first terminal device, and the second neural network is deployed on the server.
  • The method includes: inputting the data to be processed into the first neural network to obtain the first intermediate result generated by the first neural network; and sending the first intermediate result to the server, where the first intermediate result is used by the server, with the second neural network, to obtain the prediction result corresponding to the data to be processed. The first neural network and the second neural network form the target neural network, and at a first moment and a second moment, which are two different moments, the number of neural network layers in the first neural network deployed on the first terminal device changes.
  • The data processing method provided in the fifth aspect can also execute the steps performed by the first terminal device in each possible implementation of the second aspect. For the specific implementation steps of the fifth aspect and its various possible implementations, and the beneficial effects of each possible implementation, reference may be made to the descriptions of the various possible implementations of the second aspect, which are not repeated here.
  • In a sixth aspect, the embodiments of the present application provide a data processing method that can be used in the field of artificial intelligence.
  • The method is applied to a server included in a data processing system that further includes a first terminal device. A first neural network is deployed on the first terminal device, and a second neural network is deployed on the server. The method includes: receiving the first intermediate result sent by the first terminal device, where the first intermediate result is obtained based on the data to be processed and the first neural network; and inputting the first intermediate result into the second neural network to obtain the prediction result corresponding to the data to be processed generated by the second neural network. The first neural network and the second neural network form the target neural network, and at a first moment and a second moment, which are two different moments, the number of neural network layers in the first neural network deployed on the first terminal device changes.
  • The data processing method provided in the sixth aspect can also execute the steps performed by the server in each possible implementation of the second aspect. For the specific implementation steps of the sixth aspect and its various possible implementations, and the beneficial effects of each possible implementation, reference may be made to the descriptions of the various possible implementations of the second aspect, which are not repeated here.
  • In a seventh aspect, the embodiments of the present application provide a data processing device that can be used in the field of artificial intelligence.
  • The data processing device is deployed on a first terminal device included in a data processing system that further includes a server. A first neural network and a third neural network are deployed on the first terminal device, and a second neural network is deployed on the server. The device includes: an input module, configured to input the data to be processed into the first neural network to obtain the first intermediate result generated by the first neural network; a sending module, configured to send the first intermediate result to the server, where the first intermediate result is used by the server, with the second neural network, to obtain the second intermediate result; and a receiving module, configured to receive the second intermediate result sent by the server. The input module is also configured to input the second intermediate result into the third neural network to obtain the prediction result corresponding to the data to be processed generated by the third neural network. The first neural network, the second neural network, and the third neural network form the target neural network, and at a first moment and a second moment, which are two different moments, the neural network deployed on the first terminal device changes.
  • The data processing device provided in the seventh aspect can also execute the steps performed by the first terminal device in each possible implementation of the first aspect. For the specific implementation steps of the seventh aspect and its various possible implementations, and the beneficial effects of each possible implementation, reference may be made to the descriptions of the various possible implementations of the first aspect, which are not repeated here.
  • In an eighth aspect, the embodiments of the present application provide a data processing device that can be used in the field of artificial intelligence.
  • The data processing device is deployed on a server included in a data processing system that further includes a first terminal device. A first neural network and a third neural network are deployed on the first terminal device, and a second neural network is deployed on the server. The device includes: a receiving module, configured to receive the first intermediate result sent by the first terminal device, where the first intermediate result is obtained based on the data to be processed and the first neural network; an input module, configured to input the first intermediate result into the second neural network to obtain the second intermediate result generated by the second neural network; and a sending module, configured to send the second intermediate result to the first terminal device, where the second intermediate result is used by the first terminal device, with the third neural network, to obtain the prediction result corresponding to the data to be processed. The first neural network, the second neural network, and the third neural network form the target neural network, and at a first moment and a second moment, which are two different moments, the neural network deployed on the first terminal device changes.
  • The data processing device provided in the eighth aspect can also execute the steps performed by the server in each possible implementation of the first aspect. For the specific implementation steps of the eighth aspect and its various possible implementations, and the beneficial effects of each possible implementation, reference may be made to the descriptions of the various possible implementations of the first aspect, which are not repeated here.
  • In a ninth aspect, the embodiments of the present application provide a data processing device that can be used in the field of artificial intelligence.
  • The data processing device is deployed on a first terminal device included in a data processing system that further includes a server. A first neural network is deployed on the first terminal device, and a second neural network is deployed on the server. The device includes: an input module, configured to input the data to be processed into the first neural network to obtain the first intermediate result generated by the first neural network; and a sending module, configured to send the first intermediate result to the server, where the first intermediate result is used by the server, with the second neural network, to obtain the prediction result corresponding to the data to be processed. The first neural network and the second neural network form the target neural network, and at a first moment and a second moment, which are two different moments, the number of neural network layers in the first neural network deployed on the first terminal device changes.
• The data processing apparatus provided in the ninth aspect of the embodiments of the present application can also execute the steps performed by the first terminal device in each possible implementation of the second aspect.
• For the specific implementations of the ninth aspect and its possible implementations, and the beneficial effects brought by each, reference may be made to the descriptions of the various possible implementations of the second aspect; details are not repeated here.
  • the embodiment of the present application provides a data processing device, which can be used in the field of artificial intelligence.
• The server is included in a data processing system that also includes a first terminal device.
• A first neural network is deployed on the first terminal device, and a second neural network is deployed on the server.
• The device includes: a receiving module, configured to receive a first intermediate result sent by the first terminal device, where the first intermediate result is obtained based on the data to be processed and the first neural network; and an input module, configured to input the first intermediate result into the second neural network to obtain the prediction result, generated by the second neural network, corresponding to the data to be processed. The first neural network and the second neural network form the target neural network, and between two different moments, a first moment and a second moment, the number of neural network layers in the first neural network deployed on the first terminal device changes.
• The data processing device provided in the tenth aspect of the embodiments of the present application can also execute the steps executed by the server in each possible implementation of the second aspect.
• For the specific implementations of the tenth aspect and its possible implementations, and the beneficial effects brought by each, reference may be made to the descriptions of the various possible implementations of the second aspect; details are not repeated here.
• An embodiment of the present application provides a first terminal device, which may include a processor coupled to a memory, the memory storing program instructions; when the program instructions stored in the memory are executed by the processor, the steps performed by the first terminal device in the data processing methods of the above aspects are implemented.
• An embodiment of the present application provides a server, which may include a processor coupled to a memory, the memory storing program instructions.
• When the program instructions stored in the memory are executed by the processor, the steps performed by the server in the data processing methods of the above aspects are implemented.
• An embodiment of the present application provides a data processing system, which may include a first terminal device and a server. The first terminal device is configured to perform the steps performed by the first terminal device in the method of the first aspect, and the server is configured to perform the steps performed by the server in the method of the first aspect; alternatively, the first terminal device is configured to perform the steps performed by the first terminal device in the method of the second aspect, and the server is configured to perform the steps performed by the server in the method of the second aspect.
• An embodiment of the present application provides a computer-readable storage medium in which a computer program is stored; when the program runs on a computer, the computer is caused to execute the steps performed by the first terminal device or by the server in the data processing methods of the above aspects.
• An embodiment of the present application provides a computer program product including a program; when the program runs on a computer, the computer is caused to execute the steps performed by the first terminal device in the data processing methods of the above aspects, or the steps performed by the server in those methods.
• An embodiment of the present application provides a circuit system including a processing circuit, where the processing circuit is configured to perform the steps performed by the first terminal device in the data processing methods of the above aspects, or is configured to perform the steps performed by the server in those methods.
• An embodiment of the present application provides a chip system.
• The chip system includes a processor, configured to implement the functions involved in the above aspects, for example, sending or processing the data and/or information involved in the above methods.
• The chip system may further include a memory, used for storing the program instructions and data necessary for the server or the communication device.
• The chip system may consist of chips, or may include chips and other discrete devices.
  • Fig. 1a is a schematic structural diagram of an artificial intelligence subject framework provided by an embodiment of the present application.
  • Figure 1b is an application scenario diagram of the data processing method provided by the embodiment of the present application.
  • FIG. 2a is a system architecture diagram of a data processing system provided by an embodiment of the present application.
  • FIG. 2b is a system architecture diagram of a data processing system provided by an embodiment of the present application.
  • FIG. 3 is a schematic flow diagram of a data processing method provided in an embodiment of the present application.
  • FIG. 4 is a schematic flow chart of a data processing method provided in an embodiment of the present application.
  • FIG. 5 is a schematic diagram of two split nodes corresponding to the target neural network in the data processing method provided by the embodiment of the present application;
  • FIG. 6 is a schematic diagram of the first intermediate result in the data processing method provided in the embodiment of the present application.
  • FIG. 7 is another schematic diagram of the first intermediate result in the data processing method provided in the embodiment of the present application.
  • FIG. 8 is another schematic diagram of the second intermediate result in the data processing method provided in the embodiment of the present application.
  • FIG. 9 is a schematic flowchart of a data processing method provided in an embodiment of the present application.
  • FIG. 10 is a schematic flow diagram of updating split nodes corresponding to the target neural network in the embodiment of the present application.
  • Fig. 11 is a schematic diagram of splitting nodes corresponding to the target neural network in the data processing method provided by the embodiment of the present application;
  • FIG. 12 is a schematic flowchart of a data processing method provided in an embodiment of the present application.
  • FIG. 13 is a schematic flowchart of a data processing method provided in an embodiment of the present application.
  • FIG. 14 is a schematic structural diagram of a data processing device provided in an embodiment of the present application.
  • FIG. 15 is a schematic structural diagram of a data processing device provided in an embodiment of the present application.
  • FIG. 16 is a schematic structural diagram of a data processing device provided in an embodiment of the present application.
  • FIG. 17 is a schematic structural diagram of a data processing device provided in an embodiment of the present application.
  • FIG. 18 is a schematic structural diagram of a first terminal device provided in an embodiment of the present application.
  • FIG. 19 is a schematic structural diagram of a server provided by an embodiment of the present application.
  • FIG. 20 is a schematic structural diagram of a chip provided by an embodiment of the present application.
• Figure 1a shows a schematic structural diagram of the main framework of artificial intelligence.
• The framework is described below along two dimensions: the "intelligent information chain" (horizontal axis) and the "IT value chain" (vertical axis).
  • the "intelligent information chain” reflects a series of processes from data acquisition to processing. For example, it can be the general process of intelligent information perception, intelligent information representation and formation, intelligent reasoning, intelligent decision-making, intelligent execution and output. In this process, the data has undergone a condensed process of "data-information-knowledge-wisdom".
• The "IT value chain" reflects the value that artificial intelligence brings to the information technology industry, from the underlying infrastructure of artificial intelligence and information (provided and processed by technology) to the industrial ecology of the system.
• The infrastructure provides computing power support for the artificial intelligence system, realizes communication with the outside world, and provides support through the basic platform.
• Computing power is provided by smart chips, which may specifically be hardware acceleration chips such as a central processing unit (CPU), a neural-network processing unit (NPU), a graphics processing unit (GPU), an application-specific integrated circuit (ASIC), or a field-programmable gate array (FPGA).
• The basic platform includes distributed computing frameworks, networks, and other related platform guarantees and support, which may include cloud storage and computing, interconnection networks, and so on.
• Sensors communicate with the outside to obtain data, and these data are provided to the smart chips in the distributed computing system provided by the basic platform for computation.
  • Data from the upper layer of the infrastructure is used to represent data sources in the field of artificial intelligence.
  • the data involves graphics, images, voice, text, and IoT data of traditional equipment, including business data of existing systems and sensory data such as force, displacement, liquid level, temperature, and humidity.
  • Data processing usually includes data training, machine learning, deep learning, search, reasoning, decision-making, etc.
• Machine learning and deep learning can perform symbolic and formalized intelligent information modeling, extraction, preprocessing, and training on data.
  • Reasoning refers to the process of simulating human intelligent reasoning in a computer or intelligent system, and using formalized information to carry out machine thinking and solve problems according to reasoning control strategies.
  • the typical functions are search and matching.
  • Decision-making refers to the process of decision-making after intelligent information is reasoned, and usually provides functions such as classification, sorting, and prediction.
• Based on the results of data processing, some general capabilities can be formed, such as algorithms or general systems, for example translation, text analysis, computer vision processing, speech recognition, and image recognition.
• Intelligent products and industry applications refer to the products and applications of artificial intelligence systems in various fields; they package the overall artificial intelligence solution, commercialize intelligent information decision-making, and realize practical applications. Application fields mainly include intelligent terminals, intelligent manufacturing, intelligent transportation, smart home, intelligent medical care, intelligent security, autonomous driving, and smart cities.
  • the embodiments of the present application can be applied to various fields in the field of artificial intelligence, and specifically can be applied to an application scenario where the first terminal device uses a neural network to process data, and specific examples are as follows.
  • the aforementioned smart terminals may specifically be embodied as smart wearable devices such as bracelets, watches, earphones, and glasses, and may also be represented as smart terminals such as mobile phones and tablets.
• The smart terminal can be configured with a face recognition function. When the user wants to unlock the smart terminal, open private data on the smart terminal, or perform other operations, the smart terminal can obtain the current user's facial image and then obtain the recognition result corresponding to that facial image; the corresponding operation is triggered only when the current user is determined to be a registered user.
  • Other functions can also be configured on the smart terminal, which will not be listed here.
• The above-mentioned smart home may be embodied as a sweeping robot, an air conditioner, a lamp, a water heater, a refrigerator, or another type of smart home device.
• The smart home device can obtain the voiceprint recognition result corresponding to the user's voice, and only when the user who made the sound is determined to be a specific user is the smart home device triggered to execute the operation corresponding to the control command.
  • Figure 1b is an application scenario diagram of the data processing method provided by the embodiment of this application.
• For example, when a user issues the voice command "turn on the air conditioner", the air conditioner (that is, an example of a smart home device) can obtain the voiceprint recognition result corresponding to the aforementioned control command, and only after determining that the user who issued the command is permitted to use the air conditioner is the operation of turning on the air conditioner performed.
  • FIG. 1b is for the convenience of understanding the application scenario of this solution, and is not used to limit this solution.
• As another example, a face recognition function can be configured on a vehicle.
• The vehicle acquires image data of the user's face and obtains the recognition result corresponding to that image data; only when the current user is determined to be a registered user is the vehicle triggered to start.
  • FIG. 2a is a system architecture diagram of a data processing system provided by an embodiment of the present application.
  • the data processing system may include a training device 210, a database 220, a terminal device 230 and a server 240, the terminal device 230 includes a first computing module, and the server 240 includes a second computing module.
• A training data set is stored in the database 220. The training device 210 generates a target neural network 201 for performing a target task, where the target neural network 201 includes multiple neural network layers; the training device 210 iteratively trains the target neural network 201 using the training data set in the database 220 to obtain the trained target neural network 201.
• The server 240 can obtain the trained target neural network 201, deploy one part of the neural network layers of the trained target neural network 201 in the first computing module of the terminal device 230, and deploy another part of those neural network layers in the second computing module of the server 240.
• The first computing module in the terminal device 230 executes one part of the data computation in the target neural network 201, and the second computing module in the server 240 executes another part, so that the computer resources of the terminal device 230 occupied by the computation of the entire neural network are reduced.
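The division of computation between the two computing modules can be sketched in a few lines of Python. This is an illustrative toy, not the application's implementation: each "layer" is a plain function, and the split point and all names are hypothetical.

```python
# Toy sketch of splitting a network's forward pass between a terminal
# and a server. Each "layer" is a plain function standing in for a real
# neural network layer; names and the split point are illustrative.

def run_layers(layers, x):
    """Run a forward pass through an ordered list of layers."""
    for layer in layers:
        x = layer(x)
    return x

# A 4-layer toy "target neural network".
target_network = [
    lambda x: x * 2,
    lambda x: x + 1,
    lambda x: x * 3,
    lambda x: x - 4,
]

split_node = 2  # first 2 layers on the terminal, the rest on the server
first_network = target_network[:split_node]    # deployed on terminal 230
second_network = target_network[split_node:]   # deployed on server 240

# Terminal side: compute the intermediate result and "send" it over.
intermediate = run_layers(first_network, 5)            # (5 * 2) + 1 = 11
# Server side: finish the forward pass to produce the prediction.
prediction = run_layers(second_network, intermediate)  # 11 * 3 - 4 = 29

# Splitting never changes the result of the full forward pass.
assert prediction == run_layers(target_network, 5)
```

The final assertion captures the key property of this architecture: the split only relocates where each layer runs, not what the network computes.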
  • FIG. 2b is a system architecture diagram of a data processing system provided by an embodiment of the present application.
  • the data processing system may include a training device 210, a database 220, a terminal device 230, a first server 241 and a second server 242, the terminal device 230 includes a first computing module, and the second server 242 includes a second computing module module.
• The difference between FIG. 2b and FIG. 2a is that, in the system architecture shown in FIG. 2a, the server 240 performs the allocation of the multiple neural network layers of the target neural network 201, and the second computing module in the server 240 completes the computation of one part of the neural network layers in the target neural network 201.
• In the system architecture shown in FIG. 2b, the first server 241 and the second server 242 are two independent devices: the first server 241 performs the allocation of the multiple neural network layers of the target neural network 201, and the second computing module in the second server 242 completes the computation of one part of the neural network layers in the target neural network 201.
  • the "user” can directly interact with the terminal device 230, that is, the terminal device 230 can directly display the prediction results output by the entire target neural network 201 to the "user",
  • Fig. 2a and Fig. 2b are only two schematic diagrams of the data processing system provided by the embodiment of the present invention, and the positional relationship among devices, devices, modules, etc. shown in the figures does not constitute any limitation.
• In other embodiments of the present application, the terminal device 230 and a client device may also be independent devices; the client device is used to display the prediction results output by the entire target neural network 201 to the "user", and the terminal device 230 is configured with an input/output (I/O) interface through which the terminal device 230 exchanges data with the client device.
  • the target neural network 201 may include a first neural network, a second neural network, and a third neural network.
• The first neural network includes the first several neural network layers of the target neural network 201, and the third neural network includes the last several neural network layers of the target neural network 201. That is, the first neural network is located before the second neural network, the third neural network is located after the second neural network, and the second neural network is located between the first neural network and the third neural network.
• In this case, the first neural network and the third neural network are deployed on the first terminal device, and the second neural network is deployed on the server.
• Alternatively, the target neural network 201 may be split into two parts, which respectively include a first neural network and a second neural network; the first neural network is one sub-neural network of the target neural network 201, the second neural network is another sub-neural network of the target neural network 201, and the first neural network is located before the second neural network.
  • the above-mentioned first neural network is deployed on the first terminal device, and the above-mentioned second neural network is deployed on the server.
• When the target neural network 201 adopts the above two different splitting methods, the processing flows of the first terminal device and the server differ; the specific implementation flows of the two splitting methods are described below respectively.
• 1. The target neural network includes the first neural network, the second neural network, and the third neural network.
• In this splitting method, the aforementioned first neural network and the aforementioned third neural network are deployed on the first terminal device, and the aforementioned second neural network is deployed on the server.
  • the first terminal device inputs the original data to be processed into the first neural network, and obtains a first intermediate result generated by the first neural network.
  • the first terminal device sends the first intermediate result to the server.
  • the server inputs the first intermediate result into the second neural network, obtains the second intermediate result generated by the second neural network, and sends the second intermediate result to the first terminal device.
• The first terminal device inputs the second intermediate result into the third neural network and obtains the prediction result, generated by the third neural network, corresponding to the data to be processed. It should be understood that the example in FIG. 3 is only for the convenience of understanding this solution and is not used to limit it. For details, please refer to FIG. 4.
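The round trip just described (terminal, then server, then terminal again) can be sketched with stand-in layers; the arithmetic and names below are illustrative only, not the application's implementation.

```python
# Toy end-to-end sketch of the first/second/third neural network round
# trip: terminal -> server -> terminal. Layers are stand-in functions.

def run(layers, x):
    for layer in layers:
        x = layer(x)
    return x

first_nn = [lambda x: x + 1]     # on the first terminal device
second_nn = [lambda x: x * 10]   # on the server
third_nn = [lambda x: x - 2]     # back on the first terminal device

data_to_process = 4
# Terminal: produce the first intermediate result.
first_intermediate = run(first_nn, data_to_process)       # 4 + 1 = 5
# Server: turn it into the second intermediate result.
second_intermediate = run(second_nn, first_intermediate)  # 5 * 10 = 50
# Terminal: produce the final prediction from the second intermediate result.
prediction = run(third_nn, second_intermediate)           # 50 - 2 = 48
```

Only the two intermediate results cross the network; the raw data to be processed never leaves the terminal, which is one reason this three-part split is attractive for privacy-sensitive tasks such as voiceprint or face recognition.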
  • FIG. 4 is a schematic flow chart of the data processing method provided by the embodiment of the present application.
  • the data processing method provided by the embodiment of the present application may include:
• Step 401: The server sends the first neural network and the third neural network to the first terminal device, and the second neural network is deployed on the server.
• The first neural network includes N neural network layers, the second neural network includes M neural network layers, and the third neural network includes S neural network layers; the first neural network, the second neural network, and the third neural network form the target neural network.
• Specifically, the server may determine the number of neural network layers in the first neural network and in the third neural network corresponding to the first terminal device.
• The first neural network includes N neural network layers, the second neural network includes M neural network layers, and the third neural network includes S neural network layers; the first neural network, the second neural network, and the third neural network form the target neural network, and N, M, and S are all integers greater than or equal to 1.
  • the server may send the first neural network and the third neural network to the first terminal device, and the first terminal device receives and stores the first neural network and the third neural network, so as to implement the deployment of the first neural network and the third neural network on the first terminal device.
  • the second neural network is deployed on the server.
• The server can also deploy the first neural network and the third neural network on the first terminal device in other ways, for example by using a removable storage device; the deployment methods are not exhaustively listed in this embodiment of the application.
• The target neural network in the embodiment of the present application may be a neural network obtained after preprocessing, where the preprocessing may be pruning, distillation, or another processing method for reducing the parameter quantity of a standard neural network; the methods are not exhaustively listed here.
  • the target neural network in the embodiment of the present application may also be a standard neural network, and the specific expression form of the target neural network may be determined in combination with an actual application scenario, which is not limited here.
  • the server performing step 401 may be the server 240 in the data processing system shown in FIG. 2a, or may be the first server 241 in the data processing system shown in FIG. 2b.
  • the target neural network is a neural network for performing a target task
  • the target task can be any type of task.
• For example, the target task may be to realize an authentication function by identifying input user data; the authentication task may be voiceprint recognition, face recognition, fingerprint recognition, earprint recognition, or another type of task that authenticates a user.
• As another example, the target task may be a personalized recommendation task, which may generate personalized charging plans, personalized recommended recipes, personalized recommended exercise programs, personalized recommended film and television works, personalized recommended applications, and so on; these are not exhaustively listed here.
  • the target task may be a feature extraction task, and the feature extraction task may be the extraction of voiceprint features, image features, or text features.
• The target task may also be recognizing speech content, translating text between different languages, recognizing objects in the surrounding environment, image style transfer, or another task performed by the first terminal device using a neural network; the task types that the target task may represent are not exhaustively listed here.
• The target neural network may be embodied as a convolutional neural network, a recurrent neural network, a residual neural network, or another type of neural network; the specific form of the target neural network can be determined in combination with the type of task the "target task" represents, and is not limited here.
  • the target neural network includes multiple neural network layers.
• The first neural network, the second neural network, and the third neural network are obtained by splitting the target neural network; that is, the multiple neural network layers included in the entire target neural network are split into three parts, so in this embodiment the target neural network corresponds to two split nodes.
• The two split nodes include a first split node and a second split node: the first split node is the split node between the first neural network and the second neural network, and the second split node is the split node between the second neural network and the third neural network.
• That the first neural network is located before the second neural network means that, in the process of inputting the data to be processed into the target neural network and processing it through the target neural network, the data to be processed first passes through the first neural network of the target neural network and then through the second neural network. That is to say, the order of the neural network layers in the target neural network follows the forward propagation of data through it: a neural network layer that the data passes through earlier is a front layer, and a neural network layer that the data passes through later is a rear layer.
  • the concept of "the third neural network is located behind the second neural network” can also be understood with the help of the foregoing description, and will not be repeated here.
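The relationship between the two split nodes and the three sub-networks amounts to slicing the ordered layer list at two indices; the layer count and split indices below are arbitrary examples, not values from the application.

```python
# Two split nodes cut an ordered list of layers into three parts.
layers = [f"layer_{i}" for i in range(10)]  # stand-ins for 10 layers

first_split, second_split = 3, 7  # example split nodes, first < second

first_nn = layers[:first_split]               # before the first split node
second_nn = layers[first_split:second_split]  # between the two split nodes
third_nn = layers[second_split:]              # after the second split node

# The three parts exactly reconstruct the original network, in order.
assert first_nn + second_nn + third_nn == layers
```

The assertion makes the ordering concrete: the first neural network is always before the second, and the third always after it, regardless of where the two split nodes fall.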
  • FIG. 5 is a schematic diagram of two split nodes corresponding to the target neural network in the data processing method provided by the embodiment of the present application.
• Taking the target neural network as a residual neural network (ResNet) and the target task as extracting voiceprint features as an example: as shown in FIG. 5, the target neural network includes 4 residual blocks.
• The neural network layers located before the first split node are called the first neural network, the neural network layers between the first split node and the second split node are called the second neural network, and the neural network layers after the second split node are called the third neural network; that is, the first neural network is located before the second neural network, and the third neural network is located after the second neural network.
• The parameters of each part of the neural network in FIG. 5 are disclosed in the following table:

  Layer                            Shape        Parameter count
  First linearly connected layer   256*8*256    524288
  Second linearly connected layer  256*256      65536
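The parameter counts in the table above follow directly from the layer shapes (weights only, biases not counted here), and can be reproduced in one step:

```python
# Reproducing the parameter counts of the two linearly connected layers
# from their shapes as given in the table.
first_linear_params = 256 * 8 * 256   # shape 256*8*256 -> 524288
second_linear_params = 256 * 256      # shape 256*256   -> 65536

assert first_linear_params == 524288
assert second_linear_params == 65536
```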
• In this embodiment of the present application, the first several neural network layers and the last several neural network layers of the entire target neural network can be deployed on the first terminal device, and the intermediate neural network layers can be deployed on the server, which can greatly reduce the computer resources of the first terminal device consumed during data processing by the entire target neural network.
• The server first determines the number of neural network layers to be deployed.
• The number of neural network layers in the first neural network deployed on the first terminal device differs from that in the first neural network deployed on a second terminal device, and/or the number of neural network layers in the third neural network deployed on the first terminal device differs from that in the third neural network deployed on the second terminal device.
• The first terminal device and the second terminal device may be different types of terminal devices: for example, the first terminal device is a watch and the second terminal device is a mobile phone; or the first terminal device is a lamp and the second terminal device is an air conditioner; or the first terminal device is a mobile phone and the second terminal device is a tablet; and so on, not exhaustively listed here.
• Alternatively, the first terminal device and the second terminal device may be terminal devices of the same type but of different models. It should be noted that, in this solution, when two different terminal devices (that is, the first terminal device and the second terminal device) are each configured with some of the neural network layers included in the target neural network, and the number of neural network layers deployed on the first terminal device differs from the number deployed on the second terminal device, the two devices may be of different types or of the same type but different models; this does not mean that the numbers of neural network layers deployed on any two terminal devices of different types are different, nor that the numbers deployed on any two different models of the same type are different.
• Since in the embodiment of FIG. 4 the target neural network corresponds to two split nodes, "the split nodes corresponding to the target neural network are different" means that the two split nodes corresponding to the first terminal device are not completely the same as the two split nodes corresponding to the second terminal device.
• In the embodiment corresponding to Figure 4, "the split node corresponding to the target neural network is different" covers the following three cases:
• Case 1: the first split node corresponding to the first terminal device is the same as the first split node corresponding to the second terminal device, and the second split node corresponding to the first terminal device is different from the second split node corresponding to the second terminal device.
• Case 2: the first split node corresponding to the first terminal device is different from the first split node corresponding to the second terminal device, and the second split node corresponding to the first terminal device is the same as the second split node corresponding to the second terminal device.
• Case 3: the first split node corresponding to the first terminal device is different from the first split node corresponding to the second terminal device, and the second split node corresponding to the first terminal device is different from the second split node corresponding to the second terminal device.
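The three cases above all reduce to a single check on the two split-node pairs. The following minimal Python sketch makes this concrete; the tuple representation and the helper name are illustrative assumptions, not part of the patent:

```python
def split_nodes_differ(pair_a, pair_b):
    """Return True when the two (first, second) split-node pairs are not
    completely identical, i.e. any of the three cases described above."""
    return pair_a != pair_b

# Case 1: first split nodes equal, second split nodes differ
assert split_nodes_differ((3, 10), (3, 12))
# Case 2: first split nodes differ, second split nodes equal
assert split_nodes_differ((3, 10), (4, 10))
# Case 3: both split nodes differ
assert split_nodes_differ((3, 10), (4, 12))
# Identical pairs: "the same as the split node corresponding to the target neural network"
assert not split_nodes_differ((3, 10), (3, 10))
```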
• In the embodiment corresponding to Figure 4, "the same as the split node corresponding to the target neural network" means that the first split node corresponding to the first terminal device is the same as the first split node corresponding to the second terminal device, and the second split node corresponding to the first terminal device is the same as the second split node corresponding to the second terminal device.
• The configurations of computer resources of different types of terminal devices may be different, and the configurations of computer resources of different models of terminal devices of the same type may also be different; accordingly, the computer resources that different types of terminal devices, or different models of the same type, can allocate to the target task may also be different.
• Therefore, the number of neural network layers deployed on different types of terminal devices, or on different models of terminal devices of the same type, may be different, so that the number of deployed neural network layers matches the resources each device can allocate.
• For the server, the following describes the process of determining the two split nodes corresponding to a certain first terminal device.
• In one implementation, the server may be pre-configured with a first mapping relationship, which stores the number of neural network layers deployed on each type of terminal device. When the server needs to deploy the first neural network and the second neural network to a new first terminal device, it can determine the two split nodes corresponding to the first terminal device of the target type according to the target type of that terminal device and the first mapping relationship.
• When the first terminal device needs to deploy part of the neural network layers in the target neural network, it may send a first request to the server, where the first request is used to request part of the neural network layers in the target neural network and also carries the target type of the first terminal device.
• Correspondingly, the server obtains the two split nodes corresponding to the target type from the first mapping relationship, and splits the first neural network and the third neural network from the target neural network according to the two split nodes.
• The first mapping relationship may be stored on the server in the form of a table, an array, or other forms.
• As an example, the first mapping relationship is shown in the form of a table below.
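As a hedged illustration of how such a table-form first mapping relationship might be held in memory, the sketch below maps a device type directly to its two split nodes; the device types and split-node values are invented for illustration and are not taken from the patent's tables:

```python
# Hypothetical first mapping relationship: terminal-device type -> two split nodes.
FIRST_MAPPING = {
    "mobile phone": (6, 20),     # more on-device layers for a capable device
    "refrigerator": (2, 24),
    "air conditioner": (2, 24),  # may share the refrigerator's split nodes
}

def split_nodes_for(device_type):
    """Look up the two split nodes for a given terminal-device type,
    as carried in the first request's target type."""
    return FIRST_MAPPING[device_type]

# Two different types may share split nodes, or not:
assert split_nodes_for("refrigerator") == split_nodes_for("air conditioner")
assert split_nodes_for("mobile phone") != split_nodes_for("refrigerator")
```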
• For two terminal devices of different types, the two split nodes corresponding to the target neural network may be the same or different.
• As one example, the two split nodes corresponding to the target neural network are different; as another example, when the first terminal device is a refrigerator and the second terminal device is an air conditioner, the two split nodes corresponding to the target neural network are the same.
  • the first mapping relationship is sent to the server by other devices.
  • the first mapping relationship is generated by the server.
• The determining factors of the first neural network and the third neural network in the first mapping relationship may include any one or a combination of multiple of the following factors: the estimated amount of processor resources allocated by the first terminal device when executing the target task, the estimated amount of memory resources allocated by the first terminal device, or other types of factors.
  • the server may determine the number of neural network layers deployed on each type of terminal device according to the above-mentioned indicators obtained for each type of terminal device.
• The greater the estimated amount of processor resources allocated by the first terminal device, the greater the number of neural network layers allocated to the first terminal device; and the smaller the estimated amount of processor resources allocated by the first terminal device, the smaller the number of neural network layers allocated to the first terminal device.
• Similarly, the greater the estimated amount of memory resources allocated by the first terminal device, the greater the number of neural network layers allocated to the first terminal device; and the smaller the estimated amount of memory resources allocated by the first terminal device, the smaller the number of neural network layers allocated to the first terminal device.
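The two monotonic rules above can be sketched as a toy heuristic. The proportionality, the bounds, and the "scarcer resource dominates" rule below are assumptions for illustration only, not the patent's actual allocation logic:

```python
def layers_on_device(total_layers, cpu_share, mem_share):
    """Toy heuristic: the larger the estimated processor/memory share a
    device can allocate to the target task, the more of the target
    network's layers are deployed on it."""
    share = min(cpu_share, mem_share)        # the scarcer resource dominates
    n = max(1, round(total_layers * share))  # keep at least one layer on-device
    return min(n, total_layers - 1)          # leave at least one for the server

# More allocatable resources -> more on-device layers:
assert layers_on_device(30, 0.5, 0.4) > layers_on_device(30, 0.2, 0.1)
# Degenerate case still deploys at least one layer on the device:
assert layers_on_device(10, 0.0, 0.0) == 1
```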
• The processor can specifically be a central processing unit (CPU), a graphics processing unit (GPU), an application-specific integrated circuit (ASIC), or another type of processor; which types of processors are configured on a specific first terminal device can be determined in combination with the actual product form, and is not limited here.
• If one processor is allocated on the first terminal device to execute the target task, the evaluation index of "the estimated amount of processor resources allocated by the first terminal device" may include any one or more of the following elements: the occupation time of the processor allocated by the first terminal device to execute the target task, and the performance of that processor. If at least two processors are allocated on the first terminal device to execute the target task, the evaluation index may include any one or more of the following elements: the occupation time of each processor allocated for executing the target task, the performance of each such processor, the number of processors, the type of each processor, or other elements.
• The evaluation index of the performance of the processor can be any one or more of the following: the number of floating-point operations per second (FLOPS) performed by the processor; the number of Dhrystone million instructions executed per second (DMIPS), that is, how many millions of instructions the processor executes per second; or other indicators for evaluating the performance of the processor, which are not exhaustively listed here.
  • the evaluation index of "the estimated amount of memory resources allocated by the first terminal device" may be the size of the memory storage space allocated by the first terminal device for executing the target task.
• Both the "occupation time of the processor allocated by the first terminal device for executing the target task" and the "size of the memory storage space allocated by the first terminal device for executing the target task" may be an estimated value range or an estimated definite value.
• The unit of "the occupation time of the processor allocated by the first terminal device to execute the target task" may be millions of instructions executed per second (MIPS), seconds, or another type of unit, which is not exhaustively listed here.
• As an example, the occupation time of the processor allocated by the first terminal device for executing the target task may be 0.5MIPS-1MIPS, and the size of the memory storage space allocated by the first terminal device for executing the target task may be 20M-30M; as another example, the CPU occupation time allocated by the first terminal device for executing the target task may be 1.5MIPS, and the memory storage space allocated by the first terminal device for executing the target task may be 25M. It should be understood that the examples here are only for the convenience of understanding the solution, and are not used to limit the solution.
  • the server can be configured with a second mapping relationship.
• The number of neural network layers corresponding to at least one model of each type of terminal device may be stored in the second mapping relationship; when the server needs to deploy the first neural network and the second neural network to a new first terminal device, it may determine the two split nodes corresponding to that first terminal device according to the target type and target model of the new first terminal device and the second mapping relationship.
• When the first terminal device needs to deploy part of the neural network layers in the target neural network, it may send a first request to the server, where the first request is used to request part of the neural network layers in the target neural network and also carries the target type of the first terminal device and the target model of the first terminal device.
• Correspondingly, the server may receive the target type and target model of the first terminal device, obtain the two split nodes corresponding to the target type and the target model from the second mapping relationship, and split the first neural network and the third neural network from the target neural network according to the two split nodes.
• The second mapping relationship may be stored on the server in the form of a table, an array, or other forms.
• As an example, the second mapping relationship is shown in the form of a table below.
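A hedged sketch of a second mapping relationship keyed by (type, model), using the lamp and mobile-phone examples discussed below; all concrete types, model numbers, and split-node values are invented for illustration:

```python
# Hypothetical second mapping relationship: (type, model) -> two split nodes.
SECOND_MAPPING = {
    ("mobile phone", "0001"): (8, 22),
    ("mobile phone", "0004"): (4, 26),
    ("lamp", "0002"): (2, 28),
    ("lamp", "0003"): (2, 28),   # all lamp models share the same split
}

def split_nodes_for(device_type, model):
    """Look up the two split nodes for a given (target type, target model)."""
    return SECOND_MAPPING[(device_type, model)]

# Same type, different models may get different split nodes...
assert split_nodes_for("mobile phone", "0001") != split_nodes_for("mobile phone", "0004")
# ...or identical ones (here, all lamp models):
assert split_nodes_for("lamp", "0002") == split_nodes_for("lamp", "0003")
```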
• For two terminal devices of the same type but of different models, the two split nodes corresponding to the target neural network may be the same or different.
• For example, if the two different terminal devices are lamps of different models, the number of neural network layers of the target neural network deployed on all models of lamps is the same.
• As another example, if the two different terminal devices are a mobile phone of model 0001 and a mobile phone of model 0004, the numbers of neural network layers of the target neural network deployed on the aforementioned two terminal devices are different.
• It should be understood that the examples in Table 3 are only for the convenience of understanding the content of the second mapping relationship, and are not used to limit this solution.
  • the second mapping relationship is sent to the server by other devices.
  • the second mapping relationship is generated by the server.
• The determining factors of the first neural network and the third neural network in the second mapping relationship may include any one or a combination of multiple of the following factors: the estimated amount of processor resources allocated by the first terminal device when executing the target task, the estimated amount of memory resources allocated by the first terminal device, or other types of factors.
• Specifically, the server can obtain the above-mentioned indexes for each model of first terminal device among the at least one model of each type, and determine the number of neural network layers deployed on the first terminal device of the target model of the target type; the server repeatedly performs the foregoing operation to generate the second mapping relationship.
• The greater the estimated amount of processor resources allocated by the first terminal device, the greater the number of neural network layers allocated to the first terminal device; and the smaller the estimated amount of processor resources allocated by the first terminal device, the smaller the number of neural network layers allocated to the first terminal device.
• For two terminal devices of the same type but of different models, the split nodes corresponding to the target neural network may also be the same, that is, the numbers of neural network layers of the target neural network deployed on different first terminal devices may also be the same.
  • the first terminal device inputs the data to be processed into the first neural network, and obtains a first intermediate result generated by the first neural network.
• It should be noted that step 401 is an optional step. If step 401 is executed, the first terminal device may receive the first neural network and the third neural network sent by the server, and store the received first neural network and third neural network locally.
• In another implementation, the server may send to the first terminal device the first P neural network layers and the last Q neural network layers in the target neural network, and send first indication information to the first terminal device, where P is an integer greater than or equal to N, Q is an integer greater than or equal to S, and the first indication information is used to inform the first terminal device of the positions, in the target neural network, of the two split nodes corresponding to the target neural network.
• The first terminal device stores the received first P neural network layers and last Q neural network layers locally, determines the first neural network from the first P neural network layers according to the received first indication information, and determines the third neural network from the last Q neural network layers; that is, the first neural network and the third neural network are deployed on the first terminal device.
• In yet another implementation, the server may also send the entire trained target neural network to the first terminal device, and send first indication information to the first terminal device, where the first indication information is used to inform the first terminal device of the positions, in the target neural network, of the two split nodes corresponding to the target neural network. Therefore, the first terminal device can split the received target neural network according to the received first indication information to determine the first neural network and the third neural network, that is, to implement the deployment of the first neural network and the third neural network on the first terminal device.
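Both deployment variants reduce to slicing the received layers according to the first indication information. A minimal sketch follows; the layer names and the helper function are illustrative assumptions, not the patent's implementation:

```python
def select_deployed_layers(first_p, last_q, n, s):
    """Given the first P and last Q layers received from the server and the
    split nodes carried in the first indication information (keep N front
    layers and S tail layers), return the first and third neural networks
    actually deployed on the first terminal device."""
    first_nn = first_p[:n]    # first N of the P received front layers (P >= N)
    third_nn = last_q[-s:]    # last S of the Q received tail layers (Q >= S)
    return first_nn, third_nn

first_p = ["L1", "L2", "L3", "L4"]       # P = 4 layers received
last_q = ["L27", "L28", "L29", "L30"]    # Q = 4 layers received
first_nn, third_nn = select_deployed_layers(first_p, last_q, n=3, s=2)
assert first_nn == ["L1", "L2", "L3"]
assert third_nn == ["L29", "L30"]
```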
  • the first terminal device may input data to be processed into the first neural network to obtain a first intermediate result generated by the first neural network.
  • the type of data to be processed is related to the type of task that the target task specifically represents.
• The data to be processed can be represented as any of the following: sound data, image data, fingerprint data, ear contour data, sequence data that can reflect user habits, text data, point cloud data, or other types of data. It should be understood that which type of data the data to be processed is needs to be determined in combination with the type of the target task to be executed through the target neural network, and this is not limited here. In this implementation, various representation forms of the data to be processed are provided, which expands the application scenarios of the solution and improves its implementation flexibility.
  • the first intermediate result generated by the first neural network can also be referred to as "the first hidden vector generated by the first neural network”.
• The first intermediate result generated by the first neural network includes the data required by the second neural network for data processing.
  • the first intermediate result generated by the first neural network includes data generated by the last neural network layer in the first neural network.
  • FIG. 6 is a schematic diagram of the first intermediate result in the data processing method provided by the embodiment of the present application.
• As shown in Figure 6, the first intermediate result includes the data generated by the last neural network layer in the first neural network (that is, the third convolutional layer in Figure 6). It should be understood that the example in Figure 6 is only for the convenience of understanding this solution and is not intended to limit it.
• Alternatively, the "first intermediate result generated by the first neural network" includes both data generated by the last neural network layer in the first neural network and data generated by other neural network layers in the first neural network.
  • FIG. 7 is another schematic diagram of the first intermediate result in the data processing method provided by the embodiment of the present application.
• The two split nodes corresponding to the target neural network shown in Figure 7 are the same as the two split nodes corresponding to the target neural network shown in Figure 5. As shown in Figure 7, the first intermediate result not only includes the data generated by the last neural network layer in the first neural network (that is, the 5th convolutional layer in Figure 7), but also includes the data generated by the (N-2)th neural network layer in the first neural network (that is, the 3rd convolutional layer in Figure 7). It should be understood that the example in Figure 7 is only for the convenience of understanding this scheme, and is not used to limit this scheme.
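The Figure 6 and Figure 7 variants of the first intermediate result can be sketched as follows, with toy arithmetic functions standing in for convolutional layers (purely illustrative; the tap mechanism is an assumption about how earlier layers' outputs could be collected):

```python
def run_first_network(x, layers, extra_taps=()):
    """Run the on-device first neural network layer by layer.  The first
    intermediate result always contains the last layer's output and,
    optionally, the outputs of earlier layers (as in Figure 7)."""
    outputs = []
    for i, layer in enumerate(layers):
        x = layer(x)
        if i in extra_taps:
            outputs.append(x)   # keep this earlier layer's output too
    outputs.append(x)           # the last layer's output is always included
    return outputs

layers = [lambda v: v + 1, lambda v: v * 2, lambda v: v - 3]
# Figure 6 style: only the last layer's output.
assert run_first_network(5, layers) == [9]
# Figure 7 style: last layer's output plus an earlier layer's output.
assert run_first_network(5, layers, extra_taps={0}) == [6, 9]
```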
  • the first terminal device sends the first intermediate result to the server.
  • the first terminal device may encrypt the first intermediate result, and send the encrypted first intermediate result to the server.
• The encryption algorithms used include but are not limited to secure sockets layer (SSL) encryption algorithms or other types of encryption algorithms.
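As a hedged illustration of the encrypted transmission of the first intermediate result, the toy XOR-keystream cipher below stands in for a real SSL/TLS channel; it is not the patent's algorithm and is not suitable for production use, but it shows the device-side encrypt / server-side decrypt round trip:

```python
import hashlib
import json

def _keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from the shared key (toy construction)."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, intermediate: list) -> bytes:
    """Device side: serialize the intermediate result and XOR with keystream."""
    plain = json.dumps(intermediate).encode()
    return bytes(a ^ b for a, b in zip(plain, _keystream(key, len(plain))))

def decrypt(key: bytes, cipher: bytes) -> list:
    """Server side: XOR again with the same keystream, then deserialize."""
    plain = bytes(a ^ b for a, b in zip(cipher, _keystream(key, len(cipher))))
    return json.loads(plain.decode())

key = b"shared-session-key"
first_intermediate = [0.12, -0.5, 3.0]
wire = encrypt(key, first_intermediate)
assert wire != json.dumps(first_intermediate).encode()   # not sent in the clear
assert decrypt(key, wire) == first_intermediate          # server recovers it
```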
  • the server inputs the first intermediate result to the second neural network to obtain a second intermediate result generated by the second neural network.
• After receiving the encrypted first intermediate result, the server may decrypt it to obtain the first intermediate result, and input the first intermediate result into the second neural network to obtain a second intermediate result generated by the second neural network.
  • the second intermediate result generated by the second neural network can also be referred to as "the second hidden vector generated by the second neural network”.
• The second intermediate result generated by the second neural network includes the data required by the third neural network for data processing.
  • the "second intermediate result generated by the second neural network” includes data generated by the last neural network layer in the second neural network.
• As an example, the second split node corresponding to the target neural network (that is, the split node between the second neural network and the third neural network) is shown in Figure 7; the second intermediate result includes the data generated by the last neural network layer in the second neural network (that is, the pooling layer in Figure 7).
• Alternatively, the "second intermediate result generated by the second neural network" includes both data generated by the last neural network layer in the second neural network and data generated by other neural network layers in the second neural network.
  • FIG. 8 is another schematic diagram of the second intermediate result in the data processing method provided by the embodiment of the present application.
• As shown in Figure 8, the second intermediate result not only includes the data generated by the last neural network layer in the second neural network (that is, the last convolutional layer in Figure 8), but also includes the data generated by the (M-2)th neural network layer in the second neural network (that is, the third-to-last convolutional layer in Figure 8). It should be understood that the example in Figure 8 is only for the convenience of understanding this solution and is not used to limit it.
  • the server sends the second intermediate result to the first terminal device.
  • the server may encrypt the second intermediate result, and send the encrypted second intermediate result to the first terminal device.
• For the specific encryption algorithm used, please refer to the description in step 403, which will not be repeated here.
  • the first terminal device inputs the second intermediate result into the third neural network, and obtains a prediction result generated by the third neural network corresponding to the data to be processed, and the type of information indicated by the prediction result corresponds to the type of the target task.
• After receiving the encrypted second intermediate result, the first terminal device can decrypt it to obtain the second intermediate result and input the second intermediate result into the third neural network, that is, input the second intermediate result into the last S neural network layers of the target neural network, so as to obtain the prediction result generated by the third neural network corresponding to the data to be processed (that is, the prediction result output by the entire target neural network and corresponding to the data to be processed).
  • the type of information indicated by the prediction result corresponding to the data to be processed corresponds to the type of the target task.
• As an example, the data to be processed may be voice data, and the prediction result corresponding to the data to be processed is used to indicate whether the data to be processed (that is, the voice data) is the preset user's voice.
• As another example, when the target task is voiceprint feature extraction, the data to be processed may be voice data, and the prediction result corresponding to the data to be processed is the voiceprint feature extracted from the data to be processed.
  • the data to be processed may be image data of the user's face, and the prediction result corresponding to the data to be processed is used to indicate whether the user is a preset user.
• As another example, when the target task is fingerprint identification, the data to be processed is the fingerprint data of the user, and the prediction result corresponding to the data to be processed is used to indicate whether the user is a preset user.
• As another example, when the target task is to perform feature extraction on the contour data of the user's ear, the data to be processed is the contour data of the user's ear, and the prediction result corresponding to the data to be processed is the features extracted from that contour data. The prediction results corresponding to other types of data to be processed are not exhaustively listed here.
• The processor resources occupied by the first terminal device during data processing through the first neural network and the third neural network are smaller than the processor resources occupied by the server during data processing through the second neural network, and the memory resources occupied by the first terminal device during data processing through the first neural network and the third neural network are smaller than the memory resources occupied by the server during data processing through the second neural network.
• Since the second neural network is deployed on the server, and more processor resources and more memory resources are occupied during the data processing of the second neural network, this can further reduce the computer resources of the first terminal device occupied by the calculation process of the entire neural network, helping to reduce the computing pressure on the first terminal device while executing the target task. In addition, since most of the calculations in the data processing of the entire neural network are performed by the server, a deeper neural network with more parameters can be used to generate prediction results corresponding to the data to be processed, which is conducive to improving the accuracy of the prediction results generated by the entire neural network.
• After the first terminal device obtains the prediction result corresponding to the data to be processed, it can perform subsequent steps according to that prediction result; the specific steps to be executed can be determined in combination with the actual application scenario, and are not limited here.
  • FIG. 9 is a schematic flowchart of a data processing method provided by an embodiment of the present application.
• To understand this solution more intuitively, take as an example that the target task performed by the target neural network is to extract voiceprint features, and that the first neural network, the second neural network, and the third neural network are obtained by splitting the target neural network, as shown in Figure 9. B1.
  • the first terminal device acquires the data to be processed input by the user (that is, the voice data input by the user shown in FIG. 9 ).
  • the first terminal device inputs the data to be processed into the first neural network (that is, the first N neural network layers of the target neural network shown in FIG. 9 ), and obtains the first intermediate result generated by the first neural network .
  • the first terminal device encrypts the first intermediate result, and sends the encrypted first intermediate result to the server, so as to implement encrypted transmission of the first intermediate result.
• The server decrypts the encrypted first intermediate result to obtain the first intermediate result, and inputs the first intermediate result into the second neural network (that is, the M neural network layers after the first N neural network layers) to obtain a second intermediate result generated by the second neural network.
  • the server encrypts the second intermediate result, and sends the encrypted second intermediate result to the first terminal device, so as to implement encrypted transmission of the second intermediate result.
• The first terminal device decrypts the encrypted second intermediate result to obtain the second intermediate result, and inputs the second intermediate result into the third neural network (that is, the last S neural network layers of the target neural network) to obtain the prediction result output by the entire target neural network corresponding to the data to be processed, that is, the voiceprint features extracted from the input voice data.
• The first terminal device compares each of the at least one voiceprint feature stored locally with the acquired voiceprint feature, to determine whether the acquired voiceprint feature is any one of the at least one pre-stored voiceprint feature, and thereby determine whether the aforementioned user is a user with authority. It should be understood that the example in Figure 9 is only for the convenience of understanding this solution and is not used to limit it.
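The exchange in Figure 9 reduces to composing the three sub-networks across the device-server boundary. The toy sketch below (layer functions are invented stand-ins for the real sub-networks; encryption is elided) checks that split inference matches running the whole network in one place:

```python
def first_network(x):    # first N layers, on the terminal device
    return [v * 2 for v in x]

def second_network(x):   # middle M layers, on the server
    return [v + 1 for v in x]

def third_network(x):    # last S layers, back on the terminal device
    return sum(x)        # toy stand-in for the extracted "voiceprint feature"

def split_inference(data):
    r1 = first_network(data)     # device -> (encrypt, send) -> server
    r2 = second_network(r1)      # server -> (encrypt, send) -> device
    return third_network(r2)     # device produces the final prediction result

data = [1.0, 2.0, 3.0]
# Split inference must equal running the whole target neural network end to end:
assert split_inference(data) == third_network(second_network(first_network(data)))
```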
  • the server acquires an updated split node corresponding to the target neural network, where the updated split node indicates that the first neural network includes n neural network layers, the second neural network includes m neural network layers, and the third neural network includes The neural network includes s neural network layers.
• Specifically, after the server deploys the first neural network and the third neural network on a certain first terminal device, it can acquire the updated split node corresponding to the target neural network. That is, at the two different moments, the first moment and the second moment, the neural networks deployed on the first terminal device change as follows: the number of neural network layers in the first neural network changes, or the number of neural network layers in the third neural network changes; in other words, at the first moment and the second moment, the split nodes corresponding to the target neural network are different.
• It should be noted that, for the same first terminal device at two different moments, the split nodes corresponding to the target neural network may be different, but this does not mean that at any two different moments the number of neural network layers of the target neural network deployed on that device must be different.
• The meaning of "the split node corresponding to the target neural network is different" can refer to the description in the above steps. The updated split node indicates that the first neural network includes n neural network layers, the second neural network includes m neural network layers, and the third neural network includes s neural network layers; the first neural network and the third neural network are deployed on the first terminal device, the second neural network is deployed on the server, n, s, and m are all integers greater than or equal to 1, and N and n are different and/or S and s are different.
• For the positional relationship between the "first neural network" and the "second neural network" in the target neural network, and the positional relationship between the "second neural network" and the "third neural network" in the target neural network, refer to the description in step 401 above, which is not repeated here.
• An attacker may acquire the intermediate results sent between the first terminal device and the server and invert the acquired intermediate results to recover the original data to be processed. Since the split nodes corresponding to the target neural network are different at the two different moments, the first moment and the second moment, different intermediate results are sent between the first terminal device and the server at different moments, which further increases the difficulty for an attacker to obtain the original data to be processed, thereby further improving the protection of the privacy of user data.
• A trigger point for the server to obtain the updated split node corresponding to the target neural network may be to reacquire the split nodes corresponding to the target neural network at regular intervals; for example, the fixed interval may be one day, one week, ten days, fifteen days, one month, or another length, which is not exhaustively listed here.
• Alternatively, the server may reacquire the split nodes corresponding to the target neural network at a fixed time point, for example, at 3 o'clock in the morning every Monday or at another time point, which is not exhaustively listed here.
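Either trigger (fixed interval or fixed time point) ends with the server picking a fresh split-node pair. A minimal sketch follows; the uniform random choice is an illustrative policy, not the patent's selection method:

```python
import random

def updated_split_nodes(total_layers, rng):
    """Pick a fresh (first, second) split-node pair; an attacker observing
    the link then sees different intermediate results at different moments."""
    first = rng.randint(1, total_layers - 2)           # at least 1 front layer
    second = rng.randint(first + 1, total_layers - 1)  # at least 1 tail layer
    return first, second

rng = random.Random(0)
monday = updated_split_nodes(30, rng)        # e.g. the split used this week
next_monday = updated_split_nodes(30, rng)   # the split after the next trigger
# Both pairs are valid positions inside the 30-layer target neural network:
for first, second in (monday, next_monday):
    assert 1 <= first < second <= 29
```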
• The first terminal device may send a request message to the server, where the request message is used to request updating the number of neural network layers deployed for the target neural network, that is, to request updating the deployment, on the first terminal device and the server, of the neural network layers included in the target neural network.
  • the request message may be actively triggered by the user through the first terminal device, that is, the user may actively trigger to update the number of neural network layers deployed in the target neural network, and the like.
• In one case, the first terminal device may send a request message to the server each time it needs to execute the target task, the request message being used to request updating the number of neural network layers deployed for the target neural network. In another case, the first terminal device may send the request message to the server each time the number of executions of the target task reaches a target number. Other cases may also trigger the first terminal device to send the request message to the server, which are not exhaustively listed here.
  • the factors determining the number of neural network layers deployed on the first terminal device may include: the occupation of processor resources of the first terminal device and/or the occupation of memory resources of the first terminal device.
• The determining factors of the first neural network and the third neural network may also include any one or more of the following: the number of processes currently running on the first terminal device, the time each process has been running on the first terminal device, the running status of each process on the first terminal device, or other possible factors, which can be determined in combination with actual application scenarios and are not listed one by one here.
• The evaluation index of "the amount of memory resources occupied by the first terminal device" may include any one or more of the following: the size of the total memory resources of the first terminal device, the size of the occupied memory resources of the first terminal device, the occupancy rate of the memory resources of the first terminal device, or other evaluation indicators.
  • The evaluation indexes of "the processor resource occupancy of the first terminal device" may include any one or more of the following: the occupancy rate of the processor resources of the first terminal device, the occupancy time of each processor of the first terminal device, the load of the processors of the first terminal device assigned to executing the target task, the performance of the processors of the first terminal device assigned to executing the target task, or other evaluation indicators that can reflect the occupancy of the processor resources of the first terminal device used to execute the target task; these need to be determined in conjunction with the actual product and are not exhaustively listed here.
  • In one case, the server may calculate the estimated amount of processor resources allocated when the first terminal device executes the target task according to the occupied amount of processor resources of the first terminal device, and may calculate the available amount of memory resources of the first terminal device according to the occupied amount of memory resources, so as to obtain the estimated amount of memory resources allocated when the first terminal device executes the target task.
  • The server may then generate the updated split nodes corresponding to the target neural network according to the estimated amount of processor resources allocated when the first terminal device executes the target task and the estimated amount of memory resources allocated when the first terminal device executes the target task.
  • After the target neural network is split according to the aforementioned updated split nodes corresponding to the target neural network, the processor resources occupied by the first neural network and the third neural network deployed on the first terminal device during data processing are less than or equal to the estimated amount of processor resources allocated when the first terminal device executes the target task, and the memory resources occupied during data processing are less than or equal to the aforementioned estimated amount of memory resources allocated when the first terminal device executes the target task.
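The budget constraint above can be sketched as a split-node search over per-layer cost estimates. This is an illustrative heuristic, not the patent's prescribed algorithm: the per-layer cost inputs and the even halving of each budget between the head and tail parts are assumptions.

```python
def choose_split_nodes(layer_cpu_cost, layer_mem_cost, cpu_budget, mem_budget):
    """Pick two split nodes so that the device-side parts (the layers before
    the first node plus the layers after the second node) fit within the
    device's estimated CPU and memory budgets.

    layer_cpu_cost / layer_mem_cost: hypothetical per-layer cost estimates.
    Returns (first_split, second_split): the first neural network is
    layers [0, first_split), the third is layers [second_split, total).
    """
    total = len(layer_cpu_cost)
    # Grow the head (first neural network) while half of each budget remains;
    # the 50/50 head/tail budget split is an illustrative choice.
    first_split = 0
    cpu = mem = 0.0
    while first_split < total:
        c = cpu + layer_cpu_cost[first_split]
        m = mem + layer_mem_cost[first_split]
        if c > cpu_budget / 2 or m > mem_budget / 2:
            break
        cpu, mem = c, m
        first_split += 1
    # Grow the tail (third neural network) with the other half of the budget.
    second_split = total
    cpu = mem = 0.0
    while second_split > first_split:
        c = cpu + layer_cpu_cost[second_split - 1]
        m = mem + layer_mem_cost[second_split - 1]
        if c > cpu_budget / 2 or m > mem_budget / 2:
            break
        cpu, mem = c, m
        second_split -= 1
    return first_split, second_split
```

With ten layers of unit cost and a budget of 4 for each resource, this leaves two layers on each side of the device and six layers for the server.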
  • The following describes the process in which "the server obtains the estimated amount of processor resources allocated when the first terminal device executes the target task".
  • In one case, a trained regression model can be stored on the server, and this regression model is used to perform the aforementioned estimation operation; as an example, the regression model can be an autoregressive integrated moving average (ARIMA) model, a recurrent neural network (RNN), or another type of model, which are not exhaustively listed here.
  • The input of the regression model may include the occupancy rate of processor resources on the first terminal device, the utilization rate of memory resources on the first terminal device, the number of currently running processes on the first terminal device, and the running time of each process; the output of the regression model can be the estimated occupancy rate of processor resources and the estimated occupancy rate of memory resources corresponding to each process over a period of time in the future.
  • The server may then calculate the estimated available amount of processor resources and the estimated available amount of memory resources of the first terminal device over that period of time in the future according to the estimated occupancy rate of processor resources and the estimated occupancy rate of memory resources corresponding to each process.
  • In one case, the server may determine the estimated available amount of processor resources of the first terminal device within a period of time in the future as the estimated amount of processor resources allocated when the first terminal device executes the target task, and determine the estimated available amount of memory resources of the first terminal device within that period as the estimated amount of memory resources allocated when the first terminal device executes the target task.
  • In another case, the server may multiply the estimated available amount of processor resources of the first terminal device within a period of time in the future by a first ratio, and determine the obtained product as the estimated amount of processor resources allocated when the first terminal device executes the target task; similarly, it may multiply the estimated available amount of memory resources within that period by the first ratio, and determine the obtained product as the estimated amount of memory resources allocated when the first terminal device executes the target task.
  • the server may also determine an estimated amount of processor resources allocated when the first terminal device executes the target task according to a preset rule.
  • As an example, the server may multiply the current occupancy of the processor resources of the first terminal device by a second ratio, and determine the obtained product as the estimated occupancy of the processor resources of the first terminal device within a period of time in the future; likewise, the current occupancy of the memory resources of the first terminal device is multiplied by the second ratio, and the obtained product is determined as the estimated occupancy of the memory resources of the first terminal device within that period; the second ratio is greater than 1. The server then determines the estimated available amount of processor resources of the first terminal device within that period according to the estimated occupancy of processor resources, and determines the estimated available amount of memory resources within that period according to the estimated occupancy of memory resources.
  • From these estimated available amounts, the estimated amount of processor resources and the estimated amount of memory resources allocated when the first terminal device executes the target task can then be determined.
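The preset-rule estimation just described amounts to simple arithmetic. The sketch below assumes occupancy is expressed as a fraction of the total resource; the concrete values of the second ratio (greater than 1, inflating current occupancy) and the optional first ratio (discounting the resulting availability) are illustrative assumptions.

```python
def estimate_allocation(current_occupancy_rate, total_resource,
                        second_ratio=1.2, first_ratio=0.8):
    """Estimate the resource amount allocatable to the target task.

    current_occupancy_rate: fraction of the resource occupied right now.
    second_ratio (> 1): factor applied to the current occupancy to get the
    estimated occupancy over a future period.
    first_ratio: fraction of the estimated availability actually budgeted
    for the target task (1.0 would hand over the whole availability).
    """
    future_occupancy = min(1.0, current_occupancy_rate * second_ratio)
    estimated_available = total_resource * (1.0 - future_occupancy)
    return estimated_available * first_ratio

# e.g. 50% of 8 GB occupied now -> estimated 60% occupied soon
# -> 3.2 GB estimated available -> 2.56 GB budgeted for the target task
budget = estimate_allocation(0.5, 8.0)
```

The same function can be applied once with processor figures and once with memory figures to obtain both estimated allocations.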
  • the server obtains the estimated amount of processor resources allocated when the first terminal device executes the target task according to the amount of processor resources occupied by the first terminal device.
  • the server may also use other methods to obtain the estimated amount of processor resources allocated when the first terminal device executes the target task, and each implementation method is not exhaustively listed here.
  • the updated split nodes are different from the pre-updated split nodes.
  • After the server generates the updated split node corresponding to the target neural network, the previously determined split node can be randomly adjusted, that is, the position of the split node in the target neural network is randomly moved forward or backward, so as to update the split node again and obtain the final updated split node corresponding to the target neural network.
  • Only the position of the first split node in the target neural network may be randomly adjusted, or only the position of the second split node in the target neural network may be randomly adjusted; the positions of both the first split node and the second split node in the target neural network may also be randomly adjusted.
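The random forward/backward adjustment of a split node can be sketched as below; the maximum shift range and the clamping bounds (keeping at least one layer on each side) are illustrative assumptions not specified by the text.

```python
import random

def jitter_split_node(split_node, num_layers, max_shift=2, rng=None):
    """Randomly move a split node forward or backward by up to max_shift
    layer positions, clamped so at least one layer stays on each side."""
    rng = rng or random.Random()
    shift = rng.randint(-max_shift, max_shift)  # uniform in [-max_shift, max_shift]
    return max(1, min(num_layers - 1, split_node + shift))
```

A fresh jitter at each update makes the split position, and hence the intermediate result crossing the network, harder for an observer to predict.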
  • FIG. 10 is a schematic flow chart of updating the split nodes corresponding to the target neural network in the embodiment of the present application. FIG. 10 takes the case where the first neural network, the second neural network and the third neural network are obtained by splitting the target neural network as an example. As shown in FIG. 10: C1. The first terminal device collects multiple parameters, which may include the amount of processor resources occupied by the first terminal device, the amount of memory resources occupied by the first terminal device, the number of processes currently running on the first terminal device, and the running time of each process on the first terminal device, and sends these parameters to the server. C2. The server determines, according to the received parameters, the estimated amount of processor resources and the estimated amount of memory resources allocated by the first terminal device when executing the target task. C3. The server obtains the updated split nodes corresponding to the target neural network according to the estimated amount of processor resources and the estimated amount of memory resources allocated by the first terminal device when executing the target task. C4. The server randomly moves the updated split nodes forward or backward to obtain the final updated split nodes corresponding to the target neural network. C5. According to the final updated split nodes corresponding to the target neural network, the server determines from the target neural network the n neural network layers included in the first neural network, the m neural network layers included in the second neural network, and the s neural network layers included in the third neural network. C6. The server sends the n neural network layers included in the first neural network and the s neural network layers included in the third neural network to the first terminal device, so as to deploy the first neural network and the third neural network on the first terminal device, and deploys the second neural network on the server. It should be understood that the example in FIG. 10 is only for the convenience of understanding this solution, and is not used to limit this solution.
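Step C5's derivation of the three sub-networks from the final split nodes is, in effect, slicing the target network's ordered layer sequence. The sketch below uses plain Python lists of layer names as stand-ins for actual neural network layers.

```python
def split_target_network(layers, first_split, second_split):
    """Split the target neural network's ordered layer list into the
    first (n layers), second (m layers) and third (s layers) parts."""
    assert 0 <= first_split <= second_split <= len(layers)
    first = layers[:first_split]               # n layers, for the terminal device
    second = layers[first_split:second_split]  # m layers, for the server
    third = layers[second_split:]              # s layers, back on the device
    return first, second, third

layers = [f"layer{i}" for i in range(8)]
first, second, third = split_target_network(layers, 2, 6)
# n=2 layers and s=2 layers go to the device, m=4 layers stay on the server
```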
  • Further, the server may use different estimation algorithms at different moments to obtain, from the available amount of processor resources of the first terminal device, the estimated amount of processor resources allocated when the first terminal device executes the target task, so as to increase the probability that the estimated amounts of processor resources corresponding to different moments are different; correspondingly, the server may use different estimation methods at different moments to obtain, from the available amount of memory resources of the first terminal device, the estimated amount of memory resources allocated when the first terminal device executes the target task, so as to increase the probability that the estimated amounts of memory resources corresponding to different moments are different. This increases the probability that the number of neural network layers deployed from the target neural network differs between different moments.
  • Since the computer resources that the same terminal device can allocate to the target task may be different at different moments, making the determining factors of the first neural network and the third neural network include the occupancy of processor resources of the first terminal device and/or the occupancy of memory resources of the first terminal device helps ensure that the neural networks deployed on the first terminal device match the computing power of the first terminal device, avoiding increasing the computing pressure of the first terminal device in the process of performing the target task.
  • the server sends the n neural network layers included in the first neural network and the s neural network layers included in the third neural network to the first terminal device.
  • In one implementation, the server can split the target neural network into the first neural network, the second neural network and the third neural network according to the two updated split nodes; the server sends the n neural network layers included in the first neural network and the s neural network layers included in the third neural network to the first terminal device, so that the first neural network and the third neural network are deployed on the first terminal device, while the m neural network layers included in the second neural network are deployed on the server.
  • In the embodiment of the present application, the server may send the updated first neural network and the updated third neural network to the first terminal device, further increasing the difficulty for an attacker to determine the neural network deployed on the first terminal device, thereby further increasing the difficulty of deducing the original data to be processed from the intermediate result, which is conducive to further improving the degree of privacy protection for user data.
  • the first terminal device inputs the data to be processed into the first neural network to obtain a third intermediate result generated by the first neural network.
  • the first neural network includes n neural network layers.
  • It should be noted that steps 407 to 413 are optional steps. If step 407 is not performed, steps 408 to 413 do not need to be performed; that is, for different moments of the same terminal device, the number of neural network layers deployed from the target neural network may not be updated, so the first neural network and the third neural network do not need to be redeployed on the first terminal device.
  • If step 407 is performed, then for different moments of the same terminal device, the number of neural network layers deployed from the target neural network will be updated.
  • If step 408 is performed, the first terminal device can receive the n neural network layers included in the first neural network and the s neural network layers included in the third neural network, and store the received n neural network layers and s neural network layers locally.
  • If step 407 is executed, step 408 is not executed, and step 401 is executed, that is, the first neural network, the second neural network and the third neural network are obtained by splitting the target neural network, then since in step 401 the server initially deploys the first neural network and the third neural network obtained from the target neural network on a new first terminal device, and the basis for determining the two split nodes corresponding to the first neural network and the third neural network can be the maximum estimated amount of computer resources that the first terminal device can allocate ("the maximum estimated amount of computer resources allocated by the first terminal device" includes "the maximum estimated amount of processor resources allocated by the first terminal device" and "the maximum estimated amount of memory resources allocated by the first terminal device"), the value of N can be greater than or equal to n, and the value of S can be greater than or equal to s. In this case, the server may send second indication information to the first terminal device, the second indication information being used to inform the first terminal device of the two updated split nodes corresponding to the target neural network; the first terminal device can then determine the updated first neural network from the stored first neural network and the updated third neural network from the stored third neural network according to the second indication information, so as to implement the deployment of the first neural network and the third neural network on the first terminal device.
  • If step 407 is executed, and neither step 408 nor step 401 is executed, that is, the first neural network, the second neural network and the third neural network are obtained by splitting the target neural network, then in one implementation, if the first P neural network layers of the target neural network and the last Q neural network layers of the target neural network are stored on the first terminal device, after the server obtains the updated split nodes corresponding to the target neural network, it can send the second indication information to the first terminal device, and the first terminal device can determine the first neural network from the first P neural network layers and the third neural network from the last Q neural network layers according to the received second indication information, where P is an integer greater than or equal to n, and Q is an integer greater than or equal to s.
  • In another implementation, the server may send the second indication information to the first terminal device, the second indication information being used to inform the first terminal device of the two updated split nodes corresponding to the target neural network, so that the first terminal device can determine the first neural network and the third neural network from the target neural network according to the second indication information, and the server can determine the second neural network from the target neural network according to the updated split nodes; that is, the first neural network, the second neural network and the third neural network are respectively deployed on the first terminal device and the server.
  • After deploying the first neural network and the third neural network, the first terminal device can input the data to be processed into the first neural network to obtain the third intermediate result generated by the first neural network.
  • the concept of the "third intermediate result” is similar to the concept of the "first intermediate result", and will not be repeated here.
  • step 409 may be executed multiple times after step 401 is executed once.
  • the first terminal device sends the third intermediate result to the server.
  • the server inputs the third intermediate result into the second neural network to obtain a fourth intermediate result generated by the second neural network.
  • The second neural network includes m neural network layers.
  • the server sends the fourth intermediate result to the first terminal device.
  • the first terminal device inputs the fourth intermediate result into the third neural network to obtain a prediction result generated by the third neural network and corresponding to the data to be processed.
  • the third neural network includes s neural network layers.
  • Steps 410 to 413 can refer to the descriptions of steps 403 to 406; the difference is that the "first intermediate result" in steps 403 to 406 is replaced by the "third intermediate result" in steps 410 to 413, and the "second intermediate result" in steps 403 to 406 is replaced by the "fourth intermediate result" in steps 410 to 413; the meaning of the "fourth intermediate result" is similar to that of the "second intermediate result" and is not described here.
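Steps 409 to 413 describe a device→server→device round trip. A minimal sketch, with simple callables standing in for the three neural networks and direct function calls standing in for network transmission (all stage names here are illustrative):

```python
def run_split_inference(first_nn, second_nn, third_nn, data):
    """The device computes the third intermediate result, the server turns
    it into the fourth intermediate result, and the device produces the
    final prediction. first_nn/third_nn run on the device, second_nn on
    the server."""
    third_intermediate = first_nn(data)                  # device side (n layers)
    # --- only third_intermediate crosses the network to the server ---
    fourth_intermediate = second_nn(third_intermediate)  # server side (m layers)
    # --- fourth_intermediate is sent back to the device ---
    return third_nn(fourth_intermediate)                 # device side (s layers)

# stand-in stages: each "network" just applies a simple transform
prediction = run_split_inference(
    lambda x: [v * 2 for v in x],  # placeholder first neural network
    lambda x: [v + 1 for v in x],  # placeholder second neural network
    lambda x: sum(x),              # placeholder third neural network
    [1.0, 2.0],
)
# [1, 2] -> [2, 4] -> [3, 5] -> 8.0
```

The point of the structure is that neither the raw input nor the final prediction ever crosses the network; only the two intermediate results do.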
  • FIG. 11 is a schematic diagram of the splitting nodes corresponding to the target neural network in the data processing method provided by the embodiment of the present application.
  • FIG. 11 shows the split nodes before the update and the split nodes after the update: the first split node before the update is point X in FIG. 11, the first split node after the update is point Y in FIG. 11, and the second split node both before and after the update is point H in FIG. 11.
  • Since the operation of the second neural network is completed by the server, the computer resources of the first terminal device occupied during the calculation of the entire target neural network can be reduced; the first terminal device sends the first intermediate result to the server only after the data to be processed has been calculated by the first neural network, which avoids leakage of the original data to be processed and improves the privacy protection of user data; and the calculation of the last, third neural network in the entire target neural network is also performed on the first terminal device side, which is conducive to further improving the degree of protection of user data privacy.
  • An attacker may acquire the intermediate results sent between the first terminal device and the server and invert them to obtain the original data to be processed. Since, at the two different moments, the number of neural network layers deployed on the first terminal device changes, different intermediate results are sent between the first terminal device and the server at different moments, which further increases the difficulty for the attacker to obtain the original data to be processed and thus further improves the degree of protection of user data privacy.
  • the target neural network includes the first neural network and the second neural network
  • FIG. 12 is a schematic flowchart of the data processing method provided in the embodiment of the present application.
  • the data processing method provided in the embodiment of the present application may include:
  • the server sends the first neural network to the first terminal device.
  • the second neural network is deployed on the server.
  • the first neural network includes N neural network layers
  • the second neural network includes M neural network layers.
  • the first neural network and the second neural network form the target neural network.
  • the first terminal device inputs the data to be processed into the first neural network, and obtains a first intermediate result generated by the first neural network.
  • the first terminal device sends the first intermediate result to the server.
  • Steps 1201 to 1203 can refer to the description of steps 401 to 403 in the embodiment corresponding to FIG. 4; the difference is that in steps 401 to 403 the target neural network includes the first neural network, the second neural network and the third neural network, while in steps 1201 to 1203 the target neural network includes the first neural network and the second neural network, and the first neural network is located before the second neural network.
  • The first neural network and the second neural network are obtained by splitting the target neural network: the first neural network refers to the neural network layers located before the target split node in the target neural network, and the second neural network refers to the neural network layers located after the target split node in the target neural network; for the understanding of the concepts of "the first neural network is located before the second neural network" and "the first intermediate result", refer to the description in the embodiment corresponding to FIG. 4, which will not be repeated here.
  • the server inputs the first intermediate result into the second neural network to obtain a prediction result generated by the second neural network corresponding to the data to be processed, and the type of information indicated by the prediction result corresponds to the type of the target task.
  • If the first intermediate result was transmitted in encrypted form, the server may decrypt the encrypted first intermediate result to obtain the first intermediate result, and input the first intermediate result into the second neural network to obtain the prediction result generated by the second neural network corresponding to the data to be processed (that is, the prediction result corresponding to the data to be processed output by the entire target neural network); the type of information indicated by the prediction result corresponds to the type of the target task.
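The encrypt-transmit-decrypt handling of the first intermediate result uses a symmetric key shared by the device and the server. The sketch below is purely illustrative of that pattern: the SHA-256 counter-mode keystream is a toy construction for demonstration, not a recommendation; a real deployment would use a vetted scheme such as AES-GCM, and all names here are hypothetical.

```python
import hashlib
import struct

def _keystream(key: bytes, length: int) -> bytes:
    # Derive a pseudorandom byte stream from the key and a block counter.
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + struct.pack(">Q", counter)).digest()
        counter += 1
    return bytes(out[:length])

def xor_cipher(key: bytes, payload: bytes) -> bytes:
    # XOR with the keyed stream; applying it twice restores the payload.
    ks = _keystream(key, len(payload))
    return bytes(a ^ b for a, b in zip(payload, ks))

key = b"shared-device-server-key"              # hypothetical shared key
intermediate = b"\x01\x02serialized-intermediate-result"
wire = xor_cipher(key, intermediate)           # device: encrypt before sending
assert xor_cipher(key, wire) == intermediate   # server: decrypt on receipt
```

Only the ciphertext `wire` would cross the network; the server decrypts it before feeding the intermediate result into the second neural network.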
  • FIG. 13 is a schematic flowchart of a data processing method provided by an embodiment of the present application.
  • FIG. 13 takes the case where the target task performed by the target neural network is voiceprint feature extraction as an example. As shown in FIG. 13: D1. The first terminal device obtains the data to be processed input by the user (that is, the audio data input by the user as shown in FIG. 13). D2. The first terminal device inputs the data to be processed into the first neural network (that is, the first N neural network layers of the target neural network shown in FIG. 13) to obtain the first intermediate result generated by the first neural network. D3. The first terminal device encrypts the first intermediate result and sends the encrypted first intermediate result to the server, so as to implement encrypted transmission of the first intermediate result. D4. The server decrypts the encrypted first intermediate result to obtain the first intermediate result, and inputs the first intermediate result into the second neural network (that is, the last M neural network layers of the target neural network) to obtain the prediction result output by the entire target neural network corresponding to the data to be processed (that is, the voiceprint feature extracted from the input voice data). D5. The server compares each of at least one registered voiceprint feature with the acquired voiceprint feature to determine whether the acquired voiceprint feature matches any of the pre-registered voiceprint features, so as to determine the voiceprint recognition result; the voiceprint recognition result is used to indicate whether the user is an authorized user. D6. The server sends the voiceprint recognition result to the first terminal device. It should be understood that the example in FIG. 13 is only for facilitating understanding of this solution, and is not used to limit this solution.
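Step D5's comparison of the extracted voiceprint feature against the registered features is typically a similarity test over embedding vectors; a sketch assuming cosine similarity with an illustrative threshold (the metric and threshold are assumptions, not specified by the text):

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def is_authorized(voiceprint, registered_voiceprints, threshold=0.8):
    """Return True if the extracted voiceprint matches any registered one."""
    return any(cosine_similarity(voiceprint, ref) >= threshold
               for ref in registered_voiceprints)

# hypothetical registered embeddings for two authorized users
registered = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
assert is_authorized([0.95, 0.05, 0.0], registered)
assert not is_authorized([0.0, 0.0, 1.0], registered)
```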
  • It should be noted that step 1201 is an optional step. If step 1201 is not performed, the way the server deploys the first neural network on the first terminal device can refer to the description of step 402 in the embodiment corresponding to FIG. 4, and details are not repeated here.
  • In step 1205, the server acquires an updated split node corresponding to the target neural network, and the updated split node indicates an updated split position in the target neural network; after the update, the first neural network includes n neural network layers and the second neural network includes m neural network layers.
  • Step 1205 can refer to the description of step 407 in the embodiment corresponding to FIG. 4; the difference is that in step 407 two split nodes corresponding to the target neural network are updated, while in step 1205 only one split node corresponding to the target neural network is updated.
  • step 1205 may be executed multiple times after step 1201 is executed once.
  • the server sends the n neural network layers included in the first neural network to the first terminal device.
  • the first terminal device inputs the data to be processed into the first neural network to obtain a third intermediate result generated by the first neural network.
  • the first neural network includes n neural network layers.
  • the first terminal device sends the third intermediate result to the server.
  • the server inputs the third intermediate result into the m neural network layers included in the second neural network, and obtains a prediction result generated by the second neural network corresponding to the data to be processed.
  • steps 1205 to 1209 are optional steps. If step 1205 is not performed, steps 1206 to 1209 do not need to be performed; if step 1205 is performed, then step 1206 is also an optional step.
  • If step 1206 is not performed, the manner in which the server deploys the first neural network on the first terminal device can refer to the description of step 409 in the embodiment corresponding to FIG. 4, and details are not repeated here.
  • Since the operation of the second neural network is completed by the server, the computer resources of the first terminal device occupied during the calculation of the entire neural network can be reduced; the first terminal device sends the first intermediate result to the server only after the data to be processed has been calculated by the first neural network, which avoids leakage of the original data to be processed and improves the privacy protection of user data; in addition, part of the calculation of the entire neural network is performed on the first terminal device side, which is conducive to further improving the degree of protection of user data privacy.
  • FIG. 14 is a schematic structural diagram of a data processing device provided in the embodiment of the present application. The data processing device 1400 is deployed on the first terminal device, the first terminal device is included in a data processing system, and the data processing system also includes a server; the first neural network and the third neural network are deployed on the first terminal device, and the second neural network is deployed on the server. The data processing device 1400 includes: an input module 1401, configured to input data to be processed into the first neural network to obtain the first intermediate result generated by the first neural network; a sending module 1402, configured to send the first intermediate result to the server, the first intermediate result being used by the server to obtain the second intermediate result through the second neural network; a receiving module 1403, configured to receive the second intermediate result sent by the server; the input module 1401 is further configured to input the second intermediate result into the third neural network to obtain the prediction result generated by the third neural network corresponding to the data to be processed; wherein the first neural network, the second neural network and the third neural network form the target neural network, and at two different moments, the first moment and the second moment, the neural networks deployed on the first terminal device have the following changes: the number of neural network layers in the first neural network changes, or the number of neural network layers in the third neural network changes.
  • the first neural network includes N neural network layers
  • the third neural network includes S neural network layers
  • the first neural network includes n neural network layers
  • the third neural network includes s neural network layers, where N is different from n and/or S is different from s
  • the receiving module 1403 is also configured to receive the n neural network layers and s neural network layers sent by the server.
  • FIG. 15 is a schematic structural diagram of a data processing device provided by an embodiment of the present application.
  • the data processing device 1500 is deployed on a server, and the server is included in the data processing system.
  • the data processing system also includes a first terminal device.
  • the first neural network and the third neural network are deployed on the first terminal device, and the second neural network is deployed on the server.
  • The data processing device 1500 includes: a receiving module 1501, configured to receive the first intermediate result sent by the first terminal device, the first intermediate result being obtained based on the data to be processed and the first neural network; an input module 1502, configured to input the first intermediate result into the second neural network to obtain the second intermediate result generated by the second neural network; a sending module 1503, configured to send the second intermediate result to the first terminal device, the second intermediate result being used by the first terminal device to obtain, through the third neural network, the prediction result corresponding to the data to be processed; wherein the first neural network, the second neural network and the third neural network form the target neural network, and at two different moments, the first moment and the second moment, the neural networks deployed on the first terminal device have the following changes: the number of neural network layers in the first neural network changes, or the number of neural network layers in the third neural network changes.
  • the first neural network includes N neural network layers
  • the third neural network includes S neural network layers
  • the first neural network includes n neural network layers
  • the third neural network includes s neural network layers, wherein N and n are different and/or S and s are different
  • The sending module 1503 is also configured to send the n neural network layers and the s neural network layers to the first terminal device.
  • FIG. 16 is a schematic structural diagram of a data processing device provided by an embodiment of the present application.
  • the data processing device 1600 is deployed on the first terminal device, the first terminal device is included in the data processing system, and the data processing system also includes a server; a first neural network is deployed on the first terminal device and a second neural network on the server. The data processing device 1600 includes: an input module 1601, configured to input the data to be processed into the first neural network to obtain a first intermediate result generated by the first neural network.
  • the sending module 1602 is configured to send the first intermediate result to the server, where the first intermediate result is used by the server to obtain, through the second neural network, a prediction result corresponding to the data to be processed; the first neural network and the second neural network form a target neural network, and at two different moments, a first moment and a second moment, the number of neural network layers in the first neural network deployed on the first terminal device changes.
  • at the first moment, the first neural network includes N neural network layers, and at the second moment, the first neural network includes n neural network layers, where N and n are different; the data processing device 1600 also includes a receiving module, configured to receive the first neural network sent by the server.
  • FIG. 17 is a schematic structural diagram of a data processing device provided by an embodiment of the present application.
  • the data processing device 1700 is deployed on a server, and the server is included in a data processing system.
  • the data processing system also includes a first terminal device.
  • the first neural network is deployed on the first terminal device, and the second neural network is deployed on the server.
  • the data processing device 1700 includes: a receiving module 1701, configured to receive the first intermediate result sent by the first terminal device.
  • the first intermediate result is obtained based on the data to be processed and the first neural network; the input module 1702 is configured to input the first intermediate result into the second neural network to obtain a prediction result, generated by the second neural network, corresponding to the data to be processed; the first neural network and the second neural network form a target neural network, and at two different moments, a first moment and a second moment, the number of neural network layers in the first neural network deployed on the first terminal device changes.
  • at the first moment, the first neural network includes N neural network layers, and at the second moment, the first neural network includes n neural network layers, where N and n are different; the device also includes a sending module, configured to send the n neural network layers to the first terminal device.
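The layer redeployment described above can be sketched as follows (layer ids and split positions are invented for illustration): at the second moment the split point moves, and the server pushes to the device only the layers it does not yet hold:

```python
def split(layer_ids, n_on_device):
    # The terminal keeps the first n_on_device layers; the server runs the rest.
    device = layer_ids[:n_on_device]
    server = layer_ids[n_on_device:]
    return device, server

layers = list(range(8))            # ids of the target network's layers

# First moment: N = 3 layers on the terminal device.
device_t1, server_t1 = split(layers, 3)
# Second moment: n = 5 layers; the server sends layers 3 and 4 to the device.
device_t2, server_t2 = split(layers, 5)

newly_sent = [l for l in device_t2 if l not in device_t1]
assert device_t1 == [0, 1, 2]
assert newly_sent == [3, 4]
assert device_t2 + server_t2 == layers   # every layer still runs exactly once
```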
  • the first terminal device 1800 includes: a receiver 1801, a transmitter 1802, a processor 1803, and a memory 1804 (the first terminal device 1800 may have one or more processors 1803; one processor is taken as an example in FIG. 18), where the processor 1803 may include an application processor 18031 and a communication processor 18032.
  • the receiver 1801, the transmitter 1802, the processor 1803 and the memory 1804 may be connected through a bus or in other ways.
  • the memory 1804 may include read-only memory and random-access memory, and provides instructions and data to the processor 1803 .
  • a part of the memory 1804 may also include a non-volatile random access memory (non-volatile random access memory, NVRAM).
  • the memory 1804 stores operating instructions, executable modules or data structures, or a subset or an extended set thereof, where the operating instructions may include various operating instructions for implementing various operations.
  • the processor 1803 controls the operation of the first terminal device.
  • various components of the first terminal device are coupled together through a bus system, where the bus system may include a power bus, a control bus, and a status signal bus in addition to a data bus.
  • the various buses are referred to as the bus system in the figure.
  • the methods disclosed in the foregoing embodiments of the present application may be applied to the processor 1803 or implemented by the processor 1803 .
  • the processor 1803 may be an integrated circuit chip and has a signal processing capability. In the implementation process, each step of the above method may be implemented by an integrated logic circuit of hardware in the processor 1803 or instructions in the form of software.
  • the above-mentioned processor 1803 can be a general-purpose processor, a digital signal processor (DSP), a microprocessor, or a microcontroller, and can further include an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the processor 1803 may implement or execute various methods, steps, and logic block diagrams disclosed in the embodiments of the present application.
  • a general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
  • the steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor.
  • the software module can be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register.
  • the storage medium is located in the memory 1804, and the processor 1803 reads the information in the memory 1804, and completes the steps of the above method in combination with its hardware.
  • the receiver 1801 can be used to receive input digital or character information and to generate signal inputs related to the settings and function control of the first terminal device.
  • the transmitter 1802 can be used to output digital or character information through the first interface; the transmitter 1802 can also be used to send instructions to the disk group through the first interface to modify the data in the disk group; the transmitter 1802 can also include a display device such as a display screen.
  • the processor 1803 is configured to execute the steps executed by the first terminal device in each method embodiment corresponding to FIG. 3 to FIG. 11 .
  • the specific manner in which the processor 1803 performs the aforementioned steps is based on the same concept as the method embodiments corresponding to FIG. 3 to FIG. 11 in this application, and the technical effects are the same as those of those method embodiments; for details, refer to the descriptions in the foregoing method embodiments of this application, which are not repeated here.
  • the processor 1803 is configured to execute the steps executed by the first terminal device in each method embodiment corresponding to FIG. 12 or FIG. 13 .
  • the specific manner in which the processor 1803 performs the aforementioned steps is based on the same concept as the method embodiments corresponding to FIG. 12 or FIG. 13 in this application, and the technical effects are the same as those of those method embodiments; for details, refer to the descriptions in the foregoing method embodiments of this application, which are not repeated here.
  • FIG. 19 is a schematic structural diagram of the server provided in the embodiment of the present application.
  • the server 1900 is implemented by one or more servers.
  • the server 1900 may vary considerably depending on configuration or performance, and may include one or more central processing units (CPU) 1922 (for example, one or more processors), memory 1932, and one or more storage media 1930 (for example, one or more mass storage devices) storing application programs 1942 or data 1944.
  • the memory 1932 and the storage medium 1930 may be temporary storage or persistent storage.
  • the program stored in the storage medium 1930 may include one or more modules (not shown in the figure), and each module may include a series of instruction operations on the server.
  • the central processing unit 1922 may be configured to communicate with the storage medium 1930 , and execute a series of instruction operations in the storage medium 1930 on the server 1900 .
  • the server 1900 can also include one or more power supplies 1926, one or more wired or wireless network interfaces 1950, one or more input and output interfaces 1958, and/or, one or more operating systems 1941, such as Windows ServerTM, Mac OS XTM, UnixTM, LinuxTM, FreeBSDTM, etc.
  • the central processing unit 1922 is configured to execute the steps executed by the server in the respective embodiments corresponding to FIG. 3 to FIG. 11 .
  • the specific manner in which the central processing unit 1922 performs the above-mentioned steps is based on the same concept as the method embodiments corresponding to FIG. 3 to FIG. 11 in this application, and the technical effects are the same as those of those method embodiments; for details, refer to the descriptions in the method embodiments shown above in this application, which are not repeated here.
  • the central processing unit 1922 is configured to execute the steps executed by the server in each embodiment corresponding to FIG. 12 or FIG. 13 .
  • the specific manner in which the central processing unit 1922 performs the above-mentioned steps is based on the same concept as the method embodiments corresponding to FIG. 12 or FIG. 13 in this application, and the technical effects are the same as those of those method embodiments; for details, refer to the descriptions in the method embodiments shown above in this application, which are not repeated here.
  • An embodiment of the present application also provides a computer program product that, when run on a computer, causes the computer to perform the steps performed by the first terminal device in the methods described in the embodiments shown in FIG. 3 to FIG. 11, or causes the computer to perform the steps performed by the server in those methods; or causes the computer to perform the steps performed by the first terminal device in the methods described in the embodiments shown in FIG. 12 or FIG. 13, or causes the computer to perform the steps performed by the server in those methods.
  • An embodiment of the present application also provides a computer-readable storage medium storing a program for signal processing; when the program runs on a computer, it causes the computer to perform the steps performed by the first terminal device, or the steps performed by the server, in the methods described in the embodiments shown in FIG. 3 to FIG. 11, or causes the computer to perform the steps performed by the first terminal device, or the steps performed by the server, in the methods described in the embodiments shown in FIG. 12 or FIG. 13.
  • the embodiment of the present application also provides a data processing system
  • the data processing system may include a first terminal device and a server
  • the first terminal device is the first terminal device described in the embodiment shown in Figure 18
  • the server is the server described in the embodiment shown in FIG. 19.
  • the data processing device provided in the embodiments of the present application may specifically be a chip, and the chip includes a processing unit and a communication unit; the processing unit may be, for example, a processor, and the communication unit may be, for example, an input/output interface, a pin, or a circuit.
  • the processing unit can execute the computer-executable instructions stored in the storage unit, so that the chip performs the data processing methods described in the embodiments shown in FIG. 3 to FIG. 11, or the data processing method described in the embodiment shown in FIG. 12 or FIG. 13.
  • the storage unit is a storage unit in the chip, such as a register, a cache, etc.
  • the storage unit may also be a storage unit located outside the chip in the wireless access device, such as a read-only memory (ROM) or another type of static storage device that can store static information and instructions, or a random access memory (RAM).
  • FIG. 20 is a schematic structural diagram of a chip provided by the embodiment of the present application.
  • the chip can be represented as a neural network processor NPU 200; the NPU 200 is mounted on a host CPU as a coprocessor, and the host CPU assigns tasks to it.
  • the core part of the NPU is the operation circuit 2003, and the operation circuit 2003 is controlled by the controller 2004 to extract matrix data in the memory and perform multiplication operations.
  • the operation circuit 2003 includes multiple processing units (Process Engine, PE).
  • arithmetic circuit 2003 is a two-dimensional systolic array.
  • the arithmetic circuit 2003 may also be a one-dimensional systolic array or other electronic circuits capable of performing mathematical operations such as multiplication and addition.
  • the arithmetic circuit 2003 is a general-purpose matrix processor.
  • the operation circuit fetches the data corresponding to the matrix B from the weight memory 2002, and caches it in each PE in the operation circuit.
  • the operation circuit takes the data of matrix A from the input memory 2001 and performs matrix operation with matrix B, and the obtained partial or final results of the matrix are stored in the accumulator (accumulator) 2008 .
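A toy sketch of this multiply-and-accumulate flow (pure Python, with the accumulator 2008 modeled as a 2-D list of partial sums; matrix values are invented) may make the role of the accumulator concrete:

```python
def matmul_accumulate(A, B):
    # A comes from the input memory, B is cached from the weight memory;
    # partial sums build up in the accumulator, one reduction step at a time,
    # in the style of a systolic-array multiply.
    rows, inner, cols = len(A), len(B), len(B[0])
    acc = [[0.0] * cols for _ in range(rows)]   # the "accumulator 2008"
    for k in range(inner):
        for i in range(rows):
            for j in range(cols):
                acc[i][j] += A[i][k] * B[k][j]  # partial result accumulates
    return acc

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[5.0, 6.0], [7.0, 8.0]]
assert matmul_accumulate(A, B) == [[19.0, 22.0], [43.0, 50.0]]
```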
  • the unified memory 2006 is used to store input data and output data.
  • the weight data is transferred to the weight memory 2002 through a direct memory access controller (DMAC) 2005.
  • the input data is also transferred to the unified memory 2006 through the DMAC.
  • the BIU is the bus interface unit 2010, which is used for the interaction between the AXI bus and both the DMAC and the instruction fetch buffer (IFB) 2009.
  • the bus interface unit (BIU) 2010 is used by the instruction fetch buffer 2009 to obtain instructions from external memory, and is also used by the storage unit access controller 2005 to obtain the original data of the input matrix A or the weight matrix B from external memory.
  • the DMAC is mainly used to move the input data in the external memory DDR to the unified memory 2006 , to move the weight data to the weight memory 2002 , or to move the input data to the input memory 2001 .
  • the vector calculation unit 2007 includes a plurality of calculation processing units, and, if necessary, further processes the output of the operation circuit, for example, with vector multiplication, vector addition, exponential operations, logarithmic operations, and size comparison. It is mainly used for non-convolutional/fully connected layer computations in neural networks, such as batch normalization, pixel-level summation, and upsampling of feature planes.
  • the vector computation unit 2007 can store the vector of the processed output to unified memory 2006 .
  • the vector calculation unit 2007 may apply a linear function and/or a nonlinear function to the output of the operation circuit 2003, for example, performing linear interpolation on the feature plane extracted by a convolutional layer, or accumulating a vector of values to generate an activation value.
  • the vector computation unit 2007 generates normalized values, pixel-level summed values, or both.
  • the vector of processed outputs can be used as an activation input to the arithmetic circuit 2003, for example for use in a subsequent layer in a neural network.
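The non-matrix work attributed to the vector calculation unit above can be illustrated with a small sketch (toy data; the normalization epsilon and the flat plane layout are assumptions):

```python
def normalize(xs, eps=1e-5):
    # Normalize a vector of operation-circuit outputs to zero mean, unit variance.
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    return [(x - mean) / (var + eps) ** 0.5 for x in xs]

def pixel_sum(plane_a, plane_b):
    # Pixel-level summation of two feature planes (flattened to 1-D here).
    return [a + b for a, b in zip(plane_a, plane_b)]

out = normalize([1.0, 3.0])
assert abs(out[0] + 1.0) < 1e-2 and abs(out[1] - 1.0) < 1e-2
assert pixel_sum([1.0, 2.0], [3.0, 4.0]) == [4.0, 6.0]
```

The normalized vector could then feed back into the operation circuit as the activation input of a subsequent layer, as the description states.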
  • An instruction fetch buffer (instruction fetch buffer) 2009 connected to the controller 2004 is used to store instructions used by the controller 2004;
  • the unified memory 2006, the input memory 2001, the weight memory 2002 and the fetch memory 2009 are all On-Chip memories. External memory is private to the NPU hardware architecture.
  • At least one neural network layer in the target neural network is deployed on the first terminal device and the server, and the operation of the neural network layer in the target neural network can be performed by the computing circuit 2003 or the vector calculation unit 2007 executes.
  • the processor mentioned in any of the above-mentioned places may be a general-purpose central processing unit, a microprocessor, an ASIC, or one or more integrated circuits for controlling the program execution of the above-mentioned method in the first aspect.
  • the device embodiments described above are only illustrative; the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, may be located in one place, or may be distributed across multiple network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • the connection relationship between the modules indicates that they have communication connections, which can be specifically implemented as one or more communication buses or signal lines.
  • the essence of the technical solutions of this application, or the part that contributes to the prior art, can be embodied in the form of a software product; the computer software product is stored in a readable storage medium, such as a computer floppy disk, USB flash drive, removable hard disk, ROM, RAM, magnetic disk, or optical disc, and includes several instructions to enable a computer device (which can be a personal computer, a server, or a network device, etc.) to perform the methods described in the embodiments of the present application.
  • all or part of them may be implemented by software, hardware, firmware or any combination thereof.
  • When implemented using software, it may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions.
  • the computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) manner.
  • the computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or a data center integrated with one or more available media.
  • the available medium may be a magnetic medium (such as a floppy disk, a hard disk, or a magnetic tape), an optical medium (such as a DVD), or a semiconductor medium (such as a solid state disk (Solid State Disk, SSD)), etc.


Abstract

A data processing method and related devices, applicable in the field of artificial intelligence. The method includes: a terminal device inputs data to be processed into a first neural network to obtain a first intermediate result, and sends the first intermediate result to a server; the server inputs the first intermediate result into a second neural network to obtain a second intermediate result, and sends the second intermediate result to the terminal device; the terminal device inputs the second intermediate result into a third neural network to obtain a prediction result corresponding to the data to be processed. At two different moments, a first moment and a second moment, the number of neural network layers in the first neural network or the third neural network deployed on the terminal device changes, so that different intermediate results are sent between the terminal device and the server at different moments, further improving the protection of the privacy of user data.

Description

Data processing method and related devices
This application claims priority to Chinese patent application No. 202210115049.6, filed with the China National Intellectual Property Administration on January 30, 2022 and entitled "Data processing method and related devices", which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the field of artificial intelligence, and in particular to a data processing method and related devices.
Background
Artificial intelligence (AI) is a theory, method, technique, and application system that uses digital computers, or machines controlled by digital computers, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain optimal results. In other words, artificial intelligence is a branch of computer science that attempts to understand the essence of intelligence and produce a new kind of intelligent machine that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning, and decision-making.
To guarantee the performance of a neural network on a target task, existing deep-neural-network-based application models often have 10M to 100M parameters. For terminal devices with tight computing resource constraints (for example, smart wearables or smart sensors), the available computing resources are often insufficient to complete the computation of the entire neural network.
At present, a user's data to be processed can be collected on the terminal device side and sent to a server; the server processes the data through a neural network to obtain a prediction result corresponding to the data, and returns the result to the terminal device.
However, because the user's data to be processed needs to be transmitted over the network, and the server can obtain the original data to be processed, the privacy of the user data is weakly protected.
Summary
Embodiments of this application provide a data processing method and related devices. Because the computation of the second neural network is performed by the server, the computing resources of the terminal device occupied by the computation of the entire neural network can be reduced. The terminal device inputs the data to be processed into the first neural network and then sends only the first intermediate result to the server, which avoids leaking the original data to be processed and improves the protection of the privacy of user data; moreover, the computation of the third neural network of the entire network is also performed on the terminal device side, which helps to further improve the protection of the privacy of user data.
To solve the above technical problem, embodiments of this application provide the following technical solutions:
According to a first aspect, an embodiment of this application provides a data processing method, applicable in the field of artificial intelligence. The method is applied to a data processing system that includes a first terminal device and a server. A first neural network and a third neural network are deployed on the first terminal device, and a second neural network is deployed on the server. The first neural network, the second neural network, and the third neural network form a target neural network: the first neural network precedes the second neural network, the third neural network follows the second neural network, and the second neural network lies between the first and third neural networks.
Further, "the first neural network precedes the second neural network" means that when the data to be processed is input into the target neural network and processed by it, the data first passes through the first neural network of the target neural network and then through the second neural network. The notion "the third neural network follows the second neural network" can be understood in the same way and is not repeated here.
The data processing method includes: the first terminal device inputs the data to be processed into the first neural network, obtains a first intermediate result generated by the first neural network, and sends the first intermediate result to the server. The "first intermediate result generated by the first neural network" may also be called the "first latent vector generated by the first neural network"; it includes the data needed by the second neural network for its processing. Further, the first intermediate result includes the data generated by the last neural network layer of the first neural network, or the data generated by the last layer together with data generated by other layers of the first neural network. The server inputs the first intermediate result into the second neural network, obtains a second intermediate result generated by the second neural network, and sends the second intermediate result to the first terminal device; the meaning of the "second intermediate result" can be understood with reference to that of the "first intermediate result" and is not repeated here. The first terminal device inputs the second intermediate result into the third neural network and obtains a prediction result, generated by the third neural network, corresponding to the data to be processed; the type of information indicated by the prediction result corresponds to the type of the target task.
At two different moments, a first moment and a second moment, the neural networks deployed on the first terminal device change as follows: the number of neural network layers in the first neural network changes, or the number of neural network layers in the third neural network changes.
In this implementation, because the computation of the second neural network is performed by the server, the computing resources of the first terminal device occupied by the computation of the entire target neural network can be reduced. The first terminal device inputs the data to be processed into the first neural network and then sends only the first intermediate result to the server, which avoids leaking the original data to be processed and improves the protection of the privacy of user data; the computation of the third neural network of the target network is also performed on the first terminal device side, which further strengthens that protection. An attacker who obtains the intermediate results exchanged between the first terminal device and the server might try to invert them to recover the original data to be processed; however, between the first moment and the second moment the number of neural network layers deployed on the first terminal device changes, so different intermediate results are sent between the first terminal device and the server at different moments, which further increases the difficulty of recovering the original data and thus further improves the protection of the privacy of user data.
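The privacy argument above — different split points produce different intermediate results for the same input — can be illustrated with a toy sketch (the layer is an invented affine map, not the application's network):

```python
def layer(x):
    # A toy neural network layer: elementwise affine map.
    return [2.0 * v + 1.0 for v in x]

def intermediate(data, layers_on_device):
    # The intermediate result the terminal would send after running
    # `layers_on_device` layers locally.
    h = data
    for _ in range(layers_on_device):
        h = layer(h)
    return h

data = [1.0, 2.0]
h_t1 = intermediate(data, 1)   # first moment: one layer on the device
h_t2 = intermediate(data, 3)   # second moment: three layers on the device
assert h_t1 != h_t2            # different moments, different intermediate results
assert h_t1 == [3.0, 5.0]
```

An eavesdropper who learned to invert the first moment's mapping would face a different mapping at the second moment.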
In a possible implementation of the first aspect, at the first moment the first neural network includes N neural network layers and the third neural network includes S neural network layers; at the second moment the first neural network includes n neural network layers and the third neural network includes s neural network layers, where N and n are different and/or S and s are different. The method further includes: the server sends the n neural network layers and the s neural network layers to the first terminal device.
In this implementation, when the number of neural network layers deployed on the first terminal device changes, the server can send the updated first neural network and the updated third neural network to the first terminal device, which makes it harder for an attacker to determine which neural networks are deployed on the first terminal device, and therefore harder to invert the intermediate results back to the original data to be processed, further improving the protection of the privacy of user data.
In a possible implementation of the first aspect, the method further includes: the server determines the first neural network and the third neural network from the target neural network, where the target neural network is a neural network that performs a target task, and the factors for determining the first and third neural networks include the occupancy of the processor resources of the first terminal device and/or the occupancy of the memory resources of the first terminal device when the target task has not yet been executed. Optionally, the determining factors may also include any one or more of the following: the number of processes currently running on the first terminal device, how long each process on the first terminal device has been running, the running state of each process, or other factors, which are not exhaustively listed here. Further, the evaluation indicators of "the occupancy of the memory resources of the first terminal device" may include any one or more of the following: the total memory resources of the first terminal device, the amount of memory resources already occupied, the memory occupancy rate, or other indicators. The evaluation indicators of "the occupancy of the processor resources of the first terminal device" may include any one or more of the following: the processor occupancy rate, the occupancy duration of each processor used to execute the target task, the load of the processors allocated to the target task, the performance of those processors, or other indicators that can reflect the occupancy of the processor resources used to execute the target task, which are not exhaustively listed here.
In this implementation, because a first terminal device usually needs to run multiple applications, the computing resources it can allocate to the target task may differ at different moments even on the same device; taking the processor resource occupancy and/or memory resource occupancy of the first terminal device into account when determining the first and third neural networks helps to ensure that the neural networks deployed on the first terminal device match its computing power, and avoids increasing the computational pressure on the first terminal device while it executes the target task.
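One possible policy consistent with the description above — a busier device keeps fewer layers — can be sketched as follows (the formula and clamping are assumptions for illustration only, not specified in the application):

```python
def layers_for_device(total_layers, cpu_occupancy, mem_occupancy):
    # Occupancies are in 0.0 (idle) .. 1.0 (full); the tighter resource wins.
    load = max(cpu_occupancy, mem_occupancy)
    n = int(round((1.0 - load) * total_layers))
    # Keep at least one layer on each side of the split.
    return max(1, min(total_layers - 1, n))

assert layers_for_device(10, 0.2, 0.3) == 7   # lightly loaded: more layers
assert layers_for_device(10, 0.9, 0.5) == 1   # heavily loaded: fewer layers
```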
In a possible implementation of the first aspect, the data processing system further includes a second terminal device; the number of neural network layers in the first neural network deployed on the first terminal device differs from that in the first neural network deployed on the second terminal device, and/or the number of layers in the third neural network deployed on the first terminal device differs from that deployed on the second terminal device; the first terminal device and the second terminal device are terminal devices of different types, and/or terminal devices of different models of the same type.
In this implementation, because the computing resource configurations of different types of terminal devices, and of different models within the same type, may differ, the computing resources they can allocate to the target task may also differ; deploying different numbers of neural network layers on terminal devices of different types, or of different models within the same type, improves the match between the number of deployed neural network layers and the computing resources of the first terminal device.
In a possible implementation of the first aspect, the first neural network and the second neural network may be obtained by the server splitting the target neural network. The server may store a first mapping relationship that records the number of neural network layers deployed on each type of terminal device; when the server needs to deploy the first and second neural networks on a new first terminal device, it can determine the two split nodes corresponding to that first terminal device from the device's target type and the first mapping relationship. Alternatively, the server may store a second mapping relationship that records the number of neural network layers corresponding to at least one model of each type of terminal device; the server then determines the two split nodes corresponding to the first terminal device from the device's target type, target model, and the second mapping relationship. The factors for determining the first and third neural networks in the first mapping relationship (or the second mapping relationship) may include any one or a combination of the following: an estimate of the processor resources the first terminal device will allocate when executing the target task, an estimate of the memory resources it will allocate, or other factors.
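The first and second mapping relationships described above behave like lookup tables keyed by device type, or by device type and model; the following sketch uses invented device types, models, and layer counts:

```python
# device type -> (layers in first NN, layers in third NN)  — "first mapping"
FIRST_MAPPING = {
    "smartwatch": (2, 1),
    "smartphone": (6, 3),
}
# (device type, model) -> (first, third)                   — "second mapping"
SECOND_MAPPING = {
    ("smartphone", "pro"): (8, 4),
}

def split_nodes(dev_type, model=None):
    # Prefer the per-model entry when one exists; fall back to the per-type one.
    if model is not None and (dev_type, model) in SECOND_MAPPING:
        return SECOND_MAPPING[(dev_type, model)]
    return FIRST_MAPPING[dev_type]

assert split_nodes("smartwatch") == (2, 1)
assert split_nodes("smartphone", "pro") == (8, 4)
assert split_nodes("smartphone", "lite") == (6, 3)
```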
In a possible implementation of the first aspect, the processor resources occupied by the first terminal device when processing data through the first and third neural networks are smaller than the processor resources occupied by the server when processing data through the second neural network, and the memory resources occupied by the first terminal device when processing data through the first and third neural networks are smaller than the memory resources occupied by the server when processing data through the second neural network.
In this implementation, deploying on the server the second neural network, whose data processing occupies more processor resources and more memory resources, further reduces the computing resources of the first terminal device occupied by the computation of the entire network and helps lower the computational pressure on the first terminal device during the target task; and because most of the computation in the data processing of the entire network is performed by the server, a deep neural network with more parameters can be used to generate the prediction result corresponding to the data to be processed, which helps to improve the precision of the prediction results generated by the entire network.
In a possible implementation of the first aspect, the data to be processed may specifically be any of the following: sound data, image data, fingerprint data, ear contour data, sequence data that can reflect user habits, text data, point cloud data, or other types of data. This implementation provides multiple forms of the data to be processed, expands the application scenarios of the solution, and improves its implementation flexibility.
According to a second aspect, an embodiment of this application provides a data processing method, applicable in the field of artificial intelligence. The method is applied to a data processing system that includes a first terminal device and a server; a first neural network is deployed on the first terminal device and a second neural network on the server. The method includes: the first terminal device inputs the data to be processed into the first neural network, obtains a first intermediate result generated by the first neural network, and sends the first intermediate result to the server; the server inputs the first intermediate result into the second neural network and obtains a prediction result, generated by the second neural network, corresponding to the data to be processed. The first neural network and the second neural network form a target neural network, and at two different moments, a first moment and a second moment, the number of neural network layers in the first neural network deployed on the first terminal device changes.
In a possible implementation of the second aspect, at the first moment the first neural network includes N neural network layers, and at the second moment it includes n neural network layers, where N and n are different; the method further includes: the server sends the n neural network layers to the first terminal device.
In a possible implementation of the second aspect, the data processing system further includes a second terminal device, and the number of neural network layers in the first neural network deployed on the first terminal device differs from that deployed on the second terminal device; the first terminal device and the second terminal device are terminal devices of different types, and/or terminal devices of different models of the same type.
The data processing system provided in the second aspect may also perform the steps performed by the data processing system in the possible implementations of the first aspect; for the specific implementation steps of the second aspect and its possible implementations, the meanings of the terms, and the beneficial effects of each implementation, refer to the descriptions of the possible implementations in the first aspect, which are not repeated here.
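The two-part variant of the second aspect can be sketched in the same toy style (the layers are invented scaling maps, not from the application): the terminal runs only the first sub-network, and the server's second sub-network produces the prediction.

```python
def affine(scale):
    # A toy neural network layer: elementwise scaling.
    return lambda xs: [scale * x for x in xs]

target = [affine(2.0), affine(3.0), affine(0.5)]
first, second = target[:1], target[1:]   # split after the first layer

def predict(data):
    h = data
    for layer in first:
        h = layer(h)     # terminal: first intermediate result
    for layer in second:
        h = layer(h)     # server: prediction result
    return h

# 4.0 * 2.0 * 3.0 * 0.5 == 12.0
assert predict([4.0]) == [12.0]
```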
According to a third aspect, an embodiment of this application provides a data processing method, applicable in the field of artificial intelligence. The method is applied to a first terminal device included in a data processing system that also includes a server; a first neural network and a third neural network are deployed on the first terminal device and a second neural network on the server. The method includes: inputting the data to be processed into the first neural network to obtain a first intermediate result generated by the first neural network; sending the first intermediate result to the server, the first intermediate result being used by the server to obtain a second intermediate result through the second neural network; and receiving the second intermediate result sent by the server and inputting it into the third neural network to obtain a prediction result, generated by the third neural network, corresponding to the data to be processed. The first, second, and third neural networks form a target neural network, and at two different moments, a first moment and a second moment, the neural networks deployed on the first terminal device change as follows: the number of neural network layers in the first neural network changes, or the number of neural network layers in the third neural network changes.
The data processing method provided in the third aspect may also perform the steps performed by the first terminal device in the possible implementations of the first aspect; for the specific implementation steps of the third aspect and its possible implementations, and the beneficial effects of each implementation, refer to the descriptions of the possible implementations in the first aspect, which are not repeated here.
According to a fourth aspect, an embodiment of this application provides a data processing method, applicable in the field of artificial intelligence. The method is applied to a server included in a data processing system that also includes a first terminal device; a first neural network and a third neural network are deployed on the first terminal device and a second neural network on the server. The method includes: receiving a first intermediate result sent by the first terminal device, the first intermediate result being obtained based on the data to be processed and the first neural network; inputting the first intermediate result into the second neural network to obtain a second intermediate result generated by the second neural network; and sending the second intermediate result to the first terminal device, the second intermediate result being used by the first terminal device to obtain, through the third neural network, a prediction result corresponding to the data to be processed. The first, second, and third neural networks form a target neural network, and at two different moments, a first moment and a second moment, the neural networks deployed on the first terminal device change as follows: the number of neural network layers in the first neural network changes, or the number of neural network layers in the third neural network changes.
The data processing method provided in the fourth aspect may also perform the steps performed by the server in the possible implementations of the first aspect; for the specific implementation steps of the fourth aspect and its possible implementations, and the beneficial effects of each implementation, refer to the descriptions of the possible implementations in the first aspect, which are not repeated here.
According to a fifth aspect, an embodiment of this application provides a data processing method, applicable in the field of artificial intelligence. The method is applied to a first terminal device included in a data processing system that also includes a server; a first neural network is deployed on the first terminal device and a second neural network on the server. The method includes: inputting the data to be processed into the first neural network to obtain a first intermediate result generated by the first neural network; and sending the first intermediate result to the server, the first intermediate result being used by the server to obtain, through the second neural network, a prediction result corresponding to the data to be processed. The first and second neural networks form a target neural network, and at two different moments, a first moment and a second moment, the number of neural network layers in the first neural network deployed on the first terminal device changes.
The data processing method provided in the fifth aspect may also perform the steps performed by the first terminal device in the possible implementations of the second aspect; for the specific implementation steps of the fifth aspect and its possible implementations, and the beneficial effects of each implementation, refer to the descriptions of the possible implementations in the second aspect, which are not repeated here.
According to a sixth aspect, an embodiment of this application provides a data processing method, applicable in the field of artificial intelligence. The method is applied to a server included in a data processing system that also includes a first terminal device; a first neural network is deployed on the first terminal device and a second neural network on the server. The method includes: receiving a first intermediate result sent by the first terminal device, the first intermediate result being obtained based on the data to be processed and the first neural network; and inputting the first intermediate result into the second neural network to obtain a prediction result, generated by the second neural network, corresponding to the data to be processed. The first and second neural networks form a target neural network, and at two different moments, a first moment and a second moment, the number of neural network layers in the first neural network deployed on the first terminal device changes.
The data processing method provided in the sixth aspect may also perform the steps performed by the server in the possible implementations of the second aspect; for the specific implementation steps of the sixth aspect and its possible implementations, and the beneficial effects of each implementation, refer to the descriptions of the possible implementations in the second aspect, which are not repeated here.
According to a seventh aspect, an embodiment of this application provides a data processing apparatus, applicable in the field of artificial intelligence. The apparatus is deployed on a first terminal device included in a data processing system that also includes a server; a first neural network and a third neural network are deployed on the first terminal device and a second neural network on the server. The apparatus includes: an input module, configured to input the data to be processed into the first neural network to obtain a first intermediate result generated by the first neural network; a sending module, configured to send the first intermediate result to the server, the first intermediate result being used by the server to obtain a second intermediate result through the second neural network; and a receiving module, configured to receive the second intermediate result sent by the server; the input module is further configured to input the second intermediate result into the third neural network to obtain a prediction result, generated by the third neural network, corresponding to the data to be processed. The first, second, and third neural networks form a target neural network, and at two different moments, a first moment and a second moment, the neural networks deployed on the first terminal device change as follows: the number of neural network layers in the first neural network changes, or the number of neural network layers in the third neural network changes.
The data processing apparatus provided in the seventh aspect may also perform the steps performed by the first terminal device in the possible implementations of the first aspect; for the specific implementation steps of the seventh aspect and its possible implementations, and the beneficial effects of each implementation, refer to the descriptions of the possible implementations in the first aspect, which are not repeated here.
According to an eighth aspect, an embodiment of this application provides a data processing apparatus, applicable in the field of artificial intelligence. The apparatus is deployed on a server included in a data processing system that also includes a first terminal device; a first neural network and a third neural network are deployed on the first terminal device and a second neural network on the server. The apparatus includes: a receiving module, configured to receive a first intermediate result sent by the first terminal device, the first intermediate result being obtained based on the data to be processed and the first neural network; an input module, configured to input the first intermediate result into the second neural network to obtain a second intermediate result generated by the second neural network; and a sending module, configured to send the second intermediate result to the first terminal device, the second intermediate result being used by the first terminal device to obtain, through the third neural network, a prediction result corresponding to the data to be processed. The first, second, and third neural networks form a target neural network, and at two different moments, a first moment and a second moment, the neural networks deployed on the first terminal device change as follows: the number of neural network layers in the first neural network changes, or the number of neural network layers in the third neural network changes.
The data processing apparatus provided in the eighth aspect may also perform the steps performed by the server in the possible implementations of the first aspect; for the specific implementation steps of the eighth aspect and its possible implementations, and the beneficial effects of each implementation, refer to the descriptions of the possible implementations in the first aspect, which are not repeated here.
According to a ninth aspect, an embodiment of this application provides a data processing apparatus, applicable in the field of artificial intelligence. The apparatus is deployed on a first terminal device included in a data processing system that also includes a server; a first neural network is deployed on the first terminal device and a second neural network on the server. The apparatus includes: an input module, configured to input the data to be processed into the first neural network to obtain a first intermediate result generated by the first neural network; and a sending module, configured to send the first intermediate result to the server, the first intermediate result being used by the server to obtain, through the second neural network, a prediction result corresponding to the data to be processed. The first and second neural networks form a target neural network, and at two different moments, a first moment and a second moment, the number of neural network layers in the first neural network deployed on the first terminal device changes.
The data processing apparatus provided in the ninth aspect may also perform the steps performed by the first terminal device in the possible implementations of the second aspect; for the specific implementation steps of the ninth aspect and its possible implementations, and the beneficial effects of each implementation, refer to the descriptions of the possible implementations in the second aspect, which are not repeated here.
According to a tenth aspect, an embodiment of this application provides a data processing apparatus, applicable in the field of artificial intelligence. The apparatus is deployed on a server included in a data processing system that also includes a first terminal device; a first neural network is deployed on the first terminal device and a second neural network on the server. The apparatus includes: a receiving module, configured to receive a first intermediate result sent by the first terminal device, the first intermediate result being obtained based on the data to be processed and the first neural network; and an input module, configured to input the first intermediate result into the second neural network to obtain a prediction result, generated by the second neural network, corresponding to the data to be processed. The first and second neural networks form a target neural network, and at two different moments, a first moment and a second moment, the number of neural network layers in the first neural network deployed on the first terminal device changes.
The data processing apparatus provided in the tenth aspect may also perform the steps performed by the server in the possible implementations of the second aspect; for the specific implementation steps of the tenth aspect and its possible implementations, and the beneficial effects of each implementation, refer to the descriptions of the possible implementations in the second aspect, which are not repeated here.
According to an eleventh aspect, an embodiment of this application provides a first terminal device, which may include a processor coupled to a memory; the memory stores program instructions, and when the program instructions stored in the memory are executed by the processor, the steps performed by the first terminal device in the data processing methods of the above aspects are implemented.
According to a twelfth aspect, an embodiment of this application provides a server, which may include a processor coupled to a memory; the memory stores program instructions, and when the program instructions stored in the memory are executed by the processor, the steps performed by the server in the data processing methods of the above aspects are implemented.
According to a thirteenth aspect, an embodiment of this application provides a data processing system, which may include a first terminal device and a server; the first terminal device is configured to perform the steps performed by the first terminal device in the method of the first aspect, and the server is configured to perform the steps performed by the server in that method; or the first terminal device is configured to perform the steps performed by the first terminal device in the method of the second aspect, and the server is configured to perform the steps performed by the server in that method.
According to a fourteenth aspect, an embodiment of this application provides a computer-readable storage medium storing a computer program; when the program runs on a computer, it causes the computer to perform the steps performed by the first terminal device, or the steps performed by the server, in the data processing methods of the above aspects.
According to a fifteenth aspect, an embodiment of this application provides a computer program product that includes a program; when the program runs on a computer, it causes the computer to perform the steps performed by the first terminal device, or the steps performed by the server, in the data processing methods of the above aspects.
According to a sixteenth aspect, an embodiment of this application provides a circuit system that includes a processing circuit, the processing circuit being configured to perform the steps performed by the first terminal device, or the steps performed by the server, in the data processing methods of the above aspects.
According to a seventeenth aspect, an embodiment of this application provides a chip system that includes a processor, configured to implement the functions involved in the above aspects, for example, sending or processing the data and/or information involved in the above methods. In a possible design, the chip system further includes a memory configured to store the program instructions and data necessary for the server or the communication device. The chip system may consist of chips, or may include chips and other discrete components.
Brief Description of the Drawings
FIG. 1a is a schematic structural diagram of an artificial intelligence main framework according to an embodiment of this application;
FIG. 1b is a diagram of an application scenario of the data processing method according to an embodiment of this application;
FIG. 2a is a system architecture diagram of the data processing system according to an embodiment of this application;
FIG. 2b is a system architecture diagram of the data processing system according to an embodiment of this application;
FIG. 3 is a schematic flowchart of the data processing method according to an embodiment of this application;
FIG. 4 is a schematic flowchart of the data processing method according to an embodiment of this application;
FIG. 5 is a schematic diagram of two split nodes corresponding to the target neural network in the data processing method according to an embodiment of this application;
FIG. 6 is a schematic diagram of the first intermediate result in the data processing method according to an embodiment of this application;
FIG. 7 is another schematic diagram of the first intermediate result in the data processing method according to an embodiment of this application;
FIG. 8 is another schematic diagram of the second intermediate result in the data processing method according to an embodiment of this application;
FIG. 9 is a schematic flowchart of the data processing method according to an embodiment of this application;
FIG. 10 is a schematic flowchart of updating the split nodes corresponding to the target neural network in an embodiment of this application;
FIG. 11 is a schematic diagram of the split nodes corresponding to the target neural network in the data processing method according to an embodiment of this application;
FIG. 12 is a schematic flowchart of the data processing method according to an embodiment of this application;
FIG. 13 is a schematic flowchart of the data processing method according to an embodiment of this application;
FIG. 14 is a schematic structural diagram of the data processing apparatus according to an embodiment of this application;
FIG. 15 is a schematic structural diagram of the data processing apparatus according to an embodiment of this application;
FIG. 16 is a schematic structural diagram of the data processing apparatus according to an embodiment of this application;
FIG. 17 is a schematic structural diagram of the data processing apparatus according to an embodiment of this application;
FIG. 18 is a schematic structural diagram of the first terminal device according to an embodiment of this application;
FIG. 19 is a schematic structural diagram of the server according to an embodiment of this application;
FIG. 20 is a schematic structural diagram of the chip according to an embodiment of this application.
具体实施方式
本申请的说明书和权利要求书及上述附图中的术语“第一”、“第二”等是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的术语在适当情况下可以互换,这仅仅是描述本申请的实施例中对相同属性的对象在描述时所采用的区分方式。此外,术语“包括”和“具有”以及他们的任何变形,意图在于覆盖不排他的包含,以便包含一系列单元的过程、方法、系统、产品或设备不必限于那些单元,而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它单元。
下面结合附图,对本申请的实施例进行描述。本领域普通技术人员可知,随着技术的发展和新场景的出现,本申请实施例提供的技术方案对于类似的技术问题,同样适用。
首先对人工智能系统总体工作流程进行描述,请参见图1a,图1a示出的为人工智能主体框架的一种结构示意图,下面从“智能信息链”(水平轴)和“IT价值链”(垂直轴)两个维度对上述人工智能主体框架进行阐述。其中,“智能信息链”反映从数据的获取到处理的一系列过程。举例来说,可以是智能信息感知、智能信息表示与形成、智能推理、智能决策、智能执行与输出的一般过程。在这个过程中,数据经历了“数据—信息—知识—智慧”的凝练过程。“IT价值链”从人工智能的底层基础设施、信息(提供和处理技术实现)到系统的产业生态过程,反映人工智能为信息技术产业带来的价值。
(1)基础设施
基础设施为人工智能系统提供计算能力支持,实现与外部世界的沟通,并通过基础平台实现支撑。通过传感器与外部沟通;计算能力由智能芯片提供,该智能芯片具体可以采用中央处理器(central processing unit,CPU)、嵌入式神经网络处理器(neural-network processing unit,NPU)、图形处理器(graphics processing unit,GPU)、专用集成电路(application specific integrated circuit,ASIC)或现场可编程门阵列(field programmable gate array,FPGA)等硬件加速芯片;基础平台包括分布式计算框架及网络等相关的平台保障和支持,可以包括云存储和计算、互联互通网络等。举例来说,传感器和外部沟通获取数据,这些数据提供给基础平台提供的分布式计算系统中的智能芯片进行计算。
(2)数据
基础设施的上一层的数据用于表示人工智能领域的数据来源。数据涉及到图形、图像、语音、文本,还涉及到传统设备的物联网数据,包括已有系统的业务数据以及力、位移、 液位、温度、湿度等感知数据。
(3)数据处理
数据处理通常包括数据训练,机器学习,深度学习,搜索,推理,决策等方式。
其中,机器学习和深度学习可以对数据进行符号化和形式化的智能信息建模、抽取、预处理、训练等。
推理是指在计算机或智能系统中,模拟人类的智能推理方式,依据推理控制策略,利用形式化的信息进行机器思维和求解问题的过程,典型的功能是搜索与匹配。
决策是指智能信息经过推理后进行决策的过程,通常提供分类、排序、预测等功能。
(4)通用能力
对数据经过上面提到的数据处理后,进一步基于数据处理的结果可以形成一些通用的能力,比如可以是算法或者一个通用系统,例如,翻译,文本的分析,计算机视觉的处理,语音识别,图像的识别等等。
(5)智能产品及行业应用
智能产品及行业应用指人工智能系统在各领域的产品和应用,是对人工智能整体解决方案的封装,将智能信息决策产品化、实现落地应用,其应用领域主要包括:智能终端、智能制造、智能交通、智能家居、智能医疗、智能安防、自动驾驶、智慧城市等。本申请实施例可以应用于人工智能领域的各种领域中,具体可以应用于第一终端设备利用神经网络进行数据处理的应用场景中,具体示例如下。
一、智能终端领域
作为示例,例如在智能终端领域中,前述智能终端具体可以表现为手环、手表、耳机、眼镜等智能穿戴设备,也可以表现为手机、平板等智能终端。智能终端上可以配置有人脸识别功能,当用户想要解锁智能终端、打开智能终端上的隐私数据或者执行其他操作时,智能终端可以获取当前用户的脸部图像,进而获取与当前用户的脸部图像对应的识别结果,在确定当前用户是已经注册的用户的情况下,才会触发执行对应的操作,智能终端上也可以配置有其他功能,此处不再一一进行列举。
二、智能家居领域
作为示例,例如在智能家居领域中,前述智能家居具体可以表现为扫地机器人、空调、灯、热水器、冰箱或其他类型的智能家居等。当用户采用声音的方式向智能家居发出控制指令时,智能家居可以获取与用户声音对应的声纹识别结果,在确定发出声音的用户是特定用户的情况下,才会触发智能家居执行与控制指令对应的操作。
为更直观地理解本方案,请参阅图1b,图1b为本申请实施例提供的数据处理方法的一种应用场景图,如图1b所示,当用户采用声音的方式向图1b中示出的空调(也即智能家居的一个示例)发出“打开空调”的指令时,空调可以获取前述控制指令对应的声纹识别结果,在确定发出“打开空调”这一声音指令的用户为具有空调的控制权限的用户的情况下,执行打开空调的操作,应理解,图1b中的举例为方便理解本方案的应用场景,不用于限定本方案。
三、自动驾驶领域
作为示例,例如在自动驾驶领域中,车辆上可以配置有人脸识别功能,车辆获取用户脸部的图像数据,并获取与用户脸部的图像数据对应的识别结果,在确定当前用户是具有车辆启动权限的用户的情况下,才会触发启动车辆。
需要说明的是,上述种种举例仅为方便理解本申请实施例的应用场景,在其他很多应用场景中,终端设备也会需要利用神经网络进行数据处理,此处举例不用于限定本申请实施例的应用场景。在上述种种场景中,为了能够在减少整个神经网络的计算过程中对第一终端设备的计算机资源的占用的同时,能够提高对用户数据的隐私性保护程度,可以采用本申请实施例提供的数据处理方法。
先结合图2a和图2b对本申请实施例提供的数据处理系统进行介绍。在一种系统架构中,请先参阅图2a,图2a为本申请实施例提供的数据处理系统的一种系统架构图。在图2a中,数据处理系统可以包括训练设备210、数据库220、终端设备230和服务器240,终端设备230中包括第一计算模块,服务器240中包括第二计算模块。
其中,在目标神经网络201的训练阶段,数据库220中存储有训练数据集合,训练设备210生成用于执行目标任务的目标神经网络201,目标神经网络201中包括多个神经网络层;训练设备210利用数据库220中的训练数据集合对目标神经网络201进行迭代训练,得到训练后的目标神经网络201。
服务器240可以获取到训练后的目标神经网络201,服务器240将训练后的目标神经网络201中的一部分神经网络层部署于终端设备230的第一计算模块中,将训练后的目标神经网络201中的另一部分神经网络层部署于服务器240的第二计算模块中。
在目标神经网络201的推理阶段,终端设备230中的第一计算模块执行目标神经网络201中的一部分数据计算,服务器240中的第二计算模块执行目标神经网络201中的另一部分数据计算,以减少整个神经网络的计算过程中对终端设备230的计算机资源的占用。
在另一种系统架构中,请参阅图2b,图2b为本申请实施例提供的数据处理系统的一种系统架构图。在图2b中,数据处理系统可以包括训练设备210、数据库220、终端设备230、第一服务器241和第二服务器242,终端设备230中包括第一计算模块,第二服务器242中包括第二计算模块。
图2b和图2a的区别在于,在图2a示出的系统架构中,服务器240既用于执行目标神经网络201的多个神经网络层的分配操作,其中的第二计算模块又用于完成目标神经网络201中的一部分神经网络层的计算。在图2b示出的系统架构中,第一服务器241和第二服务器242是两个独立的设备,第一服务器241用于执行目标神经网络201的多个神经网络层的分配操作,第二服务器242中的第二计算模块用于完成目标神经网络201中的一部分神经网络层的计算。
本申请的一些实施例中,请参阅图2a和图2b,“用户”可以直接与终端设备230交互,也即终端设备230可以直接将整个目标神经网络201输出的预测结果展示给“用户”,值得注意的是,图2a和图2b仅是本发明实施例提供的数据处理系统的两种架构示意图,图中所示设备、器件、模块等之间的位置关系不构成任何限制。例如,在本申请的另一些实施例中,终端设备230和客户设备也可以为分别独立的设备,客户设备用于将整个目标神经网络201输出的预测结果展示给“用户”,终端设备230配置有输入/输出(in/out,I/O)接口,终端设备230通过I/O接口与客户设备进行数据交互。
进一步地,在目标神经网络201的推理阶段,在一种实现方式中,目标神经网络201可以包括第一神经网络、第二神经网络和第三神经网络。
更进一步地,第一神经网络包括目标神经网络201的前多个神经网络层,第三神经网络包括目标神经网络201中后多个神经网络层。也即第一神经网络位于第二神经网络之前,第三神经网络位于第二神经网络之后,第二神经网络位于第一神经网络和第三神经网络之间。第一终端设备上部署第一神经网络和第三神经网络,服务器上部署第二神经网络。
在另一种实现方式中,目标神经网络201可以被拆分为两部分,该两部分分别包括第一神经网络和第二神经网络,第一神经网络为目标神经网络201的一个子神经网络,第二神经网络为目标神经网络201的另一个子神经网络,第一神经网络位于第二神经网络之前。第一终端设备上部署上述第一神经网络,服务器上部署上述第二神经网络。
当目标神经网络201采用上述两种不同的拆分方式时,第一终端设备和服务器的处理流程不同,以下分别对上述两种拆分方式的具体实现流程进行描述。
一、目标神经网络包括第一神经网络、第二神经网络和第三神经网络
本申请实施例中,为了更直观地理解本方案,请参阅图3,图3为本申请实施例提供的数据处理方法的一种流程示意图。如图3所示,目标神经网络包括第一神经网络、第二神经网络和第三神经网络,第一终端设备上部署有前述第一神经网络和前述第三神经网络,服务器上部署有前述第二神经网络。A1、第一终端设备将原始的待处理数据输入第一神经网络中,得到第一神经网络生成的第一中间结果。A2、第一终端设备将第一中间结果发送至服务器。A3、服务器将第一中间结果输入第二神经网络,得到第二神经网络生成的第二中间结果,将第二中间结果发送至第一终端设备。A4、第一终端设备将第二中间结果输入第三神经网络,得到第三神经网络生成的与待处理数据对应的预测结果;应理解,图3中的示例仅为方便理解本方案,不用于限定本方案。具体的,请参阅图4,图4为本申请实施例提供的数据处理方法的一种流程示意图,本申请实施例提供的数据处理方法可以包括:
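上述A1至A4的流程可以用如下示意性代码理解:用若干个简化的“层”模拟目标神经网络,并按第一神经网络(终端侧)、第二神经网络(服务器侧)、第三神经网络(终端侧)的顺序分段计算。其中层的个数、拆分位置和数值均为便于说明而假设的,并非对实际实现的限定:

```python
# 示意:目标神经网络被拆分为第一/第二/第三神经网络,
# 第一、第三神经网络部署于第一终端设备,第二神经网络部署于服务器
def make_layer(scale):
    # 每个"神经网络层"简化为:对输入逐元素乘以scale后做ReLU
    return lambda x: [max(v * scale, 0.0) for v in x]

layers = [make_layer(s) for s in (1.0, 0.5, 2.0, 1.5, 0.8)]
first_nn, second_nn, third_nn = layers[:2], layers[2:4], layers[4:]

def run(part, x):
    for layer in part:
        x = layer(x)
    return x

# A1:第一终端设备基于待处理数据得到第一中间结果
data = [1.0, -2.0, 3.0]
inter1 = run(first_nn, data)      # 在第一终端设备上计算
# A2/A3:服务器基于第一中间结果得到第二中间结果
inter2 = run(second_nn, inter1)   # 在服务器上计算
# A4:第一终端设备基于第二中间结果得到预测结果
pred = run(third_nn, inter2)      # 在第一终端设备上计算

# 分段计算与整网一次性计算得到的预测结果一致
assert pred == run(layers, data)
```

可以看到,分段计算与整网计算的结果一致,区别仅在于各段在哪个设备上执行,以及设备间只传递中间结果而非原始数据。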
401、服务器将第一神经网络和第三神经网络发送给第一终端设备,其中,服务器上部署有第二神经网络,在第一时刻,第一神经网络包括N个神经网络层,第二神经网络包括M个神经网络层,第三神经网络包括S个神经网络层,第一神经网络、第二神经网络和第三神经网络组成目标神经网络。
本申请的一些实施例中,服务器可以确定与第一终端设备对应的第一神经网络中的神经网络层的数量和第三神经网络层的数量,在第一时刻,第一神经网络包括N个神经网络层,第二神经网络包括M个神经网络层,第三神经网络包括S个神经网络层,第一神经网络、第二神经网络和第三神经网络组成目标神经网络,N、M和S均为大于或等于1的整数。
服务器可以将第一神经网络和第三神经网络发送给第一终端设备,第一终端设备接收并存储第一神经网络和第三神经网络,以实现在第一终端设备上部署第一神经网络和第三神经网络,服务器上部署第二神经网络。
需要说明的是,服务器还可以通过其他方式将第一神经网络和第三神经网络部署于第 一终端设备上,例如利用可移动的存储设备将第一神经网络和第三神经网络部署于第一终端设备上,本申请实施例中不对部署方式进行穷举。
此外,本申请实施例中的目标神经网络可以为进行过预处理之后的神经网络,该预处理可以为剪枝、蒸馏或其他用于减少标准的神经网络的参数量的处理方式等,此处不做穷举。或者,本申请实施例中的目标神经网络也可以为标准的神经网络,目标神经网络的具体表现形式可以结合实际应用场景确定,此处不做限定。
其中,执行步骤401的服务器可以为图2a示出的数据处理系统中的服务器240,也可以为图2b示出的数据处理系统中的第一服务器241。
目标神经网络为用于执行目标任务的神经网络,目标任务可以为任意类型的任务。作为示例,例如目标任务可以为通过识别输入的用户数据以实现鉴权的功能,该鉴权类任务可以为声纹识别、人脸识别、指纹识别、耳纹识别或利用其他类型的用户以实现鉴权的任务。作为另一示例,例如目标任务可以为个性化推荐类的任务,该个性化推荐类任务可以为个性化生成充电方案、个性化推荐食谱、个性化推荐运动方案、个性化推荐影视作品、个性化推荐应用程序等等,此处不做穷举。作为另一示例,目标任务可以为特征提取类的任务,该特征提取类的任务可以为声纹特征的提取、图像特征的提取或文本特征的提取等等。作为另一示例,目标任务还可以为识别语音内容、将文本在不同语言之间翻译、对周围环境中的目标进行识别、图像风格迁移或其他第一终端设备利用神经网络执行的任务等等,本申请实施例中不对目标任务具体表现为哪些类型的任务进行穷举。
目标神经网络可以具体表现为卷积神经网络、循环神经网络、残差神经网络或其他类型的神经网络等,目标神经网络的具体形态可以结合“目标任务”具体为神经类型的任务来确定,此处不做限定。该目标神经网络包括多个神经网络层。
可选地,第一神经网络、第二神经网络和第三神经网络为对目标神经网络进行拆分得到的。在图4对应的实施例中,整个目标神经网络包括的多个神经网络层被拆分为三部分,也即与本实施例中的目标神经网络对应有两个拆分节点,该两个拆分节点包括第一拆分节点和第二拆分节点,第一拆分节点为第一神经网络和第二神经网络的拆分节点,第二拆分节点为第二神经网络和第三神经网络的拆分节点。
“第一神经网络位于第二神经网络之前”指的是在将待处理数据输入目标神经网络中,并通过目标神经网络进行数据处理的过程中,待处理数据会先通过目标神经网络的第一神经网络之后,再经过目标神经网络的第二神经网络。也即目标神经网络中的各个神经网络层的前后顺序是指数据在目标神经网络中正向传播的过程中,数据先经过的神经网络层代表位置靠前的神经网络层,数据后经过的神经网络层代表位置靠后的神经网络层。“第三神经网络位于第二神经网络之后”这一概念也可以借助前述描述进行理解,此处不做赘述。
为了更直观地理解本方案,请参阅图5,图5为本申请实施例提供的数据处理方法中目标神经网络所对应的两个拆分节点的一种示意图,图5中以目标神经网络为残差神经网络(residual networks,ResNets),目标任务为提取声纹特征为例,如图5所示,目标神经网络包括4个残差块(residual block),目标神经网络中位于第一拆分节点之前的神经网络层被称为第一神经网络,位于第一拆分节点和第二拆分节点之间的神经网络层被称为 第二神经网络,位于第二拆分节点之后的神经网络层被称为第三神经网络,也即第一神经网络位于第二神经网络之前,第三神经网络位于第二神经网络之后,应理解,图5中的示例仅为方便理解本方案,不用于限定本方案。进一步地,如下以表格的形式公开图5中神经网络中各个部分的参数量。
神经网络层(Layer) 参数量(Parameters)
第一个卷积层 3*3*32=288
残差块1 (3*3*32*32*2)*3=55296
残差块2 (3*3*64*64*2)*4=294912
残差块3 (3*3*128*128*2)*6=1769472
残差块4 (3*3*256*256*2)*3=3538944
池化层 -
第一个线性连接层 256*8*256=524288
第二个线性连接层 256*256=65536
表1
参阅如上表1可知,整个目标神经网络的数据处理过程中大部分的参数计算消耗在残差块1至残差块4的计算中,第一个卷积层和最后的线性连接(Linear)层的参数量较少,通过前述分析可知,可以将整个目标神经网络中的前多个神经网络层和最后的多个神经网络层部署于第一终端设备,将中间的多个神经网络层部署于服务器,能够大大减少整个目标神经网络的数据处理过程中所消耗的第一终端设备上的计算机资源。
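表1中各部分参数量的口径可以用如下示意代码复现(按3×3卷积核、参数量=3*3*输入通道数*输出通道数估算,第一个卷积层按单通道输入计,残差块内的层数与单元数与表1一致;均为示意性假设):

```python
# 按表1的口径估算各部分的参数量:3*3卷积核,参数量 = 3*3*输入通道*输出通道
def conv_params(c_in, c_out):
    return 3 * 3 * c_in * c_out

def block_params(ch, num_units):
    # 每个残差单元含2个同通道卷积层,共num_units个单元
    return conv_params(ch, ch) * 2 * num_units

parts = {
    "第一个卷积层": conv_params(1, 32),   # 3*3*32=288(单通道输入)
    "残差块1": block_params(32, 3),        # 55296
    "残差块2": block_params(64, 4),        # 294912
    "残差块3": block_params(128, 6),       # 1769472
    "残差块4": block_params(256, 3),       # 3538944
    "第一个线性连接层": 256 * 8 * 256,     # 524288
    "第二个线性连接层": 256 * 256,         # 65536
}

total = sum(parts.values())
# 将残差块1~4部署于服务器时,服务器承担的参数量占比
server = sum(parts[k] for k in ("残差块1", "残差块2", "残差块3", "残差块4"))
print(f"服务器侧参数量占比约 {server / total:.1%}")   # 打印约 90.6%
```

据此可以验证:若将残差块1至残差块4部署于服务器,服务器承担了整个目标神经网络约九成的参数计算,与上段的分析一致。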
本申请实施例中,针对服务器第一次确定在第一终端设备上部署的神经网络层的数量的方式。在一种实现方式中,第一终端设备上部署的第一神经网络和第二终端设备上部署的第一神经网络中的神经网络层的数量不同,和/或,第一终端设备上部署的第三神经网络和第二终端设备上部署的第三神经网络中的神经网络层的数量不同。
其中,第一终端设备和第二终端设备可以为不同类型的终端设备。作为示例,例如第一终端设备为手表,第二终端设备为手机;作为另一示例,例如第一终端设备为灯,第二终端设备为空调;作为另一示例,例如第一终端设备为手机,第二终端设备为平板等,此处不做穷举。
或者,第一终端设备和第二终端设备为同一类型中不同型号的终端设备。需要说明的是,本方案中当两个不同的终端设备(也即第一终端设备和第二终端设备)上均配置有目标神经网络所包括的部分神经网络层时,当第一终端设备上部署的神经网络层的数量和第二终端设备上部署的神经网络层的数量不同时,第一终端设备和第二终端设备可以为不同类型的终端设备或同一类型中不同型号的终端设备,但并不代表任意两个不同类型的终端设备上部署的神经网络层的数量均不同,也不代表同一类别中任意两个不同型号的终端设备上部署的神经网络层的数量均不同。
可选地,若第一神经网络、第二神经网络和第三神经网络为对目标神经网络进行拆分得到,则在图4对应的实施例中,目标神经网络对应有两个拆分节点,“与目标神经网络对应的拆分节点不同”指的是与第一终端设备对应的两个拆分节点,和,与第二终端设备对 应的两个拆分节点不完全相同。
具体的,图4对应实施例中的“与目标神经网络对应的拆分节点不同”存在如下三种情况:在一种情况下,第一终端设备所对应的第一拆分节点与第二终端设备所对应的第一拆分节点相同,第一终端设备所对应的第二拆分节点与第二终端设备所对应的第二拆分节点不同。在另一种情况下,第一终端设备所对应的第一拆分节点与第二终端设备所对应的第一拆分节点不同,第一终端设备所对应的第二拆分节点与第二终端设备所对应的第二拆分节点相同。在另一种情况下,第一终端设备所对应的第一拆分节点与第二终端设备所对应的第一拆分节点不同,第一终端设备所对应的第二拆分节点与第二终端设备所对应的第二拆分节点不同。
对应的,图4对应实施例中的“与目标神经网络对应的拆分节点相同”指的是第一终端设备所对应的第一拆分节点与第二终端设备所对应的第一拆分节点相同,且第一终端设备所对应的第二拆分节点与第二终端设备所对应的第二拆分节点相同。
本申请实施例中,由于不同类型的终端设备的计算机资源的配置可能不同,同一类型中不同型号的终端设备的计算机资源的配置也可能不同,则不同类型的终端设备或同一类型中不同型号的终端设备能够分配给目标任务的计算机资源也可能不同,本方案中不同类别的终端设备或同一类型中不同型号的终端设备上部署的神经网络层的数量不同,以提高部署的神经网络层的数量与第一终端设备的计算机资源之间的匹配度。
具体的,针对服务器确定与某一个第一终端设备对应的两个拆分节点的过程。若不同类别的终端设备上部署的神经网络层的数量可能不同,同一类型中不同型号的终端设备上部署的神经网络层的数量均相同,则服务器上可以预先配置有第一映射关系,第一映射关系中可以存储有每种类型的终端设备上部署的神经网络层的数量,当服务器需要向新的第一终端设备上部署第一神经网络和第二神经网络时,可以根据该新的第一终端设备的目标类型和第一映射关系,确定与目标类型的第一终端设备对应的两个拆分节点。
则在执行步骤401之前,当第一终端设备上需要部署目标神经网络中的部分神经网络层时,可以向服务器发送第一请求,第一请求用于请求获取目标神经网络中的部分神经网络层,第一请求中还携带有第一终端设备的目标类型。服务器根据接收到的该第一终端设备的目标类型,从第一映射关系中获取与目标类型对应的两个拆分节点;服务器根据获取到的前述两个拆分节点,从目标神经网络中拆分出该第一神经网络和第三神经网络。
其中,第一映射关系可以采用表格、数组或其他形式存储于服务器上。为更直观地理解本方案,以下以表格的形式展示第一映射关系。
(表2的内容在原文中以图像形式给出,此处从略)
表2
如上述表2所示,当第一终端设备表现为不同类型的终端设备时,目标神经网络所对应的两个拆分节点可能会相同,也可能会不同。例如当第一终端设备表现为灯和第一终端设备表现为冰箱这两个不同的情况时,目标神经网络所对应的两个拆分节点不同;再例如当第一终端设备表现为冰箱和第一终端设备表现为空调这两个不同的情况时,目标神经网络所对应的两个拆分节点相同,应理解,表2中的示例仅为方便理解第一映射关系中的内容,不用于限定本方案。
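第一映射关系的存储与查询可示意如下。其中的终端类型与拆分节点取值均为假设的示例,并非表2的实际内容:

```python
# 第一映射关系:终端设备类型 -> (第一拆分节点, 第二拆分节点)
# 拆分节点以"位于目标神经网络第几层之后"表示,数值均为示意
FIRST_MAPPING = {
    "灯":   (1, 18),
    "冰箱": (2, 17),
    "空调": (2, 17),   # 不同类型的终端对应的拆分节点也可能相同
    "手机": (4, 15),
}

def get_split_nodes(device_type):
    # 服务器根据第一请求中携带的目标类型查询第一映射关系
    if device_type not in FIRST_MAPPING:
        raise KeyError(f"未配置的终端类型: {device_type}")
    return FIRST_MAPPING[device_type]

assert get_split_nodes("冰箱") == get_split_nodes("空调")   # 拆分节点相同
assert get_split_nodes("灯") != get_split_nodes("冰箱")     # 拆分节点不同
```

第二映射关系的查询逻辑与之类似,只是键由“类型”细化为“(类型, 型号)”。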
更具体的,在一种实现方式中,第一映射关系由其他设备发送给服务器。在另一种实现方式中,第一映射关系是由服务器生成的。
进一步地,第一映射关系中的第一神经网络和第三神经网络的确定因素可以包括如下任一种或多种因素的组合:当第一终端设备执行目标任务时,第一终端设备分配的处理器资源的预估量、第一终端设备分配的内存资源的预估量或其他类型的因素等。
也即在执行步骤401之前,服务器可以获取每种类型的终端设备的上述指标,并根据这些指标确定每种类型的终端设备上部署的神经网络层的数量。其中,第一终端设备分配的处理器资源的预估量越多,则第一终端设备上分配的神经网络层的数量越多,第一终端设备分配的处理器资源的预估量越少,则第一终端设备上分配的神经网络层的数量越少。第一终端设备分配的内存资源的预估量越多,则第一终端设备上分配的神经网络层的数量越多,第一终端设备分配的内存资源的预估量越少,则第一终端设备上分配的神经网络层的数量越少。
处理器具体可以表现为中央处理器(central processing unit,CPU)、图形处理器(graphics processing unit,GPU)、专用集成电路(application specific integrated circuit,ASIC)或其他类型的处理器等,具体第一终端设备上配置的为哪些类型的处理器可以结合实际产品形态确定,此处不做限定。
若第一终端设备上仅分配一个处理器来执行目标任务,则“第一终端设备分配的处理器资源的预估量”的评价指标可以包括如下任一个或多个元素:第一终端设备为执行目标任务分配的处理器的占用时长和第一终端设备为执行目标任务分配的处理器的性能。若第一终端设备上分配至少两个处理器来执行目标任务,则“第一终端设备分配的处理器资源的预估量”的评价指标可以包括如下任一个或多个元素:第一终端设备为执行目标任务分 配的每个处理器的占用时长、第一终端设备为执行目标任务分配的每个处理器的性能、处理器的数量、每个处理器的类型或其他元素等。
更进一步地,处理器的性能的评价指标可以为如下任一种或多种评价指标:处理器每秒执行的浮点运算次数(floating-point operations per second,FLOPS)、处理器每秒执行的百万条指令的数量(dhrystone million instructions executed per second,DMIPS),也即衡量处理器每秒执行了多少百万条指令或其他用于评价处理器的性能的指标,或者可以采用其他类型的处理器的性能的评价指标等,此处不做穷举。
“第一终端设备分配的内存资源的预估量”的评价指标可以为第一终端设备为执行目标任务分配的内存的存储空间的大小。
需要说明的是,“第一终端设备为执行目标任务分配的处理器的占用时长”和“第一终端设备为执行目标任务分配的内存的存储空间的大小”可以为一个预估的取值范围,也可以为一个预估的确定的值。进一步地,“第一终端设备为执行目标任务分配的处理器的占用时长”的单位可以为每秒执行的百万条指令(million instructions executed per second,MIPS)、秒或其他类型的时间单位等,此处不做穷举。
作为示例,例如,第一终端设备为执行目标任务分配的处理器的占用时长可以为0.5MIPS-1MIPS,第一终端设备为执行目标任务分配的内存的存储空间的大小可以为20M-30M;作为另一示例,例如第一终端设备为执行目标任务分配的处理器的占用时长可以为1.5MIPS,第一终端设备为执行目标任务分配的内存的存储空间的大小可以为25M,应理解,此处举例仅为方便理解本方案,不用于限定本方案。
若不同类别的终端设备上部署的神经网络层的数量可能不同,且同一类型中不同型号的终端设备上部署的神经网络层的数量也可能不同,则服务器上可以配置有第二映射关系,第二映射关系中可以存储有与每种类型的终端设备的至少一个型号对应的神经网络层的数量,当服务器需要向新的第一终端设备上部署第一神经网络和第二神经网络时,可以根据该新的第一终端设备的目标类型、目标型号和第二映射关系,确定与目标类型的第一终端设备对应的两个拆分节点。
则在执行步骤401之前,当第一终端设备上需要部署目标神经网络中的部分神经网络层时,可以向服务器发送第一请求,第一请求用于请求获取目标神经网络中的部分神经网络层,第一请求中还携带有第一终端设备的目标类型和该第一终端设备的目标型号。服务器可以接收到的该第一终端设备的目标类型和该第一终端设备的目标型号,从第二映射关系中获取与目标类型和目标型号对应的两个拆分节点;服务器根据获取到的前述两个拆分节点,从目标神经网络中拆分出该第一神经网络和第三神经网络。
其中,第二映射关系可以采用表格、数组或其他形式存储于服务器上。为更直观地理解本方案,以下以表格的形式展示第二映射关系。
(表3的内容在原文中以图像形式给出,此处从略)
表3
如上述表3所示,对于同一类型且不同型号的两个第一终端设备,目标神经网络所对应的两个拆分节点可能会相同,也可能会不同。例如当两个不同的终端设备表现为不同型号的灯时,所有型号的灯上部署的神经网络层的数量均相同。当两个不同的终端设备分别为型号0001的手机和型号0004的手机时,前述两个终端设备上部署的神经网络层的数量不同,表3中的示例仅为方便理解第二映射关系中的内容,不用于限定本方案。
更具体的,在一种实现方式中,第二映射关系由其他设备发送给服务器。在另一种实现方式中,第二映射关系是由服务器生成的。
进一步地,第二映射关系中的第一神经网络和第三神经网络的确定因素可以包括如下任一种或多种因素的组合:当第一终端设备执行目标任务时,第一终端设备分配的处理器资源的预估量、第一终端设备分配的内存资源的预估量或其他类型的因素等。
也即服务器可以获取每种类型的至少一个型号中每个型号的第一终端设备的上述指标,根据每种类型的至少一个型号中每个型号的第一终端设备的上述指标,生成一个确定的目标类型的目标型号的第一终端设备上部署的神经网络层的数量,服务器重复执行前述操作,以生成该第二映射关系。其中,第一终端设备分配的处理器资源的预估量越多,则第一终端设备上分配的神经网络层的数量越多,第一终端设备分配的处理器资源的预估量越少,则第一终端设备上分配的神经网络层的数量越少。第一终端设备分配的内存资源的预估量越多,则第一终端设备上分配的神经网络层的数量越多,第一终端设备分配的内存资源的预估量越少,则第一终端设备上分配的神经网络层的数量越少。
对于“第一终端设备分配的处理器资源的预估量”和“第一终端设备分配的内存资源的预估量”这两个概念的理解可以参阅上述描述,此处不做赘述。
在另一种实现方式中,在第一终端设备为第一终端设备和第一终端设备为第二终端设备这两种不同的情况下,与目标神经网络对应的拆分节点可以相同,也即不同的第一终端设备上部署的目标神经网络部署的神经网络层的数量也可以均相同。
402、第一终端设备将待处理数据输入第一神经网络,得到第一神经网络生成的第一中 间结果。
本申请实施例中,步骤401为可选步骤,若执行步骤401,则第一终端设备可以接收到服务器发送的第一神经网络和第三神经网络,并将接收到的第一神经网络和第三神经网络存储至本地。
若不执行步骤401,在一种实现方式中,若第一神经网络、第二神经网络和第三神经网络为对目标神经网络进行拆分得到,服务器可以向第一终端设备发送目标神经网络中的前P个神经网络层和目标神经网络中的后Q个神经网络层,并向第一终端设备发送第一指示信息;其中,P为大于或等于N的整数,Q为大于或等于S的整数,第一指示信息用于告知第一终端设备与该目标神经网络对应的两个拆分节点在目标神经网络中的位置。
第一终端设备将接收到的上述前P个神经网络层和上述后Q个神经网络层存储至本地,根据接收到的第一指示信息从上述前P个神经网络层中确定第一神经网络,从上述后Q个神经网络层中确定第三神经网络,也即实现了在第一终端设备上部署第一神经网络和第三神经网络。
在另一种实现方式中,服务器还可以将训练后的整个目标神经网络发送给第一终端设备,并向第一终端设备发送第一指示信息,第一指示信息用于告知第一终端设备与该目标神经网络对应的两个拆分节点在目标神经网络中的位置。从而第一终端设备可以根据接收到的第一指示信息,对接收到的目标神经网络进行拆分,以确定第一神经网络和第三神经网络,也即实现了在第一终端设备上部署第一神经网络和第三神经网络。
第一终端设备在部署有第一神经网络和第三神经网络之后,可以将待处理数据输入第一神经网络,得到第一神经网络生成的第一中间结果。其中,待处理数据具体表现为哪种类型的数据与目标任务具体表现为哪种类型的任务相关,作为示例,例如待处理数据具体可以表现为如下任一种数据:声音数据、图像数据、指纹数据、耳部的轮廓数据、能够反映用户习惯的序列数据、文本数据、点云数据或其他类型的数据等等,应理解,待处理数据采用的为哪种类型的数据需要结合通过目标神经网络执行的目标任务是哪种类型的任务来确定,此处不做限定。本实现方式中,提供了待处理数据的多种表现形式,扩展了本方案的应用场景,提高了本方案的实现灵活性。
“第一神经网络生成的第一中间结果”也可以称为“第一神经网络生成的第一隐向量”,第一神经网络生成的第一中间结果包括第二神经网络进行数据处理时所需要的数据。
进一步地,在一种情况中,“第一神经网络生成的第一中间结果”包括第一神经网络中最后一个神经网络层生成的数据。为更直观地理解本方案,请参阅图6,图6为本申请实施例提供的数据处理方法中第一中间结果的一种示意图。如图6所示,第一中间结果包括第一神经网络中最后一个神经网络层(也即图6中的第三个卷积层)生成的数据,应理解,图6中的示例仅为方便理解本方案,不用于限定本方案。
在另一种情况中,“第一神经网络生成的第一中间结果”包括第一神经网络中最后一个神经网络层生成的数据,和,第一神经网络中其他神经网络层生成的数据。为更直观地理解本方案,请参阅图7,图7为本申请实施例提供的数据处理方法中第一中间结果的另一种示意图。图7中示出的目标神经网络所对应的两个拆分节点和图5中示出的目标神经网 络所对应的两个拆分节点相同,如图7所示,第一中间结果不仅包括第一神经网络中最后一个神经网络层(也即图7中第5个卷积层)生成的数据,而且包括第一神经网络中第N-2个神经网络层(也即图7中的第3个卷积层)生成的数据,应理解,图7中的示例仅为方便理解本方案,不用于限定本方案。
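上述两种情况下,第一中间结果可以示意为如下的数据结构(层名与数值均为假设):

```python
# 第一中间结果的两种组织方式(数据为示意)
last_layer_output = [0.3, 1.2, 0.7]

# 情况一:仅包含第一神经网络最后一个神经网络层生成的数据
inter_result_a = {"layer_N": last_layer_output}

# 情况二:还包含其他层(如第N-2层)生成的数据,
# 供服务器侧第二神经网络中的跳跃连接等结构使用
inter_result_b = {
    "layer_N": last_layer_output,
    "layer_N-2": [0.1, 0.4, 0.9],
}

assert set(inter_result_b) >= set(inter_result_a)
```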
403、第一终端设备将第一中间结果发送至服务器。
本申请实施例中,第一终端设备在得到第一中间结果后,可以对第一中间结果进行加密,并将加密后的第一中间结果发送给服务器。其中,采用的加密算法包括但不限于安全套接层(secure sockets layer,SSL)加密算法或其他类型的加密算法等。
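“序列化并加密后发送”这一步可示意如下。需要强调:为了让示例可独立运行,此处用基于hashlib的异或密钥流作为占位加密,不具备实际安全性;实际实现应如正文所述采用SSL等成熟加密算法,且此处的会话密钥协商方式也仅为假设:

```python
import hashlib
import struct

def serialize(values):
    # 将第一中间结果(浮点数序列)打包为字节流
    return struct.pack(f"{len(values)}f", *values)

def keystream(key, n):
    # 示意性的密钥流(仅作占位,实际应使用SSL等加密通道)
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor_bytes(data, key):
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

inter1 = [0.5, -1.25, 3.0]
key = b"shared-session-key"   # 假设双方已协商出的会话密钥
cipher = xor_bytes(serialize(inter1), key)                      # 第一终端设备侧加密
plain = struct.unpack(f"{len(inter1)}f", xor_bytes(cipher, key))  # 服务器侧解密
assert list(plain) == inter1
```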
404、服务器将第一中间结果输入第二神经网络,得到第二神经网络生成的第二中间结果。
本申请实施例中,服务器在接收到加密后的第一中间结果之后,可以对加密后的第一中间结果进行解密以得到第一中间结果,并将第一中间结果输入第二神经网络中,得到第二神经网络生成的第二中间结果。
“第二神经网络生成的第二中间结果”也可以称为“第二神经网络生成的第二隐向量”,第二神经网络生成的第二中间结果包括第三神经网络进行数据处理时所需要的数据。
进一步地,在一种情况中,“第二神经网络生成的第二中间结果”包括第二神经网络中最后一个神经网络层生成的数据。为更直观地理解本方案,请结合图7进行理解,如图7所示,目标神经网络所对应的第二拆分节点(也即第二神经网络和第三神经网络之间的拆分节点)位于池化层和第一个线性连接层之间,则第二中间结果包括第二神经网络中最后一个神经网络层(也即图7中的池化层)生成的数据,应理解,图7中的示例仅为方便理解本方案,不用于限定本方案。
在另一种情况下,“第二神经网络生成的第二中间结果”包括第二神经网络中由最后一个神经网络层生成的数据,和,第二神经网络中其他的神经网络层生成的数据。请参阅图8,图8为本申请实施例提供的数据处理方法中第二中间结果的另一种示意图。如图8所示,第二中间结果不仅包括第二神经网络中最后一个神经网络层(也即图8中的最后一个卷积层)生成的数据,还包括第二神经网络中第M-2个神经网络层(也即图8中的倒数第3个卷积层)生成的数据,应理解,图8中的示例仅为方便理解本方案,不用于限定本方案。
405、服务器将第二中间结果发送至第一终端设备。
本申请实施例中,服务器在得到第二中间结果后,可以对第二中间结果进行加密,并将加密后的第二中间结果发送给第一终端设备,具体采用的加密算法可以参阅步骤403中的描述,此处不做赘述。
406、第一终端设备将第二中间结果输入第三神经网络,得到第三神经网络生成的与待处理数据对应的预测结果,预测结果所指示的信息的类型与目标任务的类型对应。
本申请实施例中,第一终端设备在接收到加密后的第二中间结果后,可以将第二中间结果输入第三神经网络中,也即将第二中间结果输入目标神经网络的最后的S个神经网络层中,得到第三神经网络生成的与待处理数据对应的预测结果(也即得到整个目标神经网络输出的与待处理数据对应的预测结果)。
其中,上述与待处理数据对应的预测结果所指示的信息的类型与目标任务的类型对应。作为示例,例如目标任务是声纹识别,则待处理数据可以为声音数据,与待处理数据对应的预测结果用于指示待处理数据(也即声音数据)是否为预设用户的声音。作为另一示例,例如目标任务是声纹特征提取,则待处理数据可以为声音数据,与待处理数据对应的预测结果为从待处理数据中提取到的声纹特征。
作为另一示例,例如目标任务是人脸识别,则待处理数据可以为用户脸部的图像数据,与待处理数据对应的预测结果用于指示该用户是否为预设用户。作为另一示例,例如目标任务是指纹识别,则待处理数据为用户的指纹数据,与待处理数据对应的预测结果用于指示该用户是否为预设用户。作为再一示例,例如目标任务是对用户的耳部的轮廓数据进行特征提取,则待处理数据为用户的耳部的轮廓数据,与待处理数据对应的预测结果为用户的耳部的轮廓数据的特征等等,此处不对与待处理数据对应的预测结果进行穷举。
第一终端设备通过第一神经网络和第三神经网络进行数据处理的过程中所占用的处理器资源小于服务器通过第二神经网络进行数据处理的过程中所占用的处理器资源,且,第一终端设备通过第一神经网络和第三神经网络进行数据处理的过程中所占用的内存资源小于服务器通过第二神经网络进行数据处理的过程中所占用的内存资源。
本申请实施例中,由于第二神经网络在数据处理的过程中占用的处理器资源较多且占用的内存资源较多,将第二神经网络部署于服务器上,可以进一步减少整个神经网络的计算过程中所占用的第一终端设备的计算机资源,有利于降低第一终端设备在执行目标任务过程中的计算压力;由于整个神经网络的数据处理过程中的大部分计算由服务器执行,则可以采用参数量更多的深度神经网络来生成与待处理数据对应的预测结果,有利于提高整个神经网络生成的预测结果的精度。
本申请实施例中,第一终端设备在得到与待处理数据对应的预测结果之后,可以根据与待处理数据对应的预测结果执行后续的步骤,具体执行哪些步骤可以结合实际应用场景确定,此处不做限定。
为更直观地理解本方案,请参阅图9,图9为本申请实施例提供的数据处理方法的一种流程示意图。图9中以目标神经网络所执行的目标任务为提取声纹特征,且第一神经网络、第二神经网络和第三神经网络为对目标神经网络拆分得到为例,如图9所示,B1、第一终端设备获取用户输入的待处理数据(也即图9中示出的用户输入的声音数据)。B2、第一终端设备将待处理数据输入至第一神经网络(也即图9中示出的目标神经网络的前N个神经网络层)中,得到该第一神经网络生成的第一中间结果。B3、第一终端设备将第一中间结果进行加密处理,并将加密后的第一中间结果发送给服务器,以实现对第一中间结果的加密传输。B4、服务器在对加密后的第一中间结果进行解密以得到第一中间结果,将第一中间结果输入第二神经网络(也即N个神经网络层之后的M个神经网络层)中,得到第二神经网络生成的第二中间结果。B5、服务器将第二中间结果进行加密处理,并将加密后的第二中间结果发送给第一终端设备,以实现对第二中间结果的加密传输。B6、第一终端设备对加密后的第二中间结果进行解密以得到第二中间结果,将第二中间结果输入第三神经网络(也即目标神经网络的后S个神经网络层),得到整个目标神经网络输出的与待处理 数据对应的预测结果(也即从输入的声音数据中提取出的声纹特征)。B7、第一终端设备将本地存储的至少一个声纹特征中的每个声纹特征与获取到的声纹特征进行对比,以确定获取到的声纹特征是否为预先存储的至少一个声纹特征中的任意一个,以确定前述用户是否为具有权限的用户,应理解,图9中的示例仅为方便理解本方案,不用于限定本方案。
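上述B7中“将提取到的声纹特征与本地存储的每个声纹特征逐一对比”的一种常见做法,是计算余弦相似度并与阈值比较(特征维度、阈值与特征数值均为示意性假设,正文并未限定具体的对比方式):

```python
import math

def cosine_similarity(a, b):
    # 两个声纹特征向量的余弦相似度
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def is_registered(feature, enrolled_features, threshold=0.8):
    # 与本地存储的每个已注册声纹特征对比,任一相似度达到阈值即认为匹配
    return any(cosine_similarity(feature, f) >= threshold
               for f in enrolled_features)

enrolled = [[1.0, 0.0, 0.2], [0.1, 0.9, 0.3]]   # 已注册用户的声纹特征(示意)
query_ok = [0.9, 0.05, 0.25]                     # 接近第一个注册特征
query_bad = [-0.5, -0.5, 0.1]                    # 与任何注册特征都不相似

assert is_registered(query_ok, enrolled)
assert not is_registered(query_bad, enrolled)
```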
407、服务器获取与目标神经网络对应的更新后的拆分节点,其中,更新后的拆分节点指示第一神经网络包括n个神经网络层、第二神经网络包括m个神经网络层且第三神经网络包括s个神经网络层。
本申请的一些实施例中,服务器在将第一神经网络和第三神经网络部署于一个确定的第一终端设备上之后,可以获取与目标神经网络(也即第一终端设备上部署的神经网络层所归属的神经网络)对应的更新后的拆分节点,也即在第一时刻和第二时刻这两个不同的时刻,第一终端设备上部署的神经网络存在如下变化:第一神经网络中的神经网络层的数量发生改变,或者,第三神经网络中的神经网络层的数量发生改变,也即在第一时刻和第二时刻这两个不同的时刻,与目标神经网络对应的拆分节点不同。
需要说明的是,本方案中当同一第一终端设备分别处于第一时刻和第二时刻这两个不同的时刻时,与目标神经网络对应的拆分节点可以不同,但不代表对于同一第一终端设备的任意两个不同的时刻,该目标神经网络部署的神经网络层的数量均不同。
其中,“与目标神经网络对应的拆分节点不同”的含义均可以参阅上述步骤中的描述,更新后的拆分节点指示第一神经网络包括n个神经网络层、第二神经网络包括m个神经网络层且第三神经网络包括s个神经网络层,第一终端设备上部署第一神经网络和第三神经网络,服务器上部署第二神经网络,n、s和m均为大于或等于1的整数,N和n不同和/或S和s不同。
进一步地,“第一神经网络”和“第二神经网络”在目标神经网络中的位置关系,“第二神经网络”和“第三神经网络”在目标神经网络中的位置关系均可以参阅上述步骤401中的描述,此处不做赘述。
本申请实施例中,由于攻击者可能会在获取到第一终端设备和服务器之间发送的中间结果后,根据获取到的中间结果反推以得到原始的待处理数据,而对于第一时刻和第二时刻这两个不同的时刻,与该神经网络对应的拆分节点不同,也即在不同的时刻,第一终端设备和服务器之间发送不同的中间结果,进一步增加了攻击者获取到原始的待处理数据的难度,以进一步提高对用户数据的隐私性的保护程度。
针对服务器获取与目标神经网络对应的更新后的拆分节点的触发点。在一种实现方式中,服务器可以每隔固定的时长重新获取与目标神经网络对应的拆分节点;作为示例,例如该固定时长可以为一天、一星期、十天、十五天、一个月或其他长度等,此处不做穷举。
在另一种实现方式中,服务器可以在固定的时间点重新获取与目标神经网络对应的拆分节点;作为示例,例如该固定的时间点可以为每个月的1号凌晨2点、每个星期的星期一的凌晨3点或其他时间点等,此处不做穷举。
在另一种实现方式中,第一终端设备可以向服务器发送请求消息,该请求消息用于请求更新目标神经网络部署的神经网络层的数量,也即请求更新目标神经网络包括的多个神 经网络层在第一终端设备和服务器上的部署情况。可选地,该请求消息可以为由用户通过第一终端设备主动触发的,也即用户可以主动触发更新目标神经网络部署的神经网络层的数量等。
进一步地,在一种情况中,第一终端设备在每次需要执行目标任务时,可以向服务器发送请求消息,该请求消息用于请求更新目标神经网络部署的神经网络层的数量;在另一种情况中,第一终端设备可以在每执行该目标任务达到目标次数时,向服务器发送请求消息,该请求消息用于请求更新目标神经网络部署的神经网络层的数量;或者还可以在其他情况中触发第一终端设备向服务器发送该请求消息,此处不做穷举。
需要说明的是,还可以存在其他方式以触发服务器获取与目标神经网络对应的更新后的拆分节点,具体实现方式可以结合具体应用场景灵活确定,此处不做限定。
针对服务器获取与目标神经网络对应的更新后的拆分节点的具体实现过程。其中,第一终端设备上部署的神经网络层的数量的确定因素可以包括:第一终端设备的处理器资源的占用量和/或第一终端设备的内存资源的占用量。可选地,第一神经网络和第三神经网络的确定因素还可以包括如下任一种或多种:第一终端设备上目前运行的进程的数量、第一终端设备上每个进程已经运行的时间、第一终端设备上每个进程的运行状态或其他因素等,具体可以结合实际应用场景确定,此处不一一进行列举。
进一步地,“第一终端设备的内存资源的占用量”的评价指标可以包括如下任一种或多种指标:第一终端设备的总的内存资源的大小、第一终端设备的已占用的内存资源的大小、第一终端设备的内存资源的占用率或其他评价指标等。
“第一终端设备的处理器资源的占用量”的评价指标可以包括如下任一种或多种指标:第一终端设备的处理器资源的占用率、第一终端设备上用于执行目标任务的每个处理器的占用时长、第一终端设备上用于执行目标任务分配的处理器的负载量、第一终端设备上用于执行目标任务分配的处理器的性能或其他能够反映第一终端设备上用于执行目标任务的处理器资源的占用量的评价指标等,具体需要结合实际产品确定,此处不做穷举。
具体的,服务器可以根据第一终端设备的处理器资源的占用量,计算当第一终端设备执行目标任务时所分配的处理器资源的预估量;对应的,服务器可以根据第一终端设备的内存资源的占用量,计算第一终端设备的内存资源的可用量,进而可以获取当第一终端设备执行目标任务时所分配的内存资源的预估量。
服务器可以根据当第一终端设备执行目标任务时所分配的处理器资源的预估量和当第一终端设备执行目标任务时所分配的内存资源的预估量,生成与目标神经网络对应的更新后的拆分节点。其中,若根据前述与目标神经网络对应的更新后的拆分节点对目标神经网络进行拆分,则部署于第一终端设备上的第一神经网络和第三神经网络在数据处理过程中所占用的处理器资源小于或等于前述当第一终端设备执行目标任务时所分配的处理器资源的预估量;部署于第一终端设备上的第一神经网络和第三神经网络在数据处理过程中所占用的内存资源小于或等于前述当第一终端设备执行目标任务时所分配的内存资源的预估量。
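“在处理器/内存资源预估量的约束下确定两个拆分节点”的过程,可以示意为如下在预算内尽量为终端分配首尾层的贪心选择(各层开销、预算数值与具体选择策略均为便于说明而假设的):

```python
# 每层在终端侧的开销(示意单位);目标:终端侧总开销不超过资源预估量(预算)
layer_costs = [1, 2, 8, 16, 16, 8, 2, 1]   # 假设目标神经网络共8层

def choose_split_nodes(costs, budget):
    """返回(第一拆分节点p, 第二拆分节点q):
    前p层与后(len(costs)-q)层部署于第一终端设备,其余部署于服务器。
    第一/第三神经网络至少各保留1层(假设预算至少容纳首尾两层)。"""
    p, q = 1, len(costs) - 1
    used = costs[0] + costs[-1]
    while p < q:
        if used + costs[p] <= budget:       # 优先向终端多分配一个头部层
            used += costs[p]
            p += 1
        elif used + costs[q - 1] <= budget:  # 否则尝试多分配一个尾部层
            used += costs[q - 1]
            q -= 1
        else:
            break
    return p, q

# 预算越充裕,终端上部署的层数越多(与正文所述趋势一致)
assert choose_split_nodes(layer_costs, budget=7) == (2, 6)
assert choose_split_nodes(layer_costs, budget=30) == (4, 6)
```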
更具体的,针对“服务器根据第一终端设备的处理器资源的占用量,获取当第一终端设备执行目标任务时所分配的处理器资源的预估量”的过程。在一种实现方式中,服务器 上可以存储有执行过训练操作的回归模型,前述回归模型用于执行前述预估操作;作为示例,例如该回归模型可以采用自回归滑动平均(autoregressive integrated moving average,ARIMA)模型、递归神经网络(recursive neural network,RNN)或其他类型的模型等,此处不做穷举。其中,该回归模型的输入可以包括第一终端设备上的处理器资源的占有率、第一终端设备上的内存资源的使用率、第一终端设备上当前运行的进程的数量和终端上每个进程已经运行的时间;该回归模型的输出可以为未来一段时间内每个进程所对应的处理器资源的预估占有率和内存资源的预估占有率。
服务器可以根据未来一段时间内每个进程所对应的处理器资源的预估占有率和内存资源的预估占有率,计算未来一段时间内第一终端设备的处理器资源的预估可用量和内存资源的预估可用量。进一步地,在一种情况中,服务器可以将该未来一段时间内第一终端设备的处理器资源的预估可用量确定为第一终端设备执行目标任务时所分配的处理器资源的预估量,将该未来一段时间内第一终端设备的内存资源的预估可用量确定为第一终端设备执行目标任务时所分配的内存资源的预估量。
在另一种情况中,服务器可以将该未来一段时间内第一终端设备的处理器资源的预估可用量与第一比例相乘,并将得到的乘积确定为第一终端设备执行目标任务时所分配的处理器资源的预估量;将该未来一段时间内第一终端设备的内存资源的预估可用量与该第一比例相乘,并将得到的乘积确定为第一终端设备执行目标任务时所分配的内存资源的预估量;其中,该第一比例小于1。
在另一种实现方式中,服务器也可以根据预设规则确定第一终端设备执行目标任务时所分配的处理器资源的预估量。服务器可以将第一终端设备的处理器资源的当前占用量乘以第二比例,将得到的乘积确定为未来一段时间内第一终端设备的处理器资源的预估占用量;将第一终端设备的内存资源的当前占用量乘以该第二比例,将得到的乘积确定为未来一段时间内第一终端设备的内存资源的预估占用量;第二比例大于1。
服务器根据未来一段时间内第一终端设备的处理器资源的预估占用量,确定未来一段时间内第一终端设备的处理器资源的预估可用量;根据未来一段时间内第一终端设备的内存资源的预估占用量,确定未来一段时间内第一终端设备的内存资源的预估可用量。进而可以根据未来一段时间内第一终端设备的处理器资源的预估可用量和内存资源的预估可用量,确定第一终端设备执行目标任务时所分配的处理器资源的预估量和内存资源的预估量。
需要说明的是,此处对于“服务器根据第一终端设备的处理器资源的占用量,获取当第一终端设备执行目标任务时所分配的处理器资源的预估量”的描述仅为证明本方案的可实现性,服务器还可以采用其他方式来得到第一终端设备执行目标任务时所分配的处理器资源的预估量,此处不对每种实现方式进行一一穷举。
可选地,为了能够实现目标神经网络的更新后的拆分节点与更新前的拆分节点不同。在一种实现方式中,服务器在根据当第一终端设备执行目标任务时所分配的处理器资源的预估量和当第一终端设备执行目标任务时所分配的内存资源的预估量,生成与目标神经网络对应的更新后的拆分节点之后,可以对前述确定的拆分节点进行随机的调整,也即随机的将拆分节点在目标神经网络中的位置进行随机的前移或后移,以对与目标神经网络对应 的更新后的拆分节点进行再次更新,得到与目标神经网络对应的最终的更新后拆分节点。
进一步地,由于在图4对应实施例中,与目标神经网络对应有两个拆分节点,则当对确定的拆分节点进行随机的调整时,可以仅对第一拆分节点在目标神经网络中的位置进行随机调整,也可以仅对第二拆分节点在目标神经网络中的位置进行随机调整;还可以对第一拆分节点和第二拆分节点在目标神经网络中的位置均做随机调整。
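对拆分节点做随机前移或后移的再次更新,可示意如下(移动幅度取±1层、各子网络至少保留1层等均为示意性假设):

```python
import random

def perturb_split_nodes(p, q, num_layers, rng=random):
    # 对第一/第二拆分节点各做随机前移、后移或保持不变,
    # 并保证第一/第二/第三神经网络各至少保留1层
    new_p = p + rng.choice([-1, 0, 1])
    new_q = q + rng.choice([-1, 0, 1])
    new_p = max(1, min(new_p, num_layers - 2))
    new_q = max(new_p + 1, min(new_q, num_layers - 1))
    return new_p, new_q

rng = random.Random(0)   # 固定随机种子,便于复现
p, q = perturb_split_nodes(4, 15, num_layers=20, rng=rng)
assert 1 <= p < q <= 19   # 扰动后仍是合法的拆分节点
```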
为更直观地理解本方案,请参阅图10,图10为本申请实施例中更新与目标神经网络对应的拆分节点的一种流程示意图,图10以第一神经网络、第二神经网络和第三神经网络为对目标神经网络拆分得到,如图10所示,C1、在第一终端设备尚未执行目标任务时,获取与第一终端设备上已经占用的计算机资源关联的多个参数,前述多个参数可以包括第一终端设备的处理器资源的占用量、第一终端设备的内存资源的占用量、第一终端设备上目前运行的进程的数量和第一终端设备上每个进程已经运行的时间,第一终端设备将前述多个参数发送给服务器;C2、服务器根据接收到的多个参数,确定第一终端设备在执行目标任务时所分配的处理器资源的预估量和内存资源的预估量;C3、服务器根据第一终端设备在执行目标任务时所分配的处理器资源的预估量和内存资源的预估量,获取与目标神经网络对应的更新后的拆分节点;C4、服务器对目标神经网络所对应的更新后的拆分节点进行随机的前移或后移,得到与目标神经网络对应的最终的更新后拆分节点;C5、服务器根据与目标神经网络对应的最终的更新后拆分节点,从目标神经网络中确定第一神经网络包括的n个神经网络层、第二神经网络包括的m个神经网络层和第三神经网络包括的s个神经网络层;C6、服务器将第一神经网络包括的n个神经网络层和第三神经网络包括的s个神经网络层发送给第一终端设备,以将第一神经网络和第三神经网络部署至第一终端设备,并将第二神经网络部署至服务器上,应理解,图10中的示例仅为方便理解本方案,不用于限定本方案。
在另一种实现方式中,服务器在不同的时刻可以采用不同的预估算法,根据第一终端设备的处理器资源的可用量,来获取当第一终端设备执行目标任务时所分配的处理器资源的预估量,以提高不同时刻所对应的处理器资源的预估量不同的概率;对应的,服务器在不同的时刻可以采用不同的预估算法,根据第一终端设备的内存资源的可用量,来获取当第一终端设备执行目标任务时所分配的内存资源的预估量,以提高不同时刻所对应的内存资源的预估量不同的概率。从而提高不同时刻所对应的目标部署的神经网络层的数量不同的概率。
本申请实施例中,由于第一终端设备上通常需要运行多个应用程序,则在同一终端设备的不同时刻,第一终端设备能够分配给目标任务的计算机资源可能是不同的,则第一神经网络和第三神经网络的确定因素包括第一终端设备的处理器资源的占用量和/或第一终端设备的内存资源的占用量,有利于保证第一终端设备上部署的神经网络能够与第一终端设备的算力相匹配,以避免增加第一终端设备在执行目标任务过程的运算压力。
408、服务器向第一终端设备发送第一神经网络包括的n个神经网络层和第三神经网络包括的s个神经网络层。
本申请的一些实施例中,服务器可以根据更新后的两个拆分节点将目标神经网络拆分为第一神经网络、第二神经网络和第三神经网络;服务器向第一终端设备发送第一神经网络包括的n个神经网络层和第三神经网络包括的s个神经网络层,从而将第一神经网络和第三神经网络部署于第一终端设备上,将第二神经网络包括的m个神经网络层部署于服务器上。
本申请实施例中,当第一终端设备上部署的神经网络层的数量发生变化时,则服务器可以向第一终端设备发送更新后的第一神经网络和更新后的第三神经网络,进一步提高了攻击者确定第一终端设备上部署的神经网络的难度,从而进一步提高攻击者从中间结果反推得到原始的待处理数据的难度,有利于进一步提高对用户数据的隐私性保护程度。
409、第一终端设备将待处理数据输入第一神经网络,得到第一神经网络生成的第三中间结果,在第二时刻,第一神经网络包括n个神经网络层。
本申请实施例中,步骤407至413均为可选步骤,若不执行步骤407,则也不需要执行步骤408至413,也即对于同一终端设备的不同时刻,该目标神经网络部署的神经网络层的数量可以不更新,从而不需要为第一终端设备重新部署第一神经网络和第三神经网络。
若执行步骤407,也即对于同一终端设备的不同时刻,该目标神经网络部署的神经网络层的数量会更新,若执行步骤408,则第一终端设备可以接收第一神经网络包括的n个神经网络层和第三神经网络包括的s个神经网络层,并将接收到的n个神经网络层和s个神经网络层存储至本地。
若执行步骤407、不执行步骤408且执行步骤401,若第一神经网络、第二神经网络和第三神经网络为对目标神经网络拆分得到,由于步骤401中是服务器初次将目标神经网络中的第一神经网络和第三神经网络部署于一个新的第一终端设备上,第一神经网络和第三神经网络所对应的两个拆分节点的确定依据可以是第一终端设备所分配的计算机资源的最大预估量,“第一终端设备所分配的计算机资源的最大预估量”包括“第一终端设备所分配的处理资源的最大预估量”和“第一终端设备所分配的内存资源的最大预估量”,则N的取值可以大于或等于n,且S的取值可以大于或等于s。
则服务器在获取到与目标神经网络对应的更新后的拆分节点之后,可以向第一终端设备发送第二指示信息,第二指示信息用于告知第一终端设备与目标神经网络对应的两个更新后的拆分节点。第一终端设备可以根据第二指示信息从已存储的N个神经网络层中确定更新后的第一神经网络,从已存储的S个神经网络层中确定更新后的第三神经网络,从而实现将更新后的第一神经网络和第三神经网络部署于第一终端设备上。
若执行步骤407、不执行步骤408且不执行步骤401,若第一神经网络、第二神经网络和第三神经网络为对目标神经网络拆分得到,在一种实现方式中,若第一终端设备上存储有目标神经网络中的前P个神经网络层和目标神经网络中的后Q个神经网络层,服务器在获取到与目标神经网络对应的更新后的拆分节点之后,可以向第一终端设备发送第二指示信息,第一终端设备可以根据接收到的第二指示信息从上述前P个神经网络层中确定第一神经网络,从上述后Q个神经网络层中确定第三神经网络,也即实现了在第一终端设备上部署第一神经网络和第三神经网络,P为大于或等于n的整数,Q为大于或等于s的整数。
在另一种实现方式中,若第一终端设备上存储有训练后的整个目标神经网络,服务器 在获取到与目标神经网络对应的更新后的拆分节点之后,可以向第一终端设备发送第二指示信息,第二指示信息用于告知第一终端设备与目标神经网络对应的两个更新后的拆分节点,从而第一终端设备可以根据第二指示信息从目标神经网络中确定第一神经网络和第三神经网络,且服务器可以根据与目标神经网络对应的更新后的拆分节点,从目标神经网络中确定第二神经网络,也即分别将第一神经网络、第二神经网络和第三神经网络部署于第一终端设备和服务器上。
第一终端设备在部署有第一神经网络和第三神经网络之后,可以将待处理数据输入第一神经网络,得到第一神经网络生成的第三中间结果,前述步骤的具体实现方式可以参阅步骤402中的描述,“第三中间结果”的概念与“第一中间结果”的概念类似,此处不做赘述。
本申请实施例中不限定步骤401与步骤409之间的执行次数,可以在执行一次步骤401之后,执行步骤409多次。
410、第一终端设备将第三中间结果发送至服务器。
411、服务器将第三中间结果输入第二神经网络,得到第二神经网络生成的第四中间结果,在第二时刻,第二神经网络包括m个神经网络层。
412、服务器将第四中间结果发送至第一终端设备。
413、第一终端设备将第四中间结果输入第三神经网络,得到第三神经网络生成的与待处理数据对应的预测结果,在第二时刻,第三神经网络包括s个神经网络层。
本申请实施例中,步骤410至413的具体方式可以参阅步骤403至406中的描述,区别在于,将步骤403至406中的“第一中间结果”替换为步骤410至413中的“第三中间结果”,将步骤403至406中的“第二中间结果”替换为步骤410至413中的“第四中间结果”,“第四中间结果”的含义与“第二中间结果”的含义类似,此处均不做赘述。
为更直观地理解本方案,请参阅图11,图11为本申请实施例提供的数据处理方法中与目标神经网络对应的拆分节点的一种示意图。图11中示出了更新前的拆分节点和更新后的拆分节点,更新前的第一拆分节点为图11中的X点,更新后的第一拆分节点为图11中的Y点,更新前的第二拆分节点和更新后的第二拆分节点均为图11中的H点。如图11所示,对于目标神经网络的更新前的两个拆分节点和目标神经网络的更新后的两个拆分节点这两种不同的情况,第一终端设备上部署的第一神经网络发生了变化,服务器上部署的第二神经网络也发生了变化,且第一终端设备向服务器发送的中间结果也发生了变化,应理解,图11对应实施例仅为方便理解本方案,不用于限定本方案。
本申请实施例中,由于第二神经网络的运算是由服务器完成的,因此可以减少整个目标神经网络的计算过程中所占用的第一终端设备的计算机资源;第一终端设备是将待处理数据输入第一神经网络计算之后,将第一中间结果发送给服务器,避免了原始的待处理数据的泄露,提高了对用户数据的隐私性的保护程度;且目标神经网络中位置靠后的第三神经网络的计算也是由第一终端设备侧执行,有利于进一步提高对用户数据的隐私性的保护程度。由于攻击者可能会在获取到第一终端设备和服务器之间发送的中间结果后,根据获取到的中间结果反推以得到原始的待处理数据,而对于第一时刻和第二时刻这两个不同的时刻,第一终端设备上部署的神经网络层的数量发生改变,也即在不同的时刻,第一终端设备和服务器之间发送不同的中间结果,进一步增加了攻击者获取到原始的待处理数据的难度,以进一步提高对用户数据的隐私性的保护程度。
二、目标神经网络包括第一神经网络和第二神经网络
本申请实施例中,请参阅图12,图12为本申请实施例提供的数据处理方法的一种流程示意图,本申请实施例提供的数据处理方法可以包括:
1201、服务器将第一神经网络发送给第一终端设备,服务器上部署有第二神经网络,在第一时刻,第一神经网络包括N个神经网络层,第二神经网络包括M个神经网络层,第一神经网络和第二神经网络组成目标神经网络。
1202、第一终端设备将待处理数据输入第一神经网络,得到第一神经网络生成的第一中间结果。
1203、第一终端设备将第一中间结果发送至服务器。
本申请实施例中,步骤1201至1203的具体实现方式可以参阅图4对应实施例中步骤401至403中的描述,区别在于在步骤401至403中,目标神经网络包括第一神经网络、第二神经网络和第三神经网络;在步骤1201至1203中,目标神经网络包括第一神经网络和第二神经网络,第一神经网络位于第二神经网络之前。
可选地,若第一神经网络和第二神经网络为对目标神经网络拆分得到,则第一神经网络指的是目标神经网络中位于目标拆分节点之前的神经网络层,第二神经网络指的是目标神经网络中位于目标拆分节点之后的神经网络层;对于“第一神经网络位于第二神经网络之前”和“第一中间结果”的概念的理解可以参阅图4对应实施例中的描述,此处不做赘述。
1204、服务器将第一中间结果输入第二神经网络,得到第二神经网络生成的与待处理数据对应的预测结果,预测结果所指示的信息的类型与目标任务的类型对应。
本申请实施例中,服务器在接收到加密后的第一中间结果之后,可以对加密后的第一中间结果进行解密以得到第一中间结果,并将第一中间结果输入第二神经网络中,得到第二神经网络生成的与待处理数据对应的预测结果(也即得到整个目标神经网络输出的与待处理数据对应的预测结果),预测结果所指示的信息的类型与目标任务的类型对应。进一步地,对于“目标任务”、“与待处理数据对应的预测结果”的概念的理解可以参阅图4对应的实施例中的描述,此处不做赘述。
为更直观地理解本方案,请参阅图13,图13为本申请实施例提供的数据处理方法的一种流程示意图。图13中以目标神经网络所执行的目标任务为提取声纹特征为例,如图13所示,D1、第一终端设备获取用户输入的待处理数据(也即图13中示出的用户输入的声音数据)。D2、第一终端设备将待处理数据输入至第一神经网络(也即图13中示出的目标神经网络的前N个神经网络层)中,得到该第一神经网络生成的第一中间结果。D3、第一终端设备将第一中间结果进行加密处理,并将加密后的第一中间结果发送给服务器,以实现对第一中间结果的加密传输。D4、服务器在对加密后的第一中间结果进行解密以得到第一中间结果,将第一中间结果输入第二神经网络(也即目标神经网络的后M个神经网络 层)中,得到整个目标神经网络输出的与待处理数据对应的预测结果(也即从输入的声音数据中提取出的声纹特征)。D5、服务器将已经注册的至少一个声纹特征中的每个声纹特征与获取到的声纹特征进行对比,以确定获取到的声纹特征是否为预先注册的至少一个声纹特征中的任意一个,以确定声纹识别的结果,该声纹识别的结果用于指示该用户是否为具有权限的用户。D6、服务器将该声纹识别的结果发送给第一终端设备。应理解,图13中的示例仅为方便理解本方案,不用于限定本方案。
需要说明的是,步骤1201为可选步骤,若不执行步骤1201,服务器将第一神经网络部署至第一终端设备上的方式可以参阅图4对应实施例中步骤402中的描述,此处不做赘述。
1205、服务器获取与目标神经网络对应的更新后的拆分节点,更新后的拆分节点指示第一神经网络包括n个神经网络层且第二神经网络包括m个神经网络层。
本申请实施例中,步骤1205的具体方式可以参阅图4对应实施例中步骤407中的描述,区别在于,步骤407中是获取与目标神经网络对应的两个拆分节点,步骤1205中是获取与目标神经网络对应的一个拆分节点。
此外,本申请实施例不限定步骤1201和步骤1205之间的执行次数,可以在执行一次步骤1201之后,执行多次步骤1205。
1206、服务器向第一终端设备发送第一神经网络包括的n个神经网络层。
1207、第一终端设备将待处理数据输入第一神经网络,得到第一神经网络生成的第三中间结果,在第二时刻,第一神经网络包括n个神经网络层。
1208、第一终端设备将第三中间结果发送至服务器。
1209、服务器将第三中间结果输入第二神经网络包括的m个神经网络层,得到第二神经网络生成的与待处理数据对应的预测结果。
本申请实施例中,步骤1206至1209的具体方式可以参阅图4对应实施例中步骤401至404中的描述,区别在于,将步骤401至404中的“第一中间结果”替换为步骤1206至1209中的“第三中间结果”,此处均不做赘述。
需要说明的是,步骤1205至1209均为可选步骤,若不执行步骤1205,则不需要执行步骤1206至1209;若执行步骤1205,则步骤1206也是可选步骤,若执行步骤1205且不执行步骤1206,则服务器将第一神经网络部署至第一终端设备上的方式可以参阅图4对应实施例中步骤409中的描述,此处不做赘述。
本申请实施例中,由于第二神经网络的运算是由服务器完成的,因此可以减少整个神经网络的计算过程中所占用的第一终端设备的计算机资源;第一终端设备是将待处理数据输入第一神经网络计算之后,仅将第一中间结果发送给服务器,而非发送原始的待处理数据,避免了原始的待处理数据的泄露,提高了对用户数据的隐私性的保护程度。
在图1至图13所对应的实施例的基础上,为了更好的实施本申请实施例的上述方案,下面还提供用于实施上述方案的相关设备。具体参阅图14,图14为本申请实施例提供的 数据处理装置的一种结构示意图,数据处理装置1400部署于第一终端设备上,第一终端设备包含于数据处理的系统,数据处理的系统还包括服务器,第一终端设备上部署第一神经网络和第三神经网络,服务器上部署第二神经网络,数据处理装置1400包括:输入模块1401,用于将待处理数据输入第一神经网络,得到第一神经网络生成的第一中间结果;发送模块1402,用于将第一中间结果发送至服务器,第一中间结果用于供服务器利用第二神经网络得到第二中间结果;接收模块1403,用于接收服务器发送的第二中间结果;输入模块1401,还用于将第二中间结果输入第三神经网络,得到第三神经网络生成的与待处理数据对应的预测结果;其中,第一神经网络、第二神经网络和第三神经网络组成目标神经网络,在第一时刻和第二时刻这两个不同的时刻,第一终端设备上部署的神经网络存在如下变化:第一神经网络中的神经网络层的数量发生改变,或者,第三神经网络中的神经网络层的数量发生改变。
在一种可能的设计中,在第一时刻,第一神经网络包括N个神经网络层,第三神经网络包括S个神经网络层,在第二时刻,第一神经网络包括n个神经网络层,第三神经网络包括s个神经网络层,其中,N和n不同和/或S和s不同;接收模块1403,还用于接收服务器发送的n个神经网络层和s个神经网络层。
需要说明的是,数据处理装置1400中各模块/单元之间的信息交互、执行过程等内容,与本申请中图3至图11对应的各个方法实施例基于同一构思,具体内容可参见本申请前述所示的方法实施例中的叙述,此处不再赘述。
请参阅图15,图15为本申请实施例提供的数据处理装置的一种结构示意图,数据处理装置1500部署于服务器,服务器包含于数据处理的系统,数据处理的系统还包括第一终端设备,第一终端设备上部署第一神经网络和第三神经网络,服务器上部署第二神经网络,数据处理装置1500包括:接收模块1501,用于接收第一终端设备发送的第一中间结果,第一中间结果基于待处理数据和第一神经网络得到;输入模块1502,用于将第一中间结果输入第二神经网络,得到第二神经网络生成的第二中间结果;发送模块1503,用于将第二中间结果发送至第一终端设备,第二中间结果用于供第一终端设备利用第三神经网络得到与待处理数据对应的预测结果;其中,第一神经网络、第二神经网络和第三神经网络组成目标神经网络,在第一时刻和第二时刻这两个不同的时刻,第一终端设备上部署的神经网络存在如下变化:第一神经网络中的神经网络层的数量发生改变,或者,第三神经网络中的神经网络层的数量发生改变。
在一种可能的设计中,在第一时刻,第一神经网络包括N个神经网络层,第三神经网络包括S个神经网络层,在第二时刻,第一神经网络包括n个神经网络层,第三神经网络包括s个神经网络层,其中,N和n不同和/或S和s不同;发送模块1503,还用于向第一终端设备发送n个神经网络层和s个神经网络层。
需要说明的是,数据处理装置1500中各模块/单元之间的信息交互、执行过程等内容,与本申请中图3至图11对应的各个方法实施例基于同一构思,具体内容可参见本申请前述所示的方法实施例中的叙述,此处不再赘述。
请参阅图16,图16为本申请实施例提供的数据处理装置的一种结构示意图,数据处 理装置1600部署于第一终端设备,第一终端设备包含于数据处理的系统,数据处理的系统还包括服务器,第一终端设备上部署第一神经网络,服务器上部署第二神经网络,数据处理装置1600包括:输入模块1601,用于将待处理数据输入第一神经网络,得到第一神经网络生成的第一中间结果;发送模块1602,用于将第一中间结果发送至服务器,第一中间结果用于供服务器利用第二神经网络得到与待处理数据对应的预测结果;其中,第一神经网络和第二神经网络组成目标神经网络,在第一时刻和第二时刻这两个不同的时刻,第一终端设备上部署的第一神经网络中的神经网络层的数量发生改变。
在一种可能的设计中,在第一时刻,第一神经网络包括N个神经网络层,在第二时刻,第一神经网络包括n个神经网络层,N和n不同;数据处理装置1600还包括接收模块,用于接收服务器发送的第一神经网络。
需要说明的是,数据处理装置1600中各模块/单元之间的信息交互、执行过程等内容,与本申请中图12或图13对应的各个方法实施例基于同一构思,具体内容可参见本申请前述所示的方法实施例中的叙述,此处不再赘述。
请参阅图17,图17为本申请实施例提供的数据处理装置的一种结构示意图,数据处理装置1700部署于服务器,服务器包含于数据处理的系统,数据处理的系统还包括第一终端设备,第一终端设备上部署第一神经网络,服务器上部署第二神经网络,数据处理装置1700包括:接收模块1701,用于接收第一终端设备发送的第一中间结果,第一中间结果基于待处理数据和第一神经网络得到;输入模块1702,用于将第一中间结果输入第二神经网络,得到第二神经网络生成的与待处理数据对应的预测结果;其中,第一神经网络和第二神经网络组成目标神经网络,在第一时刻和第二时刻这两个不同的时刻,第一终端设备上部署的第一神经网络中的神经网络层的数量发生改变。
在一种可能的设计中,在第一时刻,第一神经网络包括N个神经网络层,在第二时刻,第一神经网络包括n个神经网络层,N和n不同;装置还包括:发送模块,用于向第一终端设备发送n个神经网络层。
需要说明的是,数据处理装置1700中各模块/单元之间的信息交互、执行过程等内容,与本申请中图12或图13对应的各个方法实施例基于同一构思,具体内容可参见本申请前述所示的方法实施例中的叙述,此处不再赘述。
接下来介绍本申请实施例提供的一种第一终端设备,请参阅图18,图18为本申请实施例提供的第一终端设备的一种结构示意图。具体的,第一终端设备1800包括:接收器1801、发射器1802、处理器1803和存储器1804(其中第一终端设备1800中的处理器1803的数量可以一个或多个,图18中以一个处理器为例),其中,处理器1803可以包括应用处理器18031和通信处理器18032。在本申请的一些实施例中,接收器1801、发射器1802、处理器1803和存储器1804可通过总线或其它方式连接。
存储器1804可以包括只读存储器和随机存取存储器,并向处理器1803提供指令和数据。存储器1804的一部分还可以包括非易失性随机存取存储器(non-volatile random access memory,NVRAM)。存储器1804存储有处理器和操作指令、可执行模块或者数据结 构,或者它们的子集,或者它们的扩展集,其中,操作指令可包括各种操作指令,用于实现各种操作。
处理器1803控制第一终端设备的操作。具体的应用中,第一终端设备的各个组件通过总线系统耦合在一起,其中总线系统除包括数据总线之外,还可以包括电源总线、控制总线和状态信号总线等。但是为了清楚说明起见,在图中将各种总线都称为总线系统。
上述本申请实施例揭示的方法可以应用于处理器1803中,或者由处理器1803实现。处理器1803可以是一种集成电路芯片,具有信号的处理能力。在实现过程中,上述方法的各步骤可以通过处理器1803中的硬件的集成逻辑电路或者软件形式的指令完成。上述的处理器1803可以是通用处理器、数字信号处理器(digital signal processing,DSP)、微处理器或微控制器,还可进一步包括专用集成电路(application specific integrated circuit,ASIC)、现场可编程门阵列(field-programmable gate array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。该处理器1803可以实现或者执行本申请实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件译码处理器执行完成,或者用译码处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器1804,处理器1803读取存储器1804中的信息,结合其硬件完成上述方法的步骤。
接收器1801可用于接收输入的数字或字符信息,以及产生与第一终端设备的相关设置以及功能控制有关的信号输入。发射器1802可用于通过第一接口输出数字或字符信息;发射器1802还可用于通过第一接口向磁盘组发送指令,以修改磁盘组中的数据;发射器1802还可以包括显示屏等显示设备。
本申请实施例中,在一种情况下,处理器1803用于执行图3至图11对应的各个方法实施例中的第一终端设备执行的步骤。需要说明的是,处理器1803执行前述各个步骤的具体方式,与本申请中图3至图11对应的各个方法实施例基于同一构思,其带来的技术效果与本申请中图3至图11对应的各个方法实施例相同,具体内容可参见本申请前述所示的方法实施例中的叙述,此处不再赘述。
在另一种情况下,处理器1803用于执行图12或图13对应的各个方法实施例中的第一终端设备执行的步骤。需要说明的是,处理器1803执行前述各个步骤的具体方式,与本申请中图12或图13对应的各个方法实施例基于同一构思,其带来的技术效果与本申请中图12或图13对应的各个方法实施例相同,具体内容可参见本申请前述所示的方法实施例中的叙述,此处不再赘述。
本申请实施例还提供了一种服务器,请参阅图19,图19是本申请实施例提供的服务器一种结构示意图,具体的,服务器1900由一个或多个服务器实现,服务器1900可因配置或性能不同而产生比较大的差异,可以包括一个或一个以上中央处理器(central processing units,CPU)1922(例如,一个或一个以上处理器)和存储器1932,一个或一个以上存储应用程序1942或数据1944的存储介质1930(例如一个或一个以上海量存储 设备)。其中,存储器1932和存储介质1930可以是短暂存储或持久存储。存储在存储介质1930的程序可以包括一个或一个以上模块(图示没标出),每个模块可以包括对服务器中的一系列指令操作。更进一步地,中央处理器1922可以设置为与存储介质1930通信,在服务器1900上执行存储介质1930中的一系列指令操作。
服务器1900还可以包括一个或一个以上电源1926,一个或一个以上有线或无线网络接口1950,一个或一个以上输入输出接口1958,和/或,一个或一个以上操作系统1941,例如Windows Server™,Mac OS X™,Unix™,Linux™,FreeBSD™等等。
本申请实施例中,在一种情况下,中央处理器1922用于执行图3至图11对应的各个实施例中的服务器执行的步骤。需要说明的是,中央处理器1922执行前述各个步骤的具体方式,与本申请中图3至图11对应的各个方法实施例基于同一构思,其带来的技术效果与本申请中图3至图11对应的各个方法实施例相同,具体内容可参见本申请前述所示的方法实施例中的叙述,此处不再赘述。
在另一种情况下,中央处理器1922用于执行图12或图13对应的各个实施例中的服务器执行的步骤。需要说明的是,中央处理器1922执行前述各个步骤的具体方式,与本申请中图12或图13对应的各个方法实施例基于同一构思,其带来的技术效果与本申请中图12或图13对应的各个方法实施例相同,具体内容可参见本申请前述所示的方法实施例中的叙述,此处不再赘述。
本申请实施例中还提供一种包括计算机程序产品,当其在计算机上运行时,使得计算机执行如前述图3至图11所示实施例描述的方法中第一终端设备所执行的步骤,或者,使得计算机执行如前述图3至图11所示实施例描述的方法中服务器所执行的步骤,或者,使得计算机执行如前述图12或图13所示实施例描述的方法中第一终端设备所执行的步骤,或者,使得计算机执行如前述图12或图13所示实施例描述的方法中服务器所执行的步骤。
本申请实施例中还提供一种计算机可读存储介质,该计算机可读存储介质中存储有用于进行信号处理的程序,当其在计算机上运行时,使得计算机执行如前述图3至图11所示实施例描述的方法中第一终端设备所执行的步骤,或者,使得计算机执行如前述图3至图11所示实施例描述的方法中服务器所执行的步骤,或者,使得计算机执行如前述图12或图13所示实施例描述的方法中第一终端设备所执行的步骤,或者,使得计算机执行如前述图12或图13所示实施例描述的方法中服务器所执行的步骤。
An embodiment of this application further provides a data processing system. The data processing system may include a first terminal device and a server, where the first terminal device is the first terminal device described in the embodiment shown in FIG. 18, and the server is the server described in the embodiment shown in FIG. 19.
The data processing apparatus provided in the embodiments of this application may specifically be a chip. The chip includes a processing unit and a communication unit; the processing unit may be, for example, a processor, and the communication unit may be, for example, an input/output interface, a pin, or a circuit. The processing unit may execute computer-executable instructions stored in a storage unit, so that the chip performs the data processing methods described in the embodiments shown in FIG. 12 or FIG. 13, or so that the chip performs the data processing methods described in the embodiments shown in FIG. 3 to FIG. 11. Optionally, the storage unit is a storage unit inside the chip, such as a register or a cache; the storage unit may also be a storage unit located outside the chip within the radio access device, such as a read-only memory (read-only memory, ROM) or another type of static storage device capable of storing static information and instructions, or a random access memory (random access memory, RAM).
Specifically, referring to FIG. 20, FIG. 20 is a schematic structural diagram of a chip according to an embodiment of this application. The chip may be embodied as a neural network processing unit NPU 200. The NPU 200 is mounted to a host CPU (Host CPU) as a coprocessor, and the host CPU allocates tasks. The core part of the NPU is the operation circuit 2003; the controller 2004 controls the operation circuit 2003 to fetch matrix data from memory and perform multiplication.
In some implementations, the operation circuit 2003 internally includes multiple processing engines (Process Engine, PE). In some implementations, the operation circuit 2003 is a two-dimensional systolic array. The operation circuit 2003 may also be a one-dimensional systolic array or other electronic circuitry capable of performing mathematical operations such as multiplication and addition. In some implementations, the operation circuit 2003 is a general-purpose matrix processor.
For example, suppose there are an input matrix A, a weight matrix B, and an output matrix C. The operation circuit fetches the data corresponding to matrix B from the weight memory 2002 and caches it on each PE in the operation circuit. The operation circuit fetches matrix A data from the input memory 2001 and performs matrix operations with matrix B, and the partial or final results of the resulting matrix are stored in the accumulator (accumulator) 2008.
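As a rough illustrative model only (a plain-Python sketch of the data flow, not the actual circuit timing or the hardware's behavior), the PE-array computation just described, with matrix B cached per PE and partial products collected in the accumulator, behaves like:

```python
# Hypothetical sketch of the PE-array matrix multiply described above.
# The function name and loop structure are illustrative assumptions.

def pe_array_matmul(A, B):
    """Multiply A (m x k) by B (k x n), accumulating partial products
    the way the accumulator collects PE outputs step by step."""
    m, k = len(A), len(A[0])
    n = len(B[0])
    # Accumulator state: one running sum per output element.
    acc = [[0] * n for _ in range(m)]
    for t in range(k):                          # stream A, one column step at a time
        for i in range(m):
            for j in range(n):                  # each (t, j) plays the role of a PE
                acc[i][j] += A[i][t] * B[t][j]  # partial result accumulates
    return acc

# Example with 2x2 matrices:
A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(pe_array_matmul(A, B))  # [[19, 22], [43, 50]]
```

The point of the sketch is that each output element is built from partial sums over the shared dimension, which is why the hardware needs an accumulator rather than a single-shot multiply.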
The unified memory 2006 is used to store input data and output data. Weight data is transferred to the weight memory 2002 directly through the direct memory access controller (Direct Memory Access Controller, DMAC) 2005. Input data is also transferred to the unified memory 2006 through the DMAC.
The bus interface unit (Bus Interface Unit, BIU) 2010 is used for the interaction between the AXI bus and both the DMAC and the instruction fetch buffer (Instruction Fetch Buffer, IFB) 2009. Specifically, the bus interface unit 2010 is used by the instruction fetch buffer 2009 to obtain instructions from external memory, and is also used by the direct memory access controller 2005 to obtain the source data of the input matrix A or the weight matrix B from external memory.
The DMAC is mainly used to transfer input data from the external memory DDR to the unified memory 2006, to transfer weight data to the weight memory 2002, or to transfer input data to the input memory 2001.
The vector computation unit 2007 includes multiple operation processing units and, when needed, performs further processing on the output of the operation circuit, such as vector multiplication, vector addition, exponentiation, logarithm, and magnitude comparison. It is mainly used for computation of non-convolution/non-fully-connected layers in neural networks, such as batch normalization, pixel-wise summation, and upsampling of feature planes.
In some implementations, the vector computation unit 2007 can store the processed output vector into the unified memory 2006. For example, the vector computation unit 2007 may apply a linear function and/or a nonlinear function to the output of the operation circuit 2003, for example performing linear interpolation on the feature planes extracted by a convolutional layer, or, as another example, applying such a function to a vector of accumulated values to generate activation values. In some implementations, the vector computation unit 2007 generates normalized values, pixel-wise summed values, or both. In some implementations, the processed output vector can be used as an activation input to the operation circuit 2003, for example for use in subsequent layers of the neural network.
The instruction fetch buffer (instruction fetch buffer) 2009 connected to the controller 2004 is used to store instructions used by the controller 2004.
The unified memory 2006, the input memory 2001, the weight memory 2002, and the instruction fetch buffer 2009 are all on-chip memories. The external memory is private to the NPU hardware architecture.
In the embodiments corresponding to FIG. 3 to FIG. 13, at least one neural network layer of the target neural network is deployed on each of the first terminal device and the server, and the operations of the neural network layers in the target neural network may be performed by the operation circuit 2003 or the vector computation unit 2007.
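As an illustrative sketch of this split deployment (the three stand-in functions below are invented for the example and are not the networks of this application), the end-to-end flow of the target neural network across the device and the server can be modeled as:

```python
# Hedged sketch of split inference: the first and third parts run on
# the terminal device, the second (typically largest) part on the server.

def first_network(x):          # runs on the first terminal device
    return [v * 2 for v in x]  # produces the first intermediate result

def second_network(h1):        # runs on the server
    return [v + 1 for v in h1] # produces the second intermediate result

def third_network(h2):         # runs back on the first terminal device
    return sum(h2)             # produces the prediction result

def split_inference(x):
    h1 = first_network(x)      # device -> server: send h1
    h2 = second_network(h1)    # server -> device: send h2
    return third_network(h2)

print(split_inference([1, 2, 3]))  # 15
```

Only the intermediate results cross the network, which is the property the embodiments rely on: the raw to-be-processed data never leaves the device.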
Any of the processors mentioned above may be a general-purpose central processing unit, a microprocessor, an ASIC, or one or more integrated circuits for controlling the execution of the programs of the method of the first aspect above.
It should further be noted that the apparatus embodiments described above are merely illustrative. The units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solutions of the embodiments. In addition, in the drawings of the apparatus embodiments provided in this application, the connection relationships between modules indicate that they have communication connections, which may specifically be implemented as one or more communication buses or signal lines.
From the description of the foregoing implementations, a person skilled in the art can clearly understand that this application may be implemented by software plus the necessary general-purpose hardware, and certainly may also be implemented by dedicated hardware including application-specific integrated circuits, dedicated CPUs, dedicated memories, dedicated components, and the like. Generally, any function completed by a computer program can easily be implemented by corresponding hardware, and the specific hardware structures used to implement the same function may be diverse, such as analog circuits, digital circuits, or dedicated circuits. For this application, however, a software program implementation is in most cases the better implementation. Based on such an understanding, the technical solutions of this application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a readable storage medium, such as a computer floppy disk, a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disc, and includes several instructions to cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform the methods described in the embodiments of this application.
In the foregoing embodiments, the implementation may be realized wholly or partly by software, hardware, firmware, or any combination thereof. When software is used, the implementation may be realized wholly or partly in the form of a computer program product.
The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of this application are wholly or partly generated. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) manner. The computer-readable storage medium may be any usable medium that a computer can store, or a data storage device such as a server or data center integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (Solid State Disk, SSD)), or the like.

Claims (26)

  1. A data processing method, wherein the method is applied to a data processing system, the data processing system comprises a first terminal device and a server, a first neural network and a third neural network are deployed on the first terminal device, and a second neural network is deployed on the server, and the method comprises:
    the first terminal device inputting to-be-processed data into the first neural network to obtain a first intermediate result generated by the first neural network, and sending the first intermediate result to the server;
    the server inputting the first intermediate result into the second neural network to obtain a second intermediate result generated by the second neural network, and sending the second intermediate result to the first terminal device; and
    the first terminal device inputting the second intermediate result into the third neural network to obtain a prediction result, generated by the third neural network, corresponding to the to-be-processed data;
    wherein the first neural network, the second neural network, and the third neural network constitute a target neural network, and between a first moment and a second moment, which are two different moments, the neural networks deployed on the first terminal device change as follows: the number of neural network layers in the first neural network changes, or the number of neural network layers in the third neural network changes.
  2. The method according to claim 1, wherein at the first moment the first neural network comprises N neural network layers and the third neural network comprises S neural network layers, and at the second moment the first neural network comprises n neural network layers and the third neural network comprises s neural network layers, wherein N and n are different and/or S and s are different, and the method further comprises:
    the server sending the n neural network layers and the s neural network layers to the first terminal device.
  3. The method according to claim 1 or 2, wherein the method further comprises:
    the server determining the first neural network and the third neural network from the target neural network, wherein the factors for determining the first neural network and the third neural network comprise: the occupancy of processor resources of the first terminal device and/or the occupancy of memory resources of the first terminal device.
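Purely as a hypothetical illustration of such a resource-based determination (the function name, the heuristic, and all constants below are invented for the sketch and are not taken from this application), a server could pick the device/server split from the device's reported occupancy like this:

```python
# Hypothetical split-point selection: the busier the terminal device,
# the fewer layers it keeps; the rest stay on the server.

def choose_split(total_layers, cpu_load, mem_load):
    """Return (device_head, server_middle, device_tail) layer counts.
    cpu_load and mem_load are occupancy ratios in [0, 1]."""
    load = max(cpu_load, mem_load)        # treat the busier resource as the bottleneck
    device_layers = max(2, int(total_layers * (1.0 - load) * 0.5))
    head = device_layers // 2             # layers of the "first neural network"
    tail = device_layers - head           # layers of the "third neural network"
    middle = total_layers - head - tail   # "second neural network", on the server
    return head, middle, tail

print(choose_split(12, 0.9, 0.5))  # (1, 10, 1): a heavily loaded device keeps few layers
print(choose_split(12, 0.0, 0.0))  # (3, 6, 3): an idle device keeps more
```

Any real policy would also weigh communication cost and intermediate-result sizes; the sketch only shows the direction of the dependency the claim describes.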
  4. The method according to claim 1 or 2, wherein the data processing system further comprises a second terminal device, the number of neural network layers in the first neural network deployed on the first terminal device differs from that in the first neural network deployed on the second terminal device, and/or the number of neural network layers in the third neural network deployed on the first terminal device differs from that in the third neural network deployed on the second terminal device;
    wherein the first terminal device and the second terminal device are terminal devices of different types, and/or the first terminal device and the second terminal device are terminal devices of different models within the same type.
  5. The method according to claim 1 or 2, wherein
    the processor resources occupied by the first terminal device in performing data processing through the first neural network and the third neural network are less than the processor resources occupied by the server in performing data processing through the second neural network, and the memory resources occupied by the first terminal device in performing data processing through the first neural network and the third neural network are less than the memory resources occupied by the server in performing data processing through the second neural network.
  6. The method according to claim 1 or 2, wherein the to-be-processed data is any one of the following: sound data, facial image data, fingerprint data, or ear contour data.
  7. A data processing method, wherein the method is applied to a data processing system, the data processing system comprises a first terminal device and a server, a first neural network is deployed on the first terminal device, and a second neural network is deployed on the server, and the method comprises:
    the first terminal device inputting to-be-processed data into the first neural network to obtain a first intermediate result generated by the first neural network, and sending the first intermediate result to the server; and
    the server inputting the first intermediate result into the second neural network to obtain a prediction result, generated by the second neural network, corresponding to the to-be-processed data;
    wherein the first neural network and the second neural network constitute a target neural network, and between a first moment and a second moment, which are two different moments, the number of neural network layers in the first neural network deployed on the first terminal device changes.
  8. The method according to claim 7, wherein at the first moment the first neural network comprises N neural network layers, at the second moment the first neural network comprises n neural network layers, and N and n are different, and the method further comprises:
    the server sending the n neural network layers to the first terminal device.
  9. The method according to claim 7 or 8, wherein the data processing system further comprises a second terminal device, and the number of neural network layers in the first neural network deployed on the first terminal device differs from that in the first neural network deployed on the second terminal device;
    wherein the first terminal device and the second terminal device are terminal devices of different types, and/or the first terminal device and the second terminal device are terminal devices of different models within the same type.
  10. A data processing method, wherein the method is applied to a first terminal device, the first terminal device is included in a data processing system, the data processing system further comprises a server, a first neural network and a third neural network are deployed on the first terminal device, and a second neural network is deployed on the server, and the method comprises:
    inputting to-be-processed data into the first neural network to obtain a first intermediate result generated by the first neural network;
    sending the first intermediate result to the server, the first intermediate result being used by the server to obtain a second intermediate result by means of the second neural network; and
    receiving the second intermediate result sent by the server, and inputting the second intermediate result into the third neural network to obtain a prediction result, generated by the third neural network, corresponding to the to-be-processed data;
    wherein the first neural network, the second neural network, and the third neural network constitute a target neural network, and between a first moment and a second moment, which are two different moments, the neural networks deployed on the first terminal device change as follows: the number of neural network layers in the first neural network changes, or the number of neural network layers in the third neural network changes.
  11. A data processing method, wherein the method is applied to a server, the server is included in a data processing system, the data processing system further comprises a first terminal device, a first neural network and a third neural network are deployed on the first terminal device, and a second neural network is deployed on the server, and the method comprises:
    receiving a first intermediate result sent by the first terminal device, the first intermediate result being obtained based on to-be-processed data and the first neural network;
    inputting the first intermediate result into the second neural network to obtain a second intermediate result generated by the second neural network; and
    sending the second intermediate result to the first terminal device, the second intermediate result being used by the first terminal device to obtain, by means of the third neural network, a prediction result corresponding to the to-be-processed data;
    wherein the first neural network, the second neural network, and the third neural network constitute a target neural network, and between a first moment and a second moment, which are two different moments, the neural networks deployed on the first terminal device change as follows: the number of neural network layers in the first neural network changes, or the number of neural network layers in the third neural network changes.
  12. A data processing method, wherein the method is applied to a first terminal device, the first terminal device is included in a data processing system, the data processing system further comprises a server, a first neural network is deployed on the first terminal device, and a second neural network is deployed on the server, and the method comprises:
    inputting to-be-processed data into the first neural network to obtain a first intermediate result generated by the first neural network; and
    sending the first intermediate result to the server, the first intermediate result being used by the server to obtain, by means of the second neural network, a prediction result corresponding to the to-be-processed data;
    wherein the first neural network and the second neural network constitute a target neural network, and between a first moment and a second moment, which are two different moments, the number of neural network layers in the first neural network deployed on the first terminal device changes.
  13. A data processing method, wherein the method is applied to a server, the server is included in a data processing system, the data processing system further comprises a first terminal device, a first neural network is deployed on the first terminal device, and a second neural network is deployed on the server, and the method comprises:
    receiving a first intermediate result sent by the first terminal device, the first intermediate result being obtained based on to-be-processed data and the first neural network; and
    inputting the first intermediate result into the second neural network to obtain a prediction result, generated by the second neural network, corresponding to the to-be-processed data;
    wherein the first neural network and the second neural network constitute a target neural network, and between a first moment and a second moment, which are two different moments, the number of neural network layers in the first neural network deployed on the first terminal device changes.
  14. A data processing apparatus, wherein the data processing apparatus is deployed on a first terminal device, the first terminal device is included in a data processing system, the data processing system further comprises a server, a first neural network and a third neural network are deployed on the first terminal device, and a second neural network is deployed on the server, and the apparatus comprises:
    an input module, configured to input to-be-processed data into the first neural network to obtain a first intermediate result generated by the first neural network;
    a sending module, configured to send the first intermediate result to the server, the first intermediate result being used by the server to obtain a second intermediate result by means of the second neural network; and
    a receiving module, configured to receive the second intermediate result sent by the server;
    the input module being further configured to input the second intermediate result into the third neural network to obtain a prediction result, generated by the third neural network, corresponding to the to-be-processed data;
    wherein the first neural network, the second neural network, and the third neural network constitute a target neural network, and between a first moment and a second moment, which are two different moments, the neural networks deployed on the first terminal device change as follows: the number of neural network layers in the first neural network changes, or the number of neural network layers in the third neural network changes.
  15. The apparatus according to claim 14, wherein at the first moment the first neural network comprises N neural network layers and the third neural network comprises S neural network layers, and at the second moment the first neural network comprises n neural network layers and the third neural network comprises s neural network layers, wherein N and n are different and/or S and s are different; and
    the receiving module is further configured to receive the n neural network layers and the s neural network layers sent by the server.
  16. A data processing apparatus, wherein the data processing apparatus is deployed on a server, the server is included in a data processing system, the data processing system further comprises a first terminal device, a first neural network and a third neural network are deployed on the first terminal device, and a second neural network is deployed on the server, and the apparatus comprises:
    a receiving module, configured to receive a first intermediate result sent by the first terminal device, the first intermediate result being obtained based on to-be-processed data and the first neural network;
    an input module, configured to input the first intermediate result into the second neural network to obtain a second intermediate result generated by the second neural network; and
    a sending module, configured to send the second intermediate result to the first terminal device, the second intermediate result being used by the first terminal device to obtain, by means of the third neural network, a prediction result corresponding to the to-be-processed data;
    wherein the first neural network, the second neural network, and the third neural network constitute a target neural network, and between a first moment and a second moment, which are two different moments, the neural networks deployed on the first terminal device change as follows: the number of neural network layers in the first neural network changes, or the number of neural network layers in the third neural network changes.
  17. The apparatus according to claim 16, wherein at the first moment the first neural network comprises N neural network layers and the third neural network comprises S neural network layers, and at the second moment the first neural network comprises n neural network layers and the third neural network comprises s neural network layers, wherein N and n are different and/or S and s are different; and
    the sending module is further configured to send the n neural network layers and the s neural network layers to the first terminal device.
  18. A data processing apparatus, wherein the data processing apparatus is deployed on a first terminal device, the first terminal device is included in a data processing system, the data processing system further comprises a server, a first neural network is deployed on the first terminal device, and a second neural network is deployed on the server, and the apparatus comprises:
    an input module, configured to input to-be-processed data into the first neural network to obtain a first intermediate result generated by the first neural network; and
    a sending module, configured to send the first intermediate result to the server, the first intermediate result being used by the server to obtain, by means of the second neural network, a prediction result corresponding to the to-be-processed data;
    wherein the first neural network and the second neural network constitute a target neural network, and between a first moment and a second moment, which are two different moments, the number of neural network layers in the first neural network deployed on the first terminal device changes.
  19. The apparatus according to claim 18, wherein at the first moment the first neural network comprises N neural network layers, at the second moment the first neural network comprises n neural network layers, and N and n are different; and
    the apparatus further comprises a receiving module, configured to receive the first neural network sent by the server.
  20. A data processing apparatus, wherein the data processing apparatus is deployed on a server, the server is included in a data processing system, the data processing system further comprises a first terminal device, a first neural network is deployed on the first terminal device, and a second neural network is deployed on the server, and the apparatus comprises:
    a receiving module, configured to receive a first intermediate result sent by the first terminal device, the first intermediate result being obtained based on to-be-processed data and the first neural network; and
    an input module, configured to input the first intermediate result into the second neural network to obtain a prediction result, generated by the second neural network, corresponding to the to-be-processed data;
    wherein the first neural network and the second neural network constitute a target neural network, and between a first moment and a second moment, which are two different moments, the number of neural network layers in the first neural network deployed on the first terminal device changes.
  21. The apparatus according to claim 20, wherein at the first moment the first neural network comprises N neural network layers, at the second moment the first neural network comprises n neural network layers, and N and n are different; and
    the apparatus further comprises a sending module, configured to send the n neural network layers to the first terminal device.
  22. A terminal device, comprising a processor and a memory, the processor being coupled to the memory, wherein
    the memory is configured to store a program; and
    the processor is configured to execute the program in the memory, so that the terminal device performs the steps performed by the terminal device in the method according to any one of claims 1 to 9, claim 10, or claim 12.
  23. A server, comprising a processor and a memory, the processor being coupled to the memory, wherein
    the memory is configured to store a program; and
    the processor is configured to execute the program in the memory, so that the server performs the steps performed by the server in the method according to any one of claims 1 to 9, claim 11, or claim 13.
  24. A data processing system, wherein the data processing system comprises a terminal device and a server, the terminal device is configured to perform the steps performed by the terminal device in the method according to any one of claims 1 to 6, and the server is configured to perform the steps performed by the server in the method according to any one of claims 1 to 6; or
    the terminal device is configured to perform the steps performed by the terminal device in the method according to any one of claims 7 to 10, and the server is configured to perform the steps performed by the server in the method according to any one of claims 7 to 10.
  25. A computer program product, wherein the computer program product includes a program that, when run on a computer, causes the computer to perform the steps performed by the terminal device in the method according to any one of claims 1 to 9, claim 10, or claim 12, or causes the computer to perform the steps performed by the server in the method according to any one of claims 1 to 9, claim 11, or claim 13.
  26. A computer-readable storage medium, wherein the computer-readable storage medium stores a program that, when run on a computer, causes the computer to perform the steps performed by the terminal device in the method according to any one of claims 1 to 9, claim 10, or claim 12, or causes the computer to perform the steps performed by the server in the method according to any one of claims 1 to 9, claim 11, or claim 13.
PCT/CN2023/071725 2022-01-30 2023-01-10 Data processing method and related device WO2023143080A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210115049.6A 2022-01-30 2022-01-30 Data processing method and related device
CN202210115049.6 2022-01-30

Publications (1)

Publication Number Publication Date
WO2023143080A1 true WO2023143080A1 (zh) 2023-08-03

Family

ID=87470469

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/071725 WO2023143080A1 (zh) 2023-01-10 Data processing method and related device

Country Status (2)

Country Link
CN (1) CN116579380A (zh)
WO (1) WO2023143080A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116719648A (zh) * 2023-08-10 2023-09-08 泰山学院 一种用于计算机系统的数据管理方法及系统

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090210361A1 (en) * 2008-02-20 2009-08-20 Shiqing Chen Multi-platform control system for controlling machines
CN109685202A (zh) * 2018-12-17 2019-04-26 腾讯科技(深圳)有限公司 数据处理方法及装置、存储介质和电子装置
CN111091182A (zh) * 2019-12-16 2020-05-01 北京澎思科技有限公司 数据处理方法、电子设备及存储介质
CN113436208A (zh) * 2021-06-30 2021-09-24 中国工商银行股份有限公司 基于端边云协同的图像处理方法、装置、设备及介质



Also Published As

Publication number Publication date
CN116579380A (zh) 2023-08-11


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23745963

Country of ref document: EP

Kind code of ref document: A1