WO2022001092A1 - A data processing method, apparatus and device - Google Patents

A data processing method, apparatus and device

Info

Publication number
WO2022001092A1
Authority
WO
WIPO (PCT)
Prior art keywords: baseline model, target, initial, edge server, model
Application number
PCT/CN2021/073615
Other languages
English (en)
French (fr)
Inventor
程战战
郭大山
Original Assignee
上海高德威智能交通系统有限公司
Application filed by 上海高德威智能交通系统有限公司
Publication of WO2022001092A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2218/00 Aspects of pattern recognition specially adapted for signal processing
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/25 Fusion techniques
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/60 Protecting data
    • G06F 21/62 Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F 21/6218 Protecting access to data via a platform, e.g. using keys or access control rules, to a system of files or objects, e.g. local or distributed file system or database
    • G06F 21/6245 Protecting personal data, e.g. for financial or medical purposes
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/60 Type of objects
    • G06V 20/62 Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V 20/625 License plates

Definitions

  • the present application relates to the field of artificial intelligence, and in particular, to a data processing method, apparatus and device.
  • Machine learning is a way to realize artificial intelligence. It is a multi-field interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory and other disciplines. Machine learning studies how computers simulate or implement human learning behaviors to acquire new knowledge or skills, and how to reorganize existing knowledge structures to continuously improve performance. Machine learning focuses on algorithm design, enabling computers to automatically learn rules from data and use those rules to predict unknown data.
  • Machine learning has a wide range of applications, such as deep learning, data mining, computer vision, natural language processing, biometric recognition, search engines, medical diagnosis, credit card fraud detection, stock market analysis, DNA sequencing, speech and handwriting recognition, strategy games, and robotics.
  • the application provides a data processing method, the method includes:
  • the central server sends the initial baseline model to the edge server
  • the edge server trains the initial baseline model through the scene data of the edge server, obtains a target baseline model, and determines whether to deploy the target baseline model;
  • the edge server sends the target baseline model to the central server
  • the central server generates a fused baseline model based on the target baseline model and the initial baseline model, trains the fused baseline model, determines the trained baseline model as the initial baseline model, and returns to the operation of the central server sending the initial baseline model to the edge server.
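  • The cyclic flow described above can be sketched in a few lines of Python. This is an illustrative sketch only: the patent does not specify a fusion method, so element-wise parameter averaging is assumed here, and the models and training step are reduced to dictionaries of named float parameters invented for the example.

```python
# Hedged sketch of the send / train / fuse cycle between the central
# server and the edge server. Models are dicts of named float parameters.

def edge_train(model, scene_data):
    # Stand-in for edge-side training on local scene data:
    # nudge each parameter toward the mean of the scene data.
    target = sum(scene_data) / len(scene_data)
    return {k: v + 0.5 * (target - v) for k, v in model.items()}

def fuse(initial, target_model):
    # Assumed fusion step: element-wise average of the two models.
    return {k: (initial[k] + target_model[k]) / 2 for k in initial}

initial_model = {"w": 0.0, "b": 0.0}
for _round in range(3):  # the cycle can repeat indefinitely
    target_model = edge_train(initial_model, scene_data=[1.0, 3.0])  # edge server
    initial_model = fuse(initial_model, target_model)                # central server
```

Each round moves the fused model closer to what the edge scene data implies, without the scene data itself ever reaching the central server.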
  • the present application provides a data processing method, which is applied to an edge server, including:
  • the present application provides a data processing method, which is applied to a central server, including:
  • the fusion baseline model is trained, the trained baseline model is determined as the initial baseline model, and execution returns to the operation of sending the initial baseline model to the edge server.
  • the present application provides a data processing device applied to an edge server, including:
  • the acquisition module is used to acquire the initial baseline model from the central server;
  • a training module configured to train the initial baseline model through the scene data of the edge server, obtain a target baseline model, and determine whether to deploy the target baseline model;
  • a sending module configured to send the target baseline model to the central server when the target baseline model is not deployed, so that the central server generates a fusion baseline model based on the target baseline model and the initial baseline model, and re-acquires the initial baseline model based on the fusion baseline model.
  • the present application provides a data processing device, which is applied to a central server, including:
  • a sending module configured to send the initial baseline model to the edge server, so that the edge server trains the initial baseline model through the scene data of the edge server to obtain the target baseline model
  • an obtaining module configured to obtain the target baseline model from the edge server
  • the generating module is configured to generate a fusion baseline model based on the target baseline model and the initial baseline model; train the fusion baseline model, and determine the trained baseline model as the initial baseline model.
  • the present application provides an edge server, including: a processor and a machine-readable storage medium, where the machine-readable storage medium stores machine-executable instructions that can be executed by the processor;
  • the processor is configured to execute machine-executable instructions to implement the following steps:
  • the present application provides a central server, including: a processor and a machine-readable storage medium, where the machine-readable storage medium stores machine-executable instructions that can be executed by the processor;
  • the processor is configured to execute machine-executable instructions to implement the following steps:
  • the fusion baseline model is trained, the trained baseline model is determined as the initial baseline model, and execution returns to the operation of sending the initial baseline model to the edge server.
  • the edge server can obtain the initial baseline model from the central server, train the initial baseline model through the scene data to obtain the target baseline model, and send the target baseline model to the central server; the central server generates a fused baseline model based on the target baseline model and the initial baseline model, trains the fused baseline model, determines the trained baseline model as the initial baseline model, sends the initial baseline model to the edge server, and so on. Since the above process can be performed cyclically, the performance of the initial baseline model and the target baseline model is continuously upgraded and their recognition capabilities are continuously improved, so that the target baseline model can achieve the expected performance and high-precision recognition.
  • the edge server can obtain a target baseline model with better performance, the target baseline model can match the environment where the edge server is located, and the accuracy of the intelligent analysis results is high.
  • the edge server sends the target baseline model, rather than the scene data (such as license plate images), to the central server, so as to protect the privacy of the scene data. Because the scene data is never sent to the central server, a de-privacy data protection function is achieved, avoiding sending private license plate images to the central server. Since the target baseline model is trained based on the scene data, the information of the scene data is still embodied in the initial baseline model and the target baseline model.
  • FIG. 1 is a schematic structural diagram of a system in an embodiment of the present application.
  • FIG. 2 is a schematic flowchart of a data processing method in an embodiment of the present application.
  • FIG. 3 is a schematic structural diagram of a baseline model in an embodiment of the present application.
  • FIG. 4 is a schematic flowchart of a data processing method in another embodiment of the present application.
  • FIG. 5 is a schematic flowchart of a data processing method in another embodiment of the present application.
  • 6A and 6B are schematic structural diagrams of a data processing apparatus in an embodiment of the present application.
  • FIG. 7A is a hardware structure diagram of an edge server in an embodiment of the present application.
  • FIG. 7B is a hardware structure diagram of a central server in an embodiment of the present application.
  • first, second, third, etc. may be used in the embodiments of the present application to describe various information, the information should not be limited to these terms. These terms are only used to distinguish the same type of information from each other.
  • the first information may also be referred to as the second information, and similarly, the second information may also be referred to as the first information without departing from the scope of the present application.
  • the use of the word "if” can be interpreted as "at the time of" or "when” or “in response to determining”, depending on the context.
  • Machine learning is a way to realize artificial intelligence, which is used to study how computers simulate or realize human learning behaviors to acquire new knowledge or skills, and to reorganize existing knowledge structures to continuously improve their performance.
  • Deep learning is a subcategory of machine learning and is the process of using mathematical models to model specific problems in the real world in order to solve similar problems in the field.
  • Neural networks are an implementation of deep learning. For convenience of description, this application takes the neural network as an example to introduce its structure and function; other subclasses of machine learning are similar in structure and function.
  • Neural networks may include but are not limited to convolutional neural networks (CNN for short), recurrent neural networks (RNN for short), fully connected networks, etc.
  • the structural units of a neural network may include but are not limited to convolutional layers (Conv), pooling layers (Pool), excitation layers, and fully connected layers (FC), which are not limited herein.
  • one or more convolutional layers, one or more pooling layers, one or more excitation layers, and one or more fully connected layers can be combined to construct a neural network according to different requirements.
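  • As a rough illustration of how these structural units combine, the following pure-Python sketch composes a convolution, an excitation (ReLU), a pooling, and a fully connected step over a 1-D feature list. A real system would use a deep-learning framework; all shapes, kernels, and weights here are invented for the example.

```python
# Minimal pure-Python composition of the four structural unit types.

def conv1d(xs, kernel):
    # Convolutional layer: slide the kernel over the input features.
    k = len(kernel)
    return [sum(xs[i + j] * kernel[j] for j in range(k))
            for i in range(len(xs) - k + 1)]

def relu(xs):
    # Excitation layer: ReLU keeps positive features, zeroes out negatives.
    return [max(0.0, x) for x in xs]

def max_pool(xs, size=2):
    # Pooling layer: down-sample by taking local maxima.
    return [max(xs[i:i + size]) for i in range(0, len(xs) - size + 1, size)]

def fully_connected(xs, weights, bias):
    # Fully connected layer: weighted sum over all input features.
    return sum(x * w for x, w in zip(xs, weights)) + bias

features = conv1d([1.0, -2.0, 3.0, -4.0, 5.0], kernel=[1.0, 0.5])
features = relu(features)
features = max_pool(features)
score = fully_connected(features, weights=[0.1, 0.2], bias=0.0)
```

Stacking more of each unit, in a different order or count, yields different network architectures, which is exactly the combination freedom the passage describes.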
  • the input data features are enhanced by performing convolution operations on the input data features by using a convolution kernel.
  • the convolution kernel can be a matrix of size m*n; by convolving the input data features of the convolutional layer with the convolution kernel, the output data features of the convolutional layer can be obtained.
  • the convolution operation is actually a filtering process.
  • the input data features (such as the output of the convolution layer) are subjected to operations such as taking the maximum value, the minimum value, and the average value, so as to use the principle of local correlation to sub-sample the input data features.
  • the pooling layer operation is actually a downsampling process.
  • an activation function (such as a nonlinear function) can be used to map the input data features, thereby introducing nonlinear factors, so that the neural network can enhance the expressive ability through nonlinear combinations.
  • the activation function may include, but is not limited to, a ReLU (Rectified Linear Units, rectified linear unit) function, where the ReLU function is used to set the features smaller than 0 to 0, while the features larger than 0 remain unchanged.
  • all data features input to the fully connected layer are fully connected to obtain a feature vector, and the feature vector may include multiple data features.
  • Baseline model of neural network (such as convolutional neural network):
  • each neural network parameter in the neural network can be trained with sample data, such as convolutional layer parameters (e.g., the convolution kernel), pooling layer parameters, excitation layer parameters, and fully connected layer parameters, which are not limited herein.
  • the neural network can fit the mapping relationship between input and output.
  • the trained neural network is the baseline model of the neural network, which is referred to as the baseline model in this paper.
  • Artificial intelligence processing, such as face detection, human body detection, vehicle detection, and license plate recognition, can be realized based on the baseline model.
  • For example, the image including the license plate (that is, the license plate image) is input to the baseline model, the baseline model performs artificial intelligence processing on the image, and the artificial intelligence processing result is the license plate recognition result.
  • Sample data and scene data: in artificial intelligence scenarios, central servers and edge servers can be deployed.
  • the central server obtains data for training the baseline model and trains the baseline model based on these data; the data used by the central server for training the baseline model is called sample data.
  • the edge server obtains the data for training the baseline model (which will not be sent to the central server), and trains the baseline model based on the data.
  • the data used by the edge server for training the baseline model is called scene data.
  • the sample data may be image data or other types of data, which are not limited.
  • the scene data may be image data or other types of data, which are not limited.
  • the above-mentioned baseline model may be a baseline model for realizing license plate recognition.
  • the sample data can be the license plate image (that is, the image including the license plate information) obtained by the central server for training the baseline model, and the central server can train the baseline model based on the license plate image.
  • the license plate image is only an example of the sample data, and there is no restriction on this.
  • the scene data can be the license plate image obtained by the edge server for training the baseline model. The license plate image will not be sent to the central server.
  • the edge server can train the baseline model based on the license plate image.
  • the license plate image is only an example of the scene data, and there is no restriction on this.
  • since the edge server will not send the scene data to the central server, the central server cannot train the baseline model with the scene data of the edge server; that is, the baseline model cannot match the environment where the edge server is located, and when the baseline model is deployed to the edge server, the performance of the baseline model is lower.
  • the central server may use license plate images as sample data to train a baseline model for implementing license plate recognition.
  • since the license plate image is private data, based on the protection requirements for private data, the edge server will not send the license plate image to the central server after obtaining the license plate image, so the central server cannot train the baseline model with the license plate images of the edge server; that is, the baseline model cannot match the environment where the edge server is located, and the performance of the baseline model is low.
  • the edge server can use the scene data of its own environment to train the baseline model to obtain a new baseline model. Since the baseline model is trained by using the scene data of the environment where the edge server is located, the new baseline model can match the environment where the edge server is located, and the performance of the new baseline model is better.
  • FIG. 1 is a schematic structural diagram of a system according to an embodiment of the present application.
  • the system may include a central server 110 and an edge server 120, a first relay device 130 connected to the central server 110, and a second relay device 140 connected to the edge server 120; the first relay device 130 and the second relay device 140 are connected through a network.
  • the number of edge servers 120 is at least one, and each edge server 120 is connected to one second transit device 140 , that is, the number of second transit devices 140 is the same as the number of edge servers 120 .
  • the central server 110 may constitute an identification center system
  • at least one edge server 120 may constitute an identification end-side system
  • the first relay device 130 and at least one second relay device 140 may constitute a de-privacy system, i.e., a system that keeps private data from being transmitted.
  • the central server 110 is a service platform for providing a baseline model, and can provide a baseline model for at least one edge server 120 .
  • the edge server 120 is a server with baseline model requirements, that is, the baseline model needs to be acquired from the central server 110, and then artificial intelligence processing is implemented according to the baseline model.
  • the first relay device 130 is a network device (such as a router, a switch, etc.) connected to the central server 110, and is used to forward the baseline model sent by the central server 110 to the edge server 120, and to forward the baseline model sent by the edge server 120 to the central server 110.
  • the second relay device 140 is a network device connected to the edge server 120 , and is used for forwarding the baseline model sent by the center server 110 to the edge server 120 , and forwarding the baseline model sent by the edge server 120 to the center server 110 .
  • An embodiment of the present application proposes a data processing method, as shown in FIG. 2 , the method includes steps 201-207.
  • Step 201: the central server sends the initial baseline model to the edge server.
  • the central server can obtain a large amount of sample data, and there is no restriction on the acquisition method.
  • the sample data has label information, such as the actual category and/or the target frame, and the label information is not limited.
  • the sample data can be a license plate image
  • the target frame can be the coordinate information of a rectangular frame in the license plate image (such as the coordinates of the upper left corner of the rectangular frame, the width and height of the rectangular frame, etc.)
  • the actual class represents the license plate identification of the rectangular box area.
  • the central server trains a neural network (such as a convolutional neural network) according to the sample data and the label information of the sample data to obtain a baseline model.
  • the baseline model is called an initial baseline model.
  • In the training process of the neural network, after inputting the sample data to the neural network, output data corresponding to the sample data can be obtained; the output data is a feature vector, and whether the neural network has converged can be determined based on the feature vector.
  • the loss value of the loss function (which can be configured according to experience) is determined according to the feature vector, and whether the neural network has converged is determined according to the loss value. For example, if the loss value is less than a preset threshold, the neural network has converged; otherwise, the neural network has not converged.
  • if the neural network has converged, the training process is completed and the initial baseline model is obtained. If the neural network has not converged, continue to adjust each neural network parameter in the neural network to obtain an adjusted neural network; then, the sample data and the label information corresponding to the sample data are input to the adjusted neural network, and the adjusted neural network continues to be trained, and so on, until the neural network has converged.
  • other methods may also be used to determine whether the neural network has converged, and the determination method is not limited. For example, if the number of iterations reaches a preset count threshold, it is determined that the neural network has converged; for another example, if the iteration duration reaches a preset duration threshold, it is determined that the neural network has converged.
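  • A minimal sketch of the convergence logic in these steps, assuming a toy one-parameter model with a quadratic loss in place of a real neural network; the loss-threshold criterion and the iteration-count fallback mirror the two determination methods above. The learning rate, target, and threshold values are invented for the example.

```python
# Train until the loss falls below a preset threshold, or stop once a
# preset maximum iteration count is reached (the alternative criterion).

def train_until_converged(w, lr=0.1, loss_threshold=1e-4, max_iters=1000):
    target = 3.0                   # stands in for the label information
    for iteration in range(1, max_iters + 1):
        loss = (w - target) ** 2   # loss value computed from the output
        if loss < loss_threshold:  # converged: loss below preset threshold
            return w, iteration
        w -= lr * 2 * (w - target)  # adjust the parameter; keep training
    return w, max_iters            # fallback: iteration-count criterion

trained_w, iters = train_until_converged(0.0)
```

Either criterion ends training: here the loss threshold is hit well before the iteration cap.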
  • the central server can send the initial baseline model to the edge server.
  • the initial baseline model can be sent to all edge servers or to some edge servers. The following describes the sending process by taking one edge server as an example:
  • Mode 1: The central server sends the initial baseline model to the first relay device, the first relay device sends the initial baseline model to the second relay device, and the second relay device sends the initial baseline model to the edge server; at this point, the initial baseline model has been successfully sent to the edge server.
  • Mode 2: The central server sends the initial baseline model to the first relay device, and the first relay device performs a first conversion operation on the initial baseline model to obtain a first low-dimensional baseline model, and sends the first low-dimensional baseline model to the second relay device.
  • the second transit device performs a second conversion operation on the first low-dimensional baseline model to obtain the initial baseline model, and sends the initial baseline model to the edge server.
  • the first conversion operation may be an encryption operation
  • the second conversion operation may be a decryption operation
  • the first transformation operation may be a compression operation and the second transformation operation may be a decompression operation
  • the first conversion operation may be an encryption operation and a compression operation
  • the second conversion operation may be a decryption operation and a decompression operation.
  • the above is just an example of the first conversion operation and the second conversion operation, which is not limited.
  • after receiving the initial baseline model, the first relay device performs a compression operation on the initial baseline model, so as to convert the initial baseline model into a first low-dimensional baseline model, and sends the first low-dimensional baseline model to the second relay device. After receiving the first low-dimensional baseline model, the second relay device decompresses the first low-dimensional baseline model to obtain the decompressed initial baseline model.
  • the compressed first low-dimensional baseline model is transmitted in the network instead of the uncompressed initial baseline model, the amount of transmitted data can be reduced and the model transmission time can be saved.
  • after receiving the initial baseline model, the first relay device performs an encryption operation on the initial baseline model, so as to convert the initial baseline model into a first low-dimensional baseline model, and sends the first low-dimensional baseline model to the second relay device. After receiving the first low-dimensional baseline model, the second relay device performs a decryption operation on the first low-dimensional baseline model to obtain the decrypted initial baseline model.
  • because the encrypted first low-dimensional baseline model is transmitted in the network instead of the unencrypted initial baseline model, the security of the initial baseline model can be ensured, and an attacker can be prevented from illegally intercepting the initial baseline model.
  • after receiving the initial baseline model, the first relay device performs a compression operation on the initial baseline model and then performs an encryption operation on the compressed initial baseline model, so as to convert the initial baseline model into the first low-dimensional baseline model, and sends the first low-dimensional baseline model to the second relay device. After receiving the first low-dimensional baseline model, the second relay device performs a decryption operation on the first low-dimensional baseline model and decompresses the decrypted first low-dimensional baseline model to obtain the decompressed initial baseline model.
  • because the compressed and encrypted first low-dimensional baseline model is transmitted in the network, the amount of transmitted data can be reduced, the model transmission time can be saved, and an attacker can be prevented from illegally intercepting the initial baseline model.
  • after receiving the initial baseline model, the first relay device performs an encryption operation on the initial baseline model and then performs a compression operation on the encrypted initial baseline model, so as to convert the initial baseline model into the first low-dimensional baseline model, and sends the first low-dimensional baseline model to the second relay device. After receiving the first low-dimensional baseline model, the second relay device performs a decompression operation on the first low-dimensional baseline model and performs a decryption operation on the decompressed first low-dimensional baseline model to obtain the decrypted initial baseline model.
  • the initial baseline model can be understood as a high-dimensional parameter vector; because the initial baseline model has been compressed and/or encrypted, the first low-dimensional baseline model can be understood as a low-dimensional parameter vector, i.e., a vector with a smaller number of parameters.
  • a sparse algorithm can be used to compress the initial baseline model, and the sparse algorithm can be used to decompress the first low-dimensional baseline model. The sparse algorithm is only an example of a compression algorithm and is not limited; any algorithm capable of compressing the initial baseline model can be used.
  • a cryptographic algorithm can be used to encrypt the initial baseline model, and the cryptographic algorithm can be used to decrypt the first low-dimensional baseline model. The cryptographic algorithm can be any type of cryptographic algorithm and is not limited, as long as the initial baseline model can be encrypted.
  • the cryptographic algorithm can be a Chinese commercial cryptographic algorithm such as SM1, SM2, SM3 or SM4, or an international commercial cryptographic algorithm such as DES (Data Encryption Standard), AES (Advanced Encryption Standard) or IDEA (International Data Encryption Algorithm).
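  • The Mode 2 pipeline (compress, then encrypt, then invert both steps on the receiving side) can be sketched as follows. zlib stands in for the sparse/compression algorithm, and the XOR step is a toy placeholder for a real cipher such as SM4 or AES, which are not available in the Python standard library; the key, parameter names, and model contents are invented for the example.

```python
# Illustrative first/second conversion operations between relay devices.
import json
import zlib

KEY = b"demo-key"  # hypothetical shared key

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy symmetric "encryption": XOR with a repeating key is its own inverse.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def to_low_dimensional(model: dict) -> bytes:
    # First conversion operation (first relay device): compress, then encrypt.
    raw = json.dumps(model).encode()
    return xor_cipher(zlib.compress(raw), KEY)

def from_low_dimensional(blob: bytes) -> dict:
    # Second conversion operation (second relay device): decrypt, then decompress.
    return json.loads(zlib.decompress(xor_cipher(blob, KEY)))

initial_model = {"conv.w": [0.1, 0.2], "fc.b": 0.3}
blob = to_low_dimensional(initial_model)   # what travels over the network
restored = from_low_dimensional(blob)      # what the edge server receives
```

The round trip is lossless: the edge server receives exactly the model the central server sent, while the network only ever carries the smaller, obfuscated blob.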
  • Step 202: the edge server trains the initial baseline model through the scene data of the edge server to obtain a target baseline model; that is, the trained initial baseline model is the target baseline model.
  • the edge server may acquire a large amount of scene data
  • the scene data may be scene data of the environment where the edge server is located, and the acquisition method is not limited.
  • the scene data has label information, such as an actual category and/or a target frame, and the label information is not limited.
  • the scene data can be a license plate image
  • the target frame can be the coordinate information of a rectangular frame in the license plate image (such as the coordinates of the upper left corner of the rectangular frame, and the width and height of the rectangular frame, etc.)
  • the actual category can represent the license plate identification of the rectangular box area.
  • the edge server trains the initial baseline model according to the scene data and the label information of the scene data, and obtains the trained baseline model.
  • the trained baseline model is called the target baseline model.
  • a large amount of scene data and the label information corresponding to the scene data are input into the initial baseline model, so as to train the neural network parameters in the initial baseline model with these scene data and label information; after the training is completed, the trained initial baseline model can serve as the target baseline model.
  • In the training process of the initial baseline model, after inputting the scene data to the initial baseline model, output data corresponding to the scene data can be obtained; the output data is a feature vector, and whether the initial baseline model has converged is determined based on the feature vector. For example, the loss value of the loss function (which can be configured according to experience) is determined according to the feature vector, and whether the initial baseline model has converged is determined according to the loss value; for example, if the loss value is less than a preset threshold, the initial baseline model has converged, otherwise the initial baseline model has not converged.
  • if the initial baseline model has converged, the training process is completed and the target baseline model is obtained. If the initial baseline model has not converged, continue to adjust the neural network parameters in the initial baseline model to obtain an adjusted initial baseline model, input the scene data and label information into the adjusted initial baseline model, and continue to train the adjusted initial baseline model, and so on, until the initial baseline model has converged.
  • other methods may also be used to determine whether the initial baseline model has converged, which is not limited. For example, if the number of iterations reaches a preset number of thresholds, it is determined that the initial baseline model has converged; for another example, if the iteration duration reaches a preset duration threshold, it is determined that the initial baseline model has converged.
  • the initial baseline model can be trained based on the scene data of the edge server to obtain the target baseline model. Since the scene data is data of the environment where the edge server is located and is not sent to the central server, the central server does not use the scene data to train the initial baseline model; instead, the edge server uses the scene data to train the initial baseline model, so the target baseline model carries both the information of the sample data and the new knowledge of the scene data, which improves model performance.
  • Step 203: the edge server determines whether to deploy the target baseline model. Exemplarily, if the target baseline model is to be deployed, step 204 is performed; if not, step 205 is performed.
  • If the target baseline model meets the performance requirement, it may be deployed; if it does not meet the performance requirement, it may not be deployed.
  • a test data set is acquired, where the test data set includes a plurality of test data, and each test data corresponds to an actual category of the test data.
  • For each test data in the test data set, the test data is input to the target baseline model, which performs artificial intelligence processing on it to obtain an artificial intelligence processing result, i.e., the predicted category of the test data. If the predicted category is consistent with the actual category of the test data, the target baseline model recognized the test data correctly; if they are inconsistent, the target baseline model recognized the test data incorrectly.
  • After traversing the test data set, the number of correct identification results (denoted a1) and the number of incorrect identification results (denoted a2) can be obtained, and the performance index of the target baseline model is determined from a1 and a2. For example, the performance index can be a1/(a1+a2); obviously, the larger the performance index, the better the performance of the target baseline model.
  • If the performance index of the target baseline model is greater than a preset threshold, the target baseline model has met the performance requirement and can be deployed. If the performance index is not greater than the preset threshold, the target baseline model does not meet the performance requirement and may not be deployed.
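The performance index a1/(a1+a2) and the resulting deployment decision can be sketched as follows; the deployment threshold value is an illustrative assumption:

```python
def performance_index(predictions, actuals):
    """Compute a1 / (a1 + a2): the fraction of test data identified correctly."""
    a1 = sum(1 for p, a in zip(predictions, actuals) if p == a)  # correct results
    a2 = len(actuals) - a1                                       # incorrect results
    return a1 / (a1 + a2)

def should_deploy(predictions, actuals, threshold=0.9):
    """Deploy the target baseline model only if its performance index exceeds the preset threshold."""
    return performance_index(predictions, actuals) > threshold
```

For example, three correct results out of four gives a performance index of 0.75, which would pass a 0.7 threshold but fail a 0.9 threshold.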
  • the above is just an example of determining whether to deploy the target baseline model, and there is no restriction on the determination method.
  • For example, when the number of training iterations reaches a preset count threshold, it is determined to deploy the target baseline model. For another example, when a user instruction is received, whether to deploy the target baseline model is determined based on the user instruction.
  • Step 204 the edge server processes the application data through the target baseline model.
  • The edge server can deploy the target baseline model locally; after acquiring application data, the edge server inputs the application data to the target baseline model, which performs artificial intelligence processing on the application data to obtain an artificial intelligence processing result.
  • The edge server can also deploy the target baseline model to a terminal device managed by the edge server, such as an analog camera or an IPC (Internet Protocol Camera). After the terminal device obtains application data, the application data can be input into the target baseline model, which performs artificial intelligence processing on it to obtain an artificial intelligence processing result.
  • For example, if the application data is an image including a license plate, the target baseline model performs artificial intelligence processing on the application data, and the artificial intelligence processing result may be a license plate identifier.
  • Step 205 the edge server sends the target baseline model to the central server.
  • After obtaining the target baseline model, the edge server sends it to the central server if it determines not to deploy it. If it determines to deploy the target baseline model, it may or may not also send the target baseline model to the central server.
  • the process of sending the target baseline model to the central server from the edge server may include:
  • Mode 1: The edge server sends the target baseline model to the second transit device, the second transit device sends it to the first transit device, and the first transit device sends it to the central server; at this point, the target baseline model has been successfully sent to the central server.
  • Mode 2: The edge server sends the target baseline model to the second transit device. The second transit device performs a first conversion operation on the target baseline model to obtain a second low-dimensional baseline model and sends it to the first transit device. The first transit device performs a second conversion operation on the second low-dimensional baseline model to recover the target baseline model and sends it to the central server.
  • For example, the first conversion operation may be an encryption operation and the second conversion operation a decryption operation; or the first conversion operation may be a compression operation and the second a decompression operation; or the first conversion operation may be an encryption operation plus a compression operation and the second a decryption operation plus a decompression operation.
  • the above is just an example of the first conversion operation and the second conversion operation, which is not limited.
  • In one possible implementation, the second transit device performs a compression operation on the target baseline model, thereby converting it into a second low-dimensional baseline model, and sends the second low-dimensional baseline model to the first transit device; the first transit device decompresses it to obtain the target baseline model.
  • In another possible implementation, the second transit device performs an encryption operation on the target baseline model, thereby converting it into a second low-dimensional baseline model, and sends it to the first transit device; the first transit device decrypts it to obtain the target baseline model.
  • In yet another possible implementation, the second transit device performs compression and encryption operations on the target baseline model, thereby converting it into a second low-dimensional baseline model, and sends it to the first transit device; the first transit device performs decryption and decompression operations to obtain the target baseline model.
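The conversion pair can be sketched with the compression variant; this is a minimal illustration (the function names are ours, and a real deployment would additionally layer encryption, e.g. AES, over the compressed bytes, which is omitted here):

```python
import zlib

def first_conversion(model_bytes: bytes) -> bytes:
    """Sending transit device: compress the serialized model into a smaller payload."""
    return zlib.compress(model_bytes, level=9)

def second_conversion(payload: bytes) -> bytes:
    """Receiving transit device: decompress to recover the original serialized model."""
    return zlib.decompress(payload)
```

The round trip is lossless, so the receiving side recovers the model bytes exactly while the payload transmitted over the network is smaller.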
  • Step 206 the central server generates a fusion baseline model based on the target baseline model and the initial baseline model.
  • The central server may perform a fusion operation on the initial baseline model and the target baseline model of at least one edge server to obtain a fused baseline model; the fusion operation may include, but is not limited to, one of the following: a weighting operation, an averaging operation, a maximum-value operation, or a minimum-value operation.
  • The target baseline model and the initial baseline model have the same network structure; for example, both include network layer 1 and network layer 2, where network layer 1 includes parameters A and B and network layer 2 includes parameters C and D.
  • the central server obtains the target baseline model 1 and the target baseline model 2.
  • In the initial baseline model, the value of parameter A is a11, the value of parameter B is b11, the value of parameter C is c11, and the value of parameter D is d11.
  • In target baseline model 1, the value of parameter A is a21, the value of parameter B is b21, the value of parameter C is c21, and the value of parameter D is d21.
  • In target baseline model 2, the value of parameter A is a31, the value of parameter B is b31, the value of parameter C is c31, and the value of parameter D is d31.
  • The central server can fuse the initial baseline model, target baseline model 1, and target baseline model 2 to obtain a fused baseline model. The fused baseline model likewise includes network layer 1 and network layer 2, where network layer 1 includes parameters A and B and network layer 2 includes parameters C and D.
  • the value of parameter A is obtained by fusing a11, a21 and a31.
  • For example, the value of parameter A may be the average of a11, a21, and a31; or the maximum of a11, a21, and a31; or the minimum of a11, a21, and a31; or a weighted value of a11, a21, and a31. In the weighted case, if the weight of the initial baseline model (e.g., 2) is greater than the weight of each target baseline model (e.g., 1), the value of parameter A is (a11*2 + a21*1 + a31*1)/4; if the weight of the initial baseline model (e.g., 1) is less than the weight of each target baseline model (e.g., 2), the value of parameter A is (a11*1 + a21*2 + a31*2)/5.
  • the above are just a few examples, which are not limited.
  • the value of parameter B is obtained by fusing b11, b21 and b31.
  • For example, the value of parameter B may be the average of b11, b21, and b31; or the maximum of b11, b21, and b31; or the minimum of b11, b21, and b31; or a weighted value of b11, b21, and b31, which is not limited here.
  • the value of parameter C is obtained by fusing c11, c21 and c31.
  • For example, the value of parameter C may be the average of c11, c21, and c31; or the maximum of c11, c21, and c31; or the minimum of c11, c21, and c31; or a weighted value of c11, c21, and c31, which is not limited here.
  • the value of parameter D is obtained by the fusion operation of d11, d21 and d31.
  • the value of parameter D is the average value of d11, d21 and d31; or, the value of parameter D is the maximum value of d11, d21 and d31; or the value of parameter D is the value of d11, d21 and d31
  • the minimum value of ; or, the value of parameter D is the weighted value of d11, d21 and d31, which is not limited.
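The per-parameter fusion described above can be sketched as follows. Representing each model as a dict of named scalar parameters is a simplifying assumption of ours; in practice each parameter would be a weight tensor fused element-wise in the same way:

```python
def fuse(models, op="average", weights=None):
    """Fuse same-structured parameter dicts, parameter by parameter.

    models  - list of dicts, e.g. [initial, target_1, target_2]
    op      - "average", "max", "min", or "weighted"
    weights - per-model weights, required when op == "weighted"
    """
    fused = {}
    for name in models[0]:
        values = [m[name] for m in models]
        if op == "average":
            fused[name] = sum(values) / len(values)
        elif op == "max":
            fused[name] = max(values)
        elif op == "min":
            fused[name] = min(values)
        elif op == "weighted":
            fused[name] = sum(w * v for w, v in zip(weights, values)) / sum(weights)
        else:
            raise ValueError(f"unknown fusion operation: {op}")
    return fused
```

With weights [2, 1, 1] this reproduces the (a11*2 + a21*1 + a31*1)/4 example from the description.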
  • Step 207 the central server trains the fusion baseline model to obtain a trained baseline model, determines the trained baseline model as the initial baseline model, and returns to step 201 .
  • the central server may train the fused baseline model according to the sample data and the label information of the sample data to obtain the trained baseline model.
  • For the training process, refer to step 201, which is not repeated here. The trained baseline model is determined as the initial baseline model, and the flow returns to step 201, that is, the initial baseline model is sent to the edge server again.
  • In the embodiments of the present application, the edge server obtains the initial baseline model from the central server, trains it on scene data to obtain the target baseline model, and sends the target baseline model to the central server; the central server generates a fused baseline model based on the target baseline model and the initial baseline model, trains the fused baseline model, determines the trained baseline model as the new initial baseline model, and sends it to the edge server, and so on. Since this process can be performed cyclically, the performance of the initial baseline model and the target baseline model is continuously upgraded and their recognition capabilities are continuously improved, so that the target baseline model can reach the expected performance and achieve high-precision recognition.
  • After multiple iterations, the edge server can obtain a target baseline model with better performance that matches the environment where the edge server is located, so the accuracy of the intelligent analysis results is high.
  • In the above process, the edge server sends the target baseline model, not the scene data (such as license plate images), to the central server, thereby protecting the privacy of the scene data. Since the target baseline model is trained on the scene data, the information of the scene data is still embodied in the initial baseline model and the target baseline model.
  • For example, the scene data of the edge server can be license plate images, the sample data of the central server can be license plate images, and the target baseline model is used to perform license plate recognition on the application data, that is, to identify the license plate identifier in a license plate image. In effect, a privacy-preserving license plate recognition method is proposed that achieves good license plate recognition capability without involving user-private data, that is, without sending private license plate images to the central server.
  • The network structure of the baseline model is the same throughout: the initial baseline model trained by the central server has the same network structure as the target baseline model trained by the edge server. Figure 3 is a schematic diagram of this shared network structure.
  • Since the edge server is not directly connected to the central server, a problem with a single edge server will not directly affect the operation of the central server, and edge servers can be easily added or removed.
  • the edge server may also be called an end-side server, the edge server is maintained by a legal (meeting local regulations) regional support provider/agent, and the edge server has a model training function.
  • the central server is maintained by the identification system service provider, and the central server has the function of model training.
  • the edge server can obtain the initial baseline model from the central server, obtain better preliminary recognition ability, and train the initial baseline model according to the local license plate image to obtain the target baseline model, so that the target baseline model can be adapted to the edge server.
  • the central server can obtain the target baseline model sent by the edge server, and complete the fusion of the target baseline model and the initial baseline model, so that the initial baseline model is further upgraded and has stronger generalization ability.
  • a data processing method is proposed in the embodiment of the present application, and the method includes:
  • step S1 the central server obtains an initial baseline model by training according to the sample data.
  • Step S2 the central server sends the initial baseline model to the first transit device.
  • Step S3 the first transfer device performs a first conversion operation on the initial baseline model to obtain a first low-dimensional baseline model, and sends the first low-dimensional baseline model to the second transfer device.
  • When there are multiple second transit devices, the first low-dimensional baseline model may be sent to each second transit device; one second transit device is used as an example in the following description.
  • Step S4 the second transit device performs a second conversion operation on the first low-dimensional baseline model to obtain the initial baseline model, and sends the initial baseline model to the edge server.
  • The second transit device sends the initial baseline model to the edge server connected to it; when multiple edge servers are connected, the initial baseline model may be sent to each of them. The processing of one edge server is used as an example in the following description.
  • Step S5 the edge server trains the initial baseline model through the scene data of the edge server to obtain a target baseline model, that is, the trained initial baseline model is the target baseline model.
  • Step S6 the edge server determines whether to deploy the target baseline model. Exemplarily, if the target baseline model is deployed, step S7 is performed, and if the target baseline model is not deployed, step S8 is performed.
  • Step S7 the edge server processes the application data through the target baseline model.
  • Step S8 the edge server sends the target baseline model to the second transit device.
  • Step S9 the second transfer device performs a first conversion operation on the target baseline model to obtain a second low-dimensional baseline model, and sends the second low-dimensional baseline model to the first transfer device.
  • Step S10 the first relay device performs a second conversion operation on the second low-dimensional baseline model to obtain the target baseline model, and sends the target baseline model to the central server.
  • Step S11 the central server generates a fusion baseline model based on the target baseline model and the initial baseline model.
  • Step S12 the central server trains the fusion baseline model to obtain a trained baseline model, determines the trained baseline model as the initial baseline model, and returns to step S2.
  • the performance of the initial baseline model of the central server can be continuously upgraded, and the license plate recognition capability can be continuously improved.
  • the performance of the initial baseline model and the target baseline model can be continuously improved to achieve the expected performance.
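The cyclic upgrade of steps S2 through S12 can be sketched as one round of a toy loop. This is an illustration only: the additive "training" update and the averaging fusion are our simplifying assumptions, standing in for the real edge-side training and the fusion operation described above:

```python
def run_round(initial_model, edge_scene_shifts):
    """One cycle of steps S2-S12, with models reduced to parameter dicts.

    initial_model     - dict of named scalar parameters (stand-in for the model)
    edge_scene_shifts - one number per edge server, standing in for the effect
                        of training on that server's local scene data
    """
    target_models = []
    for shift in edge_scene_shifts:          # S5: each edge server trains locally
        target = {k: v + shift for k, v in initial_model.items()}
        target_models.append(target)         # S8-S10: returned via the transit devices
    # S11: fuse the initial model with every returned target model (averaging)
    all_models = [initial_model] + target_models
    fused = {k: sum(m[k] for m in all_models) / len(all_models)
             for k in initial_model}
    return fused                             # S12: becomes the next initial model
```

Calling `run_round` repeatedly, feeding each round's output back in as the next initial model, mirrors the return to step S2 in the flow.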
  • the method can be applied to an edge server. Referring to FIG. 4 , the method can include steps 401-404.
  • Step 401 obtaining an initial baseline model from a central server.
  • Step 402 Train the initial baseline model through the scene data of the edge server to obtain a target baseline model, and determine whether to deploy the target baseline model. Exemplarily, if the target baseline model is deployed, step 403 is performed, and if the target baseline model is not deployed, step 404 is performed.
  • step 403 the application data is processed through the target baseline model.
  • Step 404: sending the target baseline model to the central server, so that the central server generates a fused baseline model based on the target baseline model and the initial baseline model and re-obtains an initial baseline model based on the fused baseline model; the flow then returns to the operation of obtaining the initial baseline model from the central server, that is, step 401 is executed.
  • the edge server can retrieve the initial baseline model from the central server.
  • the method can be applied to a central server.
  • the method can include steps 501-504.
  • Step 501 Send the initial baseline model to the edge server, so that the edge server trains the initial baseline model through the scene data of the edge server to obtain the target baseline model.
  • Step 502 Obtain the target baseline model from the edge server.
  • Step 503 generating a fusion baseline model based on the target baseline model and the initial baseline model.
  • Step 504 train the fused baseline model, determine the trained baseline model as the initial baseline model, and return to perform the operation of sending the initial baseline model to the edge server, that is, step 501 is performed.
  • FIG. 6A is a schematic structural diagram of the apparatus. Referring to FIG. 6A , the apparatus includes:
  • an obtaining module 611, configured to obtain the initial baseline model from the central server;
  • a training module 612, configured to train the initial baseline model through the scene data of the edge server, obtain a target baseline model, and determine whether to deploy the target baseline model;
  • a sending module 613, configured to send the target baseline model to the central server when the target baseline model is not deployed, so that the central server generates a fused baseline model based on the target baseline model and the initial baseline model and re-obtains the initial baseline model based on the fused baseline model.
  • the data processing apparatus may further include: a processing module configured to process application data by using the target baseline model when deploying the target baseline model.
  • FIG. 6B is a schematic structural diagram of the apparatus. Referring to FIG. 6B , the apparatus includes:
  • a sending module 621, configured to send the initial baseline model to the edge server, so that the edge server trains the initial baseline model through the scene data of the edge server to obtain the target baseline model;
  • an obtaining module 622, configured to obtain the target baseline model from the edge server;
  • a generating module 623, configured to generate a fused baseline model based on the target baseline model and the initial baseline model, train the fused baseline model, and determine the trained baseline model as the initial baseline model.
  • When generating the fused baseline model based on the target baseline model and the initial baseline model, the generating module 623 is specifically configured to perform a fusion operation on the initial baseline model and the target baseline model of at least one edge server to obtain the fused baseline model, where the fusion operation includes one of the following: a weighting operation, an averaging operation, a maximum-value operation, or a minimum-value operation.
  • The edge server includes a processor 711 and a machine-readable storage medium 712 that stores machine-executable instructions executable by the processor 711; the processor 711 is configured to execute the machine-executable instructions to implement the steps of the method applied to the edge server described above.
  • The central server includes a processor 721 and a machine-readable storage medium 722 that stores machine-executable instructions executable by the processor 721; the processor 721 is configured to execute the machine-executable instructions to implement the steps of the method applied to the central server described above, including training the fused baseline model, determining the trained baseline model as the initial baseline model, and returning to the operation of sending the initial baseline model to the edge server.
  • An embodiment of the present application further provides a machine-readable storage medium on which several computer instructions are stored; when the computer instructions are executed by a processor, the data processing method disclosed in the above examples of the present application can be implemented.
  • the above-mentioned machine-readable storage medium may be any electronic, magnetic, optical or other physical storage device, which may contain or store information, such as executable instructions, data, and the like.
  • The machine-readable storage medium may be: RAM (Random Access Memory), volatile memory, non-volatile memory, flash memory, a storage drive (such as a hard drive), a solid-state drive, any type of storage disc (such as an optical disc or DVD), or a similar storage medium, or a combination thereof.
  • A typical implementing device is a computer, which may take the form of a personal computer, laptop computer, cellular phone, camera phone, smartphone, personal digital assistant, media player, navigation device, email sending and receiving device, game console, tablet, wearable device, or a combination of any of these devices.
  • the embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.


Abstract

The present application provides a data processing method, apparatus, and device. The method includes: a central server sends an initial baseline model to an edge server; the edge server trains the initial baseline model with scene data to obtain a target baseline model and determines whether to deploy the target baseline model; if not, the edge server sends the target baseline model to the central server; the central server generates a fused baseline model based on the target baseline model and the initial baseline model, trains the fused baseline model, and determines the trained baseline model as the initial baseline model.

Description

Data Processing Method, Apparatus, and Device. Technical Field
The present application relates to the field of artificial intelligence, and in particular to a data processing method, apparatus, and device.
Background
Machine learning is one way to realize artificial intelligence. It is a multi-disciplinary field involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory, and other disciplines. Machine learning studies how computers simulate or implement human learning behavior to acquire new knowledge or skills and reorganize existing knowledge structures so as to continuously improve their own performance. Machine learning emphasizes algorithm design, enabling computers to automatically learn regularities from data and use those regularities to make predictions on unknown data.
Machine learning has been widely applied, for example in deep learning, data mining, computer vision, natural language processing, biometric recognition, search engines, medical diagnosis, credit card fraud detection, securities market analysis, DNA sequencing, speech and handwriting recognition, strategy games, and robotics.
Summary
The present application provides a data processing method, the method including:
a central server sending an initial baseline model to an edge server;
the edge server training the initial baseline model with scene data of the edge server to obtain a target baseline model, and determining whether to deploy the target baseline model;
if not, the edge server sending the target baseline model to the central server;
the central server generating a fused baseline model based on the target baseline model and the initial baseline model, training the fused baseline model, determining the trained baseline model as the initial baseline model, and returning to the operation in which the central server sends the initial baseline model to the edge server.
The present application provides a data processing method applied to an edge server, including:
obtaining an initial baseline model from a central server;
training the initial baseline model with scene data of the edge server to obtain a target baseline model, and determining whether to deploy the target baseline model;
if not, sending the target baseline model to the central server, so that the central server generates a fused baseline model based on the target baseline model and the initial baseline model and re-obtains an initial baseline model based on the fused baseline model, and returning to the operation of obtaining an initial baseline model from the central server.
The present application provides a data processing method applied to a central server, including:
sending an initial baseline model to an edge server, so that the edge server trains the initial baseline model with scene data of the edge server to obtain a target baseline model;
obtaining the target baseline model from the edge server;
generating a fused baseline model based on the target baseline model and the initial baseline model;
training the fused baseline model, determining the trained baseline model as the initial baseline model, and returning to the operation of sending the initial baseline model to the edge server.
The present application provides a data processing apparatus applied to an edge server, including:
an obtaining module, configured to obtain an initial baseline model from a central server;
a training module, configured to train the initial baseline model with scene data of the edge server to obtain a target baseline model, and determine whether to deploy the target baseline model;
a sending module, configured to send the target baseline model to the central server when the target baseline model is not deployed, so that the central server generates a fused baseline model based on the target baseline model and the initial baseline model and re-obtains an initial baseline model based on the fused baseline model.
The present application provides a data processing apparatus applied to a central server, including:
a sending module, configured to send an initial baseline model to an edge server, so that the edge server trains the initial baseline model with scene data of the edge server to obtain a target baseline model;
an obtaining module, configured to obtain the target baseline model from the edge server;
a generating module, configured to generate a fused baseline model based on the target baseline model and the initial baseline model, train the fused baseline model, and determine the trained baseline model as the initial baseline model.
The present application provides an edge server, including a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor;
the processor is configured to execute the machine-executable instructions to implement the following steps:
obtaining an initial baseline model from a central server;
training the initial baseline model with scene data of the edge server to obtain a target baseline model, and determining whether to deploy the target baseline model;
if not, sending the target baseline model to the central server, so that the central server generates a fused baseline model based on the target baseline model and the initial baseline model and re-obtains an initial baseline model based on the fused baseline model, and returning to the operation of obtaining an initial baseline model from the central server.
The present application provides a central server, including a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor;
the processor is configured to execute the machine-executable instructions to implement the following steps:
sending an initial baseline model to an edge server, so that the edge server trains the initial baseline model with scene data of the edge server to obtain a target baseline model;
obtaining the target baseline model from the edge server;
generating a fused baseline model based on the target baseline model and the initial baseline model;
training the fused baseline model, determining the trained baseline model as the initial baseline model, and returning to the operation of sending the initial baseline model to the edge server.
As can be seen from the above, in the embodiments of the present application, the edge server can obtain the initial baseline model from the central server, train it with scene data to obtain the target baseline model, and send the target baseline model to the central server; the central server generates a fused baseline model based on the target baseline model and the initial baseline model, trains the fused baseline model, determines the trained baseline model as the initial baseline model, and sends it to the edge server, and so on. Since this process can be performed cyclically, the performance of the initial baseline model and the target baseline model is continuously upgraded and their recognition capabilities are continuously improved, so that the target baseline model reaches the expected performance and achieves high-precision recognition. After multiple iterations, the edge server obtains a target baseline model with better performance that matches the environment where the edge server is located, and the accuracy of the intelligent analysis results is high. In the above process, the edge server sends the target baseline model, not the scene data (such as license plate images), to the central server, so the privacy of the scene data is protected: the scene data is never sent to the central server, realizing privacy-preserving data protection and avoiding sending private license plate images to the central server. Since the target baseline model is trained on the scene data, the information of the scene data can still be embodied in the initial baseline model and the target baseline model.
Brief Description of the Drawings
To describe the embodiments of the present application more clearly, the accompanying drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some of the embodiments recorded in the present application, and those of ordinary skill in the art can obtain other drawings from these drawings.
Figure 1 is a schematic structural diagram of a system in an embodiment of the present application;
Figure 2 is a schematic flowchart of a data processing method in an embodiment of the present application;
Figure 3 is a schematic structural diagram of a baseline model in an embodiment of the present application;
Figure 4 is a schematic flowchart of a data processing method in another embodiment of the present application;
Figure 5 is a schematic flowchart of a data processing method in another embodiment of the present application;
Figures 6A and 6B are schematic structural diagrams of a data processing apparatus in an embodiment of the present application;
Figure 7A is a hardware structure diagram of an edge server in an embodiment of the present application;
Figure 7B is a hardware structure diagram of a central server in an embodiment of the present application.
Detailed Description
The terms used in the embodiments of the present application are only for the purpose of describing specific embodiments and are not intended to limit the present application. The singular forms "a", "the", and "said" used herein are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" used herein refers to any or all possible combinations containing one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in the embodiments of the present application to describe various pieces of information, the information should not be limited to these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of the present application, first information may also be called second information, and similarly, second information may also be called first information. In addition, depending on the context, the word "if" may be interpreted as "when", "while", or "in response to determining".
Before introducing the embodiments of the present application, concepts related to the embodiments are introduced.
Machine learning: machine learning is one way to realize artificial intelligence. It studies how computers simulate or implement human learning behavior to acquire new knowledge or skills and reorganize existing knowledge structures so as to continuously improve their own performance. Deep learning is a subclass of machine learning: a process of modeling a specific real-world problem with a mathematical model in order to solve similar problems in that domain. Neural networks are an implementation of deep learning; for convenience of description, the structure and function of neural networks are introduced herein as an example, and other subclasses of machine learning are similar in structure and function.
Neural network: a neural network may include, but is not limited to, a convolutional neural network (CNN), a recurrent neural network (RNN), a fully connected network, etc. The structural units of a neural network may include, but are not limited to, a convolutional layer (Conv), a pooling layer (Pool), an activation layer, a fully connected layer (FC), etc., without limitation.
In practical applications, one or more convolutional layers, one or more pooling layers, one or more activation layers, and one or more fully connected layers can be combined as needed to construct a neural network.
In a convolutional layer, the input data features are enhanced by performing a convolution operation on them with a convolution kernel, which may be an m*n matrix. Convolving the input data features of the convolutional layer with the kernel yields the output data features of the layer; the convolution operation is in effect a filtering process.
In a pooling layer, operations such as taking the maximum, minimum, or average of the input data features (such as the output of a convolutional layer) exploit the principle of local correlation to subsample the input data features, reducing the amount of processing while preserving feature invariance; the pooling operation is in effect a downsampling process.
In an activation layer, an activation function (such as a nonlinear function) can be used to map the input data features, thereby introducing nonlinearity so that the neural network's expressive power is enhanced through nonlinear combinations. The activation function may include, but is not limited to, the ReLU (Rectified Linear Unit) function, which sets features less than 0 to 0 and keeps features greater than 0 unchanged.
In a fully connected layer, all data features input to the layer are fully connected to obtain a feature vector, which may include multiple data features.
Baseline model of a neural network (such as a convolutional neural network): during training, sample data can be used to train the neural network parameters, such as convolutional layer parameters (e.g., convolution kernels), pooling layer parameters, activation layer parameters, and fully connected layer parameters, without limitation. By training these parameters, the neural network can fit the mapping between input and output.
After training is completed, the trained neural network is the baseline model of the neural network (referred to herein simply as the baseline model), based on which artificial intelligence processing such as face detection, human body detection, vehicle detection, and license plate recognition can be implemented. For example, in a license plate recognition scenario, an image including a license plate (i.e., a license plate image) can be input to the baseline model, which performs artificial intelligence processing on the image, and the artificial intelligence processing result is the license plate recognition result.
Sample data and scene data: in an artificial intelligence scenario, a central server and edge servers can be deployed. The central server obtains data used for training the baseline model and trains the baseline model on that data; the data used by the central server for training the baseline model is called sample data. An edge server obtains data used for training the baseline model (which is not sent to the central server) and trains the baseline model on that data; the data used by the edge server for training the baseline model is called scene data.
Exemplarily, the sample data may be image data or other types of data, without limitation. The scene data may likewise be image data or other types of data, without limitation.
Exemplarily, when the artificial intelligence scenario is in the field of intelligent transportation, the baseline model may be a baseline model for license plate recognition. On this basis, the sample data may be license plate images (i.e., images including license plate information) obtained by the central server for training the baseline model, and the central server can train the baseline model on these images; of course, license plate images are only an example of sample data, without limitation. The scene data may be license plate images obtained by the edge server for training the baseline model; these images are not sent to the central server, and the edge server can train the baseline model on them. Again, license plate images are only an example of scene data, without limitation.
In summary, because the edge server does not send its scene data to the central server, the central server cannot train the baseline model on the edge server's scene data, so the baseline model cannot match the environment of the edge server, and after the baseline model is deployed on the edge server its performance is low. Exemplarily, in the field of intelligent transportation, the central server can use license plate images as sample data to train a baseline model for license plate recognition. In practice, however, license plate images are private data; to protect this privacy, an edge server that obtains license plate images does not send them to the central server. As a result, the central server cannot train the baseline model on the edge server's license plate images, the baseline model cannot match the edge server's environment, and its performance after deployment on the edge server is low.
In view of this, in the embodiments of the present application, after the baseline model is deployed on the edge server, the edge server can use scene data from its own environment to train the baseline model and obtain a new baseline model. Because the baseline model is trained on scene data from the environment where the edge server is located, the new baseline model matches that environment and its performance is better.
以下结合具体实施例,对本申请进行说明。
图1为本申请实施例的系统的结构示意图,参见图1所示,该系统可以包括中心服务器110和边缘服务器120,与中心服务器110连接的第一中转设备130,与边缘服务器120连接的第二中转设备140,第一中转设备130与第二中转设备140通过网络连接。边缘服务器120的数量为至少一个,每个边缘服务器120连接一个第二中转设备140,即第二中转设备140的数量与边缘服务器120的数量相同。
中心服务器110可组成识别中心系统,至少一个边缘服务器120可组成识别端侧系统,第一中转设备130和至少一个第二中转设备140可组成去隐私系统。
中心服务器110为用于提供基线模型的服务平台,可以为至少一个边缘服务器120提供基线模型。边缘服务器120为具有基线模型需求的服务器,即,需要从中心服务器110获取基线模型,继而根据该基线模型实现人工智能处理。
第一中转设备130是与中心服务器110连接的网络设备(如路由器、交换机等)，用于将中心服务器110发送的基线模型转发给边缘服务器120，将边缘服务器120发送的基线模型转发给中心服务器110。第二中转设备140是与边缘服务器120连接的网络设备，用于将中心服务器110发送的基线模型转发给边缘服务器120，将边缘服务器120发送的基线模型转发给中心服务器110。
本申请实施例中提出一种数据处理方法,参见图2所示,该方法包括步骤201-207。
步骤201,中心服务器将初始基线模型发送给边缘服务器。
示例性的,中心服务器可以获取大量样本数据,对此获取方式不做限制,针对每个样本数据来说,该样本数据具有标签信息,如实际类别和/或目标框等,对此标签信息不做限制。例如,针对车牌识别的应用场景来说,样本数据可以是车牌图像,目标框可以是车牌图像中某个矩形框的坐标信息(如矩形框的左上角坐标,矩形框的宽度和高度等),实际类别表示矩形框区域的车牌标识。
中心服务器根据样本数据和样本数据的标签信息对神经网络(如卷积神经网络)进行训练,得到基线模型,为了区分方便,将该基线模型称为初始基线模型。比如说,将大量样本数据及该样本数据对应的标签信息输入给神经网络,从而利用这些样本数据及标签信息对神经网络内的各神经网络参数进行训练,在神经网络训练完成后,已经完成训练的神经网络可以是初始基线模型。
在神经网络的训练过程中,将样本数据输入给神经网络后,可以得到与该样本数据对应的输出数据,该输出数据是特征向量,可以基于该特征向量确定神经网络是否已收敛。例如,根据该特征向量确定损失函数(可以根据经验进行配置)的损失值,并根据该损失值确定神经网络是否已收敛,例如,若该损失值小于预设阈值,则神经网络已收敛,否则,神经网络未收敛。
若神经网络已收敛,则完成训练过程,得到初始基线模型,若神经网络未收敛,则继续对神经网络内的各神经网络参数进行调整,得到调整后的神经网络,然后,可以将样本数据及该样本数据对应的标签信息输入给调整后的神经网络,继续对调整后的神经网络进行训练,以此类推,直到神经网络已收敛为止。
在实际应用中,还可以采用其它方式确定神经网络是否已收敛,对此确定方式不做限制。例如,若迭代次数达到预设次数阈值,则确定神经网络已收敛;又例如,若迭代时长达到预设时长阈值,则确定神经网络已收敛。
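上述"损失值小于预设阈值则判定收敛，否则调整参数继续训练，并以迭代次数上限兜底"的控制逻辑，可以用如下示意性代码说明。其中的损失函数（一维二次损失）与参数更新方式均为假设的简化示例，并非实际神经网络训练的实现：

```python
# 示意性代码：演示"损失值小于预设阈值则判定收敛，否则继续调整参数，
# 同时以迭代次数达到上限作为兜底收敛条件"的训练控制逻辑。
# 损失函数与参数更新方式均为假设的简化示例。

def train_until_converged(param, lr=0.1, loss_threshold=1e-4, max_iters=1000):
    """用一维二次损失 loss = param**2 模拟迭代训练过程。"""
    for iteration in range(1, max_iters + 1):
        loss = param ** 2
        if loss < loss_threshold:          # 损失小于阈值：已收敛，完成训练
            return param, iteration, True
        gradient = 2 * param               # 未收敛：调整参数后继续训练
        param = param - lr * gradient
    return param, max_iters, False         # 迭代次数达到上限：兜底结束

final_param, iters, converged = train_until_converged(param=5.0)
```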
中心服务器在得到初始基线模型后,可以将该初始基线模型发送给边缘服务器,在边缘服务器为多个时,可以将该初始基线模型发送给全部边缘服务器或部分边缘服务器,以下结合一个边缘服务器对此发送过程进行说明:
方式一、中心服务器将该初始基线模型发送给第一中转设备,由第一中转设备将该初始基线模型发送给第二中转设备,第二中转设备可以将该初始基线模型发送给边缘服务器,至此,成功将该初始基线模型发送给边缘服务器。
方式二、中心服务器将该初始基线模型发送给第一中转设备,第一中转设备对该初始基线模型进行第一转换操作,得到第一低维基线模型,并将该第一低维基线模型发送给第二中转设备。第二中转设备对该第一低维基线模型进行第二转换操作,得到该初始基线模型,并将该初始基线模型发送给边缘服务器。
示例性的,第一转换操作可以为加密操作,第二转换操作为解密操作。或者,第一转换操作可以为压缩操作,第二转换操作为解压缩操作。或者,第一转换操作可以为加密操作和压缩操作,第二转换操作为解密操作和解压缩操作。当然,上述只是第一转换操作和第二转换操作的示例,对此不做限制。
在一种可能的实施方式中,第一中转设备接收到初始基线模型后,对初始基线模型进行压缩操作,从而将初始基线模型转换为第一低维基线模型,并将第一低维基线模型发送给第二中转设备。第二中转设备接收到第一低维基线模型后,对第一低维基线模型进行解压缩操作,得到解压缩后的初始基线模型。
由于在网络中传输的是经过压缩的第一低维基线模型,而不是未经压缩的初始基线模型,因此,可以减少传输的数据量,节省模型传输时间。
在一种可能的实施方式中,第一中转设备接收到初始基线模型后,对初始基线模型进行加密操作,从而将初始基线模型转换为第一低维基线模型,并将第一低维基线模型发送给第二中转设备。第二中转设备接收到第一低维基线模型后,对第一低维基线模型进行解密操作,得到解密后的初始基线模型。
由于在网络中传输的是经过加密的第一低维基线模型,而不是未经加密的初始基线模型,因此,可以保证初始基线模型安全,避免攻击者非法截获初始基线模型。
在一种可能的实施方式中,第一中转设备接收到初始基线模型后,对初始基线模型进行压缩操作,对压缩后的初始基线模型进行加密操作,从而将初始基线模型转换为第一低维基线模型,并将第一低维基线模型发送给第二中转设备。第二中转设备接收到第一低维基线模型后,对第一低维基线模型进行解密操作,对解密后的第一低维基线模型进行解压缩操作,得到解压缩后的初始基线模型。
由于在网络中传输的是经过压缩和加密的第一低维基线模型,因此,可以减少传输的数据量,节省模型传输时间,避免攻击者非法截获初始基线模型。
在一种可能的实施方式中,第一中转设备接收到初始基线模型后,对初始基线模型进行加密操作,对加密后的初始基线模型进行压缩操作,从而将初始基线模型转换为第一低维基线模型,并将第一低维基线模型发送给第二中转设备。第二中转设备接收到第一低维基线模型后,对第一低维基线模型进行解压缩操作,对解压缩后的第一低维基线模型进行解密操作,得到解密后的初始基线模型。
在上述实施例中,第一低维基线模型与初始基线模型相比,由于已经对初始基线模型进行加密和/或压缩,因此,初始基线模型可以理解为高维参数向量,而第一低维基线模型可以理解为低维参数向量,即参数的数量更少。
在上述实施例中,可以采用稀疏化算法对初始基线模型进行压缩,且采用稀疏化算法对第一低维基线模型进行解压缩,当然,稀疏化算法只是压缩算法的示例,对此不做限制,可以采用能够对初始基线模型进行压缩的任意算法。
在上述实施例中，可以采用密码算法对初始基线模型进行加密，且采用该密码算法对第一低维基线模型进行解密，该密码算法可以为任意类型的密码算法，对此密码算法不做限制，只要能够对初始基线模型进行加密即可。比如说，该密码算法可以是SM1，SM2，SM3，SM4等国产商用密码算法，也可以是DES(Data Encryption Standard,数据加密标准)，AES(Advanced Encryption Standard,高级加密标准)，IDEA(International Data Encryption Algorithm,国际数据加密算法)等国际商用密码算法。
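"先压缩、后加密"的第一转换操作及其逆向的第二转换操作，可以用如下示意性代码说明。压缩采用标准库zlib；加密环节用简单的异或流代替（仅为占位演示可逆性，实际应使用SM4、AES等标准密码算法）；模型参数以序列化字节串表示。以上均为假设的简化示例：

```python
# 示意性代码：演示"先压缩、后加密"的第一转换操作与"先解密、后解压缩"的
# 第二转换操作。异或流仅作占位，非安全实现；实际应使用SM4、AES等标准密码算法。
import zlib

KEY = b"demo-key"  # 假设的演示密钥

def xor_cipher(data: bytes, key: bytes = KEY) -> bytes:
    """异或加/解密：对同一数据调用两次即可还原（占位用）。"""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def first_transform(model_bytes: bytes) -> bytes:
    """第一转换操作：先压缩得到"低维"表示，再加密保证传输安全。"""
    return xor_cipher(zlib.compress(model_bytes))

def second_transform(payload: bytes) -> bytes:
    """第二转换操作：先解密，再解压缩，还原出原始模型。"""
    return zlib.decompress(xor_cipher(payload))

model = b"baseline-model-parameters" * 100   # 模拟序列化后的初始基线模型
payload = first_transform(model)             # 网络中传输的是压缩加密后的数据
restored = second_transform(payload)
```

由于传输的是压缩后的数据，payload的字节数明显小于原始模型，对应上文"减少传输的数据量，节省模型传输时间"的效果。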
步骤202,边缘服务器通过边缘服务器的场景数据对该初始基线模型进行训练,得到目标基线模型,即经过训练的初始基线模型为目标基线模型。
示例性的,边缘服务器可以获取大量场景数据,该场景数据可以是边缘服务器所处环境的场景数据,对此获取方式不做限制。针对每个场景数据来说,该场景数据具有标签信息,如实际类别和/或目标框等,对此标签信息不做限制。
例如,针对车牌识别的应用场景来说,该场景数据可以是车牌图像,该目标框可以是车牌图像中某个矩形框的坐标信息(如矩形框的左上角坐标,矩形框的宽度和高度等),该实际类别可以表示矩形框区域的车牌标识。
边缘服务器根据场景数据和场景数据的标签信息对初始基线模型进行训练,得到训练后的基线模型,为了区分方便,将该训练后的基线模型称为目标基线模型。比如说,将大量场景数据及该场景数据对应的标签信息输入给初始基线模型,从而利用这些场景数据及标签信息对初始基线模型内的各神经网络参数进行训练,在初始基线模型训练完成后,已经完成训练的初始基线模型可以是目标基线模型。
在初始基线模型的训练过程中,将场景数据输入给初始基线模型后,可以得到与场景数据对应的输出数据,该输出数据是特征向量,基于该特征向量确定初始基线模型是否已收敛。例如,根据特征向量确定损失函数(可以根据经验进行配置)的损失值,并根据该损失值确定初始基线模型是否已收敛,例如,若该损失值小于预设阈值,则初始基线模型已收敛,否则,初始基线模型未收敛。
若初始基线模型已收敛,则完成训练过程,得到目标基线模型。若初始基线模型未收敛,则继续对初始基线模型内的各神经网络参数进行调整,得到调整后的初始基线模型,将场景数据及标签信息输入给调整后的初始基线模型,继续对调整后的初始基线模型进行训练,以此类推,直到初始基线模型已收敛为止。
在实际应用中,还可以采用其它方式确定初始基线模型是否已收敛,对此不做限制。例如,若迭代次数达到预设次数阈值,则确定初始基线模型已收敛;又例如,若迭代时长达到预设时长阈值,则确定初始基线模型已收敛。
在上述实施例中,可以基于边缘服务器的场景数据对初始基线模型进行训练,得到目标基线模型。由于场景数据是边缘服务器所处环境的数据,且未将场景数据发送给中心服务器,即中心服务器未使用这些场景数据训练初始基线模型,因此,边缘服务器利用这些场景数据对初始基线模型进行训练,使得目标基线模型具有样本数据信息,且具有场景数据的新知识,提升模型性能。
步骤203,边缘服务器确定是否部署目标基线模型。示例性的,若部署目标基线模型,则执行步骤204,若不部署目标基线模型,则执行步骤205。
示例性的,若目标基线模型已经满足性能需求,则可以部署目标基线模型,若目标基线模型未满足性能需求,则可以不部署目标基线模型。
比如说,获取测试数据集合,该测试数据集合包括多个测试数据,每个测试数据对应有该测试数据的实际类别。针对测试数据集合中的每个测试数据,将该测试数据输入给目标基线模型,通过目标基线模型对该测试数据进行人工智能处理,得到人工智能处理结果,该人工智能处理结果是该测试数据的预测类别。若该测试数据的预测类别与该测试数据的实际类别一致,则表示目标基线模型对该测试数据的识别结果正确,若该测试数据的预测类别与该测试数据的实际类别不一致,则表示目标基线模型对该测试数据的识别结果错误。
在对每个测试数据进行上述处理后，可以得到识别结果正确的数量(记为a1)和识别结果错误的数量(记为a2)，并根据识别结果正确的数量a1和识别结果错误的数量a2，确定目标基线模型的性能指标。比如说，性能指标可以为a1/(a1+a2)，显然，性能指标越大，则表示目标基线模型的性能越好。
综上所述,若目标基线模型的性能指标大于预设阈值,则说明目标基线模型已经满足性能需求,可部署目标基线模型。若目标基线模型的性能指标不大于预设阈值,则说明目标基线模型未满足性能需求,可不部署目标基线模型。
当然,上述只是确定是否部署目标基线模型的示例,对此确定方式不做限制,比如说,若迭代次数(边缘服务器每次训练得到目标基线模型时,表示一次迭代,即迭代次数加1)达到预设次数阈值,则确定部署目标基线模型。又比如,在接收到用户指令时,基于用户指令确定是否部署目标基线模型。
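按性能指标a1/(a1+a2)与预设阈值比较来决定是否部署目标基线模型的逻辑，可以用如下示意性代码说明。其中的车牌标识测试数据与阈值取值均为假设示例：

```python
# 示意性代码：按"识别结果正确数量a1 / (a1 + a2)"计算性能指标，
# 并据此决定是否部署目标基线模型。测试数据与阈值均为假设示例。

def should_deploy(predictions, labels, threshold=0.9):
    """性能指标大于预设阈值时返回True(部署)，否则返回False(回传中心服务器)。"""
    a1 = sum(1 for p, y in zip(predictions, labels) if p == y)  # 识别结果正确的数量
    a2 = len(labels) - a1                                       # 识别结果错误的数量
    metric = a1 / (a1 + a2)
    return metric > threshold, metric

deploy, metric = should_deploy(
    predictions=["沪A12345", "沪B67890", "苏C11111", "浙D22222"],  # 预测类别
    labels=["沪A12345", "沪B67890", "苏C11111", "浙D00000"],       # 实际类别
)
```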
步骤204,边缘服务器通过目标基线模型对应用数据进行处理。
示例性的,边缘服务器可以在本边缘服务器部署目标基线模型,边缘服务器在获取到应用数据后,可以将该应用数据输入给目标基线模型,以通过目标基线模型对该应用数据进行人工智能处理,得到人工智能处理结果。
示例性的,边缘服务器还可以将目标基线模型部署到终端设备,即由边缘服务器管理的终端设备,如模拟摄像机、IPC(Internet Protocol Camera,网络摄像机)等,终端设备在获取到应用数据后,可以将该应用数据输入给目标基线模型,以通过目标基线模型对该应用数据进行人工智能处理,得到人工智能处理结果。
比如说,假设目标基线模型用于实现车牌识别,则应用数据为包括车牌的图像,将应用数据提供给目标基线模型后,由目标基线模型对应用数据进行人工智能处理,得到人工智能处理结果,该人工智能处理结果可以为车牌标识。
步骤205,边缘服务器将该目标基线模型发送给中心服务器。
边缘服务器在得到目标基线模型后,若确定不部署目标基线模型,则将该目标基线模型发送给中心服务器。若确定部署目标基线模型,则可以将该目标基线模型发送给中心服务器,也可以不将该目标基线模型发送给中心服务器。
针对边缘服务器将该目标基线模型发送给中心服务器的过程,可以包括:
方式一、边缘服务器将该目标基线模型发送给第二中转设备,由第二中转设备将该目标基线模型发送给第一中转设备,第一中转设备可以将该目标基线模型发送给中心服务器,至此,成功将目标基线模型发送给中心服务器。
方式二、边缘服务器将该目标基线模型发送给第二中转设备，第二中转设备对该目标基线模型进行第一转换操作，得到第二低维基线模型，并将该第二低维基线模型发送给第一中转设备。第一中转设备对该第二低维基线模型进行第二转换操作，得到该目标基线模型，并将该目标基线模型发送给中心服务器。
示例性的,第一转换操作可以为加密操作,第二转换操作为解密操作。或者,第一转换操作可以为压缩操作,第二转换操作为解压缩操作。或者,第一转换操作可以为加密操作和压缩操作,第二转换操作为解密操作和解压缩操作。当然,上述只是第一转换操作和第二转换操作的示例,对此不做限制。
在一种可能的实施方式中,第二中转设备对目标基线模型进行压缩操作,从而将目标基线模型转换为第二低维基线模型,并将第二低维基线模型发送给第一中转设备。第一中转设备对第二低维基线模型进行解压缩操作,得到目标基线模型。或者,第二中转设备对目标基线模型进行加密操作,从而将目标基线模型转换为第二低维基线模型,并将第二低维基线模型发送给第一中转设备。第一中转设备对第二低维基线模型进行解密操作,得到目标基线模型。或者,第二中转设备对目标基线模型进行压缩操作和加密操作,从而将目标基线模型转换为第二低维基线模型,并将第二低维基线模型发送给第一中转设备。第一中转设备对第二低维基线模型进行解密操作和解压缩操作,得到目标基线模型。
步骤206,中心服务器基于目标基线模型和初始基线模型生成融合基线模型。
示例性的,中心服务器可以对至少一个边缘服务器的目标基线模型以及初始基线模型进行融合操作,得到融合基线模型;该融合操作可以包括但不限于以下操作中的一种:加权操作,平均操作,取最大值操作,取最小值操作。
示例性的,目标基线模型和初始基线模型的网络结构相同,如均包括网络层1和网络层2,网络层1包括参数A和参数B,网络层2包括参数C和参数D。
假设中心服务器获取到目标基线模型1和目标基线模型2,初始基线模型中,参数A的取值为a11,参数B的取值为b11,参数C的取值为c11,参数D的取值为d11。目标基线模型1中,参数A的取值为a21,参数B的取值为b21,参数C的取值为c21,参数D的取值为d21。目标基线模型2中,参数A的取值为a31,参数B的取值为b31,参数C的取值为c31,参数D的取值为d31。
在此基础上,中心服务器可以对初始基线模型,目标基线模型1和目标基线模型2进行融合操作,得到融合基线模型,该融合基线模型可以包括网络层1和网络层2,网络层1包括参数A和参数B,网络层2包括参数C和参数D。
在融合基线模型中，参数A的取值为对a11，a21和a31进行融合操作得到的。比如说，参数A的取值为a11，a21和a31的平均值；或，参数A的取值为a11，a21和a31中的最大值；或，参数A的取值为a11，a21和a31中的最小值；或，参数A的取值为a11，a21和a31的加权值，如初始基线模型的权重(如2)大于目标基线模型的权重(如1)时，参数A的取值为(a11*2+a21*1+a31*1)/4，初始基线模型的权重(如1)小于目标基线模型的权重(如2)时，参数A的取值为(a11*1+a21*2+a31*2)/5。当然，上述只是几个示例，对此不做限制。
在融合基线模型中,参数B的取值为对b11,b21和b31进行融合操作得到的。比如说,参数B的取值为b11,b21和b31的平均值;或,参数B的取值为b11,b21和b31中的最大值;或,参数B的取值为b11,b21和b31中的最小值;或,参数B的取值为b11,b21和b31的加权值,对此不做限制。
在融合基线模型中,参数C的取值为对c11,c21和c31进行融合操作得到的。比如说,参数C的取值为c11,c21和c31的平均值;或,参数C的取值为c11,c21和c31中的最大值;或,参数C的取值为c11,c21和c31中的最小值;或,参数C的取值为c11,c21和c31的加权值,对此不做限制。
在融合基线模型中,参数D的取值为对d11,d21和d31进行融合操作得到的。比如说,参数D的取值为d11,d21和d31的平均值;或,参数D的取值为d11,d21和d31中的最大值;或,参数D的取值为d11,d21和d31中的最小值;或,参数D的取值为d11,d21和d31的加权值,对此不做限制。
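上述对初始基线模型与各目标基线模型的同名参数逐一做平均、取最大值、取最小值或加权操作的融合过程，可以用如下示意性代码说明。模型以"参数名到取值"的字典表示，参数取值与权重设置均为假设示例：

```python
# 示意性代码：对结构相同的初始基线模型与多个目标基线模型的同名参数
# 逐一做融合操作，支持平均、取最大值、取最小值与加权四种方式。
# 参数取值与权重设置均为假设示例。

def fuse_models(models, op="mean", weights=None):
    """models为若干结构相同的参数字典，返回融合后的参数字典。"""
    fused = {}
    for name in models[0]:
        values = [m[name] for m in models]
        if op == "mean":
            fused[name] = sum(values) / len(values)
        elif op == "max":
            fused[name] = max(values)
        elif op == "min":
            fused[name] = min(values)
        elif op == "weighted":               # 加权：如初始基线模型权重更大
            total = sum(weights)
            fused[name] = sum(w * v for w, v in zip(weights, values)) / total
    return fused

initial = {"A": 1.0, "B": 2.0}   # 初始基线模型的参数
target1 = {"A": 3.0, "B": 4.0}   # 边缘服务器1回传的目标基线模型1
target2 = {"A": 5.0, "B": 6.0}   # 边缘服务器2回传的目标基线模型2

avg = fuse_models([initial, target1, target2], op="mean")
wgt = fuse_models([initial, target1, target2], op="weighted", weights=[2, 1, 1])
```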
步骤207,中心服务器对该融合基线模型进行训练,得到训练后的基线模型,将训练后的基线模型确定为初始基线模型,返回执行步骤201。
示例性的,在得到该融合基线模型后,中心服务器可以根据样本数据和样本数据的标签信息对该融合基线模型进行训练,得到训练后的基线模型,该训练过程可以参见步骤201,在此不再赘述。将训练后的基线模型确定为初始基线模型,返回执行步骤201,即将该初始基线模型发送给边缘服务器。
由以上可见，本申请实施例中，边缘服务器可以从中心服务器获取初始基线模型，通过场景数据对初始基线模型进行训练，得到目标基线模型，将目标基线模型发送给中心服务器，中心服务器基于目标基线模型和初始基线模型生成融合基线模型，对融合基线模型进行训练，将训练后的基线模型确定为初始基线模型，将初始基线模型发送给边缘服务器，以此类推。由于上述过程可循环执行，因此，不断升级初始基线模型和目标基线模型的性能，持续提升初始基线模型和目标基线模型的识别能力，使得目标基线模型达到预期性能，达到高精度的识别能力。经过多次迭代过程，边缘服务器可以得到性能较好的目标基线模型，目标基线模型能够匹配边缘服务器所处环境，智能分析结果的准确度较高。在上述过程中，边缘服务器向中心服务器发送的是目标基线模型，而不是场景数据(如车牌图像)，从而对场景数据进行隐私保护，不会将场景数据发送给中心服务器。由于目标基线模型是基于场景数据训练的，因此，能够将场景数据的信息体现到初始基线模型和目标基线模型。
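中心服务器与边缘服务器之间"下发—本地训练—回传—融合"的循环升级过程，可以用如下示意性代码说明。为便于演示，模型被简化为单个标量参数，"本地训练"被简化为向各边缘服务器场景数据对应的理想参数靠近一步，"融合"取平均，均为假设的简化示例：

```python
# 示意性代码：以标量"模型"模拟中心服务器与边缘服务器之间的循环升级：
# 中心下发模型，各边缘用本地场景数据训练得到目标模型并回传(场景数据不出本地)，
# 中心融合后再下发。训练与融合均为假设的简化示例。

def edge_train(model, local_optimum, step=0.5):
    """边缘训练：向本地场景数据对应的理想参数靠近一步。"""
    return model + step * (local_optimum - model)

def run_rounds(initial_model, edge_optima, rounds=10):
    model = initial_model
    for _ in range(rounds):
        targets = [edge_train(model, opt) for opt in edge_optima]  # 各边缘训练
        model = sum(targets) / len(targets)                        # 中心融合(平均)
    return model

# 两个边缘服务器的理想参数分别为2.0和4.0，循环若干轮后模型趋于其折中值
final_model = run_rounds(initial_model=0.0, edge_optima=[2.0, 4.0])
```

可见，随着循环轮次增加，中心侧模型逐步吸收各边缘场景数据的信息，对应上文"随着交互次数的提升，性能持续提升"的效果。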
在一种可能的实施方式中,在智能交通领域,边缘服务器的场景数据可以为车牌图像,中心服务器的样本数据可以为车牌图像,目标基线模型用于对应用数据进行车牌识别处理,即识别出车牌图像中的车牌标识。在此应用场景下,本实施例中,提出去隐私的车牌识别方式,能够在不涉及用户隐私数据的情况下达到良好的车牌识别能力,即不需要将隐私的车牌图像发送给中心服务器。
针对边缘服务器和中心服务器来说,基线模型的网络结构相同,中心服务器训练得到的初始基线模型与边缘服务器训练得到的目标基线模型的网络结构相同。参见图3所示,为基线模型的网络结构示意图,二者采用相同网络结构。
由于边缘服务器与中心服务器没有直接相连,因此,单个边缘服务器的问题不会直接影响中心服务器的运行,可以方便地增加或删除边缘服务器。
示例性的,边缘服务器也可以称为端侧服务器,边缘服务器由合法(满足当地法规)的地区支持商/代理维护,边缘服务器具有模型训练功能。中心服务器由识别系统服务提供商维护,中心服务器具有模型训练功能。边缘服务器可以从中心服务器获取到初始基线模型,取得较好的初步识别能力,并根据本地的车牌图像对初始基线模型进行训练,得到目标基线模型,使目标基线模型能够适配本边缘服务器。中心服务器可以获取边缘服务器发送的目标基线模型,完成目标基线模型与初始基线模型的融合,使得初始基线模型进一步升级,具备更强的泛化能力。
在上述应用场景下,本申请实施例中提出一种数据处理方法,该方法包括:
步骤S1,中心服务器根据样本数据训练得到初始基线模型。
步骤S2,中心服务器将该初始基线模型发送给第一中转设备。
步骤S3,第一中转设备对该初始基线模型进行第一转换操作,得到第一低维基线模型,并将该第一低维基线模型发送给第二中转设备。
示例性的,当第二中转设备的数量为多个时,可以将该第一低维基线模型发送给每个第二中转设备,后续以一个第二中转设备为例进行说明。
步骤S4,第二中转设备对该第一低维基线模型进行第二转换操作,得到该初始基线模型,并将该初始基线模型发送给边缘服务器。
示例性的，当第二中转设备的数量为多个时，针对每个第二中转设备来说，可以将该初始基线模型发送给与本第二中转设备连接的边缘服务器，即可以将该初始基线模型发送给多个边缘服务器，为了方便描述，后续以一个边缘服务器的处理过程为例进行说明。
步骤S5,边缘服务器通过边缘服务器的场景数据对该初始基线模型进行训练,得到目标基线模型,即经过训练的初始基线模型为目标基线模型。
步骤S6,边缘服务器确定是否部署目标基线模型。示例性的,若部署目标基线模型,则执行步骤S7,若不部署目标基线模型,则执行步骤S8。
步骤S7,边缘服务器通过目标基线模型对应用数据进行处理。
步骤S8,边缘服务器将该目标基线模型发送给第二中转设备。
步骤S9,第二中转设备对该目标基线模型进行第一转换操作,得到第二低维基线模型,并将该第二低维基线模型发送给第一中转设备。
步骤S10,第一中转设备对该第二低维基线模型进行第二转换操作,得到该目标基线模型,并将该目标基线模型发送给中心服务器。
步骤S11,中心服务器基于目标基线模型和初始基线模型生成融合基线模型。
步骤S12,中心服务器对该融合基线模型进行训练,得到训练后的基线模型,将训练后的基线模型确定为初始基线模型,返回执行步骤S2。
综上所述,由于上述过程可以循环执行,因此,在保护各边缘服务器的数据隐私的前提下,能够不断升级中心服务器的初始基线模型的性能,持续提升车牌识别能力。随着中心服务器与边缘服务器之间交互次数的提升,初始基线模型和目标基线模型的性能能够持续提升,达到预期的性能。
基于与上述方法同样的申请构思,本申请实施例中还提出另一种数据处理方法,该方法可以应用于边缘服务器,参见图4所示,该方法可以包括步骤401-404。
步骤401,从中心服务器获取初始基线模型。
步骤402,通过边缘服务器的场景数据对该初始基线模型进行训练,得到目标基线模型,并确定是否部署该目标基线模型。示例性的,若部署该目标基线模型,则执行步骤403,若不部署该目标基线模型,则执行步骤404。
步骤403,通过该目标基线模型对应用数据进行处理。
步骤404,将目标基线模型发送给中心服务器,以使中心服务器基于该目标基线模型和该初始基线模型生成融合基线模型,并基于该融合基线模型重新获取初始基线模 型,返回执行从中心服务器获取初始基线模型的操作,即执行步骤401。这样,边缘服务器可以重新从中心服务器获取初始基线模型。
基于与上述方法同样的申请构思,本申请实施例中还提出另一种数据处理方法,该方法可以应用于中心服务器,参见图5所示,该方法可以包括步骤501-504。
步骤501,将初始基线模型发送给边缘服务器,以使边缘服务器通过边缘服务器的场景数据对该初始基线模型进行训练,得到目标基线模型。
步骤502,从边缘服务器获取该目标基线模型。
步骤503,基于该目标基线模型和该初始基线模型生成融合基线模型。
步骤504,对该融合基线模型进行训练,将训练后的基线模型确定为初始基线模型,返回执行将初始基线模型发送给边缘服务器的操作,即执行步骤501。
基于与上述方法同样的申请构思,本申请实施例中提出一种数据处理装置,应用于边缘服务器,图6A为所述装置的结构示意图,参见图6A所示,所述装置包括:
获取模块611,用于从中心服务器获取初始基线模型;
训练模块612,用于通过所述边缘服务器的场景数据对所述初始基线模型进行训练,得到目标基线模型,并确定是否部署所述目标基线模型;
发送模块613,用于在不部署所述目标基线模型时,将所述目标基线模型发送给所述中心服务器,以使所述中心服务器基于所述目标基线模型和所述初始基线模型生成融合基线模型,并基于所述融合基线模型重新获取初始基线模型。
示例性的,所述数据处理装置还可以包括:处理模块,用于在部署所述目标基线模型时,通过所述目标基线模型对应用数据进行处理。
基于与上述方法同样的申请构思,本申请实施例中提出一种数据处理装置,应用于中心服务器,图6B为所述装置的结构示意图,参见图6B所示,所述装置包括:
发送模块621,用于将初始基线模型发送给边缘服务器,以使边缘服务器通过边缘服务器的场景数据对所述初始基线模型进行训练,得到目标基线模型;
获取模块622,用于从所述边缘服务器获取所述目标基线模型;
生成模块623,用于基于所述目标基线模型和所述初始基线模型生成融合基线模型;对融合基线模型进行训练,将训练后的基线模型确定为初始基线模型。
所述生成模块623基于所述目标基线模型和所述初始基线模型生成融合基线模型时具体用于：对至少一个边缘服务器的目标基线模型以及所述初始基线模型进行融合操作，得到所述融合基线模型；其中，所述融合操作包括以下操作中的一种：加权操作，平均操作，取最大值操作，取最小值操作。
基于与上述方法同样的申请构思,本申请实施例中提出一种边缘服务器,参见图7A所示,所述边缘服务器包括:处理器711和机器可读存储介质712,所述机器可读存储介质712存储有能够被所述处理器711执行的机器可执行指令;所述处理器711用于执行机器可执行指令,以实现如下步骤:
从中心服务器获取初始基线模型;
通过所述边缘服务器的场景数据对所述初始基线模型进行训练,得到目标基线模型,并确定是否部署所述目标基线模型;
若否,则将所述目标基线模型发送给所述中心服务器,以使所述中心服务器基于所述目标基线模型和所述初始基线模型生成融合基线模型,并基于所述融合基线模型重新获取初始基线模型,返回执行从中心服务器获取初始基线模型的操作。
基于与上述方法同样的申请构思,本申请实施例中提出一种中心服务器,参见图7B所示,所述中心服务器包括:处理器721和机器可读存储介质722,所述机器可读存储介质722存储有能够被所述处理器721执行的机器可执行指令;所述处理器721用于执行机器可执行指令,以实现如下步骤:
将初始基线模型发送给边缘服务器,以使所述边缘服务器通过所述边缘服务器的场景数据对所述初始基线模型进行训练,得到目标基线模型;
从所述边缘服务器获取所述目标基线模型;
基于所述目标基线模型和所述初始基线模型生成融合基线模型;
对所述融合基线模型进行训练,将训练后的基线模型确定为初始基线模型,返回执行将初始基线模型发送给边缘服务器的操作。
基于与上述方法同样的申请构思,本申请实施例还提供一种机器可读存储介质,所述机器可读存储介质上存储有若干计算机指令,所述计算机指令被处理器执行时,能够实现本申请上述示例公开的数据处理方法。
其中，上述机器可读存储介质可以是任何电子、磁性、光学或其它物理存储装置，可以包含或存储信息，如可执行指令、数据，等等。例如，机器可读存储介质可以是：RAM(Random Access Memory,随机存取存储器)、易失性存储器、非易失性存储器、闪存、存储驱动器(如硬盘驱动器)、固态硬盘、任何类型的存储盘(如光盘、DVD等)，或者类似的存储介质，或者它们的组合。
上述实施例阐明的系统、装置、模块或单元,具体可以由计算机芯片或实体实现,或者由具有某种功能的产品来实现。一种典型的实现设备为计算机,计算机的具体形式可以是个人计算机、膝上型计算机、蜂窝电话、相机电话、智能电话、个人数字助理、媒体播放器、导航设备、电子邮件收发设备、游戏控制台、平板计算机、可穿戴设备或者这些设备中的任意几种设备的组合。
为了描述的方便,描述以上装置时以功能分为各种单元分别描述。当然,在实施本申请时可以把各单元的功能在同一个或多个软件和/或硬件中实现。
本领域内的技术人员应明白,本申请的实施例可提供为方法、系统、或计算机程序产品。因此,本申请可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且,本申请实施例可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。
本申请是参照根据本申请实施例的方法、设备(系统)、和计算机程序产品的流程图和/或方框图来描述的。应理解可以由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其它可编程数据处理设备的处理器以产生一个机器,使得通过计算机或其它可编程数据处理设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。
而且,这些计算机程序指令也可以存储在能引导计算机或其它可编程数据处理设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或者多个流程和/或方框图一个方框或者多个方框中指定的功能。
这些计算机程序指令也可装载到计算机或其它可编程数据处理设备上,使得在计算机或者其它可编程数据处理设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其它可编程数据处理设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。
以上所述仅为本申请的实施例而已,并不用于限制本申请。对于本领域技术人员来说,本申请可以有各种更改和变化。凡在本申请的精神和原理之内所作的任何修改、等同替换、改进等,均应包含在本申请的权利要求范围之内。

Claims (12)

  1. 一种数据处理方法,所述方法包括:
    中心服务器将初始基线模型发送给边缘服务器;
    所述边缘服务器通过所述边缘服务器的场景数据对所述初始基线模型进行训练,得到目标基线模型,并确定是否部署所述目标基线模型;
    若否,则所述边缘服务器将所述目标基线模型发送给所述中心服务器;
    所述中心服务器基于所述目标基线模型和所述初始基线模型生成融合基线模型,对所述融合基线模型进行训练,将训练后的基线模型确定为初始基线模型,返回执行中心服务器将初始基线模型发送给边缘服务器的操作。
  2. 根据权利要求1所述的方法,其中,
    所述中心服务器将初始基线模型发送给边缘服务器,包括:
    所述中心服务器将所述初始基线模型发送给第一中转设备;
    所述第一中转设备对所述初始基线模型进行第一转换操作,得到第一低维基线模型,并将所述第一低维基线模型发送给第二中转设备;
    所述第二中转设备对所述第一低维基线模型进行第二转换操作,得到所述初始基线模型,并将所述初始基线模型发送给所述边缘服务器。
  3. 根据权利要求1所述的方法,其中,
    所述边缘服务器将所述目标基线模型发送给所述中心服务器,包括:
    所述边缘服务器将所述目标基线模型发送给第二中转设备;
    所述第二中转设备对所述目标基线模型进行第一转换操作,得到第二低维基线模型,并将所述第二低维基线模型发送给第一中转设备;
    所述第一中转设备对所述第二低维基线模型进行第二转换操作,得到所述目标基线模型,并将所述目标基线模型发送给所述中心服务器。
  4. 根据权利要求2或者3所述的方法,其中,
    所述第一转换操作为加密操作,所述第二转换操作为解密操作;或者,
    所述第一转换操作为压缩操作,所述第二转换操作为解压缩操作;或者,
    所述第一转换操作为加密操作和压缩操作,所述第二转换操作为解密操作和解压缩操作。
  5. 根据权利要求1所述的方法,其中,所述边缘服务器包括至少一个边缘服务器,所述中心服务器基于所述目标基线模型和所述初始基线模型生成融合基线模型,包括:
    所述中心服务器对所述至少一个边缘服务器的目标基线模型以及所述初始基线模型进行融合操作，得到所述融合基线模型；
    其中,所述融合操作包括以下操作中的一种:
    加权操作,平均操作,取最大值操作,取最小值操作。
  6. 根据权利要求1所述的方法,其中,
    所述确定是否部署所述目标基线模型之后,所述方法还包括:
    若是,则所述边缘服务器通过所述目标基线模型对应用数据进行处理。
  7. 一种数据处理方法,应用于边缘服务器,所述方法包括:
    从中心服务器获取初始基线模型;
    通过所述边缘服务器的场景数据对所述初始基线模型进行训练,得到目标基线模型,并确定是否部署所述目标基线模型;
    若否,则将所述目标基线模型发送给所述中心服务器,以使所述中心服务器基于所述目标基线模型和所述初始基线模型生成融合基线模型,并基于所述融合基线模型重新获取初始基线模型,返回执行从中心服务器获取初始基线模型的操作。
  8. 一种数据处理方法,应用于中心服务器,所述方法包括:
    将初始基线模型发送给边缘服务器,以使所述边缘服务器通过所述边缘服务器的场景数据对所述初始基线模型进行训练,得到目标基线模型;
    从所述边缘服务器获取所述目标基线模型;
    基于所述目标基线模型和所述初始基线模型生成融合基线模型;
    对所述融合基线模型进行训练,将训练后的基线模型确定为初始基线模型,返回执行将初始基线模型发送给边缘服务器的操作。
  9. 一种数据处理装置,应用于边缘服务器,所述装置包括:
    获取模块,用于从中心服务器获取初始基线模型;
    训练模块,用于通过所述边缘服务器的场景数据对所述初始基线模型进行训练,得到目标基线模型,并确定是否部署所述目标基线模型;
    发送模块,用于在不部署所述目标基线模型时,将所述目标基线模型发送给所述中心服务器,以使所述中心服务器基于所述目标基线模型和所述初始基线模型生成融合基线模型,并基于所述融合基线模型重新获取初始基线模型。
  10. 一种数据处理装置,应用于中心服务器,所述装置包括:
    发送模块,用于将初始基线模型发送给边缘服务器,以使边缘服务器通过所述边缘服务器的场景数据对所述初始基线模型进行训练,得到目标基线模型;
    获取模块,用于从所述边缘服务器获取所述目标基线模型;
    生成模块,用于基于所述目标基线模型和所述初始基线模型生成融合基线模型;对融合基线模型进行训练,将训练后的基线模型确定为初始基线模型。
  11. 一种边缘服务器,包括:处理器和机器可读存储介质,所述机器可读存储介质存储有能够被所述处理器执行的机器可执行指令;
    所述处理器用于执行机器可执行指令,以实现如下的步骤:
    从中心服务器获取初始基线模型;
    通过所述边缘服务器的场景数据对所述初始基线模型进行训练,得到目标基线模型,并确定是否部署所述目标基线模型;
    若否,则将所述目标基线模型发送给所述中心服务器,以使所述中心服务器基于所述目标基线模型和所述初始基线模型生成融合基线模型,并基于所述融合基线模型重新获取初始基线模型,返回执行从中心服务器获取初始基线模型的操作。
  12. 一种中心服务器,包括:处理器和机器可读存储介质,所述机器可读存储介质存储有能够被所述处理器执行的机器可执行指令;
    所述处理器用于执行机器可执行指令,以实现如下的步骤:
    将初始基线模型发送给边缘服务器,以使所述边缘服务器通过所述边缘服务器的场景数据对所述初始基线模型进行训练,得到目标基线模型;
    从所述边缘服务器获取所述目标基线模型;
    基于所述目标基线模型和所述初始基线模型生成融合基线模型;
    对所述融合基线模型进行训练,将训练后的基线模型确定为初始基线模型,返回执行将初始基线模型发送给边缘服务器的操作。
PCT/CN2021/073615 2020-06-29 2021-01-25 一种数据处理方法、装置及设备 WO2022001092A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010610046.0A CN111783630B (zh) 2020-06-29 2020-06-29 一种数据处理方法、装置及设备
CN202010610046.0 2020-06-29


Similar Documents

Publication Publication Date Title
WO2022001092A1 (zh) 一种数据处理方法、装置及设备
CN113688855B (zh) 数据处理方法、联邦学习的训练方法及相关装置、设备
Chen et al. An edge traffic flow detection scheme based on deep learning in an intelligent transportation system
JP6756037B2 (ja) ユーザ本人確認の方法、装置及びシステム
US10726335B2 (en) Generating compressed representation neural networks having high degree of accuracy
CN110941855B (zh) 一种AIoT场景下的神经网络模型窃取防御方法
CN114144781A (zh) 身份验证和管理系统
CN102611692B (zh) 多承租人数据中心中的安全计算方法
CA3063126A1 (en) System and method for biometric identification
CN113505882B (zh) 基于联邦神经网络模型的数据处理方法、相关设备及介质
CN111680672B (zh) 人脸活体检测方法、系统、装置、计算机设备和存储介质
CN111428887B (zh) 一种基于多个计算节点的模型训练控制方法、装置及系统
CN113537633B (zh) 基于纵向联邦学习的预测方法、装置、设备、介质和系统
CN111191267B (zh) 一种模型数据的处理方法、装置及设备
CN111612167B (zh) 机器学习模型的联合训练方法、装置、设备及存储介质
CN110874571A (zh) 人脸识别模型的训练方法及装置
KR20220076398A (ko) Ar장치를 위한 객체 인식 처리 장치 및 방법
Ma et al. Trusted ai in multiagent systems: An overview of privacy and security for distributed learning
CN114301850A (zh) 一种基于生成对抗网络与模型压缩的军用通信加密流量识别方法
KR20220138696A (ko) 이미지 분류 방법 및 장치
CN112948883A (zh) 保护隐私数据的多方联合建模的方法、装置和系统
RU2704538C1 (ru) Сетевая архитектура человекоподобной сети и способ реализации
US20230133033A1 (en) System and method for processing a data subject rights request using biometric data matching
CN112291188B (zh) 注册验证方法及系统、注册验证服务器、云服务器
CN114004265A (zh) 一种模型训练方法及节点设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21833695

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21833695

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 03.07.2023)

122 Ep: pct application non-entry in european phase

Ref document number: 21833695

Country of ref document: EP

Kind code of ref document: A1