CN117931400A - Task execution method and device, storage medium and electronic equipment - Google Patents

Task execution method and device, storage medium and electronic equipment

Info

Publication number
CN117931400A
Authority
CN
China
Prior art keywords
data
network layer
data unit
target
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410077918.XA
Other languages
Chinese (zh)
Inventor
李若愚
唐董琦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ant Blockchain Technology Shanghai Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd
Priority to CN202410077918.XA
Publication of CN117931400A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 Task transfer initiation or dispatching
    • G06F 9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/33 Querying
    • G06F 16/332 Query formulation
    • G06F 16/3329 Natural language query formulation or dialogue systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/0455 Auto-encoder networks; Encoder-decoder networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The specification discloses a task execution method and device, a storage medium and electronic equipment. The task execution method comprises the following steps: receiving a task execution request for target data; inputting the target data into a preset service model according to the task execution request, so that the service model determines each data unit contained in the target data and, for each data unit, determines the data feature corresponding to the data unit and a first weight corresponding to the data unit according to the degree of association between the data unit and each data unit in the target data, the first weight characterizing the importance of the data unit relative to the target data; fusing the data features corresponding to the data units whose first weight is smaller than a preset weight threshold to obtain fused features; and determining the target feature corresponding to the target data according to the fused features and the unfused data features, so as to execute the task according to the target feature.

Description

Task execution method and device, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of computer technology, and in particular to a task execution method and device, a storage medium and electronic equipment.
Background
With the advent of the era of large models, model performance and efficiency have attracted wide attention. In particular, the Transformer network, owing to its excellent performance and efficient training, is widely applied as a service model in fields such as intelligent customer service, risk control, target detection and privacy protection, to execute the tasks of those fields.
However, as data volumes keep growing, the model's computation and the computing resources it occupies grow quadratically with the length of the input sequence, which lowers the overall efficiency of task execution, increases resource occupation, and makes it difficult to meet ever-growing business demands.
Therefore, how to improve the computational efficiency of the service model, reduce its resource occupation and fully satisfy business requirements is a problem to be solved urgently.
Disclosure of Invention
The specification provides a task execution method and device, a storage medium and electronic equipment. By fusing the features of lower importance, the method shortens the output sequence and improves the efficiency of subsequent tasks.
The technical solutions adopted in the specification are as follows:
The specification provides a task execution method, which comprises the following steps:
receiving a task execution request aiming at target data;
inputting the target data into a preset service model according to the task execution request, so that the service model determines each data unit contained in the target data and, for each data unit, determines the data feature corresponding to the data unit and a first weight corresponding to the data unit according to the degree of association between the data unit and each data unit in the target data, wherein the first weight is used to characterize the importance of the data unit relative to the target data;
fusing the data features corresponding to the data units whose first weight is smaller than a preset weight threshold to obtain fused features;
and determining the target feature corresponding to the target data according to the fused features and the unfused data features, so as to execute the task according to the target feature.
Optionally, determining, for each data unit, the data feature corresponding to the data unit and the first weight corresponding to the data unit according to the degree of association between the data unit and each data unit in the target data specifically comprises:
determining a second weight of each data unit relative to the data unit according to the degree of association;
and determining the first weight according to the second weights, and determining the data feature corresponding to the data unit according to the second weights and the initial features corresponding to the data units.
Optionally, fusing the data features corresponding to the data units whose first weight is smaller than the preset weight threshold to obtain fused features specifically comprises:
for each data unit, performing dimension reduction on the data feature corresponding to the data unit through a linear network layer preset in the service model to obtain the dimension-reduced feature corresponding to the data unit;
and fusing the dimension-reduced features corresponding to the data units whose first weights are smaller than the preset weight threshold to obtain the fused features.
Optionally, fusing the data features corresponding to the data units whose first weight is smaller than the preset weight threshold to obtain the fused features specifically comprises:
determining the number of groups according to the precision of the network parameters in the service model;
and grouping the data features corresponding to the data units whose first weight is smaller than the preset weight threshold according to the number of groups, and, for each group, fusing the data features within the group to obtain the fused feature corresponding to the group.
Optionally, determining the target feature corresponding to the target data according to the fused features and the unfused data features specifically comprises:
for each data processing network layer contained in the service model, inputting the fused features and the unfused data features output by the previous data processing network layer into the data processing network layer as the input features corresponding to that layer, so that, for each input feature, the data processing network layer determines the data feature it outputs for the input feature and the first weight corresponding to the input feature according to the degree of association between the input feature and the other input features;
and fusing the data features output by the data processing network layer whose first weight is smaller than the preset weight threshold to obtain the fused features output by that layer, and inputting the data features and fused features output by that layer into the next network layer for processing, until the feature data output by the last data processing network layer is obtained and taken as the target feature.
Optionally, the target data includes: text data or image data.
The present specification provides a task execution device including:
the receiving module is used for receiving a task execution request for target data;
the input module is used for inputting the target data into a preset service model according to the task execution request, so that the service model determines each data unit contained in the target data and, for each data unit, determines the data feature corresponding to the data unit and a first weight corresponding to the data unit according to the degree of association between the data unit and each data unit in the target data, wherein the first weight is used to characterize the importance of the data unit relative to the target data;
the fusion module is used for fusing the data features corresponding to the data units whose first weight is smaller than a preset weight threshold to obtain fused features;
and the execution module is used for determining the target feature corresponding to the target data according to the fused features and the unfused data features, so as to execute the task according to the target feature.
Optionally, the input module is specifically configured to determine, according to the degree of association, a second weight of each data unit relative to the data unit; and determine the first weight according to the second weights, and determine the data feature corresponding to the data unit according to the second weights and the initial features corresponding to the data units.
Optionally, the fusion module is specifically configured to, for each data unit, perform, through a linear network layer preset in the service model, a dimension reduction process on a data feature corresponding to the data unit, so as to obtain a dimension reduction feature corresponding to the data unit; and fusing the dimension reduction features corresponding to the data units with the first weights smaller than the preset weight threshold to obtain fusion features.
Optionally, the fusion module is specifically configured to determine the number of groups according to the precision of the network parameters in the service model; and group the data features corresponding to the data units whose first weight is smaller than the preset weight threshold according to the number of groups, and, for each group, fuse the data features within the group to obtain the fused feature corresponding to the group.
Optionally, the execution module is specifically configured to, for each data processing network layer included in the service model, input the fused features and the unfused data features output by the previous data processing network layer into the data processing network layer as the input features corresponding to that layer, so that, for each input feature, the data processing network layer determines the data feature it outputs for the input feature and the first weight corresponding to the input feature according to the degree of association between the input feature and the other input features; and fuse the data features output by the data processing network layer whose first weight is smaller than the preset weight threshold to obtain the fused features output by that layer, and input the data features and fused features output by that layer into the next network layer for processing, until the feature data output by the last data processing network layer is obtained and taken as the target feature.
The present specification provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the task execution method described above.
The present specification provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the task execution method described above when executing the program.
At least one of the technical solutions adopted in the specification can achieve the following beneficial effects:
In the task execution method provided in the specification, a task execution request for target data is received; the target data is input into a preset service model according to the task execution request, so that the service model determines each data unit contained in the target data and, for each data unit, determines the data feature corresponding to the data unit and a first weight corresponding to the data unit according to the degree of association between the data unit and the other data units in the target data, the first weight characterizing the importance of the data unit relative to the target data; the data features corresponding to the data units whose first weight is smaller than a preset weight threshold are fused to obtain fused features; and the target feature corresponding to the target data is determined according to the fused features and the unfused data features, so that the task is executed according to the target feature.
In this method, while extracting features from the data input into the service model, the extracted data features can be fused based on the weight corresponding to each data unit, so that the features of the less important data units are merged into fewer features. This shortens the output sequence, allowing downstream network layers of the model to perform subsequent computation on shorter inputs. Compared with conventional methods, this greatly reduces the model's computation, thereby improving the overall computational efficiency of the service model and reducing the computing resources it occupies.
Drawings
The accompanying drawings described herein are used to provide a further understanding of the specification and constitute a part of it; the exemplary embodiments of the specification and their descriptions are used to explain the specification and do not unduly limit it. In the drawings:
FIG. 1 is a schematic flow chart of a task execution method provided in the present specification;
FIG. 2 is a schematic illustration of a feature fusion process provided in the present specification;
FIG. 3 is a schematic diagram of a task performing device provided in the present specification;
Fig. 4 is a schematic view of an electronic device corresponding to fig. 1 provided in the present specification.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the specification clearer, the technical solutions of the specification will be described clearly and completely below with reference to specific embodiments of the specification and the corresponding drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the specification. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the specification without creative effort shall fall within the protection scope of the specification.
Common schemes reduce the output sequence length, for example by randomly discarding words in the sequence at deeper layers, or by deciding through computation which words to discard, thereby shortening the sequence and reducing computational overhead. However, different layers focus on different semantic information, and directly discarding words may degrade the modeling effect of subsequent layers.
The following describes in detail the technical solutions provided by the embodiments of the present specification with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of a task execution method provided in the present specification, including the following steps:
S100: a task execution request for target data is received.
In a Transformer network, self-attention is computed within each layer: assuming the input feature sequence has length N, an N×N self-attention weight matrix is obtained by computation, which defines the degree of association between the data at each position and the data at every other position; the representation of each position can then be updated by weighted summation. In this process, the lengths of the input and output feature sequences remain unchanged.
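To make this concrete, the following minimal sketch (in Python with NumPy; the scaled dot-product formulation and the names W_q, W_k, W_v are standard assumptions, not taken from the patent) computes an N×N self-attention matrix and uses it to update an N-token sequence without changing its length:

import numpy as np

def self_attention(X, W_q, W_k, W_v):
    # X: (N, d) input feature sequence; returns the (N, d) updated sequence
    # and the N x N row-normalized attention (association) matrix.
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    scores = Q @ K.T / np.sqrt(K.shape[-1])            # pairwise association degrees
    A = np.exp(scores - scores.max(axis=-1, keepdims=True))
    A = A / A.sum(axis=-1, keepdims=True)              # each row sums to 1
    return A @ V, A                                    # weighted summation; length stays N

rng = np.random.default_rng(0)
N, d = 8, 16
X = rng.normal(size=(N, d))
W_q, W_k, W_v = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))
Y, A = self_attention(X, W_q, W_k, W_v)
assert Y.shape == (N, d) and A.shape == (N, N)         # sequence length unchanged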
Thus, as the length of the model's input sequence grows, the model's computation grows quadratically. Existing performance optimization schemes typically discard features in the sequence at random at deeper layers of the model, or decide through computation which features to discard, thereby shortening the feature sequence and reducing computational overhead. However, because different network layers in the model focus on different semantic information, directly discarding features may introduce errors and affect the model's accuracy.
On this basis, the specification provides a task execution method in which the service model fuses features of low importance during feature extraction, effectively preserving their semantics while shortening the data sequence the next network layer must compute, thus reducing the computational overhead of the service model.
In this specification, the execution body implementing the task execution method may be a designated device such as a server. For convenience of description, the task execution method provided in this specification is described below with only a server as the execution body.
The server may receive a task execution request for target data. The target data may include text data and image data, and of course may also include other types of data, such as audio data, which is not specifically limited in this specification.
S102: according to the task execution request, the target data is input into a preset service model, so that the service model determines each data unit contained in the target data, and for each data unit, according to the association degree between the data unit and other data units in the target data, the data characteristics corresponding to the data unit and the first weight corresponding to the data unit are determined, wherein the first weight is used for representing the importance degree of the data unit relative to the target data.
The server may input the target data carried in the task execution request into a preset service model according to the received task execution request, and then the server may determine each data unit included in the target data through the service model, and encode each data unit to obtain an initial feature (token) corresponding to each data unit.
In this specification, the model structure of the service model may be the Transformer network mentioned in step S100; for different types of service data, the tasks performed by the service model and the way the data units are divided may also differ.
For example, when the target data is text data, the service model may be a text recognition model: each data unit corresponds to a character or word at a position in the text data, and accordingly each token is a character or word vector; the service model may perform text recognition on the target data based on the finally extracted text features.
For another example, when the target data is image data, the service model may be a target recognition model: each data unit corresponds to an image block in a different area of the image data, and accordingly each token is the vector of an image block; the service model may perform target recognition on the target data based on the finally extracted image features.
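As an illustration of how target data might be split into data units and encoded into initial features, a sketch follows; the character-level split, the 4×4 patch size, and the random embedding and projection tables are assumptions for demonstration, not the patent's encoding:

import numpy as np

rng = np.random.default_rng(1)
d = 16
char_embed = rng.normal(size=(256, d))      # stand-in character embedding table

def text_units(text):
    # one data unit per character; each token is that character's embedding vector
    return np.stack([char_embed[ord(c) % 256] for c in text])

def image_units(img, patch=4):
    # one data unit per patch x patch image block, projected to a d-dim token
    proj = rng.normal(size=(patch * patch, d)) / patch
    h, w = img.shape
    blocks = [img[i:i + patch, j:j + patch].ravel()
              for i in range(0, h, patch) for j in range(0, w, patch)]
    return np.stack(blocks) @ proj

tokens = text_units("query")                     # (5, d) initial feature sequence
patches = image_units(rng.normal(size=(8, 8)))   # (4, d) initial feature sequence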
In practice, a Transformer network contains several data processing network layers for feature extraction (L, L+1, …, L+n). Taking data processing network layer L as an example, this layer computes self-attention weights over its input feature sequence and then performs a weighted summation over the tokens according to the computed weight matrix, outputting an updated feature sequence that serves as the input of layer L+1; layer L+1 repeats the above operations and feeds its updated features into the next network layer, until layer L+n outputs the final data features.
In the above process, the computation of each network layer grows quadratically with the length of the feature sequence, so the specification proposes the following scheme to reduce the model's computation:
For data processing network layer L, after the server inputs the initial feature sequence corresponding to the data units into layer L, the server may determine, for each data unit, a second weight of every data unit (including the data unit itself) relative to that data unit according to the degree of association between them.
The server may then determine the first weight corresponding to the data unit as the sum of the second weights of every data unit relative to it. The greater the first weight, the greater the data unit's contribution to the semantics of the target data and the more important it is to the target data; the smaller the first weight, the smaller that contribution and the less important the data unit is.
In other words, the greater the degree of association between data unit a and any data unit an, the greater the second weight of an relative to a, and vice versa. If the second weights of data units a1, a2, a3, …, an relative to data unit a are w1, w2, w3, …, wn respectively, the first weight w may be expressed as:
w = w1 + w2 + w3 + … + wn
Further, the server may determine the data feature corresponding to each data unit according to the second weights and the initial features.
Specifically, for data unit a, the server may weight and sum the initial features of the data units a1 to an (including data unit a) according to their second weights relative to data unit a, and determine the data feature corresponding to data unit a from the weighted result. Of course, the server may also determine that data feature from the weighted result together with the initial feature corresponding to data unit a itself.
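The following sketch makes one consistent reading of these two computations concrete. The reading is an assumption on our part: the second weights used to update each unit are the rows of the row-normalized attention matrix, while the first weight of a unit is the column sum of that matrix, i.e. the total contribution the unit makes to updating all units:

import numpy as np

def unit_features_and_weights(A, X0):
    # A: (N, N) row-normalized attention matrix; X0: (N, d) initial features.
    data_features = A @ X0           # per-unit weighted summation of initial features
    first_weights = A.sum(axis=0)    # summed contribution of each unit to all units
    return data_features, first_weights
# A unit that few other units attend to receives a small first weight
# and becomes a candidate for fusion in the next step.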
S104: and fusing the data characteristics corresponding to the data units with the first weight smaller than the preset weight threshold value to obtain fusion characteristics.
The server may fuse the data features of the data units that contribute less to the overall semantics of the target data, thereby reducing the number of output data features without losing the semantics of those data units.
Specifically, the server may determine, from among the data features extracted by the data processing network layer L, data features corresponding to the data units with first weights smaller than the preset weight threshold, and fuse the data features, so as to obtain the fused feature.
Further, the server may determine the number of groups according to the precision of the network parameters in the service model, then group the data features corresponding to the data units whose first weight is smaller than the preset weight threshold according to that number, and, for each group, fuse the data features within the group to obtain the fused feature corresponding to the group.
The higher the precision of the network parameters in the service model, the better the feature extraction effect and the less semantic information is lost when data features are fused.
Therefore, when the service model has higher-precision network parameters, fewer groups can be set, so that each group fuses relatively more features; more data features are merged into a single feature, further improving the model's computational efficiency.
When the service model has lower-precision network parameters, more groups can be set, so that each group fuses relatively fewer features; fewer data features are merged into a single feature, improving computational efficiency while preserving the accuracy of the model's output. For example, the number of features fused per group may be set to 2, i.e., the data features of every two data units whose first weight is smaller than the preset weight threshold are fused as one group; here the server may place the two closest data features into the same group.
Of course, the number of features fused per group can be set according to the actual situation. The more features fused per group, the fewer features data processing network layer L outputs and the higher the model's computational efficiency, but the accuracy of the extracted features decreases; the fewer features fused per group, the more features layer L outputs and the lower the computational efficiency, but the accuracy of the extracted features increases.
It should be noted that, for the features to be fused in each group, the server may weight and sum them according to the first weight corresponding to each feature, so as to obtain the fused feature.
Alternatively, the server may determine a third weight for each feature to be fused according to its first weight and then obtain the fused feature from these third weights, where for any group of features to be fused the third weights sum to 1.
Of course, the server may also assign each data feature an equal weight, with the weights summing to 1, and fuse the data features using these equal weights.
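A sketch of this fusion step follows; the fixed group size, index-order grouping, and first-weight-derived third weights are assumptions, since the text above also allows nearest-feature grouping and equal weights:

import numpy as np

def fuse_low_importance(feats, first_w, threshold, group_size=2):
    # Keep features whose first weight reaches the threshold; fuse the rest
    # in fixed-size groups by a weighted sum whose weights total 1 per group.
    keep = first_w >= threshold
    kept = feats[keep]
    low_idx = np.where(~keep)[0]
    fused = []
    for g in range(0, len(low_idx), group_size):
        idx = low_idx[g:g + group_size]
        w = first_w[idx] / first_w[idx].sum()   # third weights, summing to 1
        fused.append(w @ feats[idx])            # weighted fusion of the group
    return np.vstack([kept] + fused) if fused else kept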
S108: and determining target characteristics corresponding to the target data according to the fusion characteristics and each data characteristic which is not fused, so as to execute tasks according to the target characteristics.
The server may input the fused features output by data processing network layer L, together with the unfused data features, into the next network layer (L+1); layer L+1 repeats the above steps to update the features output by layer L and then passes its updated data features to the next network layer, until the last data processing network layer (L+n) outputs the final feature data, which is taken as the target feature.
For ease of understanding, the present description provides a schematic diagram of a feature fusion process, as shown in fig. 2.
Fig. 2 is a schematic diagram of a feature fusion process provided in the present specification.
For the features a1 to a8 in layer L, weighted summation through the self-attention matrix yields the updated features b1 to b8, where the first weights corresponding to b2, b5, b6 and b8 are smaller than the preset weight threshold. The server may fuse features b2 and b5 into c1 and features b6 and b8 into c2, and then input the unfused features b1, b3, b4 and b7 together with the fused features c1 and c2 into layer L+1.
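The following toy loop mirrors the flow of fig. 2 across several layers, reusing fuse_low_importance from the sketch above; the Dirichlet stand-in for attention rows and the median threshold are illustrative assumptions, chosen only so the sequence visibly shrinks at each layer:

import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(8, 16))                          # features a1..a8 entering layer L
for layer in range(3):                                # layers L, L+1, L+2
    A = rng.dirichlet(np.ones(len(X)), size=len(X))   # stand-in attention rows
    B = A @ X                                         # updated features b1..bN
    w = A.sum(axis=0)                                 # first weights (column sums)
    X = fuse_low_importance(B, w, threshold=np.median(w))
    print(f"after layer L+{layer}: sequence length = {len(X)}")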
In this specification, a linear network layer may additionally be provided in any data processing network layer. Taking network layer L as an example: for each data unit, after data processing network layer L extracts the data feature corresponding to the data unit, the feature may be input into the linear network layer within layer L, which performs dimension reduction on it to obtain the dimension-reduced feature corresponding to the data unit.
For example, assuming the data features have dimension n×d and the mapping unit has dimension k×n with k < n, feeding the data features into the mapping unit yields reduced features of dimension k×d. The mapping unit may be a linear mapping unit (k = 1), in which case the data features are mapped to a single d-dimensional linear feature.
The server may then fuse the dimension-reduced features corresponding to the data units whose first weight is smaller than the preset weight threshold to obtain the fused feature, and further determine the target feature corresponding to the target data according to the fused feature and the unfused dimension-reduced features.
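A sketch of this dimension reduction under the shapes stated above (the random matrix stands in for the learned linear network layer, and k = 2 is an arbitrary choice):

import numpy as np

rng = np.random.default_rng(3)
n, d, k = 8, 16, 2
W_map = rng.normal(size=(k, n)) / np.sqrt(n)   # preset k x n linear mapping, k < n
feats = rng.normal(size=(n, d))                # n x d data features
reduced = W_map @ feats                        # k x d dimension-reduced features
assert reduced.shape == (k, d)                 # k = 1 yields one d-dim linear feature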
After determining the target feature corresponding to the target data, the server may execute the task based on the target feature.
For example, when the service scenario is intelligent customer service, the target data may be query text entered by the user. The server may determine the text features of the query text through layers L to L+n of the service model and then, based on those text features, generate a reply or determine the information the user wants to query and feed it back to the user.
For another example, when the service scenario is target detection, the target data may be an image to be detected captured by a sensor. The server may determine the image features of the image through layers L to L+n of the service model and then, based on those image features, determine information about the target objects contained in the image (such as their class, position and size).
It should be noted that the service model in this specification may also be provided with a corresponding output layer, so that the server can feed the target features output by the data processing network layers into the output layer to produce the final task execution result. Of course, the service model may also be used only to output the target features, after which the server may input the target features into another task model to perform subsequent tasks.
In addition, before using the service model, the server may train it. The server may obtain historical service data and input it into the service model to be trained, so that the model determines each data unit contained in the historical service data; for each data unit in the historical service data, the server may determine the historical data feature corresponding to the data unit and a first weight corresponding to the data unit according to the degree of association between the data unit and each data unit in the historical service data, where the first weight characterizes the importance of the data unit relative to the historical service data.
The server may fuse the historical data features corresponding to the historical data units whose first weight is smaller than the preset weight threshold to obtain the historical fused features; determine the historical target features corresponding to the historical service data according to the historical fused features and the unfused historical data features; and then execute the task according to the historical target features to obtain a prediction result.
The server may train the service model with the objective of minimizing the deviation between the prediction result and the actual task execution result corresponding to the historical service data.
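As a minimal, hedged sketch of this training objective (the linear task head, the synthetic stand-in features and labels, and the squared-error loss are all assumptions; the patent only requires minimizing the deviation between the prediction and the recorded result):

import numpy as np

rng = np.random.default_rng(4)
X_hist = rng.normal(size=(100, 16))    # stand-in historical target features
y_hist = X_hist @ rng.normal(size=16)  # stand-in recorded task execution results
W = np.zeros(16)                       # trainable parameters of a linear task head

for epoch in range(200):               # gradient descent on the squared deviation
    pred = X_hist @ W                  # prediction result from historical features
    grad = X_hist.T @ (pred - y_hist) / len(X_hist)
    W -= 0.1 * grad                    # step toward minimizing the deviation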
Through the above method, to address the computational overhead of the Transformer network, a scheme is provided that applies the attention matrix within each layer to fuse features, shortening the sequence the next layer must compute while effectively preserving semantic information; on the premise of guaranteed semantic analysis performance, the computational overhead of the network is effectively reduced, contributing to green computing.
The above is the task execution method provided by one or more embodiments of the specification. Based on the same idea, the specification further provides a corresponding task execution device, as shown in fig. 3.
Fig. 3 is a schematic diagram of a task execution device provided in the present specification, including:
a receiving module 300, configured to receive a task execution request for target data;
The input module 302 is configured to input the target data into a preset service model according to the task execution request, so that the service model determines each data unit contained in the target data and, for each data unit, determines the data feature corresponding to the data unit and a first weight corresponding to the data unit according to the degree of association between the data unit and each data unit in the target data, where the first weight is used to characterize the importance of the data unit relative to the target data;
the fusion module 304 is configured to fuse the data features corresponding to the data units whose first weight is smaller than a preset weight threshold to obtain fused features;
and the execution module 306 is configured to determine the target feature corresponding to the target data according to the fused features and the unfused data features, so as to execute the task according to the target feature.
Optionally, the input module 302 is specifically configured to determine, according to the degree of association, a second weight of each data unit relative to the data unit; and determine the first weight according to the second weights, and determine the data feature corresponding to the data unit according to the second weights and the initial features corresponding to the data units.
Optionally, the fusion module 304 is specifically configured to, for each data unit, perform dimension reduction on the data feature corresponding to the data unit through a linear network layer preset in the service model to obtain the dimension-reduced feature corresponding to the data unit; and fuse the dimension-reduced features corresponding to the data units whose first weights are smaller than the preset weight threshold to obtain the fused features.
Optionally, the fusion module 304 is specifically configured to determine the number of groups according to the precision of the network parameters in the service model; and group the data features corresponding to the data units whose first weight is smaller than the preset weight threshold according to the number of groups, and, for each group, fuse the data features within the group to obtain the fused feature corresponding to the group.
Optionally, the execution module 306 is specifically configured to, for each data processing network layer included in the service model, input the fused features and the unfused data features output by the previous data processing network layer into the data processing network layer as the input features corresponding to that layer, so that, for each input feature, the data processing network layer determines the data feature it outputs for the input feature and the first weight corresponding to the input feature according to the degree of association between the input feature and the other input features; and fuse the data features output by the data processing network layer whose first weight is smaller than the preset weight threshold to obtain the fused features output by that layer, and input the data features and fused features output by that layer into the next network layer for processing, until the feature data output by the last data processing network layer is obtained and taken as the target feature.
Optionally, the target data includes: text data or image data.
The present specification also provides a computer readable storage medium storing a computer program operable to perform a task execution method as provided in fig. 1 above.
The specification also provides a schematic structural diagram, shown in fig. 4, of an electronic device corresponding to fig. 1. As shown in fig. 4, at the hardware level the electronic device includes a processor, an internal bus, a network interface, memory and non-volatile storage, and of course may also include hardware required by other services. The processor reads the corresponding computer program from the non-volatile storage into memory and runs it to implement the task execution method described in fig. 1. Of course, besides software implementations, this specification does not exclude other implementations, such as logic devices or combinations of software and hardware; that is, the execution body of the processing flow is not limited to logic units and may also be hardware or logic devices.
In the 1990s, an improvement to a technology could be clearly distinguished as an improvement in hardware (for example, an improvement to a circuit structure such as a diode, transistor or switch) or an improvement in software (an improvement to a method flow). With the development of technology, however, improvements to many of today's method flows can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement of a method flow cannot be realized by a hardware entity module. For example, a programmable logic device (PLD), such as a field programmable gate array (FPGA), is an integrated circuit whose logic functions are determined by the user's programming of the device. A designer "integrates" a digital system onto a single PLD by programming, without asking a chip manufacturer to design and fabricate a dedicated integrated circuit chip. Moreover, instead of manually making integrated circuit chips, such programming is nowadays mostly implemented with "logic compiler" software, which is similar to the software compilers used in program development; the original code to be compiled must be written in a specific programming language called a hardware description language (HDL), of which there is not just one but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM and RHDL (Ruby Hardware Description Language), among which VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are the most commonly used at present. It should also be clear to those skilled in the art that a hardware circuit implementing a given logic method flow can easily be obtained merely by slightly logically programming the method flow into an integrated circuit using one of the above hardware description languages.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (such as software or firmware) executable by that (micro)processor, logic gates, switches, an application-specific integrated circuit (ASIC), a programmable logic controller or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20 and Silicon Labs C8051F320. A memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art also know that, besides implementing a controller purely as computer-readable program code, the method steps can be logically programmed so that the controller implements the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Such a controller can therefore be regarded as a hardware component, and the means included in it for implementing various functions can also be regarded as structures within the hardware component; or the means for implementing various functions can even be regarded both as software modules implementing the method and as structures within the hardware component.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. One typical implementation is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being functionally divided into various units, respectively. Of course, the functions of each element may be implemented in one or more software and/or hardware elements when implemented in the present specification.
It will be appreciated by those skilled in the art that embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, the present specification may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present description can take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present description is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the specification. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random Access Memory (RAM) and/or nonvolatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of computer-readable media.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises", "comprising" or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article or apparatus that comprises a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article or apparatus that comprises the element.
The specification may be described in the general context of computer-executable instructions executed by a computer, such as program modules. Generally, program modules include routines, programs, objects, components, data structures and the like that perform particular tasks or implement particular abstract data types. The specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media, including memory storage devices.
The embodiments in this specification are described in a progressive manner; for identical or similar parts between the embodiments, reference may be made to one another, and each embodiment focuses on its differences from the others. In particular, the system embodiments are described relatively simply since they are substantially similar to the method embodiments; for relevant parts, refer to the description of the method embodiments.
The foregoing is merely exemplary of the present disclosure and is not intended to limit the disclosure. Various modifications and alterations to this specification will become apparent to those skilled in the art. Any modifications, equivalent substitutions, improvements, or the like, which are within the spirit and principles of the present description, are intended to be included within the scope of the claims of the present description.

Claims (13)

1. A method of task execution, comprising:
receiving a task execution request for target data;
inputting the target data into a preset service model according to the task execution request, so that the service model determines each data unit contained in the target data and, for each data unit, determines the data feature corresponding to the data unit and a first weight corresponding to the data unit according to the degree of association between the data unit and each data unit in the target data, wherein the first weight is used to characterize the importance of the data unit relative to the target data;
fusing the data features corresponding to the data units whose first weight is smaller than a preset weight threshold to obtain fused features;
and determining the target feature corresponding to the target data according to the fused features and the unfused data features, so as to execute the task according to the target feature.
2. The method of claim 1, wherein determining, for each data unit, the data feature corresponding to the data unit and the first weight corresponding to the data unit according to the degree of association between the data unit and each data unit in the target data specifically comprises:
determining a second weight of each data unit relative to the data unit according to the degree of association;
and determining the first weight according to the second weights, and determining the data feature corresponding to the data unit according to the second weights and the initial features corresponding to the data units.
3. The method of claim 1, wherein fusing the data features corresponding to the data units whose first weight is smaller than the preset weight threshold to obtain the fused features specifically comprises:
for each data unit, performing dimension reduction on the data feature corresponding to the data unit through a linear network layer preset in the service model to obtain the dimension-reduced feature corresponding to the data unit;
and fusing the dimension-reduced features corresponding to the data units whose first weights are smaller than the preset weight threshold to obtain the fused features.
4. The method of claim 1, wherein fusing the data features corresponding to the data units whose first weight is smaller than the preset weight threshold to obtain the fused features specifically comprises:
determining the number of groups according to the precision of the network parameters in the service model;
and grouping the data features corresponding to the data units whose first weight is smaller than the preset weight threshold according to the number of groups, and, for each group, fusing the data features within the group to obtain the fused feature corresponding to the group.
5. The method of claim 1, wherein determining the target feature corresponding to the target data according to the fused feature and each data feature that has not been fused specifically comprises:
for each data processing network layer contained in the service model, inputting the fused feature and the unfused data features output by the previous data processing network layer into the data processing network layer as the input features corresponding to the data processing network layer, so that the data processing network layer determines, for each input feature, the data feature it outputs for the input feature and the first weight corresponding to the input feature according to the degree of association between the input feature and each input feature; and
fusing all data features output by the data processing network layer whose first weights are smaller than the preset weight threshold to obtain the fused feature output by the data processing network layer, and inputting the fused feature and the remaining data features output by the data processing network layer into the next network layer for processing, until the feature data output by the last data processing network layer is obtained, the feature data output by the last data processing network layer being taken as the target feature.
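Claim 5 repeats the score-then-fuse step at every data processing network layer, so the feature sequence can shrink as it moves through the model. A sketch assuming each layer is a callable that returns per-input data features and first weights (an assumed interface, matching the sketches above):

```python
import torch

def forward_through_layers(layers, feats: torch.Tensor, threshold: float):
    """layers: data processing network layers; feats: [N, d] input features."""
    for layer in layers:
        # The layer scores each input feature against all input features and
        # emits one output data feature plus one first weight per input.
        out_feats, first_w = layer(feats)
        low = first_w < threshold
        if low.any():
            # Fuse this layer's low-weight outputs, then hand the fused
            # feature plus the unfused features to the next layer.
            fused = out_feats[low].mean(dim=0, keepdim=True)
            feats = torch.cat([fused, out_feats[~low]], dim=0)
        else:
            feats = out_feats
    return feats  # output of the last layer is taken as the target feature
```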
6. The method of any one of claims 1-5, wherein the target data comprises text data or image data.
7. A task execution device, comprising:
a receiving module, configured to receive a task execution request for target data;
an input module, configured to input the target data into a preset service model according to the task execution request, so that the service model determines each data unit contained in the target data and, for each data unit, determines a data feature corresponding to the data unit and a first weight corresponding to the data unit according to a degree of association between the data unit and each data unit in the target data, wherein the first weight represents the degree of importance of the data unit relative to the target data;
a fusion module, configured to fuse the data features corresponding to the data units whose first weights are smaller than a preset weight threshold, to obtain a fused feature; and
an execution module, configured to determine a target feature corresponding to the target data according to the fused feature and each data feature that has not been fused, so as to execute a task according to the target feature.
8. The device of claim 7, wherein the input module is specifically configured to determine a second weight of each data unit relative to the data unit according to the degree of association, determine the first weight according to the second weights, and determine the data feature corresponding to the data unit according to the second weights and the initial features of the data units.
9. The device of claim 7, wherein the fusion module is specifically configured to, for each data unit, perform dimension reduction on the data feature corresponding to the data unit through a linear network layer preset in the service model to obtain a dimension-reduced feature corresponding to the data unit, and to fuse the dimension-reduced features corresponding to the data units whose first weights are smaller than the preset weight threshold to obtain the fused feature.
10. The device of claim 7, wherein the fusion module is specifically configured to determine a number of groups according to the precision of network parameters in the service model, group, according to the number of groups, the data features corresponding to the data units whose first weights are smaller than the preset weight threshold, and, for each group, fuse the data features within the group to obtain a fused feature corresponding to the group.
11. The device of claim 7, wherein the execution module is specifically configured to, for each data processing network layer contained in the service model, input the fused feature and the unfused data features output by the previous data processing network layer into the data processing network layer as the input features corresponding to the data processing network layer, so that the data processing network layer determines, for each input feature, the data feature it outputs for the input feature and the first weight corresponding to the input feature according to the degree of association between the input feature and each input feature; and to fuse all data features output by the data processing network layer whose first weights are smaller than the preset weight threshold to obtain the fused feature output by the data processing network layer, and input the fused feature and the remaining data features output by the data processing network layer into the next network layer for processing, until the feature data output by the last data processing network layer is obtained, the feature data output by the last data processing network layer being taken as the target feature.
12. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the method of any one of claims 1-6.
13. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the method of any one of claims 1-6.
CN202410077918.XA 2024-01-18 2024-01-18 Task execution method and device, storage medium and electronic equipment Pending CN117931400A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410077918.XA CN117931400A (en) 2024-01-18 2024-01-18 Task execution method and device, storage medium and electronic equipment


Publications (1)

Publication Number Publication Date
CN117931400A 2024-04-26

Family

ID=90756777

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410077918.XA Pending CN117931400A (en) 2024-01-18 2024-01-18 Task execution method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN117931400A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 2024-09-20
Address after: Room 803, floor 8, No. 618 Wai Road, Huangpu District, Shanghai 200010
Applicant after: Ant Blockchain Technology (Shanghai) Co., Ltd.
Country or region after: China
Address before: 310000 801-11, Section B, 8th floor, 556 Xixi Road, Xihu District, Hangzhou City, Zhejiang Province
Applicant before: Alipay (Hangzhou) Information Technology Co., Ltd.
Country or region before: China