CN115878330B - Thread operation control method and system - Google Patents

Thread operation control method and system

Info

Publication number: CN115878330B
Application number: CN202310077120.0A
Authority: CN (China)
Prior art keywords: semantic feature, thread, matrix, thread description, feature matrix
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN115878330A
Inventors: 尹俊文, 廖海宁, 尹鹏
Current assignee: Tengyun Chuangwei Information Technology Weihai Co ltd
Original assignee: Tengyun Chuangwei Information Technology Weihai Co ltd
Application filed by Tengyun Chuangwei Information Technology Weihai Co ltd
Priority to CN202310077120.0A
Publication of application: CN115878330A
Application granted; publication of grant: CN115878330B

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Programmable Controllers (AREA)
  • Machine Translation (AREA)

Abstract

The application relates to the technical field of thread operation control, and in particular discloses a thread operation control method and a thread operation control system. The method first obtains a description of each thread to be assigned, where the description of a thread to be assigned is the scheduling context matched with the sporadic task parameters related to that thread. Deep learning techniques are then used to mine the semantic feature information within the thread descriptions and the feature distribution information among the threads, yielding a topological global thread description semantic feature matrix. Each row vector of this matrix is passed through a classifier to obtain a plurality of probability values, and the priority of each thread to be assigned is finally determined based on the ordering of these probability values. In this way, thread priorities are allocated reasonably and adaptively based on the feature distribution among threads, so that the assigned threads are adapted to the tasks to be processed, improving processing efficiency and effect.

Description

Thread operation control method and system
Technical Field
The present disclosure relates to the technical field of thread operation control, and more particularly, to a thread operation control method and a system thereof.
Background
Assigning a priority to a thread is a user-level policy. One approach is simply to use rate-monotonic scheduling, where priorities are assigned to threads according to their periodicity, and threads use scheduling contexts that match their sporadic task parameters. Each thread in the system will be isolated in time because the kernel does not allow it to exceed the processing time reservation represented by the scheduling context.
However, the system may offer more options than simple rate-monotonic fixed-priority scheduling, in keeping with a minimal, policy-free design principle. A reservation is only a potential right to processing time at a particular priority; in effect it represents an upper bound on the processing time of a particular thread. If higher-priority reservations use all available CPU time, low-priority threads are not guaranteed to run. Threads with lower-priority reservations will, however, run within the system slack, which arises when other threads do not use their full reservations. Thus a high priority range needs to be reserved for rate-monotonic threads, while best-effort threads and rate-limited threads run at lower priorities. In actual operation, however, a fixed priority allocation scheme has difficulty achieving its intended effect because the tasks to be processed differ, which leads to low thread running speed and insufficient margin time.
Accordingly, an optimized thread run control scheme is desired.
Disclosure of Invention
The present application has been made in order to solve the above technical problems. The embodiment of the application provides a thread running control method and a thread running control system, which are characterized in that firstly, descriptions of various threads to be assigned are obtained, and the descriptions of the threads to be assigned are scheduling contexts matched with sporadic task parameters related to the threads to be assigned; then, semantic feature information in the description of the threads and feature distribution information among the threads are mined through a deep learning technology to obtain a topological global thread description semantic feature matrix; and then, each row vector in the topological global thread description semantic feature matrix is respectively passed through a classifier to obtain a plurality of probability values, and finally, the priority of each thread to be assigned is determined based on the ordering of the plurality of probability values, so that the reasonable allocation of the thread priority is adaptively carried out based on the feature distribution among threads, the allocated threads are adapted to the task to be processed, and the processing efficiency and effect are improved.
According to one aspect of the present application, there is provided a thread running control method, including: acquiring descriptions of various threads to be assigned, wherein the descriptions of the threads to be assigned are scheduling contexts matched with sporadic task parameters related to the threads to be assigned; the description of each thread to be assigned is respectively passed through a context encoder comprising an embedded layer to obtain a plurality of thread description semantic feature vectors; calculating Euclidean distance between every two thread description semantic feature vectors in the thread description semantic feature vectors to obtain a distance topology matrix; the distance topological matrix is passed through a convolutional neural network model serving as a feature extractor to obtain a distance topological feature matrix; two-dimensional arrangement is carried out on the thread description semantic feature vectors to obtain a global thread description semantic feature matrix; the global thread description semantic feature matrix and the distance topological feature matrix are subjected to a graph neural network model to obtain a topological global thread description semantic feature matrix; based on the global thread description semantic feature matrix, carrying out small-scale feature association expression reinforcement on the topological global thread description semantic feature matrix to obtain an optimized topological global thread description semantic feature matrix; each row vector in the optimized topology global thread description semantic feature matrix is respectively passed through a classifier to obtain a plurality of probability values; and determining a priority of the respective thread to be assigned based on the ordering of the plurality of probability values.
In the above thread operation control method, passing the description of each thread to be assigned through a context encoder including an embedded layer to obtain a plurality of thread description semantic feature vectors includes: converting the descriptions of the threads to be assigned into embedded vectors by using the embedded layer of the context encoder to obtain sequences of embedded vectors corresponding to the descriptions of the respective threads to be assigned; performing global context semantic coding on the sequences of embedded vectors corresponding to the descriptions of the respective threads to be assigned by using a transformer-based Bert model of the context encoder to obtain a plurality of feature vectors corresponding to the descriptions of the respective threads to be assigned; and cascading the plurality of feature vectors corresponding to the descriptions of the threads to be assigned to obtain the plurality of thread description semantic feature vectors.
In the above thread operation control method, the calculating the euclidean distance between each two thread description semantic feature vectors in the plurality of thread description semantic feature vectors to obtain a distance topology matrix includes: calculating Euclidean distances between every two thread description semantic feature vectors in the thread description semantic feature vectors to obtain a plurality of Euclidean distances according to the following formula;
$$d(V_a, V_b)=\sqrt{\sum_{i}\left(v_{a,i}-v_{b,i}\right)^{2}}$$

wherein $V_a$ and $V_b$ respectively represent any two thread description semantic feature vectors of the plurality of thread description semantic feature vectors, $d(V_a, V_b)$ represents the Euclidean distance between the two thread description semantic feature vectors, and $v_{a,i}$ and $v_{b,i}$ respectively represent the feature values at each position $i$ of the two thread description semantic feature vectors.
In the above thread operation control method, the step of passing the distance topology matrix through a convolutional neural network model as a feature extractor to obtain a distance topology feature matrix includes: further used for: each layer of the convolutional neural network model performs the following steps on input data in forward transfer of the layer: using convolution units of all layers of the convolution neural network model to carry out convolution processing on the input data based on a two-dimensional convolution kernel so as to obtain a convolution characteristic diagram; using pooling units of each layer of the convolutional neural network model to perform pooling processing along a channel dimension on the convolutional feature map so as to obtain a pooled feature map; using an activation unit of each layer of the convolutional neural network model to perform nonlinear activation on the characteristic values of each position in the pooled characteristic map so as to obtain an activated characteristic map; the input of the first layer of the convolutional neural network model is the distance topological matrix, and the output of the last layer of the convolutional neural network model is the distance topological feature matrix.
In the above thread operation control method, the step of obtaining the topological global thread description semantic feature matrix by passing the global thread description semantic feature matrix and the distance topological feature matrix through a graph neural network model includes: the graph neural network processes the global thread description semantic feature matrix and the distance topology feature matrix through the learnable neural network parameters to obtain the topology global thread description semantic feature matrix containing Euclidean distance topology features and semantic understanding feature information of each thread description.
In the above thread operation control method, the performing small scale feature association expression reinforcement on the topological global thread description semantic feature matrix based on the global thread description semantic feature matrix to obtain an optimized topological global thread description semantic feature matrix includes: calculating a small-scale local derivative matrix between the topological global thread description semantic feature matrix and the global thread description semantic feature matrix; and multiplying the topological global thread description semantic feature matrix by a position point by taking the small-scale local derivative matrix as a weighted feature matrix to obtain the optimized topological global thread description semantic feature matrix.
In the above thread operation control method, the calculating the small-scale local derivative matrix between the topological global thread description semantic feature matrix and the global thread description semantic feature matrix includes:
calculating the small-scale local derivative matrix between the topological global thread description semantic feature matrix and the global thread description semantic feature matrix according to the following formula; wherein, the formula is:
Figure SMS_7
wherein
Figure SMS_8
、/>
Figure SMS_9
and />
Figure SMS_10
The topological global thread description semantic feature matrix, the global thread description semantic feature matrix and the small-scale local derivative matrix are respectively +.>
Figure SMS_11
Characteristic values of the location.
In the above thread operation control method, the step of passing each row vector in the optimized topology global thread description semantic feature matrix through a classifier to obtain a plurality of probability values includes: processing each row vector in the optimized topology global thread description semantic feature matrix by using the classifier in the following formula to obtain the plurality of probability values;
wherein, the formula is:
$$p=\mathrm{softmax}\left\{W_{n}\cdots\left(W_{1}X+b_{1}\right)\cdots+b_{n}\right\}$$

wherein $X$ represents each row vector in the optimized topological global thread description semantic feature matrix, $W_1$ to $W_n$ represent the weight matrices of the fully connected layers of the classifier, $b_1$ to $b_n$ represent the bias vectors of the fully connected layers of the classifier, and $p$ represents each probability value of the plurality of probability values.
According to another aspect of the present application, there is provided a thread running control system, including: the description acquisition module is used for acquiring descriptions of various threads to be assigned, wherein the descriptions of the threads to be assigned are scheduling contexts matched with sporadic task parameters related to the threads to be assigned; the context coding module is used for respectively enabling the descriptions of the threads to be assigned to pass through a context coder comprising an embedded layer to obtain a plurality of thread description semantic feature vectors; the Euclidean distance calculation module is used for calculating Euclidean distances between every two thread description semantic feature vectors in the thread description semantic feature vectors to obtain a distance topology matrix; the convolution coding module is used for enabling the distance topological matrix to pass through a convolution neural network model serving as a feature extractor to obtain a distance topological feature matrix; the two-dimensional arrangement module is used for two-dimensionally arranging the plurality of thread description semantic feature vectors to obtain a global thread description semantic feature matrix; the graph neural coding module is used for enabling the global thread description semantic feature matrix and the distance topological feature matrix to pass through a graph neural network model to obtain a topological global thread description semantic feature matrix; the matrix optimization module is used for carrying out small-scale feature association expression reinforcement on the topological global thread description semantic feature matrix based on the global thread description semantic feature matrix so as to obtain an optimized topological global thread description semantic feature matrix; the probability value acquisition module is used for respectively passing each row vector in the optimized topology global thread description semantic feature matrix through a classifier to obtain a plurality of probability values; and a priority determining module for determining the priority of each thread to be assigned based on the ordering of the plurality of probability values.
In the above thread operation control system, the matrix optimization module includes: the small-scale local derivative matrix acquisition unit is used for calculating a small-scale local derivative matrix between the topological global thread description semantic feature matrix and the global thread description semantic feature matrix; and the dot multiplying unit is used for multiplying the topological global thread description semantic feature matrix by position points by taking the small-scale local derivative matrix as a weighted feature matrix to obtain the optimized topological global thread description semantic feature matrix.
Compared with the prior art, the thread operation control method and the thread operation control system provided by the application firstly acquire the description of each thread to be assigned, wherein the description of the thread to be assigned is a scheduling context matched with sporadic task parameters related to the thread to be assigned; then, semantic feature information in the description of the threads and feature distribution information among the threads are mined through a deep learning technology to obtain a topological global thread description semantic feature matrix; and then, each row vector in the topological global thread description semantic feature matrix is respectively passed through a classifier to obtain a plurality of probability values, and finally, the priority of each thread to be assigned is determined based on the ordering of the plurality of probability values, so that the reasonable allocation of the thread priority is adaptively carried out based on the feature distribution among threads, the allocated threads are adapted to the task to be processed, and the processing efficiency and effect are improved.
Drawings
The foregoing and other objects, features and advantages of the present application will become more apparent from the following more particular description of embodiments of the present application, as illustrated in the accompanying drawings. The accompanying drawings are included to provide a further understanding of embodiments of the application and are incorporated in and constitute a part of this specification, illustrate the application and not constitute a limitation to the application. In the drawings, like reference numerals generally refer to like parts or steps.
FIG. 1 is a flowchart of a thread operation control method according to an embodiment of the present application.
FIG. 2 is a schematic diagram of a model architecture of a thread operation control method according to an embodiment of the present application.
Fig. 3 is a flowchart of passing the descriptions of the threads to be assigned through a context encoder including an embedded layer to obtain a plurality of thread description semantic feature vectors in a thread operation control method according to an embodiment of the present application.
Fig. 4 is a flowchart of performing small-scale feature association expression reinforcement on the topological global thread description semantic feature matrix to obtain an optimized topological global thread description semantic feature matrix in a thread operation control method according to an embodiment of the present application.
FIG. 5 is a block diagram of a thread operation control system according to an embodiment of the present application.
Fig. 6 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Hereinafter, example embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application and not all of the embodiments of the present application, and it should be understood that the present application is not limited by the example embodiments described herein.
Scene overview: as described above, the system may offer more options than simple rate-monotonic fixed-priority scheduling, in keeping with a minimal, policy-free design principle. A reservation is only a potential right to processing time at a particular priority; in effect it represents an upper bound on the processing time of a particular thread. If higher-priority reservations use all available CPU time, low-priority threads are not guaranteed to run. Threads with lower-priority reservations will, however, run within the system slack, which arises when other threads do not use their full reservations. Thus a high priority range needs to be reserved for rate-monotonic threads, while best-effort threads and rate-limited threads run at lower priorities. In actual operation, however, a fixed priority allocation scheme has difficulty achieving its intended effect because the tasks to be processed differ, which leads to low thread running speed and insufficient margin time. Accordingly, an optimized thread run control scheme is desired.
Accordingly, when tasks are processed by threads with fixed priorities, the differences among the tasks actually being run lead to low thread running speed, insufficient margin time, and difficulty in achieving a good processing effect. In the technical solution of the present application, it is therefore desirable to adaptively assign priorities to the respective threads based on the feature distribution among the threads, so that the assigned threads are adapted to the tasks to be processed. Extracting the inter-thread feature distribution information requires a sufficient and accurate semantic understanding of each thread's description; here, a thread's description is the scheduling context matched with the sporadic task parameters associated with that thread. However, because this information exists as semantic text in the thread descriptions, the useful information is difficult to acquire, which makes the extraction of the inter-thread feature distribution information difficult. Therefore, in practical applications, the difficulty lies in how to mine the semantic feature information in the thread descriptions and the feature distribution information among the threads, so as to allocate thread priorities reasonably, adapt the assigned threads to the tasks to be processed, and improve processing efficiency and effect.
In recent years, deep learning and neural networks have been widely used in the fields of computer vision, natural language processing, text signal processing, and the like. In addition, deep learning and neural networks have also shown levels approaching and even exceeding humans in the fields of image classification, object detection, semantic segmentation, text translation, and the like.
Deep learning and the development of neural networks provide new solutions and schemes for mining semantic feature information in the descriptions of threads and feature distribution information among the threads.
Specifically, in the technical scheme of the application, first, descriptions of each thread to be assigned are obtained, and the descriptions of the threads to be assigned are scheduling contexts matched with sporadic task parameters related to the threads to be assigned. Then, considering that the descriptions of the threads to be assigned are composed of a plurality of words and have semantic relevance of context, in order to accurately perform semantic understanding of the descriptions of the threads to be assigned to more accurately perform thread priority allocation, in the technical scheme of the application, the descriptions of the threads to be assigned are further encoded in context encoders containing embedded layers respectively so as to extract global context-based high-dimensional semantic feature information of the descriptions of the threads to be assigned respectively, thereby obtaining a plurality of thread description semantic feature vectors.
Next, for the semantic understanding feature information of the descriptions of the threads to be assigned, in order to mine feature distribution among the threads to be assigned to determine priority, in the technical scheme of the application, the euclidean distance between every two thread description semantic feature vectors in the plurality of thread description semantic feature vectors is further calculated, so that the spatial topology distribution information among the context semantic understanding features of the thread descriptions is represented, and a distance topology matrix is obtained. And then, further carrying out feature mining on the obtained distance topology matrix in a convolutional neural network model serving as a feature extractor so as to extract spatial topology association features among semantic features described by each thread to obtain a distance topology feature matrix.
Further, each thread description semantic feature vector in the thread description semantic feature vectors is used as a feature representation of a node, the distance topology feature matrix is used as a feature representation of an edge between the nodes, and the global thread description semantic feature matrix obtained by two-dimensional arrangement of the thread description semantic feature vectors and the distance topology feature matrix pass through a graph neural network to obtain a topology global thread description semantic feature matrix. Specifically, the graph neural network performs graph structure data coding on the global thread description semantic feature matrix and the distance topological feature matrix through a learnable neural network parameter to obtain the topological global thread description semantic feature matrix containing irregular distance topological features and semantic understanding feature information of each thread description.
And then, each row vector in the topological global thread description semantic feature matrix passes through a classifier respectively to obtain a plurality of probability values. That is, each row vector in the topological global thread description semantic feature matrix is used as a classification feature vector to be subjected to classification processing in a classifier so as to obtain a probability value for representing the description of each thread to be assigned, and the priority of each thread to be assigned is determined based on the ordering of the plurality of probability values. Therefore, reasonable distribution of thread priorities can be adaptively carried out based on the characteristic distribution among threads, so that the distributed threads are adapted to tasks to be processed, and processing efficiency and processing effect are improved.
In particular, in the technical solution of the present application, the topological global thread description semantic feature matrix is obtained by passing the global thread description semantic feature matrix and the distance topological feature matrix through a graph neural network model, so that it expresses the association of the context semantic features of the descriptions of the threads to be assigned under the semantic-similarity topology of those threads. However, since each global thread description semantic feature vector of the global thread description semantic feature matrix is a small-scale context semantic coding representation of the description of a thread to be assigned, it is still desirable to promote the small-scale feature association expression of the topological global thread description semantic feature matrix relative to the global thread description semantic feature matrix, so as to promote the expression effect of the topological global thread description semantic feature matrix on the small-scale context coding semantics of the description of each thread to be assigned.
Thus, a small-scale local derivative matrix is computed between the topological global thread description semantic feature matrix, for example denoted $M_1$, and the global thread description semantic feature matrix, for example denoted $M_2$, and is used as a weighted feature matrix. Position by position, the feature value $m^{c}_{(i,j)}$ of the small-scale local derivative matrix $M_c$ is obtained from the feature values $m^{1}_{(i,j)}$ and $m^{2}_{(i,j)}$ of $M_1$ and $M_2$ at the same position $(i,j)$.
Here, by computing the small-scale local derivative features between the topological global thread description semantic feature matrix $M_1$ and the global thread description semantic feature matrix $M_2$, the physical property of mutual expression between data sequences can be imitated on the basis of the geometric approximation of their corresponding positions, so that the local nonlinear dependence of cross-feature-domain positions is enhanced by position-by-position regression between the feature matrices. Thus, by using the small-scale local derivative matrix $M_c$ as a weighting matrix and weighting the feature values of the topological global thread description semantic feature matrix $M_1$ by position-wise point multiplication, the expression effect of $M_1$ on the small-scale context coding semantics of the description of each thread to be assigned can be improved, thereby improving the accuracy of the classification results obtained by passing its row vectors through the classifier. In this way, thread priorities can be allocated reasonably and adaptively based on the feature distribution among threads, so that the assigned threads are adapted to the tasks to be processed, improving processing efficiency and processing effect.
Having described the basic principles of the present application, various non-limiting embodiments of the present application will now be described in detail with reference to the accompanying drawings.
An exemplary method is: FIG. 1 is a flow chart of a method of controlling the operation of threads according to an embodiment of the present application. As shown in fig. 1, the method for controlling the running of the thread according to the embodiment of the application includes: s110, acquiring descriptions of each thread to be assigned, wherein the descriptions of the threads to be assigned are scheduling contexts matched with sporadic task parameters related to the threads to be assigned; s120, the descriptions of the threads to be assigned are respectively passed through a context encoder comprising an embedded layer to obtain a plurality of thread description semantic feature vectors; s130, calculating Euclidean distance between every two thread description semantic feature vectors in the thread description semantic feature vectors to obtain a distance topology matrix; s140, the distance topological matrix is passed through a convolutional neural network model serving as a feature extractor to obtain a distance topological feature matrix; s150, two-dimensionally arranging the thread description semantic feature vectors to obtain a global thread description semantic feature matrix; s160, passing the global thread description semantic feature matrix and the distance topological feature matrix through a graph neural network model to obtain a topological global thread description semantic feature matrix; s170, carrying out small-scale feature association expression reinforcement on the topological global thread description semantic feature matrix based on the global thread description semantic feature matrix to obtain an optimized topological global thread description semantic feature matrix; s180, each row vector in the optimized topological global thread description semantic feature matrix passes through a classifier to obtain a plurality of probability values; and S190, determining the priority of each thread to be assigned based on the ordering of the plurality of probability values.
FIG. 2 is a schematic diagram of a model architecture of a thread run control method according to an embodiment of the present application. As shown in fig. 2, in the method for controlling the running of the threads in the embodiment of the present application, first, descriptions of each thread to be assigned are obtained, and the descriptions of each thread to be assigned are respectively passed through a context encoder including an embedded layer to obtain a plurality of thread description semantic feature vectors. And then, calculating Euclidean distance between every two thread description semantic feature vectors in the thread description semantic feature vectors to obtain a distance topology matrix, and passing the distance topology matrix through a convolutional neural network model serving as a feature extractor to obtain the distance topology feature matrix. And simultaneously, carrying out two-dimensional arrangement on the thread description semantic feature vectors to obtain a global thread description semantic feature matrix. And then, the global thread description semantic feature matrix and the distance topological feature matrix are processed through a graph neural network model to obtain a topological global thread description semantic feature matrix. And then, based on the global thread description semantic feature matrix, carrying out small-scale feature association expression reinforcement on the topological global thread description semantic feature matrix to obtain an optimized topological global thread description semantic feature matrix. And finally, respectively passing each row vector in the optimized topological global thread description semantic feature matrix through a classifier to obtain a plurality of probability values, and determining the priority of each thread to be assigned based on the ordering of the plurality of probability values.
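For illustration only, the following PyTorch-style sketch (not part of the patent disclosure) outlines how steps S110 to S190 of Fig. 1 and Fig. 2 could be chained together. The function name thread_priority_pipeline and the injected components (context_encoder, distance_cnn, graph_encoder, reinforce, classifier) are hypothetical placeholders, and torch.cdist is used as one possible way to realize the pairwise Euclidean distance computation.

```python
import torch

def thread_priority_pipeline(descriptions, context_encoder, distance_cnn,
                             graph_encoder, reinforce, classifier):
    # S120: one thread description semantic feature vector per description
    V = torch.stack([context_encoder(d) for d in descriptions])   # [N, D]; also S150: global matrix
    # S130: pairwise Euclidean distances give the distance topology matrix
    dist = torch.cdist(V, V)                                      # [N, N]
    # S140: convolutional feature extraction on the distance topology matrix
    dist_feat = distance_cnn(dist)                                # assumed to map [N, N] -> [N, N]
    # S160: graph neural network over node features V and edge features dist_feat
    M1 = graph_encoder(V, dist_feat)                              # topological matrix [N, D]
    # S170: small-scale feature association expression reinforcement
    M_opt = reinforce(M1, V)                                      # [N, D]
    # S180: one probability value per row vector of the optimized matrix
    probs = classifier(M_opt)                                     # [N]
    # S190: priorities follow the descending order of the probability values
    return torch.argsort(probs, descending=True)
```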
In step S110 of the embodiment of the present application, a description of each thread to be assigned is obtained, where the description of the thread to be assigned is a scheduling context that matches with sporadic task parameters related to the thread to be assigned. As described above, it is considered that when a task to be processed is processed by a thread of a fixed priority, the thread running rate is low due to a difference in tasks when the thread is actually running, and the margin time is insufficient, so that it is difficult to achieve a good processing effect. In the technical solution of the present application, it is therefore desirable to adaptively assign priorities to the respective threads through feature distribution among the threads so that the assigned threads are adapted to the task to be processed. Extraction of inter-thread feature distribution information requires a sufficiently and accurate semantic understanding of the thread's description, here, the thread's description is a scheduling context that matches sporadic task parameters associated with the thread. However, since the semantic information exists in the description of the threads, the useful information is difficult to acquire, which brings difficulty to the extraction of the feature distribution information among the threads. Therefore, in practical application, the difficulty lies in how to dig out the semantic feature information in the description of the threads and the feature distribution information among the threads, so as to reasonably allocate the priority of the threads, so that the allocated threads are adapted to the task to be processed, and the processing efficiency and effect are improved. Deep learning and the development of neural networks provide new solutions and schemes for mining semantic feature information in the descriptions of threads and feature distribution information among the threads.
In a specific example of the application, descriptions of each thread to be assigned when the thread runs are obtained from a system, and the descriptions of the threads to be assigned are scheduling contexts matched with sporadic task parameters related to the threads to be assigned.
In step S120 of the embodiment of the present application, the descriptions of the threads to be assigned are respectively passed through a context encoder including an embedded layer to obtain a plurality of thread description semantic feature vectors. It should be understood that, since the description of each thread to be assigned is composed of a plurality of words with contextual semantic relevance between them, in order to accurately perform semantic understanding of the descriptions of the respective threads to be assigned and thereby allocate thread priorities more accurately, in the technical solution of the present application, the descriptions of the respective threads to be assigned are further encoded by a context encoder including an embedded layer, so as to extract the global-context-based high-dimensional semantic feature information of the description of each thread to be assigned, thereby obtaining a plurality of thread description semantic feature vectors.
Fig. 3 is a flowchart of passing the descriptions of the threads to be assigned through a context encoder including an embedded layer to obtain a plurality of thread description semantic feature vectors in a thread operation control method according to an embodiment of the present application. In a specific example of the present application, passing the descriptions of the threads to be assigned through the context encoder including the embedded layer to obtain a plurality of thread description semantic feature vectors includes: S210, converting the descriptions of the threads to be assigned into embedded vectors by using the embedded layer of the context encoder to obtain sequences of embedded vectors corresponding to the descriptions of the respective threads to be assigned; S220, performing global context semantic coding on the sequences of embedded vectors corresponding to the descriptions of the respective threads to be assigned by using a transformer-based Bert model of the context encoder to obtain a plurality of feature vectors corresponding to the descriptions of the respective threads to be assigned; and S230, cascading the plurality of feature vectors corresponding to the descriptions of the threads to be assigned to obtain the plurality of thread description semantic feature vectors.
More specifically, in the embodiment of the present application, the context encoder is a transformer-based Bert model, wherein the Bert model can perform context semantic encoding of each input in the input sequence based on the global context of the input sequence through the internal mask structure of the transformer. That is, the transformer-based Bert model is able to extract a globally based feature representation of each input in the input sequence. More specifically, in the technical solution of the present application, taking the description of one thread to be assigned as an example, first, the embedded layer of the context encoder is used to convert each word of the description into an embedded vector so as to obtain a sequence of embedded vectors, where the embedded layer converts the text description into a numerical representation that can be recognized by a computer. The sequence of embedded vectors is then subjected to global context semantic encoding using the transformer-based Bert model to obtain a plurality of feature vectors. It should be appreciated that each of the plurality of feature vectors represents a deep implicit feature of one word based on the global context of the description of the thread to be assigned; that is, one feature vector corresponds to one word. The feature vectors are then cascaded to obtain one thread description semantic feature vector, i.e., the high-dimensional feature representations corresponding to the individual words are losslessly fused in the high-dimensional feature space to obtain a high-dimensional feature representation of the whole description sequence of the thread to be assigned. Here, each of the thread description semantic feature vectors corresponds to the global-context-based high-dimensional semantic feature information of the description of one thread to be assigned.
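As a purely illustrative aid, the following minimal sketch shows one way such a context encoder could be realized with the Hugging Face transformers library; the choice of the bert-base-chinese checkpoint, the maximum sequence length, and cascading the per-token vectors by flattening are assumptions made for demonstration and are not mandated by the present application.

```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")   # assumed checkpoint
bert = BertModel.from_pretrained("bert-base-chinese")

def encode_description(description: str, max_len: int = 32) -> torch.Tensor:
    """Embed the words of one thread description, context-encode them with Bert,
    then cascade (concatenate) the per-token feature vectors into one vector."""
    tokens = tokenizer(description, return_tensors="pt", padding="max_length",
                       truncation=True, max_length=max_len)
    with torch.no_grad():
        out = bert(**tokens)                   # last_hidden_state: [1, max_len, 768]
    per_token = out.last_hidden_state[0]       # [max_len, 768], one vector per token
    return per_token.flatten()                 # thread description semantic feature vector
```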
In step S130 of the embodiment of the present application, a euclidean distance between every two thread description semantic feature vectors in the plurality of thread description semantic feature vectors is calculated to obtain a distance topology matrix. It should be appreciated that given that although there is a difference between descriptions of the various threads to be assigned, it is not so great, if there are particularly urgent threads, the similarity of the descriptions of this particularly urgent thread to the descriptions of the other threads to be assigned must be much smaller than the similarity between the descriptions of the other threads to be assigned. Therefore, the spatial topology distribution information among the context semantic understanding features of the thread descriptions can be introduced to improve the accuracy of the priority ordering. Specifically, in the technical scheme of the application, for semantic understanding feature information of descriptions of each thread to be assigned, in order to mine feature distribution among the threads to be assigned to determine priority, in the technical scheme of the application, euclidean distance between every two thread description semantic feature vectors in the plurality of thread description semantic feature vectors is further calculated, so that spatial topology distribution information among context semantic understanding features of each thread description is represented, and a distance topology matrix is obtained.
In a specific example of the present application, the calculating the euclidean distance between each two thread description semantic feature vectors in the plurality of thread description semantic feature vectors to obtain a distance topology matrix includes: calculating Euclidean distances between every two thread description semantic feature vectors in the thread description semantic feature vectors to obtain a plurality of Euclidean distances according to the following formula;
$$d(V_a, V_b)=\sqrt{\sum_{i}\left(v_{a,i}-v_{b,i}\right)^{2}}$$

wherein $V_a$ and $V_b$ respectively represent any two thread description semantic feature vectors of the plurality of thread description semantic feature vectors, $d(V_a, V_b)$ represents the Euclidean distance between the two thread description semantic feature vectors, and $v_{a,i}$ and $v_{b,i}$ respectively represent the feature values at each position $i$ of the two thread description semantic feature vectors.
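By way of example only, the pairwise Euclidean distance computation defined by the formula above can be sketched as follows; the function name and tensor shapes are illustrative assumptions.

```python
import torch

def distance_topology_matrix(vectors: torch.Tensor) -> torch.Tensor:
    """vectors: [N, D], one thread description semantic feature vector per row.
    Returns the [N, N] distance topology matrix of pairwise Euclidean distances."""
    diff = vectors.unsqueeze(1) - vectors.unsqueeze(0)   # [N, N, D] pairwise differences
    return torch.sqrt((diff ** 2).sum(dim=-1))           # sqrt of summed squared differences

# usage: D = distance_topology_matrix(torch.randn(4, 768))
```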
In step S140 of the embodiment of the present application, the distance topology matrix is passed through a convolutional neural network model as a feature extractor to obtain a distance topology feature matrix. That is, feature mining is performed on the distance topology matrix using a convolutional neural network model as a feature extractor that has excellent performance in terms of implicit associated feature extraction, so as to extract associated features of each position in the distance topology matrix, that is, spatial topology distribution information between context semantic understanding features described by each thread, thereby obtaining a distance topology feature matrix.
In a specific example of the present application, the step of passing the distance topology matrix through a convolutional neural network model as a feature extractor to obtain a distance topology feature matrix includes: further used for: each layer of the convolutional neural network model performs the following steps on input data in forward transfer of the layer: using convolution units of all layers of the convolution neural network model to carry out convolution processing on the input data based on a two-dimensional convolution kernel so as to obtain a convolution characteristic diagram; using pooling units of each layer of the convolutional neural network model to perform pooling processing along a channel dimension on the convolutional feature map so as to obtain a pooled feature map; using an activation unit of each layer of the convolutional neural network model to perform nonlinear activation on the characteristic values of each position in the pooled characteristic map so as to obtain an activated characteristic map; the input of the first layer of the convolutional neural network model is the distance topological matrix, and the output of the last layer of the convolutional neural network model is the distance topological feature matrix.
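For illustration, a minimal sketch of such a feature extractor is given below; the number of layers, number of channels, kernel size, and the use of ReLU activation are assumptions chosen for demonstration rather than values specified by the present application.

```python
import torch
import torch.nn as nn

class DistanceTopologyCNN(nn.Module):
    """Per layer: 2D convolution, pooling along the channel dimension, nonlinear activation."""
    def __init__(self, num_layers: int = 3, channels: int = 8):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv2d(1, channels, kernel_size=3, padding=1) for _ in range(num_layers)]
        )
        self.act = nn.ReLU()

    def forward(self, dist: torch.Tensor) -> torch.Tensor:
        x = dist.unsqueeze(0).unsqueeze(0)        # [1, 1, N, N], distance topology matrix as input
        for conv in self.convs:
            x = conv(x)                           # convolution feature map [1, C, N, N]
            x = x.mean(dim=1, keepdim=True)       # pooling along the channel dimension
            x = self.act(x)                       # nonlinear activation
        return x[0, 0]                            # distance topology feature matrix [N, N]
```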
In step S150 of the embodiment of the present application, the plurality of thread description semantic feature vectors are two-dimensionally arranged to obtain a global thread description semantic feature matrix. It should be appreciated that the plurality of thread description semantic feature vectors represent the global-context-based high-dimensional semantic feature information of the descriptions of the respective threads to be assigned, but the priorities of the respective threads to be assigned should be ordered based on global features; therefore, the plurality of thread description semantic feature vectors are arranged two-dimensionally to obtain the global thread description semantic feature matrix, that is, the global-context-based high-dimensional semantic feature information of the descriptions of the respective threads to be assigned is losslessly fused into one feature matrix.
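As a small illustrative example, the two-dimensional arrangement amounts to stacking the vectors row-wise; the dimensions used below are arbitrary.

```python
import torch

# e.g. three thread description semantic feature vectors of dimension D = 768
thread_vectors = [torch.randn(768) for _ in range(3)]

# Two-dimensional arrangement: stack the vectors row-wise into the
# global thread description semantic feature matrix of shape [N, D].
global_matrix = torch.stack(thread_vectors, dim=0)   # [3, 768]
```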
In step S160 of the embodiment of the present application, the global thread description semantic feature matrix and the distance topological feature matrix are passed through a graph neural network model to obtain a topological global thread description semantic feature matrix. That is, each thread description semantic feature vector in the thread description semantic feature vectors is used as a feature representation of a node, the distance topology feature matrix is used as a feature representation of an edge between the nodes, and a global thread description semantic feature matrix obtained by two-dimensional arrangement of the thread description semantic feature vectors and the distance topology feature matrix pass through a graph neural network to obtain a topology global thread description semantic feature matrix. Specifically, the graph neural network performs graph structure data coding on the global thread description semantic feature matrix and the distance topological feature matrix through a learnable neural network parameter to obtain the topological global thread description semantic feature matrix containing irregular distance topological features and semantic understanding feature information of each thread description.
In a specific example of the present application, the step of obtaining the topological global thread description semantic feature matrix by passing the global thread description semantic feature matrix and the distance topological feature matrix through a graph neural network model includes: the graph neural network processes the global thread description semantic feature matrix and the distance topology feature matrix through the learnable neural network parameters to obtain the topology global thread description semantic feature matrix containing Euclidean distance topology features and semantic understanding feature information of each thread description.
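For illustration only, the sketch below uses a single GCN-style graph convolution as a stand-in for the graph neural network model; normalizing the adjacency with a softmax over negated distances and the single-layer design are assumptions, not features required by the present application.

```python
import torch
import torch.nn as nn

class SimpleGraphEncoder(nn.Module):
    """Combines node features X ([N, D]) and a distance-derived edge matrix A ([N, N])
    through learnable parameters, yielding the topological feature matrix [N, D]."""
    def __init__(self, dim: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(dim, dim) * 0.01)

    def forward(self, X: torch.Tensor, A: torch.Tensor) -> torch.Tensor:
        # Smaller distances get larger weights; rows sum to one.
        A_hat = torch.softmax(-A, dim=-1)
        # Message passing: aggregate neighbour features, then linear transform.
        return torch.relu(A_hat @ X @ self.weight)
```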
In step S170 of the embodiment of the present application, small-scale feature association expression reinforcement is performed on the topological global thread description semantic feature matrix based on the global thread description semantic feature matrix to obtain an optimized topological global thread description semantic feature matrix. It should be understood that, in the technical solution of the present application, the topological global thread description semantic feature matrix is obtained by passing the global thread description semantic feature matrix and the distance topological feature matrix through the graph neural network model, so that it expresses the association of the context semantic features of the descriptions of the threads to be assigned under the semantic-similarity topology of those threads. However, since each global thread description semantic feature vector of the global thread description semantic feature matrix is a small-scale context semantic coding representation of the description of a thread to be assigned, it is still desirable to promote the small-scale feature association expression of the topological global thread description semantic feature matrix relative to the global thread description semantic feature matrix, thereby promoting the expression effect of the topological global thread description semantic feature matrix on the small-scale context coding semantics of the description of each thread to be assigned. Thus, a small-scale local derivative matrix is computed between the topological global thread description semantic feature matrix, for example denoted $M_1$, and the global thread description semantic feature matrix, for example denoted $M_2$, and is used as a weighted feature matrix.
Fig. 4 is a flowchart of a method and a system for controlling thread operation according to an embodiment of the present application, where small scale feature association expression enhancement is performed on the topological global thread description semantic feature matrix to obtain an optimized topological global thread description semantic feature matrix. As shown in fig. 4, in a specific example of the present application, the performing, based on the global thread description semantic feature matrix, small-scale feature association expression enhancement on the topological global thread description semantic feature matrix to obtain an optimized topological global thread description semantic feature matrix includes: s310, calculating a small-scale local derivative matrix between the topological global thread description semantic feature matrix and the global thread description semantic feature matrix; and S320, multiplying the topological global thread description semantic feature matrix by a position point by taking the small-scale local derivative matrix as a weighted feature matrix to obtain the optimized topological global thread description semantic feature matrix.
In a specific example of the present application, calculating the small-scale local derivative matrix between the topological global thread description semantic feature matrix and the global thread description semantic feature matrix includes: calculating, position by position, the feature value $m^{c}_{(i,j)}$ of the small-scale local derivative matrix $M_c$ at each position $(i,j)$ from the feature values $m^{1}_{(i,j)}$ and $m^{2}_{(i,j)}$ of the topological global thread description semantic feature matrix $M_1$ and the global thread description semantic feature matrix $M_2$ at the same position.
Here, by computing the small-scale local derivative features between the topological global thread description semantic feature matrix $M_1$ and the global thread description semantic feature matrix $M_2$, the physical property of mutual expression between data sequences can be imitated on the basis of the geometric approximation of their corresponding positions, so that the local nonlinear dependence of cross-feature-domain positions is enhanced by position-by-position regression between the feature matrices. Thus, by using the small-scale local derivative matrix $M_c$ as a weighting matrix and weighting the feature values of the topological global thread description semantic feature matrix $M_1$ by position-wise point multiplication, the expression effect of $M_1$ on the small-scale context coding semantics of the description of each thread to be assigned can be improved, thereby improving the accuracy of the classification results obtained by passing its row vectors through the classifier. In this way, thread priorities can be allocated reasonably and adaptively based on the feature distribution among threads, so that the assigned threads are adapted to the tasks to be processed, improving processing efficiency and processing effect.
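As an illustrative sketch only: because the exact position-wise formula for the small-scale local derivative matrix appears only as an image in the original publication and is not reproduced here, the derivative computation is treated as a pluggable function, and only the weighting by position-wise point multiplication is shown; the stand-in derivative in the final comment is hypothetical.

```python
import torch

def small_scale_reinforce(M1: torch.Tensor, M2: torch.Tensor, derivative_fn) -> torch.Tensor:
    """M1: topological global thread description semantic feature matrix [N, D].
    M2: global thread description semantic feature matrix [N, D].
    derivative_fn: position-wise function producing the small-scale local
    derivative matrix Mc from M1 and M2 (formula not reproduced here)."""
    Mc = derivative_fn(M1, M2)      # weighted feature matrix, same shape as M1
    return Mc * M1                  # position-wise point multiplication

# hypothetical stand-in for the undisclosed formula, for demonstration only:
# M_opt = small_scale_reinforce(M1, M2, lambda a, b: (a - b).abs())
```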
In step S180 of the embodiment of the present application, each row vector in the optimized topological global thread description semantic feature matrix is respectively passed through a classifier to obtain a plurality of probability values. That is, each row vector in the optimized topological global thread description semantic feature matrix is used as a classification feature vector and subjected to classification processing in a classifier, so as to obtain a probability value for the description of each thread to be assigned.
In a specific example of the present application, passing each row vector in the optimized topological global thread description semantic feature matrix through a classifier to obtain a plurality of probability values includes: processing each row vector in the optimized topological global thread description semantic feature matrix with the classifier according to the following formula to obtain the plurality of probability values; wherein, the formula is:
$$p=\mathrm{softmax}\left\{W_{n}\cdots\left(W_{1}X+b_{1}\right)\cdots+b_{n}\right\}$$

wherein $X$ represents each row vector in the optimized topological global thread description semantic feature matrix, $W_1$ to $W_n$ represent the weight matrices of the fully connected layers of the classifier, $b_1$ to $b_n$ represent the bias vectors of the fully connected layers of the classifier, and $p$ represents each probability value of the plurality of probability values.
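By way of example, such a classifier can be sketched as stacked fully connected layers followed by a softmax; the hidden width, the two-class setup, and reading the probability value from the positive class are assumptions made for demonstration.

```python
import torch
import torch.nn as nn

class RowClassifier(nn.Module):
    """Stacked fully connected layers (W1..Wn, b1..bn) followed by softmax,
    applied to each row vector of the optimized matrix."""
    def __init__(self, dim: int, hidden: int = 128, num_classes: int = 2):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, M_opt: torch.Tensor) -> torch.Tensor:
        logits = self.layers(M_opt)               # [N, num_classes]
        probs = torch.softmax(logits, dim=-1)     # softmax over classes
        return probs[:, 1]                        # one probability value per row vector
```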
In step S190 of the embodiment of the present application, the priority of each thread to be assigned is determined based on the ordering of the plurality of probability values. Therefore, reasonable distribution of thread priorities can be adaptively carried out based on the characteristic distribution among threads, so that the distributed threads are adapted to tasks to be processed, and processing efficiency and processing effect are improved.
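For illustration, determining priorities from the ordering of the probability values can be sketched as follows; the convention that priority 0 is the highest is an assumption.

```python
import torch

def assign_priorities(prob_values: torch.Tensor) -> torch.Tensor:
    """prob_values: [N], one probability value per thread to be assigned.
    Returns an integer priority per thread, 0 being the highest, following
    the descending ordering of the probability values."""
    order = torch.argsort(prob_values, descending=True)   # thread indices, best first
    priorities = torch.empty_like(order)
    priorities[order] = torch.arange(order.numel())
    return priorities

# e.g. assign_priorities(torch.tensor([0.2, 0.9, 0.5])) -> tensor([2, 0, 1])
```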
In summary, according to the thread operation control method in the embodiment of the present application, first, a description of each thread to be assigned is obtained, where the description of the thread to be assigned is a scheduling context matched with sporadic task parameters related to the thread to be assigned; then, semantic feature information in the description of the threads and feature distribution information among the threads are mined through a deep learning technology to obtain a topological global thread description semantic feature matrix; and then, each row vector in the topological global thread description semantic feature matrix is respectively passed through a classifier to obtain a plurality of probability values, and finally, the priority of each thread to be assigned is determined based on the ordering of the plurality of probability values, so that the reasonable allocation of the thread priority is adaptively carried out based on the feature distribution among threads, the allocated threads are adapted to the task to be processed, and the processing efficiency and effect are improved.
Exemplary System: FIG. 5 is a schematic block diagram of a thread operation control system according to an embodiment of the present application. As shown in FIG. 5, the thread operation control system 100 according to the embodiment of the present application includes: a description obtaining module 110, configured to obtain the description of each thread to be assigned, where the description of a thread to be assigned is the scheduling context matched with the sporadic task parameters related to that thread; a context coding module 120, configured to pass the description of each thread to be assigned through a context encoder including an embedded layer to obtain a plurality of thread description semantic feature vectors; a Euclidean distance calculating module 130, configured to calculate the Euclidean distance between every two of the plurality of thread description semantic feature vectors to obtain a distance topology matrix; a convolutional encoding module 140, configured to pass the distance topology matrix through a convolutional neural network model serving as a feature extractor to obtain a distance topological feature matrix; a two-dimensional arrangement module 150, configured to two-dimensionally arrange the plurality of thread description semantic feature vectors to obtain a global thread description semantic feature matrix; a graph neural coding module 160, configured to pass the global thread description semantic feature matrix and the distance topological feature matrix through a graph neural network model to obtain a topological global thread description semantic feature matrix; a matrix optimization module 170, configured to perform small-scale feature association expression enhancement on the topological global thread description semantic feature matrix based on the global thread description semantic feature matrix to obtain an optimized topological global thread description semantic feature matrix; a probability value obtaining module 180, configured to pass each row vector in the optimized topological global thread description semantic feature matrix through a classifier to obtain a plurality of probability values; and a priority determining module 190, configured to determine the priority of each thread to be assigned based on the ordering of the plurality of probability values.
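To make the data flow between modules 110 to 190 concrete, the following PyTorch sketch mirrors the module chain end to end. The layer sizes, the generic transformer encoder standing in for the Bert-based context encoder, the torch.cdist distance computation, the simple adjacency-weighted graph update and the sigmoid-based enhancement weight are all illustrative assumptions rather than the patented implementation.

import torch
import torch.nn as nn

class ThreadPriorityPipeline(nn.Module):
    # Sketch of modules 120-180; every dimension and layer choice is assumed.
    def __init__(self, vocab_size: int = 30000, embed_dim: int = 128,
                 feat_dim: int = 128, num_classes: int = 2):
        super().__init__()
        # Context encoder with an embedding layer (module 120).
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=embed_dim, nhead=4, batch_first=True),
            num_layers=2)
        # Convolutional feature extractor over the distance topology matrix (module 140).
        self.topo_cnn = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, kernel_size=3, padding=1))
        # Learnable graph-neural parameters (module 160).
        self.graph_weight = nn.Linear(embed_dim, feat_dim)
        # Classifier (module 180).
        self.classifier = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, num_classes))

    def forward(self, thread_token_ids: torch.Tensor) -> torch.Tensor:
        # thread_token_ids: (num_threads, seq_len) token ids of the thread descriptions.
        tokens = self.embedding(thread_token_ids)                   # (N, L, E)
        encoded = self.encoder(tokens).mean(dim=1)                  # (N, E) thread description semantic feature vectors
        distance_topology = torch.cdist(encoded, encoded)           # (N, N) pairwise Euclidean distances (module 130)
        topo_features = self.topo_cnn(distance_topology[None, None])[0, 0]  # (N, N) distance topological feature matrix
        # Graph-style update: adjacency-weighted aggregation of node features (module 160).
        topo_global = topo_features @ self.graph_weight(encoded)    # (N, F) topological global feature matrix
        # Small-scale feature association enhancement (module 170), placeholder weighting.
        optimized = topo_global * torch.sigmoid(topo_global - self.graph_weight(encoded))
        # One probability value per thread (module 180); ranking these gives the priorities (module 190).
        return torch.softmax(self.classifier(optimized), dim=-1)[:, 1]

The two-dimensional arrangement of module 150 corresponds here to stacking the encoded thread description semantic feature vectors into the (N, E) matrix named encoded.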
Here, it will be understood by those skilled in the art that the specific functions and operations of the respective units and modules in the above-described thread operation control system have been described in detail in the above description of the thread operation control method with reference to fig. 1 to 4, and thus, repetitive descriptions thereof will be omitted.
Exemplary electronic device: next, an electronic device according to an embodiment of the present application is described with reference to fig. 6.
Fig. 6 is a block diagram of an electronic device according to an embodiment of the present application.
As shown in fig. 6, the electronic device 10 includes one or more processors 11 and a memory 12.
The processor 11 may be a Central Processing Unit (CPU) or other form of processing unit having data processing and/or instruction execution capabilities, and may control other components in the electronic device 10 to perform desired functions.
The memory 12 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory (cache). The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 11 may execute the program instructions to implement the thread operation control methods of the various embodiments of the present application described above and/or other desired functions. Various content, such as the descriptions of the threads to be assigned, may also be stored in the computer-readable storage medium.
In one example, the electronic device 10 may further include: an input device 13 and an output device 14, which are interconnected by a bus system and/or other forms of connection mechanisms (not shown).
The input means 13 may comprise, for example, a keyboard, a mouse, etc.
The output device 14 may output various information to the outside, including the plurality of probability values, the priorities of the threads to be assigned, and the like. The output device 14 may include, for example, a display, a speaker, a printer, and a communication network and remote output devices connected thereto.
Exemplary computer program product and computer-readable storage medium: In addition to the methods and apparatus described above, embodiments of the present application may also be a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform the steps of the thread operation control method according to the various embodiments of the present application described in the "exemplary method" section of this specification.
The computer program product may write program code for performing the operations of the embodiments of the present application in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present application may also be a computer-readable storage medium having stored thereon computer program instructions that, when executed by a processor, cause the processor to perform the steps of the thread operation control method according to the various embodiments of the present application described in the "exemplary method" section of this specification.
The computer-readable storage medium may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The basic principles of the present application have been described above in connection with specific embodiments. However, it should be noted that the advantages, benefits, effects, and the like mentioned in the present application are merely examples and not limitations, and these advantages, benefits, and effects should not be regarded as necessarily possessed by each embodiment of the present application. In addition, the specific details disclosed above are only for the purposes of illustration and ease of understanding, and are not intended to be limiting; the application is not limited to being implemented with the specific details described above.

Claims (8)

1. A thread operation control method, comprising:
acquiring descriptions of various threads to be assigned, wherein the descriptions of the threads to be assigned are scheduling contexts matched with sporadic task parameters related to the threads to be assigned;
the description of each thread to be assigned is respectively passed through a context encoder comprising an embedded layer to obtain a plurality of thread description semantic feature vectors;
calculating Euclidean distance between every two thread description semantic feature vectors in the thread description semantic feature vectors to obtain a distance topology matrix;
the distance topological matrix is passed through a convolutional neural network model serving as a feature extractor to obtain a distance topological feature matrix;
two-dimensional arrangement is carried out on the thread description semantic feature vectors to obtain a global thread description semantic feature matrix;
the global thread description semantic feature matrix and the distance topological feature matrix are subjected to a graph neural network model to obtain a topological global thread description semantic feature matrix;
based on the global thread description semantic feature matrix, carrying out small-scale feature association expression reinforcement on the topological global thread description semantic feature matrix to obtain an optimized topological global thread description semantic feature matrix;
each row vector in the optimized topology global thread description semantic feature matrix is respectively passed through a classifier to obtain a plurality of probability values; and
determining a priority of the respective thread to be assigned based on the ordering of the plurality of probability values;
wherein the performing small-scale feature association expression reinforcement on the topological global thread description semantic feature matrix based on the global thread description semantic feature matrix to obtain an optimized topological global thread description semantic feature matrix comprises:
calculating a small-scale local derivative matrix between the topological global thread description semantic feature matrix and the global thread description semantic feature matrix; and
multiplying the topological global thread description semantic feature matrix position-wise by the small-scale local derivative matrix serving as a weighted feature matrix to obtain the optimized topological global thread description semantic feature matrix;
wherein said calculating a small scale local derivative matrix between said topological global thread description semantic feature matrix and said global thread description semantic feature matrix comprises:
calculating the small-scale local derivative matrix between the topological global thread description semantic feature matrix and the global thread description semantic feature matrix according to the following formula;
Wherein, the formula is:
[formula given as an image in the original publication]

wherein the formula relates, at each position $(i,j)$, the feature value of the topological global thread description semantic feature matrix, the feature value of the global thread description semantic feature matrix, and the feature value of the small-scale local derivative matrix.
2. The thread operation control method according to claim 1, wherein the passing the description of each thread to be assigned through the context encoder including the embedded layer to obtain the plurality of thread description semantic feature vectors comprises:
converting the descriptions of the threads to be assigned into embedded vectors by using an embedded layer of the context encoder to obtain sequences of embedded vectors corresponding to the descriptions of the threads to be assigned;
performing global-based context semantic coding on the sequence of embedded vectors corresponding to the description of each thread to be assigned by using a transformer-based Bert model of the context encoder to obtain a plurality of feature vectors corresponding to the description of each thread to be assigned; and
concatenating the plurality of feature vectors corresponding to the description of each thread to be assigned to obtain the plurality of thread description semantic feature vectors.
3. The thread operation control method according to claim 2, wherein the calculating the Euclidean distance between every two thread description semantic feature vectors in the plurality of thread description semantic feature vectors to obtain the distance topology matrix comprises: calculating the Euclidean distance between every two thread description semantic feature vectors in the plurality of thread description semantic feature vectors according to the following formula to obtain a plurality of Euclidean distances;
$d(V_1, V_2) = \sqrt{\sum_{k}\left(v_{1,k} - v_{2,k}\right)^{2}}$

wherein $V_1$ and $V_2$ represent any two thread description semantic feature vectors of the plurality of thread description semantic feature vectors, $d(V_1, V_2)$ represents the Euclidean distance between the two thread description semantic feature vectors, and $v_{1,k}$ and $v_{2,k}$ represent the feature values of the respective positions of the two thread description semantic feature vectors.
4. The thread operation control method according to claim 3, wherein the passing the distance topology matrix through the convolutional neural network model serving as the feature extractor to obtain the distance topological feature matrix comprises: performing, by each layer of the convolutional neural network model, the following steps on input data in a forward pass of the layer:
performing convolution processing on the input data based on a two-dimensional convolution kernel by using the convolution unit of each layer of the convolutional neural network model to obtain a convolution feature map;
performing pooling processing along the channel dimension on the convolution feature map by using the pooling unit of each layer of the convolutional neural network model to obtain a pooled feature map; and
performing nonlinear activation on the feature values of each position in the pooled feature map by using the activation unit of each layer of the convolutional neural network model to obtain an activated feature map;
wherein the input of the first layer of the convolutional neural network model is the distance topology matrix, and the output of the last layer of the convolutional neural network model is the distance topological feature matrix.
5. The thread operation control method according to claim 4, wherein the passing the global thread description semantic feature matrix and the distance topological feature matrix through the graph neural network model to obtain the topological global thread description semantic feature matrix comprises:
processing, by the graph neural network model, the global thread description semantic feature matrix and the distance topological feature matrix with learnable neural network parameters to obtain the topological global thread description semantic feature matrix containing both the Euclidean-distance topology features and the semantic understanding feature information of each thread description.
6. The thread operation control method according to claim 5, wherein the passing each row vector in the optimized topology global thread description semantic feature matrix through the classifier to obtain the plurality of probability values comprises: processing each row vector in the optimized topology global thread description semantic feature matrix with the classifier according to the following formula to obtain the plurality of probability values;
Wherein, the formula is:
$O = \mathrm{softmax}\{(W_n, B_n) : \cdots : (W_1, B_1) \mid X\}$

wherein $X$ represents each row vector in the optimized topology global thread description semantic feature matrix, $W_1$ to $W_n$ represent the weight matrices of the fully connected layers of the classifier, $B_1$ to $B_n$ represent the bias vectors of the fully connected layers of the classifier, and $O$ represents each probability value of the plurality of probability values.
7. A thread operation control system, comprising:
the description acquisition module is used for acquiring descriptions of various threads to be assigned, wherein the descriptions of the threads to be assigned are scheduling contexts matched with sporadic task parameters related to the threads to be assigned;
the context coding module is used for respectively enabling the descriptions of the threads to be assigned to pass through a context coder comprising an embedded layer to obtain a plurality of thread description semantic feature vectors;
the Euclidean distance calculation module is used for calculating Euclidean distances between every two thread description semantic feature vectors in the thread description semantic feature vectors to obtain a distance topology matrix;
the convolution coding module is used for enabling the distance topological matrix to pass through a convolution neural network model serving as a feature extractor to obtain a distance topological feature matrix;
The two-dimensional arrangement module is used for two-dimensionally arranging the plurality of thread description semantic feature vectors to obtain a global thread description semantic feature matrix;
the graph neural coding module is used for enabling the global thread description semantic feature matrix and the distance topological feature matrix to pass through a graph neural network model to obtain a topological global thread description semantic feature matrix;
the matrix optimization module is used for carrying out small-scale feature association expression reinforcement on the topological global thread description semantic feature matrix based on the global thread description semantic feature matrix so as to obtain an optimized topological global thread description semantic feature matrix;
the probability value acquisition module is used for respectively passing each row vector in the optimized topology global thread description semantic feature matrix through a classifier to obtain a plurality of probability values; and
a priority determining module, configured to determine a priority of each thread to be assigned based on the ordering of the plurality of probability values;
wherein, the matrix optimization module includes:
calculating a small-scale local derivative matrix between the topological global thread description semantic feature matrix and the global thread description semantic feature matrix; and
multiplying the topological global thread description semantic feature matrix position-wise by the small-scale local derivative matrix serving as a weighted feature matrix to obtain the optimized topological global thread description semantic feature matrix;
wherein said calculating a small scale local derivative matrix between said topological global thread description semantic feature matrix and said global thread description semantic feature matrix comprises:
calculating the small-scale local derivative matrix between the topological global thread description semantic feature matrix and the global thread description semantic feature matrix according to the following formula;
wherein, the formula is:
[formula given as an image in the original publication]

wherein the formula relates, at each position $(i,j)$, the feature value of the topological global thread description semantic feature matrix, the feature value of the global thread description semantic feature matrix, and the feature value of the small-scale local derivative matrix.
8. The thread operation control system of claim 7, wherein the Euclidean distance calculation module is configured to calculate the Euclidean distance between every two thread description semantic feature vectors in the plurality of thread description semantic feature vectors according to the following formula to obtain a plurality of Euclidean distances;
$d(V_1, V_2) = \sqrt{\sum_{k}\left(v_{1,k} - v_{2,k}\right)^{2}}$

wherein $V_1$ and $V_2$ represent any two thread description semantic feature vectors of the plurality of thread description semantic feature vectors, $d(V_1, V_2)$ represents the Euclidean distance between the two thread description semantic feature vectors, and $v_{1,k}$ and $v_{2,k}$ represent the feature values of the respective positions of the two thread description semantic feature vectors.
CN202310077120.0A 2023-02-08 2023-02-08 Thread operation control method and system Active CN115878330B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310077120.0A CN115878330B (en) 2023-02-08 2023-02-08 Thread operation control method and system

Publications (2)

Publication Number Publication Date
CN115878330A CN115878330A (en) 2023-03-31
CN115878330B true CN115878330B (en) 2023-05-30

Family

ID=85760855

Country Status (1)

Country Link
CN (1) CN115878330B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116663568B (en) * 2023-07-31 2023-11-17 腾云创威信息科技(威海)有限公司 Critical task identification system and method based on priority
CN116957304B (en) * 2023-09-20 2023-12-26 飞客工场科技(北京)有限公司 Unmanned aerial vehicle group collaborative task allocation method and system

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114741186A (en) * 2022-03-28 2022-07-12 慧之安信息技术股份有限公司 Thread pool adaptive capacity adjustment method and device based on deep learning

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10269088B2 (en) * 2017-04-21 2019-04-23 Intel Corporation Dynamic thread execution arbitration
CN109144716A (en) * 2017-06-28 2019-01-04 中兴通讯股份有限公司 Operating system dispatching method and device, equipment based on machine learning
CN109886407B (en) * 2019-02-27 2021-10-22 上海商汤智能科技有限公司 Data processing method and device, electronic equipment and computer readable storage medium
CN113269323B (en) * 2020-02-17 2024-03-12 北京达佳互联信息技术有限公司 Data processing method, processing device, electronic equipment and storage medium
CN115373813A (en) * 2022-04-03 2022-11-22 福建福清核电有限公司 Scheduling method and system based on GPU virtualization in cloud computing environment and electronic equipment



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant