CN112363823A - Lightweight serverless computing method based on message - Google Patents

Lightweight serverless computing method based on message Download PDF

Info

Publication number
CN112363823A
CN112363823A (application CN202011079954.8A)
Authority
CN
China
Prior art keywords
message
algorithm
grabbing
determining
computing node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011079954.8A
Other languages
Chinese (zh)
Inventor
李彦清
李志鹏
邹强
李利军
于滨峰
张春林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dongfangtong Software Co ltd
Beijing Tongtech Co Ltd
Original Assignee
Beijing Dongfangtong Software Co ltd
Beijing Tongtech Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dongfangtong Software Co ltd, Beijing Tongtech Co Ltd filed Critical Beijing Dongfangtong Software Co ltd
Priority to CN202011079954.8A priority Critical patent/CN112363823A/en
Publication of CN112363823A publication Critical patent/CN112363823A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/5038 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention provides a lightweight serverless computing method based on messages. The method comprises the following steps: receiving a client message, and determining the message content and the message type; determining a required message algorithm and an algorithm capturing way according to the message content; determining the grabbing sequence of the algorithm grabbing paths according to the message type; respectively importing the message contents into different computing nodes according to the message types within a preset time; matching the algorithm grabbing paths with the computing nodes to determine matching information; capturing a message algorithm to a computing node according to the matching information and the capturing sequence; and executing the serverless computation according to the captured message algorithm. The invention has the beneficial effects that: users can exploit the algorithmic flexibility of the serverless computing architecture to meet their current computing requirements through a flexible and standardized algorithm acquisition mode, which effectively saves computation cost and improves resource utilization.

Description

Lightweight serverless computing method based on message
Technical Field
The invention relates to the technical field of calculation, in particular to a lightweight serverless calculation method based on messages.
Background
At present, cloud computing is developing rapidly, and serverless computing is the inevitable trend of that development. Serverless computing decomposes the original application and provides finer-grained service scheduling: resources are occupied to serve a call only when a request arrives, no resources are occupied when there is no request, and charging is based on the number and duration of calls. Compared with the traditional online service mode, serverless computing greatly reduces the user's cost and frees the user from server configuration entirely, which simplifies development and offers better flexibility than traditional online services. However, the current serverless computing model introduces significant performance problems due to the cold-start nature of its containers.
Therefore, to solve the performance problems of serverless computing, those skilled in the art need a serverless computing method with lower startup delay and higher resource utilization and computation rate than existing serverless platforms.
Disclosure of Invention
The invention provides a message-based lightweight serverless computing method for solving the stability and security problems of server computing.
A message-based lightweight serverless computing method, comprising:
receiving a client message, and determining the content and the type of the message;
determining a required message algorithm and an algorithm capturing way according to the message content;
determining the grabbing sequence of the algorithm grabbing paths according to the message type;
respectively calling the message contents into different computing nodes within preset time;
matching the algorithm grabbing path with the computing node to determine matching information;
capturing a message algorithm to a computing node according to the matching information and the capturing sequence;
performing a calculation according to the captured message algorithm.
As an embodiment of the present invention, the receiving a client message, and determining the message content and the message type includes:
based on a synonymy semantic division rule, dividing the client message into a plurality of different message sequences according to sentences;
performing relevance calculation on different sentences in the same message sequence, and determining first relevance parameters between different sentences in the same message sequence;
determining a second correlation parameter between different sequences according to the first correlation parameter;
substituting the first correlation parameter and the second correlation parameter into a discrete regression function to construct the message sequence and a discrete distribution relational graph of statements in the message sequence;
and determining the statement area of each statement in the discrete distribution relation graph according to the discrete distribution relation graph, classifying the client messages based on the statement areas, and determining the message content of each classified client message.
As an embodiment of the present invention, the determining a required message algorithm and an algorithm capture route according to the message content includes:
acquiring message content, and determining characteristic parameters and characteristic types;
determining algorithm parameters and demand parameters of message contents corresponding to each feature type according to the feature types;
determining a calculation function and a calculation logic of a message algorithm according to the demand parameters;
determining the calculation characteristics of the message algorithm according to the algorithm parameters;
respectively acquiring a first data set with the same computing function, a second data set with the same computing logic and a third data set with the same computing characteristic according to the computing function, the computing logic and the computing characteristic;
determining the same data according to the first data set, the second data set and the third data set;
acquiring a target data address and a target domain name address of the same data;
determining an algorithm capture way of the same data according to the target data address and the target domain name address;
and acquiring the data volume of the same data, integrating all the algorithm grabbing paths of the same data, and determining all the message algorithms of the same data.
As an embodiment of the present invention, the acquiring the data address and the domain name address of the same data further includes:
when a plurality of data addresses and domain name addresses are acquired for the same data, the plurality of domain name addresses are docked through any computing node, and the docking time is acquired;
and determining the domain name address corresponding to the shortest time value in the time values according to the time values of the docking time, and taking the domain name address corresponding to the shortest time value as a target domain name address.
As an embodiment of the present invention, the determining a fetch order of the algorithm fetch routes according to the message types includes:
acquiring a message type, and determining the correlation of the message type;
determining a parallel relation and a branch relation in the correlation relation according to the correlation relation;
according to the parallel relation, calculating the entropy weight of the message content in the parallel relation;
determining a first grabbing order of algorithm grabbing paths corresponding to the message contents of the parallel relation according to the entropy weight;
constructing a tree graph of the message type according to the branch relation;
determining a second grabbing order of algorithm grabbing paths corresponding to the messages corresponding to the branch relations according to the tree-shaped graph;
and determining the grabbing sequence of the message type according to the first grabbing order and the second grabbing order.
As an embodiment of the present invention, the importing, within a preset time, message contents into different computing nodes according to the message types respectively includes:
respectively determining time requirements for importing message contents of different message types into the computing nodes according to the message types;
according to the time requirement, establishing a time range for importing the message contents of different message types into the computing node;
according to the time range and the message type, the message content is imported into a computing node; wherein,
when the time for importing the message content into the computing node exceeds the time range, the message content is represented to have message noise, and the message noise is filtered and then is imported into the computing node again;
and when the time for importing the message content into the computing node is lower than the time range, the message content is obtained again and imported into the computing node.
As an embodiment of the present invention, filtering the message noise and then re-importing the message content into the computing node includes:
acquiring message content and generating a message text;
judging the type of the message noise according to the message text; wherein,
the types of the noise at least comprise a character-overlapping type, a multi-meaning type and a semantically unclear type;
and according to the type of the noise, carrying out denoising processing in a replacement, addition or deletion mode, and importing the processed message content into a computing node.
As an embodiment of the present invention, the matching the algorithm grab path with the computing node according to the message type to determine matching information includes:
step 1: determining a parameter set A of the computing nodes and a parameter set B of the algorithm grabbing paths based on the number of computing nodes and the number of algorithm grabbing paths:
A = {a_1, a_2, a_3, …, a_n};
B = {b_1, b_2, b_3, …, b_m};
wherein a_i represents a parameter of the i-th computing node; b_j represents a parameter of the j-th algorithm grabbing path; i = 1, 2, 3, …, n; j = 1, 2, 3, …, m;
step 2: substituting the computing nodes and the algorithm grabbing paths into a normal distribution function, and determining the matching probability P of any computing node and any algorithm grabbing path:
[formula image BDA0002718155290000051: definition of P_{i,j}]
wherein ā represents the parameter mean of the computing nodes; b̄ represents the parameter mean of the algorithm grabbing paths; P_{i,j} represents the matching probability of the i-th computing node and the j-th algorithm grabbing path;
step 3: determining the matching capability N of the computing node according to the matching probability:
N = ∑ R_i B_i ∫ P(a_i, b_j) dt;
wherein R_i represents the storage capacity of the i-th computing node; B_i represents the proportion of algorithm grabbing paths that the i-th computing node can match;
step 4: constructing a coupling model X according to the parameters of the computing nodes and the parameters of the algorithm grabbing paths:
[formula image BDA0002718155290000061: definition of X_{i,j}]
wherein X_{i,j} represents the coupling of the i-th computing node and the j-th algorithm grabbing path;
step 5: constructing a matching model H of the algorithm grabbing paths and the computing nodes according to the coupling model X and the matching capability:
[formula image BDA0002718155290000062: definition of H_{i,j}]
wherein H_{i,j} represents the matching value of the i-th computing node and the j-th algorithm grabbing path;
step 6: substituting the parameter set of the computing nodes and the parameter set of the algorithm grabbing paths into the matching model to determine a matching value set of the computing nodes and the algorithm grabbing paths:
[formula image BDA0002718155290000063: the matching value set]
and arranging the matching values in the matching value set from large to small, and generating the matching information with a gradient table as the output form.
As an embodiment of the present invention, the fetching a message algorithm to a computing node according to the matching information and the fetching order includes:
determining the sequence of matching values corresponding to the computing nodes and the algorithm grabbing paths from large to small according to the matching information;
Judging whether the sequence of the matching values from large to small is the same as the grabbing sequence;
when the sequence is the same, capturing a message algorithm to the computing node;
when the sequence is different, determining calculation nodes and algorithm grabbing ways with different sequences, and calculating grabbing weights of the calculation nodes and the algorithm grabbing ways with different sequences;
and capturing a message algorithm to the computing node according to the capturing weight.
As an embodiment of the present invention, the performing the calculation according to the captured message algorithm includes the following steps:
step S1: reading the client message and initializing a cluster center;
step S2: marking the cluster center after the cluster center is initialized;
step S3: substituting the marked cluster center into the message algorithm, and calculating to obtain a new cluster center;
step S4: judging whether the cluster center changes;
step S5: repeating steps S1 to S4 when the cluster center changes;
step S6: calculating the client message through the message algorithm when the cluster center is unchanged.
The invention has the beneficial effects that: users can meet their current computing requirements by exploiting the algorithmic flexibility of the serverless computing architecture through a flexible and standardized algorithm acquisition mode. Through extremely fine-grained processing, data is distributed accurately and computed synchronously at scale by multiple algorithms, which effectively saves computation cost and improves resource utilization. The method can automatically complete the data processing tasks submitted by users, minimizes the work and energy users spend on server management, and is general, efficient and easy to use.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention may be realized and attained by the structure particularly pointed out in the written description and drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and examples.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a flow chart of a message-based lightweight serverless computing method in an embodiment of the invention.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
Example 1:
as shown in fig. 1, a lightweight serverless message-based computing method includes:
step 100: receiving a client message, and determining the content and the type of the message;
The client message received by the invention comprises three types: description-type information, behavior-type information and association-type information.
Description-type information is mainly information used to understand the basic attributes of a client, such as the contact, geographic and demographic information of individual customers, or the social and economic statistics of enterprise customers; it mainly comes from customer registration information and from the customer basic information collected by the enterprise's operations management system.
Behavior-type information generally includes records of the customer purchasing a service or product, consumption records of the service or product, records of contact between the customer and the enterprise, and related information such as the customer's consumption behavior, preferences and lifestyle.
Association-type information is linked to client behavior and reflects the factors that influence client behavior, psychology and the like. The main purpose for an enterprise to build and maintain such information is to help its marketers and customer analysts understand more deeply the relevant factors that affect customer behavior.
Step 101: determining a required message algorithm and an algorithm capturing way according to the message content;
Different message contents have different acquisition ways and different data sources, so the algorithms for the messages differ. For example, description-type information is static; while the information is invariant, a static algorithm is adopted. Behavior-type information changes from moment to moment, so a dynamic algorithm is needed.
Step 102: determining a grabbing sequence of the algorithm grabbing paths according to the message types;
Different message types use different capturing ways. For description-type information, for example, the basic-attribute information of a customer is obtained by docking with the customer's information registration website; behavior-type information, for example a client's consumption information, is acquired from a financial institution with the client's authorization.
Step 103: respectively calling the message contents into different computing nodes within preset time;
The calculation of the message content is time-limited to prevent it from running too long and to avoid a blocked calculation whose state cannot be known; presetting the calculation time therefore prevents the calculation from getting stuck.
Step 104: matching the algorithm grabbing path with the computing node to determine matching information;
Matching the algorithm capturing ways with the computing nodes aims to make the content on each computing node correspond to its algorithm, so that the calculation can be performed more accurately.
Step 105: capturing a message algorithm to a computing node according to the matching information and the capturing sequence;
The matching information represents the descending order of the algorithm matching values, and the capturing sequence represents the calculation order; comparing the two orders against each other improves the calculation speed.
Step 106: according to the captured message algorithm, a calculation is performed.
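To make the flow of steps 100-106 concrete, the sketch below wires the steps together in Python. It is a minimal, runnable illustration only: the helper names (classify_message, resolve_algorithm, import_to_nodes) and their trivial stand-in bodies are assumptions, not implementations disclosed by the patent.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    def run(self, algorithm, content):
        return algorithm(content)

def classify_message(msg):                     # step 100: content and type
    return msg["content"], msg.get("type", "description")

def resolve_algorithm(content):                # steps 101-102: algorithm + grab route
    return len, "https://example.invalid/static-algorithm"

def import_to_nodes(content, nodes):           # step 103: split content across nodes
    return [(node, content[i::len(nodes)]) for i, node in enumerate(nodes)]

def serverless_compute(msg, nodes):
    content, msg_type = classify_message(msg)
    algorithm, _route = resolve_algorithm(content)
    parts = import_to_nodes(content, nodes)
    # steps 104-105 collapse here: every node is matched to the single fetched algorithm
    return {node.name: node.run(algorithm, part) for node, part in parts}   # step 106

if __name__ == "__main__":
    nodes = [Node("node-0"), Node("node-1")]
    print(serverless_compute({"content": "client message text", "type": "behavior"}, nodes))
```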
The beneficial effects of the above technical scheme are that: users can meet their current computing requirements by exploiting the algorithmic flexibility of the serverless computing architecture through a flexible and standardized algorithm acquisition mode. Through extremely fine-grained processing, data is distributed accurately and computed synchronously at scale by multiple algorithms, which effectively saves computation cost and improves resource utilization. The method can automatically complete the data processing tasks submitted by users, minimizes the work and energy users spend on server management, and is general, efficient and easy to use.
Example 2:
as an embodiment of the present invention, the receiving a client message, and determining the message content and the message type includes:
based on a synonymy semantic division rule, dividing the client message into a plurality of different message sequences according to sentences; the synonymous semantic meaning division rule is based on the content meaning of the client content, and divides the client messages with the same meaning into one type, wherein each type is a message sequence, and further forms a plurality of different message sequences.
Performing relevance calculation on different sentences in the same message sequence, and determining first relevance parameters between different sentences in the same message sequence. In the same message sequence there will be at least one statement with the same meaning, and the first association parameter is 1. When two or more sentences in the same sequence have the same meaning, the Mahalanobis distance of any two sentences in the sequence is calculated to determine whether they are the same sentence: for example, if sentence A and sentence B have a Mahalanobis distance of 1, a duplicate sentence exists; since duplicates also consume computing resources, they can be deleted once identified. When the Mahalanobis distance is less than 1, sentence A and sentence B are different.
Determining a second correlation parameter between different sequences according to the first correlation value parameter;
there will be cases where all statements are identical between sequences, so the second correlation parameter can find out a repeated sequence.
Substituting the first correlation parameter and the second correlation parameter into a discrete regression function to construct the message sequence and a discrete distribution relational graph of statements in the message sequence; the discrete distribution relation graph is divided into two levels, wherein one level is the discrete distribution relation graph among sequences, and the next level is the discrete distribution relation graph among different sentences in a certain sequence.
And according to the discrete distribution relation graph, determining the area of each statement in the discrete distribution relation graph, classifying the client messages based on the area, and determining the message content of each classified client message. After the sequence-level graph and the statement-level graphs of the discrete distribution relation are superimposed, the statements of one sequence form a region of points, so the classification can be determined from the area of that region.
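A small sketch of the duplicate-sentence check described above, assuming each sentence has already been turned into a numeric feature vector (the featurisation step is not given in the patent). Following the text, a pair of sentences whose Mahalanobis distance equals 1 is treated as a duplicate, so one of the two can be deleted.

```python
import numpy as np

def mahalanobis(x, y, inv_cov):
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(np.sqrt(d @ inv_cov @ d))

def duplicate_pairs(vectors, inv_cov, tol=1e-9):
    """Return index pairs whose distance is 1 (duplicates, per the description)."""
    pairs = []
    for i in range(len(vectors)):
        for j in range(i + 1, len(vectors)):
            if abs(mahalanobis(vectors[i], vectors[j], inv_cov) - 1.0) < tol:
                pairs.append((i, j))
    return pairs

# Demo with the identity matrix as the inverse covariance (a simplifying
# assumption that reduces the Mahalanobis distance to the Euclidean distance).
vecs = [[0.0, 0.0], [1.0, 0.0], [3.0, 4.0]]
print(duplicate_pairs(vecs, np.eye(2)))   # [(0, 1)] -> sentence 1 can be deleted
```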
The beneficial effects of the above technical scheme are that: the invention divides the client content by the same meaning, thus realizing the first-level division of the client message; the determination of the repeated sentences or repeated sequences is realized through the associated parameters, so that the repeated sentences or repeated sequences are deleted, and the occupation of computing resources is reduced; and finally, based on area classification, secondary division of the client message is realized, and further, accurate division of the client message is realized.
Example 3:
as an embodiment of the present invention, the determining a required message algorithm and an algorithm capture route according to the message content includes:
Acquiring message content, and determining characteristic parameters and characteristic types; the feature types include timeliness features, depth features, capacity features and the like, and the message features can be determined from the specific calculation characteristics of the message when the calculation is carried out.
Determining algorithm parameters and demand parameters of the message content corresponding to each feature type according to the feature types; algorithm parameters are parameters needed for the calculation that can be derived directly from the message content, for example the capacity of the client message (the memory space it occupies) or the parameters of its semantic features (the meaning the client message expresses). The demand parameters are obtained by determining, from the computing end, what parameters are needed when calculating the client messages.
Determining a calculation function and a calculation logic of the message algorithm according to the demand parameters; the calculation function is the result produced by the calculation, and the calculation logic is the method logic used to obtain that result.
Determining the calculation characteristics of the message algorithm according to the algorithm parameters;
respectively acquiring a first data set with the same computing function, a second data set with the same computing logic and a third data set with the same computing characteristic according to the computing function, the computing logic and the computing characteristic;
Determining the same data according to the first data set, the second data set and the third data set. The same data differs from repeated data: the same data has identical or similar semantics, that is, the same or similar content expressed with different words or sentences. It also differs from the synonymous division rule: that rule judges semantic similarity subjectively, whereas the same data is obtained by calculation and is identical in parameters, logic and function.
Acquiring a target data address and a target domain name address of the same data;
determining the algorithm grabbing path of the same data according to the data address and the domain name address; the target domain name address identifies the web page or web site where the same data exists, while the target data address identifies the location of the same data on the web page.
And integrating all the algorithm grabbing ways of the same data according to the quantity of the same data, and determining a message algorithm corresponding to all the algorithm grabbing ways of the same data.
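The selection of the "same data" by intersecting the three sets, and the assembly of grabbing routes from the target addresses, can be sketched as follows. The record identifiers, the address-book structure and the example URLs are illustrative assumptions.

```python
def find_same_data(func_set, logic_set, feature_set):
    # data appearing in all three sets shares calculation function, logic and characteristic
    return set(func_set) & set(logic_set) & set(feature_set)

def build_grab_routes(same_data, address_book):
    # address_book maps a data id to (target data address, target domain name address)
    return {d: address_book[d] for d in same_data if d in address_book}

same = find_same_data({"d1", "d2", "d3"}, {"d2", "d3"}, {"d3", "d4"})
routes = build_grab_routes(same, {"d3": ("/api/data/3", "https://example.invalid")})
print(same)    # {'d3'}
print(routes)  # {'d3': ('/api/data/3', 'https://example.invalid')}
```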
The beneficial effects of the above technical scheme are that: the message algorithm obtained by the invention can be completely adapted to the calculation of the client message, the accurate determination of the algorithm is realized through the function, logic and characteristic of the calculation, and the data address and the domain name address accurately position the message algorithm. The resulting message algorithm is also the most suitable algorithm for client message computation.
Example 4:
as an embodiment of the present invention, the determining the data address and the domain name address of the same data further includes:
When a plurality of data addresses and domain name addresses are acquired for the same data, the plurality of domain name addresses are docked through any computing node, and the docking time is acquired. For example, the same article or document may exist on CNKI, Wanfang, VIP and Aixueshu, but the time needed to connect to each of them from any client differs, and when the time deviation is extremely small the choice cannot be made by manual operation.
And determining the domain name address corresponding to the shortest time value in the time values according to the time values of the docking time, and taking the domain name address corresponding to the shortest time value as the domain name address of an algorithm grabbing path. The domain name address corresponding to the shortest time value in the time values represents the fastest domain name address of the connection, and the domain name address is used as the target domain name address of the invention.
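A minimal sketch of the "shortest docking time" selection: each candidate domain is connected once, the round-trip time is measured, and the fastest one becomes the target domain name address. The timeout value and the use of urllib for the probe are assumptions; the patent only specifies timing the connections and taking the minimum.

```python
import time
import urllib.request

def fastest_domain(domain_urls, timeout=3.0):
    timings = {}
    for url in domain_urls:
        start = time.monotonic()
        try:
            urllib.request.urlopen(url, timeout=timeout).close()   # dock with the domain
            timings[url] = time.monotonic() - start
        except OSError:
            continue                                               # skip unreachable mirrors
    return min(timings, key=timings.get) if timings else None

# target = fastest_domain(["https://mirror-a.example.invalid", "https://mirror-b.example.invalid"])
```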
Example 5:
as an embodiment of the present invention, the determining a fetch order of the algorithm fetch routes according to the message types includes:
Acquiring a message type, and determining the correlation of the message type; the correlation of the message types, namely the parallel relations and branch relations within it, is determined from the correlations among the message types.
according to the parallel relation, calculating the entropy weight of the message content in the parallel relation;
determining a first grabbing order of algorithm grabbing paths corresponding to the message contents of the parallel relation according to the entropy weight; the first grab sequence is a grab of the message algorithm for message content of the main class, i.e. the completely different classes.
Constructing a tree graph of the message type according to the branch relation;
Determining a second capturing sequence of algorithm capturing ways corresponding to the message types of the branch relations according to the tree-shaped graph; the second capture order follows the branching relationships of the tree. In the tree diagram each branch is related to the branch or trunk of the previous level, which reduces the amount of calculation in the algorithm capturing process.
And determining the grabbing sequence of the message type according to the first grabbing sequence and the second grabbing sequence.
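For the parallel relation, the entropy weight can be computed with the standard entropy-weight method, which is a plausible reading of the step above (the patent does not spell the formula out). The indicator matrix below, where each row is one message content and each column one indicator, is an illustrative assumption.

```python
import math

def entropy_weights(scores):
    """Standard entropy-weight method: one weight per indicator column."""
    n, m = len(scores), len(scores[0])
    col_sums = [sum(row[j] for row in scores) or 1.0 for j in range(m)]
    p = [[row[j] / col_sums[j] for j in range(m)] for row in scores]
    k = 1.0 / math.log(n)
    e = [-k * sum(p[i][j] * math.log(p[i][j]) for i in range(n) if p[i][j] > 0)
         for j in range(m)]
    d = [1.0 - ej for ej in e]
    total = sum(d) or 1.0
    return [dj / total for dj in d]

rows = [[0.8, 0.2], [0.4, 0.9], [0.6, 0.5]]          # three parallel message contents
w = entropy_weights(rows)
first_grab_order = sorted(range(len(rows)),
                          key=lambda i: -sum(w[j] * rows[i][j] for j in range(len(w))))
print(w, first_grab_order)   # weights per indicator, then contents ranked by weighted score
```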
The beneficial effects of the above technical scheme are that: by determining the capturing sequence, different types of client messages capture different message algorithms; the capturing sequence reduces the difficulty of obtaining the message algorithms and realizes their orderly acquisition.
Example 6:
as an embodiment of the present invention, the importing, within a preset time, message contents into different computing nodes according to the message types respectively includes:
respectively determining time requirements for importing message contents of different message types into the computing nodes according to the message types; the computation of the message content is computationally time consuming, but there may be computation failures or computation jams that cause computation to be interrupted.
According to the time requirement, establishing a time range for importing the message contents of different message types into the computing node; the time range is the time range over which the message algorithm computes the message content.
According to the time range and the message type, the message content is imported into a computing node; wherein,
when the time for importing the message content into the computing node exceeds the time range, the message content is represented to have message noise, and the message noise is filtered and then is imported into the computing node again;
and when the time for importing the message content into the computing node is lower than the time range, the message content is obtained again and imported into the computing node.
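The time-window check can be sketched as follows. The node is modelled as a plain list, and denoise() / reacquire() are hypothetical hooks standing in for "filter the message noise" and "re-obtain the message content"; the window bounds and retry limit are assumptions.

```python
import time

def import_with_window(content, node, window, denoise, reacquire, max_retries=3):
    lo, hi = window                          # allowed import-time range in seconds
    for _ in range(max_retries):
        start = time.monotonic()
        node.append(content)                 # stand-in for importing into the computing node
        elapsed = time.monotonic() - start
        if elapsed > hi:                     # too slow: message noise is assumed present
            node.pop()
            content = denoise(content)
        elif elapsed < lo:                   # too fast: re-acquire the content
            node.pop()
            content = reacquire(content)
        else:
            return content                   # within the time range: import accepted
    return content

node = []
import_with_window("client  message   text", node, (0.0, 1.0),
                   denoise=lambda c: " ".join(c.split()),
                   reacquire=lambda c: c)
print(node)
```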
The beneficial effects of the above technical scheme are that: whether noise data exist in the message content can be judged by setting the calculation time of the message algorithm, so that the data can be cleaned and refined by means of filtering and the like, and the clean data can be calculated.
Example 7:
as an embodiment of the present invention, filtering the message noise and then re-importing the message content into the computing node includes:
Acquiring message content and generating a message text; the message text is hypertext markup text in the general HTML format.
Judging the type of the message noise according to the message text; wherein the content of the first and second substances,
the types of the noise at least comprise a character-overlapping type, a multi-meaning type and a semantically unclear type;
and according to the type of the noise, carrying out denoising processing in a replacement, addition or deletion mode, and importing the processed message content into a computing node.
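A toy denoiser covering the three noise types named above: repeated (overlapping) characters are collapsed, ambiguous words are replaced, and semantically unclear words are deleted. The replacement and deletion tables are illustrative assumptions; the patent only states that noise is handled by replacement, addition or deletion.

```python
import re

AMBIGUOUS = {"bank": "financial bank"}     # multi-meaning words: replace with a clearer phrase
UNCLEAR = {"stuff", "things"}              # semantically unclear words: delete

def denoise(text):
    text = re.sub(r"(\w)\1{2,}", r"\1", text)              # collapse runs of repeated characters
    words = [AMBIGUOUS.get(w, w) for w in text.split()]    # replace ambiguous words
    words = [w for w in words if w.lower() not in UNCLEAR] # delete unclear words
    return " ".join(words)

print(denoise("the baaaank holds stuff for clients"))
# -> "the financial bank holds for clients"
```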
The beneficial effects of the above technical scheme are that: the text denoising is performed by a computer and mainly handles data with overlapping characters, multiple meanings or unclear semantics. Because the amount of data to be calculated is large, the noise data can be removed completely by the simplest and most direct means of replacement, addition or deletion.
Example 8:
as an embodiment of the present invention, the matching the algorithm grab path with the computing node according to the message type to determine matching information includes:
step 1: determining a parameter set A of the computing nodes and a parameter set B of the algorithm grabbing paths based on the number of computing nodes and the number of algorithm grabbing paths:
A = {a_1, a_2, a_3, …, a_n};
B = {b_1, b_2, b_3, …, b_m};
wherein a_i represents a parameter of the i-th computing node; b_j represents a parameter of the j-th algorithm grabbing path; i = 1, 2, 3, …, n; j = 1, 2, 3, …, m;
step 2: substituting the computing nodes and the algorithm grabbing paths into a normal distribution function, and determining the matching probability P of any computing node and any algorithm grabbing path:
[formula image BDA0002718155290000161: definition of P_{i,j}]
wherein ā represents the parameter mean of the computing nodes; b̄ represents the parameter mean of the algorithm grabbing paths; P_{i,j} represents the matching probability of the i-th computing node and the j-th algorithm grabbing path;
step 3: determining the matching capability N of the computing node according to the matching probability:
N = ∑ R_i B_i ∫ P(a_i, b_j) dt;
wherein R_i represents the storage capacity of the i-th computing node; B_i represents the proportion of algorithm grabbing paths that the i-th computing node can match;
step 4: constructing a coupling model X according to the parameters of the computing nodes and the parameters of the algorithm grabbing paths:
[formula image BDA0002718155290000164: definition of X_{i,j}]
wherein X_{i,j} represents the coupling of the i-th computing node and the j-th algorithm grabbing path;
step 5: constructing a matching model H of the algorithm grabbing paths and the computing nodes according to the coupling model X and the matching capability:
[formula image BDA0002718155290000171: definition of H_{i,j}]
wherein H_{i,j} represents the matching value of the i-th computing node and the j-th algorithm grabbing path;
step 6: substituting the parameter set of the computing nodes and the parameter set of the algorithm grabbing paths into the matching model to determine a matching value set of the computing nodes and the algorithm grabbing paths:
[formula image BDA0002718155290000172: the matching value set]
and arranging the matching values in the matching value set from large to small, and generating the matching information with a gradient table as the output form.
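The exact expressions for P, X and H appear only as formula images in the publication, so the sketch below uses plausible stand-ins rather than the patented equations: a Gaussian similarity around the parameter means for P, a simple product for the coupling X, and a ratio for H, followed by the descending sort that produces the gradient table.

```python
import math

def matching_gradient_table(node_params, route_params, storage):
    a_bar = sum(node_params) / len(node_params)       # parameter mean of the computing nodes
    b_bar = sum(route_params) / len(route_params)     # parameter mean of the grabbing paths
    rows = []
    for i, a in enumerate(node_params):
        for j, b in enumerate(route_params):
            p = math.exp(-((a - a_bar) ** 2 + (b - b_bar) ** 2) / 2)  # assumed P_{i,j}
            x = a * b                                                 # assumed coupling X_{i,j}
            n = storage[i] * p                                        # assumed capability term
            h = x / n if n else 0.0                                   # assumed matching value H_{i,j}
            rows.append(((i, j), h))
    return sorted(rows, key=lambda r: -r[1])          # matching values from large to small

table = matching_gradient_table([0.9, 0.4], [0.7, 0.3, 0.5], storage=[2.0, 1.0])
for (i, j), h in table:
    print(f"node {i} <- route {j}: {h:.3f}")
```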
The principle and the beneficial effects of the technical scheme are as follows: because serverless computing adopts a distributed computing mode, the number of computing nodes is determined, and two sets are generated from the parameters of the computing nodes and the parameters of the algorithm grabbing paths. Determining the matching probability of any computing node and any algorithm grabbing path gives the probability that the node acquires a message algorithm from that path, from which the matching capability of each computing node with respect to the plurality of grabbing paths is determined, i.e. the proportion of grabbing paths the node can match and its adaptation capability. The coupling model determines the degree of coupling between the parameters of the computing nodes and the algorithm grabbing paths; the ratio of the coupling degree to the matching capability then gives the matching value of any computing node. Listing these values for all computing nodes and algorithm grabbing paths finally determines the most suitable algorithm grabbing path for any computing node.
Example 9:
as an embodiment of the present invention, the fetching a message algorithm to a computing node according to the matching information and the fetching order includes:
determining the sequence of matching values corresponding to the computing nodes and the algorithm grabbing paths from large to small according to the matching information;
judging whether the sequence of the matching values from large to small is the same as the grabbing sequence;
when the sequence is the same, capturing a message algorithm to the computing node;
when the sequence is different, determining calculation nodes and algorithm grabbing ways with different sequences, and calculating grabbing weights of the calculation nodes and the algorithm grabbing ways with different sequences;
and capturing a message algorithm to the computing node according to the capturing weight.
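The comparison of the two orders and the fallback to grab weights can be sketched as follows. The weight formula (matching value scaled down by how far apart the pair sits in the two orders) is an illustrative assumption; the patent only says a grab weight is calculated for the pairs whose orders differ.

```python
def fetch_plan(match_order, grab_order, match_values):
    if match_order == grab_order:
        return list(match_order)                    # identical orders: fetch as matched
    agreed = [p for p, q in zip(match_order, grab_order) if p == q]
    disputed = [p for p, q in zip(match_order, grab_order) if p != q]
    def grab_weight(pair):                          # assumed weight: value / (1 + rank gap)
        gap = abs(match_order.index(pair) - grab_order.index(pair))
        return match_values[pair] / (1 + gap)
    return agreed + sorted(disputed, key=grab_weight, reverse=True)

values = {("n0", "r1"): 0.9, ("n1", "r0"): 0.7, ("n2", "r2"): 0.4}
print(fetch_plan([("n0", "r1"), ("n1", "r0"), ("n2", "r2")],
                 [("n0", "r1"), ("n2", "r2"), ("n1", "r0")],
                 values))
```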
The principle and the beneficial effects of the technical scheme are as follows: the order of the matching values in the gradient table determines the adaptation order, while the algorithm capturing sequence determines the order in which algorithms are obtained; since the network bandwidth is fixed, a good capturing sequence obtains the algorithms more quickly and thus improves the calculation speed. When the two orders are the same, the two optimal orders coincide, so the calculation is optimal in both algorithm capture and adaptation order and the optimal calculation rate is obtained.
Example 10:
as an embodiment of the present invention, the performing of the serverless computation according to the captured message algorithm includes the following steps:
step S1: reading the client message and initializing a cluster center;
step S2: marking the cluster center after the cluster center is initialized;
step S3: substituting the marked cluster center into the message algorithm, and calculating to obtain a new cluster center;
step S4: judging whether the cluster center changes;
step S5: repeating steps S1 to S4 when the cluster center changes;
step S6: calculating the client message through the message algorithm when the cluster center is unchanged.
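Steps S1-S6 read as a k-means-style iteration: initialise cluster centres, recompute them from the message features, and stop once the centres no longer change. The sketch assumes one-dimensional numeric features and Euclidean assignment, neither of which is specified in the patent.

```python
import random

def cluster_messages(values, k=2, max_iter=100, seed=0):
    random.seed(seed)
    centres = random.sample(values, k)                       # S1: read messages, initialise centres
    for _ in range(max_iter):                                # S5: repeat while the centres change
        groups = {c: [] for c in centres}                    # S2: mark the current centres
        for v in values:
            nearest = min(centres, key=lambda c: abs(v - c))
            groups[nearest].append(v)
        new_centres = [sum(g) / len(g) if g else c for c, g in groups.items()]  # S3: new centres
        if new_centres == centres:                           # S4/S6: unchanged -> compute result
            break
        centres = new_centres
    return centres

print(cluster_messages([1.0, 1.2, 0.9, 8.0, 8.3, 7.9]))
```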
The principle and the beneficial effects of the technical scheme are as follows: in the final calculation step, the cluster center of the client message is determined after initialization; while the cluster center keeps changing, the initialization and judgment steps are repeated, and once the cluster center no longer changes, the client message is calculated through the message algorithm.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (8)

1. A message-based lightweight serverless computing method, comprising:
receiving a client message, and determining the content and the type of the message;
determining a required message algorithm and an algorithm capturing way according to the message content;
determining a grabbing sequence of the algorithm grabbing ways according to the message type;
respectively calling the message contents into different computing nodes within preset time;
matching the algorithm grabbing path with the computing node to determine matching information;
capturing a message algorithm to a computing node according to the matching information and the capturing sequence;
performing a calculation according to the captured message algorithm;
matching the algorithm grabbing path with the computing node according to the message type to determine matching information, wherein the method comprises the following steps:
step 1: determining a parameter set A of the computing nodes and a parameter set B of the algorithm grabbing paths based on the number of computing nodes and the number of algorithm grabbing paths:
A = {a_1, a_2, a_3, …, a_n};
B = {b_1, b_2, b_3, …, b_m};
wherein a_i represents a parameter of the i-th computing node; b_j represents a parameter of the j-th algorithm grabbing path; i = 1, 2, 3, …, n; j = 1, 2, 3, …, m; that is, there are n computing nodes and m algorithm grabbing paths in total;
step 2: substituting the computing nodes and the algorithm grabbing paths into a normal distribution function, and determining the matching probability P of any computing node and any algorithm grabbing path:
[formula image FDA0002718155280000011: definition of P_{i,j}]
wherein ā represents the parameter mean of the computing nodes; b̄ represents the parameter mean of the algorithm grabbing paths; P_{i,j} represents the matching probability of the i-th computing node and the j-th algorithm grabbing path;
step 3: determining the matching capability N of the computing node according to the matching probability:
N = ∑ R_i B_i ∫ P(a_i, b_j) dt;
wherein R_i represents the storage capacity of the i-th computing node; B_i represents the proportion of algorithm grabbing paths that the i-th computing node can match;
step 4: constructing a coupling model X according to the parameters of the computing nodes and the parameters of the algorithm grabbing paths:
[formula image FDA0002718155280000021: definition of X_{i,j}]
wherein X_{i,j} represents the coupling of the i-th computing node and the j-th algorithm grabbing path;
step 5: constructing a matching model H of the algorithm grabbing paths and the computing nodes according to the coupling model and the matching capability:
[formula image FDA0002718155280000022: definition of H_{i,j}]
wherein H_{i,j} represents the matching value of the i-th computing node and the j-th algorithm grabbing path;
step 6: substituting the parameter set of the computing nodes and the parameter set of the algorithm grabbing paths into the matching model to determine a matching value set of the computing nodes and the algorithm grabbing paths:
[formula image FDA0002718155280000023: the matching value set]
and arranging the matching values in the matching value set from large to small, and generating the matching information with a gradient table as the output form.
2. A message-based lightweight serverless computing method as claimed in claim 1 wherein said receiving a client message, determining a message content and a message type comprises:
based on the synonymy semantic division rule, dividing the client message into a plurality of different message sequences according to sentences;
performing relevance calculation on different sentences in the same message sequence, and determining first relevance parameters between different sentences in the same message sequence;
determining a second correlation parameter between different sequences according to the first correlation parameter;
substituting the first correlation parameter and the second correlation parameter into a discrete regression function to construct the message sequence and a discrete distribution relation graph of sentences in the message sequence;
and determining the statement area of each statement in the discrete distribution relational graph according to the discrete distribution relational graph, classifying the client messages based on the statement areas, and determining the message content of each classified client message.
3. The message-based lightweight serverless computing method according to claim 2, wherein the obtaining the target data address and the target domain name address of the same data comprises:
when a plurality of data addresses and domain name addresses are acquired for the same data, the plurality of domain name addresses are docked through any computing node, and the docking time is acquired;
and determining the domain name address corresponding to the shortest time value in the time values according to the time values of the docking time, and taking the domain name address corresponding to the shortest time value as a target domain name address.
4. The message-based lightweight serverless computing method according to claim 1, wherein the determining a crawling order of the algorithm crawling ways according to the message type comprises:
acquiring a message type, and determining the correlation of the message type;
determining a parallel relation and a branch relation in the correlation relation according to the correlation relation;
according to the parallel relation, calculating the entropy weight of the message content in the parallel relation;
determining a first grabbing order of algorithm grabbing paths corresponding to the message contents of the parallel relation according to the entropy weight;
constructing a tree graph of the message type according to the branch relation;
determining a second grabbing order of algorithm grabbing paths corresponding to the messages corresponding to the branch relations according to the tree-shaped graph;
and determining the grabbing sequence of the message type according to the first grabbing sequence and the second grabbing sequence.
5. The message-based lightweight serverless computing method according to claim 1, wherein the calling the message content to different computing nodes within a preset time comprises:
respectively determining time requirements for importing message contents of different message types into the computing nodes according to the message types;
according to the time requirement, establishing a time range for importing the message contents of different message types into the computing node;
according to the time range and the message type, the message content is imported into a computing node; wherein,
when the time for importing the message content into the computing node exceeds the time range, the message content is represented to have message noise, and the message noise is filtered and then is imported into the computing node again;
and when the time for importing the message content into the computing node is lower than the time range, the message content is obtained again and imported into the computing node.
6. The message-based lightweight serverless computing method according to claim 5, wherein the filtering the message noise and then reintroducing the filtered message noise to the compute node comprises:
acquiring message content and generating a message text;
judging the type of the message noise according to the message text; wherein,
the types of the message noise at least comprise a character-overlapping type, a multi-meaning type and a semantically unclear type;
and according to the type of the message noise, carrying out denoising processing in a replacement, addition or deletion mode, and importing the processed message content into a computing node.
7. The message-based lightweight serverless computing method according to claim 1, wherein the crawling message algorithm to the computing nodes according to the matching information and the crawling order comprises:
determining the sequence of matching values corresponding to the computing nodes and the algorithm grabbing paths from large to small according to the matching information;
judging whether the sequence of the matching values from large to small is the same as the grabbing sequence;
when the sequence is the same, capturing a message algorithm to the computing node;
when the sequence is different, determining calculation nodes and algorithm grabbing ways with different sequences, and calculating grabbing weights of the calculation nodes and the algorithm grabbing ways with different sequences;
and capturing a message algorithm to the computing node according to the capturing weight.
8. The message-based lightweight serverless computing method of claim 1, wherein the performing a computation according to a crawled message algorithm comprises the steps of:
step S1: reading the client message and initializing a cluster center;
step S2: marking the cluster center after the cluster center is initialized;
step S3: substituting the marked cluster center into the message algorithm, and calculating to obtain a new cluster center;
step S4: judging whether the cluster center changes;
step S5: repeating steps S1 to S4 when the cluster center changes;
step S6: and when the cluster center is unchanged, calculating the client message through the captured message algorithm.
CN202011079954.8A 2020-10-10 2020-10-10 Lightweight serverless computing method based on message Pending CN112363823A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011079954.8A CN112363823A (en) 2020-10-10 2020-10-10 Lightweight serverless computing method based on message

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011079954.8A CN112363823A (en) 2020-10-10 2020-10-10 Lightweight serverless computing method based on message

Publications (1)

Publication Number Publication Date
CN112363823A true CN112363823A (en) 2021-02-12

Family

ID=74506641

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011079954.8A Pending CN112363823A (en) 2020-10-10 2020-10-10 Lightweight serverless computing method based on message

Country Status (1)

Country Link
CN (1) CN112363823A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8862125B2 (en) * 2009-11-17 2014-10-14 Thales Method and system for distributing content with guarantees of delivery timescales in hybrid radio networks
CN103873523A (en) * 2012-12-14 2014-06-18 北京东方通科技股份有限公司 Client cluster access method and device
CN103984734A (en) * 2014-05-20 2014-08-13 中国科学院软件研究所 Cloud service message transmission method orienting high-performance computation
CN110383764A (en) * 2016-12-16 2019-10-25 华为技术有限公司 The system and method for usage history data processing event in serverless backup system
CN110058950A (en) * 2019-04-17 2019-07-26 上海沄界信息科技有限公司 Distributed cloud computing method and equipment based on serverless backup framework
CN111541760A (en) * 2020-04-20 2020-08-14 中南大学 Complex task allocation method based on server-free fog computing system architecture
CN111562990A (en) * 2020-07-15 2020-08-21 北京东方通软件有限公司 Lightweight serverless computing method based on message

Similar Documents

Publication Publication Date Title
US8447640B2 (en) Device, system and method of handling user requests
US20070078699A1 (en) Systems and methods for reputation management
US9946775B2 (en) System and methods thereof for detection of user demographic information
US9569499B2 (en) Method and apparatus for recommending content on the internet by evaluating users having similar preference tendencies
US9324112B2 (en) Ranking authors in social media systems
US8799285B1 (en) Automatic advertising campaign structure suggestion
EP1320041A2 (en) Searching profile information
CN106844407B (en) Tag network generation method and system based on data set correlation
CN102722553A (en) Distributed type reverse index organization method based on user log analysis
CN111125453A (en) Opinion leader role identification method in social network based on subgraph isomorphism and storage medium
CN109819015A (en) Information-pushing method, device, equipment and storage medium based on user's portrait
CN101022377A (en) Interactive service establishing method based on service relation body
Lim et al. A topological approach for detecting twitter communities with common interests
Buckley et al. Social media and customer behavior analytics for personalized customer engagements
JP4868484B2 (en) How to compare search profiles
CN111562990B (en) Lightweight serverless computing method based on message
CN103955461A (en) Semantic matching method based on ontology set concept similarity
CN103412883A (en) Semantic intelligent information publishing and subscribing method based on P2P technology
US20150074121A1 (en) Semantics graphs for enterprise communication networks
CN111882224A (en) Method and device for classifying consumption scenes
CN111160699A (en) Expert recommendation method and system
CN103942249A (en) Information service scheduling system based on body collective semantic matching
CN111523297A (en) Data processing method and device
CN112363823A (en) Lightweight serverless computing method based on message
Cohen Data management for social networking

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210212)