CN115982518A - Component operation method and related equipment


Info

Publication number
CN115982518A
Authority
CN
China
Prior art keywords
computing
component
target
computing component
hash value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310091157.9A
Other languages
Chinese (zh)
Inventor
董佳佳
张启超
殷山
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ant Blockchain Technology Shanghai Co Ltd
Original Assignee
Ant Blockchain Technology Shanghai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ant Blockchain Technology Shanghai Co Ltd
Priority to CN202310091157.9A
Publication of CN115982518A
Legal status: Pending

Classifications

    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

This specification provides a component operation method and related equipment, applied to a computing system that includes a computing component combination for implementing a computing flow; at least some of the computing components in the combination have computing dependencies on corresponding upstream computing components in the flow. The method comprises: determining whether a target computing component in the combination has a corresponding upstream computing component in the computing flow; if so, computing the hash value of the target computing component based on the parameters of the target computing component and the hash value of its corresponding upstream computing component in the computing flow; determining whether the newly computed hash value of the target computing component is the same as the hash value computed the last time the target computing component was executed; and if not, executing the target computing component again.

Description

Component operation method and related equipment
Technical Field
One or more embodiments of the present disclosure relate to the field of data processing technologies, and in particular, to a component operating method and related device.
Background
In various computing scenarios, complex computing flows often require the parameters of each computing component to be adjusted repeatedly, with the components re-run after every adjustment, until the final computing result reaches an ideal state. However, re-running computing components typically consumes a great deal of time and hardware resources. Generally, a computing component must be re-run in two cases: its own parameters have changed; or the parameters of an upstream component on which it depends have changed, in which case it must be re-run even if its own parameters are unchanged. Therefore, when a computing flow is re-executed, considerable time and hardware resources can be saved if computing components without any parameter adjustment do not need to be re-run. How to quickly and conveniently determine whether the current computing component needs to be re-run is thus a problem in urgent need of a solution.
Disclosure of Invention
In view of the above, one or more embodiments of this specification provide a component operation method and a related device.
To achieve the above object, one or more embodiments of this specification provide a component operation method applied to a computing system, where the computing system includes a computing component combination for implementing a computing flow, and at least some of the computing components in the combination have computing dependencies on corresponding upstream computing components in the computing flow; the method comprises the following steps:
determining whether a target computing component in the computing component combination has a corresponding upstream computing component in the computing flow; if so, computing the hash value of the target computing component based on the parameters of the target computing component and the hash value of the upstream computing component corresponding to the target computing component in the computing flow;
determining whether the newly computed hash value of the target computing component is the same as the hash value computed the last time the target computing component was executed; and if not, executing the target computing component again.
Correspondingly, this specification also provides a component operating apparatus applied to a computing system, where the computing system includes a computing component combination for implementing a computing flow, and at least some of the computing components in the combination have computing dependencies on corresponding upstream computing components in the computing flow; the apparatus comprises:
a first computing module, configured to determine whether a target computing component in the computing component combination has a corresponding upstream computing component in the computing flow, and if so, to compute the hash value of the target computing component based on the parameters of the target computing component and the hash value of the upstream computing component corresponding to the target computing component in the computing flow;
a first execution module, configured to determine whether the newly computed hash value of the target computing component is the same as the hash value computed the last time the target computing component was executed, and if not, to execute the target computing component again.
Accordingly, this specification also provides a computer device comprising: a memory and a processor, the memory having stored thereon a computer program executable by the processor; when the processor executes the computer program, the component operation method according to the above embodiments is performed.
Accordingly, this specification also provides a computer-readable storage medium on which a computer program is stored; the computer program, when executed by a processor, performs the component operation method according to the above embodiments.
In summary, each time the same computing flow is executed, the hash value of each of the computing components corresponding to the flow may be computed, and whether a component needs to be executed again is determined by comparing the newly computed hash value with the hash value obtained the last time the component was executed. If a component has an upstream component on which it depends in the computing flow, its hash value is computed from its own parameters together with the hash value of that upstream component. As a result, whenever the hash value of an upstream component changes, the hash values of the downstream components that depend on it necessarily change as well, so parameter changes are effectively propagated from top to bottom through the computing flow, forming a closed loop. Each time the whole computing flow is re-executed, the computing system can efficiently and conveniently determine whether any computing component in the flow needs to be re-executed, without relying on a front end or any other platform; the scheme is simple to implement and low in cost, and greatly reduces the time and hardware resources consumed by each execution of the computing flow.
Drawings
FIG. 1 is a schematic block diagram of a computing system provided in an exemplary embodiment;
FIG. 2 is a schematic diagram of a DAG comprised of computing components provided by an exemplary embodiment;
FIG. 3 is a flow diagram illustrating a method for operating a component provided by an exemplary embodiment;
FIG. 4 is a schematic diagram of a hash chain provided by an exemplary embodiment;
FIG. 5 is a flow diagram illustrating another method for operating components provided by an exemplary embodiment;
FIG. 6 is a schematic structural diagram of a component operating apparatus according to an exemplary embodiment;
FIG. 7 is a schematic structural diagram of a computer device according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. The following description refers to the accompanying drawings in which the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with one or more embodiments of the specification. Rather, they are merely examples of apparatus and methods consistent with certain aspects of one or more embodiments of the specification, as detailed in the claims which follow.
It should be noted that: in other embodiments, the steps of the corresponding methods are not necessarily performed in the order shown and described herein. In some other embodiments, the methods may include more or fewer steps than those described herein. Moreover, a single step described in this specification may be broken down into multiple steps in other embodiments; multiple steps described in this specification may be combined into a single step in other embodiments.
In addition, the user information (including but not limited to user equipment information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, displayed data, etc.) referred to in this application are information and data authorized by the user or fully authorized by all parties; the collection, use, and processing of the relevant data must comply with the relevant laws, regulations, and standards of the relevant countries and regions, and corresponding operation entrances are provided for users to choose whether to authorize or deny.
First, some terms in the present specification are explained so as to be easily understood by those skilled in the art.
(1) Privacy computing is a set of techniques for performing data analysis and computation while preventing the data itself from being leaked, so that the data is 'usable but invisible'. Privacy computing enables the value of data to be converted and released while fully protecting data and privacy security.
(2) A Directed Acyclic Graph (DAG) is a directed graph that contains no cycles. In the embodiments of this specification, a DAG composed of several computing components may be maintained in various computing systems; each node in the DAG may correspond to one computing component, and each directed edge between nodes may correspond to the upstream-downstream computing dependency between the computing components.
As described above, in various computing scenarios, complex computing flows often require the parameters of each computing component to be adjusted repeatedly and the components to be re-run, until the final computing result reaches an ideal state. However, re-running computing components often consumes a significant amount of time and hardware resources. Therefore, when the computing flow is re-executed, computing components without any parameter adjustment should not need to be re-run, so as to save a great deal of time and hardware resources.
For example, during model training, the parameters of each computing component in the computing flow involved in the model often need to be adjusted repeatedly so that the model can finally output a result that meets expectations. These parameters may include, for example, model hyper-parameters, the data split ratio, the number of iterations, and so on. Given the huge number of parameters across the computing components, if every parameter configured for each component in each execution had to be recorded, the database would need to be maintained frequently to store massive parameter configurations, which is computationally heavy, time-consuming, and error-prone. A common technical solution is therefore to have the front end recognize user behavior to decide whether a component's parameters have been adjusted: for example, the front end detects whether the user performed a configuration-modification operation on the parameter configuration interface for a given component, and whether the parameters of that component have been adjusted is then decided based on the recognition result returned by the front end. Although this reduces the amount of computation to some extent, the operation of the back-end computing components becomes overly dependent on the front end, and the implementation remains complex, so it cannot meet practical application requirements.
Based on the above, this specification provides a technical solution in which the hash value of each computing component is computed from the component's own parameters and the hash values of any upstream computing components it may depend on, and the hash value computed this time is compared with the hash value from the component's previous execution, so as to efficiently and conveniently determine whether each computing component needs to be executed again.
In implementation, in response to an instruction to execute a computing component combination used to implement a computing flow, a computing system may traverse the computing components in the combination. When it traverses to a target computing component, it may determine whether the target computing component has a corresponding upstream computing component in the computing flow; if so, the computing system may compute the hash value of the target computing component based on the parameters of the target computing component and the hash value of its corresponding upstream computing component in the computing flow. The computing system may then determine whether the newly computed hash value of the target computing component is the same as the hash value computed the last time the component was executed; if not, the computing system needs to execute the target computing component again; correspondingly, if the hash values are the same, the target computing component does not need to be executed again.
In the above technical solution, each time the same computing flow is executed, the hash value of each of the computing components corresponding to the flow may be computed, and whether a component needs to be executed again is determined by comparing the newly computed hash value with the hash value obtained the last time the component was executed. If a component has an upstream component on which it depends in the computing flow, its hash value is computed from its own parameters together with the hash value of that upstream component. As a result, whenever the hash value of an upstream component changes, the hash values of the downstream components that depend on it necessarily change as well, so parameter changes are effectively propagated from top to bottom through the computing flow, forming a closed loop. Each time the whole computing flow is re-executed, the computing system can efficiently and conveniently determine whether any computing component in the flow needs to be re-executed, without relying on a front end or any other platform; the scheme is simple to implement and low in cost, and greatly reduces the time and hardware resources consumed by each execution of the computing flow.
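As an illustration only (not part of the patent text), a minimal Python sketch of this per-component decision is given below; the function names, the choice of MD5, and plain string concatenation as the fusion step are assumptions made for demonstration.

    import hashlib

    def component_hash(params: str, upstream_hashes: list[str]) -> str:
        # Hash of a component: the hash values of its upstream components (if any)
        # fused with its own parameters.
        payload = "".join(upstream_hashes) + params
        return hashlib.md5(payload.encode("utf-8")).hexdigest()

    def should_rerun(name: str, new_hash: str, last_run_hashes: dict[str, str]) -> bool:
        # Re-execute only when the hash differs from the one recorded on the previous run
        # (a missing entry, e.g. on the first run, also forces execution).
        return last_run_hashes.get(name) != new_hash

Under this scheme, a component whose own parameters and whose upstream hash values are unchanged reproduces the same hash value and can therefore be skipped.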
Referring to fig. 1, fig. 1 is a schematic structural diagram of a computing system according to an exemplary embodiment. As shown in fig. 1, the computing system may include a computing component combination containing a plurality of computing components, such as computing component A, computing component B, and computing component C. The computing component combination can be used to implement a predetermined computing flow. A computing component is generally a simple encapsulation of data and methods, or an abstraction of an algorithm.
In an illustrated embodiment, the computing system shown in fig. 1 may be any computing system, and the plurality of computing components included therein may be used to perform any type of computation, which is not specifically limited in this specification.
In an illustrative embodiment, the computing system may include a privacy computing system and, accordingly, the computing component may be configured to perform privacy calculations. In an illustrated embodiment, the computing system may comprise a big data computing system and, accordingly, the computing component may be configured to perform big data computing. In an illustrated embodiment, the computing system may also be a cloud computing system, and accordingly, the computing component may be configured to perform cloud computing, and so on, which is not specifically limited in this specification.
For example, in a privacy computing scenario, a Trusted Execution Environment (TEE) may be established in advance on a computing device of a privacy computing platform. A TEE is generally a secure area on the processor of the computing device (e.g., a smart phone, a computer, or a server) whose main function is to provide a more secure space for the execution of data and code and to ensure their confidentiality and integrity. The plurality of computing components may be computing programs running in the TEE.
In an illustrated embodiment, at least some of the computing components in the computing component combination may have computing dependencies on their corresponding upstream computing components in the computing flow. When a downstream computing component has a computing dependency on an upstream computing component, this usually means that the downstream component needs to use the computation result of the upstream component during execution, or needs to further process the computation result output by the upstream component, and so on.
for example, the computing component a, the computing component B, the computing component C, and the like in fig. 1 may be respectively used to execute a part of the computing flow, and if the computing component B needs to use the computing result of the computing component a during execution, the computing component B has an upstream computing component a with a computing dependency relationship in the computing flow.
For example, if the computation component C needs to use the computation results of the computation component a and the computation component B when executing, the computation component C has an upstream computation component a and an upstream computation component B having a computation dependency relationship with the computation component a and the upstream computation component B in the computation flow.
For example, if the computing component a is a head component in the computing flow, such as the computing component a is a computing component for executing a first step in the computing flow, the computing component a does not generally depend on the computing result of any other computing component when executing, and the computing component a does not have an upstream computing component in the computing flow having a computing dependency relationship with the computing component a.
In an illustrated embodiment, the computing system described above can also maintain a DAG composed of multiple computing components included in the computing component combination. Each node in the DAG may correspond to one computing component in the computing component combination, and a connection line with a direction between nodes may correspond to upstream and downstream computing dependencies between the computing components.
In an illustrated embodiment, the computing system may generate a corresponding DAG based on the computing dependency relationship existing among the multiple computing components, and store the DAG in a local storage device or another remote storage device, and the like, which is not specifically limited in this specification.
In an illustrative embodiment, referring to FIG. 2, FIG. 2 is a schematic diagram of a DAG composed of computing components according to an exemplary embodiment. As shown in fig. 2, the DAG is composed of computing component A, computing component B, computing component C, computing component D, computing component E, computing component F, and computing component G.
In one illustrated embodiment, computing components A, B, C, D, E, F, and G may be computing components within the computing system shown in fig. 1 and described above. When the computing system is a privacy computing system, computing components A, B, C, D, E, F, and G may be used to perform privacy computations.
As shown in fig. 2, computing component A and computing component B are head components in the computing flow, and their execution does not depend on any other computing component.
As shown in fig. 2, computing component C has computing dependencies on both computing component A and computing component B, which are upstream computing components (or parent components) of computing component C. Illustratively, computing component C needs to use the computing results of computing component A and computing component B when performing the privacy computation.
As shown in fig. 2, computing component D has a computing dependency on computing component B, which is an upstream computing component of computing component D. Illustratively, computing component D needs to use the computing result of computing component B when performing the privacy computation.
As shown in fig. 2, computing component E has computing dependencies on both computing component A and computing component C, which are upstream computing components of computing component E. Further, as shown in fig. 2, computing component E directly depends on computing component A and computing component C, and indirectly depends on computing component B, so computing component A and computing component C may also be referred to as direct parent components of computing component E, and the like, which is not specifically limited in this specification. Illustratively, computing component E needs to use the computing results of computing component A and computing component C when performing the privacy computation.
As shown in fig. 2, computing component F has computing dependencies on both computing component E and computing component C, which are upstream computing components of computing component F. Further, as shown in fig. 2, computing component F directly depends on computing component C and computing component E, and indirectly depends on computing component A and computing component B, so computing component C and computing component E may also be referred to as direct parent components of computing component F, and the like, which is not specifically limited in this specification. Illustratively, computing component F needs to use the computing results of computing component E and computing component C when performing the privacy computation.
As shown in fig. 2, computing component G has computing dependencies on both computing component D and computing component F, which are upstream computing components of computing component G. Further, as shown in fig. 2, computing component G directly depends on computing component D and computing component F, and indirectly depends on computing component A, computing component B, computing component C, and computing component E, so computing component D and computing component F may also be referred to as direct parent components of computing component G, and the like, which is not limited in this specification. Illustratively, computing component G needs to use the computing results of computing component D and computing component F when performing the privacy computation.
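As a purely hypothetical illustration (the specification does not prescribe any particular storage format for the DAG), the dependency structure of fig. 2 could be recorded in Python as a simple parent map, from which head components and direct parent components can be read off directly:

    # Direct parent (upstream) components of each node in the DAG of fig. 2.
    PARENTS: dict[str, list[str]] = {
        "A": [],          # head component
        "B": [],          # head component
        "C": ["A", "B"],
        "D": ["B"],
        "E": ["A", "C"],
        "F": ["C", "E"],
        "G": ["D", "F"],
    }

    def is_head(component: str) -> bool:
        # A head component has no upstream computing component in the computing flow.
        return not PARENTS[component]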
Referring to fig. 3, fig. 3 is a schematic flowchart illustrating a component operation method according to an exemplary embodiment. As shown in fig. 3, the method may be applied in a computing system, such as the computing system described in fig. 1. In particular, the method may be applied in a computer device in the computing system. The computer device may be, for example, a smart wearable device, a smart phone, a tablet computer, a notebook computer, a desktop computer, a vehicle-mounted computer, a server, and the like, which is not specifically limited in this specification. As shown in fig. 3, the method may specifically include the following steps S101 to S102.
Step S101, determining whether a target calculation component in the calculation component combination has a corresponding upstream calculation component in the calculation flow; if yes, calculating the hash value of the target calculation component based on the parameters of the target calculation component and the hash value of the upstream calculation component corresponding to the target calculation component in the calculation flow.
The computing system may be used to implement a series of computing flows, and accordingly a computing component combination for implementing a computing flow may be included within the computing system. The computing component combination may include a plurality of computing components, each of which may be used to execute at least part of the computing tasks in the computing flow.
In an illustrative embodiment, the computing system may include a privacy computing system, the computing process may be a privacy computing process, and accordingly, each computing component in the combination of computing components may be used to perform privacy computations. In an illustrated embodiment, the computing system may include a big data computing system, the computing process may be a big data computing process, and accordingly, each computing component in the computing component combination may be used to perform big data computing, and so on, which are not described herein again.
As described above, in order to obtain an ideal computing result, developers often need to repeatedly adjust the parameters of the computing components and execute (or run) the whole computing component combination again based on the adjusted parameters. If the computing result obtained after adjusting the parameters and re-executing is still not ideal, the parameters need to be adjusted again and the whole computing component combination re-executed, and so on, until the computing component combination finally produces a satisfactory computing result.
Further, as described above, each time the computing system re-executes the whole computing component combination, if the parameters of a certain computing component have not been adjusted and the parameters of the upstream computing components on which it depends have not been adjusted either, that computing component does not need to be executed again this time, thereby saving overall time and hardware resources for each execution of the computing component combination.
In an illustrated embodiment, each time the computing system executes a computing component combination corresponding to a computing process, the computing system may traverse each computing component in the computing component combination and determine whether each computing component needs to be executed again this time.
In an illustrative embodiment, a computing system may traverse each computing component in the combination of computing components in response to instructions for execution on the combination of computing components, and execute a corresponding flow for the traversed target computing component to determine whether to re-execute the target computing component. The target computing component may be one computing component in the computing component combination, such as computing component a, computing component C, or computing component G shown in fig. 2.
In an illustrated embodiment, a DAG composed of a combination of computing components, such as the DAG shown in fig. 2, may be maintained in the computing system, and will not be described herein again.
In an illustrated embodiment, based on the DAG, the computing system may traverse the computing components in the DAG in an order determined by the computing dependencies between them, and perform the corresponding flow for each traversed target computing component to determine whether to re-execute it.
In an illustrative embodiment, when the computing system traverses to a target computing component in the computing component combination, it may first be determined whether the target computing component has a corresponding upstream computing component in the computing flow.
In an illustrated embodiment, a computing system may determine whether a target compute component has a corresponding upstream compute component in a compute flow based on a DAG it maintains.
Illustratively, taking the target computing component as computing component A in fig. 2 above as an example, when the computing system traverses to computing component A, the computing system may determine, based on the DAG it maintains (e.g., the DAG shown in fig. 2), that computing component A has no corresponding upstream computing component in the computing flow.
Illustratively, taking the target computing component as computing component C in fig. 2 as an example, when the computing system traverses to computing component C, the computing system may determine, based on the DAG it maintains (e.g., the DAG shown in fig. 2), that computing component C has corresponding upstream computing components in the computing flow, namely computing component A and computing component B.
Illustratively, taking the target computing component as computing component G in fig. 2 as an example, when the computing system traverses to computing component G, the computing system may determine, based on the DAG it maintains (e.g., the DAG shown in fig. 2), that computing component G has corresponding upstream computing components in the computing flow, namely computing component D and computing component F.
Further, in one illustrated embodiment, if the computing system determines that the target computing component has a corresponding upstream computing component in the computing flow, the computing system may compute the hash value of the target computing component based on the parameters (params) of the target computing component and the hash value of the target computing component's corresponding upstream computing component in the computing flow.
For example, taking the target computing component as computing component C in fig. 2 as an example, after determining that computing component C has corresponding upstream computing components (including computing component A and computing component B) in the computing flow, the computing system may obtain the current parameters of computing component C and the current hash values of computing component A and computing component B. The computing system may then calculate the current hash value of computing component C based on the current parameters of computing component C and the current hash values of computing component A and computing component B.
Obviously, by the time the traversal reaches computing component C, computing component A and computing component B have already computed their current hash values.
Further, in an illustrated embodiment, if the computing system determines that the target computing component does not have a corresponding upstream computing component in the computing flow, the computing system may compute a hash value for the target computing component based on parameters of the target computing component.
For example, taking the target computing component as computing component A in fig. 2 as an example, after the computing system determines that computing component A has no corresponding upstream computing component in the computing flow, the computing system may obtain the current parameters of computing component A. The computing system may then calculate the current hash value of computing component A based on the current parameters of computing component A.
Further, in an illustrated embodiment, if the computing system determines that the target computing component has N corresponding upstream computing components in the computing process, the computing system may obtain corresponding data to be computed based on the hash values of the N upstream computing components and the parameters of the target computing component. Then, the computing system may perform a computation on the data to be computed based on a preset hash algorithm, so as to obtain a hash value of the target computing component. Wherein N is an integer greater than or equal to 1.
For example, the preset hash algorithm may be the one-way MD5 (Message-Digest Algorithm 5) algorithm, or any other possible hash algorithm, which is not specifically limited in this specification. The MD5 algorithm, also known as the message-digest algorithm, was developed from MD2, MD3 and MD4. The MD5 algorithm generally has the following properties: an input of arbitrary length is processed into a 128-bit output (a digital fingerprint); different inputs yield different outputs (uniqueness); and the input cannot be deduced from the 128-bit output (irreversibility).
For example, the computing system may fuse the hash values of the N upstream computing components corresponding to the target computing component with the parameters of the target computing component to obtain the data to be computed. The manner of fusion is not particularly limited in this specification. For example, the computing system may directly concatenate the hash values of the N upstream computing components with the parameters of the target computing component to obtain the data to be computed; or the computing system may accumulate (add up) the hash values of the N upstream computing components and the parameters of the target computing component to obtain the data to be computed; or the computing system may map the hash values of the N upstream computing components and the parameters of the target computing component into corresponding values and then concatenate or accumulate those values to obtain the data to be computed, and so on, which is not specifically limited in this specification. In an illustrated embodiment, the computing system may also obtain the data to be computed corresponding to the hash values of the N upstream computing components and the parameters of the target computing component in any other possible manner besides the above, which is not specifically limited in this specification.
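The fusion modes mentioned above can be sketched as follows; this assumes the parameters have been serialized to a string, the upstream hash values are MD5 hex digests, and MD5 is the preset hash algorithm, none of which is mandated by the specification.

    import hashlib

    def fuse_by_concatenation(upstream_hashes: list[str], params: str) -> str:
        # Directly splice the upstream hash values and the component's parameters together.
        return "".join(upstream_hashes) + params

    def fuse_by_accumulation(upstream_hashes: list[str], params: str) -> str:
        # Map the upstream hash values and the parameters to integers and add them up.
        total = sum(int(h, 16) for h in upstream_hashes)
        total += int(hashlib.md5(params.encode("utf-8")).hexdigest(), 16)
        return str(total)

    def target_component_hash(upstream_hashes: list[str], params: str,
                              fuse=fuse_by_concatenation) -> str:
        # Hash the fused "data to be computed" with the preset hash algorithm.
        data_to_be_computed = fuse(upstream_hashes, params)
        return hashlib.md5(data_to_be_computed.encode("utf-8")).hexdigest()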
Illustratively, the combination of computing components shown in fig. 2 is still used as an example. For the upstream and downstream computing dependencies among the computing components in the combination, please refer to the description of the embodiment of fig. 2, which is not repeated here. The hash value of each computing component in the combination may be as follows:
Hash value of component A: hashA = hash(paramsA)
Hash value of component B: hashB = hash(paramsB)
Hash value of component C: hashC = hash(hashA + hashB + paramsC)
Hash value of component D: hashD = hash(hashB + paramsD)
Hash value of component E: hashE = hash(hashA + hashC + paramsE)
Hash value of component F: hashF = hash(hashC + hashE + paramsF)
Hash value of component G: hashG = hash(hashD + hashF + paramsG)
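For illustration, the formulas above can be reproduced with a short Python script in which placeholder parameter strings, MD5, and string concatenation stand in for paramsX, hash() and '+':

    import hashlib

    PARENTS = {"A": [], "B": [], "C": ["A", "B"], "D": ["B"],
               "E": ["A", "C"], "F": ["C", "E"], "G": ["D", "F"]}
    params = {name: f"params{name}" for name in PARENTS}   # placeholder parameter strings

    hashes: dict[str, str] = {}
    for name in "ABCDEFG":          # a dependency-respecting order of the DAG in fig. 2
        fused = "".join(hashes[p] for p in PARENTS[name]) + params[name]
        hashes[name] = hashlib.md5(fused.encode("utf-8")).hexdigest()

    # hashes["C"] plays the role of hashC = hash(hashA + hashB + paramsC),
    # hashes["G"] plays the role of hashG = hash(hashD + hashF + paramsG), and so on.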
Step S102, determining whether the calculated hash value of the target calculation component is the same as the hash value calculated by the target calculation component when the target calculation component is executed last time; if not, the target computing component is executed again.
As described above, when a computing system traverses to a target computing component in a computing component combination in response to an instruction to execute the combination, it may compute the current hash value of the target computing component. Further, in an illustrative embodiment, after the computing system calculates the current hash value of the target computing component, it may determine whether that current hash value is the same as the hash value calculated the last time the target computing component was executed.
In an illustrated embodiment, each time the computing system executes the computing component combination, it may store, within the computing system, the hash value computed for each computing component in that execution. In an illustrated embodiment, after a computing component is successfully executed, the computing system may store the hash value computed for that component this time. Accordingly, after calculating the current hash value of the target computing component, the computing system may obtain the hash value of the target computing component saved the last time the component was executed, so as to determine whether the current hash value of the target computing component is the same as the hash value calculated when it was last executed.
In an illustrative embodiment, the computing system may generate a hash chain holding the hash values of the respective computing components based on the computing dependencies between them. Accordingly, in an illustrated embodiment, the computing system may determine whether the newly computed hash value of the target computing component is the same as the hash value of the target computing component in the hash chain saved the last time the computing component combination was executed.
Referring to fig. 4, fig. 4 is a schematic diagram of a hash chain according to an exemplary embodiment. As shown in fig. 4, still taking the computing components shown in fig. 2 as an example, after computing component A, computing component B, computing component C, and the like are successfully executed, the computing system may store the hash values obtained in the current execution, such as hashA, hashB, and hashC. Further, as shown in fig. 4, the computing system may form a hash chain storing the hash values of the computing components, such as hashA, hashB, and hashC, based on the upstream and downstream computing dependencies among computing component A, computing component B, and computing component C.
For example, when the computing system traverses to computing component F, it may determine whether the hash value currently computed for computing component F is the same as the hash value hashF of computing component F in the hash chain shown in fig. 4 that was saved the last time the computing component combination was executed.
It should be understood that, as described in step S101 above, the hash value of each computing component is related not only to its own parameters but also to the hash values of any upstream computing components it may have. Therefore, if the hash value of any computing component on the hash chain shown in fig. 4 changes, the hash values of the downstream computing components that have computing dependencies on it will generally change as well.
For example, if the hash value of computing component D computed in the current execution differs from its hash value in the previous execution, the hash value of computing component G will also change. In that case, when the computing system traverses to computing component G, since the hash value of its upstream computing component D has changed, the computing system may directly re-execute computing component G and compute its current hash value at the same time, or re-compute the current hash value of computing component G after its execution completes, for reference in the next execution.
For another example, if the hash value of computing component A in fig. 4 changes from the last execution, for example to hashA', then the hash values of computing component C and computing component E will also change; because the hash values of computing components C and E change, the hash value of computing component F will change in turn; and because the hash value of computing component F changes, the hash value of computing component G will also change. This will not be elaborated further here.
Further, in one illustrated embodiment, if the computing system determines that the current hash value of the target computing component is not the same as the hash value computed by the target computing component at its last execution, the computing system re-executes the target computing component.
For example, taking fig. 4 as an example, if the computing system determines that the current hash value of computing component B is not the same as the hash value computed by computing component B during the last execution, the computing system re-executes computing component B. For example, if the current hash value of computing component B is hashB', which differs from the hash value hashB calculated in the previous execution shown in fig. 4, the computing system re-executes computing component B. After re-executing computing component B, the computing system may traverse to computing component C to calculate its current hash value, and so on, until it traverses to computing component G and completes this execution of the computing component combination; details are not repeated here.
Further, in an illustrated embodiment, if the computing system determines that the current hash value of the target computing component is the same as the hash value calculated by the target computing component in its last execution, the computing system does not execute the target computing component this time.
For example, taking fig. 4 as an example, if the computing system determines that the current hash value of computing component B is the same as the hash value calculated by computing component B in the last execution, the computing system does not execute computing component B again, but traverses down to computing component C to calculate its current hash value, and so on, until it traverses to computing component G and completes this execution of the computing component combination; details are not repeated here.
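As a sketch of the bookkeeping described above, the hash chain saved on the previous execution might be persisted and consulted as follows; the JSON file, its name, and the helper functions are assumptions, since the specification only requires that the hash values saved after the last successful execution be available for comparison.

    import json
    from pathlib import Path

    CHAIN_FILE = Path("hash_chain.json")   # hypothetical persistence location

    def load_last_chain() -> dict[str, str]:
        # Hash values saved the last time the computing component combination was executed.
        return json.loads(CHAIN_FILE.read_text()) if CHAIN_FILE.exists() else {}

    def save_chain(hashes: dict[str, str]) -> None:
        # Persist the hash value of every successfully executed component for the next run.
        CHAIN_FILE.write_text(json.dumps(hashes, indent=2))

    def hash_changed(component: str, new_hash: str, last_chain: dict[str, str]) -> bool:
        # A missing entry (e.g. on the first run) is treated as changed, forcing execution.
        return last_chain.get(component) != new_hash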
Further, referring to fig. 5, fig. 5 is a flow chart illustrating another method for operating components according to an exemplary embodiment. The method may be applied to a computing system, for example, the computing system described in fig. 1, and is not described herein again. The method for operating the components provided in this specification will be described in detail below with reference to fig. 5. As shown in fig. 5, the method may include the following steps S201 to S208.
Step S201, receiving an instruction to execute a computing component combination, where the computing component combination includes M computing components.
The computing system includes a computing component combination for implementing the computing flow, and the combination includes a plurality of computing components; for details, reference may be made to the description of the embodiments corresponding to fig. 1 to fig. 4, which is not repeated here. Illustratively, the computing component combination may include M computing components, where M is an integer greater than 1.
In an illustrative embodiment, the computing system may receive an instruction to execute the computing component combination. For example, the computing system may receive such an instruction in response to an input operation by a user, for example in response to the user completing the current round of adjustments to the parameters of at least some of the computing components in the combination.
Step S202, determining whether the ith computing component in the computing component combination has a corresponding upstream computing component in the computing flow.
In response to an instruction to execute the computing component combination, the computing system begins traversing each of the M computing components included in the combination.
In an illustrated embodiment, the computing system may sequentially traverse each of the M computing components based on the DAG (e.g., the DAG shown in fig. 2) that it maintains and that is made up of the M computing components. In an illustrated embodiment, the schematic diagram shown in fig. 2 may also be referred to as a component flow, and the computing components in the component flow may be ordered according to their computing dependencies on the other computing components in the computing flow. For example, computing component A shown in fig. 2 may be the 1st computing component traversed by the computing system, computing component B the 2nd, computing component C the 3rd, computing component D the 4th, computing component E the 5th, computing component F the 6th, and computing component G the 7th and last computing component traversed.
In an illustrated embodiment, when the computing system traverses to the ith computing component in the M computing components, it may be determined whether the ith computing component has a corresponding upstream computing component in the computing flow; if yes, step S203 is executed, and if no, step S204 is executed.
For example, if the ith component is computing component C shown in fig. 2 (i.e., i = 3), when the computing system traverses to computing component C, it may be determined that computing component C has corresponding upstream computing components (including computing component A and computing component B) in the computing flow, and the computing system performs the following step S203.
For example, if the ith component is the computing component B shown in fig. 2 (i.e., i = 2), when the computing system traverses to the computing component B, it may be determined that the computing component B does not have a corresponding upstream computing component in the computing flow, and the computing system performs the following step S204.
Step S203, calculating a hash value of the ith calculation component based on the parameter of the ith calculation component and the hash value of the corresponding upstream calculation component of the ith calculation component in the calculation flow.
In an illustrated embodiment, if the computing system determines that the ith computing component has a corresponding upstream computing component in the computing process, for example, there are N corresponding upstream computing components, the computing system may calculate the hash value of the ith computing component based on the parameter of the ith computing component and the hash values of the N upstream computing components.
In an illustrated embodiment, the computing system may fuse (e.g., concatenate or accumulate) the hash values of the N upstream computing components with the parameters of the ith computing component to obtain the data to be computed. The computing system may then calculate the data to be computed based on a preset hash algorithm (e.g., the MD5 algorithm) to obtain the hash value of the ith computing component.
In an illustrated embodiment, the computing system may fuse the hash values of the N upstream computing components in sequence based on a preset first order rule to obtain a fusion result, and then fuse the fusion result with the parameters of the ith computing component to obtain the data to be computed. This ensures that each hash computation is consistent, and avoids the situation where the contents of the upstream components' hash values are unchanged but a change in the concatenation or accumulation order affects the hash result, thereby ensuring that the hash value of each computing component is computed correctly every time.
For example, the computing system may fuse the hash values of the N upstream computing components in the alphabetical order of their component names to obtain the fusion result. For example, taking the ith computing component as computing component C in fig. 2, the computing system may merge the hash values of computing component A and computing component B in sequence, with computing component A before computing component B, to obtain the fusion result. In an illustrated embodiment, the hash values of the N upstream computing components may also be fused in sequence based on any other possible order rule, which is not specifically limited in this specification.
Further, in an illustrated embodiment, the parameters of the ith computing component may include multiple parameters of different categories, for example, model hyper-parameters, the data split ratio, and the like. Therefore, after obtaining the fusion result of the hash values of the N upstream computing components, the computing system may, based on a preset second order rule, further fuse the multiple parameters of different categories of the ith computing component in sequence onto the fusion result to obtain the data to be computed. Similarly, this ensures that each hash computation is consistent, and avoids the situation where the contents of the parameters of different categories of the ith computing component are unchanged but a change in the concatenation or accumulation order affects the hash result, thereby ensuring that the hash value of each computing component is computed correctly every time.
For example, the computing system may fuse the parameters of different categories of the ith computing component in the alphabetical order of their category names, on the basis of the fusion result of the hash values of the N upstream computing components, to obtain the data to be computed. In an illustrated embodiment, the multiple parameters of different categories of the ith computing component may also be fused in sequence based on any other possible order rule, which is not specifically limited in this specification.
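A sketch of the two order rules is given below, assuming the upstream hash values are keyed by component name, the parameters are keyed by category name, and both order rules are alphabetical as in the examples above; these representations are assumptions made for illustration.

    import hashlib

    def build_data_to_be_computed(upstream_hashes: dict[str, str],
                                  params: dict[str, str]) -> str:
        # First order rule: fuse the upstream hash values in alphabetical order of component name.
        fused_upstream = "".join(upstream_hashes[name] for name in sorted(upstream_hashes))
        # Second order rule: append the parameters in alphabetical order of category name.
        fused_params = "".join(f"{category}={params[category]}" for category in sorted(params))
        return fused_upstream + fused_params

    def stable_component_hash(upstream_hashes: dict[str, str],
                              params: dict[str, str]) -> str:
        data = build_data_to_be_computed(upstream_hashes, params)
        return hashlib.md5(data.encode("utf-8")).hexdigest()

Because the fusion order is fixed by the component names and category names rather than by traversal order, two executions with identical inputs always produce identical hash values.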
Step S204, calculating the hash value of the ith calculation component based on the parameter of the ith calculation component.
In an illustrated embodiment, if the computing system determines that the ith computing component does not have a corresponding upstream computing component in the computing flow, the computing system may calculate the hash value of the ith computing component based on the parameter of the ith computing component. Specifically, reference may be made to the description of the embodiment corresponding to fig. 3, which is not repeated herein.
In step S205, it is determined whether the calculated hash value of the ith calculation component is the same as the hash value calculated by the ith calculation component in the last execution.
After calculating the hash value of the ith computing component, the computing system may determine whether the hash value of the ith computing component calculated this time is the same as the hash value calculated by the ith computing component in its last execution; if yes, step S206 is executed, and if no, step S207 is executed.
In an illustrative embodiment, the computing system may determine whether the hash value of the ith computing component calculated this time is the same as the hash value calculated by the ith computing component when it was executed last time, based on a hash chain including the hash values of the M computing components saved when it was executed last time. Specifically, reference may be made to the description of the embodiment corresponding to fig. 3 and fig. 4, which is not repeated herein.
In step S206, the ith calculation component is not executed.
If the computing system determines that the hash value of the ith computing component calculated this time is the same as the hash value calculated by the ith computing component in the last execution, the ith computing component does not need to be executed again. Specifically, reference may be made to the description of the embodiments corresponding to fig. 3 and fig. 4, which is not repeated herein.
Step S207, the ith calculation component is executed.
If the computing system determines that the hash value of the ith computing component calculated this time is different from the hash value calculated by the ith computing component in the last execution, the ith computing component still needs to be executed this time. Specifically, reference may be made to the description of the embodiments corresponding to fig. 3 and fig. 4, which is not repeated herein.
In step S208, it is determined whether i is equal to M.
The computing system determines whether the currently traversed ith computing component is the last computing component (i.e., the Mth computing component) among the M computing components included in the computing component combination; if yes, the computing system may end the execution; if no, the computing system traverses to the next computing component (i.e., the (i+1)th computing component) and repeats the above steps S202-S207, until i = M and the execution ends.
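Putting steps S202-S208 together, the traversal may be pictured by the following sketch, which reuses the illustrative component_hash and needs_rerun helpers above and assumes that the M computing components are supplied in topological order (upstream components before their downstream components):

```python
def run_combination(components: list, last_run_chain: dict, execute) -> dict:
    """components: list of dicts with keys 'name', 'params' and 'upstream',
    given in topological order; returns the new hash chain."""
    new_chain = {}
    for comp in components:                                     # i = 1 .. M
        upstream = {u: new_chain[u] for u in comp["upstream"]}  # S202
        h = component_hash(comp["params"], upstream)            # S203 / S204
        if needs_rerun(comp["name"], h, last_run_chain):        # S205
            execute(comp)                                       # S207
        # otherwise S206: the result cached from the last run is reused
        new_chain[comp["name"]] = h
    return new_chain  # saved as the hash chain for the next execution
```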
In summary, each time the same computing flow is executed, the hash value of each of the multiple computing components corresponding to the computing flow may be calculated, and whether a computing component needs to be executed again is determined by comparing the hash value calculated this time with the hash value calculated when that computing component was executed last time. If a computing component has an upstream computing component with which it has a dependency relationship in the computing flow, the hash value of the computing component may be calculated based on the parameters of the computing component and the hash value of the upstream computing component. In this way, when the hash value of an upstream computing component changes, the hash value of any downstream component that depends on it necessarily changes as well, so that parameter changes are effectively propagated from top to bottom through the computing flow and form a self-contained closed loop. Each time the whole computing flow is executed again, the computing system can efficiently and conveniently determine whether any computing component in the computing flow needs to be executed again, without relying on a front end or other platforms; the method is simple to implement and low in cost, and the time and hardware resources consumed by each execution of the computing flow can be greatly reduced.
Corresponding to the implementation of the above method flow, an embodiment of the present specification further provides a component operating apparatus, which is applied to a computing system, where the computing system includes a computing component combination for implementing a computing flow; at least some of the computing components in the computing component combination have a computing dependency relationship with corresponding upstream computing components in the computing flow. Referring to fig. 6, fig. 6 is a schematic structural diagram of a component operating apparatus according to an exemplary embodiment. The apparatus 30 may be configured to, in response to an instruction to execute the computing component combination, traverse each computing component in the computing component combination and determine, for a traversed target computing component, whether to execute the target computing component again. As shown in fig. 6, the apparatus 30 includes:
a first computing module 301, configured to determine whether the target computing component has a corresponding upstream computing component in the computing flow; if yes, calculate the hash value of the target computing component based on the parameters of the target computing component and the hash value of the upstream computing component corresponding to the target computing component in the computing flow;
a first execution module 304, configured to determine whether the calculated hash value of the target computing component is the same as the hash value calculated by the target computing component in the last execution; if not, the target computing component is executed again.
In an illustrated embodiment, the computing system includes a privacy computing system, and the computing components are used to perform privacy computations.
In an illustrated embodiment, the apparatus 30 further comprises:
a saving module 303, configured to, after each computing component in the computing component combination is successfully executed, save the hash value of each computing component, and generate, based on the computing dependency relationships among the computing components, a hash chain storing the hash value of each computing component.
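Only as a hedged illustration of what such a saving module might do, the new hash chain could be persisted after a successful run so that the next execution can load it for comparison; the file name hash_chain.json and the function names are assumptions, not part of this specification:

```python
import json

def save_hash_chain(new_chain: dict, path: str = "hash_chain.json") -> None:
    # Persist the hash values keyed by component name; the dependency order
    # is implied by the DAG, so a plain mapping suffices for later lookups.
    with open(path, "w", encoding="utf-8") as f:
        json.dump(new_chain, f, indent=2, sort_keys=True)

def load_hash_chain(path: str = "hash_chain.json") -> dict:
    try:
        with open(path, "r", encoding="utf-8") as f:
            return json.load(f)
    except FileNotFoundError:
        return {}  # first execution: every computing component must be run
```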
In an illustrated embodiment, the first execution module 304 is specifically configured to:
determining whether the calculated hash value of the target computing component is the same as the hash value of the target computing component in the hash chain saved when the computing component combination is executed last time.
In an illustrated embodiment, the computing system maintains a directed acyclic graph (DAG) composed of the computing component combination; each node in the DAG corresponds to one computing component in the computing component combination, and the directed edges between the nodes correspond to the upstream-downstream computing dependency relationships among the computing components.
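One possible in-memory representation of such a DAG, shown only as an illustrative sketch, is a mapping from each computing component to the set of its upstream components, from which a topological order can be derived so that upstream components are always processed first:

```python
from graphlib import TopologicalSorter  # available since Python 3.9

# Each key is a computing component; the associated set contains its upstream
# components, i.e. the directed edges of the DAG (cf. fig. 2: C depends on A and B).
dag = {
    "A": set(),
    "B": set(),
    "C": {"A", "B"},
}

# static_order() yields the components so that every upstream component
# appears before the components that depend on it.
traversal_order = list(TopologicalSorter(dag).static_order())
print(traversal_order)  # e.g. ['A', 'B', 'C']
```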
In an illustrated embodiment, the first computing module 301 is specifically configured to:
acquiring parameters of the target computing component and hash values of N corresponding upstream computing components of the target computing component in the computing process, and fusing the hash values of the N upstream computing components and the parameters of the target computing component to obtain data to be computed;
and calculating the data to be calculated based on a preset hash algorithm to obtain a hash value of the target calculation component.
In an illustrated embodiment, the first computing module 301 is specifically configured to:
and fusing the hash values of the N upstream computing components in sequence based on a preset first order rule to obtain a fusion result, and fusing the fusion result with the parameters of the target computing component to obtain the data to be computed.
In an illustrated embodiment, the first computing module 301 is specifically configured to:
and fusing the hash values of the N upstream computing components in sequence based on the alphabetical order of the component names of the N upstream computing components to obtain a fusion result.
In an illustrated embodiment, the first computing module 301 is specifically configured to:
and based on a preset second order rule, sequentially fusing the multiple parameters of different categories of the target computing component on the basis of the fusion result to obtain the data to be computed.
In an illustrated embodiment, the first computing module 301 is specifically configured to:
and based on the alphabetical order of the category names of the parameters of different categories of the target computing component, sequentially fusing the parameters of different categories of the target computing component on the basis of the fusion result to obtain the data to be computed.
In an illustrated embodiment, the apparatus 30 further comprises:
a second computing module 302, configured to calculate the hash value of the target computing component based on the parameters of the target computing component if the target computing component does not have a corresponding upstream computing component in the computing flow.
In an illustrated embodiment, the apparatus 30 further comprises:
a second execution module 305, configured to not execute the target computing component if the hash value of the target computing component calculated this time is the same as the hash value calculated by the target computing component in the last execution.
The detailed description of the functions and actions of the units in the apparatus 30 is provided in the embodiments corresponding to fig. 1 to fig. 5 and is not repeated herein. It should be understood that the above apparatus 30 may be implemented by software, by hardware, or by a combination of hardware and software. Taking a software implementation as an example, the apparatus is formed as a logical device by the central processing unit (CPU) of the device in which it is located reading the corresponding computer program instructions into memory and running them. In terms of hardware, in addition to the CPU and the memory, the device in which the above apparatus is located generally also includes other hardware, such as a chip for wireless signal transmission and reception and/or a board for implementing a network communication function.
The above apparatus embodiments are merely illustrative. The units described as separate parts may or may not be physically separate, and the parts shown as units may or may not be physical modules; they may be located in one place or distributed over multiple network modules. Some or all of the units or modules may be selected according to actual needs to achieve the purpose of the solution in this specification. A person of ordinary skill in the art can understand and implement the solution without inventive effort.
The apparatuses, units and modules illustrated in the above embodiments may be specifically implemented by a computer chip or an entity, or implemented by a product with certain functions. A typical implementation device is a computer, which may take the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email messaging device, game console, tablet computer, wearable device, or a combination of any of these devices.
Corresponding to the method embodiments, the embodiments of the present specification further provide a computer device. Referring to fig. 7, fig. 7 is a schematic structural diagram of a computer device according to an exemplary embodiment. As shown in fig. 7, the computer device 1000 includes a processor 1001 and a memory 1002, and may further include an input device 1004 (e.g., a keyboard) and an output device 1005 (e.g., a display). The processor 1001, the memory 1002, the input device 1004, and the output device 1005 may be connected by a bus or in other ways. As shown in fig. 7, the memory 1002 includes a computer-readable storage medium 1003, and the computer-readable storage medium 1003 stores a computer program executable by the processor 1001. The processor 1001 may be a general-purpose central processing unit, a microprocessor, or an integrated circuit for controlling the execution of the above method embodiments. When the stored computer program is executed, the processor 1001 may perform the steps of the component operation method in the embodiments of the present specification, including: determining whether a target computing component in a computing component combination has a corresponding upstream computing component in the computing flow; if yes, calculating the hash value of the target computing component based on the parameters of the target computing component and the hash value of the upstream computing component corresponding to the target computing component in the computing flow; determining whether the calculated hash value of the target computing component is the same as the hash value calculated by the target computing component when the target computing component was executed last time; if not, executing the target computing component again; and so on. For a detailed description of the steps of the above component operation method, please refer to the previous content, which is not repeated herein.
Corresponding to the above method embodiments, the embodiments of the present specification further provide a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program performs the steps of the component operation method in the embodiments of the present specification. Please refer to the descriptions of the embodiments corresponding to fig. 1 to fig. 5, which are not repeated herein.
The above description is only a preferred embodiment of the present disclosure, and should not be taken as limiting the present disclosure, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present disclosure should be included in the protection scope of the present disclosure.
In a typical configuration, a terminal device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data.
Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but also other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a/an ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.

Claims (15)

1. A component operation method, applied to a computing system, wherein the computing system comprises a computing component combination for implementing a computing flow; at least some of the computing components in the computing component combination have a computing dependency relationship with corresponding upstream computing components in the computing flow; the method comprises:
determining whether a target computing component in the computing component combination has a corresponding upstream computing component in the computing flow; if yes, calculating the hash value of the target computing component based on the parameters of the target computing component and the hash value of the upstream computing component corresponding to the target computing component in the computing flow;
determining whether the calculated hash value of the target computing component is the same as the hash value calculated by the target computing component when the target computing component was executed last time; if not, the target computing component is executed again.
2. The method of claim 1, wherein the computing system comprises a privacy computing system, and the computing components are used to perform privacy computations.
3. The method of claim 1, further comprising:
and after each computing component in the computing component combination is successfully executed, storing the hash value of each computing component, and generating a hash chain storing the hash value of each computing component based on the computing dependency relationship among the computing components.
4. The method of claim 3, the determining whether the computed hash value of the target computing component is the same as the hash value computed by the target computing component on the last execution, comprising:
determining whether the calculated hash value of the target computing component is the same as the hash value of the target computing component in the hash chain saved when the computing component combination was executed last time.
5. The method of claim 1, the computing system maintaining a Directed Acyclic Graph (DAG) composed of the combination of computing components; each node in the DAG corresponds to one computing component in the computing component combination, and the connecting lines with directions among the nodes correspond to upstream and downstream computing dependency relations among the computing components.
6. The method of claim 1, the computing the hash value of the target computing component based on the parameters of the target computing component and the hash value of the corresponding upstream computing component of the target computing component in the computing flow, comprising:
acquiring parameters of the target computing component and hash values of N corresponding upstream computing components of the target computing component in the computing flow, and fusing the hash values of the N upstream computing components with the parameters of the target computing component to obtain data to be computed;
and calculating the data to be calculated based on a preset hash algorithm to obtain the hash value of the target calculation component.
7. The method according to claim 6, wherein the fusing the hash values of the N upstream computing components with the parameters of the target computing component to obtain data to be computed comprises:
and fusing the hash values of the N upstream computing components in sequence based on a preset first order rule to obtain a fusion result, and fusing the fusion result with the parameters of the target computing component to obtain the data to be computed.
8. The method according to claim 7, wherein the fusing the hash values of the N upstream computing components in order based on a preset first order rule to obtain a fused result comprises:
and fusing the hash values of the N upstream computing components in sequence based on the alphabetical order of the component names of the N upstream computing components to obtain a fusion result.
9. The method of claim 7, wherein the parameters of the target computing component comprise a plurality of parameters of different categories; the fusing the fusion result with the parameters of the target computing component to obtain the data to be computed comprises:
and based on a preset second order rule, sequentially fusing the plurality of parameters of different categories of the target computing component on the basis of the fusion result to obtain the data to be computed.
10. The method according to claim 9, wherein the fusing, based on a preset second order rule, multiple parameters of different categories of the target computing component in order based on the fusion result to obtain the data to be computed, includes:
and based on the alphabetical order of the category names of the parameters of different categories of the target computing component, sequentially fusing the parameters of different categories of the target computing component on the basis of the fusion result to obtain the data to be computed.
11. The method of claim 1, further comprising:
and if the target computing component does not have a corresponding upstream computing component in the computing flow, computing the hash value of the target computing component based on the parameters of the target computing component.
12. The method of any of claims 1-11, further comprising:
and if the calculated hash value of the target computing component is the same as the hash value calculated by the target computing component in the last execution, not executing the target computing component.
13. A component running apparatus, applied to a computing system, wherein the computing system comprises a computing component combination for implementing a computing flow; at least some of the computing components in the computing component combination have a computing dependency relationship with corresponding upstream computing components in the computing flow; the apparatus comprises:
a first computing module, configured to determine whether a target computing component in the computing component combination has a corresponding upstream computing component in the computing flow; if yes, calculating the hash value of the target computing component based on the parameters of the target computing component and the hash value of the upstream computing component corresponding to the target computing component in the computing flow;
a first execution module, configured to determine whether the computed hash value of the target computing component is the same as the hash value computed by the target computing component in the last execution; if not, the target computing component is executed again.
14. An electronic device, comprising: a memory and a processor; the memory having stored thereon a computer program executable by the processor; the processor, when executing the computer program, performs the method of any of claims 1 to 12.
15. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 12.
CN202310091157.9A 2023-01-19 2023-01-19 Component operation method and related equipment Pending CN115982518A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310091157.9A CN115982518A (en) 2023-01-19 2023-01-19 Component operation method and related equipment


Publications (1)

Publication Number Publication Date
CN115982518A true CN115982518A (en) 2023-04-18

Family

ID=85972462

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310091157.9A Pending CN115982518A (en) 2023-01-19 2023-01-19 Component operation method and related equipment

Country Status (1)

Country Link
CN (1) CN115982518A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination