CN113010421A - Data processing method and device, electronic equipment and storage medium - Google Patents

Data processing method and device, electronic equipment and storage medium

Info

Publication number
CN113010421A
Authority
CN
China
Prior art keywords
database
data
quality
level
processed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110281829.3A
Other languages
Chinese (zh)
Other versions
CN113010421B
Inventor
盛海英
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing QIYI Century Science and Technology Co Ltd
Original Assignee
Beijing QIYI Century Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing QIYI Century Science and Technology Co Ltd filed Critical Beijing QIYI Century Science and Technology Co Ltd
Priority to CN202110281829.3A priority Critical patent/CN113010421B/en
Publication of CN113010421A publication Critical patent/CN113010421A/en
Application granted granted Critical
Publication of CN113010421B publication Critical patent/CN113010421B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/36 Preventing errors by testing or debugging software
    • G06F11/3604 Software analysis for verifying properties of programs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/36 Preventing errors by testing or debugging software
    • G06F11/3668 Software testing
    • G06F11/3672 Test management
    • G06F11/3684 Test management for test design, e.g. generating new test cases
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/36 Preventing errors by testing or debugging software
    • G06F11/3668 Software testing
    • G06F11/3672 Test management
    • G06F11/3688 Test management for test execution, e.g. scheduling of test suites
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00 Arrangements for software engineering
    • G06F8/60 Software deployment
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Software Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses a data processing method, a data processing device, an electronic device and a storage medium. The method comprises the following steps: performing quality verification on data to be processed in response to the data to be processed being transferred to a low-level database; and when the data to be processed passes the quality verification, transferring the data to be processed to a high-level database; wherein the quality verification corresponds to the quality level of the high-level database, which is higher than the quality level of the low-level database. In the embodiments of the application, different databases are set to correspond to different quality grades, the products stored in the database with the lower quality grade are automatically subjected to quality verification, and the products passing the quality verification are transferred to the database with the higher quality grade, so that the products are graded by using different databases and automatic verification of the quality grade of the products is realized. Therefore, the products screened from a database can be ensured to meet the quality standard corresponding to that database, and the success rate of product release is improved.

Description

Data processing method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of network technologies, and in particular, to a data processing method and apparatus, an electronic device, and a storage medium.
Background
The whole software release process comprises 2 parts: Continuous Integration (CI) and Continuous Delivery (CD). In the continuous integration part, the code submitted by developers is integrated and built into data expressed in binary form (the data may also be called a product), the product is stored in a database, and the product is tested. Continuous delivery, on the basis of continuous integration, deploys the products in the database to the operating environment for testing and going online, so as to release the software.
At each node of continuous integration and continuous delivery, a product with the quality grade corresponding to that node needs to be manually screened out from a database in which products of every quality grade are stored. Therefore, at the release node of the products, the products manually screened from the database are not necessarily products meeting the quality grade requirement of that node, which easily causes product release failures and further reduces the success rate of product release.
Disclosure of Invention
The embodiments of the invention aim to provide a data processing method, a data processing device, an electronic device and a storage medium, so as to solve the technical problem that, because products of various quality grades are stored in one database, the quality grade of the products screened from the database at a product release node does not always meet the publishing requirement, which lowers the success rate of product publishing. The specific technical scheme is as follows:
in a first aspect of the embodiments of the present invention, a data processing method is first provided, including:
in response to transferring data to be processed to a low-level database, performing quality verification on the data to be processed;
when the data to be processed passes the quality verification, transferring the data to be processed to a high-level database;
wherein the quality verification corresponds to a quality level of the high-level database that is higher than a quality level of the low-level database.
In a second aspect of the embodiments of the present invention, there is also provided a data processing apparatus, including:
a verification module for performing quality verification on data to be processed in response to transferring the data to be processed to a low-level database;
the transfer module is used for transferring the data to be processed to a high-level database when the data to be processed passes the quality verification;
wherein the quality verification corresponds to a quality level of the high-level database that is higher than a quality level of the low-level database.
In a third aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium, having stored therein instructions, which, when run on a computer, cause the computer to execute the data processing method according to any one of the above-mentioned embodiments.
In a fourth aspect of the present invention, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the data processing method according to any one of the above embodiments.
In the embodiment of the invention, quality verification is performed on the lower-grade products stored in one database, and the products in that database that pass the quality verification are moved to another database, so that products of different quality grades are stored in different databases, and the identification information carried by a product can also reflect the quality grade corresponding to the product. In this way, different databases are set to correspond to different quality grades, the products stored in the database with the lower quality grade are automatically subjected to quality verification, and the products passing the quality verification are transferred to the database with the higher quality grade, so that the products are graded by using different databases and automatic verification of the quality grade of the products is realized. Therefore, the products screened from a database can be ensured to meet the quality standard corresponding to that database, product release failures are avoided, and the success rate of product release is further improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below.
FIG. 1 is a flow chart of a data processing method according to an embodiment of the present invention;
FIG. 2 is a diagram of an application of a data processing method according to an embodiment of the present invention;
FIG. 3 is a diagram of another application scenario of the data processing method according to the embodiment of the present invention;
FIG. 4 is a diagram of another application scenario of the data processing method according to the embodiment of the present invention;
FIG. 5 is a schematic diagram of a data processing apparatus according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device in an embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described below with reference to the drawings in the embodiments of the present invention.
The data processing method provided by the embodiment of the invention is suitable for the software release process. It should be noted that the software release process includes, but is not limited to, the 2 parts of continuous integration and continuous delivery, and the continuous integration and continuous delivery process includes at least the following nodes: a product forming node, a test node, an online verification node and a product publishing node.
In the existing software release process, there is usually only one product warehouse, which may also be referred to as a database, and this single database stores the products corresponding to every node (the products may also be referred to as data). At each node of continuous integration and continuous delivery, a product with the quality grade corresponding to that node needs to be manually screened out from the database; however, the database stores products of every quality grade, so the product manually screened out from the database is not necessarily a product meeting the quality grade requirement of the node, which easily causes a product release failure and further reduces the success rate of product release.
For example, in the release node of the product, the product screened from the database may be the product corresponding to the product forming node, and the product is not subjected to subsequent testing and verification, and the release of the product may result in a failure in software release.
Based on the technical problems in the prior art, the invention provides the following technical concepts:
setting different databases corresponding to different quality grades, automatically carrying out quality verification on products stored in the database with a lower quality grade, and transferring the products passing the quality verification to the database with a higher quality grade. In this way, the database is matched with the quality grade of the stored data, so that the products screened from the database can be ensured to meet the corresponding quality standard of the database.
Referring to fig. 1, fig. 1 is a flowchart illustrating a data processing method according to an embodiment of the invention. The data processing method provided by the present application is applied to a data storage system including at least 2 databases, and it should be noted that the data storage system provided by the present application may also be applied to any software distribution scenario, that is, the data storage system is used to store and manage data in the software distribution scenario, which is not limited specifically herein.
For the sake of clarity of illustration of the embodiment, the technical solution is illustrated by taking the data processing method as an example for application to a data storage system.
The data processing method provided by the embodiment comprises the following steps:
s101, responding to the data to be processed being transferred to the low-level database, and performing quality verification on the data to be processed.
In this step, the data storage system includes at least 2 databases, each database corresponding to a different quality level, where a quality level may be understood as a predetermined level standard.
Alternatively, the quality level of the database may be associated with the node to which the data stored by the database corresponds.
For example, if the data stored in one database is data corresponding to a test node, the data stored in that database is data to be tested; and if the data stored in another database is data corresponding to a publishing node, the data stored in that database is data to be published. In this case, in the release flow of the product, the publishing node is the last processing node and the test node is an intermediate node, so the quality level of a database can be determined based on the order of the nodes in the whole release flow. In other words, the later in the whole flow the node corresponding to the data stored in a database is, the higher the quality level corresponding to that database.
Alternatively, the quality level of the database may be related to identification information carried by the data stored in the database.
It is to be understood that the identification information carried by the data may characterize the processing node where the data is located, wherein the data may be given a suffix name with the identification information. Besides the embodiment that the identification information is used as the suffix name, the identification information can be carried by the data to be processed in other ways. For example, the identification information may also be inserted into a specified field (which may be specified by any user-defined method), a newly added field, or a reserved blank field (or referred to as an extension field) of the data to be processed, which is not described again.
Based on the same principle, the quality grade of the database can be determined based on the sequence of the nodes corresponding to the data stored in the database in the whole issuing process.
It should be understood that in some embodiments, other custom settings may be made on the quality level of the database, and are not set forth herein in any greater detail.
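For illustration only, the ways of carrying the identification information described above (as a suffix of the name, or in a specified, newly added or extension field) can be sketched as follows; the function names and the field name quality_mark are assumptions, not terms from the patent.

```python
def carry_identifier_as_suffix(name, identifier):
    """Option 1: carry the identification information as a suffix of the data's name."""
    return f"{name}-{identifier}"

def carry_identifier_in_field(artifact, identifier, field="quality_mark"):
    """Option 2: write the identification information into a specified or extension field."""
    artifact[field] = identifier        # "quality_mark" is an assumed field name
    return artifact

print(carry_identifier_as_suffix("player-1.2.3", "pca-dev"))
print(carry_identifier_in_field({"name": "player-1.2.3"}, "pca-dev"))
```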
For two databases adjacent to any quality level, the database with higher quality level may be referred to as a high-level database, and the database with lower quality level may be referred to as a low-level database.
And when the data to be processed is transferred to the low-level database, performing quality verification on the data to be processed.
It should be understood that the quality verification is a verification mode matched with the quality grade of the high-level database, and high-level databases of different quality grades correspond to different quality verification schemes. If a piece of data to be processed passes the quality verification corresponding to a database, it indicates that the data to be processed meets the grade standard corresponding to that database, and the data to be processed that passes the quality verification can be stored in that database.
The low-level database may be the primary database, i.e., the database with the lowest quality level; the low-level database may also be a non-primary database, where a non-primary database is any graded database other than the database with the highest level.
The data to be processed may be understood as the product mentioned in the above, and the data appearing in the following content has the same meaning as the product for the sake of clarity of the technical solution.
S102, when the data to be processed passes the quality verification, transferring the data to be processed to a high-level database.
In this step, when the data to be processed passes the quality verification, it indicates that the data to be processed meets the quality standard corresponding to the high-level database, and in this case, the data to be processed may be transferred to the high-level database.
For example, the data storage system includes 3 databases, which are a first database, a second database and a third database, respectively, and the quality level of the third database is greater than that of the second database, and the quality level of the second database is greater than that of the first database. In this case, when the data to be processed is transferred to the first database, performing a first quality verification on the data to be processed, wherein the first quality verification is related to the quality grade of the second database; when the data to be processed passes the first quality verification, transferring the data to be processed to a second database, and performing second quality verification on the data to be processed, wherein the second quality verification is related to the quality grade of a third database; and when the data to be processed passes the second quality verification, transferring the data to be processed to a third database.
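A minimal Python sketch of the stepwise promotion in this three-database example; the class, function and artifact names are illustrative assumptions, and the check functions merely stand in for the first and second quality verifications.

```python
class ArtifactStore:
    """A database with a quality level; stores artifacts (data to be processed)."""
    def __init__(self, name, level):
        self.name = name
        self.level = level
        self.artifacts = set()

def first_quality_check(artifact):
    # Assumed stand-in for the verification tied to the second database's level.
    return True

def second_quality_check(artifact):
    # Assumed stand-in for the verification tied to the third database's level.
    return True

def promote(artifact, low, high, check):
    """Transfer the artifact from the low-level to the high-level database if it passes."""
    if check(artifact):
        high.artifacts.add(artifact)
        low.artifacts.discard(artifact)
        return True
    return False

first_db = ArtifactStore("first", 1)
second_db = ArtifactStore("second", 2)
third_db = ArtifactStore("third", 3)

first_db.artifacts.add("app-build-001")
# The level can only rise one step at a time, never skipping a database.
if promote("app-build-001", first_db, second_db, first_quality_check):
    promote("app-build-001", second_db, third_db, second_quality_check)
```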
In the embodiment of the invention, quality verification at the higher quality grade is performed on the lower-grade products stored in one database, and the products in that database that pass the quality verification are moved to another database, so that products of different quality grades are stored in different databases. In this way, different databases are set to correspond to different quality grades, the products stored in the database with the lower quality grade are automatically subjected to quality verification, and the products passing the quality verification are transferred to the database with the higher quality grade, so that the products are graded by using different databases and automatic verification of the quality grade of the products is realized. Therefore, the products screened from a database can be ensured to meet the quality standard corresponding to that database, product release failures are avoided, and the success rate of product release is further improved.
On the basis of the embodiment shown in fig. 1, in a possible implementation manner, the data may also be marked with identification information. Specifically, the data to be processed carries identification information representing its quality grade, and during the transfer of the data to be processed after it passes the quality verification, part of the data content of the data to be processed is modified, which is specifically represented as: adding the data to be processed to the high-level database and deleting the data to be processed from the low-level database; and modifying the identification information carried by the data to be processed from a first identification into a second identification.
In this embodiment, the data to be processed is added to the high-level database, and the data to be processed is deleted from the low-level database, so that migration of the data to be processed is realized.
The data to be processed carries identification information, and the identification information is used for representing the quality grade corresponding to the data to be processed. As described above, different databases correspond to different quality levels, so that identification information carried by to-be-processed data stored in different databases is also different; and in the migration process of the data to be processed, modifying the identification information from a first identification to a second identification, wherein the first identification is used for indicating the quality level of the low-level database, and the second identification is used for indicating the quality level of the high-level database.
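The following sketch shows one way such a transfer could be performed together with the identifier rewrite: the artifact is added to the high-level database, deleted from the low-level database, and its suffix is changed from the first identification to the second identification. The dict-based databases and the suffix values are assumptions for illustration (the suffixes follow Table 1 later in the description).

```python
def migrate_with_relabel(artifact_name, low_db, high_db, old_suffix, new_suffix):
    """Move an artifact between databases and rewrite its quality-level suffix.

    low_db and high_db are assumed to behave like dicts mapping names to payloads.
    """
    payload = low_db.pop(artifact_name)          # delete from the low-level database
    if artifact_name.endswith(old_suffix):       # first identification -> second identification
        relabeled = artifact_name[: -len(old_suffix)] + new_suffix
    else:
        relabeled = artifact_name + new_suffix
    high_db[relabeled] = payload                 # add to the high-level database
    return relabeled

low_db = {"player-1.2.3-pca-dev": b"...binary..."}
high_db = {}
migrate_with_relabel("player-1.2.3-pca-dev", low_db, high_db, "pca-dev", "pca-ci")
```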
In this scenario, the execution timing of "performing quality verification on the data to be processed in response to transferring the data to be processed to the low-level database" in the foregoing step S101 covers at least two cases. One possible implementation is: when the data to be processed is moved to the low-level database, performing quality verification on the data to be processed. Another possible implementation is: when the identification information carried by the data in the low-level database is modified into the first identification, performing quality verification on the data to be processed.
Specifically, when the data to be processed is moved to the low-level database, the data to be processed is subjected to quality verification. In this embodiment, the identification information carried by the data to be processed may not be modified to the first identification. In another embodiment, when the identification information carried by the data to be processed in the low-level database is modified into the first identification, the quality of the data to be processed is verified. At this time, the identification information carried by the data to be processed may have been modified into the first identification.
The quality verification process for the data to be processed is described below. It should be understood that the data to be processed stored in a database of a given quality grade can only be quality-verified using a quality verification method that matches the quality grade of that database. In a specific implementation, test cases matched with the databases of the respective quality levels can be preset in advance, so that the quality of the data to be processed is verified through the test cases corresponding to the respective databases.
In an optional implementation manner, the implementation of the foregoing step S101 may be: in response to the data to be processed being transferred to the low-level database, querying the low-level database in a preset quality level list, and determining the high-level database corresponding to the low-level database; determining the test cases corresponding to the high-level database; and performing quality verification on the data to be processed by using the test cases.
It should be understood that a quality level list is stored in the data storage system, and the quality level relationships between the databases are maintained in the list. Querying the low-level database in the quality level list yields the database whose quality level is adjacent to and higher than that of the low-level database, and that database is determined as the high-level database.
The data storage system related to this embodiment is also provided with a plurality of test cases in advance, where each test case corresponds to the quality level of a database, so that the test cases are layered. A quality test can then be performed on the data to be processed by using the corresponding test cases, and the data to be processed is automatically transferred to the database with the higher quality level when it passes the quality test, so that the quality level corresponding to the data to be processed is raised and automatic promotion of the data to be processed is realized.
It should be understood that, in the process of raising the quality level corresponding to the data to be processed, the quality level can only be raised step by step; only after quality tests are performed on the data to be processed using the test cases corresponding to the respective quality levels can its quality level be raised to the highest level, and the quality level of the data to be processed cannot be raised by skipping levels.
It will be appreciated that one test case corresponds to one quality level, and one quality level may correspond to one or more test cases. In other words, in the process of performing quality verification on the data to be processed, one test case corresponding to the high-level database may be used for performing quality verification, and a plurality of test cases corresponding to the high-level database may also be used for performing quality verification.
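A minimal sketch of the lookup described in this implementation, assuming the preset quality level list and the test-case registry are simple in-memory mappings; the concrete names and test cases are illustrative, and a None result marks the highest-level database, for which no further verification is needed.

```python
# Preset quality level list: each low-level database -> its adjacent high-level database.
NEXT_LEVEL = {"first": "second", "second": "third", "third": None}

# One quality level may correspond to one or more test cases.
TEST_CASES = {
    "second": [lambda a: "code" in a],                                   # e.g. smoke-style checks
    "third":  [lambda a: "tests" in a, lambda a: a.get("coverage", 0) > 0.5],
}

def verify_for_promotion(artifact, current_db):
    """Run the test cases of the adjacent higher-level database; None means top level."""
    target = NEXT_LEVEL.get(current_db)
    if target is None:
        return None                      # already in the highest-level database
    return all(case(artifact) for case in TEST_CASES.get(target, []))

artifact = {"code": "...", "tests": "...", "coverage": 0.8}
print(verify_for_promotion(artifact, "first"))   # runs the second database's test cases
```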
It can be understood that if the database in which the data to be processed is located is the database with the highest quality level, that is, there is no database with a higher level than the current database in the data storage system, in this case, there is no need to perform higher-level quality verification on the data to be processed.
For example, the data storage system has three databases, a first database, a second database, and a third database, wherein the second database has a higher quality level than the first database, and the third database has a higher quality level than the second database. In this case, for the to-be-processed data stored in the first database, the quality of the to-be-processed data can be verified only by using the test case corresponding to the second database.
The quality verification method involved in the embodiment of the present invention may include, but is not limited to, at least one of the following: smoke test verification, function test verification or online verification.
Smoke test verification is also called a smoke test; the smoke test is used to confirm that the code in the data to be processed operates as expected without damaging the stability of the whole software version to be released. Functional test verification is also called a functional test; the functional test is used to verify whether the data to be processed realizes some of its functions. Online verification is used to verify whether the data to be processed is suitable for online release.
In some possible embodiments, the quality verification may also include at least one of a full test or a version acceptance test. The full test is used to test all parameters in the product; the version acceptance test is used to test some of the parameters in the gray-scale package, and is a quick test process whose aim is to ensure that the basic functions and content of the software are correct and complete.
It should be understood that, based on the different application scenarios of the data storage system, the quality level classification manner of each database may be designed in a user-defined manner, and the quality verification manner corresponding to each database may also be designed in a user-defined manner; this is not particularly limited in the embodiments of the present invention. For ease of understanding, the CI and CD flows under different operating systems in a software release scenario are taken as examples and described in detail later.
As described above, when the data to be processed passes the quality verification matching the quality level of the high-level database, the data to be processed is automatically transferred from the low-level database to the high-level database.
During the transfer of the data to be processed, the migration of the data to be processed can be realized in the following ways: transferring the data to be processed in the low-level database to the high-level database through a data interface; or transferring the data to be processed in the low-level database to the high-level database by using a privilege-escalation thread.
In this embodiment, an optional implementation manner is that a unidirectional transmission interface may be set between databases of adjacent quality levels, where the data transmission direction of the unidirectional transmission interface is fixed: from the low-level database to the high-level database.
The unidirectional transmission interface may be a physical communication interface. For example, in an application scenario, a first server may be understood as the low-level database and a second server as the high-level database; in this case, the first server and the second server communicate via the unidirectional transmission interface.
The unidirectional transmission interface may also be a virtual communication interface; for example, in an application scenario, the unidirectional transmission interface may be a communication protocol for communication between the databases, which stipulates that data can only be transmitted from the low-level database to the high-level database.
In this way, by using the unidirectional data interface between adjacent databases to transfer the data to be processed in the low-level database to the high-level database, the transfer is fast, safe and convenient, occupies few system resources, and helps maintain the stability of the system.
In the above embodiment, the number of the data interfaces is related to the number of the databases, and optionally, the unidirectional transmission interface may be provided between all adjacent databases. For example, the data storage system includes 4 databases, respectively a first database, a second database, a third database, and a fourth database. In this case, a data interface between the first database and the second database may be set as a first data interface, a data interface between the second database and the third database may be set as a second data interface, and a data interface between the third database and the fourth database may be set as a third data interface, where the first data interface, the second data interface, and the third data interface are all unidirectional transmission interfaces.
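One way to picture the unidirectional transmission interface is an object that only ever moves data in the low-to-high direction and rejects the reverse call; the class below is a hypothetical sketch rather than the patent's concrete interface, with one interface per pair of adjacent databases as in the four-database example.

```python
class UnidirectionalInterface:
    """Data interface with a fixed direction: low-level database -> high-level database."""
    def __init__(self, low_db, high_db):
        self.low_db = low_db
        self.high_db = high_db

    def transfer(self, source, target, artifact_name):
        if source is not self.low_db or target is not self.high_db:
            raise PermissionError("only low-level to high-level transfers are allowed")
        target[artifact_name] = source.pop(artifact_name)

db1, db2, db3, db4 = {}, {}, {}, {}
# One interface between each pair of adjacent databases, as in the four-database example.
interfaces = [UnidirectionalInterface(db1, db2),
              UnidirectionalInterface(db2, db3),
              UnidirectionalInterface(db3, db4)]

db1["artifact-001"] = b"..."
interfaces[0].transfer(db1, db2, "artifact-001")    # allowed: low -> high
# interfaces[0].transfer(db2, db1, "artifact-001")  # would raise: high -> low is blocked
```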
In addition to the foregoing embodiments, data migration may also be implemented using a bidirectional transport interface; alternatively, the above unidirectional transmission interface may be arranged between only a part of the adjacent databases, and other types of data interfaces, such as bidirectional transmission interfaces, may be arranged between the rest of the adjacent databases, which is not described herein too much.
In another alternative embodiment, the data to be processed in the low-level database may be transferred to the high-level database using a privilege-escalation thread, where the privilege-escalation thread is used to raise the storage authority of the data to be processed, so that the high-level database with the higher quality level can store the escalated data to be processed.
In some embodiments, a single privilege-escalation thread may be used to transfer the data to be processed among all the databases.
For example, in one application scenario, the data storage system includes a first database, a second database, and a third database, where the quality level of the first database is lower than that of the second database, and the quality level of the second database is lower than that of the third database. In this case, the data to be processed stored in the first database may be transferred to the second database using the privilege-escalation thread; and after the data to be processed passes the quality verification corresponding to the third database, the data to be processed is transferred from the second database to the third database using the privilege-escalation thread.
In other embodiments, different privilege-escalation threads may be used for transferring the data to be processed between different databases.
Following the example above, a first privilege-escalation thread may be used to transfer the data to be processed stored in the first database to the second database; and after the data to be processed passes the quality verification corresponding to the third database, a second privilege-escalation thread is used to transfer the data to be processed from the second database to the third database.
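The privilege-escalation thread can be sketched as a worker thread that raises the storage authority attached to an artifact and then moves it to the high-level database; the permission field and the use of Python's threading module here are illustrative assumptions.

```python
import threading

def escalate_and_transfer(artifact, low_db, high_db, required_level):
    """Raise the artifact's storage permission, then move it to the high-level database."""
    artifact["permission"] = required_level          # assumed permission field
    high_db[artifact["name"]] = artifact
    low_db.pop(artifact["name"], None)

low_db = {"build-42": {"name": "build-42", "permission": 1}}
high_db = {}

# A single privilege-escalation thread can serve all databases, or one thread per pair.
worker = threading.Thread(
    target=escalate_and_transfer,
    args=(low_db["build-42"], low_db, high_db, 2),
)
worker.start()
worker.join()
```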
It should be understood that other migration methods are also applicable to the migration of the data to be processed; for example, the data to be processed may be manually migrated from the low-level database to the high-level database, or the data to be processed may be automatically transferred from the low-level database to the high-level database after passing the quality verification corresponding to the high-level database.
It should also be understood that, when the data storage system contains a high-level database whose quality level is higher than that of a low-level database, quality verification needs to be performed on the data to be processed in that low-level database.
On the basis of any one of the foregoing embodiments, in the embodiment of the present invention, the data to be processed may also be automatically generated based on the processing of the code management tool, and stored in the present data storage system.
In an optional embodiment, the method may further include the steps of: compiling and packaging the received binary file to generate data to be processed; and storing the data to be processed to a primary database.
In this embodiment, a developer may generate a binary file using a code management tool and send the binary file to the data storage system, and the system may compile and package the received binary file using a preset continuous delivery tool, such as a Software Development Kit (SDK), to generate the data to be processed.
It should be understood that the data to be processed generated by compiling and packaging the binary file can be reused and stored in different databases during the release process of the software, but the MD5 code carried by the data to be processed does not change; that is, the data content of the data to be processed is not tampered with during subsequent processing.
The code management tool includes but is not limited to Gitlab, and the continuous delivery tool includes but is not limited to Jenkins.
Because the data to be processed is obtained by the system compiling and packaging the received binary file, the quality grade corresponding to the data to be processed is the lowest, and the data to be processed is therefore stored in the primary database, where the primary database is the database with the lowest quality grade among all the databases.
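A sketch of this intake step under stated assumptions: the received binary is wrapped into an artifact whose MD5 digest is computed once (and must stay unchanged afterwards), given a lowest-level suffix, and stored in the primary database. The packaging itself is represented abstractly, since the patent leaves the concrete tool (e.g. Jenkins or an SDK) open.

```python
import hashlib

def package_binary(binary: bytes, name: str) -> dict:
    """Compile/package step reduced to: wrap the binary and fix its MD5 once."""
    return {
        "name": f"{name}-dev",                   # lowest-level suffix; a real scheme would include the OS, e.g. pca-dev
        "payload": binary,
        "md5": hashlib.md5(binary).hexdigest(),  # must stay unchanged through all promotions
    }

primary_db = {}                                   # the database with the lowest quality level

artifact = package_binary(b"\x7fELF...binary...", "player-1.2.3")
primary_db[artifact["name"]] = artifact
```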
Hereinafter, the data processing method provided by the embodiment of the present invention is described by taking CI and CD processes in different operating systems as examples in a software distribution scenario.
It should be understood that the operating systems to which the software applies include, but are not limited to, a PCA system, an android system, and an IOS system, where the PCA system is an operating system applied to single-chip microcomputers (microcontrollers). In the development scenarios of application software for different operating systems, the software release flows differ, so the grading rules of the databases, the quality verification of the data to be processed, and the test cases related to the quality verification also differ.
In the following, how to perform quality classification and quality verification on data to be processed in the case that the operating systems carried by the data storage system are the PCA system, the android system, and the IOS system, respectively, is specifically described.
Optionally, the data storage system comprises at least: a first database, a second database, a third database and a fourth database, where the first database stores data generated by compiling based on a binary file, the second database stores data verified by a smoke test, the third database stores data verified by a functional test, and the fourth database stores data verified by an online test;
wherein the quality level of the first database is lower than the quality level of the second database, the quality level of the second database is lower than the quality level of the third database, and the quality level of the third database is lower than the quality level of the fourth database.
In this embodiment, taking an example that the data storage system includes 4 databases, how to perform quality classification and quality verification on the data to be processed when the data storage system is loaded with different operating systems is described:
it should be noted that, in different operating systems, identification information carried by data to be processed stored in different databases is different, please refer to table one:
            First database    Second database    Third database      Fourth database
PCA         pca-dev           pca-ci             pca-staging         pca-release
Android     android-dev       android-ci         android-staging     android-release
IOS         ios-dev           ios-ci             ios-staging         ios-release
As shown in Table 1, in the PCA system, the identification information carried by the to-be-processed data stored in the first database is pca-dev, that is, the to-be-processed data uses pca-dev as a suffix; the identification information carried by the to-be-processed data stored in the second database is pca-ci; the identification information carried by the data to be processed stored in the third database is pca-staging; and the identification information carried by the data to be processed stored in the fourth database is pca-release.
Thus, the grade corresponding to the data to be processed can be determined based on the identification information it carries. As described in Table 1, if the suffix of the data to be processed is dev, it may be determined that the data to be processed corresponds to the lowest grade; if the suffix of the data to be processed is release, it may be determined that the data to be processed corresponds to the highest grade.
In the android system, the identification information carried by the data to be processed stored in the first database is android-dev; the identification information carried by the data to be processed stored in the second database is android-ci; the identification information carried by the data to be processed stored in the third database is android-staging; and the identification information carried by the data to be processed stored in the fourth database is android-release.
In the IOS system, the identification information carried by the data to be processed stored in the first database is ios-dev; the identification information carried by the data to be processed stored in the second database is ios-ci; the identification information carried by the data to be processed stored in the third database is ios-staging; and the identification information carried by the data to be processed stored in the fourth database is ios-release.
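Since all three operating systems in Table 1 share the same four stage suffixes, the identification scheme can be generated mechanically; the helper below is a hypothetical illustration of that pattern.

```python
# Stage names shared by all three operating systems in Table 1.
STAGES = ["dev", "ci", "staging", "release"]   # first .. fourth database
SYSTEMS = ["pca", "android", "ios"]

def suffix_for(system: str, database_index: int) -> str:
    """Identification suffix carried by data stored in the given database (1-based index)."""
    return f"{system}-{STAGES[database_index - 1]}"

assert suffix_for("pca", 1) == "pca-dev"
assert suffix_for("android", 3) == "android-staging"
assert suffix_for("ios", 4) == "ios-release"
```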
In the following, the flow of performing quality classification and quality verification on the data to be processed and the flow of software release are specifically described in the case that the operating system is a PCA system:
referring to fig. 2, in the PCA system, the continuous integration process and the continuous release process of the software can be synchronously implemented.
As shown in fig. 2, the data storage system carrying the PCA operating system needs to apply 4 databases during software release, and a first database in fig. 2 may be referred to as a database 11, a second database may be referred to as a database 12, a third database may be referred to as a database 13, and a fourth database may be referred to as a database 14. The system compiles and packages the received binary file to generate data to be processed, and stores the data to be processed into the database 11 with the lowest quality grade.
The data to be processed in the database 11 is subjected to a smoke test, and if the data to be processed passes the smoke test, a corresponding label is marked on it; as shown in fig. 2, a label of QL2 may be marked on the data to be processed that passes the smoke test, and the labeled data to be processed is transferred to the database 12.
Labeling the data to be processed may be understood as modifying the identification information carried by the data to be processed, for example, modifying the suffix of the data to be processed to pca-ci, or, on the basis of modifying the carried identification information, adding another piece of identification information to a field of the data to be processed.
A comprehensive test is then performed on the data to be processed in the database 12, and if the data to be processed passes the comprehensive test, a corresponding label is marked on it; as shown in fig. 2, a label of QL3 may be marked on the data to be processed that passes the comprehensive test, and the labeled data to be processed is transferred to the database 13.
The data to be processed in the database 13 is verified online, and if the data to be processed passes the online verification, a corresponding label is marked on it; as shown in fig. 2, a label of QL4 may be marked on the data to be processed that passes the online verification, and the labeled data to be processed is transferred to the database 14.
And (3) carrying out configuration modification on the data to be processed in the database 14 to generate channel products, and issuing the channel products to finish the software issuing.
It should be understood that the grade corresponding to the data to be processed may be determined based on the tag carried by the data to be processed. As described above, for example, if the label carried by the to-be-processed data is QL1, it may be determined that the to-be-processed data corresponds to the lowest level; if the label carried by the data to be processed is QL4, it may be determined that the data to be processed corresponds to the highest level.
It should be understood that the corresponding grade of the data to be processed may be determined based on the suffix name of the data to be processed. As described above, for example, if the suffix name of the data to be processed is dev, it may be determined that the data to be processed corresponds to the lowest level; if the suffix name carried by the data to be processed is release, it can be determined that the data to be processed corresponds to the highest level.
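As a small illustration of the grade lookup described in the last two paragraphs, the label or suffix can be mapped directly to a level; the mappings follow the QL labels of fig. 2 and the suffixes of Table 1, while the function itself is an assumption.

```python
LABEL_LEVEL = {"QL1": 1, "QL2": 2, "QL3": 3, "QL4": 4}          # labels used in fig. 2
SUFFIX_LEVEL = {"dev": 1, "ci": 2, "staging": 3, "release": 4}  # suffixes from Table 1

def level_of(artifact_name, label=None):
    """Determine the quality grade from the QL label if present, otherwise from the suffix."""
    if label is not None:
        return LABEL_LEVEL[label]
    return SUFFIX_LEVEL[artifact_name.rsplit("-", 1)[-1]]

print(level_of("player-1.2.3-pca-dev"))        # 1: lowest grade
print(level_of("player-1.2.3-pca-release"))    # 4: highest grade
print(level_of("player-1.2.3", label="QL4"))   # 4: grade read from the label
```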
In the following, the flow of performing quality classification and quality verification on data to be processed and the flow of issuing software are specifically described in the case where the operating system is an android system:
referring to fig. 3, as shown in fig. 3, in the software release process, the data storage system with the android operating system needs to apply 4 databases, which are a first database, a second database, a third database and a fourth database, respectively, and in order to distinguish from the 4 databases involved in the PCA system, the first database may be referred to as a database 21, the second database may be referred to as a database 22, the third database may be referred to as a database 23, and the fourth database may be referred to as a database 24.
It should be understood that in the android system, the continuous integration process and the continuous release process of the software are performed step by step, and the continuous integration process is executed first, and then the continuous release process is executed.
The continuous integration process focuses on comprehensively testing the code submitted by users, ensuring that no defects appear in the product after it goes online; the continuous release process focuses on releasing the artifacts.
In the continuous integration process, the data to be processed, which is obtained by compiling and packaging all binary files submitted by the user, is also called a common package.
In the continuous release process, the application store of the android system limits the released product, and part of configuration parameters in the product are not supported by the application store, so that in the continuous release process, only part of binary files submitted by a user can be compiled and packaged to form to-be-processed data, which is also called a gray level package.
The difference between the common package and the gray-scale package is that the common package includes all the configuration parameters in the binary file submitted by the user, while the gray-scale package includes only some of the configuration parameters in the binary file in order to meet the release requirement. Therefore, the common package can be used for comprehensive testing of the code.
The following describes a continuous integration process corresponding to the android system.
The common packages are stored in the database 21 and labeled with corresponding labels; as shown in fig. 3, the common packages stored in the database 21 may be labeled with CIQL1.
The common package in the database 21 is subjected to the smoke test, and if the common package passes the smoke test, the common package is labeled with a corresponding label; as shown in fig. 3, the common package may be labeled with CIQL2, and the labeled common package is transferred to the database 22.
The common package in the database 22 is fully tested, and if the common package passes the full test, the common package is labeled with a corresponding label; as shown in fig. 3, the common package may be labeled with CIQL3, and the labeled common package is transferred to the database 23.
After the common package passes the comprehensive test, it indicates that the code submitted by the user has no defects, and the continuous release process can be carried out.
The following describes a persistent publication flow corresponding to the android system.
The gray scale package is stored in database 21 and labeled with a corresponding label, such as QL1 for the gray scale package stored in database 21, as shown in fig. 3.
The gray scale package in the database 21 is subjected to a smoke test, and if the gray scale package passes the smoke test, the gray scale package is labeled with a corresponding label; as shown in fig. 3, the gray scale package passing the smoke test may be labeled with QL2, and the gray scale package is transferred to the database 22.
The gray scale package in the database 22 is subjected to a version acceptance test, and if the gray scale package passes the version acceptance test, the gray scale package is labeled with a corresponding label, as shown in fig. 3, a label of QL3 may be labeled for the gray scale package passing the version acceptance test, and the gray scale package is transferred to the database 23.
And performing online verification on the gray level package in the database 23, and issuing the gray level package after the online verification is passed.
The gray scale package successfully issued is labeled with a corresponding label, as shown in fig. 3, a QL4 label can be labeled for the gray scale package successfully issued, and the gray scale package is stored in the database 24.
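To summarize the two android tracks above, the sketch below reduces them to label sequences: the continuous integration track using the common package and CIQL labels, and the continuous release track using the gray scale package and QL labels. The stage names and labels follow fig. 3; the data structures are illustrative only.

```python
# Illustrative only: the two android tracks from fig. 3, reduced to label sequences.
CI_TRACK = [("database 21", "CIQL1", "smoke test"),
            ("database 22", "CIQL2", "full test"),
            ("database 23", "CIQL3", None)]            # CI ends after the full test

CD_TRACK = [("database 21", "QL1", "smoke test"),
            ("database 22", "QL2", "version acceptance test"),
            ("database 23", "QL3", "online verification"),
            ("database 24", "QL4", None)]              # released gray scale package

def walk(track, package_name):
    """Print the promotion path of a package through one track."""
    for store, label, next_check in track:
        step = f"{package_name}: {store}, labeled {label}"
        if next_check:
            step += f", then run {next_check}"
        print(step)

walk(CI_TRACK, "common package")
walk(CD_TRACK, "gray scale package")
```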
In the following, the flow of performing quality classification and quality verification on data to be processed and the flow of software release in the case that the operating system is an IOS system are specifically described:
referring to fig. 4, as shown in fig. 4, in the software release process, the data storage system equipped with the IOS operating system needs to apply 4 databases, which are a first database, a second database, a third database and a fourth database, respectively, and in order to distinguish from the 4 databases involved in the PCA system and the android system, the first database may be referred to as a database 31, the second database may be referred to as a database 32, the third database may be referred to as a database 33, and the fourth database may be referred to as a database 34.
In the IOS system, the continuous integration process and the continuous release process of software are carried out step by step, wherein the continuous integration process is executed firstly, and then the continuous release process is executed.
The continuous integration process corresponding to the IOS system is consistent with the continuous integration process corresponding to the android system, and will not be described repeatedly herein.
The following describes the continuous release flow corresponding to the IOS system.
The binary file submitted by the user is compiled and packaged using a software development kit to generate the data to be processed, and this data to be processed may be called an external test package.
The external test package is stored in the database 31 and labeled with a corresponding label; as shown in fig. 4, the external test package in the database 31 may be labeled with QL1.
The external test package in the database 31 is subjected to a smoke test, and if the external test package passes the smoke test, a corresponding label is marked on it; as shown in fig. 4, a label of QL2 may be marked on the external test package that passes the smoke test.
The external test package in the database 31 is uploaded to the test platform, and a version acceptance test is performed on the external test package on the test platform; if the external test package passes the version acceptance test, a corresponding label is marked on it, and as shown in fig. 4, a label of QL3 may be marked on the external test package that passes the version acceptance test, which is then stored in the database 33.
Online verification is performed on the external test packages in the database 33, and after the online verification is passed, the external test packages are issued on the test platform.
The successfully issued external test package is labeled with a corresponding label; as shown in fig. 4, the successfully issued external test package may be labeled with QL4, and the external test package is stored in the database 34.
In summary, in the CI and CD flows of software release, four databases are provided, each database has a different quality level, and each quality level has its own test cases, so that the quality levels and the test cases are layered.
In the CI and CD processes, the quality of the data to be processed stored in the database with lower quality level is automatically tested, and the data to be processed is transferred to the database with higher quality level on the premise that the data to be processed passes the corresponding quality test, so that the quality level corresponding to the data to be processed is improved, the grade promotion of the data to be processed is realized, and the flow type CI and CD process is formed.
For any node in the CI and CD processes, the node can only process the data to be processed in the associated database, but cannot process the data to be processed in other databases, and the access control effect is achieved. For the software publishing node, the data to be processed in the database associated with the software publishing node conforms to the corresponding quality grade, so that the published data to be processed conforms to the publishing requirement.
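The access-control effect described here can be sketched as binding each CI/CD node to exactly one database and letting it fetch artifacts only from that database; the node names and the binding below are hypothetical.

```python
# Each node is bound to the single database whose quality level matches it (assumed binding).
NODE_DATABASE = {
    "build":   "first",
    "test":    "second",
    "staging": "third",
    "release": "fourth",
}

def fetch_for_node(node, databases):
    """A node may only take artifacts from its associated database, never from others."""
    allowed = NODE_DATABASE[node]
    return databases[allowed]

databases = {"first": {}, "second": {}, "third": {}, "fourth": {"app-release": b"..."}}
# The release node only ever sees artifacts that already reached the fourth database.
print(fetch_for_node("release", databases))
```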
Based on the above processing, the scheme can realize automatic processing of the CI and CD flows, which helps reduce the adverse effect of manual operation on the process. At the same time, in this hierarchical data management mode of automatic promotion and verification, data is promoted automatically once it meets the admission standard and cannot skip levels, and the quality grade of the data is guaranteed to match the database to which it belongs. As a result, only data of the matching quality grade can be obtained at a specific stage, and data is neither promoted across grades nor degraded, which forms a gate-control management effect for the databases, facilitates development and operation and maintenance management, and improves research and development as well as operation and maintenance efficiency.
As shown in fig. 5, an embodiment of the present invention further provides a data processing apparatus 200, including:
a verification module 201, configured to perform quality verification on to-be-processed data in response to transferring the to-be-processed data to a low-level database;
a transferring module 202, configured to transfer the data to be processed to a high-level database when the data to be processed passes the quality verification.
Optionally, the transfer module 202 is further configured to:
adding the data to be processed in the high-level database and deleting the data to be processed in the low-level database;
and modifying the identification information carried by the data to be processed into a second identification from the first identification.
Optionally, the transfer module 202 is further configured to:
transferring the data to be processed in the low-level database to the high-level database through a data interface; or,
transferring the data to be processed in the low-level database to the high-level database by using a privilege-escalation thread.
Optionally, the verification module 201 is further configured to:
in response to the data to be processed being transferred to a low-level database, querying the low-level database by using a preset quality level list, and determining a high-level database corresponding to the low-level database;
determining a test case corresponding to the high-level database;
and performing quality verification on the data to be processed by using the test case.
Optionally, the verification module 201 is further configured to:
when the data to be processed is moved to the low-level database, performing quality verification on the data to be processed; or,
when the identification information carried by the data to be processed in the low-level database is modified into a first identification, performing quality verification on the data to be processed.
Optionally, the data processing apparatus 200 further includes:
the generating module is used for compiling and packaging the received binary file to generate data to be processed;
and the storage module is used for storing the data to be processed to a primary database.
The embodiment of the present invention further provides an electronic device, as shown in fig. 6, including a processor 301, a communication interface 302, a memory 303, and a communication bus 304, where the processor 301, the communication interface 302, and the memory 303 complete mutual communication through the communication bus 304.
A memory 303 for storing a computer program;
the processor 301 is configured to execute the data processing method according to any of the above embodiments by the processor 301 when the processor 301 executes the program stored in the memory 303.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the terminal and other equipment.
The memory may include a Random Access Memory (RAM) or a non-volatile memory, such as at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In another embodiment of the present invention, a computer-readable storage medium is further provided, which has instructions stored therein, and when the instructions are executed on a computer, the computer is caused to execute the data processing method described in any one of the above embodiments.
In a further embodiment of the present invention, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the data processing method described in any of the above embodiments.
In the above embodiments, the implementation may be realized in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the present invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center via a wired connection (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or a wireless connection (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It is noted that, herein, relational terms such as first and second are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (11)

1. A data processing method, applied to a data storage system, wherein the data storage system comprises at least two databases and any two of the databases have different quality levels, the method comprising the following steps:
in response to transferring pending data to a low level database, performing quality validation on the pending data;
when the data to be processed passes the quality verification, transferring the data to be processed to a high-level database;
wherein the quality verification corresponds to a quality level of the high-level database that is higher than a quality level of the low-level database.
2. The method of claim 1, wherein transferring the data to be processed to a high level database comprises:
adding the data to be processed in the high-level database and deleting the data to be processed in the low-level database;
and modifying the identification information carried by the data to be processed into a second identification from a first identification, wherein the first identification is used for indicating the quality grade of the low-grade database, and the second identification is used for indicating the quality grade of the high-grade database.
3. The method of claim 1, wherein transferring the data to be processed to a high level database comprises:
transferring the data to be processed in the low-level database to the high-level database through a data interface, wherein the data interface is a one-way transmission interface between the low-level database and the high-level database; or
transferring the data to be processed in the low-level database to the high-level database by using a privileged thread, wherein the privileged thread is used for transmitting data from a database with a lower quality level to a database with a higher quality level.
4. The method of claim 1, wherein the quality validating the pending data in response to transferring the pending data to a low level database comprises:
in response to the data to be processed being transferred to the low-level database, querying the low-level database by using a preset quality level list, and determining a high-level database corresponding to the low-level database, wherein the quality level list reflects the mapping relationship between databases and quality levels;
determining a test case corresponding to the high-level database, wherein the quality level of the high-level database is adjacent to the quality level of the low-level database;
and performing quality verification on the data to be processed by using the test case.
5. The method according to any one of claims 1-4, wherein the quality verification comprises at least one of the following: smoke test verification, functional test verification, or online verification.
6. The method of claim 5, wherein the data storage system comprises at least: a first database, a second database, a third database and a fourth database, wherein the first database is used for storing data generated based on binary file compilation, the second database is used for storing data that has passed smoke test verification, the third database is used for storing data that has passed functional test verification, and the fourth database is used for storing data that has passed online verification;
wherein the quality level of the first database is lower than the quality level of the second database, the quality level of the second database is lower than the quality level of the third database, and the quality level of the third database is lower than the quality level of the fourth database.
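Purely for illustration, the ordering required by this claim can be written as a strictly increasing enumeration; the class and member names below are hypothetical and not part of the claim language.

    from enum import IntEnum

    class QualityLevel(IntEnum):   # strictly increasing: first < second < third < fourth
        FIRST = 1    # data generated from binary-file compilation
        SECOND = 2   # data that has passed smoke test verification
        THIRD = 3    # data that has passed functional test verification
        FOURTH = 4   # data that has passed online verification

    assert QualityLevel.FIRST < QualityLevel.SECOND < QualityLevel.THIRD < QualityLevel.FOURTH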
7. The method of claim 1, wherein the quality validating the pending data in response to transferring the pending data to a low level database comprises:
when the data to be processed is moved to the low-level database, performing quality verification on the data to be processed; or
when the identification information carried by the data to be processed in the low-level database is modified into a first identification, performing quality verification on the data to be processed.
8. The method of claim 1, further comprising:
compiling and packaging the received binary file to generate data to be processed;
and storing the data to be processed into a primary database, wherein the quality level of the primary database is the lowest among the quality levels of all the databases.
9. A data processing apparatus, characterized in that the apparatus comprises:
a verification module for performing quality verification on data to be processed in response to transferring the data to be processed to a low-level database;
the transfer module is used for transferring the data to be processed to a high-level database when the data to be processed passes the quality verification;
wherein the quality verification corresponds to a quality level of the high-level database that is higher than a quality level of the low-level database.
10. An electronic device, characterized by comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other through the communication bus;
a memory for storing a computer program;
a processor for implementing the data processing method of any one of claims 1 to 8 when executing the program stored in the memory.
11. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the data processing method of any one of claims 1 to 8.
CN202110281829.3A 2021-03-16 2021-03-16 Data processing method, device, electronic equipment and storage medium Active CN113010421B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110281829.3A CN113010421B (en) 2021-03-16 2021-03-16 Data processing method, device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN113010421A true CN113010421A (en) 2021-06-22
CN113010421B CN113010421B (en) 2023-09-01

Family

ID=76408406

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110281829.3A Active CN113010421B (en) 2021-03-16 2021-03-16 Data processing method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113010421B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010052107A1 (en) * 2000-06-13 2001-12-13 Mentor Graphics Corporation Integrated verification and manufacturability tool
US20120036135A1 (en) * 2010-08-03 2012-02-09 Accenture Global Services Gmbh Database anonymization for use in testing database-centric applications
US20160070725A1 (en) * 2014-09-08 2016-03-10 International Business Machines Corporation Data quality analysis and cleansing of source data with respect to a target system
US20170039121A1 (en) * 2015-08-06 2017-02-09 International Business Machines Corporation Test self-verification with integrated transparent self-diagnose
CN109726136A (en) * 2019-01-28 2019-05-07 上海达梦数据库有限公司 Test method, device, equipment and the storage medium of database
CN109800090A (en) * 2018-12-19 2019-05-24 北京仁科互动网络技术有限公司 A kind of data integrated system and method
CN110019145A (en) * 2018-06-19 2019-07-16 杭州数澜科技有限公司 A kind of multi-environment cascade method and apparatus of big data platform
CN110505198A (en) * 2019-07-05 2019-11-26 中国平安财产保险股份有限公司 A kind of checking request method, apparatus, computer equipment and storage medium
CN110543469A (en) * 2019-08-28 2019-12-06 贝壳技术有限公司 Database version management method and server
CN111159016A (en) * 2019-12-16 2020-05-15 深圳前海微众银行股份有限公司 Standard detection method and device
CN111857722A (en) * 2020-06-23 2020-10-30 远光软件股份有限公司 DevOps quality assurance system and method based on three-library mode


Also Published As

Publication number Publication date
CN113010421B (en) 2023-09-01

Similar Documents

Publication Publication Date Title
CN108595157B (en) Block chain data processing method, device, equipment and storage medium
US11531909B2 (en) Computer system and method for machine learning or inference
CN110597531B (en) Distributed module upgrading method and device and storage medium
CN104954353B (en) The method of calibration and device of APK file bag
CN109474578A (en) Message method of calibration, device, computer equipment and storage medium
US20160034382A1 (en) Automated regression test case selector and black box test coverage tool for product testing
US20070162976A1 (en) Method of managing and mitigating security risks through planning
CN112181804A (en) Parameter checking method, equipment and storage medium
US20140289697A1 (en) Systems and Methods for Software Development
CN112685410A (en) Business rule checking method and device, computer equipment and storage medium
CN106529229A (en) Permission data processing method and apparatus
CN106060130A (en) Verification method and system of merchandise inventory
CN111026737A (en) Task processing method and device
CN106095511A (en) A kind of server updating method and apparatus
CN113537845A (en) Task distribution method and device, electronic equipment and computer readable storage medium
CN113010421A (en) Data processing method and device, electronic equipment and storage medium
CN110806979B (en) Interface return value checking method, device, equipment and storage medium
CN117273278A (en) ERP cloud management system
CN112085611A (en) Asynchronous data verification method and device, electronic equipment and storage medium
CN114841797A (en) Method and device for determining business processing rule based on Drools rule engine
CN115080012A (en) class file conflict recognition method and device, electronic equipment and storage medium
US20220122016A1 (en) Evolutionary software prioritization protocol for digital systems
CN113934625A (en) Software detection method, device and storage medium
CN112613567A (en) User label management method, system, device and storage medium
US8930287B2 (en) Dynamic training for tagging computer code

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant