CN113010421B - Data processing method, device, electronic equipment and storage medium - Google Patents

Data processing method, device, electronic equipment and storage medium

Info

Publication number
CN113010421B
CN113010421B (application CN202110281829.3A)
Authority
CN
China
Prior art keywords
database, data, level, processed, quality
Prior art date
Legal status
Active
Application number
CN202110281829.3A
Other languages
Chinese (zh)
Other versions
CN113010421A (en)
Inventor
盛海英
Current Assignee
Beijing QIYI Century Science and Technology Co Ltd
Original Assignee
Beijing QIYI Century Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing QIYI Century Science and Technology Co Ltd filed Critical Beijing QIYI Century Science and Technology Co Ltd
Priority to CN202110281829.3A
Publication of CN113010421A
Application granted
Publication of CN113010421B


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 - Error detection; Error correction; Monitoring
    • G06F 11/36 - Preventing errors by testing or debugging software
    • G06F 11/3604 - Software analysis for verifying properties of programs
    • G06F 11/3668 - Software testing
    • G06F 11/3672 - Test management
    • G06F 11/3684 - Test management for test design, e.g. generating new test cases
    • G06F 11/3688 - Test management for test execution, e.g. scheduling of test suites
    • G06F 8/00 - Arrangements for software engineering
    • G06F 8/60 - Software deployment
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application discloses a data processing method, a device, electronic equipment and a storage medium, wherein the method comprises the following steps: in response to transferring data to be processed to a low-level database, performing quality verification on the data to be processed; when the data to be processed passes the quality verification, transferring the data to be processed to a high-level database; wherein the quality verification corresponds to the quality level of the high-level database, and the quality level of the high-level database is higher than that of the low-level database. In the embodiment of the application, different databases are set to correspond to different quality levels, the products stored in the database with the lower quality level are verified automatically, and the products that pass the quality verification are transferred to the database with the higher quality level, so that the products are graded by means of the different databases and the quality level of a product is verified automatically. Products screened from a database can therefore be guaranteed to meet the quality standard corresponding to that database, which improves the success rate of product release.

Description

Data processing method, device, electronic equipment and storage medium
Technical Field
The present application relates to the field of network technologies, and in particular, to a data processing method, an apparatus, an electronic device, and a storage medium.
Background
The complete software release process comprises two parts: continuous integration (CI) and continuous delivery (CD). In the continuous integration part, code submitted by developers is integrated and built into data expressed in binary form (the data may also be referred to as a product); the product is stored in a database and tested. Continuous delivery, building on continuous integration, deploys the products in the database into the running environment for testing and going online, so as to release the software.
At each node of continuous integration and continuous delivery, a product of the quality level corresponding to that node needs to be screened manually from a database, and the database stores products of every quality level. As a result, at the release node, the product screened manually from the database does not necessarily meet the quality level required by that node; product release is therefore prone to failure, which lowers the success rate of product release.
Disclosure of Invention
The embodiments of the invention aim to provide a data processing method, a data processing device, electronic equipment and a storage medium, which solve the technical problem that, because products of various quality levels are stored in a single database, the quality level of a product screened from the database does not necessarily meet the release requirement, so the success rate of product release is low. The specific technical scheme is as follows:
In a first aspect of an embodiment of the present invention, there is first provided a data processing method, including:
in response to transferring data to be processed to a low-level database, performing quality verification on the data to be processed;
when the data to be processed passes the quality verification, transferring the data to be processed to a high-level database;
wherein the quality verification corresponds to a quality level of the high-level database, the quality level of the high-level database being higher than the quality level of the low-level database.
In a second aspect of the embodiments of the present invention, there is also provided a data processing apparatus, including:
the verification module is used for responding to the transfer of the data to be processed to the low-level database and carrying out quality verification on the data to be processed;
the transfer module is used for transferring the data to be processed to a high-level database when the data to be processed passes the quality verification;
wherein the quality verification corresponds to a quality level of the high-level database, the quality level of the high-level database being higher than the quality level of the low-level database.
In a third aspect of the embodiments of the present invention, there is also provided a computer readable storage medium having instructions stored therein, which when run on a computer, cause the computer to perform the data processing method according to any one of the embodiments described above.
In a fourth aspect of the invention, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the data processing method according to any of the embodiments described above.
In the embodiment of the invention, quality verification is performed on the lower-level products stored in one database, and the products in that database that pass the quality verification are moved to another database, so that products of different quality levels are stored in different databases, and the identification information carried by a product reflects its quality level. In this way, different databases are set to correspond to different quality levels, the products stored in the database with the lower quality level are verified automatically, and the products that pass the quality verification are transferred to the database with the higher quality level; the products are thereby graded by means of the different databases, and the quality level of a product is verified automatically. Products screened from a database can therefore be guaranteed to meet the quality standard corresponding to that database, product release failures are avoided, and the success rate of product release is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below.
FIG. 1 is a flow chart of a data processing method in an embodiment of the invention;
FIG. 2 is a diagram of an application scenario of a data processing method according to an embodiment of the present invention;
FIG. 3 is a diagram of another application scenario of the data processing method according to the embodiment of the present invention;
FIG. 4 is a diagram of another application scenario of the data processing method according to the embodiment of the present invention;
FIG. 5 is a schematic diagram of a data processing apparatus according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described below with reference to the accompanying drawings in the embodiments of the present invention.
The data processing method provided by the embodiment of the invention is suitable for the release flow of software. It should be noted that the release process of the software includes, but is not limited to, two parts, continuous integration and continuous delivery, and the continuous integration and continuous delivery process includes at least the following nodes: a product forming node, a testing node, an online verification node and a product release node.
In the existing software release process there is usually only one product repository, which may also be referred to as a database, and it stores the products (also referred to as data) corresponding to all of the nodes. At each node of continuous integration and continuous delivery, the product of the quality level corresponding to that node has to be screened manually from the database; however, because products of every quality level are stored in the database, the manually screened product is not necessarily one that meets the quality level required by the node. Product release is therefore prone to failure, which lowers the success rate of product release.
For example, at the release node, the product selected from the database may be one that corresponds to the product forming node and has not yet been tested or verified; releasing such a product may cause the release of the software to fail.
Based on the technical problems existing in the prior art, the application provides the following technical ideas:
and setting different databases to correspond to different quality grades, automatically verifying the quality of the products stored in the database with lower quality grade, and transferring the products passing the quality verification to the database with higher quality grade. In this way, the database is matched with the quality grade of the stored data, so that products screened from the database can be ensured to meet the quality standard corresponding to the database.
Referring to fig. 1, fig. 1 is a flowchart of a data processing method according to an embodiment of the application. The data processing method provided by the application is applied to a data storage system comprising at least two databases. It should be noted that the data storage system provided by the application can be applied to any software release scenario, that is, the data storage system is used to store and manage data in the software release scenario, and this is not specifically limited.
For clarity of description of the embodiments, the description of the technical solution is given by taking the application of the data processing method to the data storage system as an example.
The data processing method provided by the embodiment comprises the following steps:
s101, performing quality verification on the data to be processed in response to transferring the data to be processed to a low-level database.
In this step, the data storage system comprises at least 2 databases, each database corresponding to a different quality level, wherein the quality level is understood to be a preset level criterion.
Alternatively, the quality level of the database may be associated with the node to which the data stored in the database corresponds.
For example, the data stored in one database is the data corresponding to the test node, that is, the data to be tested, while the data stored in another database is the data corresponding to the release node, that is, the data to be released. In the release process of a product, the release node is the last processing node and the test node is an intermediate node, so the quality level of a database can be determined based on the order, within the overall release flow, of the node corresponding to the data it stores. In other words, the later that node appears in the overall flow, the higher the quality level corresponding to the database.
Alternatively, the quality level of the database may be related to identification information carried by the data stored in the database.
It should be understood that the identification information carried by the data may characterize the processing node where the data is located, where the data may use the identification information as a suffix name. In addition to the embodiment of using the identification information as a suffix, the data to be processed may also be made to carry the identification information in other manners. For example, the identification information may be inserted into a specified field (which may be optionally customized) of the data to be processed, an added field or a reserved blank field (or referred to as an extension field), which is not described in detail.
Based on the same principle, the quality grade of the database can be determined based on the sequence of the nodes corresponding to the data stored in the database in the whole release process.
It should be appreciated that in some embodiments, other custom settings may be made for the quality level of the database, not described further herein.
For any two databases of adjacent quality levels, the database with the higher quality level may be referred to as the high-level database, and the database with the lower quality level may be referred to as the low-level database.
And when the data to be processed is transferred to the low-level database, performing quality verification on the data to be processed.
It should be understood that the quality verification is a verification manner matched with the quality level of the high-level database, and high-level databases of different quality levels correspond to different quality verification schemes. If a piece of data to be processed passes the quality verification corresponding to a database, it indicates that the data to be processed meets the level standard corresponding to that database, and the data that passes the quality verification can then be stored in that database.
The low-level database may be the primary database, that is, the database with the lowest quality level; the low-level database may also be a non-primary database, that is, any of the graded databases other than the database of the highest level.
The data to be processed can be understood as the products mentioned above; for clarity of the technical solution, "data" in the following has the same meaning as "product".
S102, transferring the data to be processed to a high-level database when the data to be processed passes the quality verification.
In this step, when the data to be processed passes the quality verification, it means that the data to be processed meets the quality standard corresponding to the high-level database, and in this case, the data to be processed may be transferred to the high-level database.
For example, the data storage system includes 3 databases, a first database, a second database, and a third database, respectively, and the quality level of the third database is greater than the quality level of the second database, which is greater than the quality level of the first database. In this case, when the data to be processed is transferred to the first database, performing a first quality verification of the data to be processed, wherein the first quality verification is related to the quality level of the second database; when the data to be processed passes the first quality verification, transferring the data to be processed to a second database, and performing the second quality verification on the data to be processed, wherein the second quality verification is related to the quality grade of a third database; and when the data to be processed passes the second quality verification, transferring the data to be processed to a third database.
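As an illustration only, the stepwise promotion in this three-database example can be sketched as follows; this is a minimal Python sketch, and the database objects, verification functions and return values are assumptions introduced for the example rather than part of the disclosed method.

    # Minimal sketch of the three-database promotion flow described above.
    def promote(data, first_db, second_db, third_db,
                first_quality_verification, second_quality_verification):
        # The data to be processed has just been transferred to the first database.
        first_db.add(data)
        # The first quality verification corresponds to the quality level of the second database.
        if not first_quality_verification(data):
            return "first_db"
        first_db.remove(data)
        second_db.add(data)
        # The second quality verification corresponds to the quality level of the third database.
        if not second_quality_verification(data):
            return "second_db"
        second_db.remove(data)
        third_db.add(data)
        return "third_db"

    # Example usage with plain sets standing in for the databases:
    db1, db2, db3 = set(), set(), set()
    final_db = promote("artifact-1.0", db1, db2, db3,
                       lambda d: True,   # assume the check for the second database passes
                       lambda d: True)   # assume the check for the third database passes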
In the embodiment of the invention, quality verification matching the higher quality level is performed on the lower-level products stored in one database, and the products in that database that pass the quality verification are moved to another database, so that products of different quality levels are stored in different databases. In this way, different databases are set to correspond to different quality levels, the products stored in the database with the lower quality level are verified automatically, and the products that pass the quality verification are transferred to the database with the higher quality level; the products are thereby graded by means of the different databases, and the quality level of a product is verified automatically. Products screened from a database can therefore be guaranteed to meet the quality standard corresponding to that database, product release failures are avoided, and the success rate of product release is improved.
In one possible implementation, based on the embodiment shown in fig. 1, the data may also be marked with identification information. Specifically, the data to be processed carries identification information representing its quality level, and during the transfer that follows a passed quality verification, part of the data content of the data to be processed is modified, specifically as follows: the data to be processed is added to the high-level database and deleted from the low-level database; and the identification information carried by the data to be processed is modified from the first identification to the second identification.
In this embodiment, data to be processed is added to the high-level database, and the data to be processed is deleted from the low-level database, so as to implement migration of the data to be processed.
The data to be processed carries identification information, and the identification information is used for representing the quality grade corresponding to the data to be processed. As described above, different databases correspond to different quality levels, so that the identification information carried by the data to be processed stored in the different databases is also different; in the migration process of the data to be processed, the identification information is modified from a first identification to a second identification, wherein the first identification is used for indicating the quality level of the low-level database, and the second identification is used for indicating the quality level of the high-level database.
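A minimal sketch of the transfer together with the relabeling of the identification information is shown below, under the assumption (borrowed from the PCA example later in this description) that the identification is carried as a filename suffix such as pca-dev or pca-ci; the directory layout and the helper name are hypothetical.

    import os

    def transfer_with_relabel(name, low_level_dir, high_level_dir,
                              first_identification, second_identification):
        """Move one artifact file from the low-level database (a directory here)
        to the high-level database and rewrite its identification suffix."""
        new_name = name
        # Modify the identification carried by the data from the first identification
        # (quality level of the low-level database) to the second identification
        # (quality level of the high-level database).
        if name.endswith(first_identification):
            new_name = name[: -len(first_identification)] + second_identification
        source = os.path.join(low_level_dir, name)
        target = os.path.join(high_level_dir, new_name)
        # Adding the data to the high-level database and deleting it from the
        # low-level database are both accomplished by this single move.
        os.replace(source, target)
        return target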
In this scenario, step S101, "performing quality verification on the data to be processed in response to transferring the data to be processed to the low-level database", may be triggered in at least the following ways. One possible implementation is: when the data to be processed is moved into the low-level database, performing quality verification on the data to be processed. Another possible implementation is: when the identification information carried by the data in the low-level database is modified to the first identification, performing quality verification on the data to be processed.
Specifically, in the first case the quality verification is performed when the data to be processed is moved into the low-level database; the quality verification relates to the quality level of the high-level database corresponding to that low-level database, and at this point the identification information carried by the data to be processed may not yet have been modified to the first identification. In the second case the quality verification is performed when the identification information carried by the data to be processed in the low-level database is modified to the first identification; at this point the identification information carried by the data has already been modified to the first identification.
The following specifically describes a quality verification process of data to be processed: it will be appreciated that the data to be processed stored in the databases of different quality levels can only be quality verified using a quality verification method that matches the quality level of the database. In specific implementation, the test cases matched with the databases of each quality level can be preset in advance, so that the quality of the data to be processed is verified through the test cases corresponding to each database.
In an alternative embodiment, step S101 may be implemented as follows: in response to transferring the data to be processed to the low-level database, querying a preset quality level list for the low-level database and determining the high-level database corresponding to the low-level database; determining the test cases corresponding to the high-level database; and performing quality verification on the data to be processed by using those test cases.
It should be appreciated that the data storage system stores a quality level table in which the quality level relationships between the databases are maintained. The low-level database is looked up in the quality level table to obtain the database whose quality level is the adjacent level above that of the low-level database, and that database is determined to be the high-level database.
The data storage system of this embodiment is also provided in advance with a plurality of test cases, each corresponding to the quality level of a database, so that the test cases are layered. The corresponding test cases can then be used to quality-test the data to be processed, and when the data passes the quality test it is automatically transferred to the database with the higher quality level, which raises the quality level corresponding to the data. Automatic promotion of the data to be processed is thereby realized.
It should be understood that, in the process of raising the quality level of the data to be processed, the level can only be raised step by step; the data reaches the highest level only after passing the quality tests corresponding to each quality level in turn, and its quality level cannot be raised by skipping levels.
It is to be appreciated that one test case corresponds to one quality level, and one quality level may correspond to one or more test cases. In other words, in the process of performing quality verification on the data to be processed, one test case corresponding to the high-level database may be used for quality verification, and a plurality of test cases corresponding to the high-level database may be used for quality verification.
It will be appreciated that if the database in which the data to be processed is located is the highest quality level database, i.e. there is no higher level database in the data storage system than the current database, then there is no need to perform a higher level quality verification of the data to be processed in this case.
For example, the data storage system has three databases, namely a first database, a second database, and a third database, wherein the quality level of the second database is higher than that of the first database, and the quality level of the third database is higher than that of the second database. In this case, for the data to be processed stored in the first database, only the test case corresponding to the second database can be used to perform quality verification on the data to be processed.
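The quality level list lookup and the layered test cases described above might look roughly like the following sketch; the table contents and the concrete test cases are assumptions used only for illustration.

    # Ordered from the lowest to the highest quality level (assumed contents).
    QUALITY_LEVEL_TABLE = ["first_db", "second_db", "third_db"]

    # One or more test cases are bound to the quality level of each higher-level database.
    TEST_CASES = {
        "second_db": [lambda data: "content" in data],           # e.g. smoke-level checks
        "third_db":  [lambda data: data.get("tested") is True],  # e.g. functional checks
    }

    def verify_for_promotion(data, low_level_db):
        index = QUALITY_LEVEL_TABLE.index(low_level_db)
        if index + 1 >= len(QUALITY_LEVEL_TABLE):
            return None  # highest-level database: no higher-level verification is needed
        high_level_db = QUALITY_LEVEL_TABLE[index + 1]
        # The data passes only if every test case of the adjacent high-level database passes.
        passed = all(case(data) for case in TEST_CASES[high_level_db])
        return high_level_db if passed else None

    print(verify_for_promotion({"content": b"...", "tested": True}, "first_db"))  # second_db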
The quality verification according to the embodiment of the present invention may include, but is not limited to, at least one of the following: smoke test verification, functional test verification or online verification.
Smoke test verification, also called a smoke test, is used to confirm that the code in the data to be processed runs as expected and does not break the stability of the software version to be released; functional test verification, also called a functional test, is used to verify whether the data to be processed implements particular functions; online verification is used to verify whether the data to be processed is fit for online release.
In some possible implementations, the quality verification may also include at least one of a full test or a version acceptance test. The full test is used to test all the parameters in a product; the version acceptance test tests part of the parameters in a grayscale package and is a rapid test process that ensures the basic functions and contents of the software are correct and complete.
It should be understood that, depending on the application scenario of the data storage system, the way the quality levels of the databases are divided and the quality verification corresponding to each database can both be customized. For ease of understanding, the CI and CD flows under different operating systems in a software release scenario are described later and are not detailed here.
As described above, when the data to be processed passes the quality verification matching the quality level of the high-level database, the data to be processed is automatically transferred from the low-level database to the high-level database.
During the transfer of the data to be processed, the transfer can be implemented in the following ways: transferring the data to be processed in the low-level database to the high-level database through a data interface; or transferring the data to be processed in the low-level database to the high-level database by using a privilege-elevation thread.
In this embodiment, one optional implementation sets a unidirectional transmission interface between databases of adjacent quality levels; the data transmission direction of the unidirectional transmission interface is fixed, from the low-level database to the high-level database.
Wherein the unidirectional transport interface may be a physical communication interface. For example, in an application scenario, a first server may be understood as a low-level database and a second server as a high-level database, in which case the first server and the second server communicate via a unidirectional transport interface.
The unidirectional transport interface may also be a virtual communication interface, for example, in an application scenario, the unidirectional transport interface may be a communication protocol that communicates between databases, the communication protocol specifying that data can only be transported from a low-level database to a high-level database.
Therefore, by using the unidirectional data interface between adjacent databases to transfer the data to be processed from the low-level database to the high-level database, the method is fast, safe and convenient, occupies few system resources, and helps maintain system stability.
In the above embodiment, the number of data interfaces is related to the number of databases, and optionally, the unidirectional transmission interfaces may be disposed between all adjacent databases. For example, the data storage system includes 4 databases, a first database, a second database, a third database, and a fourth database, respectively. In this case, the data interface between the first database and the second database may be set as a first data interface, the data interface between the second database and the third database is a second data interface, and the data interface between the third database and the fourth database is a third data interface, where the first data interface, the second data interface, and the third data interface are all unidirectional transmission interfaces.
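A toy sketch of such unidirectional transmission interfaces chained between adjacent databases is given below; dictionaries stand in for the databases, and the class and variable names are assumptions.

    class UnidirectionalInterface:
        """One-way transfer interface between two adjacent databases; the transfer
        direction is fixed from the low-level database to the high-level database."""

        def __init__(self, low_level_db, high_level_db):
            self.low = low_level_db
            self.high = high_level_db

        def transfer(self, key):
            # Only low -> high is possible; no reverse method is provided.
            self.high[key] = self.low.pop(key)

    # Chaining the interfaces between adjacent databases, as in the four-database example:
    db1, db2, db3, db4 = {"pkg": b"..."}, {}, {}, {}
    first_interface = UnidirectionalInterface(db1, db2)
    second_interface = UnidirectionalInterface(db2, db3)
    third_interface = UnidirectionalInterface(db3, db4)
    first_interface.transfer("pkg")   # db1 -> db2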
In addition to the foregoing embodiments, data migration may also be implemented using a bi-directional transport interface; alternatively, the unidirectional transmission interface may be provided only between some adjacent databases, and other types of data interfaces, such as bidirectional transmission interfaces, may be provided between the other adjacent databases, which will not be described herein.
In another alternative embodiment, a privilege-elevation thread may be used to transfer the data to be processed from the low-level database to the high-level database; the privilege-elevation thread raises the storage permission of the data to be processed, so that the high-level database, which has the higher quality level, can store the data once its permission has been raised.
In some embodiments, a single privilege-elevation thread may be used to accomplish the transfer of the data to be processed across all databases.
For example, in an application scenario, the data storage system includes a first database, a second database, and a third database, where the quality level of the first database is lower than that of the second database, and the quality level of the second database is lower than that of the third database. In this case, the privilege-elevation thread may be used to transfer the data to be processed stored in the first database to the second database; after the data passes the quality verification corresponding to the third database, the same privilege-elevation thread is used to transfer it from the second database to the third database.
In other embodiments, different privilege-elevation threads may be used to transfer the data to be processed between different databases.
Following the example above, a first privilege-elevation thread may be used to transfer the data to be processed stored in the first database to the second database; after the data passes the quality verification corresponding to the third database, a second privilege-elevation thread is used to transfer it from the second database to the third database.
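A rough sketch of a privilege-elevation thread performing one such transfer is shown below; the permission field and the thread-per-transfer arrangement are assumptions made only for illustration.

    import threading

    def elevate_and_transfer(data, low_level_db, high_level_db, required_level):
        # Raise the storage permission carried by the data so that the high-level
        # database, which has the higher quality level, is allowed to store it.
        data["permission"] = required_level
        high_level_db.append(data)
        low_level_db.remove(data)

    # One privilege-elevation thread per transfer between adjacent databases:
    first_db = [{"name": "artifact", "permission": 1}]
    second_db = []
    first_elevation_thread = threading.Thread(
        target=elevate_and_transfer, args=(first_db[0], first_db, second_db, 2))
    first_elevation_thread.start()
    first_elevation_thread.join()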
It should be understood that other migration manners are equally applicable to the data to be processed; for example, the data to be processed may be migrated manually from the low-level database to the high-level database, or migrated automatically from the low-level database to the high-level database after it passes the quality verification corresponding to the high-level database.
It should be appreciated that the data storage system may contain more than one database whose quality level is higher than that of the low-level database; in that case, the quality verification corresponding to the adjacent higher-level database needs to be performed on the data to be processed in the low-level database first.
On the basis of any one of the foregoing embodiments, in the embodiment of the present invention, the data to be processed may also be automatically generated based on the processing of the code management tool, and stored in the present data storage system.
In an alternative embodiment, the method may further comprise the steps of: compiling and packaging the received binary files to generate data to be processed; and storing the data to be processed into a primary database.
In this embodiment, a developer may use a code management tool to generate a binary file and send it to the data storage system; the system may use a preset continuous delivery tool, such as a software development kit (Software Development Kit, SDK), to compile and package the received binary file and generate the data to be processed.
It should be understood that the data to be processed generated by compiling and packaging the binary file can be reused and stored in different databases during the software release process, but the MD5 code it carries remains unchanged, i.e. the data content of the data to be processed is not tampered with during subsequent processing.
The code management tool includes, but is not limited to, Gitlab, and the continuous delivery tool includes, but is not limited to, Jenkins.
Because the data to be processed is obtained by the system compiling and packaging the received binary file, its quality level is the lowest; it is therefore stored in the primary database, which is the database with the lowest quality level among all the databases.
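A sketch of generating the data to be processed from a received binary file and storing it in the primary database follows; the packaging is reduced to wrapping the bytes, and the names and the pca-dev suffix (taken from the PCA example below) are assumptions.

    import hashlib

    def package_and_store(binary_content: bytes, primary_database: dict, name: str = "app"):
        # "Compile and package" is reduced here to wrapping the received binary content;
        # in a real pipeline a continuous delivery tool such as Jenkins would build it.
        artifact = {
            "name": f"{name}.pca-dev",  # identification of the lowest quality level
            "content": binary_content,
            # The MD5 carried by the data stays the same through all later transfers,
            # so later stages can check that the content has not been tampered with.
            "md5": hashlib.md5(binary_content).hexdigest(),
        }
        primary_database[artifact["name"]] = artifact
        return artifact

    primary_database = {}
    package_and_store(b"\x7fELF...binary...", primary_database)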
In the following, the data processing method provided by the embodiment of the present invention is described by taking CI and CD processes in different operating systems as examples in a software release scenario.
It should be appreciated that the operating systems to which the software applies include, but are not limited to, the PCA system (an operating system applied to single-chip microcomputers), the android system and the IOS system. In the development scenarios of application software for different operating systems, the software release flows differ, so the rules for grading the databases, for quality verification of the data to be processed, and for the test cases involved in the quality verification also differ.
In the following, it is specifically described how to perform quality classification and quality verification on data to be processed in the case that the operating systems carried by the data storage system are respectively a PCA system, an android system and an IOS system.
Optionally, the data storage system at least includes: a first database, a second database, a third database and a fourth database, where the first database stores data generated by compiling binary files, the second database stores data that has passed smoke test verification, the third database stores data that has passed functional test verification, and the fourth database stores data that has passed online verification;
the quality level of the first database is lower than that of the second database, the quality level of the second database is lower than that of the third database, and the quality level of the third database is lower than that of the fourth database.
In this embodiment, taking a data storage system that includes four databases as an example, the following describes how the data to be processed is quality-graded and quality-verified when the data storage system runs different operating systems:
it should be noted that, in different operating systems, the identification information carried by the data to be processed stored in different databases is different, please refer to the following table:
Table 1
Operating system   First database   Second database   Third database     Fourth database
PCA                pca-dev          pca-ci            pca-staging        pca-release
Android            android-dev      android-ci        android-staging    android-release
IOS                ios-dev          ios-ci            ios-staging        ios-release
As shown in Table 1, in the PCA system the identification information carried by the data to be processed stored in the first database is pca-dev, that is, the data to be processed uses pca-dev as its suffix name; the identification information carried by the data to be processed stored in the second database is pca-ci; in the third database it is pca-staging; and in the fourth database it is pca-release.
Thus, the level corresponding to the data to be processed can be determined from the identification information it carries: if the suffix name of the data to be processed ends in dev, the data corresponds to the lowest level; if the suffix name ends in release, the data corresponds to the highest level.
In the android system, the identification information carried by the data to be processed stored in the first database is android-dev; in the second database it is android-ci; in the third database it is android-staging; and in the fourth database it is android-release.
In the IOS system, the identification information carried by the data to be processed stored in the first database is ios-dev; in the second database it is ios-ci; in the third database it is ios-staging; and in the fourth database it is ios-release.
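Table 1 and the level-from-suffix rule described above can be captured in a small lookup; the following is an illustrative sketch only.

    # Suffix carried by the data in each of the four databases, per operating system (Table 1).
    SUFFIXES = {
        "pca":     ["pca-dev", "pca-ci", "pca-staging", "pca-release"],
        "android": ["android-dev", "android-ci", "android-staging", "android-release"],
        "ios":     ["ios-dev", "ios-ci", "ios-staging", "ios-release"],
    }

    def quality_level(suffix: str, operating_system: str) -> int:
        # Level 1 is the lowest ("-dev"), level 4 the highest ("-release").
        return SUFFIXES[operating_system].index(suffix) + 1

    assert quality_level("pca-dev", "pca") == 1              # lowest level
    assert quality_level("android-release", "android") == 4  # highest level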
The following specifically describes a process of performing quality classification and quality verification on data to be processed and a process of software release in the case that the operating system is a PCA system:
Referring to fig. 2, in the PCA system, a continuous integration flow and a continuous release flow of software may be implemented synchronously.
As shown in fig. 2, the data storage system carrying the PCA operating system uses four databases in the software release process; the first database in fig. 2 may be referred to as database 11, the second database as database 12, the third database as database 13, and the fourth database as database 14. The system compiles and packages the received binary files to generate the data to be processed and stores it in the database 11, which has the lowest quality level.
A smoke test is performed on the data to be processed in the database 11; if the data to be processed passes the smoke test, it is labeled accordingly. As shown in fig. 2, the data to be processed that passes the smoke test may be labeled QL2, and the labeled data is transferred to the database 12.
Here, labeling the data to be processed may be understood as modifying the identification information it carries, for example modifying its suffix name to pca-ci, or adding a further piece of identification information to a field of the data to be processed in addition to modifying the identification information it carries.
A full test is performed on the data to be processed in the database 12; if the data to be processed passes the full test, it is labeled accordingly. As shown in fig. 2, the data to be processed that passes the full test may be labeled QL3, and the labeled data is transferred to the database 13.
Online verification is performed on the data to be processed in the database 13; if the data to be processed passes the online verification, it is labeled accordingly. As shown in fig. 2, the data to be processed that passes the online verification may be labeled QL4, and the labeled data is transferred to the database 14.
The configuration of the data to be processed in the database 14 is modified, channel products are generated and distributed, and the software release is completed.
It should be appreciated that the level to which the data to be processed corresponds may be determined based on the tag carried by the data to be processed. As described above, for example, if the tag carried by the data to be processed is QL1, it may be determined that the data to be processed corresponds to the lowest level; if the label carried by the data to be processed is QL4, the highest grade corresponding to the data to be processed can be determined.
It should be appreciated that the level to which the data to be processed corresponds may be determined based on the suffix name of the data to be processed. As described above, for example, if the suffix name of the data to be processed is dev, it may be determined that the data to be processed corresponds to the lowest level; if the suffix name carried by the data to be processed is release, the highest grade corresponding to the data to be processed can be determined.
The following specifically describes a process of performing quality classification and quality verification on data to be processed and a process of software release under the condition that an operating system is an android system:
Referring to fig. 3: as shown in fig. 3, the data storage system carrying the android operating system also uses four databases in the software release process, namely a first database, a second database, a third database and a fourth database. To distinguish them from the four databases involved in the PCA system, the first database may be referred to as database 21, the second database as database 22, the third database as database 23, and the fourth database as database 24.
It should be understood that in the android system, the continuous integration flow and the continuous release flow of the software are performed step by step, and the continuous integration flow is executed first and then the continuous release flow is executed.
The continuous integration flow focuses on fully testing the code submitted by users, so as to ensure that the product has no defects after going online; the continuous release flow then focuses on releasing the product.
In the continuous integration flow, the data to be processed obtained after compiling and packaging all binary files submitted by users is also called a common package.
In the continuous release flow, the application store of the android system imposes restrictions on released products, and some configuration parameters in a product are not supported by the application store; therefore, in the continuous release flow only part of the binary files submitted by users can be compiled and packaged into the data to be processed, which is also called a grayscale package.
The common package differs from the grayscale package in that the common package contains all the configuration parameters in the binary files submitted by the user, whereas the grayscale package contains only part of those configuration parameters in order to meet the release requirements. A common package can therefore be used to fully test the code.
In the following, a continuous integration flow corresponding to the android system is described.
The common package is stored in the database 21 and labeled accordingly; as shown in fig. 3, the common package stored in the database 21 may be labeled CIQL1.
A smoke test is performed on the common package in the database 21; if the common package passes the smoke test, it is labeled accordingly. As shown in fig. 3, the common package may be labeled CIQL2, and the labeled common package is transferred to the database 22.
A full test is performed on the common package in the database 22; if the common package passes the full test, it is labeled accordingly. As shown in fig. 3, the common package may be labeled CIQL3, and the labeled common package is transferred to the database 23.
After the common package passes the full test, the code submitted by the user is shown to be free of defects, and the continuous release flow can then be carried out.
The continuous release process corresponding to the android system is described below.
The grayscale package is stored in the database 21 and labeled accordingly; as shown in fig. 3, the grayscale package stored in the database 21 may be labeled QL1.
A smoke test is performed on the grayscale package in the database 21; if the grayscale package passes the smoke test, it is labeled accordingly. As shown in fig. 3, the grayscale package that passes the smoke test may be labeled QL2, and the grayscale package is transferred to the database 22.
A version acceptance test is performed on the grayscale package in the database 22; if the grayscale package passes the version acceptance test, it is labeled accordingly. As shown in fig. 3, the grayscale package that passes the version acceptance test may be labeled QL3, and the grayscale package is transferred to the database 23.
The grayscale package in the database 23 is subjected to online verification, and after the online verification is passed, the grayscale package is issued.
The successfully issued grayscale packets may be labeled with the corresponding labels, as shown in fig. 3, QL4 may be labeled for the successfully issued grayscale packets, and the grayscale packets may be stored in database 24.
The following specifically describes a flow of performing quality classification and quality verification on data to be processed and a flow of software release in the case that the operating system is an IOS system:
Referring to fig. 4: as shown in fig. 4, the data storage system carrying the IOS operating system also uses four databases in the software release process, namely a first database, a second database, a third database and a fourth database. To distinguish them from the four databases involved in the PCA system and the android system, the first database may be referred to as database 31, the second database as database 32, the third database as database 33, and the fourth database as database 34.
In the IOS system, the continuous integration flow and the continuous release flow of the software are performed step by step, and the continuous integration flow is executed first and then the continuous release flow is executed.
The continuous integration flow corresponding to the IOS system is consistent with the continuous integration flow corresponding to the android system, and will not be repeated here.
The continuous release flow corresponding to the IOS system is described below.
The binary files submitted by users are compiled and packaged using a software development kit to generate the data to be processed, which may be called an external test package.
The external test package is stored in the database 31 and labeled accordingly; as shown in fig. 4, the external test package in the database 31 may be labeled QL1.
A smoke test is performed on the external test package in the database 31; if the external test package passes the smoke test, it is labeled accordingly. As shown in fig. 4, the external test package that passes the smoke test may be labeled QL2.
The external test package in the database 31 is uploaded to a test platform, where a version acceptance test is performed. If the external test package passes the version acceptance test, it is labeled accordingly; as shown in fig. 4, the external test package that passes the version acceptance test may be labeled QL3 and stored in the database 33.
Online verification is performed on the external test package in the database 33, and after the online verification passes, the external test package is released on the test platform.
The successfully released external test package may be labeled accordingly; as shown in fig. 4, the successfully released external test package may be labeled QL4 and stored in the database 34.
In summary, four databases are set up in the CI and CD flows of software release; the quality level corresponding to each database differs, and the test cases corresponding to each database also differ, so that both the quality levels and the test cases are layered.
In the CI and CD flows, a quality test is performed automatically on the data to be processed stored in the database with the lower quality level, and the data is transferred to the database with the higher quality level on the condition that it passes the corresponding quality test. The quality level corresponding to the data to be processed is thereby raised, promotion of the data's level is realized, and a flowing CI and CD pipeline is formed.
For any node in the CI and CD flows, the node can only process the data to be processed in its associated database and cannot process the data to be processed in other databases, which realizes an access-control effect. For the software release node, the data to be processed in its associated database conforms to the corresponding quality level, so the released data is ensured to meet the release requirements.
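The access-control effect described above amounts to checking that a node may only read from its associated database; the following minimal sketch, with an assumed association table, illustrates the idea.

    # Assumed association between processing nodes and the databases they may access.
    NODE_DATABASE = {"test_node": "database_13", "release_node": "database_14"}

    def fetch_for_node(node, database_name, storage):
        # A node can only process the data in its own associated database.
        if NODE_DATABASE.get(node) != database_name:
            raise PermissionError(f"{node} may not access {database_name}")
        return storage.get(database_name, [])

    storage = {"database_13": ["pkg.pca-staging"], "database_14": ["pkg.pca-release"]}
    print(fetch_for_node("release_node", "database_14", storage))  # ['pkg.pca-release']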
Based on the above processing, the scheme realizes automatic handling of the CI and CD flows, which helps reduce the adverse effect of manual operation on the flows. At the same time, the hierarchical data management mode of automatic promotion and verification ensures that the quality level of the data matches that of the database it belongs to: data is promoted automatically, and without skipping levels, only when it meets the entry and exit criteria, so at a given stage only data suited to that quality level can be obtained, and data cannot be obtained across stages or at a degraded level. This benefits development, operation and maintenance management and improves research, development, operation and maintenance efficiency.
As shown in fig. 5, an embodiment of the present invention further provides a data processing apparatus 200, including:
a verification module 201, configured to perform quality verification on data to be processed in response to transferring the data to be processed to a low-level database;
a transferring module 202, configured to transfer the data to be processed to a high-level database when the data to be processed passes the quality verification.
Optionally, the transferring module 202 is further configured to:
adding the data to be processed in the high-level database, and deleting the data to be processed in the low-level database;
and modifying the identification information carried by the data to be processed from the first identification to the second identification.
Optionally, the transferring module 202 is further configured to:
transferring data to be processed in the low-level database to the high-level database through a data interface; or alternatively, the process may be performed,
and transferring the data to be processed in the low-level database to the high-level database by using a privilege-elevation thread.
Optionally, the verification module 201 is further configured to:
in response to transferring data to be processed to a low-level database, inquiring the low-level database by using a preset quality level list, and determining a high-level database corresponding to the low-level database;
determining a test case corresponding to the high-level database;
and carrying out quality verification on the data to be processed by using the test case.
Optionally, the verification module 201 is further configured to:
when the data to be processed moves to the low-level database, performing quality verification on the data to be processed; or alternatively, the process may be performed,
and when the identification information carried by the data to be processed in the low-level database is modified to be the first identification, carrying out quality verification on the data to be processed.
Optionally, the data processing apparatus 200 further includes:
the generation module is used for compiling and packaging the received binary files to generate data to be processed;
and the storage module is used for storing the data to be processed into a primary database.
The embodiment of the invention also provides an electronic device, as shown in fig. 6, which comprises a processor 301, a communication interface 302, a memory 303 and a communication bus 304, wherein the processor 301, the communication interface 302 and the memory 303 complete communication with each other through the communication bus 304.
A memory 303 for storing a computer program;
a processor 301, configured to execute a program stored in a memory 303, where the processor 301 executes the data processing method according to any one of the above embodiments.
The communication bus mentioned for the above electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in the figure, but this does not mean that there is only one bus or only one type of bus.
The communication interface is used for communication between the terminal and other devices.
The memory may include random access memory (Random Access Memory, RAM) or non-volatile memory (non-volatile memory), such as at least one disk memory. Optionally, the memory may also be at least one memory device located remotely from the aforementioned processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In yet another embodiment of the present invention, a computer-readable storage medium is provided, in which instructions are stored; when the instructions are run on a computer, they cause the computer to perform the data processing method according to any one of the above embodiments.
In a further embodiment of the present invention, a computer program product comprising instructions is also provided; when run on a computer, the instructions cause the computer to perform the data processing method according to any one of the above embodiments.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the present invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, by wire (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., a Solid State Disk (SSD)), and so on.
It is noted that relational terms such as "first" and "second" are used herein only to distinguish one entity or action from another, and do not necessarily require or imply any actual relationship or order between such entities or actions. Moreover, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
In this specification, each embodiment is described in a related manner; identical or similar parts of the embodiments may be referred to each other, and each embodiment focuses on its differences from the other embodiments. In particular, the system embodiments are described relatively briefly because they are substantially similar to the method embodiments; for relevant details, reference may be made to the corresponding description of the method embodiments.
The foregoing description covers only the preferred embodiments of the present invention and is not intended to limit the protection scope of the present invention. Any modification, equivalent replacement, improvement, and the like made within the spirit and principle of the present invention shall be included in the protection scope of the present invention.

Claims (9)

1. A data processing method, characterized in that the method is applied to a data storage system comprising at least 2 databases, any two of which differ in quality level, the method comprising the steps of:
in response to transferring data to be processed to a low-level database, performing quality verification on the data to be processed;
when the data to be processed passes the quality verification, transferring the data to be processed to a high-level database;
wherein the quality verification corresponds to a quality level of the high-level database, the quality level of the high-level database being higher than the quality level of the low-level database; said transferring said data to be processed to a high-level database comprises:
transferring the data to be processed in the low-level database to the high-level database through a data interface, wherein the data interface is a unidirectional transmission interface between the low-level database and the high-level database; or
transferring the data to be processed in the low-level database to the high-level database by using a right-raising thread, wherein the right-raising thread is used for transmitting data from the database with the lower quality level to the database with the higher quality level;
wherein said performing quality verification on the data to be processed in response to transferring the data to be processed to the low-level database comprises:
in response to transferring the data to be processed to the low-level database, querying a preset quality level list for the low-level database and determining the high-level database corresponding to the low-level database, wherein the quality level list reflects a mapping relation between databases and quality levels;
determining a test case corresponding to the high-level database, wherein the quality level of the high-level database is adjacent to the quality level of the low-level database;
and carrying out quality verification on the data to be processed by using the test case.
2. The method of claim 1, wherein transferring the data to be processed to a high-level database comprises:
adding the data to be processed in the high-level database, and deleting the data to be processed in the low-level database;
modifying the identification information carried by the data to be processed from a first identification to a second identification, wherein the first identification is used for indicating the quality level of the low-level database, and the second identification is used for indicating the quality level of the high-level database.
3. The method according to any one of claims 1-2, wherein the quality verification comprises at least one of: smoke test verification, functional test verification, or online verification.
4. The method according to claim 3, wherein the data storage system comprises at least: a first database, a second database, a third database and a fourth database, wherein the first database is used for storing data generated based on binary file compiling, the second database is used for storing data that has passed smoke test verification, the third database is used for storing data that has passed functional test verification, and the fourth database is used for storing data that has passed online verification;
the quality level of the first database is lower than that of the second database, the quality level of the second database is lower than that of the third database, and the quality level of the third database is lower than that of the fourth database.
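Purely as an illustration of this four-tier arrangement (and not part of the claim itself), a preset quality level list might be configured as in the Python sketch below; the database names and dictionary layout are assumptions.

from typing import Optional

# Hypothetical quality level list for the four-database arrangement above:
# each database, its quality level, and the verification that gates entry to it.
PIPELINE = {
    "first_db":  {"level": 1, "gate": "build (compiled from binary files)"},
    "second_db": {"level": 2, "gate": "smoke test verification"},
    "third_db":  {"level": 3, "gate": "functional test verification"},
    "fourth_db": {"level": 4, "gate": "online verification"},
}

def next_database(current: str) -> Optional[str]:
    """Return the database with the adjacent higher quality level, if any."""
    target = PIPELINE[current]["level"] + 1
    return next((db for db, cfg in PIPELINE.items() if cfg["level"] == target), None)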
5. The method of claim 1, wherein the performing quality verification on the data to be processed in response to transferring the data to be processed to a low-level database comprises:
when the data to be processed is moved to the low-level database, performing quality verification on the data to be processed; or
when the identification information carried by the data to be processed in the low-level database is modified to the first identification, performing quality verification on the data to be processed.
6. The method according to claim 1, wherein the method further comprises:
compiling and packaging the received binary files to generate data to be processed;
and storing the data to be processed into a primary database, wherein the quality level of the primary database is the lowest among the quality levels of all databases.
7. A data processing apparatus, the apparatus comprising:
the verification module is used for responding to the transfer of the data to be processed to the low-level database and carrying out quality verification on the data to be processed;
the transfer module is used for transferring the data to be processed to a high-level database when the data to be processed passes the quality verification;
wherein the quality verification corresponds to a quality level of the high-level database, the quality level of the high-level database being higher than the quality level of the low-level database;
wherein the transfer module is further configured to:
transferring the data to be processed in the low-level database to the high-level database through a data interface, wherein the data interface is a unidirectional transmission interface between the low-level database and the high-level database; or
transferring the data to be processed in the low-level database to the high-level database by using a right-raising thread, wherein the right-raising thread is used for transmitting data from the database with the lower quality level to the database with the higher quality level;
wherein the verification module is further configured to:
in response to transferring the data to be processed to a low-level database, querying a preset quality level list for the low-level database and determining the high-level database corresponding to the low-level database, wherein the quality level list reflects a mapping relation between databases and quality levels;
determining a test case corresponding to the high-level database, wherein the quality level of the high-level database is adjacent to the quality level of the low-level database;
and performing quality verification on the data to be processed by using the test case.
8. An electronic device, characterized by comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other through the communication bus;
a memory for storing a computer program;
a processor, configured to implement the data processing method according to any one of claims 1-6 when executing the program stored in the memory.
9. A computer-readable storage medium, on which a computer program is stored, characterized in that the program, when executed by a processor, implements the data processing method according to any one of claims 1-6.
CN202110281829.3A 2021-03-16 2021-03-16 Data processing method, device, electronic equipment and storage medium Active CN113010421B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110281829.3A CN113010421B (en) 2021-03-16 2021-03-16 Data processing method, device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110281829.3A CN113010421B (en) 2021-03-16 2021-03-16 Data processing method, device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113010421A CN113010421A (en) 2021-06-22
CN113010421B true CN113010421B (en) 2023-09-01

Family

ID=76408406

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110281829.3A Active CN113010421B (en) 2021-03-16 2021-03-16 Data processing method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113010421B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109726136A (en) * 2019-01-28 2019-05-07 上海达梦数据库有限公司 Test method, device, equipment and the storage medium of database
CN109800090A (en) * 2018-12-19 2019-05-24 北京仁科互动网络技术有限公司 A kind of data integrated system and method
CN110019145A (en) * 2018-06-19 2019-07-16 杭州数澜科技有限公司 A kind of multi-environment cascade method and apparatus of big data platform
CN110505198A (en) * 2019-07-05 2019-11-26 中国平安财产保险股份有限公司 A kind of checking request method, apparatus, computer equipment and storage medium
CN110543469A (en) * 2019-08-28 2019-12-06 贝壳技术有限公司 Database version management method and server
CN111159016A (en) * 2019-12-16 2020-05-15 深圳前海微众银行股份有限公司 Standard detection method and device
CN111857722A (en) * 2020-06-23 2020-10-30 远光软件股份有限公司 DevOps quality assurance system and method based on three-library mode

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6425113B1 (en) * 2000-06-13 2002-07-23 Leigh C. Anderson Integrated verification and manufacturability tool
US8682910B2 (en) * 2010-08-03 2014-03-25 Accenture Global Services Limited Database anonymization for use in testing database-centric applications
US9600504B2 (en) * 2014-09-08 2017-03-21 International Business Machines Corporation Data quality analysis and cleansing of source data with respect to a target system
US9952965B2 (en) * 2015-08-06 2018-04-24 International Business Machines Corporation Test self-verification with integrated transparent self-diagnose

Also Published As

Publication number Publication date
CN113010421A (en) 2021-06-22

Similar Documents

Publication Publication Date Title
CN108595157B (en) Block chain data processing method, device, equipment and storage medium
CN109491763B (en) System deployment method and device and electronic equipment
CN110597531B (en) Distributed module upgrading method and device and storage medium
CN109582452B (en) Container scheduling method, scheduling device and electronic equipment
CN107844588A (en) A kind of processing method of data dictionary, device, storage medium and processor
CN109683930A (en) Air-conditioning equipment programme upgrade method, device, system and household appliance
CN106529229A (en) Permission data processing method and apparatus
CN107798064A (en) Page processing method, electronic equipment and computer-readable recording medium
US11520620B2 (en) Electronic device and non-transitory storage medium implementing test path coordination method
CN109614159B (en) Method and device for distributing and importing planning tasks
CN113010421B (en) Data processing method, device, electronic equipment and storage medium
CN103984633B (en) A kind of bank main passes down the automatization test system of operation
CN110990356A (en) Real-time automatic capacity expansion method and system for logical mirror image
CN110806979B (en) Interface return value checking method, device, equipment and storage medium
CN107491460B (en) Data mapping method and device of adaptation system
CN112965697A (en) Code file generation method and device and electronic equipment
CN111800446B (en) Scheduling processing method, device, equipment and storage medium
CN113867778A (en) Method and device for generating mirror image file, electronic equipment and storage medium
CN116264550A (en) Resource slice processing method and device, storage medium and electronic device
CN112084006A (en) Mirror image packet processing method and device and electronic equipment
CN112613567A (en) User label management method, system, device and storage medium
CN112463596A (en) Test case data processing method, device and equipment and processing equipment
CN110769064A (en) System, method and equipment for offline message pushing
US20190057139A1 (en) Mass data movement mechanism
CN115878793B (en) Multi-label document classification method, device, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant