WO2021047506A1 - Data statistical analysis system and method, and computer-readable storage medium - Google Patents
Data statistical analysis system and method, and computer-readable storage medium
- Publication number
- WO2021047506A1 (PCT/CN2020/114009)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- user information
- task
- spark
- data
- algorithm
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/25—Integrating or interfacing systems involving database management systems
- G06F16/254—Extract, transform and load [ETL] procedures, e.g. ETL data flows in data warehouses
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/22—Indexing; Data structures therefor; Storage structures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/22—Indexing; Data structures therefor; Storage structures
- G06F16/2282—Tablespace storage structures; Management thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/24—Querying
- G06F16/245—Query processing
- G06F16/2458—Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/25—Integrating or interfacing systems involving database management systems
Definitions
- the embodiments of the present invention relate to, but are not limited to, the field of data statistical analysis, and more specifically to a data statistical analysis system, method, and computer-readable storage medium.
- the embodiment of the present invention provides a data statistical analysis system, including an ADMA application algorithm unit; the ADMA application algorithm unit includes: a table modeling module, used to create spark tables; an algorithm modeling module, used to provide an SQL algorithm; a first task modeling module, used to create spark tasks according to the SQL algorithm; and a second task modeling module, used to create ETL tasks.
- the embodiment of the present invention also provides a data statistical analysis method, including: an ADMA application algorithm unit creates spark tables, spark tasks, and ETL tasks; wherein the spark tables include: a user information spark table, a user information preprocessing spark table, and a total user count indicator spark table; the spark tasks include: a user information data mapping task, a user information preprocessing task, and a total user count indicator task; and the ETL tasks include: a user information ETL task.
- An embodiment of the present invention also provides a computer-readable storage medium that stores a computer program, where the computer program is used to execute the data statistical analysis method in the embodiment of the present invention.
- FIG. 1 is a schematic structural diagram of a data statistical analysis system provided by an embodiment of the present invention
- FIG. 2 is a schematic structural diagram of a data statistical analysis system provided by another embodiment of the present invention.
- FIG. 3 is a schematic flowchart of a data statistical analysis method according to an embodiment of the present invention.
- FIG. 4 is a schematic flowchart of a data statistical analysis method provided by another embodiment of the present invention.
- FIG. 1 is a schematic structural diagram of a data statistical analysis system provided by an embodiment of the present invention. As shown in FIG. 1, the system includes: an ADMA application algorithm unit;
- the ADMA application algorithm unit includes:
- a table modeling module, used to create spark tables;
- an algorithm modeling module, used to provide an SQL algorithm;
- a first task modeling module, used to create spark tasks according to the SQL algorithm;
- a second task modeling module, used to create ETL tasks.
- the spark tables include: a user information spark table, a user information preprocessing spark table, and a total user count indicator spark table;
- the spark tasks include: a user information data mapping task, a user information preprocessing task, and a total user count indicator task;
- the ETL tasks include: a user information ETL task.
- the table modeling module is specifically configured to create the user information spark table, the user information preprocessing spark table, and the total user count indicator spark table according to the table XML file and the summary-table XML file;
- the algorithm modeling module is specifically configured to instantiate the SQL algorithm in SQL according to the configured algorithm .sql file, algorithm .xml file, and algorithm .conf file;
- the first task modeling module is specifically configured to create the user information data mapping task, the user information preprocessing task, and the total user count indicator task with the instantiated SQL algorithm according to the task XML file and the virtual-task XML file;
- the second task modeling module is specifically configured to create the user information ETL task according to the ETL rules.
- the table XML file and the summary-table XML file, the algorithm .sql, .xml, and .conf files, the task XML file and the virtual-task XML file, and the ETL rules all use a standardized version.
- the system also includes: a data collection unit and a storage unit;
- the data collection unit is used to import the raw user information data and output it to the ADMA application algorithm unit;
- the ADMA application algorithm unit further includes: an ETL module and a calculation module;
- the ETL module is configured to call the user information ETL task to process the raw user information data and output the result to the calculation module;
- the calculation module is configured to call the user information data mapping task to map the processed data into the user information spark table, call the user information preprocessing task to preprocess the processed data and write it to the user information preprocessing spark table, and call the total user count indicator task to aggregate the processed data into indicator data and write it to the total user count indicator spark table; the user information spark table, the user information preprocessing spark table, and the total user count indicator spark table are then saved to the storage unit.
- the user information data mapping task, the user information preprocessing task, and the total user count indicator task are data-driven, and the user information ETL task is executed on a timer.
- the system also includes: a management portal;
- the ADMA application algorithm unit is also used to synchronize the user information spark table, the user information preprocessing spark table, and the total user count indicator spark table saved in the storage unit, together with the user information data mapping task, the user information preprocessing task, the total user count indicator task, and the user information ETL task, to the management portal;
- the management portal is used to classify and display the user information data mapping task, the user information preprocessing task, the total user count indicator task, and the user information ETL task.
- the management portal is also used to perform lineage analysis on the user information spark table, the user information preprocessing spark table, the total user count indicator spark table, the user information data mapping task, the user information preprocessing task, the total user count indicator task, and the user information ETL task, to monitor task execution status, and to re-execute tasks.
- the management portal is also used to support supplementary collection of the raw user information data when it is not imported in time.
- Fig. 2 is a schematic structural diagram of a data statistical analysis system provided by another embodiment of the present invention. As shown in Fig. 2, the system includes:
- an ADMA application algorithm unit, a management portal, a data collection unit, and a storage unit;
- the ADMA application algorithm unit is used to provide an ADMA application algorithm, and the implementation of the ADMA application algorithm includes table modeling, algorithm modeling, and task modeling;
- Table modeling includes the table XML file and the summary-table XML file.
- the table XML file corresponds to the table-creation script that generates the table, and the summary-table XML file describes information such as the table's creation path (a sketch of this step follows below).
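- For illustration only, the following Scala sketch (using the scala-xml library) shows how a table XML file of this kind could be turned into a spark table-creation script; the element and attribute names (table, column, name/type/format) and the user_info example table are assumptions, not the schema actually used by this application.
```scala
import org.apache.spark.sql.SparkSession
import scala.xml.XML

// Minimal sketch: derive a CREATE TABLE script from a hypothetical table XML
// and a summary-table XML, then run it so the spark table exists.
object TableModelingSketch {
  def main(args: Array[String]): Unit = {
    val tableXml =
      """<table name="user_info" format="parquet">
        |  <column name="user_id"   type="STRING"/>
        |  <column name="province"  type="STRING"/>
        |  <column name="stat_date" type="STRING"/>
        |</table>""".stripMargin
    val summaryXml = """<summary table="user_info" path="/data/adma/user_info"/>"""

    val table   = XML.loadString(tableXml)
    val summary = XML.loadString(summaryXml)

    val name    = table \@ "name"
    val format  = table \@ "format"
    val columns = (table \ "column")
      .map(c => s"""${c \@ "name"} ${c \@ "type"}""")
      .mkString(", ")
    val path    = summary \@ "path"   // creation path comes from the summary-table XML

    val createSql =
      s"CREATE TABLE IF NOT EXISTS $name ($columns) USING $format LOCATION '$path'"

    val spark = SparkSession.builder().appName("table-modeling").master("local[*]").getOrCreate()
    spark.sql(createSql)              // the generated script creates the spark table
    spark.stop()
  }
}
```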
- Algorithm modeling is implemented in SQL and requires configuring an algorithm .sql file, an algorithm .xml file, and an algorithm .conf file; these three files are related to each other and together form an instantiated SQL algorithm (a sketch of this instantiation follows below).
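- As a minimal sketch of the algorithm modeling just described, the following Scala snippet "instantiates" an SQL algorithm from the contents of a hypothetical algorithm .sql template and algorithm .conf parameter set; the placeholder syntax (@param@), the parameter names, and the table names are assumptions, since the application only states that the three files together form an instantiated SQL algorithm.
```scala
import org.apache.spark.sql.{DataFrame, SparkSession}

// Minimal sketch: substitute configured parameters into the .sql template and
// run the instantiated SQL algorithm on Spark.
object SqlAlgorithmSketch {
  // contents that would normally be read from the algorithm .sql file
  val sqlTemplate: String =
    """SELECT province, COUNT(DISTINCT user_id) AS user_total
      |FROM @src_table@
      |WHERE stat_date = '@stat_date@'
      |GROUP BY province""".stripMargin

  // contents that would normally be read from the algorithm .conf file
  val conf: Map[String, String] = Map(
    "src_table" -> "user_info_pre",
    "stat_date" -> "2020-09-08"
  )

  /** Instantiate the template with the configured parameters and execute it. */
  def run(spark: SparkSession): DataFrame = {
    val instantiatedSql = conf.foldLeft(sqlTemplate) {
      case (sql, (key, value)) => sql.replace(s"@$key@", value)
    }
    spark.sql(instantiatedSql)   // the instantiated SQL algorithm
  }
}
```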
- Task modeling includes task xml files and virtual task xml files. Tasks are generated based on xml files, including data-driven tasks and timed tasks.
- the ADMA application algorithm unit includes: a table modeling module, an algorithm modeling module, a first task modeling module, and a second task modeling module;
- a table modeling module, used to create spark tables;
- an algorithm modeling module, used to provide an SQL algorithm;
- a first task modeling module, used to create spark tasks according to the SQL algorithm;
- a second task modeling module, used to create ETL tasks.
- the spark tables include: a user information spark table, a user information preprocessing spark table, and a total user count indicator spark table;
- the spark tasks include: a user information data mapping task, a user information preprocessing task, and a total user count indicator task;
- the ETL tasks include: a user information ETL task.
- the table modeling module is specifically configured to create the user information spark table, the user information preprocessing spark table, and the total user count indicator spark table according to the table XML file and the summary-table XML file;
- the algorithm modeling module is specifically configured to instantiate the SQL algorithm in SQL according to the configured algorithm .sql file, algorithm .xml file, and algorithm .conf file;
- the first task modeling module is specifically configured to create the user information data mapping task, the user information preprocessing task, and the total user count indicator task with the instantiated SQL algorithm according to the task XML file and the virtual-task XML file;
- the second task modeling module is specifically configured to create the user information ETL task according to the ETL rules.
- the table XML file and the summary-table XML file, the algorithm .sql, .xml, and .conf files, the task XML file and the virtual-task XML file, and the ETL rules all use a standardized version.
- In this way, the ADMA application algorithm is standardized: when integrating external projects, only the jar package, the configuration files, the data table design, the algorithms, and the ETL rules need to be provided, which reduces development cost. The jar package and configuration files are the underlying components used to instantiate and run tasks; the data table design stores the table XML file and the summary-table XML file; the algorithm part stores the algorithm .xml file, the algorithm shell scripts, and so on; and the ETL rules store the ETL configuration files and scripts.
- the data collection unit is used to import the raw user information data and output it to the ADMA application algorithm unit;
- the ADMA application algorithm unit further includes: an ETL module and a calculation module;
- the ETL module is configured to call the user information ETL task to process the raw user information data and output the result to the calculation module;
- the calculation module is configured to call the user information data mapping task to map the processed data into the user information spark table, call the user information preprocessing task to preprocess the processed data and write it to the user information preprocessing spark table, and call the total user count indicator task to aggregate the processed data into indicator data and write it to the total user count indicator spark table; the user information spark table, the user information preprocessing spark table, and the total user count indicator spark table are then saved to the storage unit.
- the ADMA application algorithm unit is also used to synchronize the user information spark table, the user information preprocessing spark table, and the total user count indicator spark table saved in the storage unit, together with the user information data mapping task, the user information preprocessing task, the total user count indicator task, and the user information ETL task, to the management portal; a table is synchronized to the portal once it has been created.
- Task synchronization means that all tasks can be instantiated on a daily schedule and displayed on the management portal.
- the management portal is used to classify and display the user information data mapping task, the user information preprocessing task, the total user count indicator task, and the user information ETL task.
- For example, tasks are classified and displayed on the management portal by site (e.g., Sichuan), standard (e.g., vinsight), project (e.g., statistics server), task type (e.g., data mapping), and so on.
- the management portal is also used to perform lineage analysis on the user information spark table, the user information preprocessing spark table, the total user count indicator spark table, the user information data mapping task, the user information preprocessing task, the total user count indicator task, and the user information ETL task, to monitor task execution status, and to re-execute tasks.
- Lineage analysis means displaying the process intuitively on the portal: for example, to generate the total user count indicator spark table, the user information preprocessing spark table must first contain data, otherwise the task is not executed, and this relationship is shown intuitively on the portal (a minimal sketch of such a dependency check follows below).
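- As a minimal sketch of the dependency implied by this lineage example, the following Scala snippet runs the total user count indicator task only when the user information preprocessing spark table already contains data; the table names and the partition column are assumptions used only for illustration.
```scala
import org.apache.spark.sql.SparkSession

// Minimal sketch: the indicator task is skipped unless its upstream
// (the user information preprocessing table) has data for the period.
object IndicatorDependencySketch {
  def runTotalUserCountTask(spark: SparkSession, statDate: String): Unit = {
    val upstreamRows = spark.table("user_info_pre")
      .filter(s"stat_date = '$statDate'")
      .limit(1)
      .count()                     // cheap existence check on the upstream table

    if (upstreamRows == 0) {
      // no upstream data yet: do not execute, as the lineage rule describes
      println(s"user_info_pre has no data for $statDate, indicator task not executed")
    } else {
      spark.sql(
        s"""INSERT OVERWRITE TABLE user_total_indicator
           |SELECT province, COUNT(DISTINCT user_id) AS user_total, '$statDate' AS stat_date
           |FROM user_info_pre
           |WHERE stat_date = '$statDate'
           |GROUP BY province""".stripMargin)
    }
  }
}
```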
- the management portal is also used to support supplementary collection of the raw user information data when it is not imported in time.
- For example, the raw user information data is transmitted to the data collection module on a daily schedule; if a network interruption or another problem prevents the data from being transmitted and the network is later restored, the raw data can be transmitted to the data collection module again, and the handling of such late-arriving data is supported.
- Supplementary collection covers the tasks of the ETL module as well as tasks such as data mapping.
- the management portal also supports analysis or management functions such as dynamic resource management, data quality, and operation and maintenance KPI alarms, realizing the data governance capability of the big data platform.
- the technical solution provided by the embodiments of the present invention provides a data statistical analysis system based on Spark and ADMA, which realizes data management of a big data platform and provides an underlying support platform to integrate other projects so that each project can develop different data statistical analysis indicators, reducing the development and maintenance cost of the statistical analysis system.
- FIG. 3 is a schematic flowchart of a data statistical analysis method provided by an embodiment of the present invention. As shown in FIG. 3, the method includes:
- Step 301: the ADMA application algorithm unit creates spark tables, spark tasks, and ETL tasks;
- the spark tables include: a user information spark table, a user information preprocessing spark table, and a total user count indicator spark table;
- the spark tasks include: a user information data mapping task, a user information preprocessing task, and a total user count indicator task;
- the ETL tasks include: a user information ETL task.
- creating the spark tables includes: creating the user information spark table, the user information preprocessing spark table, and the total user count indicator spark table according to the table XML file and the summary-table XML file;
- creating the spark tasks includes: instantiating the SQL algorithm in SQL according to the configured algorithm .sql file, algorithm .xml file, and algorithm .conf file, and creating the user information data mapping task, the user information preprocessing task, and the total user count indicator task with the instantiated SQL algorithm according to the task XML file and the virtual-task XML file;
- creating the ETL tasks includes: creating the user information ETL task according to the ETL rules.
- the table XML file and the summary-table XML file, the algorithm .sql, .xml, and .conf files, the task XML file and the virtual-task XML file, and the ETL rules all use a standardized version.
- the user information data mapping task, the user information preprocessing task, and the total user count indicator task are data-driven, and the user information ETL task is executed on a timer.
- the method also includes:
- import the raw user information data and call the user information ETL task to process it;
- call the user information data mapping task to map the processed data into the user information spark table;
- call the user information preprocessing task to preprocess the processed data and write it to the user information preprocessing spark table;
- call the total user count indicator task to aggregate the processed data into indicator data and write it to the total user count indicator spark table; then save the user information spark table, the user information preprocessing spark table, and the total user count indicator spark table.
- the method also includes:
- the user information data mapping task, the user information preprocessing task, the total user count indicator task, and the user information ETL task are displayed in categories.
- the method also includes: performing lineage analysis on the user information spark table, the user information preprocessing spark table, the total user count indicator spark table, the user information data mapping task, the user information preprocessing task, the total user count indicator task, and the user information ETL task, monitoring task execution status, and re-executing tasks.
- the method also includes: supporting supplementary collection of the raw user information data when it is not imported in time.
- FIG. 4 is a schematic flowchart of a data statistical analysis method provided by another embodiment of the present invention.
- the method includes:
- Step 401: the ADMA application algorithm unit creates spark tables, spark tasks, and ETL tasks;
- the spark tables include: a user information spark table, a user information preprocessing spark table, and a total user count indicator spark table;
- the spark tasks include: a user information data mapping task, a user information preprocessing task, and a total user count indicator task;
- the ETL tasks include: a user information ETL task;
- Specifically, the ADMA application algorithm includes the jar package, the table design, the algorithms, and so on. After the service starts, table modeling and task modeling are performed:
- table modeling generates the user information spark table, the user information preprocessing spark table, and the total user count indicator spark table;
- task modeling generates the user information ETL task, the user information data mapping task, the user information preprocessing task, and the total user count indicator task.
- Step 402: the data collection unit imports the raw user information data and outputs it to the ADMA application algorithm unit;
- Step 403: the ETL module of the ADMA application algorithm unit calls the user information ETL task to process the raw user information data and outputs the result to the calculation module;
- Specifically, the ETL module calls the user information ETL task instantiated in step 401 to process the raw user information data; after data extraction, data accuracy verification, and data conversion, non-compliant records are removed, compliant records are retained, and the result is output to the calculation module (a minimal sketch of this cleansing step follows below);
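- A minimal sketch of this cleansing step is shown below, assuming a delimited raw file and hypothetical field names (user_id, province, stat_date); it extracts the raw records, drops the ones that fail basic accuracy checks, and converts the rest before handing them to the calculation module.
```scala
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions.{col, to_date, trim}

// Minimal ETL sketch: extraction, accuracy verification, conversion.
object UserInfoEtlSketch {
  def extractTransform(spark: SparkSession): DataFrame = {
    val raw = spark.read                         // data extraction
      .option("header", "true")
      .option("delimiter", "|")
      .csv("/data/in/user_info_raw")

    val verified = raw                           // data accuracy verification
      .filter(col("user_id").isNotNull && trim(col("user_id")) =!= "")
      .filter(to_date(col("stat_date"), "yyyyMMdd").isNotNull)

    verified                                     // data conversion; non-compliant records were removed above
      .withColumn("stat_date", to_date(col("stat_date"), "yyyyMMdd"))
      .select("user_id", "province", "stat_date")
  }
}
```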
- Step 404: the calculation module calls the user information data mapping task to map the processed data into the user information spark table, calls the user information preprocessing task to preprocess the processed data and write it to the user information preprocessing spark table, and calls the total user count indicator task to aggregate the processed data into indicator data and write it to the total user count indicator spark table; the user information spark table, the user information preprocessing spark table, and the total user count indicator spark table are then saved to the storage unit;
- Specifically, the calculation module calls the user information data mapping task instantiated in step 401 to map the data into the user information spark table of the storage module; it triggers the user information preprocessing task to write the data to the user information preprocessing spark table; and it triggers the total user count indicator task to write the indicator data to the total user count indicator spark table, which is stored as files in the storage unit, for example in HDFS (Hadoop Distributed File System); a minimal sketch of this step follows below;
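- A minimal sketch of this step is shown below; the table names, columns, and HDFS paths are assumptions used only to illustrate the mapping, preprocessing, aggregation, and save-to-HDFS sequence.
```scala
import org.apache.spark.sql.{DataFrame, SaveMode, SparkSession}

// Minimal sketch of step 404: mapping, preprocessing, aggregation, persistence.
object CalculationModuleSketch {
  def run(spark: SparkSession, cleaned: DataFrame, statDate: String): Unit = {
    // user information data mapping task: cleaned records land in the user info table
    cleaned.write.mode(SaveMode.Append).format("parquet")
      .save("hdfs:///data/adma/user_info")

    // user information preprocessing task: deduplicate per user for the day
    val pre = cleaned.dropDuplicates("user_id", "stat_date")
    pre.write.mode(SaveMode.Append).format("parquet")
      .save("hdfs:///data/adma/user_info_pre")

    // total user count indicator task: aggregate the preprocessed data
    pre.createOrReplaceTempView("user_info_pre_tmp")
    val indicator = spark.sql(
      s"""SELECT province, COUNT(DISTINCT user_id) AS user_total, '$statDate' AS stat_date
         |FROM user_info_pre_tmp
         |GROUP BY province""".stripMargin)
    indicator.write.mode(SaveMode.Overwrite).format("parquet")
      .save("hdfs:///data/adma/user_total_indicator")
  }
}
```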
- Step 405: the ADMA application algorithm unit synchronizes the user information spark table, user information preprocessing spark table, and total user count indicator spark table saved in the storage unit, together with the user information data mapping task, user information preprocessing task, total user count indicator task, and user information ETL task, to the management portal;
- Step 406: the management portal classifies and displays the user information data mapping task, the user information preprocessing task, the total user count indicator task, and the user information ETL task.
- On the management portal, one can see the task execution time, the task execution mode, and the raw data used by the task (that is, its lineage).
- For example, the user information ETL task is executed on a timer, while the user information data mapping task, the user information preprocessing task, and the total user count indicator task are data-driven; a minimal sketch of these two trigger modes follows below.
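- The following Scala sketch illustrates the two trigger modes under stated assumptions: the daily schedule and the trigger condition for the data-driven tasks (a _SUCCESS marker under a raw-data path) are hypothetical, since the application does not specify how the triggers are implemented.
```scala
import java.util.concurrent.{Executors, TimeUnit}
import org.apache.spark.sql.SparkSession

// Minimal sketch: a timed ETL task versus data-driven downstream tasks.
object TaskTriggerSketch {
  def schedule(spark: SparkSession): Unit = {
    val scheduler = Executors.newScheduledThreadPool(1)

    // timed execution: run the user information ETL task once a day
    scheduler.scheduleAtFixedRate(() => runUserInfoEtl(spark), 0, 24, TimeUnit.HOURS)

    // data-driven execution: poll for newly arrived data and trigger the mapping,
    // preprocessing, and indicator tasks as soon as it is present
    scheduler.scheduleAtFixedRate(() => {
      val fs = org.apache.hadoop.fs.FileSystem.get(spark.sparkContext.hadoopConfiguration)
      val marker = new org.apache.hadoop.fs.Path("/data/in/user_info_raw/_SUCCESS")
      if (fs.exists(marker)) runDataDrivenTasks(spark)
    }, 0, 5, TimeUnit.MINUTES)
  }

  def runUserInfoEtl(spark: SparkSession): Unit = { /* ETL task body */ }
  def runDataDrivenTasks(spark: SparkSession): Unit = { /* mapping, preprocessing, indicator */ }
}
```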
- the method may also include:
- the management portal performs lineage analysis on the user information spark table, the user information preprocessing spark table, the total user count indicator spark table, the user information data mapping task, the user information preprocessing task, the total user count indicator task, and the user information ETL task, monitors task execution status, and re-executes tasks;
- Specifically, lineage analysis is performed on the instantiated tasks, covering the relationships between the tasks, the raw data, and the spark tables; task execution status is monitored, for example whether a task has executed and whether it succeeded or failed; and tasks can be re-executed, for example tasks that have not executed, executed successfully, or failed can all be executed again.
- For example, one can log in to the management portal of the big data platform, view the total user count indicator task, perform lineage analysis, and identify the tasks that failed; for a failed task, the failure cause can be viewed and the task re-executed.
- the relationship between each task and its data (the spark tables) is displayed automatically on the portal once the spark tables and tasks have been instantiated.
- Lineage analysis here means that, after a task goes wrong, the location of the problem can be analyzed from the relationships displayed on the portal. Tasks that executed successfully or have not yet executed can be re-executed without any preconditions. In addition, supplementary data collection is supported for the abnormal case where raw data is not imported into the statistical analysis system in time, and other fault analysis and resolution operations can also be performed through the management portal.
- the technical solution provided by the embodiment of the present invention provides a data statistical analysis method based on Spark and ADMA.
- the ADMA application algorithm uses a standardized version, which unifies the data processing mechanism and makes it easy to integrate different statistical projects: code only needs to be developed in the directory corresponding to the ADMA application algorithm.
- the ADMA service generates timed tasks and data-driven tasks from the code in that directory, which eliminates the fragmentation across projects and greatly reduces the development workload.
- the big data platform management portal provides unified metadata management, visual task monitoring, visual lineage analysis, dynamic resource management, data quality, operation and maintenance KPI alarms, and other analysis or management functions, realizing the data governance capability of the big data platform and reducing the development and maintenance cost of the statistical analysis system.
- In another embodiment of the present invention, a computer-readable storage medium is provided, which stores a computer program, where the computer program is used to execute the data statistical analysis method in the foregoing embodiments.
- the technical solution provided by the embodiment of the present invention standardizes the ADMA application algorithm, greatly reduces the development workload, and reduces the development and maintenance cost of the statistical analysis system.
- Such software may be distributed on a computer-readable medium, and the computer-readable medium may include a computer storage medium (or a non-transitory medium) and a communication medium (or a transitory medium).
- the term computer storage medium includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for storing information (such as computer-readable instructions, data structures, program modules, or other data).
- Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technologies, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and can be accessed by a computer.
- Communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Databases & Information Systems (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Software Systems (AREA)
- Fuzzy Systems (AREA)
- Mathematical Physics (AREA)
- Probability & Statistics with Applications (AREA)
- Computational Linguistics (AREA)
- Debugging And Monitoring (AREA)
Abstract
A data statistical analysis system and method, and a computer-readable storage medium. The system includes an ADMA application algorithm unit, which includes: a table modeling module, used to create spark tables; an algorithm modeling module, used to provide an SQL algorithm; a first task modeling module, used to create spark tasks according to the SQL algorithm; and a second task modeling module, used to create ETL tasks.
Description
Cross-reference to related applications
This application is filed on the basis of Chinese patent application No. 201910856752.0, filed on September 11, 2019, and claims priority to that Chinese patent application, the entire contents of which are incorporated herein by reference.
Embodiments of the present invention relate to, but are not limited to, the field of data statistical analysis, and more specifically to a data statistical analysis system, a data statistical analysis method, and a computer-readable storage medium.
With the advance of communication network technology, from 3G (3rd generation) to 4G (4th generation) and from 4G to 5G (5th generation), the traffic consumed by users keeps growing, and the demand for faster and more stable services becomes ever more pronounced. Operators therefore strive to launch faster and more stable services. As the number of users increases and the scale of the business keeps expanding, more and more business data is generated, and operators need more and more data statistical analysis indicators to monitor the business and guarantee its stable operation.
As a result, a large number of data statistical analysis projects have emerged, such as statistics servers, log servers, and operation and maintenance servers. Although these projects can meet the needs of the major operators, they are clearly fragmented from one another. For example, the log server invokes shell scripts through crontab timed jobs to perform data analysis and writes the results to an ES index, while the statistics server writes its results to oracle or Gbase data tables; to re-execute a data analysis task, the log server requires logging in to the linux server and manually running the shell script, whereas the statistics server requires logging in to the database and manually executing a stored procedure, and so on.
It can be seen that these statistical analysis projects involve many modules (ES, oracle, Gbase, and so on) and each has its own underlying data processing mechanism. Delivering these projects often requires excessive manpower and leads to duplicated development, making the development and maintenance cost of the statistical analysis system very high.
Summary of the invention
In view of this, an embodiment of the present invention provides a data statistical analysis system, including an ADMA application algorithm unit; the ADMA application algorithm unit includes: a table modeling module, used to create spark tables; an algorithm modeling module, used to provide an SQL algorithm; a first task modeling module, used to create spark tasks according to the SQL algorithm; and a second task modeling module, used to create ETL tasks.
An embodiment of the present invention also provides a data statistical analysis method, including: an ADMA application algorithm unit creates spark tables, spark tasks, and ETL tasks; the spark tables include a user information spark table, a user information preprocessing spark table, and a total user count indicator spark table; the spark tasks include a user information data mapping task, a user information preprocessing task, and a total user count indicator task; and the ETL tasks include a user information ETL task.
An embodiment of the present invention also provides a computer-readable storage medium that stores a computer program, where the computer program is used to execute the data statistical analysis method in the embodiments of the present invention.
Other features and advantages of the present invention will be set forth in the following description, and will in part become apparent from the description or be understood by practicing the present invention. The objectives and other advantages of the present invention can be realized and obtained through the structures particularly pointed out in the description, the claims, and the drawings.
The drawings are provided for a further understanding of the technical solution of the present invention and constitute a part of the description; together with the embodiments of the present invention, they are used to explain the technical solution of the present invention and do not limit it.
FIG. 1 is a schematic structural diagram of a data statistical analysis system provided by an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a data statistical analysis system provided by another embodiment of the present invention;
FIG. 3 is a schematic flowchart of a data statistical analysis method provided by an embodiment of the present invention;
FIG. 4 is a schematic flowchart of a data statistical analysis method provided by another embodiment of the present invention.
To make the objectives, technical solutions, and advantages of the present invention clearer, the embodiments of the present invention are described in detail below with reference to the drawings. It should be noted that, where there is no conflict, the embodiments of this application and the features in the embodiments may be combined with each other arbitrarily.
The steps shown in the flowcharts of the drawings may be executed in a computer system such as a set of computer-executable instructions. Moreover, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be executed in an order different from the one here.
FIG. 1 is a schematic structural diagram of a data statistical analysis system provided by an embodiment of the present invention. As shown in FIG. 1, the system includes: an ADMA application algorithm unit;
the ADMA application algorithm unit includes:
a table modeling module, used to create spark tables;
an algorithm modeling module, used to provide an SQL algorithm;
a first task modeling module, used to create spark tasks according to the SQL algorithm;
a second task modeling module, used to create ETL tasks.
The spark tables include: a user information spark table, a user information preprocessing spark table, and a total user count indicator spark table;
the spark tasks include: a user information data mapping task, a user information preprocessing task, and a total user count indicator task;
the ETL tasks include: a user information ETL task.
The table modeling module is specifically configured to create the user information spark table, the user information preprocessing spark table, and the total user count indicator spark table according to the table XML file and the summary-table XML file;
the algorithm modeling module is specifically configured to instantiate the SQL algorithm in SQL according to the configured algorithm .sql file, algorithm .xml file, and algorithm .conf file;
the first task modeling module is specifically configured to create the user information data mapping task, the user information preprocessing task, and the total user count indicator task with the instantiated SQL algorithm according to the task XML file and the virtual-task XML file;
the second task modeling module is specifically configured to create the user information ETL task according to the ETL rules.
The table XML file and the summary-table XML file, the algorithm .sql, .xml, and .conf files, the task XML file and the virtual-task XML file, and the ETL rules all use a standardized version.
The system further includes:
a data collection unit and a storage unit;
the data collection unit is used to import the raw user information data and output it to the ADMA application algorithm unit;
the ADMA application algorithm unit further includes: an ETL module and a calculation module;
the ETL module is configured to call the user information ETL task to process the raw user information data and output the result to the calculation module;
the calculation module is configured to call the user information data mapping task to map the processed data into the user information spark table, call the user information preprocessing task to preprocess the processed data and write it to the user information preprocessing spark table, and call the total user count indicator task to aggregate the processed data into indicator data and write it to the total user count indicator spark table; the user information spark table, the user information preprocessing spark table, and the total user count indicator spark table are then saved to the storage unit.
The user information data mapping task, the user information preprocessing task, and the total user count indicator task are data-driven, and the user information ETL task is executed on a timer.
The system further includes: a management portal;
the ADMA application algorithm unit is also used to synchronize the user information spark table, the user information preprocessing spark table, and the total user count indicator spark table saved in the storage unit, together with the user information data mapping task, the user information preprocessing task, the total user count indicator task, and the user information ETL task, to the management portal;
the management portal is used to classify and display the user information data mapping task, the user information preprocessing task, the total user count indicator task, and the user information ETL task.
The management portal is also used to perform lineage analysis on the user information spark table, the user information preprocessing spark table, the total user count indicator spark table, the user information data mapping task, the user information preprocessing task, the total user count indicator task, and the user information ETL task, to monitor task execution status, and to re-execute tasks.
The management portal is also used to support supplementary collection of the raw user information data when it is not imported in time.
FIG. 2 is a schematic structural diagram of a data statistical analysis system provided by another embodiment of the present invention. As shown in FIG. 2, the system includes:
an ADMA application algorithm unit, a management portal, a data collection unit, and a storage unit;
the ADMA application algorithm unit is used to provide an ADMA application algorithm, and the implementation of the ADMA application algorithm includes table modeling, algorithm modeling, and task modeling.
Table modeling involves the table XML file and the summary-table XML file: the table XML file corresponds to the table-creation script that generates the table, and the summary-table XML file describes information such as the table's creation path.
Algorithm modeling is implemented in SQL and requires configuring an algorithm .sql file, an algorithm .xml file, and an algorithm .conf file; these three files are related to each other and together form an instantiated SQL algorithm.
Task modeling involves the task XML file and the virtual-task XML file; tasks, including data-driven tasks and timed tasks, are generated from the XML files.
Specifically, the ADMA application algorithm unit includes: a table modeling module, an algorithm modeling module, a first task modeling module, and a second task modeling module;
the table modeling module is used to create spark tables;
the algorithm modeling module is used to provide an SQL algorithm;
the first task modeling module is used to create spark tasks according to the SQL algorithm;
the second task modeling module is used to create ETL tasks.
The spark tables include: a user information spark table, a user information preprocessing spark table, and a total user count indicator spark table;
the spark tasks include: a user information data mapping task, a user information preprocessing task, and a total user count indicator task;
the ETL tasks include: a user information ETL task.
Specifically, the table modeling module is configured to create the user information spark table, the user information preprocessing spark table, and the total user count indicator spark table according to the table XML file and the summary-table XML file;
the algorithm modeling module is configured to instantiate the SQL algorithm in SQL according to the configured algorithm .sql file, algorithm .xml file, and algorithm .conf file;
the first task modeling module is configured to create the user information data mapping task, the user information preprocessing task, and the total user count indicator task with the instantiated SQL algorithm according to the task XML file and the virtual-task XML file;
the second task modeling module is configured to create the user information ETL task according to the ETL rules.
The table XML file and the summary-table XML file, the algorithm .sql, .xml, and .conf files, the task XML file and the virtual-task XML file, and the ETL rules all use a standardized version.
In this way, the ADMA application algorithm is standardized: when integrating external projects, only the jar package, the configuration files, the data table design, the algorithms, and the ETL rules need to be provided, which reduces development cost. The jar package and configuration files are the underlying components used to instantiate and run tasks; the data table design stores the table XML file and the summary-table XML file; the algorithm part stores the algorithm .xml file, the algorithm shell scripts, and so on; and the ETL rules store the ETL configuration files and scripts.
The data collection unit is used to import the raw user information data and output it to the ADMA application algorithm unit;
the ADMA application algorithm unit further includes: an ETL module and a calculation module;
the ETL module is configured to call the user information ETL task to process the raw user information data and output the result to the calculation module;
the calculation module is configured to call the user information data mapping task to map the processed data into the user information spark table, call the user information preprocessing task to preprocess the processed data and write it to the user information preprocessing spark table, and call the total user count indicator task to aggregate the processed data into indicator data and write it to the total user count indicator spark table; the user information spark table, the user information preprocessing spark table, and the total user count indicator spark table are then saved to the storage unit.
The ADMA application algorithm unit is also used to synchronize the user information spark table, user information preprocessing spark table, and total user count indicator spark table saved in the storage unit, together with the user information data mapping task, user information preprocessing task, total user count indicator task, and user information ETL task, to the management portal. A table is synchronized to the portal once it has been created, and task synchronization means that all tasks can be instantiated on a daily schedule and displayed on the management portal.
The management portal is used to classify and display the user information data mapping task, the user information preprocessing task, the total user count indicator task, and the user information ETL task.
For example, tasks are classified and displayed on the management portal by site (e.g., Sichuan), standard (e.g., vinsight), project (e.g., statistics server), task type (e.g., data mapping), and so on.
The management portal is also used to perform lineage analysis on the user information spark table, the user information preprocessing spark table, the total user count indicator spark table, the user information data mapping task, the user information preprocessing task, the total user count indicator task, and the user information ETL task, to monitor task execution status, and to re-execute tasks. Lineage analysis means displaying the process intuitively on the portal: for example, to generate the total user count indicator spark table, the user information preprocessing spark table must first contain data, otherwise the task is not executed, and this relationship is shown intuitively on the portal. In addition, re-executing a task requires no preconditions; if there is no data, the spark table simply remains unchanged after the task is re-executed. What is re-executed are the data mapping task, the data preprocessing task, and the total count indicator task, and the input of such a re-execution is not necessarily the raw data.
For example, lineage analysis is performed on the instantiated tasks to analyze the relationships between the various tasks, the raw data, and the spark tables; task execution status can also be monitored, for example whether a task has executed and whether it succeeded or failed; and tasks can be re-executed, for example tasks that have not executed, executed successfully, or failed can all be executed again.
The management portal is also used to support supplementary collection of the raw user information data when it is not imported in time. For example, the raw user information data is transmitted to the data collection module on a daily schedule; if a network interruption or another problem prevents the raw user information data from being transmitted and the network is later restored, the raw data can be transmitted to the data collection module again, and the handling of such late-arriving data is supported. Supplementary collection covers the tasks of the ETL module as well as tasks such as data mapping.
For example, supplementary collection is supported for the abnormal case where raw data is not imported into the statistical analysis system in time.
In addition, the management portal also supports analysis or management functions such as dynamic resource management, data quality, and operation and maintenance KPI alarms, realizing the data governance capability of the big data platform.
The technical solution provided by the embodiments of the present invention provides a data statistical analysis system based on Spark and ADMA, which realizes data management of a big data platform and provides an underlying support platform to integrate other projects so that each project can develop different data statistical analysis indicators, reducing the development and maintenance cost of the statistical analysis system.
FIG. 3 is a schematic flowchart of a data statistical analysis method provided by an embodiment of the present invention. As shown in FIG. 3, the method includes:
Step 301: the ADMA application algorithm unit creates spark tables, spark tasks, and ETL tasks;
the spark tables include: a user information spark table, a user information preprocessing spark table, and a total user count indicator spark table; the spark tasks include: a user information data mapping task, a user information preprocessing task, and a total user count indicator task; the ETL tasks include: a user information ETL task.
Creating the spark tables includes: creating the user information spark table, the user information preprocessing spark table, and the total user count indicator spark table according to the table XML file and the summary-table XML file;
creating the spark tasks includes: instantiating the SQL algorithm in SQL according to the configured algorithm .sql file, algorithm .xml file, and algorithm .conf file, and creating the user information data mapping task, the user information preprocessing task, and the total user count indicator task with the instantiated SQL algorithm according to the task XML file and the virtual-task XML file;
creating the ETL tasks includes: creating the user information ETL task according to the ETL rules.
The table XML file and the summary-table XML file, the algorithm .sql, .xml, and .conf files, the task XML file and the virtual-task XML file, and the ETL rules all use a standardized version.
The user information data mapping task, the user information preprocessing task, and the total user count indicator task are data-driven, and the user information ETL task is executed on a timer.
The method further includes:
importing the raw user information data and calling the user information ETL task to process the raw user information data;
calling the user information data mapping task to map the processed data into the user information spark table, calling the user information preprocessing task to preprocess the processed data and write it to the user information preprocessing spark table, and calling the total user count indicator task to aggregate the processed data into indicator data and write it to the total user count indicator spark table; then saving the user information spark table, the user information preprocessing spark table, and the total user count indicator spark table.
The method further includes:
classifying and displaying the user information data mapping task, the user information preprocessing task, the total user count indicator task, and the user information ETL task.
The method further includes:
performing lineage analysis on the user information spark table, the user information preprocessing spark table, the total user count indicator spark table, the user information data mapping task, the user information preprocessing task, the total user count indicator task, and the user information ETL task, monitoring task execution status, and re-executing tasks.
The method further includes:
supporting supplementary collection of the raw user information data when it is not imported in time.
FIG. 4 is a schematic flowchart of a data statistical analysis method provided by another embodiment of the present invention.
This embodiment is applied to the system shown in FIG. 2.
As shown in FIG. 4, the method includes:
Step 401: the ADMA application algorithm unit creates spark tables, spark tasks, and ETL tasks;
the spark tables include: a user information spark table, a user information preprocessing spark table, and a total user count indicator spark table; the spark tasks include: a user information data mapping task, a user information preprocessing task, and a total user count indicator task; the ETL tasks include: a user information ETL task;
Specifically, the ADMA application algorithm includes the jar package, the table design, the algorithms, and so on. After the service starts, table modeling generates the user information spark table, the user information preprocessing spark table, and the total user count indicator spark table, and task modeling generates the user information ETL task, the user information data mapping task, the user information preprocessing task, and the total user count indicator task.
Step 402: the data collection unit imports the raw user information data and outputs it to the ADMA application algorithm unit;
Step 403: the ETL module of the ADMA application algorithm unit calls the user information ETL task to process the raw user information data and outputs the result to the calculation module;
Specifically, the ETL module calls the user information ETL task instantiated in step 401 to process the raw user information data; after data extraction, data accuracy verification, and data conversion, non-compliant records are removed, compliant records are retained, and the result is output to the calculation module;
Step 404: the calculation module calls the user information data mapping task to map the processed data into the user information spark table, calls the user information preprocessing task to preprocess the processed data and write it to the user information preprocessing spark table, and calls the total user count indicator task to aggregate the processed data into indicator data and write it to the total user count indicator spark table; the user information spark table, the user information preprocessing spark table, and the total user count indicator spark table are then saved to the storage unit;
Specifically, the calculation module calls the user information data mapping task instantiated in step 401 to map the data into the user information spark table of the storage module; it triggers the user information preprocessing task to write the data to the user information preprocessing spark table; and it triggers the total user count indicator task to write the indicator data to the total user count indicator spark table, which is stored as files in the storage unit, for example in HDFS (Hadoop Distributed File System);
Step 405: the ADMA application algorithm unit synchronizes the user information spark table, user information preprocessing spark table, and total user count indicator spark table saved in the storage unit, together with the user information data mapping task, user information preprocessing task, total user count indicator task, and user information ETL task, to the management portal;
Step 406: the management portal classifies and displays the user information data mapping task, the user information preprocessing task, the total user count indicator task, and the user information ETL task.
Specifically, on the management portal one can see the task execution time, the task execution mode, the raw data used by the task (that is, its lineage), and so on. For example, the user information ETL task is executed on a timer, while the user information data mapping task, the user information preprocessing task, and the total user count indicator task are data-driven.
The method may further include:
the management portal performs lineage analysis on the user information spark table, the user information preprocessing spark table, the total user count indicator spark table, the user information data mapping task, the user information preprocessing task, the total user count indicator task, and the user information ETL task, monitors task execution status, and re-executes tasks;
Specifically, lineage analysis is performed on the instantiated tasks, covering the relationships between the tasks, the raw data, and the spark tables; task execution status is monitored, for example whether a task has executed and whether it succeeded or failed; and tasks can be re-executed, for example tasks that have not executed, executed successfully, or failed can all be executed again. For example, one can log in to the management portal of the big data platform, view the total user count indicator task, perform lineage analysis, and identify the tasks that failed; for a failed task, the failure cause can be viewed and the task re-executed. The relationship between each task and its data (the spark tables) is displayed automatically on the portal once the spark tables and tasks have been instantiated. Lineage analysis here means that, after a task goes wrong, the location of the problem can be analyzed from the relationships displayed on the portal. Tasks that executed successfully or have not yet executed can be re-executed without any preconditions.
In addition, supplementary data collection is supported for the abnormal case where raw data is not imported into the statistical analysis system in time. Other fault analysis and resolution operations can also be performed through the management portal.
The technical solution provided by the embodiments of the present invention provides a data statistical analysis method based on Spark and ADMA. First, the ADMA application algorithm uses a standardized version, which unifies the data processing mechanism and makes it easy to integrate different statistical projects: code only needs to be developed in the directory corresponding to the ADMA application algorithm, and the ADMA service generates timed tasks and data-driven tasks from the code in that directory, which eliminates the fragmentation across projects and greatly reduces the development workload. Second, the big data platform management portal provides unified metadata management, visual task monitoring, visual lineage analysis, dynamic resource management, data quality, operation and maintenance KPI alarms, and other analysis or management functions, realizing the data governance capability of the big data platform and reducing the development and maintenance cost of the statistical analysis system.
In another embodiment of the present invention, a computer-readable storage medium is provided, which stores a computer program, where the computer program is used to execute the data statistical analysis method in the above embodiments.
The technical solution provided by the embodiments of the present invention standardizes the ADMA application algorithm, greatly reduces the development workload, and reduces the development and maintenance cost of the statistical analysis system.
Those of ordinary skill in the art can understand that all or some of the steps in the methods disclosed above and the functional modules/units in the systems and apparatuses can be implemented as software, firmware, hardware, and appropriate combinations thereof. In a hardware implementation, the division between the functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be executed cooperatively by several physical components. Some or all of the components may be implemented as software executed by a processor, such as a digital signal processor or a microprocessor, or as hardware, or as an integrated circuit, such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage medium includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for storing information (such as computer-readable instructions, data structures, program modules, or other data). Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technologies, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and can be accessed by a computer. In addition, as is well known to those of ordinary skill in the art, communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.
Claims (11)
- A data statistical analysis system, comprising: an ADMA application algorithm unit; the ADMA application algorithm unit comprises: a table modeling module, used to create spark tables; an algorithm modeling module, used to provide an SQL algorithm; a first task modeling module, used to create spark tasks according to the SQL algorithm; and a second task modeling module, used to create ETL tasks.
- The system according to claim 1, wherein the spark tables comprise: a user information spark table, a user information preprocessing spark table, and a total user count indicator spark table; the spark tasks comprise: a user information data mapping task, a user information preprocessing task, and a total user count indicator task; and the ETL tasks comprise: a user information ETL task.
- The system according to claim 2, wherein the table modeling module is specifically configured to create the user information spark table, the user information preprocessing spark table, and the total user count indicator spark table according to the table XML file and the summary-table XML file; the algorithm modeling module is specifically configured to instantiate the SQL algorithm in SQL according to the configured algorithm .sql file, algorithm .xml file, and algorithm .conf file; the first task modeling module is specifically configured to create the user information data mapping task, the user information preprocessing task, and the total user count indicator task with the instantiated SQL algorithm according to the task XML file and the virtual-task XML file; and the second task modeling module is specifically configured to create the user information ETL task according to the ETL rules.
- The system according to claim 3, wherein the table XML file and the summary-table XML file, the configured algorithm .sql file, algorithm .xml file, and algorithm .conf file, the task XML file and the virtual-task XML file, and the ETL rules all use a standardized version.
- The system according to claim 3, wherein the system further comprises: a data collection unit and a storage unit; the data collection unit is used to import raw user information data and output it to the ADMA application algorithm unit; the ADMA application algorithm unit further comprises: an ETL module and a calculation module; the ETL module is configured to call the user information ETL task to process the raw user information data and output the result to the calculation module; the calculation module is configured to call the user information data mapping task to map the processed data into the user information spark table, call the user information preprocessing task to preprocess the processed data and write it to the user information preprocessing spark table, and call the total user count indicator task to aggregate the processed data into indicator data and write it to the total user count indicator spark table; and the user information spark table, the user information preprocessing spark table, and the total user count indicator spark table are then saved to the storage unit.
- The system according to claim 5, wherein the user information data mapping task, the user information preprocessing task, and the total user count indicator task are data-driven, and the user information ETL task is executed on a timer.
- The system according to claim 5, wherein the system further comprises: a management portal; the ADMA application algorithm unit is also used to synchronize the user information spark table, user information preprocessing spark table, and total user count indicator spark table saved in the storage unit, together with the user information data mapping task, user information preprocessing task, total user count indicator task, and user information ETL task, to the management portal; and the management portal is used to classify and display the user information data mapping task, the user information preprocessing task, the total user count indicator task, and the user information ETL task.
- The system according to claim 7, wherein the management portal is also used to perform lineage analysis on the user information spark table, the user information preprocessing spark table, the total user count indicator spark table, the user information data mapping task, the user information preprocessing task, the total user count indicator task, and the user information ETL task, to monitor task execution status, and to re-execute tasks.
- The system according to claim 7, wherein the management portal is also used to support supplementary collection of the raw user information data when it is not imported in time.
- A data statistical analysis method, comprising: an ADMA application algorithm unit creates spark tables, spark tasks, and ETL tasks; wherein the spark tables comprise: a user information spark table, a user information preprocessing spark table, and a total user count indicator spark table; the spark tasks comprise: a user information data mapping task, a user information preprocessing task, and a total user count indicator task; and the ETL tasks comprise: a user information ETL task.
- A computer-readable storage medium storing a computer program, wherein the computer program is used to execute the method according to claim 10.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910856752.0A CN112487068A (zh) | 2019-09-11 | 2019-09-11 | 数据统计分析系统及方法 |
CN201910856752.0 | 2019-09-11 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021047506A1 true WO2021047506A1 (zh) | 2021-03-18 |
Family
ID=74867208
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/114009 WO2021047506A1 (zh) | 2019-09-11 | 2020-09-08 | 数据统计分析系统、方法及计算机可读存储介质 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN112487068A (zh) |
WO (1) | WO2021047506A1 (zh) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170060977A1 (en) * | 2015-08-31 | 2017-03-02 | BloomReach, Inc. | Data preparation for data mining |
CN107103050A (zh) * | 2017-03-31 | 2017-08-29 | 海通安恒(大连)大数据科技有限公司 | 一种大数据建模平台及方法 |
CN108595473A (zh) * | 2018-03-09 | 2018-09-28 | 广州市优普计算机有限公司 | 一种基于云计算的大数据应用平台 |
CN109522341A (zh) * | 2018-11-27 | 2019-03-26 | 北京京东金融科技控股有限公司 | 实现基于sql的流式数据处理引擎的方法、装置、设备 |
CN109753531A (zh) * | 2018-12-26 | 2019-05-14 | 深圳市麦谷科技有限公司 | 一种大数据统计方法、系统、计算机设备及存储介质 |
-
2019
- 2019-09-11 CN CN201910856752.0A patent/CN112487068A/zh active Pending
-
2020
- 2020-09-08 WO PCT/CN2020/114009 patent/WO2021047506A1/zh active Application Filing
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170060977A1 (en) * | 2015-08-31 | 2017-03-02 | BloomReach, Inc. | Data preparation for data mining |
CN107103050A (zh) * | 2017-03-31 | 2017-08-29 | 海通安恒(大连)大数据科技有限公司 | 一种大数据建模平台及方法 |
CN108595473A (zh) * | 2018-03-09 | 2018-09-28 | 广州市优普计算机有限公司 | 一种基于云计算的大数据应用平台 |
CN109522341A (zh) * | 2018-11-27 | 2019-03-26 | 北京京东金融科技控股有限公司 | 实现基于sql的流式数据处理引擎的方法、装置、设备 |
CN109753531A (zh) * | 2018-12-26 | 2019-05-14 | 深圳市麦谷科技有限公司 | 一种大数据统计方法、系统、计算机设备及存储介质 |
Also Published As
Publication number | Publication date |
---|---|
CN112487068A (zh) | 2021-03-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10740196B2 (en) | Event batching, output sequencing, and log based state storage in continuous query processing | |
CN109997126B (zh) | 事件驱动提取、变换、加载(etl)处理 | |
CN107506451B (zh) | 用于数据交互的异常信息监控方法及装置 | |
US10216582B2 (en) | Recovery log analytics with a big data management platform | |
CN112910945A (zh) | 请求链路跟踪方法和业务请求处理方法 | |
WO2023082681A1 (zh) | 基于批流一体的数据处理方法、装置、计算机设备和介质 | |
US20170199903A1 (en) | System for backing out data | |
WO2021169275A1 (zh) | Sdn 网络设备访问方法、装置、计算机设备及存储介质 | |
CN115374102A (zh) | 数据处理方法及系统 | |
WO2020253314A1 (zh) | 分布式数据库的事务监控方法及装置、系统、存储介质 | |
US10915510B2 (en) | Method and apparatus of collecting and reporting database application incompatibilities | |
CN112434043A (zh) | 一种数据同步方法、装置、电子设备及介质 | |
CN114077518B (zh) | 数据快照方法、装置、设备及存储介质 | |
CN111338834B (zh) | 数据存储方法和装置 | |
US8930352B2 (en) | Reliance oriented data stream management system | |
US10129328B2 (en) | Centralized management of webservice resources in an enterprise | |
WO2021047506A1 (zh) | 数据统计分析系统、方法及计算机可读存储介质 | |
CN116737693A (zh) | 数据迁移方法及装置、电子设备和计算机可读存储介质 | |
CN111698109A (zh) | 监控日志的方法和装置 | |
CN111008202A (zh) | 分布式事务处理方法和框架 | |
CN114064658A (zh) | 集群中的维表更新方法及装置 | |
US20180063242A1 (en) | Method and apparatus for operating infrastructure layer in cloud computing architecture | |
CN112988806A (zh) | 一种数据处理的方法及装置 | |
CN114896347A (zh) | 一种数据处理方法、装置、电子设备及存储介质 | |
CN112241332A (zh) | 一种接口补偿的方法和装置 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20863952 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20863952 Country of ref document: EP Kind code of ref document: A1 |