CN117406960A - Low-code social data computing platform and device for agile analysis scene - Google Patents
- Publication number
- CN117406960A (application CN202311296667.6A)
- Authority
- CN
- China
- Prior art keywords
- data
- analysis
- module
- data source
- social
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F8/00—Arrangements for software engineering
- G06F8/20—Software design
- G06F8/24—Object-oriented
Abstract
The invention discloses a method for constructing a low-code data computing platform for agile social-network analysis scenarios, comprising five main modules: a multi-source data access module, a data preprocessing module, a social association diagram construction module, a data analysis module, and an analysis result customization output module. The output module periodically collects the corresponding analysis results to construct specific business outputs such as user portraits. The components are highly decoupled and easily extensible; each module provides a variety of operators to users in advance in low-code form, allowing flexible changes and high development efficiency, and serving the downstream analysis businesses of various social platforms. The method leverages emerging technologies such as micro-service functions and stream computing to achieve flexible access to data sources and rapid adaptation to data characteristics, thereby improving the efficiency and quality of data analysis.
Description
Technical Field
The invention relates to the technical field of computers, and in particular to a low-code social data computing platform and device for agile analysis scenarios.
Background
With the rapid development of social platforms, the massive data generated by large numbers of users poses challenges for data analysis. Data-source heterogeneity and the constantly changing nature of the data add further difficulty: business code must often be rewritten for each particular application scenario, even though the data processing in these business scenarios is largely similar.
Existing social platform data analysis systems and methods mostly adopt traditional data analysis architectures and techniques, which require extensive programming and debugging effort and demand a high skill level from developers. A low-code social platform data analysis processing method and system is therefore needed to improve the efficiency and quality of data analysis while reducing development cost and time.
Disclosure of Invention
Therefore, the invention provides a low-code social data computing platform and device for agile analysis scenarios, applied to a social data analysis system, characterized in that the system comprises a data source definition flow and a data source processing flow;
the data source definition flow defines social platform data source data and comprises 4 modules:
the data format definition module is used for defining the data format of the heterogeneous social platform and carrying out field binding on the specific relationship and attribute of the social platform;
the preprocessing flow definition module is used for defining the flow processing flow of the data source data preprocessing module by arranging a plurality of predefined data preprocessing functions or adding a user-defined preprocessing function by a user;
the analysis task scheduling module is used for selecting different analysis services according to different analysis requirements, based on the analysis services provided by the system analysis service center, and for combining and orchestrating the results;
the output result definition module is used for defining the format and the content of the output result according to the requirements of the user;
the data source processing flow runs the defined processing flow and comprises 5 modules:
the data access module uses a registration callback mechanism to flexibly access data sources in the form of micro-service functions; according to the definition given by the data format definition module, it performs preliminary cleaning and formatting on the received data and sends the data to the data preprocessing module for further processing;
the data preprocessing module is used for preprocessing continuously-added and unordered social platform data in a streaming calculation organization mode based on the function defined by the preprocessing flow definition module;
the social connection diagram construction module receives the data preprocessed by the data preprocessing module through the message subscription framework, decouples the data storage flow from the processing flow, constructs the data into a social connection diagram, and stores it in the graph database;
the data analysis module inputs the social association graph based on the selection of the analysis task arrangement module, and customizes and outputs a data analysis result;
the customized output module registers a corresponding data analysis module according to the analysis requirement of a user based on the definition of the output result definition module, and periodically collects corresponding analysis results to construct specific business output.
The control panel of the data source provides a drag-and-drop editing page for low-code editing and configuration. For each newly created data source, the control panel creates a corresponding data source controller, which uniformly manages the creation, query, update, and deletion of data sources. Each data source controller periodically sends heartbeat signals to the control panel to report its state, so that the data source manager can take timely response measures when a controller goes offline.
After a data source controller is created, it notifies the resource coordinator to allocate resources to specific flow modules. For example, after notifying the resource coordinator to create functions, the data source A controller creates preprocessing function 1 and processing function m on physical node 1 and physical node n respectively; multiple processing functions may coexist on one physical node. Each created function periodically sends heartbeat signals to its manager, reporting its own running state and that of its function container, and scale-out or scale-in is performed according to the load.
The invention has the technical effects that:
the method for constructing the low-code data computing platform for the social network agility analysis scene provided by the invention can agilely adapt to the rapidly transformed analysis service requirements for the social network, is based on massive and heterogeneous social platform data, performs service involvement in a low-code form, performs efficient and rapid processing and analysis, and finally outputs customized analysis results to meet the downstream service of various social platform analysis. The method comprises a multi-source data access module, a data preprocessing module, a social connection diagram construction module, a data analysis module and an analysis result customizing output module. By providing these modules with high decoupling and operators in low code form, the user is able to process and analyze data quickly and flexibly. Meanwhile, the method utilizes the emerging technologies such as micro-service functions, stream computing and the like to realize flexible access to the data source and rapid adaptation to the data characteristics, thereby improving the efficiency and quality of data analysis.
Drawings
FIG. 1 illustrates a low code data computing platform construction method architecture for agile analysis scenarios;
FIG. 2 data source creation process;
FIG. 3 data source controller architecture;
FIG. 4 is a flow of analysis and calculation of the whole data;
fig. 5 is a flow chart of analysis and calculation of the whole data.
Detailed Description
The following is a preferred embodiment of the present invention and a technical solution of the present invention is further described with reference to the accompanying drawings, but the present invention is not limited to this embodiment.
The invention provides a method for constructing a low-code data computing platform for agile analysis scenarios.
Specifically, the invention discloses a system for constructing a low-code data computing platform for agile social-network analysis scenarios, which comprises, for each data source, a data source definition flow and a data source processing flow:
specifically, the data source definition flow defines social platform data source data, and includes 4 modules:
the data format definition module defines the data formats of heterogeneous social platforms, including but not limited to text, picture, and video formats, and performs field binding for the platform's specific relations (like, share, comment) and attributes (account id, user nickname); for the remaining fields, users define the data types and meanings;
an example format is as follows:
the system definition field is a minimum set of basic attributes of the social platform data, and comprises a minimum information set required for constructing a correlation map:
text_fields: text fields are defined. In the example only one content field is included, the type of which is "text".
image_fields: a picture field is defined. In the example only one image url field is included, the type of which is "image".
video_fields: a video field is defined. In the example only one video_url field is included, the type of which is "video".
relationship_fields: relationship fields are defined, including praise, comment and share. In the example, the like, comment and share fields are defined, respectively, and their types are "relation". These relationship fields may be bound to other related fields (e.g., account_id and user_name).
User-defined fields can be used as additional information for data analysis to enhance the authenticity and reliability of the analysis results:
user defined fields: user-defined fields are defined. Two custom fields, custom_field1 and custom_field2, of the types "text" and "number" respectively, are defined in the example. The user can define more custom fields as required.
After the data definition is adopted, the platform can perform unified access on data of multiple sources, so that the data sources are fully decoupled from the subsequent service, and the complex analysis service is enabled.
And the preprocessing flow definition module is used for defining the flow processing flow of the data source data preprocessing module by arranging a plurality of predefined data preprocessing functions or adding a user-defined preprocessing function by a user. Specific aspects include, but are not limited to, data cleaning, deduplication, filtering, normalization, etc. to ensure accuracy and efficiency of subsequent data processing and analysis. The configuration examples are as follows:
The pipeline is a preprocessing-stream array defining a series of preprocessing functions and their parameters.
Each preprocessing function is represented by an object containing two fields, name and params. The name field gives the name of the preprocessing function, such as "dataCleaning", "dataDeduplication", or "dataFiltering". The params field is an object that sets the parameters of the preprocessing function; the specific parameters depend on each function.
In the example, the "dataCleaning" function has two parameters, removeStopWords and removePunctuation, which determine whether to remove stop words and punctuation.
The "dataDeduplication" function has one parameter, keyField, the key field used for deduplication.
The "dataFiltering" function has one parameter, filterCondition, which sets a filtering condition, for example screening out data whose like count is greater than 1000.
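Collected together, the three functions described above could be configured as the following sketch; the exact key spellings and the condition syntax are assumptions:

```python
# Hypothetical preprocessing pipeline configuration as a Python list.
# Function and parameter names mirror the text; spellings are assumptions.
pipeline = [
    {"name": "dataCleaning",
     "params": {"removeStopWords": True,        # drop stop words
                "removePunctuation": True}},    # drop punctuation
    {"name": "dataDeduplication",
     "params": {"keyField": "content"}},        # deduplicate on a key field
    {"name": "dataFiltering",
     "params": {"filterCondition": "like_count > 1000"}},  # keep popular posts
]
```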
The analysis task scheduling module selects different analysis services according to different analysis requirements, based on the analysis services provided by the system analysis service center, and combines and orchestrates the results. The module supports the orchestration of a variety of analysis tasks and operates in a serverless manner. The analysis service algorithms can be categorized in different ways, including but not limited to machine learning, deep learning, and user-defined algorithms.
For machine learning, the platform supports support vector machines (Support Vector Machine), random forests (Random Forest), logistic regression (Logistic Regression), the K-nearest-neighbor algorithm (K-Nearest Neighbors), principal component analysis (Principal Component Analysis), the AdaBoost algorithm, the XGBoost algorithm, and so on.
For deep learning, the platform supports convolutional neural networks (Convolutional Neural Network), recurrent neural networks (Recurrent Neural Network), long short-term memory networks (Long Short-Term Memory), bidirectional recurrent neural networks (Bidirectional Recurrent Neural Network), generative adversarial networks (Generative Adversarial Network), Transformers, and deep reinforcement learning algorithms.
For the user-defined algorithm, the user can develop the algorithm for analysis according to the own requirements and data characteristics. These algorithms may be based on traditional statistical methods, domain-specific knowledge models, or other custom analysis methods.
The analysis task orchestration module supports the user to select different algorithms and configure their parameters to meet specific analysis requirements. The user can flexibly select a proper algorithm according to the data characteristics, the problem types and the service scene, and combine and arrange a plurality of algorithm results to obtain a required analysis result. Therefore, multi-angle and multi-dimensional analysis of the data can be realized, and more comprehensive and accurate analysis service is provided for the user. The configuration examples are as follows:
analysis_tasks is an analysis-task array defining a series of analysis tasks together with their algorithms and parameters. Each analysis task is represented by an object containing name, algorithm, and params fields.
The name field is the name of the analysis task, such as "sentimentAnalysis" or "topicClassification".
The algorithm field specifies the algorithm class used, such as "machine learning", "deep learning", or another user-defined algorithm.
The params field is an object that sets the parameters of the analysis task; the specific parameters depend on the algorithm.
In the example, the "sentimentAnalysis" task uses a machine learning algorithm for sentiment analysis; it analyzes the "text" field and stores the result in the "sentiment" field.
The "topicClassification" task uses a deep learning algorithm for topic classification; it analyzes the "text" field and stores the result in the "topic" field.
The "entityExtraction" task uses natural language processing algorithms for entity extraction; it analyzes the "text" field and stores the result in the "entities" field.
The "networkAnalysis" task uses a graph processing algorithm for social network analysis; it analyzes fields such as "account_id", "likes", "comments", and "shares", and stores the result in the "networkMetrics" field.
According to actual requirements, the analysis tasks, algorithms, and parameters can be customized following this example, and the fields to be analyzed can be set as needed.
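The task array described above might be sketched as follows; field names follow the text, while the algorithm labels and exact key names are assumptions:

```python
# Hypothetical analysis-task configuration mirroring the four tasks in the text.
analysis_tasks = [
    {"name": "sentimentAnalysis", "algorithm": "machine learning",
     "params": {"inputField": "text", "outputField": "sentiment"}},
    {"name": "topicClassification", "algorithm": "deep learning",
     "params": {"inputField": "text", "outputField": "topic"}},
    {"name": "entityExtraction", "algorithm": "natural language processing",
     "params": {"inputField": "text", "outputField": "entities"}},
    {"name": "networkAnalysis", "algorithm": "graph processing",
     "params": {"inputFields": ["account_id", "likes", "comments", "shares"],
                "outputField": "networkMetrics"}},
]
```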
The output result definition module: the format and content of the output results are defined according to the user's needs, including but not limited to statistics, charts, social association graphs, and the like. The module supports the definition of various output results, various data display modes and various data exposure modes so as to meet different business requirements. The configuration examples are as follows:
outputResults is an output-result array defining a series of output results together with their types and fields. Each output result is represented by an object containing a type and corresponding fields.
The type field specifies the type of the output result, such as "statistics", "chart", or "socialGraph".
In the example, the first output result is of type "statistics"; it generates statistics computed over the specified fields, here "sentiment" and "topic".
The second output result is of type "chart"; it generates a histogram using the "topic" field as the X axis and the "sentiment" field as the Y axis.
The third output result is of type "socialGraph"; it generates a social association graph using the "entities" field as node information and the "networkMetrics" field as link information.
According to actual requirements, the output results and their types and related fields can be customized following this example.
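The three output results described above might be configured as in this sketch; the key names (`chartType`, `nodeField`, `linkField`, ...) are assumptions:

```python
# Hypothetical output-result configuration mirroring the three examples.
output_results = [
    {"type": "statistics",                       # aggregate statistics
     "fields": ["sentiment", "topic"]},
    {"type": "chart", "chartType": "histogram",  # histogram over topics
     "xAxis": "topic", "yAxis": "sentiment"},
    {"type": "socialGraph",                      # social association graph
     "nodeField": "entities", "linkField": "networkMetrics"},
]
```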
Specifically, the data source processing flow runs the defined processing flow, and includes 5 modules:
and the data access module is used for: the module utilizes a registration callback mechanism to realize flexible access to the data sources in the form of a micro-service function, and accesses the social platform data of different sources into the system. The access module performs preliminary cleaning and formatting on the received data, and sends the data to the data preprocessing module for further processing;
and a data preprocessing module: the module takes stream computation as a main organization mode and carries out preprocessing on continuously added and unordered social platform data. Aiming at the newly added data characteristics, the rapid adaptation can be performed by adding a stream processing link. The pseudo code is as follows:
The social connection diagram construction module: the module decouples the data storage flow from the processing flow through the message subscription framework, constructs the data into a social association graph, and stores it in the graph database. This helps convert social platform data into graph data, facilitating subsequent data analysis and customized output;
and a data analysis module: the module is in the form of a service center (Serverless) to achieve high scalability of analysis capability. Various data analysis algorithms can be flexibly selected, and a preset low-code operator and a visualization component are provided, so that a user can conveniently customize and output data analysis results;
the above pseudocode defines an analysis module interface that all data analysis modules need to implement and provide an analysis method to execute specific data analysis logic. Two data analysis services, sentimentAnalysis and topicClassification, are then invoked, which respectively perform emotion analysis and topic classification on the data. Finally, in the combineAnalysis modules function, the different analysis modules are combined and their analysis methods are called sequentially to complete the data analysis. Different analysis modules can be created according to actual requirements and are transmitted into Co mbineAnalysisModules functions together with the original data for analysis. Finally, the system may process and output the analysis results.
The customized output module: the module registers the corresponding data analysis modules according to the user's analysis requirements and periodically collects the corresponding analysis results to construct specific business outputs, such as portraits, to enable downstream business. The output module also supports custom output formats and channels to meet the output requirements of different users.
The data source definition flow and the data source processing flow are associated and correspond to each other, the data format definition module configures the multi-source data access module, the processing flow definition module configures the data preprocessing module, the analysis task arrangement module configures the data analysis module, and the output result definition module configures the customized output module and configures an acceptance scheme of the downstream task.
The control panel of the data source provides a drag-and-drop editing page for low-code editing and configuration, so users do not need to perform detailed coding for data flow control.
For each newly created data source, the control panel creates a corresponding data source controller that uniformly manages the creation, query, update, and deletion of the data source. The data source controller periodically sends heartbeat signals to the control panel to report its state, so that the data source manager can take timely response measures when a controller goes offline.
After creation, the data source controller notifies the resource coordinator to allocate resources to the specific flow modules. Taking the preprocessing manager as an example, the resource coordinator finds suitable free physical nodes in the cluster according to the default function count set in the preprocessing manager and creates preprocessing functions on those nodes.
As seen in FIG. 2, after notifying the resource coordinator to create the functions, the data source A controller creates preprocessing function 1 and processing function m on physical node 1 and physical node n respectively. Multiple processing functions may coexist on one physical node.
The capacity expansion function is designed to dynamically adjust the resources according to the load condition, so that the system can adapt to different workloads and improve the utilization efficiency of the resources.
Capacity expansion is triggered when the resource utilization rate exceeds a threshold; the threshold can be set according to the actual situation or determined by tuning. If utilization is below the threshold, no expansion is required.
Calculate the number of instances requiring expansion: num_instances = int(resource_usage / threshold) + 1, i.e., the expansion count is derived from the ratio of resource utilization to the threshold.
Limit the calculated count so that it does not exceed the maximum number of instances: num_instances = min(num_instances, max_instances).
Return the calculated instance count, representing how many instances to expand by. The code examples are as follows:
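A direct transcription of the rule just described; the function and parameter names are assumptions:

```python
def scale_out_instances(resource_usage, threshold, max_instances):
    """Return how many instances to expand by, per the ratio rule above."""
    if resource_usage <= threshold:
        return 0                                   # below threshold: no scaling
    num_instances = int(resource_usage / threshold) + 1
    return min(num_instances, max_instances)       # cap at the maximum
```

For example, with utilization 0.9 against a threshold of 0.5, the rule yields int(1.8) + 1 = 2 instances.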
The expansion function can be adjusted and modified according to actual conditions. Parameters such as the resource utilization threshold and the maximum instance count can be set according to system requirements and actual conditions to meet demand and improve system performance.
Each created function periodically sends heartbeat signals to its manager, reporting its own running state and that of its function container, and scale-out or scale-in is performed according to the load.
In particular, the data analysis center can be understood as a group of normalized algorithm libraries organized in serverless form. It can be deployed separately in another hardware environment, because its computing power may depend on dedicated devices such as graphics computing components. The data processing flow coordinates with the service center through a communication protocol, so components are fully decoupled and overall robustness and usability are improved;
FIG. 3 illustrates the overall analysis and computation flow of data after defining the data sources.
Step one: data source A pushes acquired messages to the message queue, either with a software-provided API or with a generic message queue SDK. Messages can be pushed individually in a single-record text format or in batches.
Step two: the function instances subordinate to the data source A manager periodically pull new data from the message queue; after modifying a message's topic, they leave it for subsequent functions to process. The whole preprocessing flow thus executes sequentially.
Step three: after processing, the data waits for capturing the function instance of the social connection graph constructor in the message queue. The attribute vector embedding function normalizes attribute fields in the data according to a predefined type and converts the attribute fields into a vector form. After entering the database, the subsequent analysis module is reserved for use. The data is then organized in the form of a graph and stored in a graph database. The two provide data bottom layer support for subsequent analysis business.
Step four: and the data analysis manager can call the analysis algorithm provided by the corresponding data analysis center according to the configured index at regular intervals. The analysis tasks are generally divided into two types:
if the analysis index is of an independent type, i.e. does not need to rely on other data, but only on statistical values such as max, it directly updates the index of the corresponding node of the graph database after calculation.
If the analysis index depends on other data, if the whole graph needs to be walked, the addition of a new node can bring about the updating of other node indexes. Such computing tasks typically require a longer time to compute, which may trigger the computing task when the overall load of the cluster is low, timing or accumulating the same task to a counter threshold. When the database is updated, the database is locked, and the service is provided again after the update is completed.
Step five: the output manager will periodically rearrange the analysis index data in the form of configuration requirements and finally provide it to downstream applications, which are generally divided into two ways:
if the user configures the passive receiving mode, the output manager will sort the results regularly and send them to the message queue. Or calling a hook function configured by the user and sending the result to the corresponding address.
If the user configures the active receiving mode, the latest analysis results can be obtained simply by calling the API the system exposes externally.
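The two delivery modes in step five can be sketched together; the OutputManager name and its hook/query shape are assumptions, not the patent's actual interface:

```python
class OutputManager:
    """Delivers analysis results passively (hooks) or actively (polling)."""
    def __init__(self):
        self.latest = None
        self.hooks = []            # user-configured callbacks (passive mode)

    def publish(self, results):
        """Periodic push: store the latest results and fire passive hooks."""
        self.latest = results
        for hook in self.hooks:
            hook(results)

    def query(self):
        """Active mode: callers poll this API for the freshest results."""
        return self.latest
```

Passive consumers register a hook (or a queue sender); active consumers simply call `query()` whenever they need the latest state.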
Steps one through five are executed repeatedly.
Claims (3)
1. A low-code social data computing platform and device for agile analysis scenarios, applied to a social data analysis system, characterized in that the system comprises a data source definition flow and a data source processing flow;
the data source definition flow defines social platform data source data and comprises 4 modules:
the data format definition module is used for defining the data format of the heterogeneous social platform and carrying out field binding on the specific relationship and attribute of the social platform;
the preprocessing flow definition module is used for defining the flow processing flow of the data source data preprocessing module by arranging a plurality of predefined data preprocessing functions or adding a user-defined preprocessing function by a user;
the analysis task scheduling module is used for selecting different analysis services, from among those provided by the system analysis service center, according to the desired analysis results, and for combining and scheduling the results;
the output result definition module is used for defining the format and the content of the output result according to the requirements of the user;
the data source processing flow runs the defined processing flow and comprises 5 modules:
the data access module uses a registration callback mechanism to flexibly access data sources in the form of micro-service functions; it performs preliminary cleaning and formatting on received data according to the format defined by the data format definition module, and sends the data to the data preprocessing module for further processing;
the data preprocessing module is used for preprocessing the continuously arriving, unordered social platform data, organized as streaming computation, based on the functions defined by the preprocessing flow definition module;
the social connection diagram construction module receives the data preprocessed by the data preprocessing module through a message subscription framework, decoupling the data storage flow from the processing flow, constructs the data into a social connection graph, and stores the graph in a graph database;
the data analysis module takes the social connection graph as input, based on the selection made in the analysis task scheduling module, and produces customized data analysis results as output;
the customized output module registers the corresponding data analysis modules according to the user's analysis requirements, based on the definition given by the output result definition module, and periodically collects the corresponding analysis results to construct specific business output.
2. The low-code social data computing platform and device of an agile analysis scenario of claim 1, wherein: data sources are edited and configured in a low-code manner through a drag-and-drop editing page provided by the control panel; the control panel creates a corresponding data source controller for each newly created data source and uniformly manages the creation, query, update and deletion of data sources; each data source controller periodically sends heartbeat signals to the control panel to report its own state, so that the data source manager can take timely response measures when a controller disconnects.
3. The low-code social data computing platform and device of an agile analysis scenario of claim 2, wherein: after a data source controller is created, it notifies the resource coordinator to allocate resources to the specific flow modules and to create functions; for example, the controller of data source A creates preprocessing function 1 and processing function m on physical node 1 and physical node n respectively, and multiple processing functions may reside on one physical node; the created functions periodically send heartbeat signals to their superior manager, reporting the running state of the functions and of the function-running containers, and capacity is expanded or contracted according to load conditions.
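The heartbeat mechanism in claims 2 and 3 can be sketched as follows. This is a hypothetical illustration, not the patent's implementation: the class, the timeout value, and the timestamp-based detection are illustrative assumptions.

```python
import time

class ControlPanel:
    """Sketch of the claimed heartbeat scheme: each data source controller
    periodically reports in, and the control panel flags controllers whose
    heartbeats have stopped arriving so timely measures can be taken."""

    def __init__(self, timeout_s=30.0):
        self.timeout_s = timeout_s
        self.last_seen = {}  # controller id -> timestamp of last heartbeat

    def heartbeat(self, controller_id, now=None):
        """Record a heartbeat; `now` is injectable for testing."""
        self.last_seen[controller_id] = time.monotonic() if now is None else now

    def disconnected(self, now=None):
        """Controllers whose last heartbeat is older than the timeout."""
        t = time.monotonic() if now is None else now
        return [cid for cid, seen in self.last_seen.items()
                if t - seen > self.timeout_s]
```

The same pattern would apply one level down, where created functions report their own state and that of their running containers to a superior manager, which then drives scale-out and scale-in decisions.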
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311296667.6A CN117406960A (en) | 2023-10-09 | 2023-10-09 | Low-code social data computing platform and device for agile analysis scene |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117406960A true CN117406960A (en) | 2024-01-16 |
Family
ID=89497151
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311296667.6A Pending CN117406960A (en) | 2023-10-09 | 2023-10-09 | Low-code social data computing platform and device for agile analysis scene |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117406960A (en) |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11243704B2 (en) | Data pipeline architecture for analytics processing stack | |
US20230015926A1 (en) | Low-latency streaming analytics | |
US11636397B1 (en) | Graphical user interface for concurrent forecasting of multiple time series | |
CN107577805B (en) | Business service system for log big data analysis | |
US20210133634A1 (en) | Efficiently executing commands at external computing services | |
US11461350B1 (en) | Control interface for dynamic elements of asset monitoring and reporting system | |
US11954127B1 (en) | Determining affinities for data set summarizations | |
US11314808B2 (en) | Hybrid flows containing a continous flow | |
US11283690B1 (en) | Systems and methods for multi-tier network adaptation and resource orchestration | |
CN106790718A (en) | Service call link analysis method and system | |
US11146599B1 (en) | Data stream processing to facilitate conferencing based on protocols | |
KR20150092586A (en) | Method and Apparatus for Processing Exploding Data Stream | |
CN114830080B (en) | Data distribution flow configuration method and device, electronic equipment and storage medium | |
CN103138981A (en) | Method and device for social network service analysis | |
US20180314393A1 (en) | Linking data set summarizations using affinities | |
WO2023131303A1 (en) | Digital twin network orchestration method, digital twin network, medium, and program | |
CN114338746A (en) | Analysis early warning method and system for data collection of Internet of things equipment | |
CN116048817B (en) | Data processing control method, device, computer equipment and storage medium | |
CN110781180A (en) | Data screening method and data screening device | |
CN118211180A (en) | Information service method and platform for intelligent fusion of big data | |
CN114830615B (en) | Data distribution system and data distribution method | |
CN115499313A (en) | Network slice management method and device based on user intention and storage medium | |
CN115729683A (en) | Task processing method, device, system, computer equipment and storage medium | |
CN110908642B (en) | Policy generation execution method and device | |
CN117406960A (en) | Low-code social data computing platform and device for agile analysis scene |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||