CN112995263B - Network priority data processing system - Google Patents

Network priority data processing system

Info

Publication number
CN112995263B
CN112995263B (application number CN201911310571.4A)
Authority
CN
China
Prior art keywords
data
processing
stream data
message
plug
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911310571.4A
Other languages
Chinese (zh)
Other versions
CN112995263A (en)
Inventor
李佳 (Li Jia)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
China Mobile Group Shanxi Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Group Shanxi Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, China Mobile Group Shanxi Co Ltd
Priority to CN201911310571.4A
Publication of CN112995263A
Application granted
Publication of CN112995263B
Legal status: Active
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/01: Protocols
    • H04L67/10: Protocols in which an application is distributed across nodes in the network
    • H04L67/1097: Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20: Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/22: Indexing; Data structures therefor; Storage structures
    • G06F16/2228: Indexing structures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20: Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/27: Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46: Multiprogramming arrangements
    • G06F9/54: Interprogram communication
    • G06F9/546: Message passing systems or structures, e.g. queues

Abstract

The invention discloses a network priority stream data processing system, which comprises a message queue, a framework module and a processing plug-in. The message queue comprises at least one message node and is used for acquiring file data from an acquisition source, processing the file data into stream data, and storing the stream data in each message node. The framework module is used for starting a thread, reading stream data from the message nodes through the thread, and preprocessing and classifying the read stream data. The processing plug-in is used for accessing the preprocessed and classified stream data, processing the accessed stream data, and feeding the processed stream data back to the framework module. The framework module is further used for publishing the processed stream data for use by data consumers. By separating the collection and publication of data from data processing, the system achieves large data storage capacity, high processing efficiency and secondary development capability.

Description

Network priority data processing system
Technical Field
The invention relates to the technical field of communication, in particular to a network priority data processing system.
Background
Stream data is a sequence of data items that arrive sequentially, in large volume, at high speed and continuously; it can generally be viewed as a dynamic data collection that grows without bound over time. Stream data has four characteristics: (1) the data arrives in real time; (2) the arrival order is independent and not controlled by the application system; (3) the data volume is large and its maximum cannot be predicted; (4) once a data item has been processed it cannot be retrieved again for processing unless it has been deliberately saved, or retrieving it again is expensive. These characteristics make the processing of stream data equally special: stream processing operates on single records or small batches of a few records, must complete within a latency of a few seconds or even a few milliseconds, and the timeliness of the resulting information is of significant value.
The prior art (CN107070890A) discloses a stream data processing device in a communication network optimization system and the communication network optimization system. In that scheme, data from different data sources are received, arranged into the format required for stream data processing, and sent to a Kafka processing module; the Kafka processing module is a queue-type Kafka cluster onto which the stream data is uniformly loaded; a Storm processing engine calls and processes the stream data as required; and third-party data consumers use the stream data through a standard interface.
However, the inventor discovered in the process of implementing the invention that, in the prior art, the collection, classification, processing and publication of stream data are all completed within a single fixed framework: the stream data processing capacity of the framework is fixed and cannot be expanded later as demand grows, and because the stream data framework is a monolithic whole, its maintenance and operation consume considerable effort.
Disclosure of Invention
In view of the above, the present invention has been made to provide a network priority data processing system that overcomes or at least partially solves the above problems.
According to an aspect of the present invention, there is provided a network priority data processing system, including: a message queue, a framework module, and a processing plug-in, wherein,
the message queue comprises at least one message node and is used for acquiring file data from an acquisition source, processing the file data into stream data and storing the stream data into each message node;
the framework module is used for starting a thread, reading stream data from the message nodes through the thread, and preprocessing and classifying the read stream data;
the processing plug-in is used for accessing the preprocessed and classified stream data, processing the accessed stream data, and feeding the processed stream data back to the framework module;
the framework module is further used for publishing the processed stream data for use by data consumers.
Optionally, the framework module further comprises: a preprocessor, a classifier and an indexer;
the preprocessor is used for reading stream data from the message nodes of the message queue through a thread and preprocessing the read stream data;
the classifier is used for classifying the preprocessed stream data;
the indexer is used for calling the processing plug-in to pass the preprocessed and classified stream data to the processing plug-in for processing.
Optionally, the framework module further comprises a solr index platform for publishing the processed stream data.
Optionally, the message queue is specifically a Kafka message queue, and the message node is a Kafka node.
Optionally, the acquisition source comprises at least one of: an HDFS interface, TCP listening, port 8808 and UDP listening.
Optionally, the processing plug-in is loaded via ClassLoad (Java ClassLoader) technology.
Optionally, the processing plug-in accesses the preprocessed and classified stream data in a multi-partition mode.
According to the network priority data processing system of the invention, the system comprises a message queue, a framework module and a processing plug-in. The message queue comprises at least one message node and is used for acquiring file data from an acquisition source, processing the file data into stream data, and storing the stream data in each message node. The framework module is used for starting a thread, reading stream data from the message nodes through the thread, and preprocessing and classifying the read stream data. The processing plug-in is used for accessing the preprocessed and classified stream data, processing the accessed stream data, and feeding the processed stream data back to the framework module. The framework module is further used for publishing the processed stream data for use by data consumers. By separating the collection and publication of stream data from data processing, the system gains large data storage capacity, high processing efficiency and secondary development capability.
The foregoing is merely an overview of the technical solutions of the present invention. In order that the technical means of the present invention may be understood more clearly and implemented in accordance with the content of the description, and in order that the above and other objects, features and advantages of the present invention may become more readily apparent, specific embodiments of the invention are described below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 is a schematic diagram illustrating an interaction architecture of a network priority data processing system according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating a thread framework in an embodiment of the invention;
FIG. 3 is a schematic structural diagram illustrating a network priority data processing system according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a network priority data processing system according to another embodiment of the invention;
FIG. 5 shows a schematic diagram of acquisition sources, data sources, and collections in an embodiment of the invention.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the invention are shown in the drawings, it should be understood that the invention can be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
FIG. 1 is a schematic diagram illustrating the interaction architecture of a network priority data processing system in an embodiment of the present invention. As shown in FIG. 1, the interaction architecture mainly involves three interacting subjects: a message queue, a thread framework and a data publishing platform, where the message queue comprises a plurality of message nodes and the data publishing platform comprises a plurality of databases such as HBASE, ADFS, HIVE and MPP. The main interaction flow of stream data processing is as follows: the message queue loads the file data obtained from different acquisition sources into individual pieces of stream data; the thread framework classifies, preprocesses and processes the obtained stream data to obtain processed data; and the data publishing platform publishes the processed data to third-party platforms for consumption by users.
Based on this interaction architecture and interaction flow, the embodiment of the invention restructures the thread framework and divides it into a framework part and a processing part, so that the network priority data processing system gains secondary development capability and can be expanded as required.
FIG. 2 is a schematic diagram illustrating the thread framework in an embodiment of the present invention. As shown in FIG. 2, the thread framework is divided into a framework part and a processing part: the framework part is responsible only for starting the thread framework, collecting stream data and publishing data, while the processing part consists of a number of user-defined plug-ins and is responsible for processing the stream data. Specific embodiments are explained below.
FIG. 3 shows a schematic structural diagram of a network priority data processing system provided by an embodiment of the present invention. As shown in FIG. 3, the system includes a message queue 30, a framework module 31 and a processing plug-in 32, where the message queue 30 includes at least one message node.
The message queue 30 is used for acquiring file data from an acquisition source, processing the file data into stream data, and storing the stream data in each message node.
The message queue 30 collects file data from acquisition sources including at least one of an HDFS interface, TCP listening, port 8808 and UDP listening, loads the collected file data into individual pieces of stream data, and stores the stream data in each message node.
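By way of a non-limiting illustration (not part of the original disclosure), the loading step could be realized with a Kafka producer; the broker address, the topic name raw-stream and the line-per-record format below are assumptions made only for this sketch:

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.Properties;

    public class FileDataLoader {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "kafka-node1:9092"); // assumed broker address
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            // Each line of the collected file becomes one piece of stream data in the message queue.
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                for (String line : Files.readAllLines(Path.of(args[0]))) {
                    producer.send(new ProducerRecord<>("raw-stream", line)); // assumed topic name
                }
            }
        }
    }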
The framework module 31 is used for starting a thread, reading stream data from the message nodes through the thread, and preprocessing and classifying the read stream data.
The framework module 31 corresponds to the framework part of the thread framework: it first starts a thread, then reads stream data from the message nodes of the message queue through the thread, and preprocesses and classifies the read stream data.
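A minimal sketch of such a reading thread is given below, again as an assumed illustration: the topic name raw-stream and the placeholder preprocess and classify methods are not taken from the patent.

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;

    public class FrameworkReaderThread extends Thread {
        private final KafkaConsumer<String, String> consumer;

        public FrameworkReaderThread(Properties consumerProps) {
            // consumerProps must carry bootstrap.servers, group.id and String deserializers.
            this.consumer = new KafkaConsumer<>(consumerProps);
            this.consumer.subscribe(List.of("raw-stream")); // assumed topic name
        }

        @Override
        public void run() {
            try {
                while (!isInterrupted()) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> record : records) {
                        String cleaned = preprocess(record.value());  // placeholder preprocessing
                        String category = classify(cleaned);          // placeholder classification
                        // the classified record would then be handed to a processing plug-in
                    }
                }
            } finally {
                consumer.close();
            }
        }

        private String preprocess(String raw) { return raw.trim(); }
        private String classify(String value) { return value.isEmpty() ? "empty" : "default"; }
    }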
The processing plug-in 32 is used for accessing the preprocessed and classified stream data, processing the accessed stream data, and feeding the processed stream data back to the framework module.
The processing plug-in is the user-defined plug-in of the processing part: it accesses the stream data that has been classified and preprocessed in the framework module, processes the accessed stream data, and feeds the processed data back to the framework module. Processing plug-ins can therefore be added to the system according to actual requirements so as to expand the system, and serial operation, parallel operation and combined serial-parallel operation can all be configured.
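The disclosure does not fix a concrete plug-in interface; purely as an assumed illustration, a plug-in could be any class implementing a small contract such as the following (the name DataProcessPlugin and both method signatures are inventions of this sketch, not identifiers from the patent):

    /**
     * Illustrative contract between the framework module and a processing plug-in.
     */
    public interface DataProcessPlugin {
        /** Returns the data category this plug-in handles, as assigned by the classifier. */
        String category();

        /** Processes one piece of classified stream data and returns the result to the framework. */
        String process(String classifiedRecord);
    }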
After that, the framework module 31 is further used for publishing the processed stream data for use by data consumers: the framework module 31 publishes the processed data fed back by the processing plug-in 32, for example to a third-party platform, where users consume the data.
According to the network priority data processing system provided by the embodiment of the invention, the stream data processing framework is divided into a framework module and processing plug-ins: the framework is responsible only for stream data collection and data publication, while data processing is handed over to third-party plug-ins. Because data collection and publication are separated from data processing, the system has large data storage capacity and high processing efficiency, and gains the capability of secondary development.
FIG. 4 is a schematic diagram of a network priority data processing system according to another embodiment of the present invention. As shown in FIG. 4, the system includes: an acquisition source 41, a Kafka message queue 42, a framework module 43, and a processing plug-in 44.
The Kafka message queue 42 is used for acquiring file data from the acquisition source 41, processing the file data into stream data, and storing the stream data in each Kafka node. The acquisition source 41 includes at least one of an HDFS interface, TCP listening, port 8808 and UDP listening; that is, the file data collected from the acquisition source includes at least one of HDFS interface data, data captured by TCP listening, port 8808 data, and data captured by UDP listening.
The framework module 43 is used for starting a thread, reading stream data from the message nodes through the thread, and preprocessing and classifying the read stream data.
The framework module comprises components responsible respectively for data preprocessing, data classification and data interaction. Specifically, the framework module 43 includes: a preprocessor, a classifier and an indexer. The preprocessor is used for reading stream data from the message nodes of the message queue through a thread and preprocessing the read stream data; the classifier is used for classifying the preprocessed stream data; and the indexer is used for calling the processing plug-in 44 to pass the preprocessed and classified stream data to the processing plug-in 44 for processing.
When the indexer calls a processing plug-in, the index information of the stream data is distributed across different collections; the classifier selects which collection to send the data to, and the processing plug-in processes the corresponding stream data according to the index information. Specifically, the acquisition source sends the collected data to a data source (that is, a Kafka node), the data in the data source is distributed to the collections, and the processing plug-in takes the data from the corresponding collection for processing according to the index information. FIG. 5 is a schematic diagram of acquisition sources, data sources and collections in an embodiment of the invention: different data collected by the acquisition sources may be sent to the same data source, but one piece of data cannot be sent to multiple data sources simultaneously; the data of each data source can only be sent to one collection, while the data of several different data sources may be sent to the same collection. In a specific implementation, collections can be further grouped into Collection groups, where each Collection group consists of several collections of the same category and is named by concatenating the group name with the year and month (6 digits, yyyyMM); for example, Collection1201504 is the actual name of Collection1 for April 2015.
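As a small illustrative sketch of this naming convention only (the helper buildCollectionName is an assumption, not an identifier from the patent), the real collection name can be derived from the group name and a year-month as follows:

    import java.time.YearMonth;
    import java.time.format.DateTimeFormatter;

    public class CollectionNaming {
        /** Appends the 6-digit year-month to the group name, e.g. Collection1 + 2015-04 -> Collection1201504. */
        static String buildCollectionName(String groupName, YearMonth month) {
            return groupName + month.format(DateTimeFormatter.ofPattern("yyyyMM"));
        }

        public static void main(String[] args) {
            System.out.println(buildCollectionName("Collection1", YearMonth.of(2015, 4))); // prints Collection1201504
        }
    }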
The processing plug-in 44 is used for accessing the preprocessed and classified stream data, processing the accessed stream data, and feeding the processed stream data back to the solr index platform in the framework module; the solr index platform is used for publishing the processed stream data.
The processing plug-in 44 comprises a plurality of user-defined plug-ins that are loaded using the Java ClassLoad (ClassLoader) technology, which loads the corresponding plug-in according to the configured plug-in name. All jar packages in the plugin directory under DataProcessLoad are read into the system space by the ClassLoader, so that any plug-in of the DataProcess project is loaded automatically as long as it is placed in that directory. Plug-ins can thus be defined freely within the streaming framework, the number of running threads and the serial/parallel processing mode of the streaming framework can be configured, and the system gains flexible secondary development capability.
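A minimal sketch of such dynamic loading is given below, assuming a hypothetical plugin directory and the illustrative DataProcessPlugin interface sketched earlier; it uses the standard Java URLClassLoader rather than any specific class named in the patent:

    import java.io.File;
    import java.net.URL;
    import java.net.URLClassLoader;
    import java.util.ArrayList;
    import java.util.List;

    public class PluginLoader {
        /** Loads the configured plug-in class from all jar files found in the plugin directory. */
        static DataProcessPlugin loadPlugin(File pluginDir, String pluginClassName) throws Exception {
            File[] jars = pluginDir.listFiles((dir, name) -> name.endsWith(".jar"));
            if (jars == null || jars.length == 0) {
                throw new IllegalStateException("no plug-in jars found in " + pluginDir);
            }
            List<URL> jarUrls = new ArrayList<>();
            for (File jar : jars) {
                jarUrls.add(jar.toURI().toURL());
            }
            // Every jar placed in the plugin directory is read into the class space,
            // so a new plug-in is picked up simply by dropping its jar there.
            URLClassLoader loader = new URLClassLoader(jarUrls.toArray(new URL[0]),
                    PluginLoader.class.getClassLoader());
            Class<?> pluginClass = Class.forName(pluginClassName, true, loader);
            return (DataProcessPlugin) pluginClass.getDeclaredConstructor().newInstance();
        }
    }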
Further, the processing plug-in 44 accesses the stream data delivered by the indexer via multiple hosts: several hosts consume the messages of a given Topic at the same time, so that the stream data processing of the whole cluster is not affected when one host fails. This mode of operation also means that no application layer is needed to solve the load problem; the system inherently has an automatic load-handling architecture. In addition, the stream data is accessed in a multi-partition mode, so that automatic load balancing is supported.
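In Kafka terms this corresponds to running one consumer per host within the same consumer group over a multi-partition topic, so that partitions are rebalanced automatically when a host fails. A minimal configuration sketch follows; the broker list and group id are assumptions, not values from the patent:

    import java.util.Properties;

    public class ConsumerGroupConfig {
        /** Consumer properties shared by every host; the same group.id makes Kafka spread the topic's partitions across the hosts. */
        static Properties multiHostConsumerProps() {
            Properties props = new Properties();
            props.put("bootstrap.servers", "kafka-node1:9092,kafka-node2:9092"); // assumed broker list
            props.put("group.id", "dataprocess-plugins");                        // assumed group id
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("enable.auto.commit", "true");
            // If one host fails, Kafka reassigns its partitions to the surviving hosts,
            // so the cluster keeps processing without an application-level load balancer.
            return props;
        }
    }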
Finally, the processing plug-in 44 writes the processed data back into the solr index platform in the framework module, and the data is published so that users can consume it.
According to the network priority data processing system provided by this embodiment, the data processing framework is divided into a framework part and a processing part: the framework part collects, distributes and publishes the data, while the processing part completes the data processing through processing plug-ins. Separating data collection and publication from data processing yields large data storage capacity, high processing efficiency and simple secondary development. Second, because the stream data is accessed in a multi-partition mode, no application layer is required to solve the load problem, and the system naturally has an automatic load-handling architecture. In addition, loading the processing plug-ins via the ClassLoader mechanism allows plug-ins to be defined freely, so that the number of running threads and the serial/parallel processing mode of the streaming framework can be configured and the system has flexible secondary development capability.
The algorithms or displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. In addition, embodiments of the present invention are not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the embodiments of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the invention and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the devices in an embodiment may be adaptively changed and arranged in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functionality of some or all of the components according to embodiments of the present invention. The present invention may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website, or provided on a carrier signal, or provided in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera do not indicate any ordering. These words may be interpreted as names. The steps in the above embodiments should not be construed as limiting the order of execution unless specified otherwise.

Claims (7)

1. A network priority data processing system, the system comprising: a message queue, a framework module, and a processing plug-in, wherein
the message queue comprises at least one message node and is used for acquiring file data from an acquisition source, processing the file data into stream data and storing the stream data in each message node;
the framework module is used for starting a thread, reading stream data from the message node through the thread, and preprocessing and classifying the read stream data;
the processing plug-in is used for accessing the preprocessed and classified stream data, processing the accessed stream data and feeding the processed stream data back to the framework module;
the framework module is further used for publishing the processed stream data for use by a data consumer.
2. The system of claim 1, wherein the framework module further comprises: a preprocessor, a classifier and an indexer;
the preprocessor is used for reading stream data from the message nodes of the message queue through a thread and preprocessing the read stream data;
the classifier is used for classifying the preprocessed stream data;
the indexer is used for calling the processing plug-in to pass the preprocessed and classified stream data to the processing plug-in for processing.
3. The system of claim 1 or 2, wherein the framework module further comprises a solr index platform for publishing the processed stream data.
4. The system according to claim 1, wherein the message queue is specifically a Kafka message queue and the message node is a Kafka node.
5. The system of claim 1, wherein the acquisition source comprises at least one of: an HDFS interface, TCP listening, port 8808, and UDP listening.
6. The system of claim 1, wherein the processing plug-in is loaded via ClassLoad technology.
7. The system of claim 1, wherein the processing plug-in accesses the preprocessed and classified stream data over multiple hosts.
CN201911310571.4A 2019-12-18 2019-12-18 Network priority data processing system Active CN112995263B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911310571.4A CN112995263B (en) 2019-12-18 2019-12-18 Network priority data processing system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911310571.4A CN112995263B (en) 2019-12-18 2019-12-18 Network priority data processing system

Publications (2)

Publication Number Publication Date
CN112995263A CN112995263A (en) 2021-06-18
CN112995263B (en) 2022-11-22

Family

ID=76343915

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911310571.4A Active CN112995263B (en) 2019-12-18 2019-12-18 Network priority data processing system

Country Status (1)

Country Link
CN (1) CN112995263B (en)

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7286473B1 (en) * 2002-07-10 2007-10-23 The Directv Group, Inc. Null packet replacement with bi-level scheduling
US7010538B1 (en) * 2003-03-15 2006-03-07 Damian Black Method for distributed RDSMS
JP5272414B2 (en) * 2008-01-18 2013-08-28 富士通セミコンダクター株式会社 Information processing system and firmware execution method
CN103034928B (en) * 2012-12-11 2015-11-18 清华大学 The plug and play data platform of self-discipline dispersion and management method and application
CN105468735A (en) * 2015-11-23 2016-04-06 武汉虹旭信息技术有限责任公司 Stream preprocessing system and method based on mass information of mobile internet
CN105573760B (en) * 2015-12-16 2018-11-30 南京邮电大学 Internet of things data processing system and method based on storm
US9992248B2 (en) * 2016-01-12 2018-06-05 International Business Machines Corporation Scalable event stream data processing using a messaging system
CN105956082B (en) * 2016-04-29 2019-07-02 深圳大数点科技有限公司 Real time data processing and storage system
CN107545014A (en) * 2016-06-28 2018-01-05 国网天津市电力公司 Stream calculation instant disposal system for treating based on Storm
CN107070890A (en) * 2017-03-10 2017-08-18 北京市天元网络技术股份有限公司 Stream data processing device in a communication network optimization system and communication network optimization system
CN107391719A (en) * 2017-07-31 2017-11-24 南京邮电大学 Distributed stream data processing method and system in a kind of cloud environment
US10983843B2 (en) * 2018-01-16 2021-04-20 Enterpriseweb Llc Event-driven programming model based on asynchronous, massively parallel dataflow processes for highly-scalable distributed applications
CN109246073A (en) * 2018-07-04 2019-01-18 杭州数云信息技术有限公司 A kind of data flow processing system and its method
CN109254982B (en) * 2018-08-31 2020-09-29 杭州安恒信息技术股份有限公司 Stream data processing method, system, device and computer readable storage medium
CN109783251A (en) * 2018-12-21 2019-05-21 招银云创(深圳)信息技术有限公司 Data processing system based on Hadoop big data platform

Also Published As

Publication number Publication date
CN112995263A (en) 2021-06-18


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant