US20220053046A1 - Optimization of files compression - Google Patents

Optimization of files compression

Info

Publication number
US20220053046A1
Authority
US
United States
Prior art keywords
files
size
compressed file
compression ratio
application server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US17/395,530
Other versions
US11722551B2 (en
Inventor
Denis MORAND
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Schneider Electric Industries SAS
Original Assignee
Schneider Electric Industries SAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Schneider Electric Industries SAS filed Critical Schneider Electric Industries SAS
Assigned to SCHNEIDER ELECTRIC INDUSTRIES SAS reassignment SCHNEIDER ELECTRIC INDUSTRIES SAS ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: MORAND, DENIS
Publication of US20220053046A1 publication Critical patent/US20220053046A1/en
Application granted granted Critical
Publication of US11722551B2 publication Critical patent/US11722551B2/en
Active legal-status Critical Current
Adjusted expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/25Integrating or interfacing systems involving database management systems
    • G06F16/258Data format conversion from or to a database
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10File systems; File servers
    • G06F16/17Details of further file system functions
    • G06F16/174Redundancy elimination performed by the file system
    • G06F16/1744Redundancy elimination performed by the file system using compression, e.g. sparse files
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10File systems; File servers
    • G06F16/18File system types
    • G06F16/182Distributed file systems
    • G06F16/1824Distributed file systems implemented using Network-attached Storage [NAS] architecture
    • G06F16/183Provision of network file services by network file servers, e.g. by using NFS, CIFS
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2455Query execution
    • G06F16/24568Data stream processing; Continuous queries
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2458Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/06Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]

Definitions

  • the present invention generally relates to computer systems and methods, and more particularly relates to applications that handle large amounts of data resulting from different Internet of Things applications in industrial machines.
  • a gateway like an IoT box, collects these data from the data center under the form of files and sends the files to the cloud where the data will be stored and analyzed.
  • the files are compressed into a compressed file by a compression application before being sent to the cloud.
  • the size of the compressed file must be less than a threshold, for example one Megabyte, and the compressed file must be sent regularly, for example every ten minutes. However, even if the constraint of regularly sending data under a predefined size is respected, the amount of data transferred is not optimized, as the compressed size can be more or less close to the predefined size.
  • a method for optimizing the scheduling of files to be sent to an application server at regular time intervals comprising the following steps in a control device:
  • the compression ratio is updated based on the size of the second set of files and the size of the compressed file
  • the reinforcement learning uses the size of a previous compressed file from a previous time interval.
  • the method optimizes the number of bytes sent to the cloud per unit of time and reduces the cloud cost (as frame lengths are optimized).
  • the reinforcement learning is light to implement, in terms of memory and processing consumption. Moreover, the reinforcement learning is fast in execution and does not need a learning phase.
  • if the compression algorithm is replaced by another one, the reinforcement learning will still adapt to the other compression algorithm. If the files do not need to be compressed anymore, the compression ratio will converge to the value “1”. If the threshold for transmission changes to a new threshold, the reinforcement learning will also converge to reach the new threshold automatically.
  • the updated compression ratio is used for a next first set of files to be retrieved for a next time interval.
  • the scheduling algorithm takes as input the first set of files, the compression ratio and the size limit and provides as output the second set of files, wherein the size of the second set of files divided by the compression ratio is less than the size limit.
  • the scheduling algorithm is based on a Johnson scheduling algorithm.
  • the reinforcement learning is based on a multi-armed bandits model.
  • the scheduling algorithm provides an intermediary set of files that is used by a set of arms using more or fewer files than the intermediary set of files, and one arm is selected according to the Upper Confidence Bound and corresponds to the second set of files.
  • the sum of the weights of files of the intermediary set of files divided by the compression ratio is under the size limit.
  • a learning phase is used for the set of arms, taking as feedback the size of the previous compressed file sent to the application server.
  • a device for optimizing the scheduling of files to be sent to an application server at regular time intervals comprising:
  • the compression ratio is updated based on the size of the second set of files and the size of the compressed file
  • the reinforcement learning uses the size of a previous compressed file from a previous time interval.
  • an apparatus for optimizing the scheduling of files to be sent to an application server at regular time intervals comprising:
  • one or more network interfaces to communicate with a telecommunication network
  • a processor coupled to the network interfaces and configured to execute one or more processes
  • a memory configured to store a process executable by the processor, the process when executed operable to:
  • the compression ratio is updated based on the size of the second set of files and the size of the compressed file
  • the reinforcement learning uses the size of a previous compressed file from a previous time interval.
  • a computer-readable medium having embodied thereon a computer program for executing a method for optimizing the scheduling of files to be sent to an application server at regular time intervals.
  • Said computer program comprises instructions which carry out steps according to the method according to the invention.
  • FIG. 1 shows a schematic block diagram of a communication system according to one embodiment of the invention for optimizing the scheduling of files to be sent to an application server;
  • FIG. 2 shows a flow chart illustrating a method for optimizing the scheduling of files to be sent to an application server according to one embodiment of the invention.
  • a control device CD can communicate with a database DB through a first telecommunication network TN 1 and with an application server AS through a second telecommunication network TN 2 .
  • the first and second telecommunication networks may be a wired or wireless network, or a combination of wired and wireless networks.
  • the first and second telecommunication networks can be associated with a packet network, for example, an IP (“Internet Protocol”) high-speed network such as the Internet or an intranet, or even a company-specific private network.
  • the first or second telecommunication network is for example a digital cellular radio communication network of the GPRS (General Packet Radio Service), UMTS (Universal Mobile Telecommunications System), CDMA (Code Division Multiple Access) type, LTE (Long Term Evolution) or even 5G (Fifth Generation) type.
  • the wireless telecommunication network TN can be accessed by the mobile device via a wireless link, such as a Wi-Fi network or Bluetooth connection.
  • the first or second telecommunication network is a public wireless network of limited scope, such as a WLAN (Wireless Local Area Network) conforming to an 802.1x standard, or of medium range according to the WiMAX (Worldwide Interoperability for Microwave Access) protocol.
  • the first or second telecommunication network may operate in accordance with fourth or fifth generation wireless communication protocols and/or the like, as well as similar wireless communication protocols that may be developed in the future.
  • the database DB stores data resulting from different Internet of Things applications in industrial machines.
  • the Internet of Things applications are implemented in sensors linked to industrial machines and measure, record, and send operating data related to the industrial machines to source databases, such as an Influxdb database for telemetry data or a sqlite database for jsonstream data.
  • the database DB extracts data from the source databases under the form of files, like JSON files and stores the files in a priority queue.
  • the priority queue has the following properties: every file has a priority associated with it, a file with high priority is dequeued before a file with low priority, and if two files have the same priority, they are served according to their order in the queue (as in a First In First Out scheme). It is assumed that a file contains a set of data coming from one source database and related to one industrial machine, for example. The set of data includes coherent data that are intended to be filtered and consumed by external applications or client users of industrial machines.
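The queue behaviour described above can be sketched in Python (a hypothetical minimal implementation; the class and method names are illustrative, not from the patent):

```python
import heapq
import itertools

class FilePriorityQueue:
    """Priority queue: higher priority dequeued first; FIFO among equal priorities."""
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # insertion order, used as FIFO tie-break

    def push(self, file_id, priority):
        # heapq is a min-heap, so negate the priority to pop the highest first
        heapq.heappush(self._heap, (-priority, next(self._counter), file_id))

    def pop(self):
        _, _, file_id = heapq.heappop(self._heap)
        return file_id

    def __len__(self):
        return len(self._heap)

q = FilePriorityQueue()
q.push("telemetry.json", priority=2)
q.push("events.json", priority=5)
q.push("status.json", priority=5)
print(q.pop())  # events.json  (highest priority, inserted first)
print(q.pop())  # status.json  (same priority, FIFO order)
print(q.pop())  # telemetry.json
```

The insertion counter is what guarantees the First In First Out behaviour among files of equal priority.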
  • the application server AS is a server able to analyze the content of data from the database DB. To that end, the application server can decompress the files that are received from the control device and that contain such data.
  • the analyzed content may serve the operator of industrial machines for delivering or improving different kinds of services related to the industrial machines.
  • the control device CD includes a collector module COL, an optimizer module OPT, a compression module COM and a publication module PUB.
  • the control device CD is operated to retrieve files from the database DB and to send selected files to the application server AS with the following constraints:
  • selected files are sent under the form of a compressed file which size must be under a predefined threshold (for example 1 Megabyte),
  • selected files must be sent regularly (for example every 10 minutes).
  • the compression algorithm is not additive, meaning that the size of a file obtained by compressing a first file and a second file together is different from the sum of the size of the compressed first file and the size of the compressed second file.
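This non-additivity can be observed with any general-purpose compressor; the sketch below uses Python's zlib purely as an illustration (the patent does not name a specific compression algorithm):

```python
import zlib

# Two repetitive payloads, standing in for telemetry files
a = b"temperature=21.5;" * 200
b = b"pressure=1013.25;" * 200

together = len(zlib.compress(a + b))
separate = len(zlib.compress(a)) + len(zlib.compress(b))

# Compressing jointly shares one stream (header, checksum, dictionary),
# so the joint size differs from the sum of the individual sizes.
print(together, separate)
```

This is why the scheduler cannot simply sum per-file compressed sizes and instead works with an estimated compression ratio.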
  • the optimizer module OPT applies a job shop scheduler to schedule and order files to send to the application server and further applies a reinforcement learning to the job shop scheduler to better select the size of the files to send to the application server.
  • the compression module COM compresses selected files by the optimizer module OPT with a determined ratio.
  • the files may be transformed into a predefined format adapted for publication to the application server.
  • the publication module PUB is configured to send the compressed files to the application server AS through the second telecommunication network TN 2 .
  • the optimizer module OPT executes a scheduling algorithm using a compression ratio combined with a reinforcement learning to select a set of files for compression and to update the compression ratio.
  • One constraint of the scheduling algorithm is that the selected set of files should have a weight as close as possible to a predefined threshold. For example, this threshold is imposed by the limit of bandwidth allocated to the reporting service performed by the control device via the second telecommunication network TN 2 .
  • the optimizer module OPT executes a Johnson scheduling that consists in selecting files with a constraint related to a size limit and taking into account the priority and the size of files.
  • the scheduling algorithm may be defined as the following:
  • the precision of the scheduling algorithm depends on the compression ratio that is estimated (the estimated ratio is the ratio used to compress the last files sent to the application server). If the ratio is better estimated, the batch size to send to the application server is thus optimized. To that end, a reinforcement learning is used to better estimate the ratio.
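The patent does not reproduce the scheduling algorithm itself at this point; as a sketch, a greedy priority-first selection under the estimated-compressed-size constraint (selected size divided by the estimated ratio kept under the limit) could look like the following, with illustrative names throughout:

```python
def schedule(files, ratio, size_limit):
    """Select files (highest priority first, queue order among equals) whose
    estimated compressed size stays under the limit.

    files: list of (file_id, size_bytes, priority) in queue order
    ratio: current estimated compression ratio (uncompressed / compressed)
    size_limit: maximum size of the compressed file, in bytes
    """
    selected, total = [], 0
    # sorted() is stable, so queue order is preserved among equal priorities
    for file_id, size, _prio in sorted(files, key=lambda f: -f[2]):
        if (total + size) / ratio <= size_limit:
            selected.append(file_id)
            total += size
    return selected, total

files = [("f0", 400_000, 2), ("f1", 900_000, 1), ("f2", 300_000, 2)]
batch, weight = schedule(files, ratio=2.0, size_limit=1_000_000)
print(batch, weight)
```

A Johnson-type scheduler would refine the ordering, but the per-file constraint checked here is the one stated above: the cumulated size divided by the estimated ratio must remain under the size limit.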
  • the reinforcement learning is based on the Multi Armed Bandits Algorithm (MAB).
  • MAB Multi Armed Bandits Algorithm
  • a decision maker repeatedly chooses among a finite set of actions.
  • a chosen action “a” yields a reward drawn from a probability distribution intrinsic to action “a” and unknown to the decision maker.
  • the goal for the latter is to learn, as fast as possible, which are the actions yielding maximum reward in expectation.
  • Multiple algorithms have been proposed within this framework, such as Upper Confidence Bounds (UCBs).
  • Let μ1, . . . , μk be the mean values associated with these reward distributions.
  • the gambler iteratively plays one lever per round and observes the associated reward.
  • the objective is to maximize the sum of the collected rewards.
  • the horizon is the number of rounds that remain to be played.
  • the bandit problem is formally equivalent to a one-state Markov decision process.
  • the regret ρ after T rounds is defined as the expected difference between the reward sum associated with an optimal strategy and the sum of the collected rewards: ρ = T·μ* − Σt=1..T r̂t, where μ* = maxk μk is the maximal reward mean and r̂t is the reward collected at round t.
  • a “zero regret strategy” is a strategy whose average regret per round ρ/T tends to zero with probability equal to “1” when the number of played rounds tends to infinity. Zero regret strategies are guaranteed to converge to an optimal strategy if enough rounds are played. Some strategies exist which provide a solution to the bandit problem.
  • the Upper Confidence Bound algorithm is selected to provide a quick convergence to an optimal ratio with accuracy.
  • t denotes the number of tries, Tj denotes the number of tries for arm j, ri denotes the reward for the try i, and 1i,j is a function indicating that the machine j has been selected for the try i. The average reward of the machine j is then Xj = (1/Tj)·Σi ri·1i,j.
  • the bias Aj must be chosen to have a logarithmic decrease of the regret (the regret is logarithmically bounded): Aj = √(2·ln(t)/Tj).
  • the arm chosen is the one maximizing the sum of the two terms Xj and Aj.
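The selection rule above corresponds to the standard UCB1 strategy, which might be sketched as follows (a generic implementation; variable names are illustrative):

```python
import math

def ucb_select(mean_rewards, tries_per_arm, t):
    """Pick the arm maximizing Xj + Aj, where Aj = sqrt(2*ln(t)/Tj) is the
    exploration bias giving a logarithmically bounded regret.

    mean_rewards: Xj, empirical mean reward of each arm
    tries_per_arm: Tj, number of times each arm has been played
    t: total number of tries so far
    """
    best_arm, best_score = None, float("-inf")
    for j, (x_j, t_j) in enumerate(zip(mean_rewards, tries_per_arm)):
        if t_j == 0:
            return j  # play every arm once before applying the bound
        score = x_j + math.sqrt(2 * math.log(t) / t_j)
        if score > best_score:
            best_arm, best_score = j, score
    return best_arm

# Arm 1 has the best empirical mean, but arm 2 has been tried least
print(ucb_select([0.4, 0.7, 0.5], [10, 10, 2], t=22))
```

With these numbers, arm 2 is chosen despite its lower empirical mean, because its small try count Tj inflates the exploration bias Aj; this is the exploration/exploitation trade-off that lets the ratio estimate keep adapting.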
  • An embodiment comprises a control device CD under the form of an apparatus comprising one or more processor(s), I/O interface(s), and a memory coupled to the processor(s).
  • the processor(s) may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions.
  • the processor(s) can be a single processing unit or a number of units, all of which could also include multiple computing units.
  • the processor(s) are configured to fetch and execute computer-readable instructions stored in the memory.
  • processor may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software.
  • the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be shared.
  • explicit use of the term “processor” should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor (DSP) hardware, network processor, application specific integrated circuit (ASIC), field programmable gate array (FPGA), read only memory (ROM) for storing software, random access memory (RAM), and non volatile storage.
  • Other hardware, conventional and/or custom, may also be included.
  • the memory may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes.
  • the memory includes modules and data.
  • the modules include routines, programs, objects, components, data structures, etc., which perform particular tasks or implement particular abstract data types.
  • the data serves as a repository for storing data processed, received, and generated by one or more of the modules.
  • program storage devices for example, digital data storage media, which are machine or computer readable and encode machine-executable or computer-executable programs of instructions, where said instructions perform some or all of the steps of the described method.
  • the program storage devices may be, for example, digital memories, magnetic storage media, such as a magnetic disks and magnetic tapes, hard drives, or optically readable digital data storage media.
  • a method for optimizing the scheduling of files to be sent to an application server comprises steps S1 to S5.
  • In step S1, the collector module COL of the control device CD retrieves files from the database DB through the first telecommunication network TN 1 .
  • the files that are not retrieved remain in the priority queue and have an updated priority incremented by 1.
  • the collector module COL thus retrieves a first set of files among all the files currently stored in the priority queue.
  • the collector module retrieves a set of files regularly according to a publication period, for example every ten minutes, in order to send at least a part of this set of files to the application server. Said at least a part of this set of files will be removed from the priority queue, which will also be fed with other files coming from the source databases for a next publication period.
  • In step S2, the optimizer module OPT applies the first set of files to a scheduling algorithm combined with a reinforcement learning, for example the Johnson scheduling combined with the multi-armed bandits model, according to sub-steps S21 to S24 explained in more detail hereinafter.
  • a scheduling algorithm combined with the reinforcement learning takes as inputs, which can change according to the publication period, the first set of files, the compression ratio and the number of arms, and provides as output a second set of files intended to be sent to the application server and an updated compression ratio that will be used for the next publication period.
  • the Johnson scheduling is applied to the first set of files and selects an intermediary set of files among the first set of files by satisfying the following criterion: the sum of the weights of the files of the intermediary set of files divided by the compression ratio is under the size limit L, i.e. (w0 + . . . + wk)/R ≤ L, where wi is the weight of file fi and R the compression ratio.
  • the optimizer module OPT executes the reinforcement learning on the intermediary set of files.
  • the reinforcement learning is based on the multi armed bandits model using N arms, with an odd number N>3.
  • the intermediary set of files B = {f0, . . . , fk} corresponds to an arm arm[0] with index “0”.
  • Each arm corresponds to the intermediary set of files with more or fewer files.
  • the arm arm[ ⁇ 1] corresponds to the intermediary set of files B without one file among the files f 0 , . . . , f k .
  • the reinforcement learning is done by exploration by reduction or by augmentation of the intermediary set of files B.
  • arm[−i] corresponds to the intermediary set of files B without the i last files: arm[−i] = {f0, . . . , fk−i};
  • arm[+i] corresponds to the intermediary set of files B plus the next i files: arm[+i] = {f0, . . . , fk+i}.
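Assuming the arms are prefixes of the scheduled file order (one reading of the arm[−i]/arm[+i] definitions above), building the set of arms might look like this (hypothetical helper; `queue` holds the first set of files in schedule order, the first k+1 of which form B):

```python
def build_arms(queue, k, n_arms):
    """Build arms around the intermediary set B = queue[:k+1].

    arm[0]  = B,
    arm[-i] = B without its i last files,
    arm[+i] = B plus the next i files of the queue.
    n_arms must be odd so the arms are symmetric around arm[0];
    k is assumed large enough that arm[-i] is never empty.
    """
    assert n_arms % 2 == 1 and n_arms >= 3
    half = n_arms // 2
    # slices clamp at the end of the queue when k + 1 + i exceeds its length
    return [queue[:k + 1 + i] for i in range(-half, half + 1)]

queue = ["f0", "f1", "f2", "f3", "f4", "f5"]
for arm in build_arms(queue, k=3, n_arms=5):
    print(arm)
```

Here the middle arm is B itself, and the bandit then learns which prefix length yields a compressed file closest to (but under) the size limit.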
  • In step S23, the optimizer module OPT uses a learning phase for the different arms, taking as feedback the size of the last compressed file sent to the application server during the last publication period.
  • Each arm receives a reward based on this feedback.
  • the reward for a given arm can be seen as the estimated weight of a compressed file based on the files corresponding to the intermediary set of files used by the given arm.
  • An arm can be penalized if the estimated weight is above the size limit.
  • In step S24, the Upper Confidence Bound algorithm is selected and applied to each arm.
  • the optimizer module OPT selects the arm[i] that provides the best reward among the rewards of the arms, i.e. the arm maximizing the Upper Confidence Bound Xj + Aj.
  • In step S3, the optimizer module OPT has selected an arm that contains a second set of files, corresponding to the intermediary set of files with more or fewer files, and provides the second set of files to the compression module COM.
  • the compression module COM compresses the second set of files with the compression ratio into a compressed file, the compressed file being intended to be sent to the application server and having a size with a new value compared to the value of the size of the last compressed file sent to the application server during the last publication period.
  • In step S4, the publication module PUB retrieves the compressed file and optionally adapts it with respect to the protocol used for publication.
  • the publication module PUB sends the compressed file to the application server through the second telecommunication network.
  • In step S5, which can take place before or in parallel with step S4, the optimizer module OPT has selected an arm that contains a second set of files containing the files {f0, . . . , fm}.
  • the optimizer module OPT updates the compression ratio based on the size of the compressed file and the size of the files of the second set of files.
  • the updated compression ratio can be determined according to the following relation: Rt = (Rt−1 + W/τ)/2, where Rt−1 is the compression ratio of the previous publication period, W is the size of the files of the second set of files and τ is the size of the compressed file.
  • steps S1 to S5 are reiterated for a new first set of files, the optimizer module taking as input the updated compression ratio for the scheduling algorithm and the size of the compressed file for the reinforcement learning.
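The per-period ratio update can be sketched as below, assuming the update averages the previous estimate with the ratio observed on the last publication, i.e. Rt = (Rt−1 + W/τ)/2 with W the size of the second set of files and τ the size of the compressed file (an assumed form of the relation above):

```python
def update_ratio(prev_ratio, batch_size, compressed_size):
    """Average the previous ratio estimate with the ratio observed on the
    last publication (assumed relation: R_t = (R_{t-1} + W/tau) / 2)."""
    observed = batch_size / compressed_size  # W / tau
    return (prev_ratio + observed) / 2

r = 2.0
# 1.6 MB of selected files compressed into 0.5 MB -> observed ratio 3.2
r = update_ratio(r, 1_600_000, 500_000)
print(round(r, 3))  # 2.6
```

Each publication period then feeds the new compressed-file size back into both this update and the bandit rewards, as in step S5, so the estimate tracks changes in the data or in the compression algorithm.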

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Fuzzy Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A device for optimizing the scheduling of files to be sent to an application server at regular time intervals, the device configured to:
    • retrieve a first set of files from a database for a time interval, the first set of files being stored in a priority queue and carrying information sent from sensors linked to industrial machines,
    • apply the retrieved first set of files to a scheduling algorithm using a compression ratio and combined with a reinforcement learning in order to select a second set of files,
    • compress the second set of files based on the compression ratio into a compressed file, wherein the size of the compressed file is below a size limit, and
    • send the compressed file to the application server,
    • wherein the compression ratio is updated based on the size of the second set of files and the size of the compressed file, and
    • wherein the reinforcement learning uses the size of a previous compressed file from a previous time interval.

Description

    FIELD OF INVENTION
  • The present invention generally relates to computer systems and methods, and more particularly relates to applications that handle large amounts of data resulting from different Internet of Things applications in industrial machines.
  • BACKGROUND
  • The Internet of Things (IoT) is an essential element of the digital development of companies. On many markets, connected objects capture valuable information. Industrial IoTs are mainly sensors linked to machines that are located in different industrial sites and measure, record, and send operating data to a data center to be analyzed. To that end, a gateway, like an IoT box, collects these data from the data center under the form of files and sends the files to the cloud where the data will be stored and analyzed.
  • The files are compressed into a compressed file by a compression application before being sent to the cloud. The size of the compressed file must be less than a threshold, for example one Megabyte, and the compressed file must be sent regularly, for example every ten minutes. However, even if the constraint of regularly sending data under a predefined size is respected, the amount of data transferred is not optimized, as the compressed size can be more or less close to the predefined size.
  • There is therefore a need for improving the selection of files to send regularly such that the compressed size of the files is as close as possible to a predefined threshold.
  • SUMMARY
  • This summary is provided to introduce concepts related to the present inventive subject matter. This summary is not intended to identify essential features of the claimed subject matter nor is it intended for use in determining or limiting the scope of the claimed subject matter.
  • In one implementation, there is provided a method for optimizing the scheduling of files to be sent to an application server at regular time intervals, comprising the following steps in a control device:
  • retrieving a first set of files from a database for a time interval, the first set of files being stored in a priority queue and carrying information sent from sensors linked to industrial machines,
  • applying the retrieved first set of files to a scheduling algorithm using a compression ratio and combined with a reinforcement learning in order to select a second set of files,
  • compressing the second set of files based on the compression ratio into a compressed file, wherein the size of the compressed file is below a size limit,
  • sending the compressed file to the application server,
  • wherein the compression ratio is updated based on the size of the second set of files and the size of the compressed file,
  • wherein the reinforcement learning uses the size of a previous compressed file from a previous time interval.
  • Advantageously, the method optimizes the number of bytes sent to the cloud per unit of time and reduces the cloud cost (as frame lengths are optimized). The reinforcement learning is light to implement, in terms of memory and processing consumption. Moreover, the reinforcement learning is fast in execution and does not need a learning phase.
  • Advantageously, if the compression algorithm is replaced by another one, the reinforcement learning will still adapt to the other compression algorithm. If the files do not need to be compressed anymore, the compression ratio will converge to the value “1”. If the threshold for transmission changes to a new threshold, the reinforcement learning will also converge to reach the new threshold automatically.
  • In an embodiment, the updated compression ratio is used for a next first set of files to be retrieved for a next time interval.
  • In an embodiment, the scheduling algorithm takes as input the first set of files, the compression ratio and the size limit and provides as output the second set of files, wherein the size of the second set of files divided by the compression ratio is less than the size limit.
  • In an embodiment, the scheduling algorithm is based on a Johnson scheduling algorithm.
  • In an embodiment, the reinforcement learning is based on a multi-armed bandits model.
  • In an embodiment, the scheduling algorithm provides an intermediary set of files that is used by a set of arms that use more or less files than the intermediary set of files and one arm is selected according to Upper Confidence Bound and corresponds to the second set of files.
  • In an embodiment, the sum of the weights of files of the intermediary set of files divided by the compression ratio is under the size limit.
  • In an embodiment, a learning phase is used for the set of arms, taking as feedback the size of the previous compressed file sent to the application server.
  • In another implementation, there is provided a device for optimizing the scheduling of files to be sent to an application server at regular time intervals, the device comprising:
  • means for retrieving a first set of files from a database for a time interval, the first set of files being stored in a priority queue and carrying information sent from sensors linked to industrial machines,
  • means for applying the retrieved first set of files to a scheduling algorithm using a compression ratio and combined with a reinforcement learning in order to select a second set of files,
  • means for compressing the second set of files based on the compression ratio into a compressed file, wherein the size of the compressed file is below a size limit,
  • means for sending the compressed file to the application server,
  • wherein the compression ratio is updated based on the size of the second set of files and the size of the compressed file,
  • wherein the reinforcement learning uses the size of a previous compressed file from a previous time interval.
  • In another implementation, there is provided an apparatus for optimizing the scheduling of files to be sent to an application server at regular time intervals, the apparatus comprising:
  • one or more network interfaces to communicate with a telecommunication network;
  • a processor coupled to the network interfaces and configured to execute one or more processes; and
  • a memory configured to store a process executable by the processor, the process when executed operable to:
  • retrieve a first set of files from a database for a time interval, the first set of files being stored in a priority queue and carrying information sent from sensors linked to industrial machines,
  • apply the retrieved first set of files to a scheduling algorithm using a compression ratio and combined with a reinforcement learning in order to select a second set of files,
  • compress the second set of files based on the compression ratio into a compressed file, wherein the size of the compressed file is below a size limit,
  • send the compressed file to the application server,
  • wherein the compression ratio is updated based on the size of the second set of files and the size of the compressed file,
  • wherein the reinforcement learning uses the size of a previous compressed file from a previous time interval.
  • In another implementation there is provided a computer-readable medium having embodied thereon a computer program for executing a method for optimizing the scheduling of files to be sent to an application server at regular time intervals. Said computer program comprises instructions which carry out steps according to the method according to the invention.
  • BRIEF DESCRIPTION OF THE FIGURES
  • The detailed description is described with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same numbers are used throughout the figures to reference like features and components. Some embodiments of systems and/or methods in accordance with embodiments of the present subject matter are now described, by way of example only, and with reference to the accompanying figures, in which:
  • FIG. 1 shows a schematic block diagram of a communication system according to one embodiment of the invention for optimizing the scheduling of files to be sent to an application server; and
  • FIG. 2 shows a flow chart illustrating a method for optimizing the scheduling of files to be sent to an application server according to one embodiment of the invention.
  • The same reference number represents the same element or the same type of element on all drawings.
  • It should be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative systems embodying the principles of the present subject matter. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudo code, and the like represent various processes which may be substantially represented in computer readable medium and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.
  • DESCRIPTION OF EMBODIMENTS
  • The figures and the following description illustrate specific exemplary embodiments of the invention. It will thus be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described or shown herein, embody the principles of the invention and are included within the scope of the invention. Furthermore, any examples described herein are intended to aid in understanding the principles of the invention, and are to be construed as being without limitation to such specifically recited examples and conditions. As a result, the invention is not limited to the specific embodiments or examples described below, but by the claims and their equivalents.
  • Referring to FIG. 1, a control device CD can communicate with a database DB through a first telecommunication network TN1 and with an application server AS through a second telecommunication network TN2.
  • Each of the first and second telecommunication networks may be a wired or wireless network, or a combination of wired and wireless networks. They can be associated with a packet network, for example an IP ("Internet Protocol") high-speed network such as the Internet or an intranet, or even a company-specific private network.
  • The first or second telecommunication network is for example a digital cellular radio communication network of the GPRS (General Packet Radio Service), UMTS (Universal Mobile Telecommunications System), CDMA (Code Division Multiple Access), LTE (Long Term Evolution) or even 5G (Fifth Generation) type. Furthermore, the first or second telecommunication network can be accessed by a mobile device via a wireless link, such as a Wi-Fi network or a Bluetooth connection.
  • In another example, the first or second telecommunication network is a public wireless network of limited scope, such as a WLAN (Wireless Local Area Network) conforming to an 802.1x standard, or a medium-range network according to the WiMAX protocol (Worldwide Interoperability for Microwave Access).
  • Additionally, the first or second telecommunication network may be operating in accordance with fourth or fifth generation wireless communication protocols and/or the like as well as similar wireless communication protocols that may be developed in the future.
  • The database DB stores data resulting from different Internet of Things applications in industrial machines. The Internet of Things applications are implemented in sensors linked to industrial machines and measure, record, and send operating data related to the industrial machines to source databases, such as an InfluxDB database for telemetry data or an SQLite database for JSON stream data.
  • The database DB extracts data from the source databases under the form of files, such as JSON files, and stores the files in a priority queue. The priority queue has the following properties: every file has a priority associated with it, a file with high priority is dequeued before a file with low priority, and if two files have the same priority, they are served according to their order in the queue (as in a First In First Out scheme). It is assumed that a file contains a set of data coming from one source database and related to one industrial machine, for example. The set of data includes coherent data that are intended to be filtered and consumed by external applications or client users of industrial machines.
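As a hedged sketch, the priority-queue behaviour described above (highest priority dequeued first, FIFO tiebreak among equal priorities, and aging of the files left behind) could be implemented as follows; the class and method names are illustrative and not part of the patent:

```python
import heapq
import itertools

class FilePriorityQueue:
    """Priority queue: higher priority p dequeued first; FIFO among equal p."""

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # insertion order, used as FIFO tiebreak

    def enqueue(self, file_id, p=0):
        # heapq is a min-heap, so priorities are negated to pop the highest first
        heapq.heappush(self._heap, (-p, next(self._seq), file_id))

    def dequeue(self):
        _, _, file_id = heapq.heappop(self._heap)
        return file_id

    def age_remaining(self):
        # files not selected for publication gain 1 priority point
        self._heap = [(neg_p - 1, seq, f) for neg_p, seq, f in self._heap]
        heapq.heapify(self._heap)
```

The counter-based tiebreak guarantees the FIFO property among files of equal priority, and `age_remaining` reproduces the priority increment applied to files left in the queue.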
  • The application server AS is a server able to analyse the content of data from the database DB. To that end, the application server can decompress files that are received from the control device and that contain such data. The analysed content may serve the operator of industrial machines for delivering or improving different kinds of services related to the industrial machines.
  • The control device CD includes a collector module COL, an optimizer module OPT, a compression module COM and a publication module PUB.
  • The control device CD is operated to retrieve files from the database DB and to send selected files to the application server AS with the following constraints:
  • selected files are sent under the form of a compressed file whose size must be under a predefined threshold (for example 1 Megabyte),
  • selected files must be sent regularly (for example every 10 minutes).
  • It has to be considered that the compression algorithm is not additive, meaning that the size of a file obtained by compressing a first file and a second file together differs from the sum of the size of the first file compressed alone and the size of the second file compressed alone.
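This non-additivity is easy to observe with a standard DEFLATE compressor (used here purely as an illustration; the patent does not specify the compression algorithm):

```python
import zlib

# two illustrative telemetry-like payloads with internal redundancy
first = b"temperature=21.5;pressure=1013;" * 50
second = b"temperature=21.6;pressure=1014;" * 50

together = len(zlib.compress(first + second))
separate = len(zlib.compress(first)) + len(zlib.compress(second))

# Compressing jointly exploits redundancy across both payloads, so the
# sizes are not additive: here the joint archive is smaller than the sum.
print(together, separate)
```

This is why the estimated compression ratio must be learned rather than computed file by file: the compressed size of a batch cannot be derived from the compressed sizes of its members.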
  • The collector module COL extracts data as files from the database, the files being stored in a priority queue. Initially, all files are enqueued with a priority p with p=0. When files are selected by the optimizer module to be sent to the application server, the remaining files in the priority queue have their priority incremented by 1.
  • The optimizer module OPT applies a job shop scheduler to schedule and order files to send to the application server and further applies a reinforcement learning to the job shop scheduler to better select the size of the files to send to the application server.
  • The compression module COM compresses selected files by the optimizer module OPT with a determined ratio. The files may be transformed in a predefined format adapted for publication to the application server.
  • The publication module PUB is configured to send the compressed files to the application server AS through the second telecommunication network TN2.
  • More precisely, the optimizer module OPT executes a scheduling algorithm using a compression ratio combined with a reinforcement learning to select a set of files for compression and to update the compression ratio. One constraint of the scheduling algorithm is that the selected set of files should have a weight as close as possible to a predefined threshold. For example, this threshold is imposed by the limit of bandwidth allocated to the reporting service performed by the control device via the second telecommunication network TN2.
  • In one embodiment, the optimizer module OPT executes a Johnson scheduling that consists in selecting files with a constraint related to a size limit and taking into account the priority and the size of files.
  • The scheduling algorithm takes as input a first set of files F={f_0, …, f_n} retrieved from the priority queue, a sort criterion C (per priority p and size ω of files), the number N of files, a size limit L and a compression ratio, and provides as output a second set of files B={f_j, …, f_k} with B⊂F, under the form of the sorted set T (including the sorted second set of files), and a set of remaining files F′ (with F′⊂F) including the files remaining in the priority queue.
  • The scheduling algorithm may be defined as follows (with R the compression ratio and ε a small margin):
  • N ← NumberOfItems(F)
    T ← Sort(F, C)
    B ← ∅, sizeB ← 0, i ← 0
    while (sizeB < (L − ε)) and (i < N) do
        B[i] ← T[i]
        sizeB ← (Σ_{j=1..|B|} B[j].ω) / R
        i ← i + 1
    return B, T
    F′ ← F − B    (files not selected for the publication)
    i ← 0
    N ← NumberOfItems(F′)
    while (i < N) do    (increase the priority p of each remaining file by 1)
        F′[i].p ← F′[i].p + 1
        i ← i + 1
    return F′
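A minimal Python transcription of this selection loop, under the assumption that each file is a record with a priority `p` and a size `w` (the field names are illustrative, not from the patent):

```python
def schedule(files, ratio, limit, eps=0.0):
    """Greedy selection: sort by (priority desc, size desc), then take files
    while the estimated compressed size (total weight / ratio) is under the
    limit. Like the pseudocode, the last accepted file may slightly overshoot;
    the bandit layer is what corrects the estimate over time."""
    t = sorted(files, key=lambda f: (f["p"], f["w"]), reverse=True)
    batch, size_b = [], 0.0
    for f in t:
        if size_b >= limit - eps:
            break
        batch.append(f)
        size_b = sum(g["w"] for g in batch) / ratio  # estimated compressed size
    remaining = t[len(batch):]
    for f in remaining:
        f["p"] += 1  # age the files left in the queue
    return batch, remaining
```

For instance, with ratio 2.0 and limit 50, files of sizes 60 and 50 are accepted before the estimate crosses the limit, and the remaining file is aged by one priority point.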
  • The precision of the scheduling algorithm depends on the estimated compression ratio (the estimated ratio is the ratio used to compress the last files sent to the application server). The better the ratio is estimated, the better the size of the batch sent to the application server is optimized. To that end, reinforcement learning is used to better estimate the ratio.
  • In one embodiment, the reinforcement learning is based on the Multi Armed Bandits (MAB) model. In the MAB model, a decision maker repeatedly chooses among a finite set of actions. At each step t, a chosen action "a" yields a reward drawn from a probability distribution intrinsic to action "a" and unknown to the decision maker. The goal for the latter is to learn, as fast as possible, which actions yield maximum reward in expectation. Multiple algorithms have been proposed within this framework. As explained hereinafter, a policy based on Upper Confidence Bounds (UCBs) has been shown to achieve optimal asymptotic performance in terms of the number of steps t.
  • The Multi Armed Bandits model can be seen as a set of real distributions B={R_1, …, R_K}, each distribution being associated with the rewards delivered by one of the K ∈ ℕ⁺ levers. Let μ_1, …, μ_K be the mean values associated with these reward distributions.
  • The gambler iteratively plays one lever per round and observes the associated reward. The objective is to maximize the sum of the collected rewards. The horizon is the number of rounds that remain to be played. The bandit problem is formally equivalent to a one-state Markov decision process.
  • The regret ρ after T rounds is defined as the expected difference between the reward sum associated with an optimal strategy and the sum of the collected rewards:
  • ρ = Tμ* − Σ_{t=1}^{T} r_t
  • where μ* = max_k {μ_k} is the maximal reward mean and r_t is the reward in round t.
  • A "zero regret strategy" is a strategy whose average regret per round ρ/T tends to zero with probability "1" when the number of played rounds tends to infinity. Zero-regret strategies are guaranteed to converge to an optimal strategy if enough rounds are played. Some strategies exist which provide a solution to the bandit problem.
  • In one embodiment, the Upper Confidence Bound algorithm is selected to provide a quick convergence to an optimal ratio with accuracy.
  • In the Upper Confidence Bound algorithm, the empirical mean of each arm is defined by
  • X_j = (1/T_j) Σ_{i=1}^{t} r_i · 1{χ_i = j}
  • where t denotes the number of tries, T_j the number of tries of arm j, r_i the reward of try i, and 1{χ_i = j} an indicator function equal to 1 when arm j has been selected at try i.
  • To compute the index of each arm, a bias is introduced to allow the algorithm to explore the different arms:
  • B_j = X_j + A_j
  • The bias must be chosen to obtain a logarithmic decrease of the regret (the regret is logarithmically bounded):
  • A_j = √(2 log(t) / T_j)
  • The arm chosen is the one maximizing the sum of the two terms Xj and Aj.
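The index computation above can be sketched as a generic UCB1 routine (an illustration; the variable names follow the formulas X_j, A_j and B_j):

```python
import math

def select_arm(counts, rewards, t):
    """Pick the arm maximizing B_j = X_j + A_j (UCB1).

    counts[j]  = T_j, the number of tries of arm j
    rewards[j] = cumulative reward collected by arm j
    t          = total number of tries so far
    """
    best_j, best_b = None, -math.inf
    for j, (t_j, r_j) in enumerate(zip(counts, rewards)):
        if t_j == 0:
            return j  # play every arm at least once before using the index
        x_j = r_j / t_j                          # empirical mean X_j
        a_j = math.sqrt(2 * math.log(t) / t_j)   # exploration bias A_j
        if x_j + a_j > best_b:
            best_j, best_b = j, x_j + a_j
    return best_j
```

An under-explored arm gets a large bias A_j, so it is tried again even if its empirical mean X_j is currently lower than that of the best-known arm.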
  • An embodiment comprises a control device CD under the form of an apparatus comprising one or more processor(s), I/O interface(s), and a memory coupled to the processor(s). The processor(s) may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. The processor(s) can be a single processing unit or a number of units, all of which could also include multiple computing units. Among other capabilities, the processor(s) are configured to fetch and execute computer-readable instructions stored in the memory.
  • The functions realized by the processor may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software. When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be shared. Moreover, explicit use of the term “processor” should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor (DSP) hardware, network processor, application specific integrated circuit (ASIC), field programmable gate array (FPGA), read only memory (ROM) for storing software, random access memory (RAM), and non volatile storage. Other hardware, conventional and/or custom, may also be included.
  • The memory may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. The memory includes modules and data. The modules include routines, programs, objects, components, data structures, etc., which perform particular tasks or implement particular abstract data types. The data, amongst other things, serves as a repository for storing data processed, received, and generated by one or more of the modules.
  • A person skilled in the art will readily recognize that steps of the methods presented above can be performed by programmed computers. Herein, some embodiments are also intended to cover program storage devices, for example, digital data storage media, which are machine or computer readable and encode machine-executable or computer-executable programs of instructions, where said instructions perform some or all of the steps of the described method. The program storage devices may be, for example, digital memories, magnetic storage media such as magnetic disks and magnetic tapes, hard drives, or optically readable digital data storage media.
  • With reference to FIG. 2, a method for optimizing the scheduling of files to be sent to an application server according to one embodiment of the invention comprises steps S1 to S5.
  • In step S1, the collector module COL of the control device CD retrieves files from the database DB through the first telecommunication network TN1. The files are stored in the priority queue with an initial priority p=0. When files are not selected to be sent to the application server, these files become remaining files in the priority queue and have their priority incremented by 1. The collector module COL thus retrieves a first set of files among all the files currently stored in the priority queue.
  • The collector module retrieves a set of files regularly according to a publication period, for example every ten minutes, in order to send at least a part of this set of files to the application server. Said at least a part of this set of files will be removed from the priority queue, which will also be fed with other files coming from the source databases for a next publication period.
  • In step S2, the optimizer module OPT applies the first set of files to a scheduling algorithm combined with a reinforcement learning, for example the Johnson scheduling combined with the multi-armed bandits model, according to sub-steps S21 to S24 explained in more detail hereinafter. In particular, the scheduling algorithm combined with the reinforcement learning takes as input (which can change from one publication period to the next) the first set of files, the compression ratio and the number of arms, and provides as output a second set of files intended to be sent to the application server and an updated compression ratio that will be used for the next publication period.
  • In step S21, the optimizer module OPT executes the scheduling algorithm that takes as input the first set of files F={f_0, …, f_n} retrieved from the priority queue, a size limit L and a compression ratio R_{t−1}, and provides as output an intermediary set of files B={f_0, …, f_k} with B⊂F. For example, the Johnson scheduling is applied to the first set of files and selects an intermediary set of files among the first set of files by satisfying the following criterion: the sum of the weights of the files of the intermediary set of files divided by the compression ratio is under the size limit L.
  • In step S22, the optimizer module OPT executes the reinforcement learning on the intermediary set of files. For example, the reinforcement learning is based on the multi-armed bandits model using N arms, with an odd number N>3. The intermediary set of files B={f_0, …, f_k} corresponds to an arm arm[0] with index "0". Each other arm corresponds to the intermediary set of files with fewer or more files. For example, the arm arm[−1] corresponds to the intermediary set of files B without one file among the files f_0, …, f_k.
  • The reinforcement learning is done by exploration by reduction or by augmentation of the intermediary set of files B.
  • In exploration by reduction, arm[−i] corresponds to the intermediary set of files B without the i last files:
  • ∀i ∈ [1, N/2]: arm[−i] = {f_0, …, f_{k−i}}
  • In exploration by augmentation, arm[+i] corresponds to the intermediary set of files B plus the next i files:
  • ∀i ∈ [1, N/2]: arm[+i] = {f_0, …, f_{k+i}}
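The construction of arms by reduction and augmentation can be sketched as follows (a hypothetical helper; `queue_tail` stands for the next files of F that were not selected by the scheduler):

```python
def build_arms(batch, queue_tail, n):
    """Build N arms indexed -n//2 .. +n//2 around the intermediary set `batch`.

    arm[0]  = the intermediary set itself
    arm[-i] = the intermediary set without its i last files (reduction)
    arm[+i] = the intermediary set plus the next i files (augmentation)
    """
    half = n // 2
    arms = {0: list(batch)}
    for i in range(1, half + 1):
        arms[-i] = batch[:len(batch) - i]
        arms[+i] = batch + queue_tail[:i]
    return arms
```

With N=5, this yields the five candidate batches arm[−2] … arm[+2] among which the UCB policy then picks one.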
  • In step S23, the optimizer module OPT uses a learning phase for the different arms, taking as feedback the size Ω of the last compressed file sent to the application server during the last publication period. Each arm receives a reward μ according to the formula:
  • ∀i ∈ [−N/2, …, N/2]: μ_i = arm[i].weight / Ω
  • wherein μ_i is the reward for arm[i],
  • and wherein arm[k].weight = Σ_{i=0}^{k} sizeof(f_i) for every k ∈ [−N/2, …, N/2].
  • The reward for a given arm can be seen as the estimated weight of a compressed file based on the files corresponding to the intermediary set of files used by the given arm. An arm can be penalized if the estimated weight is above the size limit.
  • In step S24, the Upper Confidence Bound algorithm is applied to each arm. The optimizer module OPT selects the arm that provides the best score among the arms, according to the formula:
  • ∀j ∈ [1, …, N]: B_j = X_j + A_j, arm-selected = max_j(B_j)
  • In step S3, the optimizer module OPT has selected an arm that contains a second set of files, corresponding to the intermediary set of files with fewer or more files, and provides the second set of files to the compression module COM.
  • The compression module COM compresses the second set of files with the compression ratio into a compressed file intended to be sent to the application server; its size Ω takes a new value compared to the size of the last compressed file sent to the application server during the last publication period.
  • In step S4, the publication module PUB retrieves the compressed file and optionally adapts it with respect to the protocol used for publication. The publication module PUB sends the compressed file to the application server through the second telecommunication network.
  • In step S5, which can take place before or in parallel with step S4, the optimizer module OPT has selected an arm that contains a second set of files containing the files {f_0, …, f_m}. The optimizer module OPT updates the compression ratio based on the size of the compressed file and the size of the files of the second set of files. The updated compression ratio can be determined according to the following relation:
  • R_t = (R_{t−1} + ω/Ω) / 2
  • wherein Ω is the size of the compressed file to be sent to the application server,
  • and ω is the total size of the files of the second set of files: ω = Σ_{j=0}^{m} sizeof(f_j).
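The update is simply a running average between the previous ratio and the ratio ω/Ω observed on the last publication; a minimal sketch (function and parameter names assumed):

```python
def update_ratio(prev_ratio, batch_sizes, compressed_size):
    """R_t = (R_{t-1} + omega / Omega) / 2, where omega is the total
    uncompressed size of the second set of files and Omega the size of
    the compressed file actually produced."""
    omega = sum(batch_sizes)
    return (prev_ratio + omega / compressed_size) / 2
```

Averaging with the previous estimate smooths out batch-to-batch variation: if compression stops helping, repeated updates make the ratio converge towards ω/Ω (and towards 1 for incompressible data), as noted in the summary.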
  • For the next period of publication, steps S1 to S5 are reiterated for a new first set of files, the optimizer module taking as input the updated compression ratio for the scheduling algorithm and the size of the compressed file for the reinforcement learning.
  • Although the present invention has been described above with reference to specific embodiments, it is not intended to be limited to the specific form set forth herein. Rather, the invention is limited only by the accompanying claims, and other embodiments than the specific ones described above are equally possible within the scope of these appended claims.
  • Furthermore, although exemplary embodiments have been described above in some exemplary combination of components and/or functions, it should be appreciated that alternative embodiments may be provided by different combinations of members and/or functions without departing from the scope of the present disclosure. In addition, it is specifically contemplated that a particular feature described, either individually or as part of an embodiment, can be combined with other individually described features, or parts of other embodiments.

Claims (11)

1. A method, performed in a control device, for optimizing the scheduling of files to be sent to an application server at regular time intervals, comprising:
retrieving a first set of files from a database for a time interval, the first set of files being stored in a priority queue and carrying information sent from sensors linked to industrial machines,
applying the retrieved first set of files to a scheduling algorithm using a compression ratio and combined with a reinforcement learning in order to select a second set of files,
compressing the second set of files based on the compression ratio into a compressed file, wherein the size of the compressed file is below a size limit, and
sending the compressed file to the application server,
wherein the compression ratio is updated based on the size of the second set of files and the size of the compressed file, and
wherein the reinforcement learning uses the size of a previous compressed file from a previous time interval.
2. The method according to claim 1, wherein the updated compression ratio is used for a next first set of files to be retrieved for a next time interval.
3. The method according to claim 1, wherein the scheduling algorithm takes as input the first set of files, the compression ratio and the size limit and provides as output the second set of files, and wherein the size of the second set of files divided by the compression ratio is less than the size limit.
4. The method according to claim 1, wherein the scheduling algorithm is based on a Johnson scheduling algorithm.
5. The method according to claim 4, wherein the reinforcement learning is based on a multi-armed bandits model.
6. The method according to claim 5, wherein the scheduling algorithm provides an intermediary set of files that is used by a set of arms that use more or less files than the intermediary set of files and one arm is selected according to Upper Confidence Bound and corresponds to the second set of files.
7. The method according to claim 6, wherein the sum of the weights of files of the intermediary set of files divided by the compression ratio is under the size limit.
8. The method according to claim 6, wherein a learning phase is used for the set of arms, taking as feedback the size of the previous compressed file sent to the application server.
9. A device for optimizing the scheduling of files to be sent to an application server at regular time intervals, the device comprising:
means for retrieving a first set of files from a database for a time interval, the first set of files being stored in a priority queue and carrying information sent from sensors linked to industrial machines,
means for applying the retrieved first set of files to a scheduling algorithm using a compression ratio and combined with a reinforcement learning in order to select a second set of files,
means for compressing the second set of files based on the compression ratio into a compressed file, wherein the size of the compressed file is below a size limit, and
means for sending the compressed file to the application server,
wherein the compression ratio is updated based on the size of the second set of files and the size of the compressed file, and
wherein the reinforcement learning uses the size of a previous compressed file from a previous time interval.
10. A non-transitory computer-readable medium having embodied thereon a computer program for executing a method for optimizing the scheduling of files to be sent to an application server at regular time intervals on a control device according to claim 1.
11. An apparatus for optimizing the scheduling of files to be sent to an application server at regular time intervals, the apparatus comprising:
one or more network interfaces to communicate with a telecommunication network;
a processor coupled to the network interfaces and configured to execute one or more processes; and
a memory configured to store a process executable by the processor, the process when executed operable to:
retrieve a first set of files from a database for a time interval, the first set of files being stored in a priority queue and carrying information sent from sensors linked to industrial machines,
apply the retrieved first set of files to a scheduling algorithm using a compression ratio and combined with a reinforcement learning in order to select a second set of files, compress the second set of files based on the compression ratio into a compressed file, wherein the size of the compressed file is below a size limit, and
send the compressed file to the application server,
wherein the compression ratio is updated based on the size of the second set of files and the size of the compressed file, and
wherein the reinforcement learning uses the size of a previous compressed file from a previous time interval.
Priority application: EP20305923.3, filed 2020-08-11 (Optimization of files compression).

Publications (2)

Publication Number Publication Date
US20220053046A1 true US20220053046A1 (en) 2022-02-17
US11722551B2 US11722551B2 (en) 2023-08-08

Family

ID=72613888

Family Applications (1)

Application Number Title Priority Date Filing Date
US17/395,530 Active 2041-09-04 US11722551B2 (en) 2020-08-11 2021-08-06 Optimization of files compression

Country Status (3)

Country Link
US (1) US11722551B2 (en)
EP (1) EP3955128B1 (en)
CN (1) CN114077589A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4395269A1 (en) * 2022-12-28 2024-07-03 Schneider Electric Industries Sas A method of streaming industrial telemetry data from an industrial site with congestion control and payload size optimisation

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9342786B2 (en) * 2012-06-15 2016-05-17 California Institute Of Technology Method and system for parallel batch processing of data sets using Gaussian process with batch upper confidence bound
CN106716938A (en) * 2014-10-31 2017-05-24 华为技术有限公司 Low jitter traffic scheduling on packet network
US20200226107A1 (en) * 2019-01-15 2020-07-16 Cisco Technology, Inc. Reinforcement learning for optimizing data deduplication
US20200322703A1 (en) * 2019-04-08 2020-10-08 InfiSense, LLC Processing time-series measurement entries of a measurement database

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Pase, F., Gunduz, D., & Zorzi, M. (2021). Contextual Multi-Armed Bandit with Communication Constraints, whole document, https://openreview.net/forum?id=-spj8FZD4y2, retrieved from internet 11/30/2022 (Year: 2021) *
Xia, W., Quek, T. Q., Guo, K., Wen, W., Yang, H. H., & Zhu, H. (2020). Multi-armed bandit-based client scheduling for federated learning. IEEE Transactions on Wireless Communications, 19(11), 7108-7123, https://ieeexplore.ieee.org/abstract/document/9142401, retrieved from internet 11/30/2022 (Year: 2020) *

Also Published As

Publication number Publication date
EP3955128B1 (en) 2024-03-13
CN114077589A (en) 2022-02-22
EP3955128A1 (en) 2022-02-16
US11722551B2 (en) 2023-08-08

Similar Documents

Publication Publication Date Title
US11392644B2 (en) Optimized navigable key-value store
US8180914B2 (en) Deleting data stream overload
US10223437B2 (en) Adaptive data repartitioning and adaptive data replication
CN111447083A (en) Federal learning framework under dynamic bandwidth and unreliable network and compression algorithm thereof
WO2019184836A1 (en) Data analysis device, and multi-model co-decision system and method
CN109447274B (en) Distributed system for performing machine learning and method thereof
CN108509501A (en) A kind of inquiry processing method, server and computer readable storage medium
CN109471847B (en) I/O congestion control method and control system
US11722551B2 (en) Optimization of files compression
US20220156633A1 (en) System and method for adaptive compression in federated learning
CN109688229A (en) Session keeps system under a kind of load balancing cluster
CN113822456A (en) Service combination optimization deployment method based on deep reinforcement learning in cloud and mist mixed environment
US10952120B1 (en) Online learning based smart steering system for wireless mesh networks
CN114205852B (en) Intelligent analysis and application system and method for wireless communication network knowledge graph
CN109407997A (en) A kind of data processing method, device, equipment and readable storage medium storing program for executing
CN116048817A (en) Data processing control method, device, computer equipment and storage medium
CN116886619A (en) Load balancing method and device based on linear regression algorithm
CN117527708A (en) Optimized transmission method and system for enterprise data link based on data flow direction
US20230281101A1 (en) Auto insights into data changes
CN117172093A (en) Method and device for optimizing strategy of Linux system kernel configuration based on machine learning
Al Muktadir et al. Prediction and dynamic adjustment of resources for latency-sensitive virtual network functions
Zawad et al. Demystifying hyperparameter optimization in federated learning
US11960449B2 (en) Computer-readable recording medium storing information processing program, information processing method, and information processing apparatus
US20240104096A1 (en) High latency query optimization system
US11829419B1 (en) Managing hybrid graph data storage and retrieval for efficient graph query execution

Legal Events

Date Code Title Description
AS Assignment

Owner name: SCHNEIDER ELECTRIC INDUSTRIES SAS, FRANCE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MORAND, DENIS;REEL/FRAME:057100/0314

Effective date: 20200824

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE