EP3662392A1 - Computer system for displaying the logistical path of entities over time - Google Patents
Computer system for displaying the logistical path of entities over time
- Publication number
- EP3662392A1 (application EP18762370.7A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- user
- data
- server
- processing
- application
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000012545 processing Methods 0.000 claims abstract description 47
- 230000009471 action Effects 0.000 claims abstract description 6
- 238000000034 method Methods 0.000 claims description 36
- 238000004364 calculation method Methods 0.000 claims description 35
- 238000002360 preparation method Methods 0.000 claims description 23
- 230000008569 process Effects 0.000 claims description 20
- 238000012800 visualization Methods 0.000 claims description 12
- 238000011282 treatment Methods 0.000 claims description 10
- 238000013135 deep learning Methods 0.000 claims description 6
- 230000002123 temporal effect Effects 0.000 claims description 5
- 230000006870 function Effects 0.000 claims description 4
- 238000011084 recovery Methods 0.000 claims description 2
- 238000002347 injection Methods 0.000 claims 1
- 239000007924 injection Substances 0.000 claims 1
- 238000003860 storage Methods 0.000 description 13
- 238000013528 artificial neural network Methods 0.000 description 10
- 210000002569 neuron Anatomy 0.000 description 9
- 238000004891 communication Methods 0.000 description 8
- 238000004088 simulation Methods 0.000 description 8
- 230000001186 cumulative effect Effects 0.000 description 5
- 238000004458 analytical method Methods 0.000 description 4
- 238000006243 chemical reaction Methods 0.000 description 4
- 238000009826 distribution Methods 0.000 description 4
- 230000015654 memory Effects 0.000 description 4
- 238000012544 monitoring process Methods 0.000 description 4
- 230000008520 organization Effects 0.000 description 4
- 230000006403 short-term memory Effects 0.000 description 4
- 230000007704 transition Effects 0.000 description 4
- 230000007787 long-term memory Effects 0.000 description 3
- 210000004027 cell Anatomy 0.000 description 2
- 238000006073 displacement reaction Methods 0.000 description 2
- 238000001914 filtration Methods 0.000 description 2
- 230000003993 interaction Effects 0.000 description 2
- 238000007726 management method Methods 0.000 description 2
- 230000004048 modification Effects 0.000 description 2
- 238000012986 modification Methods 0.000 description 2
- 238000005457 optimization Methods 0.000 description 2
- 230000000007 visual effect Effects 0.000 description 2
- 238000009825 accumulation Methods 0.000 description 1
- 230000006978 adaptation Effects 0.000 description 1
- 238000013459 approach Methods 0.000 description 1
- 230000006399 behavior Effects 0.000 description 1
- 230000008901 benefit Effects 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 238000010835 comparative analysis Methods 0.000 description 1
- 238000004590 computer program Methods 0.000 description 1
- 238000013480 data collection Methods 0.000 description 1
- 238000007418 data mining Methods 0.000 description 1
- 238000013499 data model Methods 0.000 description 1
- 238000011156 evaluation Methods 0.000 description 1
- 239000000284 extract Substances 0.000 description 1
- 230000008676 import Effects 0.000 description 1
- 238000009434 installation Methods 0.000 description 1
- 238000012432 intermediate storage Methods 0.000 description 1
- 238000010801 machine learning Methods 0.000 description 1
- 230000000873 masking effect Effects 0.000 description 1
- 238000005065 mining Methods 0.000 description 1
- 239000000203 mixture Substances 0.000 description 1
- 238000010606 normalization Methods 0.000 description 1
- 230000037361 pathway Effects 0.000 description 1
- 238000007639 printing Methods 0.000 description 1
- 230000000306 recurrent effect Effects 0.000 description 1
- 230000004044 response Effects 0.000 description 1
- 238000013179 statistical model Methods 0.000 description 1
- 230000001629 suppression Effects 0.000 description 1
- 210000000225 synapse Anatomy 0.000 description 1
- 238000012549 training Methods 0.000 description 1
- 238000012546 transfer Methods 0.000 description 1
- 230000017105 transposition Effects 0.000 description 1
- 238000012795 verification Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/04—Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/02—Services making use of location information
- H04W4/029—Location-based management or tracking services
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/02—Knowledge representation; Symbolic representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
- G06Q10/063—Operations research, analysis or management
- G06Q10/0639—Performance analysis of employees; Performance analysis of enterprise or organisation operations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/08—Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
Definitions
- the present invention relates to the field of automatic process analysis by processing raw data consisting of a collection of descriptive information on isolated tasks, in order to compute recurring sequences and to provide graphical representations and predictive processing.
- the prior art is known from US patent application US2017068705, which describes a computer-implemented method for analyzing process data.
- the method includes receiving an Advanced Process Algebra Execution (APE) instruction, wherein the APE instruction defines a request for process instances from the storage means and comprises at least one process operator; executing the APE instruction and reading the process instances from the storage means according to the APE instruction; and providing the result of the query for further processing.
- APE Advanced Process Algebra Execution
- This article is about a comparative analysis of predictive business process monitoring methods that exploit the logs of the completed tasks of a process, in order to calculate predictions about the execution cases of these processes. Prediction methods are tailor-made for specific prediction tasks. The article considers that the accuracy of prediction methods is very sensitive to the available data set, forcing users to perform trial and error and adjust them when applying them in a specific context.
- This paper studies long short-term memory (LSTM) neural networks as an approach to building accurate models for a wide range of predictive process monitoring tasks. It shows that LSTMs outperform existing techniques in predicting the next event of a running case and its timestamp. It then shows how models that predict the next task can be used to predict the full remaining sequence of a running case.
- LSTM long short-term memory neural network
- the article "TensorFlow: A System for Large-Scale Machine Learning" is also known.
- US patent application US 2014214745 is also known, describing a method for monitoring one or more update messages sent and received among the components of a distributed network system, the update messages including information associated with a state of an object on the distributed network; the state of the object is used to provide a predictive object state model and to predict the occurrence of an artifact in response to the state of the object.
- the solutions of the prior art are not adapted to the management of several sites, that is, to providing for each site not only predictive information but also a configurable graphical representation of the routes, based on predictive estimators common to all sites and on common learning data.
- transposing them to such a path visualization application for a plurality of sites would require very long computation times to analyze a large volume of data.
- the number of possible combinations may require several tens of hours of computation on a standard computer.
- the analytical solutions proposed in the prior art do not allow the use of data additional to the process data, and require recomputation from the totality of the data, without any possibility of updating the result incrementally.
- these analytical solutions only allow the use of data with no missing values, both in the process data and in the additional data.
- the invention aims to remedy these disadvantages with a computer system that allows a large number of users to access complex models, obtained by deep learning algorithms, from simple connected equipment.
- the invention relates, in its most general sense, to a computer system for viewing paths from the processing of at least one series of input data comprising a list of tasks stamped with the identifier of an object, the identifier of an action and temporal information, said system comprising connected "user" computer equipment executing a viewing application and at least one remote server executing an application for calculating a path model from said tables,
- an administration server comprising means for managing a plurality of user accounts and for recording, in the account of each user, the tables coming from the user, the data relating to the user's specific configuration, and the result of the processing performed on a shared computing server
- at least one shared computing server comprising a GPU graphics processor for executing a deep learning application on the data associated with a user and constructing a digital model, which is then recorded in the user's account on at least one of said administration or calculation servers
- the user's equipment executing an application for controlling the calculation, on one of said computing servers, of an analytic or predictive state, and for retrieving and visualizing the data corresponding to the result of this calculation on the interface of the user equipment.
- the computer system further comprises at least one CPU for distributing the computing load between a plurality of shared computing servers.
- it comprises means for anonymizing the identifiers of the objects and/or the identifiers of the actions of each user, and for recording in encrypted form, in the user's account, the means for converting the anonymized data; the data processed by the computing server(s) consists exclusively of anonymized data.
- said encryption is performed by a hash function.
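As a minimal sketch of the anonymization and hashing described above (the use of a keyed HMAC-SHA-256 hash, the field names, and the in-memory conversion table are all illustrative assumptions, not the patent's actual implementation):

```python
import hashlib
import hmac

# Hypothetical sketch: anonymize object/action identifiers with a keyed
# hash, keeping a conversion table so only the key holder can map the
# anonymized values back. In the system described, this table would be
# stored encrypted in the user's account.
SECRET_KEY = b"user-account-secret"  # assumed per-user secret

def anonymize(identifier: str) -> str:
    """Return a stable, non-reversible token for an identifier."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()[:16]

def anonymize_records(records):
    """Replace object/action identifiers and build the conversion table."""
    table, out = {}, []
    for rec in records:
        anon = dict(rec,
                    object_id=anonymize(rec["object_id"]),
                    action_id=anonymize(rec["action_id"]))
        table[anon["object_id"]] = rec["object_id"]
        table[anon["action_id"]] = rec["action_id"]
        out.append(anon)
    return out, table

records = [{"object_id": "PKG-001", "action_id": "PICK",
            "t": "2018-07-01T08:00"}]
anon_records, conversion = anonymize_records(records)
assert anon_records[0]["object_id"] != "PKG-001"
assert conversion[anon_records[0]["object_id"]] == "PKG-001"
```

The computing servers would then only ever see the hashed tokens, which is consistent with the requirement that the processed data consist exclusively of anonymized data.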
Detailed description of a non-limiting example of the invention
- FIG. 1 represents a simplified schematic view of the hardware architecture of the system according to the invention
- FIG. 2 represents a schematic view of the functional architecture of the data preparation.
- FIG. 3 represents a schematic view of the functional architecture of the data exploitation
- FIG. 4 represents a schematic view of a first example of a neural network for learning
- FIG. 5 represents a schematic view of a second example of a neural network for learning
- FIG. 6 represents a schematic view of a calculation server
- FIG. 7 represents a detailed schematic view of the hardware architecture of the system according to the invention.
- FIG. 8 represents a schematic view of a picking warehouse for the implementation of the invention.
- the system for implementing the invention comprises three main types of resources:
- by "server" is meant a single computer machine, a group of computer machines, or a virtual server (in the "cloud").
- Connected equipment (1 to 3) is typically standard equipment such as a cell phone, tablet or computer connected to a computer network, including the Internet.
- the invention does not require any hardware modification of the connected equipment (1 to 3). Access to services is realized:
- the invention makes it possible to manage a plurality of users, from one or more shared management servers and from one or more shared calculation servers.
- Figure 2 shows a schematic view of the functional architecture.
- the first step is to create an account on the administration server (100).
- the creation of the account (10) can be done by the administrator, who then provides the access information to the user, including an identifier, a password and a link to the application or web page giving access to the service.
- the creation of the account (10) can also be achieved by opening a session between a connected equipment (1) and the administration server (100), during which an identifier associated with a password is created.
- An account can be associated with a plurality of routes, accessible by the same identifier.
- the creation of a new account (10) also controls the allocation of a specific storage space (50) assigned to the identifier corresponding to the account.
- the storage space (50) assigned to an identifier is private and inaccessible to other users.
- this storage space (50) is secure and accessible only by the user having the associated account, excluding access by a third party, including by an administrator.
- a setting is also made allowing or prohibiting certain features or preferences of the user (for example the message language, the display of a logo, or customization of the user interface).
- the identification may be purely declarative, or associated with a secure procedure, for example double or triple identification.
- the next step (11) consists in creating a digital configuration file (51) for a path, comprising a name and parameters defining the structure of the tables that will be transmitted, for example:
- the next step (12) is to save in the dedicated storage space (50) a digital data file (52) comprising a series of time-stamped digital records.
- This recording can be made by transfer from the connected equipment (1), or by designating the computer address where the data to be imported is stored, via a secure session controlled by the administration server (100).
- This functionality is performed via a connector between the user's account in the connected equipment (1) and the user's account on the administration server (100), as well as a third application.
- the input data includes directly transmitted input data (53) or data (54) stored on a remote resource to which the administration server (100) can connect.
- They consist of time-stamped records, for example a table with a structure of the following type:
- the records may further include additional information or data in the form of strings, names, numeric values or time values such as:
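The table itself is not reproduced here; as a purely illustrative sketch of such a time-stamped record table (the column names and values below are assumed, based only on the fields named in the text — object identifier, action identifier, temporal information, plus optional additional columns):

```python
import csv
import io

# Assumed example of the general record structure: one row per
# time-stamped task, with hypothetical additional columns.
raw = """object_id,action_id,timestamp,operator,zone
CMD-1001,RECEIVE,2018-07-01 08:02:11,A12,dock
CMD-1001,PICK,2018-07-01 08:17:45,B03,aisle-4
CMD-1001,PACK,2018-07-01 08:40:02,B03,station-2
"""
records = list(csv.DictReader(io.StringIO(raw)))
assert records[1]["action_id"] == "PICK"
```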
- This data can be provided by automatic processing using sensors on a site, a log file, or the output of ERP software, or by manual entry, and more generally by any automatic or manual system collecting timestamped data relating to events.
- the data (54) can also be derived from connected systems, from the analysis of the signals exchanged during a communication protocol, for example from the IMSI identifier in a GSM communication protocol, or the unique identifiers of connected objects transported in the LORA type communication protocol.
- the input data consists of:
- the step (12) comprises adapting the format of the input data (53, 54) according to the configuration file (51), and recording the input data (53) in converted form (55) on the administration server (100), the input data (54) being kept in its original form on the original resource, to enable on-the-fly conversion in the subsequent learning steps.
- the adaptation consists of standardizing the data structure and, where necessary, converting the format of the dates into a standardized format.
- the conversion mode is saved to allow processing of the transmitted data later.
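The date standardization just described can be sketched as follows; this is a minimal illustration in which the candidate source formats and the ISO 8601 target are assumptions, and the saved source format stands in for the "conversion mode" that is recorded for later processing:

```python
from datetime import datetime

# Assumed list of source date formats that might appear in input data.
CANDIDATE_FORMATS = ["%Y-%m-%d %H:%M:%S", "%d/%m/%Y %H:%M", "%m-%d-%Y %H:%M:%S"]

def normalize_timestamp(value: str):
    """Return (standardized ISO 8601 string, detected source format)."""
    for fmt in CANDIDATE_FORMATS:
        try:
            return datetime.strptime(value, fmt).isoformat(), fmt
        except ValueError:
            continue
    raise ValueError(f"unrecognized timestamp format: {value!r}")

iso, fmt = normalize_timestamp("01/07/2018 08:17")
assert iso == "2018-07-01T08:17:00"
assert fmt == "%d/%m/%Y %H:%M"  # saved so later data is converted identically
```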
- a detailed path configuration step (13) is then performed by analyzing the converted input files (54, 55) to establish a list (56) of the events identified in the converted input data (54, 55).
- This list (56) can be associated with additional information such as the origin of the event, for example "internal” or “external” or scheduling of events according to a preferred sequence.
- This list (56) is also stored in the storage space of the client in question, in the configuration file (51) of the course.
- This solution makes it possible to automate the addition of the additional information (57) to the converted data (54, 55), to record an enriched file (60) in the configuration file (51), and to enrich the file (54) on the fly according to the above addition procedure.
- the on-the-fly conversion and enrichment alternative makes it possible to implement the invention in real time, whereas the alternative of recording converted and enriched files on the administration server makes it possible to carry out an analysis in deferred time, especially for uses where the input data is refreshed in deferred time.
- the steps (12) and/or (14) may further comprise anonymization processing consisting, for example, in hashing the identifier of each event, and in masking the name of the event, for example by substituting a random title for each event.
- Figure 3 shows a schematic view of the functional architecture of data mining and model generation.
- the data thus prepared are used, either in real time or offline, to optionally build a digital model (73) exploited as computer code (80).
- the first step (70) of this exploitation consists in calculating the graph of the paths from the records (54, 55). This calculation is carried out by identifying, for each individual, the transitions between two events according to the timestamp information.
- the result is recorded as a directed numerical graph (71) whose vertices correspond to the events and whose edges correspond to the transitions, with an indication of the direction of travel.
- This digital graph (71) is stored in the storage space associated with the account, and the configuration file (51) is modified to take into account the information relating to the calculation of a new graph.
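The graph calculation of step (70) can be sketched in a few lines; this is a minimal illustration (the record layout and event names are assumed) in which edge weights count transition occurrences, as used later for statistical display:

```python
from collections import defaultdict

def path_graph(records):
    """Build a directed graph of event transitions.

    For each individual, sort its events by timestamp and count every
    transition between consecutive events. Vertices are events; each
    edge (a, b) carries its number of occurrences.
    """
    by_individual = defaultdict(list)
    for obj_id, event, ts in records:
        by_individual[obj_id].append((ts, event))
    edges = defaultdict(int)
    for seq in by_individual.values():
        seq.sort()  # chronological order per individual
        for (_, a), (_, b) in zip(seq, seq[1:]):
            edges[(a, b)] += 1
    return dict(edges)

records = [
    ("CMD-1", "RECEIVE", "2018-07-01T08:02"), ("CMD-1", "PICK", "2018-07-01T08:17"),
    ("CMD-2", "RECEIVE", "2018-07-01T08:05"), ("CMD-2", "PICK", "2018-07-01T08:20"),
    ("CMD-2", "PACK",    "2018-07-01T08:41"),
]
graph = path_graph(records)
assert graph[("RECEIVE", "PICK")] == 2
assert graph[("PICK", "PACK")] == 1
```

The occurrence counts on the edges are what a visualization step could map to edge thickness.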
- the use of a server dedicated to the calculation allows the administration server to schedule requests to the calculation server optimally, and to save the digital graphs in the user accounts asynchronously. In this case, it notifies the user of the availability of the digital graph once processing is finalized.
- the processing can also provide quality indicators of the obtained digital graph.
- the digital graph (71) is used during the visualization step (72), in relation with the configuration file (51) containing the list of events (56), the files (54, 55) and the data files (59, 60), to provide data for a graphic application hosted on the administration server (100) for access in web mode via a browser, or for an application executed on the user's terminal (1 to 3), in order to provide a visual representation.
- This visual representation shows the flows as a function of the digital graph (71), with parameterization means such as filtering or the addition of statistical information (for example, edge thickness depending on the number of occurrences), and the extraction of patterns corresponding to typical paths.
- the processing also makes it possible to extract information on individuals and their paths, to export it in the form of a digital table after filtering the paths, as well as certain numerical or graphical summaries.
- the processing can also generate an alert in the form of an automatically generated message, for example an email or an SMS.
- the user orders the creation of a model exploiting all the historical data (54, 55) and the enriched data (59, 60).
- Since this processing is very heavy, it is performed neither on the administration server (100) nor on the connected equipment (1 to 3), but on a dedicated server (200) having at least one graphics card.
- This server (200) is shared by all users.
- the processing is based on deep learning solutions with a two-layer LSTM (long short-term memory) neural network or recurrent neural networks.
- LSTM long short-term memory
- They are dynamic systems consisting of interconnected units (neurons) interacting nonlinearly, and where there is at least one cycle in the structure.
- the units are connected by arcs (synapses) that carry a weight.
- the output of a neuron is a nonlinear combination of its inputs.
- Their behavior can be studied with the theory of bifurcations, but the complexity of this study increases very rapidly with the number of neurons.
- the treatment is divided into two stages:
- Figure 4 shows a schematic view of a first example of a neural network for learning.
- the learning implements four distinct LSTM (long short-term memory) neural networks, depending on the nature of the data to be processed.
- the first network consists of two layers and applies more particularly to situations where the input data comprises only one item of temporal information, corresponding to the beginning of each event, and where the amount of additional data is limited.
- the first input layer (400) of 100 neurons is common to both sets (410, 420) of the next layer; it performs data learning to provide weighted data to the second layer, which consists of two sets (410, 420) of 100 neurons each.
- the first set (410) is specialized for the prediction of the following events. It provides a quality indicator corresponding to the probability of the predicted event.
- the second set (420) is specialized for predicting the beginning of the next event.
- Figure 5 shows a schematic view of a second example of a neural network for learning.
- the second network is constituted by two LSTM (long short-term memory) layers, and applies more particularly to situations where the input data comprises two items of temporal information, corresponding respectively to the beginning and the end of each event, or where the amount of additional data is large.
- the first input layer (500) of 100 neurons is common to the four sets (510, 520, 530, 540) of the next layer; it performs data learning to provide weighted data to the second layer, which consists of four sets (510, 520, 530, 540) of 100 neurons each.
- the first set (510) is specialized for the prediction of the following events. It provides a quality indicator corresponding to the probability of the predicted event.
- the second set (520) is specialized for predicting the end of the current event.
- the third set (530) is specialized for predicting the beginning of the next event.
- the fourth set (540) is specialized for predicting the end of the next event.
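The shared-first-layer architecture of FIG. 4 can be sketched with the Keras functional API; this is a hedged illustration, not the patent's actual model: the 100-neuron layer sizes come from the text, while the sequence length, the number of event types, the feature dimension, the losses and the layer names are assumptions. One shared LSTM layer feeds two specialized heads, one classifying the next event (its softmax probability serving as the quality indicator) and one regressing the start time of the next event.

```python
from tensorflow import keras
from tensorflow.keras import layers

SEQ_LEN, N_EVENTS, N_FEATURES = 20, 12, 14  # assumed dimensions

inputs = layers.Input(shape=(SEQ_LEN, N_FEATURES))
# Shared 100-neuron input layer, analogous to layer (400).
shared = layers.LSTM(100, return_sequences=True)(inputs)
# Two specialized 100-neuron sets, analogous to sets (410) and (420).
head_event = layers.LSTM(100)(shared)
head_time = layers.LSTM(100)(shared)
next_event = layers.Dense(N_EVENTS, activation="softmax",
                          name="next_event")(head_event)   # which event
next_start = layers.Dense(1, name="next_start")(head_time)  # when it starts

model = keras.Model(inputs, [next_event, next_start])
model.compile(optimizer="adam",
              loss={"next_event": "categorical_crossentropy",
                    "next_start": "mae"})
```

The four-headed network of FIG. 5 would follow the same pattern with two additional heads for the end times of the current and next events.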
- the predictive model is calculated using the KERAS (trade name) library, designed to interface with the TENSORFLOW (trade name) library written in the Python (trade name) language, allowing graphics cards to be used for performing the calculations.
Exploitation of the predictive model
- the user submits a query with parameters that determine an existing or virtual starting point in a process.
- the starting point is represented by a partial path, for example the current path of an individual, a typical partial path, or one of particular interest.
- the processing is iterated recursively to obtain the complete path and, if necessary, the total duration and the time of each new event.
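The recursive iteration can be sketched as follows; here `predict_next` is a hypothetical stand-in for the trained model's one-step prediction (the event sequence and the fixed 15-minute step are invented for the illustration):

```python
# Starting from a partial path, repeatedly predict the next event and
# its start time, append the prediction, and continue until a terminal
# event is reached.
END = "SHIP"

def predict_next(path):
    """Hypothetical stand-in for the model's one-step prediction."""
    order = ["RECEIVE", "PICK", "PACK", END]
    last_event, last_t = path[-1]
    return order[order.index(last_event) + 1], last_t + 15.0  # assumed +15 min

def complete_path(partial):
    path = list(partial)
    while path[-1][0] != END:
        path.append(predict_next(path))
    return path

full = complete_path([("RECEIVE", 0.0)])
assert [e for e, _ in full] == ["RECEIVE", "PICK", "PACK", "SHIP"]
assert full[-1][1] == 45.0  # total predicted duration under these assumptions
```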
- the processing is executed on a computer (200) comprising graphics cards.
- This communication is carried out according to a REST-API-type protocol.
- Figure 6 shows a schematic view of a calculation server.
- the computing server (200) has a hardware architecture comprising a plurality of processing units or processing cores ("multi-core" architectures), and/or multiple nodes.
- Multi-core central processing units (CPUs) or graphics processing unit (GPU) graphics cards are examples of such hardware architectures.
- a GPU graphics card includes a large number of computing processors, typically hundreds; the term "many-core" or massively parallel architecture is then used. Initially dedicated to calculations for processing graphic data, stored in the form of tables of two- or three-dimensional pixels, GPU graphics cards are currently used more generally for any type of scientific computation requiring large computing power and parallel data processing.
- OpenMP Open Multi-Processing
- OpenCL Open Computing Language
- CUDA Compute Unified Device Architecture
- the server (200) includes a server machine (201) and a set of four computing devices (202-205).
- the server machine (201) is adapted to receive the execution instructions and to distribute the execution on the set of computing devices (202-205).
- the computing devices are GPU graphics cards, for example NVIDIA GeForce GTX 1070 cards (trade name).
- the computing devices (202 to 205) are either physically inside the server machine (201), or inside other machines, or computing nodes, accessible either directly or via a communications network.
- the computing devices (202 to 205) are adapted to implement executable tasks transmitted by the server machine (201).
- Each computing device (202 to 205) comprises one or more calculation units (206 to 208).
- Each computing unit (206 to 208) comprises a plurality of processing units (209 to 210), or processing cores, in a typical "multi-core" architecture of 1920 cores.
- the server machine (201) comprises at least one processor and a memory capable of storing data and instructions.
- the server machine (201) is adapted to execute a computer program having code instructions implementing a proposed parallel data processing optimization method.
- a program implementing the proposed parallel data processing optimization method is coded in a programming language known as the Python (trade name) language.
- a particularly suitable programming language is the TENSORFLOW library (trade name) which can be interfaced with the CUDA language (trade name) and the cuDNN library (trade name).
- FIG. 7 represents a detailed schematic view of the hardware architecture of the system according to the invention.
- the system comprises several servers that are common to all users, namely an administration server (100), a graphics card computing server (200) and possibly a data acquisition server (300).
- an administration server 100
- a graphics card computing server 200
- possibly a data acquisition server 300
- each user accesses the system via a connected equipment (1 to 3) communicating with the servers (100, 200, 300) referred to above.
- the administration server (100) manages the accounts and the storage spaces of each of the users, as well as the alert sending application.
- Each user has dedicated storage space for recording:
- the administration server (100) also comprises a memory for recording the computer code of the application that controls the generation of the digital graph, either on the local computer or on a remote virtual machine.
- the administration server (100) comprises means for establishing a secure tunnel with the calculation server (200) for the data exchange necessary for calculating a numerical graph or a predictive model, and more generally for data exchanges with the different computing resources.
- the connected equipment (1 to 3) executes an application (600) directly or via a browser.
- This application (600) does not perform any storage on the connected equipment (1 to 3), all the data being stored on the administration server (100). This allows the user to access the system via any connected equipment (1 to 3), and secures the sensitive data by avoiding permanent recording on unsecured equipment (1 to 3).
- This application (600) communicates with the administration server (100) for:
- the application (600) communicates with the calculation server (200) to retrieve on the fly on the connected equipment (1 to 3) the result of the prediction calculation (next event, travel time, etc.) and transmit the piloting instructions for calculating the prediction model.
- the input data can be constituted by data coming from connected objects, for example the cell phones of passers-by in a public space, such as an airport or train station hall, a shopping center, an urban site, a supermarket or a hospital.
- the data is picked up by beacons receiving the service signals, by extracting the technical data transported by the communication protocol, for example the IMSI identifier, the time stamp and the signal power.
- the system according to the invention makes it possible to automate the analysis of displacements and to make displacement flow predictions.
- the analyzed identifier is, for example, the MAC address for WiFi or LoRaWAN type communications.
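The capture step described above (beacon records carrying an identifier, a time stamp and a signal power, grouped into per-device traces) can be sketched as follows. This is a minimal illustration; the `Probe` record, `group_trajectories` and all field names are assumptions for the example, not part of the patent:

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Probe:
    device_id: str    # MAC address (WiFi/LoRaWAN) or IMSI, depending on the protocol
    timestamp: float  # reception time at the beacon, in seconds
    rssi: int         # received signal power, in dBm
    beacon: str       # identifier of the capturing beacon

def group_trajectories(probes):
    """Group captured records by device identifier and sort them in time,
    yielding the raw displacement trace of each entity through the beacons."""
    traces = defaultdict(list)
    for p in probes:
        traces[p.device_id].append(p)
    for trace in traces.values():
        trace.sort(key=lambda p: p.timestamp)
    return dict(traces)

probes = [
    Probe("aa:bb:cc:01", 10.0, -60, "hall"),
    Probe("aa:bb:cc:01", 42.0, -55, "gate"),
    Probe("dd:ee:ff:02", 12.0, -70, "hall"),
]
traces = group_trajectories(probes)
print([p.beacon for p in traces["aa:bb:cc:01"]])  # ['hall', 'gate']
```

Such per-device traces are the input from which displacement-flow predictions are then computed.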
- the following description relates to a particular variant, implementing a predictive model which, unlike some known predictive models, is not limited to situations where the intermediate steps are constant, with linear evolution laws (which does not correspond to reality), but is adapted to an order-picking system comprising a plurality of intermediate positions, with sometimes complex paths of the articles between the stock and the order shipping station.
- Figure 8 shows an example of the organization of a warehouse for the preparation of orders of articles, drawn from a limited number of references (a few tens to a few hundred), for a large number of orders (tens of thousands), each order containing a few articles or dozens of articles, corresponding to some references, with some articles per reference.
- the purchase orders arrive continuously, as a stream, with a distribution having one or more maxima and significant variability.
- the shipment of prepared orders is carried out in batches, for example to allow the grouping of orders according to the useful volume of the carrier. These groupings are organized in parallel, for example for the loading of several zones or dozens of zones, each corresponding to a delivery sector.
- the general problem is to optimize the organization of the warehouse and the allocated resources, so as to reduce the delay between the arrival of the orders and the loading for shipment of the prepared and regrouped orders, and to reduce the accumulation points, even though the forecast data are imperfect.
- FIG. 8 shows a processing zone for the receipt and preparation of orders.
- This zone comprises a technical room (101) constituting a control station with an operator and a computer (102) connected to the information system of the article supplier.
- the orders are received on the computer (102), connected to a printer (103) for printing, upon receipt of each purchase order, a form comprising:
- the cards (14) are deposited in batches of X, for example in series of 100 cards.
- the installation also comprises a plurality of preparation stations (106 to 108).
- Each preparation station (106 to 108) has N cabinets (61 to 62; 71 to 73; 81 to 82) loaded with a stock of part of the available references. Thus, all the references are distributed over the N preparation stations, in the form of subsets of references, each preparation station (106 to 108) having Lp intermediate storage cabinets for a given article reference.
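The distribution of references over the preparation stations described above can be sketched with a simple partitioning scheme. A minimal illustration, assuming a plain round-robin assignment (the patent does not prescribe a particular partitioning rule); `distribute_references` and the `REF…` names are illustrative:

```python
def distribute_references(references, n_stations):
    """Partition the article references into n_stations subsets, so that each
    preparation station stocks roughly the same number of references."""
    subsets = [[] for _ in range(n_stations)]
    for i, ref in enumerate(references):
        subsets[i % n_stations].append(ref)
    return subsets

# Ten hypothetical references distributed over three preparation stations.
refs = [f"REF{i:03d}" for i in range(10)]
subsets = distribute_references(refs, 3)
print(subsets[0])  # ['REF000', 'REF003', 'REF006', 'REF009']
```

In practice the subsets would be chosen from order statistics (co-occurring references grouped together) rather than round-robin, but the data structure is the same.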
- each preparation station (106 to 108) is associated with one or more reloading stations (116 to 118) for the cabinets.
- a recharging station (116 to 118) can be associated with several preparation stations.
- the operator of the preparation station takes a sheet, identifies the articles relating to it, extracts from the corresponding cabinets the articles referred to, in the quantities mentioned on the sheet, and deposits them in a carton (120) associated with a given card.
- This carton (20) is then moved on a conveyor belt (21) to the next preparation station, then sent to one or more palletizing stations (130) where several cartons for the same delivery area and the same carrier are grouped.
- each of the boxes receives the references of a single preparation station (106 to 108).
- the pallets are then transported by mobile carriages (31, 32) to the loading area in trucks (33).
- the routing of the orders, from the purchase order to the loading area, is modeled as a graph.
- the arcs connecting two vertices correspond to real transitions between two vertices, in accordance with the usual formalism of the generalized stochastic Petri nets.
- the probabilistic laws can be determined either from historical data or fixed by an operator, based on expert data or simulation assumptions.
- the modeling includes an estimate of the distribution (40) of the purchase orders during the day, according to historical data, expert data or simulation hypotheses.
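The graph model described above (vertices for the processing stations, arcs carrying probabilistic routing laws, per-node service-time laws) can be sketched as follows. This is a simplified Monte Carlo sketch, not the generalized-stochastic-Petri-net formalism itself; the node names, routing probabilities and exponential service laws are assumptions for the example:

```python
import random

# Hypothetical warehouse graph: each node maps to a list of (successor, probability) arcs.
GRAPH = {
    "control":     [("prep_1", 0.5), ("prep_2", 0.5)],
    "prep_1":      [("palletizing", 1.0)],
    "prep_2":      [("palletizing", 1.0)],
    "palletizing": [("loading", 1.0)],
    "loading":     [],  # terminal node: truck loading area
}

# Illustrative service-time laws per node (seconds), as would be fitted on
# historical data or fixed by an expert.
SERVICE = {
    "control":     lambda: random.expovariate(1 / 30),
    "prep_1":      lambda: random.expovariate(1 / 120),
    "prep_2":      lambda: random.expovariate(1 / 150),
    "palletizing": lambda: random.expovariate(1 / 60),
}

def propagate_order(start="control"):
    """Draw one random path of an order through the graph, following the arc
    probabilities, and return its cumulative processing time."""
    node, total = start, 0.0
    while GRAPH[node]:
        total += SERVICE[node]()
        r, acc = random.random(), 0.0
        for nxt, p in GRAPH[node]:
            acc += p
            if r <= acc:
                node = nxt
                break
    return total

random.seed(0)
print(f"one sampled traversal time: {propagate_order():.0f} s")
```

Repeating `propagate_order` for orders drawn from the estimated intraday distribution (40) yields the occupancy and travel-time statistics discussed below.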
- Processing according to the invention is then carried out by a probabilistic calculation of the propagation of the purchase orders arriving continuously at the control station (101), to represent the evolution of the various nodes during the day, the occupancy rate of each of the nodes and the travel time of each of the purchase orders.
- This simulation can be performed by a tool such as the Cosmos software (trade name), a "statistical model checker" published by the École Normale Supérieure de Cachan. Its input formalism is that of stochastic Petri nets, and it evaluates formulas of the quantitative logic HASL. This logic, based on linear hybrid automata, makes it possible to describe complex performance indices relating to the execution paths accepted by the automaton. This tool thus provides a way to unify performance evaluation and verification in a very general framework.
- This cumulative time is calculated using the stochastic logic with hybrid automaton (HASL) applied to the aforementioned modeling system.
- the result provided by the Cosmos tool (trade name) is a data file describing the evolution of the graph over time, and in particular the temporal evolution of the number of orders in the queue of each of the stations. This calculation can be visualized by a curve represented in FIG.
- FIG. 10 represents an example of the average value of the cumulative processing time (curve 50) and the 99% confidence intervals (curves 51, 52).
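The mean and 99% interval of FIG. 10 can be estimated empirically from repeated simulation runs. A minimal sketch, assuming the cumulative processing times come from the stochastic simulation described above (here replaced by an illustrative inline random model so the example is self-contained):

```python
import random
import statistics

random.seed(42)
# Hypothetical cumulative processing times (seconds) from 1000 simulation runs;
# in practice these are produced by the stochastic Petri-net simulation.
runs = [sum(random.expovariate(1 / 90) for _ in range(4)) for _ in range(1000)]

mean = statistics.mean(runs)
runs.sort()
low = runs[int(0.005 * len(runs))]       # 0.5th percentile (lower bound, curve 51)
high = runs[int(0.995 * len(runs)) - 1]  # 99.5th percentile (upper bound, curve 52)
print(f"mean={mean:.0f} s, 99% interval=[{low:.0f} s, {high:.0f} s]")
```

Computed per time slot of the day, these three values trace out the mean curve (50) and the two confidence-bound curves (51, 52).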
- the overall objective of the invention is to optimize the physical organization of an order-processing warehouse, in order to anticipate situations preventing compliance with constraints on the processing time of the purchase orders, of which only an approximate future flow is known, and in particular to modify the allocation of future resources optimally, in near real time.
- the historical data is recorded in the form of time-stamped log files, from the data provided by each of the workstations.
- This information may optionally include operators' identifiers in order to improve the relevance of the simulations by taking into account the operators present on each workstation and to optimize the composition of the teams according to the objectives targeted.
- the invention makes it possible to carry out the processing without requiring data-collection means on each of the workstations.
- the invention is applicable to organizations based on manual operators using paper cards, on stations without equipment for real-time information capture and/or traceability.

Treatment of missing data
- an auto-encoder or auto-associator
- An auto-encoder is an artificial neural network used for unsupervised learning of discriminant characteristics.
- the goal of an auto-encoder is to learn a representation (encoding) of a set of data, usually in order to reduce its dimensionality.
- the concept of auto-encoder is used for the learning of generative models.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Business, Economics & Management (AREA)
- General Physics & Mathematics (AREA)
- Human Resources & Organizations (AREA)
- Economics (AREA)
- Data Mining & Analysis (AREA)
- Mathematical Physics (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Software Systems (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Molecular Biology (AREA)
- Life Sciences & Earth Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Strategic Management (AREA)
- Development Economics (AREA)
- Entrepreneurship & Innovation (AREA)
- Tourism & Hospitality (AREA)
- Marketing (AREA)
- Operations Research (AREA)
- Quality & Reliability (AREA)
- General Business, Economics & Management (AREA)
- Game Theory and Decision Science (AREA)
- Educational Administration (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
FR1757550A FR3069934B1 (en) | 2017-08-04 | 2017-08-04 | COMPUTER SYSTEM FOR THE VISUALIZATION OF THE LOGISTIC JOURNEY OF ENTITIES OVER TIME |
FR1851477A FR3078185A1 (en) | 2018-02-21 | 2018-02-21 | METHOD FOR LOGISTIC CHAIN MANAGEMENT |
PCT/FR2018/052014 WO2019025744A1 (en) | 2017-08-04 | 2018-08-03 | Computer system for displaying the logistical path of entities over time |
Publications (1)
Publication Number | Publication Date |
---|---|
EP3662392A1 true EP3662392A1 (en) | 2020-06-10 |
Family
ID=63442719
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP18762370.7A Pending EP3662392A1 (en) | 2017-08-04 | 2018-08-03 | Computer system for displaying the logistical path of entities over time |
Country Status (4)
Country | Link |
---|---|
US (1) | US20200204954A1 (en) |
EP (1) | EP3662392A1 (en) |
CA (1) | CA3071892A1 (en) |
WO (1) | WO2019025744A1 (en) |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8408459B1 (en) * | 2005-01-14 | 2013-04-02 | Brightpoint, Inc. | 4PL system and method |
US20080306783A1 (en) * | 2007-06-05 | 2008-12-11 | Gm Global Technology Operations, Inc. | Modeling a supply chain |
US20130111430A1 (en) * | 2011-10-27 | 2013-05-02 | Yin Wang | Providing goods or services |
US9483334B2 (en) | 2013-01-28 | 2016-11-01 | Rackspace Us, Inc. | Methods and systems of predictive monitoring of objects in a distributed network system |
WO2015058216A1 (en) * | 2013-10-20 | 2015-04-23 | Pneuron Corp. | Event-driven data processing system |
US10162861B2 (en) | 2015-09-04 | 2018-12-25 | Celonis Se | Method for the analysis of processes |
- 2018
- 2018-08-03 US US16/636,157 patent/US20200204954A1/en active Pending
- 2018-08-03 WO PCT/FR2018/052014 patent/WO2019025744A1/en unknown
- 2018-08-03 CA CA3071892A patent/CA3071892A1/en active Pending
- 2018-08-03 EP EP18762370.7A patent/EP3662392A1/en active Pending
Also Published As
Publication number | Publication date |
---|---|
CA3071892A1 (en) | 2019-02-07 |
WO2019025744A1 (en) | 2019-02-07 |
US20200204954A1 (en) | 2020-06-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Yang et al. | Social media data analytics for business decision making system to competitive analysis | |
US11627053B2 (en) | Continuous data sensing of functional states of networked computing devices to determine efficiency metrics for servicing electronic messages asynchronously | |
CN106250987B (en) | A kind of machine learning method, device and big data platform | |
US9213983B2 (en) | Computing system, method, and non-transitory computer-readable medium for providing a multi-tenant knowledge network | |
US20170011418A1 (en) | System and method for account ingestion | |
US20180005274A1 (en) | Management system for high volume data analytics and data ingestion | |
CN110268409B (en) | Novel nonparametric statistical behavior recognition ecosystem for power fraud detection | |
US10069891B2 (en) | Channel accessible single function micro service data collection process for light analytics | |
US20150134401A1 (en) | In-memory end-to-end process of predictive analytics | |
WO2019015631A1 (en) | Method for generating combined features for machine learning samples and system | |
EP3051475A1 (en) | Data analysis system and method to enable integrated view of customer information | |
Deka | Big data predictive and prescriptive analytics | |
EP3639190B1 (en) | Descriptor learning method for the detection and location of objects in a video | |
CN111552728B (en) | Data processing method, system, terminal and storage medium of block chain | |
CN112925911B (en) | Complaint classification method based on multi-modal data and related equipment thereof | |
CN114707914A (en) | Supply and marketing management center platform system based on SaaS framework | |
EP3846091A1 (en) | Method and system for design of a predictive model | |
Issac et al. | Development and deployment of a big data pipeline for field-based high-throughput cotton phenotyping data | |
EP3662392A1 (en) | Computer system for displaying the logistical path of entities over time | |
Kumar | Big data analytics: an emerging technology | |
Ivkovic et al. | HyperETL: Facilitating Data Analysis of Private Blockchain | |
FR3104780A1 (en) | PROCESS FOR THE AUTOMATIC PRODUCTION OF A DIGITAL MULTIMEDIA REPORT OF AN EXPERTISE OF A CLAIM | |
Zillner et al. | D2. 7 Annual report on opportunities | |
FR3069934A1 (en) | COMPUTER SYSTEM FOR VISUALIZATION OF LOGISTIC COURSE OF ENTITIES IN TIME | |
US20230123421A1 (en) | Capturing Ordinal Historical Dependence in Graphical Event Models with Tree Representations |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: UNKNOWN |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
|
17P | Request for examination filed |
Effective date: 20200203 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
AX | Request for extension of the european patent |
Extension state: BA ME |
|
DAV | Request for validation of the european patent (deleted) | ||
DAX | Request for extension of the european patent (deleted) | ||
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
17Q | First examination report despatched |
Effective date: 20220323 |