CN104850593A - Big data-based emergency supplies data storage and circulation monitoring method - Google Patents
- Publication number: CN104850593A
- Application number: CN201510205821.3A
- Authority: CN (China)
- Prior art keywords: goods, materials, record, data, information
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The invention discloses a big data-based emergency supplies data storage and circulation monitoring method. The method comprises the following steps: setting up a task node, which organizes the computing nodes of the logistics (physical distribution) network into a computation ring; the task node, according to a received task request, extracting the corresponding warehouse-in/out supplies records from the database and separating the warehouse objects, which serve as analysis objects, from the data set; the task node hashing the computing nodes in the ring, mapping the hashed nodes to the analysis objects, and assigning an analysis-object split set to each computing node; each computing node extracting the analysis objects it needs to process from the split set it received, and obtaining the supplies records of the other warehouses in the logistics network that have a supplies-transfer relation with those analysis objects; and performing an anomaly check on the warehouse supplies records according to the received task object and returning the check result to the task node. With this method, massive emergency supplies data can be analyzed and monitored rapidly.
Description
Technical field
The present invention relates to the field of logistics big data analysis and its applications, and in particular to a big data-based emergency supplies data storage and circulation monitoring method. To meet the information management and monitoring requirements of emergency events and emergency logistics, the invention uses a Key-Value column store (HBase) as the storage environment, realizes fast analysis of peer-to-peer (P2P) relations in a hierarchical network on top of a parallel architecture, and provides high-performance identification and analysis of path, shipping/receiving and transport anomalies in a dynamic logistics network. The invention has practical value in traffic, aviation, disaster relief, environmental protection, crowd-flow monitoring, logistics tracing and similar fields.
Background art
Emergency supplies are the materials required to handle public emergencies such as major disasters, public health emergencies, public safety incidents and military conflicts. They mainly fall into the following categories:
● Life support: supplies for the life and health care of the population in the disaster area, including drinking water, food, medicine, tents, etc.;
● Rescue equipment: supplies used in rescue work in the disaster area, mainly rescue engineering machinery, emergency communication equipment, etc.
Emergency supplies are the material basis of emergency rescue and disposal. In an earthquake, for example, large quantities of emergency resources of many categories must be assembled and then transported and distributed under unified emergency command. Emergency supplies are highly purpose-specific: medicine for the wounded, food and drinking water for resettled residents, large equipment for rescue and relief work, and so on. It is therefore necessary to build a high-performance logistics management and information service system for emergencies. Such a system enables unified management of massive emergency supplies data, high-performance queries, tracing and real-time anomaly detection, and lays the foundation for the smooth conduct of emergency disposal work. Emergency logistics management rests on two basic elements: the supplies objects and the logistics network. In an emergency these two elements show the following characteristics:
1. Batch organization of supplies objects: an emergency usually requires assembling large quantities of similar supplies, which are transported and distributed in batches; individual objects are handled uniformly within their batch. Batch organization is the basis of circulation-information management, tracking and anomaly analysis of emergency supplies;
2. Hierarchical circulation system and tree-like logistics network: in line with national emergency management requirements, a province-city-county hierarchical collection and distribution system is normally formed in the disaster area. During circulation, supplies are transported in batches and groups according to emergency demand, forming a hierarchical, multi-flow circulation organization.
Based on the above characteristics, monitoring of emergency supplies circulation mainly covers three aspects:
1. Monitoring of the distribution network: monitoring the hierarchical relations between circulation nodes in the emergency supplies distribution network. According to management and dispatching requirements, supplies flow downward level by level from first-level circulation nodes; circulation between nodes of the same level, across levels, or between different administrative regions is not allowed. In practice, because of the confusion of the emergency and poor information flow, supplies may be transferred to the wrong place, so the flow paths of supplies must be checked network-wide to eliminate circulation errors.
2. Monitoring of transport anomalies: logistics management must track the transport of every batch and every group of supplies, and promptly discover losses of supplies or failed transport caused by various reasons;
3. Batch-based tracking, tracing and anomaly analysis: supplies are distributed in batch units and split into groups that are delivered to different nodes. During circulation, anomalies in the emergency logistics network can be found by comparing, per node, the quantities distributed and the quantities that arrive; nodes where supplies are detained or mishandled can then be traced quickly. Flow statistics over time can also be used to analyze the efficiency of logistics work across the whole network and to support decision optimization.
Orderly storage, management, query and statistics of data are the basis of emergency logistics management. Emergencies are strongly event-correlated: when a disaster occurs, a large burst of data is generated, and because emergency supplies are of many kinds and large volume, the data increment is usually substantial. Traditional relational databases, constrained by their consistency guarantees and transaction-processing mechanisms, cannot deliver the response performance required by such bursty, high-increment emergency logistics data. For this reason the present invention provides an emergency logistics monitoring technique based on big data.
Summary of the invention
To resolve the contradiction between current technical methods in emergency supplies management and the new demands of its development, the object of the present invention is to provide a big data-based emergency supplies data storage and circulation monitoring method. The method provides high-performance monitoring of various kinds of emergency supplies in a big data setting, monitors emergency supplies circulation under various constraints, and promptly discovers anomalies in the circulation process, thereby providing effective technical support for improving emergency supplies management and building an emergency support system.
The technical scheme of the present invention is as follows:
A big data-based emergency supplies data storage method, characterized in that a circulation node stores the emergency supplies records it reads or receives in a Key-Value database Store; wherein the hash of the warehouse identifier StoreID of this database is the key of the record, and all warehouse-in/out supplies records and inventory (stocktaking) information of this warehouse form the value corresponding to that key;
The data storage model of the Key-Value database Store is: Store = {StoreID, resColumes}, resColumes = {rCol_i | i = 1, 2, …, n}; wherein resColumes is the supplies-record column family of this database, made up of n columns rCol, and supplies of class i are recorded in column rCol_i;
For any column, rCol = {colName, rColArray}, rColArray = {rColCell_j | j = 1, 2, …, m}; wherein colName is the name of the column and corresponds to the supplies batch identifier resourceID; rColArray is the set of column clusters; rColCell_j is the j-th column cluster in rColArray and stores the warehouse-in/out records of the supplies of the current column rCol within one time period; m is the total number of column clusters rColCell.
Further, a column-cluster set rColArray contains two parts of information: 1) warehouse-in/out supplies records organized by time-flow relation; 2) inventory information recording the storage situation of the supplies in said database.
Further, the column-cluster set rColArray is created by time-period division: for each period a column cluster is created or the column is extended, forming the column-cluster set. For the period from time t_{k-1} to t_k, the column cluster contains a supplies record cluster {Lname, values} holding the warehouse-in/out records of that period, and an inventory cluster holding the inventory information of that period;
values = {(from, to, Number, time, objectID)_p | p = 1, 2, …, s}, wherein Lname is the name of the supplies record cluster of this period, values is the warehouse-in/out record information of this batch of supplies from time t_{k-1} to t_k, from is the supplies source information, to is the supplies destination information, Number is the warehouse-in/out quantity, Time is the time of the warehouse-in/out operation, and objectID is the supplies identifier of the operation.
Further, Lname is formed by appending a timestamp to the identifier of the corresponding batch of supplies.
Further, the inventory information consists of SName, the inventory quantity at time t_{k-1} and the inventory quantity at time t_k; wherein SName is the name of the inventory record cluster and corresponds to the supplies record cluster of the same period, and the two quantities are the stock of this class of supplies stored at time t_{k-1} and at time t_k respectively.
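For illustration only, the following minimal sketch models the storage structure described above in plain Python; the class and field names (e.g. StoreModel, RecordCluster, num_prev/num_curr) are ours and not part of the patent, and a real deployment would map this layout onto HBase column families rather than in-memory dictionaries.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

# One warehouse-in/out entry: (from, to, Number, time, objectID) as defined above.
Entry = Tuple[str, str, int, float, str]

@dataclass
class RecordCluster:
    """Column cluster for one time period (t_{k-1}, t_k]."""
    lname: str                       # Lname = resourceID plus period timestamps
    values: List[Entry] = field(default_factory=list)   # warehouse-in/out records
    sname: str = ""                  # SName = Lname + "::Storage"
    num_prev: int = 0                # inventory at t_{k-1}  (field name is ours)
    num_curr: int = 0                # inventory at t_k      (field name is ours)

@dataclass
class Column:
    """One column rCol: all records of one supplies batch (colName = resourceID)."""
    col_name: str
    clusters: Dict[str, RecordCluster] = field(default_factory=dict)  # rColArray

@dataclass
class StoreModel:
    """Key-Value model Store = {StoreID, resColumes}."""
    store_id: str
    res_columns: Dict[str, Column] = field(default_factory=dict)     # resColumes

def cluster_name(resource_id: str, t_prev: int, t_curr: int) -> str:
    # def(Lname) = resourceID ∪ t_{k-1} ∪ t_k
    return f"{resource_id}@{t_prev}@{t_curr}"

# Tiny usage example: one warehouse, one batch, one hour-long period.
store = StoreModel(store_id="warehouse-a")
col = store.res_columns.setdefault("batch-001", Column(col_name="batch-001"))
lname = cluster_name("batch-001", 1000, 4600)
cl = col.clusters.setdefault(lname, RecordCluster(lname=lname, sname=lname + "::Storage"))
cl.values.append(("warehouse-b", "warehouse-a", 200, 1200.0, "obj-17"))  # inbound record
cl.num_curr = cl.num_prev + 200
```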
A big data-based emergency supplies circulation monitoring method, with the following steps:
1) a task node is set up in the logistics network, and the task node organizes the computing nodes of the logistics network into a ring;
2) according to the received task request, the task node extracts the corresponding warehouse-in/out supplies records from the database and separates the warehouse objects, which serve as analysis objects, from the data set;
3) the task node hashes the computing nodes in the ring, maps the hashed nodes to the analysis objects, and assigns an analysis-object split set to each computing node;
4) the task node creates a task object according to the task request and sends it, together with the analysis-object split set, to the corresponding computing node; the task object caches the basic task information and the mapping information, monitors the task execution of each computing node, and collects the summarized results back to the task node;
5) each computing node extracts the analysis objects it needs to process from the received analysis-object split set, then obtains the supplies records of the other warehouses in the logistics network that have a supplies-transfer relation with those analysis objects; it then performs an anomaly check on these warehouse supplies records according to the received task object and sends the check result to the task node.
Further, one method of anomaly checking is (see the worked sketch after this list):
71) the computing node extracts from the warehouse records the warehouse-in/out records of each circulation node; then, for each warehouse-in/out record, it obtains the warehouse-in/out records that have a transfer relation with it, forming a transfer path, and checks whether this transfer path complies with the emergency management rules; if it is abnormal, it is recorded as a path anomaly;
72) the computing node obtains from the task node the dispatch and receipt information of the warehouse related by the transfer in the current record, and checks whether there is an unmatched dispatch/receipt; if so, it is recorded as a dispatch/receipt anomaly;
73) the time difference between the two records A and B is compared with the time difference required by management; if it exceeds the required time difference, it is recorded as a transport anomaly; here record A is the record of supplies p entering warehouse a that also identifies the supplies as coming from warehouse b, and record B is the record of supplies p leaving warehouse b that also identifies warehouse a as the destination.
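As a worked illustration of checks 72) and 73), the sketch below pairs an in-record at warehouse a with the matching out-record at warehouse b and flags a dispatch/receipt mismatch or a transport delay; the record layout and the function names are our own assumptions, not part of the patent.

```python
from typing import NamedTuple, Iterable, Optional

class IORecord(NamedTuple):
    direction: str   # "in" or "out"
    warehouse: str   # warehouse where the record was written
    src: str         # 'from' field
    dst: str         # 'to' field
    object_id: str   # supplies identifier
    time: float      # operation time (seconds)

def find_matching_out(rec_a: IORecord, out_records: Iterable[IORecord]) -> Optional[IORecord]:
    """For an in-record at warehouse a (coming from b), find the out-record at b sent to a."""
    for r in out_records:
        if (r.direction == "out" and r.warehouse == rec_a.src
                and r.dst == rec_a.warehouse and r.object_id == rec_a.object_id):
            return r
    return None

def check_pair(rec_a: IORecord, out_records: Iterable[IORecord], max_delay: float) -> str:
    rec_b = find_matching_out(rec_a, out_records)
    if rec_b is None:
        return "dispatch/receipt anomaly"          # check 72): no corresponding dispatch
    if rec_a.time - rec_b.time > max_delay:
        return "transport anomaly"                 # check 73): delay exceeds requirement
    return "ok"

# Example: supplies p left warehouse b at t=0 and entered warehouse a at t=9000 s.
rec_a = IORecord("in", "a", "b", "a", "p", 9000.0)
outs = [IORecord("out", "b", "b", "a", "p", 0.0)]
print(check_pair(rec_a, outs, max_delay=7200.0))   # -> "transport anomaly"
```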
Further, another method of anomaly checking is:
81) the computing node obtains the current analysis object from the assigned analysis-object set keys, and all warehouse-in/out supplies records keyLogs of the current analysis object;
82) the computing node obtains one warehouse-in/out record log from the current keyLogs; if log is a warehouse-in record, it obtains the supplies source information from log and checks the subordination relation between the source warehouse and the current warehouse; if the emergency management rules are violated, log is recorded as a path anomaly; if log is a warehouse-out record, it obtains the supplies destination information from log and checks the subordination relation between the destination warehouse and the current warehouse; if the emergency management rules are violated, log is recorded as a path anomaly;
83) if log complies with the emergency management rules, the computing node checks whether the dispatch/receipt time difference of log satisfies the configured condition; if not, log is recorded as a transport anomaly.
Further, the method by which the task node extracts the corresponding warehouse-in/out supplies records from the database according to the received task request is: the task node obtains the column of the corresponding name from the database according to the supplies batch information resourceID in the task request; if such a column exists, it determines from the time constraint t_1 in the task request the period boundary (t_k, t_{k+1}), and from the time constraint t_2 the period boundary (t_m, t_{m+1}); it then obtains the column-cluster names corresponding to t_1 and t_2 respectively, and forms a selection from the data records of these two column clusters and of all column clusters of the periods between them; it then filters the selection record by record, deleting the data whose time does not satisfy the constraints t_1, t_2; finally the remaining data in the selection are taken as the corresponding warehouse-in/out supplies records.
The present invention uses a column store based on the Key-Value model (HBase) for storage, adopts a parallel computing architecture for real-time monitoring and anomaly identification, and supports P2P relation analysis in the logistics network by separating the data set from the object set. On this basis it provides a high-performance anomaly identification and circulation monitoring tool for emergency command, meets the basic demands of emergency disposal work, and achieves fast analysis and monitoring. The steps include:
1) High-performance management and organization of logistics information based on a Key-Value column store:
1-1) Considering the characteristics of circulation information and the weaknesses of Key-Value stores in storage expansion, multi-valued queries and index efficiency, an increment-adaptive logistics information management model is adopted to realize unified management and organization of logistics information;
1-2) in this management model, columns are managed by supplies identifier, giving fast indexing of supplies categories;
1-3) in this management model, logistics information is automatically segmented by time period (e.g. one hour); all circulation information of each time slice forms one pair of independent clusters, a warehouse-in/out record cluster and an inventory cluster, whose names are formed from the time period and the supplies information, allowing fast retrieval;
1-4) during data increment, new column clusters are automatically formed by time period and the inventory information of the corresponding period is updated;
1-5) on the basis of this storage structure, common constrained queries are supported through the names of columns and column clusters in circulation management work, overcoming the corresponding weakness of Key-Value column stores (HBase);
2) A parallel architecture is established for the data management and analysis needs of logistics network monitoring, and high-performance analysis of P2P relations is realized on this architecture. The architecture has two layers: the upper-layer task node performs data splitting, task scheduling and cache management; the lower-layer computing nodes perform data processing;
2-1) the parallel architecture accepts an analysis request;
2-2) the task node extracts from the underlying database the warehouse-in/out supplies records of all the warehouses concerned by the task request, and separates the warehouse objects (each warehouse object corresponding to a warehouse identifier StoreID) from the data set as analysis objects; each database is an analysis object, and the data set is the set of data records stored in the database; every circulation node stores its record data in this underlying database;
2-3) the task node organizes the lower-layer computing nodes into a ring, hashes them, and maps them to the analysis-object set, forming the split of analysis objects;
2-4) the task node creates a task object according to the task request and sends it, together with its split set of analysis objects, to each computing node; at the same time it creates the data-set cache; the task object mainly caches the basic task information and mapping information, monitors the task execution of each computing node and collects the summarized results; this task object is sent to the computing nodes, is cached during task execution, and communicates with the task node (a small sketch of the task object follows this list);
2-5) a computing node receives the task dispatch request and starts the computational analysis;
2-6) during computation, the computing node extracts the analysis objects in its split set and obtains, through the cache of the task node, the record data of the other warehouses in the logistics network that have a supplies-transfer relation with those analysis objects;
2-7) according to the analysis request, the computing node completes the anomaly check.
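A minimal sketch of what such a task object might hold, in Python; the class name and fields are our own reading of step 2-4), not the patent's literal data structure.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Any

@dataclass
class TaskObject:
    """Carries task info to the computing nodes and reports results back (step 2-4)."""
    task_id: str
    resource_id: str                       # supplies batch under analysis
    time_window: tuple                     # (t1, t2) constraint from the request
    node_mapping: Dict[str, List[str]]     # computing node id -> assigned warehouse ids
    results: Dict[str, Any] = field(default_factory=dict)   # node id -> anomaly summary

    def report(self, node_id: str, anomalies: List[str]) -> None:
        # Each computing node calls this to hand its anomaly summary back.
        self.results[node_id] = anomalies

    def finished(self) -> bool:
        # Task is complete once every mapped node has reported.
        return set(self.results) == set(self.node_mapping)

task = TaskObject("job-1", "batch-001", (1000, 4600),
                  {"node-1": ["warehouse-a"], "node-2": ["warehouse-b"]})
task.report("node-1", ["path anomaly at warehouse-a"])
print(task.finished())   # False until node-2 also reports
```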
3) In the analysis process, each circulation node is analyzed in turn;
4) the computing node extracts the warehouse-in/out information of each circulation node in its local analysis subset (i.e. the set of warehouse objects assigned to this computing node);
4-1) for each warehouse-in/out record, the node obtains the information of the warehouses having a transfer relation with it, forming a transfer path, and checks whether this path between them and the current warehouse complies with the emergency management rules (the rules are created beforehand and kept at the computing node); if it is abnormal, it is recorded as a path anomaly;
4-2) from the task-node cache, the node obtains the dispatch and receipt information of the warehouse related by the transfer in the current record, and checks whether there is an unmatched dispatch/receipt; if so, it is recorded as a dispatch/receipt anomaly;
4-3) the time difference of the two records A and B is compared with the time difference required by management; if this transfer exceeds the required time difference, it is recorded as a transport anomaly; here record A is the record of supplies p entering warehouse a that also identifies the supplies as coming from warehouse b, and record B is the record of supplies p leaving warehouse b that also identifies warehouse a as the destination;
4-4) when the checks of all warehouse-in/out records are completed, all anomaly information of the current warehouse is aggregated;
4-5) when the checks of all analysis objects are completed, the partial analysis result is formed;
5) all analysis results are aggregated, the analysis of the logistics network is completed, and all identified anomalies are returned.
Through this process, the present invention addresses the technical needs of emergency supplies circulation monitoring in today's big data environment: a key-value database is used to manage and organize streaming logistics information, and, on the basis of layered scheduling of the computation, parallel nodes perform high-performance P2P relation analysis that meets the requirements of emergency supplies circulation monitoring.
On the principle of being data-driven, the invention breaks through the limits of the traditional storage-then-processing architecture built on relational databases and achieves fast analysis of massive streaming logistics data.
Compared with the prior art, the positive results of the present invention are:
1. The invention uses a key-value database for high-performance management of massive streaming data. To address the storage-expansion and low index efficiency problems of key-value databases, an adaptive logistics information storage and management model is provided. The model organizes columns and column clusters by supplies identifier and time-interval identifier, manages streaming logistics records in an orderly way, and meets the query and response performance requirements of the analysis;
2. The invention is based on a parallel architecture: multi-node cooperative computation realizes fast P2P relation analysis and real-time monitoring of the emergency supplies circulation network in a big data environment; in the analysis process, the computing data set is separated from the analysis-object set and the analysis set is split by hash mapping, achieving high-performance P2P relation computation in a parallel environment;
3. On this big data basis, the invention proposes an emergency logistics anomaly detection and identification method that performs high-performance analysis of path anomalies, transport anomalies and transfer anomalies in a hierarchical network, meeting the requirements of emergency logistics management and supervision;
4. Experiments on big data-based emergency supplies circulation monitoring show that the method improves overall computational performance by 30% while keeping sufficient computational accuracy, effectively meeting the demands of monitoring the circulation of massive emergency supplies;
In summary, the present invention provides high-performance monitoring of the circulation behaviour of emergency supplies and a core technique for managing massive logistics data in a big data setting. It uses a parallel architecture to improve data processing capacity and has practical value in the logistics field.
Brief description of the drawings
Fig. 1: overall technical architecture of the invention;
Fig. 2: adaptive logistics information storage and management model;
Fig. 3: logistics data increment flow;
Fig. 4: query operation based on time constraints;
Fig. 5: parallel computation architecture supporting P2P relation analysis;
Fig. 6: task-node parallel computation flow chart supporting P2P relation analysis;
Fig. 7: computing-node parallel computation flow chart supporting P2P relation analysis;
Fig. 8: flow chart of high-performance monitoring analysis of logistics big data.
Detailed description of the embodiments
To make the above features and advantages of the present invention clearer, embodiments are described in detail below with reference to the accompanying drawings.
Emergency supplies circulation monitoring focuses on two points: 1) monitoring storage and transfer anomalies: checking whether mis-distribution or mis-transfer exists in the current logistics network, and whether the storage situation of supplies is abnormal; 2) checking transport anomalies in the current logistics transfers: whether there are path confusions, failed or lost transfers, or sluggish transport; this provides a basis for command, dispatching and decision-making.
Considering the highly incremental and highly concurrent data in an emergency, the present invention builds the basic data storage environment on HBase. To address the storage-expansion and inefficient data-organization problems of the HBase key-value storage model, the invention realizes adaptive organization of streaming warehouse-in/out data by extending column clusters, supporting network-wide logistics anomaly checking.
For high-performance computation over massive data, the invention designs a parallel architecture. Unlike batch-oriented MapReduce, this architecture separates the data set from the analysis-object set and organizes parallel computation by splitting the analysis-object set; during computation, memory sharing enables high-performance analysis of massive data sets. The architecture has two levels: the upper level consists of one task node responsible for extracting the data set, splitting and scheduling tasks, and managing the cache; the multiple computing nodes of the lower level compute according to the task content assigned to them, analyze and identify anomalies in the network, and the results are merged into the analysis conclusion.
On the basis of this parallel architecture, the invention establishes a technique for fast detection and identification of emergency logistics anomalies. It detects stock, distribution and transfer anomalies of emergency logistics and also analyzes transport efficiency, meeting the basic requirements of logistics monitoring in emergency disposal work.
The specific technical scheme of the present invention is shown in Fig. 1.
1. Adaptive logistics information management and organization model
Considering the highly incremental and highly concurrent data of emergency activities, the present invention manages the basic data storage with HBase. HBase is a NoSQL database based on the Key-Value model, designed according to the CAP and BASE principles, with good basic I/O performance and scalability, and thus relatively suitable for managing big data in emergency scenarios. However, a Key-Value database is essentially an I/O store indexed by hash, and only performs well for queries on the main key; for the multi-valued and constrained queries needed by emergency logistics monitoring its performance is poor. To address this, the invention designs an emergency logistics information storage model on top of Key-Value storage that satisfies the demands of the upper analysis layer while still exploiting HBase performance.
As shown in Fig. 2, the model is defined as follows:
Store = {StoreID, resColumes}, resColumes = {rCol_i | i = 1, 2, …, n}
where StoreID is the warehouse identifier and serves as the main key; all warehouse-in/out records and inventory information of each warehouse are recorded in the row corresponding to this main key.
resColumes is the supplies-information column family of this warehouse, made up of a group of columns rCol; supplies of class i are all recorded in column rCol_i:
rCol = {colName, rColArray}, rColArray = {rColCell_j | j = 1, 2, …, m}
where colName is the name of the column and corresponds to the identifier resourceID of a specific batch of supplies; in a query the column storing the records in the current warehouse can be located directly by the batch identifier. rColCell_j is the j-th column cluster in the column-cluster set rColArray and stores the warehouse-in/out information of the supplies of the current column rCol within one time period; m is the total number of column clusters in the set.
rColArray is the column-cluster set. The supplies information of a warehouse consists of two parts: 1) the warehouse-in/out log information, which organizes the basic in/out log by time-flow relation; 2) the inventory information, which records the storage situation of these supplies in this warehouse at given times. During an emergency, logistics work is heavy and the in/out data of a warehouse expands rapidly; at the same time, because the necessary time-constrained query support is lacking, retrieving or querying the in/out information of a specific time period is difficult. For this situation the present invention adopts a time-based, overflow-driven adaptive cluster organization: the emergency is divided into intervals of a fixed time period (e.g. one hour) and a corresponding column cluster is created for each interval.
For the period between time t_{k-1} and t_k, the column cluster consists of a logistics information record cluster (i.e. the warehouse-in/out log) and an inventory record cluster:
values = {(from, to, Number, time, objectID)_p | p = 1, 2, …, s}
where Lname is the name of the logistics information record cluster of this period, formed by appending timestamps to the identifier of this batch of supplies: def(Lname) = resourceID ∪ t_{k-1} ∪ t_k;
values is the warehouse-in/out information of this batch of supplies in this period and forms a time series;
from is the supplies source information: for a warehouse-in operation it records the identifier of the dispatching warehouse that transferred the supplies to this warehouse; for a warehouse-out operation it records the identifier of this warehouse;
to is the supplies destination information: for a warehouse-in operation it records the identifier of this warehouse; for a warehouse-out operation it records the identifier of the receiving warehouse;
Number is the warehouse-in/out quantity;
Time is the time of this warehouse-in/out operation;
objectID is the supplies identifier of this operation;
SName is the name of the inventory record cluster and corresponds to the logistics information record cluster of the same period:
def(SName) = def(Lname) ∪ "::Storage"
so that the inventory record cluster and the logistics information cluster can be located quickly by name matching.
The inventory record cluster also stores the inventory quantity of this class of supplies in this warehouse at time t_{k-1} and the inventory quantity at time t_k.
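As an assumed illustration of the naming rules def(Lname) and def(SName), the helper below builds the two cluster names for one batch and one period; the "@" separator follows the query examples later in the text (resourceID@t_k@t_{k+1}) and is our reading, not a literal requirement of the patent.

```python
def lname(resource_id: str, t_prev: int, t_curr: int) -> str:
    """def(Lname) = resourceID ∪ t_{k-1} ∪ t_k, rendered here with '@' separators."""
    return f"{resource_id}@{t_prev}@{t_curr}"

def sname(resource_id: str, t_prev: int, t_curr: int) -> str:
    """def(SName) = def(Lname) ∪ '::Storage'."""
    return lname(resource_id, t_prev, t_curr) + "::Storage"

# One-hour period (3600 s) starting at t = 7200 for batch "batch-001".
print(lname("batch-001", 7200, 10800))   # batch-001@7200@10800
print(sname("batch-001", 7200, 10800))   # batch-001@7200@10800::Storage
```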
In the emergency supplies logistics data increment process, the above model realizes the Key-Value storage organization. The process is shown in Fig. 3, and its basic flow is as follows:
1. receive the warehouse-in/out message Msg;
2. obtain from Msg the warehouse identifier of the current operation;
3. check whether the main key corresponding to this warehouse identifier exists in the current data table; if it does, go to step 5, otherwise go to step 4;
4. create a new warehouse record in the data table with this warehouse identifier as the main key;
5. obtain the supplies object information from Msg and extract the corresponding supplies batch information resourceID;
6. check whether a column named after this resourceID exists in the current data table; if it does, go to step 8, otherwise go to step 7;
7. create a new column in the current data table, named after resourceID;
8. obtain the operation time t from Msg and determine from t the period boundaries t_{k-1} and t_k;
9. check whether the column corresponding to resourceID in the current warehouse record already has the logistics information cluster of the period (t_{k-1}, t_k); if it does, go to step 10, otherwise go to step 11;
10. record the current Msg in the column cluster of the current period and finish this data increment;
11. create the logistics information cluster and the inventory record cluster corresponding to the period (t_{k-1}, t_k);
12. update the inventory data at time t_{k-1} in the cluster of period (t_{k-2}, t_{k-1}), and the inventory data of the period (t_{k-1}, t_k);
13. go to step 10.
The above operations realize time-based adaptive data organization in the increment process, enabling fast queries based on time constraints (a sketch of the increment flow follows).
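The following compact sketch walks through the increment flow above (steps 1-13) against an in-memory stand-in for the data table; the message layout and helper names are our assumptions, and an HBase-backed implementation would replace the dictionary operations with table puts.

```python
from collections import defaultdict

PERIOD = 3600  # segmentation interval in seconds (the patent uses one hour)

# table[store_id][resource_id][cluster_name] -> {"values": [...], "storage": {...}}
table = defaultdict(lambda: defaultdict(dict))

def period_bounds(t: float) -> tuple:
    """Step 8: map an operation time t to its period boundaries (t_{k-1}, t_k)."""
    t_prev = int(t // PERIOD) * PERIOD
    return t_prev, t_prev + PERIOD

def increment(msg: dict) -> None:
    """Steps 1-13 of the data increment flow (simplified, in-memory)."""
    store_id = msg["store_id"]                  # steps 2-4: row keyed by warehouse id
    resource_id = msg["resource_id"]            # steps 5-7: column keyed by batch id
    column = table[store_id][resource_id]
    t_prev, t_curr = period_bounds(msg["time"])
    name = f"{resource_id}@{t_prev}@{t_curr}"   # cluster name for this period
    if name not in column:                      # steps 9, 11-12: create period cluster
        prev_name = f"{resource_id}@{t_prev - PERIOD}@{t_prev}"
        carry = column.get(prev_name, {}).get("storage", {}).get("num_curr", 0)
        column[name] = {"values": [], "storage": {"num_prev": carry, "num_curr": carry}}
    cluster = column[name]
    cluster["values"].append(                   # step 10: record the message
        (msg["from"], msg["to"], msg["number"], msg["time"], msg["object_id"]))
    delta = msg["number"] if msg["to"] == store_id else -msg["number"]
    cluster["storage"]["num_curr"] += delta     # keep the period-end inventory current

increment({"store_id": "warehouse-a", "resource_id": "batch-001", "time": 7300.0,
           "from": "warehouse-b", "to": "warehouse-a", "number": 200, "object_id": "p"})
print(dict(table["warehouse-a"]))
```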
The model names columns after resourceID and names column clusters after the supplies identifier and the time boundaries; this organization makes time-constrained queries fast. The query process is shown in Fig. 4 and proceeds as follows (a sketch follows the flow):
1. the task node receives a query request req submitted by the user; the request contains the supplies batch information resourceID and the time constraints t_1, t_2 with t_1 < t_2;
2. the column with the corresponding name is obtained from the current data table according to resourceID;
3. if the column corresponding to resourceID exists in the current data table, go to step 5, otherwise go to step 4;
4. return a NULL result and terminate the current query;
5. determine from the time constraint t_1 in req its period boundary (t_k, t_{k+1}), where t_k ≤ t_1 ≤ t_{k+1};
6. determine from the time constraint t_2 in req its period boundary (t_m, t_{m+1}), where t_m ≤ t_2 ≤ t_{m+1};
7. obtain the column-cluster name corresponding to t_1: resourceID@t_k@t_{k+1};
8. obtain the column-cluster name corresponding to t_2: resourceID@t_m@t_{m+1};
9. obtain the data records of the two column clusters from steps 7 and 8 and of all period clusters between them, forming a selection;
10. filter the selection record by record, deleting the data whose time does not satisfy the constraints t_1, t_2;
11. return the remaining data in the selection as the query result.
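A minimal sketch of this time-constrained query over the in-memory layout used in the increment sketch above; the table structure and helper names are ours, and on HBase the cluster-name scan would become a column-qualifier range scan.

```python
PERIOD = 3600  # same segmentation interval as in the increment sketch

def period_start(t: float) -> int:
    return int(t // PERIOD) * PERIOD

def query(table: dict, store_id: str, resource_id: str, t1: float, t2: float) -> list:
    """Steps 1-11: locate the clusters covering [t1, t2], then filter by exact time."""
    column = table.get(store_id, {}).get(resource_id)
    if column is None:                      # steps 3-4: no such column -> NULL result
        return []
    start, end = period_start(t1), period_start(t2)    # steps 5-6: period boundaries
    selection = []
    for t_prev in range(start, end + PERIOD, PERIOD):  # steps 7-9: clusters in range
        name = f"{resource_id}@{t_prev}@{t_prev + PERIOD}"
        selection.extend(column.get(name, {}).get("values", []))
    # step 10: drop records outside the exact [t1, t2] window
    return [rec for rec in selection if t1 <= rec[3] <= t2]

# Example against a hand-built table with one record at t = 7300.
table = {"warehouse-a": {"batch-001": {
    "batch-001@7200@10800": {"values": [("warehouse-b", "warehouse-a", 200, 7300.0, "p")]}}}}
print(query(table, "warehouse-a", "batch-001", 7000.0, 8000.0))
```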
2. Parallel processing architecture supporting P2P relation analysis
Logistics monitoring usually requires a network-wide anomaly check, which identifies mismatches between warehouse-in/out records according to the distribution of supplies between warehouses and their transfer relations. Because the data volume in an emergency is large and the hierarchical relations between warehouses are complex, a single machine cannot meet the response performance requirements. Parallel architectures such as MapReduce are the popular high-performance big data processing frameworks of recent years; they take Map-Reduce as the basic computation flow, first splitting the data and then assigning and organizing the computation. This approach has a significant problem here: network-wide path analysis normally needs, in a P2P fashion, to check the relations between a single warehouse node and all other warehouse nodes in the network, and the data-partitioned processing pattern of the Map-Reduce framework has difficulty meeting this kind of computation task. To address this, the present invention proposes a parallel computation framework on top of the emergency logistics storage model described above. The framework has two levels: the upper task layer is responsible for task data organization, caching of the full data set, and task scheduling; the lower computation layer performs point-to-point relation analysis on top of interaction with the cached full data set, meeting the basic needs of emergency logistics monitoring. The basic structure of this parallel analysis framework is shown in Fig. 5.
The framework consists of two levels.
Task layer: it consists mainly of the task node and the full-data-set cache; each task node is allocated one cache. The task management node is responsible for coordinating and organizing the whole computation task. It follows a load-balancing principle and a centralized scheduling strategy to distribute and organize tasks. During computation it receives the input emergency supplies data set to be analyzed (i.e. the selection returned by the query) and filters the list of analysis objects (warehouse objects) out of the selection according to its main keys. Its scheduling establishes the mapping between the warehouse object list and the lower-layer computing nodes: first all lower-layer computing nodes are organized into a ring and each node identifier is hashed; then the hash results are range-mapped onto the extracted analysis-object list, so that the analysis objects are evenly distributed over the computing nodes. After the computing-node tasks are assigned, the task layer passes the analysis-object subset mapped to each node to the node's task interface, and the task node then starts the whole calculation task and the data processing.
During data processing, the task node provides the computing nodes, through the data access interface, with access to the data set in the cache, so that the computing nodes can perform P2P relation analysis at global scope.
Computation layer: it consists mainly of the computing nodes, which carry out the computation of the assigned tasks. During task execution, a node receives through the task interface the computation task package assigned to it; the package contains the subset of analysis objects this node must process. During the analysis, the computing node performs globally oriented P2P relation analysis for each analysis object (i.e. circulation node) in its subset: through the data access structure it obtains all relation data of the analysis object from the cache of the task node and performs the relation anomaly check locally. When all analysis objects have been processed, the anomaly information is merged by the task node and returned to the requesting application.
This computation framework carries out data processing on the basis of a parallel architecture; by separating objects from their relation sets it realizes high-performance P2P relation analysis and meets the basic demands of earthquake emergency work.
The basic flow of this process is shown in Fig. 6. The task-node processing flow is as follows (steps 5-6 are sketched in code after the flow):
1. the task node obtains the task request req submitted by the application and extracts from it the supplies information resourceID to be analyzed and the time constraints t_1, t_2;
2. according to the content of the request, the task node obtains from the underlying database all warehouse-in/out record information dataTbl satisfying the constraints;
3. the task node obtains from dataTbl the list of warehouse identifiers (main keys) of all warehouses participating in the logistics, forming the analysis-object list keysArray;
4. the task node obtains from the current system the information of all computing nodes in the computation layer;
5. the computing nodes are organized into a closed computation ring, the identifier of each computing node is hashed to form a globally unique hash mapping code, and all hash mapping codes form nodeHashArray;
6. the analysis objects in keysArray are split into a group of subsets; each subset consists of a group of analysis objects and corresponds to a hash mapping code in nodeHashArray;
7. the task node creates a task object and a cache for the current analysis task, and places dataTbl (i.e. the emergency supplies data set to be analyzed) into the cache, forming the global-relation data-set cache;
8. the task node sends the task information and the analysis-object subsets split in step 6 to the computing nodes corresponding to the hashes;
9. after a computing node obtains all task information, it starts the analysis computation task and performs global P2P (point-to-point) relation analysis on the objects in its local analysis-object subset;
10. after a computing node completes the analysis of its local objects, it returns the discovered anomaly information to the task node;
11. after the task node aggregates all the information, it returns the result to the application.
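The sketch below illustrates steps 5-6, organizing the computing nodes into a hash ring and splitting the analysis objects (warehouse keys) across them by hash range; the hashing scheme (MD5 of the node id) and function names are our own choices for illustration.

```python
import hashlib
from bisect import bisect_right
from collections import defaultdict
from typing import Dict, List

def ring_hash(value: str) -> int:
    """Globally unique mapping code for a node or object identifier."""
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

def split_objects(node_ids: List[str], keys_array: List[str]) -> Dict[str, List[str]]:
    """Steps 5-6: form the node ring, then range-map analysis objects onto it."""
    ring = sorted((ring_hash(n), n) for n in node_ids)   # closed ring of node hashes
    points = [h for h, _ in ring]
    assignment = defaultdict(list)
    for key in keys_array:
        # Each object goes to the first node clockwise from its own hash position.
        idx = bisect_right(points, ring_hash(key)) % len(ring)
        assignment[ring[idx][1]].append(key)
    return dict(assignment)

nodes = ["node-1", "node-2", "node-3"]
warehouses = [f"warehouse-{c}" for c in "abcdefgh"]      # keysArray
print(split_objects(nodes, warehouses))                  # maps each warehouse to one node
```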
After a computing node obtains the request information of this task, it performs relation analysis according to the locally assigned analysis-object subset. During the analysis it extracts the objects of the subset in turn and, through the cache of the task node, obtains all records that have a logistics interaction with each object, on which it performs anomaly identification. This process is shown in Fig. 7 and proceeds as follows (a sketch of the loop follows):
1. the computing node obtains the current task information job (i.e. the package of warehouse objects and task information) and the assigned analysis-object set keys;
2. the computing node establishes a cache connection to the task node according to the cache information in job; if this succeeds, go to step 4, otherwise go to step 3;
3. the computing node throws exception information and terminates the current task;
4. the computing node starts the computation task and sets the current analysis task index i = 0;
5. the computing node obtains the i-th analysis object key_i from keys;
6. according to the information of key_i, the node obtains the identifier of the warehouse currently analyzed, and obtains through the task-node cache all record information of supplies transfers involving the current warehouse;
7. path and dispatch/receipt anomalies are checked one by one according to the record information of step 6;
8. the anomaly results of the analysis of the current warehouse are saved;
9. the processing and analysis task of the current object is completed and the task index is set to i = i + 1;
10. if the current task index i exceeds the bounds of keys, go to step 11, otherwise go to step 5;
11. all anomaly information is aggregated and returned to the task node.
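A short sketch of this computing-node loop (steps 1-11); fetch_related stands in for the cache lookup of step 6 and check_records for the anomaly checks of step 7, both of which are our placeholders rather than interfaces defined in the patent.

```python
from typing import Callable, Dict, List

def run_compute_node(keys: List[str],
                     fetch_related: Callable[[str], List[tuple]],
                     check_records: Callable[[str, List[tuple]], List[str]]
                     ) -> Dict[str, List[str]]:
    """Steps 4-11: iterate over the assigned warehouses, check each, collect anomalies."""
    results: Dict[str, List[str]] = {}
    i = 0
    while i < len(keys):                       # step 10: stop when i overflows keys
        warehouse = keys[i]                    # step 5: current analysis object
        records = fetch_related(warehouse)     # step 6: records via the task-node cache
        anomalies = check_records(warehouse, records)   # step 7: path / receipt checks
        if anomalies:
            results[warehouse] = anomalies     # step 8: save per-warehouse result
        i += 1                                 # step 9
    return results                             # step 11: summary sent to the task node

# Example with stubbed dependencies.
demo = run_compute_node(
    ["warehouse-a"],
    fetch_related=lambda w: [("b", w, 200, 9000.0, "p")],
    check_records=lambda w, recs: ["transport anomaly"] if recs else [])
print(demo)
```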
3. High-performance logistics network anomaly check
The present invention stores and organizes massive logistics record data with the emergency logistics storage model described above, and performs high-performance analysis of logistics anomalies with the parallel processing architecture. On this basis, logistics monitoring in an emergency is realized through three kinds of checks: path anomalies, dispatch/receipt anomalies, and logistics transport time:
Path anomaly check: the warehouse-in/out records of all warehouses in the logistics network are checked to analyze whether any warehouse-to-warehouse transfer violates the emergency management system in its flow direction; if so, it is treated as a path anomaly;
Dispatch/receipt anomaly check: between two warehouses that have a transfer relation, the dispatch record and the receipt record are checked for a match; if the records do not match, it is treated as a dispatch/receipt anomaly;
Logistics transport anomaly: between two warehouses that have a transfer relation, the dispatch time and the arrival time are checked against the time-difference range required by the work; if the difference is within the required range the transport operation is regarded as acceptable, otherwise it is treated as a transport anomaly.
The whole computation process is shown in Fig. 8 and proceeds as follows (a sketch of the per-record check follows the flow):
1. obtain the analysis task request;
2. the task node completes the task scheduling and starts the computing nodes for the analysis work;
3. a computing node obtains the current analysis object from its assigned keys;
4. all warehouse-in/out records keyLogs of the current analysis object are obtained through the task-node cache;
5. one warehouse-in/out record log is obtained from the current keyLogs;
6. if the current log is a warehouse-in record, go to step 7, otherwise go to step 13;
7. obtain the from information from the current log and check, against the management system, the subordination relation between the warehouse in from and the current warehouse; if the rules are violated, record a path anomaly for the current log;
8. if the warehouse-in/out records of the corresponding warehouse node contain no record corresponding to the current record, obtain the dispatch records of the warehouse node given by from from the task-node cache; if such a record exists, go to step 9, otherwise record a dispatch/receipt anomaly for the current log;
9. compare the time of the dispatch record and of the current record with the prescribed transport time difference and judge whether the dispatch/receipt times of the current log satisfy the configured condition; if the dispatch/receipt time difference does not satisfy the condition, record a transport anomaly for the current log;
10. if the current warehouse object still has warehouse-in/out records log, go to step 5, otherwise go to step 11;
11. if there are still objects to analyze in the current keys, go to step 3, otherwise go to step 12;
12. aggregate all anomaly information and return it to the task node.
13. the current record is a warehouse-out record: obtain the to information from log and check, against the management system, the subordination relation between the warehouse in to and the current warehouse; if the rules are violated, record a path anomaly for the current log;
14. obtain the receipt records of the corresponding warehouse node from the task-node cache; if such a record exists, go to step 15, otherwise record a dispatch/receipt anomaly for the current log;
15. compare the time of the receipt record and of the current record with the prescribed transport time difference; if the dispatch/receipt time difference does not satisfy the condition, record a transport anomaly for the current log;
16. if the current warehouse object still has warehouse-in/out records log, go to step 5, otherwise go to step 11;
17. if there are still objects to analyze in the current keys, go to step 3, otherwise go to step 12;
18. aggregate all anomaly information and return it to the task node.
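For illustration, the sketch below implements the per-record decision of steps 6-9 and 13-15 for a single warehouse; the rule table (which parent each warehouse may receive from or dispatch to) and all names are assumptions we introduce, since the patent leaves the concrete form of the emergency management rules open.

```python
from typing import Dict, List, NamedTuple

class IORec(NamedTuple):
    direction: str   # "in" or "out"
    other: str       # 'from' warehouse for an in-record, 'to' warehouse for an out-record
    object_id: str
    time: float

def check_warehouse(current: str,
                    logs: List[IORec],
                    counterpart_logs: Dict[str, List[IORec]],
                    allowed_parent: Dict[str, str],
                    max_delay: float) -> List[str]:
    """Path, dispatch/receipt and transport checks for one warehouse (steps 5-18)."""
    anomalies = []
    for log in logs:
        # Steps 7 / 13: flow direction must follow the hierarchy rules.
        if log.direction == "in" and allowed_parent.get(current) != log.other:
            anomalies.append(f"path anomaly: {current} received from {log.other}")
        if log.direction == "out" and allowed_parent.get(log.other) != current:
            anomalies.append(f"path anomaly: {current} dispatched to {log.other}")
        # Steps 8 / 14: the counterpart warehouse must hold the matching record.
        want = "out" if log.direction == "in" else "in"
        match = [r for r in counterpart_logs.get(log.other, [])
                 if r.direction == want and r.other == current
                 and r.object_id == log.object_id]
        if not match:
            anomalies.append(f"dispatch/receipt anomaly: {log.object_id} at {current}")
            continue
        # Steps 9 / 15: the dispatch/receipt time difference must stay within limits.
        if abs(log.time - match[0].time) > max_delay:
            anomalies.append(f"transport anomaly: {log.object_id} "
                             f"{log.other} -> {current}")
    return anomalies

# Province -> city example: warehouse-b (province level) feeds warehouse-a (city level).
rules = {"warehouse-a": "warehouse-b"}
logs_a = [IORec("in", "warehouse-b", "p", 9000.0)]
cache = {"warehouse-b": [IORec("out", "warehouse-a", "p", 0.0)]}
print(check_warehouse("warehouse-a", logs_a, cache, rules, max_delay=7200.0))
```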
Claims (9)
1. A big data-based emergency supplies data storage method, characterized in that a circulation node stores the emergency supplies records it reads or receives in a Key-Value database Store; wherein the hash of the warehouse identifier StoreID of this database is the key of the record, and all warehouse-in/out supplies records and inventory information of this warehouse form the value corresponding to that key;
the data storage model of the Key-Value database Store is: Store = {StoreID, resColumes}, resColumes = {rCol_i | i = 1, 2, …, n}; wherein resColumes is the supplies-record column family of this database, made up of n columns rCol, and supplies of class i are recorded in column rCol_i;
for any column, rCol = {colName, rColArray}, rColArray = {rColCell_j | j = 1, 2, …, m}; wherein colName is the name of the column and corresponds to the supplies batch identifier resourceID; rColArray is the set of column clusters; rColCell_j is the j-th column cluster in rColArray and stores the warehouse-in/out records of the supplies of the current column rCol within one time period; m is the total number of column clusters rColCell.
2. The method of claim 1, characterized in that a column-cluster set rColArray contains two parts of information: 1) warehouse-in/out supplies records organized by time-flow relation; 2) inventory information recording the storage situation of the supplies in said database.
3. The method of claim 1 or 2, characterized in that the column-cluster set rColArray is created by time-period division: for each period a column cluster is created or the column is extended, forming the column-cluster set; for the period from time t_{k-1} to t_k, the column cluster contains a supplies record cluster {Lname, values} holding the warehouse-in/out records of that period and an inventory cluster holding the inventory information of that period; values = {(from, to, Number, time, objectID)_p | p = 1, 2, …, s}, wherein Lname is the name of the supplies record cluster of this period, values is the warehouse-in/out record information of this batch of supplies from time t_{k-1} to t_k, from is the supplies source information, to is the supplies destination information, Number is the warehouse-in/out quantity, Time is the time of the warehouse-in/out operation, and objectID is the supplies identifier of the operation.
4. The method of claim 3, characterized in that Lname is formed by appending a timestamp to the identifier of the corresponding batch of supplies.
5. The method of claim 3, characterized in that the stock-check information comprises: SName, the name of the stock-check record cluster, which corresponds to the goods record cluster of the same period; the inventory quantity of the stored class of goods at moment t_{k-1}; and the inventory quantity of the stored class of goods at moment t_k.
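A sketch of one per-period column cluster as described in claims 3 to 5: Lname is the batch identifier plus a timestamp (claim 4), Values holds the (from, to, Number, Time, objectID) tuples, and a paired stock-check entry records the inventory at the period boundaries; field names other than those quoted in the claims are assumptions:

```python
period_cluster = {
    # claim 4: Lname = batch identifier plus a timestamp
    "Lname": "resource-water-2015_20150427",
    # claim 3: inbound/outbound records of this batch between t_{k-1} and t_k
    "Values": [
        {"from": "store-001", "to": "store-002", "Number": 500,
         "Time": "2015-04-27T09:30:00", "objectID": "p1"},
    ],
    # claim 5: stock-check cluster paired with the goods record cluster of the same period
    "stockCheck": {
        "SName": "resource-water-2015_20150427_stock",
        "stock_at_t_k_minus_1": 1200,  # inventory at moment t_{k-1}
        "stock_at_t_k": 700,           # inventory at moment t_k
    },
}
```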
6. An emergency materials circulation monitoring method based on big data, wherein the emergency materials circulation data are data stored by the method of claim 1, comprising the steps of:
1) arranging a task node in the logistics network, the task node organizing the computing nodes in the logistics network into a ring;
2) the task node, according to the received task request, extracting the corresponding inbound/outbound goods record data from the database, and separating each store object therein together with its data set as an analysis object;
3) the task node hashing the computing nodes in the ring and mapping them to the analysis objects, so that each computing node is assigned a partition of the analysis-object set;
4) the task node creating a task object according to the task request and sending it, together with the partition of analysis objects, to the corresponding computing node; wherein the task object caches the basic task information and mapping information, monitors the execution status of each computing node, and collects the summarized results back to the task node;
5) the computing node extracting the analysis objects to be processed from the received partition, obtaining the goods record data of the other stores in the logistics network that have material-transfer relations with them, then, according to the received task object, performing anomaly checking on these store goods record data, and sending the check results to the task node.
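A minimal sketch of steps 1) to 3) of claim 6: the computing nodes are hashed onto a ring and each analysis object (a store and its record data) is assigned to a node. The ring construction shown here is one simple way to realise the mapping under those assumptions, not the patent's prescribed algorithm:

```python
import hashlib
from bisect import bisect_right

def h(name: str) -> int:
    return int(hashlib.md5(name.encode("utf-8")).hexdigest(), 16)

def build_ring(compute_nodes):
    """Place the computing nodes on a hash ring (claim 6, steps 1 and 3)."""
    return sorted((h(node), node) for node in compute_nodes)

def assign(ring, store_id):
    """Map an analysis object (a store) to the next computing node clockwise on the ring."""
    positions = [pos for pos, _ in ring]
    idx = bisect_right(positions, h(store_id)) % len(ring)
    return ring[idx][1]

ring = build_ring(["node-A", "node-B", "node-C"])
partitions = {}
for store_id in ["store-001", "store-002", "store-003", "store-004"]:
    partitions.setdefault(assign(ring, store_id), []).append(store_id)
print(partitions)  # each computing node receives its partition of analysis objects
```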
7. The method of claim 6, characterized in that the anomaly checking method is:
71) the computing node extracts from the data the inbound/outbound goods records of each circulation node of this store; then, according to these records, it obtains the inbound/outbound goods records that have transport relations with them and derives a transfer path, and checks whether this transfer path conforms to the emergency-management rules; if an anomaly exists, it is recorded as a path anomaly;
72) the shipping and receiving information of the stores having transfer relations with the current record is obtained from the task node, and it is checked whether any shipment lacks a corresponding receipt (or vice versa); if so, it is recorded as a send/receive anomaly;
73) the times of the two records A and B are compared against the time difference required by management; if this time difference is exceeded, a transport anomaly is recorded; wherein record A is the record of goods p entering store a and also identifies that the goods came from store b, and record B is the record of goods p leaving store b and also identifies that the goods were sent to store a.
8. The method of claim 7, characterized in that the anomaly checking method is:
81) the computing node obtains the current analysis object from the assigned analysis-object set keys, together with all inbound/outbound goods record logs keyLogs of the current analysis object;
82) the computing node takes one inbound/outbound record log from the current keyLogs; if the log is an inbound record, it obtains the goods source information from the log and checks the affiliation relation between the source store and the current store, and if the emergency-management rules are violated, the current log is recorded as a path anomaly; if the log is an outbound record, it obtains the goods destination information from the log and checks the affiliation relation between the destination store and the current store, and if the emergency-management rules are violated, the log is recorded as a path anomaly;
83) if the log conforms to the emergency-management rules, it is checked whether the difference between the shipping and receiving record times of this log satisfies the set condition; if not, the current log is recorded as a transport anomaly.
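A sketch of the per-log checks in claim 8, assuming the emergency-management rule can be encoded as a table of permitted shipping relations between stores; the table, field names, and the delay threshold are illustrative assumptions:

```python
# Assumed emergency-management rule: which shipping relations between stores are allowed.
ALLOWED_ROUTES = {("store-b", "store-a"), ("store-a", "store-c")}
MAX_DELAY_HOURS = 48  # assumed transport-time threshold for step 83

def check_log(current_store, log):
    """Classify one inbound/outbound record log (claim 8, steps 82 and 83)."""
    if log["direction"] == "in":           # inbound record: check the source store
        route = (log["from"], current_store)
    else:                                  # outbound record: check the destination store
        route = (current_store, log["to"])
    if route not in ALLOWED_ROUTES:
        return "path anomaly"
    if log.get("receive_delay_hours", 0) > MAX_DELAY_HOURS:
        return "transport anomaly"
    return "ok"

print(check_log("store-a", {"direction": "in", "from": "store-b", "receive_delay_hours": 12}))  # ok
print(check_log("store-a", {"direction": "in", "from": "store-x"}))                             # path anomaly
```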
9. The method of claim 6 or 7, characterized in that the method by which the task node extracts the corresponding inbound/outbound goods record data from the database according to the received task request is: the task node obtains from the database the column with the corresponding name according to the goods batch information resourceID in the task request; if the corresponding column exists, the period boundary (t_k, t_{k+1}) is determined from the time constraint t1 in the task request, and the period boundary (t_m, t_{m+1}) is determined from the time condition t2; the names of the column clusters corresponding to t1 and t2 are then obtained, and these two column clusters together with the data records of all period column clusters between them form a selection; the selection is then filtered record by record, erasing the data whose times do not satisfy the constraints t1 and t2; finally, the data remaining in the selection are taken as the corresponding inbound/outbound goods record data.
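A sketch of the extraction procedure in claim 9: look up the column by resourceID, keep the period column clusters whose boundaries overlap the time constraints t1 and t2, and then filter record by record. The column layout follows the earlier sketches, and storing explicit period boundaries on each cluster is an assumed simplification:

```python
def extract_records(column, t1, t2):
    """Select the period column clusters overlapping [t1, t2], then filter record by record."""
    selected = [c for c in column["rColArray"]
                if c["t_end"] >= t1 and c["t_start"] <= t2]
    return [rec for c in selected for rec in c["Values"]
            if t1 <= rec["Time"] <= t2]

column = {"colName": "resource-water-2015", "rColArray": [
    {"t_start": "2015-04-20", "t_end": "2015-04-24",
     "Values": [{"Time": "2015-04-22", "objectID": "p0"}]},
    {"t_start": "2015-04-25", "t_end": "2015-04-29",
     "Values": [{"Time": "2015-04-26", "objectID": "p1"},
                {"Time": "2015-04-28", "objectID": "p2"}]},
]}
print(extract_records(column, "2015-04-26", "2015-04-30"))  # only p1 and p2 remain
```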
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510205821.3A CN104850593B (en) | 2015-04-27 | 2015-04-27 | A kind of storage of emergency materials data and circulation monitoring method based on big data |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510205821.3A CN104850593B (en) | 2015-04-27 | 2015-04-27 | A kind of storage of emergency materials data and circulation monitoring method based on big data |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104850593A true CN104850593A (en) | 2015-08-19 |
CN104850593B CN104850593B (en) | 2018-10-30 |
Family
ID=53850238
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510205821.3A Expired - Fee Related CN104850593B (en) | 2015-04-27 | 2015-04-27 | A kind of storage of emergency materials data and circulation monitoring method based on big data |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104850593B (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105184579A (en) * | 2015-09-01 | 2015-12-23 | 立德高科(北京)数码科技有限责任公司 | Commodity traceability method and traceability system based on combined label |
CN105956816A (en) * | 2016-07-19 | 2016-09-21 | 成都镜杰科技有限责任公司 | Cargo transportation information intelligent processing method |
CN107451765A (en) * | 2016-05-30 | 2017-12-08 | 阿里巴巴集团控股有限公司 | A kind of asynchronous logistics data processing method and processing device, commodity distribution control method and device |
CN107609879A (en) * | 2016-07-07 | 2018-01-19 | 阿里巴巴集团控股有限公司 | It is a kind of to identify the method, apparatus and system for usurping logistics information |
CN108108857A (en) * | 2018-01-12 | 2018-06-01 | 福建师范大学 | River suddenly accident emergency resources scheduling optimization method |
CN108319538A (en) * | 2018-02-02 | 2018-07-24 | 世纪龙信息网络有限责任公司 | The monitoring method and system of big data platform operating status |
CN108960697A (en) * | 2017-05-24 | 2018-12-07 | 北大方正集团有限公司 | Record the method and device for information of tracing to the source |
CN111444187A (en) * | 2020-03-31 | 2020-07-24 | 温州大学 | Big data storage system based on computer |
CN116128390A (en) * | 2023-04-17 | 2023-05-16 | 长沙智医云科技有限公司 | Medical consumable cold chain transportation monitoring method based on Internet of things |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103023970A (en) * | 2012-11-15 | 2013-04-03 | 中国科学院计算机网络信息中心 | Method and system for storing mass data of Internet of Things (IoT) |
US8996482B1 (en) * | 2006-02-10 | 2015-03-31 | Amazon Technologies, Inc. | Distributed system and method for replicated storage of structured data records |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8996482B1 (en) * | 2006-02-10 | 2015-03-31 | Amazon Technologies, Inc. | Distributed system and method for replicated storage of structured data records |
CN103023970A (en) * | 2012-11-15 | 2013-04-03 | 中国科学院计算机网络信息中心 | Method and system for storing mass data of Internet of Things (IoT) |
Non-Patent Citations (1)
Title |
---|
TIAN AIXUE: "Research on Performance Testing and Optimization Based on Massive Data Storage", China Master's Theses Full-text Database, Information Science and Technology Series * |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105184579A (en) * | 2015-09-01 | 2015-12-23 | 立德高科(北京)数码科技有限责任公司 | Commodity traceability method and traceability system based on combined label |
CN107451765A (en) * | 2016-05-30 | 2017-12-08 | 阿里巴巴集团控股有限公司 | A kind of asynchronous logistics data processing method and processing device, commodity distribution control method and device |
CN107451765B (en) * | 2016-05-30 | 2020-12-25 | 菜鸟智能物流控股有限公司 | Asynchronous logistics data processing method and device and logistics management method and device |
CN107609879A (en) * | 2016-07-07 | 2018-01-19 | 阿里巴巴集团控股有限公司 | It is a kind of to identify the method, apparatus and system for usurping logistics information |
CN105956816A (en) * | 2016-07-19 | 2016-09-21 | 成都镜杰科技有限责任公司 | Cargo transportation information intelligent processing method |
CN108960697A (en) * | 2017-05-24 | 2018-12-07 | 北大方正集团有限公司 | Record the method and device for information of tracing to the source |
CN108960697B (en) * | 2017-05-24 | 2021-12-17 | 北大方正集团有限公司 | Method and device for recording traceability information |
CN108108857A (en) * | 2018-01-12 | 2018-06-01 | 福建师范大学 | River suddenly accident emergency resources scheduling optimization method |
CN108108857B (en) * | 2018-01-12 | 2021-11-09 | 福建师范大学 | Emergency resource scheduling optimization method for river sudden pollution event |
CN108319538A (en) * | 2018-02-02 | 2018-07-24 | 世纪龙信息网络有限责任公司 | The monitoring method and system of big data platform operating status |
CN111444187A (en) * | 2020-03-31 | 2020-07-24 | 温州大学 | Big data storage system based on computer |
CN111444187B (en) * | 2020-03-31 | 2022-07-29 | 温州大学 | Big data storage system based on computer |
CN116128390A (en) * | 2023-04-17 | 2023-05-16 | 长沙智医云科技有限公司 | Medical consumable cold chain transportation monitoring method based on Internet of things |
CN116128390B (en) * | 2023-04-17 | 2023-06-30 | 长沙智医云科技有限公司 | Medical consumable cold chain transportation monitoring method based on Internet of things |
Also Published As
Publication number | Publication date |
---|---|
CN104850593B (en) | 2018-10-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104850593A (en) | Big data-based emergency supplies data storage and circulation monitoring method | |
Zhong et al. | Big Data for supply chain management in the service and manufacturing sectors: Challenges, opportunities, and future perspectives | |
CN109840253A (en) | Enterprise-level big data platform framework | |
Brockett et al. | Using rank statistics for determining programmatic efficiency differences in data envelopment analysis | |
CN103631922B (en) | Extensive Web information extracting method and system based on Hadoop clusters | |
CN104809634B (en) | Tourism data is investigated and monitoring system | |
CN107766402A (en) | A kind of building dictionary cloud source of houses big data platform | |
Ngo et al. | Designing and implementing data warehouse for agricultural big data | |
CN107809322A (en) | The distribution method and device of work order | |
CN106095953A (en) | A kind of real estate data integration method based on GIS | |
CN111385143B (en) | Police information cloud platform | |
Moharm et al. | Big data in ITS: Concept, case studies, opportunities, and challenges | |
CN104219088A (en) | Hive-based network alarm information OLAP method | |
Wu et al. | Optimization of Order‐Picking Problems by Intelligent Optimization Algorithm | |
CN111353085A (en) | Cloud mining network public opinion analysis method based on feature model | |
CN112633621A (en) | Power grid enterprise management decision system and method based on PAAS platform | |
Zhou et al. | A multi-agent distributed data mining model based on algorithm analysis and task prediction | |
Benatia et al. | QR-code enabled product traceability system: a big data perspective | |
Qin et al. | Cutting down the travel distance of put systems at Kunming International Flower Auction Market | |
Hu et al. | 5G‐Oriented IoT Big Data Analysis Method System | |
Vasin et al. | Exploring regional innovation systems through a convergent platform for Big Data | |
Nath et al. | Supply chain management (SCM): Employing various big data and metaheuristic strategies | |
Huang et al. | Modeling Agricultural Logistics Distribution Center Location Based on ISM. | |
Sayed et al. | A conceptual framework for using big data in Egyptian agriculture | |
Biao et al. | A multi-agent-based research on tourism supply chain risk management |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | |
PB01 | Publication | |
EXSB | Decision made by sipo to initiate substantive examination | |
SE01 | Entry into force of request for substantive examination | |
TA01 | Transfer of patent application right | Effective date of registration: 20180904; Address after: 100049 Yuquan West Street, Shijingshan District, Beijing 1; Applicant after: Zhang Ling; Applicant after: Earthquake In China emergency rescue center; Address before: 100218 unit four, unit 4, five District, Tiantongyuan, Changping District, Beijing 301; Applicant before: Zhang Ling |
GR01 | Patent grant | |
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20181030; Termination date: 20190427 |