Summary of the invention
The present invention is directed to the above problem, and proposes a method for constructing a cross-plane data analysis hub based on the Internet of Things, addressing the inefficient processing that results from the current lack of a cross-plane system design applied to the Internet of Things.
To achieve the above object, the present invention adopts the following technical scheme: a method for constructing a cross-plane data analysis hub based on the Internet of Things, comprising the following steps. Step 1: using the neuron distributed-location coding method, encode the units in the Internet of Things that contain an intelligent CPU, thereby constructing edge neurons, regional core neurons, and a core neuron. Step 2: embed into each intelligent CPU of the edge neurons and regional core neurons a lightweight layered differentiation algorithm that distinguishes LM from ST. Step 3: embed into each intelligent CPU of the regional core neurons a queuing parallel algorithm that schedules the data information delivered by the regional core neurons to the core neuron.
Preferred version: the detailed process of the neuron distributed-location coding method of the first step is as follows:
A. Take each edge neuron's unique machine identifier as the bottom-most element of the coding method, namely the Basic Element ID;
B. Take one edge neuron as an example: the neuron has a unique Basic Element ID. Moving up one level in the neural network, the first regional core neuron, i.e. the regional core neuron (first level), carries the information Region Center ID(1);
C. Suppose that in the network of city A three edge neurons connect to one regional core neuron (first level). Applying the weight coding method, the location codes of the three edge neurons are, respectively:
DL Code = Basic Element ID + 1 + Region Center ID(1);
DL Code = Basic Element ID + 2 + Region Center ID(1);
DL Code = Basic Element ID + 3 + Region Center ID(1);
D. Going from the regional core neuron (first level) to the regional core neuron (second level): suppose province P contains five cities similar to city A, namely A, B, C, D, and E. Applying the weight coding method described in step C, if city A is assigned weight 1, the other cities are assigned 2, 3, 4, and 5 in turn. The location code of the edge neuron of step B then becomes:
DL Code = Basic Element ID + 1 + Region Center ID(1) + 1 + Region Center ID(2);
E. The core neuron also has unique identification information, hereinafter called the Nerve Center ID. Suppose the core neuron administers four provinces; then, by the weight coding, the location code of the edge neuron of step B is:
DL Code = Basic Element ID + 1 + Region Center ID(1) + 1 + Region Center ID(2) + 1 + Nerve Center ID;
F. When there are many levels of regional neurons, coding proceeds in the same way as in the steps above; by analogy, elements are appended starting from the Basic Element ID all the way up to the Nerve Center ID.
Preferred version: the detailed process of the second step comprises: for the local area network formed by the edge neurons and a regional core neuron (first level), the intelligent CPU of the regional core neuron (first level) is programmed to distinguish LM from ST; LM is completed locally, while ST is passed up to the next-higher regional core neuron (second level). The intelligent CPU of the regional core neuron (second level) is likewise programmed to distinguish LM from ST.
Preferred version: the detailed process of the queuing parallel algorithm of the third step comprises the following steps:
A. Define a variable AD, representing the level distance from an edge neuron to the core neuron; the value of AD equals the number of regional neurons that the edge neuron's information must pass through to reach the core neuron;
B. Define a variable SL, representing the importance level of each local network; the value of SL equals the sum of the SL values of all regional networks that the edge neuron's information passes through on its way to the core neuron;
C. Define a variable IIL, representing the importance of the information that an edge neuron collects and sends to the core neuron via the regional core neurons at each level;
D. Based on the above, the queuing order Final Order of information arriving at the core neuron is set to:
Final Order = W1*AD + W2*SL + W3*IIL, where W1, W2, and W3 are weights, determined by actual conditions, for the three quantities of steps A, B, and C, each taking a value in the range 0–1.
Preferred version: a fourth step is also included: the heavyweight kernel scheduling algorithm is embedded into the intelligent CPU of each regional core neuron and of the core neuron, allowing the core neuron to distribute data-processing tasks to regional core neurons that are idle or not operating at full capacity.
Preferred version: the detailed process of the heavyweight kernel scheduling algorithm of the fourth step comprises the following steps:
First, define a variable ACD, representing the number of regional neurons that a regional core neuron's information must pass through to reach the core neuron;
Second, define on each regional core neuron a variable Mission Status, representing the neuron's degree of idleness, with three states:
Mission Status = 2 when the regional core neuron is operating at full capacity; Mission Status = 1 when it is operating below full capacity; Mission Status = 0 when it is idle;
Third, define a variable Time Sign, used by the core neuron to record the time at which a regional neuron sends its information. When the core neuron distributes data-processing tasks among the many regional core neurons, it follows the principle of scheduling the regional core neurons according to the priority order of three data sets: Mission Status > Time Sign > ACD.
Preferred version: the concrete operation is: when a regional core neuron's own Mission Status = 1, it sends the value of this variable to the core neuron; upon receiving it, the core neuron judges that it may distribute data-processing tasks to the regional core neuron that sent the information, and records the time Time Sign at which that regional neuron sent the information.
Preferred version: the concrete operation is: when a regional core neuron's own Mission Status = 0, it sends the value of this variable to the core neuron; upon receiving it, the core neuron judges that it may distribute data-processing tasks to the regional core neuron that sent the information, and records the time Time Sign at which that regional neuron sent the information.
Preferred version: the third concrete operation is: first, the available regional core neurons are selected according to Mission Status; to the core neuron, a regional core neuron with Mission Status = 0 is preferred over one with Mission Status = 1. Next, regional core neurons are selected according to Time Sign: the more recent the Time Sign, the higher the priority. Finally, regional neurons are selected according to the value of ACD: the larger the ACD, the farther the regional neuron is from the core neuron, and the lower its selection priority.
In summary, by adopting the above technical scheme, the concrete beneficial effects of the present invention are: a cross-plane data-processing scheme is designed for the Internet of Things, facilitating the design and construction of human-computer interaction systems at the application plane and data plane of the Internet of Things.
The specifics are as follows:
1. Starting from unique identifiers such as the CPU numbers of Internet of Things intelligent terminals, the parameter DL Code is defined and its value computed, yielding a location coding method.
2. The lightweight layered differentiation algorithm provides the mechanism by which data passes upward level by level, guaranteeing accurate delivery of data, and also lays the foundation for the subsequent scheduling and processing of data.
3. The design of the queuing parallel algorithm provides a central data-processing mechanism for the Internet of Things and improves the efficiency of data processing.
4. The heavyweight kernel scheduling algorithm gives the Internet of Things a load-balancing mechanism for the problem of mass data processing, reduces the consumption of computer hardware such as CPU and memory, saves resources, and helps the data processing of the Internet of Things truly meet the requirements of cloud computing.
Embodiment
All features disclosed in this specification, and all steps of any method or process disclosed, may be combined in any way, except for mutually exclusive features and/or steps.
The basic concepts are first explained in detail:
1.1 Edge neurons, regional core neurons, and the core neuron:
Based on the plane characteristics and regional structure of the Internet of Things, the present invention designs three kinds of neurons necessary for the data analysis system; their respective roles are introduced in detail below:
First, the edge neuron (Nerve Factor): as is well known, perception of the outside world at the lowest end of the Internet of Things is performed mainly by instruments such as radio-frequency identification (RFID) readers, infrared sensors, global positioning devices, surveillance cameras, and laser scanners, which collect raw information such as signals, images, and temperatures. The edge neuron of the present invention is the unit formed by the CPUs widely distributed in these instruments, together with the algorithm coding of the present invention set forth below.
Second, the regional core neuron: the regional core neurons of the present invention are distributed among the local area networks, wide area networks, and proprietary networks, and are responsible for the scheduling and processing of information within each network segment. For example, inside a company's local area network a regional core neuron is installed: a unit that has a CPU and also carries the algorithm coding of the present invention set forth below. Physically, it is connected to the edge neurons through the network. Its main functions are: to accept information sent by the edge neurons, to send part of the information up to a higher-level regional core neuron, and to feed necessary information or operational instructions back to the edge neurons.
Third, the core neuron: every application of the Internet of Things has a central hub responsible for the analysis and processing of all data relating to that application. For example, a logistics company necessarily has a machine room responsible for processing information such as the distribution records, bill audits, and delivery confirmations of all goods nationwide, so that the parent company can manage each of its companies across the country. This central data hub, together with the algorithm coding of the present invention set forth below, is the core neuron.
Taking the logistics company as an example, the structure formed by the three kinds of units is shown in Figure 1:
For the logistics company, each city has a regional core neuron (first level) responsible for recording and scheduling the logistics of that city and for feeding information back to the provincial company.
For a provincial company, the several regional core neurons (first level) of the cities belonging to that province connect to the provincial regional core neuron (second level). The functions of the regional core neuron (second level) include recording and scheduling the logistics information within the province and feeding information back to the national parent company.
The core neuron (highest level) is located inside the logistics company's national parent company. It receives the information coming from the regional core neurons (second level), records and schedules that information, and then returns the necessary feedback to the regional core neurons (second level).
This neuronal structure is applicable to the architecture of many current Internet of Things deployments, including public security organs, banks, and government departments; their structures can be obtained by analogy with this figure.
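The three-tier structure described above can be pictured as a tree of units. The following sketch is illustrative only, written in Python for brevity (the specification itself leaves the implementation language open); the node names and the number of edge neurons are hypothetical and are not taken from Figure 1.

```python
# Hypothetical miniature of the hierarchy: edge neurons -> city-level
# regional core neurons (first level) -> provincial regional core neuron
# (second level) -> core neuron at the national parent company.
network = {
    "NerveCenter": {                      # national parent company
        "RegionCenter2_P": {              # province P (second level)
            "RegionCenter1_A": ["scanner_1", "scanner_2", "scanner_3"],  # city A
            "RegionCenter1_B": ["scanner_4"],                            # city B
        },
    },
}

def count_edge_neurons(node):
    """Count the edge neurons (the leaf lists) under any node of the tree."""
    if isinstance(node, list):
        return len(node)
    return sum(count_edge_neurons(child) for child in node.values())

print(count_edge_neurons(network))  # 4
```

Walking such a tree from a leaf to the root traces exactly the path along which LM/ST differentiation and queuing occur in the sections that follow.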
1.2. The neuron distributed-location coding method:
The greatest role of the Internet of Things data analysis hub formed by the neuron units set forth in 1.1 is the collection, processing, and scheduling of information across the whole network. Locating an information source, or the Internet of Things point to which feedback is destined, is therefore a primary demand of the Internet of Things. Based on this requirement, the present invention creates a coding scheme for accurately locating the position of each neuron.
The rules of the distributed location coding method (Distribute Location Code, hereinafter DL Code) are:
A. Every edge neuron, such as a camera or a temperature sensor, has a unique machine identifier, generally an RFID. In the present invention, this identifier is taken as the bottom-most and most basic element of the coding method: the Basic Element ID.
B. Take one edge neuron (Nerve Factor) as an example: the neuron has a unique Basic Element ID. Moving up one level in the neural network, the first regional core neuron, i.e. the regional core neuron (first level), is reached. Each regional neuron also has its own intelligent CPU, and this CPU also has a unique number, hereinafter called the regional core neuron (first level) information: Region Center ID(1).
C. Suppose that in the network of city A three edge neurons connect to one regional core neuron (first level). Applying the weight coding method, the location codes of the three edge neurons are, respectively:
DL Code = Basic Element ID + 1 + Region Center ID(1);
DL Code = Basic Element ID + 2 + Region Center ID(1);
DL Code = Basic Element ID + 3 + Region Center ID(1);
D. Going from the regional core neuron (first level) to the regional core neuron (second level): suppose province P contains five cities similar to city A, namely A, B, C, D, and E. Applying the weight coding method described in step C, if city A is assigned weight 1, the other cities are assigned 2, 3, 4, and 5 in turn. The location code of the edge neuron (Nerve Factor) of step B then becomes:
DL Code = Basic Element ID + 1 + Region Center ID(1) + 1 + Region Center ID(2);
E. The core neuron also has unique identification information, hereinafter called the Nerve Center ID. Suppose the core neuron administers four provinces; then, by the weight coding, the location code of the edge neuron (Nerve Factor) of step B is:
DL Code = Basic Element ID + 1 + Region Center ID(1) + 1 + Region Center ID(2) + 1 + Nerve Center ID;
F. When there are many levels of regional neurons, coding proceeds in the same way as in the steps above; by analogy, elements are appended starting from the Basic Element ID all the way up to the Nerve Center ID. Because each of the above identification codes is unique, every edge neuron can be located accurately.
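Steps A–F can be sketched as a small routine that concatenates the identifiers and weights. The sketch is illustrative and renders the "+" of the formulas literally as a string separator; the example IDs are hypothetical placeholders, not real machine identifiers.

```python
def build_dl_code(basic_element_id, path):
    """Build a DL Code from an edge neuron's Basic Element ID and the
    (weight, center_id) pairs climbing toward the Nerve Center ID."""
    parts = [basic_element_id]
    for weight, center_id in path:
        parts.append(str(weight))   # the city/province weight of steps C-D
        parts.append(center_id)     # the next regional or core identifier
    return "+".join(parts)

# The edge neuron of step B in city A (weight 1) of province P (weight 1),
# under the core neuron of step E:
code = build_dl_code(
    "BasicElementID",
    [(1, "RegionCenterID(1)"), (1, "RegionCenterID(2)"), (1, "NerveCenterID")],
)
print(code)
# BasicElementID+1+RegionCenterID(1)+1+RegionCenterID(2)+1+NerveCenterID
```

Because every Basic Element ID is unique, the resulting string is unique as well, which is what makes the reverse lookup in the Center Data Base of the implementation steps possible.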
1.3. The lightweight layered differentiation algorithm:
The present invention creates this algorithm, whose function is to distinguish the types of information being transmitted: to analyze which information needs to be submitted to a higher-level neuron and which can be processed to completion locally. The mechanism of the algorithm is elaborated below.
On the Internet of Things, the information gathered by an edge neuron (Nerve Factor), divided according to the layering described in the background art, can roughly be exemplified as follows:
If the temperature information gathered by a temperature controller is used for an alarm, the temperature sensor makes a local response, so the information belongs to the control plane. If the temperature information is used to monitor a greenhouse, it must be passed to a higher-level regional core neuron for analysis, and then belongs to the knowledge plane. If it is also to be stored in the core neuron for later temperature analysis and mining, in order to specify a more optimized temperature-control scheme, the information belongs to the data plane.
This shows that whether information should be processed locally or submitted to a higher layer cannot be decided by the Internet of Things plane to which it belongs. The algorithms of the present invention are therefore all cross-plane algorithms, whose function is unaffected by the plane to which a piece of information belongs.
The concrete implementation steps of the algorithm are as follows:
An edge neuron (Nerve Factor) is a bottom-level unit with an intelligent CPU, and can itself naturally differentiate whether the information it collects needs local processing or should be submitted to the next-higher neuron.
From here on, information that is completed locally is abbreviated Local Mission (LM), and information that needs to be submitted upward is abbreviated Super Task (ST).
For the local area network formed by edge neurons (Nerve Factor) and a regional core neuron (first level), the intelligent CPU of the regional core neuron (first level) can likewise be programmed to differentiate LM from ST. LM is completed locally, for example the use of a company's internal surveillance cameras; ST is passed up to the next-higher regional core neuron (second level), for example when the lighting control system of a single company must pass its information to the lighting control center of the whole building.
At the regional core neuron (second level), the intelligent CPU can likewise be programmed to differentiate LM from ST. Processing and uploading proceed level by level in this way, so that information belonging to different planes is delivered accurately, according to the specified requirements, to the regional core neuron capable of processing it.
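The level-by-level differentiation above reduces, at each neuron, to a conditional judgment (the specification suggests programming it in a language such as C++ or Java; Python is used here for brevity). The rule table mapping an information type to the lowest level able to complete it is a hypothetical stand-in for the "programmed settings" of the text.

```python
LOCAL_MISSION = "LM"   # information finished at the current level
SUPER_TASK = "ST"      # information passed up to the next-higher neuron

# Hypothetical rules: the lowest level at which each kind of information
# can be completed (1 = regional core neuron (first level), and so on).
HANDLING_LEVEL = {
    "company_camera": 1,       # e.g. intra-company surveillance use
    "building_lighting": 2,    # e.g. building-wide lighting control
    "temperature_archive": 3,  # e.g. stored at the core neuron for mining
}

def differentiate(info_type, current_level):
    """Return LM if this level can finish the information, else ST."""
    if HANDLING_LEVEL[info_type] <= current_level:
        return LOCAL_MISSION
    return SUPER_TASK

print(differentiate("company_camera", 1))     # LM: finished locally
print(differentiate("building_lighting", 1))  # ST: passed to the second level
```

Each neuron applies the same judgment, so a piece of information climbs exactly until it reaches the first level capable of processing it, regardless of which plane it belongs to.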
1.4. The queuing parallel algorithm:
The present invention creates this algorithm; its design is based on the following:
In a large-scale Internet of Things with a core neuron, there is always much information to be submitted to the core neuron. The most important case is information that, after being submitted through one wide area network and processed by the core neuron, must be fed back to another wide area network. A concrete example: in the logistics company, the confirmation message generated when goods are dispatched in province A must be submitted to the national parent company and then fed back to the destination province B, so that the customer can see it. The processing speed of the core neuron is limited and its load capacity is also limited, so some information always needs priority treatment.
The function of this algorithm is to rank the information that regional core neurons submit to the core neuron according to a weighting standard; the information then enters the core neuron in queue order and is processed.
The specific design of the algorithm is as follows:
A. As can be seen from the neural-network structure diagram in 1.1, different edge neurons (Nerve Factor) pass through different numbers of levels on the way to the core neuron. We therefore define a variable, Away Distance (abbreviated AD), representing the level distance, in number of neurons within the analysis hub, from an edge neuron (Nerve Factor) to the core neuron. The value of AD equals the number of regional neurons that the edge neuron's (Nerve Factor's) information must pass through to reach the core neuron.
B. In reality, the various local area networks, wide area networks, and proprietary networks differ in their importance to society; the importance level of a banking network, for example, is higher than that of a civilian entertainment network. We define a variable, Significant Level (abbreviated SL), representing the importance level of each local network. The value of SL equals the sum of the SL values of all regional networks that the edge neuron's (Nerve Factor's) information passes through on its way to the core neuron.
For example: in a street subbranch of the Bank of China in Chengdu, a banknote scanner is an edge neuron (Nerve Factor), and the information it records belongs to the subbranch's local area network; suppose its SL value is 3. The subbranch belongs to the metropolitan area network of the Bank of China in Chengdu, whose SL value is assumed to be 3, and the Chengdu branch belongs to the Sichuan Province wide area network, whose SL value is also assumed to be 3. At the national headquarters of the Bank of China, the SL value obtained for the information sent by this banknote scanner is therefore 3 + 3 + 3 = 9.
C. In reality, the information delivered to the core neuron over the Internet of Things also differs in importance; information from the alarm system of a public security organ, for instance, is more important than information from a residential street-lighting system. The importance levels of information are formulated uniformly by state units for convenient nationwide use. We define a variable, Information Importance Level (abbreviated IIL), representing the importance of the information that an edge neuron (Nerve Factor) collects and sends to the core neuron via the regional core neurons at each level.
D. Based on the above, we set the queuing order (Final Order) of information arriving at the core neuron to:
Final Order = W1*AD + W2*SL + W3*IIL, where W1, W2, and W3 are weights, determined by actual conditions, for the three quantities of steps A, B, and C, each taking a value in the range 0–1.
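The formula above can be sketched as a small scoring routine. The weights and the three example messages are hypothetical, and the sketch assumes, for illustration, that a larger Final Order means earlier service; the specification does not fix that direction.

```python
# Weights for AD, SL, IIL, each in 0-1, chosen per actual conditions.
W1, W2, W3 = 0.2, 0.3, 0.5

def final_order(ad, sl, iil):
    """Queuing score of one piece of information bound for the core neuron."""
    return W1 * ad + W2 * sl + W3 * iil

# (AD, SL, IIL) for three pending messages; the bank example of step B
# would contribute SL = 3 + 3 + 3 = 9 to the first entry.
pending = {
    "alarm": (2, 9, 10),
    "streetlight": (2, 3, 1),
    "logistics": (3, 6, 5),
}

# Serve the highest Final Order first (illustrative convention).
queue = sorted(pending, key=lambda name: final_order(*pending[name]), reverse=True)
print(queue)  # ['alarm', 'logistics', 'streetlight']
```

With these weights the alarm message (score 8.1) is served before the logistics confirmation (4.9) and the streetlight report (1.8), matching the intuition that IIL should dominate when W3 is the largest weight.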
1.5. The heavyweight kernel scheduling algorithm:
The present invention creates this algorithm; its design philosophy is based on the following points:
First, the core neuron of the Internet of Things has the most capable intelligent CPU, and it can distribute the task of processing a given piece of data information to other competent CPUs.
Second, across the whole Internet of Things neural network, at any given period some regional core neurons are idle or operating below full capacity.
Based on the above two points, we propose a feasible scheme: since the core neuron's workload is large and it processes a great deal of information, while in the same period there are regional core neurons in the Internet of Things that are idle or not operating at full capacity, we design the heavyweight kernel scheduling algorithm to let the core neuron distribute data-processing tasks to regional core neurons that are idle or not operating at full capacity. This gives full play to the widely distributed, cloud-computing character of the Internet of Things. The steps of the algorithm are elaborated below.
Steps of the heavyweight kernel scheduling algorithm:
First, as can be seen from Figure 1, different regional core neurons pass through different numbers of levels on the way to the core neuron. We therefore define a variable, Away Center Distance (abbreviated ACD), representing the level distance, in number of neurons within the analysis hub, from a regional core neuron to the core neuron. The value of ACD equals the number of regional neurons that the regional core neuron's information must pass through to reach the core neuron.
Second, on each regional core neuron a variable Mission Status is defined, representing the neuron's degree of idleness, with three states:
When the regional core neuron is operating at full capacity, Mission Status = 2, and the core neuron cannot allocate tasks to it.
When the regional core neuron is operating below full capacity, Mission Status = 1, and the core neuron can allocate tasks to it. The concrete operation is: when a regional core neuron's own Mission Status = 1, it sends the value of this variable to the core neuron; upon receiving it, the core neuron judges that it may distribute data-processing tasks to the regional core neuron that sent the information, and records the time at which that regional neuron sent the information, denoted Time Sign.
When the regional core neuron is idle, Mission Status = 0, and the core neuron can allocate tasks to it. The concrete operation is: when a regional core neuron's own Mission Status = 0, it sends the value of this variable to the core neuron; upon receiving it, the core neuron judges that it may distribute data-processing tasks to the regional core neuron that sent the information, and records the time at which that regional neuron sent the information, denoted Time Sign.
Based on the above, three data sets are stored in the CPU data of the core neuron: the distances of the regional core neurons in the Internet of Things from the core neuron, the ACD data set;
the state information the core neuron receives from the regional core neurons, the Mission Status data set;
and the times, recorded by the core neuron, at which the regional neurons sent their Mission Status information, the Time Sign data set.
It should be noted in particular that the three data sets are interrelated; the table below lists several concrete examples:
Table 1: example data of the variables in the database
Third, when the core neuron distributes data-processing tasks among the many regional core neurons, it follows the principle of scheduling the regional core neurons according to the priority order of the three data sets: Mission Status > Time Sign > ACD. The operation is exemplified as follows:
First, the available regional core neurons are selected according to Mission Status: to the core neuron, a regional core neuron with Mission Status = 0 is preferred over one with Mission Status = 1.
Next, regional core neurons are selected according to Time Sign: the more recent the Time Sign, the higher the priority. This is based on the practical observation that a regional core neuron that was idle very recently is probably still idle, while one that reported idleness long ago may well have become busy.
Finally, regional neurons are selected according to the value of ACD: the larger the ACD, the farther the regional neuron is from the core neuron, and the lower its selection priority, because the farther the distance, the longer the data information takes in transit.
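The three-key selection rule above can be sketched as a single sort: candidates with Mission Status = 2 are excluded, and the rest are ranked by Mission Status (0 before 1), then by the most recent Time Sign, then by the smallest ACD. The candidate records below are hypothetical examples in the spirit of Table 1.

```python
# Each record: (name, mission_status, time_sign, acd); time_sign grows with time.
candidates = [
    ("region_A", 1, 105, 2),
    ("region_B", 0, 90, 3),
    ("region_C", 0, 120, 3),
    ("region_D", 2, 130, 1),   # at full capacity: never schedulable
]

def rank_regions(records):
    """Return the available regions, best first: Mission Status ascending,
    Time Sign descending (newer first), ACD ascending (nearer first)."""
    available = [r for r in records if r[1] != 2]
    return sorted(available, key=lambda r: (r[1], -r[2], r[3]))

order = [name for name, *_ in rank_regions(candidates)]
print(order)  # ['region_C', 'region_B', 'region_A']
```

Note how region_D is excluded outright despite its small ACD, and region_C beats region_B because, both being idle, its Time Sign is more recent; ACD would only break a remaining tie.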
Based on the above selection process, the core neuron and the available regional core neurons finally form a multi-CPU intelligent system: the core neuron assigns tasks to the available regional core neurons, and together they complete the processing of a given piece of data information.
The concrete implementation steps are introduced next:
Step 1: using the neuron distributed-location coding method of 1.2, encode every unit with an intelligent CPU that is connected to the Internet of Things, and store the resulting DL Codes in a database with restricted access at the core neuron (hereinafter the Center Data Base). The core neuron can then query the database to obtain the exact position of each edge neuron (Nerve Factor) or regional core neuron.
Step 2: embed the lightweight layered differentiation algorithm of 1.3 into the intelligent CPUs of the edge neurons and regional core neurons, so that these CPUs can perform the function the algorithm requires, namely distinguishing LM from ST. The concrete coding can be done in a common language such as C++ or Java, using conditional judgments.
Step 3: according to the requirements of the queuing parallel algorithm of 1.4 and the situation of the actual Internet of Things, configure the values of W1, W2, and W3 in step D of 1.4. Once this is done, the data information that the regional core neurons deliver to the core neuron is queued and scheduled in good order according to the queuing parallel algorithm.
Step 4: according to the requirements of the heavyweight kernel scheduling algorithm of 1.5, when building the Internet of Things, or when adding a regional core neuron to it, compute the value of ACD and store it in the Center Data Base; then write code and embed it in the intelligent CPU of each regional core neuron so that each CPU can report the value of Mission Status to the core neuron. The concrete coding can be done in a common language such as C++ or Java, using conditional judgments.
Finally, write code and embed it in the intelligent CPU of the core neuron so that the core neuron records the times at which the regional neurons send their Mission Status information. The concrete coding can be done in C++ or Java, using the usual time-monitoring packages and classes.
The above ACD, Mission Status, and Time Sign values are stored in the Center Data Base. The core neuron queries the information in the database and completes the scheduling and distribution of data information according to the steps of the heavyweight kernel scheduling algorithm in 1.5.
To sum up: the Internet of Things neural-network structure formed by all the algorithms and concrete steps narrated above is what the present invention creates: the cross-plane intelligent data analysis hub.
The above has shown and described the basic principles, principal features, and advantages of the present invention. The above embodiments merely describe the technical scheme of the present invention and do not limit it; the present invention is extensible in application to other modifications, variations, and uses, and all such modifications, variations, and uses are considered to fall within the scope of the claimed invention.