CN105205158A - Big data retrieval method based on cloud computing - Google Patents

Big data retrieval method based on cloud computing

Info

Publication number
CN105205158A
CN105205158A
Authority
CN
China
Prior art keywords
node
index
retrieval
local
local index
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510629459.2A
Other languages
Chinese (zh)
Inventor
赖真霖
文君
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Sixiang Lianchuang Technology Co Ltd
Original Assignee
Chengdu Sixiang Lianchuang Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Sixiang Lianchuang Technology Co Ltd
Priority to CN201510629459.2A
Publication of CN105205158A
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/907 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/901 Indexing; Data structures therefor; Storage structures
    • G06F16/903 Querying
    • G06F16/90335 Query processing

Abstract

The invention provides a big data retrieval method based on cloud computing. The method comprises the following steps: the index structure of a cloud storage system is divided into two levels, a master index and local indexes; data content is stored in the local indexes; in the master index, a linked queue is used to build an index over the metadata published by all the local indexes; and retrieval is performed through the association of the metadata. The data retrieval method effectively supports retrieval in multiple modes, scales well, improves the concurrency of the master index, has good dynamic real-time behaviour, and keeps the overall load of the index structure balanced.

Description

Big data retrieval method based on cloud computing
Technical field
The present invention relates to data processing, and in particular to a big data retrieval method based on cloud computing.
Background art
Cloud computing systems can provide mass storage and reliable services, and have therefore received increasing attention. In a cloud infrastructure, thousands of interconnected computers form a "cloud" that provides services; a large number of users can share this cloud simultaneously and tailor their resource requests to their actual needs. As an important component of cloud data processing, the vast majority of current cloud storage systems build their indexes with distributed hash tables and organize data as key-value pairs. Such cloud storage systems therefore only support keyword search, and data is accessed through point queries. In today's big data applications, however, users tend to issue multi-dimensional queries over multiple key values, and existing solutions can only obtain such results by running a background batch job that scans the whole data set. This kind of solution lacks real-time behaviour: newly stored data tuples cannot be retrieved promptly, because they only become retrievable after the background batch job has finished its scan.
Summary of the invention
To solve the problems of the prior art described above, the present invention proposes a big data retrieval method based on cloud computing, characterized by comprising:
dividing the index structure of the cloud storage system into two levels, a master index and local indexes; storing data content in the local indexes; in the master index, using a linked queue to build an index over the metadata published by all the local indexes; and performing retrieval through the association of this metadata.
Preferably, dividing the index structure of the cloud storage system into a master index and local indexes further comprises:
1) dividing the storage space of the lower-level cloud storage system, and setting the spatial range managed by each local index according to an equal-size and ordered policy;
2) according to the spatial ranges of the local indexes assigned in step 1), mapping the data in the cloud storage system to the corresponding local indexes; after the mapping is complete, each local index is internally ordered and the local indexes are ordered with respect to one another;
3) each lower-level local index publishing its highest-level nodes to the upper-level master index, the master index building a global linked-queue index over the nodes published from below, thereby associating the local indexes and forming a complete index space;
4) each lower-level local index iteratively publishing nodes level by level downwards, and deciding whether to continue publishing nodes of the local index to the next level down by comparing the estimated growth ratio of the retrieval rate after publication with the growth ratio of the master index's memory occupation after publication; if the estimated growth ratio of the retrieval rate is lower than the growth ratio of the master index's memory occupation, publication to the lower levels stops.
Preferably, performing retrieval through the association of the metadata further comprises:
First, using the master index as the entry point of retrieval: by searching the master index, the local index that actually contains the data to be retrieved is determined. Second, the retrieval process is forwarded to that local index; after the local index has retrieved the requested data, the result is returned directly to the originator of the retrieval request. The concrete steps comprise:
1) sending the interval to be retrieved to the upper-level cloud platform server, and using the lower bound of the interval as the entry key for searching the master index; 2) after the upper-level master index has located the concrete local index from the lower-bound key, forwarding the retrieval process to the lower-level local index that published this key; 3) when the local index receives the forwarded retrieval request, first traversing its own index according to the interval to be retrieved, until the upper bound of the interval is reached; if the interval to be retrieved extends beyond the range managed by a local index, forwarding the retrieval request to that local index's successor sibling, and returning the retrieved data set directly from the local index to the requesting end.
Preferably: before committing a data update, each transaction first checks whether the data it has read have been modified by another transaction since they were read; if another transaction has updated them, the committing transaction is rolled back; each node of the linked queue further contains two flag bits and one lock, wherein the marked flag indicates whether the node is being deleted, the linked flag indicates whether the node has been fully inserted, i.e. whether the pointer fields at all levels have been updated, and each node maintains its own lock; in addition, two sentinel nodes head and tail are defined, whose key values are the constants min_int and max_int respectively;
the locate operation of the linked queue starts searching from the highest level of the sentinel node head and descends level by level, stopping at each level either at the position of the searched key k or at the sentinel node tail; if the node corresponding to k is found, the variable i is updated to record the highest level of that node, and the predecessor node pre[i] of each level is recorded;
the insert operation of the linked queue comprises:
1) first calling the locate operation and obtaining its result; if the current node is found, i.e. a node with key value k already exists, it cannot be inserted; otherwise, proceeding to step 2;
2) locking the predecessor node array pre from the bottom up;
3) verifying whether the next node of pre and the returned successor node array succ have changed; if they have changed, first releasing the locks just acquired and then re-locating pre and succ; if pre and succ have not changed, proceeding to step 4;
4) performing the insertion upwards from the bottom level, then setting the linked flag to true to indicate that the inserted node is fully linked, and finally releasing all locks;
the delete operation of the linked queue removes a specified node: first locating the node, then checking whether the state of the current node is valid, i.e. fully linked and not already being deleted; if the state is valid the node is locked, but the node may meanwhile have been deleted by another thread, in which case false is returned; otherwise the node's marked flag is set to true and the predecessor nodes are locked from the bottom up; if the state of succ and pre has changed, the previously acquired locks are released and the node is re-located; finally, the node is physically deleted, all locks are released, and true is returned;
the search operation of the linked queue first finds the node's position via the locate operation and then returns the retrieval result together with the corresponding predecessor and successor nodes; if the node is not found, or the current node is being deleted, or the current node is not fully linked, the retrieval fails; if the node is found and it is neither being deleted nor incompletely linked, the retrieval succeeds.
Compared with the prior art, the present invention has the following advantages:
The present invention proposes a data retrieval method that effectively supports multiple retrieval modes, scales well, improves the concurrency of the master index, provides good dynamic real-time behaviour, and keeps the overall load of the index structure balanced.
Brief description of the drawings
Fig. 1 is a flow chart of the big data retrieval method based on cloud computing according to an embodiment of the present invention.
Detailed description of the embodiments
A detailed description of one or more embodiments of the present invention is provided below, together with the accompanying drawing illustrating the principles of the invention. The present invention is described in connection with such embodiments, but is not limited to any particular embodiment. The scope of the present invention is defined only by the claims, and the present invention covers many alternatives, modifications and equivalents. Many specific details are set forth in the following description to provide a thorough understanding of the present invention; these details are provided for exemplary purposes, and the present invention may also be practised according to the claims without some or all of these details.
One aspect of the present invention provides a big data retrieval method based on cloud computing. Fig. 1 is a flow chart of the big data retrieval method based on cloud computing according to an embodiment of the present invention. The present invention uses a custom index structure that effectively supports multiple retrieval modes, scales well, and has good dynamic real-time behaviour; a partitioning and merging algorithm resolves hot spots on local servers and keeps the overall load of the index structure balanced. In addition, the present invention introduces a linked queue into the upper-level master index, which improves the load-bearing capability and concurrency of the master index and increases the throughput of the index as a whole.
The present invention divides the whole index structure into an upper and a lower level: the indexed data is stored in the lower-level indexes, while the upper-level index plays a locating and guiding role. When the index is built, the data set to be indexed is first split, according to an equal-share principle, into subsets containing equal amounts of data (the number of subsets equals the number of lower-level local index servers). The partitioned data subsets then correspond one-to-one to the lower-level index servers, and a local index based on a linked queue is built on each lower-level index server. Once the local indexes have been built, each local index selects a subset of its nodes as "representatives" of its index range and publishes them to the upper-level master index. During publication, the lower-level nodes are not copied intact to the upper level; instead, the metadata of the published nodes (the index key, the local index server's IP address, and the physical disk block number on the local index server) is extracted, and only this metadata is sent to the upper-level index, so as to reduce the memory overhead of the upper-level index and allow it to store more nodes. After the master index has received the metadata published by each lower-level local index, it organizes this metadata into a global index in the form of a linked queue, logically associating the independent lower-level local indexes and maintaining the global consistency of the index space. The upper-level master index serves as the entry point of the whole index: through the master index's locating step, a search operation is forwarded to a concrete lower-level local index, where the required data is finally found and returned.
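As an illustration only (not part of the original disclosure), the following Python sketch shows how local indexes might publish node metadata, here the key, the local server's IP address, and a disk block number, to a master index that keeps a single ordered view used purely for routing. The class and method names (IndexEntry, LocalIndex, MasterIndex, publish, locate) are assumptions made for the example, and for simplicity every key is published rather than only the highest-level nodes.

```python
import bisect
from dataclasses import dataclass

@dataclass(order=True)
class IndexEntry:
    key: int          # index key published by a local index
    server_ip: str    # IP address of the local index server holding the data
    block_no: int     # physical disk block number on that server

class LocalIndex:
    """Lower-level index: stores the actual data and publishes metadata 'representatives'."""
    def __init__(self, server_ip, items):
        self.server_ip = server_ip
        self.items = dict(items)                       # key -> (block_no, value)

    def representatives(self):
        """Metadata of the published nodes (for simplicity, every key is published)."""
        return [IndexEntry(k, self.server_ip, blk) for k, (blk, _) in self.items.items()]

class MasterIndex:
    """Upper-level index: an ordered list of metadata entries, used only for routing."""
    def __init__(self):
        self.entries = []                              # kept sorted by key

    def publish(self, entries):
        for e in entries:
            bisect.insort(self.entries, e)

    def locate(self, key):
        """Return the server responsible for `key` (largest published key not above it)."""
        keys = [e.key for e in self.entries]
        i = max(bisect.bisect_right(keys, key) - 1, 0)
        return self.entries[i].server_ip

# usage: two local indexes publish their metadata, the master index routes a lookup
local_a = LocalIndex("10.0.0.1", {1: (11, "a"), 2: (12, "b"), 3: (13, "c")})
local_b = LocalIndex("10.0.0.2", {4: (21, "d"), 5: (22, "e"), 6: (23, "f")})
master = MasterIndex()
master.publish(local_a.representatives())
master.publish(local_b.representatives())
print(master.locate(5))                                # -> 10.0.0.2
```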
Current Internet applications usually adopt cloud storage systems to store their business big data; these cloud storage systems generally expose access through a hash-table interface and cannot support range retrieval well. The present invention proposes a two-level index implemented on top of such a cloud storage system. The whole index construction process is explained as follows:
1) The storage space of the lower-level cloud storage system is divided, and the spatial range managed by each local index is set according to an equal-size and ordered policy. Because of the nature of hash mapping, the keys in a hash table are stored unordered. Suppose, for example, that 3 local indexes are used to hold the data of a distributed storage space with keys 1-12; according to the equal-size principle, each local index should then store 4 data items.
2) According to the local index ranges assigned in step 1), the data in the cloud storage system are mapped to the corresponding local indexes. After the mapping is complete, each local index is internally ordered, and the local indexes are also ordered with respect to one another.
3) Each lower-level local index publishes its highest-level nodes to the upper-level master index; the master index builds a global linked-queue index over the nodes published from below, associating the local indexes and forming a complete index space.
4) Each lower-level local index then iteratively publishes nodes level by level downwards. Whether to continue publishing nodes to the next level down is decided by comparing the estimated growth ratio of the retrieval rate after publication with the growth ratio of the master index's memory occupation after publication. If the estimated growth ratio of the retrieval rate is lower than the growth ratio of the master index's memory occupation, publication to the lower levels stops.
When the lower-level local indexes publish nodes to the upper-level master index for association, a dynamic publishing adjustment algorithm is introduced so that each lower-level local index selectively publishes nodes to the upper level, where the global index is built and the integrity of the index structure is maintained. When a lower-level local index publishes nodes to the upper level, it gradually increases the number of published nodes in a top-down manner: first, each local index publishes its highest-level nodes to the master index, and then each local index decides whether to continue publishing to the next level down based on the two growth ratios described above. When the publication is extended downwards, only the metadata of nodes not previously published needs to be sent to the upper-level master index (only nodes the master index does not yet contain are inserted).
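A minimal sketch of the publishing decision described above, assuming placeholder estimator functions for the retrieval rate and the master-index memory occupation (the real system would supply its own models); publication descends one level at a time and stops once the estimated retrieval-rate growth ratio falls below the memory-occupation growth ratio.

```python
def publish_levels(local_levels, estimate_hit_rate, memory_cost):
    """Decide, level by level, how far down a local index should publish its nodes.

    `local_levels` lists the node keys per level, highest level first; the two
    callbacks are placeholders for the system's own estimators of retrieval rate
    and master-index memory occupation.
    """
    published = list(local_levels[0])                  # the highest level is always published
    for level in local_levels[1:]:
        candidate = published + [n for n in level if n not in published]
        rate_growth = estimate_hit_rate(candidate) / estimate_hit_rate(published)
        mem_growth = memory_cost(candidate) / memory_cost(published)
        if rate_growth < mem_growth:                   # gain no longer justifies the memory cost
            break
        published = candidate                          # publish this level and try the next one
    return published

# usage with toy estimators: the hit rate saturates while memory grows linearly
levels = [[16], [8, 24], [4, 12, 20, 28], [2, 6, 10, 14, 18, 22, 26, 30]]
print(publish_levels(levels,
                     estimate_hit_rate=lambda nodes: min(len(nodes), 5.0),
                     memory_cost=lambda nodes: float(len(nodes))))
# -> [16, 8, 24]: publication stops after the second level
```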
In a linked queue (a multi-level, skip-list-like structure), the indexed data is stored in the nodes of the bottom level, while each node is promoted upwards with a certain probability; the promoted nodes serve as acceleration nodes for retrieval. Consequently, the number of nodes per level grows geometrically from the top level down.
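As a small numeric illustration (not from the patent text), if each node is promoted one level up with probability p, the expected number of nodes at the k-th level above the bottom is roughly n·p^k, so counts shrink geometrically going up and grow geometrically going down:

```python
def expected_nodes_per_level(n, p=0.5, levels=6):
    """Expected node count per level when each node is promoted upwards with probability p."""
    return [n * p ** k for k in range(levels)]

print(expected_nodes_per_level(1000))   # [1000.0, 500.0, 250.0, 125.0, 62.5, 31.25]
```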
When a search operation is performed on this index structure, the upper-level master index first serves as the entry point of the retrieval: by searching the master index, the local index that actually contains the data to be queried is determined. The retrieval process is then forwarded to that local index, and after it has retrieved the requested data it returns the result directly to the originator of the retrieval request. The detailed process is as follows:
1) The interval to be retrieved is sent to the upper-level cloud platform server, and the lower bound of the interval is used as the entry key for searching the master index. 2) After the upper-level master index has located the concrete local index from the lower-bound key, the retrieval process is forwarded to the lower-level local index that published this key. 3) When the local index receives the forwarded retrieval request, it first traverses its own index according to the interval to be retrieved, until the upper bound of the interval is reached. If the interval to be retrieved extends beyond the range managed by a local index, the retrieval request needs to be forwarded to that local index's successor sibling; since the local indexes are also ordered with respect to one another, this forwarding guarantees the completeness and correctness of the retrieval. When the range retrieval finishes, the retrieved data set is returned directly from the local index to the requesting end.
Range retrieval is one of the principal features of the present invention, and single-key retrieval, as a special case of range retrieval, follows the same flow described above. The key difference is that single-key retrieval involves neither a traversal inside a local index nor forwarding between local indexes: after the upper-level master index forwards the request to a lower-level local index, the data to be queried is found directly inside that local index and returned.
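The following is an independent, self-contained sketch of this routing flow, with simplified LocalIndex and MasterIndex classes redefined just for the example: the master index routes the query by the interval's lower bound, the local index scans up to the upper bound, and any remainder of the interval is forwarded to the successor sibling.

```python
import bisect

class LocalIndex:
    """A lower-level index holding one ordered slice of the key space."""
    def __init__(self, keys):
        self.keys = sorted(keys)               # ordered keys managed by this local index
        self.successor = None                  # next local index in key order (successor sibling)

    def range_search(self, low, high):
        """Scan own keys in [low, high]; forward the rest of the interval to the successor."""
        i = bisect.bisect_left(self.keys, low)
        result = []
        while i < len(self.keys) and self.keys[i] <= high:
            result.append(self.keys[i])
            i += 1
        # the interval extends beyond this local index's range: forward to the successor sibling
        if self.successor is not None and self.keys and high > self.keys[-1]:
            result += self.successor.range_search(self.keys[-1] + 1, high)
        return result

class MasterIndex:
    """Upper-level index: routes a query to the local index covering the interval's lower bound."""
    def __init__(self, locals_in_order):
        self.lows = [li.keys[0] for li in locals_in_order]
        self.locals = locals_in_order

    def range_search(self, low, high):
        i = max(bisect.bisect_right(self.lows, low) - 1, 0)
        return self.locals[i].range_search(low, high)

# usage: three ordered local indexes linked by successor pointers
a, b, c = LocalIndex([1, 2, 3, 4]), LocalIndex([5, 6, 7, 8]), LocalIndex([9, 10, 11, 12])
a.successor, b.successor = b, c
master = MasterIndex([a, b, c])
print(master.range_search(3, 10))             # [3, 4, 5, 6, 7, 8, 9, 10]
print(master.range_search(6, 6))              # [6]  (single-key retrieval as a special case)
```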
In the index structure of the present invention, the whole index space is divided into mutually disjoint subsets, each of which is maintained by a separate local index. As the index is dynamically adjusted by insertions, deletions and similar operations, the sizes of the local indexes may diverge: some local indexes gradually grow while others shrink. Such size changes can make the load across the local indexes unbalanced, because a relatively larger local index is accessed with a higher probability. A corresponding dynamic partitioning algorithm is therefore needed to resolve hot spots that may arise in a local index. To describe this partitioning algorithm, four definitions are given first:
1) S is a linked queue consisting of several ordered linked lists; all data is kept at level 1, the subset of nodes selected with a certain probability is kept at level 2, and so on for the higher levels.
2) key(x) denotes the key corresponding to an element x stored in S.
3) level(x) denotes the height of the highest node corresponding to an element x stored in S; level(S) denotes the maximum height over all elements stored in S.
4) wall(x, l) denotes the first element y to the right of element x in S that satisfies key(x) < key(y) and level(y) > l.
The partitioning algorithm for a local index relies on the definition of wall(x, l). The algorithm takes three parameters: S1, the linked queue of the local index to be partitioned; S2, the linked queue of the local index that receives the latter half of the data split off from S1; and l, a parameter determining the split position.
The concrete flow of the algorithm is:
1) wall(S1, i) is selected, where i is decremented from level(S1) down to l, trying step by step until the first defined wall(S1, i) is found.
2) Taking wall(S1, i) as the boundary, the successor pointers of the first-half nodes that point to wall(S1, i) are set to NULL, i.e. the nodes of the latter half are removed from S1; then, again taking wall(S1, i) as the boundary, the nodes of the latter half are sent to S2 and inserted one by one into the linked queue of S2.
The partitioning algorithm first checks whether a node at level l can serve as the split point. If not, the level is gradually lowered until a node that can serve as the split point is found. The first-half nodes that point to the split point are recorded, so that the relevant pointer links can be updated after the split succeeds. Once all preparatory work is complete, the nodes of the latter half are migrated to a new server, and the first-half nodes remaining on the original server update their associated pointers so that the pointers needing adjustment point to the new server.
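Below is a minimal sketch of the wall-based split under strong simplifications: each node is represented only by its (key, level) pair in a key-sorted list, so the example captures the wall selection and the migration of the latter half, not the pointer surgery on an actual multi-level structure. The function names wall and split are illustrative.

```python
def wall(nodes, l):
    """Index of the first node to the right of the head of `nodes` whose level exceeds l, or None."""
    for j in range(1, len(nodes)):
        if nodes[j][1] > l:
            return j
    return None

def split(s1, l):
    """Split the local index s1 (a key-sorted list of (key, level) pairs) at a wall node.

    Levels are tried from the maximum level of s1 down to l until a wall is found;
    the latter half, from the wall node onward, is moved into a new local index s2.
    """
    max_level = max(level for _, level in s1)
    for i in range(max_level, l - 1, -1):
        j = wall(s1, i)
        if j is not None:
            s2 = s1[j:]            # the latter half migrates to the new server
            del s1[j:]             # the first half keeps only the nodes before the wall
            return s2
    return []                      # no usable split point was found

# usage: keys paired with their (probabilistically assigned) heights
s1 = [(1, 3), (2, 1), (3, 2), (4, 1), (5, 3), (6, 1), (7, 2), (8, 1)]
s2 = split(s1, l=1)
print(s1)   # [(1, 3), (2, 1), (3, 2), (4, 1)]
print(s2)   # [(5, 3), (6, 1), (7, 2), (8, 1)]
```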
In the index structure proposed by the present invention, the index data is in fact stored in the local indexes, while the upper-level master index is used to associate the local indexes and maintain the global consistency of the index space. As the entry point of the whole index, the upper-level master index bears the greatest pressure, so improving its efficiency is an important concern of this work. Insertions and deletions do occur in the master index, but the proportion of such operations is not expected to be high; the bulk of the operations should be searches. Based on this analysis, the present invention implements the upper-level index using the linked-queue technique.
The present invention assumes that concurrent transactions of multiple users do not interfere with each other while being processed, and that each transaction can process the data it affects without taking locks. Before committing a data update, each transaction first checks whether the data it read have been modified by another transaction since they were read; if another transaction has updated them, the committing transaction is rolled back. In an environment with few data conflicts, the cost of an occasional transaction rollback is lower than the cost of locking the data while reading it, so this approach can achieve higher throughput than other concurrency-control methods.
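A minimal sketch of this optimistic check-then-commit scheme; the per-record version counter and the OptimisticStore name are assumptions introduced for the example, not terms from the patent.

```python
class OptimisticStore:
    """Each record carries a version; a commit succeeds only if nobody changed it meanwhile."""
    def __init__(self):
        self.data = {}                                  # key -> (version, value)

    def read(self, key):
        return self.data.get(key, (0, None))            # (version seen, value)

    def commit(self, key, seen_version, new_value):
        version, _ = self.data.get(key, (0, None))
        if version != seen_version:                      # another transaction updated the data
            return False                                  # caller must roll back and retry
        self.data[key] = (version + 1, new_value)
        return True

# usage: transaction B commits between A's read and A's commit, so A is rolled back
store = OptimisticStore()
store.data["k"] = (1, "old")
ver_a, _ = store.read("k")                                # transaction A reads version 1
ver_b, _ = store.read("k")
store.commit("k", ver_b, "written-by-B")                  # transaction B commits first
print(store.commit("k", ver_a, "written-by-A"))           # False -> A must roll back
```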
Each node of the linked queue of the present invention also contains two flag bits and one lock. The marked flag indicates whether the node is being deleted. The linked flag indicates whether the node has been fully inserted, i.e. whether the pointer fields at all levels have been updated. Because fine-grained locking is used here, each node maintains its own lock. In addition, two sentinel nodes, head and tail, are defined, whose key values are the constants min_int and max_int respectively.
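A minimal sketch of such a node in Python; the flag and lock fields mirror the description above, while MIN_INT, MAX_INT, and the fixed height are stand-ins chosen for the example.

```python
import threading

MIN_INT, MAX_INT = float("-inf"), float("inf")   # stand-ins for the min_int / max_int constants
MAX_HEIGHT = 8                                   # fixed maximum height chosen for the example

class Node:
    """A linked-queue node: one forward pointer per level, two flags, and a per-node lock."""
    def __init__(self, key, height):
        self.key = key
        self.next = [None] * height              # pointer field for each level
        self.marked = False                      # True while the node is being deleted
        self.linked = False                      # True once pointers at all levels are in place
        self.lock = threading.Lock()             # fine-grained, per-node lock

    @property
    def height(self):
        return len(self.next)

# the two sentinel nodes bounding every search
head = Node(MIN_INT, MAX_HEIGHT)
tail = Node(MAX_INT, MAX_HEIGHT)
head.next = [tail] * MAX_HEIGHT
head.linked = tail.linked = True
```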
The basic operations of the linked queue are locate, insert, delete, and search.
The search, insert, and delete operations all rely on the locate operation. The locate operation starts searching from the highest level of the sentinel node head and descends level by level; at each level it stops either at the position of the searched key k or at the sentinel node tail. If the node corresponding to k is found, the variable i is updated to record the highest level of that node. Whether or not the node exists, the predecessor node pre[i] at every level is recorded. If the node corresponding to k has been found at some level, the entries of the current-node array succ below that level are obviously all identical (the node itself), so they only need to be set once. The locate operation itself takes no locks; it returns a 3-tuple representing the level of the result, the current node positions succ, and the predecessor node positions preds.
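A sketch of the lock-free locate operation, reusing the Node, head, and tail objects from the sketch above; it returns the highest level at which the key was found together with the per-level predecessor and current nodes.

```python
def locate(head, key):
    """Lock-free locate: return (level_found, preds, succs) for `key`.

    preds[i] / succs[i] are the predecessor and current node at level i;
    level_found is the highest level at which `key` was found, or -1.
    """
    level_found = -1
    preds = [head] * head.height
    succs = [None] * head.height
    pred = head
    for lvl in range(head.height - 1, -1, -1):    # descend from the highest level
        curr = pred.next[lvl]
        while curr.key < key:                     # move right until key or the tail sentinel
            pred, curr = curr, curr.next[lvl]
        if level_found == -1 and curr.key == key:
            level_found = lvl                     # highest level at which the node appears
        preds[lvl], succs[lvl] = pred, curr
    return level_found, preds, succs

# on the still-empty structure: level_found is -1 and every succ is the tail sentinel
print(locate(head, 42)[0])                        # -1
```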
The insert operation proceeds as follows:
1) First, the locate operation is called and its result obtained. If the current node is found, i.e. a node with key value k already exists, nothing can be inserted; otherwise, the following insertion steps are carried out.
2) The predecessor node array pre is locked from the bottom up.
3) It is verified whether the next node of pre and the returned successor node array succ have changed. If they have changed, the locks just acquired are released first and pre and succ are re-located. If pre and succ have not changed, the next step is carried out.
4) The insertion is carried out upwards from the bottom level, the linked flag is then set to true to indicate that the inserted node is fully linked, and finally all locks are released.
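A sketch of this insert, building on the Node class and the locate function from the earlier sketches; it illustrates the bottom-up lock, validate, and bottom-up link pattern (with a retry when validation fails) rather than the patent's exact implementation, and the promotion probability of 0.5 is an assumption.

```python
import random

def insert(head, key, max_height=MAX_HEIGHT):
    """Insert `key`: lock predecessors bottom-up, validate them, then link bottom-up."""
    height = 1
    while height < max_height and random.random() < 0.5:
        height += 1                                    # probabilistic promotion of the new node
    while True:
        level_found, preds, succs = locate(head, key)
        if level_found != -1:
            return False                               # a node with this key already exists
        node = Node(key, height)
        locked, valid, prev_pred = [], True, None
        try:
            for lvl in range(height):                  # lock predecessors from the bottom up
                pred = preds[lvl]
                if pred is not prev_pred:              # avoid re-locking the same node
                    pred.lock.acquire()
                    locked.append(pred)
                    prev_pred = pred
                # validate: predecessor not marked and still pointing at the expected successor
                valid = (not pred.marked) and (pred.next[lvl] is succs[lvl])
                if not valid:
                    break
            if not valid:
                continue                               # locks are released in `finally`, then retry
            for lvl in range(height):                  # link from the bottom level upwards
                node.next[lvl] = succs[lvl]
                preds[lvl].next[lvl] = node
            node.linked = True                         # the node is now fully linked
            return True
        finally:
            for n in locked:
                n.lock.release()
```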
The delete operation removes a specified node, and its basic steps mirror the insert operation. The node is first located, and then the state of the current node is checked for validity, i.e. that the node is fully linked and not already being deleted. If the state is valid the node is locked; however, the node may meanwhile have been deleted by another thread, in which case false is returned; otherwise the node's marked flag is set to true. Next, as in the insert operation, the predecessor nodes are locked from the bottom up; if the state of succ and pre has changed, the previously acquired locks are released and the node is re-located. Finally, the node is physically deleted, all locks are released, and true is returned.
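A corresponding sketch of the delete operation, again building on the earlier Node and locate sketches: the victim is marked first, predecessors are then locked bottom-up and validated, and only then is the node physically unlinked.

```python
def delete(head, key):
    """Delete `key`: mark the victim, lock predecessors bottom-up, validate, then unlink."""
    level_found, _, succs = locate(head, key)
    if level_found == -1:
        return False
    victim = succs[level_found]
    if not victim.linked or victim.marked:             # must be fully linked and not being deleted
        return False
    with victim.lock:
        if victim.marked:                              # another thread is already deleting it
            return False
        victim.marked = True
    while True:
        _, preds, _ = locate(head, key)                # refresh the predecessors
        locked, valid, prev_pred = [], True, None
        try:
            for lvl in range(victim.height):           # lock predecessors from the bottom up
                pred = preds[lvl]
                if pred is not prev_pred:
                    pred.lock.acquire()
                    locked.append(pred)
                    prev_pred = pred
                valid = (not pred.marked) and (pred.next[lvl] is victim)
                if not valid:
                    break
            if not valid:
                continue                               # state changed: release locks and retry
            for lvl in range(victim.height):           # physical removal at every level
                preds[lvl].next[lvl] = victim.next[lvl]
            return True
        finally:
            for n in locked:
                n.lock.release()
```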
The search operation first finds the node's position via the locate operation and then returns the retrieval result together with the corresponding predecessor and successor nodes. No locking or other synchronization mechanism is used here, so the search operation is lock-free. If the node is not found, or the current node is being deleted, or the current node is not fully linked, the retrieval fails; if the node is found and it is neither being deleted nor incompletely linked, the retrieval succeeds.
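Finally, a sketch of the lock-free search, continuing from the same sketches; it succeeds only for nodes that are fully linked and not marked for deletion.

```python
def search(head, key):
    """Lock-free search: succeeds only for a node that is fully linked and not marked."""
    level_found, preds, succs = locate(head, key)
    if level_found == -1:
        return False, preds, succs                     # node not present
    node = succs[level_found]
    found = node.linked and not node.marked            # reject half-inserted or half-deleted nodes
    return found, preds, succs

# usage, continuing from the earlier sketches
insert(head, 42)
print(search(head, 42)[0])    # True
print(search(head, 7)[0])     # False
```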
In summary, the present invention proposes a data retrieval method that effectively supports multiple retrieval modes, scales well, improves the concurrency of the master index, provides good dynamic real-time behaviour, and keeps the overall load of the index structure balanced.
Obviously, those skilled in the art should appreciate that the above modules or steps of the present invention can be implemented on a general-purpose computing system; they can be concentrated on a single computing system or distributed over a network formed by multiple computing systems, and optionally they can be implemented as program code executable by a computing system, so that they can be stored in a storage system and executed by a computing system. The present invention is thus not restricted to any specific combination of hardware and software.
It should be understood that the above embodiments of the present invention are only intended to illustrate or explain the principles of the present invention by way of example and are not to be construed as limiting the invention. Any modification, equivalent replacement, improvement and the like made without departing from the spirit and scope of the present invention shall therefore fall within the scope of protection of the present invention. Furthermore, the appended claims are intended to cover all changes and modifications that fall within the scope and boundary of the claims or the equivalents of such scope and boundary.

Claims (4)

1. A big data retrieval method based on cloud computing, characterized by comprising:
dividing the index structure of the cloud storage system into two levels, a master index and local indexes; storing data content in the local indexes; in the master index, using a linked queue to build an index over the metadata published by all the local indexes; and performing retrieval through the association of this metadata.
2. The method according to claim 1, characterized in that dividing the index structure of the cloud storage system into a master index and local indexes further comprises:
1) dividing the storage space of the lower-level cloud storage system, and setting the spatial range managed by each local index according to an equal-size and ordered policy;
2) according to the spatial ranges of the local indexes assigned in step 1), mapping the data in the cloud storage system to the corresponding local indexes; after the mapping is complete, each local index is internally ordered and the local indexes are ordered with respect to one another;
3) each lower-level local index publishing its highest-level nodes to the upper-level master index, the master index building a global linked-queue index over the nodes published from below, thereby associating the local indexes and forming a complete index space;
4) each lower-level local index iteratively publishing nodes level by level downwards, and deciding whether to continue publishing nodes of the local index to the next level down by comparing the estimated growth ratio of the retrieval rate after publication with the growth ratio of the master index's memory occupation after publication; if the estimated growth ratio of the retrieval rate is lower than the growth ratio of the master index's memory occupation, publication to the lower levels stops.
3. The method according to claim 2, characterized in that performing retrieval through the association of the metadata further comprises:
first, using the master index as the entry point of retrieval: by searching the master index, the local index that actually contains the data to be retrieved is determined; second, the retrieval process is forwarded to that local index, and after the local index has retrieved the requested data the result is returned directly to the originator of the retrieval request; the concrete steps comprise:
1) sending the interval to be retrieved to the upper-level cloud platform server, and using the lower bound of the interval as the entry key for searching the master index; 2) after the upper-level master index has located the concrete local index from the lower-bound key, forwarding the retrieval process to the lower-level local index that published this key; 3) when the local index receives the forwarded retrieval request, first traversing its own index according to the interval to be retrieved, until the upper bound of the interval is reached; if the interval to be retrieved extends beyond the range managed by a local index, forwarding the retrieval request to that local index's successor sibling, and returning the retrieved data set directly from the local index to the requesting end.
4. The method according to claim 3, further comprising: before committing a data update, each transaction first checks whether the data it has read have been modified by another transaction since they were read; if another transaction has updated them, the committing transaction is rolled back; each node of the linked queue further contains two flag bits and one lock, wherein the marked flag indicates whether the node is being deleted, the linked flag indicates whether the node has been fully inserted, i.e. whether the pointer fields at all levels have been updated, and each node maintains its own lock; in addition, two sentinel nodes head and tail are defined, whose key values are the constants min_int and max_int respectively;
the locate operation of the linked queue starts searching from the highest level of the sentinel node head and descends level by level, stopping at each level either at the position of the searched key k or at the sentinel node tail; if the node corresponding to k is found, the variable i is updated to record the highest level of that node, and the predecessor node pre[i] of each level is recorded;
the insert operation of the linked queue comprises:
1) first calling the locate operation and obtaining its result; if the current node is found, i.e. a node with key value k already exists, it cannot be inserted; otherwise, proceeding to step 2;
2) locking the predecessor node array pre from the bottom up;
3) verifying whether the next node of pre and the returned successor node array succ have changed; if they have changed, first releasing the locks just acquired and then re-locating pre and succ; if pre and succ have not changed, proceeding to step 4;
4) performing the insertion upwards from the bottom level, then setting the linked flag to true to indicate that the inserted node is fully linked, and finally releasing all locks;
the delete operation of the linked queue removes a specified node: first locating the node, then checking whether the state of the current node is valid, i.e. fully linked and not already being deleted; if the state is valid the node is locked, but the node may meanwhile have been deleted by another thread, in which case false is returned; otherwise the node's marked flag is set to true and the predecessor nodes are locked from the bottom up; if the state of succ and pre has changed, the previously acquired locks are released and the node is re-located; finally, the node is physically deleted, all locks are released, and true is returned;
the search operation of the linked queue first finds the node's position via the locate operation and then returns the retrieval result together with the corresponding predecessor and successor nodes; if the node is not found, or the current node is being deleted, or the current node is not fully linked, the retrieval fails; if the node is found and it is neither being deleted nor incompletely linked, the retrieval succeeds.
CN201510629459.2A 2015-09-29 2015-09-29 Big data retrieval method based on cloud computing Pending CN105205158A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510629459.2A CN105205158A (en) 2015-09-29 2015-09-29 Big data retrieval method based on cloud computing


Publications (1)

Publication Number Publication Date
CN105205158A true CN105205158A (en) 2015-12-30

Family

ID=54952841

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510629459.2A Pending CN105205158A (en) 2015-09-29 2015-09-29 Big data retrieval method based on cloud computing

Country Status (1)

Country Link
CN (1) CN105205158A (en)



Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104834734A (en) * 2015-05-18 2015-08-12 成都博元科技有限公司 Efficient data analysis and processing method
CN104834733A (en) * 2015-05-18 2015-08-12 成都博元科技有限公司 Big data mining and analyzing method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHOU Wei et al.: "基于并发跳表的云数据处理双层索引架构研究" (Research on a two-level index architecture for cloud data processing based on concurrent skip lists), 《计算机研究与发展》 (Journal of Computer Research and Development) *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107506473A (en) * 2017-09-05 2017-12-22 郑州升达经贸管理学院 A kind of big data search method based on cloud computing
CN107506473B (en) * 2017-09-05 2020-10-27 郑州升达经贸管理学院 Big data retrieval method based on cloud computing
CN108984787A (en) * 2018-07-30 2018-12-11 佛山市甜慕链客科技有限公司 It is a kind of for generate index multiple data fields method and system
CN109408543A (en) * 2018-09-26 2019-03-01 蓝库时代(北京)科技有限公司 A kind of intelligence relationship net sniff method
CN109408543B (en) * 2018-09-26 2021-07-23 北京华宝智慧科技有限公司 Intelligent relation network sniffing method
CN112883016A (en) * 2021-04-28 2021-06-01 睿至科技集团有限公司 Data storage optimization method and system
CN112883016B (en) * 2021-04-28 2021-07-20 睿至科技集团有限公司 Data storage optimization method and system

Similar Documents

Publication Publication Date Title
US9575976B2 (en) Methods and apparatuses to optimize updates in a file system based on birth time
US9672235B2 (en) Method and system for dynamically partitioning very large database indices on write-once tables
US7702640B1 (en) Stratified unbalanced trees for indexing of data items within a computer system
US10296498B2 (en) Coordinated hash table indexes to facilitate reducing database reconfiguration time
US7890541B2 (en) Partition by growth table space
US7418544B2 (en) Method and system for log structured relational database objects
US10191932B2 (en) Dependency-aware transaction batching for data replication
US9529881B2 (en) Difference determination in a database environment
US8161244B2 (en) Multiple cache directories
US10678792B2 (en) Parallel execution of queries with a recursive clause
CN105975587B (en) A kind of high performance memory database index organization and access method
US10657154B1 (en) Providing access to data within a migrating data partition
US8229916B2 (en) Method for massively parallel multi-core text indexing
CN104794123A (en) Method and device for establishing NoSQL database index for semi-structured data
CN102362273A (en) Dynamic hash table for efficient data access in relational database system
CN111460023A (en) Service data processing method, device, equipment and storage medium based on elastic search
US20080313209A1 (en) Partition/table allocation on demand
US20140222870A1 (en) System, Method, Software, and Data Structure for Key-Value Mapping and Keys Sorting
US20100235344A1 (en) Mechanism for utilizing partitioning pruning techniques for xml indexes
US7080075B1 (en) Dynamic remastering for a subset of nodes in a cluster environment
CN105205158A (en) Big data retrieval method based on cloud computing
CN113721862B (en) Data processing method and device
CN105279241A (en) Cloud computing based big data processing method
CN106383826A (en) Database checking method and apparatus
US7752181B2 (en) System and method for performing a data uniqueness check in a sorted data set

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20151230