CN106202271A - Method for reading the product database of an OTA - Google Patents

Method for reading the product database of an OTA

Info

Publication number
CN106202271A
CN106202271A (Application CN201610505426.1A)
Authority
CN
China
Prior art keywords
read
data
time
data segment
application server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610505426.1A
Other languages
Chinese (zh)
Inventor
管凌云
贾晓明
陈瑞亮
李巍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ctrip Computer Technology Shanghai Co Ltd
Original Assignee
Ctrip Computer Technology Shanghai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ctrip Computer Technology Shanghai Co Ltd
Priority to CN201610505426.1A
Publication of CN106202271A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24 - Querying
    • G06F16/245 - Query processing
    • G06F16/2455 - Query execution
    • G06F16/24552 - Database cache management
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/23 - Updating
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24 - Querying
    • G06F16/245 - Query processing
    • G06F16/2458 - Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
    • G06F16/2471 - Distributed queries

Abstract

The invention discloses a method for reading the product database of an OTA (Online Travel Agency), comprising the following steps: S1, splitting the data to be cached in the product database into a number of data segments; S2, selecting a number of application servers; S3, the application servers reading the data segments from the product database and writing the read data into an in-memory database as a cache; S4, the application servers reading data from the in-memory database when answering queries. In the method provided by the invention, the building and updating of the product-database cache during product queries use distributed reads, and the in-memory database reduces contention for product-database resources. This speeds up cache building and updating, improves the overall performance of product queries, and thus improves response times under a massive volume of query requests.

Description

Method for reading the product database of an OTA
Technical field
The present invention relates to a method for reading the product database of an OTA (Online Travel Agency).
Background art
The product-query service of a large OTA must return results quickly for the query conditions given by a user, for example real-time hotel prices, room types and remaining room counts. An early and straightforward solution is to let the application servers query the product database directly. Its advantages are a simple processing flow, no redundant data, and near-real-time results. However, reading the product database ultimately means reading from and writing to disk, and disk throughput has a hard limit, so the query performance of the OTA cannot be guaranteed. In particular, as the OTA's traffic grows, the product database comes under heavy pressure and becomes the bottleneck of query performance.
The usual way to remove this bottleneck is to add a cache. Caching can be implemented in two ways. The first is passive caching: the result of a query condition is cached only after that condition has been used. The second is active caching: the application servers proactively read data from the product database and place it in the cache for later queries. In either case, the cached data may live locally on the application server or in an in-memory database such as Redis (an open-source, networked, memory-based key-value store written in ANSI C that optionally persists its data) or Memcached (a high-performance distributed memory object caching system). Wherever the cache lives, two tasks must be completed: building the cache from scratch at startup, and refreshing it in time according to the freshness requirements of the cached data. Both tasks require reading real-time data from the product database. As the number of hotels and room types grows rapidly, and as hotel prices, availability and room counts change quickly, the real-time requirements of hotel queries and the growing front-end query volume demand more and more application servers. Completing the two tasks then means that every application server keeps reading massive amounts of data from the product database and competes for the same database resources. This causes two performance problems: the initial cache build is slow, and incremental cache updates are not timely. Both problems can make hotel query results inaccurate, leading to failed orders and lost business.
Summary of the invention
The technical problem to be solved by the present invention is to overcome the defect of the prior art that, when facing a massive volume of query requests, the product database of an OTA responds slowly and may return inaccurate results, and to provide a method for reading the product database of an OTA that reduces contention for product-database resources and thus improves response times under a massive volume of query requests.
The present invention solves the above technical problem by the following technical solution:
A method for reading the product database of an OTA, characterized in that it comprises the following steps: S1, splitting the data to be cached in the product database into a number of data segments; S2, selecting a number of application servers; S3, the application servers reading the data segments from the product database and writing the read data into an in-memory database as a cache; S4, the application servers reading data from the in-memory database when answering queries.
In this solution, the querying of the product database uses distributed reads: the cache is built in an in-memory database, and an application server that needs to query the product database first queries the in-memory database. This reduces contention for product-database resources and therefore improves response times under a massive volume of query requests. In addition, when the application servers build the cache in the in-memory database, the data to be cached is split into segments, and several application servers read the segments in parallel before writing them into the in-memory database, which speeds up cache building and updating in the in-memory database and improves overall performance.
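For illustration only, the following Python sketch shows how several worker processes could split the cache build across segments and load them into Redis; the redis-py client, the key names, and the hypothetical fetch_segment helper standing in for the product-database read are assumptions, not part of the patent.

```python
# Minimal sketch of distributed cache building (steps S1-S4), assuming redis-py
# and a hypothetical fetch_segment(segment_id) that reads one data segment
# from the product database.
import json
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

def fetch_segment(segment_id):
    # Placeholder: a real system would query the product database here.
    return [{"hotel_id": segment_id * 10 + i, "price": 100 + i} for i in range(3)]

def build_cache(segment_ids):
    """S3: read the assigned segments from the product database and cache them in Redis."""
    for seg in segment_ids:
        rows = fetch_segment(seg)                          # read one data segment
        r.set(f"product:segment:{seg}", json.dumps(rows))  # write it into the in-memory cache

def query(segment_id):
    """S4: answer queries from the in-memory database instead of the product database."""
    cached = r.get(f"product:segment:{segment_id}")
    return json.loads(cached) if cached else None

if __name__ == "__main__":
    # Each application server would be handed its own subset of segment ids (S1/S2).
    build_cache(segment_ids=[1, 2, 3])
    print(query(2))
```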
Preferably, step S4 further comprises the following step: the application server saves the data read from the in-memory database into a local cache on the application server.
In this solution, when an application server answers a query it first checks its local cache; only if the local cache misses does it read from the in-memory database, and the data it reads is saved into the local cache at the same time for use by later queries.
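A minimal sketch of this tiered lookup, assuming a plain Python dict as the application server's local cache, redis-py for the in-memory database, and a hypothetical read_from_product_db helper as the last-resort fallback:

```python
import json
import redis

r = redis.Redis()
local_cache = {}  # per-process local cache on the application server

def read_from_product_db(key):
    # Placeholder for a real product-database query.
    return {"key": key, "price": 199}

def lookup(key):
    """Check the local cache first, then Redis, then the product database."""
    if key in local_cache:                      # 1) local cache
        return local_cache[key]
    cached = r.get(f"product:{key}")            # 2) in-memory database
    if cached is not None:
        value = json.loads(cached)
    else:
        value = read_from_product_db(key)       # 3) product database
        r.set(f"product:{key}", json.dumps(value))
    local_cache[key] = value                    # remember locally for later queries
    return value
```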
Preferably, in step S3 the application servers read the data segments from the product database in a random order.
In this solution, each application server reads the data segments to be cached from the product database in a random order, which reduces the probability that several application servers read the same data segment at the same time.
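For illustration, a short sketch of random-order segment selection; skipping segments that are already present in Redis mirrors the embodiment below, and the key name is an assumption.

```python
import random
import redis

r = redis.Redis()

def segments_in_random_order(segment_ids):
    """Return a per-server random reading order to reduce collisions."""
    order = list(segment_ids)
    random.shuffle(order)
    return order

def segments_still_needed(segment_ids):
    """Skip segments that some other server has already cached in Redis."""
    return [s for s in segments_in_random_order(segment_ids)
            if not r.exists(f"product:segment:{s}")]
```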
Preferably, the in-memory database is Redis or Memcached.
Preferably, a distributed lock is used in the in-memory database. The distributed lock maintains a lock status parameter for every data segment, and the lock status parameter has two states, TRUE and FALSE. In step S3, before reading a data segment, each application server checks whether the lock status parameter of the segment to be read is FALSE. If it is not, the server does not read the segment; if it is, the server reads the segment, sets its lock status parameter to TRUE at the same time, and sets the lock status parameter back to FALSE after the read is finished.
In this solution, the TRUE state means that some application server is currently building the cache for this data segment, where building the cache means reading the segment from the product database and writing it into the in-memory database. The FALSE state is the initial state and means that no application server is currently building the cache for this segment. When an application server takes over the cache build for a segment, it sets the lock status parameter to TRUE before it actually starts, and sets it back to FALSE once the cache for the segment has been built. Before reading a segment, every application server checks whether the segment's lock status parameter is FALSE; if not, it does not read the segment, otherwise it reads the segment, sets the lock status parameter to TRUE, and resets it to FALSE after the read. Using a distributed lock in the in-memory database greatly reduces the probability that two or more servers build the cache for the same data segment at the same time.
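A minimal sketch of such a per-segment lock, assuming the lock fields live in Redis hashes; the atomic check-and-set via a Lua script is an implementation assumption, since the patent itself only specifies the TRUE/FALSE status parameter.

```python
import redis

r = redis.Redis()

# Atomically flip "locked" from FALSE (or missing) to TRUE for one segment.
ACQUIRE = """
local locked = redis.call('HGET', KEYS[1], 'locked')
if locked == false or locked == 'FALSE' then
    redis.call('HSET', KEYS[1], 'locked', 'TRUE')
    return 1
end
return 0
"""

def try_acquire(segment_id):
    """Return True if this server may build the cache for the segment."""
    return r.eval(ACQUIRE, 1, f"lock:segment:{segment_id}") == 1

def release(segment_id):
    """Reset the lock status parameter to FALSE after the segment is cached."""
    r.hset(f"lock:segment:{segment_id}", "locked", "FALSE")

def build_segment(segment_id, fetch_segment, write_to_cache):
    if not try_acquire(segment_id):
        return False                      # another server is caching this segment
    try:
        write_to_cache(segment_id, fetch_segment(segment_id))
    finally:
        release(segment_id)
    return True
```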
Preferably, the distributed lock also maintains a locking-time parameter for every data segment. In step S3, when the lock status parameter of the segment to be read is set to TRUE, the locking-time parameter of that segment is set at the same time to the time at which the lock status parameter was set to TRUE. In step S3, when an application server finds that the lock status parameter of the segment to be read is TRUE, it then checks whether the difference between the current time and the locking-time parameter of that segment exceeds a first time threshold. If it does, the application server reads the segment anyway and sets the segment's locking-time parameter to the current time; if it does not, the server does not read the segment.
In this solution, the difference between the locking-time parameter of the segment and the current time is compared against a preset first time threshold. If the difference exceeds the first time threshold, the read of that segment has timed out abnormally, and another application server that notices this may read the segment and write it into the in-memory database even though the segment's lock status parameter is still TRUE. The purpose of the locking-time parameter is to handle the case where an application server exits abnormally while building the cache and therefore never resets the lock status parameter from TRUE to FALSE; the other application servers use the lock status parameter together with the locking-time parameter to decide whether the lock has expired. If the difference between the locking-time parameter and the current time is greater than or equal to the configured maximum cache-build time, i.e. the first time threshold, the lock is considered stale, and another application server resets the locking time to the current time and continues building the cache for that segment.
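Extending the sketch above, a stale lock can be taken over once it is older than the first time threshold; the 300-second value and the Lua-based check are assumptions for illustration.

```python
import time
import redis

r = redis.Redis()
FIRST_TIME_THRESHOLD = 300  # assumed maximum cache-build time, in seconds

# Acquire if the lock is FALSE/missing, or if it is TRUE but older than the threshold.
ACQUIRE_WITH_TIMEOUT = """
local locked = redis.call('HGET', KEYS[1], 'locked')
local lock_time = tonumber(redis.call('HGET', KEYS[1], 'lock_time')) or 0
local now = tonumber(ARGV[1])
local threshold = tonumber(ARGV[2])
if locked ~= 'TRUE' or (now - lock_time) > threshold then
    redis.call('HSET', KEYS[1], 'locked', 'TRUE', 'lock_time', now)
    return 1
end
return 0
"""

def try_acquire_with_timeout(segment_id):
    """Acquire a fresh lock, or take over a lock whose build has timed out."""
    return r.eval(ACQUIRE_WITH_TIMEOUT, 1, f"lock:segment:{segment_id}",
                  int(time.time()), FIRST_TIME_THRESHOLD) == 1
```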
Preferably, the distributed lock also maintains a time-cost parameter for every data segment. The time-cost parameter records the time spent in step S3 by an application server reading the segment from the product database and writing the read data into the in-memory database as cache.
In this solution, the time-cost parameter is used to monitor how long the cache build of each data segment takes and to assess whether the segment size is reasonable. In general, no single cache-build step should take too long; if it does, the data segments should be made smaller.
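A small sketch of recording the time-cost parameter per segment; storing it as a field of the same Redis hash as the lock is an assumption.

```python
import time
import redis

r = redis.Redis()

def build_and_time(segment_id, fetch_segment, write_to_cache):
    """Cache one segment and record how long the build took."""
    start = time.time()
    write_to_cache(segment_id, fetch_segment(segment_id))
    elapsed = time.time() - start
    r.hset(f"lock:segment:{segment_id}", "time_cost", elapsed)
    return elapsed  # monitored to decide whether segments should be split smaller
```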
Preferably, the distributed lock also maintains an update-time parameter for every data segment. In step S3, after an application server has written the read data into the in-memory database, it sets the update-time parameter to the time at which the caching completed. Also in step S3, before reading a segment, each application server checks whether the difference between the current time and the update-time parameter of the segment to be read exceeds a preset second time threshold. If it does, the application server reads the segment; if it does not, the server does not read the segment.
In this solution, the update-time parameter records the last time the cache for the segment was built, and the application servers use it to decide whether the segment's cache needs to be rebuilt. While the data is still within its allowed validity period, i.e. the difference does not exceed the second time threshold, there is no need to rebuild the cache; only when the last update is older than the allowed maximum age is a cache rebuild triggered.
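A sketch of the freshness check, with an assumed 60-second second time threshold and the same hash-per-segment layout as above:

```python
import time
import redis

r = redis.Redis(decode_responses=True)
SECOND_TIME_THRESHOLD = 60  # assumed allowed age of a cached segment, in seconds

def needs_rebuild(segment_id):
    """True if the segment's cache is older than the second time threshold."""
    update_time = r.hget(f"lock:segment:{segment_id}", "update_time")
    if update_time is None:
        return True                                 # never cached yet
    return time.time() - float(update_time) > SECOND_TIME_THRESHOLD

def mark_updated(segment_id):
    """Record the moment the segment finished caching."""
    r.hset(f"lock:segment:{segment_id}", "update_time", time.time())
```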
Preferably, in step S3, when the application servers read the data segments from the product database, at most four application servers read the product database at the same time.
In this solution, when the whole application-server cluster restarts and the cluster contains many servers, all of them reading the product database at the same time would put heavy pressure on the database. The method therefore limits the maximum number of servers that may read cache data from the product database concurrently. Whenever an application server restarts and loses its entire local cache, it writes its restart time into the in-memory database. If, at startup, an application server sees in the in-memory database that four application servers are already reading the product database and writing data into the in-memory database within a fixed recent window, it only reads data from the in-memory database and does not read the product database to fill the in-memory database.
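A rough sketch of this guard using a Redis sorted set of restart times; the one-minute window follows the embodiment below, while the key name and counting rule are assumptions.

```python
import time
import redis

r = redis.Redis()
MAX_CONCURRENT_LOADERS = 4
WINDOW_SECONDS = 60  # "recent" window for counting restarting servers

def may_load_from_product_db(server_id):
    """Register this server's restart and check how many servers loaded recently."""
    now = time.time()
    r.zadd("cache:loader_restarts", {server_id: now})
    r.zremrangebyscore("cache:loader_restarts", 0, now - WINDOW_SECONDS)
    recent = r.zcount("cache:loader_restarts", now - WINDOW_SECONDS, now)
    # If four loaders (including this one) are already active, fall back to Redis only.
    return recent <= MAX_CONCURRENT_LOADERS
```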
The positive effect of the present invention is as follows. In the method for reading the product database of an OTA provided by the present invention, the building and updating of the product-database cache during product queries use distributed reads, and the in-memory database reduces contention for product-database resources, which speeds up cache building and updating and improves the overall performance of product queries. The data in the product database is efficiently read into the in-memory database cache or into the local caches of the application servers: after the data to be cached is split, several application servers divide the work of reading it from the product database and write it into the in-memory database, where it is shared, and the other application servers then read from the shared in-memory database cache into their own local caches as needed. The present invention reduces contention for product-database resources and therefore improves response times under a massive volume of query requests.
Brief description of the drawings
Fig. 1 is a flow chart of the method for reading the product database of an OTA according to a preferred embodiment of the present invention.
Fig. 2 is a schematic diagram of step S103 of a preferred embodiment of the present invention, in which the data segments are read in random order.
Detailed description of the invention
The present invention is further illustrated below by way of embodiments, but the invention is not thereby limited to the scope of the described embodiments.
As shown in Fig. 1, a method for reading the product database of an OTA comprises the following steps:
S101, splitting the data to be cached in the product database into a number of data segments. In this embodiment the products are hotels, i.e. the data to be cached in the hotel database is split into data segments for subsequent reading.
S102, selecting a number of application servers to build the cache in the in-memory database Redis for subsequent use. In this embodiment four application servers are selected; if too many application servers read the hotel database at the same time, they compete fiercely for hotel-database resources and read performance drops.
S103, the four application servers each choose the data segments to be read from the hotel database in a random order; the random order reduces the probability that the four application servers read the same data segment at the same time.
S104, a distributed lock is used in the in-memory database Redis. The distributed lock maintains a lock status parameter for every data segment, and the lock status parameter has two states, TRUE and FALSE. The application server checks whether the lock status parameter of the data segment to be read is FALSE; if not, it performs step S106, otherwise it performs step S105.
S105, the distributed lock also maintains an update-time parameter for every data segment. The application server checks whether the update-time parameter of the segment to be read indicates that the data needs refreshing, i.e. whether the difference between the current time and the segment's update-time parameter exceeds a preset second time threshold. If it does, step S108 is performed; if not, step S107 is performed.
S106, the distributed lock also maintains a locking-time parameter for every data segment, used to detect reads that time out because an application server failed abnormally while reading a segment. Specifically, when the application server finds that the lock status parameter of the segment to be read is TRUE, it checks whether the difference between the current time and the segment's locking-time parameter exceeds a first time threshold. If it does, step S108 is performed; if not, step S107 is performed.
S107, the application server skips the current segment. If there are still segments to be read, it returns to S103; if all segments have been read, the cache-build task of this application server in Redis ends.
S108, the application server sets the lock status parameter of the current segment to TRUE and sets the locking-time parameter to the current real time.
S109, the application server reads the segment from the hotel database and writes it into Redis.
S110, the application server sets the update-time parameter of the current segment to the current real time. In addition, to check whether the segment size is appropriate, the distributed lock also maintains a time-cost parameter for every data segment, so the time-cost parameter of this segment is also set here. The time-cost parameter records how long the application server spent reading the segment from the hotel database and writing the read data into Redis; if this is too long, the segment size chosen in step S101 must be adjusted and the data re-split.
S111, the application servers that took part in building the cache, as well as the other application servers, read data from Redis when they need to answer queries, and at the same time save the data they read into a local cache on the application server. Once a local cache has been built, an application server first answers a query from its local cache; if that misses, it queries Redis; if that also misses, it queries the hotel database. The preceding steps are repeated as needed during querying.
In this embodiment, step S101 forms one data segment from every 200,000 records that need to be cached. In Fig. 2, server 1 to server 4 are the four application servers chosen in step S102, the data segments to be cached are numbered 1 to 9, and each of the four application servers reads the segments from the hotel database in the random order shown in Fig. 2 and writes them into the Redis-based in-memory database. As Fig. 2 shows, reading the segments in random order reduces the probability that the four application servers read the same segment at the same time. Before reading a segment from the hotel database, an application server first checks whether the segment already exists in Redis; if it does and it has not expired, there is no need to read it or write Redis again. If the application server also needs to build a local cache, the segment should be loaded into the local cache at this point.
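A sketch of the segmentation used in this embodiment, assuming records are identified by consecutive positions and each segment covers 200,000 of them:

```python
SEGMENT_SIZE = 200_000  # records per data segment, as in step S101

def split_into_segments(total_records):
    """Return (segment_id, first_record, last_record) triples covering all records."""
    segments = []
    for seg_id, start in enumerate(range(0, total_records, SEGMENT_SIZE), start=1):
        end = min(start + SEGMENT_SIZE, total_records) - 1
        segments.append((seg_id, start, end))
    return segments

# Example: 950,000 records become five segments, the last one smaller.
print(split_into_segments(950_000))
```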
In this embodiment, it is also considered that when the whole application-server cluster restarts and the cluster contains many application servers, all of them reading the hotel database at the same time would put heavy pressure on the hotel database. A maximum number of servers that may read cache data from the hotel database concurrently is therefore configured, and in this embodiment it is 4. Whenever an application server restarts and loses its entire local cache, it writes its restart time into Redis. If, at startup, an application server sees in Redis that within the last minute four or more other application servers are already reading the hotel database and writing data into Redis, it only reads data from Redis and does not read the hotel database to fill Redis.
In this embodiment, the building and updating of the hotel-query cache use distributed reads, and the in-memory database Redis reduces contention for hotel-database resources, which speeds up cache building and updating and improves overall performance. Specifically, the full or incremental data to be cached is divided among several application servers according to an allocation rule; each server reads its share from the hotel database and writes it into the in-memory database Redis, where it is shared, and the other application servers then read from the shared Redis cache into their own local caches as needed. The method of this embodiment has the following main features:
First, a reasonable mechanism for splitting the data to be cached. A large amount of cache data can be divided into multiple data segments by record count or by data volume. The full cache and the incremental cache may use the same or different splitting mechanisms.
Second, a reasonable mechanism for distributing the data pulls. All data segments are distributed to multiple application servers according to a rule. The distribution rule can be a static rule fixed in advance, or a dynamic rule with some randomness that depends on the actual cache-build progress.
Third, when the application servers pull the data segments into the in-memory database Redis in parallel according to the allocation rule, a synchronization mechanism can be used to prevent several application servers from pulling the same data segment repeatedly.
Fourth, when an application server needs to build or update its local cache, it can read data from the in-memory database Redis into local memory.
Fifth, a tool for monitoring cache-build progress can also be provided.
Sixth, a suitable protection mechanism prevents too many application servers from pulling cache data from the database concurrently.
Seventh, a fault-tolerance mechanism handles the unexpected offlining of an application server, ensuring that the cache is built and updated reliably.
In this embodiment, the monitoring tool shows that building the local cache of another application server from Redis takes at most 39 milliseconds, whereas building the local cache directly from the hotel database takes 3,236 milliseconds. This shows that the method for reading the product database of an OTA provided by the present invention effectively improves the speed of cache building and updating, relieves the pressure on the product-database servers, and improves the overall performance and stability of the whole system.
Although specific embodiments of the present invention have been described above, those skilled in the art should understand that they are illustrative only, and the scope of protection of the present invention is defined by the appended claims. Those skilled in the art may make various changes or modifications to these embodiments without departing from the principle and essence of the present invention, and all such changes and modifications fall within the scope of protection of the present invention.

Claims (9)

1. A method for reading the product database of an OTA, characterized in that it comprises the following steps:
S1, splitting the data to be cached in the product database into a number of data segments;
S2, selecting a number of application servers;
S3, the application servers reading the data segments from the product database and writing the read data into an in-memory database as a cache;
S4, the application servers reading data from the in-memory database when answering queries.
2. The method for reading the product database of an OTA according to claim 1, characterized in that step S4 further comprises the following step: the application server saves the data read from the in-memory database into a local cache on the application server.
3. The method for reading the product database of an OTA according to claim 1, characterized in that in step S3 the application servers read the data segments from the product database in a random order.
4. The method for reading the product database of an OTA according to claim 1, characterized in that the in-memory database is Redis or Memcached.
5. The method for reading the product database of an OTA according to claim 4, characterized in that a distributed lock is used in the in-memory database, the distributed lock maintains a lock status parameter for every data segment, and the lock status parameter has two states, TRUE and FALSE; in step S3, before reading a data segment, each application server checks whether the lock status parameter of the segment to be read is FALSE; if not, the server does not read the segment; if so, the server reads the segment, sets its lock status parameter to TRUE at the same time, and sets the lock status parameter back to FALSE after the read is finished.
6. The method for reading the product database of an OTA according to claim 5, characterized in that the distributed lock also maintains a locking-time parameter for every data segment;
in step S3, when the lock status parameter of the segment to be read is set to TRUE, the locking-time parameter of that segment is set at the same time to the time at which the lock status parameter was set to TRUE;
in step S3, when an application server finds that the lock status parameter of the segment to be read is TRUE, it checks whether the difference between the current time and the locking-time parameter of that segment exceeds a first time threshold; if so, the application server reads the segment and at the same time sets the segment's locking-time parameter to the current time; if not, the server does not read the segment.
7. The method for reading the product database of an OTA according to claim 5, characterized in that the distributed lock also maintains a time-cost parameter for every data segment, the time-cost parameter recording the time spent in step S3 by an application server reading the segment from the product database and writing the read data into the in-memory database as cache.
8. The method for reading the product database of an OTA according to claim 5, characterized in that the distributed lock also maintains an update-time parameter for every data segment;
in step S3, after writing the read data into the in-memory database, each application server sets the update-time parameter to the time at which the caching completed;
in step S3, before reading a data segment, each application server checks whether the difference between the current time and the update-time parameter of the segment to be read exceeds a preset second time threshold; if so, the application server reads the segment; if not, the server does not read the segment.
9. The method for reading the product database of an OTA according to any one of claims 1 to 8, characterized in that in step S3, when the application servers read the data segments from the product database, at most four application servers read the product database at the same time.
CN201610505426.1A 2016-06-30 2016-06-30 Method for reading the product database of an OTA Pending CN106202271A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610505426.1A CN106202271A (en) 2016-06-30 2016-06-30 Method for reading the product database of an OTA

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610505426.1A CN106202271A (en) 2016-06-30 2016-06-30 Method for reading the product database of an OTA

Publications (1)

Publication Number Publication Date
CN106202271A true CN106202271A (en) 2016-12-07

Family

ID=57463922

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610505426.1A Pending CN106202271A (en) 2016-06-30 2016-06-30 The read method of the product database of OTA

Country Status (1)

Country Link
CN (1) CN106202271A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130103655A1 (en) * 2011-10-21 2013-04-25 International Business Machines Corporation Multi-level database compression
CN104065636A (en) * 2013-07-02 2014-09-24 腾讯科技(深圳)有限公司 Data processing method and system
CN103561013A (en) * 2013-10-29 2014-02-05 联想中望系统服务有限公司 Streaming media data distributing system
CN103729239A (en) * 2013-11-18 2014-04-16 芜湖大学科技园发展有限公司 Distributed type lock algorithm of mirror-image metadata
CN104932953A (en) * 2015-06-04 2015-09-23 华为技术有限公司 Data distribution method, data storage method, and relevant device and system
CN105224255A (en) * 2015-10-14 2016-01-06 浪潮(北京)电子信息产业有限公司 A kind of storage file management method and device

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107911248A (en) * 2017-11-27 2018-04-13 北京百度网讯科技有限公司 Upgrade method and device
CN107911248B (en) * 2017-11-27 2020-11-10 北京百度网讯科技有限公司 Upgrading method and device
CN110673893A (en) * 2019-09-24 2020-01-10 携程计算机技术(上海)有限公司 Configuration method and system of application program, electronic device and storage medium
CN110673893B (en) * 2019-09-24 2023-06-09 携程计算机技术(上海)有限公司 Application program configuration method, system, electronic device and storage medium
CN113918530A (en) * 2021-12-14 2022-01-11 北京达佳互联信息技术有限公司 Method and device for realizing distributed lock, electronic equipment and medium
CN113918530B (en) * 2021-12-14 2022-05-13 北京达佳互联信息技术有限公司 Method and device for realizing distributed lock, electronic equipment and medium

Similar Documents

Publication Publication Date Title
CN105549905B (en) A kind of method that multi-dummy machine accesses distributed objects storage system
CN102831156B (en) Distributed transaction processing method on cloud computing platform
US7783607B2 (en) Decentralized record expiry
CN101013381B (en) Distributed lock based on object memory system
CN106599199A (en) Data caching and synchronization method
CN108509462B (en) Method and device for synchronizing activity transaction table
CN111338766A (en) Transaction processing method and device, computer equipment and storage medium
WO2021027956A1 (en) Blockchain system-based transaction processing method and device
CN103106286B (en) Method and device for managing metadata
US20070288587A1 (en) Transactional shared memory system and method of control
US20090063807A1 (en) Data redistribution in shared nothing architecture
US9922086B1 (en) Consistent query of local indexes
CN111767327B (en) Data warehouse construction method and system with dependency relationship among data streams
CN103226598A (en) Method and device for accessing database and data base management system
CN106777085A (en) A kind of data processing method, device and data query system
CN106202271A (en) The read method of the product database of OTA
CN109669975A (en) A kind of industry big data processing system and method
CN115587118A (en) Task data dimension table association processing method and device and electronic equipment
US11741081B2 (en) Method and system for data handling
CN111159140A (en) Data processing method and device, electronic equipment and storage medium
CN102724301B (en) Cloud database system and method and equipment for reading and writing cloud data
EP3377970B1 (en) Multi-version removal manager
US11138231B2 (en) Method and system for data handling
CN112131305A (en) Account processing system
CN115964444A (en) Cloud native distributed multi-tenant database implementation method and system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication (Application publication date: 20161207)