CN101154191B - Processing method for fast data access - Google Patents

Processing method for fast data access

Info

Publication number
CN101154191B
CN101154191B · CN200610159960A
Authority
CN
China
Prior art keywords
control end
caching data
main control
mirror image
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN200610159960A
Other languages
Chinese (zh)
Other versions
CN101154191A (en)
Inventor
陈志伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Inventec Corp
Original Assignee
Inventec Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Inventec Corp
Priority to CN200610159960A
Publication of CN101154191A
Application granted
Publication of CN101154191B
Expired - Fee Related
Anticipated expiration

Links

Images

Abstract

The invention provides a processing method for fast data access, applied in a dual-redundancy service system provided with a main control end and a standby control end. The main control end mirrors cached data stored at the main control end into mirrored cached data, and the mirrored cached data is transmitted to the standby control end through a transmission unit. When the main control end determines that the standby control end cannot store the mirrored cached data, the main control end flushes the cached data into a hard disk provided at the main control end.

Description

Processing method for cached data
Technical field
The invention relates to a processing method for cached data, and in particular to a method by which a dual-redundancy service system handles cached data between a main control end and a standby control end.
Background technology
The world today is in an era of information technology and a flourishing electronics industry. High-technology products of all kinds and rapidly developing mobile communication technologies are closely bound up with our lives and have shortened the distances between people in both time and space. With the popularity and heavy use of electronic products (for example, computers) and communication technologies (such as networks), and spurred on by manufacturers of every kind, competition in the market has become increasingly fierce, and enterprises constantly bring out new products to win over users. Because users demand ever more from electronic products and communication services, whether future products and services can be made more convenient and faster has become one of the important indicators by which users judge whether a country's high-technology products and communication technologies lead those of other countries.
A so-called server is a high-performance computer whose main role is to act as a node on a network, storing or processing data on the network. A typical server consists at least of a processor, a hard disk, a memory and a system bus, all tailored to network applications so that the server offers high processing power, stability, reliability, security, scalability and manageability. As information technology develops and the flow of information grows day by day, it has become routine for companies and organizations to use servers to provide services such as information, downloads and mail. Providing a more stable and more user-friendly environment for information applications and services therefore hinges on the processing power and stability of the server, which have become one of the most important keys.
To prevent abnormal conditions in a server, whatever their cause, from interrupting data access or network services, vendors have developed dual-redundancy service systems. Referring to FIG. 1, such a system is provided with a main control end 1 and a standby control end 2. The main control end 1 transmits and receives data packets between the service system and a network, so that the service system can exchange data packets with the network (for example, information, downloads, mail and other network information) and thereby provide network information services.
To prevent the service system from interrupting service when an abnormal condition occurs at the main control end 1, the main control end 1 performs data synchronization updates with the standby control end 2 while it is operating normally. When the main control end 1 fails, the standby control end 2 can immediately take over as the new main control end, so that the service system continues to provide service.
For example, after a first item of cached data A stored in a cache 10 of the main control end 1 is mirrored into a first item of mirrored cached data A', the data A' is stored in another cache 20 of the standby control end 2 (as shown in FIG. 1). After a second item of cached data B stored in the cache 10 is mirrored into a second item of mirrored cached data B', the data B' is likewise stored in the other cache 20 (as shown in FIG. 2). However, when a third item of cached data C stored in the cache 10 is mirrored into a third item of mirrored cached data C', the data C' cannot be stored in the other cache 20 (as shown in FIG. 3); the standby control end 2 then simply discards the third item of mirrored cached data C', so that the contents of the cache 10 and of the other cache 20 become inconsistent.
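The limitation of this prior-art scheme can be illustrated with a minimal sketch (Python; the names and the capacity figure are hypothetical and serve only the illustration): once the standby cache is full, a mirrored entry is dropped and the two caches diverge.

```python
# Minimal sketch of the prior-art behaviour described above (illustrative only).
standby_cache = {}        # "another cache 20" on the standby control end
STANDBY_CAPACITY = 2      # assume it can only hold two entries

def mirror_to_standby(key, mirrored_data):
    """Prior art: if the standby cache cannot store the mirrored data, it is dropped."""
    if len(standby_cache) >= STANDBY_CAPACITY:
        return False      # C' is silently discarded -> caches become inconsistent
    standby_cache[key] = mirrored_data
    return True

main_cache = {"A": "data-A", "B": "data-B", "C": "data-C"}
for key, data in main_cache.items():
    stored = mirror_to_standby(key, data + "'")   # mirroring A -> A', B -> B', C -> C'
    print(key, "mirrored" if stored else "dropped (caches now inconsistent)")
```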
Summary of the invention
In view of the shortcomings described above, the inventor, after long and diligent research and experimentation, finally developed and designed the processing method for cached data of the present invention, in the hope that its disclosure will be of benefit to society.
An object of the present invention is to provide a processing method for cached data that is applied in a dual-redundancy service system. When the main control end of the dual-redundancy service system processes cached data in its cache into mirrored cached data by means of its mirroring mechanism, the main control end transmits the mirrored cached data through a transmission unit to the standby control end of the dual-redundancy service system. When the main control end receives a reply from the standby control end indicating that the mirrored cached data cannot be stored, the main control end directly flushes the cached data into the hard disk of the main control end that corresponds to that cached data, thereby preventing the mirrored cached data from being lost.
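At a high level the method can be sketched as follows (Python; `mirror`, `send_to_standby` and `flush_to_disk` are hypothetical placeholders, not functions named by the patent):

```python
def handle_cached_data(cached_data, mirror, send_to_standby, flush_to_disk):
    """Mirror the cached data, send it to the standby end, and fall back to
    flushing it to the local hard disk if the standby end cannot store it."""
    mirrored = mirror(cached_data)       # mirroring mechanism of the main control end
    stored = send_to_standby(mirrored)   # via the transmission unit; True if stored
    if not stored:
        flush_to_disk(cached_data)       # fallback: write to the main control end's hard disk
```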
Description of drawings
FIG. 1 is a schematic diagram of the prior art storing a first item of mirrored cached data into another cache;
FIG. 2 is a schematic diagram of the prior art storing a second item of mirrored cached data into the other cache;
FIG. 3 is a schematic diagram of the prior art being unable to store a third item of mirrored cached data into the other cache;
FIG. 4 is a schematic diagram of the present invention storing mirrored cached data into another cache or a hard disk;
FIG. 5 is a flowchart of the operation of the present invention.
Symbol description:
Main control end ... 3          Cache ... 30
Hard disk ... 32                Standby control end ... 4
Another cache ... 40            Transmission unit ... 5
First item of cached data ... A          Second item of cached data ... B
Third item of cached data ... C          First item of mirrored cached data ... A'
Second item of mirrored cached data ... B'     Third item of mirrored cached data ... C'
Embodiment
To make the objects, technical features and effects of the present invention easier to understand, embodiments are now described in detail with reference to the drawings:
The present invention is a processing method for cached data. Referring to FIG. 4, the method is applied in a dual-standby system comprising a main control end 3 and a standby control end 4. The main control end 3 is provided with a cache 30 for storing cached data, and the standby control end 4 is provided with another cache 40. The main control end 3 processes the cached data into mirrored cached data by means of its mirroring mechanism. Referring to FIG. 5, when the main control end 3 intends to transmit the mirrored cached data to the standby control end 4 through a transmission unit 5 and to store it in the other cache 40, the main control end 3 and the standby control end 4 proceed according to the following steps (a code sketch of the whole sequence is given after the steps):
(1) The main control end 3 creates a data structure record for the mirrored cached data. The record includes at least a transmission count, which records how many times the main control end 3 has transmitted the mirrored cached data to the standby control end 4 without it being stored in the other cache 40;
(2) The data structure record is placed in a queue of the main control end 3;
(3) The main control end 3 transmits the mirrored cached data to the standby control end 4 through the transmission unit 5, continues to process other subsequent mirrored cached data, and waits for the reply from the standby control end 4;
(4) When the standby control end 4 receives the mirrored cached data, it determines whether the storage space of the other cache 40 is sufficient to store it: the standby control end 4 adds the amount of data currently stored in the other cache 40 to the size of the mirrored cached data to obtain a comparison capacity, and then compares the maximum capacity of the other cache 40 with this comparison capacity. If the maximum capacity of the other cache 40 is greater than the comparison capacity, step (5) is performed; otherwise step (7) is performed;
(5) The standby control end 4 stores the mirrored cached data in the other cache 40 and sends the main control end 3 a storage notification indicating that the mirrored cached data has been stored;
(6) When the main control end 3 receives the storage notification, it deletes the data structure record corresponding to the mirrored cached data from the queue, and the procedure ends;
(7) When the standby control end 4 cannot store the mirrored cached data in the other cache 40, it replies to the main control end 3 with a notification that the data cannot be stored;
(8) When the main control end 3 receives this notification, it increments the transmission count in the record by one;
(9) The main control end 3 checks whether the transmission count recorded in the record exceeds a maximum transmission count preset by the main control end 3; if it does, step (10) is performed, otherwise step (3) is performed;
(10) The main control end 3 directly flushes the cached data into the hard disk 32 of the main control end 3, and the procedure ends.
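The following sketch (Python; all class, function and field names, and the value of MAX_TRANSMIT, are assumptions made for illustration and are not defined by the patent) shows one way steps (1) to (10) could be realized, including the capacity check on the standby side and the retry-then-flush logic on the main side:

```python
from collections import deque
from dataclasses import dataclass

MAX_TRANSMIT = 3              # hypothetical maximum transmission count of the main control end

@dataclass
class Record:                 # data structure record of step (1)
    mirrored: bytes
    cached: bytes
    transmit_count: int = 0

class StandbyEnd:
    """Standby control end 4 with its own cache of fixed capacity (step (4))."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.used = 0
        self.entries = []

    def receive(self, mirrored: bytes) -> bool:
        # comparison capacity = data currently stored + incoming mirrored data
        if self.used + len(mirrored) < self.capacity:   # step (5): enough space
            self.entries.append(mirrored)
            self.used += len(mirrored)
            return True                                 # storage notification
        return False                                    # step (7): cannot-store notification

def process(cached: bytes, mirror, standby: StandbyEnd, flush_to_disk):
    """Steps (1)-(10) as seen from the main control end 3 (illustrative)."""
    record = Record(mirrored=mirror(cached), cached=cached)   # step (1)
    queue = deque([record])                                   # step (2)
    while True:
        stored = standby.receive(record.mirrored)             # steps (3)-(4)
        if stored:
            queue.remove(record)                              # steps (5)-(6)
            return
        record.transmit_count += 1                            # steps (7)-(8)
        if record.transmit_count > MAX_TRANSMIT:              # step (9)
            flush_to_disk(cached)                             # step (10)
            return
```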
To illustrate the state in which the other cache 40 cannot store mirrored cached data, refer to FIG. 4. A first item of cached data A and a second item of cached data B stored in the cache 30 are, through the above steps of the present invention, processed into a first item of mirrored cached data A' and a second item of mirrored cached data B' respectively, and each is stored in the other cache 40. However, after a third item of cached data C stored in the cache 30 is mirrored into a third item of mirrored cached data C', the standby control end 4 determines that the storage space of the other cache 40 is insufficient to store C' (shown in dotted lines in FIG. 4). The main control end 3 therefore flushes the third item of cached data C into the hard disk 32, thereby preventing the cached data from being lost.
In the present invention, the data structure record further comprises a cached-data identification header and a content pointer. The cached-data identification header contains a device identification, a block address and the data length of the cached data: the device identification is the number of the hard disk from which the cached data originates, the block address is the block address of the cached data on that hard disk, and the content pointer is the storage address at which the cached data is stored in the cache 30. Accordingly, when the main control end 3 determines that the transmission count recorded in the transmission count record has exceeded the maximum transmission count, the main control end 3 first reads, according to the content pointer, the storage address in the cache 30 to which the content pointer refers, in order to obtain the cached data; the main control end 3 then flushes the cached data, according to the cached-data identification header, to the block address of the hard disk corresponding to that header.
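A minimal sketch of this data structure record and of the flush path (Python; the field names and the `cache.read` / `disk.write` interfaces are assumptions chosen for readability, not identifiers used by the patent):

```python
from dataclasses import dataclass

@dataclass
class CachedDataHeader:       # cached-data identification header
    device_id: int            # number of the hard disk the cached data came from
    block_address: int        # block address of the cached data on that hard disk
    data_length: int          # length of the cached data

@dataclass
class DataStructureRecord:
    header: CachedDataHeader
    content_pointer: int      # storage address of the cached data inside cache 30
    transmit_count: int = 0

def flush_after_retries(record: DataStructureRecord, cache, disks):
    """Once the maximum transmission count is exceeded: read the cached data via the
    content pointer, then write it to the block address named in the header."""
    data = cache.read(record.content_pointer, record.header.data_length)
    disk = disks[record.header.device_id]
    disk.write(record.header.block_address, data)
```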
From the above it can be seen that, when the mirrored cached data cannot be stored in the other cache 40, the main control end 3 has the following three mechanisms:
1. The main control end 3 can retransmit the mirrored cached data;
2. The main control end 3 limits the retransmission of the mirrored cached data to a maximum transmission count; and
3. When the mirrored cached data can no longer be retransmitted, the main control end 3 flushes the cached data into the hard disk 32.
In addition, the standby control end 4 has the following three mechanisms when it receives the mirrored cached data (a small sketch of the standby-side handling follows the list):
1. The standby control end 4 can determine the storage capacity of the other cache 40;
2. The standby control end 4 determines whether the mirrored cached data can be stored in the other cache 40; and
3. When the mirrored cached data is stored in the other cache 40, the standby control end 4 replies to the main control end 3 with the storage notification; when the mirrored cached data cannot be stored in the other cache 40, it replies to the main control end 3 with the notification that the data cannot be stored.
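Seen from the standby side alone, these three mechanisms amount to a single handler of roughly the following shape (Python; the reply strings and the `store` callable are placeholders for the two notifications and the cache write, not names used by the patent):

```python
def standby_receive(mirrored: bytes, used: int, capacity: int, store) -> str:
    """Standby control end 4: check capacity, store if possible, and reply."""
    comparison_capacity = used + len(mirrored)      # mechanism 1: capacity check
    if comparison_capacity < capacity:              # mechanism 2: can it be stored?
        store(mirrored)
        return "STORED"                             # mechanism 3: storage notification
    return "CANNOT_STORE"                           # mechanism 3: cannot-store notification
```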
The above is only a preferred specific embodiment of the present invention, and the structural features of the present invention are not limited to it. Any variation or modification that a person skilled in the art to which the present invention pertains could readily conceive falls within the scope of the claims of this application.

Claims (10)

1. A processing method for cached data, the method being applied in a dual-standby system, the system comprising a main control end and a standby control end, the main control end being provided with a cache for storing cached data and the standby control end being provided with another cache, wherein the dual-standby system operates according to the following steps:
the main control end processes the cached data into mirrored cached data by means of a mirroring mechanism;
the main control end transmits the mirrored cached data to the standby control end through a transmission unit; and
when the main control end determines that the other cache of the standby control end cannot store the mirrored cached data, the cached data is flushed into the hard disk of the main control end that corresponds to that cached data.
2. The method of claim 1, wherein, when the main control end processes the cached data into the mirrored cached data, the method further comprises the following steps:
the main control end creates a data structure record for the mirrored cached data, the record including at least a transmission count that records how many times the main control end has transmitted the mirrored cached data to the standby control end without it being stored in the other cache;
the data structure record is placed in a queue of the main control end; and
the main control end then proceeds with the step of transmitting the mirrored cached data to the standby control end.
3. The method of claim 2, wherein, when the standby control end receives the mirrored cached data, the method further comprises the following steps:
determining whether the storage space of the other cache is sufficient to store the mirrored cached data; and
when the standby control end determines that the storage space of the other cache is sufficient to store the mirrored cached data, storing the mirrored cached data in the other cache and sending, through the transmission unit, a storage notification to the main control end indicating that the standby control end has stored the mirrored cached data.
4. The method of claim 3, wherein, when the main control end receives the storage notification, the data structure record corresponding to the mirrored cached data is deleted from the queue.
5. The method of claim 3, wherein, when the standby control end cannot store the mirrored cached data in the other cache, the standby control end replies to the main control end with a notification that the mirrored cached data cannot be stored.
6. The method of claim 5, wherein, when the main control end receives this notification, the following steps are performed:
the main control end increments the transmission count in the transmission count record by one;
the main control end determines whether the transmission count recorded in the transmission count record has exceeded a maximum transmission count preset by the main control end; and
when the main control end determines that the transmission count recorded in the transmission count record has exceeded the maximum transmission count, the main control end determines that the standby control end cannot store the mirrored cached data.
7. The method of claim 6, wherein, when the main control end determines that the transmission count recorded in the transmission count record has not yet exceeded the maximum transmission count, the main control end repeats the step of transmitting the mirrored cached data to the standby control end through the transmission unit.
8. The method of claim 6, wherein the data structure record further comprises a cached-data identification header and a content pointer, and when the main control end determines that the transmission count recorded in the transmission count record has exceeded the maximum transmission count, the main control end flushes the cached data into the hard disk according to the following steps:
the main control end obtains, from the content pointer, the storage address at which the cached data is stored in the cache, and reads that storage address of the cache to obtain the cached data; and
the main control end obtains, from the device identification contained in the cached-data identification header, the number of the hard disk from which the cached data originates, obtains, from the block address contained in the cached-data identification header, the block address of the cached data on that hard disk, and uses the data length of the cached data recorded in the cached-data identification header to flush the cached data to the block address of the hard disk corresponding to the cached-data identification header.
9. The method of claim 6, wherein the standby control end determines whether the storage space of the other cache is sufficient to store the mirrored cached data according to the following steps:
adding the amount of data currently stored in the other cache to the size of the mirrored cached data to obtain a comparison capacity;
comparing the maximum capacity of the other cache with the comparison capacity; and
when the maximum capacity of the other cache is greater than the comparison capacity, proceeding with the step of storing the mirrored cached data in the other cache.
10. The method of claim 9, wherein, when the maximum capacity of the other cache is less than the comparison capacity, the method proceeds with the step in which the standby control end cannot store the mirrored cached data in the other cache.
CN200610159960A 2006-09-28 2006-09-28 Processing method for fast data access Expired - Fee Related CN101154191B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN200610159960A CN101154191B (en) 2006-09-28 2006-09-28 Processing method for fast data access

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN200610159960A CN101154191B (en) 2006-09-28 2006-09-28 Processing method for fast data access

Publications (2)

Publication Number Publication Date
CN101154191A CN101154191A (en) 2008-04-02
CN101154191B 2010-05-19

Family

ID=39255861

Family Applications (1)

Application Number Title Priority Date Filing Date
CN200610159960A Expired - Fee Related CN101154191B (en) 2006-09-28 2006-09-28 Processing method for fast data access

Country Status (1)

Country Link
CN (1) CN101154191B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102455879B (en) * 2010-10-21 2014-10-15 群联电子股份有限公司 Memory storage device, memory controller and method for automatically generating filled document

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1482773A (en) * 2003-04-11 2004-03-17 清华紫光比威网络技术有限公司 Method for implementing fault tolerant transmission control protocol
CN1831782A (en) * 2006-03-10 2006-09-13 四川大学 Allopatric data image method of network information system

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1482773A (en) * 2003-04-11 2004-03-17 清华紫光比威网络技术有限公司 Method for implementing fault tolerant transmission control protocol
CN1831782A (en) * 2006-03-10 2006-09-13 四川大学 Allopatric data image method of network information system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JP 2006-235736 A (Japanese laid-open publication), 2006-09-07

Also Published As

Publication number Publication date
CN101154191A (en) 2008-04-02

Similar Documents

Publication Publication Date Title
CN100473012C (en) Message recording merging method and user terminal
CN106878309B (en) Safety early warning method and device applied to network payment
CN101686209B (en) Method and device for storing message in message retransmission system
JP2003044608A5 (en)
CN104683961A (en) Name card interaction method and device and terminal
CN104113466A (en) Harassing phone call identification method, client, server and system
CN102420778A (en) Method and system for marking instant communication read message as unread state
CN100414869C (en) Method and system for implementing message subscription through Internet
CN103581846B (en) A kind of user's business card update method and system
CN111277483A (en) Multi-terminal message synchronization method, server and storage medium
CN101154191B (en) Processing method for fast data access
CN101778124A (en) Method for accessing Internet by mobile client end and page access server
CN101404797B (en) Storage method, storage management apparatus and storage system for long and short messages
CN105991683A (en) Data transmission method and device
US20140379820A1 (en) Email address and telephone number unification systems and methods
CN101626628B (en) Digital number and web address mapping and pushing system
US20030074414A1 (en) Electronic mail rejecting system, method therefor, and storage medium storing control program thereof
CN101860821A (en) Method and system for acquiring instant messages
WO2005025155A1 (en) Reply recognition in communications
CN101668253B (en) Identification method of mobile terminal contact person, system and mobile terminal
CN100463402C (en) Method and device for recording display of communication information in communication system
CN103139723A (en) Method and system of processing multimedia message and multimedia information and device
KR100832609B1 (en) Wireless data service system for supporting various application service and method for operating contents data on the system
CN102790782B (en) A kind of information processing method of microblogging and system
CN101631281A (en) Method and system for storing short messages, mobile terminal and server

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20100519

Termination date: 20160928