Summary of the invention
An embodiment of the present application provides a data processing method to reduce the waste of resources. Correspondingly, an embodiment of the present application also provides a data processing apparatus, an electronic device, and a storage medium, to ensure the implementation and application of the above method.
To solve the above-mentioned problems, the present application discloses a data processing method, comprising: receiving a request from a requesting client for the mapping information of source data; transmitting the mapping information to the requesting client, and caching the mapping information of the source data; and after the cached mapping information meets a set condition, clearing the cached mapping information.
Optionally, the method further includes: after receiving a request for the cached mapping information, reading the cached mapping information from the data cache and feeding it back.
Optionally, caching the mapping information of the source data comprises at least one of the following steps: writing the mapping information of the source data into a memory queue; writing the mapping information of the source data into an array; writing the mapping information of the source data into a data set; writing the mapping information of the source data into a mapping set.
Optionally, the method further includes: judging whether the storage time of the cached mapping information exceeds a time threshold; if the storage time exceeds the time threshold, the set condition is met; if the storage time does not exceed the time threshold, the set condition is not met.
Optionally, the method further includes: judging whether the data volume of the cached mapping information exceeds a storage threshold; if the data volume exceeds the storage threshold, the set condition is met; if the data volume does not exceed the storage threshold, the set condition is not met.
Optionally, clearing the cached mapping information comprises at least one of the following steps: determining mapping information to be cleared according to storage time, and removing the mapping information to be cleared from the data cache; determining mapping information to be cleared according to usage rate, and removing the mapping information to be cleared from the data cache.
An embodiment of the present application also discloses a data processing method, comprising: receiving a first mapping request from a requesting client; obtaining the source data corresponding to the first mapping request, and judging whether the mapping information of the source data is stored in a data cache; and if the judgment result is yes, reading the mapping information from the data cache and feeding it back.
Optionally, after it is judged that the mapping information of the source data is not stored in the data cache, a second mapping request is sent; the mapping information of the source data obtained according to the second mapping request is received, and the received mapping information is transmitted to the requesting client.
Optionally, the method further includes caching the received mapping information by at least one of the following steps: writing the mapping information of the source data into a memory queue; writing the mapping information of the source data into an array; writing the mapping information of the source data into a data set; writing the mapping information of the source data into a mapping set.
Optionally, the method further includes: judging whether the storage time of the cached mapping information exceeds a time threshold; if the storage time exceeds the time threshold, the set condition is met; if the storage time does not exceed the time threshold, the set condition is not met.
Optionally, the method further includes: judging whether the data volume of the cached mapping information exceeds a storage threshold; if the data volume exceeds the storage threshold, the set condition is met; if the data volume does not exceed the storage threshold, the set condition is not met.
Optionally, the method further includes clearing the mapping information that meets the set condition by at least one of the following steps: determining mapping information to be cleared according to storage time, and removing the mapping information to be cleared from the data cache; determining mapping information to be cleared according to usage rate, and removing the mapping information to be cleared from the data cache.
An embodiment of the present application also discloses a data processing apparatus, comprising: a receiving module, configured to receive the mapping information of the source data requested by a requesting client; a feedback module, configured to transmit the mapping information to the requesting client; a cache module, configured to cache the mapping information of the source data; and a cleaning module, configured to clear the cached mapping information after the cached mapping information meets a set condition.
Optionally, the feedback module is further configured to, after a request for the cached mapping information is received, read the cached mapping information from the data cache and feed it back.
Optionally, the cache module includes: a queue cache submodule, configured to write the mapping information of the source data into a memory queue; an array cache submodule, configured to write the mapping information of the source data into an array; a set cache submodule, configured to write the mapping information of the source data into a data set; and a map cache submodule, configured to write the mapping information of the source data into a mapping set.
Optionally, the apparatus further includes: a cleaning judgment module, configured to judge whether the storage time of the cached mapping information exceeds a time threshold; if the storage time exceeds the time threshold, the set condition is met; if the storage time does not exceed the time threshold, the set condition is not met.
Optionally, the apparatus further includes: a cleaning judgment module, configured to judge whether the data volume of the cached mapping information exceeds a storage threshold; if the data volume exceeds the storage threshold, the set condition is met; if the data volume does not exceed the storage threshold, the set condition is not met.
Optionally, the cleaning module comprises: a determining submodule, configured to determine mapping information to be cleared according to storage time and/or determine mapping information to be cleared according to usage rate; and a removing submodule, configured to remove the mapping information to be cleared from the data cache.
An embodiment of the present application also discloses a data processing apparatus, comprising: a request receiving module, configured to receive a first mapping request from a requesting client; a judgment module, configured to obtain the source data corresponding to the first mapping request and judge whether the mapping information of the source data is stored in a data cache; and a cache feedback module, configured to, if the judgment result is yes, read the mapping information from the data cache and feed it back.
Optionally, the apparatus further includes: a mapping request module, configured to send a second mapping request; and a mapping receiving module, configured to receive the mapping information of the source data obtained according to the second mapping request; the cache feedback module is further configured to transmit the received mapping information to the requesting client.
Optionally, the apparatus further includes: a map cache module, configured to write the mapping information of the source data into a memory queue; and/or write the mapping information of the source data into an array; and/or write the mapping information of the source data into a data set; and/or write the mapping information of the source data into a mapping set.
Optionally, the judgment module is further configured to judge whether the storage time of the cached mapping information exceeds a time threshold; if the storage time exceeds the time threshold, the set condition is met; if the storage time does not exceed the time threshold, the set condition is not met.
Optionally, the judgment module is further configured to judge whether the data volume of the cached mapping information exceeds a storage threshold; if the data volume exceeds the storage threshold, the set condition is met; if the data volume does not exceed the storage threshold, the set condition is not met.
Optionally, the apparatus further includes: a cache cleaning module, configured to determine mapping information to be cleared according to storage time and remove the mapping information to be cleared from the data cache; and/or determine mapping information to be cleared according to usage rate and remove the mapping information to be cleared from the data cache.
An embodiment of the present application also discloses an electronic device, comprising: one or more processors; and one or more machine-readable media having instructions stored thereon which, when executed by the one or more processors, cause the electronic device to perform the data processing method described in one or more of the embodiments of the present application.
An embodiment of the present application also discloses one or more machine-readable media having instructions stored thereon which, when executed by one or more processors, cause an electronic device to perform the data processing method described in one or more of the embodiments of the present application.
Compared with the prior art, the embodiments of the present application include the following advantages:
In the embodiments of the present application, after a requesting client requests the mapping information of source data, the mapping information of the source data requested by the requesting client can be obtained through the server side; the mapping information is then transmitted to the requesting client, and the mapping information of the source data is cached, so that the mapping information can be obtained from the cache when it is needed, without querying the server side again. Furthermore, the cached mapping information can be cleared according to the set condition, so that data can be cleared in time, excessive memory occupation is avoided, and the waste of resources is reduced.
Specific embodiment
To make the above objects, features, and advantages of the present application more apparent and easier to understand, the present application is further described in detail below with reference to the accompanying drawings and specific implementations.
In an embodiment of the present application, when a requesting client requests the mapping information of source data, the mapping information of the source data requested by the requesting client can be obtained through the server side; the mapping information is then transmitted to the requesting client, and the mapping information of the source data is cached, so that the next time the requesting client needs it, the mapping information can be obtained directly from the cache without querying the server side again. Furthermore, after the cached mapping information meets the set condition, the mapping information of the source data can be cleared, so that data can be cleared in time and excessive memory occupation is avoided.
In the embodiments of the present application, source data is any data that has mapping information, such as accounts and addresses. For example, the mapping information of an account is one or more identifiers (Identity, ID); the mapping information of an address is a geographical location; the mapping information of a logical address is a memory address; and so on. A mapping relationship exists between source data and its mapping information, and the two can be data of the same or different types.
Referring to Fig. 1, a schematic diagram of data processing is provided.
The data processing system includes: a requesting client 10, a cache client 20, and a server side 30. The server side 30 stores various data, including source data and mapping information, such as the association relationships of accounts, and can thus provide a mapping (Mapping) service, that is, obtain the mapping information of given source data. The cache client 20 can assist the requesting client 10 in performing data mapping: it can help the requesting client obtain and cache the mapping information of given source data, and can also periodically clear the cached mapping information. The requesting client 10 can obtain the mapping information of given source data through the cache client 20 when required. The requesting client 10 can be the client of various application programs, such as a social application (Application, APP), a payment APP, an instant messaging APP, a shopping APP, and so on. The cache client can be understood as a plug-in that provides the Mapping service for requesting clients; multiple requesting clients can share the cache client, so that a requesting client calls the cache client when performing data mapping.
The embodiments of the present application can map between different accounts, where the source data includes a first account and the mapping information includes a second account. An account is used to log in to an application; for example, the first account logs in to a social APP and the second account logs in to a payment APP, and the interworking of different applications can be achieved through the mapping between different accounts. The embodiments of the present application can also map between different sub-accounts of the same account, where the source data includes a sub-account of the account and the mapping data includes the other sub-accounts of that account, that is, the sub-accounts other than the sub-account corresponding to the source data. If the account has more than two sub-accounts, each sub-account corresponding to the account can be obtained as mapping information through a single mapping, so that when a sub-account under the account is needed, it can be obtained from the cache without a new request, which improves processing efficiency and reduces the number of requests.
In the embodiments of the present application, one or more pieces of data can be obtained as mapping information based on the source data, that is, the one or more mapping relationships corresponding to the source data can be queried to obtain the mapping information. For example, in an example where an account includes multiple sub-accounts, the other IDs of the account can be obtained based on any ID of the account. For instance, an account of website A corresponds to multiple sub-accounts, whose IDs include: an account identifier (accountId), a login identifier (loginId), an enterprise user identifier (companyId), the member identifier of website A (AmemberId), the user identifier of website A (Aid), a user nickname (nickname), and so on; different scenarios need different sub-accounts. When mapping between the IDs of the account is requested for the first time, each ID can be obtained and cached, so that when a subsequent mapping to other IDs is needed, it can be obtained directly by querying the cache.
The requesting client can issue a request to the cache client to obtain the mapping information of source data; the cache client then issues a request to the server side and receives the mapping information of the source data. The cache client transmits the mapping information to the requesting client and caches the mapping information of the source data, for example into a data cache such as a queue or an array. To prevent the mapping information from occupying too much memory, after the cached mapping information meets a set condition, the cached mapping information can be cleared, that is, all or part of the mapping information in the data cache is deleted.
An exemplary processing flow is as follows: the requesting client 10 sends a first mapping request to the cache client 20, and the first mapping request carries the source data, such as account information; the cache client 20 queries whether the mapping information of the source data is stored in the data cache, and if it is stored, obtains it and feeds it back; if it is not stored, the cache client sends a second mapping request to the server side 30, and the second mapping request carries the source data to be mapped; the server side 30 can query according to the source data, obtain the mapping information of the source data, and feed it back to the cache client 20; after the cache client 20 obtains the mapping information of the source data, it feeds it back to the requesting client 10 and caches the mapping information of the source data, for example into a data cache such as a queue or an array. The cache client 20 can also clear the cached mapping information of the various source data, for example according to a set condition based on storage time, the data volume of the data cache, or the like, deleting the corresponding mapping information after the set condition is met.
In the embodiments of the present application, the data cache that caches the mapping information can be of multiple types, which can be determined according to the data structure used. The data cache comprises at least one of the following: a memory queue, an array, a data set (set), and a mapping set (map); other data structures can also be used, and the embodiments of the present application do not limit this. The data cache can be established when the cache client obtains the mapping information of source data from the server side for the first time; for example, the memory queue is established when the mapping information is obtained for the first time.
The memory queue is a linear list characterized by first-in-first-out access of data: deletion is performed at the front end (front) of the list and insertion at the rear end (rear). The end where insertion is performed is called the tail of the queue, and the end where deletion is performed is called the head of the queue; the memory queue can therefore be regarded as an operation-restricted linear list. That is, newly obtained mapping information of source data is inserted at the tail of the queue, and deletion of mapping information starts from the head of the queue.
An array is a collection of data elements; an array usually stores data of the same type, and the stored mapping information can be stored without order.
A data set (set) is a collection of data in which the stored data objects, that is, the mapping information, are not ordered in a specific way, and the stored data contains no repeated objects.
A mapping set (map) is a mapping collection of data in which the stored data can be stored according to a key-value pair mapping, that is, in the key-value form, where the source data is the key and the mapping information is the value, which facilitates obtaining the mapping information of the source data.
Therefore, caching the mapping information of the source data comprises at least one of the following steps: writing the mapping information of the source data into the memory queue; writing the mapping information of the source data into the array; writing the mapping information of the source data into the data set; writing the mapping information of the source data into the mapping set.
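The four candidate data caches described above can be sketched as follows. This is a minimal illustrative sketch in Python, not an implementation from the present application; the class name `MappingCache`, its attribute names, and the example source data are invented for the example.

```python
from collections import deque

class MappingCache:
    """Illustrative holder for the four cache structures named above."""
    def __init__(self):
        self.queue = deque()   # memory queue: insert at the tail, delete from the head
        self.array = []        # array: unordered storage of entries
        self.dataset = set()   # data set: no repeated objects
        self.mapping = {}      # mapping set: source data (key) -> mapping info (value)

    def cache(self, source, info):
        entry = (source, info)
        self.queue.append(entry)     # tail insertion, FIFO deletion from the head
        self.array.append(entry)
        self.dataset.add(entry)      # duplicates are ignored automatically
        self.mapping[source] = info  # key-value storage for direct lookup

cache = MappingCache()
cache.cache("accountId:123", ("loginId:a", "companyId:b"))
```

In practice a cache client would likely pick one of these structures according to its settings rather than maintain all four at once.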
The embodiments of the present application can also automatically clear the cached mapping information of the source data to avoid memory overflow. Whether to clear the mapping information can be judged in several ways, for example according to time or data volume, which can be determined according to the set cleaning condition.
One set condition is a time condition: a time threshold for caching the mapping information, such as one day or one week, can be set, and it can then be judged whether the storage time of the cached mapping information exceeds the time threshold. If the storage time exceeds the time threshold, the set condition is met; if the storage time does not exceed the time threshold, the set condition is not met. After the set condition is met, the mapping information whose storage time exceeds the time threshold can be cleared.
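The time condition can be sketched as a simple timestamp comparison. The function name, the use of Unix timestamps, and the one-day threshold are assumptions for illustration; the application only requires that some time threshold be set.

```python
import time

TIME_THRESHOLD = 24 * 60 * 60  # e.g. one day, in seconds (illustrative value)

def meets_time_condition(stored_at, now=None):
    """True when a cached entry's storage time exceeds the time threshold."""
    now = time.time() if now is None else now
    return (now - stored_at) > TIME_THRESHOLD

# An entry stored two days ago meets the set condition; a fresh one does not.
two_days_ago = time.time() - 2 * 24 * 60 * 60
```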
Another set condition is a storage condition, that is, judgment is made according to the data volume of the cached mapping information: it can be judged whether the data volume of the cached mapping information exceeds a storage threshold. If the data volume of the cached mapping information exceeds the storage threshold, the set condition is met; if it does not exceed the storage threshold, the set condition is not met. After the set condition is met, all or part of the mapping information can be cleared from the data cache, for example the mapping information with a longer storage time or the mapping information with a lower usage rate, so as to free the corresponding storage space. The size of the data cache, such as the data size of the queue or array, can be set accordingly when the data cache is set up. Therefore, when mapping information is stored into the data cache, if the data cache is full or the remaining space of the data cache is not enough to cache the current mapping information, this indicates that the data volume of the cached mapping information exceeds the storage threshold, and all or part of the mapping information can be cleared to free the corresponding storage space.
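The storage condition can be sketched as a capacity check performed before a new entry is stored. Measuring data volume as an entry count, the threshold of 3, and the function name are all assumptions for the sketch; the application leaves the exact measure of data volume to the settings.

```python
STORAGE_THRESHOLD = 3  # maximum number of cached entries (illustrative value)

def meets_storage_condition(cache, incoming=1):
    """True when the cache is full, or its remaining space cannot hold
    the mapping information about to be stored."""
    return len(cache) + incoming > STORAGE_THRESHOLD

full_cache = {"a": 1, "b": 2, "c": 3}  # already at the threshold
```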
Clearing the cached mapping information comprises at least one of the following steps: determining mapping information to be cleared according to storage time, and removing the mapping information to be cleared from the data cache; determining mapping information to be cleared according to usage rate, and removing the mapping information to be cleared from the data cache. When it is determined that the set condition is met and the data cache needs to be cleared, the mapping information with a longer storage time can be cleared, that is, the mapping information whose storage time exceeds the time threshold, or, for example, the N pieces of mapping information with the longest storage time, is taken as the mapping information to be cleared; the mapping information to be cleared is then removed from the data cache, and new mapping information can subsequently be stored in the data cache. The mapping information with a lower usage rate can also be cleared, that is, the usage rate is determined according to the number of times the mapping information has been called, and the mapping information whose usage rate is lower than a usage-rate threshold, or, for example, the top N pieces of mapping information with the smallest usage rate, is taken as the mapping information to be cleared; the mapping information to be cleared is then removed from the data cache, and new mapping information can subsequently be stored in the data cache.
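The two clearance strategies just described, by storage time and by usage rate, can be sketched as follows. The sketch assumes each cached entry records a storage timestamp and a call count; the function names and the `(stored_at, call_count)` tuple layout are invented for illustration.

```python
# entries: source -> (stored_at, call_count)
def select_for_clearance(entries, n=1, by="storage_time"):
    """Pick the n entries to be cleared, oldest-first or least-called-first."""
    if by == "storage_time":
        ranked = sorted(entries, key=lambda k: entries[k][0])  # longest storage time first
    else:
        ranked = sorted(entries, key=lambda k: entries[k][1])  # lowest usage rate first
    return ranked[:n]

def clear(entries, victims):
    for key in victims:
        del entries[key]  # remove the mapping information to be cleared

entries = {"old": (100, 9), "new": (300, 1), "mid": (200, 5)}
victims = select_for_clearance(entries, n=1, by="storage_time")
clear(entries, victims)  # "old" is removed; space is freed for new mapping information
```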
By storing the mapping information of the source data returned by the server side in the data cache, the next time the requesting client makes a request, the mapping information can be obtained directly from the data cache without requesting the server side again, which reduces the number of remote calls and the performance cost of repeatedly calling the mapping (Mapping) service.
Furthermore, the cached mapping information can be cleared automatically. The cache client is designed at the thread level, that is, a single thread controls the cache client, the data cache, and so on. This avoids the problem of too many threads caused by different mapping requests being served by different threads, and avoids the memory overflow caused by too many threads.
In addition, the above implementation logic is encapsulated in the cache client, so that application dependents such as the requesting client and the server side are unaware of it; there is no intrusion, and the application dependents incur no change cost.
Referring to Fig. 2, a step flow chart of an embodiment of a data processing method of the present application is shown.
Step 202, receive the mapping information of the source data requested by a requesting client.
The requesting client may need to map source data in various scenarios, for example mapping an account between different applications; it can therefore send a corresponding request to the cache client. The cache client determines the source data according to the request, then requests the mapping information of the source data from the server side, and can accordingly obtain from the server side the mapping information of the source data requested by the requesting client. When mapping relationships exist between multiple pieces of data, each piece of data having a mapping relationship with the source data can be obtained as mapping information. The source data includes an account or an identifier corresponding to the account; the mapping information includes one or more identifiers corresponding to the account.
Step 204, transmit the mapping information to the requesting client, and cache the mapping information of the source data.
The mapping information of the source data is transmitted to the corresponding requesting client, and the mapping information of the source data can also be cached, that is, stored into the corresponding data cache, so that after a subsequent request for the cached mapping information is received, the mapping information can be read from the data cache and fed back directly, without querying the server side again.
Caching the mapping information of the source data comprises at least one of the following steps: writing the mapping information of the source data into the memory queue; writing the mapping information of the source data into the array; writing the mapping information of the source data into the data set; writing the mapping information of the source data into the mapping set. The data cache actually used for storage can be determined according to the settings.
Step 206, after the cached mapping information meets the set condition, clear the cached mapping information.
The cached mapping information can also be cleared automatically to avoid occupying too much memory. It can therefore be judged whether the cached mapping information meets the set condition, and after the cached mapping information meets the set condition, the cached mapping information is cleared.
For the time condition, it can be judged whether the storage time of the cached mapping information exceeds the time threshold; if the storage time exceeds the time threshold, the set condition is met; if the storage time does not exceed the time threshold, the set condition is not met. After the set condition is met, the mapping information whose storage time exceeds the time threshold can be cleared.
For the storage condition, it can be judged whether the data volume of the cached mapping information exceeds the storage threshold; if the data volume of the cached mapping information exceeds the storage threshold, the set condition is met; if the data volume of the cached mapping information does not exceed the storage threshold, the set condition is not met. After the set condition is met, all or part of the mapping information can be cleared from the data cache, for example the mapping information with a longer storage time or the mapping information with a lower usage rate, so as to free the corresponding storage space.
Alternatively, when mapping information is stored into the data cache, if the data cache is full or the remaining space of the data cache is not enough to cache the current mapping information, this indicates that the data volume of the cached mapping information exceeds the storage threshold, and all or part of the mapping information can be cleared to free the corresponding storage space.
For clearing the cached mapping information, the mapping information to be cleared can be determined according to storage time and removed from the data cache, or the mapping information to be cleared can be determined according to usage rate and removed from the data cache.
In summary, after a requesting client requests the mapping information of source data, the mapping information of the source data requested by the requesting client can be obtained through the server side; the mapping information is then transmitted to the requesting client, and the mapping information of the source data is cached, so that the mapping information can be obtained from the cache when it is needed, without querying the server side again. Furthermore, the cached mapping information can be cleared according to the set condition, so that data can be cleared in time and excessive memory occupation is avoided.
Therefore, the requesting client can request the mapping information of source data from the cache client, and the cache client can judge whether the mapping information of the source data is stored in the data cache. If the mapping information of the source data is stored, the mapping information is read from the data cache and fed back; if the mapping information of the source data is not stored, a second mapping request is sent, the mapping information of the source data obtained according to the second mapping request is received, and the mapping information is transmitted to the requesting client.
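The cache-client behavior just described, reading from the local data cache when possible and otherwise issuing a second mapping request, can be sketched as a get-or-fetch function. This is an illustrative sketch: `fetch_from_server` stands in for the second mapping request to the server side, and the names and example data are invented.

```python
def get_mapping(cache, source, fetch_from_server):
    """Return cached mapping info; otherwise fetch it, cache it, and return it."""
    if source in cache:                # mapping information already stored
        return cache[source]
    info = fetch_from_server(source)   # second mapping request to the server side
    cache[source] = info               # cache before feeding back
    return info

calls = []

def fake_server(source):
    """Stand-in for the server side; records how often it is queried."""
    calls.append(source)
    return {"loginId": "u-01"}

local_cache = {}
first = get_mapping(local_cache, "account-1", fake_server)
second = get_mapping(local_cache, "account-1", fake_server)  # served from the cache
```

The second call never reaches the stand-in server, which is the reduction in remote calls the embodiment describes.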
Referring to Fig. 3, a processing interaction schematic diagram of the Mapping service in an embodiment of the present application is shown.
Step 302, the requesting client 10 sends a first mapping request to the cache client 20.
Step 304, the cache client 20 generates and sends a second mapping request to the server side 30.
Step 306, the cache client 20 receives the mapping information of the source data.
Step 308, the cache client 20 sends the mapping information of the source data to the requesting client 10, and writes the mapping information back to the memory queue, that is, stores the mapping information of the source data into the memory queue.
Step 310, the requesting client 10 sends a first mapping request to the cache client 20.
Step 312, the cache client 20 obtains the mapping information of the source data from the local memory queue.
Step 314, the cache client 20 sends the mapping information of the source data to the requesting client 10.
Thus, when the mapping information of a piece of source data is needed for the first time, a request is made to the server side, after which the mapping information of the source data can be stored; when the requesting client needs it again, the mapping information is obtained directly from the cache and fed back, which reduces the performance cost of repeatedly calling the Mapping service.
The source data includes an account or an identifier corresponding to the account; the map information includes one or more identifiers corresponding to the account. As in the example above, an account of website A corresponds to the IDs of sub-accounts, including: an account identifier (accountId), a login identifier (loginId), an enterprise user identifier (companyId), a member identifier of website A (AmemberId), a user identifier of website A (Aid), a user nickname (nickname), and so on. APP01 needs the loginId of the account, so it takes the account as source data to generate a first mapping request; the cache client, which has not stored the IDs corresponding to the account, generates a second mapping request to obtain the IDs corresponding to the account from the server. On receiving each ID corresponding to the account, it can feed the loginId back to APP01. Later, when APP02 needs to query AmemberId based on the companyId of the same account, it can send a first request with companyId as the source data to the cache client, so that the cache client can obtain the AmemberId corresponding to the companyId from the cached data and feed it back to APP02.
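The account example above can be made concrete with a small sketch. The identifier values ("alice", "C-88", etc.) are invented for illustration; only the key names come from the text.

```python
# Hypothetical map information for one account of website A, using the
# identifier names listed above (accountId, loginId, companyId, ...).
account_mapping = {
    "accountId": "10001",
    "loginId": "alice",
    "companyId": "C-88",
    "AmemberId": "M-42",
    "Aid": "A-7",
    "nickname": "Alice",
}

# APP01 takes the account as source data and needs only loginId.
login_id = account_mapping["loginId"]

# APP02 holds companyId and, via the cached mapping, looks up AmemberId.
by_company = {account_mapping["companyId"]: account_mapping}
amember_id = by_company["C-88"]["AmemberId"]
print(login_id, amember_id)  # alice M-42
```

One cached mapping thus serves two different relying parties (APP01 and APP02) querying by different identifiers.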
For the cached map information, the cache client can also perform operations such as clean-up, avoiding excessive memory occupation.
Referring to Fig. 4, a step flow chart of another data processing method embodiment of the present application is shown.
Step 402, a first mapping request of the requesting client is received.
Step 404, the source data corresponding to the first mapping request is obtained, and it is judged whether the map information of the source data is stored in the data cache.
When the requesting client needs the map information of a source data, it can transmit the first mapping request to the cache client. The cache client can obtain the carried source data from the first mapping request, and then query whether the map information of the source data is stored in the local data cache. If so, i.e., the map information of the source data is stored, step 406 can be performed; if not, i.e., the map information of the source data is not stored, step 408 can be performed.
Step 406, the map information is read from the data cache and fed back.
If the map information of the source data is stored, i.e., the query shows that the map information of the source data has been cached before, the map information can be read from the data cache, and the map information of the source data is then fed back to the corresponding requesting client. That is, for the map information of source data that has been cached, when the requesting client requests the map information of that source data again, the map information can be read from the data cache and fed back.
Step 408, a second mapping request is sent.
If the map information of the source data is not stored, a second mapping request can be generated based on the source data, and the second mapping request is then sent to obtain the map information of the source data.
Step 410, the map information of the source data obtained according to the second mapping request is received, and the received map information is transmitted to the requesting client.
Based on the second mapping request, the server side can obtain the source data and then query the map information corresponding to the source data from its cache, e.g., obtain the map information of the source data in tair, such as the Mapping value of the Mapping service, and then return the map information of the source data to the cache client. The cache client correspondingly receives the map information of the source data and transmits the map information to the requesting client, so that the requesting client can continue to execute the subsequent processing flow.
Step 412, the received map information is cached.
The cache client can also cache the map information of the source data, e.g., write the map information of the source data into data caches such as a memory queue, an array, a data set, or a mapping set.
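The four cache structures named in step 412 map naturally onto standard containers. A minimal sketch, assuming Python built-ins as stand-ins; the variable and function names are illustrative, not from the disclosure.

```python
from collections import deque

# The four data caches named in step 412.
memory_queue = deque()    # memory queue
array_cache = []          # array
data_set = set()          # data set (deduplicated source-data keys)
mapping_set = {}          # mapping set: source data -> map information

def cache_map_info(source_data, map_info):
    """Write the received map information into each data cache."""
    entry = (source_data, map_info)
    memory_queue.append(entry)
    array_cache.append(entry)
    data_set.add(source_data)
    mapping_set[source_data] = map_info

cache_map_info("acct1", {"loginId": "alice"})
print(mapping_set["acct1"])  # {'loginId': 'alice'}
```

In practice an implementation would likely pick one of these structures rather than all four; the method claims them as alternatives ("at least one of").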
Step 414, it is judged whether the cached map information meets a setting condition.
The map information stored in the data cache can also be cleaned up automatically; thus it can be judged whether the cached map information meets the setting condition. If the setting condition is met, step 416 is executed; if the setting condition is not met, the flow returns to step 414 to judge again next time.
A judgment on a time condition can be performed: judging whether the storage time of the cached map information exceeds a time threshold; if the storage time exceeds the time threshold, the setting condition is met; if the storage time does not exceed the time threshold, the setting condition is not met. The judgment on the time condition can be executed periodically, so that the cache is cleaned up periodically.
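The time condition can be sketched as a single predicate. The 60-second threshold is an assumed value for illustration; the disclosure does not fix one.

```python
import time

TIME_THRESHOLD = 60.0  # seconds; the concrete value is an assumption

def meets_time_condition(stored_at, now=None):
    """True when the storage time of a cached entry exceeds the time threshold."""
    now = time.time() if now is None else now
    return (now - stored_at) > TIME_THRESHOLD

# An entry stored 120 s ago meets the setting condition; one stored 10 s ago does not.
print(meets_time_condition(1000.0, now=1120.0))  # True
print(meets_time_condition(1000.0, now=1010.0))  # False
```

Running this predicate on a timer over all cached entries gives the periodic clean-up described above.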
A judgment on a storage condition can also be performed: judging whether the data volume of the cached map information exceeds a storage threshold; if the data volume exceeds the storage threshold, the setting condition is met; if the data volume does not exceed the storage threshold, the setting condition is not met. The judgment on the storage condition can be executed when map information is cached, so as to ensure storage space for newly added map information.
Step 416, the cached map information is cleaned up.
After the cached map information meets the setting condition, the cached map information is cleaned up: for-clearance map information can be determined according to storage time and removed from the data cache, and/or for-clearance map information can be determined according to usage rate and removed from the data cache. Thus the map information that needs to be cleaned up can be determined from the cache and deleted, freeing the storage space of the cache.
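Both clearance criteria of step 416 can be combined in one pass. The record layout `(map_info, stored_at, use_count)` and all thresholds here are illustrative assumptions.

```python
import time

def clean_cache(cache, time_threshold=None, min_use_count=None, now=None):
    """Remove entries marked for clearance by storage time and/or usage rate.

    cache maps source data -> (map_info, stored_at, use_count);
    this record layout is an illustrative assumption.
    """
    now = time.time() if now is None else now
    for key in list(cache):
        _, stored_at, use_count = cache[key]
        too_old = time_threshold is not None and (now - stored_at) > time_threshold
        rarely_used = min_use_count is not None and use_count < min_use_count
        if too_old or rarely_used:
            del cache[key]  # remove the for-clearance map information

entries = {
    "old":  ({}, 0.0,  9),   # exceeds the time threshold
    "cold": ({}, 90.0, 0),   # below the usage threshold
    "hot":  ({}, 90.0, 9),   # kept
}
clean_cache(entries, time_threshold=60.0, min_use_count=1, now=100.0)
print(sorted(entries))  # ['hot']
```

The "and/or" in the claim corresponds to passing either or both of the two optional thresholds.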
In business scenarios where the call volume of the mapping (Mapping) service is very large, a single request may need to call Mapping for conversion multiple times. Therefore, the embodiment of the present application stores the map information of the source data by means of a cache client, and a single request can obtain multiple data associated with the source data as map information, reducing the performance cost generated when Mapping is called repeatedly, while realizing automatic clean-up of the cache, which is imperceptible to the relying party.
Taking a thread-level cache client with a memory queue as the data cache as an example, the processing procedure in which the map information of one source data corresponds to the Mapping service is discussed.
A thread-level cache client can be used, so that the logic of local caching and data clean-up is encapsulated in the thread-level cache client, and the life cycle of the data cache is consistent with the calling thread of the cache client.
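A thread-level cache can be sketched with per-thread storage, so each calling thread owns a cache whose life cycle matches that thread. The class name and `threading.local` choice are assumptions standing in for whatever mechanism an implementation would use.

```python
import threading

class ThreadLevelCacheClient:
    """Sketch: each calling thread gets its own data cache, so the cache's
    life cycle is consistent with the calling thread."""

    def __init__(self, fetch_from_server):
        self._fetch = fetch_from_server
        self._local = threading.local()   # per-thread storage

    def _cache(self):
        if not hasattr(self._local, "cache"):
            self._local.cache = {}        # created lazily, once per thread
        return self._local.cache

    def request_mapping(self, source_data):
        cache = self._cache()
        if source_data not in cache:
            cache[source_data] = self._fetch(source_data)
        return cache[source_data]

client = ThreadLevelCacheClient(lambda s: {"id": s.upper()})
results = []

def worker():
    results.append(client.request_mapping("acct1"))

t = threading.Thread(target=worker)
t.start()
t.join()
print(results[0])   # {'id': 'ACCT1'}
```

When the worker thread ends, its per-thread cache becomes unreachable and is reclaimed, which is one way to realize the "life cycle consistent with the calling thread" property without an explicit tear-down step.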
It should be noted that, for simplicity of description, the method embodiments are expressed as a series of action combinations, but those skilled in the art should understand that the embodiments of the present application are not limited by the described action sequence, because according to the embodiments of the present application, some steps may be performed in other sequences or simultaneously. Secondly, those skilled in the art should also know that the embodiments described in the specification are all preferred embodiments, and the actions involved are not necessarily required by the embodiments of the present application.
On the basis of the above embodiments, the present embodiment additionally provides a data processing apparatus, which can be applied to an electronic device.
Referring to Fig. 5, a structural block diagram of a data processing apparatus embodiment of the present application is shown, which may specifically include the following modules:
A receiving module 502, configured to receive the map information of the source data requested by the requesting client.
A feedback module 504, configured to transmit the map information to the requesting client.
A cache module 506, configured to cache the map information of the source data.
A clean-up module 508, configured to clean up the cached map information after the cached map information meets the setting condition.
In summary, after the requesting client requests the map information of the source data, the map information of the source data requested by the requesting client can be obtained through the server side; the map information is then transmitted to the requesting client, and the map information of the source data is cached, so that when the map information is needed again it can be obtained from the cache without querying the server side again. Moreover, the cached map information can also be cleaned up according to the setting condition, so that data can be cleaned up in time and excessive memory occupation is avoided.
Referring to Fig. 6, a structural block diagram of an alternative data processing apparatus embodiment of the present application is shown, which may specifically include the following modules:
A receiving module 502, configured to receive the map information of the source data requested by the requesting client.
A feedback module 504, configured to transmit the map information to the requesting client.
A cache module 506, configured to cache the map information of the source data.
A clean-up judgment module 510, configured to judge whether the map information of the source data meets the setting condition.
A clean-up module 508, configured to clean up the cached map information after the cached map information meets the setting condition.
The feedback module 504 is also configured to, after receiving a request for the cached map information, read the cached map information from the data cache and feed it back.
The cache module 506 includes: a queue cache submodule 5062, an array cache submodule 5064, a set cache submodule 5066 and a mapping cache submodule 5068, in which:
The queue cache submodule 5062 is configured to write the map information of the source data into a memory queue.
The array cache submodule 5064 is configured to write the map information of the source data into an array.
The set cache submodule 5066 is configured to write the map information of the source data into a data set.
The mapping cache submodule 5068 is configured to write the map information of the source data into a mapping set.
The clean-up judgment module 510 is configured to judge whether the storage time of the map information of the source data exceeds the time threshold; if the storage time exceeds the time threshold, the setting condition is met; if the storage time does not exceed the time threshold, the setting condition is not met.
The clean-up judgment module 510 is also configured to judge whether the data volume of the data cache exceeds the storage threshold, the data cache including at least one of the following: a memory queue, an array, a data set, a mapping set; if the data volume of the data cache exceeds the storage threshold, the setting condition is met; if the data volume of the data cache does not exceed the storage threshold, the setting condition is not met.
The clean-up module 508 includes: a determination submodule 5082 and a removal submodule 5084, in which:
The determination submodule 5082 is configured to determine for-clearance map information according to storage time; and/or determine for-clearance map information according to usage rate.
The removal submodule 5084 is configured to remove the for-clearance map information from the data cache.
On the basis of the above embodiments, the present embodiment additionally provides a data processing apparatus, which can be applied to an electronic device.
Referring to Fig. 7, a structural block diagram of another data processing apparatus embodiment of the present application is shown, which may specifically include the following modules:
A request receiving module 702, configured to receive the first mapping request of the requesting client.
A judgment module 704, configured to obtain the source data corresponding to the first mapping request and judge whether the map information of the source data is stored in the data cache.
A cache feedback module 706, configured to, when the judgment result is yes, read the map information from the data cache and feed it back.
The map information returned by the server side is stored by the data cache, so that on the next request of the requesting client it can be obtained directly from the data cache without requesting the server side again, reducing the number of remote calls and reducing the performance cost generated when the mapping (Mapping) service is called multiple times.
Referring to Fig. 8, a structural block diagram of another alternative data processing apparatus embodiment of the present application is shown, which may specifically include the following modules:
A request receiving module 702, configured to receive the first mapping request of the requesting client.
A judgment module 704, configured to obtain the source data corresponding to the first mapping request and judge whether the map information of the source data is stored in the data cache.
A cache feedback module 706, configured to read the map information from the data cache and feed it back; and to transmit received map information to the requesting client.
A mapping request module 708, configured to send a second mapping request after it is judged that the map information of the source data is not stored in the data cache.
A mapping receiving module 710, configured to receive the map information of the source data obtained according to the second mapping request.
A mapping cache module 712, configured to cache the received map information, wherein the map information of the source data can be written into a memory queue; and/or the map information of the source data can be written into an array; and/or the map information of the source data can be written into a data set; and/or the map information of the source data can be written into a mapping set.
A cache clean-up module 714, configured to clean up map information that meets the setting condition, wherein for-clearance map information can be determined according to storage time and removed from the data cache; and/or for-clearance map information can be determined according to usage rate and removed from the data cache.
In an alternative embodiment of the present application, the judgment module 704 is also configured to judge whether the storage time of the cached map information exceeds the time threshold; if the storage time exceeds the time threshold, the setting condition is met; if the storage time does not exceed the time threshold, the setting condition is not met.
In another alternative embodiment of the present application, the judgment module 704 is also configured to judge whether the data volume of the cached map information exceeds the storage threshold; if the data volume exceeds the storage threshold, the setting condition is met; if the data volume does not exceed the storage threshold, the setting condition is not met.
The source data includes an account or an identifier corresponding to the account; the map information includes one or more identifiers corresponding to the account.
The cached map information can be cleaned up automatically. The client is designed at the thread level, i.e., one thread controls the cache client, the data cache and so on, thereby avoiding the problem that different mapping requests are realized by different threads, which leads to too many threads, and avoiding the memory overflow caused by too many threads. Moreover, a thread-level cache client can be used, so that the logic of local caching and data clean-up is encapsulated in the thread-level cache client, and the life cycle of the data cache is consistent with the calling thread of the cache client. The above realization logic is encapsulated in the cache client; hence, for relying parties such as the requesting client and the server side, the use is imperceptible and non-intrusive, and the relying party bears no change cost.
Embodiments of the disclosure can be implemented as an apparatus in a desired configuration using any suitable hardware, firmware, software, or any combination thereof, and the apparatus may include electronic devices such as a server (cluster) or a terminal device. Fig. 9 schematically shows an exemplary apparatus 900 that can be used to realize each embodiment described herein.
For one embodiment, Fig. 9 shows an exemplary apparatus 900 having one or more processors 902, a control module (chipset) 904 coupled to at least one of the (one or more) processors 902, a memory 906 coupled to the control module 904, a nonvolatile memory (NVM)/storage device 908 coupled to the control module 904, one or more input-output devices 910 coupled to the control module 904, and a network interface 912 coupled to the control module 904.
The processor 902 may include one or more single-core or multi-core processors, and the processor 902 may include any combination of general-purpose processors and special-purpose processors (such as graphics processors, application processors, baseband processors, etc.). In some embodiments, the apparatus 900 can serve as a device such as the server at the transcoding end described in the embodiments of the present application.
In some embodiments, the apparatus 900 may include one or more computer-readable media (e.g., the memory 906 or the NVM/storage device 908) with instructions 914, and one or more processors 902 combined with the one or more computer-readable media and configured to execute the instructions 914 to realize the modules, thereby executing the actions described in the disclosure.
For one embodiment, the control module 904 may include any suitable interface controller to provide any appropriate interface to at least one of the (one or more) processors 902 and/or to any suitable device or component communicating with the control module 904.
The control module 904 may include a memory controller module to provide an interface to the memory 906. The memory controller module can be a hardware module, a software module and/or a firmware module.
The memory 906 can be used, for example, to load and store data and/or instructions 914 for the apparatus 900. For one embodiment, the memory 906 may include any suitable volatile memory, for example, suitable DRAM. In some embodiments, the memory 906 may include double data rate type four synchronous dynamic random access memory (DDR4 SDRAM).
For one embodiment, the control module 904 may include one or more input-output controllers to provide interfaces to the NVM/storage device 908 and the (one or more) input-output devices 910.
For example, the NVM/storage device 908 can be used to store data and/or instructions 914. The NVM/storage device 908 may include any suitable nonvolatile memory (for example, flash memory) and/or may include any suitable (one or more) nonvolatile storage devices (for example, one or more hard disk drives (HDD), one or more compact disc (CD) drives and/or one or more digital versatile disc (DVD) drives).
The NVM/storage device 908 may include a storage resource that is physically part of the device on which the apparatus 900 is installed, or it may be accessible by that device without necessarily being part of it. For example, the NVM/storage device 908 can be accessed over a network via the (one or more) input-output devices 910.
The (one or more) input-output devices 910 can provide an interface for the apparatus 900 to communicate with any other appropriate device; the input-output devices 910 may include communication components, audio components, sensor components, etc. The network interface 912 can provide an interface for the apparatus 900 to communicate over one or more networks, and the apparatus 900 can communicate wirelessly with one or more components of a wireless network according to any of one or more wireless network standards and/or protocols, e.g., accessing a wireless network based on a communication standard such as WiFi, 2G, 3G, 4G or a combination thereof for wireless communication.
For one embodiment, at least one of the (one or more) processors 902 can be packaged together with the logic of one or more controllers (for example, the memory controller module) of the control module 904. For one embodiment, at least one of the (one or more) processors 902 can be packaged together with the logic of one or more controllers of the control module 904 to form a system in package (SiP). For one embodiment, at least one of the (one or more) processors 902 can be integrated on the same die with the logic of one or more controllers of the control module 904. For one embodiment, at least one of the (one or more) processors 902 can be integrated on the same die with the logic of one or more controllers of the control module 904 to form a system on chip (SoC).
In various embodiments, the apparatus 900 can be, but is not limited to: a server, a desktop computing device, or a terminal device such as a mobile computing device (for example, a laptop computing device, a handheld computing device, a tablet computer, a netbook, etc.). In various embodiments, the apparatus 900 can have more or fewer components and/or different architectures. For example, in some embodiments, the apparatus 900 includes one or more cameras, a keyboard, a liquid crystal display (LCD) screen (including a touch screen display), a nonvolatile memory port, multiple antennas, a graphics chip, an application-specific integrated circuit (ASIC) and a loudspeaker.
As for the apparatus embodiments, since they are basically similar to the method embodiments, the description is relatively simple; for relevant parts, refer to the corresponding description of the method embodiments.
Each embodiment in this specification is described in a progressive manner; each embodiment highlights its differences from the other embodiments, and the same or similar parts between the embodiments can be referred to each other.
Those skilled in the art should understand that the embodiments of the present application may be provided as a method, an apparatus or a computer program product. Therefore, the embodiments of the present application may take the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware aspects. Moreover, the embodiments of the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical memory, etc.) containing computer-usable program code.
The embodiments of the present application are described with reference to flowcharts and/or block diagrams of the method, the terminal device (system) and the computer program product according to the embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be realized by computer program instructions. These computer program instructions can be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor or other programmable data processing terminal device to generate a machine, so that the instructions executed by the processor of the computer or other programmable data processing terminal device generate an apparatus for realizing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions may also be stored in a computer-readable memory capable of guiding a computer or other programmable data processing terminal device to operate in a specific manner, so that the instructions stored in the computer-readable memory generate a manufacture including an instruction apparatus, the instruction apparatus realizing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal device, so that a series of operation steps are executed on the computer or other programmable terminal device to generate computer-implemented processing, and thus the instructions executed on the computer or other programmable terminal device provide steps for realizing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
Although preferred embodiments of the embodiments of the present application have been described, once a person skilled in the art learns the basic creative concept, additional changes and modifications can be made to these embodiments. Therefore, the appended claims are intended to be interpreted as including the preferred embodiments and all changes and modifications within the scope of the embodiments of the present application.
Finally, it should also be noted that, herein, relational terms such as first and second are used merely to distinguish one entity or operation from another entity or operation, without necessarily requiring or implying any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or terminal device including a series of elements includes not only those elements but also other elements not explicitly listed, or further includes elements intrinsic to this process, method, article or terminal device. In the absence of more restrictions, an element limited by the sentence "including a ..." does not exclude the existence of other identical elements in the process, method, article or terminal device including the element.
The data processing method, data processing apparatus, server side and storage medium provided herein have been described in detail above. Specific cases are used herein to illustrate the principles and embodiments of the present application, and the description of the above embodiments is only used to help understand the method of the present application and its core ideas. Meanwhile, for those skilled in the art, according to the ideas of the present application, there will be changes in the specific implementation manner and application range. In summary, the contents of this specification should not be construed as limiting the present application.