CN104978278B - Data processing method and device - Google Patents

Data processing method and device

Info

Publication number: CN104978278B
Authority: CN (China)
Prior art keywords: memory, data, target data, processing, application
Legal status: Active
Application number: CN201410148478.9A
Other languages: Chinese (zh)
Other versions: CN104978278A (en)
Inventor: 童寅
Current assignee: Advanced New Technologies Co Ltd
Original assignee: Advanced New Technologies Co Ltd
Application filed by Advanced New Technologies Co Ltd; priority to CN201410148478.9A; published as CN104978278A, granted as CN104978278B.

Landscapes

  • Stored Programmes (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses a data processing method and device. The method comprises the following steps: receiving a request to process target data sent by at least one application among a plurality of applications; searching, according to the request, for the address information of the target data in a memory, wherein the target data stored in the memory is shared among the plurality of applications; and processing the target data in the memory indicated by the address information, and updating the processed target data in the memory. The method and device solve the technical problem in the prior art that processing is slow because data must be called from the data file corresponding to each application, and achieve the technical effect of sharing the memory data corresponding to the applications, thereby improving the data processing speed.

Description

Data processing method and device
Technical Field
The present application relates to the field of computers, and in particular, to a data processing method and apparatus.
Background
At present, existing methods for processing data in applications generally adopt one of the following two approaches:
first, each individual application relies on its own local memory, and data processing is performed on the data in each application's local memory. This not only wastes memory space but also incurs a large amount of communication overhead and IO overhead during data updates; in addition, because each application's local memory holds its own copy of the data, keeping those copies synchronized carries an update cost.
Second, a third-party storage component is relied upon, and the stored data is processed through this newly introduced component. This raises the maintenance cost of the equipment, and because the third-party storage component cannot be conveniently started and changed along with changes in the application, it further causes performance loss due to network IO.
In short, the data processing methods in the prior art suffer from technical problems such as wasted memory space, high data-update overhead, high equipment cost, and the inability of the data to be processed to change along with changes in the application.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiments of the application provide a data processing method and device, which at least solve the technical problem in the prior art of low processing speed caused by data having to be called from the data files corresponding to the applications.
According to an aspect of the embodiments of the present application, there is provided a data processing method, including: receiving a request to process target data sent by at least one application among a plurality of applications; searching for the address information of the target data in a memory according to the request, wherein the target data stored in the memory is shared among the plurality of applications; and processing the target data in the memory indicated by the address information, and updating the processed target data in the memory.
Optionally, before receiving a request for processing the target data sent by at least one of the plurality of applications, the method further includes: all or part of data to be processed by a plurality of applications is mapped into a memory, and address information for identifying the position of the data in the memory is established.
Optionally, mapping all or part of the data to be processed by the plurality of applications into the memory includes: judging whether the size of all the data of the plurality of applications is less than or equal to a capacity threshold of the memory; if the size of all the data of the plurality of applications is judged to be less than or equal to the capacity threshold of the memory, mapping all the data of the plurality of applications into the memory; and if the size of all the data of the plurality of applications is judged to be greater than the capacity threshold of the memory, mapping part of the data of the plurality of applications into the memory.
Optionally, if it is determined that the size of all the data of the multiple applications is greater than the capacity threshold of the memory, mapping part of the data of the multiple applications into the memory includes: mapping initialization data at the start-up of the multiple applications into the memory, wherein the initialization data comprises: the data stored in the memory before the last closing of the multiple applications; and continuing the mapping under the condition that the memory further satisfies a predetermined condition, wherein the predetermined condition comprises: the data in the memory, beyond the initialization data, has not yet reached the capacity threshold.
Optionally, processing the target data in the memory indicated by the address information and updating the processed target data in the memory includes: after a first request for processing data sent by a first application of the multiple applications is received, processing the target data in the memory indicated by the address information according to first operation information carried in the first request to obtain processed target data, and updating the processed target data in the memory; and after a second request for processing data sent by a second application of the multiple applications is received, reprocessing the processed target data according to second operation information carried in the second request to obtain reprocessed target data, and updating the reprocessed target data in the memory.
Optionally, the processing of the target data comprises at least one of: loading target data in the memory; modifying target data in the memory; and deleting the target data in the memory.
Optionally, the method further comprises: and storing the processed data in the memory to a corresponding disk file according to a preset requirement.
According to another aspect of the embodiments of the present application, there is also provided a data processing apparatus, including: a receiving unit configured to receive a request for processing target data sent by at least one of the plurality of applications; the searching unit is used for searching the address information of the target data in the memory according to the request, wherein the target data stored in the memory is shared by a plurality of applications; and the processing unit is used for processing the target data in the memory indicated by the address information and updating the processed target data in the memory.
Optionally, the apparatus further comprises: and the mapping unit is used for mapping all or part of data to be processed by a plurality of applications into the memory and establishing address information for identifying the position of the data in the memory.
Optionally, the mapping unit includes: a judging module, used for judging whether the size of all the data of the plurality of applications is less than or equal to the capacity threshold of the memory; a first mapping module, used for mapping all the data of the plurality of applications into the memory when the size of all the data of the plurality of applications is judged to be less than or equal to the capacity threshold of the memory; and a second mapping module, used for mapping part of the data of the plurality of applications into the memory when the size of all the data of the plurality of applications is judged to be greater than the capacity threshold of the memory.
Optionally, the second mapping module comprises: the first mapping submodule is used for mapping initialization data when a plurality of applications are started into a memory, wherein the initialization data comprises: data stored in the memory before the last closing of the plurality of applications; a second mapping submodule, configured to continue mapping when a capacity threshold of the memory further satisfies a predetermined condition, where the predetermined condition includes: the data in the memory has not reached the capacity threshold in addition to the initialization data.
Optionally, the processing unit comprises: a first processing module, used for, after receiving a first request for processing data sent by a first application of the plurality of applications, processing the target data in the memory indicated by the address information according to first operation information carried in the first request to obtain processed target data, and updating the processed target data in the memory; and a second processing module, used for, after receiving a second request for processing data sent by a second application of the plurality of applications, reprocessing the processed target data according to second operation information carried in the second request to obtain reprocessed target data, and updating the reprocessed target data in the memory.
Optionally, the processing unit comprises at least one of: the loading module is used for loading the target data in the memory; the modification module is used for modifying the target data in the memory; and the deleting module is used for deleting the target data in the memory.
Optionally, the apparatus further comprises: and the storage unit is used for storing the processed data in the memory to the corresponding disk file according to a preset requirement.
In the embodiments of the application, the data to be processed by the plurality of applications is mapped into the corresponding memory, and address information uniquely identifying the location of the data is established. At least one application among the plurality sends a request to process target data, the address information of the target data in the memory is looked up according to the request, and the target data in the memory indicated by the address information is then processed.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a flow chart of an alternative data processing method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an alternative data processing method according to an embodiment of the present application;
FIG. 3 is a flow diagram of another alternative data processing method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of another alternative data processing method according to an embodiment of the present application;
FIG. 5 is a schematic diagram of an alternative data processing apparatus according to an embodiment of the present application; and
FIG. 6 is a schematic diagram of another alternative data processing apparatus according to an embodiment of the present application.
Detailed Description
The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
Example 1
According to an embodiment of the present application, there is provided a data processing method, as shown in fig. 1, the method including:
s102, receiving a request for processing target data sent by at least one application in a plurality of applications;
optionally, in this embodiment, the data processing method may be applied to a data processing scenario in a terminal application, where the terminal may include, but is not limited to, at least one of the following: a mobile phone, a tablet computer. Optionally, the processing of the target data in this embodiment includes at least one of the following operations: loading the target data in the memory; modifying the target data in the memory; and deleting the target data in the memory. Optionally, in this embodiment, a cache component receives the request for processing the target data, where the cache component may be located in each application respectively.
For example, referring to fig. 2, assume that application A is a video playing application, application B is an instant messaging application, and application C is a location navigation application. The applications share data in the memory, and when any one application requests to process target data in the memory, cross-process processing can be implemented and the processed target data is updated in the memory. For example, after receiving a request from application B to load target data related to user login information, the cache component loads the user login information to the corresponding location in the memory through the address information corresponding to the request, and updates the target data in the memory after the loading is completed. When the cache component then receives a request from application C to modify the target data of the user login information, it can learn from the request that the requested address information is consistent with the address information requested by application B, modify the target data (namely the user login information) at the same position, and update the modified target data in the memory. The above is only an example, and the present application is not limited thereto.
Optionally, the data processing method in this embodiment may include, but is not limited to: before receiving a request for processing target data sent by at least one application of the plurality of applications, mapping all or part of the data to be processed by the plurality of applications into the memory. Optionally, the partial data in this embodiment includes, but is not limited to: an empty set, or initialization data. The initialization data includes, but is not limited to: the data stored in the memory by the plurality of applications before the last shutdown.
Optionally, in this embodiment, the data processed in the memory may be, but is not limited to being, stored to the corresponding disk file according to a predetermined requirement, where the predetermined requirement may include, but is not limited to: a timer, or a request sent by the user. The modified data in the memory is stored to the disk file at an appropriate time, so that the application's modifications in the memory are persisted to the corresponding disk file.
S104, searching the address information of the target data in the memory according to the request;
optionally, the target data stored in the memory in this embodiment is shared among the multiple applications. That is, the multiple applications may share the data in the memory and perform cross-process processing on the target data of the address range indicated by the same address information.
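The cross-process sharing described above can be sketched with a memory-mapped file, one plausible realization of mapping shared data into memory (the patent does not name a specific mechanism; the file name and region size below are illustrative):

```python
import mmap
import os
import tempfile

# Two handles onto the same memory-mapped file stand in for two
# applications sharing one region of memory. The path is hypothetical.
path = os.path.join(tempfile.mkdtemp(), "shared_region")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)               # pre-size the shared region

fd_a = open(path, "r+b")                   # "application A"
fd_b = open(path, "r+b")                   # "application B"
region_a = mmap.mmap(fd_a.fileno(), 4096)  # shared mapping (MAP_SHARED)
region_b = mmap.mmap(fd_b.fileno(), 4096)

region_a[0:5] = b"login"                   # A writes target data at an address
data_seen_by_b = bytes(region_b[0:5])      # B reads the same address range
```

Because both mappings are backed by the same file, a write through one mapping is immediately visible through the other, which is the cross-process sharing effect described in the text.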
Optionally, the address information in this embodiment includes, but is not limited to, the space name of a namespace and the file name of a file within that namespace. Different data in an application first correspond to different namespaces and are then further identified by different files, so that the data in the memory can be uniquely identified through the files in the address information. For example, the address information of the location indicated by file a in namespace A may be identified by A-a, and the specific address range indicated by the address information may be 0x0000 (memory start location) to 0xFFFF (memory end location). Optionally, the address information established by the applications at initialization in this embodiment is not uniquely fixed.
Optionally, in this embodiment, when mapping all or part of data to be processed by multiple applications into the memory, address information for identifying a location of the data in the memory may also be established, which may specifically include the following steps:
s1, judging whether the namespace contains a file for uniquely identifying the position of the data in the memory;
s2, if the namespace includes the file for uniquely identifying the position of the data in the memory, the path formed by the namespace and the file name of the file is used as the address information corresponding to the request;
and S3, if the file for uniquely identifying the position of the data in the memory is not contained in the name space, creating a new file, and taking a path formed by the name space and the file name of the created file as the address information corresponding to the request.
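Steps S1-S3 above can be sketched as follows; the registry layout, function name, and default region size are assumptions for illustration, not details from the patent:

```python
# namespace -> {file_name: (start, end) address range}
registry = {}
next_start = [0x0000]  # next free memory offset, advanced on each creation

def resolve_address(namespace, file_name, size=0x10000):
    """Return (path identifier, address range) for a (namespace, file) pair."""
    files = registry.setdefault(namespace, {})         # S1: inspect the namespace
    if file_name not in files:                         # S3: create a new file entry
        start = next_start[0]
        files[file_name] = (start, start + size - 1)
        next_start[0] = start + size
    # S2: the namespace plus file name forms the address information
    return f"{namespace}-{file_name}", files[file_name]

key, (start, end) = resolve_address("A", "a")          # identified as "A-a"
```

A repeated lookup of the same pair returns the already-established range, which is how a second application reuses the address information instead of re-establishing it.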
For example, as shown in fig. 2, when application A among the plurality of applications is started, the cache component is initialized together with it, and the address information of the memory data is established in the cache component. For example, the address information of the address range indicated by file a in namespace A may be identified by A-a, and the specific address range indicated by the address information may be 0x0000 (memory start location) to 0xFFFF (memory end location).
Further, when it is learned from the received request for processing target data sent by application B that application B also needs to process the target data in the address range indicated by file a in namespace A, the address information of the memory data does not need to be re-established for application B; the target data in the corresponding address range can be processed directly through the established address information, achieving the effect of different applications sharing the data in the memory.
This is further described with reference to Table 1. The address information consists of the namespace corresponding to the processing request (identified by a capital letter) and the file name of a file within that namespace (identified by a lowercase letter); Table 1 shows the data stored in the memory shared by the three applications.
TABLE 1
[Table 1: data stored in the memory shared by the three applications — rendered as an image in the original; the cell contents are not recoverable from this copy.]
And S106, processing the target data in the memory indicated by the address information, and updating the processed target data in the memory.
For example, taking application A loading target data as an example, with application A, application B, and application C sharing the data in the memory: when the cache component receives a request from application A to load target data related to user login information, the cache component loads the user login information into the corresponding address range in the memory through the address information corresponding to the request (for example, the address information of the address range indicated by file a in namespace A), and updates the target data in the memory after the loading is completed.
According to the embodiment provided by the application, the corresponding address information of the target data in the memory is searched according to the request for processing the target data sent by at least one application in the multiple applications, wherein the target data stored in the memory is shared to the multiple applications, the target data in the address range indicated by the address information is processed, and the processed target data is updated in the memory, so that the data in the memory is shared by the multiple applications across processes, and the data processing speed is further improved.
As an optional scheme, before receiving a request for processing target data sent by at least one application of the plurality of applications, the method further includes:
s1, mapping all or part of the data to be processed by the plurality of applications to the memory, and establishing address information for identifying the location of the data in the memory.
Optionally, in this embodiment, the data to be processed may be, but is not limited to being, mapped in full or in part. Optionally, the condition determining whether to map all the data into the memory in this embodiment includes, but is not limited to: whether the data size of the total data is less than or equal to the capacity threshold of the memory. Optionally, the partial data in this embodiment includes, but is not limited to: an empty set, or initialization data. The initialization data includes, but is not limited to: the data stored in the memory by the plurality of applications before the last shutdown. Optionally, in this embodiment, the data stored in the memory before the previous shutdown of the multiple applications may be historical data, for example, the login information of the user or the history information accessed by the user. The above example is only an example, and the present embodiment does not limit this.
A method of establishing address information for identifying the location of data in the memory is further described with reference to an example. For example, when application A is started, the cache component initializes and establishes the address information for identifying the location of data in the memory. Assuming the address information initialized and established by application A this time is file a in namespace A, it is identified by A-a, and the specific address range indicated by the address information may be 0x0000 (memory start location) to 0xFFFF (memory end location). When application B is started, it also sends a processing request for the target data of the address range indicated by this address information (i.e., file a in namespace A, identified by A-a), so that application B can process the data cross-process, achieving the purpose of sharing the data in the memory.
According to the embodiment provided by the application, the target data to be processed is mapped into the memory firstly, and the address information used for identifying the position in the memory is established for the target data, so that the target data in the memory can be shared by the different applications through the same address information, the data processing speed is increased, and the user experience is improved.
As an optional scheme, mapping all or part of data to be processed by a plurality of applications into a memory includes:
s1, judging whether the size of all the data of the plurality of applications is less than or equal to the capacity threshold of the memory;
s2, if the size of all the data of the plurality of applications is judged to be smaller than or equal to the capacity threshold value of the memory, all the data of the plurality of applications are mapped into the memory;
s3, if it is determined that the size of all the data of the plurality of applications is larger than the capacity threshold of the memory, mapping part of the data of the plurality of applications to the memory.
For example, when the memory size threshold is 500M, it is determined whether the size of all data to be processed by the application a that transmits the request for processing the target data satisfies a condition smaller than the memory size threshold (i.e., 500M).
For another example, when the capacity threshold of the memory is 500M, assuming that all data to be processed by the application a that sends the request for processing the target data is 200M, that is, the size of all data of the application a is smaller than the capacity threshold of the memory, all data of the application a is mapped into the memory.
For another example, when the capacity threshold of the memory is 500M, assuming that all data to be processed by the application a that sends the request for processing data is 800M, that is, the size of all data of the application is greater than the capacity threshold of the memory, only part of the data of the application a is mapped into the memory.
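The capacity check in the examples above can be sketched as follows; the helper name and return convention are hypothetical, and sizes are in MB as in the examples:

```python
CAPACITY_THRESHOLD_MB = 500  # capacity threshold of the memory (from the examples)

def choose_mapping(total_data_mb, init_data_mb=0):
    """Decide between full and partial mapping, per steps S1-S3."""
    if total_data_mb <= CAPACITY_THRESHOLD_MB:
        # all application data fits: map everything
        return ("full", total_data_mb)
    # otherwise map only part of the data, starting from the
    # initialization data stored before the last shutdown
    return ("partial", min(init_data_mb, CAPACITY_THRESHOLD_MB))
```

With the numbers from the text, 200M of data maps in full, while 800M of data falls back to a partial mapping that begins with the initialization data.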
According to the embodiment provided by the application, the target data to be processed is mapped in a proper amount by comparing the size of the data to be mapped and the capacity threshold of the memory, so that the memory can carry out reasonable data mapping in the range of the capacity threshold, and system faults caused by overlarge mapping quantity are avoided.
As an alternative, as shown in fig. 3, if it is determined that the size of all the data of the multiple applications is greater than the capacity threshold of the memory, mapping part of the data of the multiple applications to the memory includes:
s302, mapping initialization data when a plurality of applications are started into a memory;
optionally, the initialization data in this embodiment may include: the data stored in the memory before the previous closing of the multiple applications. In this embodiment, this data may be historical data, for example, the login information of the user or the history information accessed by the user. The above example is only an example, and the present embodiment does not limit this.
For example, when the capacity threshold of the memory is 500M and all the data to be processed by application A, application B, and application C amounts to 800M, i.e., greater than the capacity threshold, only the initialization data of the applications at start-up will be mapped. For another example, if application A includes 10M of user login information from before its last closing, application B includes 50M of history information accessed by the user before its last closing, and application C stored no data before its last closing, then based on the above determination, the total of 60M of initialization data of applications A, B, and C at start-up will be mapped into the memory.
And S304, continuing to map under the condition that the capacity threshold value of the memory also meets a preset condition.
Optionally, the predetermined condition in this embodiment includes: the data in the memory has not reached the capacity threshold in addition to the initialization data. For example, after the application a is started and the initialization data is mapped to the memory, and the data in the memory does not reach the capacity threshold of the memory, the target to be processed by the application a may be continuously mapped to the memory.
Optionally, in this embodiment, when the application runs, a request is made to load new target data, and if the loaded target data exceeds the address range indicated by the address information of the memory, the mapped data in the memory may be replaced with the newly loaded data.
For example, when the application a runs normally, a request for loading new target data is sent to the cache component, the newly loaded target data is mapped according to the address range indicated by the address information, and if the size of the data to be loaded exceeds the address range indicated by the address information of the memory, the mapped data in the memory may be replaced with the newly loaded data. Referring to fig. 4, assume that there are 8 location spaces in the memory, wherein the mapped target data are 1000, 0001, 0010, 1011, 0100, 0101, 0110, and 0111, respectively, and the address range indicated by the address information is already occupied, and if the target data 1111, 1001, and 1010 needs to be loaded, the target data 1111, 1001, and 1010 needs to replace the mapped target data 1000, 0001, and 0010 in the memory.
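The replacement behavior in the fig. 4 example can be sketched as follows. The patent does not specify which mapped entries are replaced first; a first-in-first-out policy is assumed here because it matches the example, where the three earliest entries (1000, 0001, 0010) are displaced:

```python
from collections import deque

SLOTS = 8  # the memory holds 8 location spaces, as in fig. 4
mapped = deque(["1000", "0001", "0010", "1011",
                "0100", "0101", "0110", "0111"])

def load(items):
    """Map new target data, replacing the earliest-mapped entries when full."""
    for item in items:
        if len(mapped) >= SLOTS:
            mapped.popleft()       # replace the oldest mapped data
        mapped.append(item)

load(["1111", "1001", "1010"])     # displaces 1000, 0001 and 0010
```

After the load, the memory still holds exactly 8 entries: the five surviving originals plus the three newly loaded values.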
Through the embodiment provided by the application, when the size of all the data of the plurality of applications is judged to be larger than the capacity threshold of the memory, part of the data of the plurality of applications is mapped into the memory, and similarly, when the size of the target data requested by the applications is larger than the address range indicated by the address information, the mapped data in the memory can be replaced by the newly loaded data, so that the old data is covered by the new data, and the data space is greatly saved.
As an optional scheme, processing target data in the memory indicated by the address information, and updating the processed target data in the memory includes:
s1, after receiving a first request for processing data sent by a first application of the multiple applications, processing target data in the memory indicated by the address information according to first operation information carried in the first request to obtain processed target data, and updating the processed target data in the memory;
and S2, after receiving a second request for processing data sent by a second application of the multiple applications, reprocessing the processed target data according to second operation information carried in the second request to obtain reprocessed target data, and updating the reprocessed target data in the memory.
For example, as shown in fig. 2, assume that application A, application B, and application C share data in the memory. After the cache component receives a first request sent by application B, where the first request carries first operation information for loading target data related to user login information, the user login information is loaded into the address range indicated by the address information in the memory through the corresponding address information obtained from the first request, and the target data in the memory is updated after the loading is completed. After the cache component receives a second request sent by application C, where the second request carries second operation information for modifying the target data related to the user login information, and it is learned through the request that the requested address information is consistent with the address information requested by application B, the cache component modifies the target data (i.e., the user login information) in the same address range and updates the modified target data in the memory.
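The two-request sequence in steps S1-S2 can be sketched as follows; the operation names, store layout, and address key format are hypothetical:

```python
# A dictionary stands in for the shared memory, keyed by address information.
memory = {}

def handle_request(address_key, operation, payload=None):
    """Process one application's request against the shared store."""
    if operation == "load":
        memory[address_key] = payload                         # first request: load
    elif operation == "modify":
        memory[address_key] = memory[address_key] + payload   # second request: modify
    return memory[address_key]

handle_request("A-a", "load", "user-login")           # first request (application B)
result = handle_request("A-a", "modify", ":updated")  # second request (application C)
```

Because both requests resolve to the same address key, the second application's modification operates directly on the data the first application loaded, and the updated value is what remains in the shared store.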
According to the embodiment provided by the application, different applications can directly process the data in the memory by sharing the data in the memory, and the data processing speed is further improved.
As an alternative, the processing of the target data includes at least one of the following operations:
S1, loading the target data in the memory;
S2, modifying the target data in the memory;
and S3, deleting the target data in the memory.
As an optional scheme, the method further comprises:
and S1, storing the processed data in the memory to the corresponding disk file according to the preset requirement.
Optionally, in this embodiment, the data processed in the memory may be, but is not limited to being, stored to the corresponding disk file according to a predetermined requirement. The predetermined requirement may include, but is not limited to: a timed interval, or a request issued by a user. Storing the modified data in the memory to the disk file at a suitable time makes the application's modifications in the memory persistent in the corresponding disk file.
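The persistence step described above — in-memory modifications flushed to the backing disk file at a suitable time — can be sketched with Python's standard `mmap` module. The file name and the 64-byte region size are assumptions for illustration, not values from the patent.

```python
# Illustrative sketch: modify a memory-mapped region, then flush it so
# the change becomes persistent in the corresponding disk file.
import mmap
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "shared.dat")
with open(path, "wb") as f:
    f.write(b"\x00" * 64)            # back the mapping with a disk file

with open(path, "r+b") as f:
    mem = mmap.mmap(f.fileno(), 64)  # modifications happen in memory
    mem[0:5] = b"alice"              # e.g. updated user login information
    mem.flush()                      # persist the in-memory change to disk
    mem.close()

with open(path, "rb") as f:
    on_disk = f.read(5)              # the modification survives on disk
```

In practice the flush could be driven by a timer or by an explicit user request, matching the "predetermined requirement" in the text.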
This application provides a preferred embodiment to further explain this application, but it should be noted that this preferred embodiment is only for better describing this application and should not be construed as unduly limiting this application.
Example 2
According to an embodiment of the present application, there is also provided a data processing apparatus, as shown in fig. 5, in this embodiment, the apparatus includes:
(1) a receiving unit 502, configured to receive a request for processing target data sent by at least one application of a plurality of applications;
optionally, in this embodiment, the data processing method may be applied to a data processing scenario in a terminal application, where the terminal may include, but is not limited to, at least one of the following: a mobile phone, a tablet computer. Optionally, the processing of the target data in this embodiment includes at least one of the following operations: loading the target data in the memory; modifying the target data in the memory; and deleting the target data in the memory. Optionally, in this embodiment, a cache component receives the request for processing the target data, where the cache component in this embodiment may be located in each application respectively.
For example, referring to fig. 2, assume that the application A is a video playing application, the application B is an instant messaging application, and the application C is a position navigation application, and the applications share data in the memory. When any one application requests to process target data in the memory, cross-process processing can be implemented, and the processed target data is updated in the memory. For example, after receiving a request of the application B for loading target data related to user login information, the cache component loads the user login information to the corresponding location in the memory through the address information corresponding to the request, and updates the loaded target data in the memory; after receiving a request of the application C for modifying the target data of the user login information, the cache component can learn from the request that the address information requested by the application C is consistent with the address information requested by the application B, modify the target data (namely the user login information) at the same position, and update the modified target data in the memory. The above examples are only examples, and the present application is not limited thereto.
Optionally, the data processing method in this embodiment may include, but is not limited to: before receiving a request sent by at least one application in the plurality of applications for processing target data, mapping all or part of data to be processed by the plurality of applications into a memory. Optionally, the partial data in this embodiment includes but is not limited to: null, initialization data. Wherein the initialization data includes but is not limited to: data stored in memory by a plurality of applications prior to the last shutdown.
Optionally, in this embodiment, the data processed in the memory may be, but is not limited to being, stored to the corresponding disk file according to a predetermined requirement. The predetermined requirement may include, but is not limited to: a timed interval, or a request issued by a user. Storing the modified data in the memory to the disk file at a suitable time makes the application's modifications in the memory persistent in the corresponding disk file.
(2) A searching unit 504, configured to search address information of target data in a memory according to the request, where the target data stored in the memory is shared by multiple applications;
optionally, the target data stored in the memory in this embodiment is shared by multiple applications. That is, multiple applications may share data in the memory and perform cross-process processing on target data of the address range indicated by the same address information.
Optionally, the address information in this embodiment includes, but is not limited to, a space name of a namespace and a file name of a file in the corresponding namespace. Different data in an application first corresponds to different namespaces, and is then identified in a refined manner by different files, so that the data in the memory can be uniquely identified by the different files in the address information. For example, the address information of the location indicated by file a in namespace A may be identified by A-a, and the specific address range indicated by the address information may be 0x0000 (memory start location) to 0xFFFF (memory end location). Optionally, the address information established by the applications at initialization in this embodiment is not uniquely fixed.
Optionally, in this embodiment, when mapping all or part of data to be processed by multiple applications into the memory, address information for identifying a location of the data in the memory may also be established, which may specifically include the following steps:
S1, judging whether the namespace contains a file for uniquely identifying the position of the data in the memory;
S2, if the namespace includes the file for uniquely identifying the position of the data in the memory, the path formed by the namespace and the file name of the file is used as the address information corresponding to the request;
and S3, if the file for uniquely identifying the position of the data in the memory is not contained in the name space, creating a new file, and taking a path formed by the name space and the file name of the created file as the address information corresponding to the request.
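Steps S1–S3 above can be sketched as a lookup-or-create routine. The single-letter file-naming scheme mirrors the A-a example in this document and is purely illustrative; the function and variable names are assumptions, not from the patent.

```python
# Sketch of S1-S3: check whether the namespace already contains a file
# identifying the data's location in memory; if so, reuse that path as
# the address information, otherwise create a new file entry.

def address_info(namespaces, namespace, data_key):
    files = namespaces.setdefault(namespace, {})
    if data_key in files:
        # S1/S2: the file exists, so reuse namespace + file name as the path
        return f"{namespace}-{files[data_key]}"
    # S3: no such file -- create one and use the new path
    new_file = chr(ord("a") + len(files))
    files[data_key] = new_file
    return f"{namespace}-{new_file}"

namespaces = {}
first = address_info(namespaces, "A", "user_login")   # creates file "a" -> "A-a"
second = address_info(namespaces, "A", "user_login")  # reuses the same path
```

The second call returns the same path as the first, which is what lets a later application (application B in the example below) process the same address range without re-establishing the address information.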
For example, as shown in fig. 2, when an application A in the plurality of applications is started, the cache component is initialized together, and address information of the memory data is established in the cache component. For example, the address information of the address range indicated by file a in namespace A may be identified by A-a, and the specific address range indicated by the address information may be 0x0000 (memory start location) to 0xFFFF (memory end location).
Further, when it is known from the received request for processing the target data sent by the application B that the application B also needs to process the target data in the address range indicated by the file a in the namespace a, the application B does not need to reestablish the address information of the memory data, and can directly process the target data in the corresponding address range through the established address information, so as to achieve the effect that different applications share the data in the memory.
This is further described with reference to Table 2. The address information includes the namespace corresponding to the processing request, identified by a capital letter, and the file name of a file included in that namespace, identified by a lowercase letter; Table 2 shows the data stored in the shared memory by the three applications.
TABLE 2
(Table 2 is reproduced as an image in the original; it lists, for the three applications, the namespace, the file name, and the data stored in the shared memory.)
(3) The processing unit 506 is configured to process the target data in the memory indicated by the address information, and update the processed target data in the memory.
For example, take the application A loading target data as an example, where the application A, the application B, and the application C share data in the memory: when the cache component receives a request for loading target data related to user login information from the application A, the cache component loads the user login information into the corresponding address range in the memory through the address information corresponding to the request (for example, the address information of the address range indicated by file a in namespace A), and updates the loaded target data in the memory.
According to the embodiment provided by the application, the corresponding address information of the target data in the memory is searched according to the request for processing the target data sent by at least one application in the multiple applications, wherein the target data stored in the memory is shared to the multiple applications, the target data in the address range indicated by the address information is processed, and the processed target data is updated in the memory, so that the data in the memory is shared by the multiple applications across processes, and the data processing speed is further improved.
As an alternative, as shown in fig. 6, in this embodiment, the apparatus further includes:
(1) a mapping unit 602, configured to map all or part of data to be processed by multiple applications into a memory, and establish address information for identifying a location of the data in the memory.
Optionally, in this embodiment, the data to be processed may be, but is not limited to being, mapped in whole or in part. Optionally, the condition for deciding whether to map all data into the memory in this embodiment includes, but is not limited to: whether the size of all the data is less than or equal to the capacity threshold of the memory. Optionally, the partial data in this embodiment includes, but is not limited to: null, initialization data. The initialization data includes, but is not limited to: data stored in the memory by the plurality of applications before the last shutdown. Optionally, in this embodiment, the data stored in the memory before the previous shutdown of the multiple applications may be historical data, for example, login information of the user, or history information accessed by the user. The above example is only an example, and the present embodiment does not limit this.
A method of establishing address information for identifying the location of data in the memory is further described with an example. For example, when the application A is started, the cache component initializes and establishes address information for identifying the location of data in the memory. Assume that the address information that the application A initializes and establishes this time is file a in namespace A, identified by A-a, and that the specific address range indicated by the address information is 0x0000 (memory start location) to 0xFFFF (memory end location). When the application B is started, it also sends a processing request for the target data of the address range indicated by this address information (i.e., file a in namespace A, identified by A-a), so that the application B can process the data in a cross-process manner, achieving the purpose of sharing the data in the memory.
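The sharing described above — two applications reaching the same memory region through the same address information — can be sketched with two memory mappings of one backing file. The file name ("A-a.dat", standing in for the A-a address information) and region size are illustrative assumptions; real applications would be separate processes, which file-backed shared mappings also support.

```python
# Sketch: two "applications" open mappings of the same backing file
# (identified by the same address information), so a write by one is
# immediately visible to the other without reloading.
import mmap
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "A-a.dat")  # stands in for address info "A-a"
with open(path, "wb") as f:
    f.write(b"\x00" * 32)

f1 = open(path, "r+b")
app_a = mmap.mmap(f1.fileno(), 32)   # application A's view of the region
f2 = open(path, "r+b")
app_b = mmap.mmap(f2.fileno(), 32)   # application B's view via the same address info

app_a[0:5] = b"login"                # application A writes target data
seen_by_b = app_b[0:5]               # application B reads it directly

app_a.close(); app_b.close(); f1.close(); f2.close()
```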
According to the embodiment provided by the application, the target data to be processed is mapped into the memory firstly, and the address information used for identifying the position in the memory is established for the target data, so that the target data in the memory can be shared by the different applications through the same address information, the data processing speed is increased, and the user experience is improved.
As an optional solution, the mapping unit 602 includes:
(1) the judging module is used for judging whether the size of all data of the plurality of applications is smaller than the capacity threshold of the memory;
(2) the first mapping module is used for mapping all the data of the plurality of applications into the memory when the size of all the data of the plurality of applications is judged to be smaller than or equal to the capacity threshold value of the memory;
(3) and the second mapping module is used for mapping part of the data of the plurality of applications to the memory when judging that the size of all the data of the plurality of applications is larger than the capacity threshold of the memory.
For example, when the capacity threshold of the memory is 500M, it is determined whether the size of all data to be processed by the application A that sends the request for processing the target data is smaller than or equal to the capacity threshold of the memory (i.e., 500M).
For another example, when the capacity threshold of the memory is 500M, assuming that all data to be processed by the application a that sends the request for processing the target data is 200M, that is, the size of all data of the application a is smaller than the capacity threshold of the memory, all data of the application a is mapped into the memory.
For another example, when the capacity threshold of the memory is 500M, assuming that all data to be processed by the application a that sends the request for processing data is 800M, that is, the size of all data of the application is greater than the capacity threshold of the memory, only part of the data of the application a is mapped into the memory.
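The threshold decision in the three examples above can be sketched as follows. The 500M threshold and the 200M/800M sizes follow the examples in the text; the function name and the choice to map only initialization data in the partial case are assumptions consistent with the later description.

```python
# Sketch of the judging / first-mapping / second-mapping modules: map all
# data when it fits under the memory's capacity threshold, otherwise map
# only a part (here, the initialization data).

CAPACITY_THRESHOLD_MB = 500

def plan_mapping(total_mb, init_mb):
    if total_mb <= CAPACITY_THRESHOLD_MB:
        return ("all", total_mb)       # first mapping module: map everything
    return ("partial", init_mb)        # second mapping module: map a part

mode_small = plan_mapping(200, 10)     # 200M <= 500M -> all 200M mapped
mode, mapped = plan_mapping(800, 60)   # 800M > 500M -> only 60M of init data
```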
According to the embodiment provided by the application, the target data to be processed is mapped in a proper amount by comparing the size of the data to be mapped and the capacity threshold of the memory, so that the memory can carry out reasonable data mapping in the range of the capacity threshold, and system faults caused by overlarge mapping quantity are avoided.
As an optional solution, the second mapping module includes:
(1) the first mapping submodule is used for mapping initialization data when a plurality of applications are started into a memory, wherein the initialization data comprises: data stored in the memory before the last closing of the plurality of applications;
optionally, the initialization data in this embodiment may include: the data stored in the memory before the last closing of the plurality of applications may be, in this embodiment, the data stored in the memory before the last closing of the plurality of applications: the history data, for example, may be login information of the user, history information accessed by the user. The above example is only an example, and the present embodiment does not limit this.
For example, when the capacity threshold of the memory is 500M and all data to be processed by the application A, the application B, and the application C is 800M, i.e., greater than the capacity threshold of the memory, only the initialization data of the applications at the time of starting will be mapped. For another example, if the application A contains 10M of user login information from before its last closing, the application B contains 50M of history information accessed by the user from before its last closing, and the application C stored no data before its last closing, then, based on the above determination, the total of 60M of initialization data of the applications A, B, and C when started will be mapped to the memory.
(2) A second mapping submodule, configured to continue mapping when a capacity threshold of the memory further satisfies a predetermined condition, where the predetermined condition includes: the data in the memory has not reached the capacity threshold in addition to the initialization data.
Optionally, the predetermined condition in this embodiment includes: the data in the memory, in addition to the initialization data, has not reached the capacity threshold. For example, after the application A is started and the initialization data is mapped to the memory, if the data in the memory has not reached the capacity threshold of the memory, the target data to be processed by the application A may continue to be mapped to the memory.
Optionally, in this embodiment, when the application runs, a request is made to load new target data, and if the loaded target data exceeds the address range indicated by the address information of the memory, the mapped data in the memory may be replaced with the newly loaded data.
For example, when the application a runs normally, a request for loading new target data is sent to the cache component, the newly loaded target data is mapped according to the address range indicated by the address information, and if the size of the data to be loaded exceeds the address range indicated by the address information of the memory, the mapped data in the memory may be replaced with the newly loaded data. Referring to fig. 4, assume that there are 8 location spaces in the memory, wherein the mapped target data are 1000, 0001, 0010, 1011, 0100, 0101, 0110, and 0111, respectively, and the address range indicated by the address information is already occupied, and if the target data 1111, 1001, and 1010 needs to be loaded, the target data 1111, 1001, and 1010 needs to replace the mapped target data 1000, 0001, and 0010 in the memory.
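The replacement in the fig. 4 example — 8 occupied slots, with 3 newly loaded items overwriting the 3 earliest-mapped ones — can be sketched with a bounded deque. The oldest-first replacement order is an assumption consistent with the figure's description, not a policy the patent states explicitly.

```python
# Sketch of fig. 4: the memory's address range holds 8 mapped items; loading
# three new items evicts the three oldest (1000, 0001, 0010).
from collections import deque

memory = deque(["1000", "0001", "0010", "1011",
                "0100", "0101", "0110", "0111"], maxlen=8)

for new_item in ["1111", "1001", "1010"]:
    memory.append(new_item)   # appending past maxlen drops the oldest item
```

After the loop, the newly loaded 1111, 1001, and 1010 occupy the space previously held by 1000, 0001, and 0010, so old data is covered by new data within the fixed address range.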
Through the embodiment provided by the application, when the size of all the data of the plurality of applications is judged to be larger than the capacity threshold of the memory, only part of the data of the plurality of applications is mapped into the memory. Similarly, when the size of the target data requested by an application exceeds the address range indicated by the address information, the mapped data in the memory can be replaced by the newly loaded data, so that the new data covers the old data and the data space is greatly saved.
As an alternative, the processing unit 506 includes:
(1) the first processing module is used for processing the target data in the memory indicated by the address information according to the first operation information carried in the first request after receiving the first request for processing the data sent by the first application in the plurality of applications, so as to obtain the processed target data, and updating the processed target data in the memory;
(2) and the second processing module is used for reprocessing the processed target data according to second operation information carried in the second request after receiving a second request which is sent by a second application of the plurality of applications and is used for processing the data, so as to obtain reprocessed target data, and updating the reprocessed target data in the memory.
For example, as shown in fig. 2, assume that an application A, an application B, and an application C share data in a memory. After the cache component receives a first request sent by the application B, where the first request carries first operation information for loading target data related to user login information, the cache component loads the user login information into the address range indicated by the address information obtained from the first request, and updates the loaded target data in the memory. After the cache component receives a second request sent by the application C, where the second request carries second operation information for modifying the target data related to the user login information, and it is known from the second request that the address information requested by the application C is consistent with the address information requested by the application B, the cache component modifies the target data (i.e., the user login information) in the same address range, and updates the modified target data in the memory.
According to the embodiment provided by the application, different applications can directly process the data in the memory by sharing the data in the memory, and the data processing speed is further improved.
As an alternative, the processing unit comprises at least one of: the loading module is used for loading the target data in the memory; the modification module is used for modifying the target data in the memory; and the deleting module is used for deleting the target data in the memory.
As an optional solution, in this embodiment, the apparatus further includes:
(1) and the storage unit is used for storing the processed data in the memory to the corresponding disk file according to a preset requirement.
Optionally, in this embodiment, the data processed in the memory may be, but is not limited to being, stored to the corresponding disk file according to a predetermined requirement. The predetermined requirement may include, but is not limited to: a timed interval, or a request issued by a user. Storing the modified data in the memory to the disk file at a suitable time makes the application's modifications in the memory persistent in the corresponding disk file.
This application provides a preferred embodiment to further explain this application, but it should be noted that this preferred embodiment is only for better describing this application and should not be construed as unduly limiting this application.
From the above description, it can be seen that, in the embodiment of the present application, data to be processed by a plurality of applications is mapped into the corresponding memory, and address information for uniquely identifying the location of the data is established. At least one application of the plurality of applications sends a request for processing target data, the address information of the target data in the memory is searched according to the request, and the target data in the memory indicated by the address information is then processed.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (5)

1. A data processing method, comprising:
receiving a request sent by at least one application in the plurality of applications for processing target data;
searching address information of the target data in a memory according to the request, wherein the target data stored in the memory are shared to the plurality of applications;
processing the target data in the memory indicated by the address information, and updating the processed target data in the memory;
storing the processed data in the memory to a corresponding disk file according to a preset requirement;
before receiving the request for processing target data sent by at least one of the plurality of applications, the method further comprises:
if the size of all data to be processed by the plurality of applications is judged to be larger than the capacity threshold value of the memory, mapping historical data stored in the memory before the last closing of the plurality of applications to the memory, and establishing address information for identifying the positions of the historical data in the memory; the historical data includes login information of the user and/or historical information accessed by the user.
2. The method of claim 1, further comprising:
continuing mapping when the capacity threshold of the memory further meets a predetermined condition, wherein the predetermined condition comprises: the data in the memory has not reached a capacity threshold in addition to the historical data.
3. The method according to claim 1, wherein the processing the target data in the memory indicated by the address information and updating the processed target data in the memory comprises:
after a first request which is sent by a first application of the multiple applications and used for processing the data is received, processing the target data in the memory indicated by the address information according to first operation information carried in the first request to obtain processed target data, and updating the processed target data in the memory;
after receiving a second request for processing the data sent by a second application of the multiple applications, re-processing the processed target data according to second operation information carried in the second request to obtain re-processed target data, and updating the re-processed target data in the memory.
4. The method of claim 3, wherein the processing of the target data comprises at least one of:
loading the target data in the memory;
modifying the target data in the memory;
and deleting the target data in the memory.
5. A data processing apparatus, comprising:
a receiving unit configured to receive a request for processing target data sent by at least one of the plurality of applications;
a searching unit, configured to search, according to the request, address information of the target data in a memory, where the target data stored in the memory is shared by the multiple applications;
the processing unit is used for processing the target data in the memory indicated by the address information and updating the processed target data in the memory;
the storage unit is used for storing the processed data in the memory to a corresponding disk file according to a preset requirement;
the device further comprises: a preprocessing unit, configured to map, before the receiving unit receives the request for processing target data sent by at least one of the multiple applications, historical data stored in the memory before the previous closing of the multiple applications to the memory if it is determined that the size of all data to be processed by the multiple applications is greater than a capacity threshold of the memory, and establish address information used for identifying a location of the historical data in the memory; the historical data includes login information of the user and/or historical information accessed by the user.
CN201410148478.9A 2014-04-14 2014-04-14 Data processing method and device Active CN104978278B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410148478.9A CN104978278B (en) 2014-04-14 2014-04-14 Data processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410148478.9A CN104978278B (en) 2014-04-14 2014-04-14 Data processing method and device

Publications (2)

Publication Number Publication Date
CN104978278A CN104978278A (en) 2015-10-14
CN104978278B true CN104978278B (en) 2020-05-29

Family

ID=54274807

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410148478.9A Active CN104978278B (en) 2014-04-14 2014-04-14 Data processing method and device

Country Status (1)

Country Link
CN (1) CN104978278B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107463395B (en) * 2016-06-03 2020-10-09 腾讯科技(深圳)有限公司 Component calling method and device
CN106657284A (en) * 2016-11-29 2017-05-10 成都华为技术有限公司 Data stream processing method and device
CN108228876A (en) * 2018-01-19 2018-06-29 维沃移动通信有限公司 A kind of method and mobile terminal for reading file data
CN108334383B (en) * 2018-03-30 2021-09-14 联想(北京)有限公司 Information processing method and electronic equipment
CN110598085B (en) * 2018-05-24 2023-11-10 华为技术有限公司 Information query method for terminal and terminal

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1740978A (en) * 2004-08-23 2006-03-01 华为技术有限公司 Method for realing sharing internal stored data base and internal stored data base system
CN102455943A (en) * 2010-10-19 2012-05-16 上海聚力传媒技术有限公司 Method for carrying out data sharing based on memory pool, and computer device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101329657A (en) * 2007-06-19 2008-12-24 瑞达信息安全产业股份有限公司 System and method for safe sharing dynamic memory of multi-application space
CN101650670B (en) * 2008-08-14 2013-01-09 鸿富锦精密工业(深圳)有限公司 Electronic system capable of sharing application program configuration parameters and method thereof
CN102006241B (en) * 2010-12-17 2013-11-27 曙光信息产业股份有限公司 Method for receiving message through buffer area shared by multiple applications
CN103425538B (en) * 2012-05-24 2016-05-11 深圳市腾讯计算机系统有限公司 Process communication method and system
CN103605577B (en) * 2013-12-04 2017-06-30 广州博冠信息科技有限公司 The resource share method and equipment of striding course

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1740978A (en) * 2004-08-23 2006-03-01 华为技术有限公司 Method for realing sharing internal stored data base and internal stored data base system
CN102455943A (en) * 2010-10-19 2012-05-16 上海聚力传媒技术有限公司 Method for carrying out data sharing based on memory pool, and computer device

Also Published As

Publication number Publication date
CN104978278A (en) 2015-10-14

Similar Documents

Publication Publication Date Title
CN104978278B (en) Data processing method and device
US8504792B2 (en) Methods and apparatuses to allocate file storage via tree representations of a bitmap
US11354230B2 (en) Allocation of distributed data structures
CN105760199B (en) A kind of application resource loading method and its equipment
US20150222695A1 (en) Distributed processing system and method of operating the same
US10459729B2 (en) Map tables for hardware tables
US10754869B2 (en) Managing data format of data received from devices in an internet of things network
CN103677674B (en) A kind of data processing method and device
CN107426041B (en) Method and device for analyzing command
CN112074818A (en) Method and node for enabling access to past transactions in a blockchain network
CN107436910B (en) Data query method and device
CN108959122A (en) A kind of store method, device and the terminal of upgrade package downloading
US10838875B2 (en) System and method for managing memory for large keys and values
CN108319634B (en) Directory access method and device for distributed file system
US20160140140A1 (en) File classification in a distributed file system
CN104407990B (en) A kind of disk access method and device
US10558571B2 (en) Second level database file cache for row instantiation
CN106227541A (en) A kind of program updates download process method and mobile terminal
CN109271193B (en) Data processing method, device, equipment and storage medium
US20170104683A1 (en) Dynamically segmenting traffic for a/b testing in a distributed computing environment
EP3499819A1 (en) Load balancing method and related device
CN107430546B (en) File updating method and storage device
CN108090087B (en) File processing method and device
CN110866380A (en) Method and terminal for filling in information field content
US9996318B2 (en) FIFO memory having a memory region modifiable during operation

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20191209

Address after: P.O. Box 31119, grand exhibition hall, hibiscus street, 802 West Bay Road, Grand Cayman, Cayman Islands

Applicant after: Innovative advanced technology Co., Ltd

Address before: A four-storey 847 mailbox in Grand Cayman Capital Building, British Cayman Islands

Applicant before: Alibaba Group Holding Co., Ltd.

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant