WO2014185043A1 - Information processing device, information anonymization method, and recording medium - Google Patents

Information processing device, information anonymization method, and recording medium

Info

Publication number
WO2014185043A1
Authority
WO
WIPO (PCT)
Prior art keywords
generalization
policy
information processing
processing apparatus
data
Application number
PCT/JP2014/002480
Other languages
English (en)
Japanese (ja)
Inventor
Takao Takenouchi (竹之内 隆夫)
Original Assignee
NEC Corporation (日本電気株式会社)
Application filed by NEC Corporation (日本電気株式会社)
Priority to JP2015516909A (published as JPWO2014185043A1)
Publication of WO2014185043A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/62Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6245Protecting personal data, e.g. for financial or medical purposes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00Network architectures or network communication protocols for network security
    • H04L63/04Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks
    • H04L63/0407Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks wherein the identity of one or more communicating identities is hidden

Definitions

  • The present invention relates to information processing and, in particular, to data anonymization.
  • Personal data may include information related to an individual that the individual does not want disclosed (sensitive data (SD) or sensitive-data attributes). For this reason, personal privacy must be protected before personal data can be disclosed.
  • Anonymization is one technology for protecting privacy.
  • The information processing apparatus related to the present invention, for example, deletes identifiers (IDs) that uniquely identify individuals from personal data and then publishes the data.
  • However, personal data may also include data that can identify (specify) an individual when combined with other data. Such data is called a quasi-identifier (QID).
  • Therefore, the information processing apparatus related to the present invention anonymizes the quasi-identifiers (QIDs) so as to satisfy a predetermined policy for protecting the personal data to be provided. Typical policies are as follows.
  • K-anonymity is a policy guaranteeing that each group of data contains at least “k” records with the same quasi-identifier (or combination of quasi-identifiers).
  • L-diversity is a policy guaranteeing that each group of data contains at least “l” distinct sensitive-data values.
  • T-proximity (also called t-closeness) is a policy guaranteeing that, for each group, the distance between the distribution of sensitive data within the group and the distribution over all the data is at most “t”.
  • M-invariance is a policy for the sequential disclosure of data, guaranteeing that each group contains at least “m” records with the same combination of quasi-identifiers and that those records all have distinct sensitive data.
  • Anonymization satisfying “k-anonymity” is called k-anonymization, anonymization satisfying “l-diversity” is called l-diversification, and likewise for “t-proximity” and “m-invariance”.
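  • The k-anonymity policy above can be checked mechanically. The following Python sketch (not part of the patent; all names and the sample data are illustrative) groups records by their quasi-identifier values and verifies that every group holds at least k records.

```python
from collections import Counter

def is_k_anonymous(records, qid_indices, k):
    """Return True when every combination of quasi-identifier values
    appears in at least k records."""
    groups = Counter(tuple(r[i] for i in qid_indices) for r in records)
    return all(count >= k for count in groups.values())

# Each (age-range, zip-prefix) pair appears twice, so 2-anonymity holds.
data = [("20-29", "123*", "flu"), ("20-29", "123*", "cold"),
        ("30-39", "456*", "flu"), ("30-39", "456*", "asthma")]
print(is_k_anonymous(data, [0, 1], 2))  # True
print(is_k_anonymous(data, [0, 1], 3))  # False
```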
  • Many anonymization techniques have been proposed (see, for example, Non-Patent Document 1). The “Mondrian Multidimensional” method described in Non-Patent Document 1 first treats the quasi-identifiers as a single group and then recursively divides the data into a plurality of groups so as to satisfy k-anonymity.
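  • As a rough illustration (not the authors' code), one Mondrian-style step on a single numeric QID can be sketched as follows: the group is split at its lower median, and the split is kept only when both halves still satisfy k-anonymity.

```python
def mondrian_split(values, k):
    """Split a group of numeric QID values at the (lower) median.
    Return the two halves, or None when either half would hold
    fewer than k records (the split would break k-anonymity)."""
    s = sorted(values)
    median = s[(len(s) - 1) // 2]
    left = [v for v in s if v <= median]
    right = [v for v in s if v > median]
    if len(left) < k or len(right) < k:
        return None
    return left, right

print(mondrian_split([1, 8, 13, 19], 2))  # ([1, 8], [13, 19])
print(mondrian_split([1, 8, 13], 2))      # None: right half too small
```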
  • The number of data providers is not limited to one; there may be several.
  • In that case, the information processing device of each provider anonymizes its data individually and provides it to the user device.
  • The user device therefore needs to receive anonymized data from the information processing devices of the multiple providers and aggregate it.
  • However, the data stored by the providers is generally not the same. For example, when the providers store different numbers of records, the information processing apparatuses related to the present invention anonymize the data based on different generalization policies. The same holds when the QIDs included in the data differ. And when the providers' generalization policies for anonymization do not match, the user apparatus cannot aggregate the anonymized data received from the information processing apparatuses of the multiple providers.
  • That is, the techniques described in Patent Literature 1 and Non-Patent Literature 1 have the problem that the user device cannot aggregate the provided anonymized data.
  • An object of the present invention is to provide an information processing apparatus, an information anonymization method, and a recording medium that solve the above-described problems.
  • An information processing apparatus according to an exemplary aspect of the present invention includes: generalization policy cooperation determination means for determining, in cooperation with another apparatus, a common generalization policy, which is a generalization policy for anonymizing data that is used in common with the other apparatus; and anonymization means for anonymizing data based on the common generalization policy.
  • In an information anonymization method according to an exemplary aspect of the present invention, a common generalization policy, which is a generalization policy for anonymizing data used in common with another device, is determined in cooperation with the other device, and data is anonymized based on the common generalization policy.
  • A computer-readable recording medium according to an exemplary aspect of the present invention records a program that causes a computer to execute: a process of determining, in cooperation with another device, a common generalization policy, which is a generalization policy for anonymizing data used in common with the other device; and a process of anonymizing data based on the common generalization policy.
  • FIG. 1 is a diagram showing data for explaining the operation of the information processing apparatus related to the present invention.
  • FIG. 2 is a diagram showing data for explaining the operation of the information processing apparatus related to the present invention.
  • FIG. 3 is a block diagram illustrating an example of a configuration of a system including the information processing apparatus according to the first embodiment of the present invention.
  • FIG. 4 is a block diagram illustrating an example of the configuration of the information processing apparatus according to the first embodiment.
  • FIG. 5 is a block diagram illustrating an example of the configuration of the information processing apparatus according to the first embodiment.
  • FIG. 6 is a flowchart illustrating an example of the operation of the information processing apparatus according to the first embodiment.
  • FIG. 7 is a diagram illustrating data for explaining the operation of the information processing apparatus according to the first embodiment.
  • FIG. 8 is a diagram illustrating data for explaining the operation of the information processing apparatus according to the first embodiment.
  • FIG. 9 is a block diagram illustrating an example of another configuration of the information processing apparatus according to the first embodiment.
  • FIG. 1 is a diagram showing data for explaining the operation of the information processing apparatus related to the present invention.
  • In FIG. 1, the information processing apparatus of provider A is simply called “provider A”, and the information processing apparatus of provider B is called “provider B”.
  • Provider A anonymizes the data 1000 shown on the upper left.
  • First, provider A generalizes the quasi-identifiers (QID1 and QID2) into a single group, as in the data 1001 shown in the upper center.
  • Next, provider A divides QID1 into two groups (generalization ranges “120-125” and “126-129”), using the median value “125” of QID1 as the boundary, and anonymizes the data as in the data 1002 shown on the upper right.
  • Provider B anonymizes the data 2000 shown on the lower left.
  • First, provider B generalizes the quasi-identifiers (QID1 and QID2) into a single group, as in the data 2001 shown in the lower center.
  • Next, provider B divides QID1 into two groups (generalization ranges “120-124” and “125-129”), using the median value “124” of QID1 as the boundary, and anonymizes the data as in the data 2002.
  • FIG. 2 is a diagram showing data for explaining the operation of the information processing apparatus related to the present invention.
  • As shown in FIG. 2, the anonymized data 1002 of provider A and the anonymized data 2002 of provider B have different boundaries. As a result, the user apparatus can assume more than one way of connecting (mapping) the groups of provider A to the groups of provider B.
  • For example, provider B's group with QID1 “125-129” shares QID values with both of provider A's groups, “120-125” and “126-129”. The user apparatus therefore cannot determine which of provider A's groups corresponds to provider B's “125-129” group in the received anonymized data.
  • In this way, the information processing devices related to the present invention have the problem that the user device cannot aggregate the provided anonymized data.
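  • The mapping ambiguity above can be reproduced with a small sketch (illustrative only, not part of the patent): provider B's generalization range intersects both of provider A's ranges, so no one-to-one correspondence between groups exists.

```python
def overlapping_groups(groups, target):
    """Return the generalization ranges in `groups` whose closed
    intervals intersect the `target` range."""
    lo, hi = target
    return [(a, b) for (a, b) in groups if a <= hi and lo <= b]

provider_a = [(120, 125), (126, 129)]  # provider A split at 125/126
provider_b_group = (125, 129)          # provider B split at 124/125
# B's "125-129" group intersects BOTH of A's groups, so the user
# device cannot tell which of A's groups it corresponds to.
print(overlapping_groups(provider_a, provider_b_group))
```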
  • To prevent this, the information processing apparatuses related to the present invention can anonymize the data using, for example, one of the methods described below.
  • The first method is as follows. The information processing apparatuses store a common generalization policy in advance, and each apparatus anonymizes its data based on the stored common generalization policy.
  • The second method is as follows. The information processing apparatuses disclose their QIDs to one another, and determine the anonymization policy using the QIDs of all the information processing apparatuses.
  • However, the first method has the problem that the stored data cannot always be anonymized optimally. Consider the following example.
  • Suppose an information processing apparatus stores four records with QIDs “1”, “8”, “13”, and “19”,
  • and must satisfy “2-anonymity”.
  • To anonymize the stored data, the apparatus can adopt, for example, the generalization ranges “0-9” and “10-19”. Suppose, therefore, that the apparatus stores “0-9” and “10-19” in advance as the common generalization policy.
  • Next, suppose the apparatus additionally stores records with QIDs “5”, “7”, “14”, and “17”.
  • If the apparatus anonymized the data based on, for example, the generalization ranges “0-5”, “6-9”, “10-14”, and “15-20”, it could still secure 2-anonymity with narrower groups.
  • However, with the first method the generalization policy (“0-9” and “10-19”) was determined in advance, so the apparatus must divide the data into “1, 5, 7, 8” and “13, 14, 17, 19”. In this way, the first method cannot always achieve optimal anonymization.
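  • The shortcoming of the first method can be illustrated with the numbers above (a sketch, not the patent's code): the pre-stored ranges “0-9”/“10-19” force two wide groups, while ranges chosen after seeing the added data give four narrow groups that still satisfy 2-anonymity.

```python
def generalize(values, ranges):
    """Assign each QID value to the first generalization range
    (inclusive interval) that contains it."""
    groups = {r: [] for r in ranges}
    for v in values:
        for lo, hi in ranges:
            if lo <= v <= hi:
                groups[(lo, hi)].append(v)
                break
    return groups

qids = [1, 8, 13, 19, 5, 7, 14, 17]  # original records plus the added ones

# First method: policy fixed in advance -> two coarse groups of four.
fixed = generalize(qids, [(0, 9), (10, 19)])
# Policy chosen after the data arrived -> four groups of two,
# each narrower but still 2-anonymous.
adaptive = generalize(qids, [(0, 5), (6, 9), (10, 14), (15, 20)])
print(sorted(fixed[(0, 9)]))                # [1, 5, 7, 8]
print(sorted(fixed[(10, 19)]))              # [13, 14, 17, 19]
print([len(g) for g in adaptive.values()])  # [2, 2, 2, 2]
```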
  • On the other hand, data including QIDs is an asset of its provider. A data provider therefore wants to avoid disclosing data including QIDs to other providers in a non-anonymized state.
  • For this reason, the second method has the problem that it is difficult to apply in actual operation.
  • FIG. 3 is a block diagram showing an example of the configuration of the information processing system 40 including the information processing apparatus 10 and the information processing apparatus 30 according to the first embodiment of the present invention.
  • The information processing system 40 includes an information processing device 10, a user device 20, and an information processing device 30.
  • The information processing apparatus 10, the user apparatus 20, and the information processing apparatus 30 are connected via a general communication path, for example, a network or a bus.
  • The user device 20 receives anonymized data from the information processing devices 10 and 30, aggregates it, and uses the aggregated anonymized data.
  • The user device 20 may be any device that processes general data, so its detailed description is omitted.
  • The information processing apparatus 10 anonymizes data and transmits it to the user apparatus 20 in such a way that the user apparatus 20 can aggregate the anonymized data.
  • The information processing apparatus 30 is the same kind of apparatus as the information processing apparatus 10. However, because the information processing apparatus 10 cooperates with other information processing apparatuses (for example, the information processing apparatus 30) as described later, the information processing apparatus 30 is given a reference numeral different from that of the information processing apparatus 10 to clarify the following description of the cooperation.
  • The information processing apparatus 10 is described as the apparatus that initiates the cooperation.
  • The information processing apparatus 30 is described as the apparatus that responds to the information processing apparatus 10. That is, the information processing apparatus 30 corresponds to “another information processing apparatus 10” that responds to the information processing apparatus 10.
  • Although one information processing apparatus 10 and one information processing apparatus 30 are shown, this number is merely for convenience of description.
  • The information processing apparatus 10 according to the present embodiment may cooperate with a plurality of information processing apparatuses 30.
  • The information processing apparatus 10 will be further described with reference to the drawings.
  • FIG. 4 is a block diagram illustrating an example of the configuration of the information processing apparatus 10 according to the present embodiment.
  • In FIG. 4, one information processing apparatus 10 and one information processing apparatus 30 are shown, but, as in FIG. 3, this number is merely an example.
  • The information processing apparatus 10 anonymizes data in cooperation with the information processing apparatus 30.
  • The information processing apparatus 10 includes an anonymization unit 110 and a generalization policy cooperation determination unit 120.
  • The generalization policy cooperation determination unit 120 cooperates (communicates) with the information processing apparatus 30 and determines the generalization policy to be shared (hereinafter, the “common generalization policy”). That is, the generalization policy cooperation determination unit 120 determines the common generalization policy in cooperation with the “other information processing apparatus 10”; in other words, it shares the common generalization policy with the information processing apparatus 30.
  • The common generalization policy is a generalization policy used in common by the information processing apparatus 10 and the information processing apparatus 30 for the anonymization of data.
  • The common generalization policy is, for example, a QID division point (boundary) or a range of the data after QID division (a generalization range).
  • The anonymization unit 110 anonymizes data based on the common generalization policy determined by the generalization policy cooperation determination unit 120.
  • The information processing apparatus 10 transmits the data anonymized in this way to the user apparatus 20.
  • Because the information processing apparatus 10 and the information processing apparatus 30 use a common generalization policy for anonymization, the user device 20 can aggregate the received anonymized data.
  • Note that the information processing apparatus 10 may share generalization policies only within a predetermined range, and may determine a generalization policy suited to its own apparatus for the policies that are not shared (hereinafter, “individual generalization policies”).
  • In that case, the information processing apparatus 10 can anonymize data based on individual generalization policies in addition to anonymization based on the common generalization policy.
  • In addition to its function of determining the common generalization policy in cooperation, the generalization policy cooperation determination unit 120 may store information on data attributes.
  • For example, the generalization policy cooperation determination unit 120 may store information on the attribute types of the data to be anonymized.
  • The attribute types are not particularly limited.
  • For example, attribute types such as those already described (identifier (ID), quasi-identifier (QID), and sensitive data (SD)) can be assumed.
  • The generalization policy cooperation determination unit 120 may determine, based on the stored information, whether a generalization policy used by the anonymization unit 110 is a common generalization policy.
  • The anonymization unit 110 may use the information stored by the generalization policy cooperation determination unit 120 for the anonymization of data. For example, when deleting identifiers from data, the anonymization unit 110 may determine the attributes to be deleted based on stored information indicating that an attribute corresponds to an identifier.
  • The information processing apparatus 10 will be further described with reference to the drawings.
  • FIG. 5 is a block diagram illustrating an example of the configuration of the information processing apparatus 10.
  • As shown in FIG. 5, the information processing apparatus 10 includes an anonymization unit 110, a generalization policy cooperation determination unit 120, a pre-anonymization data storage unit 160, an anonymized data storage unit 170, and a transmission unit 180.
  • The pre-anonymization data storage unit 160 stores the data before anonymization.
  • The information processing device 10 anonymizes this pre-anonymization data and then transmits it to the user device 20.
  • The anonymization unit 110 creates anonymized data by anonymizing the pre-anonymization data based on the common generalization policy determined by the generalization policy cooperation determination unit 120. As already described, the anonymization unit 110 may also anonymize data using individual generalization policies in addition to the common generalization policy, and may use the information stored by the generalization policy cooperation determination unit 120 for the anonymization.
  • The anonymization unit 110 stores the anonymized data in the anonymized data storage unit 170 and, in response to a request from the user device 20, sends the anonymized data to the transmission unit 180. The anonymization unit 110 may also store partially anonymized data in the anonymized data storage unit 170.
  • The anonymized data storage unit 170 stores the anonymized data produced by the anonymization unit 110.
  • The transmission unit 180 transmits the anonymized data received from the anonymization unit 110 to the user device 20; that is, the transmission unit 180 controls communication with the user device 20.
  • The transmission unit 180 may also retrieve the anonymized data directly from the anonymized data storage unit 170, without going through the anonymization unit 110, and transmit it to the user device 20.
  • As described above, the generalization policy cooperation determination unit 120 determines, together with the information processing apparatus 30, the common generalization policy used by the anonymization unit 110. For this purpose, the generalization policy cooperation determination unit 120 includes an anonymity parameter storage unit 130, a common parameter setting unit 140, and a communication unit 150.
  • The anonymity parameter storage unit 130 stores the attribute-type information already described, for example, information on the QIDs (common QIDs) whose generalization policy the generalization policy cooperation determination unit 120 shares with the information processing apparatus 30. That is, the anonymity parameter storage unit 130 holds the information (anonymity parameters) used to determine whether a generalization policy used by the anonymization unit 110 is a common generalization policy.
  • The anonymity parameter storage unit 130 may also store other kinds of information already described, for example, information on QIDs whose generalization policy is not shared, or information on other attributes.
  • The information in the anonymity parameter storage unit 130 is set in advance.
  • For example, an administrator of the information processing apparatus 10 may operate the information processing apparatus 10 to store (set) the information in the anonymity parameter storage unit 130.
  • The common parameter setting unit 140 determines the common generalization policy (common parameters) in cooperation with the information processing apparatus 30, based on the information stored in the anonymity parameter storage unit 130.
  • The common parameter setting unit 140 is further described below using a specific example, under the following assumptions.
  • The data of the information processing apparatus 10 includes QID1 and QID2 as the quasi-identifiers to be anonymized.
  • The anonymity parameter storage unit 130 is set with information indicating that the generalization policy of QID1 is determined in cooperation,
  • that is, with information indicating that QID1 is shared.
  • The anonymity parameter storage unit 130 is also set with information indicating that the determination of the generalization policy of QID2 is not coordinated, that is, that QID2 is not shared. Therefore, QID1 is a common QID and its generalization policy is a common generalization policy,
  • while QID2 is not a common QID and its generalization policy is an individual generalization policy.
  • The common parameter setting unit 140 determines, based on the information stored in the anonymity parameter storage unit 130, whether QID1 is a common QID. In this example QID1 is a common QID, so the common parameter setting unit 140 starts cooperating with the information processing apparatus 30 to commonize the common generalization policy (the generalization policy of QID1).
  • When the cooperation is established, the common parameter setting unit 140 determines the common generalization policy used for anonymization based on the common generalization policy of its own apparatus and the received common generalization policy.
  • When the cooperation cannot be established, the common parameter setting unit 140 gives up the cooperation. In this case, the information processing apparatus 10 anonymizes the data as with the individual generalization policy described below.
  • The common parameter setting unit 140 likewise determines, based on the information stored in the anonymity parameter storage unit 130, whether QID2 is a common QID. In this example QID2 is not a common QID, so the common parameter setting unit 140 does not cooperate with the information processing apparatus 30, and the information processing apparatus 10 anonymizes the data based on the individual generalization policy (the generalization policy of QID2).
  • Conversely, when a request for sharing is received from the information processing apparatus 30, the common parameter setting unit 140 determines whether to share based on the information stored in the anonymity parameter storage unit 130.
  • When sharing, the common parameter setting unit 140 transmits the common generalization policy of its own apparatus to the information processing apparatus 30. The information processing apparatus 10 then determines the common generalization policy used for anonymization based on the received common generalization policy and the common generalization policy of its own apparatus.
  • When not sharing, the common parameter setting unit 140 does not respond to the information processing apparatus 30; however, the information processing apparatus 10 may notify the information processing apparatus 30 that it will not cooperate.
  • The common parameter setting unit 140 may also determine whether to cooperate based on the content of the received generalization policy (for example, a generalization policy for QID2).
  • The communication unit 150 mediates the communication of the common parameter setting unit 140 with the information processing apparatus 30. In other words, the communication unit 150 controls communication with the communication unit 150 of the information processing apparatus 30.
  • FIG. 6 is a flowchart illustrating an example of the anonymization operation of the information processing apparatus 10 according to the first embodiment.
  • In the following description, the generalization policy is, as an example, a QID division point (boundary); that is, the information processing apparatus 10 shares QID division points.
  • It is assumed that the anonymity to be secured by the information processing apparatus 10 is determined in advance,
  • that the common QIDs are likewise determined in advance,
  • and that the information processing apparatus 10 knows the cooperating information processing apparatuses 30 (for example, their number and their addresses).
  • First, the anonymization unit 110 of the information processing apparatus 10 determines the QID to be divided based on the data stored in the pre-anonymization data storage unit 160 (step S210). For example, the anonymization unit 110 may select the QID with the widest value range, or may select the QIDs in turn in a round-robin manner.
  • The anonymization unit 110 then sends the generalization policy of the determined QID to the common parameter setting unit 140.
  • Specifically, the anonymization unit 110 determines the division point (boundary) of the QID and sends the QID and the boundary to the common parameter setting unit 140 as the generalization policy.
  • The common parameter setting unit 140 determines whether the received QID is a common QID based on the information stored in the anonymity parameter storage unit 130 (step S220).
  • When the QID is a common QID, the common parameter setting unit 140 commonizes the generalization policy (for example, the division point (boundary) of the common QID) with the information processing apparatus 30 via the communication unit 150 (step S230).
  • For example, the common parameter setting unit 140 operates as follows.
  • First, the common parameter setting unit 140 notifies the information processing apparatus 30 that the QID determined in step S210 is to be commonized (for example, that the QID is to be divided); that is, it notifies the sharing of the QID. The common parameter setting unit 140 then waits for a response regarding cooperation from the information processing apparatus 30.
  • When it receives responses indicating that all the information processing apparatuses 30 will cooperate, the common parameter setting unit 140 notifies the information processing apparatuses 30 of its common generalization policy (for example, the boundary for dividing the common QID), and waits for the common generalization policies notified by the information processing apparatuses 30. When the common generalization policies have been received from all the information processing apparatuses 30, the common parameter setting unit 140 proceeds to step S240.
  • When only some of the information processing apparatuses 30 respond that they will cooperate, the information processing apparatus 10 may commonize the common generalization policy with those responding apparatuses in the same way. Alternatively, the information processing apparatus 10 may abandon the cooperation; in that case, it may operate in the same manner as when responses indicating no cooperation are received, described below.
  • When the information processing apparatuses 30 respond that they will not cooperate, the common parameter setting unit 140 may operate in the same way as for the individual generalization policy described later; for example, it returns the generalization policy received from the anonymization unit 110 to the anonymization unit 110.
  • The start of communication in commonizing the common generalization policy need not be limited to the transmission of a cooperation notification.
  • For example, the common parameter setting unit 140 may negotiate with the information processing apparatus 30 in advance and determine the common generalization policy to be commonized, without determining the generalization policy to be shared beforehand.
  • Alternatively, the common parameter setting unit 140 may transmit the common generalization policy and the cooperation notification together, instead of sending the notification of the commonization of the QID and the common generalization policy as separate notifications.
  • That is, the information processing apparatus 10 may determine in advance that the transmission of the common generalization policy also serves as the notification of cooperation.
  • the common parameter setting unit 140 determines a generalization policy used for data anonymization based on the received common generalization policies (step S240). For example, when the common generalization policy is a QID boundary, the information processing apparatus 10 may use an average value of the received QID boundary as the generalization policy.
  • the information processing apparatus 30 also determines a generalization policy based on the received common generalization policy. Therefore, the information processing apparatus 10 and the information processing apparatus 30 calculate the same generalization policy (for example, a QID boundary) as the generalization policy used for anonymization.
  • the information processing apparatus 10 and the information processing apparatus 30 determine the generalization policy for anonymization in cooperation.
  • After determining the generalization policy, the common parameter setting unit 140 returns the determined generalization policy to the anonymization unit 110.
  • the common parameter setting unit 140 may not receive the common generalization policy from the information processing apparatus 30 due to, for example, a network failure or a failure of the information processing apparatus 30.
  • the common parameter setting unit 140 may return the boundary received from the anonymization unit 110 to the anonymization unit 110 as the generalization policy. That is, the information processing apparatus 10 may anonymize data using the generalization policy determined by the anonymization unit 110 when the generalization policy cannot be determined in cooperation.
  • the information processing apparatus 10 may notify the user apparatus 20 of the failure.
  • the common parameter setting unit 140 returns the boundary received from the anonymization unit 110 to the anonymization unit 110 as a generalization policy.
  • the anonymization unit 110 divides the QID based on the generalization policy received from the common parameter setting unit 140 (step S250).
  • the information processing apparatus 10 cooperates with the information processing apparatus 30 and anonymizes data based on the common generalization policy.
  • the information processing apparatus 10 does not cooperate with the information processing apparatus 30 and anonymizes the data based on the generalization policy determined by the own apparatus.
  • the anonymization unit 110 confirms the anonymity of the data (step S260).
  • the anonymization unit 110 proceeds to the division of the next QID (step S210).
  • the information processing apparatus 10 repeats the division as long as the anonymity is satisfied.
  • the anonymization unit 110 cancels the immediately preceding division and ends the anonymization process (step S270).
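The top-down loop of steps S210 to S270 can be sketched as follows; the record layout, the 2·k split threshold, and the round-at-a-time cancellation are simplifying assumptions, not the patent's exact procedure:

```python
def k_anonymous(groups, k):
    # k-anonymity check (step S260, sketch): every group keeps >= k records.
    return all(len(g) >= k for g in groups)

def mean_boundary(group, qid):
    # Division point: the average of the QID values in the group, one of
    # the boundary choices mentioned in the text.
    vals = [r[qid] for r in group]
    return sum(vals) / len(vals)

def top_down_anonymize(records, qid, k):
    # Steps S210-S270 (sketch): keep dividing while anonymity holds; when
    # a division would break it, cancel that division and end (step S270).
    groups = [list(records)]  # most-anonymized initial state: one group
    while True:
        candidate = []
        for g in groups:
            if len(g) >= 2 * k:  # only groups big enough to split again
                b = mean_boundary(g, qid)
                candidate.append([r for r in g if r[qid] < b])
                candidate.append([r for r in g if r[qid] >= b])
            else:
                candidate.append(g)
        if candidate == groups or not k_anonymous(candidate, k):
            return groups  # cancel the immediately preceding division
        groups = candidate

records = [{"qid1": v} for v in (100, 110, 120, 130, 140, 150, 160, 170)]
groups = top_down_anonymize(records, "qid1", 2)
print([len(g) for g in groups])  # → [2, 2, 2, 2]
```

In the single-QID case shown, eight records with k = 2 are divided twice and the third division round is cancelled because no group can be split further.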
  • When the immediately preceding division concerns a common QID, the information processing apparatus 10 notifies the cooperating information processing apparatus 30 of the cancellation of the generalization.
  • the information processing apparatus 10 may change the division point in cooperation with the information processing apparatus 30.
  • the information processing apparatus 10 may notify the information processing apparatus 30 of the end of cooperation after the anonymization process is completed.
  • As described above, the information processing apparatus 10 determines, in cooperation with the information processing apparatus 30, a generalization policy that is to be shared in anonymization.
  • FIG. 7 is a diagram illustrating data for explaining the operation of determining the generalization policy of the information processing apparatus 10.
  • FIG. 7 shows the data of the information processing apparatus 10 (apparatus A in FIG. 7), for example.
  • the lower part of FIG. 7 shows the data of another information processing apparatus 10 (that is, the information processing apparatus 30; apparatus B in FIG. 7).
  • QID1 shown in FIG. 7 is a common QID.
  • the anonymization unit 110 first anonymizes data 3000 and data 4000 into data 3001 and data 4001 in the most anonymized state. That is, the anonymization unit 110 anonymizes each QID into one group.
  • the data 3001 and data 4001 shown in the center of FIG. 7 are the first anonymized states of QID1 and QID2.
  • the anonymization unit 110 determines the dividing point (boundary) of QID1. For example, the anonymization unit 110 of the device A determines the average “125” of QID1 of the data 3001 as the boundary. Similarly, the anonymization unit 110 of the device B determines the average “124” of QID1 of the data 4001 as a boundary.
  • the information processing apparatus 10 calculates the average of QID1 as a boundary.
  • the information processing apparatus 10 has no particular limitation on how to determine the boundary.
  • the information processing apparatus 10 may use, as a boundary, the average of the group having the largest size (the largest number of records) among the groups of QID1.
  • the data 3001 and the data 4001 shown in FIG. 7 are in the initial state, the number of groups is 1, and the size of the group is 5. That is, the group of the device A and the device B shown in FIG. 7 is the largest group. Then, the devices A and B calculate the average of the largest group, and determine “125” and “124” as the boundaries, respectively.
  • the information processing apparatus 10 need not be limited to using, as a boundary, the average of the group having the largest size (the largest number of records).
  • the information processing apparatus 10 may use the median value of the group as the boundary.
  • the information processing apparatus 10 may select another group such as a group having a wide range.
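The boundary choices just listed, the average of the largest group or its median, might look like this; the sample values are invented so that the average comes out to "125", as for device A in FIG. 7:

```python
from statistics import median

def pick_boundary(groups, qid, how="average"):
    # Choose the largest group (most records), then use the average or the
    # median of its QID values as the next division point (sketch).
    largest = max(groups, key=len)
    vals = [r[qid] for r in largest]
    return sum(vals) / len(vals) if how == "average" else median(vals)

# Illustrative data: one initial group of five records, as in data 3001.
group_a = [{"qid1": v} for v in (105, 115, 125, 135, 145)]
print(pick_boundary([group_a], "qid1"))            # → 125.0
print(pick_boundary([group_a], "qid1", "median"))  # → 125
```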
  • the anonymization unit 110 sends QID1 and the boundary to the common parameter setting unit 140.
  • the common parameter setting unit 140 determines whether QID1 is a common QID.
  • QID1 is a common QID.
  • the common parameter setting unit 140 of the device A and the common parameter setting unit 140 of the device B exchange, via the communication unit 150, the boundary of QID1, which is the common generalization policy.
  • apparatus A transmits an average “125” of QID1 and receives an average “124” of QID1 of apparatus B.
  • the common parameter setting unit 140 returns the determined generalization policy (common generalization policy) to the anonymization unit 110.
  • the anonymization unit 110 anonymizes data based on the received generalization policy (here, the boundary “124” of QID1).
  • FIG. 8 is a diagram illustrating data for explaining the anonymization operation of the information processing apparatus 10 according to the present embodiment.
  • the data 3002 of the device A and the data 4002 of the device B are displayed side by side so that the data can be easily compared.
  • the data boundary of the anonymized data 3002 of the device A and the anonymized data 4002 of the device B match. Therefore, the user device 20 can collect data.
  • the information processing apparatus 10 anonymizes data.
  • As a result, the user device 20 can aggregate the anonymized data, and the information processing apparatus 10 obtains the effect of providing data that is anonymized appropriately for the data provider.
  • This is because the generalization policy cooperation determination unit 120 of the information processing apparatus 10 determines, in cooperation with the information processing apparatus 30 (that is, another information processing apparatus 10), the common generalization policy to be shared in anonymization. Furthermore, the generalization policy cooperation determination unit 120 notifies the other apparatus of the generalization policy that the anonymization unit 110 has determined to be optimal at that time. Therefore, the generalization policy cooperation determination unit 120 can determine a more appropriate generalization policy than when the generalization policy is determined in advance. In addition, the anonymization unit 110 of the information processing apparatus 10 can anonymize the data based on the common generalization policy determined in cooperation. As a result, the user device 20 can aggregate the anonymized data.
  • Furthermore, the information processing apparatus 10 can anonymize the data without transmitting the data to the information processing apparatus 30.
  • the information processing apparatus 10 can determine the common generalization policy by transmitting the common generalization policy to the information processing apparatus 30. And the information processing apparatus 10 can anonymize data based on a common generalization policy. Thus, the information processing apparatus 10 can anonymize the data without transmitting the data to the information processing apparatus 30.
  • the information processing apparatus 10 needs to set the data values covered by the common generalization policy to the same generalized value (global recoding: Global Re-Coding).
  • the information processing apparatus 10 does not need to anonymize the data value of the individual generalization policy so as to satisfy the global recoding.
  • the information processing apparatus 10 may set the data value of the individual generalization policy as a different generalized value (local recoding: Local Re-Coding).
  • the information processing apparatus 10 may also anonymize categorizable data (names, preferences, etc.), in addition to numerical data, for which range setting and size comparison are easy.
  • the information processing apparatus 10 may apply a conceptual tree classification system (taxonomy) to the data and anonymize the data.
  • the information processing apparatus 10 is not limited to the top-down anonymization method that repeats the division as illustrated in FIG. 6, and may use a bottom-up anonymization method that repeats the combination. Alternatively, the information processing apparatus 10 may combine top down and bottom up.
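The taxonomy-based generalization for categorizable data mentioned above can be sketched as a walk up a concept tree; the tree and values below are invented for illustration:

```python
# Parent links of an illustrative concept tree (taxonomy).
PARENT = {
    "apple": "fruit", "banana": "fruit",
    "carrot": "vegetable", "onion": "vegetable",
    "fruit": "food", "vegetable": "food",
}

def generalize(value, levels=1):
    # Replace a categorical value by an ancestor 'levels' steps up the
    # tree; a value without a parent (the root) stays unchanged.
    for _ in range(levels):
        value = PARENT.get(value, value)
    return value

print(generalize("apple"))     # → fruit
print(generalize("apple", 2))  # → food
print(generalize("food"))      # → food (root is left as-is)
```

Going up one level plays the same role for categorical QIDs that widening a numeric range plays for numeric QIDs.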
  • Cooperation requests may be issued by both the information processing apparatus 10 and the information processing apparatus 30, and these requests may overlap.
  • the information processing apparatus 10 may determine the common generalization policy in cooperation based on the operation described above.
  • the information processing apparatus 10 and the information processing apparatus 30 need to select the common QID.
  • the information processing apparatus 10 and the information processing apparatus 30 may determine which common QID is used by arbitrating. Alternatively, the information processing apparatus 10 and the information processing apparatus 30 may set a priority order when cooperation requests overlap in advance.
  • For example, the information processing apparatus 10 may determine the common QID by arbitration at the time of cooperation.
  • Alternatively, the information processing apparatus 10 may determine the priority order of the common QIDs in advance. For example, the information processing apparatus 10 may adopt, as the common QID, the common QID with the largest number of cooperation requests.
  • the information processing apparatus 10 and the information processing apparatus 30 transmit a common generalization policy of the determined common QID. Subsequent operations may be the same as those already described.
  • the configuration of the information processing apparatus 10 is not limited to the above description.
  • the information processing apparatus 10 may divide each component into a plurality of components.
  • the information processing apparatus 10 does not need to be configured by one apparatus.
  • the information processing apparatus 10 may be configured using a device including the anonymization unit 110 connected via a network and a device including the generalization policy cooperation determination unit 120.
  • the information processing apparatus 10 may configure either or both of the pre-anonymization data storage unit 160 and the anonymized data storage unit 170 as an external storage device.
  • the information processing apparatus 10 may be configured with a plurality of components by one apparatus.
  • the information processing apparatus 10 may be realized as a computer apparatus including a CPU (Central Processing Unit), a ROM (Read Only Memory), and a RAM (Random Access Memory).
  • the information processing apparatus 10 may further be realized as a computer apparatus including an input / output connection circuit (IOC: Input Output Circuit) and a network interface circuit (NIC: Network Interface Circuit).
  • FIG. 9 is a block diagram illustrating an example of a configuration of an information processing device 60 that is a modification of the information processing device 10 of the present embodiment.
  • the information processing device 60 includes a CPU 610, a ROM 620, a RAM 630, an internal storage device 640, an IOC 650, and a NIC 680, and constitutes a computer.
  • CPU 610 reads a program from ROM 620.
  • the CPU 610 controls the RAM 630, the internal storage device 640, the IOC 650, and the NIC 680 based on the read program, and realizes each function by controlling these components.
  • the CPU 610 may use the RAM 630 as a temporary program storage when realizing each function.
  • the CPU 610 may read the program included in the storage medium 700 storing the program so as to be readable by a computer using a storage medium reading device (not shown). Alternatively, the CPU 610 may receive a program from an external device (not shown) via the NIC 680.
  • ROM 620 stores programs executed by CPU 610 and fixed data.
  • the ROM 620 is, for example, a P-ROM (Programmable-ROM) or a flash ROM.
  • the RAM 630 temporarily stores programs executed by the CPU 610 and data.
  • the RAM 630 is, for example, a D-RAM (Dynamic-RAM).
  • the internal storage device 640 stores data and programs stored in the information processing device 60 for a long time. Further, the internal storage device 640 may operate as a temporary storage device for the CPU 610.
  • the internal storage device 640 is, for example, a hard disk device, a magneto-optical disk device, an SSD (Solid State Drive), or a disk array device.
  • the IOC 650 mediates data between the CPU 610, the input device 660, and the display device 670.
  • the IOC 650 is, for example, an IO interface card.
  • the input device 660 is a device that receives an input instruction from an operator of the information processing apparatus 60.
  • the input device 660 is, for example, a keyboard, a mouse, or a touch panel.
  • the display device 670 is a device that displays information to the operator of the information processing apparatus 60.
  • the display device 670 is a liquid crystal display, for example.
  • the NIC 680 relays data exchange with an external device via a network.
  • the NIC 680 is, for example, a LAN (Local Area Network) card.
  • the information processing apparatus 60 configured as described above can obtain the same effects as the information processing apparatus 10.
  • the information processing apparatus 10 anonymizes data based on the generalization policy shared with the information processing apparatus 30.
  • the common generalization policy may be different from the optimal generalization policy for the information processing apparatus 10.
  • In the information processing apparatus 10, the difficulty of anonymizing data (difficulty level) differs according to, for example, the amount (data size) or the anonymity of the data to be handled.
  • the difficulty level is an index indicating the difficulty of ensuring the anonymity of data.
  • the difficulty level is an index that increases in value as it is difficult to ensure data anonymity.
  • the difficulty level may be an index that decreases in value as it is difficult to ensure anonymity of data.
  • the information processing apparatus 10 that handles data having a small data size has fewer boundary candidates than the information processing apparatus 10 that handles data having a large data size.
  • For example, the boundaries at which the information processing apparatus 10 can divide the data are limited to around the median.
  • the information processing apparatus 10 having a small data size has a higher degree of difficulty in securing anonymity than the information processing apparatus 10 having a large data size.
  • the information processing apparatus 10 has different degrees of difficulty in securing anonymity even if the data size is the same.
  • Therefore, the information processing apparatus 10 exchanges with the information processing apparatus 30 the difficulty level of ensuring anonymity, or information related to that difficulty level.
  • the data size is an example of a factor that determines the difficulty level of ensuring data anonymity.
  • the data size is an example of an index whose value decreases as it is difficult to ensure data anonymity.
  • K-anonymity is more difficult to secure as the value of “k” is larger. Therefore, the value of “k” in k-anonymity is an example of a factor that determines the difficulty level of ensuring data anonymity. Note that the value of “k” in k-anonymity is an example of an index whose value increases as it becomes more difficult to ensure data anonymity.
  • Therefore, the generalization policy cooperation determination unit 120 of the information processing apparatus 10 according to the present embodiment determines the common generalization policy in consideration of the difficulty level of ensuring anonymity, or information related to that difficulty level.
  • the configuration of the information processing apparatus 10 of the present embodiment is the same as that of the first embodiment, and thus the description of the configuration is omitted. Also, description of operations similar to those in the first embodiment will be omitted, and operations unique to the present embodiment will be described.
  • the data size (number of records) of device A is “100”.
  • the data size (number of records) of device B is “10”.
  • Assume that “5-anonymity” is to be secured.
  • In this case, device B can divide its data into at most two groups, each with a data size (number of records) of “5”.
  • With any other division, device B cannot satisfy “5-anonymity” in the divided groups.
  • In such a case, the generalization policy linkage determination unit 120 of the information processing apparatus 10 does not determine, as the common generalization policy, a policy based on both generalization policies.
  • Instead, the generalization policy linkage determination unit 120 of the information processing apparatus 10 determines the generalization policy of the information processing apparatus 10 with the small data size (number of records) (apparatus B) as the common generalization policy.
  • the generalization policy cooperation determination unit 120 of the information processing apparatus 10 may change the generalization policy determination method as the anonymization process progresses. That is, the information processing apparatus 10 is not limited to the data size to be stored, and may use the divided data size.
  • the generalization policy cooperation determination unit 120 of the information processing device 10 may determine a common generalization policy based on the generalization policies of all the information processing devices 10.
  • For example, this is the case where the data size (number of records) after division is equal to or larger than a predetermined multiple (for example, 3 times) of the anonymity to be secured (for example, “k” of “k-anonymity”).
  • In the opposite case, the generalization policy cooperation determination unit 120 of the information processing apparatus 10 may prioritize, as the common generalization policy, the generalization policy of the information processing apparatus 10 whose data size (number of records) has become small.
  • the opposite case is a case where the data size (number of records) after the division of any one of the information processing apparatuses 10 becomes smaller than a predetermined multiple for the anonymity to be secured.
  • Alternatively, instead of treating the generalization policies of the information processing apparatuses 10 equally, the generalization policy linkage determination unit 120 of the information processing apparatus 10 may weight them in consideration of the data size.
  • the information processing apparatus 10 may set a weight based on the data size of each information processing apparatus 10 (for example, a weight inversely proportional to the data size).
  • the information processing apparatus 10 may determine the boundary value of the generalization policy (the division point) by weighting each boundary value in inverse proportion to the data size, as shown in the following Equation (1).
  • the boundary value (edge1) in the device A is “120”, and the boundary value (edge2) in the device B is “126”.
  • Equation (1), as described above, is a weighted average with weights inversely proportional to the data sizes: boundary = (edge1 / size1 + edge2 / size2) / (1 / size1 + 1 / size2), where size1 and size2 are the data sizes of device A and device B.
  • the boundary value obtained using Equation (1) is close to the boundary value of device A, which has the smaller data size. That is, the boundary of device A, where ensuring anonymity is difficult, is given priority. As a result, more divisions become possible in device A, which has the smaller data size. That is, the generalization policy of apparatus A is given priority.
  • the information processing apparatus 10 may operate as follows, for example.
  • the information processing apparatus 10 may prioritize the generalization policy of the information processing apparatus 10 having a large “k” value.
  • the information processing apparatus 10 may cooperate in consideration of the difficulty level of anonymity. For example, when ensuring “k-anonymity”, the information processing apparatus 10 may use the value of “k” as a weight.
  • the information processing apparatus 10 may use the following mathematical formula (2).
  • Equation (2), as described above, weights each boundary value by the value of “k” required in each apparatus: boundary = (k1 × edge1 + k2 × edge2) / (k1 + k2), where k1 and k2 are the “k” values of device A and device B.
  • the boundary value obtained using Equation (2) is close to the boundary value of device A, which requires high anonymity (a large “k”). That is, the boundary of device A, where ensuring anonymity is difficult, is given priority. As a result, more divisions become possible in device A, which is difficult to anonymize.
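Under the description above, Equations (1) and (2) are both weighted averages of the proposed boundary values, differing only in the weights; the data sizes and "k" values below are assumptions for illustration:

```python
def weighted_boundary(edges, weights):
    # Weighted average of the boundary values proposed by the apparatuses.
    return sum(e * w for e, w in zip(edges, weights)) / sum(weights)

edges = [120, 126]  # edge1 (device A), edge2 (device B)

# Equation (1) (sketch): weights inversely proportional to data size;
# assumed sizes are 10 (device A) and 100 (device B).
b1 = weighted_boundary(edges, [1 / 10, 1 / 100])
print(round(b1, 2))  # close to device A's 120

# Equation (2) (sketch): weights proportional to the required "k";
# assumed k = 10 (device A) and k = 2 (device B).
b2 = weighted_boundary(edges, [10, 2])
print(b2)  # → 121.0, again close to device A
```

In both weightings the apparatus for which anonymity is harder to ensure pulls the common boundary toward its own proposal.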
  • the information processing apparatus 10 may combine the above.
  • the information processing apparatus 10 may use the difficulty level of ensuring anonymity even in the selection of the common QID described in the modification of the first embodiment.
  • As described above, the information processing apparatus 10 of the present embodiment can obtain the effect of setting an appropriate generalization policy even when the difficulty level of ensuring anonymity differs among the information processing apparatuses 10.
  • This is because the information processing apparatus 10 changes, based on the difficulty level of ensuring anonymity, the method of determining which generalization policy to prioritize.
  • the information processing apparatus 10 determines a generalization policy to be prioritized based on the data size of the data to be anonymized (data size to be stored or data size after division) or anonymity.
  • the information processing apparatus 10 selects the generalization policy of the information processing apparatus 10 having a small data size or high anonymity.
  • the information processing apparatus 10 gives priority to the generalization policy of the information processing apparatus 10 having a small data size or high anonymity.
  • Therefore, the information processing apparatus 10 according to the present embodiment makes it easier for an information processing apparatus 10 that is difficult to anonymize to ensure anonymity.
  • Appendix 1: An information processing apparatus comprising: generalization policy linkage determination means for determining, in cooperation with another apparatus, a common generalization policy that is a generalization policy for anonymization of data used in common with the other apparatus; and anonymization means for anonymizing data based on the common generalization policy.
  • The information processing apparatus according to Appendix 1, wherein the generalization policy linkage determination means determines, as the common generalization policy, a generalization policy of at least some attributes of the data to be anonymized.
  • The information processing apparatus according to Appendix 2, wherein the anonymization means anonymizes data based on, in addition to the common generalization policy, at least a part of an attribute generalization policy that is not included in the common generalization policy.
  • The information processing apparatus according to Appendix 2 or Appendix 3, wherein the generalization policy linkage determination means determines, with the other apparatus, the attribute used for the common generalization policy.
  • The information processing apparatus according to any one of Appendix 1 to Appendix 4, wherein the common generalization policy is a generalization policy of a quasi-identifier, and includes a generalization width and/or a boundary of the quasi-identifier.
  • The information processing apparatus according to any one of Appendix 1 to Appendix 5, wherein the generalization policy linkage determination means determines the common generalization policy based on a difficulty level, which is an index indicating the difficulty of ensuring the anonymity of data that is to be secured in the own apparatus and the other apparatus when anonymizing the data.
  • Appendix 7: The information processing apparatus according to Appendix 6, wherein the difficulty level is calculated based on the data size or the anonymity of the data to be anonymized.
  • The information processing apparatus according to any one of Appendix 1 to Appendix 7, wherein the generalization policy linkage determination means includes: anonymity parameter storage means for holding an anonymity parameter, which is information for determining whether the generalization policy used by the anonymization means is the common generalization policy; common parameter setting means for determining the common generalization policy in cooperation with the other apparatus; and communication means for mediating communication between the common parameter setting means and the other apparatus.
  • The information processing apparatus according to any one of Appendix 1 to Appendix 8, comprising: pre-anonymization data storage means for storing the pre-anonymization data to be anonymized by the anonymization means; anonymized data storage means for storing the anonymized data anonymized by the anonymization means; and transmission means for transmitting the anonymized data to a user device.
  • The information processing apparatus according to any one of Appendix 1 to Appendix 9, wherein the generalization policy linkage determination means determines in advance which of a plurality of apparatuses is prioritized in cooperation when determining the common generalization policy, or which attribute of the generalization policy is prioritized.
  • Appendix 12: A computer-readable recording medium storing a program for causing a computer apparatus to execute: a process of determining, in cooperation with another apparatus, a common generalization policy that is a generalization policy for anonymization of data used in common with the other apparatus; and a process of anonymizing data based on the common generalization policy.

Abstract

Provided is an information processing device that enables a user device to aggregate anonymized data, and that also enables the data to be anonymized in a manner appropriate for the provider. The device comprises: generalization policy coordination determination means for determining, in coordination with another device, a common generalization policy, which is a generalization policy for the anonymization of data, shared with the other device; and anonymization means for anonymizing the data according to the common generalization policy.
PCT/JP2014/002480 2013-05-15 2014-05-12 Information processing device, information anonymization method, and recording medium WO2014185043A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2015516909A JPWO2014185043A1 (ja) 2013-05-15 2014-05-12 Information processing device, information anonymization method, and program

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2013103192 2013-05-15
JP2013-103192 2013-05-15

Publications (1)

Publication Number Publication Date
WO2014185043A1 true WO2014185043A1 (fr) 2014-11-20

Family

ID=51898034

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2014/002480 WO2014185043A1 (fr) 2013-05-15 2014-05-12 Information processing device, information anonymization method, and recording medium

Country Status (2)

Country Link
JP (1) JPWO2014185043A1 (fr)
WO (1) WO2014185043A1 (fr)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012067213A1 (fr) * 2010-11-16 2012-05-24 日本電気株式会社 Information processing system and anonymization method
WO2012093522A1 (fr) * 2011-01-05 2012-07-12 日本電気株式会社 Anonymization device
WO2012165518A1 (fr) * 2011-06-02 2012-12-06 日本電気株式会社 Distributed anonymization system, distributed anonymization device, and distributed anonymization method
JP2013041536A (ja) * 2011-08-19 2013-02-28 Fujitsu Ltd Information processing method and apparatus
WO2013121738A1 (fr) * 2012-02-17 2013-08-22 日本電気株式会社 Distributed anonymization device and distributed anonymization method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
PAWEL JURCZYK ET AL.: "Distributed Anonymization: Achieving Privacy for Both Data Subjects and Data Providers", DATA AND APPLICATIONS SECURITY 2009, LNCS 5645, 2009, pages 191 - 207, XP047307475, Retrieved from the Internet <URL:http://www.mathcs.emory.edu/~lxiong/research/pub/jurczyk09distributed.pdf> doi:10.1007/978-3-642-03007-9_13 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6695511B1 (ja) * 2019-05-21 2020-05-20 三菱電機株式会社 Anonymization method derivation device, anonymization method derivation method, anonymization method derivation program, and anonymization method derivation system
WO2020235008A1 (fr) * 2019-05-21 2020-11-26 三菱電機株式会社 Anonymization method derivation device, anonymization method derivation method, anonymization method derivation program, and anonymization method derivation system
JP7380183B2 (ja) 2019-12-23 2023-11-15 日本電気株式会社 Anonymity degradation information output prevention device, anonymity degradation information output prevention method, and anonymity degradation information output prevention program

Also Published As

Publication number Publication date
JPWO2014185043A1 (ja) 2017-02-23


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14798433

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2015516909

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14798433

Country of ref document: EP

Kind code of ref document: A1