NZ756030B2 - Configurable annotations for privacy-sensitive user content - Google Patents
Configurable annotations for privacy-sensitive user content
- Publication number
- NZ756030B2, NZ756030A, NZ75603018A
- Authority
- NZ
- New Zealand
- Prior art keywords
- content
- threshold quantity
- threshold
- user
- annotation
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/60—Protecting data
- G06F21/62—Protecting access to data via a platform, e.g. using keys or access control rules
- G06F21/6218—Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
- G06F21/6245—Protecting personal data, e.g. for financial or medical purposes
- G06F21/6263—Protecting personal data, e.g. for financial or medical purposes during internet communication, e.g. revealing personal data from cookies
- G06F21/70—Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer
- G06F21/82—Protecting input, output or interconnection devices
- G06F21/84—Protecting input, output or interconnection devices output devices, e.g. displays or monitors
Abstract
Systems, methods, and software for data privacy annotation frameworks for user applications are provided herein. An exemplary method includes identifying at least a first threshold quantity, an elasticity factor for modifying the first threshold quantity to a second threshold quantity, and an indication of a threshold resiliency property indicating when the second threshold quantity overrides the first threshold quantity. The method includes monitoring a content edit process of user content to identify a quantity of the user content that contains sensitive data corresponding to one or more predetermined data schemes, and during the content edit process, enabling and disabling presentation of annotation indicators for the content elements based at least in part on a current quantity with regard to the first threshold quantity, the elasticity factor for the first threshold quantity when enabled, and the indication of the threshold resiliency property.
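The threshold behavior described in the abstract amounts to a hysteresis loop: annotations switch on above a first threshold; when the elasticity factor is enabled it derives a lower second threshold below which annotations switch off; and the resiliency property decides whether climbing back above that lower threshold re-enables them. The patent does not specify an implementation, so the following is a minimal illustrative sketch with hypothetical names and values:

```python
class AnnotationState:
    """Hysteresis for annotation indicators (illustrative sketch only)."""

    def __init__(self, first_threshold, elasticity=None, resilient=False):
        self.first = first_threshold
        # The elasticity factor, when enabled, derives a lower second threshold.
        self.second = first_threshold * elasticity if elasticity else None
        self.resilient = resilient
        self.showing = False

    def update(self, count):
        """Update annotation visibility for the current count of sensitive elements."""
        if not self.showing:
            # With resiliency, crossing back above the lower (second) threshold
            # re-triggers annotations; without it, the first threshold governs.
            trigger = self.second if (self.resilient and self.second is not None) else self.first
            if count > trigger:
                self.showing = True
        else:
            # With elasticity, annotations persist until the count falls
            # below the lower second threshold rather than the first.
            clear = self.second if self.second is not None else self.first
            if count < clear:
                self.showing = False
        return self.showing
```

With a first threshold of 10 and an elasticity factor of 0.5, annotations appear above 10 sensitive elements and disappear only below 5; whether re-crossing 5 (but not 10) brings them back depends on the resiliency flag.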
Claims (15)
1. A method of operating a user application, the method comprising:
identifying at least a first threshold quantity, an elasticity factor for modifying the first threshold quantity to a second threshold quantity when enabled, and an indication of a threshold resiliency property indicating when the second threshold quantity overrides the first threshold quantity;
monitoring a content edit process of user content in a user data file to identify a quantity of content elements among the user content that contain sensitive data corresponding to one or more predetermined data schemes, which includes using a shared data loss protection service to identify the sensitive data based on the user content, the shared data loss protection service being configured to perform sensitive data identification for a plurality of user applications;
during the content edit process, enabling and disabling presentation of annotation indicators for one or more of the content elements based at least in part on a current quantity of the content elements with regard to the first threshold quantity, the elasticity factor for the first threshold quantity when enabled, and the indication of the threshold resiliency property; and
responsive to user input associated with at least one of the content elements that contain sensitive data, replacing the at least one of the content elements with obfuscated content.
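Claim 1 ends with replacing a flagged content element with obfuscated content on user input. The claim does not prescribe an obfuscation scheme, so the following is a hypothetical masking sketch that keeps only a short visible tail:

```python
def obfuscate(element: str, keep_last: int = 4) -> str:
    """Mask a sensitive content element, keeping the last few characters visible.

    The masking scheme is an illustrative assumption; the claim only requires
    that the element be replaced with obfuscated content.
    """
    if len(element) <= keep_last:
        return "*" * len(element)
    return "*" * (len(element) - keep_last) + element[-keep_last:]
```

For example, a flagged identifier such as `123-45-6789` would render as `*******6789` in the edited document.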
2. The method of claim 1, wherein the annotation indicators comprise one or more of: a global indicator presented in a user interface to the user application that applies to the user data file; and individual indicators presented in the user interface positioned proximate to individual content elements containing the sensitive data.
3. The method of claim 1, further comprising, during the content edit process:
based at least on the current quantity of content elements exceeding the first threshold quantity, initiating presentation of at least one annotation indicator in the user interface that flags the user content in the user interface as containing at least first sensitive data;
based at least on the current quantity of content elements initially exceeding the first threshold quantity and subsequently falling below the first threshold quantity when the elasticity factor is applied to the first threshold quantity, establishing a second threshold quantity based at least on the elasticity factor for removal of the presentation of the at least one annotation indicator;
based at least on the current quantity of content elements falling below the second threshold quantity when the elasticity factor is applied to the first threshold quantity, initiating removal of the presentation of the at least one annotation indicator;
based at least on the current quantity of content elements initially falling below the second threshold quantity and subsequently exceeding the second threshold quantity when the threshold resiliency property is applied to the second threshold quantity, initiating presentation of at least one further annotation indicator in the user interface that flags the user content in the user interface as containing at least second sensitive data;
based at least on the current quantity of content elements initially exceeding the first threshold quantity and subsequently falling below the first threshold quantity when the elasticity factor is not applied to the first threshold quantity, removing presentation of the at least one annotation indicator; and
based at least on the current quantity of content elements initially falling below the second threshold quantity and subsequently exceeding the second threshold quantity when the resiliency property is not applied to the second threshold quantity, withholding presentation of the at least one further annotation indicator that flags the user content in the user interface as containing at least the second sensitive data until the quantity of content elements exceeds the first threshold quantity.
4. A data privacy annotation framework for a user application, comprising:
one or more computer readable storage media;
a processing system operatively coupled with the one or more computer readable storage media; and
program instructions stored on the one or more computer readable storage media that, based at least on being read and executed by the processing system, direct the processing system to at least:
identify one or more of a first threshold quantity, an elasticity factor for the first threshold quantity, and an indication of a threshold resiliency property;
monitor user content in a user data file presented for content editing in a user interface to the user application to identify a quantity of content elements containing sensitive data among the user content corresponding to one or more predetermined data schemes, which includes using a shared data loss protection service to identify the sensitive data based on the user content, the shared data loss protection service being configured to perform sensitive data identification for a plurality of user applications;
during the content editing:
based at least on the quantity of content elements exceeding the first threshold quantity, initiate presentation of at least one annotation indicator in the user interface that flags the user content in the user interface as containing at least first sensitive data;
based at least on the quantity of content elements initially exceeding the first threshold quantity and subsequently falling below the first threshold quantity when the elasticity factor is applied to the first threshold quantity, establish a second threshold quantity based at least on the elasticity factor for removal of the presentation of the at least one annotation indicator; and
based at least on the quantity of content elements initially falling below the second threshold quantity and subsequently exceeding the second threshold quantity when the threshold resiliency property is applied to the second threshold quantity, initiate presentation of at least one further annotation indicator in the user interface that flags the user content in the user interface as containing at least second sensitive data; and
responsive to user input associated with at least one of the content elements containing sensitive data, replace the at least one of the content elements with obfuscated content.
5. The data privacy annotation framework of claim 4, comprising further program instructions that, based at least on being read and executed by the processing system, direct the processing system to at least:
during the content editing, based at least on the quantity of content elements falling below the second threshold quantity when the elasticity factor is applied to the first threshold quantity, initiate removal of the presentation of the at least one annotation indicator;
during the content editing, based at least on the quantity of content elements initially exceeding the first threshold quantity and subsequently falling below the first threshold quantity when the elasticity factor is not applied to the first threshold quantity, remove presentation of the at least one annotation indicator; and
during the content editing, based at least on the quantity of content elements initially falling below the second threshold quantity and subsequently exceeding the second threshold quantity when the resiliency property is not applied to the second threshold quantity, withhold presentation of the at least one further annotation indicator that flags the user content in the user interface as containing at least the second sensitive data until the quantity of content elements exceeds the first threshold quantity.
6. The data privacy annotation framework of claim 4, wherein identifying one or more of the first threshold quantity, the elasticity factor for the first threshold quantity, and the indication of a threshold resiliency property comprises determining an annotation policy established for a target entity associated with the content editing, the annotation policy comprising one or more of the first threshold quantity, the elasticity factor for the first threshold quantity, and the indication of a threshold resiliency property.
7. The data privacy annotation framework of claim 6, wherein the target entity comprises at least one of a user performing the content editing, an organization that comprises the user performing the content editing, and an application type of the user application.
8. The data privacy annotation framework of claim 4, wherein the at least one annotation indicator and the at least one further annotation indicator each comprise one or more of: a global indicator presented in the user interface that applies to the user data file; individual indicators presented in the user interface positioned proximate to individual content elements containing the sensitive data.
9. The data privacy annotation framework of claim 4, wherein the one or more predetermined data schemes are defined by one or more expressions used by a classification service to parse the user content and identify ones of the content elements containing data indicative of one or more predetermined content patterns or one or more predetermined content types.
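Claims 9 and 15 describe predetermined data schemes as expressions that a classification service uses to parse user content for predetermined patterns or types. A common realization is regular-expression matching; the patterns below (a US SSN shape and a 16-digit card shape) are illustrative assumptions, not schemes named by the patent:

```python
import re

# Illustrative data schemes: scheme name -> compiled expression.
# Real classification services would maintain these centrally.
DATA_SCHEMES = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def classify(user_content: str) -> dict:
    """Return the content elements matching each predetermined data scheme."""
    return {name: pattern.findall(user_content)
            for name, pattern in DATA_SCHEMES.items()}
```

The per-scheme match counts from such a pass are what the annotation framework compares against the first and second threshold quantities.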
10. A method of providing a data privacy annotation framework for a user application, the method comprising:
identifying one or more of a first threshold quantity, an elasticity factor for the first threshold quantity, and an indication of a threshold resiliency property;
monitoring user content in a user data file presented for content editing in a user interface to the user application to identify a quantity of content elements containing sensitive data among the user content corresponding to one or more predetermined data schemes, which includes using a shared data loss protection service to identify the sensitive data based on the user content, the shared data loss protection service being configured to perform sensitive data identification for a plurality of user applications;
during the content editing:
based at least on the quantity of content elements exceeding the first threshold quantity, initiating presentation of at least one annotation indicator in the user interface that flags the user content in the user interface as containing at least first sensitive data;
based at least on the quantity of content elements initially exceeding the first threshold quantity and subsequently falling below the first threshold quantity when the elasticity factor is applied to the first threshold quantity, establishing a second threshold quantity based at least on the elasticity factor for removal of the presentation of the at least one annotation indicator; and
based at least on the quantity of content elements initially falling below the second threshold quantity and subsequently exceeding the second threshold quantity when the threshold resiliency property is applied to the second threshold quantity, initiating presentation of at least one further annotation indicator in the user interface that flags the user content in the user interface as containing at least second sensitive data; and
responsive to user input associated with at least one of the content elements containing sensitive data, replacing the at least one of the content elements with obfuscated content.
11. The method of claim 10, further comprising:
during the content editing, based at least on the quantity of content elements falling below the second threshold quantity when the elasticity factor is applied to the first threshold quantity, initiating removal of the presentation of the at least one annotation indicator;
during the content editing, based at least on the quantity of content elements initially exceeding the first threshold quantity and subsequently falling below the first threshold quantity when the elasticity factor is not applied to the first threshold quantity, removing presentation of the at least one annotation indicator; and
during the content editing, based at least on the quantity of content elements initially falling below the second threshold quantity and subsequently exceeding the second threshold quantity when the resiliency property is not applied to the second threshold quantity, withholding presentation of the at least one further annotation indicator that flags the user content in the user interface as containing at least the second sensitive data until the quantity of content elements exceeds the first threshold quantity.
12. The method of claim 10, wherein identifying one or more of the first threshold quantity, the elasticity factor for the first threshold quantity, and the indication of a threshold resiliency property comprises determining an annotation policy established for a target entity associated with the content editing, the annotation policy comprising one or more of the first threshold quantity, the elasticity factor for the first threshold quantity, and the indication of a threshold resiliency property.
13. The method of claim 12, wherein the target entity comprises at least one of a user performing the content editing, an organization that comprises the user performing the content editing, and an application type of the user application.
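Claims 12 and 13 tie the threshold quantity, elasticity factor, and resiliency indication to an annotation policy resolved for a target entity: a user, the user's organization, or the application type. A hypothetical policy lookup (entity names, default values, and precedence are all illustrative assumptions):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AnnotationPolicy:
    """Policy bundle named in claims 12-13: thresholds plus modifier flags."""
    first_threshold: int
    elasticity_factor: Optional[float] = None
    threshold_resiliency: bool = False

# Hypothetical policies keyed by (entity type, entity id); a real service
# might resolve user -> organization -> application-type precedence.
POLICIES = {
    ("org", "contoso"): AnnotationPolicy(10, elasticity_factor=0.5),
    ("app", "spreadsheet"): AnnotationPolicy(5, threshold_resiliency=True),
}

def resolve_policy(entity_type: str, entity_id: str) -> AnnotationPolicy:
    """Look up the annotation policy for a target entity, with a strict default."""
    return POLICIES.get((entity_type, entity_id), AnnotationPolicy(first_threshold=1))
```

Resolving the policy up front lets the same monitoring code serve different users and applications with different annotation sensitivity.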
14. The method of claim 10, wherein the at least one annotation indicator and the at least one further annotation indicator each comprise one or more of: a global indicator presented in the user interface that applies to the user data file; individual indicators presented in the user interface positioned proximate to individual content elements containing the sensitive data.
15. The method of claim 10, wherein the one or more predetermined data schemes are defined by one or more expressions used by a classification service to parse the user content and identify ones of the content elements containing data indicative of one or more predetermined content patterns or one or more predetermined content types.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/466,988 US10410014B2 (en) | 2017-03-23 | 2017-03-23 | Configurable annotations for privacy-sensitive user content |
PCT/US2018/022284 WO2018175163A1 (en) | 2017-03-23 | 2018-03-14 | Configurable annotations for privacy-sensitive user content |
Publications (2)
Publication Number | Publication Date |
---|---|
NZ756030A NZ756030A (en) | 2023-12-22 |
NZ756030B2 (en) | 2024-03-26 |