US20210297432A1 - Generation of an anomalies and event awareness evaluation regarding a system aspect of a system - Google Patents
- Publication number
- US20210297432A1 (application US 17/301,356)
- Authority
- US
- United States
- Prior art keywords
- evaluation
- data
- analysis
- rating
- anomaly
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/22—Detection or location of defective computer hardware by testing during standby operation or during idle time, e.g. start-up testing
- G06F11/26—Functional testing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
- G06Q10/063—Operations research, analysis or management
- G06Q10/0639—Performance analysis of employees; Performance analysis of enterprise or organisation operations
- G06Q10/06393—Score-carding, benchmarking or key performance indicator [KPI] analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/0703—Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
- G06F11/0706—Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation the processing taking place on a specific hardware platform or in a specific software environment
- G06F11/0709—Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation the processing taking place on a specific hardware platform or in a specific software environment in a distributed system consisting of a plurality of standalone computer nodes, e.g. clusters, client-server systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/0703—Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
- G06F11/0751—Error or fault detection not based on redundancy
- G06F11/0754—Error or fault detection not based on redundancy by exceeding limits
- G06F11/076—Error or fault detection not based on redundancy by exceeding limits by exceeding a count or rate limit, e.g. word- or bit count limit
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/0703—Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
- G06F11/079—Root cause analysis, i.e. error or fault diagnosis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/0703—Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
- G06F11/0793—Remedial or corrective actions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/30—Monitoring
- G06F11/3003—Monitoring arrangements specially adapted to the computing system or computing system component being monitored
- G06F11/3006—Monitoring arrangements specially adapted to the computing system or computing system component being monitored where the computing system is distributed, e.g. networked systems, clusters, multiprocessor systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/30—Monitoring
- G06F11/34—Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment
- G06F11/3409—Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment for performance assessment
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/30—Monitoring
- G06F11/34—Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment
- G06F11/3447—Performance evaluation by modeling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/30—Monitoring
- G06F11/34—Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment
- G06F11/3466—Performance evaluation by tracing or monitoring
- G06F11/3476—Data logging
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/30—Monitoring
- G06F11/34—Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment
- G06F11/3466—Performance evaluation by tracing or monitoring
- G06F11/3495—Performance evaluation by tracing or monitoring for systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/36—Preventing errors by testing or debugging software
- G06F11/3668—Software testing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/23—Updating
- G06F16/2379—Updates performed during online database operations; commit processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/50—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
- G06F21/57—Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
- G06F21/577—Assessing vulnerabilities and evaluating computer system security
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/60—Protecting data
- G06F21/62—Protecting access to data via a platform, e.g. using keys or access control rules
- G06F21/6218—Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/14—Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
- H04L63/1408—Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic by monitoring network traffic
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/14—Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
- H04L63/1408—Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic by monitoring network traffic
- H04L63/1416—Event detection, e.g. attack signature detection
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/14—Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
- H04L63/1408—Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic by monitoring network traffic
- H04L63/1425—Traffic logging, e.g. anomaly detection
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/14—Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
- H04L63/1433—Vulnerability analysis
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/20—Network architectures or network communication protocols for network security for managing network security; network security policies in general
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2221/00—Indexing scheme relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F2221/03—Indexing scheme relating to G06F21/50, monitoring users, programs or devices to maintain the integrity of platforms
- G06F2221/034—Test or assess a computer or a system
Definitions
- This disclosure relates to computer systems and more particularly to evaluation of a computer system.
- A typical system includes networking equipment, end point devices such as computer servers, user computers, storage devices, printing devices, security devices, and point of service devices, among other types of devices.
- The networking equipment includes routers, switches, edge devices, wireless access points, and other types of communication devices that intercouple in a wired or wireless fashion.
- The networking equipment facilitates the creation of one or more networks that are tasked to service all or a portion of a company's communication needs, e.g., Wide Area Networks, Local Area Networks, Virtual Private Networks, etc.
- Each device within a system includes hardware components and software components. Hardware components degrade over time and eventually are incapable of performing their intended functions. Software components must be updated regularly to ensure their proper functionality. Some software components are simply replaced by newer and better software even though they remain operational within a system.
- Cyber-attacks are initiated by individuals or entities with the bad intent of stealing sensitive information such as login/password information, stealing proprietary information such as trade secrets or important new technology, interfering with the operation of a system, and/or holding the system hostage until a ransom is paid, among other improper purposes.
- A single cyber-attack can make a large system inoperable and cost the system owner many millions of dollars to restore and remedy.
- FIG. 1 is a schematic block diagram of an embodiment of a networked environment that includes systems coupled to an analysis system in accordance with the present disclosure
- FIGS. 2A-2D are schematic block diagrams of embodiments of a computing device in accordance with the present disclosure.
- FIGS. 3A-3E are schematic block diagrams of embodiments of a computing entity in accordance with the present disclosure.
- FIG. 4 is a schematic block diagram of another embodiment of a networked environment that includes a system coupled to an analysis system in accordance with the present disclosure
- FIG. 5 is a schematic block diagram of another embodiment of a networked environment that includes a system coupled to an analysis system in accordance with the present disclosure
- FIG. 6 is a schematic block diagram of another embodiment of a networked environment that includes a system coupled to an analysis system in accordance with the present disclosure
- FIG. 7 is a schematic block diagram of another embodiment of a networked environment that includes a system coupled to an analysis system in accordance with the present disclosure
- FIG. 8 is a schematic block diagram of another embodiment of a networked environment having a system that includes a plurality of system elements in accordance with the present disclosure
- FIG. 9 is a schematic block diagram of an example of a system section of a system selected for evaluation in accordance with the present disclosure.
- FIG. 10 is a schematic block diagram of another example of a system section of a system selected for evaluation in accordance with the present disclosure.
- FIG. 11 is a schematic block diagram of an embodiment of a networked environment having a system that includes a plurality of system assets coupled to an analysis system in accordance with the present disclosure
- FIG. 12 is a schematic block diagram of an embodiment of a system that includes a plurality of physical assets coupled to an analysis system in accordance with the present disclosure
- FIG. 13 is a schematic block diagram of another embodiment of a networked environment having a system that includes a plurality of system assets coupled to an analysis system in accordance with the present disclosure
- FIG. 14 is a schematic block diagram of another embodiment of a system that includes a plurality of physical assets coupled to an analysis system in accordance with the present disclosure
- FIG. 15 is a schematic block diagram of another embodiment of a system that includes a plurality of physical assets coupled to an analysis system in accordance with the present disclosure
- FIG. 16 is a schematic block diagram of another embodiment of a system that includes a plurality of physical assets in accordance with the present disclosure
- FIG. 17 is a schematic block diagram of an embodiment of a user computing device in accordance with the present disclosure.
- FIG. 18 is a schematic block diagram of an embodiment of a server in accordance with the present disclosure.
- FIG. 19 is a schematic block diagram of another embodiment of a networked environment having a system that includes a plurality of system functions coupled to an analysis system in accordance with the present disclosure
- FIG. 20 is a schematic block diagram of another embodiment of a system that includes divisions, departments, and groups in accordance with the present disclosure
- FIG. 21 is a schematic block diagram of another embodiment of a system that includes divisions and departments, which include system elements in accordance with the present disclosure
- FIG. 22 is a schematic block diagram of another embodiment of a division of a system having departments, which include system elements in accordance with the present disclosure
- FIG. 23 is a schematic block diagram of another embodiment of a networked environment having a system that includes a plurality of security functions coupled to an analysis system in accordance with the present disclosure
- FIG. 24 is a schematic block diagram of an embodiment of an engineering department of a division that reports to a corporate department of a system in accordance with the present disclosure.
- FIG. 25 is a schematic block diagram of an example of an analysis system evaluating a system element under test of a system in accordance with the present disclosure
- FIG. 26 is a schematic block diagram of another example of an analysis system evaluating a system element under test of a system in accordance with the present disclosure
- FIG. 27 is a schematic block diagram of another example of an analysis system evaluating a system element under test of a system in accordance with the present disclosure
- FIG. 28 is a schematic block diagram of another example of an analysis system evaluating a system element under test of a system in accordance with the present disclosure
- FIG. 29 is a schematic block diagram of an example of the functioning of an analysis system evaluating a system element under test of a system in accordance with the present disclosure
- FIG. 30 is a schematic block diagram of another example of the functioning of an analysis system evaluating a system element under test of a system in accordance with the present disclosure
- FIG. 31 is a diagram of an example of evaluation options of an analysis system for evaluating a system element under test of a system in accordance with the present disclosure
- FIG. 32 is a diagram of another example of evaluation options of an analysis system for evaluating a system element under test of a system in accordance with the present disclosure
- FIG. 33 is a diagram of another example of evaluation options of an analysis system for evaluating a system element under test of a system in accordance with the present disclosure
- FIG. 34 is a diagram of another example of evaluation options of an analysis system for evaluating a system element under test of a system in accordance with the present disclosure
- FIG. 35 is a schematic block diagram of an embodiment of an analysis system coupled to a system in accordance with the present disclosure.
- FIG. 36 is a schematic block diagram of an embodiment of a portion of an analysis system coupled to a system in accordance with the present disclosure
- FIG. 37 is a schematic block diagram of another embodiment of a portion of an analysis system coupled to a system in accordance with the present disclosure.
- FIG. 38 is a schematic block diagram of an embodiment of a data extraction module of an analysis system coupled to a system in accordance with the present disclosure
- FIG. 39 is a schematic block diagram of another embodiment of an analysis system coupled to a system in accordance with the present disclosure.
- FIG. 40 is a schematic block diagram of another embodiment of an analysis system coupled to a system in accordance with the present disclosure.
- FIG. 41 is a schematic block diagram of an embodiment of a data analysis module of an analysis system in accordance with the present disclosure.
- FIG. 42 is a schematic block diagram of an embodiment of an analyze and score module of an analysis system in accordance with the present disclosure.
- FIG. 43 is a diagram of an example of system aspects, evaluation aspects, evaluation rating metrics, and analysis system output options of an analysis system for analyzing a section of a system in accordance with the present disclosure
- FIG. 44 is a diagram of another example of system aspects, evaluation aspects, evaluation rating metrics, and analysis system output options of an analysis system for analyzing a section of a system in accordance with the present disclosure
- FIG. 45 is a diagram of an example of an identification evaluation category, sub-categories, and sub-sub-categories of the evaluation aspects in accordance with the present disclosure.
- FIG. 46 is a diagram of an example of a protect evaluation category, sub-categories, and sub-sub-categories of the evaluation aspects in accordance with the present disclosure.
- FIG. 47 is a diagram of an example of a detect evaluation category, sub-categories, and sub-sub-categories of the evaluation aspects in accordance with the present disclosure.
- FIG. 48 is a diagram of an example of a respond evaluation category, sub-categories, and sub-sub-categories of the evaluation aspects in accordance with the present disclosure.
- FIG. 49 is a diagram of an example of a recover evaluation category, sub-categories, and sub-sub-categories of the evaluation aspects in accordance with the present disclosure.
- FIG. 50 is a diagram of a specific example of system aspects, evaluation aspects, evaluation rating metrics, and analysis system output options of an analysis system for analyzing a section of a system in accordance with the present disclosure
- FIG. 51 is a diagram of another specific example of system aspects, evaluation aspects, evaluation rating metrics, and analysis system output options of an analysis system for analyzing a section of a system in accordance with the present disclosure
- FIG. 52 is a diagram of another specific example of system aspects, evaluation aspects, evaluation rating metrics, and analysis system output options of an analysis system for analyzing a section of a system in accordance with the present disclosure
- FIG. 53 is a diagram of another specific example of system aspects, evaluation aspects, evaluation rating metrics, and analysis system output options of an analysis system for analyzing a section of a system in accordance with the present disclosure
- FIG. 54 is a diagram of another specific example of system aspects, evaluation aspects, evaluation rating metrics, and analysis system output options of an analysis system for analyzing a section of a system in accordance with the present disclosure
- FIG. 55 is a diagram of another specific example of system aspects, evaluation aspects, evaluation rating metrics, and analysis system output options of an analysis system for analyzing a section of a system in accordance with the present disclosure
- FIG. 56 is a diagram of another specific example of system aspects, evaluation aspects, evaluation rating metrics, and analysis system output options of an analysis system for analyzing a section of a system in accordance with the present disclosure
- FIG. 57 is a diagram of an example of identifying deficiencies and auto-corrections by an analysis system analyzing a section of a system in accordance with the present disclosure
- FIG. 58 is a schematic block diagram of an embodiment of an evaluation processing module of an analysis system in accordance with the present disclosure.
- FIG. 59 is a state diagram of an example of an analysis system analyzing a section of a system in accordance with the present disclosure
- FIG. 60 is a logic diagram of an example of an analysis system analyzing a section of a system in accordance with the present disclosure
- FIG. 61 is a logic diagram of another example of an analysis system analyzing a section of a system in accordance with the present disclosure.
- FIG. 62 is a logic diagram of another example of an analysis system analyzing a section of a system in accordance with the present disclosure.
- FIG. 63 is a logic diagram of another example of an analysis system analyzing a section of a system in accordance with the present disclosure.
- FIG. 64 is a logic diagram of another example of an analysis system analyzing a section of a system in accordance with the present disclosure.
- FIG. 65 is a logic diagram of another example of an analysis system analyzing a section of a system in accordance with the present disclosure.
- FIG. 66 is a logic diagram of another example of an analysis system analyzing a section of a system in accordance with the present disclosure.
- FIG. 67 is a logic diagram of another example of an analysis system analyzing a section of a system in accordance with the present disclosure.
- FIG. 68 is a logic diagram of an example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure
- FIG. 69 is a logic diagram of a further example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure
- FIG. 70 is a schematic block diagram of an example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure
- FIG. 71 is a diagram of an example of system aspects, evaluation aspects, evaluation rating metrics, and analysis system output options of an analysis system for generating an issue detection rating for a section of a system in accordance with the present disclosure
- FIG. 72 is a logic diagram of a further example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure
- FIG. 73 is a logic diagram of a further example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure
- FIG. 74 is a logic diagram of a further example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure
- FIG. 75 is a diagram of an example of issue detection data for use by an analysis system to generate an issue detection rating for a section of a system in accordance with the present disclosure
- FIG. 76 is a diagram of another example of issue detection data for use by an analysis system to generate an issue detection rating for a section of a system in accordance with the present disclosure
- FIG. 77 is a diagram of another example of issue detection data for use by an analysis system to generate an issue detection rating for a section of a system in accordance with the present disclosure
- FIG. 78 is a diagram of another example of issue detection data for use by an analysis system to generate an issue detection rating for a section of a system in accordance with the present disclosure
- FIG. 79 is a diagram of another example of issue detection data for use by an analysis system to generate an issue detection rating for a section of a system in accordance with the present disclosure
- FIG. 80 is a logic diagram of a further example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure
- FIG. 81 is a schematic block diagram of an embodiment of a data analysis module of an analysis system in accordance with the present disclosure.
- FIG. 82 is a schematic block diagram of another embodiment of a data analysis module of an analysis system in accordance with the present disclosure.
- FIG. 83 is a logic diagram of a further example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure
- FIG. 84 is a logic diagram of a further example of generating a process rating for understanding of system build for issue detection security functions of an organization in accordance with the present disclosure
- FIG. 85 is a logic diagram of a further example of generating a process rating for understanding of verifying issue detection security functions of an organization in accordance with the present disclosure
- FIG. 86 is a diagram of an example of issue detection data for use by an analysis system to generate an issue detection rating for a section of a system in accordance with the present disclosure
- FIG. 87 is a logic diagram of an example of generating a process rating by the analysis system in accordance with the present disclosure.
- FIG. 88 is a logic diagram of an example of generating a process rating by the analysis system in accordance with the present disclosure.
- FIG. 89 is a logic diagram of an example of generating a process rating by the analysis system in accordance with the present disclosure.
- FIG. 90 is a logic diagram of a further example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure
- FIG. 91 is a logic diagram of a further example of generating a policy rating for understanding of system build for issue detection security functions of an organization in accordance with the present disclosure
- FIG. 92 is a logic diagram of a further example of generating a policy rating for understanding of verifying issue detection security functions of an organization in accordance with the present disclosure
- FIG. 93 is a logic diagram of an example of generating a policy rating by the analysis system in accordance with the present disclosure.
- FIG. 94 is a logic diagram of an example of generating a policy rating by the analysis system in accordance with the present disclosure.
- FIG. 95 is a logic diagram of an example of generating a policy rating by the analysis system in accordance with the present disclosure.
- FIG. 96 is a logic diagram of a further example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure
- FIG. 97 is a logic diagram of a further example of generating a documentation rating for understanding of system build of issue detection security functions of an organization in accordance with the present disclosure
- FIG. 98 is a logic diagram of a further example of generating a documentation rating for understanding of system build of issue detection security functions of an organization in accordance with the present disclosure
- FIG. 99 is a logic diagram of an example of generating a documentation rating by the analysis system in accordance with the present disclosure.
- FIG. 100 is a logic diagram of an example of generating a documentation rating by the analysis system in accordance with the present disclosure
- FIG. 101 is a logic diagram of an example of generating a documentation rating by the analysis system in accordance with the present disclosure
- FIG. 102 is a logic diagram of a further example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure
- FIG. 103 is a logic diagram of a further example of generating an automation rating for understanding of system build of issue detection security functions of an organization in accordance with the present disclosure
- FIG. 104 is a logic diagram of a further example of generating an automation rating for understanding of system build of issue detection security functions of an organization in accordance with the present disclosure
- FIG. 105 is a logic diagram of an example of generating an automation rating by the analysis system in accordance with the present disclosure
- FIG. 106 is a logic diagram of an example of generating an automation rating by the analysis system in accordance with the present disclosure
- FIG. 107 is a logic diagram of an example of generating an automation rating by the analysis system in accordance with the present disclosure.
- FIG. 108 is a logic diagram of a further example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure
- FIG. 109 is a diagram of another specific example of system aspects, evaluation aspects, evaluation rating metrics, and analysis system output options of an analysis system for generating an issue detection rating for a section of a system in accordance with the present disclosure
- FIG. 110 is a diagram of another example of issue detection data for use by an analysis system to generate an issue detection rating for a section of a system in accordance with the present disclosure
- FIG. 111 is a diagram of another example of issue detection data for use by an analysis system to generate an issue detection rating for a section of a system in accordance with the present disclosure
- FIG. 112 is a diagram of another example of issue detection data for use by an analysis system to generate an issue detection rating for a section of a system in accordance with the present disclosure
- FIG. 113 is a diagram of an example of combining one or more individual issue detection ratings into an issue detection rating in accordance with the present disclosure
- FIG. 114 is a diagram of another example of combining one or more individual issue detection ratings into an issue detection rating in accordance with the present disclosure
- FIG. 115 is a diagram of another example of combining one or more individual issue detection ratings into an issue detection rating in accordance with the present disclosure.
- FIG. 116 is a diagram of another example of combining one or more individual issue detection ratings into an issue detection rating in accordance with the present disclosure
- FIG. 117 is a diagram of another example of combining one or more individual issue detection ratings into an issue detection rating in accordance with the present disclosure.
- FIG. 118 is a logic diagram of a further example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure
- FIG. 119 is a logic diagram of a further example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure
- FIG. 120 is a logic diagram of another example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure
- FIG. 121 is a logic diagram of a further example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure
- FIG. 122 is a logic diagram of a further example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure
- FIG. 123 is a logic diagram of a further example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure
- FIG. 124 is a logic diagram of a further example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure
- FIG. 125 is a logic diagram of a further example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure
- FIG. 126 is a logic diagram of a further example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure
- FIG. 127 is a logic diagram of a further example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure
- FIG. 128 is a logic diagram of a further example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure
- FIG. 129 is a logic diagram of a further example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure
- FIG. 130 is a logic diagram of another example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure
- FIG. 131 is a logic diagram of a further example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure
- FIG. 132 is a logic diagram of another example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure
- FIG. 133 is a logic diagram of a further example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure
- FIG. 134 is a logic diagram of another example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure.
- FIG. 135 is a logic diagram of another example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure.
- FIG. 1 is a schematic block diagram of an embodiment of a networked environment that includes one or more networks 14, external data feed sources 15, a plurality of systems 11-13, and an analysis system 10.
- The external data feed sources 15 include one or more system proficiency resources 22, one or more business associated computing devices 23, one or more non-business associated computing devices 24 (e.g., publicly available servers 27 and subscription based servers 28), one or more BOT (i.e., internet robot) computing devices 25, and one or more bad actor computing devices 26.
- The analysis system 10 includes one or more analysis computing entities 16, a plurality of analysis system modules 17 (one or more in each of the systems 11-13), and a plurality of storage systems 19-21 (e.g., system A private storage 19, system B private storage 20, through system x private storage 21, and other storage).
- Each of the systems 11-13 includes one or more network interfaces 18 and many more elements not shown in FIG. 1.
- A computing device may be implemented in a variety of ways. A few examples are shown in FIGS. 2A-2D.
- A computing entity may be implemented in a variety of ways. A few examples are shown in FIGS. 3A-3E.
- A storage system 19-21 may be implemented in a variety of ways.
- For example, each storage system is a standalone database; as another example, the storage systems are implemented in a common database.
- A database may be a centralized database, a distributed database, an operational database, a cloud database, an object-oriented database, and/or a relational database.
- A storage system 19-21 is coupled to the analysis system 10 using a secure data pipeline to limit and control access to the storage systems.
- The secure data pipeline may be implemented in a variety of ways.
- For example, the secure data pipeline is implemented on a private network of the analysis system and/or of a system under test.
- As another example, the secure data pipeline is implemented via the network 14 using access control, network controls, access and control policies, encryption, data loss prevention tools, and/or auditing tools.
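As a rough illustration of how such a pipeline could combine access control, integrity protection, and auditing, the following Python sketch uses only the standard library. The principal name, key handling, and function are hypothetical assumptions; a real deployment would also layer transport encryption (e.g., TLS) and data loss prevention tooling on top.

```python
# Minimal sketch; principal names, key material, and the API are illustrative assumptions.
import hmac
import hashlib
import logging

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("pipeline.audit")       # stand-in for an auditing tool

ACCESS_POLICY = {"analysis-computing-entity-16"}       # access and control policy
SHARED_KEY = b"example-key-material"                   # placeholder, not a real secret

def send_over_pipeline(principal: str, payload: bytes):
    """Release data from a private storage system to an authorized principal."""
    if principal not in ACCESS_POLICY:                 # access control check
        audit_log.warning("denied pipeline access for %s", principal)
        raise PermissionError(principal)
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()  # integrity tag
    audit_log.info("released %d bytes to %s", len(payload), principal)
    return payload, tag                                # receiver re-computes the HMAC to verify

data, tag = send_over_pipeline("analysis-computing-entity-16", b"system A log extract")
```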
- The one or more networks 14 include one or more wide area networks (WAN), one or more local area networks (LAN), one or more wireless LANs (WLAN), one or more cellular networks, one or more satellite networks, one or more virtual private networks (VPN), one or more campus area networks (CAN), one or more metropolitan area networks (MAN), one or more storage area networks (SAN), one or more enterprise private networks (EPN), and/or one or more other types of networks.
- A system proficiency resource 22 is a source of data regarding best-in-class practices (for system requirements, for system design, for system implementation, and/or for system operation), governmental and/or regulatory requirements, security risk awareness and/or risk remediation information, security risk avoidance, performance optimization information, system development guidelines, software development guidelines, hardware requirements, networking requirements, networking guidelines, and/or other system proficiency guidance.
- The "Framework for Improving Critical Infrastructure Cybersecurity", Version 1.1, Apr. 16, 2018, by the National Institute of Standards and Technology (NIST) is an example of a system proficiency resource in the form of a guideline for cybersecurity.
- A business associated computing device 23 is one that is operated by a business associate of the system owner. Typically, the business associated computing device 23 has access to at least a limited portion of the system to which the general public does not have access. For example, the business associated computing device 23 is operated by a vendor of the organization operating the system and is granted limited access for order placement and/or fulfillment. As another example, the business associated computing device 23 is operated by a customer of the organization operating the system and is granted limited access for placing orders.
- A non-business associated computing device 24 is a computing device operated by a person or entity that does not have a business relationship with the organization operating the system. Such non-business associated computing devices 24 are not granted special access to the system.
- For example, a non-business associated computing device 24 is a publicly available server 27 that a user computing device of the system may access.
- As another example, a non-business associated computing device 24 is a subscription-based server 28 that a user computing device of the system may access if it is authorized by a system administrator of the system to have a subscription and has a valid subscription.
- The non-business associated computing device 24 is a computing device operated by a person or business that does not have an affiliation with the organization operating the system.
- A bot (i.e., internet robot) computing device 25 is a computing device that runs, with little to no human interaction, to interact with a system and/or a computing device of a user via the internet or a network.
- A bad actor computing device 26 is a computing device operated by a person whose use of the computing device is for illegal and/or immoral purposes.
- The bad actor computing device 26 may employ a bot to execute an illegal and/or immoral purpose.
- The person may instruct the bad actor computing device to perform the illegal and/or immoral purpose, such as hacking, planting a worm, planting a virus, stealing data, uploading false data, and so on.
- The analysis system 10 is operable to evaluate a system 11-13, or portion thereof, in a variety of ways.
- For example, the analysis system 10 evaluates system A 11, or a portion thereof, by testing the organization's understanding of its system, or portion thereof; by testing the organization's implementation of its system, or portion thereof; and/or by testing the operation of the system, or portion thereof.
- The analysis system 10 tests the organization's understanding of its system requirements for the implementation and/or operation of its system, or portion thereof.
- The analysis system 10 tests the organization's understanding of its software maintenance policies and/or procedures.
- The analysis system 10 tests the organization's understanding of its cybersecurity policies and/or procedures.
- The analysis system 10 can evaluate a system 11-13, which may be a computer system, a computer network, an enterprise system, and/or another type of system that includes computing devices operating software.
- The analysis system 10 evaluates a system aspect (e.g., the system or a portion of it) based on an evaluation aspect (e.g., options for how the system, or portion thereof, can be evaluated) in view of evaluation rating metrics (e.g., how the system, or portion thereof, is evaluated) to produce an analysis system output (e.g., an evaluation rating, deficiency identification, and/or deficiency auto-correction).
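The three evaluation inputs and the output described above can be pictured as simple records. The following Python sketch is only an illustration; the class names and fields are assumptions, not the patent's data model.

```python
# Illustrative data model only; names and fields are assumed for this sketch.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SystemAspect:            # what is evaluated
    system_elements: List[str] # e.g., ["engineering department servers"]
    system_criteria: List[str] # e.g., ["system build"]
    system_modes: List[str]    # e.g., ["security functions"]

@dataclass
class EvaluationAspect:        # how it can be evaluated
    perspectives: List[str]    # e.g., ["understanding"]
    viewpoints: List[str]      # e.g., ["disclosed", "discovered"]
    categories: List[str]      # e.g., ["detect"]

@dataclass
class AnalysisSystemOutput:    # what the evaluation produces
    evaluation_rating: float
    deficiencies: List[str] = field(default_factory=list)
    auto_corrections: List[str] = field(default_factory=list)
```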
- The system aspect includes a selection of one or more system elements of the system, a selection of one or more system criteria, and/or a selection of one or more system modes.
- A system element of the system includes one or more system assets, which are one or more physical assets of the system and/or conceptual assets of the system.
- A physical asset is a computing entity, a computing device, a user software application, a system software application (e.g., an operating system, etc.), a software tool, a network software application, a security software application, a system monitoring software application, and the like.
- A conceptual asset is a hardware architectural layout, or portion thereof, and/or a software architectural layout, or portion thereof.
- A system element and/or system asset may be identified in a variety of ways. For example, it is identifiable by its use and/or location within the organization. As a specific example, a system element and/or system asset is identified by an organizational identifier, a division of the organization identifier, a department of a division identifier, a group of a department identifier, and/or a sub-group of a group identifier. In this manner, if the entire system is to be evaluated, the organization identifier is used to select all of the system elements in the system. If a portion of the system is to be tested based on business function, then a division, department, group, and/or sub-group identifier is used to select the desired portion of the system.
- As another example, a system element and/or system asset is identifiable based on a serial number, an IP (internet protocol) address, a vendor name, a type of system element and/or system asset (e.g., a computing entity, a particular user software application, etc.), a registered user of the system element and/or system asset, and/or another identifying metric.
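A hypothetical sketch of the two identification styles just described, i.e., selection by organizational identifiers versus selection by identifying metrics. The asset records and the select_assets helper are illustrative assumptions.

```python
# Sketch: selecting system elements by organizational identifiers or identifying metrics.
from typing import Dict, List

def select_assets(assets: List[Dict], **filters) -> List[Dict]:
    """Return assets whose fields match every supplied filter value."""
    return [a for a in assets if all(a.get(k) == v for k, v in filters.items())]

assets = [
    {"id": "SN-1001", "division": "engineering", "department": "R&D",
     "type": "server", "ip": "10.0.1.5", "vendor": "VendorA", "user": "alice"},
    {"id": "SN-2002", "division": "finance", "department": "accounting",
     "type": "laptop", "ip": "10.0.2.9", "vendor": "VendorB", "user": "bob"},
]

# An entire division selected by an organizational identifier ...
engineering = select_assets(assets, division="engineering")
# ... or a single asset selected by an identifying metric such as its IP address.
one_device = select_assets(assets, ip="10.0.2.9")
```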
- a system criteria is regarding a level of the system, or portion thereof, being evaluated.
- the system criteria includes guidelines, system requirements, system design, system build, and resulting system.
- the guidelines e.g., business objectives, security objectives, NIST cybersecurity guidelines, system objectives, governmental and/or regulatory requirements, third party requirements, etc.
- the system, or potion thereof can be evaluated from a guideline level, a system requirements level, a design level, a build level, and/or a resulting system level.
- a system mode concerns a different type of level of the system, or portion thereof, being evaluated.
- the system mode includes assets, system functions, and security functions.
- the system can be evaluated from an assets level, a system function level, and/or a security function level.
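- As an illustrative aid only (not part of the disclosed embodiments), the selection of a system aspect described above can be sketched as a small data structure; the class and field names below are hypothetical assumptions:

```python
# Hypothetical sketch of a "system aspect" selection: system elements (chosen by
# organizational identifiers), system criteria, and system modes.
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class SystemCriteria(Enum):
    GUIDELINES = "guidelines"
    REQUIREMENTS = "system requirements"
    DESIGN = "system design"
    BUILD = "system build"
    RESULTING_SYSTEM = "resulting system"


class SystemMode(Enum):
    ASSETS = "assets"
    SYSTEM_FUNCTIONS = "system functions"
    SECURITY_FUNCTIONS = "security functions"


@dataclass
class SystemAspect:
    # Organizational identifiers select the portion of the system to evaluate
    # (organization, division, department, group, and/or sub-group).
    element_ids: List[str] = field(default_factory=list)
    criteria: List[SystemCriteria] = field(default_factory=list)
    modes: List[SystemMode] = field(default_factory=list)


# Example: evaluate the assets built for the engineering department at the build level.
aspect = SystemAspect(
    element_ids=["org/division-2/engineering"],
    criteria=[SystemCriteria.BUILD],
    modes=[SystemMode.ASSETS],
)
print(aspect)
```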
- the evaluation aspect (e.g., options for how the system, or portion thereof, can be evaluated) includes a selection of one or more evaluation perspectives, a selection of one or more evaluation viewpoints, and/or a selection of one or more evaluation categories, each of which may further include sub-categories, and sub-categories of the sub-categories.
- An evaluation perspective is understanding of the system, or portion thereof; implementation (e.g., design and build) of the system, or portion thereof; operational performance of the system, or portion thereof; or self-analysis of the system, or portion thereof.
- An evaluation viewpoint is disclosed information from the system, discovered information about the system by the analysis system, or desired information about the system obtained by the analysis system from system proficiency resources.
- the evaluation viewpoint complements the evaluation perspective to allow for more in-depth and/or detailed evaluations.
- the analysis system 10 can evaluate how well the system is understood by comparing disclosed data with discovered data.
- the analysis system 10 can evaluate how well the system is actually implemented in comparison to a desired level of implementation.
- the evaluation category includes an identify category, a protect category, a detect category, a respond category, and a recover category.
- Each evaluation category includes a plurality of sub-categories, and at least some of the sub-categories include their own sub-categories (e.g., sub-sub-categories).
- the identify category includes the sub-categories of asset management, business environment, governance, risk assessment, risk management, access control, awareness & training, and data security.
- asset management includes the sub-categories of hardware inventory, software inventory, data flow maps, external system cataloged, resource prioritization, and security roles.
- the analysis system 10 can evaluate the system, or portion thereof, in light of one or more evaluation categories, in light of an evaluation category and one or more sub-categories, or in light of an evaluation category, a sub-category, and one or more sub-sub-categories.
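- As a minimal sketch of the category/sub-category/sub-sub-category selection described above, assuming a simple nested-dictionary representation (the structure and the `select` helper are hypothetical, and only the identify/asset-management branch is filled in from the examples given):

```python
# Hypothetical sketch of the evaluation-category hierarchy
# (category -> sub-category -> sub-sub-category).
EVALUATION_CATEGORIES = {
    "identify": {
        "asset management": [
            "hardware inventory",
            "software inventory",
            "data flow maps",
            "external system cataloged",
            "resource prioritization",
            "security roles",
        ],
        "business environment": [],
        "governance": [],
        "risk assessment": [],
        "risk management": [],
        "access control": [],
        "awareness & training": [],
        "data security": [],
    },
    "protect": {},
    "detect": {},
    "respond": {},
    "recover": {},
}


def select(category, sub_category=None, sub_sub_category=None):
    """Return the portion of the hierarchy selected for an evaluation."""
    node = EVALUATION_CATEGORIES[category]
    if sub_category is None:
        return node
    node = node[sub_category]
    if sub_sub_category is None:
        return node
    return [s for s in node if s == sub_sub_category]


# Example: evaluate the identify category, asset management sub-category,
# hardware inventory sub-sub-category.
print(select("identify", "asset management", "hardware inventory"))
```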
- the evaluation rating metrics includes a selection of process, policy, procedure, certification, documentation, and/or automation. This allows the analysis system to quantify its evaluation. For example, the analysis system 10 can evaluate the processes a system, or portion thereof, has to generate an evaluation rating, to identify deficiencies, and/or to auto-correct deficiencies. As another example, the analysis system 10 can evaluate how well the system, or portion thereof, uses the process it has to generate an evaluation rating, to identify deficiencies, and/or to auto-correct deficiencies.
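- As a hedged sketch of how the rating metrics named above might be quantified, assuming per-metric scores on a 0-100 scale and equal weighting (both assumptions, not taken from the disclosure):

```python
# Hypothetical sketch of quantifying an evaluation with the rating metrics
# (process, policy, procedure, certification, documentation, automation).
RATING_METRICS = (
    "process",
    "policy",
    "procedure",
    "certification",
    "documentation",
    "automation",
)


def evaluation_rating(scores: dict, selected=RATING_METRICS) -> float:
    """Average the per-metric scores for the selected rating metrics."""
    picked = [scores[m] for m in selected if m in scores]
    return sum(picked) / len(picked) if picked else 0.0


# Example: the system has strong processes but thin documentation.
scores = {"process": 90, "policy": 75, "documentation": 40, "automation": 60}
print(evaluation_rating(scores, selected=("process", "documentation")))  # 65.0
```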
- the analysis computing entity 16 (which includes one or more computing entities) sends a data gathering request to the analysis system module 17 .
- the data gathering request is specific to the evaluation to be performed by the analysis system 10 . For example, if the analysis system 10 is evaluating the understanding of the policies, processes, documentation, and automation regarding the assets built for the engineering department, then the data gathering request would be specific to policies, processes, documentation, and automation regarding the assets built for the engineering department.
- the analysis system module 17 is loaded on the system 11 - 13 and obtains the requested data from the system.
- the obtaining of the data can be done in a variety of ways.
- the data is disclosed by one or more system administrators.
- the disclosed data corresponds to the information the system administrator(s) has regarding the system.
- the disclosed data is a reflection of the knowledge the system administrator(s) has regarding the system.
- the analysis system module 17 communicates with physical assets of the system to discover the data.
- the communication may be direct with an asset.
- the analysis system module 17 sends a request to a particular computing device.
- the communication may be through one or more discovery tools of the system.
- the analysis system module 17 communicates with one or more tools of the system to obtain data regarding data segregation & boundary, infrastructure management, exploit & malware protection, encryption, identity & access management, system monitoring, vulnerability management, and/or data protection.
- a tool is a network monitoring tool, a network strategy and planning tool, a network managing tool, a Simple Network Management Protocol (SNMP) tool, a telephony monitoring tool, a firewall monitoring tool, a bandwidth monitoring tool, an IT asset inventory management tool, a network discovery tool, a network asset discovery tool, a software discovery tool, a security discovery tool, an infrastructure discovery tool, Security Information & Event Management (SIEM) tool, a data crawler tool, and/or other type of tool to assist in discovery of assets, functions, security issues, implementation of the system, and/or operation of the system.
- the analysis system module 17 provides the gathered data to the analysis computing entity 16 , which stores the gathered data in a private storage 19 - 21 and processes it.
- the gathered data is processed alone, in combination with stored data (of the system being evaluated and/or another system's data), in combination with desired data (e.g., system proficiencies), in combination with analysis modeling (e.g., risk modeling, data flow modeling, security modeling, etc.), and/or in combination with stored analytic data (e.g., results of other evaluations).
- the analysis computing entity 16 produces an evaluation rating, identifies deficiencies, and/or auto-corrects deficiencies.
- the evaluation results are stored in a private storage and/or in another database.
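- As an illustrative sketch of one way the gathered data could be processed, assuming disclosed and discovered data are reduced to simple sets of asset identifiers (the function name, the 0-100 scale, and the deficiency labels are hypothetical):

```python
# Hypothetical sketch of processing gathered data: comparing disclosed data (what
# administrators believe is in the system) with discovered data (what the data
# extraction found) to produce an understanding rating and a deficiency list.
def understanding_evaluation(disclosed: set, discovered: set):
    """Return a 0-100 understanding rating plus the deficiencies found."""
    union = disclosed | discovered
    matched = disclosed & discovered
    rating = 100.0 * len(matched) / len(union) if union else 100.0
    deficiencies = {
        # assets that exist but administrators did not disclose (unknown to them)
        "undisclosed assets": sorted(discovered - disclosed),
        # assets administrators believe exist but discovery did not find
        "unaccounted-for assets": sorted(disclosed - discovered),
    }
    return rating, deficiencies


disclosed = {"server-01", "router-03", "db-02"}
discovered = {"server-01", "router-03", "printer-07"}
rating, deficiencies = understanding_evaluation(disclosed, discovered)
print(round(rating, 1), deficiencies)
```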
- the analysis system 10 is operable to evaluate a system and/or its eco-system at any level of granularity from the entire system to an individual asset over a wide spectrum of evaluation options.
- the evaluation is to test understanding of the system, to test the implementation of the system, and/or to test the operation of the system.
- the evaluation is to test the system's self-evaluation capabilities with respect to understanding, implementation, and/or operation.
- the evaluation is to test policies regarding software tools; to test which software tools are prescribed by policy; to test which software tools are prohibited by policy; to test the use of the software tools in accordance with policy; to test maintenance of software tools in accordance with policy; to test the sufficiency of the policies; to test the effectiveness of the policies; and/or to test compliance with the policies.
- the analysis system 10 takes an outside perspective to analyze the system. From within the system, it is often difficult to test the entire system, to test different combinations of system elements, to identify areas of vulnerability (assets and human operators), to identify areas of strength (assets and human operators), and to be proactive. Further, such evaluations are additional tasks the system has to perform, which means they consume resources (human, physical assets, and financial). Further, since system analysis is not the primary function of a system (supporting the organization is the system's primary purpose), the system analysis is not as thoroughly developed, implemented, and/or executed as is possible when it is implemented in a stand-alone analysis system, like system 10.
- the primary purpose of the analysis system is to analyze other systems to determine an evaluation rating, to identify deficiencies in the system, and, where it can, auto-correct the deficiencies.
- the evaluation rating can be regarding how well the system, or portion thereof, is understood, how well it is implemented, and/or how well it operates.
- the evaluation rating can be regarding how effective the system, or portion thereof, is believed (disclosed data) to support a business function; actually (discovered data) supports a business function; and/or should (desired data) support the business function.
- the evaluation rating can be regarding how effective the system, or portion thereof, is believed (disclosed data) to mitigate security risks; actually (discovered data) supports mitigating security risks; and/or should (desired data) support mitigating security risks.
- the evaluation rating can be regarding how effective the system, or portion thereof, is believed (disclosed data) to respond to security risks; actually (discovered data) supports responding to security risks; and/or should (desired data) support responding security risks.
- the evaluation rating can be regarding how effective the system, or portion thereof, is believed (disclosed data) to be used by people; is actually (discovered data) used by people; and/or should (desired data) be used by people.
- the evaluation rating can be regarding how effective the system, or portion thereof, is believed (disclosed data) to identify assets of the system; actually (discovered data) identifies assets of the system; and/or should (desired data) identify assets of the system.
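- As a rough sketch of the believed/actual/desired comparison described above, assuming each viewpoint is reduced to a set of security controls and rated against the desired set obtained from system proficiency resources (the names and scoring are assumptions):

```python
# Hypothetical sketch of producing three related ratings for a single topic
# (e.g., security-risk mitigation): believed (disclosed data), actual
# (discovered data), and desired (data from system proficiency resources).
def viewpoint_ratings(disclosed_controls, discovered_controls, desired_controls):
    desired = set(desired_controls)
    believed = 100 * len(set(disclosed_controls) & desired) / len(desired)
    actual = 100 * len(set(discovered_controls) & desired) / len(desired)
    # The gap between believed and actual flags an understanding deficiency;
    # the gap between actual and 100 flags an implementation/operation deficiency.
    return {"believed": believed, "actual": actual, "desired": 100.0}


print(viewpoint_ratings(
    disclosed_controls={"mfa", "disk encryption", "edr"},
    discovered_controls={"mfa", "edr"},
    desired_controls={"mfa", "disk encryption", "edr", "siem"},
))
```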
- the analysis system 10 can evaluate a system 11 - 13 .
- a primary purpose of the analysis system 10 is to help the system 11 - 13 become more self-healing, more self-updating, more self-protecting, more self-recovering, more self-evaluating, more self-aware, more secure, more efficient, more adaptive, and/or more self-responding.
- the analysis system 10 significantly improves the usefulness, security, and efficiency of systems 11 - 13 .
- FIG. 2A is a schematic block diagram of an embodiment of a computing device 40 that includes a plurality of computing resources.
- the computing resources include a core control module 41, one or more processing modules 43, one or more main memories 45, a read only memory (ROM) 44 for a boot up sequence, cache memory 47, a video graphics processing module 42, a display 48 (optional), an Input-Output (I/O) peripheral control module 46, an I/O interface module 49 (which could be omitted), one or more input interface modules 50, one or more output interface modules 51, one or more network interface modules 55, and one or more memory interface modules 54.
- a processing module 43 is described in greater detail at the end of the detailed description of the invention section and, in an alternative embodiment, has a direct connection to the main memory 45.
- the core control module 41 and the I/O and/or peripheral control module 46 are one module, such as a chipset, a quick path interconnect (QPI), and/or an ultra-path interconnect (UPI).
- Each of the main memories 45 includes one or more Random Access Memory (RAM) integrated circuits, or chips.
- a main memory 45 includes four DDR4 (4th generation double data rate) RAM chips, each running at a rate of 2,400 MHz.
- the main memory 45 stores data and operational instructions most relevant for the processing module 43 .
- the core control module 41 coordinates the transfer of data and/or operational instructions between the main memory 45 and the memory 56 - 57 .
- the data and/or operational instructions retrieved from memory 56 - 57 are those requested by the processing module or those most likely to be needed by the processing module.
- the core control module 41 coordinates sending updated data to the memory 56 - 57 for storage.
- the memory 56 - 57 includes one or more hard drives, one or more solid state memory chips, and/or one or more other large capacity storage devices that, in comparison to cache memory and main memory devices, is/are relatively inexpensive with respect to cost per amount of data stored.
- the memory 56 - 57 is coupled to the core control module 41 via the I/O and/or peripheral control module 46 and via one or more memory interface modules 54 .
- the I/O and/or peripheral control module 46 includes one or more Peripheral Component Interconnect (PCI) buses via which peripheral components connect to the core control module 41.
- a memory interface module 54 includes a software driver and a hardware connector for coupling a memory device to the I/O and/or peripheral control module 46 .
- a memory interface 54 is in accordance with a Serial Advanced Technology Attachment (SATA) port.
- the core control module 41 coordinates data communications between the processing module(s) 43 and the network(s) 14 via the I/O and/or peripheral control module 46 , the network interface module(s) 55 , and a network card 58 or 59 .
- a network card 58 or 59 includes a wireless communication unit or a wired communication unit.
- a wireless communication unit includes a wireless local area network (WLAN) communication device, a cellular communication device, a Bluetooth device, and/or a ZigBee communication device.
- a wired communication unit includes a Gigabit LAN connection, a Firewire connection, and/or a proprietary computer wired connection.
- a network interface module 55 includes a software driver and a hardware connector for coupling the network card to the I/O and/or peripheral control module 46 .
- the network interface module 55 is in accordance with one or more versions of IEEE 802.11, cellular telephone protocols, 10/100/1000 Gigabit LAN protocols, etc.
- the core control module 41 coordinates data communications between the processing module(s) 43 and input device(s) 52 via the input interface module(s) 50 , the I/O interface 49 , and the I/O and/or peripheral control module 46 .
- An input device 52 includes a keypad, a keyboard, control switches, a touchpad, a microphone, a camera, etc.
- An input interface module 50 includes a software driver and a hardware connector for coupling an input device to the I/O and/or peripheral control module 46 .
- an input interface module 50 is in accordance with one or more Universal Serial Bus (USB) protocols.
- the core control module 41 coordinates data communications between the processing module(s) 43 and output device(s) 53 via the output interface module(s) 51 and the I/O and/or peripheral control module 46 .
- An output device 53 includes a speaker, auxiliary memory, headphones, etc.
- An output interface module 51 includes a software driver and a hardware connector for coupling an output device to the I/O and/or peripheral control module 46 .
- an output interface module 51 is in accordance with one or more audio codec protocols.
- the processing module 43 communicates directly with a video graphics processing module 42 to display data on the display 48 .
- the display 48 includes an LED (light emitting diode) display, an LCD (liquid crystal display), and/or other type of display technology.
- the display has a resolution, an aspect ratio, and other features that affect the quality of the display.
- the video graphics processing module 42 receives data from the processing module 43 , processes the data to produce rendered data in accordance with the characteristics of the display, and provides the rendered data to the display 48 .
- FIG. 2B is a schematic block diagram of an embodiment of a computing device 40 that includes a plurality of computing resources similar to the computing resources of FIG. 2A with the addition of one or more cloud memory interface modules 60 , one or more cloud processing interface modules 61 , cloud memory 62 , and one or more cloud processing modules 63 .
- the cloud memory 62 includes one or more tiers of memory (e.g., ROM, volatile (RAM, main, etc.), non-volatile (hard drive, solid-state, etc.), and/or backup (hard drive, tape, etc.)) that is remote from the core control module and is accessed via a network (WAN and/or LAN).
- the cloud processing module 63 is similar to processing module 43 but is remote from the core control module and is accessed via a network.
- FIG. 2C is a schematic block diagram of an embodiment of a computing device 40 that includes a plurality of computing resources similar to the computing resources of FIG. 2B with a change in how the cloud memory interface module(s) 60 and the cloud processing interface module(s) 61 are coupled to the core control module 41 .
- the interface modules 60 and 61 are coupled to a cloud peripheral control module 63 that directly couples to the core control module 41 .
- FIG. 2D is a schematic block diagram of an embodiment of a computing device 40 that includes a plurality of computing resources, which include a core control module 41, a boot up processing module 66, boot up RAM 67, a read only memory (ROM) 44, a video graphics processing module 42, a display 48 (optional), an Input-Output (I/O) peripheral control module 46, one or more input interface modules 50, one or more output interface modules 51, one or more cloud memory interface modules 60, one or more cloud processing interface modules 61, cloud memory 62, and cloud processing module(s) 63.
- the computing device 40 includes enough processing resources (e.g., module 66 , ROM 44 , and RAM 67 ) to boot up.
- the cloud memory 62 and the cloud processing module(s) 63 function as the computing device's memory (e.g., main and hard drive) and processing module.
- FIG. 3A is schematic block diagram of an embodiment of a computing entity 16 that includes a computing device 40 (e.g., one of the embodiments of FIGS. 2A-2D ).
- a computing device may function as a user computing device, a server, a system computing device, a data storage device, a data security device, a networking device, a user access device, a cell phone, a tablet, a laptop, a printer, a game console, a satellite control box, a cable box, etc.
- FIG. 3B is schematic block diagram of an embodiment of a computing entity 16 that includes two or more computing devices 40 (e.g., two or more from any combination of the embodiments of FIGS. 2A-2D ).
- the computing devices 40 perform the functions of a computing entity in a peer processing manner (e.g., coordinate together to perform the functions), in a master-slave manner (e.g., one computing device coordinates and the others support it), and/or in another manner.
- FIG. 3C is schematic block diagram of an embodiment of a computing entity 16 that includes a network of computing devices 40 (e.g., two or more from any combination of the embodiments of FIGS. 2A-2D ).
- the computing devices are coupled together via one or more network connections (e.g., WAN, LAN, cellular data, WLAN, etc.) and perform the functions of the computing entity.
- FIG. 3D is schematic block diagram of an embodiment of a computing entity 16 that includes a primary computing device (e.g., any one of the computing devices of FIGS. 2A-2D ), an interface device (e.g., a network connection), and a network of computing devices 40 (e.g., one or more from any combination of the embodiments of FIGS. 2A-2D ).
- the primary computing device utilizes the other computing devices as co-processors to execute one or more of the functions of the computing entity, as storage for data, for other data processing functions, and/or for storage purposes.
- FIG. 3E is schematic block diagram of an embodiment of a computing entity 16 that includes a primary computing device (e.g., any one of the computing devices of FIGS. 2A-2D ), an interface device (e.g., a network connection) 70 , and a network of computing resources 71 (e.g., two or more resources from any combination of the embodiments of FIGS. 2A-2D ).
- the primary computing device utilizes the computing resources as co-processors to execute one or more of the functions of the computing entity, as storage for data, for other data processing functions, and/or for storage purposes.
- FIG. 4 is a schematic block diagram of another embodiment of a networked environment that includes a system 11 (or system 12 or system 13 ), the analysis system 10 , one or more networks, one or more system proficiency resources 22 , one or more business associated computing devices 23 , one or more non-business associated computing devices 24 (e.g., publicly available servers 27 and subscription based servers 28 ), one or more BOT computing devices 25 , and one or more bad actor computing devices 26 .
- This diagram is similar to FIG. 1 with the inclusion of detail within the system proficiency resource(s) 22 , with inclusion of detail within the system 11 , and with the inclusion of detail within the analysis system module 17 .
- a system proficiency resource 22 is a computing device that provides information regarding best-in-class assets, best-in-class practices, known protocols, leading edge information, and/or established guidelines regarding risk assessment, devices, software, networking, data security, cybersecurity, and/or data communication.
- a system proficiency resource 22 is a computing device that may also provide information regarding standards, information regarding compliance requirements, information regarding legal requirements, and/or information regarding regulatory requirements.
- the system 11 is shown to include three inter-dependent modes: system functions 82 , security functions 83 , and system assets 84 .
- System functions 82 correspond to the functions the system executes to support the organization's business requirements.
- Security functions 83 correspond to the functions the system executes to support the organization's security requirements.
- the system assets 84 are the hardware and/or software platforms that support system functions 82 and/or the security functions 83 .
- the analysis system module 17 includes one or more data extraction modules 80 and one or more system user interface modules 81 .
- a data extraction module 80 which will be described in greater detail with reference to one or more subsequent figures, gathers data from the system for analysis by the analysis system 10 .
- a system user interface module 81 provides a user interface between the system 11 and the analysis system 10 and functions to provide user information to the analysis system 10 and to receive output data from the analysis system. The system user interface module 81 will be described in greater detail with reference to one or more subsequent figures.
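- As an illustrative sketch of the division of labor between the data extraction module 80 and the system user interface module 81, assuming a request/response exchange of plain dictionaries (the classes, methods, and sample inventory are hypothetical):

```python
# Hypothetical sketch: a data extraction module gathers requested data from the
# system, while a system user interface module passes user-provided (disclosed)
# information and returns analysis output. Not the patented implementation.
class DataExtractionModule:
    def __init__(self, system_inventory):
        self.system_inventory = system_inventory  # stand-in for live discovery

    def gather(self, request: dict) -> dict:
        """Return only the data the analysis system asked for."""
        wanted = request.get("topics", [])
        return {t: self.system_inventory.get(t, []) for t in wanted}


class SystemUserInterfaceModule:
    def __init__(self):
        self.disclosed = {}

    def submit_disclosure(self, topic, items):
        self.disclosed[topic] = items

    def receive_output(self, output):
        print("analysis output:", output)


dem = DataExtractionModule({"software inventory": ["crm", "erp", "ide"]})
suim = SystemUserInterfaceModule()
suim.submit_disclosure("software inventory", ["crm", "erp"])
gathered = dem.gather({"topics": ["software inventory"]})
suim.receive_output({"discovered": gathered, "disclosed": suim.disclosed})
```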
- FIG. 5 is a schematic block diagram of another embodiment of a networked environment that includes a system 11 (or system 12 or system 13 ), the analysis system 10 , one or more networks, one or more system proficiency resources 22 , one or more business associated computing devices 23 , one or more non-business associated computing devices 24 (e.g., publicly available servers 27 and subscription based servers 28 ), one or more BOT computing devices 25 , and one or more bad actor computing devices 26 .
- This diagram is similar to FIG. 4 with the inclusion of additional detail within the system 11 .
- the system 11 includes a plurality of sets of system assets to support the system functions 82 and/or the security functions 83 .
- a set of system assets supports the system functions 82 and/or security functions 83 for a particular business segment (e.g., a department within the organization).
- a second set of system assets supports the security functions 83 for a different business segment and a third set of system assets supports the system functions 82 for the different business segment.
- FIG. 6 is a schematic block diagram of another embodiment of a networked environment that includes a system 11 (or system 12 or system 13 ), the analysis system 10 , one or more networks, one or more system proficiency resources 22 , one or more business associated computing devices 23 , one or more non-business associated computing devices 24 (e.g., publicly available servers 27 and subscription based servers 28 ), one or more BOT computing devices 25 , and one or more bad actor computing devices 26 .
- This diagram is similar to FIG. 5 with the inclusion of additional detail within the system 11 .
- the system 11 includes a plurality of sets of system assets 84 , system functions 82 , and security functions 83 .
- a set of system assets 84 , system functions 82 , and security functions 83 supports one department in an organization and a second set of system assets 84 , system functions 82 , and security functions 83 supports another department in the organization.
- FIG. 7 is a schematic block diagram of another embodiment of a networked environment that includes a system 11 (or system 12 or system 13 ), the analysis system 10 , one or more networks, one or more system proficiency resources 22 , one or more business associated computing devices 23 , one or more non-business associated computing devices 24 (e.g., publicly available servers 27 and subscription based servers 28 ), one or more BOT computing devices 25 , and one or more bad actor computing devices 26 .
- This diagram is similar to FIG. 4 with the inclusion of additional detail within the system 11 .
- the system 11 includes system assets 84 , system functions 82 , security functions 83 , and self-evaluation functions 85 .
- the self-evaluation functions 85 are supported by the system assets 84 and are used by the system to evaluate its assets, its system functions, and its security functions.
- self-evaluation looks at the system's ability to analyze itself: self-determining its understanding of the system (self-awareness), self-determining the implementation of the system, and/or self-determining the operation of the system.
- the self-evaluation may further consider the system's ability to self-heal, self-update, self-protect, self-recover, self-evaluate, and/or self-respond.
- the analysis system 10 can evaluate the understanding, implementation, and/or operation of the self-evaluation functions.
- FIG. 8 is a schematic block diagram of another embodiment of a networked environment having a system 11 (or system 12 or system 13 ), the analysis system 10 , one or more networks represented by networking infrastructure, one or more system proficiency resources 22 , one or more business associated computing devices 23 , one or more publicly available servers 27 , one or more subscription based servers 28 , one or more BOT computing devices 25 , and one or more bad actor computing devices 26 .
- the system 11 is shown to include a plurality of physical assets dispersed throughout a geographic region (e.g., a building, a town, a county, a state, a country).
- Each of the physical assets includes hardware and software to perform its respective functions within the system.
- a physical asset is a computing entity (CE), a public or private networking device (ND), a user access device (UAD), or a business associate access device (BAAD).
- a computing entity may be a user device, a system admin device, a server, a printer, a data storage device, etc.
- a network device may be a local area network device, a network card, a wide area network device, etc.
- a user access device is a portal that allows authorized users of the system to remotely access the system.
- a business associate access device is a portal that allows authorized business associates of the system to access the system.
- Some of the computing entities are grouped via a common connection to a network device, which provides the group of computing entities access to other parts of the system and/or the internet.
- the highlighted computing entity may access a publicly available server 27 via network devices coupled to the network infrastructure.
- the analysis system 10 can evaluate whether this is an appropriate access, the understanding of this access, the implementation to enable this access, and/or the operation of the system to support this access.
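- As a minimal sketch of such an appropriateness check, assuming access policy is disclosed as a mapping from computing-entity groups to permitted destinations (the policy format and example values are assumptions):

```python
# Hypothetical sketch of checking whether an access (here, a computing entity
# reaching a publicly available server) is appropriate under disclosed policy.
ACCESS_POLICY = {
    # computing-entity group -> destinations allowed by policy
    "engineering-ce": {"internal", "public-internet"},
    "finance-ce": {"internal"},
}


def access_appropriate(ce_group: str, destination: str) -> bool:
    return destination in ACCESS_POLICY.get(ce_group, set())


# The highlighted computing entity reaching a publicly available server:
print(access_appropriate("engineering-ce", "public-internet"))  # True
print(access_appropriate("finance-ce", "public-internet"))      # False -> flag as deficiency
```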
- FIG. 9 is a schematic block diagram of an example of a system section of a system selected for evaluation similar to FIG. 8 .
- the analysis system 10 only evaluates assets, system functions, and/or security functions related to assets within the system section under test 91 .
- FIG. 10 is a schematic block diagram of another example of a system section of a system selected for evaluation similar to FIG. 9 .
- in this example, the system section selected for evaluation is a single computing entity (CE).
- the analysis system 10 only evaluates assets, system functions, and/or security functions related to the selected computing entity.
- FIG. 11 is a schematic block diagram of an embodiment of a networked environment having a system 11 (or system 12 or system 13 ), the analysis system 10 , one or more networks 14 , one or more system proficiency resources 22 , one or more business associated computing devices 23 , one or more publicly available servers 27 , one or more subscription based servers 28 , one or more BOT computing devices 25 , and one or more bad actor computing devices 26 .
- the system 11 is shown to include a plurality of system assets (SA).
- a system asset (SA) may include one or more system sub assets (S2A), and a system sub asset (S2A) may include one or more system sub-sub assets (S3A).
- the system 11 also includes a data extraction module (DEM) 80 and a system user interface module (SUIM) 81.
- a system element includes one or more system assets.
- a system asset (SA) may be a physical asset or a conceptual asset as previously described.
- a system element includes a system asset of a computing device.
- the computing device, which is the SA, includes user applications and an operating system, each of which is a sub asset (S2A) of the computing device.
- the computing device includes a network card, memory devices, etc., which are sub assets (S2A) of the computing device.
- Documents created by a word processing user application are sub assets of the word processing user application and sub-sub assets (S3A) of the computing device.
- the system asset includes a plurality of computing devices, printers, servers, etc. of a department of the organization operating the system 11 .
- a computing device is a sub asset of the system asset and the software and hardware of the computing devices are sub-sub assets.
- the analysis system 10 may evaluate understanding, implementation, and/or operation of one or more system assets, one or more system sub assets, and/or one or more system sub-sub assets, as an asset, as it supports system functions 82 , and/or as it supports security functions.
- the evaluation may be to produce an evaluation rating, to identify deficiencies, and/or to auto-correct deficiencies.
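- As an illustrative sketch of the SA/S2A/S3A hierarchy described above, assuming a simple tree of named assets (the `Asset` class and `flatten` helper are hypothetical):

```python
# Hypothetical sketch of the asset hierarchy: a system asset (SA) containing sub
# assets (S2A) which contain sub-sub assets (S3A). The evaluation scope can then
# be a whole subtree or a single node.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Asset:
    name: str
    children: List["Asset"] = field(default_factory=list)

    def flatten(self):
        """Yield this asset and every asset nested under it."""
        yield self
        for child in self.children:
            yield from child.flatten()


word_processor = Asset("word processing application",
                       [Asset("document-123")])  # document is an S3A of the device
computing_device = Asset("computing device",
                         [word_processor, Asset("operating system"), Asset("network card")])

# Evaluate the whole SA subtree, or just one S2A:
print([a.name for a in computing_device.flatten()])
print([a.name for a in word_processor.flatten()])
```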
- FIG. 12 is a schematic block diagram of an embodiment of a system 11 that includes a plurality of physical assets 100 coupled to the analysis system 10.
- the physical assets 100 include an analysis interface device 101 , one or more networking devices 102 , one or more security devices 103 , one or more system admin devices 104 , one or more user devices 105 , one or more storage devices 106 , and/or one or more servers 107 .
- Each device may be a computing entity that includes hardware (HW) components and software (SW) applications (user, device, drivers, and/or system).
- a device may further include a data extraction module (DEM).
- the analysis interface device 101 includes a data extraction module (DEM) 80 and the system user interface module 81 to provide connectivity to the analysis system 10 .
- the analysis system 10 is able to evaluate understanding, implementation, and/or operation of each device, or portion thereof, as an asset, as it supports system functions 82 , and/or as it supports security functions.
- the analysis system 10 evaluates the understanding of networking devices 102 as an asset.
- the analysis system 10 evaluates how well the networking devices 102, their hardware, and their software are understood within the system and/or by the system administrators.
- the evaluation includes how well the networking devices 102, their hardware, and their software are documented; how well they are implemented based on system requirements; how well they operate based on design and/or system requirements; how well they are maintained per system policies and/or procedures; how well their deficiencies are identified; and/or how well their deficiencies are auto-corrected.
- FIG. 13 is a schematic block diagram of another embodiment of a networked environment having a system 11 that includes a plurality of system assets coupled to an analysis system 10 .
- This embodiment is similar to the embodiment of FIG. 11 with the addition of further data extraction modules (DEM) 80.
- each system asset (SA) is affiliated with its own DEM 80 .
- FIG. 14 is a schematic block diagram of another embodiment of a system 11 that includes a plurality of physical assets 100 coupled to the analysis system 10.
- the physical assets 100 include one or more networking devices 102 , one or more security devices 103 , one or more system admin devices 104 , one or more user devices 105 , one or more storage devices 106 , and/or one or more servers 107 .
- Each device may be a computing entity that includes hardware (HW) components and software (SW) applications (user, system, and/or device).
- the system admin device 104 includes one or more analysis system modules 17 , which includes a data extraction module (DEM) 80 and the system user interface module 81 to provide connectivity to the analysis system 10 .
- the analysis system 10 is able to evaluate understanding, implementation, and/or operation of each device, or portion thereof, as an asset, as it supports system functions 82 , and/or as it supports security functions.
- the analysis system 10 evaluates the implementation of networking devices 102 to support system functions.
- the analysis system 10 evaluates how well the networking devices 102, their hardware, and their software are implemented within the system to support one or more system functions (e.g., managing network traffic, controlling network access per business guidelines, policies, and/or processes, etc.).
- the evaluation includes how well the implementation of the networking devices 102, their hardware, and their software is documented to support the one or more system functions; how well their implementation supports the one or more system functions; how well their implementation of the one or more system functions has been verified in accordance with policies, processes, etc.; how well they are updated per system policies and/or procedures; how well their deficiencies in supporting the one or more system functions are identified; and/or how well their deficiencies in supporting the one or more system functions are auto-corrected.
- FIG. 15 is a schematic block diagram of another embodiment of a system 11 that includes a plurality of physical assets 100 coupled to the analysis system 10.
- the physical assets 100 include an analysis interface device 101 , one or more networking devices 102 , one or more security devices 103 , one or more system admin devices 104 , one or more user devices 105 , one or more storage devices 106 , and/or one or more servers 107 .
- Each device may be a computing entity that includes hardware (HW) components and software (SW) applications (user, device, drivers, and/or system).
- This embodiment is similar to the embodiment of FIG. 12 with a difference being that the devices 102 - 107 do not include a data extraction module (DEM) as is shown in FIG. 12 .
- FIG. 16 is a schematic block diagram of another embodiment of a system 11 that includes networking devices 102 , security devices 103 , servers 107 , storage devices 106 , and user devices 105 .
- the system 11 is coupled to the network 14, which provides connectivity to the business associate computing device 23.
- the network 14 is shown to include one or more wide area networks (WAN) 162, one or more wireless LANs (WLAN) and/or LANs 164, and one or more virtual private networks 166.
- the networking devices 102 include one or more modems 120, one or more routers 121, one or more switches 122, one or more access points 124, and/or one or more local area network cards 124.
- the analysis system 10 can evaluate the network devices 102 collectively as assets, as they support system functions, and/or as they support security functions.
- the analysis system 10 may also evaluate each network device individually as an asset, as it supports system functions, and/or as it supports security functions.
- the analysis system may further evaluate one or more network devices as part of the physical assets of a system aspect (e.g., the system or a portion thereof being evaluated with respect to one or more system criteria and one or more system modes).
- the security devices 103 include one or more infrastructure management tools 125, one or more encryption software programs 126, one or more identity and access management tools 127, one or more data protection software programs 128, one or more system monitoring tools 129, one or more exploit and malware protection tools 130, one or more vulnerability management tools 131, and/or one or more data segmentation and boundary tools 132.
- a tool is a program that functions to develop, repair, and/or enhance other programs and/or hardware.
- the analysis system 10 can evaluate the security devices 103 collectively as assets, as they support system functions, and/or as they support security functions.
- the analysis system 10 may also evaluate each security device individually as an asset, as it supports system functions, and/or as it supports security functions.
- the analysis system may further evaluate one or more security devices as part of the physical assets of a system aspect (e.g., the system or a portion thereof being evaluated with respect to one or more system criteria and one or more system modes).
- the servers 107 include one or more telephony servers 133 , one or more ecommerce servers 134 , one or more email servers 135 , one or more web servers 136 , and/or one or more content servers 137 .
- the analysis system 10 can evaluate the servers 107 collectively as assets, as they support system functions, and/or as they support security functions.
- the analysis system 10 may also evaluate each server individually as an asset, as it supports system functions, and/or as it supports security functions.
- the analysis system may further evaluate one or more servers as part of the physical assets of a system aspect (e.g., the system or a portion thereof being evaluated with respect to one or more system criteria and one or more system modes).
- the storage devices 106 include one or more cloud storage devices 138, one or more storage racks 139 (e.g., a plurality of storage devices mounted in a rack), and/or one or more databases 140.
- the analysis system 10 can evaluate the storage devices 106 collectively as assets, as they support system functions, and/or as they support security functions.
- the analysis system 10 may also evaluate each storage device individually as an asset, as it supports system functions, and/or as it supports security functions.
- the analysis system may further evaluate one or more storage devices as part of the physical assets of a system aspect (e.g., the system or a portion thereof being evaluated with respect to one or more system criteria and one or more system modes).
- the user devices 105 include one or more landline phones 141 , one or more IP cameras 144 , one or more cell phones 143 , one or more user computing devices 145 , one or more IP phones 150 , one or more video conferencing equipment 148 , one or more scanners 151 , and/or one or more printers 142 .
- the analysis system 10 can evaluate the user devices 105 collectively as assets, as they support system functions, and/or as they support security functions.
- the analysis system 10 may also evaluate each user device individually as an asset, as it supports system functions, and/or as it supports security functions.
- the analysis system may further evaluate one or more user devices as part of the physical assets of a system aspect (e.g., the system or a portion thereof being evaluated with respect to one or more system criteria and one or more system modes).
- the system admin devices 104 include one or more system admin computing devices 146, one or more system computing devices 194 (e.g., data management, access control, privileges, etc.), and/or one or more security management computing devices 147.
- the analysis system 10 can evaluate the system admin devices 104 collectively as assets, as they support system functions, and/or as they support security functions.
- the analysis system 10 may also evaluate each system admin device individually as an asset, as it supports system functions, and/or as it supports security functions.
- the analysis system may further evaluate one or more system admin devices as part of the physical assets of a system aspect (e.g., the system or a portion thereof being evaluated with respect to one or more system criteria and one or more system modes).
- FIG. 17 is a schematic block diagram of an embodiment of a user computing device 105 that includes software 160, a user interface 161, processing resources 163, memory 162, and one or more networking devices 164.
- the processing resources 163 include one or more processing modules, cache memory, and a video graphics processing module.
- the memory 162 includes non-volatile memory, volatile memory and/or disk memory.
- the non-volatile memory stores hardware IDs, user credentials, security data, user IDs, passwords, access rights data, device IDs, one or more IP addresses and security software.
- the volatile memory includes system volatile memory and user volatile memory.
- the disk memory includes system disk memory and user disk memory.
- user memory (volatile and/or disk) stores user applications and user data.
- System memory stores system applications and system data.
- the user interface 161 includes one or more I/O (input/output) devices such as video displays, keyboards, mice, eye scanners, microphones, speakers, and other devices that interface with one or more users.
- the user interface 161 further includes one or more physical (PHY) interfaces with supporting software such that the user computing device can interface with peripheral devices.
- the software 160 includes one or more I/O software interfaces (e.g., drivers) that enable the processing module to interface with other components.
- the software 160 also includes system applications, user applications, disk memory software interfaces (drivers) and network software interfaces (drivers).
- the networking device 164 may be a network card or network interface that intercouples the user computing device 105 to devices external to the computing device 105 and includes one or more PHY interfaces.
- the network card is a WLAN card.
- the network card is a cellular data network card.
- the network card is an ethernet card.
- the user computing device may further include a data extraction module 80 . This would allow the analysis system 10 to obtain data directly from the user computing device. Regardless of how the analysis system 10 obtains data regarding the user computing device, the analysis system 10 can evaluate the user computing device as an asset, as it supports one or more system functions, and/or as it supports one or more security functions. The analysis system 10 may also evaluate each element of the user computing device (e.g., each software application, each drive, each piece of hardware, etc.) individually as an asset, as it supports one or more system functions, and/or as it supports one or more security functions.
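- As a hedged sketch of the kind of device data a data extraction module 80 might report from a user computing device, using only Python standard-library calls as stand-ins for a real discovery mechanism:

```python
# Hypothetical sketch of hardware/software identifiers a data extraction module
# on a user computing device might return; a real module would gather far more
# and would be driven by the specific data gathering request.
import platform
import socket


def extract_device_data() -> dict:
    return {
        "hostname": socket.gethostname(),
        "os": platform.system(),
        "os_release": platform.release(),
        "architecture": platform.machine(),
        "python_runtime": platform.python_version(),  # stand-in for installed software
    }


print(extract_device_data())
```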
- FIG. 18 is a schematic block diagram of an embodiment of a server 107 that includes software 170 , processing resources 171 , memory 172 and one or more networking resources 173 .
- the processing resources 171 include one or more processing modules, cache memory, and a video graphics processing module.
- the memory 172 includes non-volatile memory, volatile memory, and/or disk memory.
- the non-volatile memory stores hardware IDs, user credentials, security data, user IDs, passwords, access rights data, device IDs, one or more IP addresses and security software.
- the volatile memory includes system volatile memory and shared volatile memory.
- the disk memory includes server disk memory and shared disk memory.
- the software 170 includes one or more I/O software interfaces (e.g., drivers) that enable the software 170 to interface with other components.
- the software 170 includes system applications, server applications, disk memory software interfaces (drivers), and network software interfaces (drivers).
- the networking resources 173 may be one or more network cards that provide a physical interface for the server to a network.
- the server 107 may further include a data extraction module 80 . This would allow the analysis system 10 to obtain data directly from the server. Regardless of how the analysis system 10 obtains data regarding the server, the analysis system 10 can evaluate the server as an asset, as it supports one or more system functions, and/or as it supports one or more security functions. The analysis system 10 may also evaluate each element of the server (e.g., each software application, each drive, each piece of hardware, etc.) individually as an asset, as it supports one or more system functions, and/or as it supports one or more security functions.
- FIG. 19 is a schematic block diagram of another embodiment of a networked environment having a system 11 (or system 12 or system 13 ), the analysis system 10 , one or more networks 14 , one or more system proficiency resources 22 , one or more business associated computing devices 23 , one or more publicly available servers 27 , one or more subscription based servers 28 , one or more BOT computing devices 25 , and one or more bad actor computing devices 26 .
- system 11 is shown to include a plurality of system functions (SF).
- a system function (SF) may include one or more system sub functions (S2F) and a system sub function (S2F) may include one or more system sub-sub functions (S3F).
- the system 11 also includes a data extraction module (DEM) 80 and a system user interface module (SUIM) 81.
- a system function includes one or more business operations, one or more compliance requirements, one or more data flow objectives, one or more data access control objectives, one or more data integrity objectives, one or more data storage objectives, one or more data use objectives, and/or one or more data dissemination objectives.
- Business operation system functions are the primary purpose for the system 11 .
- the system 11 is designed and built to support the operations of the business, which vary from business to business.
- business operations include operations regarding critical business functions, support functions for core business, product and/or service functions, risk management objectives, business ecosystem objectives, and/or business contingency plans.
- the business operations may be divided into executive management operations, information technology operations, marketing operations, engineering operations, manufacturing operations, sales operations, accounting operations, human resource operations, legal operations, intellectual property operations, and/or finance operations.
- Each type of business operation includes sub-business operations, which, in turn may include its own sub-operations.
- engineering operations includes a system function of designing new products and/or product features.
- the design of a new product or feature involves sub-functions of creating design specifications, creating a design based on the design specification, and testing the design through simulation and/or prototyping. Each of these steps includes sub-steps.
- the design process includes the sub-sub system functions of creating a high level design from the design specifications; creating a low level design from the high level design; and creating code from the low level design.
- a compliance requirement may be a regulatory compliance requirement, a standard compliance requirement, a statutory compliance requirement, and/or an organization compliance requirement.
- an example of a regulatory compliance requirement arises when the organization has governmental agencies as clients.
- As an example of a standard compliance requirement, encryption protocols are often standardized.
- Data Encryption Standard (DES), Advanced Encryption Standard (AES), RSA (Rivest-Shamir-Adleman) encryption, and public-key infrastructure (PKI) are examples of encryption type standards.
- an example of a statutory compliance requirement is compliance with HIPAA (the Health Insurance Portability and Accountability Act).
- Examples of organization compliance requirements include use of specific vendor hardware, use of specific vendor software, use of encryption, etc.
- a data flow objective is regarding where data can flow, at what rate data can and should flow, the manner in which the data flows, and/or the means over which the data flows.
- data for remote storage is to flow via a secure data pipeline using a particular encryption protocol.
- ingesting of data should have the capacity to handle a data rate of 100 gigabits per second.
- a data access control objective establishes which types of personnel and/or types of assets can access specific types of data. For example, certain members of the corporate department and human resources department have access to employee personnel files, while all other members of the organization do not.
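- As a minimal sketch of evaluating such a data access control objective, assuming access rules are expressed as a mapping from data types to permitted departments (the rule format, the default-allow behavior, and the example values are assumptions):

```python
# Hypothetical sketch of a data access control objective: only certain
# departments may access specific data types (here, personnel files restricted
# to corporate and human resources).
ACCESS_RULES = {
    "employee personnel files": {"corporate", "human resources"},
}


def may_access(department: str, data_type: str) -> bool:
    allowed = ACCESS_RULES.get(data_type)
    # Data types without a rule are treated here as unrestricted; a real system
    # would more likely default to deny.
    return True if allowed is None else department in allowed


print(may_access("human resources", "employee personnel files"))  # True
print(may_access("engineering", "employee personnel files"))      # False
```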
- a data integrity objective establishes a reliability that, when data is retrieved, it is the data that was stored, i.e., it was not lost, damaged, or corrupted.
- An example of a data integrity protocol is Cyclic Redundancy Check (CRC).
- Another example of a data integrity protocol is a hash function.
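- As an illustrative sketch of the two integrity mechanisms named above (a CRC and a hash) computed at storage time and re-checked at retrieval, using Python's standard zlib and hashlib modules:

```python
# Hypothetical sketch of verifying data integrity: tags are computed when data
# is stored and recomputed when it is retrieved; a mismatch indicates loss,
# damage, or corruption.
import hashlib
import zlib


def integrity_tags(data: bytes) -> dict:
    return {
        "crc32": zlib.crc32(data),
        "sha256": hashlib.sha256(data).hexdigest(),
    }


stored = b"quarterly report v3"
tags = integrity_tags(stored)

retrieved = b"quarterly report v3"
ok = integrity_tags(retrieved) == tags
print("data integrity verified:", ok)
```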
- a data storage objective establishes the manner in which data is to be stored.
- a data storage objective is to store data in a RAID system; in particular, a RAID 6 system.
- a data storage objective is regarding archiving of data and the type of storage to use for archived data.
- a data use objective establishes the manner in which data can be used. For example, if the data is for sale, then the data use objective would establish what type of data is for sale, at what price, and what is the target customer. As another example, a data use objective establishes read only privileges, editing privileges, creation privileges, and/or deleting privileges.
- a data dissemination objective establishes how the data can be shared. For example, a data dissemination objective regarding confidential information indicates how the confidential information should be marked, with whom it can be shared internally, and how it can be shared externally, if at all.
- the analysis system 10 may evaluate understanding, implementation, and/or operation of one or more system functions, one or more system sub functions, and/or one or more system sub-sub functions.
- the evaluation may be to produce an evaluation rating, to identify deficiencies, and/or to auto-correct deficiencies.
- the analysis system 10 evaluates the understanding of the software development policies and/or processes.
- the analysis system 10 evaluates the use of software development policies and/or processes to implement a software program.
- analysis system 10 evaluates the operation of the software program with respect to the business operation, the design specifications, and/or the design.
- FIG. 20 is a schematic block diagram of another embodiment of a system 11 that includes, from a business operations perspective, divisions 181 - 183 , departments, and groups.
- the business structure of the system 11 is governed by a corporate department 180 .
- the corporate department may have its own sub-system with structures and software tailored to the corporate function of the system.
- Organized under the corporate department 180 are divisions, division 1 181 , division 2 182 , through division k 183 .
- These divisions may be different business divisions of a multi-national conglomerate, or may be different functional divisions of a business, e.g., finance, marketing, sales, legal, engineering, research and development, etc.
- Under each division 181 - 183 are a plurality of departments. Under each department are a number of groups.
- the business structure is generic and can be used to represent the structure of most conventional businesses and/or organizations.
- the analysis system 10 is able to use this generic structure to create and categorize the business structure of the system 11 .
- the creation and categorization of the business structure is done in a number of ways. Firstly, the analysis system 10 accesses corporate organization documents for the business, receives feedback from one or more persons in the business, and uses these documents and data to initially determine, at least partially, the business structure. Secondly, the analysis system 10 determines the network structure of the other system, investigates identities of components of the network structure, and constructs a sub-division of the other system.
- based upon software used within the sub-division, data character, and usage character, the analysis system 10 identifies more specifically the function of the divisions, departments, and groups. In doing so, the analysis system 10 uses information known about third-party systems to assist in the analysis.
- differing portions of the business structure may have different levels of abstraction from a component/sub-component/sub-sub-component/system/sub-system/sub-sub-system level based upon characters of differing segments of the business. For example, a more detailed level of abstraction may be taken for elements of the corporate and security departments of the business than for other departments of the business.
- FIG. 21 is a schematic block diagram of another embodiment of a business structure of the system 11 . Shown are a corporate department 180 , an IT department 181 , division 2 182 through division “k” 183 , where k is an integer equal to or greater than 3 .
- the corporate department 180 includes a plurality of hardware devices 260 , a plurality of software applications 262 , a plurality of business policies 264 , a plurality of business procedures 266 , local networking 268 , a plurality of security policies 270 , a plurality of security procedures 272 , data protection resources 272 , data access resources 276 , data storage devices 278 , a personnel hierarchy 280 , and external networking 282 .
- the analysis system 10 may evaluate the understanding, implementation, and/or operation of the assets, system functions, and/or security functions of the corporate department from a number of different perspectives, as will be described further with reference to one or more of the subsequent figures.
- the IT department 181 includes a plurality of hardware devices 290 , a plurality of software applications 292 , a plurality of business policies 294 , a plurality of business procedures 296 , local networking 298 , a plurality of security policies 300 , a plurality of security procedures 302 , data protection resources 304 , data access resources 306 , data storage devices 308 , a personnel hierarchy 310 , and external networking 312 .
- the analysis system 10 may evaluate the understanding, implementation, and/or operation of the assets, system functions, and/or security functions of the IT department from a number of different perspectives, as will be described further with reference to one or more of the subsequent figures.
- FIG. 22 is a schematic block diagram of another embodiment of a division 182 of a system that includes multiple departments.
- the departments include a marketing department 190 , an operations department 191 , an engineering department 192 , a manufacturing department 193 , a sales department 194 , and an accounting department 195 .
- Each of the departments includes a plurality of components relevant to support the corresponding business functions and/or security functions of the division and of the department.
- the marketing department 190 includes a plurality of devices, software, security policies, security procedures, business policies, business procedures, data protection resources, data access resources, data storage resources, a personnel hierarchy, local network resources, and external network resources.
- each of the operations department 191 , the engineering department 192 , the manufacturing department 193 , the sales department 194 , and the accounting department 195 includes a plurality of devices, software, security policies, security procedures, business policies, business procedures, data protection resources, data access resources, data storage resources, a personnel hierarchy, local network resources, and external network resources.
- a service mesh may be established to more effectively protect important portions of the business from other portions of the business.
- the service mesh may have more restrictive safety and security mechanisms for one part of the business than another portion of the business, e.g., manufacturing department service mesh is more restrictive than the sales department service mesh.
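- As a hedged, non-limiting sketch of the per-department service mesh restrictions described above, the differing restrictiveness could be captured as simple policy records; the department names, rule fields, and values below are illustrative assumptions rather than a real service mesh interface.

```python
# Sketch of per-department service mesh policies, where one portion of the business
# (manufacturing) is given more restrictive rules than another (sales).
# All field names and values are illustrative assumptions, not a real mesh API.
MESH_POLICIES = {
    "manufacturing": {
        "mtls_required": True,
        "allowed_ingress": ["engineering"],   # only engineering may call in
        "allowed_egress": [],                 # no outbound calls to other departments
        "max_token_lifetime_minutes": 15,
    },
    "sales": {
        "mtls_required": True,
        "allowed_ingress": ["marketing", "accounting"],
        "allowed_egress": ["marketing"],
        "max_token_lifetime_minutes": 60,
    },
}

def is_call_allowed(source_dept: str, target_dept: str) -> bool:
    """Return True if the target department's mesh policy admits the source department."""
    policy = MESH_POLICIES.get(target_dept, {})
    return source_dept in policy.get("allowed_ingress", [])

print(is_call_allowed("engineering", "manufacturing"))  # True
print(is_call_allowed("sales", "manufacturing"))        # False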
- the analysis system 10 may evaluate the understanding, implementation, and/or operation of the assets, system functions, and/or security functions of the division 182 , of each department, of each type of system elements, and/or each system element. For example, the analysis system 10 evaluates the data access policies and procedures of each department. As another example, the analysis system 10 evaluates the data storage policies, procedures, design, implementation, and/or operation of data storage within the engineering department 192 .
- FIG. 23 is a schematic block diagram of another embodiment of a networked environment having a system 11 (or system 12 or system 13 ), the analysis system 10 , one or more networks 14 , one or more system proficiency resources 22 , one or more business associated computing devices 23 , one or more publicly available servers 27 , one or more subscription based servers 28 , one or more BOT computing devices 25 , and one or more bad actor computing devices 26 .
- the system 11 is shown to include a plurality of security functions (SEF).
- a security function may include one or more security sub functions (SE2F) and a security sub function (SE2F) may include one or more security sub-sub functions (SE3F).
- a security function includes a security operation, a security requirement, a security policy, and/or a security objective with respect to data, system access, system design, system operation, and/or system modifications (e.g., updates, expansion, part replacement, maintenance, etc.).
- a security function includes one or more threat detection functions, one or more threat avoidance functions, one or more threat resolution functions, one or more threat recovery functions, one or more threat assessment functions, one or more threat impact functions, one or more threat tolerance functions, one or more business security functions, one or more governance security functions, one or more data at rest protection functions, one or more data in transit protection functions, and/or one or more data loss prevention functions.
- a threat detection function includes detecting unauthorized system access; detecting unauthorized data access; detecting unauthorized data changes; detecting uploading of worms, viruses, and the like; and/or detecting bad actor attacks.
- a threat avoidance function includes avoiding unauthorized system access; avoiding unauthorized data access; avoiding unauthorized data changes; avoiding uploading of worms, viruses, and the like; and/or avoiding bad actor attacks.
- a threat resolution function includes resolving unauthorized system access; resolving unauthorized data access; resolving unauthorized data changes; resolving uploading of worms, viruses, and the like; and/or resolving bad actor attacks.
- a threat recovery function includes recovering from an unauthorized system access; recovering from an unauthorized data access; recovering from unauthorized data changes; recovering from an uploading of worms, viruses, and the like; and/or recovering from a bad actor attack.
- a threat assessment function includes assessing the likelihood of and/or mechanisms for unauthorized system access; assessing the likelihood of and/or mechanisms for unauthorized data access; assessing the likelihood of and/or mechanisms for unauthorized data changes; assessing the likelihood of and/or mechanisms for uploading of worms, viruses, and the like; and/or assessing the likelihood of and/or mechanisms for bad actor attacks.
- a threat impact function includes determining an impact on business operations from an unauthorized system access; determining an impact on business operations from an unauthorized data access; determining an impact on business operations from unauthorized data changes; determining an impact on business operations from an uploading of worms, viruses, and the like; and/or determining an impact on business operations from a bad actor attack.
- a threat tolerance function includes determining a level of tolerance for an unauthorized system access; determining a level of tolerance for an unauthorized data access; determining a level of tolerance for unauthorized data changes; determining a level of tolerance for an uploading of worms, viruses, and the like; and/or determining a level of tolerance for a bad actor attack.
- a business security function includes data encryption, handling of third party data, releasing data to the public, and so on.
- a governance security function includes HIPAA compliance; data creation, data use, data storage, and/or data dissemination for specific types of customers (e.g., governmental agency); and/or the like.
- a data at rest protection function includes a data access protocol (e.g., user ID, password, etc.) to store data in and/or retrieve data from system data storage; data storage requirements, which include type of storage, location of storage, and storage capacity; and/or other data storage security functions.
- a data in transit protection function includes using a specific data transportation protocol (e.g., TCP/IP); using an encryption function prior to data transmission; using an error encoding function for data transmission; using a specified data communication path for data transmission; and/or other means to protect data in transit.
- a data loss prevention function includes a storage encoding technique (e.g., single parity encoding, double parity encoding, erasure encoding, etc.); a storage backup technique (e.g., one or two backup copies, erasure encoding, etc.); hardware maintenance and replacement policies and processes; and/or other means to prevent loss of data.
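- As a non-limiting illustration of one data loss prevention technique named above (single parity encoding), the following sketch computes an XOR parity block so that any one lost data block can be rebuilt from the surviving blocks; the block contents and layout are simplified assumptions.

```python
# Sketch of single parity encoding for data loss prevention: the parity block is the
# bytewise XOR of the data blocks, so any single missing block can be reconstructed.
def xor_blocks(blocks):
    """Bytewise XOR of equal-length byte blocks."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

data_blocks = [b"blockA01", b"blockB02", b"blockC03"]   # equal-length data blocks
parity = xor_blocks(data_blocks)                        # stored alongside the data

# Simulate losing one block and recovering it from the survivors plus the parity block.
lost_index = 1
survivors = [blk for i, blk in enumerate(data_blocks) if i != lost_index]
recovered = xor_blocks(survivors + [parity])
assert recovered == data_blocks[lost_index]
print("recovered:", recovered)
```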
- the analysis system 10 may evaluate understanding, implementation, and/or operation of one or more security functions, one or more security sub functions, and/or one or more security sub-sub functions.
- the evaluation may be to produce an evaluation rating, to identify deficiencies, and/or to auto-correct deficiencies.
- the analysis system 10 evaluates the understanding of the threat detection policies and/or processes.
- the analysis system 10 evaluates the use of threat detection policies and/or processes to implement security assets.
- the analysis system 10 evaluates the operation of the security assets with respect to the threat detection operation, the threat detection design specifications, and/or the threat detection design.
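- As a hedged sketch, the security function / security sub function / security sub-sub function (SEF/SE2F/SE3F) hierarchy discussed above could be held as a nested mapping so an evaluation can be scoped to any level; the SE3F entries and the rating values below are placeholder assumptions drawn loosely from the threat detection discussion.

```python
# Nested security function taxonomy (SEF -> SE2F -> SE3F); leaf values hold placeholder
# evaluation ratings so an evaluation can target any level of the hierarchy.
SECURITY_FUNCTIONS = {
    "threat detection": {                        # SEF
        "detect unauthorized system access": {   # SE2F
            "detect failed-login spikes": 0.7,   # SE3F (hypothetical) -> rating
            "detect new admin accounts": 0.4,
        },
        "detect unauthorized data access": {
            "detect bulk data reads": 0.6,
        },
    },
}

def ratings_under(node):
    """Yield every leaf rating at or below a node of the taxonomy."""
    if isinstance(node, dict):
        for child in node.values():
            yield from ratings_under(child)
    else:
        yield node

sef = SECURITY_FUNCTIONS["threat detection"]
leaf_ratings = list(ratings_under(sef))
print("threat detection average rating:", sum(leaf_ratings) / len(leaf_ratings))
```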
- FIG. 24 is a schematic block diagram of an embodiment of an engineering department 200 of a division 182 that reports to a corporate department 180 of a system 11 .
- the engineering department 200 includes engineering assets, engineering system functions, and engineering security functions.
- the engineering assets include security HW & SW, user device HW & SW, networking HW & SW, system HW & SW, system monitoring HW & SW, and/or other devices that includes HW and/or SW.
- the organization's system functions include business operations, compliance requirements, data flow objectives, data access objectives, data integrity objectives, data storage objectives, data use objectives, and/or data dissemination objectives. These system functions apply throughout the system including throughout division 2 and for the engineering department 200 of division 2 .
- the division 182 can issue more restrictive, more secure, and/or more detailed system functions.
- the division has issued more restrictive, secure, and/or detailed business operations (business operations +) and more restrictive, secure, and/or detailed data access functions (data access +).
- the engineering department 200 may issue more restrictive, more secure, and/or more detailed system functions than the organization and/or the division.
- the engineering department has issued more restrictive, secure, and/or detailed business operations (business operations ++) than the division; has issued more restrictive, secure, and/or detailed data flow functions (data flow ++) than the organization; has issued more restrictive, secure, and/or detailed data integrity functions (data integrity ++) than the organization; and has issued more restrictive, secure, and/or detailed data storage functions (data storage ++) than the organization.
- an organization level business operation regarding the design of new products and/or of new product features specifies high-level design and verify guidelines.
- the division issued more detailed design and verify guidelines.
- the engineering department issued even more detailed design and verify guidelines.
- the analysis system 10 can evaluate the compliance with the system functions for the various levels. In addition, the analysis system 10 can evaluate that the division issued system functions are compliant with the organization issued system functions and/or are more restrictive, more secure, and/or more detailed. Similarly, the analysis system 10 can evaluate that the engineering department issued system functions are compliant with the organization and the division issued system functions and/or are more restrictive, more secure, and/or more detailed.
- the organization security functions include data at rest protection, data loss prevention, data in transit protection, threat management, security governance, and business security.
- the division has issued more restrictive, more secure, and/or more detailed business security functions (business security +).
- the engineering department has issued more restrictive, more secure, and/or more detailed data at rest protection (data at rest protection ++), data loss prevention (data loss prevention ++), and data in transit protection (data in transit ++).
- the analysis system 10 can evaluate the compliance with the security functions for the various levels. In addition, the analysis system 10 can evaluate that the division issued security functions are compliant with the organization issued security functions and/or are more restrictive, more secure, and/or more detailed. Similarly, the analysis system 10 can evaluate that the engineering department issued security functions are compliant with the organization and the division issued security functions and/or are more restrictive, more secure, and/or more detailed.
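- As a non-limiting sketch of the layered check described above (department at least as restrictive as division, division at least as restrictive as organization), restrictiveness is modeled below as a simple numeric level; a real evaluation would compare the underlying policies themselves, and all names and levels are assumptions.

```python
# Sketch of a layered restrictiveness check: each level's security functions must be
# at least as restrictive as the level above it (organization <= division <= department).
# Restrictiveness is abstracted to an integer (0 = baseline, 1 = "+", 2 = "++").
ORGANIZATION = {"business security": 0, "data at rest protection": 0, "data in transit protection": 0}
DIVISION     = {"business security": 1, "data at rest protection": 0, "data in transit protection": 0}
ENGINEERING  = {"business security": 1, "data at rest protection": 2, "data in transit protection": 2}

def check_compliance(child: dict, parent: dict, child_name: str, parent_name: str):
    """Report any security function where the child level is less restrictive than its parent."""
    findings = []
    for function, parent_level in parent.items():
        child_level = child.get(function)
        if child_level is None:
            findings.append(f"{child_name} is missing '{function}' issued by {parent_name}")
        elif child_level < parent_level:
            findings.append(f"{child_name} '{function}' is less restrictive than {parent_name}")
    return findings

issues = check_compliance(DIVISION, ORGANIZATION, "division", "organization")
issues += check_compliance(ENGINEERING, DIVISION, "engineering department", "division")
print(issues or "all levels compliant")
```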
- FIG. 25 is a schematic block diagram of an example of an analysis system 10 evaluating a system element under test 91 of a system 11 .
- the system element under test 91 corresponds to a system aspect (or system sector), which includes one or more system elements, one or more system criteria, and one or more system modes.
- the system criteria are shown to include guidelines, system requirements, system design & system build (system implementation), and the resulting system.
- the analysis system 10 may evaluate the system, or portion thereof, during initial system requirement development, initial design of the system, initial build of the system, operation of the initial system, revisions to the system requirements, revisions to the system design, revisions to the system build, and/or operation of the revised system.
- a revision to a system includes adding assets, system functions, and/or security functions; deleting assets, system functions, and/or security functions; and/or modifying assets, system functions, and/or security functions.
- the guidelines include one or more of business objectives, security objectives, NIST cybersecurity guidelines, system objectives, governmental and/or regulatory requirements, third party requirements, etc. and are used to help create the system requirements.
- System requirements outline the hardware requirements for the system, the software requirements for the system, the networking requirements for the system, the security requirements for the system, the logical data flow for the system, the hardware architecture for the system, the software architecture for the system, the logical inputs and outputs of the system, the system input requirements, the system output requirements, the system's storage requirements, the processing requirements for the system, system controls, system backup, data access parameters, and/or specification for other system features.
- the system requirements are used to help create the system design.
- the system design includes a high level design (HLD), a low level design (LLD), a detailed level design (DLD), and/or other design levels.
- High level design is a general design of the system. It includes a description of system architecture; a database design; an outline of platforms, services, and processes the system will require; a description of relationships between the assets, system functions, and security functions; diagrams regarding data flow; flowcharts; data structures; and/or other documentation to enable more detailed design of the system.
- Low level design is a component level design that is based on the HLD. It provides the details and definitions for every system component (e.g., HW and SW). In particular, LLD specifies the features of the system components and component specifications. Detailed level design describes the interaction of every component of the system.
- the system is built based on the design to produce a resulting system (i.e., the implemented assets).
- the assets of the system operate to perform the system functions and/or security functions.
- the analysis system 10 can evaluate the understanding, implementation, operation, and/or self-analysis of the system 11 at one or more system criteria levels (e.g., guidelines, system requirements, system implementation (e.g., design and/or build), and system) in a variety of ways.
- the analysis system 10 evaluates the understanding of the system (or portion thereof) by determining a knowledge level of the system and/or maturity level of system. For example, an understanding evaluation interprets what is known about the system and compares it to what should be known about the system.
- the analysis system evaluates the understanding of the guidelines.
- the analysis system 10 evaluates the policies, processes, procedures, automation, certifications, documentation, and/or other evaluation metric (e.g., evaluation metrics) regarding the thoroughness of the guidelines to facilitate the understanding of the guidelines.
- the analysis system 10 evaluates the understanding of the system requirements. For instance, the analysis system 10 evaluates the policies, processes, procedures, automation, certifications, documentation, and/or other evaluation metric (e.g., evaluation metrics) regarding the thoroughness of the system requirements to facilitate the understanding of the system requirements. The more incomplete the data regarding the evaluation metrics, the more likely the system requirements are incomplete; which indicates a lack of understanding. The fewer numbers of and/or incompleteness of policies, processes, procedures, automation, documentation, certification, and/or other evaluation metric regarding the creation and/or use of the system requirements, the more likely the system requirements are not well understood (e.g., lower level of knowledge and/or of system maturity) resulting in a low evaluation rating.
- the analysis system 10 evaluates the understanding of the system design. For instance, the analysis system 10 evaluates the policies, processes, procedures, automation, certifications, documentation, and/or other evaluation metric (e.g., evaluation metrics) regarding the thoroughness of the system design to facilitate the understanding of the system design. The more incomplete the data regarding the evaluation metrics, the more likely the system design is incomplete; which indicates a lack of understanding. The fewer numbers of and/or incompleteness of policies, processes, procedures, automation, documentation, certification, and/or other evaluation metric regarding the creation and/or use of the system design, the more likely the system design is not well understood (e.g., lower level of knowledge and/or of system maturity) resulting in a low evaluation rating.
- the analysis system 10 evaluates the understanding of the system build. For instance, the analysis system 10 evaluates the policies, processes, procedures, automation, certifications, documentation, and/or other evaluation metric (e.g., evaluation metrics) regarding the thoroughness of the system build to facilitate the understanding of the system build. The more incomplete the data regarding the evaluation metrics, the more likely the system build is incomplete; which indicates a lack of understanding. The fewer numbers of and/or incompleteness of policies, processes, procedures, automation, documentation, certification, and/or other evaluation metric regarding the execution of and/or use of the system build, the more likely the system build is not well understood (e.g., lower level of knowledge and/or of system maturity) resulting in a low evaluation rating.
- the analysis system 10 evaluates the understanding of the system functions. For instance, the analysis system 10 evaluates the policies, processes, procedures, automation, certifications, documentation, and/or other evaluation metric (e.g., evaluation metrics) regarding the thoroughness of the system functions to facilitate the understanding of the system functions. The more incomplete the data regarding the evaluation metrics, the more likely the system functions are incomplete; which indicates a lack of understanding. The fewer numbers of and/or incompleteness of policies, processes, procedures, automation, documentation, certification, and/or other evaluation metric regarding the execution of and/or use of the system functions, the more likely the system functions are not well understood (e.g., lower level of knowledge and/or of system maturity) resulting in a low evaluation rating.
- the analysis system 10 evaluates the understanding of the security functions. For instance, the analysis system 10 evaluates the policies, processes, procedures, automation, certifications, documentation, and/or other evaluation metric (e.g., evaluation metrics) regarding the thoroughness of the security functions to facilitate the understanding of the security functions. The more incomplete the data regarding the evaluation metrics, the more likely the security functions are incomplete; which indicates a lack of understanding. The fewer numbers of and/or incompleteness of policies, processes, procedures, automation, documentation, certification, and/or other evaluation metric regarding the execution of and/or use of the security functions, the more likely the security functions are not well understood (e.g., lower level of knowledge and/or of system maturity) resulting in a low evaluation rating.
- the analysis system 10 evaluates the understanding of the system assets. For instance, the analysis system 10 evaluates the policies, processes, procedures, automation, certifications, documentation, and/or other evaluation metric (e.g., evaluation metrics) regarding the thoroughness of the system assets to facilitate the understanding of the system assets. The more incomplete the data regarding the evaluation metrics, the more likely the system assets are incomplete; which indicates a lack of understanding. The fewer numbers of and/or incompleteness of policies, processes, procedures, automation, documentation, certification, and/or other evaluation metric regarding the selection, identification, and/or use of the system assets, the more likely the system assets are not well understood (e.g., lower level of knowledge and/or of system maturity) resulting in a low evaluation rating.
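- As a hedged illustration of the rating logic described in the preceding paragraphs, completeness scores for the evaluation metrics (policies, processes, procedures, automation, certification, documentation) could be aggregated into an understanding evaluation rating; the completeness values, the equal weighting, and the 0-100 scale are placeholder assumptions.

```python
# Sketch: derive an understanding evaluation rating from how complete the data is for
# each evaluation metric; sparser or more incomplete metric data yields a lower rating.
METRIC_COMPLETENESS = {      # fraction of expected artifacts actually found (assumed values)
    "policy": 0.9,
    "process": 0.6,
    "procedure": 0.5,
    "automation": 0.2,
    "certification": 0.0,
    "documentation": 0.7,
}

def understanding_rating(completeness: dict, scale: int = 100) -> float:
    """Average metric completeness mapped onto a low-to-high rating scale."""
    if not completeness:
        return 0.0
    return scale * sum(completeness.values()) / len(completeness)

rating = understanding_rating(METRIC_COMPLETENESS)
print(f"understanding evaluation rating: {rating:.0f} / 100")   # lower => less well understood
```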
- the analysis system 10 also evaluates the implementation of the system (or portion thereof) by determining how well the system is being developed, was developed, and/or is being updated. For example, the analysis system 10 determines how well the assets, system functions, and/or security functions are being developed, have been developed, and/or are being updated based on the guidelines, the system requirements, the system design, and/or the system build.
- the analysis system 10 evaluates the implementation of the guidelines. For instance, the analysis system 10 evaluates the policies, processes, procedures, automation, certifications, documentation, and/or other evaluation metric (e.g., evaluation metrics) regarding the development of the guidelines. The more incomplete the data regarding the evaluation metrics, the more likely the development of the guidelines is incomplete. The fewer numbers of and/or incompleteness of policies, processes, procedures, automation, documentation, certification, and/or other evaluation metric regarding the development of the guidelines, the more likely the guidelines are not well developed (e.g., lower level of system development maturity) resulting in a low evaluation rating.
- the analysis system 10 evaluates the implementation of the system requirements. For instance, the analysis system 10 evaluates the policies, processes, procedures, automation, certifications, documentation, and/or other evaluation metric (e.g., evaluation metrics) regarding the development of the system requirements. The more incomplete the data regarding the evaluation metrics, the more likely the development of the system requirements is incomplete. The fewer numbers of and/or incompleteness of policies, processes, procedures, automation, documentation, certification, and/or other evaluation metric regarding the development of the system requirements, the more likely the system requirements are not well developed (e.g., lower level of system development maturity) resulting in a low evaluation rating.
- the analysis system 10 evaluates the implementation of the system design. For instance, the analysis system 10 evaluates the policies, processes, procedures, automation, certifications, documentation, and/or other evaluation metric (e.g., evaluation metrics) regarding the development of the system design. The more incomplete the data regarding the evaluation metrics, the more likely the development of the system design is incomplete. The fewer numbers of and/or incompleteness of policies, processes, procedures, automation, documentation, certification, and/or other evaluation metric regarding the development of the system design, the more likely the system design is not well developed (e.g., lower level of system development maturity) resulting in a low evaluation rating.
- the analysis system 10 evaluates the implementation of the system build. For instance, the analysis system 10 evaluates the policies, processes, procedures, automation, certifications, documentation, and/or other evaluation metric (e.g., evaluation metrics) regarding the development of the system build. The more incomplete the data regarding the evaluation metrics, the more likely the development of the system build is incomplete. The fewer numbers of and/or incompleteness of policies, processes, procedures, automation, documentation, certification, and/or other evaluation metric regarding the development of the system build, the more likely the system build is not well developed (e.g., lower level of system development maturity) resulting in a low evaluation rating.
- the analysis system 10 evaluates the implementation of the system functions. For instance, the analysis system 10 evaluates the policies, processes, procedures, automation, certifications, documentation, and/or other evaluation metric (e.g., evaluation metrics) regarding the development of the system functions. The more incomplete the data regarding the evaluation metrics, the more likely the development of the system functions is incomplete. The fewer numbers of and/or incompleteness of policies, processes, procedures, automation, documentation, certification, and/or other evaluation metric regarding the development of the system functions, the more likely the system functions are not well developed (e.g., lower level of system development maturity) resulting in a low evaluation rating.
- the analysis system 10 evaluates the implementation of the security functions. For instance, the analysis system 10 evaluates the policies, processes, procedures, automation, certifications, documentation, and/or other evaluation metric (e.g., evaluation metrics) regarding the development of the security functions. The more incomplete the data regarding the evaluation metrics, the more likely the development of the security functions is incomplete. The fewer numbers of and/or incompleteness of policies, processes, procedures, automation, documentation, certification, and/or other evaluation metric regarding the development of the security functions, the more likely the security functions are not well developed (e.g., lower level of system development maturity) resulting in a low evaluation rating.
- the analysis system 10 evaluates the implementation of the system assets. For instance, the analysis system 10 evaluates the policies, processes, procedures, automation, certifications, documentation, and/or other evaluation metric (e.g., evaluation metrics) regarding the development of the system assets. The more incomplete the data regarding the evaluation metrics, the more likely the development of the system assets is incomplete. The fewer numbers of and/or incompleteness of policies, processes, procedures, automation, documentation, certification, and/or other evaluation metric regarding the development of the system assets, the more likely the system assets are not well developed (e.g., lower level of system development maturity) resulting in a low evaluation rating.
- the analysis system 10 also evaluates the operation of the system (or portion thereof) by determining how well the system fulfills its objectives. For example, the analysis system 10 determines how well the assets, system functions, and/or security functions fulfill the guidelines, the system requirements, the system design, the system build, the objectives of the system, and/or other purpose of the system.
- the analysis system 10 evaluates the operation (i.e., fulfillment) of the guidelines by the system requirements. For instance, the analysis system 10 evaluates the policies, processes, procedures, automation, certifications, documentation, and/or other evaluation metric (e.g., evaluation metrics) regarding the fulfillment of the guidelines by the system requirements. The more incomplete the data regarding the evaluation metrics, the more likely the fulfillment of the guidelines by the system requirements is incomplete. The fewer numbers of and/or incompleteness of policies, processes, procedures, automation, documentation, certification, and/or other evaluation metric regarding the fulfillment of the guidelines by the system requirements, the more likely the system requirements do not adequately fulfill the guidelines (e.g., lower level of system development maturity) resulting in a low evaluation rating.
- the analysis system 10 evaluates the operation (i.e., fulfillment) of the guidelines and/or the system requirements by the system design. For instance, the analysis system 10 evaluates the policies, processes, procedures, automation, certifications, documentation, and/or other evaluation metric (e.g., evaluation metrics) regarding the fulfillment of the guidelines and/or the system requirements by the system design. The more incomplete the data regarding the evaluation metrics, the more likely the fulfillment of the guidelines and/or the system requirements by the system design is incomplete.
- the analysis system 10 evaluates the operation (i.e., fulfillment) of the guidelines, the system requirements, and/or the system design by the system build. For instance, the analysis system 10 evaluates the policies, processes, procedures, automation, certifications, documentation, and/or other evaluation metric (e.g., evaluation metrics) regarding the fulfillment of the guidelines, the system requirements, and/or the system design by the system build.
- the analysis system 10 evaluates the operation (i.e., fulfillment) of the guidelines, the system requirements, the system design, the system build, and/or objectives by the operation of the system in performing the system functions. For instance, the analysis system 10 evaluates the policies, processes, procedures, automation, certifications, documentation, and/or other evaluation metric (e.g., evaluation metrics) regarding the fulfillment of the guidelines, the system requirements, the system design, the system build, and/or objectives regarding the performance of the system functions by the system.
- the more incomplete the data regarding the evaluation metrics, the more likely the fulfillment of the guidelines, the system requirements, the system design, the system build, and/or the objectives regarding the system functions is incomplete.
- the analysis system 10 evaluates the operation (i.e., fulfillment) of the guidelines, the system requirements, the system design, the system build, and/or objectives by the operation of the system in performing the security functions. For instance, the analysis system 10 evaluates the policies, processes, procedures, automation, certifications, documentation, and/or other evaluation metric (e.g., evaluation metrics) regarding the fulfillment of the guidelines, the system requirements, the system design, the system build, and/or objectives regarding the performance of the security functions by the system.
- the more incomplete the data regarding the evaluation metrics, the more likely the fulfillment of the guidelines, the system requirements, the system design, the system build, and/or the objectives regarding the security functions is incomplete.
- the analysis system 10 evaluates the operation (i.e., fulfillment) of the guidelines, the system requirements, the system design, the system build, and/or objectives by the operation of the system assets. For instance, the analysis system 10 evaluates the policies, processes, procedures, automation, certifications, documentation, and/or other evaluation metric (e.g., evaluation metrics) regarding the fulfillment of the guidelines, the system requirements, the system design, the system build, and/or objectives regarding the performance of the system assets.
- the more incomplete the data regarding the evaluation metrics, the more likely the fulfillment of the guidelines, the system requirements, the system design, the system build, and/or the objectives regarding the system assets is incomplete.
- the analysis system 10 also evaluates the self-analysis capabilities of the system (or portion thereof) by determining how well the self-analysis functions are implemented and how they subsequently fulfill the self-analysis objectives.
- the self-analysis capabilities of the system are a self-analysis system that overlies the system. Accordingly, the overlaid self-analysis system can be evaluated by the analysis system 10 in a similar manner as the system under test 91 .
- the understanding, implementation, and/or operation of the overlaid self-analysis system can be evaluated with respect to self-analysis guidelines, self-analysis requirements, design of the self-analysis system, build of the self-analysis system, and/or operation of the self-analysis system.
- the analysis system 10 may identify deficiencies and, when appropriate, auto-correct a deficiency.
- the analysis system 10 identifies deficiencies in the understanding, implementation, and/or operation of the guidelines, the system requirements, the system design, the system build, the resulting system, and/or the system objectives.
- the analysis system 10 obtains additional information from the system via a data gathering process (e.g., producing discovered data) and/or from a system proficiency resource (e.g., producing desired data).
- the analysis system 10 uses the discovered data and/or desired data to identify the deficiencies.
- the analysis system 10 auto-corrects the deficiencies. For example, when a software tool that aides in the creation of guidelines and/or system requirements is missing from the system's tool set, the analysis system 10 can automatically obtain a copy of the missing software tool for the system.
- FIG. 26 is a schematic block diagram of another example of an analysis system 10 evaluating a system element under test 91 .
- the analysis system 10 is evaluating the system element under test 91 from three evaluation viewpoints: disclosed data, discovered data, and desired data.
- Disclosed data is the known data of the system at the outset of an analysis, which is typically supplied by a system administrator and/or is obtained from data files of the system.
- Discovered data is the data discovered about the system by the analysis system 10 during the analysis.
- Desired data is the data obtained by the analysis system 10 from system proficiency resources regarding desired guidelines, system requirements, system design, system build, and/or system operation.
- the evaluation from the three evaluation viewpoints may be done serially, in parallel, and/or in a parallel-serial combination to produce three sets of evaluation ratings.
- a set of evaluation ratings includes one or more of: an evaluation rating regarding the understanding of the guidelines; an evaluation rating regarding the understanding of the system requirements; an evaluation rating regarding the understanding of the system design; an evaluation rating regarding the understanding of the system build; an evaluation rating regarding the understanding of the system operation; an evaluation rating regarding the development of the system requirements from the guidelines; an evaluation rating regarding the design from the system requirements; an evaluation rating regarding the system build from the design; an evaluation rating regarding the system operation based on the system design and/or system build; an evaluation rating regarding the guidelines; an evaluation rating regarding the system requirements; an evaluation rating regarding the system design; an evaluation rating regarding the system build; and/or an evaluation rating regarding the system operation.
- FIG. 27 is a schematic block diagram of another example of an analysis system 10 evaluating a system element under test 91 .
- the analysis system 10 is evaluating the system element under test 91 from three evaluation viewpoints: disclosed data, discovered data, and desired data with regard to security functions.
- the evaluation from the three evaluation viewpoints for the security functions may be done serially, in parallel, and/or in a parallel-serial combination to produce three sets of evaluation ratings with respect to security functions: one for disclosed data, one for discovered data, and one for desired data.
- FIG. 28 is a schematic block diagram of another example of an analysis system 10 evaluating a system element under test 91 .
- the analysis system 10 is evaluating the system element under test 91 from three evaluation viewpoints and from three evaluation modes. For example, disclosed data regarding assets, discovered data regarding assets, desired data regarding assets, disclosed data regarding system functions, discovered data regarding system functions, desired data regarding system functions, disclosed data regarding security functions, discovered data regarding security functions, and desired data regarding security functions.
- the evaluation from the nine evaluation viewpoint & evaluation mode combinations may be done serially, in parallel, and/or in a parallel-serial combination to produce nine sets of evaluation ratings: one for disclosed data regarding assets, one for discovered data regarding assets, one for desired data regarding assets, one for disclosed data regarding system functions, one for discovered data regarding system functions, one for desired data regarding system functions, one for disclosed data regarding security functions, one for discovered data regarding security functions, and one for desired data regarding security functions.
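- As a non-limiting sketch, the nine evaluation viewpoint and evaluation mode combinations enumerated above can be generated programmatically and evaluated serially or in parallel; the evaluation function below is a stand-in that returns a placeholder rating rather than performing a real analysis.

```python
# Sketch: enumerate the nine viewpoint/mode combinations and evaluate them serially and in parallel.
from concurrent.futures import ThreadPoolExecutor
from itertools import product

VIEWPOINTS = ("disclosed", "discovered", "desired")
MODES = ("assets", "system functions", "security functions")

def evaluate(viewpoint: str, mode: str) -> tuple:
    """Placeholder evaluation: a real system would gather and analyze data here."""
    rating = 50  # stand-in rating value
    return (viewpoint, mode, rating)

combinations = list(product(VIEWPOINTS, MODES))          # nine combinations

# Serial evaluation.
serial_results = [evaluate(v, m) for v, m in combinations]

# Parallel evaluation of the same nine combinations.
with ThreadPoolExecutor() as pool:
    parallel_results = list(pool.map(lambda vm: evaluate(*vm), combinations))

print(len(parallel_results), "sets of evaluation ratings")
```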
- FIG. 29 is a schematic block diagram of an example of the functioning of an analysis system 10 evaluating a system element under test 91 .
- the analysis system 10 includes evaluation criteria 211 , evaluation mode 212 , analysis perspective 213 , analysis viewpoint 214 , analysis categories 215 , data gathering 216 , pre-processing 217 , and analysis metrics 218 to produce one or more ratings 219 .
- the evaluation criteria 211 includes guidelines, system requirements, system design, system build, and system operation.
- the evaluation mode 212 includes assets, system functions, and security functions.
- the evaluation criteria 211 and the evaluation mode 212 are part of the system aspect, which corresponds to the system, or portion thereof, being evaluated.
- the analysis perspective 213 includes understanding, implementation, operation, and self-analysis.
- the analysis viewpoint includes disclosed, discovered, and desired.
- the analysis categories 215 include identify, protect, detect, respond, and recover.
- the analysis perspective 213 , the analysis viewpoint 214 , and the analysis categories correspond to how the system, or portion thereof, will be evaluated. For example, the system, or portion thereof, is being evaluated regarding the understanding of the system's ability to identify assets, system functions, and/or security functions from discovered data.
- the analysis metrics 218 include process, policy, procedure, automation, certification, and documentation.
- the analysis metrics 218 and the pre-processing 217 correspond to the manner of evaluation. For example, the policies regarding the system's ability to identify assets, system functions, and/or security functions from discovered data of the system, or portion thereof, are evaluated to produce an understanding evaluation rating.
- the analysis system 10 determines what portion of the system is evaluated (i.e., a system aspect). As such, the analysis system 10 determines one or more system elements (e.g., including one or more system assets which are one or more physical assets and/or conceptual assets), one or more system criteria (e.g., guidelines, system requirements, system design, system build, and/or system operation), and one or more system modes (e.g., assets, system functions, and security functions).
- the analysis system 10 may determine the system aspect in a variety of ways. For example, the analysis system 10 receives an input identifying the system aspect from an authorized operator of the system (e.g., IT personnel, executive personnel, etc.).
- the analysis system determines the system aspect in a systematic manner to evaluate various combinations of system aspects as part of an overall system evaluation.
- the overall system evaluation may be done one time, periodically, or continuously.
- the analysis system determines the system aspect as part of a systematic analysis of a section of the system, which may be done one time, periodically, or continuously.
- the analysis system determines how the system aspect is to be evaluated by selecting one or more analysis perspectives (understanding, implementation, operation, and self-analysis), one or more analysis viewpoints (disclosed, discovered, and desired), and one or more analysis categories (identify, protect, detect, respond, and recover).
- the analysis system 10 may determine how the system aspect is to be evaluated in a variety of ways. For example, the analysis system 10 receives an input identifying how the system aspect is to be evaluated from an authorized operator of the system (e.g., IT personnel, executive personnel, etc.). As another example, the analysis system determines how the system aspect is to be evaluated in a systematic manner to evaluate the system aspect in various combinations of analysis perspectives, analysis viewpoints, and analysis categories as part of an overall system evaluation. The overall system evaluation may be done one time, periodically, or continuously. As yet another example, the analysis system determines how the system aspect is to be evaluated as part of a systematic analysis of a section of the system, which may be done one time, periodically, or continuously.
- the analysis system 10 also determines one or more analysis metrics (e.g., process, policy, procedure, automation, certification, and documentation) regarding the manner for evaluating the system aspect in accordance with how it's to be evaluated.
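- As a hedged sketch of how the selections above could be combined, a single evaluation request might be expressed as one choice from each selection set; the example request mirrors the engineering department example used later in this description, and the rating function is a placeholder assumption.

```python
# Sketch: an evaluation request built from one selection in each dimension of the analysis.
from dataclasses import dataclass

@dataclass(frozen=True)
class EvaluationRequest:
    system_element: str        # what portion of the system (system aspect)
    evaluation_criteria: str   # guidelines, system requirements, design, build, operation
    evaluation_mode: str       # assets, system functions, security functions
    analysis_perspective: str  # understanding, implementation, operation, self-analysis
    analysis_viewpoint: str    # disclosed, discovered, desired
    analysis_category: str     # identify, protect, detect, respond, recover
    analysis_metric: str       # process, policy, procedure, automation, certification, documentation

def evaluate(request: EvaluationRequest) -> int:
    """Placeholder rating: a real evaluation would gather and analyze data for the request."""
    return 72  # stand-in rating on a low-to-high scale

request = EvaluationRequest(
    system_element="engineering department",
    evaluation_criteria="system operation",
    evaluation_mode="assets",
    analysis_perspective="understanding",
    analysis_viewpoint="discovered",
    analysis_category="identify",
    analysis_metric="policy",
)
print(evaluate(request))
```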
- a policy sets out a strategic direction and includes high-level rules or contracts regarding issues and/or matters. For example, all software shall be the most recent version of the software.
- a process is a set of actions for generating outputs from inputs and includes one or more directives for generating outputs from inputs.
- a process regarding the software policy is that software updates are to be performed by the IT department and all software shall be updated within one month of the release of the new version of software.
- a procedure is the working instructions to complete an action as may be outlined by a process.
- the IT department handling software updates includes a procedure that describes the steps for updating the software, verifying that the updated software works, and recording the updating and verification in a software update log.
- Automation is in regard to the level of automation the system includes for handling actions, issues, and/or matters of policies, processes, and/or procedures.
- Documentation is in regard to the level of documentation the system has regarding guidelines, system requirements, system design, system build, system operation, system assets, system functions, security functions, system understanding, system implementation, operation of the system, policies, processes, procedures, etc.
- Certification is in regard to certifications of the system, such as maintenance certification, regulatory certifications, etc.
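- As a non-limiting sketch, the software-update policy, process, and procedure example above could be recorded as linked records whose presence an evaluation can inspect per analysis metric; the identifiers and field names are assumptions, while the statements are taken from the example.

```python
# Sketch: the software-update policy, its process, and its working-instruction procedure
# as linked records that an evaluation could inspect for presence and completeness.
policy = {
    "id": "POL-SW-UPDATE",   # hypothetical identifier
    "statement": "All software shall be the most recent version of the software.",
}
process = {
    "policy_id": "POL-SW-UPDATE",
    "directives": [
        "Software updates are performed by the IT department.",
        "All software shall be updated within one month of a new version's release.",
    ],
}
procedure = {
    "policy_id": "POL-SW-UPDATE",
    "steps": [
        "Update the software.",
        "Verify that the updated software works.",
        "Record the update and verification in the software update log.",
    ],
}

def metric_presence(policy, process, procedure) -> dict:
    """Simple presence check per analysis metric for this policy chain."""
    return {
        "policy": bool(policy.get("statement")),
        "process": bool(process.get("directives")),
        "procedure": bool(procedure.get("steps")),
    }

print(metric_presence(policy, process, procedure))
```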
- the analysis system 10 receives an input identifying manner in which to evaluate the system aspect from an authorized operator of the system (e.g., IT personnel, executive personnel, etc.).
- the analysis system determines the manner in which to evaluate the system aspect in a systematic manner to evaluate the system aspect in various combinations of analysis metrics as part of an overall system evaluation.
- the overall system evaluation may be done one time, periodically, or continuously.
- the analysis system determines the manner in which to evaluate the system aspect as part of a systematic analysis of a section of the system, which may be done one time, periodically, or continuously.
- the data gathering function 216 gathers data relevant to the system aspect, how it's to be evaluated, and the manner of evaluation from the system 11 , from resources that store system information 210 (e.g., from the system, from a private storage of the analysis system, etc.), and/or from one or more system proficiency resources 22 .
- a current evaluation is regarding an understanding (analysis perspective) of policies (analysis metric) to identify (analysis category) assets (evaluation mode) of an engineering department (system elements) regarding operations (evaluation criteria) that the assets perform based on discovered data (analysis viewpoint).
- the data gathering function 216 gathers data regarding policies to identify assets of the engineering department and the operations they perform using one or more data discovery tools.
- the pre-processing function 217 processes the gathered data by normalizing the data, parsing the data, tagging the data, and/or de-duplicating the data.
- the analysis system evaluates the processed data in accordance with the selected analysis metric to produce one or more ratings 219 .
- the analysis system would produce a rating regarding the understanding of policies to identify assets of an engineering department regarding operations that the assets perform based on discovered data.
- the rating 219 is on a scale from low to high. In this example, a low rating indicates issues with the understanding and a high rating indicates no issues with the understanding.
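- As a hedged end-to-end sketch of the example above, gathered records are normalized, tagged, and de-duplicated before a rating is produced on a low-to-high scale; the sample records, tags, expected count, and rating rule are placeholder assumptions.

```python
# Sketch of the gather -> pre-process -> rate flow for the worked example above:
# policies discovered for identifying engineering department assets and their operations.
gathered = [
    {"type": "Policy", "name": "Asset Identification Policy ", "version": "1.14"},
    {"type": "policy", "name": "asset identification policy", "version": "1.14"},   # duplicate
    {"type": "policy", "name": "asset operations policy", "version": "0.3"},
]

def preprocess(records):
    """Normalize, tag, and de-duplicate the gathered records."""
    seen, cleaned = set(), []
    for record in records:
        normalized = {k: str(v).strip().lower() for k, v in record.items()}   # normalize
        normalized["tags"] = ["discovered", "engineering department"]         # tag
        key = (normalized["type"], normalized["name"], normalized["version"])
        if key not in seen:                                                   # de-duplicate
            seen.add(key)
            cleaned.append(normalized)
    return cleaned

def rate(records, expected_count: int = 4) -> float:
    """Placeholder rating: fraction of expected policies actually found, on a 0-100 scale."""
    return 100.0 * min(len(records), expected_count) / expected_count

processed = preprocess(gathered)
print("rating:", rate(processed))   # a low rating indicates issues with the understanding
```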
- FIG. 30 is a schematic block diagram of another example of the functioning of an analysis system 10 evaluating a system element under test 91 .
- the functioning of the analysis system includes a deficiency perspective function 230 , a deficiency evaluation viewpoint function 231 , and an auto-correction function 233 .
- the deficiency perspective function 230 receives one or more ratings 219 and may also receive the data used to generate the ratings 219 . From these inputs, the deficiency perspective function 230 determines whether there is an understanding issue, an implementation issue, and/or an operation issue. For example, an understanding (analysis perspective) issue relates to a low understanding evaluation rating for a specific evaluation regarding policies (analysis metric) to identify (analysis category) assets (evaluation mode) of an engineering department (system elements) regarding operations (evaluation criteria) that the assets perform based on discovered data (analysis viewpoint).
- an implementation (analysis perspective) issue relates to a low implementation evaluation rating for a specific evaluation regarding implementation and/or use of policies (analysis metric) to identify (analysis category) assets (evaluation mode) of an engineering department (system elements) regarding operations (evaluation criteria) that the assets perform based on discovered data (analysis viewpoint).
- an operation (analysis perspective) issue relates to a low operation evaluation rating for a specific evaluation regarding consistent, reliable, and/or accurate mechanism(s) to identify (analysis category) assets (evaluation mode) of an engineering department (system elements) regarding operations (evaluation criteria) that the assets perform based on discovered data (analysis viewpoint) and on policies (analysis metric).
- the deficiency evaluation viewpoint function 231 determines whether the issue(s) is based on disclosed data, discovered data, and/or desired data. For example, an understanding issue may be based on a difference between disclosed data and discovered data.
- the disclosed data includes a policy outlining how to identify (analysis category) assets (evaluation mode) of an engineering department (system elements) regarding operations (evaluation criteria) that the assets perform, which is listed as version 1.12 with a last revision date of Oct. 2, 2020.
- the discovered data includes the same policy, but it has been updated to version 1.14 with a last revision date of Nov. 13, 2020.
- the deficiency evaluation viewpoint function identifies a deficiency 232 in the disclosed data as being an outdated policy.
- the disclosed data includes a policy outlining how to identify (analysis category) assets (evaluation mode) of an engineering department (system elements) regarding operations (evaluation criteria) that the assets perform.
- the disclosed data also shows an inconsistent use and/or application of the policy resulting in one or more assets not being properly identified.
- the deficiency evaluation viewpoint function identifies a deficiency 232 in the disclosed data as being inconsistent use and/or application of the policy.
- the auto-correct function 233 receives a deficiency 232 and interprets it to determine a deficiency type, i.e., a nature of the understanding issue, the implementation issue, and/or the operation issues. Continuing with the outdated policy example, the nature of the understanding issue is that there is a newer version of the policy. Since there is a newer version available, the auto-correct function 233 can update the policy to the newer version for the system (e.g., an auto-correction). In addition to making the auto-correction 235 , the analysis system creates an accounting 236 of the auto-correction (e.g., creates a record). The record includes an identity of the deficiency, date information, what auto-correction was done, how it was done, verification that it was done, and/or more or less data as may be desired for recording auto-corrections.
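- As a non-limiting sketch of the outdated-policy example above, the disclosed and discovered versions of a policy are compared, the deficiency is auto-corrected by adopting the newer version, and an accounting record of the auto-correction is created; the version values come from the example, while the record fields are assumptions.

```python
# Sketch: compare disclosed vs. discovered policy data, auto-correct an outdated policy,
# and create an accounting record of the auto-correction.
from datetime import date

disclosed  = {"policy": "asset identification", "version": "1.12", "revised": "2020-10-02"}
discovered = {"policy": "asset identification", "version": "1.14", "revised": "2020-11-13"}

def find_deficiency(disclosed: dict, discovered: dict):
    """Identify an outdated-policy deficiency in the disclosed data, if any."""
    if disclosed["version"] != discovered["version"]:
        return {"type": "outdated policy", "from": disclosed["version"], "to": discovered["version"]}
    return None

def auto_correct(disclosed: dict, deficiency: dict) -> dict:
    """Apply the newer version and return an accounting record of the auto-correction."""
    disclosed["version"] = deficiency["to"]
    return {
        "deficiency": deficiency["type"],
        "date": date.today().isoformat(),
        "action": f"updated policy to version {deficiency['to']}",
        "verified": disclosed["version"] == deficiency["to"],
    }

deficiency = find_deficiency(disclosed, discovered)
if deficiency:
    record = auto_correct(disclosed, deficiency)
    print(record)
```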
- a deficiency 232 is discovered that an asset exists in the engineering department that was not included in the disclosed data.
- This deficiency may include one or more related deficiencies.
- for example, a deficiency of design, a deficiency of build, a deficiency in oversight of asset installation, etc.
- the deficiencies of design, build, and/or installation oversight can be auto-corrected; the deficiency of an extra asset cannot.
- with regard to the deficiency of the extra asset, the analysis system generates a report regarding the extra asset and the related deficiencies.
- FIG. 31 is a diagram of an example of evaluation options of an analysis system 10 for evaluating a system element under test 91 .
- the evaluation options are shown in a three-dimensional tabular form.
- the rows include analysis perspective 213 options (e.g., understanding, implementation, and operation).
- the columns include analysis viewpoint 214 options (e.g., disclosed, discovered, and desired).
- the third dimension includes analysis output 240 options (e.g., ratings 219 , deficiencies in disclosed data, deficiencies in discovered data, deficiencies in disclosed to discovered data, deficiencies in disclosed to desired data, deficiencies in discovered to desired data, and auto-correct).
- the analysis system 10 can evaluate the system element under test 91 (e.g., system aspect) in one or more combinations of a row selection, a column selection, and/or a third dimension selection. For example, the analysis system performs an evaluation from an understanding perspective, a disclosed data viewpoint, and a ratings output. As another example, the analysis system performs an evaluation from an understanding perspective, all viewpoints, and a ratings output.
- FIG. 32 is a diagram of another example of evaluation options of an analysis system 10 for evaluating a system element under test 91 (e.g., system aspect).
- the evaluation options are shown in the form of a table.
- the rows are assets (physical and conceptual) and the columns are system functions.
- the analysis system 10 can evaluate the system element under test 91 (e.g., system aspect) in one or more combinations of a row selection and a column selection.
- the analysis system 10 can evaluate user HW with respect to business operations.
- the analysis system 10 can evaluate physical assets with respect to data flow.
- the analysis system 10 can evaluate user SW with respect to all system functions.
- FIG. 33 is a diagram of another example of evaluation options of an analysis system 10 for evaluating a system element under test 91 (e.g., system aspect).
- the evaluation options are shown in the form of a table.
- the rows are security functions and the columns are system functions.
- the analysis system 10 can evaluate the system element under test 91 (e.g., system aspect) in one or more combinations of a row selection and a column selection.
- the analysis system 10 can evaluate threat detection with respect to business operations.
- the analysis system 10 can evaluate all security functions with respect to data flow.
- the analysis system 10 can evaluate threat avoidance with respect to all system functions.
- FIG. 34 is a diagram of another example of evaluation options of an analysis system 10 for evaluating a system element under test 91 (e.g., system aspect).
- the evaluation options are shown in the form of a table.
- the rows are assets (physical and conceptual) and the columns are security functions.
- the analysis system 10 can evaluate the system element under test 91 (e.g., system aspect) in one or more combinations of a row selection and a column selection.
- the analysis system 10 can evaluate user HW with respect to threat recovery.
- the analysis system 10 can evaluate physical assets with respect to threat resolution.
- the analysis system 10 can evaluate user SW with respect to all security functions.
- FIG. 35 is a schematic block diagram of an embodiment of an analysis system 10 that includes one or more computing entities 16 , one or more databases 275 , one or more data extraction modules 80 , one or more system user interface modules 81 , and one or more remediation modules 257 .
- the computing entity(ies) 16 is configured to include a data input module 250 , a pre-processing module 251 , a data analysis module 252 , an analytics modeling module 253 , an evaluation processing module 254 , a data output module 255 , and a control module 256 .
- the database 275 , which includes one or more databases, stores the private data for a plurality of systems (e.g., systems A-x) and stores analytical data 270 of the analysis system 10 .
- the system 11 provides input 271 to the analysis system 10 via the system user interface module 81 .
- the system user interface module 81 provides a user interface for an administrator of the system 11 and provides a secure end-point of a secure data pipeline between the system 11 and the analysis system 10 . While the system user interface module 81 is part of the analysis system, it is loaded on and is executed on the system 11 .
- the administrator makes selections as to how the system is to be evaluated and the desired output from the evaluation. For example, the administrator selects evaluate system, which instructs the analysis system 10 to evaluate the system from most every, if not every, combination of system aspect (e.g., system element, system criteria, and system mode), evaluation aspect (e.g., evaluation perspective, evaluation viewpoint, and evaluation category), evaluation metric (e.g., process, policy, procedure, automation, documentation, and certification), and analysis output (e.g., an evaluation rating, deficiencies identified, and auto-correction of deficiencies). As another example, the administrator selects one or more system aspects, one or more evaluation aspects, one or more evaluation metrics, and/or one or more analysis outputs.
- the analysis system 10 receives the evaluation selections as part of the input 271 .
- a control module 256 interprets the input 271 to determine what part of the system is to be evaluated (e.g., system aspects), how the system is to be evaluated (e.g., evaluation aspects), the manner in which the system is to be evaluated (e.g., evaluation metrics), and/or the resulting evaluation output (e.g., an evaluation rating, a deficiency report, and/or auto-correction). From the interpretation of the input, the control module 256 generates data gathering parameters 263 , pre-processing parameters 264 , data analysis parameters 265 , and evaluation parameters 266 .
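- As a non-limiting illustration of this interpretation step, the following Python sketch splits administrator selections into the four downstream parameter sets; the field names and the particular partitioning are assumptions for illustration, not definitions of the actual parameters 263-266:

```python
# Hedged sketch: derive parameter sets from evaluation selections.
from dataclasses import dataclass

@dataclass
class EvaluationSelections:
    system_aspects: list       # e.g., ["system element", "system criteria"]
    evaluation_aspects: list   # e.g., ["understanding", "disclosed"]
    evaluation_metrics: list   # e.g., ["process", "policy"]
    analysis_outputs: list     # e.g., ["evaluation rating", "deficiencies"]

def interpret_input(selections: EvaluationSelections) -> dict:
    """Split the interpreted input into data gathering, pre-processing,
    data analysis, and evaluation parameter sets (assumed structure)."""
    return {
        "data_gathering_parameters": {"scope": selections.system_aspects,
                                      "topics": selections.evaluation_metrics},
        "pre_processing_parameters": {"normalize": True},
        "data_analysis_parameters": {"aspects": selections.evaluation_aspects,
                                     "metrics": selections.evaluation_metrics},
        "evaluation_parameters": {"outputs": selections.analysis_outputs},
    }
```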
- the control module 256 provides the data gathering parameters 263 to the data input module 250 .
- the data input module 250 interprets the data gathering parameters 263 to determine data to gather.
- the data gathering parameters 263 are specific to the evaluation to be performed by the analysis system 10 .
- for example, if the analysis system 10 is evaluating the understanding of the policies, processes, documentation, and automation regarding the assets built for an engineering department, then the data gathering parameters 263 would prescribe gathering data related to policies, processes, documentation, and automation regarding the assets built for the engineering department.
- the data input module 250 may gather data (e.g., retrieve, request, etc.) from a variety of sources. For example, the data input module 250 gathers data 258 from the data extraction module 80. In this example, the data input module 250 provides instructions to the data extraction module 80 regarding the data being requested. The data extraction module 80 pulls the requested data from system information 210, which may be centralized data of the system, system administration data, and/or data from assets of the system.
- the data input module 250 gathers data from one or more external data feeds 259 .
- a source of an external data feed includes one or more business associate computing devices 23 , one or more publicly available servers 27 , and/or one or more subscriber servers 28 .
- Other sources of external data feeds 259 include bot computing devices 25 and/or bad actor computing devices 26.
- the data input module 250 does not seek data inputs from bot computing devices 25 and/or bad actor computing devices 26 except under certain circumstances involving specific types of cybersecurity risks.
- the data input module 250 gathers system proficiency data 260 from one or more system proficiency resources 22 .
- the data input module 250 addresses one or more system proficiency resources 22 to obtain the desired system proficiency data 260.
- system proficiency data 260 includes information regarding best-in-class practices (for system requirements, for system design, for system implementation, and/or for system operation), governmental and/or regulatory requirements, security risk awareness and/or risk remediation information, security risk avoidance, performance optimization information, system development guidelines, software development guidelines, hardware requirements, networking requirements, networking guidelines, and/or other system proficiency guidance.
- the data input module 250 gathers stored data 261 from the database 275 .
- the stored data 261 is previously stored data that is unique to the system 11 , is data from other systems, is previously processed data, is previously stored system proficiency data, and/or is previously stored data that assists in the current evaluation of the system.
- the data input module 250 provides the gathered data to the pre-processing module 251 .
- the pre-processing module 251 processes the gathered data to produce pre-processed data 267 .
- the pre-processed data 267 may be stored in the database 275 and later retrieved as stored data 261 .
- the analysis modeling module 253 retrieves stored data 261 and/or stored analytics 262 from the database 275 .
- the analysis modeling module 253 operates to increase the artificial intelligence of the analysis system 10 .
- the analysis modeling module 253 evaluates stored data from one or more systems in a variety of ways to test the evaluation processes of the analysis system.
- the analysis modeling module 253 models the evaluation of understanding of the policies, processes, documentation, and automation regarding the assets built for an engineering department across multiple systems to identify commonalities and/or deviations.
- the analysis modeling module 253 interprets the commonalities and/or deviations to adjust parameters of the evaluation of understanding and models how the adjustments affect the evaluation of understanding. If the adjustments have a positive effect, the analysis modeling module 253 stores them as analytics 262 and/or analysis modeling 268 in the database 275 .
- the data analysis module 252 receives the pre-processed data 267 and the data analysis parameters 265, and may further receive optional analysis modeling data 268.
- the data analysis parameters 265 include identity of selected evaluation categories (e.g., identify, protect, detect, respond, and recover), identity of selected evaluation sub-categories, identity of selected evaluation sub-sub categories, identity of selected analysis metrics (e.g., process, policy, procedure, automation, certification, and documentation), grading parameters for the selected analysis metrics (e.g., a scoring scale for each type of analysis metric), identity of selected analysis perspective (e.g., understanding, implementation, operation, and self-analysis), and/or identity of selected analysis viewpoint (e.g., disclosed, discovered, and desired).
- the data analysis module 252 generates one or more ratings 219 for the pre-processed data 267 based on the data analysis parameters 265 .
- the data analysis module 252 may adjust the generation of the one or more ratings 219 based on the analysis modeling data 268.
- the data analysis module 252 evaluates the understanding of the policies, processes, documentation, and automation regarding the assets built for an engineering department based on the pre-processed data 267 to produce at least one evaluation rating 219 .
- the analysis modeling 268 is regarding the evaluation of understanding of the policies, processes, documentation, and automation regarding the assets built for an engineering department of a plurality of different organizations operating on a plurality of different systems.
- the modeling indicates that if processes are well understood, the understanding of the policies is less significant in the overall understanding.
- the data analysis module 252 may adjust its evaluation rating of the understanding to a more favorable rating if the pre-processed data 267 correlates with the modeling (e.g., good understanding of processes).
- the data analysis module 252 provides the rating(s) 219 to the data output module 255 and to the evaluation processing module 254 .
- the data output module 255 provides the rating(s) 219 as an output 269 to the system user interface module 81 .
- the system user interface module 81 provides a graphical rendering of the rating(s) 219 .
- the evaluation processing module 254 processes the rating(s) 219 based on the evaluation parameters 266 to identify deficiencies 232 and/or to determine auto-corrections 235 .
- the evaluation parameters 266 provide guidance on how to evaluate the rating(s) 219 and whether to obtain data (e.g., pre-processed data, stored data, etc.) to assist in the evaluation.
- the evaluation guidance includes how deficiencies are to be identified. For example, identify the deficiencies based on the disclosed data, based on the discovered data, based on a difference between the disclosed and discovered data, based on a difference between the disclosed and desired data, and/or based on a difference between the discovered and desired data.
- the evaluation guidance further includes whether auto-correction is enabled.
- the evaluation parameters 266 may further include deficiency parameters, which provide a level of tolerance between the disclosed, discovered, and/or desired data when determining deficiencies.
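- As a non-limiting illustration, the following Python sketch flags deficiencies from differences between disclosed, discovered, and desired data subject to a tolerance; the dictionary-of-scores data shape and the fractional tolerance are assumptions for illustration:

```python
# Minimal sketch: a deficiency is flagged when the observed value falls
# short of the desired value by more than the tolerated fraction.
def identify_deficiencies(disclosed, discovered, desired, tolerance=0.1):
    deficiencies = []
    for item, desired_value in desired.items():
        # prefer discovered data, fall back to disclosed data
        observed = discovered.get(item, disclosed.get(item, 0.0))
        if desired_value and (desired_value - observed) / desired_value > tolerance:
            deficiencies.append({"item": item, "desired": desired_value,
                                 "observed": observed})
    return deficiencies

print(identify_deficiencies(disclosed={"patching": 0.8},
                            discovered={"patching": 0.5},
                            desired={"patching": 0.9, "backups": 0.9}))
```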
- the evaluation processing module 254 provides deficiencies 232 and/or the auto-corrections 235 to the data output module 255 .
- the data output module 255 provides the deficiencies 232 and/or the auto-corrections 235 as an output 269 to the system user interface module 81 and to the remediation module 257 .
- the system user interface module 81 provides a graphical rendering of the deficiencies 232 and/or the auto-corrections 235 .
- the remediation module 257 interprets the deficiencies 232 and the auto-corrections 235 to identify auto-corrections to be performed within the system. For example, if a deficiency is a computing device having an outdated user software application, the remediation module 257 coordinates obtaining a current copy of the user software application, uploading it on the computing device, and updating maintenance logs.
- FIG. 36 is a schematic block diagram of an embodiment of a portion of an analysis system 10 coupled to a portion of the system 11 .
- the data output module 255 of the analysis system 10 is coupled to a plurality of remediation modules 257 - 1 through 257 - n.
- Each remediation module 257 is coupled to one or more system assets 280 - 1 through 280 - n.
- a remediation module 257 receives a corresponding portion of the output 269 .
- remediation module 257 - 1 receives output 269 - 1 , which is regarding an evaluation rating, deficiency, and/or an auto-correction of system asset 280 - 1 .
- Remediation module 257 - 1 may auto-correct a deficiency of the system asset or a system element thereof.
- the remediation module 257 - 1 may quarantine the system asset or system element thereof if the deficiency cannot be auto-corrected and the deficiency exposes the system to undesired risks, undesired liability, and/or undesired performance degradation.
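- The auto-correct-or-quarantine decision can be illustrated with the following Python sketch; the dictionary keys are assumed labels for illustration and are not defined terms of the analysis system:

```python
# Hedged sketch of a remediation module's decision for one deficiency.
def remediate(deficiency: dict) -> str:
    if deficiency.get("auto_correctable"):
        # e.g., obtain a current copy of an outdated user software
        # application, load it onto the computing device, update logs
        return "auto-correct"
    if deficiency.get("exposes_undesired_risk"):
        # cannot be auto-corrected and exposes the system to undesired
        # risk, liability, or performance degradation, so isolate it
        return "quarantine asset (or system element)"
    return "report for manual remediation"

print(remediate({"asset": "280-1", "auto_correctable": False,
                 "exposes_undesired_risk": True}))
```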
- FIG. 37 is a schematic block diagram of another embodiment of a portion of an analysis system 10 coupled to a portion of the system 11 .
- the data input module 250 of the analysis system 10 is coupled to a plurality of data extraction modules 80 - 1 through 80 - n.
- Each data extraction module 80 is coupled to a system data source 290 of the system 11 .
- Each of the system data sources produce system information 210 regarding a corresponding portion of the system.
- a system data source 290 - 1 through 290 - n may be an Azure EventHub, Cisco Advanced Malware Protection (AMP), Cisco Email Security Appliance (ESA), Cisco Umbrella, NetFlow, and/or Syslog.
- a system data source may be a system asset, a system element, and/or a storage device storing system information 210 .
- An extraction data migration module 293 coordinates the collection of system information 210 as extracted data 291 - 1 through 291 - n.
- An extraction data coordination module 292 coordinates the forwarding of the extracted data 291 as data 258 to the data input module 250 .
- FIG. 38 is a schematic block diagram of an embodiment of a data extraction module 80 of an analysis system 10 coupled to a system 11 .
- the data extraction module 80 includes one or more tool interface modules 311, one or more processing modules 312, and one or more network interfaces 313.
- the network interface 313 provides a network connection that allows the data extraction module 80 to be coupled to the one or more computing entities 16 of the analysis system 10.
- the tool interface 311 allows the data extraction module 80 to interact with tools of the system 11 to obtain system information from system data sources 290 .
- the system 11 includes one or more tools that can be accessed by the data extraction module 80 to obtain system information from one or more data sources 290 - 1 through 290 - n.
- the tools include one or more data segmentation tools 300 , one or more boundary detection tools 301 , one or more data protection tools 302 , one or more infrastructure management tools 303 , one or more encryption tools 304 , one or more exploit protection tools 305 , one or more malware protection tools 306 , one or more identity management tools 307 , one or more access management tools 308 , one or more system monitoring tools, and/or one or more vulnerability management tools 310 .
- a system tool may also be an infrastructure management tool, a network monitoring tool, a network strategy and planning tool, a network managing tool, a Simple Network Management Protocol (SNMP) tool, a telephony monitoring tool, a firewall monitoring tool, a bandwidth monitoring tool, an IT asset inventory management tool, a network discovery tool, a network asset discovery tool, a software discovery tool, a security discovery tool, an infrastructure discovery tool, Security Information & Event Management (SIEM) tool, a data crawler tool, and/or other type of tool to assist in discovery of assets, functions, security issues, implementation of the system, and/or operation of the system.
- the tool interface 311 engages a system tool to retrieve system information. For example, the tool interface 311 engages the identity management tool to identify assets in the engineering department.
- the processing module 312 coordinates requests from the analysis system 10 and responds to the analysis system 10 .
- FIG. 39 is a schematic block diagram of another embodiment of an analysis system 10 that includes one or more computing entities 16 , one or more databases 275 , one or more data extraction modules 80 , and one or more system user interface modules 81 .
- the computing entity(ies) 16 is configured to include a data input module 250 , a pre-processing module 251 , a data analysis module 252 , an analytics modeling module 253 , a data output module 255 , and a control module 256 .
- the database 275, which includes one or more databases, stores the private data for a plurality of systems (e.g., systems A-x) and stores analytical data 270 of the analysis system 10.
- This embodiment operates similarly to the embodiment of FIG. 35 with the removal of the evaluation processing module 254, which produces deficiencies 232 and auto-corrections 235, and the removal of the remediation modules 257.
- this analysis system 10 produces evaluation ratings 219 as the output 269 .
- FIG. 40 is a schematic block diagram of another embodiment of an analysis system 10 that is similar to the embodiment of FIG. 39 .
- This embodiment does not include a pre-processing module 251 .
- the data collected by the data input module 250 is provided directly to the data analysis module 252 .
- FIG. 41 is a schematic block diagram of an embodiment of a data analysis module 252 of an analysis system 10 .
- the data analysis module 252 includes a data module 321 and an analyze & score module 336.
- the data module 321 includes a data parse module 320 , one or more data storage modules 322 - 334 , and a source data matrix 335 .
- a data storage module 322 - 334 may be implemented in a variety of ways.
- a data storage module is a buffer.
- a data storage module is a section of memory ( 45 , 56 , 57 , and/or 62 of the FIG. 2 series) of a computing device (e.g., an allocated, or ad hoc, addressable section of memory).
- a data storage module is a storage unit (e.g., a computing device used primarily for storage).
- a data storage module is a section of a database (e.g., an allocated, or ad hoc, addressable section of a database).
- the data module 321 operates to provide the analyze & score module 336 with source data 337 selected from incoming data based on one or more data analysis parameters 265 .
- the data analysis parameter(s) 265 indicate(s) how the incoming data is to be parsed (if at all) and how it is to be stored within the data storage modules 322 - 334 .
- a data analysis parameter 265 includes system aspect storage parameters 345 , evaluation aspect storage parameters 346 , and evaluation metric storage parameters 347 .
- a system aspect storage parameter 345 may be null or includes information to identify one or more system aspects (e.g., system element, system criteria, and system mode), how the data relating to system aspects is to be parsed, and how the system aspect parsed data is to be stored.
- An evaluation aspect storage parameter 346 may be null or includes information to identify one or more evaluation aspects (e.g., evaluation perspective, evaluation viewpoint, and evaluation category), how the data relating to evaluation aspects is to be parsed, and how the evaluation aspect parsed data is to be stored.
- An evaluation metric storage parameter 347 may be null or includes information to identify one or more evaluation metrics (e.g., process, policy, procedure, certification, documentation, and automation), how the data relating to evaluation metrics is to be parsed, and how the evaluation metric parsed data is to be stored. Note that the data module 321 interprets the data analysis parameters 265 collectively such that parsing and storage are consistent with the parameters.
- the data parse module 320 parses incoming data in accordance with the system aspect storage parameters 345, evaluation aspect storage parameters 346, and evaluation metric storage parameters 347, which generally correspond to what part of the system is being evaluated, how the system is being evaluated, the manner of evaluation, and/or a desired analysis output. As such, incoming data may be parsed in a variety of ways.
- the data storage modules 322 - 334 are assigned to store parsed data in accordance with the storage parameters 345 - 347 .
- the incoming data, which includes pre-processed data 267, other external feed data 259, data 258 received via a data extraction module, stored data 261, and/or system proficiency data 260, is parsed based on system criteria (of the system aspect) and evaluation viewpoint (of the evaluation aspect).
- the incoming data is parsed based on a combination of one or more system aspects (e.g., system elements, system criteria, and system mode) or sub-system aspects thereof, one or more evaluation aspects (e.g., evaluation perspective, evaluation viewpoint, and evaluation category) or sub-evaluation aspects thereof, and/or one or more evaluation rating metrics (e.g., process, policy, procedure, certification, documentation, and automation) or sub-evaluation rating metrics thereof.
- the incoming data is parsed based on the evaluation category of identify and its sub-categories of asset management, business environment, governance, risk assessment, risk management, access control, awareness & training, and/or data security.
- the incoming data is not parsed, or is minimally parsed.
- the data is parsed based on timestamps: data from one time period (e.g., a day) is parsed from data of another time period (e.g., a different day).
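- As a non-limiting illustration of parsing per the storage parameters 345-347, the following Python sketch groups incoming records by whichever parse keys are supplied and leaves a dimension unparsed when its parameter is null; the record shape and key names are assumptions for illustration:

```python
# Minimal sketch: bucket incoming records by the selected parse dimensions.
from collections import defaultdict

def parse_records(records, system_aspect_key=None,
                  evaluation_aspect_key=None, evaluation_metric_key=None):
    keys = [k for k in (system_aspect_key, evaluation_aspect_key,
                        evaluation_metric_key) if k]
    buckets = defaultdict(list)
    for record in records:
        # with no keys supplied, everything lands in a single "all" bucket
        bucket_id = tuple(record.get(k, "unparsed") for k in keys) or ("all",)
        buckets[bucket_id].append(record)
    return dict(buckets)

# Example: parse on system criteria and evaluation viewpoint only.
sample = [{"system_criteria": "system design", "viewpoint": "disclosed"},
          {"system_criteria": "system design", "viewpoint": "discovered"}]
print(parse_records(sample, system_aspect_key="system_criteria",
                    evaluation_aspect_key="viewpoint"))
```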
- the source data matrix 335, which may be a configured processing module, retrieves source data 337 from the data storage modules 322-334.
- the selection corresponds to the analysis being performed by the analyze & score module 336 .
- the analyze & score module 336 is evaluating the understanding of the policies, processes, documentation, and automation regarding the assets built for the engineering department, then the source data 337 would be data specific to policies, processes, documentation, and automation regarding the assets built for the engineering department.
- the analyze & score module 336 generates one or more ratings 219 for the source data 337 in accordance with the data analysis parameters 265 and analysis modeling 268 .
- the data analysis parameters 265 include system aspect analysis parameters 342, evaluation aspect analysis parameters 343, and evaluation metric analysis parameters 344.
- the analyze & score module 336 is discussed in greater detail with reference to FIG. 42 .
- FIG. 42 is a schematic block diagram of an embodiment of an analyze and score module 336 that includes a matrix module 341 and a scoring module 348.
- the matrix module 341 processes an evaluation mode matrix, an evaluation perspective matrix, an evaluation viewpoint matrix, and an evaluation categories matrix to produce a scoring input.
- the scoring module 348 includes an evaluation metric matrix to process the scoring input data in accordance with the analysis modeling 268 to produce the rating(s) 219 .
- the matrix module 341 configures the matrixes based on the system aspect analysis parameters 342 and the evaluation aspect analysis parameters 343 to process the source data 337 to produce the scoring input data.
- the system aspect analysis parameters 342 and the evaluation aspect analysis parameters 343 indicate assets as the evaluation mode, understanding as the evaluation perspective, discovered as the evaluation viewpoint, and identify as the evaluation category.
- the matrix module 341 communicates with the source data matrix module 335 of the data module 321 to obtain source data 337 relevant to assets, understanding, discovered, and identify.
- the matrix module 341 may organize the source data 337 using an organization scheme (e.g., by asset type, by evaluation metric type, by evaluation sub-categories, etc.) or keep the source data 337 as a collection of data.
- the matrix module 341 provides the scoring input data as a collection of data or as organized data to the scoring module 348.
- the scoring module 348 receives the scoring input data and evaluates it in accordance with the evaluation metric analysis parameters 344 and the analysis modeling 268 to produce the rating(s) 219.
- the evaluation metric analysis parameters 344 indicate analyzing the scoring input data with respect to processes.
- the analysis modeling 268 provides a scoring mechanism for evaluating the scoring input data with respect to processes to the scoring module 348.
- the analysis modeling 268 includes six levels regarding processes and a corresponding numerical rating: none (e.g., 0), inconsistent (e.g., 10), repeatable (e.g., 20), standardized (e.g., 30), measured (e.g., 40), and optimized (e.g., 50).
- the analysis modeling 268 includes analysis protocols for interpreting the scoring input data to determine its level and corresponding rating. For example, if there are no processes regarding identifying assets of the discovered data, then an understanding level of processes would be none (e.g., 0), since there are no processes. As another example, if there are some processes regarding identifying assets of the discovered data, but there are gaps in the processes (e.g., they identify some assets, but not all, or do not produce consistent results), then an understanding level of processes would be inconsistent (e.g., 10). To determine if there are gaps in the processes, the scoring module 348 executes the processes of the discovered data to identify assets. The scoring module 348 also executes one or more asset discovery tools to identify assets and then compares the two results. If there are inconsistencies in the identified assets, then there are gaps in the processes.
- as a further example, the scoring module determines whether the processes regarding identifying assets of the discovered data are repeatable (e.g., produce consistent results, but there are variations in the processes from process to process, and/or the processes are not all regulated) or standardized (e.g., produce consistent results, there are no appreciable variations in the processes from process to process, and/or the processes are regulated). If the processes are repeatable but not standardized, the scoring module establishes an understanding level of the processes as repeatable (e.g., 20).
- the scoring module determines whether the processes are measured (e.g., precise, exact, and/or calculated to the task of identifying assets). If not, the scoring module establishes an understanding level of the processes as standardized (e.g., 30).
- the scoring module determines whether the processes are optimized (e.g., up-to-date and improvement assessed on a regular basis as part of system protocols). If not, the scoring module establishes an understanding level of the processes as measured (e.g., 40). If so, the scoring module establishes an understanding level of the processes as optimized (e.g., 50).
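- The six-level process rating described above can be summarized with the following Python sketch; the boolean inputs stand in for the checks the scoring module performs (e.g., executing the discovered processes and comparing their results against asset discovery tool results), and the 0-50 values mirror the example ratings:

```python
# Sketch of the example maturity ladder: none, inconsistent, repeatable,
# standardized, measured, optimized.
def rate_process_understanding(has_processes, consistent, standardized,
                               measured, optimized):
    if not has_processes:
        return 0    # none
    if not consistent:
        return 10   # inconsistent: gaps between process and tool results
    if not standardized:
        return 20   # repeatable: consistent, but process-to-process variation
    if not measured:
        return 30   # standardized: regulated, but not calculated to the task
    if not optimized:
        return 40   # measured: precise, but not routinely improved
    return 50       # optimized

print(rate_process_understanding(True, True, False, False, False))  # 20
```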
- FIG. 43 is a diagram of an example of system aspect, evaluation aspect, evaluation rating metric, and analysis system output options of an analysis system 10 for analyzing a system 11 , or portion thereof.
- the system aspect corresponds to what part of the system is to be evaluated by the analysis system.
- the evaluation aspect indicates how the system aspect is to be evaluated.
- the evaluation rating metric indicates the manner of evaluation of the system aspect in accordance with the evaluation aspect.
- the analysis system output indicates the type of output to be produced by the analysis system based on the evaluation of the system aspect in accordance with the evaluation aspect as per the evaluation rating metric.
- the system aspect includes system elements, system criteria, and system modes.
- a system element includes one or more system assets, each of which is a physical asset and/or a conceptual asset.
- a physical asset is a computing entity, a computing device, a user software application, a system software application (e.g., operating system, etc.), a software tool, a network software application, a security software application, a system monitoring software application, and the like.
- a conceptual asset is a hardware architecture (e.g., identification of a system's physical components, their capabilities, and their relationship to each other) and/or sub-architectures thereof and a software architecture (e.g., fundamental structures for the system's software, their requirements, and inter-relational operations) and sub-architectures thereof.
- a system element and/or system asset is identifiable in a variety of ways. For example, it can be identified by an organization identifier (ID), which would be associated with most, if not all, system elements of a system.
- a system element and/or system asset can be identified by a division ID, where the division is one of a plurality of divisions in the organization.
- a system element and/or system asset can be identified by a department ID, where the department is one of a plurality of departments in a division.
- a system element and/or system asset can be identified by a group ID, where the group is one of a plurality of groups in a department.
- a system element and/or system asset can be identified by a sub-group ID, where the sub-group is one of a plurality of sub-groups in a group.
- a collection of system elements and/or system assets can be selected for evaluation by using an organization ID, a division ID, a department ID, a group ID, or a sub-group ID.
- a system element and/or system asset may also be identified based on a user ID, a serial number, vendor data, an IP address, etc.
- a computing device has a serial number and vendor data.
- the computing device can be identified for evaluation by its serial number and/or the vendor data.
- a software application has a serial number and vendor data.
- the software application can be identified for evaluation by its serial number and/or the vendor data.
- an identifier of one system element and/or system asset may link to one or more other system elements and/or system assets.
- a computing device has a device ID, a user ID, and/or a serial number to identify it.
- the computing device also includes a plurality of software applications, each with its own serial number.
- the software identifiers are linked to the computing device identifier since the software is loaded on the computing device. This type of an identifier allows a single system asset to be identified for evaluation.
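- The linked-identifier scheme can be illustrated with the following Python sketch; the identifier values and field names are hypothetical examples only:

```python
# Minimal sketch: a computing device carries organization, division,
# department, and group identifiers plus its own IDs, and its installed
# software links back to it through the device ID.
computing_device = {
    "device_id": "dev-001", "serial_number": "SN-12345", "user_id": "u-42",
    "organization_id": "org-1", "division_id": "div-2",
    "department_id": "dept-eng", "group_id": "grp-7",
}
software_applications = [
    {"serial_number": "SW-777", "vendor": "vendor-a", "device_id": "dev-001"},
    {"serial_number": "SW-888", "vendor": "vendor-b", "device_id": "dev-001"},
]

def select_assets(assets, **scope):
    """Select assets for evaluation by any identifier scope, e.g., a
    department ID, a serial number, or a linked device ID."""
    return [asset for asset in assets
            if all(asset.get(key) == value for key, value in scope.items())]

print(select_assets([computing_device], department_id="dept-eng"))
print(select_assets(software_applications, device_id="dev-001"))
```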
- the system criteria includes information regarding the development, operation, and/or maintenance of the system 11 .
- a system criteria is a guideline, a system requirement, a system design component, a system build component, the system, and system operation. Guidelines, system requirements, system design, system build, and system operation were discussed with reference to FIG. 25 .
- the system mode indicates whether the assets of the system, the system functions of the system, and/or the security functions of the system are to be evaluated. Assets, system functions, and security functions have been previously discussed with reference to one or more of FIGS. 7-24 and 32-34.
- the evaluation aspect, which indicates how the system aspect is to be evaluated, includes evaluation perspective, evaluation viewpoint, and evaluation category.
- the evaluation perspective includes understanding (e.g., how well the system is known, should be known, etc.); implementation, which includes design and/or build (e.g., how well is the system designed, how well should it be designed); system performance and/or system operation (e.g., how well does the system perform and/or operate, how well should it perform and/or operate); and self-analysis (e.g., how self-aware is the system, how self-healing is the system, how self-updating is the system).
- the evaluation viewpoint includes disclosed data, discovered data, and desired data.
- Disclosed data is the known data of the system at the outset of an analysis, which is typically supplied by a system administrator and/or is obtained from data files of the system.
- Discovered data is the data discovered about the system by the analysis system during the analysis.
- Desired data is the data obtained by the analysis system from system proficiency resources regarding desired guidelines, system requirements, system design, system build, and/or system operation. Differences in disclosed, discovered, and desired data are evaluated to support generating an evaluation rating, to identify deficiencies, and/or to determine and provide auto-corrections.
- the evaluation category includes an identify category, a protect category, a detect category, a respond category, and a recover category.
- the identify category is regarding identifying assets, system functions, and/or security functions of the system
- the protect category is regarding protecting assets, system functions, and/or security functions of the system from issues that may adversely affect them
- the detect category is regarding detecting issues that may adversely affect, or have adversely affected, assets, system functions, and/or security functions of the system
- the respond category is regarding responding to issues that may adversely affect, or have adversely affected, assets, system functions, and/or security functions of the system
- the recover category is regarding recovering from issues that have adversely affected assets, system functions, and/or security functions of the system.
- Each category includes one or more sub-categories and each sub-category may include one or more sub-sub categories as discussed with reference to FIGS. 44-49 .
- the evaluation rating metric includes process, policy, procedure, certification, documentation, and automation.
- the evaluation rating metric may include more or fewer topics.
- the analysis system output options include evaluation rating, deficiency identification, and deficiency auto-correction.
- the analysis system can analyze a system in thousands of combinations, or more.
- the analysis system 10 could provide an evaluation rating for the entire system with respect to its vulnerability to cyber-attacks.
- the analysis system 10 could also identify deficiencies in the system's cybersecurity processes, policies, documentation, implementation, operation, assets, and/or security functions based on the evaluation rating.
- the analysis system 10 could further auto-correct at least some of the deficiencies in the system's cybersecurity processes, policies, documentation, implementation, operation, assets, and/or security functions.
- the analysis system 10 could evaluate the system's requirements for proper use of software (e.g., authorized to use, valid copy, current version) by analyzing every computing device in the system as to the system's software use requirements. From this analysis, the analysis system generates an evaluation rating.
- the analysis system 10 could also identify deficiencies in the compliance with the system's software use requirements (e.g., unauthorized use, invalid copy, outdated copy). The analysis system 10 could further auto-correct at least some of the deficiencies in compliance with the system's software use requirements (e.g., remove invalid copies, update outdated copies).
- FIG. 44 is a diagram of another example of system aspects, evaluation aspects, evaluation rating metrics, and analysis system output options of an analysis system 10.
- This diagram is similar to FIG. 43 with the exception that this figure illustrates sub-categories and sub-sub categories.
- Each evaluation category includes sub-categories, which, in turn, include their own sub-sub categories.
- the various categories, sub-categories, and sub-sub categories correspond to the categories, sub-categories, and sub-sub categories identified in the "Framework for Improving Critical Infrastructure Cybersecurity", Version 1.1, Apr. 16, 2018, by the National Institute of Standards and Technology (NIST).
- FIG. 45 is a diagram of an example of an identification evaluation category that includes a plurality of sub-categories and each sub-category includes its own plurality of sub-sub-categories.
- the identify category includes the sub-categories of asset management, business environment, governance, risk assessment, risk management, access control, awareness & training, and data security.
- the asset management sub-category includes the sub-sub categories of HW inventoried, SW inventoried, data flow mapped out, external systems cataloged, resources have been prioritized, and security roles have been established.
- the business environment sub-category includes the sub-sub categories of supply chain roles defined, industry critical infrastructure identified, business priorities established, critical services identified, and resiliency requirements identified.
- the governance sub-category includes the sub-sub categories of security policies are established, security factors aligned, and legal requirements are identified.
- the risk assessment sub-category includes the sub-sub categories of vulnerabilities identified, external sources are leveraged, threats are identified, business impacts are identified, risk levels are identified, and risk responses are identified.
- the risk management sub-category includes the sub-sub categories of risk management processes are established, risk tolerances are established, and risk tolerances are tied to business environment.
- the access control sub-category includes the sub-sub categories of remote access control is defined, permissions are defined, and network integrity is defined.
- the awareness & training sub-category includes the sub-sub categories of users are trained, user privileges are known, third party responsibilities are known, executive responsibilities are known, and IT and security responsibilities are known.
- the data security sub-category includes the sub-sub categories of data at rest protocols are established, data in transit protocols are established, formal asset management protocols are established, adequate capacity of the system is established, data leak prevention protocols are established, integrity checking protocols are established, and use and development separation protocols are established.
- FIG. 46 is a diagram of an example of a protect evaluation category that includes a plurality of sub-categories and each sub-category includes its own plurality of sub-sub-categories.
- the protect category includes the sub-categories of information protection processes and procedures, maintenance, and protective technology.
- the information protection processes and procedures sub-category includes the sub-sub categories of baseline configuration of IT/industrial controls are established, system life cycle management is established, configuration control processes are established, backups of information are implemented, policy & regulations for physical operation environment are established, improving protection processes are established, communication regarding effective protection technologies is embraced, response and recovery plans are established, cybersecurity is included in human resources, and vulnerability management plans are established.
- the maintenance sub-category includes the sub-sub categories of system maintenance & repair of organizational assets programs are established and remote maintenance of organizational assets is established.
- the protective technology sub-category includes the sub-sub-categories of audit and recording policies are practiced, removable media is protected & use policies are established, access to systems and assets is controlled, and communications and control networks are protected.
- FIG. 47 is a diagram of an example of a detect evaluation category that includes a plurality of sub-categories and each sub-category includes its own plurality of sub-sub-categories.
- the detect category includes the sub-categories of anomalies and events, security continuous monitoring, and detection processes.
- the anomalies and events sub-category includes the sub-sub categories of baseline of network operations and expected data flows are monitored, detected events are analyzed, event data are aggregated and correlated, impact of events is determined, and incident alert thresholds are established.
- the security continuous monitoring sub-category includes the sub-sub categories of network is monitored to detect potential cybersecurity attacks, physical environment is monitored for cybersecurity events, personnel activity is monitored for cybersecurity events, malicious code is detected, unauthorized mobile code is detected, external service provider activity is monitored for cybersecurity events, monitoring for unauthorized personnel, connections, devices, and software is performed, and vulnerability scans are performed.
- the detection processes sub-category includes the sub-sub categories of roles and responsibilities for detection are defined, detection activities comply with applicable requirements, detection processes are tested, event detection information is communicated, and detection processes are routinely improved.
- FIG. 48 is a diagram of an example of a respond evaluation category that includes a plurality of sub-categories and each sub-category includes its own plurality of sub-sub-categories.
- the respond category includes the sub-categories of response planning, communications, analysis, mitigation, and improvements.
- the response planning sub-category includes the sub-sub category of response plan is executed during and/or after an event.
- the communications sub-category includes the sub-sub categories of personnel roles and order of operation are established, events are reported consistent with established criteria, information is shared consistently per the response plan, coordination with stakeholders is consistent with the response plan, and voluntary information is shared with external stakeholders.
- the analysis sub-category includes the sub-sub categories of notifications from detection systems are investigated, impact of the incident is understood, forensics are performed, and incidents are categorized per response plan.
- the mitigation sub-category includes the sub-sub categories of incidents are contained, incidents are mitigated, and newly identified vulnerabilities are processed.
- the improvements sub-category includes the sub-sub categories of response plans incorporate lessons learned, and response strategies are updated.
- FIG. 49 is a diagram of an example of a recover evaluation category that includes a plurality of sub-categories and each sub-category includes its own plurality of sub-sub-categories.
- the recover category includes the sub-categories of recovery plan, improvements, and communication.
- the recovery plan sub-category includes the sub-sub category of recovery plan is executed during and/or after an event.
- the improvement sub-category includes the sub-sub categories of recovery plans incorporate lessons learned and recovery strategies are updated.
- the communications sub-category includes the sub-sub categories of public relations are managed, reputation after an event is repaired, and recovery activities are communicated.
- FIG. 50 is a diagram of an example of system aspects, evaluation aspects, evaluation rating metrics, and analysis system output options of an analysis system 10 for analyzing a system 11, or portion thereof.
- the analysis system 10 is evaluating the understanding of the guidelines for identifying assets, protecting the assets from issues, detecting issues that may affect or are affecting the assets, responding to issues that may affect or are affecting the assets, and recovering from issues that affected the assets of a department based on disclosed data.
- the analysis system 10 obtains disclosed data from the system regarding the guidelines associated with the assets of the department. From the disclosed data, the analysis system renders an evaluation rating for the understanding of the guidelines for identifying assets. The analysis system renders a second evaluation rating for the understanding of the guidelines regarding protection of the assets from issues. The analysis system renders a third evaluation rating for the understanding of the guidelines regarding detection of issues that may affect or are affecting the assets.
- the analysis system renders a fourth evaluation rating for the understanding of the guidelines regarding responding to issues that may affect or are affecting the assets.
- the analysis system renders a fifth evaluation rating for the understanding of the guidelines regarding recovery from issues that affected the assets of a department based on disclosed data.
- the analysis system may render an overall evaluation rating for the understanding of the guidelines based on the first through fifth evaluation ratings.
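- As a non-limiting illustration, the first through fifth evaluation ratings could be aggregated into an overall evaluation rating as in the following Python sketch; the equal weighting and the example values are assumptions for illustration:

```python
# Minimal sketch: weighted aggregation of per-category ratings.
def overall_rating(category_ratings, weights=None):
    weights = weights or {category: 1.0 for category in category_ratings}
    total_weight = sum(weights[c] for c in category_ratings)
    return sum(category_ratings[c] * weights[c]
               for c in category_ratings) / total_weight

ratings = {"identify": 40, "protect": 30, "detect": 20,
           "respond": 30, "recover": 10}
print(overall_rating(ratings))  # 26.0 with equal weights
```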
- the analysis system 10 evaluates the understanding of guidelines used to determine what assets should be included in the department, how the assets should be protected from issues, how issues that may affect or are affecting the assets are detected, how to respond to issues that may affect or are affecting the assets, and how the assets will recover from issues that may affect or are affecting them based on disclosed data.
- the analysis system renders an evaluation rating for the understanding of the guidelines regarding what assets should be in the department.
- the analysis system renders a second evaluation rating for the understanding of the guidelines regarding how the assets should be protected from issues.
- the analysis system renders a third evaluation rating for the understanding of the guidelines regarding how to detect issues that may affect or are affecting the assets.
- the analysis system renders a fourth evaluation rating for the understanding of the guidelines regarding how to respond to issues that may affect or are affecting the assets.
- the analysis system renders a fifth evaluation rating for the understanding of the guidelines regarding how to recover from issues that affected the assets of a department based on disclosed data.
- the analysis system may render an overall evaluation rating for the understanding based on the first through fifth evaluation ratings.
- FIG. 51 is a diagram of an example of system aspects, evaluation aspects, evaluation rating metrics, and analysis system output options of an analysis system 10 for analyzing a system 11, or portion thereof.
- the analysis system 10 is evaluating the understanding of the system design for identifying assets, protecting the assets from issues, detecting issues that may affect or are affecting the assets, responding to issues that may affect or are affecting the assets, and recovering from issues that affected the assets of a department based on disclosed data.
- the analysis system 10 obtains disclosed data from the system regarding the system design associated with the assets of the department. From the disclosed data, the analysis system renders an evaluation rating for the understanding of the system design for identifying assets. The analysis system renders a second evaluation rating for the understanding of the system design regarding protection of the assets from issues. The analysis system renders a third evaluation rating for the understanding of the system design regarding detection of issues that may affect or are affecting the assets.
- the analysis system renders a fourth evaluation rating for the understanding of the system design regarding responding to issues that may affect or are affecting the assets.
- the analysis system renders a fifth evaluation rating for the understanding of the system design regarding recovery from issues that affected the assets of a department based on disclosed data.
- the analysis system may render an overall evaluation rating for the understanding based on the first through fifth evaluation ratings.
- the analysis system 10 evaluates the understanding of system design used to determine what assets should be included in the department, how the assets should be protected from issues, how issues that may affect or are affecting the assets are detected, how to respond to issues that may affect or are affecting the assets, and how the assets will recover from issues that may affect or are affecting them based on disclosed data.
- the analysis system renders an evaluation rating for the understanding of the system design regarding what assets should be in the department.
- the analysis system renders a second evaluation rating for the understanding of the system design regarding how the assets should be protected from issues.
- the analysis system renders a third evaluation rating for the understanding of the system design regarding how to detect issues that may affect or are affecting the assets.
- the analysis system renders a fourth evaluation rating for the understanding of the system design regarding how to respond to issues that may affect or are affecting the assets.
- the analysis system renders a fifth evaluation rating for the understanding of the system design regarding how to recover from issues that affected the assets of a department based on disclosed data.
- the analysis system may render an overall evaluation rating for the understanding based on the first through fifth evaluation ratings.
- FIG. 52 is a diagram of an example of system aspects, evaluation aspects, evaluation rating metrics, and analysis system output options of an analysis system 10 for analyzing a system 11, or portion thereof.
- the analysis system 10 is evaluating the understanding of the guidelines, system requirements, and system design for identifying assets, protecting the assets from issues, detecting issues that may affect or are affecting the assets, responding to issues that may affect or are affecting the assets, and recovering from issues that affected the assets of a department based on disclosed data and discovered data.
- the analysis system 10 obtains disclosed data and discovered data from the system regarding guidelines, system requirements, and system design associated with the assets of the department. From the disclosed data and discovered data, the analysis system renders one or more first evaluation ratings (e.g., one for each of guidelines, system requirements, and system design, or one for all three) for the understanding of the guidelines, system requirements, and system design for identifying assets. The analysis system renders one or more second evaluation ratings for the understanding of the guidelines, system requirements, and system design regarding protection of the assets from issues. The analysis system renders one or more third evaluation ratings for the understanding of the guidelines, system requirements, and system design regarding detection of issues that may affect or are affecting the assets.
- the analysis system renders one or more fourth evaluation ratings for the understanding of the guidelines, system requirements, and system design regarding responding to issues that may affect or are affecting the assets.
- the analysis system renders one or more fifth evaluation ratings for the understanding of the guidelines, system requirements, and system design regarding recovery from issues that affected the assets of a department based on disclosed data.
- the analysis system may render an overall evaluation rating for the understanding based on the one or more first through one or more fifth evaluation ratings.
- the analysis system 10 may further render an understanding evaluation rating regarding how well the discovered data correlates with the disclosed data. In other words, evaluate the knowledge level of the system.
- the analysis system compares the disclosed data with the discovered data. If they substantially match, the understanding of the system would receive a relatively high evaluation rating. The more the disclosed data differs from the discovered data, the lower the understanding evaluation rating will be.
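- As a non-limiting illustration of this comparison, the following Python sketch scales an overlap measure between the disclosed and discovered data sets into an understanding rating; the 0-50 scale and the overlap measure are assumptions for illustration:

```python
# Minimal sketch: the closer the disclosed items match the discovered
# items, the higher the understanding rating.
def understanding_rating(disclosed, discovered, scale=50):
    if not disclosed and not discovered:
        return float(scale)
    overlap = len(disclosed & discovered)
    union = len(disclosed | discovered)
    return scale * overlap / union

print(understanding_rating({"srv-1", "srv-2", "laptop-9"},
                           {"srv-1", "srv-2", "printer-3"}))  # 25.0
```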
- the analysis system 10 evaluates the understanding of guidelines, system requirements, and system design used to determine what assets should be included in the department, how the assets should be protected from issues, how issues that may affect or are affecting the assets are detected, how to respond to issues that may affect or are affecting the assets, and how the assets will recover from issues that may affect or are affecting them based on disclosed data and discovered data.
- the analysis system renders one or more first evaluation ratings for the understanding of the guidelines, system requirements, and system design regarding what assets should be in the department.
- the analysis system renders one or more second evaluation ratings for the understanding of the guidelines, system requirements, and system design regarding how the assets should be protected from issues.
- the analysis system renders one or more third evaluation ratings for the understanding of the guidelines, system requirements, and system design regarding how to detect issues that may affect or are affecting the assets.
- the analysis system renders one or more fourth evaluation ratings for the understanding of the guidelines, system requirements, and system design regarding how to respond to issues that may affect or are affecting the assets.
- the analysis system renders one or more fifth evaluation ratings for the understanding of the guidelines, system requirements, and system design regarding how to recover from issues that affected the assets of a department based on disclosed data.
- the analysis system may render an overall evaluation rating for the understanding of the guidelines, system requirements, and system design based on the one or more first through the one or more fifth evaluation ratings.
- FIG. 53 is a diagram of an example of system aspects, evaluation aspects, evaluation rating metrics, and analysis system output options of an analysis system 10 for analyzing a system 11, or portion thereof.
- the analysis system 10 is evaluating the implementation for and operation of identifying assets of a department, protecting the assets from issues, detecting issues that may affect or are affecting the assets, responding to issues that may affect or are affecting the assets, and recovering from issues that affected the assets per the guidelines, system requirements, system design, system build, and resulting system based on disclosed data and discovered data.
- the analysis system 10 obtains disclosed data and discovered data from the system regarding the guidelines, system requirements, system design, system build, and resulting system associated with the assets of the department. From the disclosed data and discovered data, the analysis system renders one or more first evaluation ratings (e.g., one for each of guidelines, system requirements, system design, system build, resulting system with respect to each of implementation and operation or one for all of them) for the implementation and operation of identifying the assets per the guidelines, system requirements, system design, system build, and resulting system. The analysis system renders one or more second evaluation ratings for the implementation and operation of protecting the assets from issues per the guidelines, system requirements, system design, system build, and resulting system.
- the analysis system renders one or more third evaluation ratings for the implementation and operation of detecting issues that may affect or are affecting the assets per the guidelines, system requirements, system design, system build, and resulting system.
- the analysis system renders one or more fourth evaluation ratings for the implementation and operation of responding to issues that may affect or are affecting the assets per the guidelines, system requirements, system design, system build, and resulting system.
- the analysis system renders one or more fifth evaluation ratings for the implementation and operation of recovering from issues that may affect or are affecting the assets per the guidelines, system requirements, system design, system build, and resulting system.
- the analysis system may render an overall evaluation rating for the implementation and/or performance based on the one or more first through one or more fifth evaluation ratings.
- FIG. 54 is a diagram of an example of system aspects, evaluation aspects, evaluation rating metrics, and analysis system output options of an analysis system 10 for analyzing a system 11, or portion thereof.
- the analysis system 10 is evaluating the implementation for and operation of identifying assets of a department, protecting the assets from issues, detecting issues that may affect or are affecting the assets, responding to issues that may affect or are affecting the assets, and recovering from issues that affected the assets per the guidelines, system requirements, system design, system build, and resulting system based on discovered data and desired data.
- the analysis system 10 obtains discovered data from the system and desired data from system proficiency resources regarding the guidelines, system requirements, system design, system build, and resulting system associated with the assets of the department. From the discovered data and desired data, the analysis system renders one or more first evaluation ratings (e.g., one for each of guidelines, system requirements, system design, system build, resulting system with respect to each of implementation and operation or one for all of them) for the implementation and operation of identifying the assets per the guidelines, system requirements, system design, system build, and resulting system. The analysis system renders one or more second evaluation ratings for the implementation and operation of protecting the assets from issues per the guidelines, system requirements, system design, system build, and resulting system.
- the analysis system renders one or more third evaluation ratings for the implementation and operation of detecting issues that may affect or are affecting the assets per the guidelines, system requirements, system design, system build, and resulting system.
- the analysis system renders one or more fourth evaluation ratings for the implementation and operation of responding to issues that may affect or are affecting the assets per the guidelines, system requirements, system design, system build, and resulting system.
- the analysis system renders one or more fifth evaluation ratings for the implementation and operation of recovering from issues that may affect or are affecting the assets per the guidelines, system requirements, system design, system build, and resulting system.
- the analysis system may render an overall evaluation rating for the implementation and/or performance based on the one or more first through one or more fifth evaluation ratings.
- the analysis system 10 may further render an implementation and/or operation evaluation rating regarding how well the discovered data correlates with the desired data; in other words, it evaluates the level of implementation and operation of the system.
- the analysis system compares the discovered data with the desired data. If they substantially match, the implementation and/or operation of the system would receive a relatively high evaluation rating. The more the discovered data differs from the desired data, the lower the implementation and/or operation evaluation rating will be.
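- A minimal sketch of one way the discovered-to-desired correlation could be turned into a rating is shown below; the field names, the key-by-key comparison, and the linear 0-100 mapping are assumptions for illustration only.

```python
# Hypothetical sketch: rate how well discovered data correlates with desired
# data. Field names and the linear mapping to a 0-100 rating are assumptions.

def correlation_rating(discovered, desired):
    """Fraction of desired fields whose discovered values match, scaled to 0-100."""
    if not desired:
        return 100.0
    matches = sum(1 for key, want in desired.items() if discovered.get(key) == want)
    return 100.0 * matches / len(desired)

desired = {"aes_bits": 128, "vendor": "M", "min_version": "2.0", "computers": 12}
discovered = {"aes_bits": 128, "vendor": "M", "min_version": "2.1", "computers": 12}
print(correlation_rating(discovered, desired))  # 75.0 because one field differs
```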
- FIG. 55 is a diagram of an example of system aspects, evaluation aspects, evaluation rating metrics, and analysis system output options of an analysis system 10 for analyzing a system 11 , or portion thereof.
- the analysis system 10 is evaluating the system's self-evaluation for identifying assets, protecting the assets from issues, detecting issues that may affect or are affecting the assets, responding to issues that may affect or are affecting the assets, and recovering from issues that affected the assets of a department based on disclosed data and discovered data per the guidelines, system requirements, and system design.
- the analysis system 10 obtains disclosed data and discovered data from the system regarding the guidelines, system requirements, and system design associated with the assets of the department. From the disclosed data and discovered data, the analysis system renders one or more first evaluation ratings (e.g., one for each of guidelines, system requirements, and system design, or one for all three) for the self-evaluation of identifying assets per the guidelines, system requirements, and system design. For instance, what resources does the system have, with respect to its guidelines, system requirements, and/or system design, for self-identifying assets?
- the analysis system renders one or more second evaluation ratings for the self-evaluation of protecting the assets from issues per the guidelines, system requirements, and system design regarding protection.
- the analysis system renders one or more third evaluation ratings for the self-evaluation of detecting issues that may affect or are affecting the assets per the guidelines, system requirements, and system design regarding detection.
- the analysis system renders one or more fourth evaluation ratings for the self-evaluation of responding to issues that may affect or are affecting the assets per the guidelines, system requirements, and system design.
- the analysis system renders one or more fifth evaluation ratings for the self-evaluation of recovering from issues that affected the assets per the guidelines, system requirements, and system design.
- the analysis system may render an overall evaluation rating for the self-evaluation based on the one or more first through one or more fifth evaluation ratings.
- FIG. 56 is a diagram of an example of system aspects, evaluation aspects, evaluation rating metrics, and analysis system output options of an analysis system 10 for analyzing a system 11 , or portion thereof.
- the analysis system 10 is evaluating the understanding of the guidelines, system requirements, system design, system build, and resulting system for identifying assets, protecting the assets from issues, detecting issues that may affect or are affecting the assets, responding to issues that may affect or are affecting the assets, and recovering from issues that affected the assets of a department based on disclosed data and discovered data.
- the analysis system 10 obtains disclosed data and discovered data from the system regarding guidelines, system requirements, system design, system build, and resulting system associated with the assets of the department.
- the disclosed data includes guidelines that certain types of data shall be encrypted; a system requirement that specifies 128-bit Advanced Encryption Standard (AES) for “y” types of documents; a system design that includes 12 “x” type computers that are to be loaded with 128-bit AES software by company “M”, version 2.0 or newer; and a system build and resulting system that includes 12 “x” type computers that have 128-bit AES software by company “M”, version 2.1.
- the discovered data includes the same guideline as the disclosed data; a first system requirement that specifies 128-bit Advanced Encryption Standard (AES) for “y” types of documents and a second system requirement that specifies 256-bit Advanced Encryption Standard (AES) for “A” types of documents; a system design that includes 12 “x” type computers that are to be loaded with 128-bit AES software by company “M”, version 2.0 or newer, and 3 “z” type computers that are to be loaded with 256-bit AES software by company “N” version 3.0 or newer; and a system build and resulting system that includes 10 “x” type computers that have 128-bit AES software by company “M” version 2.1, 2 “x” type computers that have 128-bit AES software by company “M” version 1.3, 2 “z” type computers that have 256-bit AES software by company “N” version 3.1, and 1 “z” type computer that has 256-bit AES software by company “K” version 0.1.
- based on the disclosed data, the analysis system would render a relatively high evaluation rating for the understanding of the guidelines, system requirements, system design, system build, and resulting system associated with the assets of the department.
- the relatively high evaluation rating would be warranted since the system build and resulting system included what was in the system design (e.g., 12 “x” type computers that have 128-bit AES software by company “M”, version 2.1).
- the system design is consistent with the system requirements (e.g., 128-bit Advanced Encryption Standard (AES) for “y” types of documents), which are consistent with the guidelines (e.g., certain types of data shall be encrypted).
- based on the discovered data, the analysis system would render a relatively low evaluation rating for the understanding of the guidelines, system requirements, system design, system build, and resulting system associated with the assets of the department.
- the relatively low evaluation rating would be warranted since the system build and resulting system are not consistent with the system design (e.g., the build is missing 2 “x” type computers with the right encryption software, only has 2 “z” type computers with the right software, and has a “z” type computer with the wrong software).
- the analysis system would also process the evaluation ratings from the disclosed data and from the discovered data to produce an overall evaluation rating for the understanding of the guidelines, system requirements, system design, system build, and resulting system associated with the assets of the department.
- the disclosed data does not substantially match the discovered data, which indicates a lack of understanding of what's really in the system (i.e., knowledge of the system).
- since the evaluation rating from the discovered data was low, the analysis system would produce a low overall evaluation rating for the understanding.
- FIG. 57 is a diagram of an extension of the example of FIG. 56 .
- the analysis system processes the data and/or evaluation ratings to identify deficiencies and/or auto-corrections of at least some of the deficiencies.
- the disclosed data includes:
- the discovered data includes:
- the analysis system identifies deficiencies 232 and, when possible, provides auto-corrections 235 .
- the analysis system determines that the system requirements also included a requirement for 256-bit AES for “A” type documents.
- the analysis system can auto-correct this deficiency by updating the knowledge of the system to include the missing requirement. This may include updating one or more policies, one or more processes, one or more procedures, and/or updating documentation.
- the analysis system identifies the deficiency that the system design further included 3 “z” type computers that are to be loaded with 256-bit AES software by company “N”, version 3.0 or newer.
- the analysis system can auto-correct this deficiency by updating the knowledge of the system to include the three “z” type computers with the correct software. Again, this may include updating one or more policies, one or more processes, one or more procedures, and/or updating documentation.
- the analysis system identifies the deficiency of 2 “x” type computers having old versions of the encryption software (e.g., have version 1.3 of company M's 128-bit AES software instead of a version 2.0 or newer).
- the analysis system can auto-correct this deficiency by updating the version of software for the two computers.
- the analysis system identifies the deficiency that 1 “z” type computer has the wrong encryption software (e.g., it has version 0.1 from company K and not version 3.0 or newer from company N).
- the analysis system can auto-correct this deficiency by replacing the wrong encryption software with the correct encryption software.
- the analysis system identifies the deficiency that 1 “z” type computer is missing from the system.
- the analysis system cannot auto-correct this deficiency since it involves missing hardware. In this instance, the analysis system notifies a system admin of the missing computer.
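- The FIG. 57 walkthrough above amounts to a dispatch between auto-correctable and non-auto-correctable deficiencies. The sketch below is a hypothetical illustration of that dispatch; the deficiency records, kind labels, and handler strings are assumptions, not the disclosed implementation.

```python
# Hypothetical sketch of the FIG. 57 example: classify deficiencies and either
# auto-correct (software/documentation issues) or notify an administrator
# (missing hardware). The record structure and labels are illustrative only.

def handle_deficiency(deficiency):
    kind = deficiency["kind"]
    if kind == "missing_requirement":
        return f"auto-correct: update documented requirements with {deficiency['detail']}"
    if kind == "missing_design_element":
        return f"auto-correct: update system knowledge to include {deficiency['detail']}"
    if kind == "outdated_software":
        return f"auto-correct: upgrade {deficiency['detail']} to the required version"
    if kind == "wrong_software":
        return f"auto-correct: replace {deficiency['detail']} with the required software"
    if kind == "missing_hardware":
        return f"notify admin: {deficiency['detail']} cannot be auto-corrected"
    return "notify admin: unrecognized deficiency"

deficiencies = [
    {"kind": "missing_requirement", "detail": "256-bit AES for 'A' type documents"},
    {"kind": "missing_design_element", "detail": "3 'z' type computers with 256-bit AES"},
    {"kind": "outdated_software", "detail": "2 'x' type computers on version 1.3"},
    {"kind": "wrong_software", "detail": "1 'z' type computer running company K v0.1"},
    {"kind": "missing_hardware", "detail": "1 'z' type computer absent from the system"},
]
for d in deficiencies:
    print(handle_deficiency(d))
```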
- FIG. 58 is a schematic block diagram of an embodiment of an evaluation processing module 254 that includes a plurality of comparators 360 - 362 , a plurality of analyzers 363 - 365 , and a deficiency correction module 366 .
- the evaluation processing module 254 identifies deficiencies 232 and, when possible, determines auto-corrections 235 from the ratings 219 and/or inputted data (e.g., disclosed data, discovered data, and/or desired data) based on evaluation parameters 266 (e.g., disclosed to discovered deficiency criteria 368 , discovered to desired deficiency criteria 370 , disclosed to desired deficiency criteria 372 , disclosed to discovered compare criteria 373 , discovered to desired compare criteria 374 , and disclosed to desired compare criteria 375 ).
- comparator 360 compares disclosed data and/or ratings 338 and discovered data and/or ratings 339 based on the disclosed to discovered compare criteria 373 to produce, if any, one or more disclosed to discovered differences 367 .
- the analysis system evaluates disclosed, discovered, and/or desired data to produce one or more evaluation ratings regarding the understanding of the guidelines, system requirements, system design, system build, and resulting system associated with identifying the assets of the department.
- Each of the disclosed data, discovered data, and desired data includes data regarding the guidelines, system requirements, system design, system build, and/or resulting system associated with identifying the assets of the department and/or the assets of the department.
- disclosed data is the known data of the system at the outset of an analysis, which is typically supplied by a system administrator and/or is obtained from data files of the system.
- the discovered data is the data discovered about the system by the analysis system during the analysis.
- the desired data is the data obtained by the analysis system from system proficiency resources regarding desired guidelines, system requirements, system design, system build, and/or system operation.
- the analysis system may produce one or more evaluation ratings.
- the analysis system produces an evaluation rating for:
- the disclosed to discovered compare criteria 373 specifies the evaluation ratings to be compared and/or which data of the disclosed data is to be compared to data of the discovered data. For example, the disclosed to discovered compare criteria 373 indicates that the “understanding of the guidelines with respect to system design of the department from the disclosed data” is to be compared to the “understanding of the system design with respect to identifying assets of the department from the discovered data”. As another example, the disclosed to discovered compare criteria 373 indicates that data regarding system design of the disclosed data is to be compared with the data regarding the system design of the discovered data.
- the comparator 360 compares the “understanding of the guidelines with respect to system design of the department from the disclosed data” with the “understanding of the system design with respect to identifying assets of the department from the discovered data” to produce, if any, one or more understanding differences.
- the comparator 360 also compares the data regarding system design of the disclosed data with the data regarding the system design of the discovered data to produce, if any, one or more data differences.
- the comparator 360 outputs the one or more understanding differences and/or the one or more data differences as the disclosed to discovered differences 367 .
- the analyzer 363 analyzes the disclosed to discovered differences 367 in accordance with the disclosed to discovered deficiency criteria 368 to determine whether a difference 367 constitutes a deficiency. If so, the analyzer 363 includes it in the disclosed to discovered deficiencies 232 - 1 .
- the disclosed to discovered deficiency criteria 368 correspond to the disclosed to discovered compare criteria 373 and specify how the differences 367 are to be analyzed to determine if they constitute deficiencies 232 - 1 .
- the disclosed to discovered deficiency criteria 368 specify a series of comparative thresholds based on the impact the differences have on the system.
- the range of impact is from none to significant with as many granular levels in between as desired.
- the comparative threshold is set to trigger a deficiency for virtually any difference. For example, if the difference is regarding system security, then the threshold is set so that any difference is a deficiency.
- when a difference has little or no impact on the system, the threshold is set to not identify the difference as a deficiency.
- the discovered data includes a PO date of Nov. 2, 2020 for a specific purchase order and the disclosed data did not include a PO date, but the rest of the information regarding the PO is the same for the disclosed and discovered data. In this instance, the missing PO date is inconsequential and would not be identified as a deficiency.
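- A minimal sketch of the impact-threshold style of deficiency criteria described above is shown below; the impact levels, threshold values, and field names are illustrative assumptions.

```python
# Hypothetical sketch: decide whether a disclosed-to-discovered difference is a
# deficiency based on its impact. Impact levels and thresholds are assumptions.

IMPACT_LEVELS = {"none": 0, "low": 1, "moderate": 2, "high": 3, "significant": 4}

def is_deficiency(difference, criteria):
    """A difference becomes a deficiency when its impact meets the area's threshold."""
    threshold = criteria.get(difference["area"], criteria["default"])
    return IMPACT_LEVELS[difference["impact"]] >= IMPACT_LEVELS[threshold]

# Virtually any security-related difference counts; others need moderate impact.
criteria = {"security": "low", "default": "moderate"}

print(is_deficiency({"area": "security", "impact": "low"}, criteria))         # True
print(is_deficiency({"area": "purchase_order", "impact": "none"}, criteria))  # False
```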
- the deficiency correction module 366 receives the disclosed to discovered deficiencies 232 - 1 , if any, and determines whether one or more of the deficiencies 232 - 1 can be auto-corrected to produce an auto-correction 235 .
- software deficiencies are auto-correctable (e.g., wrong software, missing software, out-of-date software, etc.) while hardware deficiencies are not auto-correctable (e.g., wrong computing device, missing computing device, missing network connection, etc.).
- the comparator 361 functions similarly to the comparator 360 to produce discovered to desired differences 369 based on the discovered data and/or rating 339 and the desired data and/or rating 340 in accordance with the discovered to desired compare criteria 374 .
- the analyzer 364 functions similarly to the analyzer 363 to produce discovered to desired deficiencies 232 - 2 from the discovered to desired differences 369 in accordance with the discovered to desired deficiency criteria 370 .
- the deficiency correction module 366 auto-corrects, when possible, the discovered to desired deficiencies 232 - 2 to produce auto-corrections 235 .
- the comparator 362 functions similarly to the comparator 360 to produce disclosed to desired differences 371 based on the disclosed data and/or rating 338 and the desired data and/or rating 340 in accordance with the disclosed to desired compare criteria 375 .
- the analyzer 365 functions similarly to the analyzer 363 to produce disclosed to desired deficiencies 232 - 3 from the disclosed to desired differences 371 in accordance with the disclosed to desired deficiency criteria 372 .
- the deficiency correction module 366 auto-corrects, when possible, the disclosed to desired deficiencies 232 - 3 to produce auto-corrections 235 .
- the evaluation processing module 254 processes any combination of system aspects, evaluation aspects, and evaluation metrics in a similar manner. For example, the evaluation processing module 254 processes the implementation of the system with respect to identifying assets of the department to identify deficiencies 232 and auto-corrections in the implementation. As another example, the evaluation processing module 254 processes the operation of the system with respect to identifying assets of the department to identify deficiencies 232 and auto-corrections in the operation of the system.
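- The comparator, analyzer, and deficiency correction module flow of FIG. 58 can be sketched for one viewpoint pair (disclosed versus discovered) as follows; the same flow would be reused for the discovered-to-desired and disclosed-to-desired pairs. The function names, data fields, and impact values are assumptions and do not correspond to the reference numerals of the figure.

```python
# Hypothetical sketch of the evaluation processing pipeline for one viewpoint
# pair. All names, fields, and numeric impacts are illustrative assumptions.

def compare(disclosed, discovered, compare_keys):
    """Comparator: produce differences for the keys named in the compare criteria."""
    diffs = []
    for key in compare_keys:
        if disclosed.get(key) != discovered.get(key):
            diffs.append({"key": key, "disclosed": disclosed.get(key),
                          "discovered": discovered.get(key)})
    return diffs

def analyze(differences, impact_of, min_impact):
    """Analyzer: keep only differences whose impact meets the deficiency criteria."""
    return [d for d in differences if impact_of.get(d["key"], 0) >= min_impact]

def correct(deficiencies, auto_correctable):
    """Deficiency correction module: split into auto-corrections and reported items."""
    auto = [d for d in deficiencies if d["key"] in auto_correctable]
    report = [d for d in deficiencies if d["key"] not in auto_correctable]
    return auto, report

disclosed = {"encryption_sw_version": "2.1", "computer_count": 12, "po_date": None}
discovered = {"encryption_sw_version": "1.3", "computer_count": 12, "po_date": "2020-11-02"}
diffs = compare(disclosed, discovered, ["encryption_sw_version", "computer_count", "po_date"])
deficiencies = analyze(diffs, impact_of={"encryption_sw_version": 3, "po_date": 0}, min_impact=2)
auto, report = correct(deficiencies, auto_correctable={"encryption_sw_version"})
print(auto, report)
```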
- FIG. 59 is a state diagram of an example of the analysis system analyzing a system. From a start state 380 , the analysis proceeds to an understanding of the system state 381 or to a test operations of the assets, system functions, and/or security functions of the system state 386 , based on the desired analysis to be performed. For testing the understanding, the analysis proceeds to state 381 where the understanding of the assets, system functions, and/or security functions of the system is evaluated. This may be done via documentation of the system, policies of the supported business, a question and answer session with personnel of the owner/operator of the system, and/or as discussed herein.
- the analysis proceeds to the determine deficiencies in the understanding of the system state 382 .
- the deficiencies in understanding are determined by processing differences and/or as discussed herein.
- At state 382 , corrections required in the understanding of the system are identified and operation proceeds to state 383 in which a report is generated regarding understanding deficiencies and/or corrective measures to be taken. The report is then sent to the owner/operator of the system. If there are no understanding deficiencies and/or corrective measures, no auto-correction is needed, and operations are complete at the done state.
- If the understanding of the system is adequate, operation proceeds to state 385 where a report is generated regarding an adequate understanding of the system and the report is sent. From state 385 , if operation is complete, operations proceed to the done state. Alternately, from state 385 operation may proceed to state 386 where testing of the assets, system functions, and/or security functions of the system is performed. If testing of the assets, system functions, and/or security functions of the system results in an adequate test result, operation proceeds to state 390 where a report is generated indicating adequate implementation and/or operation of the system and the report is sent.
- At state 386 , if the testing of the system results in an inadequate result, operations proceed to state 387 where deficiencies in the assets, system functions, and/or security functions of the system are determined. At state 387 , differences are compared to identify deficiencies in the assets, system functions, and/or security functions. The analysis then proceeds from state 387 to state 388 where a report is generated regarding corrective measures to be taken in response to the assets, system functions, and/or security functions deficiencies. The report is then sent to the owner/operator. If there are no deficiencies and/or corrective measures, no auto-correction is needed, and operations are complete at the done state. If auto-correction is required, operation proceeds to state 389 where the analysis system updates the assets, system functions, and/or security functions of the system. Corrections are then implemented and the analysis proceeds to state 386 . Note that corrections may be automatically performed for some deficiencies but not others, depending upon the nature of the deficiency.
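- A hypothetical sketch of the FIG. 59 flow as a small state machine is shown below. The state names mirror the description (evaluate understanding, determine deficiencies, report, auto-correct, test), while the boolean inputs and returned state trail are simplifications for illustration, not the disclosed state diagram.

```python
# Illustrative state-machine sketch of the FIG. 59 flow; transitions are
# simplified and the inputs below stand in for real evaluation outcomes.

def analyze_system(understanding_adequate, understanding_fixable,
                   test_result_adequate, test_fixable):
    trail = ["start", "evaluate_understanding"]
    if not understanding_adequate:
        trail += ["determine_understanding_deficiencies", "report_corrective_measures"]
        if understanding_fixable:
            trail += ["auto_correct_understanding", "evaluate_understanding"]
        return trail + ["done"]
    trail += ["report_adequate_understanding", "test_assets_and_functions"]
    if test_result_adequate:
        return trail + ["report_adequate_implementation", "done"]
    trail += ["determine_function_deficiencies", "report_corrective_measures"]
    if test_fixable:
        trail += ["auto_correct_functions", "test_assets_and_functions"]
    return trail + ["done"]

# Example: understanding adequate, testing inadequate but auto-correctable.
print(analyze_system(True, False, False, True))
```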
- FIG. 60 is a logic diagram of an example of an analysis system analyzing a system, or portion thereof.
- the method includes the analysis system obtaining system proficiency understanding data regarding the assets of the system (step 400) and obtaining data regarding the owner/operator's understanding of the assets (step 401).
- System proficiencies of step 400 include industry best practices and regulatory requirements, for example.
- the data obtained from the system at step 401 is based upon data received regarding the system or received by probing the system.
- The data collected at steps 400 and 401 is then compared (step 402) and a determination is made regarding the comparison. If the comparison is favorable, as determined at step 403, meaning that the system proficiency understanding compares favorably to the data regarding understanding, operation is complete, a report is generated (step 412), and the report is sent (step 413). If the comparison is not favorable, as determined at step 403, operation continues with identifying deficiencies in the understanding of the system (step 404), identifying corrective measures (step 405), generating a corresponding report (step 412), and sending the report (step 413).
- the method also includes the analysis system obtaining system proficiency understanding data of the system functions and/or security implementation and/or operation of the system (step 406 ) and obtaining data regarding the owner/operator's understanding of the system functions and/or security functions implementation and/or operation of the system (step 407 ).
- System proficiencies of step 406 include industry best practices and regulatory requirements, for example.
- the data obtained from the system at step 407 is based upon data received regarding the system or received by probing the system.
- the data collected at steps 406 and 407 is then compared (step 414 ) and a determination is made regarding the comparison. If the comparison is favorable, as determined at step 415 , meaning that the system proficiency understanding compares favorably to the data regarding understanding, operation is complete, a report is generated (step 412 ), and the report is sent (step 413 ). If the comparison is not favorable, as determined at step 415 , operation continues with identifying deficiencies in the understanding of the system (step 416 ), identifying corrective measures (step 417 ), generating a corresponding report (step 412 ) and sending the report (step 413 ).
- the method further includes the analysis system comparing the understanding of the physical structure (obtained at step 401) with the understanding of the system functions and/or security functions implementation and/or operation (obtained at step 407) at step 408.
- Step 408 essentially determines whether the understanding of the assets corresponds with the understanding of the system functions and/or security functions of the implementation and/or operation of the system. If the comparison is favorable, as determined at step 409 , a report is generated (step 412 ), and the report is sent (step 413 ).
- If the comparison is not favorable, as determined at step 409, the method continues with identifying imbalances in the understanding (step 410), identifying corrective measures (step 411), generating a corresponding report (step 412), and sending the report (step 413).
- FIG. 61 is a logic diagram of another example of an analysis system analyzing a system, or portion thereof.
- the method begins at step 420 where the analysis system determines a system evaluation mode (e.g., assets, system functions, and/or security functions) for analysis.
- the method continues at step 421 where the analysis system determines a system evaluation level (e.g., the system or a portion thereof). For instance, the analysis system identifies one or more system elements for evaluation.
- the method continues at step 422 where the analysis system determines an analysis perspective (e.g., understanding, implementation, operation, and/or self-evaluate).
- the method continues at step 423 where the analysis system determines an analysis viewpoint (e.g., disclosed, discovered, and/or desired).
- the method continues at step 424 where the analysis system determines a desired output (e.g., evaluation rating, deficiencies, and/or auto-corrections).
- the method continues at step 425 where the analysis system determines what data to gather based on the preceding determinations.
- the method continues at step 426 where the analysis system gathers data in accordance with the determination made in step 425.
- the method continues at step 427 where the analysis system determines whether the gathered data is to be pre-processed.
- the method continues at step 428 where the analysis system determines data pre-processing functions (e.g., normalize, parse, tag, and/or de-duplicate).
- the method continues at step 429 where the analysis system pre-processes the data based on the pre-processing functions to produce pre-processed data.
- the method continues at step 430 where the analysis system determines one or more evaluation categories (e.g., identify, protect, detect, respond, and/or recover) and/or sub-categories for evaluation. Note that this may be done prior to step 425 and be part of determining the data to gather.
- the method continues at step 431 where the analysis system analyzes the data in accordance with the determined evaluation categories and in accordance with a selected evaluation metric (e.g., process, policy, procedure, automation, certification, and/or documentation) to produce analysis results.
- the analysis system processes the analysis results to produce the desired output (e.g., evaluation rating, deficiencies, and/or auto-correct).
- the analysis system determines whether to end the method or repeat it for another analysis of the system.
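- The FIG. 61 flow can be sketched as a single configurable routine that selects what to evaluate, gathers and optionally pre-processes data, analyzes it against an evaluation metric, and produces the desired output. The sketch below is illustrative only; the stub functions, option values, and rating formula are assumptions.

```python
# Hypothetical sketch of the FIG. 61 flow: choose what to evaluate, gather and
# pre-process data, analyze against an evaluation metric, and produce output.

def run_analysis(mode, level, perspective, viewpoints, desired_output,
                 categories, metric, gather, pre_process, analyze, evaluate):
    plan = {"mode": mode, "level": level, "perspective": perspective,
            "viewpoints": viewpoints, "categories": categories, "metric": metric}
    data = gather(plan)                            # determine and gather data (steps 425-426)
    data = pre_process(data)                       # optional pre-processing (steps 427-429)
    results = analyze(data, categories, metric)    # analyze per category and metric (steps 430-431)
    return evaluate(results, desired_output)       # produce the desired output

# Example wiring with stub functions standing in for the real modules.
output = run_analysis(
    mode="security functions", level="department", perspective="operation",
    viewpoints=["disclosed", "discovered"], desired_output="evaluation rating",
    categories=["detect"], metric="process",
    gather=lambda plan: {"events": 120, "detected": 96},
    pre_process=lambda data: data,
    analyze=lambda data, cats, metric: data["detected"] / data["events"],
    evaluate=lambda result, out: {out: round(100 * result)},
)
print(output)  # {'evaluation rating': 80}
```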
- FIG. 62 is a logic diagram of another example of an analysis system analyzing a system or portion thereof.
- the method begins at step 440 where the analysis system determines physical assets of the system, or portion thereof, to analyze (e.g., assets in the resulting system).
- a physical asset is a computing entity, a computing device, a user software application, a system software application (e.g., operating system, etc.), a software tool, a network software application, a security software application, a system monitoring software application, and the like.
- the method continues at step 441 where the analysis system ascertains the implementation of the system, or portion thereof (e.g., assets designed to be, and/or built, in the system).
- the method continues at step 442 where the analysis system correlates components of the assets to components of the implementation (e.g., do the assets of the actual system correlate with assets designed/built to be in the system).
- the method continues at step 443 where the analysis system scores the components of the physical assets in accordance with the mapped components of the implementation. For example, the analysis system scores how well the assets of the actual system correlate with assets designed/built to be in the system. The scoring may be based on one or more evaluation metrics (e.g., process, policy, procedure, automation, certification, and/or documentation).
- the analysis system performs a function on the scores to obtain a result (e.g., an evaluation rating, identified deficiencies, and/or auto-correction of deficiencies).
- the method continues at step 445 where the analysis system determines whether the result is equal to or greater than a target result (e.g., the evaluation rating is a certain value). If yes, the method continues at step 446 where the analysis system indicates that the system, or portion thereof, passes this particular test. If the result is less than the target result, the method continues at step 447 where the analysis system identifies vulnerabilities in the physical assets and/or in the implementation. For example, the analysis system determines that a security software application is missing from several computing devices in the system, or portion thereof, being analyzed.
- the method continues at step 448 where the analysis system determines, if possible, corrective measures for the identified vulnerabilities.
- the method continues at step 449 where the analysis system determines whether the corrective measures can be done automatically. If not, the method continues at step 451 where the analysis system reports the corrective measures. If yes, the method continues at step 450 where the analysis system auto-corrects the vulnerabilities.
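- A minimal sketch of the FIG. 62 flow (which also applies, with different inputs, to the flows of FIGS. 63-67) is shown below: correlate discovered assets with the designed/built assets, score the correlation, compare against a target, and either pass, auto-correct, or report. The asset records, scoring rule, and target value are assumptions for illustration.

```python
# Hypothetical sketch of the FIG. 62 flow. Asset records, the 0/1 scoring rule,
# and the target value are illustrative assumptions, not the disclosure.

def evaluate_assets(actual, designed, target):
    scores = {}
    vulnerabilities = []
    for name, spec in designed.items():                 # correlate components
        present = actual.get(name)
        scores[name] = 1.0 if present == spec else 0.0  # score components
        if present != spec:
            vulnerabilities.append((name, spec, present))
    result = sum(scores.values()) / len(scores)         # perform a function on the scores
    if result >= target:                                # compare to the target result
        return "pass", []
    fixable = [v for v in vulnerabilities if v[2] is not None]  # wrong/outdated software
    missing = [v for v in vulnerabilities if v[2] is None]      # missing components
    return ("auto-correct" if fixable else "report"), fixable + missing

designed = {"host-1": "antivirus v3", "host-2": "antivirus v3", "host-3": "antivirus v3"}
actual = {"host-1": "antivirus v3", "host-2": "antivirus v1", "host-3": None}
print(evaluate_assets(actual, designed, target=0.9))
```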
- FIG. 63 is a logic diagram of another example of an analysis system analyzing a system or portion thereof.
- the method begins at step 460 where the analysis system determines physical assets of the system, or portion thereof, to analyze (e.g., assets and their intended operation).
- the method continues at step 461 where the analysis system ascertains operation of the system, or portion thereof (e.g., the operations actually performed by the assets).
- the method continues at step 462 where the analysis system correlates components of the assets to components of operation (e.g., do the identified operations of the assets correlate with the operations actually performed by the assets).
- the method continues at step 463 where the analysis system scores the components of the physical assets in accordance with the mapped components of the operation. For example, the analysis system scores how well the identified operations of the assets correlate with operations actually performed by the assets. The scoring may be based on one or more evaluation metrics (e.g. process, policy, procedure, automation, certification, and/or documentation).
- evaluation metrics e.g. process, policy, procedure, automation, certification, and/or documentation.
- the analysis system performs a function on the scores to obtain a result (e.g., an evaluation rating, identified deficiencies, and/or auto-correction of deficiencies).
- the method continues at step 465 where the analysis system determines whether the result is equal to or greater than a target result (e.g., the evaluation rating is a certain value). If yes, the method continues at step 466 where the analysis system indicates that the system, or portion thereof, passes this particular test. If the result is less than the target result, the method continues at step 467 where the analysis system identifies vulnerabilities in the physical assets and/or in the operation.
- the method continues at step 468 where the analysis system determines, if possible, corrective measures for the identified vulnerabilities.
- the method continues at step 469 where the analysis system determines whether the corrective measures can be done automatically. If not, the method continues at step 471 where the analysis system reports the corrective measures. If yes, the method continues at step 470 where the analysis system auto-corrects the vulnerabilities.
- FIG. 64 is a logic diagram of another example of an analysis system analyzing a system or portion thereof.
- the method begins at step 480 where the analysis system determines system functions of the system, or portion thereof, to analyze.
- the method continues at step 481 where the analysis system ascertains implementation of the system, or portion thereof (e.g., system functions designed to be, and/or built, in the system).
- the method continues at step 482 where the analysis system correlates components of the system functions to components of the implementation (e.g., do the system functions of the actual system correlate with system functions designed/built to be in the system).
- the method continues at step 483 where the analysis system scores the components of the system functions in accordance with the mapped components of the implementation. For example, the analysis system scores how well the system functions of the actual system correlate with system functions designed/built to be in the system. The scoring may be based on one or more evaluation metrics (e.g., process, policy, procedure, automation, certification, and/or documentation).
- the analysis system performs a function on the scores to obtain a result (e.g., an evaluation rating, identified deficiencies, and/or auto-correction of deficiencies).
- the method continues at step 485 where the analysis system determines whether the result is equal to or greater than a target result (e.g., the evaluation rating is a certain value). If yes, the method continues at step 486 where the analysis system indicates that the system, or portion thereof, passes this particular test. If the result is less than the target result, the method continues at step 487 where the analysis system identifies vulnerabilities in the physical assets and/or in the implementation.
- the method continues at step 488 where the analysis system determines, if possible, corrective measures for the identified vulnerabilities.
- the method continues at step 489 where the analysis system determines whether the corrective measures can be done automatically. If not, the method continues at step 491 where the analysis system reports the corrective measures. If yes, the method continues at step 490 where the analysis system auto-corrects the vulnerabilities.
- FIG. 65 is a logic diagram of another example of an analysis system analyzing a system or portion thereof.
- the method begins at step 500 where the analysis system determines system functions of the system, or portion thereof, to analyze.
- the method continues at step 501 where the analysis system ascertains operation of the system, or portion thereof (e.g., the operations associated with the system functions).
- the method continues at step 502 where the analysis system correlates components of the system functions to components of operation (e.g., do the identified operations of the system functions correlate with the operations actually performed to provide the system functions).
- the method continues at step 503 where the analysis system scores the components of the system functions in accordance with the mapped components of the operation. For example, the analysis system scores how well the identified operations to support the system functions correlate with operations actually performed to support the system functions. The scoring may be based on one or more evaluation metrics (e.g. process, policy, procedure, automation, certification, and/or documentation).
- the method continues at step 504 where the analysis system performs a function on the scores to obtain a result (e.g., an evaluation rating, identified deficiencies, and/or auto-correction of deficiencies).
- the method continues at step 505 where the analysis system determines whether the result is equal to or greater than a target result (e.g., the evaluation rating is a certain value). If yes, the method continues at step 506 where the analysis system indicates that the system, or portion thereof, passes this particular test. If the result is less than the target result, the method continues at step 507 where the analysis system identifies vulnerabilities in the physical assets and/or in the operation.
- the method continues at step 508 where the analysis system determines, if possible, corrective measures for the identified vulnerabilities.
- the method continues at step 509 where the analysis system determines whether the corrective measures can be done automatically. If not, the method continues at step 511 where the analysis system reports the corrective measures. If yes, the method continues at step 510 where the analysis system auto-corrects the vulnerabilities.
- FIG. 66 is a logic diagram of another example of an analysis system analyzing a system or portion thereof.
- the method begins at step 520 where the analysis system determines security functions of the system, or portion thereof, to analyze.
- the method continues at step 521 where the analysis system ascertains implementation of the system, or portion thereof (e.g., security functions designed to be, and/or built, in the system).
- the method continues at step 522 where the analysis system correlates components of the security functions to components of the implementation (e.g., do the security functions of the actual system correlate with security functions designed/built to be in the system).
- the method continues at step 523 where the analysis system scores the components of the security functions in accordance with the mapped components of the implementation. For example, the analysis system scores how well the security functions of the actual system correlate with security functions designed/built to be in the system. The scoring may be based on one or more evaluation metrics (e.g., process, policy, procedure, automation, certification, and/or documentation).
- the analysis system performs a function on the scores to obtain a result (e.g., an evaluation rating, identified deficiencies, and/or auto-correction of deficiencies).
- the method continues at step 525 where the analysis system determines whether the result is equal to or greater than a target result (e.g., the evaluation rating is a certain value). If yes, the method continues at step 526 where the analysis system indicates that the system, or portion thereof, passes this particular test. If the result is less than the target result, the method continues at step 527 where the analysis system identifies vulnerabilities in the physical assets and/or in the implementation.
- the method continues at step 528 where the analysis system determines, if possible, corrective measures for the identified vulnerabilities.
- the method continues at step 529 where the analysis system determines whether the corrective measures can be done automatically. If not, the method continues at step 531 where the analysis system reports the corrective measures. If yes, the method continues at step 530 where the analysis system auto-corrects the vulnerabilities.
- FIG. 67 is a logic diagram of another example of an analysis system analyzing a system or portion thereof.
- the method begins at step 540 where the analysis system determines security functions of the system, or portion thereof, to analyze.
- the method continues at step 541 where the analysis system ascertains operation of the system, or portion thereof (e.g., the operations associated with the security functions).
- the method continues at step 542 where the analysis system correlates components of the security functions to components of operation (e.g., do the identified operations of the security functions correlate with the operations actually performed to provide the security functions).
- the method continues at step 543 where the analysis system scores the components of the security functions in accordance with the mapped components of the operation. For example, the analysis system scores how well the identified operations to support the security functions correlate with operations actually performed to support the security functions. The scoring may be based on one or more evaluation metrics (e.g. process, policy, procedure, automation, certification, and/or documentation).
- evaluation metrics e.g. process, policy, procedure, automation, certification, and/or documentation.
- the analysis system performs a function on the scores to obtain a result (e.g., an evaluation rating, identified deficiencies, and/or auto-correction of deficiencies).
- the method continues at step 545 where the analysis system determines whether the result is equal to or greater than a target result (e.g., the evaluation rating is a certain value). If yes, the method continues at step 546 where the analysis system indicates that the system, or portion thereof, passes this particular test. If the result is less than the target result, the method continues at step 547 where the analysis system identifies vulnerabilities in the physical assets and/or in the operation.
- the method continues at step 548 where the analysis system determines, if possible, corrective measures for the identified vulnerabilities.
- the method continues at step 549 where the analysis system determines whether the corrective measures can be done automatically. If not, the method continues at step 551 where the analysis system reports the corrective measures. If yes, the method continues at step 550 where the analysis system auto-corrects the vulnerabilities.
- FIG. 68 is a logic diagram of an example of an analysis system determining an issue detection rating for a system, or portion thereof. Issue detection rating is also referred to herein as detect rating and/or detection rating.
- An issue includes an event, incident, alert, anomalous activity and/or behavior, and any other system occurrence of interest.
- the method begins at step 560 where the analysis system determines a system aspect (see FIG. 69 ) of a system for an issue detection evaluation.
- An issue detection evaluation includes evaluating the system aspect's anomalies and event awareness, the system aspect's continuous security monitoring, and/or the system aspect's issue detection process analysis.
- An evaluation perspective is an understanding perspective, an implementation perspective, an operation (e.g., an operational performance) perspective, or a self-analysis perspective.
- An understanding perspective is with regard to how well the assets, system functions, and/or security functions are understood.
- An implementation perspective is with regard to how well the assets, system functions, and/or security functions are implemented.
- An operation perspective (also referred to herein as a performance perspective) is with regard to how well the assets, system functions, and/or security functions operate.
- a self-analysis (or self-evaluation) perspective is with regard to how well the system self-evaluates the understanding, implementation, and/or operation of assets, system functions, and/or security functions.
- the method continues at step 562 where the analysis system determines at least one evaluation viewpoint for use in performing the issue detection evaluation on the system aspect.
- An evaluation viewpoint is a disclosed viewpoint, a discovered viewpoint, or a desired viewpoint.
- a disclosed viewpoint is with regard to analyzing the system aspect based on the disclosed data.
- a discovered viewpoint is with regard to analyzing the system aspect based on the discovered data.
- a desired viewpoint is with regard to analyzing the system aspect based on the desired data.
- the method continues at step 563 where the analysis system obtains issue detection data regarding the system aspect in accordance with the at least one evaluation perspective and the at least one evaluation viewpoint.
- Issue detection data is data obtained about the system aspect. The obtaining of issue detection data will be discussed in greater detail with reference to FIGS. 72-79 .
- the method continues at step 564 where the analysis system calculates an issue detection rating as a measure of system issue detection maturity for the system aspect based on the issue detection data, the at least one evaluation perspective, the at least one evaluation viewpoint, and at least one evaluation rating metric.
- An evaluation rating metric is a process rating metric, a policy rating metric, a procedure rating metric, a certification rating, a documentation rating metric, or an automation rating metric. The calculating of an issue detection rating will be discussed in greater detail with reference to FIGS. 80-108 .
- maturity refers to level of development, level of operation reliability, level of operation predictability, level of operation repeatability, level of understanding, level of implementation, level of advanced technologies, level of operation efficiency, level of proficiency, and/or state-of-the-art level.
- FIG. 69 is a logic diagram of a further example of an analysis system determining an issue detection rating for a system, or portion thereof; in particular, for determining a system aspect.
- the method begins at step 565 where the analysis system determines at least one system element of the system.
- a system element includes one or more system assets, each of which is a physical asset and/or a conceptual asset.
- a physical asset is a computing entity, a computing device, a user software application, a system software application (e.g., operating system, etc.), a software tool, a network software application, a security software application, a system monitoring software application, and the like.
- a system element includes an organization identifier, a division identifier, a department identifier, a group identifier, a sub-group identifier, a device identifier, a software identifier, and/or an internet protocol address identifier.
- the method continues at step 566 where the analysis system determines at least one system criteria of the system.
- a system criteria is system guidelines, system requirements, system design, system build, or resulting system. Evaluation based on system criteria assists with determining where a deficiency originated and/or how it might be corrected. For example, if the system requirements were lacking a requirement for detecting a particular type of threat, the lack of system requirements could be identified and corrected.
- the method continues at step 567 where the analysis system determines at least one system mode of the system.
- a system mode is assets, system functions, or system security.
- the method continues at step 568 where the analysis system determines the system aspect based on the at least one system element, the at least one system criteria, and the at least one system mode.
- a system aspect is determined to be for assets with respect to system requirements of system elements in a particular division.
- a system aspect is determined to be for system functions with respect to system design and/or system build of system elements in a particular division.
- a system aspect is determined to be for assets, system functions, and security functions with respect to guidelines, system requirements, system design, system build, and resulting system of system elements in the organization (e.g., the entire system/enterprise).
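- One hypothetical way to represent a system aspect as the combination of system elements, system criteria, and system modes of FIG. 69 is sketched below; the data structure and example identifiers are assumptions, not part of the description.

```python
# Illustrative sketch: a system aspect as elements + criteria + modes.
# Field names and example identifiers are assumptions for illustration only.

from dataclasses import dataclass, field
from typing import List

@dataclass
class SystemAspect:
    elements: List[str]                  # e.g., division, department, or device identifiers
    criteria: List[str] = field(default_factory=lambda: ["resulting system"])
    modes: List[str] = field(default_factory=lambda: ["assets"])

# A system aspect for assets with respect to system requirements of a division.
aspect = SystemAspect(elements=["division-7"], criteria=["system requirements"],
                      modes=["assets"])

# A broader aspect covering the whole organization across all criteria and modes.
enterprise = SystemAspect(
    elements=["organization"],
    criteria=["guidelines", "system requirements", "system design",
              "system build", "resulting system"],
    modes=["assets", "system functions", "security functions"],
)
print(aspect, enterprise, sep="\n")
```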
- FIG. 70 is a schematic block diagram of an example of an analysis system 10 determining an issue detection rating for a system, or portion thereof.
- the control module 256 receives an input 271 from the system user interface module 81 loaded on a system 11 .
- the input 271 identifies the system aspect to be analyzed and how it is to be analyzed.
- the control module 256 determines one or more system elements, one or more system criteria, and one or more system modes based on the system aspect.
- the control module 256 also determines one or more evaluation perspectives, one or more evaluation viewpoints, and/or one or more evaluation rating metrics from the input.
- the input 271 could specify the evaluation perspective(s), the evaluation viewpoint(s), the evaluation rating metric(s), and/or analysis output(s).
- the input 271 indicates a desired analysis output (e.g., an evaluation rating, deficiencies identified, and/or deficiencies auto-corrected). From this input, the control module 256 determines the evaluation perspective(s), the evaluation viewpoint(s), the evaluation rating metric(s) to fulfill the desired analysis output.
- control module 256 generates data gathering parameters 263 , pre-processing parameters 264 , data analysis parameters 265 , and/or evaluation parameters 266 as discussed with reference to FIG. 35 .
- the data input module 250 obtains issue detection data in accordance with the data gathering parameters 263 from the data extraction module(s) 80 loaded on the system 11 , other external feeds 258 , and/or system proficiency data 260 .
- the pre-processing module 251 processes the issue detection data in accordance with the pre-processing parameters 264 to produce pre-processed data 414 .
- the issue detection data and/or the pre-processed data 414 may be stored in the database 275 .
- the data analysis module 252 calculates an issue detection rating 219 based on the pre-processed data 414 in accordance with the data analysis parameters 265 and the analysis modeling 268 .
- the data output module 255 outputs the issue detection rating 219 as the output 269 .
- the system user interface module 81 renders a graphical representation of the issue detection rating and the database 275 stores it.
- the evaluation processing module 254 evaluates the issue detection rating 219 and may further evaluate the pre-processed data to identify one or more deficiencies 232 . In addition, the evaluation processing module 254 determines whether a deficiency can be auto-corrected and, if so, determines the auto-correction 235 . In this instance, the data output module 255 outputs the issue detection rating 219 , the deficiencies 232 , and the auto-corrections 235 as output 269 to the database 275 , the system user interface module 81 , and the remediation module 257 .
- the system user interface module 81 renders a graphical representation of the issue detection rating, the deficiencies, and/or the auto-corrections.
- the database 275 stores the issue detection rating, the deficiencies, and/or the auto-corrections.
- the remediation module 257 processes the auto-corrections 235 within the system 11 , verifies the auto-corrections, and then records the execution of the auto-correction and its verification.
- FIG. 71 is a diagram of an example of system aspects, evaluation aspects, evaluation rating metrics, and analysis system output options of an analysis system 10 for analyzing a system 11 , or portion thereof.
- the analysis system 10 is evaluating, with respect to process, policy, procedure, certification, documentation, and/or automation, the understanding of the system build for issue detection (e.g., the “detect” evaluation category) security functions of an organization (e.g., the entire enterprise) based on disclosed data to produce an evaluation rating.
- the analysis system 10 obtains disclosed data from the system regarding the system build associated with the issue detection security functions of the organization. From the disclosed data, the analysis system renders a first evaluation rating for the understanding of the system build for issue detection security functions with respect to an evaluation rating metric of process. The analysis system renders a second evaluation rating for the understanding of the system build for issue detection security functions with respect to an evaluation rating metric of policy. The analysis system renders a third evaluation rating for the understanding of the system build for issue detection security functions with respect to an evaluation rating metric of procedure. The analysis system renders a fourth evaluation rating for the understanding of the system build for issue detection security functions with respect to an evaluation rating metric of certification.
- the analysis system renders a fifth evaluation rating for the understanding of the system build for issue detection security functions with respect to an evaluation rating metric of documentation.
- the analysis system renders a sixth evaluation rating for the understanding of the system build for issue detection security functions with respect to an evaluation rating metric of automation.
- the analysis system 10 generates the issue detection rating for the understanding of the system build for issue detection security functions based on the six evaluation ratings.
- each of the six evaluation rating metrics has a maximum potential rating (e.g., 50 for process, 20 for policy, 15 for procedure, 10 for certification, 20 for documentation, and 20 for automation), which yields a maximum total rating of 135.
- the first evaluation rating based on process is 35; the second evaluation rating based on policy is 10; the third evaluation rating based on procedure is 10; the fourth evaluation rating based on certification is 10; the fifth evaluation rating based on documentation is 15; and the sixth evaluation rating based on automation is 20, resulting in a cumulative score of 100 out of a possible 135.
- This rating indicates that there is room for improvement and provides a basis for identifying deficiencies.
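- The arithmetic of this FIG. 71 example can be reproduced directly from the stated per-metric ratings and maximum potential ratings; the short sketch below only restates those numbers in code form.

```python
# Worked sketch of the FIG. 71 example: per-metric evaluation ratings are summed
# against their maximum potential ratings to give 100 out of a possible 135.

max_rating = {"process": 50, "policy": 20, "procedure": 15,
              "certification": 10, "documentation": 20, "automation": 20}
rating = {"process": 35, "policy": 10, "procedure": 10,
          "certification": 10, "documentation": 15, "automation": 20}

total = sum(rating.values())          # 100
possible = sum(max_rating.values())   # 135
print(f"{total} out of {possible}")   # 100 out of 135
```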
- FIG. 72 is a logic diagram of a further example of an analysis system determining an issue detection rating for a system, or portion thereof; in particular, for obtaining issue detection data, which is a collection of issue detection information.
- the method begins at step 570 where the analysis system determines data gathering parameters regarding the system aspect in accordance with the at least one evaluation perspective, the at least one evaluation viewpoint, and the at least one evaluation rating metric. The generation of data gathering parameters will be discussed in greater detail with reference to FIG. 74 .
- the method continues at step 571 where the analysis system identifies system elements of the system aspect based on the data gathering parameters and obtains issue detection information from the system elements in accordance with the data gathering parameters.
- the obtaining of the issue detection information is discussed in greater detail with reference to FIG. 73 .
- the method continues at step 572 where the analysis system records the issue detection information from the system elements to produce the issue detection data.
- the analysis system stores the issue detection information in the database.
- the analysis system temporarily stores the issue detection information in the data input module.
- the analysis system uses some form of retaining a record of the issue detection information. Examples of issue detection information are provided with reference to FIGS. 75-79 .
- FIG. 73 is a logic diagram of an analysis system determining an issue detection rating for a system, or portion thereof; in particular, for obtaining the issue detection information.
- the method begins at step 573 where the analysis system probes (e.g., push and/or pull information requests) a system element in accordance with the data gathering parameters to obtain a system element data response.
- the analysis system would do this for most, if not all, of the system elements of the system aspect (e.g., the system, or portion of the system, being evaluated).
- the method continues at step 574 where the analysis system identifies vendor information from the system element data response.
- vendor information includes vendor name, a model name, a product name, a serial number, a purchase date, and/or other information to identify the system element.
- the analysis system tags the system element data response with the vendor information.
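- The following is a rough Python sketch of the probe-and-tag flow of FIG. 73: query a system element per the data gathering parameters and tag the response with identifying vendor information. The data structures, field names, and helper functions are illustrative assumptions, not the patented implementation.

```python
from dataclasses import dataclass, field

@dataclass
class SystemElementResponse:
    raw: dict                                  # data returned by the push/pull probe
    vendor_info: dict = field(default_factory=dict)

def probe_element(element: dict, gathering_params: dict) -> SystemElementResponse:
    """Issue a push/pull information request to one system element (assumed interface)."""
    # A real deployment would query an agent, SNMP endpoint, or API; here we simply
    # echo the element attributes that match the requested fields.
    wanted = gathering_params.get("fields", list(element))
    return SystemElementResponse(raw={k: v for k, v in element.items() if k in wanted})

def tag_with_vendor_info(response: SystemElementResponse) -> SystemElementResponse:
    """Identify vendor information in the response and tag the response with it."""
    for key in ("vendor_name", "model_name", "product_name", "serial_number", "purchase_date"):
        if key in response.raw:
            response.vendor_info[key] = response.raw[key]
    return response

# Example: probe a firewall element and tag its response.
resp = tag_with_vendor_info(probe_element(
    {"vendor_name": "ExampleCo", "model_name": "FW-100", "serial_number": "SN123", "fw_rules": 42},
    {"fields": ["vendor_name", "model_name", "serial_number"]},
))
```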
- FIG. 74 is a logic diagram of a further example of an analysis system determining an issue detection rating for a system, or portion thereof; in particular, for determining the data gathering parameters for the evaluation category of issue detection (i.e., detect).
- the method begins at step 576 where the analysis system, for the system aspect, ascertains identity of one or more system elements of the system aspect. For a system element of the system aspect, the method continues at step 577 where the analysis system determines a first data gathering parameter based on at least one system criteria (e.g., guidelines, system requirements, system design, system build, and/or resulting system) of the system aspect. For example, if the determined selected criteria is system requirements, then the first data gathering parameter would be to search for system requirement information.
- step 578 the analysis system determines a second data gathering parameter based on at least one system mode (e.g., assets, system functions, and/or security functions). For example, if the determined selected mode is system functions, then the second data gathering parameter would be to search for system function information.
- step 579 the analysis system determines a third data gathering parameter based on the at least one evaluation perspective (e.g., understanding, implementation, operation, and/or self-evaluation). For example, if the determined selected evaluation perspective is operation, then the third data gathering parameter would be to search for information regarding operation of the system aspect.
- step 580 the analysis system determines a fourth data gathering parameter based on the at least one evaluation viewpoint (e.g., disclosed data, discovered data, and/or desired data). For example, if the determined selected evaluation viewpoint is disclosed and discovered data, then the fourth data gathering parameter would be to obtain disclosed data and to obtain discovered data.
- step 581 the analysis system determines a fifth data gathering parameter based on the at least one evaluation rating metric (e.g., process, policy, procedure, certification, documentation, and/or automation). For example, if the determined selected evaluation rating metric is process, policy, procedure, certification, documentation, and automation, then the fifth data gathering parameter would be to search for data regarding process, policy, procedure, certification, documentation, and automation.
- the analysis system generates the data gathering parameters from the first through fifth data gathering parameters.
- the data gathering parameters include a search for information regarding processes, policies, procedures, certifications, documentation, and/or automation (fifth parameter) pertaining to issue detection (selected evaluation category) system requirements (first parameter) for system operation (third parameter) of system functions (second parameter) from disclosed and discovered data (fourth parameter).
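- As an illustration of how the first through fifth data gathering parameters might be assembled into a single search specification, consider the hedged Python sketch below; the dictionary layout and the build_data_gathering_parameters helper are assumptions for exposition only.

```python
def build_data_gathering_parameters(criteria, mode, perspective, viewpoints, metrics,
                                    evaluation_category="issue detection"):
    """Combine the first through fifth data gathering parameters into one search spec."""
    return {
        "category": evaluation_category,   # selected evaluation category, e.g., issue detection (detect)
        "criteria": criteria,              # first parameter, e.g., "system requirements"
        "mode": mode,                      # second parameter, e.g., "system functions"
        "perspective": perspective,        # third parameter, e.g., "operation"
        "viewpoints": list(viewpoints),    # fourth parameter, e.g., ["disclosed", "discovered"]
        "metrics": list(metrics),          # fifth parameter, e.g., ["process", "policy", ...]
    }

params = build_data_gathering_parameters(
    "system requirements", "system functions", "operation",
    ["disclosed", "discovered"],
    ["process", "policy", "procedure", "certification", "documentation", "automation"],
)
```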
- FIG. 75 is a diagram of an example of issue detection data for use by an analysis system to generate an issue detection rating for a system, or portion thereof.
- the sub-categories and/or sub-sub categories are cues for determining what data to gather for an issue detection evaluation.
- the sub-categories include anomalies and event awareness, continuous security monitoring, and issue detection processes analysis.
- the issue detection data may include a collection of anomalies and event awareness information, continuous security monitoring information, and issue detection processes analysis information.
- the anomalies and event awareness sub-category includes the sub-sub categories of baseline of network operations and expected data flows establishment and management, detected event analysis, event data aggregation and correlation, event impact determination, and incident alert thresholds establishment.
- the anomalies and event awareness information includes information regarding how these sub-sub categories are implemented (e.g., understood to be implemented, actually implemented, etc.).
- the continuous security monitoring sub-category includes the sub-sub categories of network monitoring, physical environment monitoring, personnel activity monitoring, malicious code detection, unauthorized mobile code detection, external service provider activity monitoring, monitoring for unauthorized personnel, monitoring for unauthorized connections, monitoring for unauthorized devices, monitoring for unauthorized software, and vulnerability scanning.
- the continuous security monitoring information includes information regarding how these sub-sub categories are implemented (e.g., understood to be implemented, actually implemented, etc.).
- the issue detection processes analysis sub-category includes the sub-sub categories of roles and responsibilities for issue detection are defined, issue detection activities comply with all applicable requirements (i.e., compliance verification), issue detection process testing, event detection information communication, and issue detection processes improvement.
- the issue detection processes analysis information includes information regarding how these sub-sub categories are implemented (e.g., understood to be implemented, actually implemented, etc.).
- FIG. 76 is a diagram of another example of issue detection data for use by an analysis system to generate an issue detection rating for a system, or portion thereof.
- the issue detection data includes one or more diagrams, one or more reports, one or more logs, one or more application information records, one or more user information records, one or more device information records, one or more system tool records, one or more system plans, and/or one or more other documents regarding a system aspect.
- a diagram is a data flow diagram, an HLD diagram, an LLD diagram, a DLD diagram, an operation flowchart, a security operations center (SOC) processes diagram, a software architecture diagram, a hardware architecture diagram, and/or other diagram regarding the design, build, and/or operation of the system, or a portion thereof.
- a report may be a compliance report, a security report (e.g., risk assessment, user behavioral analysis, exchange traffic and use, etc.), a sales report, a trend report, an inventory summary, or any kind of report generated by an aspect of the system and/or an outside source regarding the aspect of the system.
- a compliance report is a report created to ensure system compliance with industry standards, laws, rules, and regulations set by government agencies and regulatory bodies.
- a compliance report may include information to ensure compliance with the Payment Card Industry Data Security Standard (PCI-DSS), the Health Insurance Portability and Accountability Act (HIPAA), etc.
- a security report summarizes a system's weaknesses based on security scan results and other analytic tools.
- Logs include machine data generated by applications or the infrastructure used to run applications. Logs create a record of events that happen on system components such as network devices (e.g., firewall, router, switch, load balancer, etc.), user applications, databases, servers, and operating systems.
- An event is a change in the normal behavior of a given system, process, environment, or workflow. Events can be positive, neutral, or negative. An average organization experiences thousands of events every day (e.g., an email, update to firewalls, etc.).
- Logs include log files, event logs, transaction logs, message logs, etc.
- a log file is a file that records events that occur in an operating system, other software, or between different users of communication software.
- a transaction log is a file of communications between a system and the users of that system, or a data collection method that automatically captures the type, content, or time of transactions made by a person from a terminal with that system.
- an event log provides information about network traffic, usage, and other conditions. For example, an event log may capture all logon sessions to a network, along with account lockouts, failed password attempts, application events, etc. An event log stores this data for retrieval by security professionals or automated security systems.
- An application information record is a record regarding one or more applications of the system.
- an application information record includes a list of user applications and/or a list of system applications (e.g., operating systems) of the system or a portion thereof.
- a list of applications includes vendor information (e.g., name, address, contact person information, etc.), a serial number, a software description, a software model number, a version, a generation, a purchase date, an installation date, a service date, use and/or user information, and/or other mechanism for identifying applications.
- a user information record is a record regarding one or more users of the system.
- a user information record may include one or more lists of users and affiliations of users with the system, or portion thereof.
- a user information record includes a log of use of the one or more assets by a user or others.
- a user information record may also include privileges and/or restrictions imposed on the use of the one or more assets (e.g., access control lists).
- a user information record may include roles and responsibilities of personnel (e.g., from a personnel handbook, access control list, data flow information, human resources documentation, and/or other system information).
- a device information record is a record regarding one or more devices of the system.
- a device information record may include a list of network devices (e.g., hardware and/or software), a list of user devices (e.g., hardware and/or software), a list of security devices (e.g., hardware and/or software), a list of servers (e.g., hardware and/or software), and/or a list of any other devices of the system or a portion thereof.
- Each list includes device information pertaining to the devices in the list.
- the device information may include vendor information (e.g., name, address, contact person information, etc.), a serial number, a device description, a device model number, a version, a generation, a purchase date, an installation date, a service date, and/or other mechanism for identifying a device.
- the device information may also include purchases, installation notes, maintenance records, data use information, and age information.
- a purchase is a purchase order, a purchase fulfillment document, a bill of lading, a quote, a receipt, and/or other information regarding purchases of assets of the system, or a portion thereof.
- An installation note is a record regarding the installation of an asset of the system, or portion thereof.
- a maintenance record is a record regarding each maintenance service performed on an asset of the system, or portion thereof.
- a list of user devices and associated user device information may include (e.g., in tabular form) user ID, user level, user role, hardware (HW) information, IP address, user application software (SW) information, device application software (SW) information, device use information, and/or device maintenance information.
- a user ID may include an individual identifier of a user and may further include an organization ID, a division ID, a department ID, a group ID, and/or a sub-group ID associated with the user.
- a user level (e.g., C-Level, director level, general level) includes options for data access privileges, data access restrictions, network access privileges, network access restrictions, server access privileges, server access restrictions, storage access privileges, storage access restrictions, required user applications, required device applications, and/or prohibited user applications.
- a user role (e.g., project manager, engineer, quality control, administration) includes further options for data access privileges, data access restrictions, network access privileges, network access restrictions, server access privileges, server access restrictions, storage access privileges, storage access restrictions, required user applications, required device applications, and/or prohibited user applications.
- the HW information field may store information regarding the hardware of the device.
- the HW information includes information regarding a computing device such as vendor information, a serial number, a description of the computing device, a computing device model number, a version of the computing device, a generation of the computing device, and/or other mechanism for identifying a computing device.
- the HW information may further store information regarding the components of the computing device such as the motherboard, the processor, video graphics card, network card, connection ports, and/or memory.
- the user application SW information field may store information regarding the user applications installed on the user's computing device.
- the user application SW information includes information regarding a SW program (e.g., spreadsheet, word processing, database, email, etc.) such as vendor information, a serial number, a description of the program, a program model number, a version of the program, a generation of the program, and/or other mechanism for identifying a program.
- the device SW information may include similar information, but for device applications (e.g., operating system, drivers, security, etc.).
- the device use data field may store data regarding the use of the device (e.g., use of the computing device and software running on it).
- the device use data includes a log of use of a user application, or program (e.g., time of day, duration of use, date information, etc.).
- the device use data includes a log of data communications to and from the device.
- the device use data includes a log of network accesses.
- the device use data includes a log of server access (e.g., local and/or remote servers).
- the device use data includes a log of storage access (e.g., local and/or remote memory).
- the maintenance field stores data regarding the maintenance of the device and/or its components.
- the maintenance data includes a purchase date, purchase information, an installation date, installation notes, a service date, services notes, and/or other maintenance data of the device and/or its components.
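- The user device information fields listed above can be pictured as a simple record structure. The hypothetical Python dataclass below mirrors those fields; the schema and field names are illustrative, not a required format.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class UserDeviceRecord:
    user_id: str                                        # may embed org/division/department/group IDs
    user_level: str                                     # e.g., "C-Level", "director", "general"
    user_role: str                                      # e.g., "project manager", "engineer"
    hw_info: dict = field(default_factory=dict)         # vendor, serial number, model, components
    ip_address: Optional[str] = None
    user_app_sw: list = field(default_factory=list)     # installed user applications
    device_app_sw: list = field(default_factory=list)   # operating system, drivers, security software
    device_use: list = field(default_factory=list)      # logs of application, network, server, storage use
    maintenance: list = field(default_factory=list)     # purchase, installation, and service records
```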
- device information records include application information
- the application information records may include at least a portion of the device information records and vice versa.
- a system tool record is a record regarding one or more tools of the system.
- a system tool record includes a list of system tools of the system or a portion thereof and information related to the system tools.
- System tools may include one or more security tools, one or more data segmentation tools, one or more boundary detection tools, one or more data protection tools, one or more infrastructure management tools, one or more encryption tools, one or more exploit protection tools, one or more malware protection tools, one or more identity management tools, one or more access management tools, one or more system monitoring tools, one or more verification tools, and/or one or more vulnerability management tools.
- a system tool may also be an infrastructure management tool, a network monitoring tool, a network strategy and planning tool, a network managing tool, a Simple Network Management Protocol (SNMP) tool, a telephony monitoring tool, a firewall monitoring tool, a bandwidth monitoring tool, an IT asset inventory management tool, a network discovery tool, a network asset discovery tool, a software discovery tool, a security discovery tool, an infrastructure discovery tool, Security Information & Event Management (SIEM) tool, a data crawler tool, and/or other type of tool to assist in discovery of assets, functions, security issues, implementation of the system, and/or operation of the system.
- System tool information may include vendor information (e.g., name, address, contact person information, etc.), a serial number, a system tool description, a system tool model number, a version, a generation, a purchase date, an installation date, a service date, and/or other mechanism for identifying a system tool.
- System plans include business plans, operational plans, system security plans, system design specifications, etc.
- a system security plan is a document that identifies the functions and features of a system, including all of the hardware and software installed on the system.
- System design specifications may include security specifications, hardware specifications, software specifications, data flow specifications, business operation specifications, build specifications, and/or other specifications regarding the system, or a portion thereof.
- Training materials may include at least a portion of a personnel handbook, presentation slides from a cybersecurity training session, links to videos and/or audio tutorials, handouts from a cybersecurity training session, user manuals regarding security devices and/or tools, training email communications, etc.
- FIG. 77 is another example of issue detection data for use by an analysis system to generate an issue detection rating for a system, or portion thereof.
- disclosed anomalies and event awareness information is gathered from various disclosed data sources of an organization as a portion of the disclosed issue detection information collected as issue detection data.
- disclosed organization diagrams relevant to anomalies and event awareness include a network operations diagram and expected data flow diagrams for users and systems of the organization.
- Disclosed system tools relevant to anomalies and event awareness include a network monitoring tool and a log management tool.
- the network monitoring tool may be a simple network management protocol (SNMP) tool.
- monitored devices are installed with agent software, and a network management system monitors each device and communicates information from those devices to an administrator.
- the network monitoring tool may include NetFlow monitoring.
- NetFlow is a standard for collecting network traffic statistics (e.g., IP addresses, ports, protocol, timestamp, number of bytes, packets, flags, etc.) from routers, switches, specialized network probes, and/or other network devices.
- a log management tool generally performs a variety of functions such as log collection, central aggregation, storage and retention of logs, log rotation, analysis, and log reporting.
- the log management tool may be a Security Information & Event Management (SIEM) tool.
- a SIEM tool allows an organization to monitor network activity and implements a variety of security techniques such as log management, security event management, security information management, and security event correlation.
- Disclosed device information and application information relevant to anomalies and event awareness include a list of security devices (software and/or hardware), a list of network devices (software and/or hardware), a list of system applications (e.g., operating systems), a list of user applications, a list of servers (software and/or hardware), and a list of storage devices (software and/or hardware).
- security devices may include one or more log forwarding devices and a centralized log server required for a log management tool to collect and store data from various sources.
- a network packet collector is a security device required to collect network traffic for input to the network monitoring tool.
- Portions of the disclosed system security plan relevant to anomalies and event awareness include security analyst team roles and responsibilities as well as any specific processes, procedures, plans, data flows, policies, and requirements related to detecting anomalies and events.
- the security analyst team roles and responsibilities may include day to day activities of each team member, verification processes, information sharing requirements, hierarchy of roles, user privileges, and reporting requirements.
- the disclosed system security plan discloses a network monitoring process, a log management process, and event impact and incident alert processes.
- a network monitoring process may include a requirement to send network monitoring alerts generated by the network monitoring tool to a security analyst team for further analysis and remediation.
- a log management process may include a requirement to send alerts generated by the log management tool to a security analyst team for further analysis and remediation.
- Event impact and incident alert processes may include log analysis procedures to determine event impact and methods to establish appropriate incident alert settings.
- An incident is a change in a system that negatively impacts the organization. For example, an incident might take place when a cyberattack occurs.
- An incident alert is a notification of a cybersecurity incident.
- Disclosed reports relevant to anomalies and event awareness include security reports generated by one or more of the log management tool, the network monitoring tool, and the security analyst team. Additionally, compliance reports may contain relevant information to anomalies and event awareness such as previous security issues and application of remedies. Numerous other pieces of disclosed anomalies and event awareness information may be gathered from various information sources of the organization. The above examples are far from exhaustive.
- FIG. 78 is another example of issue detection data for use by an analysis system to generate an issue detection rating for a system, or portion thereof.
- disclosed continuous security monitoring information is gathered from various disclosed data sources of an organization as a portion of the disclosed issue detection information recorded as issue detection data.
- one or more portions of a disclosed system security plan relevant to continuous security monitoring include specific processes (e.g., network monitoring processes), procedures, plans, data flows, policies (e.g., physical environment threat protections), and requirements related to continuous security monitoring.
- the system security plan includes policies that all user devices must have antivirus software installed, that the antivirus software be centrally managed, and that unknown networks are prohibited.
- a network monitoring process may include a requirement to send alerts from a network monitoring tool to a security analyst team for further analysis and remediation.
- physical environment threat protections may include policies against eating or drinking around equipment, fire hazard avoidance policies, maintenance of building security and environmental threat alarms, etc.
- Disclosed device information relevant to continuous security monitoring includes a list of security devices such as firewalls and intrusion detection systems, antivirus software and associated devices, malware protection software and associated devices, automatic update and automatic installation features of required software, and on-site alarm systems.
- Disclosed system tools relevant to continuous security monitoring include a network monitoring tool and any tools pertaining to use and deployment of malware protection software.
- Other examples of continuous security monitoring tools not disclosed here may include a network behavior anomaly detection (NBAD) tool for personnel and network monitoring, a cloud access security broker (CASB) tool for external service provider activity monitoring, etc.
- NBAD continuously tracks network characteristics such as traffic volume, bandwidth use, and protocol use in real time and generates an alarm if a strange event or trend is detected indicative of a potential security threat.
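- For intuition only, a toy Python sketch of the kind of threshold check an NBAD tool applies to a tracked characteristic such as traffic volume is shown below; real products use much richer baselining, and this is not any specific vendor's algorithm.

```python
from statistics import mean, pstdev

def traffic_alarm(history: list[float], current: float, k: float = 3.0) -> bool:
    """Alarm if the current sample deviates from the baseline by more than k standard deviations."""
    baseline, spread = mean(history), pstdev(history)
    return spread > 0 and abs(current - baseline) > k * spread

# Steady traffic around 100 MB/min, then a sudden spike.
print(traffic_alarm([98, 102, 99, 101, 100, 97, 103], 240))  # True -> raise an alarm
```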
- CASB is on-premises or cloud-hosted software placed between cloud service consumers and cloud service providers to enforce security, compliance, and governance policies for cloud applications.
- Disclosed training materials relevant to continuous security monitoring include documents related to phishing attack awareness training, email and website security training materials, physical environment security training materials, dates and times of cybersecurity training sessions, and the participants involved in the disclosed cybersecurity training sessions.
- Disclosed user information and application information relevant to continuous security monitoring may include user affiliations with devices (hardware and/or software), access control lists, and personnel roles. For example, based on a user's role within the organization, the user may have certain software requirements, access privileges, access restrictions, etc. Numerous other pieces of disclosed continuous security monitoring information may be gathered from various information sources of the organization. The above examples are far from exhaustive.
- FIG. 79 is another example of issue detection data for use by an analysis system to generate an issue detection rating for a system, or portion thereof.
- disclosed issue detection processes analysis information is gathered from various disclosed data sources of an organization as a portion of the disclosed issue detection information recorded as issue detection data.
- portions of the system security plan relevant to issue detection processes analysis include specific processes, procedures, plans, data flows, policies, and requirements generally or specifically related to issue detection.
- the system security plan includes documented issue detection policies and processes and issue detection requirements (e.g., based on compliance requirements).
- System diagrams relevant to issue detection processes analysis include issue detection flowcharts that visually illustrate processes described in the system security plan.
- System tools relevant to issue detection processes include verification tools for testing and improving the issue detection processes of the system.
- Portions of personnel handbooks and/or human resources (HR) documents relevant to issue detection processes analysis include the roles and responsibilities of those involved in issue detection processes and any personnel guidelines.
- the portions of personnel handbooks and/or HR documents include the roles and responsibilities of the IT department, the roles and responsibilities of the security analyst team, the roles and responsibilities of the legal department, and issue detection process guidelines.
- Disclosed training materials relevant to issue detection processes analysis include all issue detection training materials, dates and times of cybersecurity training sessions, and the participants involved in the disclosed cybersecurity training sessions.
- Disclosed reports and internal documents relevant to issue detection processes include compliance reports and personnel communications.
- Compliance reports include details regarding issue detection processes analysis conducted and the results of such analysis.
- Personnel communications may include any relevant issue detection process analysis information.
- an issue detection process may specify that the security analyst team must notify legal of incidents.
- An email communication from the security analyst team to the legal department regarding an incident is a disclosed personnel communication that demonstrates implementation of this process.
- Numerous other pieces of disclosed issue detection processes analysis information may be gathered from various information sources of the organization. The above examples are far from exhaustive.
- FIG. 80 is a logic diagram of a further example of an analysis system determining an issue detection rating for a system, or portion thereof; in particular, for calculating the issue detection rating.
- the method begins at step 590 where the analysis system selects and performs at least two of steps 591-596.
- the analysis system generates a process rating for the system aspect based on the issue detection data and process analysis parameters and in accordance with the at least one evaluation perspective, the at least one evaluation viewpoint, and process as the evaluation rating metric.
- the analysis system generates a documentation rating for the system aspect based on the issue detection data and documentation analysis parameters and in accordance with the at least one evaluation perspective, the at least one evaluation viewpoint, and documentation as the evaluation rating metric.
- the analysis system generates an automation rating for the system aspect based on the issue detection data and automation analysis parameters and in accordance with the at least one evaluation perspective, the at least one evaluation viewpoint, and automation as the evaluation rating metric.
- the analysis system generates a policy rating for the system aspect based on the issue detection data and policy analysis parameters and in accordance with the at least one evaluation perspective, the at least one evaluation viewpoint, and policy as the evaluation rating metric.
- the analysis system generates a certification rating for the system aspect based on the issue detection data and certification analysis parameters and in accordance with the at least one evaluation perspective, the at least one evaluation viewpoint, and certification as the evaluation rating metric.
- the analysis system generates a procedure rating for the system aspect based on the issue detection data and procedure analysis parameters and in accordance with the at least one evaluation perspective, the at least one evaluation viewpoint, and procedure as the evaluation rating metric.
- the analysis system generates the issue detection rating based on the selected and performed at least two of the process rating, the policy rating, the documentation rating, the automation rating, the procedure rating, and the certification rating.
- the issue detection rating is a summation of the at least two individual evaluation metric ratings.
- the analysis system performs a mathematical and/or logical function (e.g., a weighted average, standard deviation, statistical analysis, trending, etc.) on the at least two individual evaluation metric ratings to produce the issue detection rating.
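- A minimal Python sketch of this combining step is shown below: merge at least two per-metric evaluation ratings by summation or by a weighted average. The weight values and the function name are illustrative assumptions.

```python
from typing import Optional

def issue_detection_rating(metric_ratings: dict, weights: Optional[dict] = None) -> float:
    """metric_ratings maps metric name -> rating; at least two entries are expected."""
    if len(metric_ratings) < 2:
        raise ValueError("at least two individual evaluation metric ratings are required")
    if weights is None:
        return float(sum(metric_ratings.values()))  # simple summation
    total_weight = sum(weights.get(m, 1.0) for m in metric_ratings)
    return sum(r * weights.get(m, 1.0) for m, r in metric_ratings.items()) / total_weight

print(issue_detection_rating({"process": 35, "policy": 10, "automation": 20}))  # 65.0 (summation)
print(issue_detection_rating({"process": 35, "policy": 10}, {"process": 2.0}))  # ~26.7 (weighted average)
```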
- FIG. 81 is a schematic block diagram of an embodiment of a scoring module of the data analysis module 252 that includes a process rating module 601, a policy rating module 602, a procedure rating module 603, a certification rating module 604, a documentation rating module 605, an automation rating module 606, and a cumulative rating module 607.
- the data scoring module generates an issue detection rating 608 from a collection of data based on data analysis parameters 265 .
- the process rating module 601 evaluates the collection of data 600 , or portion thereof, (e.g., pre-processed data of FIG. 35 ) to produce a process evaluation rating in accordance with process analysis parameters of the data analysis parameters 265 .
- the process analysis parameters indicate how the collection of data is to be evaluated with respect to processes of the system, or portion thereof.
- the process analysis parameters include:
- the process rating module 601 can rate the data 600 at three levels.
- the first level is that the system has processes, the system has the right number of processes, and/or the system has processes that address the right topics.
- the second level digs into the processes themselves to determine whether they adequately cover the requirements of the system.
- the third level evaluates how well the processes are used and how well they are adhered to.
- the process rating module 601 generates a process evaluation rating based on a comparison of the processes of the data 600 with a list of processes the system, or portion thereof, should have. If all of the processes on the list are found in the data 600, then the process evaluation rating is high. The fewer processes on the list that are found in the data 600, the lower the process evaluation rating will be.
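- A minimal Python sketch of this coverage-style comparison, assuming an illustrative expected-process list and a 0-50 scale (matching the example process maximum used earlier), might look like the following.

```python
def process_coverage_rating(found_processes: set, expected_processes: set, max_rating: int = 50) -> float:
    """Scale the rating by the fraction of expected processes found in the collected data."""
    if not expected_processes:
        return float(max_rating)
    coverage = len(found_processes & expected_processes) / len(expected_processes)
    return round(coverage * max_rating, 1)

expected = {"network monitoring", "log management", "event impact determination", "incident alerting"}
found = {"network monitoring", "log management", "vulnerability scanning"}
print(process_coverage_rating(found, expected))  # 25.0 -> half of the expected processes were found
```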
- the process rating module 601 generates a process evaluation rating based on a determination of the last revisions of the processes of data 600 and/or the age of the last revisions. As a specific example, if processes are revised at a rate that corresponds to the rate of revision in the industry, then a relatively high process evaluation rating would be produced. As another specific example, if processes are revised at a much lower rate than the rate of revision in the industry, then a relatively low process evaluation rating would be produced (implying a lack of attention to the processes).
- the process rating module 601 generates a process evaluation rating based on a determination of the frequency of use of the processes of data 600.
- for example, if the processes are used at a frequency (e.g., x times per week) consistent with their intended use, a relatively high process evaluation rating would be produced.
- as another example, if the processes are used far less often than intended, a relatively low process evaluation rating would be produced (implying a lack of using and adhering to the processes).
- as a further example, if the processes are used but routinely deviated from, a relatively low process evaluation rating would be produced (implying the processes are inaccurate, incomplete, and/or difficult to use).
- the process rating module 601 generates a process evaluation rating based on an evaluation of a process of data 600 with respect to a checklist regarding content of the process (e.g., what should be in the process, which may be based, at least in part, on an evaluation category, sub-category, and/or sub-sub category).
- for example, the checklist includes desired topics for such a process. If all of the topics on the checklist are found in the process of data 600, then the process evaluation rating is high. The fewer topics on the checklist that are found in the process of data 600, the lower the process evaluation rating will be.
- the process rating module 601 generates a process evaluation rating based on a comparison of balance between local processes of data 600 and system-wide processes of data 600 .
- most security processes should be system-wide. Thus, if only a small percentage (e.g., less than 10%) of security processes are local, then a relatively high process evaluation rating will be generated. Conversely, the greater the percentage of local security processes, the lower the process evaluation rating will be.
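- The local versus system-wide balance check could be sketched as follows; the 10% threshold comes from the example above, while the linear scoring curve and 0-50 scale are illustrative assumptions.

```python
def locality_balance_rating(local_count: int, system_wide_count: int, max_rating: int = 50) -> float:
    """Rate the balance between local and system-wide security processes."""
    total = local_count + system_wide_count
    if total == 0:
        return 0.0
    local_share = local_count / total
    if local_share < 0.10:          # within the example threshold -> relatively high rating
        return float(max_rating)
    # Degrade the rating as the local share grows beyond the threshold.
    return round(max_rating * max(0.0, 1.0 - (local_share - 0.10)), 1)

print(locality_balance_rating(local_count=1, system_wide_count=19))  # 50.0
print(locality_balance_rating(local_count=8, system_wide_count=12))  # 35.0 -> too many local processes
```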
- the process rating module 601 generates a process evaluation rating based on evaluation of language use within processes of data 600 .
- most security requirements are mandatory. Thus, the more a process uses the word “may” (which implies optionality) versus the word “shall” (which implies a mandate), the lower the process evaluation rating will be.
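- A simple Python sketch of this language-use check counts permissive versus mandatory wording; the tokenization, the treatment of "must" as mandatory, and the scoring scale are illustrative assumptions.

```python
import re

def language_use_rating(process_text: str, max_rating: int = 50) -> float:
    """Lower the rating as permissive wording ("may") crowds out mandatory wording ("shall"/"must")."""
    words = re.findall(r"\b\w+\b", process_text.lower())
    may_count = words.count("may")
    mandatory_count = words.count("shall") + words.count("must")
    total = may_count + mandatory_count
    if total == 0:
        return max_rating / 2       # no modal language found; treat as middling
    return round((mandatory_count / total) * max_rating, 1)

text = "Alerts shall be sent to the security analyst team. Logs may be reviewed weekly."
print(language_use_rating(text))  # 25.0 -> one mandatory and one permissive statement
```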
- the process rating module 601 may perform a plurality of the above examples of process evaluation to produce a plurality of process evaluation ratings.
- the process rating module 601 may output the plurality of the process evaluation ratings to the cumulative rating module 607 .
- the process rating module 601 may perform a function (e.g., a weighted average, standard deviation, statistical analysis, etc.) on the plurality of process evaluation ratings to produce a process evaluation rating that is provided to the cumulative rating module 607.
- the policy rating module 602 evaluates the collection of data 600 , or portion thereof, (e.g., pre-processed data of FIG. 35 ) to produce a policy evaluation rating in accordance with policy analysis parameters of the data analysis parameters 265 .
- the policy analysis parameters indicate how the collection of data is to be evaluated with respect to policies of the system, or portion thereof.
- the policy analysis parameters include:
- the policy rating module 602 can rate the data 600 at three levels.
- the first level is that the system has policies, the system has the right number of policies, and/or the system has policies that address the right topics.
- the second level digs into the policies themselves to determine whether they adequately cover the requirements of the system.
- the third level evaluates how well the policies are used and how well they are adhered to.
- the procedure rating module 603 evaluates the collection of data 600 , or portion thereof, (e.g., pre-processed data of FIG. 35 ) to produce a procedure evaluation rating in accordance with procedure analysis parameters of the data analysis parameters 265 .
- the procedure analysis parameters indicate how the collection of data is to be evaluated with respect to procedures of the system, or portion thereof.
- the procedure analysis parameters include:
- the procedure rating module 603 can rate the data 600 at three levels.
- the first level is that the system has procedures, the system has the right number of procedures, and/or the system has procedures that address the right topics.
- the second level digs into the procedures themselves to determine whether they adequately cover the requirements of the system.
- the third level evaluates how well the procedures are used and how well they are adhered to.
- the certification rating module 604 evaluates the collection of data 600 , or portion thereof, (e.g., pre-processed data of FIG. 35 ) to produce a certification evaluation rating in accordance with certification analysis parameters of the data analysis parameters 265 .
- the certification analysis parameters indicate how the collection of data is to be evaluated with respect to certifications of the system, or portion thereof.
- the certification analysis parameters include:
- the certification rating module 604 can rate the data 600 at three levels.
- the first level is that the system has certifications, the system has the right number of certifications, and/or the system has certifications that address the right topics.
- the second level digs into the certifications themselves to determine whether they adequately cover the requirements of the system.
- the third level evaluates how well the certifications are maintained and updated.
- the documentation rating module 605 evaluates the collection of data 600, or portion thereof (e.g., pre-processed data of FIG. 35), to produce a documentation evaluation rating in accordance with documentation analysis parameters of the data analysis parameters 265.
- the documentation analysis parameters indicate how the collection of data is to be evaluated with respect to documentation of the system, or portion thereof.
- the documentation analysis parameters include:
- the documentation rating module 605 can rate the data 600 at three levels.
- the first level is that the system has documentation, the system has the right number of documents, and/or the system has documents that address the right topics.
- the second level digs into the documents themselves to determine whether they adequately cover the requirements of the system.
- the third level evaluates how well the documentation is used and how well it is maintained.
- the automation rating module 606 evaluates the collection of data 600 , or portion thereof, (e.g., pre-processed data of FIG. 35 ) to produce an automation evaluation rating in accordance with automation analysis parameters of the data analysis parameters 265 .
- the automation analysis parameters indicate how the collection of data is to be evaluated with respect to automation of the system, or portion thereof.
- the automation analysis parameters include:
- the automation rating module 606 can rate the data 600 at three levels.
- the first level is that the system has automation, the system has the right amount of automation, and/or the system has automations that address the right topics.
- the second level digs into the automations themselves to determine whether they adequately cover the requirements of the system.
- the third level evaluates how well the automations are used and how well they are adhered to.
- the cumulative rating module 607 receives one or more process evaluation ratings, one or more policy evaluation ratings, one or more procedure evaluation ratings, one or more certification evaluation ratings, one or more documentation evaluation ratings, and/or one or more automation evaluation ratings.
- the cumulative rating module 607 may output the evaluation ratings it receives as the issue detection rating 608 .
- the cumulative rating module 607 performs a function (e.g., a weighted average, standard deviation, statistical analysis, etc.) on the evaluation ratings it receives to produce the issue detection rating 608.
- FIG. 82 is a schematic block diagram of another embodiment of a data analysis module 252 that is similar to the data analysis module of FIG. 81 .
- the data analysis module 252 includes a data parsing module 609 , which parses the data 600 into process data, policy data, procedure data, certification data, documentation data, and/or automation data.
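- A rough Python sketch of such a parsing step is shown below; keyword routing on item labels is an illustrative assumption, as the document does not prescribe how the parsing is performed.

```python
from collections import defaultdict

CATEGORIES = ("process", "policy", "procedure", "certification", "documentation", "automation")

def parse_data(collection: list[dict]) -> dict:
    """Route each data item into a metric bucket based on its assumed 'kind' or 'title' labels."""
    buckets = defaultdict(list)
    for item in collection:
        label = (item.get("kind", "") + " " + item.get("title", "")).lower()
        for category in CATEGORIES:
            if category in label:
                buckets[category].append(item)
                break
        else:
            buckets["other"].append(item)   # unclassified data is kept, not discarded
    return dict(buckets)

parsed = parse_data([
    {"kind": "policy", "title": "network monitoring policy"},
    {"kind": "document", "title": "event impact determination procedure"},
])
```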
- FIG. 83 is a logic diagram of a further example of an analysis system determining an issue detection rating for a system, or portion thereof; in particular generating a process rating.
- the method begins at step 610 where the analysis system generates a first process rating based on a first combination of a system criteria (e.g., system requirements), of a system mode (e.g., system functions), of an evaluation perspective (e.g., implementation), and of an evaluation viewpoint (e.g., disclosed data).
- the method continues at step 611 where the analysis system generates a second process rating based on a second combination of a system criteria (e.g., system design), of a system mode (e.g., system functions), of an evaluation perspective (e.g., implementation), and of an evaluation viewpoint (e.g., disclosed data).
- FIG. 84 is a logic diagram of a further example of generating a process rating for understanding of system build for issue detection security functions of an organization.
- the method begins at step 613 where the analysis system identifies processes regarding issue detection security functions from the data.
- the method continues at step 614 where the analysis system generates a process rating from the data. Examples of this were discussed with reference to FIG. 81 .
- the method also continues at step 615 where the analysis system determines use of the issue detection processes.
- the method continues at step 616 where the analysis system generates a process rating based on use. Examples of this were discussed with reference to FIG. 81 .
- the method also continues at step 617 where the analysis system determines consistency of applying the issue detection processes.
- the method continues at step 618 where the analysis system generates a process rating based on consistency of use. Examples of this were discussed with reference to FIG. 81 .
- the method continues at step 619 where the analysis system generates the process rating based on the process rating from the data, the process rating based on use, and the process rating based on consistency of use.
- FIG. 85 is a logic diagram of a further example of generating a process rating for understanding of verifying issue detection security functions of an organization.
- the method begins at step 620 where the analysis system identifies processes to verify issue detection security functions from the data.
- the method continues at step 621 where the analysis system generates a process rating from the data. Examples of this were discussed with reference to FIG. 81 .
- the method also continues at step 622 where the analysis system determines use of the processes to verify the issue detection security functions.
- the method continues at step 623 where the analysis system generates a process rating based on use of the verification processes. Examples of this were discussed with reference to FIG. 81.
- the method also continues at step 624 where the analysis system determines consistency of applying the verification processes to the issue detection security functions.
- the method continues at step 625 where the analysis system generates a process rating based on consistency of use. Examples of this were discussed with reference to FIG. 81 .
- the method continues at step 626 where the analysis system generates the process rating based on the process rating from the data, the process rating based on use, and the process rating based on consistency of use.
- FIG. 86 is a diagram of an example of issue detection data for use by an analysis system to generate an issue detection rating for a system, or portion thereof.
- the issue detection data includes a collection of data 600 , which includes disclosed data, discovered data, and/or desired data.
- data 600 includes one or more network operation and data flow establishment and management processes, one or more data protection tools, one or more vulnerability management tools, one or more network monitoring tools, one or more access management tools, one or more boundary protection tools, network operation and data flow establishment and management implementation, device information, one or more system plans, one or more diagrams, one or more event detection analysis processes, one or more event detection analysis tools, event detection analysis implementation, one or more event data aggregation and correlation processes, one or more event data aggregation and correlation tools, event data aggregation and correlation implementation, one or more logs, one or more event impact determination processes, one or more event impact determination tools, event impact determination implementation, one or more incident alert threshold establishment processes, one or more incident alert threshold establishment tools, incident alert threshold establishment implementation, one or more network monitoring processes, network monitoring implementation, one or more network physical environment monitoring processes, one or more physical environment monitoring tools, physical environment monitoring implementation, one or more personnel activity monitoring processes, one or more personnel activity monitoring tools, personnel activity monitoring implementation, one or more malicious code detection processes, one or more malware protection tools
- the data 600 may further include one or more network operation and data flow establishment and management policies, one or more event detection analysis policies, one or more event data aggregation and correlation policies, one or more event impact determination policies, one or more incident alert threshold establishment policies, one or more network monitoring policies, one or more network physical environment monitoring policies, one or more personnel activity monitoring policies, one or more malicious code detection policies, one or more unauthorized mobile code detection policies, one or more external service provider monitoring policies, one or more unauthorized aspect (e.g., personnel, connections, devices, and software) monitoring policies, one or more vulnerability scanning policies, one or more verification policies, one or more issue detection process compliance verification policies, one or more issue detection process testing policies, one or more issue detection process communication policies, and/or one or more issue detection process improvement policies.
- the data 600 may still include one or more network operation and data flow establishment and management procedures, one or more event detection analysis procedures, one or more event data aggregation and correlation procedures, one or more event impact determination procedures, one or more incident alert threshold establishment procedures, one or more network monitoring procedures, one or more network physical environment monitoring procedures, one or more personnel activity monitoring procedures, one or more malicious code detection procedures, one or more unauthorized mobile code detection procedures, one or more external service provider monitoring procedures, one or more unauthorized aspect (e.g., personnel, connections, devices, and software) monitoring procedures, one or more vulnerability scanning procedures, one or more verification procedures, one or more issue detection process compliance verification procedures, one or more issue detection process testing procedures, one or more issue detection process communication procedures, and/or one or more issue detection process improvement procedures.
- the data 600 may still include one or more network operation and data flow establishment and management documents, one or more event detection analysis documents, one or more event data aggregation and correlation documents, one or more event impact determination documents, one or more incident alert threshold establishment documents, one or more network monitoring documents, one or more network physical environment monitoring documents, one or more personnel activity monitoring documents, one or more malicious code detection documents, one or more unauthorized mobile code detection documents, one or more external service provider monitoring documents, one or more unauthorized aspect (e.g., personnel, connections, devices, and software) monitoring documents, one or more vulnerability scanning documents, one or more verification documents, one or more issue detection process compliance verification documents, one or more issue detection process testing documents, one or more issue detection process communication documents, and/or one or more issue detection process improvement documents.
- the data 600 may still include one or more network operation and data flow establishment and management certifications, one or more event detection analysis certifications, one or more event data aggregation and correlation certifications, one or more event impact determination certifications, one or more incident alert threshold establishment certifications, one or more network monitoring certifications, one or more network physical environment monitoring certifications, one or more personnel activity monitoring certifications, one or more malicious code detection certifications, one or more unauthorized mobile code detection certifications, one or more external service provider monitoring certifications, one or more unauthorized aspect (e.g., personnel, connections, devices, and software) monitoring certifications, one or more vulnerability scanning certifications, one or more verification certifications, one or more issue detection process compliance verification certifications, one or more issue detection process testing certifications, one or more issue detection process communication certifications, and/or one or more issue detection process improvement certifications.
- the data 600 may still include one or more network operation and data flow establishment and management automations, one or more event detection analysis automations, one or more event data aggregation and correlation automations, one or more event impact determination automations, one or more incident alert threshold establishment automations, one or more network monitoring automations, one or more network physical environment monitoring automations, one or more personnel activity monitoring automations, one or more malicious code detection automations, one or more unauthorized mobile code detection automations, one or more external service provider monitoring automations, one or more unauthorized aspect (e.g., personnel, connections, devices, and software) monitoring automations, one or more vulnerability scanning automations, one or more verification automations, one or more issue detection process compliance verification automations, one or more issue detection process testing automations, one or more issue detection process communication automations, and/or one or more issue detection process improvement automations.
- the blue shaded boxes are data that is directly relevant to the process rating module 601 .
- the light green shaded boxes are data that may be relevant to the process rating module 601 .
- Implementations (e.g., network operation and data flow establishment and management implementation, event detection analysis implementation, etc.) refer to how a particular process is performed within the system aspect.
- Other data (e.g., diagrams, reports, etc.) provides information regarding one or more of the processes, tools, and implementations. The data listed is exemplary and not intended to be an exhaustive list.
- a security information event management (SIEM) tool may include a combination of tools such as vulnerability management, vulnerability scanning, network monitoring, event detection analysis, event aggregation and correlation, event impact determination, etc.
- a network monitoring tool may be considered a boundary protection tool and vice versa.
- Tools may be shared across processes. For example, while vulnerability management tools are listed near network operation and data flow establishment and management processes, several other processes may rely on vulnerability management tools such as vulnerability scanning processes and event awareness analysis processes.
- the process rating module 601 rates how well processes are used for identified tasks. For example, the process rating module 601 rates how well the network operation and data flow establishment and management tools (e.g., data protection tools, vulnerability management tools, network monitoring tools, access management tools, boundary protection tools, etc.) are used in accordance with the network operation and data flow establishment and management processes.
- the process rating module 601 rates the consistency of application of processes in implementation.
- the process rating module rates how consistently the network operation and data flow establishment and management processes are applied when using the network operation and data flow establishment and management tools in the network operation and data flow establishment and management implementation (e.g., to establish and manage the network operation and expected data flows of the system).
- FIG. 87 is a logic diagram of an example of generating a process rating by the analysis system; in particular the process rating module generating a process rating based on data.
- the method begins at step 620 where the analysis system determines whether there is at least one process in the collection of data. Note that the threshold number in this step could be greater than one. If there are no processes, the method continues at step 631 where the analysis system generates a process rating of 0 (and/or a word rating of “none”).
- If there is at least one process, the method continues at step 632 where the analysis system determines whether the processes are repeatable.
- repeatable processes produce consistent results, include variations from process to process, are not routinely reviewed in an organized manner, and/or are not all regulated. For example, when the number of processes is below a desired number of processes, the analysis system determines that the processes are not repeatable (e.g., with too few processes, repeatable outcomes cannot be obtained). As another example, when the processes of the data 600 do not include one or more processes on a list of processes the system should have, the analysis system determines that the processes are not repeatable (e.g., with missing processes, repeatable outcomes cannot be obtained).
- If the processes are not repeatable, the method continues at step 633 where the analysis system generates a process rating of 10 (and/or a word rating of “inconsistent”). If, however, the processes are at least repeatable, the method continues at step 634 where the analysis system determines whether the processes are standardized. In this instance, standardized includes repeatable plus there are no appreciable variations in the processes from process to process, and/or the processes are regulated.
- If the processes are not standardized, the method continues at step 635 where the analysis system generates a process rating of 20 (and/or a word rating of “repeatable”). If, however, the processes are at least standardized, the method continues at step 636 where the analysis system determines whether the processes are measured. In this instance, measured includes standardized plus precise, exact, and/or calculated to specific needs, concerns, and/or functioning of the system.
- If the processes are not measured, the method continues at step 637 where the analysis system generates a process rating of 30 (and/or a word rating of “standardized”). If, however, the processes are at least measured, the method continues at step 638 where the analysis system determines whether the processes are optimized. In this instance, optimized includes measured plus the processes are up-to-date and/or process improvement is assessed on a regular basis as part of system protocols.
- If the processes are not optimized, the method continues at step 639 where the analysis system generates a process rating of 40 (and/or a word rating of “measured”). If the processes are optimized, the method continues at step 640 where the analysis system generates a process rating of 50 (and/or a word rating of “optimized”). Note that the numerical ratings are example values and could be other values. Further note that the number of levels of process rating may be more or less than the six shown.
- In addition, weighting factors on certain types of analysis may affect the level. For example, weighting factors for analysis that determines the last revisions of processes, the age of the last revisions, content verification of the processes with respect to a checklist, the balance of local processes and system-wide processes, topic verification of the processes with respect to desired topics, and/or process language evaluation will affect the resulting level.
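- The tiered rating of FIG. 87 amounts to a cascade of checks in which each tier presumes the tiers below it. The following minimal sketch is illustrative only and is not part of the disclosed embodiments; the class, function, and flag names are hypothetical, and the numerical values are the example values noted above.

```python
# Illustrative sketch only; the class, function, and flag names are hypothetical, and the
# numerical values are the example values given in the description of FIG. 87.
from dataclasses import dataclass

@dataclass
class ProcessAssessment:
    # Flags assumed to be produced by upstream analysis of the collection of data 600.
    has_processes: bool
    repeatable: bool
    standardized: bool
    measured: bool
    optimized: bool

def rate_processes(a: ProcessAssessment) -> tuple[int, str]:
    """Walk the ladder of FIG. 87; each tier presumes the tiers below it."""
    if not a.has_processes:
        return 0, "none"
    if not a.repeatable:
        return 10, "inconsistent"
    if not a.standardized:
        return 20, "repeatable"
    if not a.measured:
        return 30, "standardized"
    if not a.optimized:
        return 40, "measured"
    return 50, "optimized"

# Example: processes are repeatable and standardized, but not yet measured or optimized.
print(rate_processes(ProcessAssessment(True, True, True, False, False)))  # (30, 'standardized')
```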
- FIG. 88 is a logic diagram of an example of generating a process rating by the analysis system; in particular the process rating module generating a process rating based on use of processes.
- the method begins at step 641 where the analysis system determines whether at least one process in the collection of data has been used. Note that the threshold number in this step could be greater than one. If no processes have been used, the method continues at step 642 where the analysis system generates a process rating of 0 (and/or a word rating of “none”).
- If at least one process has been used, the method continues at step 643 where the analysis system determines whether the use of the processes is repeatable. In this instance, repeatable use of processes is consistent use, but with variations from process to process, use is not routinely reviewed or verified in an organized manner, and/or use is not regulated.
- If the use of processes is not repeatable, the method continues at step 644 where the analysis system generates a process rating of 10 (and/or a word rating of “inconsistent”). If, however, the use of processes is at least repeatable, the method continues at step 645 where the analysis system determines whether the use of processes is standardized. In this instance, standardized includes repeatable plus there are no appreciable variations in the use of processes from process to process, and/or the use of processes is regulated.
- If the use of processes is not standardized, the method continues at step 646 where the analysis system generates a process rating of 20 (and/or a word rating of “repeatable”). If, however, the use of processes is at least standardized, the method continues at step 647 where the analysis system determines whether the use of processes is measured. In this instance, measured includes standardized plus use is precise, exact, and/or calculated to specific needs, concerns, and/or functioning of the system.
- If the use of processes is not measured, the method continues at step 648 where the analysis system generates a process rating of 30 (and/or a word rating of “standardized”). If, however, the use of processes is at least measured, the method continues at step 649 where the analysis system determines whether the use of processes is optimized. In this instance, optimized includes measured plus the use of processes is up-to-date and/or improving the use of processes is assessed on a regular basis as part of system protocols.
- If the use of processes is not optimized, the method continues at step 650 where the analysis system generates a process rating of 40 (and/or a word rating of “measured”). If the use of processes is optimized, the method continues at step 651 where the analysis system generates a process rating of 50 (and/or a word rating of “optimized”).
- the numerical ratings are example values and could be other values. Further note that the number of levels of process rating may be more or less than the six shown.
- FIG. 89 is a logic diagram of an example of generating a process rating by the analysis system; in particular the process rating module generating a process rating based on consistency of application of processes.
- the method begins at step 652 where the analysis system determines whether at least one process in the collection of data has been consistently applied. Note that the threshold number in this step could be greater than one. If no processes have been consistently applied, the method continues at step 653 where the analysis system generates a process rating of 0 (and/or a word rating of “none”).
- If at least one process has been consistently applied, the method continues at step 654 where the analysis system determines whether the consistent application of processes is repeatable.
- In this instance, repeatable consistency of application means that a process is consistently applied for a given circumstance of the system (e.g., determining software applications for like devices in a department), but with variations from process to process, the application of processes is not routinely reviewed or verified in an organized manner, and/or the application of processes is not regulated.
- If the consistency of application of processes is not repeatable, the method continues at step 655 where the analysis system generates a process rating of 10 (and/or a word rating of “inconsistent”). If, however, the consistency of application of processes is at least repeatable, the method continues at step 656 where the analysis system determines whether the consistency of application of processes is standardized. In this instance, standardized includes repeatable plus there are no appreciable variations in the application of processes from process to process, and/or the application of processes is regulated.
- If the consistency of application of processes is not standardized, the method continues at step 657 where the analysis system generates a process rating of 20 (and/or a word rating of “repeatable”). If, however, the consistency of application of processes is at least standardized, the method continues at step 658 where the analysis system determines whether the consistency of application of processes is measured. In this instance, measured includes standardized plus application of processes is precise, exact, and/or calculated to specific needs, concerns, and/or functioning of the system.
- If the consistency of application of processes is not measured, the method continues at step 659 where the analysis system generates a process rating of 30 (and/or a word rating of “standardized”). If, however, the consistency of application of processes is at least measured, the method continues at step 660 where the analysis system determines whether the consistency of application of processes is optimized. In this instance, optimized includes measured plus application of processes is up-to-date and/or improving the application of processes is assessed on a regular basis as part of system protocols.
- If the consistency of application of processes is not optimized, the method continues at step 661 where the analysis system generates a process rating of 40 (and/or a word rating of “measured”). If the consistency of application of processes is optimized, the method continues at step 662 where the analysis system generates a process rating of 50 (and/or a word rating of “optimized”).
- the numerical ratings are example values and could be other values. Further note that the number of levels of process rating may be more or less than the six shown.
- FIG. 90 is a logic diagram of a further example of an analysis system determining an issue detection rating for a system, or portion thereof; in particular generating a policy rating.
- the method begins at step 670 where the analysis system generates a first policy rating based on a first combination of a system criteria (e.g., system requirements), of a system mode (e.g., system functions), of an evaluation perspective (e.g., implementation), and of an evaluation viewpoint (e.g., disclosed data).
- the method continues at step 671 where the analysis system generates a second policy rating based on a second combination of a system criteria (e.g., system design), of a system mode (e.g., system functions), of an evaluation perspective (e.g., implementation), and of an evaluation viewpoint (e.g., disclosed data).
- FIG. 91 is a logic diagram of a further example of generating a policy rating for understanding of system build for issue detection security functions of an organization.
- the method begins at step 673 where the analysis system identifies policies regarding issue detection security functions from the data.
- the method continues at step 674 where the analysis system generates a policy rating from the data. Examples of this were discussed with reference to FIG. 81 .
- the method also continues at step 675 where the analysis system determines use of the policies for issue detection.
- the method continues at step 676 where the analysis system generates a policy rating based on use. Examples of this were discussed with reference to FIG. 81 .
- the method also continues at step 677 where the analysis system determines consistency of applying the policies for issue detection.
- the method continues at step 678 where the analysis system generates a policy rating based on consistency of use. Examples of this were discussed with reference to FIG. 81 .
- the method continues at step 679 where the analysis system generates the policy rating based on the policy rating from the data, the policy rating based on use, and the policy rating based on consistency of use.
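- As an illustration of step 679, the three sub-ratings (from the data, based on use, and based on consistency of use) can be combined in a number of ways; the description does not prescribe a particular formula. The sketch below assumes a simple weighted average with hypothetical function and parameter names.

```python
# Hypothetical aggregation sketch; the description does not prescribe a specific formula
# for combining the three sub-ratings of FIG. 91, so a weighted average is assumed here.
def combine_policy_ratings(rating_from_data: float,
                           rating_based_on_use: float,
                           rating_based_on_consistency: float,
                           weights: tuple = (1.0, 1.0, 1.0)) -> float:
    """Combine the policy rating from the data, the policy rating based on use, and the
    policy rating based on consistency of use into a single policy rating (step 679)."""
    ratings = (rating_from_data, rating_based_on_use, rating_based_on_consistency)
    return sum(r * w for r, w in zip(ratings, weights)) / sum(weights)

# Example: policies are well documented (15) but lightly used (5) and inconsistently applied (5).
print(combine_policy_ratings(15, 5, 5))  # approximately 8.3
```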
- FIG. 92 is a logic diagram of a further example of generating a policy rating for understanding of verifying issue detection security functions of an organization.
- the method begins at step 680 where the analysis system identifies policies to verify the issue detection security functions from the data.
- the method continues at step 681 where the analysis system generates a policy rating from the data. Examples of this were discussed with reference to FIG. 81 .
- the method also continues at step 682 where the analysis system determines use of the policies to verify the issue detection security functions.
- the method continues at step 683 where the analysis system generates a policy rating based on use of the verifying policies. Examples of this were discussed with reference to FIG. 81.
- the method also continues at step 684 where the analysis system determines consistency of applying the verifying policies to issue detection security functions.
- the method continues at step 685 where the analysis system generates a policy rating based on consistency of use. Examples of this were discussed with reference to FIG. 81 .
- the method continues at step 686 where the analysis system generates the policy rating based on the policy rating from the data, the policy rating based on use, and the policy rating based on consistency of use.
- FIG. 93 is a logic diagram of an example of generating a policy rating by the analysis system; in particular the policy rating module generating a policy rating based on data 600 .
- the method begins at step 687 where the analysis system determines whether there is at least one policy in the collection of data. Note that the threshold number in this step could be greater than one. If there are no policies, the method continues at step 688 where the analysis system generates a policy rating of 0 (and/or a word rating of “none”).
- If there is at least one policy, the method continues at step 689 where the analysis system determines whether the policies are defined.
- defined policies include sufficient detail to produce consistent results, include variations from policy to policy, are not routinely reviewed in an organized manner, and/or are not all regulated. For example, when the number of policies is below a desired number of policies, the analysis system determines that the policies are not defined (e.g., with too few policies, consistent outcomes cannot be produced). As another example, when the policies of the data 600 do not include one or more policies on a list of policies the system should have, the analysis system determines that the policies are not defined (e.g., with missing policies, consistent outcomes cannot be produced).
- If the policies are not defined, the method continues at step 690 where the analysis system generates a policy rating of 5 (and/or a word rating of “informal”). If, however, the policies are at least defined, the method continues at step 691 where the analysis system determines whether the policies are audited. In this instance, audited includes defined plus the policies are routinely reviewed, and/or the policies are regulated.
- If the policies are not audited, the method continues at step 692 where the analysis system generates a policy rating of 10 (and/or a word rating of “defined”). If, however, the policies are at least audited, the method continues at step 693 where the analysis system determines whether the policies are embedded. In this instance, embedded includes audited plus the policies are systematically rooted in most, if not all, aspects of the system.
- If the policies are not embedded, the method continues at step 694 where the analysis system generates a policy rating of 15 (and/or a word rating of “audited”). If the policies are embedded, the method continues at step 695 where the analysis system generates a policy rating of 20 (and/or a word rating of “embedded”). Note that the numerical ratings are example values and could be other values. Further note that the number of levels of policy rating may be more or less than the five shown.
- In addition, weighting factors on certain types of analysis may affect the level. For example, weighting factors for analysis that determines the last revisions of policies, the age of the last revisions, content verification of the policies with respect to a checklist, the balance of local policies and system-wide policies, topic verification of the policies with respect to desired topics, and/or policy language evaluation will affect the resulting level.
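- The policy rating ladder of FIG. 93 mirrors the process rating ladder but with different levels and example values (none, informal, defined, audited, embedded). A minimal illustrative sketch with hypothetical names follows.

```python
# Illustrative only; the function and flag names are hypothetical, and the numerical values
# are the example values given in the description of FIG. 93.
def rate_policies(has_policy: bool, defined: bool, audited: bool, embedded: bool) -> tuple[int, str]:
    """Ladder of FIG. 93: none -> informal -> defined -> audited -> embedded."""
    if not has_policy:
        return 0, "none"
    if not defined:
        return 5, "informal"
    if not audited:
        return 10, "defined"
    if not embedded:
        return 15, "audited"
    return 20, "embedded"

# Example: policies exist and are defined, but are neither audited nor embedded.
print(rate_policies(True, True, False, False))  # (10, 'defined')
```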
- FIG. 94 is a logic diagram of an example of generating a policy rating by the analysis system; in particular the policy rating module generating a policy rating based on use of the policies.
- the method begins at step 696 where the analysis system determines whether there is at least one use of a policy. Note that the threshold number in this step could be greater than one. If there are no uses of policies, the method continues at step 697 where the analysis system generates a policy rating of 0 (and/or a word rating of “none”).
- If there is at least one use of a policy, the method continues at step 698 where the analysis system determines whether the use of policies is defined.
- defined use of policies includes sufficient detail on how and/or when to use a policy, includes variations in use from policy to policy, use of policies is not routinely reviewed in an organized manner, and/or use of policies is not regulated.
- If the use of policies is not defined, the method continues at step 699 where the analysis system generates a policy rating of 5 (and/or a word rating of “informal”). If, however, the use of policies is at least defined, the method continues at step 700 where the analysis system determines whether the use of policies is audited. In this instance, audited includes defined plus the use of policies is routinely reviewed, and/or the use of policies is regulated.
- If the use of policies is not audited, the method continues at step 701 where the analysis system generates a policy rating of 10 (and/or a word rating of “defined”). If, however, the use of policies is at least audited, the method continues at step 702 where the analysis system determines whether the use of policies is embedded. In this instance, embedded includes audited plus use of policies is systematically rooted in most, if not all, aspects of the system.
- If the use of policies is not embedded, the method continues at step 703 where the analysis system generates a policy rating of 15 (and/or a word rating of “audited”). If the use of policies is embedded, the method continues at step 704 where the analysis system generates a policy rating of 20 (and/or a word rating of “embedded”). Note that the numerical ratings are example values and could be other values. Further note that the number of levels of policy rating may be more or less than the five shown.
- FIG. 95 is a logic diagram of an example of generating a policy rating by the analysis system; in particular the policy rating module generating a policy rating based on consistent application of policies.
- the method begins at step 705 where the analysis system determines whether there is at least one consistent application of a policy. Note that the threshold number in this step could be greater than one. If there is no consistent application of policies, the method continues at step 706 where the analysis system generates a policy rating of 0 (and/or a word rating of “none”).
- If there is at least one consistent application of a policy, the method continues at step 707 where the analysis system determines whether the consistent application of policies is defined.
- defined application of policies includes sufficient detail on when policies apply, includes application variations from policy to policy, application of policies is not routinely reviewed in an organized manner, and/or application of policies is not regulated.
- If the application of policies is not defined, the method continues at step 708 where the analysis system generates a policy rating of 5 (and/or a word rating of “informal”). If, however, the application of policies is at least defined, the method continues at step 709 where the analysis system determines whether the application of policies is audited. In this instance, audited includes defined plus the application of policies is routinely reviewed, and/or the application of policies is regulated.
- If the application of policies is not audited, the method continues at step 710 where the analysis system generates a policy rating of 10 (and/or a word rating of “defined”). If, however, the application of policies is at least audited, the method continues at step 711 where the analysis system determines whether the application of policies is embedded. In this instance, embedded includes audited plus application of policies is systematically rooted in most, if not all, aspects of the system.
- If the application of policies is not embedded, the method continues at step 712 where the analysis system generates a policy rating of 15 (and/or a word rating of “audited”). If the application of policies is embedded, the method continues at step 713 where the analysis system generates a policy rating of 20 (and/or a word rating of “embedded”). Note that the numerical ratings are example values and could be other values. Further note that the number of levels of policy rating may be more or less than the five shown.
- FIG. 96 is a logic diagram of a further example of an analysis system determining an issue detection rating for a system, or portion thereof; in particular generating a documentation rating.
- the method begins at step 720 where the analysis system generates a first documentation rating based on a first combination of a system criteria (e.g., system requirements), of a system mode (e.g., system functions), of an evaluation perspective (e.g., implementation), and of an evaluation viewpoint (e.g., disclosed data).
- the method continues at step 721 where the analysis system generates a second documentation rating based on a second combination of a system criteria (e.g., system design), of a system mode (e.g., system functions), of an evaluation perspective (e.g., implementation), and of an evaluation viewpoint (e.g., disclosed data).
- FIG. 97 is a logic diagram of a further example of generating a documentation rating for understanding of system build of issue detection security functions of an organization.
- the method begins at step 723 where the analysis system identifies documentation regarding issue detection security functions from the data.
- the method continues at step 724 where the analysis system generates a documentation rating from the data. Examples of this were discussed with reference to FIG. 81 .
- the method also continues at step 725 where the analysis system determines use of the documentation for issue detection security functions.
- the method continues at step 726 where the analysis system generates a documentation rating based on use. Examples of this were discussed with reference to FIG. 81 .
- the method also continues at step 727 where the analysis system determines consistency of applying the documentation for issue detection security functions.
- the method continues at step 728 where the analysis system generates a documentation rating based on consistency of use. Examples of this were discussed with reference to FIG. 81 .
- the method continues at step 729 where the analysis system generates the documentation rating based on the documentation rating from the data, the documentation rating based on use, and the documentation rating based on consistency of use.
- FIG. 98 is a logic diagram of a further example of generating a documentation rating for understanding of system build of issue detection security functions of an organization.
- the method begins at step 730 where the analysis system identifies documentation to verify the issue detection security functions from the data.
- the method continues at step 731 where the analysis system generates a documentation rating from the data. Examples of this were discussed with reference to FIG. 81 .
- the method also continues at step 732 where the analysis system determines use of the documentation to verify issue detection security functions.
- the method continues at step 733 where the analysis system generates a documentation rating based on use of the verifying documentation. Examples of this were discussed with reference to FIG. 81.
- the method also continues at step 734 where the analysis system determines consistency of applying the verifying documentation for issue detection security functions.
- the method continues at step 735 where the analysis system generates a documentation rating based on consistency of use. Examples of this were discussed with reference to FIG. 81 .
- the method continues at step 736 where the analysis system generates the documentation rating based on the documentation rating from the data, the documentation rating based on use, and the documentation rating based on consistency of use.
- FIG. 99 is a logic diagram of an example of generating a documentation rating by the analysis system; in particular the documentation rating module generating a documentation rating based on data 600 .
- the method begins at step 737 where the analysis system determines whether there is at least one document in the collection of data. Note that the threshold number in this step could be greater than one. If there are no documents, the method continues at step 738 where the analysis system generates a documentation rating of 0 (and/or a word rating of “none”).
- If there is at least one document, the method continues at step 739 where the analysis system determines whether the documents are formalized.
- formalized documents include sufficient detail to produce consistent documentation, include form variations from document to document, are not routinely reviewed in an organized manner, and/or formation of documents is not regulated.
- If the documents are not formalized, the method continues at step 740 where the analysis system generates a documentation rating of 5 (and/or a word rating of “informal”). If, however, the documents are at least formalized, the method continues at step 741 where the analysis system determines whether the documents are at the metric & reporting level. In this instance, metric & reporting includes formal plus the documents are routinely reviewed, and/or the formation of documents is regulated.
- If the documents are not at the metric & reporting level, the method continues at step 742 where the analysis system generates a documentation rating of 10 (and/or a word rating of “formal”). If, however, the documents are at least metric & reporting, the method continues at step 743 where the analysis system determines whether the documents are improved. In this instance, improved includes metric & reporting plus document formation is systematically rooted in most, if not all, aspects of the system.
- If the documents are not improved, the method continues at step 744 where the analysis system generates a documentation rating of 15 (and/or a word rating of “metric & reporting”). If the documents are improved, the method continues at step 745 where the analysis system generates a documentation rating of 20 (and/or a word rating of “improvement”).
- the numerical ratings are example values and could be other values. Further note that the number of levels of documentation rating may be more or less than the five shown.
- In addition, weighting factors on certain types of analysis may affect the level. For example, weighting factors for analysis that determines the last revisions of documents, the age of the last revisions, content verification of the documents with respect to a checklist, the balance of local documents and system-wide documents, topic verification of the documents with respect to desired topics, and/or document language evaluation will affect the resulting level.
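- The description states that weighting factors affect the resulting level but does not give a formula. The sketch below is one hypothetical way such factors (age of last revision, checklist coverage, topic coverage) could scale a base documentation rating; the function name, parameters, and decay choice are all assumptions.

```python
# Hedged sketch: the description states that weighting factors affect the resulting level
# but gives no formula; the adjustment below is one hypothetical way to apply them.
def adjust_documentation_rating(base_rating: float,
                                months_since_last_revision: int,
                                checklist_coverage: float,    # 0.0 .. 1.0
                                topic_coverage: float) -> float:  # 0.0 .. 1.0
    """Scale a base documentation rating (e.g., from the FIG. 99 ladder) by weighting factors
    such as age of last revision and content/topic verification coverage."""
    # Assumption: the rating decays gently once a document is more than a year old.
    age_factor = 1.0 if months_since_last_revision <= 12 else 12.0 / months_since_last_revision
    coverage_factor = (checklist_coverage + topic_coverage) / 2.0
    return base_rating * age_factor * coverage_factor

# Example: a "metric & reporting" rating of 15, revised 24 months ago, with 80% checklist
# coverage and 90% topic coverage.
print(adjust_documentation_rating(15, 24, 0.8, 0.9))  # approximately 6.4
```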
- FIG. 100 is a logic diagram of an example of generating a documentation rating by the analysis system; in particular the documentation rating module generating a documentation rating based on use of documents.
- the method begins at step 746 where the analysis system determines whether there is at least one use of a document. Note that the threshold number in this step could be greater than one. If there is no use of documents, the method continues at step 747 where the analysis system generates a documentation rating of 0 (and/or a word rating of “none”).
- If there is at least one use of a document, the method continues at step 748 where the analysis system determines whether the use of the documents is formalized.
- formalized use of documents includes sufficient detail regarding how to use the documentation, includes use variations from document to document, use of documents is not routinely reviewed in an organized manner, and/or use of documents is not regulated.
- If the use of documents is not formalized, the method continues at step 749 where the analysis system generates a documentation rating of 5 (and/or a word rating of “informal”). If, however, the use of documents is at least formalized, the method continues at step 750 where the analysis system determines whether the use of the documents is at the metric & reporting level. In this instance, metric & reporting includes formal plus use of documents is routinely reviewed, and/or the use of documents is regulated.
- If the use of documents is not at the metric & reporting level, the method continues at step 751 where the analysis system generates a documentation rating of 10 (and/or a word rating of “formal”). If, however, the use of documents is at least metric & reporting, the method continues at step 752 where the analysis system determines whether the use of documents is improved. In this instance, improved includes metric & reporting plus use of the documents is systematically rooted in most, if not all, aspects of the system.
- If the use of documents is not improved, the method continues at step 753 where the analysis system generates a documentation rating of 15 (and/or a word rating of “metric & reporting”). If the use of documents is improved, the method continues at step 754 where the analysis system generates a documentation rating of 20 (and/or a word rating of “improvement”).
- the numerical ratings are example values and could be other values. Further note that the number of levels of documentation rating may be more or less than the five shown.
- FIG. 101 is a logic diagram of an example of generating a documentation rating by the analysis system; in particular the documentation rating module generating a documentation rating based on application of documents.
- the method begins at step 755 where the analysis system determines whether there is at least one application of a document. Note that the threshold number in this step could be greater than one. If there are no applications of documents, the method continues at step 756 where the analysis system generates a documentation rating of 0 (and/or a word rating of “none”).
- If there is at least one application of a document, the method continues at step 757 where the analysis system determines whether the application of the documents is formalized.
- formalized application of documents includes sufficient detail regarding how to apply the documentation, includes application variations from document to document, application of documents is not routinely reviewed in an organized manner, and/or application of documents is not regulated.
- If the application of documents is not formalized, the method continues at step 758 where the analysis system generates a documentation rating of 5 (and/or a word rating of “informal”). If, however, the application of documents is at least formalized, the method continues at step 759 where the analysis system determines whether the application of the documents is at the metric & reporting level. In this instance, metric & reporting includes formal plus application of documents is routinely reviewed, and/or the application of documents is regulated.
- If the application of documents is not at the metric & reporting level, the method continues at step 760 where the analysis system generates a documentation rating of 10 (and/or a word rating of “formal”). If, however, the application of documents is at least metric & reporting, the method continues at step 761 where the analysis system determines whether the application of documents is improved. In this instance, improved includes metric & reporting plus application of the documents is systematically rooted in most, if not all, aspects of the system.
- If the application of documents is not improved, the method continues at step 762 where the analysis system generates a documentation rating of 15 (and/or a word rating of “metric & reporting”). If the application of documents is improved, the method continues at step 763 where the analysis system generates a documentation rating of 20 (and/or a word rating of “improvement”).
- the numerical ratings are example values and could be other values. Further note that the number of levels of documentation rating may be more or less than the five shown.
- FIG. 102 is a logic diagram of a further example of an analysis system determining an issue detection rating for a system, or portion thereof; in particular generating an automation rating.
- the method begins at step 764 where the analysis system generates a first automation rating based on a first combination of a system criteria (e.g., system requirements), of a system mode (e.g., system functions), of an evaluation perspective (e.g., implementation), and of an evaluation viewpoint (e.g., disclosed data).
- the method continues at step 765 where the analysis system generates a second automation rating based on a second combination of a system criteria (e.g., system design), of a system mode (e.g., system functions), of an evaluation perspective (e.g., implementation), and of an evaluation viewpoint (e.g., disclosed data).
- FIG. 103 is a logic diagram of a further example of generating an automation rating for understanding of system build of issue detection security functions of an organization.
- the method begins at step 767 where the analysis system identifies automation regarding issue detection security functions from the data.
- the method continues at step 768 where the analysis system generates an automation rating from the data. Examples of this were discussed with reference to FIG. 81 .
- the method also continues at step 769 where the analysis system determines use of the automation for issue detection security functions.
- the method continues at step 770 where the analysis system generates an automation rating based on use. Examples of this were discussed with reference to FIG. 81 .
- the method also continues at step 771 where the analysis system determines consistency of applying the automation for issue detection security functions.
- the method continues at step 772 where the analysis system generates an automation rating based on consistency of use. Examples of this were discussed with reference to FIG. 81 .
- the method continues at step 773 where the analysis system generates the automation rating based on the automation rating from the data, the automation rating based on use, and the automation rating based on application (i.e., consistency of use).
- FIG. 104 is a logic diagram of a further example of generating an automation rating for understanding of system build of issue detection security functions of an organization.
- the method begins at step 774 where the analysis system identifies automation to verify issue detection security functions from the data.
- the method continues at step 775 where the analysis system generates an automation rating from the data. Examples of this were discussed with reference to FIG. 81 .
- the method also continues at step 776 where the analysis system determines use of the automation for issue detection security functions.
- the method continues at step 777 where the analysis system generates an automation rating based on use of the verifying automation. Examples of this were discussed with reference to FIG. 81.
- the method also continues at step 778 where the analysis system determines consistency of applying the verifying automation for issue detection security functions.
- the method continues at step 779 where the analysis system generates an automation rating based on consistency of use. Examples of this were discussed with reference to FIG. 81 .
- the method continues at step 780 where the analysis system generates the automation rating based on the automation rating from the data, the automation rating based on use, and the automation rating based on consistency of use.
- FIG. 105 is a logic diagram of an example of generating an automation rating by the analysis system; in particular the automation rating module generating an automation rating based on data 600 .
- the method begins at step 781 where the analysis system determines whether there is available automation for a particular system aspect, system criteria, and/or system mode. If automation is not available, the method continues at step 782 where the analysis system generates an automation rating of 10 (and/or a word rating of “unavailable”).
- If automation is available, the method continues at step 783 where the analysis system determines whether there is at least one automation in the data. If not, the method continues at step 784 where the analysis system generates an automation rating of 0 (and/or a word rating of “none”).
- If there is at least one automation in the data, the method continues at step 785 where the analysis system determines whether full automation is found in the data.
- In this instance, full automation refers to all of the automation techniques that are available for the system being included in the data 600.
- If the automation is not full, the method continues at step 786 where the analysis system generates an automation rating of 5 (and/or a word rating of “partial”). If, however, the automation is full, the method continues at step 787 where the analysis system generates an automation rating of 10 (and/or a word rating of “full”).
- the numerical ratings are example values and could be other values. Further note that the number of levels of automation rating may be more or less than the four shown.
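- The automation rating of FIG. 105 differs from the ladders above in that it first checks whether automation is even available for the system aspect, so the absence of unavailable automation is not penalized. A minimal illustrative sketch with hypothetical names:

```python
# Illustrative sketch of the FIG. 105 branches; the names are hypothetical and the values
# are the example values above.
def rate_automation(automation_available: bool,
                    automation_found: bool,
                    automation_full: bool) -> tuple[int, str]:
    if not automation_available:
        return 10, "unavailable"   # no automation exists for this aspect, so its absence is not penalized
    if not automation_found:
        return 0, "none"
    if not automation_full:
        return 5, "partial"
    return 10, "full"

# Example: automation is available and present in the data, but not full.
print(rate_automation(True, True, False))  # (5, 'partial')
```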
- FIG. 106 is a logic diagram of an example of generating an automation rating by the analysis system; in particular the automation rating module generating an automation rating based on use.
- the method begins at step 788 where the analysis system determines whether there is available automation for a particular system aspect, system criteria, and/or system mode. If automation is not available, the method continues at step 789 where the analysis system generates an automation rating of 10 (and/or a word rating of “unavailable”).
- If automation is available, the method continues at step 790 where the analysis system determines whether there is at least one use of automation. If not, the method continues at step 791 where the analysis system generates an automation rating of 0 (and/or a word rating of “none”).
- If there is at least one use of automation, the method continues at step 792 where the analysis system determines whether the automation is fully used.
- In this instance, full use of automation refers to the automation techniques that the system has being fully used.
- If the use of automation is not full, the method continues at step 793 where the analysis system generates an automation rating of 5 (and/or a word rating of “partial”). If, however, the use of automation is full, the method continues at step 794 where the analysis system generates an automation rating of 10 (and/or a word rating of “full”).
- FIG. 107 is a logic diagram of an example of generating an automation rating by the analysis system; in particular the automation rating module generating an automation rating based on application of automation.
- the method begins at step 795 where the analysis system determines whether there is available automation for a particular system aspect, system criteria, and/or system mode. If automation is not available, the method continues at step 796 where the analysis system generates an automation rating of 10 (and/or a word rating of “unavailable”).
- If automation is available, the method continues at step 797 where the analysis system determines whether there is at least one application of automation. If not, the method continues at step 798 where the analysis system generates an automation rating of 0 (and/or a word rating of “none”).
- If there is at least one application of automation, the method continues at step 799 where the analysis system determines whether the automation is fully applied.
- In this instance, full application of automation refers to the automation techniques of the system being applied to achieve consistent use.
- If the application of automation is not full, the method continues at step 800 where the analysis system generates an automation rating of 5 (and/or a word rating of “partial”). If, however, the application of automation is full, the method continues at step 801 where the analysis system generates an automation rating of 10 (and/or a word rating of “full”).
- FIG. 108 is a logic diagram of a further example of an analysis system determining an issue detection rating for a system, or portion thereof; in particular the analysis system identifying system elements for use in the issue detection evaluation.
- the method begins at step 810 where the analysis system activates at least one detection tool (e.g., cybersecurity tool, end point tool, network tool, IP address tool, hardware detection tool, software detection tool, etc.) based on the system aspect.
- the method continues at step 811 where the analysis system determines whether an identified system element has already been identified for the system aspect (e.g., is already in the collection of data 600 and/or is part of the gathered data). If yes, the method continues at step 812 where the analysis system determines whether the identifying of system elements is done. If not, the method repeats at step 811. If the identifying of system elements is done, the method continues at step 813 where the analysis system determines whether to end the method or repeat it for another system aspect, or portion thereof.
- If the identified system element has not already been identified for the system aspect, the method continues at step 814 where the analysis system determines whether the potential system element is already identified as being a part of the system aspect, but not included in the collection of data 600 (e.g., is it cataloged as being part of the system?). If yes, the method continues at step 815 where the analysis system adds the identified system element to the collection of data 600.
- If not, the method continues at step 816 where the analysis system obtains data regarding the potential system element.
- For example, the analysis system obtains a device ID, a user ID, a device serial number, a device description, a software ID, a software serial number, a software description, vendor information, and/or other data regarding the system element.
- the method continues at step 817 where the analysis system verifies the potential system element based on the data. For example, the analysis system verifies one or more of a device ID, a user ID, a device serial number, a device description, a software ID, a software serial number, a software description, vendor information, and/or other data regarding the system element to establish that the system element is a part of the system.
- When the potential system element is verified, the method continues at step 818 where the analysis system adds the system element as a part of the system aspect (e.g., catalogs it as part of the system and/or adds it to the collection of data 600).
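- The identification loop of FIG. 108 can be pictured as iterating over elements reported by the activated detection tools and deciding, for each, whether it is already gathered, already cataloged, or needs to be verified and added. The sketch below is a simplified, hypothetical rendering; the detection tools, verification logic, and data structures are stand-ins, not the disclosed implementation.

```python
# Simplified, hypothetical sketch of the FIG. 108 flow; the detection tools, verification
# logic, and data structures are stand-ins rather than the disclosed implementation.
def catalog_system_elements(detected_elements, catalog, collection, fetch_metadata, verify):
    """detected_elements: identifiers reported by the activated detection tool(s) (step 810).
    catalog: identifiers already known to be part of the system aspect.
    collection: the gathered data keyed by identifier (analogous to the collection of data 600)."""
    for element_id in detected_elements:
        if element_id in collection:
            continue                              # already identified and gathered (step 811)
        if element_id in catalog:
            collection[element_id] = {}           # known element: add it to the collection (step 815)
            continue
        metadata = fetch_metadata(element_id)     # obtain device ID, serial number, vendor info, etc. (step 816)
        if verify(element_id, metadata):          # verify the potential element against the obtained data (step 817)
            catalog.add(element_id)               # catalog it as part of the system aspect (step 818)
            collection[element_id] = metadata     # ...and add it to the collection of data
    return catalog, collection

# Example with stubbed metadata and verification callables.
catalog, collection = catalog_system_elements(
    ["dev-1", "dev-2"],
    catalog={"dev-1"},
    collection={},
    fetch_metadata=lambda element_id: {"device_id": element_id, "vendor": "unknown"},
    verify=lambda element_id, metadata: True)
print(sorted(collection))  # ['dev-1', 'dev-2']
```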
- FIG. 109 is a diagram of an example of system aspects, evaluation aspects, evaluation rating metrics, and analysis system output options of an analysis system 10 for analyzing a system 11, or portion thereof.
- the analysis system 10 is evaluating, with respect to process, policy, procedure, certification, documentation, and/or automation, the understanding and implementation of the guidelines, system requirements, system design, and/or system build for issue detection security functions of an organization based on disclosed data and discovered data to produce an evaluation rating.
- the analysis system 10 can generate one or a plurality of issue detection evaluation ratings for implementation of the guidelines, system requirements, system design, and/or system build for issue detection security functions of an organization based on disclosed data and discovered data in accordance with the evaluation rating metrics of process, policy, procedure, certification, documentation, and/or automation. A few, but far from exhaustive, examples are shown in FIGS. 113-117 .
- FIG. 110 is an example of issue detection data for use by an analysis system to generate an issue detection rating for a system, or portion thereof.
- FIG. 110 is similar to FIG. 77 except, in addition to disclosed information regarding system build of issue detection security functions of an organization, here, the issue detection information further includes discovered information regarding system build, system design, and implementation of issue detection security functions of the organization. Discovered information is the information discovered about the system by the analysis system during the analysis.
- the discovered and disclosed anomalies and event awareness information is gathered from various disclosed and discovered data sources as a portion of the discovered and disclosed issue detection information recorded as issue detection data.
- the disclosed portion of the discovered and disclosed anomalies and event awareness information may include information obtained from disclosed diagrams, a list of system tools, disclosed device information, disclosed application information, the system security plan, and disclosed reports relevant to anomalies and event awareness as discussed with reference to FIG. 77 .
- the discovered portion of the discovered and disclosed anomalies and event awareness information is obtained from analyzing at least a portion of the disclosed information and from engagement with system devices, operating systems, networks, system tools, one or more departments (e.g., the IT department), one or more groups, one or more users (e.g., discovered user data), etc.
- the analysis system may discover devices (e.g., discovered devices) connected to an organization's network that were not in the disclosed information.
- the analysis system is operable to analyze whether the devices have any hardware and/or software issues. For example, hardware and/or software may be improperly installed, old, and/or misused. As another example, hardware may not efficiently support software.
- the analysis system can determine how well the system tools are working and if they are executing their intended purposes. For example, the analysis system can engage with the log management tool, gather logs from system components, and determine log management processes to assess the log management tool's capabilities. As an example, the analysis system may discover that not all log data is being collected and additional log collection forwarding devices (e.g., hardware and/or software) are needed. As another example, the analysis system may discover that while the log management tool aggregates event data from various sources, the event analysis component of the log management tool is limited (e.g., the security analyst team is short staffed and unable to manage the data). When the analysis system outputs include deficiency identification and/or auto-correction, the analysis system is operable to recommend or auto-install a SIEM tool or similar event aggregation and analysis tool to remedy the issue.
- By engaging with various departments, groups, users (e.g., discovered user data such as user communications (e.g., email content, memos, internal documents, text messages, etc.)), etc., the analysis system develops an understanding of system data flow, network operations, personnel roles and responsibilities, and execution of disclosed processes, policies, procedures, etc. For example, a disclosed issue detection policy may require the security analyst team to alert the legal department upon discovery of an incident. However, the analysis system discovers that there is no clear definition of the term incident in the disclosed information and that the issue detection policy is flawed. As another example, the analysis system may discover that the incident alert threshold is set too low and is creating noise and alarm fatigue. Numerous other pieces of discovered and disclosed anomalies and event awareness information may be gathered from various information sources. The above examples are far from exhaustive.
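- One concrete element of the discovered-versus-disclosed comparison described above is flagging devices that appear on the network but were absent from the disclosed inventory. A tiny illustrative sketch, with made-up device identifiers:

```python
# Hypothetical sketch: flag devices seen on the network that were absent from the disclosed
# inventory, as in the discussion of discovered devices above. Device names are made up.
def undisclosed_devices(disclosed: set, discovered: set) -> set:
    """Return device identifiers present in the discovered data but not in the disclosed data."""
    return discovered - disclosed

disclosed = {"fw-01", "srv-01", "srv-02"}
discovered = {"fw-01", "srv-01", "srv-02", "printer-77", "nas-03"}
print(undisclosed_devices(disclosed, discovered))  # e.g., {'printer-77', 'nas-03'} (order may vary)
```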
- FIG. 111 is an example of issue detection data for use by an analysis system to generate an issue detection rating for a system, or portion thereof.
- FIG. 111 is similar to FIG. 78 except, in addition to disclosed information regarding system build of issue detection security functions of an organization, here, the issue detection information further includes discovered information regarding system build, system design, and implementation of issue detection security functions of the organization. Discovered information is the information discovered about the system by the analysis system during the analysis.
- discovered and disclosed continuous security monitoring information is gathered from various data sources as a portion of the discovered and disclosed issue detection information recorded as issue detection data.
- the disclosed portion of the discovered and disclosed continuous security monitoring information may include the disclosed system security plan, disclosed list of system tools, disclosed device information, training materials, and disclosed user information relevant to continuous security monitoring as discussed with reference to FIG. 78 .
- the discovered portion of the discovered and disclosed continuous security monitoring information is obtained from analyzing at least a portion of the disclosed information and from engagement with the system devices, applications, networks, system tools, one or more departments (e.g., the IT department), one or more groups, one or more users (e.g., discovered user data), etc.
- the analysis system develops an understanding of security monitoring data flows, personnel roles and responsibilities, and execution of disclosed continuous security monitoring processes, policies, procedures, etc.
- the analysis system may discover devices (e.g., discovered devices) connected to an organization's network that were not in the disclosed information.
- the analysis system is operable to analyze whether the devices have any hardware and/or software issues. For example, a disclosed malicious code detection policy may state that all system devices require current antivirus software; however, the analysis system analyzes the system devices and determines that several devices have no antivirus software installed or have an old version installed.
- the analysis system can determine how well the system tools are working and if they are executing their intended purposes. For example, the analysis system can engage with the network monitoring tool, gather network traffic data, and determine network monitoring processes to assess the network monitoring tool's capabilities. As an example, the analysis system may discover that an important device is not being monitored. As another example, the analysis system may discover that the network monitoring tool has a negative performance impact on the devices where it is implemented. As another example, the analysis system may discover that external service provider activity is not monitored.
- the analysis system is operable to recommend and/or auto-install remedies such as an external service provider activity monitoring tool (e.g., a cloud access security broker (CASB)) in this instance.
- the analysis system can assess the quality and performance of an alarm system installed at a facility for purposes of physical environment monitoring.
- the analysis system can assess the antivirus and malware protection software implemented by the system to determine whether it is appropriate for the organization and has the necessary features (e.g., auto-update, central management, etc.).
- By engaging with various groups, users (e.g., discovered user data such as user communications (e.g., email content, memos, internal documents, text messages, etc.)), etc., the analysis system develops an understanding of system data flow, personnel roles and responsibilities, and execution of disclosed processes, policies, procedures, etc. For example, disclosed personnel training materials may disclose policies regarding user uploads and network connections. By engaging with devices and analyzing communications on the network, the analysis system can assess whether the users are adhering to the practices outlined in the training materials (e.g., are users performing unauthorized uploads, are users connecting to unknown networks, are users falling victim to phishing emails, etc.). Numerous other pieces of discovered and disclosed continuous security monitoring information may be gathered from various information sources. The above examples are far from exhaustive.
- FIG. 112 is an example of issue detection data for use by an analysis system to generate an issue detection rating for a system, or portion thereof.
- FIG. 112 is similar to FIG. 79 except, in addition to disclosed information regarding system build of issue detection security functions of an organization, here, the issue detection information further includes discovered information regarding system build, system design, and implementation of issue detection security functions of the organization. Discovered information is the information discovered about the system by the analysis system during the analysis.
- discovered and disclosed issue detection processes analysis information is gathered from various sources as a portion of the discovered and disclosed issue detection information recorded as issue detection data.
- the disclosed portion of the discovered and disclosed issue detection processes analysis information may include the disclosed system security plan, disclosed diagrams, a disclosed list of system tools, personnel handbooks, human resources (HR) documents, personnel training materials, reports, and other disclosed internal documents as discussed with reference to FIG. 79 .
- the discovered portion of the discovered and disclosed issue detection processes analysis information is obtained from analyzing at least a portion of the disclosed information and from engagement with the system devices, operating systems, networks, system tools, one or more departments (e.g., the IT department), one or more groups, one or more users (e.g., discovered user data), etc.
- the analysis system can determine how well the system tools are working and if they are executing their intended purposes. For example, the analysis system can engage with verification tools, gather network and data flow information, and determine issue detection processes to assess the verification tools' capabilities. As an example, the analysis system may discover that vulnerability scanning processes are insufficient for the system or a portion thereof (e.g., scanning is too infrequent).
- By engaging with system devices and various departments (e.g., the IT department, the HR department, etc.), the analysis system develops an understanding of system data flow, personnel roles and responsibilities, and execution of disclosed processes, policies, procedures, etc. For example, the analysis system can analyze whether there is overlap in the roles and responsibilities of the security analyst team, whether security reports are sent to the right parties, whether personnel act in accordance with their roles, etc.
- the analysis system can assess whether training occurs often enough, covers the correct topics, involves the right people, and whether the matters discussed are enforced. Numerous other pieces of discovered and disclosed issue detection processes analysis information may be gathered from various information sources. The above examples are far from exhaustive.
- FIG. 113 is a diagram of an example of producing a plurality of issue detection ratings and combining them into one rating.
- sixteen individual ratings are generated and then, via the cumulative rating module 607 , are combined into one issue detection rating 608 .
- a first individual issue detection rating is generated from a combination of organization, guidelines, security functions, understanding, and disclosed data.
- a second individual issue detection rating is generated from a combination of organization, guidelines, security functions, implementation, and disclosed data.
- a third individual issue detection rating is generated from a combination of organization, guidelines, security functions, understanding, and discovered data.
- a fourth individual issue detection rating is generated from a combination of organization, guidelines, security functions, implementation, and discovered data. The remaining twelve individual issue detection ratings are generated from the combinations shown.
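- As a minimal illustrative sketch only (the source does not limit the cumulative rating module 607 to any particular function; the weighted aggregation and the rating values below are assumptions), the combining of individual issue detection ratings into one issue detection rating 608 may be expressed as follows:

    # Hypothetical sketch of a cumulative rating module; equal weighting is
    # assumed unless weights are supplied.
    from statistics import mean

    def cumulative_rating(individual_ratings, weights=None):
        """Combine individual issue detection ratings (e.g., one per combination
        of system criteria, system mode, evaluation perspective, and evaluation
        viewpoint) into a single cumulative rating."""
        if weights is None:
            return mean(individual_ratings)
        return sum(r * w for r, w in zip(individual_ratings, weights)) / sum(weights)

    # Example: sixteen individual ratings combined with equal weighting.
    sixteen_ratings = [72, 65, 80, 58, 90, 77, 66, 81, 70, 62, 85, 73, 69, 74, 79, 68]
    issue_detection_rating = cumulative_rating(sixteen_ratings)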
- FIG. 114 is a diagram of another example of producing a plurality of issue detection ratings and combining them into one rating.
- two individual ratings are generated and then, via the cumulative rating module 607 , are combined into one issue detection rating 608 .
- a first individual issue detection rating is generated from a combination of organization, guidelines, security functions, understanding, and disclosed data.
- a second individual issue detection rating is generated from a combination of organization, guidelines, security functions, implementation, and disclosed data. This allows for a comparison between the understanding of the issue detection security functions of the organization from the guidelines of the disclosed data and the implementation of the issue detection security functions of the organization from the guidelines of the disclosed data. This comparison provides a metric for determining how well the guidelines were understood and how well they were used and/or applied.
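- As an illustrative sketch (the gap function below is an assumption used only to show the comparison, not a method stated in the source), the comparison between the two individual ratings may be reduced to a simple metric:

    # Hypothetical comparison metric between an understanding-based rating and an
    # implementation-based rating derived from the same disclosed guidelines.
    def understanding_vs_implementation_gap(understanding_rating, implementation_rating):
        # A large positive gap suggests guidelines are understood but not well applied;
        # a negative gap suggests application outpaces documented understanding.
        return understanding_rating - implementation_rating

    gap = understanding_vs_implementation_gap(85, 62)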
- FIG. 115 is a diagram of another example of producing a plurality of issue detection ratings and combining them into one rating.
- two individual ratings are generated and then, via the cumulative rating module 607 , are combined into one issue detection rating 608 .
- a first individual issue detection rating is generated from a combination of organization, guidelines, security functions, understanding, and disclosed data.
- a second individual issue detection rating is generated from a combination of organization, guidelines, security functions, understanding, and discovered data. This allows for a comparison between the understanding of the issue detection security functions of the organization from the guidelines of the disclosed data and the understanding of the issue detection security functions of the organization from the guidelines of the discovered data. This comparison provides a metric for determining how well the guidelines were believed to be understood and how well they were actually understood.
- FIG. 116 is a diagram of another example of producing a plurality of issue detection ratings and combining them into one rating.
- two individual ratings are generated and then, via the cumulative rating module 607 , are combined into one issue detection rating 608 .
- a first individual issue detection rating is generated from a combination of organization, guidelines, security functions, understanding, and disclosed data.
- a second individual issue detection rating is generated from a combination of organization, system requirements, security functions, understanding, and disclosed data. This allows for a comparison between the understanding of the issue detection security functions of the organization from the guidelines of the disclosed data and the understanding of the issue detection security functions of the organization from the system requirements of the disclosed data. This comparison provides a metric for determining how well the guidelines were converted into the system requirements.
- FIG. 117 is a diagram of another example of producing a plurality of issue detection ratings and combining them into one rating.
- four individual ratings are generated and then, via the cumulative rating module 607 , are combined into one issue detection rating 608 .
- a first individual issue detection rating is generated from a combination of organization, guidelines, security functions, understanding, and disclosed data.
- a second individual issue detection rating is generated from a combination of organization, system requirements, security functions, understanding, and disclosed data.
- a third individual issue detection rating is generated from a combination of organization, system design, security functions, understanding, and disclosed data.
- a fourth individual issue detection rating is generated from a combination of organization, system build, security functions, understanding, and disclosed data.
- These comparisons provide a metric for determining how well the guidelines, system requirements, system design, and/or system build were understood relative to one another and how well they were used and/or applied.
- FIG. 118 is a logic diagram of a further example of an analysis system determining an issue detection rating for a system, or portion thereof, and determining deficiencies.
- the method includes one or more of steps 820 - 823 .
- the analysis system determines a system criteria deficiency (e.g., guidelines, system requirements, system design, system build, and/or resulting system) of the system aspect based on the issue detection rating and the issue detection data. Examples have been discussed with reference to one or more preceding figures.
- the analysis system determines a system mode deficiency (e.g., assets, system functions, and/or security functions) of the system aspect based on the issue detection rating and the issue detection data.
- the analysis system determines an evaluation perspective deficiency (e.g., understanding, implementation, operation, and/or self-analysis) of the system aspect based on the issue detection rating and the issue detection data.
- the analysis system determines an evaluation viewpoint deficiency (e.g., disclosed, discovered, and/or desired) of the system aspect based on the issue detection rating and the issue detection data. Examples have been discussed with reference to one or more preceding figures.
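- As a minimal sketch of how steps 820-823 might be realized (the threshold and the dimension labels are assumptions; the source does not prescribe a specific rule), a deficiency can be flagged for any evaluation dimension whose rating falls below a threshold:

    # Hypothetical deficiency detection across system criteria, system mode,
    # evaluation perspective, and evaluation viewpoint dimensions.
    DEFICIENCY_THRESHOLD = 70

    def find_deficiencies(ratings_by_dimension):
        # ratings_by_dimension maps a (dimension, value) label to a numeric rating.
        return [label for label, rating in ratings_by_dimension.items()
                if rating < DEFICIENCY_THRESHOLD]

    deficiencies = find_deficiencies({
        ("system criteria", "system build"): 55,
        ("system mode", "security functions"): 81,
        ("evaluation perspective", "implementation"): 64,
        ("evaluation viewpoint", "discovered"): 77,
    })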
- FIG. 119 is a logic diagram of a further example of an analysis system determining an issue detection rating for a system, or portion thereof, and determining auto-corrections.
- the method begins at step 824 where the analysis system determines a deficiency of the system aspect based on the issue detection rating and/or the issue detection data as discussed with reference to FIG. 118 .
- the method continues at step 825 where the analysis system determines whether the deficiency is auto-correctable. For example, the analysis system determines whether the deficiency relates to software and, if so, whether it can be auto-corrected.
- the method continues at step 826 where the analysis system includes the identified deficiency in a report. If, however, the deficiency is auto-correctable, the method continues at step 827 where the analysis system auto-corrects the deficiency. The method continues at step 828 where the analysis system includes the identified deficiency and auto-correction in a report. Examples of auto-correction have been discussed with reference to one or more preceding Figures.
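- As a minimal sketch of the decision flow in steps 824-828 (the predicate and correction callables are placeholders; the source does not specify how auto-correctability is determined), the reporting branch may look like:

    # Hypothetical auto-correction handling; the caller supplies the predicate and
    # the corrector, and the report accumulates entries for both branches.
    def handle_deficiency(deficiency, is_auto_correctable, auto_correct, report):
        if is_auto_correctable(deficiency):
            auto_correct(deficiency)
            report.append({"deficiency": deficiency, "auto_corrected": True})
        else:
            report.append({"deficiency": deficiency, "auto_corrected": False})
        return report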
- FIG. 120 is a logic diagram of another example of an analysis system determining an issue detection rating for a system, or portion thereof.
- the method begins at step 830 where the analysis system selects a system, or portion thereof, to evaluate issue detection of the system, or portion thereof, with respect to assets, system functions, and/or security functions.
- the analysis system selects one or more system elements, one or more system criteria, and/or one or more system modes for the system, or portion thereof, to be evaluated.
- the analysis system selects one or more evaluation perspectives, one or more evaluation viewpoints, and/or detect as the evaluation category.
- the analysis system may further select one or more detect sub-categories and/or one or more sub-sub categories.
- the analysis system selects the entire system (e.g., the organization/enterprise), selects a division of an organization operating the system, selects a department of a division, selects a group of a department, or selects a sub-group of a group.
- the analysis system selects one or more physical assets and/or one or more conceptual assets.
- the method continues at step 831 where the analysis system obtains issue detection information regarding the system, or portion thereof.
- the issue detection information includes information representative of an organization's understanding of issue detection of the system, or portion thereof, with respect to the assets, the system functions, and/or the security functions.
- the analysis system obtains the issue detection information (e.g., disclosed data from the system) by receiving it from a system admin computing entity.
- the analysis system obtains the issue detection information by gathering it from one or more computing entities of the system.
- step 832 the analysis system engages with the system, or portion thereof, to produce system issue detection data (e.g., discovered data) regarding the system, or portion thereof, with respect to the assets, the system functions, and/or the security functions. Engaging the system, or portion thereof, will be discussed in greater detail with reference to FIG. 121 .
- step 833 the analysis system calculates an issue detection rating regarding the issue detection of the system, or portion thereof, based on the issue detection information, the system issue detection data, and issue detection processes, issue detection policies, issue detection documentation, and/or issue detection automation.
- the issue detection rating may be indicative of a variety of factors of the system, or portion thereof.
- the issue detection rating indicates how well the issue detection information reflects an understanding of issue detection with respect to security functions of the system, or portion thereof.
- the issue detection rating indicates how well the issue detection information reflects an understanding of issue detection with respect to system functions of the system, or portion thereof.
- the issue detection rating indicates how well the issue detection information reflects an understanding of issue detection with respect to assets of the system, or portion thereof.
- the issue detection rating indicates how well the issue detection information reflects intended implementation of issue detection with respect to assets of the system, or portion thereof.
- the issue detection rating indicates how well the issue detection information reflects intended operation of issue detection with respect to assets of the system, or portion thereof.
- the issue detection rating indicates how well the issue detection information reflects intended implementation of issue detection with respect to system functions of the system, or portion thereof. As another example, the issue detection rating indicates how well the issue detection information reflects intended operation of issue detection with respect to system functions of the system, or portion thereof.
- the issue detection rating indicates how well the issue detection information reflects intended implementation of issue detection with respect to security functions of the system, or portion thereof. As another example, the issue detection rating indicates how well the issue detection information reflects intended operation of issue detection with respect to security functions of the system, or portion thereof.
- the method continues at step 834 where the analysis system gathers desired system issue detection data from one or more system proficiency resources.
- the method continues at step 835 where the analysis system calculates a second issue detection rating regarding a desired level of issue detection of the system, or portion thereof, based on the issue detection information, the issue detection data, the desired issue detection data, and the issue detection processes, the issue detection policies, the issue detection documentation, and/or the issue detection automation.
- the second issue detection rating is regarding a comparison of desired data with the disclosed data and/or discovered data.
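- As a high-level sketch of steps 830-835 (all function names are hypothetical stand-ins for the operations the text describes, not an API defined by the source), the flow may be organized as:

    # Hypothetical driver for the issue detection evaluation flow.
    def evaluate_issue_detection(system_portion, analysis):
        disclosed = analysis.obtain_issue_detection_information(system_portion)      # step 831
        discovered = analysis.engage_system(system_portion)                          # step 832
        rating = analysis.calculate_rating(disclosed, discovered)                    # step 833
        desired = analysis.gather_desired_data(system_portion)                       # step 834
        second_rating = analysis.calculate_rating(disclosed, discovered, desired)    # step 835
        return rating, second_rating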
- FIG. 121 is a logic diagram of a further example of an analysis system determining an issue detection rating for a system, or portion thereof; in particular, engaging the system, or portion thereof, to obtain data.
- the method begins at step 836 where the analysis system interprets the issue detection information to identify components (e.g., computing device, HW, SW, server, etc.) of the system, or portion thereof.
- the method continues at step 837 where the analysis system queries a component regarding implementation, function, and/or operation of the component.
- step 838 the analysis system evaluates a response from the component for concurrence with a portion of the issue detection information relevant to the component.
- the method continues at step 839 where the analysis system determines whether the response concurs with a portion of the issue detection information. If the response concurs, the method continues at step 840 where the analysis system adds a data element (e.g., a record entry, a note, set a flag, etc.) to the system issue detection data regarding the substantial concurrence of the response from the component with the portion of the issue detection information relevant to the component.
- the method continues at step 841 where the analysis system adds a data element (e.g., a record entry in a table, a note, set a flag, etc.) to the system issue detection data regarding the response from the component not substantially concurring with the portion of the issue detection information relevant to the component.
- the non-concurrence is indicative of a deviation in the implementation, function, and/or operation of the component as identified in the response from disclosed implementation, function, and/or operation of the component as contained in the issue detection information.
- the deviation may be different HW, different SW, different network access, different data access, different data flow, coupling to different other components, and/or other differences.
- the method continues in FIG. 122 at step 842 where the analysis system queries the component and/or another component regarding a cause for the deviation.
- the method continues at step 843 where the analysis system updates a data element to include an indication of the one or more causes for a deviation, wherein a cause for the deviation is based on responses from the component and/or the other component.
- step 844 the analysis system determines whether the deviation is a communication deviation. If yes, the method continues at step 845 where the analysis system evaluates a response from the device to ascertain an error of the issue detection information regarding the device and/or the communication between the device and the component. The method continues at step 846 where the analysis system determines one or more causes of the error of the communication deviation.
- the method continues at step 847 where the analysis system determines whether the deviation is a system function deviation. If yes, the method continues at step 848 where the analysis system evaluates a response from the device to ascertain an error of the issue detection information regarding the device and/or the system function of the device. The method continues at step 849 where the analysis system determines one or more causes of the error of the system function deviation.
- the method continues at step 850 where the analysis system determines whether the deviation is a security function deviation. If yes, the method continues at step 851 where the analysis system evaluates a response from the device to ascertain an error of the issue detection information regarding the device and/or the security function of the device. The method continues at step 852 where the analysis system determines one or more causes of the error of the security function deviation.
- the method continues at step 853 where the analysis system evaluates a device response from the device to ascertain an error of the issue detection information regarding the device and/or of the device.
- the method continues at step 854 where the analysis system determines one or more causes of the error of the information and/or of the device.
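- As a minimal sketch of the concurrence check in steps 837-841 (the record format is an assumption; the source does not define concrete data shapes), a component response can be compared field-by-field with the disclosed information:

    # Hypothetical concurrence check between a component's reported implementation,
    # function, and/or operation and the disclosed issue detection information.
    def check_component(component_response, disclosed_portion, issue_detection_data):
        deviations = {key: (disclosed_portion.get(key), value)
                      for key, value in component_response.items()
                      if key != "id" and disclosed_portion.get(key) != value}
        issue_detection_data.append({
            "component": component_response.get("id"),
            "concurs": not deviations,
            "deviations": deviations,   # e.g., different SW version, different data flow
        })
        return deviations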
- FIG. 123 is a logic diagram of a further example of an analysis system determining an issue detection rating for a system, or portion thereof, and in particular for calculating the issue detection rating.
- the method begins at steps 855 and 856.
- the analysis system processes the issue detection information into policy related issue detection information, process related issue detection information, documentation related issue detection information, and/or automation related issue detection information.
- the analysis system processes the system issue detection data into policy related system issue detection data, process related system issue detection data, documentation related system issue detection data, and/or automation related system issue detection data.
- the analysis system evaluates the process related issue detection information with respect to the process related system issue detection data to produce a process issue detection rating.
- the analysis system evaluates the policy related issue detection information with respect to the policy related system issue detection data to produce a policy issue detection rating.
- the analysis system evaluates the documentation related issue detection information with respect to the documentation related system issue detection data to produce a documentation issue detection rating.
- the analysis system evaluates the automation related issue detection information with respect to the automation related system issue detection data to produce an automation issue detection rating.
- step 861 the analysis system generates an issue detection rating based on the automation issue detection rating, the documentation issue detection rating, the process issue detection rating, and the policy issue detection rating.
- the analysis system performs a function on the automation issue detection rating, the documentation issue detection rating, the process issue detection rating, and the policy issue detection rating to produce the issue detection rating.
- the function is a weighted average, standard deviation, statistical analysis, trending, and/or other mathematical function.
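- As a minimal sketch of step 861, assuming the function is the weighted average named in the preceding line (the weights shown are illustrative only):

    # Hypothetical weighted-average combination of the four sub-ratings.
    def combine_sub_ratings(process_r, policy_r, documentation_r, automation_r,
                            weights=(0.25, 0.25, 0.25, 0.25)):
        subs = (process_r, policy_r, documentation_r, automation_r)
        return sum(r * w for r, w in zip(subs, weights)) / sum(weights)

    issue_detection_rating = combine_sub_ratings(78, 64, 55, 70)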
- FIG. 124 is a logic diagram of a further example of an analysis system determining an issue detection rating for a system, or portion thereof, and in particular engaging the system to obtain issue detection data.
- the method begins at step 862 where the analysis system determines data gathering criteria and/or parameters. The determining of data gathering parameters has been discussed with reference to one or more preceding Figures and one or more subsequent Figures.
- the method continues at step 863 where the analysis system identifies a user device and queries it for data in accordance with the data gathering parameters.
- the method continues at step 864 where the analysis system obtains a data response from the user device.
- the data response includes data regarding the user device relevant to the data gathering parameters. An example of user device data was discussed with reference to one or more of FIGS. 75-79 .
- the method continues at step 865 where the analysis system catalogs the user device (e.g., records it as being part of the system, or portion thereof, if not already cataloged).
- the method continues at step 866 where the analysis system identifies vendor information regarding the user device.
- the method continues at step 867 where the analysis system tags the data regarding the user device with the vendor information. This enables data to be sorted, searched, etc. based on vendor information.
- step 868 the analysis system determines whether data has been received from all relevant user devices. If not, the method repeats at step 863 . If yes, the method continues at step 869 where the analysis system identifies a storage device and queries it for data in accordance with the data gathering parameters. The method continues at step 870 where the analysis system obtains a data response from the storage device. The data response includes data regarding the storage device relevant to the data gathering parameters. An example of storage device data was discussed with reference to one or more of FIGS. 75-79 .
- the method continues at step 871 where the analysis system catalogs the storage device (e.g., records it as being part of the system, or portion thereof, if not already cataloged) when the storage device responds.
- the method continues at step 872 where the analysis system identifies vendor information regarding the storage device.
- the method continues at step 873 where the analysis system tags the data regarding the storage device with the vendor information.
- the method continues at step 874 where the analysis system determines whether data has been received from all relevant storage devices. If not, the method repeats at step 869 .
- the method continues at step 875 where the analysis system identifies a server device and queries it for data in accordance with the data gathering parameters.
- the method continues at step 876 where the analysis system obtains a data response from the server device.
- the data response includes data regarding the server device relevant to the data gathering parameters. An example of server device data was discussed with reference to one or more of FIGS. 75-79 .
- the method continues at step 877 where the analysis system catalogs the server device (e.g., records it as being part of the system, or portion thereof, if not already cataloged) when the server device responds.
- step 878 the analysis system identifies vendor information regarding the server device.
- step 879 the analysis system tags the data regarding the server device with the vendor information.
- step 880 of FIG. 125 the analysis system determines whether data has been received from all relevant server devices. If not, the method repeats at step 875 .
- step 881 the analysis system identifies a security device and queries it for data in accordance with the data gathering parameters.
- step 882 the analysis system obtains a data response from the security device.
- the data response includes data regarding the security device relevant to the data gathering parameters. An example of security device data was discussed with reference to one or more of FIGS. 75-79 .
- the method continues at step 883 where the analysis system catalogs the security device (e.g., records it as being part of the system, or portion thereof, if not already cataloged) when the security device responds.
- the method continues at step 884 where the analysis system identifies vendor information regarding the security device.
- the method continues at step 885 where the analysis system tags the data regarding the security device with the vendor information.
- the method continues at step 886 where the analysis system determines whether data has been received from all relevant security devices. If not, the method repeats at step 881 .
- step 887 the analysis system identifies a security tool and queries it for data in accordance with the data gathering parameters.
- step 888 the analysis system obtains a data response from the security tool.
- the data response includes data regarding the security tool relevant to the data gathering parameters. An example of security tool data was discussed with reference to one or more of FIGS. 75-79 .
- step 889 the analysis system catalogs the security tool (e.g., records it as being part of the system, or portion thereof, if not already cataloged) when the security tool responds via hardware on which the tool operates.
- step 890 the analysis system identifies vendor information regarding the security tool.
- step 891 the analysis system tags the data regarding the security tool with the vendor information.
- step 892 the analysis system determines whether data has been received from all relevant security tools. If not, the method repeats at step 887 .
- the method continues at step 893 where the analysis system identifies a network device and queries it for data in accordance with the data gathering parameters.
- the method continues at step 894 where the analysis system obtains a data response from the network device.
- the data response includes data regarding the network device relevant to the data gathering parameters. An example of network device data was discussed with reference to one or more of FIGS. 75-79 .
- the method continues at step 895 where the analysis system catalogs the network device (e.g., records it as being part of the system, or portion thereof, if not already cataloged) when the network device responds.
- step 896 the analysis system identifies vendor information regarding the network device.
- step 897 the analysis system tags the data regarding the network device with the vendor information.
- step 898 of FIG. 126 the analysis system determines whether data has been received from all relevant network devices. If not, the method repeats at step 893 .
- the method continues at step 899 where the analysis system identifies another device (e.g., any other device that is part of the system, interfaces with the system, uses the system, and/or supports the system) and queries it for data in accordance with the data gathering parameters.
- the analysis system obtains a data response from the other device.
- the data response includes data regarding the other device relevant to the data gathering parameters. An example of other device data was discussed with reference to one or more of FIGS. 75-79 .
- the method continues at step 901 where the analysis system catalogs the other device (e.g., records it as being part of the system, or portion thereof, if not already cataloged) when the other device responds.
- the method continues at step 902 where the analysis system identifies vendor information regarding the other device.
- the method continues at step 903 where the analysis system tags the data regarding the other device with the vendor information.
- the method continues at step 904 where the analysis system determines whether data has been received from all relevant other devices. If not, the method repeats at step 899 .
- the method continues at step 905 where the analysis system identifies another tool (e.g., any other tool that is part of the system, interprets the system, monitors the system, and/or supports the system) and queries it for data in accordance with the data gathering parameters.
- the analysis system obtains a data response from the other tool.
- the data response includes data regarding the other tool relevant to the data gathering parameters. An example of other tool data was discussed with reference to one or more of FIGS. 75-79 .
- the method continues at step 907 where the analysis system catalogs the other tool (e.g., records it as being part of the system, or portion thereof, if not already cataloged) when the other tool responds via hardware on which the tool operates.
- the method continues at step 908 where the analysis system identifies vendor information regarding the other tool.
- the method continues at step 909 where the analysis system tags the data regarding the other tool with the vendor information.
- the method continues at step 910 where the analysis system determines whether data has been received from all relevant other tools. If not, the method repeats at step 905 . If yes, the method continues at step 911 where the analysis system ends the process or repeats it for another part of the system.
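- As a sketch of the per-device-class gathering loop repeated in steps 863-911 (the query, catalog, and vendor-lookup callables are placeholders for mechanisms the source leaves open), the same pattern applies to user devices, storage devices, server devices, security devices, security tools, network devices, other devices, and other tools:

    # Hypothetical query/catalog/tag loop for one class of devices or tools.
    def gather_class_data(device_class, identify_relevant, query, catalog, vendor_of, params):
        gathered = []
        for device in identify_relevant(device_class, params):
            response = query(device, params)        # data response per the gathering parameters
            catalog(device)                         # record as part of the system if not already cataloged
            response["vendor"] = vendor_of(device)  # tag the data with vendor information
            gathered.append(response)
        return gathered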
- FIG. 127 is a logic diagram of a further example of an analysis system determining an issue detection rating for a system, or portion thereof and, in particular, identifying a device or a tool relevant to issue detection.
- the method begins at step 920 where the analysis system determines whether a device (e.g., hardware and/or software) or tool is already included in the information regarding the data gathering parameters (e.g., the disclosed data for a particular analysis of the system, or portion thereof). If yes, the method continues at step 921 where the analysis module determines whether it's done with identifying devices or tools relevant to issue detection. If yes, the method is ended. If not, the method repeats at step 920.
- the method continues at step 922 where the analysis system engages one or more detection (or discovery) tools to detect a device and/or a tool relevant to issue detection. Examples of detection tools were discussed with reference to one or more preceding figures.
- the method continues at step 923 where the analysis system determines whether the detection tool(s) has identified a device (e.g., hardware and/or software). If not, the method continues at step 924 where the analysis system determines whether the detection tool(s) has identified a tool. If not, the method repeats at step 921 .
- the method continues at step 925 where the analysis system obtains a data response from the tool, via hardware on which the tool operates, in regard to a data gathering request.
- the data response includes data regarding the tool. Examples of the data regarding the tool were discussed with reference to one or more of FIGS. 75-79 .
- step 927 the analysis system determines whether the tool is cataloged (e.g., is part of the system, but is not included in the issue detection information for this particular evaluation). If yes, the method continues at step 926 where the analysis system adds the tool to the issue detection information and the method continues at step 921.
- If not, the method continues at step 928 where the analysis system verifies the tool as being part of the system and then catalogs it as part of the system.
- the method continues at step 929 where the analysis system identifies vendor information regarding the tool.
- the method continues at step 930 where the analysis system tags the data regarding the tool with the vendor information. The method repeats at step 921 .
- step 931 the analysis system obtains a data response from the device in regard to a data gathering request.
- the data response includes data regarding the device. Examples of the data regarding the device were discussed with reference to one or more of FIGS. 75-79 .
- step 933 the analysis system determines whether the device (e.g., hardware and/or software) is cataloged (e.g., is part of the system, but is not included in the issue detection information for this particular evaluation). If yes, the method continues at step 932 where the analysis system adds the device to the issue detection information and the method continues at step 921.
- If not, the method continues at step 934 where the analysis system verifies the device as being part of the system and then catalogs it as part of the system.
- the method continues at step 935 where the analysis system identifies vendor information regarding the device.
- the method continues at step 936 where the analysis system tags the data regarding the device with the vendor information. The method repeats at step 921 .
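- As a minimal sketch of the cataloging branch in FIG. 127 (the callables are placeholders; the source only states that the item is verified, cataloged, and vendor-tagged when it is not already cataloged), the handling of a newly detected device or tool may look like:

    # Hypothetical handling of a detected device or tool relevant to issue detection.
    def record_detected_item(item, is_cataloged, verify_and_catalog, vendor_of,
                             issue_detection_information):
        if is_cataloged(item):
            # already part of the system, just missing from this evaluation's information
            issue_detection_information.append(item)
        else:
            verify_and_catalog(item)                # confirm the item is part of the system
            item["vendor"] = vendor_of(item)        # tag its data with vendor information
            issue_detection_information.append(item)
        return issue_detection_information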
- FIG. 128 is a logic diagram of a further example of an analysis system determining an issue detection rating for a system, or portion thereof and, in particular, identifying a device or a tool relevant to issue detection.
- the method begins at step 940 where the analysis system determines whether a device (e.g., hardware and/or software) or tool is already included in the issue detection information (e.g., the disclosed data for a particular analysis of the system, or portion thereof). If yes, the method continues at step 941 where the analysis module determines whether it's done with identifying devices or tools. If yes, the method is ended. If not, the method repeats at step 940 .
- the method continues at step 942 where the analysis system interprets data from an identified device and/or tool (e.g., already in the issue detection information) with regards to a device or tool. For example, the analysis system looks for data regarding an identified device exchanging data with the device being reviewed. As another example, the analysis system looks for data regarding a tool being used on the device under review to repair a software issue.
- step 943 the analysis system determines whether the data has identified such a device (e.g., hardware and/or software). If not, the method continues at step 944 where the analysis system determines whether the data has identified such a tool. If not, the method repeats at step 941.
- the method continues at step 945 where the analysis system obtains a data response from the tool, via hardware on which the tool operates, in regard to a data gathering request.
- the data response includes data regarding the tool. Examples of the data regarding the tool were discussed with reference to one or more of FIGS. 75-79 .
- step 947 the analysis system determines whether the tool is cataloged (e.g., is part of the system, but is not included in the issue detection information for this particular evaluation). If yes, the method continues at step 946 where the analysis system adds the tool to the issue detection information and the method continues at step 941 .
- If not, the method continues at step 948 where the analysis system verifies the tool as being part of the system and then catalogs it as part of the system.
- the method continues at step 949 where the analysis system identifies vendor information regarding the tool.
- the method continues at step 950 where the analysis system tags the data regarding the tool with the vendor information. The method repeats at step 941.
- If, at step 943, the data has identified such a device, the method continues at step 951 where the analysis system obtains a data response from the device in regard to a data gathering request.
- the data response includes data regarding the device. Examples of the data regarding the device were discussed with reference to one or more of FIGS. 75-79 .
- step 953 the analysis system determines whether the device (e.g., hardware and/or software) is cataloged (e.g., is part of the system, but is not included in the issue detection information for this particular evaluation). If yes, the method continues at step 952 where the analysis system adds the device to the issue detection information and the method continues at step 941.
- If not, the method continues at step 954 where the analysis system verifies the device as being part of the system and then catalogs it as part of the system.
- the method continues at step 955 where the analysis system identifies vendor information regarding the device.
- the method continues at step 956 where the analysis system tags the data regarding the device with the vendor information. The method repeats at step 941 .
- FIG. 129 is a logic diagram of a further example of an analysis system determining an issue detection rating for a system, or portion thereof, and in particular to generating data gathering criteria (or parameters).
- the method begins at step 960 where the analysis system determines whether the current analysis is for the entire system or a portion thereof. If the analysis is for the entire system, the method continues at step 962 where the analysis system prepares to analyze the entire system. If the analysis is for a portion of the system, the method continues at step 961 where the analysis system determines the particular section (e.g., identifies one or more system elements).
- step 963 the analysis system determines whether the current analysis has identified evaluation criteria (e.g., guidelines, system requirements, system design, system build, and/or resulting system). If yes, the method continues at step 964 where the analysis system determines the specific evaluation criteria. If not, the method continues at step 965 where the analysis system determines a set of default evaluation criteria (e.g., one or more of the evaluation criteria).
- step 966 the analysis system determines whether the current analysis has identified an evaluation mode (e.g., assets, system functions, and/or security functions). If yes, the method continues at step 966 where the analysis system determines the specific evaluation mode(s). If not, the method continues at step 967 where the analysis system determines a set of default evaluation modes (e.g., one or more of the evaluation modes).
- step 968 the analysis system determines whether the current analysis has identified an evaluation perspective (e.g., understanding, implementation, and/or operation). If yes, the method continues at step 969 where the analysis system determines the specific evaluation perspective(s). If not, the method continues at step 970 where the analysis system determines a set of default evaluation perspectives (e.g., one or more of the evaluation perspectives).
- step 971 the analysis system determines whether the current analysis has identified an evaluation viewpoint (e.g., disclosed, discovered, desired, and/or self-analysis). If yes, the method continues at step 972 where the analysis system determines the specific evaluation viewpoint(s). If not, the method continues at step 973 where the analysis system determines a set of default evaluation viewpoints (e.g., one or more of the evaluation viewpoints).
- the method continues at step 974 where the analysis system determines whether the current analysis has identified an evaluation category, and/or sub-categories (e.g., categories include identify, protect, detect, response, and/or recover). If yes, the method continues at step 975 where the analysis system determines one or more specific evaluation categories and/or sub-categories. If not, the method continues at step 977 where the analysis system determines a set of default evaluation categories and/or sub-categories (e.g., one or more of the evaluation categories and/or sub-categories). The method continues at step 976 where the analysis system determines the data gathering criteria (or parameters) based on the determination made in the previous steps.
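- As a sketch of the default-or-specified selection in FIG. 129 (the default sets shown are illustrative assumptions; the source states only that a set of defaults includes one or more of each list), the data gathering criteria may be resolved as:

    # Hypothetical resolution of specified selections versus defaults.
    def resolve(selection, defaults):
        return selection if selection else defaults

    data_gathering_criteria = {
        "evaluation criteria": resolve(None, ["guidelines", "system requirements",
                                              "system design", "system build",
                                              "resulting system"]),
        "evaluation mode": resolve(["security functions"],
                                   ["assets", "system functions", "security functions"]),
        "evaluation perspective": resolve(None, ["understanding", "implementation", "operation"]),
        "evaluation viewpoint": resolve(["disclosed", "discovered"],
                                        ["disclosed", "discovered", "desired", "self-analysis"]),
        "evaluation category": resolve(["detect"],
                                       ["identify", "protect", "detect", "response", "recover"]),
    }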
- FIG. 130 is a logic diagram of another example of an analysis system determining an issue detection rating for a system, or portion thereof.
- the method begins at step 1000 where the analysis system selects a system, or portion thereof, to evaluate anomaly and event awareness of the system, or portion thereof.
- the analysis system selects one or more system elements, one or more system criteria, and/or one or more system modes for the system, or portion thereof, to be evaluated.
- the analysis system selects one or more evaluation perspectives and/or one or more evaluation viewpoints.
- the analysis system may further select one or more sub-sub categories of the anomaly and event awareness sub-category of the issue detection (e.g., detect) category.
- the analysis system selects the entire system, selects a division of an organization operating the system, selects a department of a division, selects a group of a department, or selects a sub-group of a group.
- the analysis system selects one or more physical assets and/or one or more conceptual assets.
- the method continues at step 1001 where the analysis system obtains anomaly and event awareness information regarding the system, or portion thereof.
- the anomaly and event awareness information includes information representative of an organization's understanding of the system, or portion thereof, with respect to establishment and management of network operations and expected data flows, detected event analysis, event data aggregation and correlation, event impact determination, and incident alert threshold establishment.
- the analysis system obtains the anomaly and event awareness information (e.g., disclosed data from the system) by receiving it from a system admin computing entity.
- the analysis system obtains the anomaly and event awareness information by gathering it from one or more computing entities of the system.
- step 1002 the analysis system engages with the system, or portion thereof, to produce system anomaly and event awareness data (e.g., discovered data) regarding the system, or portion thereof, with respect to the establishment and management of network operations and expected data flows, the detected event analysis, the event data aggregation and correlation, the event impact determination, and the incident alert threshold establishment.
- the analysis system calculates an anomaly and event awareness rating regarding the anomaly and event awareness of the system, or portion thereof, based on the anomaly and event awareness information, the system anomaly and event awareness data, and anomaly and event awareness processes, anomaly and event awareness policies, anomaly and event awareness documentation, and/or anomaly and event awareness automation.
- the anomaly and event awareness rating may be indicative of a variety of factors of the system, or portion thereof.
- the anomaly and event awareness rating indicates how well the anomaly and event awareness information reflects an understanding of the anomaly and event awareness with respect to assets of the system, or portion thereof.
- the anomaly and event awareness rating indicates how well the anomaly and event awareness information reflects an understanding of the anomaly and event awareness with respect to system functions of the system, or portion thereof.
- the anomaly and event awareness rating indicates how well the anomaly and event awareness information reflects an understanding of the anomaly and event awareness with respect to the security functions of the system, or portion thereof.
- the anomaly and event awareness rating indicates how well the anomaly and event awareness information reflects intended implementation of the anomaly and event awareness with respect to the assets of the system, or portion thereof.
- the anomaly and event awareness rating indicates how well the anomaly and event awareness information reflects intended operation of the anomaly and event awareness with respect to the assets of the system, or portion thereof.
- the anomaly and event awareness rating indicates how well the anomaly and event awareness information reflects intended implementation of the anomaly and event awareness with respect to the system functions of the system, or portion thereof.
- the anomaly and event awareness rating indicates how well the anomaly and event awareness information reflects intended operation of the anomaly and event awareness with respect to the system functions of the system, or portion thereof.
- the anomaly and event awareness rating indicates how well the anomaly and event awareness information reflects intended implementation of the anomaly and event awareness with respect to the security functions of the system, or portion thereof.
- the anomaly and event awareness rating indicates how well the anomaly and event awareness information reflects intended operation of the anomaly and event awareness with respect to the security functions of the system, or portion thereof.
- the method continues at step 1004 where the analysis system gathers desired system anomaly and event awareness data from one or more system proficiency resources.
- the method continues at step 1005 where the analysis system calculates a second anomaly and event awareness rating regarding a desired level of anomaly and event awareness of the system, or portion thereof, based on the anomaly and event awareness information, the system anomaly and event awareness data, the desired anomaly and event awareness data, and the anomaly and event awareness processes, the anomaly and event awareness policies, the anomaly and event awareness documentation, and/or the anomaly and event awareness automation.
- the second anomaly and event awareness rating is regarding a comparison of desired data with the disclosed data and/or discovered data.
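- As a minimal sketch of the desired-level comparison in steps 1004-1005 (the scoring rule is an assumption; the source states only that desired data is compared with disclosed and/or discovered data), a second rating may be derived as the fraction of desired controls actually observed:

    # Hypothetical desired-versus-observed comparison for anomaly and event awareness.
    def desired_level_rating(observed_controls, desired_controls):
        if not desired_controls:
            return 100.0
        present = sum(1 for control in desired_controls if control in observed_controls)
        return 100.0 * present / len(desired_controls)

    second_rating = desired_level_rating(
        observed_controls={"event data aggregation", "incident alert thresholds"},
        desired_controls={"event data aggregation", "event correlation",
                          "incident alert thresholds", "expected data flow baselines"},
    )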
- FIG. 131 is a logic diagram of an example of an analysis system determining an anomaly and event awareness evaluation rating for a system, or portion thereof.
- the method begins at step 1006 where the analysis system determines a system aspect (see FIG. 69 ) of a system for an anomaly and event awareness evaluation.
- An anomaly and event awareness evaluation includes evaluating the system's establishment and management of network operations and expected data flows, detected event analysis, event data aggregation and correlation, event impact determination, and incident alert threshold establishment.
- An evaluation perspective is an understanding perspective, an implementation perspective, an operation perspective, or a self-analysis perspective.
- An understanding perspective is with regard to how well the assets, system functions, and/or security functions with respect to anomaly and event awareness are understood.
- An implementation perspective is with regard to how well the assets, system functions, and/or security functions with respect to anomaly and event awareness are implemented.
- An operation perspective is with regard to how well the assets, system functions, and/or security functions with respect to anomaly and event awareness operate.
- a self-analysis (or self-evaluation) perspective is with regard to how well the system self-evaluates the understanding, implementation, and/or operation of assets, system functions, and/or security functions with respect to anomaly and event awareness.
- the method continues at step 1008 where the analysis system determines at least one evaluation viewpoint for use in performing the anomaly and event awareness evaluation on the system aspect.
- An evaluation viewpoint is a disclosed viewpoint, a discovered viewpoint, or a desired viewpoint.
- a disclosed viewpoint is with regard to analyzing the system aspect based on the disclosed data.
- a discovered viewpoint is with regard to analyzing the system aspect based on the discovered data.
- a desired viewpoint is with regard to analyzing the system aspect based on the desired data.
- step 1009 the analysis system obtains anomaly and event awareness data regarding the system aspect in accordance with the at least one evaluation perspective and the at least one evaluation viewpoint.
- Anomaly and event awareness data is data obtained that is regarding the system aspect.
- step 1010 the analysis system calculates an anomaly and event awareness rating as a measure of system anomaly and event awareness maturity for the system aspect based on the anomaly and event awareness data, the at least one evaluation perspective, the at least one evaluation viewpoint, and at least one evaluation rating metric.
- An evaluation rating metric is a process rating metric, a policy rating metric, a procedure rating metric, a certification rating, a documentation rating metric, or an automation rating metric.
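- As an organizing sketch only (the field names are assumptions used to group the quantities that steps 1006-1010 enumerate, not a data structure from the source), the inputs to the rating calculation may be captured as a simple record:

    # Hypothetical record of the selections feeding the anomaly and event awareness rating.
    from dataclasses import dataclass
    from typing import Sequence

    @dataclass
    class AnomalyEventAwarenessEvaluation:
        system_aspect: str             # e.g., a division, department, group, or sub-group
        perspectives: Sequence[str]    # understanding, implementation, operation, self-analysis
        viewpoints: Sequence[str]      # disclosed, discovered, desired
        rating_metrics: Sequence[str]  # process, policy, procedure, certification, documentation, automation

    evaluation = AnomalyEventAwarenessEvaluation(
        system_aspect="engineering department",
        perspectives=["understanding", "implementation"],
        viewpoints=["disclosed", "discovered"],
        rating_metrics=["process", "documentation"],
    )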
- FIG. 132 is a logic diagram of another example of an analysis system determining an issue detection rating for a system, or portion thereof.
- the method begins at step 1020 where the analysis system selects a system, or portion thereof, to evaluate continuous security monitoring of the system, or portion thereof. For example, the analysis system selects one or more system elements, one or more system criteria, and/or one or more system modes for the system, or portion thereof, to be evaluated. As another example, the analysis system selects one or more evaluation perspectives and/or one or more evaluation viewpoints. The analysis system may further select one or more sub-sub categories of the continuous security monitoring sub-category of the issue detection (e.g., detect) category.
- the analysis system selects the entire system, selects a division of an organization operating the system, selects a department of a division, selects a group of a department, or selects a sub-group of a group.
- the analysis system selects one or more physical assets and/or one or more conceptual assets.
- the method continues at step 1021 where the analysis system obtains continuous security monitoring information regarding the system, or portion thereof.
- the continuous security monitoring information includes information representative of an organization's understanding of the system, or portion thereof, with respect to network monitoring, physical environment monitoring, personnel activity monitoring, malicious code detection, unauthorized mobile code detection, external service provider activity monitoring, unauthorized personnel monitoring, unauthorized connection monitoring, unauthorized device monitoring, unauthorized software monitoring, and vulnerability scanning.
- the analysis system obtains the continuous security monitoring information (e.g., disclosed data from the system) by receiving it from a system admin computing entity. In another example, the analysis system obtains the continuous security monitoring information by gathering it from one or more computing entities of the system.
- step 1022 the analysis system engages with the system, or portion thereof, to produce system continuous security monitoring data (e.g., discovered data) regarding the system, or portion thereof, with respect to the network monitoring, the physical environment monitoring, the personnel activity monitoring, the malicious code detection, the unauthorized mobile code detection, the external service provider activity monitoring, the unauthorized personnel monitoring, the unauthorized connection monitoring, the unauthorized device monitoring, the unauthorized software monitoring, and the vulnerability scanning.
- step 1023 the analysis system calculates a continuous security monitoring rating regarding the continuous security monitoring of the system, or portion thereof, based on the continuous security monitoring information, the system continuous security monitoring data, and continuous security monitoring processes, continuous security monitoring policies, continuous security monitoring documentation, and/or continuous security monitoring automation.
- the continuous security monitoring rating may be indicative of a variety of factors of the system, or portion thereof.
- the continuous security monitoring rating indicates how well the continuous security monitoring information reflects an understanding of the continuous security monitoring with respect to assets of the system, or portion thereof.
- the continuous security monitoring rating indicates how well the continuous security monitoring information reflects an understanding of the continuous security monitoring with respect to system functions of the system, or portion thereof.
- the continuous security monitoring rating indicates how well the continuous security monitoring information reflects an understanding of the continuous security monitoring with respect to the security functions of the system, or portion thereof. As another example, the continuous security monitoring rating indicates how well the continuous security monitoring information reflects intended implementation of the continuous security monitoring with respect to the assets of the system, or portion thereof. As another example, the continuous security monitoring rating indicates how well the continuous security monitoring information reflects intended operation of the continuous security monitoring with respect to the assets of the system, or portion thereof.
- the continuous security monitoring rating indicates how well the continuous security monitoring information reflects intended implementation of the continuous security monitoring with respect to the system functions of the system, or portion thereof.
- the continuous security monitoring rating indicates how well the continuous security monitoring information reflects intended operation of the continuous security monitoring with respect to the system functions of the system, or portion thereof.
- the continuous security monitoring rating indicates how well the continuous security monitoring information reflects intended implementation of the continuous security monitoring with respect to the security functions of the system, or portion thereof.
- the continuous security monitoring rating indicates how well the continuous security monitoring information reflects intended operation of the continuous security monitoring with respect to the security functions of the system, or portion thereof.
- the method continues at step 1024 where the analysis system gathers desired continuous security monitoring data from one or more system proficiency resources.
- the method continues at step 1025 where the analysis system calculates a second continuous security monitoring rating regarding a desired level of continuous security monitoring of the system, or portion thereof, based on the continuous security monitoring information, the system continuous security monitoring data, the desired continuous security monitoring data, and the continuous security monitoring processes, the continuous security monitoring policies, the continuous security monitoring documentation, and/or the continuous security monitoring automation.
- the second continuous security monitoring rating is regarding a comparison of desired data with the disclosed data and/or discovered data.
- FIG. 133 is a logic diagram of an example of an analysis system determining a continuous security monitoring evaluation rating for a system, or portion thereof.
- the method begins at step 1026 where the analysis system determines a system aspect (see FIG. 69 ) of a system for a continuous security monitoring evaluation.
- a continuous security monitoring evaluation includes evaluating the system's network monitoring, physical environment monitoring, personnel activity monitoring, malicious code detection, unauthorized mobile code detection, external service provider activity monitoring, unauthorized personnel monitoring, unauthorized connection monitoring, unauthorized device monitoring, unauthorized software monitoring, and vulnerability scanning.
- the method continues at step 1027 where the analysis system determines at least one evaluation perspective for use in performing the continuous security monitoring evaluation on the system aspect.
- An evaluation perspective is an understanding perspective, an implementation perspective, an operation perspective, or a self-analysis perspective.
- An understanding perspective is with regard to how well the assets, system functions, and/or security functions with respect to continuous security monitoring are understood.
- An implementation perspective is with regard to how well the assets, system functions, and/or security functions with respect to continuous security monitoring are implemented.
- An operation perspective is with regard to how well the assets, system functions, and/or security functions with respect to continuous security monitoring operate.
- a self-analysis (or self-evaluation) perspective is with regard to how well the system self-evaluates the understanding, implementation, and/or operation of assets, system functions, and/or security functions with respect to continuous security monitoring.
- the method continues at step 1028 where the analysis system determines at least one evaluation viewpoint for use in performing the continuous security monitoring evaluation on the system aspect.
- An evaluation viewpoint is a disclosed viewpoint, a discovered viewpoint, or a desired viewpoint.
- a disclosed viewpoint is with regard to analyzing the system aspect based on the disclosed data.
- a discovered viewpoint is with regard to analyzing the system aspect based on the discovered data.
- a desired viewpoint is with regard to analyzing the system aspect based on the desired data.
- the method continues at step 1029 where the analysis system obtains continuous security monitoring data regarding the system aspect in accordance with the at least one evaluation perspective and the at least one evaluation viewpoint.
- Continuous security monitoring data is data obtained regarding the system aspect.
- the method continues at step 1030 where the analysis system calculates a continuous security monitoring rating as a measure of system continuous security monitoring maturity for the system aspect based on the continuous security monitoring data, the at least one evaluation perspective, the at least one evaluation viewpoint, and at least one evaluation rating metric.
- An evaluation rating metric is a process rating metric, a policy rating metric, a procedure rating metric, a certification rating metric, a documentation rating metric, or an automation rating metric.
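- One way such a calculation could be organized, purely as a sketch, is a weighted aggregation of per-metric scores (process, policy, procedure, certification, documentation, automation) for the selected evaluation perspective and viewpoint. The weights and example scores below are hypothetical and not specified by the present disclosure.

```python
# Sketch of aggregating evaluation rating metrics into a single maturity rating.
# The metric names come from the description above; the weights are assumptions.

METRIC_WEIGHTS = {
    "process": 0.25,
    "policy": 0.20,
    "procedure": 0.15,
    "certification": 0.10,
    "documentation": 0.15,
    "automation": 0.15,
}

def maturity_rating(metric_scores: dict) -> float:
    """Weighted combination of per-metric scores (each 0-100) into one rating."""
    total_weight = sum(METRIC_WEIGHTS[m] for m in metric_scores if m in METRIC_WEIGHTS)
    if total_weight == 0:
        return 0.0
    weighted = sum(METRIC_WEIGHTS[m] * s for m, s in metric_scores.items() if m in METRIC_WEIGHTS)
    return weighted / total_weight

# Example with hypothetical per-metric scores for one perspective/viewpoint.
print(round(maturity_rating({"process": 80, "policy": 70, "documentation": 60, "automation": 50}), 1))
```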
- FIG. 134 is a logic diagram of another example of an analysis system determining an issue detection rating for a system, or portion thereof.
- the method begins at step 1040 where the analysis system selects a system, or portion thereof, to evaluate the issue detection processes analysis of the system, or portion thereof. For example, the analysis system selects one or more system elements, one or more system criteria, and/or one or more system modes for the system, or portion thereof, to be evaluated. As another example, the analysis system selects one or more evaluation perspectives and/or one or more evaluation viewpoints. The analysis system may further select one or more sub-sub-categories of the issue detection processes analysis sub-category of the issue detection (e.g., detect) category.
- the analysis system selects the entire system, selects a division of an organization operating the system, selects a department of a division, selects a group of a department, or selects a sub-group of a group.
- the analysis system selects one or more physical assets and/or one or more conceptual assets.
- the method continues at step 1041 where the analysis system obtains issue detection processes analysis information regarding the system, or portion thereof.
- the issue detection processes analysis information includes information representative of an organization's understanding of the system, or portion thereof, with respect to defined issue detection roles, defined issue detection responsibilities, issue detection compliance verification, testing of issue detection processes, event detection communication, and continuous improvement of issue detection processes.
- the analysis system obtains the issue detection processes analysis information (e.g., disclosed data from the system) by receiving it from a system admin computing entity.
- the analysis system obtains the issue detection processes analysis information by gathering it from one or more computing entities of the system.
- the method continues at step 1042 where the analysis system engages with the system, or portion thereof, to produce system issue detection processes analysis data (e.g., discovered data) regarding the system, or portion thereof, with respect to the defined issue detection roles, the defined issue detection responsibilities, the issue detection compliance verification, the testing of issue detection processes, the event detection communication, and the continuous improvement of issue detection processes.
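- As a hypothetical sketch of this engagement step, the loop below polls a set of computing entities through an abstract collector callback and groups the returned records by issue detection topic. The collector, entity identifiers, and record format are all assumptions and stand in for whatever discovery mechanism an implementation actually uses.

```python
# Hypothetical sketch: gather discovered issue detection processes analysis data
# from computing entities of the system. `collect` stands in for an actual
# discovery mechanism (agent query, log pull, etc.) and is an assumption.
from collections import defaultdict
from typing import Callable, Dict, List

TOPICS = [
    "defined issue detection roles",
    "defined issue detection responsibilities",
    "issue detection compliance verification",
    "testing of issue detection processes",
    "event detection communication",
    "continuous improvement of issue detection processes",
]

def gather_discovered_data(entity_ids: List[str],
                           collect: Callable[[str], List[dict]]) -> Dict[str, List[dict]]:
    """Group records returned per entity by issue detection topic."""
    by_topic: Dict[str, List[dict]] = defaultdict(list)
    for entity_id in entity_ids:
        for record in collect(entity_id):
            if record.get("topic") in TOPICS:
                by_topic[record["topic"]].append(record)
    return by_topic

# Example with a stub collector returning canned records.
stub = lambda eid: [{"topic": "event detection communication", "entity": eid, "detail": "alerting configured"}]
print(gather_discovered_data(["server-1", "server-2"], stub)["event detection communication"])
```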
- the method continues at step 1043 where the analysis system calculates an issue detection processes analysis rating regarding the issue detection processes analysis of the system, or portion thereof, based on the issue detection processes analysis information, the system issue detection processes analysis data, and issue detection processes analysis processes, issue detection processes analysis policies, issue detection processes analysis documentation, and/or issue detection processes analysis automation.
- the issue detection processes analysis rating may be indicative of a variety of factors of the system, or portion thereof.
- the issue detection processes analysis rating indicates how well the issue detection processes analysis information reflects an understanding of the issue detection processes analysis with respect to assets of the system, or portion thereof.
- the issue detection processes analysis rating indicates how well the issue detection processes analysis information reflects an understanding of the issue detection processes analysis with respect to the system functions of the system, or portion thereof.
- the issue detection processes analysis rating indicates how well the issue detection processes analysis information reflects an understanding of the issue detection processes analysis with respect to the security functions of the system, or portion thereof.
- the issue detection processes analysis rating indicates how well the issue detection processes analysis information reflects intended implementation of the issue detection processes analysis with respect to the assets of the system, or portion thereof.
- the issue detection processes analysis rating indicates how well the issue detection processes analysis information reflects intended operation of the issue detection processes analysis with respect to the assets of the system, or portion thereof.
- the issue detection processes analysis rating indicates how well the issue detection processes analysis information reflects intended implementation of the issue detection processes analysis with respect to the system functions of the system, or portion thereof.
- the issue detection processes analysis rating indicates how well the issue detection processes analysis information reflects intended operation of the issue detection processes analysis with respect to the system functions of the system, or portion thereof.
- the issue detection processes analysis rating indicates how well the issue detection processes analysis information reflects intended implementation of the issue detection processes analysis with respect to the security functions of the system, or portion thereof.
- the issue detection processes analysis rating indicates how well the issue detection processes analysis information reflects intended operation of the issue detection processes analysis with respect to the security functions of the system, or portion thereof.
- the method continues at step 1044 where the analysis system gathers desired system issue detection processes analysis data from one or more system proficiency resources.
- the method continues at step 1045 where the analysis system calculates a second issue detection processes analysis rating regarding a desired level of issue detection processes analysis of the system, or portion thereof, based on the issue detection processes analysis information, the system issue detection processes analysis data, the desired issue detection processes analysis data, and the issue detection processes analysis processes, the issue detection processes analysis policies, the issue detection processes analysis documentation, and/or the issue detection processes analysis automation.
- the second issue detection processes analysis rating is regarding a comparison of desired data with the disclosed data and/or discovered data.
- FIG. 135 is a logic diagram of an example of an analysis system determining an issue detection processes analysis rating for a system, or portion thereof.
- the method begins at step 1046 where the analysis system determines a system aspect (see FIG. 69 ) of a system for an issue detection processes analysis evaluation.
- An issue detection processes analysis evaluation includes evaluating the system's defined issue detection roles, defined issue detection responsibilities, issue detection compliance verification, testing of issue detection processes, event detection communication, and continuous improvement of issue detection processes.
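- Purely as an illustrative sketch, the per-topic results of such an evaluation could be organized as a matrix keyed by evaluation perspective and evaluation viewpoint; the perspectives and viewpoints are the ones defined in the paragraphs that follow, while the nesting, the helper function, and the example value are assumptions.

```python
# Illustrative organization of issue detection processes analysis ratings as a
# (perspective, viewpoint) matrix per topic; structure and values are assumptions.
PERSPECTIVES = ["understanding", "implementation", "operation", "self-analysis"]
VIEWPOINTS = ["disclosed", "discovered", "desired"]

def empty_rating_matrix(topics):
    """One rating slot (initialized to None) per topic/perspective/viewpoint."""
    return {t: {p: {v: None for v in VIEWPOINTS} for p in PERSPECTIVES} for t in topics}

matrix = empty_rating_matrix(["defined issue detection roles",
                              "testing of issue detection processes"])
matrix["defined issue detection roles"]["understanding"]["disclosed"] = 75  # hypothetical score
print(matrix["defined issue detection roles"]["understanding"])
```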
- the method continues at step 1047 where the analysis system determines at least one evaluation perspective for use in performing the issue detection processes analysis evaluation on the system aspect.
- An evaluation perspective is an understanding perspective, an implementation perspective, an operation perspective, or a self-analysis perspective.
- An understanding perspective is with regard to how well the assets, system functions, and/or security functions with respect to issue detection processes analysis are understood.
- An implementation perspective is with regard to how well the assets, system functions, and/or security functions with respect to issue detection processes analysis are implemented.
- An operation perspective is with regard to how well the assets, system functions, and/or security functions with respect to issue detection processes analysis operate.
- a self-analysis (or self-evaluation) perspective is with regard to how well the system self-evaluates the understanding, implementation, and/or operation of assets, system functions, and/or security functions with respect to issue detection processes analysis.
- the method continues at step 1048 where the analysis system determines at least one evaluation viewpoint for use in performing the issue detection processes analysis evaluation on the system aspect.
- An evaluation viewpoint is a disclosed viewpoint, a discovered viewpoint, or a desired viewpoint.
- a disclosed viewpoint is with regard to analyzing the system aspect based on the disclosed data.
- a discovered viewpoint is with regard to analyzing the system aspect based on the discovered data.
- a desired viewpoint is with regard to analyzing the system aspect based on the desired data.
- the method continues at step 1049 where the analysis system obtains issue detection processes analysis data regarding the system aspect in accordance with the at least one evaluation perspective and the at least one evaluation viewpoint. Issue detection processes analysis data is data obtained regarding the system aspect.
- the method continues at step 1050 where the analysis system calculates an issue detection processes analysis rating as a measure of system issue detection processes analysis maturity for the system aspect based on the issue detection processes analysis data, the at least one evaluation perspective, the at least one evaluation viewpoint, and at least one evaluation rating metric.
- An evaluation rating metric is a process rating metric, a policy rating metric, a procedure rating metric, a certification rating metric, a documentation rating metric, or an automation rating metric.
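- The five steps of FIG. 135 could be composed as in the following minimal sketch, in which each step is a placeholder function; the selection logic, the data source, and the averaging inside the placeholders are assumptions used only to make the flow concrete and do not represent a required implementation.

```python
# Minimal end-to-end sketch of the flow of FIG. 135 (steps 1046-1050).
# Every helper below is a simplified stand-in; the disclosure does not
# prescribe these signatures or the averaging used at the end.

def determine_system_aspect():                      # step 1046
    return {"element": "engineering department", "criteria": "system build", "mode": "security functions"}

def determine_perspectives():                       # step 1047
    return ["understanding", "implementation"]

def determine_viewpoints():                         # step 1048
    return ["disclosed", "discovered"]

def obtain_data(aspect, perspectives, viewpoints):  # step 1049
    # Hypothetical per-(perspective, viewpoint) scores already reduced to numbers.
    return {("understanding", "disclosed"): 80, ("understanding", "discovered"): 70,
            ("implementation", "disclosed"): 65, ("implementation", "discovered"): 60}

def calculate_rating(data):                         # step 1050
    return sum(data.values()) / len(data)

aspect = determine_system_aspect()
data = obtain_data(aspect, determine_perspectives(), determine_viewpoints())
print(round(calculate_rating(data), 1))  # issue detection processes analysis rating
```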
- the terms “substantially” and “approximately” provide an industry-accepted tolerance for their corresponding terms and/or relativity between items.
- for some industries, an industry-accepted tolerance is less than one percent while, for other industries, the industry-accepted tolerance is 10 percent or more.
- Other examples of industry-accepted tolerance range from less than one percent to fifty percent.
- Industry-accepted tolerances correspond to, but are not limited to, component values, integrated circuit process variations, temperature variations, rise and fall times, thermal noise, dimensions, signaling errors, dropped packets, temperatures, pressures, material compositions, and/or performance metrics.
- tolerance variances of accepted tolerances may be more or less than a percentage level (e.g., dimension tolerance of less than +/−1%). Some relativity between items may range from a difference of less than a percentage level to a few percent. Other relativity between items may range from a difference of a few percent to magnitude of differences.
- the term(s) “configured to”, “operably coupled to”, “coupled to”, and/or “coupling” includes direct coupling between items and/or indirect coupling between items via an intervening item (e.g., an item includes, but is not limited to, a component, an element, a circuit, and/or a module) where, for an example of indirect coupling, the intervening item does not modify the information of a signal but may adjust its current level, voltage level, and/or power level.
- as may further be used herein, inferred coupling (i.e., where one element is coupled to another element by inference) includes direct and indirect coupling between two items in the same manner as “coupled to”.
- the term “configured to”, “operable to”, “coupled to”, or “operably coupled to” indicates that an item includes one or more of power connections, input(s), output(s), etc., to perform, when activated, one or more of its corresponding functions and may further include inferred coupling to one or more other items.
- the term “associated with” includes direct and/or indirect coupling of separate items and/or one item being embedded within another item.
- the term “compares favorably” indicates that a comparison between two or more items, signals, etc., provides a desired relationship. For example, when the desired relationship is that signal 1 has a greater magnitude than signal 2, a favorable comparison may be achieved when the magnitude of signal 1 is greater than that of signal 2 or when the magnitude of signal 2 is less than that of signal 1.
- the term “compares unfavorably” indicates that a comparison between two or more items, signals, etc., fails to provide the desired relationship.
- one or more claims may include, in a specific form of this generic form, the phrase “at least one of a, b, and c” or of this generic form “at least one of a, b, or c”, with more or less elements than “a”, “b”, and “c”.
- the phrases are to be interpreted identically.
- “at least one of a, b, and c” is equivalent to “at least one of a, b, or c” and shall mean a, b, and/or c.
- it means: “a” only, “b” only, “c” only, “a” and “b”, “a” and “c”, “b” and “c”, and/or “a”, “b”, and “c”.
- a processing module may be a single processing device or a plurality of processing devices.
- a processing device may be a microprocessor, micro-controller, digital signal processor, microcomputer, central processing unit, field programmable gate array, programmable logic device, state machine, logic circuitry, analog circuitry, digital circuitry, and/or any device that manipulates signals (analog and/or digital) based on hard coding of the circuitry and/or operational instructions.
- the processing module, module, processing circuit, processing circuitry, and/or processing unit may be, or further include, memory and/or an integrated memory element, which may be a single memory device, a plurality of memory devices, and/or embedded circuitry of another processing module, module, processing circuit, processing circuitry, and/or processing unit.
- a memory device may be a read-only memory, random access memory, volatile memory, non-volatile memory, static memory, dynamic memory, flash memory, cache memory, and/or any device that stores digital information.
- if the processing module, module, processing circuit, processing circuitry, and/or processing unit includes more than one processing device, the processing devices may be centrally located (e.g., directly coupled together via a wired and/or wireless bus structure) or may be distributedly located (e.g., cloud computing via indirect coupling via a local area network and/or a wide area network).
- if the processing module, module, processing circuit, processing circuitry and/or processing unit implements one or more of its functions via a state machine, analog circuitry, digital circuitry, and/or logic circuitry, the memory and/or memory element storing the corresponding operational instructions may be embedded within, or external to, the circuitry comprising the state machine, analog circuitry, digital circuitry, and/or logic circuitry.
- the memory element may store, and the processing module, module, processing circuit, processing circuitry and/or processing unit executes, hard coded and/or operational instructions corresponding to at least some of the steps and/or functions illustrated in one or more of the Figures.
- Such a memory device or memory element can be included in an article of manufacture.
- a flow diagram may include a “start” and/or “continue” indication.
- the “start” and “continue” indications reflect that the steps presented can optionally be incorporated in or otherwise used in conjunction with one or more other routines.
- a flow diagram may include an “end” and/or “continue” indication.
- the “end” and/or “continue” indications reflect that the steps presented can end as described and shown or optionally be incorporated in or otherwise used in conjunction with one or more other routines.
- the “start” indication indicates the beginning of the first step presented and may be preceded by other activities not specifically shown.
- the “continue” indication reflects that the steps presented may be performed multiple times and/or may be succeeded by other activities not specifically shown.
- while a flow diagram indicates a particular ordering of steps, other orderings are likewise possible provided that the principles of causality are maintained.
- the one or more embodiments are used herein to illustrate one or more aspects, one or more features, one or more concepts, and/or one or more examples.
- a physical embodiment of an apparatus, an article of manufacture, a machine, and/or of a process may include one or more of the aspects, features, concepts, examples, etc. described with reference to one or more of the embodiments discussed herein.
- the embodiments may incorporate the same or similarly named functions, steps, modules, etc. that may use the same or different reference numbers and, as such, the functions, steps, modules, etc. may be the same or similar functions, steps, modules, etc. or different ones.
- while transistors in the above described figure(s) is/are shown as field effect transistors (FETs), as one of ordinary skill in the art will appreciate, the transistors may be implemented using any type of transistor structure including, but not limited to, bipolar, metal oxide semiconductor field effect transistors (MOSFET), N-well transistors, P-well transistors, enhancement mode, depletion mode, and zero voltage threshold (VT) transistors.
- signals to, from, and/or between elements in a figure of any of the figures presented herein may be analog or digital, continuous time or discrete time, and single-ended or differential.
- while a signal path is shown as a single-ended path, it also represents a differential signal path.
- while a signal path is shown as a differential path, it also represents a single-ended signal path.
- the term “module” is used in the description of one or more of the embodiments.
- a module implements one or more functions via a device such as a processor or other processing device or other hardware that may include or operate in association with a memory that stores operational instructions.
- a module may operate independently and/or in conjunction with software and/or firmware.
- a module may contain one or more sub-modules, each of which may be one or more modules.
- a computer readable memory includes one or more memory elements.
- a memory element may be a separate memory device, multiple memory devices, or a set of memory locations within a memory device.
- Such a memory device may be a read-only memory, random access memory, volatile memory, non-volatile memory, static memory, dynamic memory, flash memory, cache memory, and/or any device that stores digital information.
- the memory device may be in the form of a solid-state memory, a hard drive memory, cloud memory, thumb drive, server memory, computing device memory, and/or other physical medium for storing digital information.
Description
- The present U.S. Utility Patent Application claims priority pursuant to 35 U.S.C. § 120 as a continuation of U.S. Utility application Ser. No. 17/247,701, entitled “GENERATION OF AN ISSUE DETECTION EVALUATION REGARDING A SYSTEM ASPECT OF A SYSTEM,” filed Dec. 21, 2020, which claims priority pursuant to 35 U.S.C. § 119(e) to U.S. Provisional Application No. 62/992,661, entitled “SYSTEM ANALYSIS SYSTEM,” filed Mar. 20, 2020, both of which are hereby incorporated herein by reference in their entirety and made part of the present U.S. Utility Patent Application for all purposes.
- This disclosure relates to computer systems and more particularly to evaluation of a computer system.
- The structure and operation of the Internet and other publicly available networks are well known and support computer systems (systems) of multitudes of companies, organizations, and individuals. A typical system includes networking equipment, end point devices such as computer servers, user computers, storage devices, printing devices, security devices, and point of service devices, among other types of devices. The networking equipment includes routers, switches, edge devices, wireless access points, and other types of communication devices that intercouple in a wired or wireless fashion. The networking equipment facilitates the creation of one or more networks that are tasked to service all or a portion of a company's communication needs, e.g., Wide Area Networks, Local Area Networks, Virtual Private Networks, etc.
- Each device within a system includes hardware components and software components. Hardware components degrade over time and eventually are incapable of performing their intended functions. Software components must be updated regularly to ensure their proper functionality. Some software components are simply replaced by newer and better software even though they remain operational within a system.
- Many companies and larger organizations have their own Information Technology (IT) departments. Others outsource their IT needs to third party providers. The knowledge requirements for servicing a system typically outstrip the abilities of the IT department or third-party provider. Thus, hardware and software may not be functioning properly and can adversely affect the overall system.
- Cyber-attacks are initiated by individuals or entities with the bad intent of stealing sensitive information such as login/password information, stealing proprietary information such as trade secrets or important new technology, interfering with the operation of a system, and/or holding the system hostage until a ransom is paid, among other improper purposes. A single cyber-attack can make a large system inoperable and cost the system owner many millions of dollars to restore and remedy.
- The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.
-
FIG. 1 is a schematic block diagram of an embodiment of a networked environment that includes systems coupled to an analysis system in accordance with the present disclosure; -
FIGS. 2A-2D are schematic block diagrams of embodiments of a computing device in accordance with the present disclosure; -
FIGS. 3A-3E are schematic block diagrams of embodiments of a computing entity in accordance with the present disclosure; -
FIG. 4 is a schematic block diagram of another embodiment of a networked environment that includes a system coupled to an analysis system in accordance with the present disclosure; -
FIG. 5 is a schematic block diagram of another embodiment of a networked environment that includes a system coupled to an analysis system in accordance with the present disclosure; -
FIG. 6 is a schematic block diagram of another embodiment of a networked environment that includes a system coupled to an analysis system in accordance with the present disclosure; -
FIG. 7 is a schematic block diagram of another embodiment of a networked environment that includes a system coupled to an analysis system in accordance with the present disclosure; -
FIG. 8 is a schematic block diagram of another embodiment of a networked environment having a system that includes a plurality of system elements in accordance with the present disclosure; -
FIG. 9 is a schematic block diagram of an example of a system section of a system selected for evaluation in accordance with the present disclosure; -
FIG. 10 is a schematic block diagram of another example of a system section of a system selected for evaluation in accordance with the present disclosure; -
FIG. 11 is a schematic block diagram of an embodiment of a networked environment having a system that includes a plurality of system assets coupled to an analysis system in accordance with the present disclosure; -
FIG. 12 is a schematic block diagram of an embodiment of a system that includes a plurality of physical assets coupled to an analysis system in accordance with the present disclosure; -
FIG. 13 is a schematic block diagram of another embodiment of a networked environment having a system that includes a plurality of system assets coupled to an analysis system in accordance with the present disclosure; -
FIG. 14 is a schematic block diagram of another embodiment of a system that includes a plurality of physical assets coupled to an analysis system in accordance with the present disclosure; -
FIG. 15 is a schematic block diagram of another embodiment of a system that includes a plurality of physical assets coupled to an analysis system in accordance with the present disclosure; -
FIG. 16 is a schematic block diagram of another embodiment of a system that includes a plurality of physical assets in accordance with the present disclosure; -
FIG. 17 is a schematic block diagram of an embodiment of a user computing device in accordance with the present disclosure; -
FIG. 18 is a schematic block diagram of an embodiment of a server in accordance with the present disclosure; -
FIG. 19 is a schematic block diagram of another embodiment of a networked environment having a system that includes a plurality of system functions coupled to an analysis system in accordance with the present disclosure; -
FIG. 20 is a schematic block diagram of another embodiment of a system that includes divisions, departments, and groups in accordance with the present disclosure; -
FIG. 21 is a schematic block diagram of another embodiment of a system that includes divisions and departments, which include system elements in accordance with the present disclosure; -
FIG. 22 is a schematic block diagram of another embodiment of a division of a system having departments, which include system elements in accordance with the present disclosure; -
FIG. 23 is a schematic block diagram of another embodiment of a networked environment having a system that includes a plurality of security functions coupled to an analysis system in accordance with the present disclosure; -
FIG. 24 is a schematic block diagram of an embodiment an engineering department of a division that reports to a corporate department of a system in accordance with the present disclosure; -
FIG. 25 is a schematic block diagram of an example of an analysis system evaluating a system element under test of a system in accordance with the present disclosure; -
FIG. 26 is a schematic block diagram of another example of an analysis system evaluating a system element under test of a system in accordance with the present disclosure; -
FIG. 27 is a schematic block diagram of another example of an analysis system evaluating a system element under test of a system in accordance with the present disclosure; -
FIG. 28 is a schematic block diagram of another example of an analysis system evaluating a system element under test of a system in accordance with the present disclosure; -
FIG. 29 is a schematic block diagram of an example of the functioning of an analysis system evaluating a system element under test of a system in accordance with the present disclosure; -
FIG. 30 is a schematic block diagram of another example of the functioning of an analysis system evaluating a system element under test of a system in accordance with the present disclosure; -
FIG. 31 is a diagram of an example of evaluation options of an analysis system for evaluating a system element under test of a system in accordance with the present disclosure; -
FIG. 32 is a diagram of another example of evaluation options of an analysis system for evaluating a system element under test of a system in accordance with the present disclosure; -
FIG. 33 is a diagram of another example of evaluation options of an analysis system for evaluating a system element under test of a system in accordance with the present disclosure; -
FIG. 34 is a diagram of another example of evaluation options of an analysis system for evaluating a system element under test of a system in accordance with the present disclosure; -
FIG. 35 is a schematic block diagram of an embodiment of an analysis system coupled to a system in accordance with the present disclosure; -
FIG. 36 is a schematic block diagram of an embodiment of a portion of an analysis system coupled to a system in accordance with the present disclosure; -
FIG. 37 is a schematic block diagram of another embodiment of a portion of an analysis system coupled to a system in accordance with the present disclosure; -
FIG. 38 is a schematic block diagram of an embodiment of a data extraction module of an analysis system coupled to a system in accordance with the present disclosure; -
FIG. 39 is a schematic block diagram of another embodiment of an analysis system coupled to a system in accordance with the present disclosure; -
FIG. 40 is a schematic block diagram of another embodiment of an analysis system coupled to a system in accordance with the present disclosure; -
FIG. 41 is a schematic block diagram of an embodiment of a data analysis module of an analysis system in accordance with the present disclosure; -
FIG. 42 is a schematic block diagram of an embodiment of an analyze and score module of an analysis system in accordance with the present disclosure; -
FIG. 43 is a diagram of an example of system aspects, evaluation aspects, evaluation rating metrics, and analysis system output options of an analysis system for analyzing a section of a system in accordance with the present disclosure; -
FIG. 44 is a diagram of another example of system aspects, evaluation aspects, evaluation rating metrics, and analysis system output options of an analysis system for analyzing a section of a system in accordance with the present disclosure; -
FIG. 45 is a diagram of an example of an identification evaluation category, sub-categories, and sub-sub-categories of the evaluation aspects and in accordance with the present disclosure; -
FIG. 46 is a diagram of an example of a protect evaluation category, sub-categories, and sub-sub-categories of the evaluation aspects and in accordance with the present disclosure; -
FIG. 47 is a diagram of an example of a detect evaluation category, sub-categories, and sub-sub-categories of the evaluation aspects and in accordance with the present disclosure; -
FIG. 48 is a diagram of an example of a respond evaluation category, sub-categories, and sub-sub-categories of the evaluation aspects and in accordance with the present disclosure; -
FIG. 49 is a diagram of an example of a recover evaluation category, sub-categories, and sub-sub-categories of the evaluation aspects and in accordance with the present disclosure; -
FIG. 50 is a diagram of a specific example of system aspects, evaluation aspects, evaluation rating metrics, and analysis system output options of an analysis system for analyzing a section of a system in accordance with the present disclosure; -
FIG. 51 is a diagram of another specific example of system aspects, evaluation aspects, evaluation rating metrics, and analysis system output options of an analysis system for analyzing a section of a system in accordance with the present disclosure; -
FIG. 52 is a diagram of another specific example of system aspects, evaluation aspects, evaluation rating metrics, and analysis system output options of an analysis system for analyzing a section of a system in accordance with the present disclosure; -
FIG. 53 is a diagram of another specific example of system aspects, evaluation aspects, evaluation rating metrics, and analysis system output options of an analysis system for analyzing a section of a system in accordance with the present disclosure; -
FIG. 54 is a diagram of another specific example of system aspects, evaluation aspects, evaluation rating metrics, and analysis system output options of an analysis system for analyzing a section of a system in accordance with the present disclosure; -
FIG. 55 is a diagram of another specific example of system aspects, evaluation aspects, evaluation rating metrics, and analysis system output options of an analysis system for analyzing a section of a system in accordance with the present disclosure; -
FIG. 56 is a diagram of another specific example of system aspects, evaluation aspects, evaluation rating metrics, and analysis system output options of an analysis system for analyzing a section of a system in accordance with the present disclosure; -
FIG. 57 is a diagram of an example of identifying deficiencies and auto-corrections by an analysis system analyzing a section of a system in accordance with the present disclosure; -
FIG. 58 is a schematic block diagram of an embodiment of an evaluation processing module of an analysis system in accordance with the present disclosure; -
FIG. 59 is a state diagram of an example of an analysis system analyzing a section of a system in accordance with the present disclosure; -
FIG. 60 is a logic diagram of an example of an analysis system analyzing a section of a system in accordance with the present disclosure; -
FIG. 61 is a logic diagram of another example of an analysis system analyzing a section of a system in accordance with the present disclosure; -
FIG. 62 is a logic diagram of another example of an analysis system analyzing a section of a system in accordance with the present disclosure; -
FIG. 63 is a logic diagram of another example of an analysis system analyzing a section of a system in accordance with the present disclosure; -
FIG. 64 is a logic diagram of another example of an analysis system analyzing a section of a system in accordance with the present disclosure; -
FIG. 65 is a logic diagram of another example of an analysis system analyzing a section of a system in accordance with the present disclosure; -
FIG. 66 is a logic diagram of another example of an analysis system analyzing a section of a system in accordance with the present disclosure; -
FIG. 67 is a logic diagram of another example of an analysis system analyzing a section of a system in accordance with the present disclosure; -
FIG. 68 is a logic diagram of an example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure; -
FIG. 69 is a logic diagram of a further example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure; -
FIG. 70 is a schematic block diagram of an example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure; -
FIG. 71 is a diagram of an example of system aspects, evaluation aspects, evaluation rating metrics, and analysis system output options of an analysis system for generating an issue detection rating for a section of a system in accordance with the present disclosure; -
FIG. 72 is a logic diagram of a further example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure; -
FIG. 73 is a logic diagram of a further example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure; -
FIG. 74 is a logic diagram of a further example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure; -
FIG. 75 is a diagram of an example of issue detection data for use by an analysis system to generate an issue detection rating for a section of a system in accordance with the present disclosure; -
FIG. 76 is a diagram of another example of issue detection data for use by an analysis system to generate an issue detection rating for a section of a system in accordance with the present disclosure; -
FIG. 77 is a diagram of another example of issue detection data for use by an analysis system to generate an issue detection rating for a section of a system in accordance with the present disclosure; -
FIG. 78 is a diagram of another example of issue detection data for use by an analysis system to generate an issue detection rating for a section of a system in accordance with the present disclosure; -
FIG. 79 is a diagram of another example of issue detection data for use by an analysis system to generate an issue detection rating for a section of a system in accordance with the present disclosure; -
FIG. 80 is a logic diagram of a further example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure; -
FIG. 81 is a schematic block diagram of an embodiment of a data analysis module of an analysis system in accordance with the present disclosure; -
FIG. 82 is a schematic block diagram of another embodiment of a data analysis module of an analysis system in accordance with the present disclosure; -
FIG. 83 is a logic diagram of a further example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure; -
FIG. 84 is a logic diagram of a further example of generating a process rating for understanding of system build for issue detection security functions of an organization in accordance with the present disclosure; -
FIG. 85 is a logic diagram of a further example of generating a process rating for understanding of verifying issue detection security functions of an organization in accordance with the present disclosure; -
FIG. 86 is a diagram of an example of issue detection data for use by an analysis system to generate an issue detection rating for a section of a system in accordance with the present disclosure; -
FIG. 87 is a logic diagram of an example of generating a process rating by the analysis system in accordance with the present disclosure; -
FIG. 88 is a logic diagram of an example of generating a process rating by the analysis system in accordance with the present disclosure; -
FIG. 89 is a logic diagram of an example of generating a process rating by the analysis system in accordance with the present disclosure; -
FIG. 90 is a logic diagram of a further example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure; -
FIG. 91 is a logic diagram of a further example of generating a policy rating for understanding of system build for issue detection security functions of an organization in accordance with the present disclosure; -
FIG. 92 is a logic diagram of a further example of generating a policy rating for understanding of verifying issue detection security functions of an organization in accordance with the present disclosure; -
FIG. 93 is a logic diagram of an example of generating a policy rating by the analysis system in accordance with the present disclosure; -
FIG. 94 is a logic diagram of an example of generating a policy rating by the analysis system in accordance with the present disclosure; -
FIG. 95 is a logic diagram of an example of generating a policy rating by the analysis system in accordance with the present disclosure; -
FIG. 96 is a logic diagram of a further example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure; -
FIG. 97 is a logic diagram of a further example of generating a documentation rating for understanding of system build of issue detection security functions of an organization in accordance with the present disclosure; -
FIG. 98 is a logic diagram of a further example of generating a documentation rating for understanding of system build of issue detection security functions of an organization in accordance with the present disclosure; -
FIG. 99 is a logic diagram of an example of generating a documentation rating by the analysis system in accordance with the present disclosure; -
FIG. 100 is a logic diagram of an example of generating a documentation rating by the analysis system in accordance with the present disclosure; -
FIG. 101 is a logic diagram of an example of generating a documentation rating by the analysis system in accordance with the present disclosure; -
FIG. 102 is a logic diagram of a further example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure; -
FIG. 103 is a logic diagram of a further example of generating an automation rating for understanding of system build of issue detection security functions of an organization in accordance with the present disclosure; -
FIG. 104 is a logic diagram of a further example of generating an automation rating for understanding of system build of issue detection security functions of an organization in accordance with the present disclosure; -
FIG. 105 is a logic diagram of an example of generating an automation rating by the analysis system in accordance with the present disclosure; -
FIG. 106 is a logic diagram of an example of generating an automation rating by the analysis system in accordance with the present disclosure; -
FIG. 107 is a logic diagram of an example of generating an automation rating by the analysis system in accordance with the present disclosure; -
FIG. 108 is a logic diagram of a further example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure; -
FIG. 109 is a diagram of another specific example of system aspects, evaluation aspects, evaluation rating metrics, and analysis system output options of an analysis system for generating an issue detection rating for a section of a system in accordance with the present disclosure; -
FIG. 110 is a diagram of another example of issue detection data for use by an analysis system to generate an issue detection rating for a section of a system in accordance with the present disclosure; -
FIG. 111 is a diagram of another example of issue detection data for use by an analysis system to generate an issue detection rating for a section of a system in accordance with the present disclosure; -
FIG. 112 is a diagram of another example of issue detection data for use by an analysis system to generate an issue detection rating for a section of a system in accordance with the present disclosure; -
FIG. 113 is a diagram of an example of combining one or more individual issue detection ratings into an issue detection rating in accordance with the present disclosure; -
FIG. 114 is a diagram of another example of combining one or more individual issue detection ratings into an issue detection rating in accordance with the present disclosure; -
FIG. 115 is a diagram of another example of combining one or more individual issue detection ratings into an issue detection rating in accordance with the present disclosure; -
FIG. 116 is a diagram of another example of combining one or more individual issue detection ratings into an issue detection rating in accordance with the present disclosure; -
FIG. 117 is a diagram of another example of combining one or more individual issue detection ratings into an issue detection rating in accordance with the present disclosure; -
FIG. 118 is a logic diagram of a further example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure; -
FIG. 119 is a logic diagram of a further example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure; -
FIG. 120 is a logic diagram of another example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure; -
FIG. 121 is a logic diagram of a further example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure; -
FIG. 122 is a logic diagram of a further example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure; -
FIG. 123 is a logic diagram of a further example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure; -
FIG. 124 is a logic diagram of a further example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure; -
FIG. 125 is a logic diagram of a further example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure; -
FIG. 126 is a logic diagram of a further example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure; -
FIG. 127 is a logic diagram of a further example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure; -
FIG. 128 is a logic diagram of a further example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure; -
FIG. 129 is a logic diagram of a further example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure; -
FIG. 130 is a logic diagram of another example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure; -
FIG. 131 is a logic diagram of a further example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure; -
FIG. 132 is a logic diagram of another example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure; -
FIG. 133 is a logic diagram of a further example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure; -
FIG. 134 is a logic diagram of another example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure; and -
FIG. 135 is a logic diagram of another example of an analysis system determining an issue detection rating for a section of a system in accordance with the present disclosure. -
FIG. 1 is a schematic block diagram of an embodiment of a networked environment that includes one or more networks 14, external data feed sources 15, a plurality of systems 11-13, and an analysis system 10. The external data feed sources 15 include one or more system proficiency resources 22, one or more business associated computing devices 23, one or more non-business associated computing devices 24 (e.g., publicly available servers 27 and subscription based servers 28), one or more BOT (i.e., internet robot) computing devices 25, and one or more bad actor computing devices 26. The analysis system 10 includes one or more analysis computing entities 16, a plurality of analysis system modules 17 (one or more in each of the systems 11-13), and a plurality of storage systems 19-21 (e.g., system A private storage 19, system B private storage 20, through system x private storage 21, and other storage). Each of the systems 11-13 includes one or more network interfaces 18 and many more elements not shown in FIG. 1.
- A computing device may be implemented in a variety of ways. A few examples are shown in FIGS. 2A-2D. A computing entity may be implemented in a variety of ways. A few examples are shown in FIGS. 3A-3E.
- A storage system 19-21 may be implemented in a variety of ways. For example, each storage system is a standalone database. As another example, the storage systems are implemented in a common database. A database is a centralized database, a distributed database, an operational database, a cloud database, an object-oriented database, and/or a relational database. A storage system 19-21 is coupled to the analysis system 10 using a secure data pipeline to limit and control access to the storage systems. The secure data pipeline may be implemented in a variety of ways. For example, the secure data pipeline is implemented on a private network of the analysis system and/or of a system under test. As another example, the secure data pipeline is implemented via the network 14 using access control, using network controls, implementing access and control policies, using encryption, using data loss prevention tools, and/or using auditing tools.
- The one or more networks 14 include one or more wide area networks (WAN), one or more local area networks (LAN), one or more wireless LANs (WLAN), one or more cellular networks, one or more satellite networks, one or more virtual private networks (VPN), one or more campus area networks (CAN), one or more metropolitan area networks (MAN), one or more storage area networks (SAN), one or more enterprise private networks (EPN), and/or one or more other types of networks.
- In general, a system proficiency resource 22 is a source for data regarding best-in-class practices (for system requirements, for system design, for system implementation, and/or for system operation), governmental and/or regulatory requirements, security risk awareness and/or risk remediation information, security risk avoidance, performance optimization information, system development guidelines, software development guidelines, hardware requirements, networking requirements, networking guidelines, and/or other system proficiency guidance. “Framework for Improving Critical Infrastructure Cybersecurity”, Version 1.1, Apr. 16, 2018, by the National Institute of Standards and Technology (NIST) is an example of a system proficiency resource in the form of a guideline for cybersecurity.
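- As a sketch of how desired data from a system proficiency resource might be represented, the record below captures the resource's name and version together with hypothetical target levels for a few evaluation categories; only the NIST framework title and version are taken from the text above, and everything else (the field names, the target values, the lookup helper) is illustrative.

```python
# Illustrative representation of a system proficiency resource used as a source
# of desired data. Target levels are hypothetical; the framework name/version
# come from the example cited above.
proficiency_resource = {
    "name": "Framework for Improving Critical Infrastructure Cybersecurity",
    "publisher": "NIST",
    "version": "1.1",
    # Hypothetical desired maturity levels per evaluation category (0-100).
    "desired_levels": {
        "identify": 85,
        "protect": 90,
        "detect": 80,
        "respond": 75,
        "recover": 70,
    },
}

def desired_level(resource: dict, category: str, default: int = 0) -> int:
    """Look up the desired level for an evaluation category."""
    return resource["desired_levels"].get(category, default)

print(desired_level(proficiency_resource, "detect"))
```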
- A non-business associated computing device 24 is a computing device operated by a person or entity that does not have a business relationship with the organization operating the system. Such non-business associated computing device 24 are not granted special access to the system. For example, a non-business associated computing device 24 is a publicly
available server 27 to which a user computing device of the system may access. As another example, a non-business associated computing device 24 is a subscription basedservers 28 to which a user computing device of the system may access if it is authorized by a system administrator of the system to have a subscription and has a valid subscription. As yet another example, the non-business associated computing device 24 is a computing device operated by a person or business that does not have an affiliation with the organization operating the system. - A bot (i.e., internet robot)
computing device 25 is a computing device that runs, with little to no human interaction, to interact with a system and/or a computing device of a user via the internet or a network. There are a variety of types of bots. For example, there are social media bots, chatbots, bot crawlers, transaction bots, information bots, and entertainment bots (e.g., games, art, books, etc.). - A bad
actor computing device 26 is a computing device operated by a person whose use of the computing device is for illegal and/or immoral purposes. The badactor computing device 26 may employ a bot to execute an illegal and/or immoral purpose. In addition or in the alternative, the person may instruct the bad actor computing device to perform the illegal and/or immoral purpose, such as hacking, planting a worm, planting a virus, stealing data, uploading false data, and so on. - The
analysis system 10 is operable to evaluate a system 11-13, or portion thereof, in a variety of ways. For example, theanalysis system 10 evaluatessystem A 11, or a portion thereof, by testing the organization's understanding of its system, or portion thereof; by testing the organization's implementation of its system, or portion thereof; and/or by testing the system's, or portion thereof; operation. As a specific example, theanalysis system 10 tests the organization's understanding of its system requirements for the implementation and/or operation of its system, or portion thereof. As another specific example, theanalysis system 10 tests the organization's understanding of its software maintenance policies and/or procedures. As another specific example, theanalysis system 10 tests the organization's understanding of its cybersecurity policies and/or procedures. - There is an almost endless combination of ways in which the
analysis system 10 can evaluate a system 11-13, which may be a computer system, a computer network, an enterprise system, and/or other type of system that includes computing devices operating software. For example, the analysis system 10 evaluates a system aspect (e.g., the system or a portion of it) based on an evaluation aspect (e.g., options for how the system, or portion thereof, can be evaluated) in view of evaluation rating metrics (e.g., how the system, or portion thereof, is evaluated) to produce an analysis system output (e.g., an evaluation rating, deficiency identification, and/or deficiency auto-correction). - The system aspect (e.g., the system or a portion thereof) includes a selection of one or more system elements of the system, a selection of one or more system criteria, and/or a selection of one or more system modes. A system element of the system includes one or more system assets, which are one or more physical assets of the system and/or conceptual assets of the system. For example, a physical asset is a computing entity, a computing device, a user software application, a system software application (e.g., operating system, etc.), a software tool, a network software application, a security software application, a system monitoring software application, and the like. As another example, a conceptual asset is a hardware architectural layout, or portion thereof, and/or a software architectural layout, or portion thereof.
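Taken together, the system aspect, the evaluation aspect, and the evaluation rating metrics specify what is evaluated, how it is evaluated, and how the result is quantified. The following minimal sketch shows one way such a selection could be represented; the Python class and field names are illustrative assumptions and are not prescribed by this disclosure.

```python
# Minimal sketch of an evaluation selection; names are illustrative assumptions.
from dataclasses import dataclass, field
from enum import Enum

class Perspective(Enum):
    UNDERSTANDING = "understanding"
    IMPLEMENTATION = "implementation"
    OPERATION = "operation"
    SELF_ANALYSIS = "self-analysis"

class Viewpoint(Enum):
    DISCLOSED = "disclosed"
    DISCOVERED = "discovered"
    DESIRED = "desired"

class RatingMetric(Enum):
    PROCESS = "process"
    POLICY = "policy"
    PROCEDURE = "procedure"
    CERTIFICATION = "certification"
    DOCUMENTATION = "documentation"
    AUTOMATION = "automation"

@dataclass
class SystemAspect:
    system_elements: list[str]                       # e.g., department identifiers or asset IDs
    system_criteria: list[str] = field(default_factory=lambda: ["build"])
    system_modes: list[str] = field(default_factory=lambda: ["assets"])

@dataclass
class EvaluationAspect:
    perspectives: list[Perspective]
    viewpoints: list[Viewpoint]
    categories: list[str]                            # e.g., ["identify.asset management"]

@dataclass
class EvaluationRequest:
    system_aspect: SystemAspect
    evaluation_aspect: EvaluationAspect
    rating_metrics: list[RatingMetric]
```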
- A system element and/or system asset may be identified in a variety of ways. For example, it is identifiable by its use and/or location within the organization. As a specific example, a system element and/or system asset is identified by an organizational identifier, a division of the organization identifier, a department of a division identifier, a group of a department identifier, and/or a sub-group of a group identifier. In this manner, if the entire system is to be evaluated, the organization identifier is used to select all of the system elements in the system. If a portion of the system is to be tested based on business function, then a division, department, group, and/or sub-group identifier is used to select the desired portion of the system.
- In addition or in the alternative, a system element and/or system asset is identifiable based on a serial number, an IP (internet protocol) address, a vendor name, a type of system element and/or system asset (e.g., computing entity, a particular user software application, etc.), a registered user of the system element and/or system asset, and/or other identifying metric. In this manner, an individual system element and/or system asset can be evaluated and/or a type of system element and/or system asset can be evaluated (e.g., a particular user software application).
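As a concrete illustration of identifier-based selection, a flat asset inventory can be filtered by an organizational identifier prefix and/or by asset attributes such as vendor or type. The inventory layout and field names below (org_path, asset_type, vendor) are assumptions made only for this example.

```python
# Illustrative selection of system elements by organizational identifier or asset attribute.
def select_assets(inventory, org_prefix=None, **attributes):
    """Return assets whose org path starts with org_prefix and whose attributes match."""
    selected = []
    for asset in inventory:
        if org_prefix and not asset["org_path"].startswith(org_prefix):
            continue
        if any(asset.get(key) != value for key, value in attributes.items()):
            continue
        selected.append(asset)
    return selected

inventory = [
    {"org_path": "org.div1.engineering", "asset_type": "server", "vendor": "VendorA"},
    {"org_path": "org.div1.engineering", "asset_type": "user_app", "vendor": "VendorB"},
    {"org_path": "org.div2.sales", "asset_type": "server", "vendor": "VendorA"},
]

# Evaluate everything under the engineering department:
engineering_assets = select_assets(inventory, org_prefix="org.div1.engineering")
# Evaluate one type of asset across the whole organization:
vendor_a_servers = select_assets(inventory, org_prefix="org", asset_type="server", vendor="VendorA")
```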
- A system criteria is regarding a level of the system, or portion thereof, being evaluated. For example, the system criteria includes guidelines, system requirements, system design, system build, and resulting system. As a further example, the guidelines (e.g., business objectives, security objectives, NIST cybersecurity guidelines, system objectives, governmental and/or regulatory requirements, third party requirements, etc.) are used to develop the system requirements, which are used to design the system, which is used to build the resulting system. As such, the system, or portion thereof, can be evaluated from a guideline level, a system requirements level, a design level, a build level, and/or a resulting system level.
- A system mode is regarding a different level of the system, or portion thereof, being evaluated. For example, the system mode includes assets, system functions, and security functions. As such, the system can be evaluated from an assets level, a system function level, and/or a security function level.
- The evaluation aspect (e.g., options for how the system, or portion thereof, can be evaluated) includes a selection of one or more evaluation perspectives, a selection of one or more evaluation viewpoints, and/or a selection of one or more evaluation categories, which may further include sub-categories (and sub-categories of the sub-categories). An evaluation perspective is understanding of the system, or portion thereof; implementation (e.g., design and build) of the system, or portion thereof; operational performance of the system, or portion thereof; or self-analysis of the system, or portion thereof.
- An evaluation viewpoint is disclosed information from the system, discovered information about the system by the analysis system, or desired information about the system obtained by the analysis system from system proficiency resources. The evaluation viewpoint complements the evaluation perspective to allow for more in-depth and/or detailed evaluations. For example, the
analysis system 10 can evaluate how well the system is understood by comparing disclosed data with discovered data. As another example, the analysis system 10 can evaluate how well the system is actually implemented in comparison to a desired level of implementation. - The evaluation category includes an identify category, a protect category, a detect category, a respond category, and a recover category. Each evaluation category includes a plurality of sub-categories and at least some of the sub-categories include their own sub-categories (e.g., a sub-sub category). For example, the identify category includes the sub-categories of asset management, business environment, governance, risk assessment, risk management, access control, awareness & training, and data security. As a further example, asset management includes the sub-categories of hardware inventory, software inventory, data flow maps, external systems cataloged, resource prioritization, and security roles. The
analysis system 10 can evaluate the system, or portion thereof, in light of one or more evaluation categories, in light of an evaluation category and one or more sub-categories, or in light of an evaluation category, a sub-category, and one or more sub-sub-categories. - The evaluation rating metrics (e.g., how the system, or portion thereof, is evaluated) include a selection of process, policy, procedure, certification, documentation, and/or automation. This allows the analysis system to quantify its evaluation.
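One way to picture the category structure described above is as a small tree in which each evaluation category holds sub-categories and, where named, sub-sub-categories. The sketch below fills in only the branches explicitly listed in this description; the dictionary layout and helper function are illustrative assumptions rather than part of the disclosed system.

```python
# Sketch of the evaluation category hierarchy; only branches named in the text are filled in.
EVALUATION_CATEGORIES = {
    "identify": {
        "asset management": [
            "hardware inventory", "software inventory", "data flow maps",
            "external systems cataloged", "resource prioritization", "security roles",
        ],
        "business environment": [],
        "governance": [],
        "risk assessment": [],
        "risk management": [],
        "access control": [],
        "awareness & training": [],
        "data security": [],
    },
    "protect": {},
    "detect": {},
    "respond": {},
    "recover": {},
}

def resolve(category, sub_category=None, sub_sub_category=None):
    """Validate a category / sub-category / sub-sub-category selection and return its path."""
    subs = EVALUATION_CATEGORIES[category]
    if sub_category is None:
        return category
    if sub_sub_category is None:
        assert sub_category in subs
        return f"{category}.{sub_category}"
    assert sub_sub_category in subs[sub_category]
    return f"{category}.{sub_category}.{sub_sub_category}"

print(resolve("identify", "asset management", "hardware inventory"))
```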
For example, the analysis system 10 can evaluate the processes a system, or portion thereof, has to generate an evaluation rating, to identify deficiencies, and/or to auto-correct deficiencies. As another example, the analysis system 10 can evaluate how well the system, or portion thereof, uses the processes it has to generate an evaluation rating, to identify deficiencies, and/or to auto-correct deficiencies. - In an example, the analysis computing entity 16 (which includes one or more computing entities) sends a data gathering request to the
analysis system module 17. The data gathering request is specific to the evaluation to be performed by the analysis system 10. For example, if the analysis system 10 is evaluating the understanding of the policies, processes, documentation, and automation regarding the assets built for the engineering department, then the data gathering request would be specific to policies, processes, documentation, and automation regarding the assets built for the engineering department.
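A data gathering request of the kind described in this example could be represented as a small structured message scoped by department, rating metrics, and evaluation categories. The sketch below is a hypothetical representation; the field names and message format are assumptions, not a defined protocol.

```python
# Hypothetical construction of a data gathering request scoped to one evaluation.
def build_data_gathering_request(system_aspect, rating_metrics, categories):
    return {
        "scope": system_aspect,                 # e.g., assets built for the engineering department
        "rating_metrics": rating_metrics,       # e.g., policies, processes, documentation, automation
        "categories": categories,               # e.g., ["identify.asset management"]
        "sources": ["disclosed", "discovered"], # where the analysis system module should look
    }

request = build_data_gathering_request(
    system_aspect={"department": "engineering", "criteria": "build", "mode": "assets"},
    rating_metrics=["policy", "process", "documentation", "automation"],
    categories=["identify.asset management"],
)
# The analysis computing entity would send `request` to the analysis system module.
```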
- The analysis system module 17 is loaded on the system 11-13 and obtains the requested data from the system. The obtaining of the data can be done in a variety of ways. For example, the data is disclosed by one or more system administrators. The disclosed data corresponds to the information the system administrator(s) has regarding the system. In essence, the disclosed data is a reflection of the knowledge the system administrator(s) has regarding the system. - As another example, the
analysis system module 17 communicates with physical assets of the system to discover the data. The communication may be direct with an asset. For example, the analysis system module 17 sends a request to a particular computing device. Alternatively or in addition, the communication may be through one or more discovery tools of the system. For example, the analysis system module 17 communicates with one or more tools of the system to obtain data regarding data segregation & boundary, infrastructure management, exploit & malware protection, encryption, identity & access management, system monitoring, vulnerability management, and/or data protection. - A tool is a network monitoring tool, a network strategy and planning tool, a network managing tool, a Simple Network Management Protocol (SNMP) tool, a telephony monitoring tool, a firewall monitoring tool, a bandwidth monitoring tool, an IT asset inventory management tool, a network discovery tool, a network asset discovery tool, a software discovery tool, a security discovery tool, an infrastructure discovery tool, a Security Information & Event Management (SIEM) tool, a data crawler tool, and/or other type of tool to assist in discovery of assets, functions, security issues, implementation of the system, and/or operation of the system. If the system does not have a particular tool, the
analysis system module 17 engages one to discover a particular piece of data.
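The tool-mediated discovery described above can be pictured as a data extraction module that first tries the system's own tools and, when a tool is absent, engages one supplied by the analysis system. The following sketch assumes hypothetical tool clients exposing a query() method; it is not the disclosed module's actual interface.

```python
# Sketch of discovery through system tools with a fallback when a tool is missing.
class DataExtractionModule:
    def __init__(self, system_tools, fallback_tools):
        self.system_tools = system_tools      # tools already present in the system, e.g. {"system monitoring": siem_client}
        self.fallback_tools = fallback_tools  # tools the analysis system can engage itself

    def discover(self, topic):
        """Gather discovered data on a topic (e.g., 'encryption', 'vulnerability management')."""
        tool = self.system_tools.get(topic) or self.fallback_tools.get(topic)
        if tool is None:
            return {"topic": topic, "data": None, "note": "no tool available"}
        return {"topic": topic, "data": tool.query(), "source": type(tool).__name__}

    def gather(self, topics):
        return [self.discover(topic) for topic in topics]
```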
- The analysis system module 17 provides the gathered data to the analysis computing entity 16, which stores the gathered data in a private storage 19-21 and processes it. The gathered data is processed alone, in combination with stored data (of the system being evaluated and/or another system's data), in combination with desired data (e.g., system proficiencies), in combination with analysis modeling (e.g., risk modeling, data flow modeling, security modeling, etc.), and/or in combination with stored analytic data (e.g., results of other evaluations). As a result of the processing, the analysis computing entity 16 produces an evaluation rating, identifies deficiencies, and/or auto-corrects deficiencies. The evaluation results are stored in a private storage and/or in another database. - The
analysis system 10 is operable to evaluate a system and/or its eco-system at any level of granularity, from the entire system to an individual asset, over a wide spectrum of evaluation options. As an example, the evaluation is to test understanding of the system, to test the implementation of the system, and/or to test the operation of the system. As another example, the evaluation is to test the system's self-evaluation capabilities with respect to understanding, implementation, and/or operation. As yet another example, the evaluation is to test policies regarding software tools; to test which software tools are prescribed by policy; to test which software tools are prohibited by policy; to test the use of the software tools in accordance with policy; to test maintenance of software tools in accordance with policy; to test the sufficiency of the policies; to test the effectiveness of the policies; and/or to test compliance with the policies. - The
analysis system 10 takes an outside perspective to analyze the system. From within the system, it is often difficult to test the entire system, to test different combinations of system elements, to identify areas of vulnerability (assets and human operators), to identify areas of strength (assets and human operators), and to be proactive. Further, such evaluations are additional tasks the system has to perform, which means they consume resources (human, physical assets, and financial). Further, since system analysis is not the primary function of a system (supporting the organization is the system's primary purpose), the system analysis is not as thoroughly developed, implemented, and/or executed as is possible when it is implemented in a stand-alone analysis system, like system 10. - The primary purpose of the analysis system is to analyze other systems to determine an evaluation rating, to identify deficiencies in the system, and, where it can, auto-correct the deficiencies. The evaluation rating can be regarding how well the system, or portion thereof, is understood, how well it is implemented, and/or how well it operates. The evaluation rating can be regarding how effective the system, or portion thereof, is believed (disclosed data) to support a business function; actually (discovered data) supports a business function; and/or should (desired data) support the business function.
- The evaluation rating can be regarding how effective the system, or portion thereof, is believed (disclosed data) to mitigate security risks; actually (discovered data) supports mitigating security risks; and/or should (desired data) support mitigating security risks. The evaluation rating can be regarding how effective the system, or portion thereof, is believed (disclosed data) to respond to security risks; actually (discovered data) supports responding to security risks; and/or should (desired data) support responding to security risks.
- The evaluation rating can be regarding how effective the system, or portion thereof, is believed (disclosed data) to be used by people; is actually (discovered data) used by people; and/or should (desired data) be used by people. The evaluation rating can be regarding how effective the system, or portion thereof, is believed (disclosed data) to identify assets of the system; actually (discovered data) identifies assets of the system; and/or should (desired data) identify assets of the system.
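Because the disclosure does not prescribe how a rating is computed from disclosed, discovered, and desired data, the following is only a hypothetical illustration of that three-way comparison, using set overlap as a stand-in scoring rule.

```python
# Hypothetical rating calculation comparing disclosed, discovered, and desired data.
def evaluation_ratings(disclosed, discovered, desired):
    disclosed, discovered, desired = set(disclosed), set(discovered), set(desired)
    understanding = len(disclosed & discovered) / max(len(discovered), 1)   # how well the system is understood
    implementation = len(discovered & desired) / max(len(desired), 1)       # how well it matches the desired state
    deficiencies = sorted(desired - discovered)       # desired items not actually found in the system
    unknown_assets = sorted(discovered - disclosed)   # found in the system but not disclosed (not understood)
    return {
        "understanding_rating": round(understanding, 2),
        "implementation_rating": round(implementation, 2),
        "deficiencies": deficiencies,
        "unknown_assets": unknown_assets,
    }

print(evaluation_ratings(
    disclosed={"fw-policy", "av-tool"},
    discovered={"fw-policy", "av-tool", "legacy-server"},
    desired={"fw-policy", "av-tool", "patch-process"},
))
```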
- There are a significant number of combinations in which the
analysis system 10 can evaluate a system 11-13. A primary purpose of the analysis system 10 is to help the system 11-13 become more self-healing, more self-updating, more self-protecting, more self-recovering, more self-evaluating, more self-aware, more secure, more efficient, more adaptive, and/or more self-responding. By discovering the strengths, weaknesses, vulnerabilities, and other system limitations in a way that the system itself cannot do effectively, the analysis system 10 significantly improves the usefulness, security, and efficiency of systems 11-13. -
FIG. 2A is a schematic block diagram of an embodiment of a computing device 40 that includes a plurality of computing resources. The computing resources include a core control module 41, one or more processing modules 43, one or more main memories 45, a read only memory (ROM) 44 for a boot up sequence, cache memory 47, a video graphics processing module 42, a display 48 (optional), an Input-Output (I/O) peripheral control module 46, an I/O interface module 49 (which could be omitted), one or more input interface modules 50, one or more output interface modules 51, one or more network interface modules 55, and one or more memory interface modules 54. A processing module 43 is described in greater detail at the end of the detailed description of the invention section and, in an alternative embodiment, has a direct connection to the main memory 45. In an alternate embodiment, the core control module 41 and the I/O and/or peripheral control module 46 are one module, such as a chipset, a quick path interconnect (QPI), and/or an ultra-path interconnect (UPI). - Each of the
main memories 45 includes one or more Random Access Memory (RAM) integrated circuits, or chips. For example, amain memory 45 includes four DDR4 (4 th generation of double data rate) RAM chips, each running at a rate of 2,400 MHz. In general, themain memory 45 stores data and operational instructions most relevant for theprocessing module 43. For example, thecore control module 41 coordinates the transfer of data and/or operational instructions between themain memory 45 and the memory 56-57. The data and/or operational instructions retrieve from memory 56-57 are the data and/or operational instructions requested by the processing module or will most likely be needed by the processing module. When the processing module is done with the data and/or operational instructions in main memory, thecore control module 41 coordinates sending updated data to the memory 56-57 for storage. - The memory 56-57 includes one or more hard drives, one or more solid state memory chips, and/or one or more other large capacity storage devices that, in comparison to cache memory and main memory devices, is/are relatively inexpensive with respect to cost per amount of data stored. The memory 56-57 is coupled to the
core control module 41 via the I/O and/orperipheral control module 46 and via one or morememory interface modules 54. In an embodiment, the I/O and/orperipheral control module 46 includes one or more Peripheral Component Interface (PCI) buses to which peripheral components connect to thecore control module 41. Amemory interface module 54 includes a software driver and a hardware connector for coupling a memory device to the I/O and/orperipheral control module 46. For example, amemory interface 54 is in accordance with a Serial Advanced Technology Attachment (SATA) port. - The
core control module 41 coordinates data communications between the processing module(s) 43 and the network(s) 14 via the I/O and/or peripheral control module 46, the network interface module(s) 55, and a network card. A network interface module 55 includes a software driver and a hardware connector for coupling the network card to the I/O and/or peripheral control module 46. For example, the network interface module 55 is in accordance with one or more versions of IEEE 802.11, cellular telephone protocols, 10/100/1000 Gigabit LAN protocols, etc. - The
core control module 41 coordinates data communications between the processing module(s) 43 and input device(s) 52 via the input interface module(s) 50, the I/O interface 49, and the I/O and/orperipheral control module 46. An input device 52 includes a keypad, a keyboard, control switches, a touchpad, a microphone, a camera, etc. Aninput interface module 50 includes a software driver and a hardware connector for coupling an input device to the I/O and/orperipheral control module 46. In an embodiment, aninput interface module 50 is in accordance with one or more Universal Serial Bus (USB) protocols. - The
core control module 41 coordinates data communications between the processing module(s) 43 and output device(s) 53 via the output interface module(s) 51 and the I/O and/or peripheral control module 46. An output device 53 includes a speaker, auxiliary memory, headphones, etc. An output interface module 51 includes a software driver and a hardware connector for coupling an output device to the I/O and/or peripheral control module 46. In an embodiment, an output interface module 51 is in accordance with one or more audio codec protocols. - The
processing module 43 communicates directly with a videographics processing module 42 to display data on thedisplay 48. Thedisplay 48 includes an LED (light emitting diode) display, an LCD (liquid crystal display), and/or other type of display technology. The display has a resolution, an aspect ratio, and other features that affect the quality of the display. The videographics processing module 42 receives data from theprocessing module 43, processes the data to produce rendered data in accordance with the characteristics of the display, and provides the rendered data to thedisplay 48. -
FIG. 2B is a schematic block diagram of an embodiment of acomputing device 40 that includes a plurality of computing resources similar to the computing resources ofFIG. 2A with the addition of one or more cloudmemory interface modules 60, one or more cloudprocessing interface modules 61,cloud memory 62, and one or morecloud processing modules 63. Thecloud memory 62 includes one or more tiers of memory (e.g., ROM, volatile (RAM, main, etc.), non-volatile (hard drive, solid-state, etc.) and/or backup (hard drive, tape, etc.)) that is remoted from the core control module and is accessed via a network (WAN and/or LAN). Thecloud processing module 63 is similar toprocessing module 43 but is remoted from the core control module and is accessed via a network. -
FIG. 2C is a schematic block diagram of an embodiment of acomputing device 40 that includes a plurality of computing resources similar to the computing resources ofFIG. 2B with a change in how the cloud memory interface module(s) 60 and the cloud processing interface module(s) 61 are coupled to thecore control module 41. In this embodiment, theinterface modules peripheral control module 63 that directly couples to thecore control module 41. -
FIG. 2D is a schematic block diagram of an embodiment of a computing device 40 that includes a plurality of computing resources, which include a core control module 41, a boot up processing module 66, boot up RAM 67, a read only memory (ROM) 44, a video graphics processing module 42, a display 48 (optional), an Input-Output (I/O) peripheral control module 46, one or more input interface modules 50, one or more output interface modules 51, one or more cloud memory interface modules 60, one or more cloud processing interface modules 61, cloud memory 62, and cloud processing module(s) 63. - In this embodiment, the
computing device 40 includes enough processing resources (e.g., module 66,ROM 44, and RAM 67) to boot up. Once booted up, thecloud memory 62 and the cloud processing module(s) 63 function as the computing device's memory (e.g., main and hard drive) and processing module. -
FIG. 3A is schematic block diagram of an embodiment of acomputing entity 16 that includes a computing device 40 (e.g., one of the embodiments ofFIGS. 2A-2D ). A computing device may function as a user computing device, a server, a system computing device, a data storage device, a data security device, a networking device, a user access device, a cell phone, a tablet, a laptop, a printer, a game console, a satellite control box, a cable box, etc. -
FIG. 3B is schematic block diagram of an embodiment of acomputing entity 16 that includes two or more computing devices 40 (e.g., two or more from any combination of the embodiments ofFIGS. 2A-2D ). Thecomputing devices 40 perform the functions of a computing entity in a peer processing manner (e.g., coordinate together to perform the functions), in a master-slave manner (e.g., one computing device coordinates and the other support it), and/or in another manner. -
FIG. 3C is a schematic block diagram of an embodiment of a computing entity 16 that includes a network of computing devices 40 (e.g., two or more from any combination of the embodiments of FIGS. 2A-2D). The computing devices are coupled together via one or more network connections (e.g., WAN, LAN, cellular data, WLAN, etc.) and perform the functions of the computing entity. -
FIG. 3D is schematic block diagram of an embodiment of acomputing entity 16 that includes a primary computing device (e.g., any one of the computing devices ofFIGS. 2A-2D ), an interface device (e.g., a network connection), and a network of computing devices 40 (e.g., one or more from any combination of the embodiments ofFIGS. 2A-2D ). The primary computing device utilizes the other computing devices as co-processors to execute one or more the functions of the computing entity, as storage for data, for other data processing functions, and/or storage purposes. -
FIG. 3E is schematic block diagram of an embodiment of acomputing entity 16 that includes a primary computing device (e.g., any one of the computing devices ofFIGS. 2A-2D ), an interface device (e.g., a network connection) 70, and a network of computing resources 71 (e.g., two or more resources from any combination of the embodiments ofFIGS. 2A-2D ). The primary computing device utilizes the computing resources as co-processors to execute one or more the functions of the computing entity, as storage for data, for other data processing functions, and/or storage purposes. -
FIG. 4 is a schematic block diagram of another embodiment of a networked environment that includes a system 11 (orsystem 12 or system 13), theanalysis system 10, one or more networks, one or moresystem proficiency resources 22, one or more business associated computing devices 23, one or more non-business associated computing devices 24 (e.g., publiclyavailable servers 27 and subscription based servers 28), one or moreBOT computing devices 25, and one or more badactor computing devices 26. This diagram is similar toFIG. 1 with the inclusion of detail within the system proficiency resource(s) 22, with inclusion of detail within thesystem 11, and with the inclusion of detail within theanalysis system module 17. - In addition to the discussion with respect
to FIG. 1, a system proficiency resource 22 is a computing device that provides information regarding best-in-class assets, best-in-class practices, known protocols, leading edge information, and/or established guidelines regarding risk assessment, devices, software, networking, data security, cybersecurity, and/or data communication. A system proficiency resource 22 is a computing device that may also provide information regarding standards, information regarding compliance requirements, information regarding legal requirements, and/or information regarding regulatory requirements. - The
system 11 is shown to include three inter-dependent modes: system functions 82, security functions 83, andsystem assets 84. System functions 82 correspond to the functions the system executes to support the organization's business requirements. Security functions 83 correspond to the functions the system executes to support the organization's security requirements. Thesystem assets 84 are the hardware and/or software platforms that support system functions 82 and/or the security functions 83. - The
analysis system module 17 includes one or moredata extraction modules 80 and one or more systemuser interface modules 81. Adata extraction module 80, which will be described in greater detail with reference to one or more subsequent figures, gathers data from the system for analysis by theanalysis system 10. A systemuser interface module 81 provides a user interface between thesystem 11 and theanalysis system 10 and functions to provide user information to theanalysis system 10 and to receive output data from the analysis system. The systemuser interface module 81 will be described in greater detail with reference to one or more subsequent figures. -
FIG. 5 is a schematic block diagram of another embodiment of a networked environment that includes a system 11 (orsystem 12 or system 13), theanalysis system 10, one or more networks, one or moresystem proficiency resources 22, one or more business associated computing devices 23, one or more non-business associated computing devices 24 (e.g., publiclyavailable servers 27 and subscription based servers 28), one or moreBOT computing devices 25, and one or more badactor computing devices 26. This diagram is similar toFIG. 4 with the inclusion of additional detail within thesystem 11. - In this embodiment, the
system 11 includes a plurality of sets of system assets to support the system functions 82 and/or the security functions 83. For example, a set of system assets supports the system functions 82 and/orsecurity functions 83 for a particular business segment (e.g., a department within the organization). As another example, a second set of system assets supports the security functions 83 for a different business segment and a third set of system assets supports the system functions 82 for the different business segment. -
FIG. 6 is a schematic block diagram of another embodiment of a networked environment that includes a system 11 (orsystem 12 or system 13), theanalysis system 10, one or more networks, one or moresystem proficiency resources 22, one or more business associated computing devices 23, one or more non-business associated computing devices 24 (e.g., publiclyavailable servers 27 and subscription based servers 28), one or moreBOT computing devices 25, and one or more badactor computing devices 26. This diagram is similar toFIG. 5 with the inclusion of additional detail within thesystem 11. - In this embodiment, the
system 11 includes a plurality of sets ofsystem assets 84, system functions 82, and security functions 83. For example, a set ofsystem assets 84, system functions 82, and security functions 83 supports one department in an organization and a second set ofsystem assets 84, system functions 82, and security functions 83 supports another department in the organization. -
FIG. 7 is a schematic block diagram of another embodiment of a networked environment that includes a system 11 (orsystem 12 or system 13), theanalysis system 10, one or more networks, one or moresystem proficiency resources 22, one or more business associated computing devices 23, one or more non-business associated computing devices 24 (e.g., publiclyavailable servers 27 and subscription based servers 28), one or moreBOT computing devices 25, and one or more badactor computing devices 26. This diagram is similar toFIG. 4 with the inclusion of additional detail within thesystem 11. - In this embodiment, the
system 11 includes system assets 84, system functions 82, security functions 83, and self-evaluation functions 85. The self-evaluation functions 85 are supported by the system assets 84 and are used by the system to evaluate its assets, its system functions, and its security functions. In general, self-evaluation looks at the system's ability to analyze itself: self-determining its understanding (self-awareness) of the system, self-determining the implementation of the system, and/or self-determining the operation of the system. In addition, the self-evaluation may further consider the system's ability to self-heal, self-update, self-protect, self-recover, self-evaluate, and/or self-respond. The analysis system 10 can evaluate the understanding, implementation, and/or operation of the self-evaluation functions. -
FIG. 8 is a schematic block diagram of another embodiment of a networked environment having a system 11 (orsystem 12 or system 13), theanalysis system 10, one or more networks represented by networking infrastructure, one or moresystem proficiency resources 22, one or more business associated computing devices 23, one or more publiclyavailable servers 27, one or more subscription basedservers 28, one or moreBOT computing devices 25, and one or more badactor computing devices 26. - In this embodiment, the
system 11 is shown to include a plurality of physical assets dispersed throughout a geographic region (e.g., a building, a town, a county, a state, a country). Each of the physical assets includes hardware and software to perform its respective functions within the system. A physical asset is a computing entity (CE), a public or provide networking device (ND), a user access device (UAD), or a business associate access device (BAAD). - A computing entity may be a user device, a system admin device, a server, a printer, a data storage device, etc. A network device may be a local area network device, a network card, a wide area network device, etc. A user access device is a portal that allows authorizes users of the system to remotely access the system. A business associated access device is a portal that allows authorized business associates of the system access the system.
- Some of the computing entities are grouped via a common connection to a network device, which provides the group of computing entities access to other parts of the system and/or the internet. For example, the highlighted computing entity may access a publicly
available server 27 via network devices coupled to the network infrastructure. The analysis system 10 can evaluate whether this is an appropriate access, the understanding of this access, the implementation to enable this access, and/or the operation of the system to support this access. -
FIG. 9 is a schematic block diagram of an example of a system section of a system selected for evaluation similar to FIG. 8. In this example, only a portion of the system is being tested, i.e., system section under test 91. As such, the analysis system 10 only evaluates assets, system functions, and/or security functions related to assets within the system section under test 91. -
FIG. 10 is a schematic block diagram of another example of a system section of a system selected for evaluation similar to FIG. 9. In this example, a single computing entity (CE) is being tested, i.e., system section under test 91. As such, the analysis system 10 only evaluates assets, system functions, and/or security functions related to the selected computing entity. -
FIG. 11 is a schematic block diagram of an embodiment of a networked environment having a system 11 (orsystem 12 or system 13), theanalysis system 10, one ormore networks 14, one or moresystem proficiency resources 22, one or more business associated computing devices 23, one or more publiclyavailable servers 27, one or more subscription basedservers 28, one or moreBOT computing devices 25, and one or more badactor computing devices 26. - In this embodiment, the
system 11 is shown to include a plurality of system assets (SA). A system asset (SA) may include one or more system sub assets (S2A) and a system sub asset (S2A) may include one or more system sub-sub assets (S3A). While being a part of theanalysis system 10, at least one data extraction module (DEM) 80 and at least one system user interface module (SUIM) 81 are installed on thesystem 11. - A system element includes one or more system assets. A system asset (SA) may be a physical asset or a conceptual asset as previously described. As an example, a system element includes a system asset of a computing device. The computing device, which is the SA, includes user applications and an operating system; each of which are sub assets of the computing device (S2A). In addition, the computing device includes a network card, memory devices, etc., which are sub assets of the computing device (S2A). Documents created from a word processing user application are sub assets of the word processing user application (S3A) and sub-sub assets of the computing device.
- As another example, the system asset (SA) includes a plurality of computing devices, printers, servers, etc. of a department of the organization operating the
system 11. In this example, a computing device is a sub asset of the system asset and the software and hardware of the computing devices are sub-sub assets. - The
analysis system 10 may evaluate understanding, implementation, and/or operation of one or more system assets, one or more system sub assets, and/or one or more system sub-sub assets, as an asset, as it supports system functions 82, and/or as it supports security functions. The evaluation may be to produce an evaluation rating, to identify deficiencies, and/or to auto-correct deficiencies. -
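The asset, sub asset, and sub-sub asset relationship described above can be pictured as a small tree, from which an evaluation can target any level of the hierarchy. The tree layout, names, and helper below are illustrative assumptions only.

```python
# Illustrative nesting of a system asset (SA), sub assets (S2A), and sub-sub assets (S3A).
asset_tree = {
    "id": "engineering-dept", "level": "SA",
    "children": [
        {"id": "workstation-17", "level": "S2A",
         "children": [
             {"id": "word-processor", "level": "S3A", "children": []},
             {"id": "network-card", "level": "S3A", "children": []},
         ]},
        {"id": "build-server", "level": "S2A", "children": []},
    ],
}

def assets_at_level(node, level):
    """Collect every asset at a given level (SA, S2A, or S3A) for evaluation."""
    found = [node["id"]] if node["level"] == level else []
    for child in node["children"]:
        found.extend(assets_at_level(child, level))
    return found

print(assets_at_level(asset_tree, "S2A"))  # ['workstation-17', 'build-server']
```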
FIG. 12 is a schematic block diagram of an embodiment of a system 11 that includes a plurality of physical assets 100 coupled to an analysis system 10. The physical assets 100 include an analysis interface device 101, one or more networking devices 102, one or more security devices 103, one or more system admin devices 104, one or more user devices 105, one or more storage devices 106, and/or one or more servers 107. Each device may be a computing entity that includes hardware (HW) components and software (SW) applications (user, device, drivers, and/or system). A device may further include a data extraction module (DEM). - The
analysis interface device 101 includes a data extraction module (DEM) 80 and the systemuser interface module 81 to provide connectivity to theanalysis system 10. With the connectivity, theanalysis system 10 is able to evaluate understanding, implementation, and/or operation of each device, or portion thereof, as an asset, as it supports system functions 82, and/or as it supports security functions. For example, theanalysis system 10 evaluates the understanding ofnetworking devices 102 as an asset. As a more specific example, theanalysis system 10 evaluates how well thenetworking devices 102, its hardware, and its software are understood within the system and/or by the system administrators. The evaluation includes how well are thenetworking devices 102, its hardware, and its software documented; how well are they implemented based on system requirements; how well do they operate based on design and/or system requirements; how well are they maintained per system policies and/or procedures; how well are their deficiencies identified; and/or how well are their deficiencies auto-corrected. -
FIG. 13 is a schematic block diagram of another embodiment of a networked environment having asystem 11 that includes a plurality of system assets coupled to ananalysis system 10. This embodiment is similar to the embodiment ofFIG. 11 with the addition of additional data extraction modules (DEM) 80. In this embodiment, each system asset (SA) is affiliated with itsown DEM 80. This allows theanalysis system 10 to extract data more efficiently than via a single DEM. A further extension of this embodiment is that each system sub asset (S2A) could have itsown DEM 80. As yet a further extension, each system sub-sub asset (S3A) could have itsown DEM 80. -
FIG. 14 is a schematic block diagram of another embodiment of a system 11 that includes physical assets 100 coupled to an analysis system 10. The physical assets 100 include one or more networking devices 102, one or more security devices 103, one or more system admin devices 104, one or more user devices 105, one or more storage devices 106, and/or one or more servers 107. Each device may be a computing entity that includes hardware (HW) components and software (SW) applications (user, system, and/or device). - The
system admin device 104 includes one or moreanalysis system modules 17, which includes a data extraction module (DEM) 80 and the systemuser interface module 81 to provide connectivity to theanalysis system 10. With the connectivity, theanalysis system 10 is able to evaluate understanding, implementation, and/or operation of each device, or portion thereof, as an asset, as it supports system functions 82, and/or as it supports security functions. For example, theanalysis system 10 evaluates the implementation ofnetworking devices 102 to support system functions. As a more specific example, theanalysis system 10 evaluates how well thenetworking devices 102, its hardware, and its software are implemented within the system to support one or more system functions (e.g., managing network traffic, controlling network access per business guidelines, policies, and/or processes, etc.). The evaluation includes how well is the implementation of thenetworking devices 102, its hardware, and its software documented to support the one or more system functions; how well does their implementation support the one or more system functions; how well have their implementation to support the one or more system functions been verified in accordance with policies, processes, etc.; how well are they updated per system policies and/or procedures; how well are their deficiencies in support of the one or more system functions identified; and/or how well are their deficiencies in support of the one or more system functions auto-corrected. -
FIG. 15 is a schematic block diagram of another embodiment of asystem 11 that includes a plurality ofphysical assets 100 coupled to ananalysis system 100. Thephysical assets 100 include ananalysis interface device 101, one ormore networking devices 102, one ormore security devices 103, one or moresystem admin devices 104, one ormore user devices 105, one ormore storage devices 106, and/or one ormore servers 107. Each device may be a computing entity that includes hardware (HW) components and software (SW) applications (user, device, drivers, and/or system). This embodiment is similar to the embodiment ofFIG. 12 with a difference being that the devices 102-107 do not include a data extraction module (DEM) as is shown inFIG. 12 . -
FIG. 16 is a schematic block diagram of another embodiment of a system 11 that includes networking devices 102, security devices 103, servers 107, storage devices 106, and user devices 105. The system 11 is coupled to the network 14, which provides connectivity to the business associate computing device 23. The network 14 is shown to include one or more wide area networks (WAN) 162, one or more wireless LAN (WLAN) and/or LANs 164, and one or more virtual private networks 166. - The
networking devices 102 includes one ormore modems 120, one ormore routers 121, one ormore switches 122, one ormore access points 124, and/or one or more localarea network cards 124. Theanalysis system 10 can evaluate thenetwork devices 102 collectively as assets, as they support system functions, and/or as they support security functions. Theanalysis system 10 may also evaluate each network device individually as an asset, as it supports system functions, and/or as it supports security functions. The analysis system may further evaluate one or more network devices as part of the physical assets of a system aspect (e.g., the system or a portion thereof being evaluated with respect to one or more system criteria and one or more system modes). - The
security devices 103 includes one or moreinfrastructure management tools 125, one or moreencryption software programs 126, one or more identity andaccess management tools 127, one or more dataprotection software programs 128, one or moresystem monitoring tools 129, one or more exploit andmalware protection tools 130, one or morevulnerability management tools 131, and/or one or more data segmentation andboundary tools 132. Note that a tool is a program that functions to develop, repair, and/or enhance other programs and/or hardware. - The
analysis system 10 can evaluate thesecurity devices 103 collectively as assets, as they support system functions, and/or as they support security functions. Theanalysis system 10 may also evaluate each security device individually as an asset, as it supports system functions, and/or as it supports security functions. The analysis system may further evaluate one or more security devices as part of the physical assets of a system aspect (e.g., the system or a portion thereof being evaluated with respect to one or more system criteria and one or more system modes). - The
servers 107 include one ormore telephony servers 133, one ormore ecommerce servers 134, one ormore email servers 135, one ormore web servers 136, and/or one ormore content servers 137. Theanalysis system 10 can evaluate theservers 103 collectively as assets, as they support system functions, and/or as they support security functions. Theanalysis system 10 may also evaluate each server individually as an asset, as it supports system functions, and/or as it supports security functions. The analysis system may further evaluate one or more servers as part of the physical assets of a system aspect (e.g., the system or a portion thereof being evaluated with respect to one or more system criteria and one or more system modes). - The storage devices includes one or more
cloud storage devices 138, one or more storage racks 139 (e.g., a plurality of storage devices mounted in a rack), and/or one ormore databases 140. Theanalysis system 10 can evaluate thestorage devices 103 collectively as assets, as they support system functions, and/or as they support security functions. Theanalysis system 10 may also evaluate each storage device individually as an asset, as it supports system functions, and/or as it supports security functions. The analysis system may further evaluate one or more storage devices as part of the physical assets of a system aspect (e.g., the system or a portion thereof being evaluated with respect to one or more system criteria and one or more system modes). - The
user devices 105 include one or more landline phones 141, one or more IP cameras 144, one or more cell phones 143, one or more user computing devices 145, one or more IP phones 150, one or more video conferencing equipment 148, one or more scanners 151, and/or one or more printers 142. The analysis system 10 can evaluate the user devices 105 collectively as assets, as they support system functions, and/or as they support security functions. The analysis system 10 may also evaluate each user device individually as an asset, as it supports system functions, and/or as it supports security functions. The analysis system may further evaluate one or more user devices as part of the physical assets of a system aspect (e.g., the system or a portion thereof being evaluated with respect to one or more system criteria and one or more system modes). - The
system admin devices 104 includes one or more systemadmin computing devices 146, one or more system computing devices 194 (e.g., data management, access control, privileges, etc.), and/or one or more securitymanagement computing devices 147. Theanalysis system 10 can evaluate thesystem admin devices 103 collectively as assets, as they support system functions, and/or as they support security functions. Theanalysis system 10 may also evaluate each system admin device individually as an asset, as it supports system functions, and/or as it supports security functions. The analysis system may further evaluate one or more system admin devices as part of the physical assets of a system aspect (e.g., the system or a portion thereof being evaluated with respect to one or more system criteria and one or more system modes). -
FIG. 17 is a schematic block diagram of an embodiment of auser computing device 105 that includessoftware 160, a user interface 161, processing resources 163,memory 162 and one ormore networking device 164. The processing resources 163 include one or more processing modules, cache memory, and a video graphics processing module. - The
memory 162 includes non-volatile memory, volatile memory and/or disk memory. The non-volatile memory stores hardware IDs, user credentials, security data, user IDs, passwords, access rights data, device IDs, one or more IP addresses and security software. The volatile memory includes system volatile memory and user volatile memory. The disk memory includes system disk memory and user disk memory. User memory (volatile and/or disk) stores user data and user applications. System memory (volatile and/or disk) stores system applications and system data. - The
user interface 161 includes one or more I/O (input/output) devices such as video displays, keyboards, mice, eye scanners, microphones, speakers, and other devices that interface with one or more users. The user interface 161 further includes one or more physical (PHY) interfaces with supporting software such that the user computing device can interface with peripheral devices. - The
software 160 includes one or more I/O software interfaces (e.g., drivers) that enable the processing module to interface with other components. Thesoftware 160 also includes system applications, user applications, disk memory software interfaces (drivers) and network software interfaces (drivers). - The
networking device 164 may be a network card or network interface that intercouples theuser computing device 105 to devices external to thecomputing device 105 and includes one or more PHY interfaces. For example, the network card is a WLAN card. As another example, the network card is a cellular data network card. As yet another example, the network card is an ethernet card. - The user computing device may further include a
data extraction module 80. This would allow theanalysis system 10 to obtain data directly from the user computing device. Regardless of how theanalysis system 10 obtains data regarding the user computing device, theanalysis system 10 can evaluate the user computing device as an asset, as it supports one or more system functions, and/or as it supports one or more security functions. Theanalysis system 10 may also evaluate each element of the user computing device (e.g., each software application, each drive, each piece of hardware, etc.) individually as an asset, as it supports one or more system functions, and/or as it supports one or more security functions. -
FIG. 18 is a schematic block diagram of an embodiment of aserver 107 that includessoftware 170, processingresources 171,memory 172 and one ormore networking resources 173. Theprocessing resources 171 include one or more processing modules, cache memory, and a video graphics processing module. Thememory 172 includes non-volatile memory, volatile memory, and/or disk memory. The non-volatile memory stores hardware IDs, user credentials, security data, user IDs, passwords, access rights data, device IDs, one or more IP addresses and security software. The volatile memory includes system volatile memory and shared volatile memory. The disk memory include server disk memory and shared disk memory. - The
software 170 includes one or more I/O software interfaces (e.g., drivers) that enable thesoftware 170 to interface with other components. Thesoftware 170 includes system applications, server applications, disk memory software interfaces (drivers), and network software interfaces (drivers). Thenetworking resources 173 may be one or more network cards that provides a physical interface for the server to a network. - The
server 107 may further include adata extraction module 80. This would allow theanalysis system 10 to obtain data directly from the server. Regardless of how theanalysis system 10 obtains data regarding the server, theanalysis system 10 can evaluate the server as an asset, as it supports one or more system functions, and/or as it supports one or more security functions. Theanalysis system 10 may also evaluate each element of the server (e.g., each software application, each drive, each piece of hardware, etc.) individually as an asset, as it supports one or more system functions, and/or as it supports one or more security functions. -
FIG. 19 is a schematic block diagram of another embodiment of a networked environment having a system 11 (orsystem 12 or system 13), theanalysis system 10, one ormore networks 14, one or moresystem proficiency resources 22, one or more business associated computing devices 23, one or more publiclyavailable servers 27, one or more subscription basedservers 28, one or moreBOT computing devices 25, and one or more badactor computing devices 26. - In this embodiment, the
system 11 is shown to include a plurality of system functions (SF). A system function (SF) may include one or more system sub functions (S2F) and a system sub function (S2F) may include one or more system sub-sub functions (S3F). While being a part of theanalysis system 10, at least one data extraction module (DEM) 80 and at least one system user interface module (SUIM) 81 are installed on thesystem 11. - A system function (SF) includes one or more business operations, one or more compliance requirements, one or more data flow objectives, one or more data access control objectives, one or more data integrity objectives, one or more data storage objectives, one or more data use objectives, and/or one or more data dissemination objectives. Business operation system functions are the primary purpose for the
system 11. Thesystem 11 is designed and built to support the operations of the business, which vary from business to business. - In general, business operations include operations regarding critical business functions, support functions for core business, product and/or service functions, risk management objectives, business ecosystem objectives, and/or business contingency plans. The business operations may be divided into executive management operations, information technology operations, marketing operations, engineering operations, manufacturing operations, sales operations, accounting operations, human resource operations, legal operations, intellectual property operations, and/or finance operations. Each type of business operation includes sub-business operations, which, in turn may include its own sub-operations.
- For example, engineering operations includes a system function of designing new products and/or product features. The design of a new product or feature involves sub-functions of creating design specifications, creating a design based on the design specification, and testing the design through simulation and/or prototyping. Each of these steps includes sub-steps. For example, for the design of a software program, the design process includes the sub-sub system functions of creating a high level design from the design specifications; creating a low level design from the high level design; and the creating code from the low level design.
- A compliance requirement may be a regulatory compliance requirement, a standard compliance requirement, a statutory compliance requirement, and/or an organization compliance requirement. For example, there are a regulatory compliance requirements when the organization has governmental agencies as clients. An example of a standard compliance requirement, encryption protocols are often standardized. Data Encryption Standard (DES), Advanced Encryption Standard (AES), RSA (Rivest-Shamir-Adleman) encryption, and public-key infrastructure (PKI) are examples of encryption type standards. HIPAA (health Insurance Portability and Accountability Act) is an example of a statutory compliance requirement. Examples of organization compliance requirements include use of specific vendor hardware, use of specific vendor software, use of encryption, etc.
- A data flow objective is regarding where data can flow, at what rate data can and should flow, the manner in which the data flow, and/or the means over which the data flows. As an example of a data flow objective, data for remote storage is to flow via a secure data pipeline using a particular encryption protocol. As another example of a data flow objective, ingesting of data should have the capacity to handle a data rate of 100 giga-bits per second.
- A data access control objective established which type of personnel and/or type of assets can access specific types of data. For example, certain members of the corporate department and human resources department have access to employee personnel files, while all other members of the organization do not.
- A data integrity objective establishes a reliability that, when data is retrieved, it is the data that was stored, i.e., it was not lost, damaged, or corrupted. An example of a data integrity protocol is Cyclic Redundancy Check (CRC). Another example of a data integrity protocol is a hash function.
- A data storage objective establishes the manner in which data is to be stored. For example, a data storage objective is to store data in a RAID system; in particular, a
RAID 6 system. As another example, a data storage objective is regarding archiving of data and the type of storage to use for archived data. - A data use objective establishes the manner in which data can be used. For example, if the data is for sale, then the data use objective would establish what type of data is for sale, at what price, and what is the target customer. As another example, a data use objective establishes read only privileges, editing privileges, creation privileges, and/or deleting privileges.
- A data dissemination objective establishes how the data can be shared. For example, a data dissemination objective is regarding confidential information and indicates how the confidential information should be marked, who in can be shared with internally, and how it can be shared externally, if at all.
- The
analysis system 10 may evaluate understanding, implementation, and/or operation of one or more system functions, one or more system sub functions, and/or one or more system sub-sub functions. The evaluation may be to produce an evaluation rating, to identify deficiencies, and/or to auto-correct deficiencies. For example, theanalysis system 10 evaluates the understanding of the software development policies and/or processes. As another example, theanalysis system 10 evaluates the use of software development policies and/or processes to implement a software program. As yet another example,analysis system 10 evaluates the operation of the software program with respect to the business operation, the design specifications, and/or the design. -
FIG. 20 is a schematic block diagram of another embodiment of a system 11 that includes, from a business operations perspective, divisions 181-183, departments, and groups. The business structure of the system 11, as in most businesses, is governed by a corporate department 180. The corporate department may have its own sub-system with structures and software tailored to the corporate function of the system. Organized under the corporate department 180 are divisions: division 1 181, division 2 182, through division k 183. These divisions may be different business divisions of a multi-national conglomerate, or may be different functional divisions of a business, e.g., finance, marketing, sales, legal, engineering, research and development, etc. Under each division 181-183 are a plurality of departments. Under each department are a number of groups.
analysis system 10 is able to use this generic structure to create and categorize the business structure of the system 11. The creation and categorization of the business structure is done in a number of ways. First, the analysis system 10 accesses corporate organization documents for the business, receives feedback from one or more persons in the business, and uses these documents and data to initially determine, at least partially, the business structure. Second, the analysis system 10 determines the network structure of the other system, investigates identities of components of the network structure, and constructs a sub-division of the other system. Then, based upon software used within the sub-division, data character, and usage character, the analysis system 10 identifies more specifically the function of the divisions, departments, and groups. In doing so, the analysis system 10 uses information known of third-party systems to assist in the analysis. - With the abstraction of the business structure, differing portions of the business structure may have different levels of abstraction from a component/sub-component/sub-sub-component/system/sub-system/sub-sub-system level based upon characters of differing segments of the business. For example, a more detailed level of abstraction may be taken for elements of the corporate and security departments of the business than for other departments of the business.
-
FIG. 21 is a schematic block diagram of another embodiment of a business structure of the system 11. Shown are a corporate department 180, an IT department 181, division 2 182 through division “k” 183, where k is an integer equal to or greater than 3. The corporate department 180 includes a plurality of hardware devices 260, a plurality of software applications 262, a plurality of business policies 264, a plurality of business procedures 266, local networking 268, a plurality of security policies 270, a plurality of security procedures 272, data protection resources 272, data access resources 276, data storage devices 278, a personnel hierarchy 280, and external networking 282. Based upon an assessment of these assets of the corporate department 180, the analysis system 10 may evaluate the understanding, implementation, and/or operation of the assets, system functions, and/or security functions of the corporate department from a number of different perspectives, as will be described further with reference to one or more of the subsequent figures. - Likewise, the
IT department 181 includes a plurality ofhardware devices 290, a plurality ofsoftware applications 292, a plurality of business policies 294, a plurality of business procedures 296,local networking 298, a plurality ofsecurity policies 300, a plurality ofsecurity procedures 302,data protection resources 304,data access resources 306,data storage devices 308, apersonnel hierarchy 310, andexternal networking 312. Based upon an assessment of these assets of theIT department 181, theanalysis system 10 may evaluate the understanding, implementation, and/or operation of the assets, system functions, and/or security functions of the IT department from a number of different perspectives, as will be described further with reference to one or more of the subsequent figures. -
FIG. 22 is a schematic block diagram of another embodiment of adivision 182 of a system that includes multiple departments. The departments include amarketing department 190, anoperations department 191, anengineering department 192, amanufacturing department 193, asales department 194, and anaccounting department 195. Each of the departments includes a plurality of components relevant to support the corresponding business functions and/or security functions of the division and of the department. In particular, themarketing department 190 includes a plurality of devices, software, security policies, security procedures, business policies, business procedures, data protection resources, data access resources, data storage resources, a personnel hierarchy, local network resources, and external network resources. - Likewise, each of the
operations department 191, theengineering department 192, themanufacturing department 193, thesales department 194, and theaccounting department 195 includes a plurality of devices, software, security policies, security procedures, business policies, business procedures, data protection resources, data access resources, data storage resources, a personnel hierarchy, local network resources, and external network resources. - Further, within the business structure, a service mesh may be established to more effectively protect important portions of the business from other portions of the business. The service mesh may have more restrictive safety and security mechanisms for one part of the business than another portion of the business, e.g., manufacturing department service mesh is more restrictive than the sales department service mesh.
- The
analysis system 10 may evaluate the understanding, implementation, and/or operation of the assets, system functions, and/or security functions of thedivision 182, of each department, of each type of system elements, and/or each system element. For example, theanalysis system 10 evaluates the data access policies and procedures of each department. As another example, theanalysis system 10 evaluates the data storage policies, procedures, design, implementation, and/or operation of data storage within theengineering department 192. -
FIG. 23 is a schematic block diagram of another embodiment of a networked environment having a system 11 (or system 12 or system 13), the analysis system 10, one or more networks 14, one or more system proficiency resources 22, one or more business associated computing devices 23, one or more publicly available servers 27, one or more subscription based servers 28, one or more BOT computing devices 25, and one or more bad actor computing devices 26. - In this embodiment, the
system 11 is shown to include a plurality of security functions (SEF). A security function (SEF) may include one or more security sub functions (SE2F), and a security sub function (SE2F) may include one or more security sub-sub functions (SE3F). While being a part of the analysis system 10, at least one data extraction module (DEM) 80 and at least one system user interface module (SUIM) 81 are installed on the system 11. As used herein, a security function includes a security operation, a security requirement, a security policy, and/or a security objective with respect to data, system access, system design, system operation, and/or system modifications (e.g., updates, expansion, part replacement, maintenance, etc.). - A security function (SEF) includes one or more threat detection functions, one or more threat avoidance functions, one or more threat resolution functions, one or more threat recovery functions, one or more threat assessment functions, one or more threat impact functions, one or more threat tolerance functions, one or more business security functions, one or more governance security functions, one or more data at rest protection functions, one or more data in transit protection functions, and/or one or more data loss prevention functions.
- A threat detection function includes detecting unauthorized system access; detecting unauthorized data access; detecting unauthorized data changes; detecting uploading of worms, viruses, and the like; and/or detecting bad actor attacks. A threat avoidance function includes avoiding unauthorized system access; avoiding unauthorized data access; avoiding unauthorized data changes; avoiding uploading of worms, viruses, and the like; and/or avoiding bad actor attacks.
- A threat resolution function includes resolving unauthorized system access; resolving unauthorized data access; resolving unauthorized data changes; resolving uploading of worms, viruses, and the like; and/or resolving bad actor attacks. A threat recovery function includes recovering from an unauthorized system access; recovering from an unauthorized data access; recovering from an unauthorized data change; recovering from an uploading of worms, viruses, and the like; and/or recovering from a bad actor attack.
- A threat assessment function includes assessing the likelihood of and/or mechanisms for unauthorized system access; assessing the likelihood of and/or mechanisms for unauthorized data access; assessing the likelihood of and/or mechanisms for unauthorized data changes; assessing the likelihood of and/or mechanisms for uploading of worms, viruses, and the like; and/or assessing the likelihood of and/or mechanisms for bad actor attacks.
- A threat impact function includes determining an impact on business operations from an unauthorized system access; determining an impact on business operations from an unauthorized data access; determining an impact on business operations from an unauthorized data change; determining an impact on business operations from an uploading of worms, viruses, and the like; and/or determining an impact on business operations from a bad actor attack.
- A threat tolerance function includes determining a level of tolerance for an unauthorized system access; determining a level of tolerance for an unauthorized data access; determining a level of tolerance for an unauthorized data change; determining a level of tolerance for an uploading of worms, viruses, and the like; and/or determining a level of tolerance for a bad actor attack.
- A business security function includes data encryption, handling of third party data, releasing data to the public, and so on. A governance security function includes HIPAA compliance; data creation, data use, data storage, and/or data dissemination for specific types of customers (e.g., governmental agency); and/or the like.
- A data at rest protection function includes a data access protocol (e.g., user ID, password, etc.) to store data in and/or retrieve data from system data storage; data storage requirements, which include type of storage, location of storage, and storage capacity; and/or other data storage security functions.
- A data in transit protection function includes using a specific data transportation protocol (e.g., TCP/IP); using an encryption function prior to data transmission; using an error encoding function for data transmission; using a specified data communication path for data transmission; and/or other means to protect data in transit. A data loss prevention function includes a storage encoding technique (e.g., single parity encoding, double parity encoding, erasure encoding, etc.); a storage backup technique (e.g., one or two backup copies, erasure encoding, etc.); hardware maintenance and replacement policies and processes; and/or other means to prevent loss of data.
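- As an illustration of the single-parity storage encoding mentioned above (offered as a sketch only, not as the disclosed method), an XOR parity block computed over equal-length data blocks allows any one lost block to be reconstructed from the surviving blocks and the parity block.

```python
from functools import reduce


def xor_blocks(blocks):
    """XOR equal-length byte blocks together (single-parity encoding)."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)


data_blocks = [b"blockAAA", b"blockBBB", b"blockCCC"]   # equal-length data blocks
parity = xor_blocks(data_blocks)                        # parity block

# Simulate losing one block and recovering it from the survivors plus the parity.
lost_index = 1
survivors = [b for i, b in enumerate(data_blocks) if i != lost_index]
recovered = xor_blocks(survivors + [parity])
assert recovered == data_blocks[lost_index]
```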
- The
analysis system 10 may evaluate understanding, implementation, and/or operation of one or more security functions, one or more security sub functions, and/or one or more security sub-sub functions. The evaluation may be to produce an evaluation rating, to identify deficiencies, and/or to auto-correct deficiencies. For example, the analysis system 10 evaluates the understanding of the threat detection policies and/or processes. As another example, the analysis system 10 evaluates the use of threat detection policies and/or processes to implement security assets. As yet another example, the analysis system 10 evaluates the operation of the security assets with respect to the threat detection operation, the threat detection design specifications, and/or the threat detection design.
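- Because a security function may include security sub functions, which in turn may include security sub-sub functions, the SEF/SE2F/SE3F relationship forms a simple tree to which an evaluation rating can be attached at any level. The sketch below is a hypothetical representation only; the class and field names are assumptions, not the disclosed data structure.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class SecurityFunction:
    """A security function (SEF); children model the SE2F and SE3F levels."""
    name: str
    rating: Optional[float] = None                  # evaluation rating, once produced
    children: List["SecurityFunction"] = field(default_factory=list)

    def walk(self):
        """Yield this function and every sub/sub-sub function beneath it."""
        yield self
        for child in self.children:
            yield from child.walk()


threat_detection = SecurityFunction(
    "threat detection",
    children=[
        SecurityFunction("detect unauthorized system access"),
        SecurityFunction("detect unauthorized data access"),
        SecurityFunction("detect malware upload",
                         children=[SecurityFunction("detect worm upload"),
                                   SecurityFunction("detect virus upload")]),
    ],
)

print([f.name for f in threat_detection.walk()])
```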
- FIG. 24 is a schematic block diagram of an embodiment of an engineering department 200 of a division 182 that reports to a corporate department 180 of a system 11. The engineering department 200 includes engineering assets, engineering system functions, and engineering security functions. The engineering assets include security HW & SW, user device HW & SW, networking HW & SW, system HW & SW, system monitoring HW & SW, and/or other devices that include HW and/or SW. - In this example, the organization's system functions include business operations, compliance requirements, data flow objectives, data access objectives, data integrity objectives, data storage objectives, data use objectives, and/or data dissemination objectives. These system functions apply throughout the system, including throughout
division 2 and for the engineering department 200 of division 2. - The
division 182, however, can issue more restrictive, more secure, and/or more detailed system functions. In this example, the division has issued more restrictive, secure, and/or detailed business operations (business operations +) and more restrictive, secure, and/or detailed data access functions (data access +). Similarly, the engineering department 200 may issue more restrictive, more secure, and/or more detailed system functions than the organization and/or the division. In this example, the engineering department has issued more restrictive, secure, and/or detailed business operations (business operations ++) than the division; has issued more restrictive, secure, and/or detailed data flow functions (data flow ++) than the organization; has issued more restrictive, secure, and/or detailed data integrity functions (data integrity ++) than the organization; and has issued more restrictive, secure, and/or detailed data storage functions (data storage ++) than the organization. - For example, an organization-level business operation regarding the design of new products and/or of new product features specifies high-level design and verification guidelines. The division issued more detailed design and verification guidelines. The engineering department issued even more detailed design and verification guidelines.
- The
analysis system 10 can evaluate the compliance with the system functions for the various levels. In addition, the analysis system 10 can evaluate that the division-issued system functions are compliant with the organization-issued system functions and/or are more restrictive, more secure, and/or more detailed. Similarly, the analysis system 10 can evaluate that the engineering department-issued system functions are compliant with the organization- and division-issued system functions and/or are more restrictive, more secure, and/or more detailed. - As is further shown in this example, the organization security functions include data at rest protection, data loss prevention, data in transit protection, threat management, security governance, and business security. The division has issued more restrictive, more secure, and/or more detailed business security functions (business security +). The engineering department has issued more restrictive, more secure, and/or more detailed data at rest protection (data at rest protection ++), data loss prevention (data loss prevention ++), and data in transit protection (data in transit ++).
- The
analysis system 10 can evaluate the compliance with the security functions for the various levels. In addition, the analysis system 10 can evaluate that the division-issued security functions are compliant with the organization-issued security functions and/or are more restrictive, more secure, and/or more detailed. Similarly, the analysis system 10 can evaluate that the engineering department-issued security functions are compliant with the organization- and division-issued security functions and/or are more restrictive, more secure, and/or more detailed.
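- One way to picture the organization, division, and department relationship described above is as numeric restriction levels per security function, where a lower tier complies if each of its functions is at least as restrictive as the tier above it (with "+" and "++" denoting increasingly restrictive issuances). The following sketch is an assumption about how such a compliance check might look; it is not taken from the disclosure.

```python
# Hypothetical restriction levels per security function:
# 0 = organization baseline, 1 = more restrictive ("+"), 2 = even more restrictive ("++").
organization = {"data at rest protection": 0, "data loss prevention": 0,
                "data in transit protection": 0, "business security": 0}
division = dict(organization, **{"business security": 1})            # business security +
department = dict(division, **{"data at rest protection": 2,         # data at rest protection ++
                               "data loss prevention": 2,            # data loss prevention ++
                               "data in transit protection": 2})     # data in transit ++


def is_compliant(lower: dict, upper: dict) -> bool:
    """A lower tier complies if every function is at least as restrictive as the tier above."""
    return all(lower.get(fn, 0) >= level for fn, level in upper.items())


assert is_compliant(division, organization)
assert is_compliant(department, division)
```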
- FIG. 25 is a schematic block diagram of an example of an analysis system 10 evaluating a system element under test 91 of a system 11. The system element under test 91 corresponds to a system aspect (or system sector), which includes one or more system elements, one or more system criteria, and one or more system modes. - In this example, the system criteria are shown to include guidelines, system requirements, system design & system build (system implementation), and the resulting system. The
analysis system 10 may evaluate the system, or portion thereof, during initial system requirement development, initial design of the system, initial build of the system, operation of the initial system, revisions to the system requirements, revisions to the system design, revisions to the system build, and/or operation of the revised system. A revision to a system includes adding assets, system functions, and/or security functions; deleting assets, system functions, and/or security functions; and/or modifying assets, system functions, and/or security functions. - The guidelines include one or more of business objectives, security objectives, NIST cybersecurity guidelines, system objectives, governmental and/or regulatory requirements, third party requirements, etc. and are used to help create the system requirements. System requirements outline the hardware requirements for the system, the software requirements for the system, the networking requirements for the system, the security requirements for the system, the logical data flow for the system, the hardware architecture for the system, the software architecture for the system, the logical inputs and outputs of the system, the system input requirements, the system output requirements, the system's storage requirements, the processing requirements for the system, system controls, system backup, data access parameters, and/or specification for other system features.
- The system requirements are used to help create the system design. The system design includes a high level design (HLD), a low level design (LLD), a detailed level design (DLD), and/or other design levels. High level design is a general design of the system. It includes a description of system architecture; a database design; an outline of platforms, services, and processes the system will require; a description of relationships between the assets, system functions, and security functions; diagrams regarding data flow; flowcharts; data structures; and/or other documentation to enable more detailed design of the system.
- Low level design is a component level design that is based on the HLD. It provides the details and definitions for every system component (e.g., HW and SW). In particular, LLD specifies the features of the system components and component specifications. Detailed level design describes the interaction of every component of the system.
- The system is built based on the design to produce a resulting system (i.e., the implemented assets). The assets of system operate to perform the system functions and/or security functions.
- The
analysis system 10 can evaluate the understanding, implementation, operation and/or self-analysis of thesystem 11 at one or more system criteria level (e.g., guidelines, system requirements, system implementation (e.g., design and/or build), and system) in a variety of ways. - The
analysis system 10 evaluates the understanding of the system (or portion thereof) by determining a knowledge level of the system and/or maturity level of system. For example, an understanding evaluation interprets what is known about the system and compares it to what should be known about the system. - As a more specific example, the analysis system evaluates the understanding of the guidelines. For instance, the
analysis system 10 evaluates the policies, processes, procedures, automation, certifications, documentation, and/or other evaluation metric (e.g., evaluation metrics) regarding the thoroughness of the guidelines to facilitate the understanding of the guidelines. The more incomplete the data regarding the evaluation metrics, the more likely the guidelines are incomplete; which indicates a lack of understanding. The fewer numbers of and/or incompleteness of policies, processes, procedures, automation, documentation, certification, and/or other evaluation metric regarding the creation and/or use of the guidelines, the more likely the guidelines are not well understood (e.g., lower level of knowledge and/or of system maturity) resulting in a low evaluation rating.
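- The pattern in these understanding evaluations, in which fewer or less complete policies, processes, procedures, automation, certifications, and documentation yield a lower rating, can be illustrated with a toy scoring function. The equal weighting and the 0-100 scale below are assumptions for illustration; they are not values from the disclosure.

```python
EVALUATION_METRICS = ("policy", "process", "procedure",
                      "automation", "certification", "documentation")


def understanding_rating(completeness: dict) -> float:
    """Toy rating on a 0-100 scale: the average completeness (0.0-1.0) of the
    evaluation metrics; more incomplete data yields a lower rating."""
    scores = [completeness.get(metric, 0.0) for metric in EVALUATION_METRICS]
    return round(100 * sum(scores) / len(scores), 1)


# Example: guidelines are documented but few processes/procedures exist.
print(understanding_rating({"policy": 0.9, "documentation": 0.8,
                            "process": 0.3, "procedure": 0.2}))  # low-ish rating
```
- As another more specific example of an understanding evaluation, the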
analysis system 10 evaluates the understanding of the system requirements. For instance, theanalysis system 10 evaluates the policies, processes, procedures, automation, certifications, documentation, and/or other evaluation metric (e.g., evaluation metrics) regarding the thoroughness of the system requirements to facilitate the understanding of the system requirements. The more incomplete the data regarding the evaluation metrics, the more likely the system requirements are incomplete; which indicates a lack of understanding. The fewer numbers of and/or incompleteness of policies, processes, procedures, automation, documentation, certification, and/or other evaluation metric regarding the creation and/or use of the system requirements, the more likely the system requirements are not well understood (e.g., lower level of knowledge and/or of system maturity) resulting in a low evaluation rating. - As another more specific example of an understanding evaluation, the
analysis system 10 evaluates the understanding of the system design. For instance, theanalysis system 10 evaluates the policies, processes, procedures, automation, certifications, documentation, and/or other evaluation metric (e.g., evaluation metrics) regarding the thoroughness of the system design to facilitate the understanding of the system design. The more incomplete the data regarding the evaluation metrics, the more likely the system design is incomplete; which indicates a lack of understanding. The fewer numbers of and/or incompleteness of policies, processes, procedures, automation, documentation, certification, and/or other evaluation metric regarding the creation and/or use of the system design, the more likely the system design is not well understood (e.g., lower level of knowledge and/or of system maturity) resulting in a low evaluation rating. - As another more specific example of an understanding evaluation, the
analysis system 10 evaluates the understanding of the system build. For instance, theanalysis system 10 evaluates the policies, processes, procedures, automation, certifications, documentation, and/or other evaluation metric (e.g., evaluation metrics) regarding the thoroughness of the system build to facilitate the understanding of the system build. The more incomplete the data regarding the evaluation metrics, the more likely the system build is incomplete; which indicates a lack of understanding. The fewer numbers of and/or incompleteness of policies, processes, procedures, automation, documentation, certification, and/or other evaluation metric regarding the execution of and/or use of the system build, the more likely the system build is not well understood (e.g., lower level of knowledge and/or of system maturity) resulting in a low evaluation rating. - As another more specific example of an understanding evaluation, the
analysis system 10 evaluates the understanding of the system functions. For instance, the analysis system 10 evaluates the policies, processes, procedures, automation, certifications, documentation, and/or other evaluation metric (e.g., evaluation metrics) regarding the thoroughness of the system functions to facilitate the understanding of the system functions. The more incomplete the data regarding the evaluation metrics, the more likely the system functions are incomplete; which indicates a lack of understanding. The fewer numbers of and/or incompleteness of policies, processes, procedures, automation, documentation, certification, and/or other evaluation metric regarding the execution of and/or use of the system functions, the more likely the system functions are not well understood (e.g., lower level of knowledge and/or of system maturity) resulting in a low evaluation rating. - As another more specific example of an understanding evaluation, the
analysis system 10 evaluates the understanding of the security functions. For instance, the analysis system 10 evaluates the policies, processes, procedures, automation, certifications, documentation, and/or other evaluation metric (e.g., evaluation metrics) regarding the thoroughness of the security functions to facilitate the understanding of the security functions. The more incomplete the data regarding the evaluation metrics, the more likely the security functions are incomplete; which indicates a lack of understanding. The fewer numbers of and/or incompleteness of policies, processes, procedures, automation, documentation, certification, and/or other evaluation metric regarding the execution of and/or use of the security functions, the more likely the security functions are not well understood (e.g., lower level of knowledge and/or of system maturity) resulting in a low evaluation rating. - As another more specific example of an understanding evaluation, the
analysis system 10 evaluates the understanding of the system assets. For instance, theanalysis system 10 evaluates the policies, processes, procedures, automation, certifications, documentation, and/or other evaluation metric (e.g., evaluation metrics) regarding the thoroughness of the system assets to facilitate the understanding of the system assets. The more incomplete the data regarding the evaluation metrics, the more likely the system assets are incomplete; which indicates a lack of understanding. The fewer numbers of and/or incompleteness of policies, processes, procedures, automation, documentation, certification, and/or other evaluation metric regarding the selection, identification, and/or use of the system assets, the more likely the system assets are not well understood (e.g., lower level of knowledge and/or of system maturity) resulting in a low evaluation rating. - The
analysis system 10 also evaluates the implementation of the system (or portion thereof) by determining how well the system is being developed, was developed, and/or is being updated. For example, the analysis system 10 determines how well the assets, system functions, and/or security functions are being developed, have been developed, and/or are being updated based on the guidelines, the system requirements, the system design, and/or the system build. - As a more specific example of an implementation evaluation, the
analysis system 10 evaluates the implementation of the guidelines. For instance, theanalysis system 10 evaluates the policies, processes, procedures, automation, certifications, documentation, and/or other evaluation metric (e.g., evaluation metrics) regarding the development of the guidelines. The more incomplete the data regarding the evaluation metrics, the more likely the development of the guidelines is incomplete. The fewer numbers of and/or incompleteness of policies, processes, procedures, automation, documentation, certification, and/or other evaluation metric regarding the development of the guidelines, the more likely the guidelines are not well developed (e.g., lower level of system development maturity) resulting in a low evaluation rating. - As another more specific example of an implementation evaluation, the
analysis system 10 evaluates the implementation of the system requirements. For instance, theanalysis system 10 evaluates the policies, processes, procedures, automation, certifications, documentation, and/or other evaluation metric (e.g., evaluation metrics) regarding the development of the system requirements. The more incomplete the data regarding the evaluation metrics, the more likely the development of the system requirements is incomplete. The fewer numbers of and/or incompleteness of policies, processes, procedures, automation, documentation, certification, and/or other evaluation metric regarding the development of the system requirements, the more likely the system requirements are not well developed (e.g., lower level of system development maturity) resulting in a low evaluation rating. - As another more specific example of an implementation evaluation, the
analysis system 10 evaluates the implementation of the system design. For instance, theanalysis system 10 evaluates the policies, processes, procedures, automation, certifications, documentation, and/or other evaluation metric (e.g., evaluation metrics) regarding the development of the system design. The more incomplete the data regarding the evaluation metrics, the more likely the development of the system design is incomplete. The fewer numbers of and/or incompleteness of policies, processes, procedures, automation, documentation, certification, and/or other evaluation metric regarding the development of the system design, the more likely the system design is not well developed (e.g., lower level of system development maturity) resulting in a low evaluation rating. - As another more specific example of an implementation evaluation, the
analysis system 10 evaluates the implementation of the system build. For instance, theanalysis system 10 evaluates the policies, processes, procedures, automation, certifications, documentation, and/or other evaluation metric (e.g., evaluation metrics) regarding the development of the system build. The more incomplete the data regarding the evaluation metrics, the more likely the development of the system build is incomplete. The fewer numbers of and/or incompleteness of policies, processes, procedures, automation, documentation, certification, and/or other evaluation metric regarding the development of the system build, the more likely the system build is not well developed (e.g., lower level of system development maturity) resulting in a low evaluation rating. - As another more specific example of an implementation evaluation, the
analysis system 10 evaluates the implementation of the system functions. For instance, theanalysis system 10 evaluates the policies, processes, procedures, automation, certifications, documentation, and/or other evaluation metric (e.g., evaluation metrics) regarding the development of the system functions. The more incomplete the data regarding the evaluation metrics, the more likely the development of the system functions is incomplete. The fewer numbers of and/or incompleteness of policies, processes, procedures, automation, documentation, certification, and/or other evaluation metric regarding the development of the system functions, the more likely the system functions are not well developed (e.g., lower level of system development maturity) resulting in a low evaluation rating. - As another more specific example of an implementation evaluation, the
analysis system 10 evaluates the implementation of the security functions. For instance, theanalysis system 10 evaluates the policies, processes, procedures, automation, certifications, documentation, and/or other evaluation metric (e.g., evaluation metrics) regarding the development of the security functions. The more incomplete the data regarding the evaluation metrics, the more likely the development of the security functions is incomplete. The fewer numbers of and/or incompleteness of policies, processes, procedures, automation, documentation, certification, and/or other evaluation metric regarding the development of the security functions, the more likely the security functions are not well developed (e.g., lower level of system development maturity) resulting in a low evaluation rating. - As another more specific example of an implementation evaluation, the
analysis system 10 evaluates the implementation of the system assets. For instance, theanalysis system 10 evaluates the policies, processes, procedures, automation, certifications, documentation, and/or other evaluation metric (e.g., evaluation metrics) regarding the development of the system assets. The more incomplete the data regarding the evaluation metrics, the more likely the development of the system assets is incomplete. The fewer numbers of and/or incompleteness of policies, processes, procedures, automation, documentation, certification, and/or other evaluation metric regarding the development of the system assets, the more likely the system assets are not well developed (e.g., lower level of system development maturity) resulting in a low evaluation rating. - The
analysis system 10 also evaluates the operation of the system (or portion thereof) by determining how well the system fulfills its objectives. For example, the analysis system 10 determines how well the assets, system functions, and/or security functions fulfill the guidelines, the system requirements, the system design, the system build, the objectives of the system, and/or other purposes of the system. - As a more specific example of an operation evaluation, the
analysis system 10 evaluates the operation (i.e., fulfillment) of the guidelines by the system requirements. For instance, theanalysis system 10 evaluates the policies, processes, procedures, automation, certifications, documentation, and/or other evaluation metric (e.g., evaluation metrics) regarding the fulfillment of the guidelines by the system requirements. The more incomplete the data regarding the evaluation metrics, the more likely the fulfillment of the guidelines by the system requirements is incomplete. The fewer numbers of and/or incompleteness of policies, processes, procedures, automation, documentation, certification, and/or other evaluation metric regarding the fulfillment of the guidelines by the system requirements, the more likely the system requirements does not adequately fulfill the guidelines (e.g., lower level of system development maturity) resulting in a low evaluation rating. - As another more specific example of an operation evaluation, the
analysis system 10 evaluates the operation (i.e., fulfillment) of the guidelines and/or the system requirements by the system design. For instance, theanalysis system 10 evaluates the policies, processes, procedures, automation, certifications, documentation, and/or other evaluation metric (e.g., evaluation metrics) regarding the fulfillment of the guidelines and/or the system requirements by the system design. The more incomplete the data regarding the evaluation metrics, the more likely the fulfillment of the guidelines and/or the system requirements by the system design is incomplete. The fewer numbers of and/or incompleteness of policies, processes, procedures, automation, documentation, certification, and/or other evaluation metric regarding the fulfillment of the guidelines and/or the system requirements by the system design, the more likely the system design does not adequately fulfill the guidelines and/or the system requirements (e.g., lower level of system operation maturity) resulting in a low evaluation rating. - As another more specific example of an operation evaluation, the
analysis system 10 evaluates the operation (i.e., fulfillment) of the guidelines, the system requirements, and/or the system design by the system build. For instance, theanalysis system 10 evaluates the policies, processes, procedures, automation, certifications, documentation, and/or other evaluation metric (e.g., evaluation metrics) regarding the fulfillment of the guidelines, the system requirements, and/or the system design by the system build. The more incomplete the data regarding the evaluation metrics, the more likely the fulfillment of the guidelines, the system requirements, and/or the system design by the system build is incomplete. The fewer numbers of and/or incompleteness of policies, processes, procedures, automation, documentation, certification, and/or other evaluation metric regarding the fulfillment of the guidelines, the system requirements, and/or the system design by the system build, the more likely the system build does not adequately fulfill the guidelines, the system requirements, and/or the system design (e.g., lower level of system operation maturity) resulting in a low evaluation rating. - As another more specific example of an operation evaluation, the
analysis system 10 evaluates the operation (i.e., fulfillment) of the guidelines, the system requirements, the system design, the system build, and/or objectives by the operation of the system in performing the system functions. For instance, theanalysis system 10 evaluates the policies, processes, procedures, automation, certifications, documentation, and/or other evaluation metric (e.g., evaluation metrics) regarding the fulfillment of the guidelines, the system requirements, the system design, the system build, and/or objectives regarding the performance of the system functions by the system. The more incomplete the data regarding the evaluation metrics, the more likely the fulfillment of the guidelines, the system requirements, the system design, the system, and/or the objectives regarding the system functions is incomplete. The fewer numbers of and/or incompleteness of policies, processes, procedures, automation, documentation, certification, and/or other evaluation metric regarding the fulfillment of the guidelines, the system requirements, the system design, the system build, and/or the objectives, the more likely the system does not adequately fulfill the guidelines, the system requirements, the system design, the system build, and/or the objectives regarding the system functions (e.g., lower level of system operation maturity) resulting in a low evaluation rating. - As another more specific example of an operation evaluation, the
analysis system 10 evaluates the operation (i.e., fulfillment) of the guidelines, the system requirements, the system design, the system build, and/or objectives by the operation of the system in performing the security functions. For instance, theanalysis system 10 evaluates the policies, processes, procedures, automation, certifications, documentation, and/or other evaluation metric (e.g., evaluation metrics) regarding the fulfillment of the guidelines, the system requirements, the system design, the system build, and/or objectives regarding the performance of the security functions by the system. The more incomplete the data regarding the evaluation metrics, the more likely the fulfillment of the guidelines, the system requirements, the system design, the system, and/or the objectives regarding the security functions is incomplete. The fewer numbers of and/or incompleteness of policies, processes, procedures, automation, documentation, certification, and/or other evaluation metric regarding the fulfillment of the guidelines, the system requirements, the system design, the system build, and/or the objectives, the more likely the system does not adequately fulfill the guidelines, the system requirements, the system design, the system build, and/or the objectives regarding the security functions (e.g., lower level of system operation maturity) resulting in a low evaluation rating. - As another more specific example of an operation evaluation, the
analysis system 10 evaluates the operation (i.e., fulfillment) of the guidelines, the system requirements, the system design, the system build, and/or objectives by the operation of the system functions. For instance, theanalysis system 10 evaluates the policies, processes, procedures, automation, certifications, documentation, and/or other evaluation metric (e.g., evaluation metrics) regarding the fulfillment of the guidelines, the system requirements, the system design, the system build, and/or objectives regarding the performance of the system assets. The more incomplete the data regarding the evaluation metrics, the more likely the fulfillment of the guidelines, the system requirements, the system design, the system, and/or the objectives regarding the system assets is incomplete. The fewer numbers of and/or incompleteness of policies, processes, procedures, automation, documentation, certification, and/or other evaluation metric regarding the fulfillment of the guidelines, the system requirements, the system design, the system build, and/or the objectives, the more likely the system assets do not adequately fulfill the guidelines, the system requirements, the system design, the system build, and/or the objectives (e.g., lower level of system operation maturity) resulting in a low evaluation rating. - The
analysis system 10 also evaluates the self-analysis capabilities of the system (or portion thereof) by determining how well the self-analysis functions are implemented and how they subsequently fulfill the self-analysis objectives. In an example, the self-analysis capabilities of the system are a self-analysis system that overlies the system. Accordingly, the overlaid self-analysis system can be evaluated by the analysis system 10 in a similar manner as the system under test 91. For example, the understanding, implementation, and/or operation of the overlaid self-analysis system can be evaluated with respect to self-analysis guidelines, self-analysis requirements, design of the self-analysis system, build of the self-analysis system, and/or operation of the self-analysis system. - As part of the evaluation process, the
analysis system 10 may identify deficiencies and, when appropriate, auto-correct a deficiency. For example, the analysis system 10 identifies deficiencies in the understanding, implementation, and/or operation of the guidelines, the system requirements, the system design, the system build, the resulting system, and/or the system objectives. For example, the analysis system 10 obtains additional information from the system via a data gathering process (e.g., producing discovered data) and/or from a system proficiency resource (e.g., producing desired data). The analysis system 10 uses the discovered data and/or desired data to identify the deficiencies. When possible, the analysis system 10 auto-corrects the deficiencies. For example, when a software tool that aids in the creation of guidelines and/or system requirements is missing from the system's tool set, the analysis system 10 can automatically obtain a copy of the missing software tool for the system. -
FIG. 26 is a schematic block diagram of another example of an analysis system 10 evaluating a system element under test 91. In this example, the analysis system 10 is evaluating the system element under test 91 from three evaluation viewpoints: disclosed data, discovered data, and desired data. Disclosed data is the known data of the system at the outset of an analysis, which is typically supplied by a system administrator and/or is obtained from data files of the system. Discovered data is the data discovered about the system by the analysis system 10 during the analysis. Desired data is the data obtained by the analysis system 10 from system proficiency resources regarding desired guidelines, system requirements, system design, system build, and/or system operation.
- A set of evaluation ratings includes one or more of: an evaluation rating regarding the understanding of the guidelines; an evaluation rating regarding the understanding of the system requirements; an evaluation rating regarding the understanding of the system design; an evaluation rating regarding the understanding of the system build; an evaluation rating regarding the understanding of the system operation; an evaluation rating regarding the development of the system requirements from the guidelines; an evaluation rating regarding the design from the system requirements; an evaluation rating regarding the system build from the design; an evaluation rating regarding the system operation based on the system design and/or system build; an evaluation rating regarding the guidelines; an evaluation rating regarding the system requirements; an evaluation rating regarding the system design; an evaluation rating regarding the system build; and/or an evaluation rating regarding the system operation.
-
FIG. 27 is a schematic block diagram of another example of ananalysis system 10 evaluating a system element undertest 91. In this example, theanalysis system 10 is evaluating the system element undertest 91 from three evaluation viewpoints: disclosed data, discovered data, and desired data with regard to security functions. The evaluation from the three evaluation viewpoints for the security functions may be done serially, in parallel, and/or in a parallel-serial combination to produce three sets of evaluation ratings with respect to security functions: one for disclosed data, one for discovered data, and one for desired data. -
FIG. 28 is a schematic block diagram of another example of ananalysis system 10 evaluating a system element undertest 91. In this example, theanalysis system 10 is evaluating the system element undertest 91 from three evaluation viewpoints and from three evaluation modes. For example, disclosed data regarding assets, discovered data regarding assets, desired data regarding assets, disclosed data regarding system functions, discovered data regarding system functions, desired data regarding system functions, disclosed data regarding security functions, discovered data regarding security functions, and desired data regarding security functions. - The evaluation from the nine evaluation viewpoints & evaluation mode combinations may be done serially, in parallel, and/or in a parallel-serial combination to produce nine sets of evaluation ratings one for disclosed data regarding assets, one for discovered data regarding assets, one for desired data regarding assets, one for disclosed data regarding system functions, one for discovered data regarding system functions, one for desired data regarding functions, one for disclosed data regarding security functions, one for discovered data regarding security functions, and one for desired data regarding security functions.
-
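- The nine viewpoint-and-mode combinations are simply the Cartesian product of the three analysis viewpoints and the three evaluation modes. A brief sketch of enumerating one ratings set per combination (illustrative only; the text notes the evaluations may also be produced in parallel):

```python
from itertools import product

VIEWPOINTS = ("disclosed", "discovered", "desired")
MODES = ("assets", "system functions", "security functions")

# One evaluation-ratings set per (viewpoint, mode) combination: nine in total.
rating_sets = {(viewpoint, mode): {} for viewpoint, mode in product(VIEWPOINTS, MODES)}
assert len(rating_sets) == 9
```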
FIG. 29 is a schematic block diagram of an example of the functioning of an analysis system 10 evaluating a system element under test 91. Functionally, the analysis system 10 includes evaluation criteria 211, evaluation mode 212, analysis perspective 213, analysis viewpoint 214, analysis categories 215, data gathering 216, pre-processing 217, and analysis metrics 218 to produce one or more ratings 219. The evaluation criteria 211 include guidelines, system requirements, system design, system build, and system operation. The evaluation mode 212 includes assets, system functions, and security functions. The evaluation criteria 211 and the evaluation mode 212 are part of the system aspect, which corresponds to the system, or portion thereof, being evaluated. - The
analysis perspective 213 includes understanding, implementation, operation, and self-analysis. The analysis viewpoint 214 includes disclosed, discovered, and desired. The analysis categories 215 include identify, protect, detect, respond, and recover. The analysis perspective 213, the analysis viewpoint 214, and the analysis categories 215 correspond to how the system, or portion thereof, will be evaluated. For example, the system, or portion thereof, is being evaluated regarding the understanding of the system's ability to identify assets, system functions, and/or security functions from discovered data. - The
analysis metrics 218 include process, policy, procedure, automation, certification, and documentation. The analysis metrics 218 and the pre-processing 217 correspond to the manner of evaluation. For example, the policies regarding the system's ability to identify assets, system functions, and/or security functions from discovered data of the system, or portion thereof, are evaluated to produce an understanding evaluation rating.
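- Collecting the functional blocks of FIG. 29, an evaluation can be specified by a choice along each of the dimensions below. The enumerations are read directly from the figure description; the dictionary form of the example is a hypothetical sketch, not the disclosed representation.

```python
EVALUATION_CRITERIA = ("guidelines", "system requirements", "system design",
                       "system build", "system operation")
EVALUATION_MODES = ("assets", "system functions", "security functions")
ANALYSIS_PERSPECTIVES = ("understanding", "implementation", "operation", "self-analysis")
ANALYSIS_VIEWPOINTS = ("disclosed", "discovered", "desired")
ANALYSIS_CATEGORIES = ("identify", "protect", "detect", "respond", "recover")
ANALYSIS_METRICS = ("process", "policy", "procedure",
                    "automation", "certification", "documentation")

# Example from the text: understanding of the system's ability to identify
# assets, system functions, and/or security functions from discovered data.
example_evaluation = {
    "perspective": "understanding",
    "viewpoint": "discovered",
    "category": "identify",
    "modes": ["assets", "system functions", "security functions"],
}
```
- In an example of operation, the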
analysis system 10 determines what portion of the system is evaluated (i.e., a system aspect). As such, theanalysis system 10 determines one or more system elements (e.g., including one or more system assets which are one or more physical assets and/or conceptual assets), one or more system criteria (e.g., guidelines, system requirements, system design, system build, and/or system operation), and one or more system modes (e.g., assets, system functions, and security functions). Theanalysis system 10 may determine the system aspect in a variety of ways. For example, theanalysis system 10 receives an input identifying the system aspect from an authorized operator of the system (e.g., IT personnel, executive personnel, etc.). As another example, the analysis system determines the system aspect in a systematic manner to evaluate various combinations of system aspects as part of an overall system evaluation. The overall system evaluation may be done one time, periodically, or continuously. As yet another example, the analysis system determines the system aspect as part of a systematic analysis of a section of the system, which may be done one time, periodically, or continuously. - The analysis system then determines how the system aspect is to be evaluated by selecting one or more analysis perspectives (understanding, implementation, operation, and self-analysis), one or more analysis viewpoints (disclosed, discovered, and desired), and one or more analysis categories (identify, protect, detect, respond, and recover). The
analysis system 10 may determine how the system aspect is to be evaluated in a variety of ways. For example, theanalysis system 10 receives an input identifying how the system aspect is to be evaluated from an authorized operator of the system (e.g., IT personnel, executive personnel, etc.). As another example, the analysis system determines how the system aspect is to be evaluated in a systematic manner to evaluate the system aspect in various combinations of analysis perspectives, analysis viewpoints, and analysis categories as part of an overall system evaluation. The overall system evaluation may be done one time, periodically, or continuously. As yet another example, the analysis system determines how the system aspect is to be evaluated as part of a systematic analysis of a section of the system, which may be done one time, periodically, or continuously. - The
analysis system 10 also determines one or more analysis metrics (e.g., process, policy, procedure, automation, certification, and documentation) regarding the manner for evaluating the system aspect in accordance with how it's to be evaluated. A policy sets out a strategic direction and includes high-level rules or contracts regarding issues and/or matters. For example, all software shall be a most recent version of the software. A process is a set of actions for generating outputs from inputs and includes one or more directives for generating outputs from inputs. For example, a process regarding the software policy is that software updates are to be performed by the IT department and all software shall be updated within one month of the release of the new version of software. - A procedure is the working instructions to complete an action as may be outlined by a process. For example, the IT department handling software updates includes a procedure that describes the steps for updating the software, verifying that the updated software works, and recording the updating and verification in a software update log. Automation is in regard to the level of automation the system includes for handling actions, issues, and/or matters of policies, processes, and/or procedures. Documentation is in regard to the level of documentation the system has regard guidelines, system requirements, system design, system build, system operation, system assets, system functions, security functions, system understanding, system implementation, operation of the system, policies, processes, procedures, etc. Certification is in regard to certifications of the system, such as maintenance certification, regulatory certifications, etc.
- In an example, the
analysis system 10 receives an input identifying manner in which to evaluate the system aspect from an authorized operator of the system (e.g., IT personnel, executive personnel, etc.). As another example, the analysis system determines the manner in which to evaluate the system aspect in a systematic manner to evaluate the system aspect in various combinations of analysis metrics as part of an overall system evaluation. The overall system evaluation may be done one time, periodically, or continuously. As yet another example, the analysis system determines the manner in which to evaluate the system aspect as part of a systematic analysis of a section of the system, which may be done one time, periodically, or continuously. - Once the analysis system has determined the system aspect, how it is to be evaluated, and the manner for evaluation, the
data gathering function 216 gathers data relevant to the system aspect, how it's to be evaluated, and the manner of evaluation from thesystem 11, from resources that store system information 210 (e.g., from the system, from a private storage of the analysis system, etc.), and/or from one or moresystem proficiency resources 22. For example, a current evaluation is regarding an understanding (analysis perspective) of policies (analysis metric) to identify (analysis category) assets (evaluation mode) of an engineering department (system elements) regarding operations (evaluation criteria) that the assets perform based on discovered data (analysis viewpoint). As such, thedata gathering function 216 gathers data regarding policies to identify assets of the engineering department and the operations they perform using one or more data discovery tools. - The
pre-processing function 217 processes the gathered data by normalizing the data, parsing the data, tagging the data, and/or de-duplicating the data. The analysis system evaluates the processed data in accordance with the selected analysis metric to produce one or more ratings 219. For example, the analysis system would produce a rating regarding the understanding of policies to identify assets of an engineering department regarding operations that the assets perform based on discovered data. The rating 219 is on a scale from low to high. In this example, a low rating indicates issues with the understanding and a high rating indicates no issues with the understanding.
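- A hedged sketch of the pre-processing step described above (normalize, parse, tag, and de-duplicate the gathered data before a rating is produced); the record format, tag vocabulary, and version pattern are assumptions for illustration only.

```python
import re


def preprocess(records):
    """Normalize, parse, tag, and de-duplicate gathered text records (illustrative only)."""
    seen = set()
    processed = []
    for raw in records:
        text = " ".join(raw.split()).lower()                      # normalize whitespace and case
        if text in seen:                                          # de-duplicate
            continue
        seen.add(text)
        version = re.search(r"v(\d+(?:\.\d+)*)", text)            # parse a version number, if any
        processed.append({
            "text": text,
            "version": version.group(1) if version else None,
            "tags": [m for m in ("policy", "process", "procedure") if m in text],  # tag
        })
    return processed


gathered = ["Asset identification  POLICY v1.14",
            "asset identification policy v1.14",      # duplicate after normalization
            "Procedure: quarterly asset inventory"]
print(preprocess(gathered))
```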
- FIG. 30 is a schematic block diagram of another example of the functioning of an analysis system 10 evaluating a system element under test 91. The functioning of the analysis system includes a deficiency perspective function 230, a deficiency evaluation viewpoint function 231, and an auto-correction function 233. - The
deficiency perspective function 230 receives one ormore ratings 219 and may also receive the data used to generate theratings 219. From these inputs, thedeficiency perspective function 230 determines whether there is an understanding issue, an implementation issue, and/or an operation issue. For example, an understanding (analysis perspective) issue relates to a low understanding evaluation rating for a specific evaluation regarding policies (analysis metric) to identify (analysis category) assets (evaluation mode) of an engineering department (system elements) regarding operations (evaluation criteria) that the assets perform based on discovered data (analysis viewpoint). - As another example, an implementation (analysis perspective) issue relates to a low implementation evaluation rating for a specific evaluation regarding implementation and/or use of policies (analysis metric) to identify (analysis category) assets (evaluation mode) of an engineering department (system elements) regarding operations (evaluation criteria) that the assets perform based on discovered data (analysis viewpoint). As yet another example, an operation (analysis perspective) issue relates to a low operation evaluation rating for a specific evaluation regarding consistent, reliable, and/or accurate mechanism(s) to identify (analysis category) assets (evaluation mode) of an engineering department (system elements) regarding operations (evaluation criteria) that the assets perform based on discovered data (analysis viewpoint) and on policies (analysis metric).
- When an understanding, implementation, and/or operation issue is identified, the deficiency evaluation viewpoint function 231 determines whether the issue(s) is based on disclosed data, discovered data, and/or desired data. For example, an understanding issue may be based on a difference between disclosed data and discovered data. As a specific example, the disclosed data includes a policy outline how to identify (analysis category) assets (evaluation mode) of an engineering department (system elements) regarding operations (evaluation criteria) that the assets perform, which is listed as version 1.12 and a last revision date of Oct. 2, 2020. In this specific example, the discovered data includes the same policy, but is has been updated to version 1.14 and the last revision date as Nov. 13, 2020. As such, the deficiency evaluation viewpoint function identifies a
deficiency 232 in the disclosed data as being an outdated policy. - As another specific example, the disclosed data includes a policy outlining how to identify (analysis category) assets (evaluation mode) of an engineering department (system elements) regarding operations (evaluation criteria) that the assets perform. The disclosed data also shows an inconsistent use and/or application of the policy resulting in one or more assets not being properly identified. In this instance, the deficiency evaluation viewpoint function identifies a
deficiency 232 in the disclosed data as being inconsistent use and/or application of the policy.
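The disclosed-versus-discovered comparison described above can be illustrated, purely as an assumption-laden sketch, by comparing policy metadata and flagging an outdated-policy deficiency when the discovered version or revision date is newer; the record fields below are hypothetical.

```python
# Hypothetical sketch: compare disclosed and discovered policy metadata for a deficiency.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class PolicyRecord:
    name: str
    version: str       # e.g., "1.12"
    revised: date

def find_policy_deficiency(disclosed: PolicyRecord, discovered: PolicyRecord) -> Optional[str]:
    """Return a deficiency description if the disclosed policy lags the discovered one."""
    if discovered.version != disclosed.version or discovered.revised > disclosed.revised:
        return (f"outdated policy in disclosed data: {disclosed.name} "
                f"v{disclosed.version} ({disclosed.revised}) vs "
                f"v{discovered.version} ({discovered.revised})")
    return None

if __name__ == "__main__":
    disclosed = PolicyRecord("asset identification policy", "1.12", date(2020, 10, 2))
    discovered = PolicyRecord("asset identification policy", "1.14", date(2020, 11, 13))
    print(find_policy_deficiency(disclosed, discovered))
```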
- The auto-correct function 233 receives a deficiency 232 and interprets it to determine a deficiency type, i.e., a nature of the understanding issue, the implementation issue, and/or the operation issue. Continuing with the outdated policy example, the nature of the understanding issue is that there is a newer version of the policy. Since there is a newer version available, the auto-correct function 233 can update the policy to the newer version for the system (e.g., an auto-correction). In addition to making the auto-correction 235, the analysis system creates an accounting 236 of the auto-correction (e.g., creates a record). The record includes an identity of the deficiency, date information, what auto-correction was done, how it was done, verification that it was done, and/or more or less data as may be desired for recording auto-corrections.
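A minimal sketch of such an accounting record is shown below; the field names mirror the record contents listed above, while the data structure itself is an assumption made for illustration.

```python
# Hypothetical sketch of an auto-correction accounting record (structure assumed).
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AutoCorrectionRecord:
    deficiency_id: str     # identity of the deficiency
    corrected_at: datetime # date information
    action: str            # what auto-correction was done
    method: str            # how it was done
    verified: bool         # verification that it was done

def record_auto_correction(deficiency_id: str, action: str, method: str, verified: bool) -> AutoCorrectionRecord:
    return AutoCorrectionRecord(deficiency_id, datetime.now(), action, method, verified)

if __name__ == "__main__":
    rec = record_auto_correction(
        deficiency_id="policy-outdated-v1.12",
        action="updated policy to version 1.14",
        method="replaced disclosed policy document with discovered revision",
        verified=True,
    )
    print(rec)
```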
- As another specific example, a deficiency 232 is discovered in that an asset exists in the engineering department that was not included in the disclosed data. This deficiency may include one or more related deficiencies, for example, a deficiency of design, a deficiency of build, a deficiency in oversight of asset installation, etc. The deficiencies of design, build, and/or installation oversight can be auto-corrected; the deficiency of an extra asset cannot. With regard to the deficiency of the extra asset, the analysis system generates a report regarding the extra asset and the related deficiencies.
FIG. 31 is a diagram of an example of evaluation options of an analysis system 10 for evaluating a system element under test 91. The evaluation options are shown in a three-dimensional tabular form. The rows include analysis perspective 213 options (e.g., understanding, implementation, and operation). The columns include analysis viewpoint 214 options (e.g., disclosed, discovered, and desired). The third dimension includes analysis output 240 options (e.g., ratings 219, deficiencies in disclosed data, deficiencies in discovered data, deficiencies in disclosed to discovered data, deficiencies in disclosed to desired data, deficiencies in discovered to desired data, and auto-correct). - The
analysis system 10 can evaluate the system element under test 91 (e.g., system aspect) in one or more combinations of a row selection, a column selection, and/or a third dimension selection. For example, the analysis system performs an evaluation from an understanding perspective, a disclosed data viewpoint, and a ratings output. As another example, the analysis system performs an evaluation from an understanding perspective, all viewpoints, and a ratings output. -
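The following sketch illustrates, under stated assumptions, how such row/column/third-dimension selections could be expanded into concrete evaluation combinations; the reduced output list and the "all" wildcard convention are assumptions for this example, not part of the disclosed analysis system 10.

```python
# Hypothetical sketch: expand a selection over the three option axes of FIG. 31.
from itertools import product

PERSPECTIVES = ("understanding", "implementation", "operation")
VIEWPOINTS = ("disclosed", "discovered", "desired")
OUTPUTS = ("ratings", "deficiencies", "auto-correct")   # simplified subset of the third axis

def select_evaluations(perspective="all", viewpoint="all", output="all"):
    """Return every (perspective, viewpoint, output) combination implied by the selection."""
    rows = PERSPECTIVES if perspective == "all" else (perspective,)
    cols = VIEWPOINTS if viewpoint == "all" else (viewpoint,)
    outs = OUTPUTS if output == "all" else (output,)
    return list(product(rows, cols, outs))

if __name__ == "__main__":
    # Understanding perspective, all viewpoints, ratings output.
    print(select_evaluations(perspective="understanding", output="ratings"))
```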
FIG. 32 is a diagram of another example of evaluation options of ananalysis system 10 for evaluating a system element under test 91 (e.g., system aspect). The evaluation options are shown in the form of a table. The rows are assets (physical and conceptual) and the columns are system functions. Theanalysis system 10 can evaluate the system element under test 91 (e.g., system aspect) in one or more combinations of a row selection and a column selection. - For example, the
analysis system 10 can evaluate user HW with respect to business operations. As another example, theanalysis system 10 can evaluate physical assets with respect to data flow. As another example, theanalysis system 10 can evaluate user SW with respect to all system functions. -
FIG. 33 is a diagram of another example of evaluation options of ananalysis system 10 for evaluating a system element under test 91 (e.g., system aspect). The evaluation options are shown in the form of a table. The rows are security functions and the columns are system functions. Theanalysis system 10 can evaluate the system element under test 91 (e.g., system aspect) in one or more combinations of a row selection and a column selection. - For example, the
analysis system 10 can evaluate threat detection with respect to business operations. As another example, theanalysis system 10 can evaluate all security functions with respect to data flow. As another example, theanalysis system 10 can evaluate threat avoidance with respect to all system functions. -
FIG. 34 is a diagram of another example of evaluation options of ananalysis system 10 for evaluating a system element under test 91 (e.g., system aspect). The evaluation options are shown in the form of a table. The rows are assets (physical and conceptual) and the columns are security functions. Theanalysis system 10 can evaluate the system element under test 91 (e.g., system aspect) in one or more combinations of a row selection and a column selection. - For example, the
analysis system 10 can evaluate user HW with respect to threat recovery. As another example, theanalysis system 10 can evaluate physical assets with respect to threat resolution. As another example, theanalysis system 10 can evaluate user SW with respect to all security functions. -
FIG. 35 is a schematic block diagram of an embodiment of ananalysis system 10 that includes one ormore computing entities 16, one ormore databases 275, one or moredata extraction modules 80, one or more systemuser interface modules 81, and one ormore remediation modules 257. The computing entity(ies) 16 is configured to include adata input module 250, apre-processing module 251, adata analysis module 252, ananalytics modeling module 253, anevaluation processing module 254, adata output module 255, and acontrol module 256. Thedatabase 275, which includes one or more databases, stores the private data for a plurality of systems (e.g., systems A-x) and storesanalytical data 270 of theanalysis system 10. - In an example, the
system 11 provides input 271 to the analysis system 10 via the system user interface module 81. The system user interface module 81 provides a user interface for an administrator of the system 11 and provides a secure end-point of a secure data pipeline between the system 11 and the analysis system 10. While the system user interface module 81 is part of the analysis system, it is loaded on, and is executed on, the system 11. - Via the system
user interface module 81, the administrator makes selections as to how the system is to be evaluated and the desired output from the evaluation. For example, the administrator selects "evaluate system," which instructs the analysis system 10 to evaluate the system from almost every, if not every, combination of system aspect (e.g., system element, system criteria, and system mode), evaluation aspect (e.g., evaluation perspective, evaluation viewpoint, and evaluation category), evaluation metric (e.g., process, policy, procedure, automation, documentation, and certification), and analysis output (e.g., an evaluation rating, deficiencies identified, and auto-correction of deficiencies). As another example, the administrator selects one or more system aspects, one or more evaluation aspects, one or more evaluation metrics, and/or one or more analysis outputs. - The
analysis system 10 receives the evaluation selections as part of the input 271. A control module 256 interprets the input 271 to determine what part of the system is to be evaluated (e.g., system aspects), how the system is to be evaluated (e.g., evaluation aspects), the manner in which the system is to be evaluated (e.g., evaluation metrics), and/or the resulting evaluation output (e.g., an evaluation rating, a deficiency report, and/or auto-correction). From the interpretation of the input, the control module 256 generates data gathering parameters 263, pre-processing parameters 264, data analysis parameters 265, and evaluation parameters 266.
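One possible, assumption-laden way to organize this interpretation step is sketched below; the dictionary layout and key names are illustrative only and simply mirror the four parameter groups named above.

```python
# Hypothetical sketch: derive the four parameter groups from an administrator's selection.
def interpret_input(selection: dict) -> dict:
    system_aspects = selection.get("system_aspects", ["all"])
    evaluation_aspects = selection.get("evaluation_aspects", ["all"])
    evaluation_metrics = selection.get("evaluation_metrics", ["all"])
    outputs = selection.get("outputs", ["rating"])
    return {
        "data_gathering_parameters": {"scope": system_aspects, "metrics": evaluation_metrics},
        "pre_processing_parameters": {"steps": ["normalize", "parse", "tag", "de-duplicate"]},
        "data_analysis_parameters": {"aspects": evaluation_aspects, "metrics": evaluation_metrics},
        "evaluation_parameters": {"outputs": outputs},
    }

if __name__ == "__main__":
    admin_selection = {
        "system_aspects": ["engineering department assets"],
        "evaluation_aspects": ["understanding", "discovered"],
        "evaluation_metrics": ["policy", "process"],
        "outputs": ["rating", "deficiencies"],
    }
    for name, params in interpret_input(admin_selection).items():
        print(name, params)
```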
- The control module 256 provides the data gathering parameters 263 to the data input module 250. The data input module 250 interprets the data gathering parameters 263 to determine data to gather. For example, the data gathering parameters 263 are specific to the evaluation to be performed by the analysis system 10. As a more specific example, if the analysis system 10 is evaluating the understanding of the policies, processes, documentation, and automation regarding the assets built for an engineering department, then the data gathering parameters 263 would prescribe gathering data related to policies, processes, documentation, and automation regarding the assets built for the engineering department. - The
data input module 250 may gather data (e.g., retrieve, request, etc.) from a variety of sources. For example, the data input module 250 gathers data 258 from the data extraction module 80. In this example, the data input module 250 provides instructions to the data extraction module 80 regarding the data being requested. The data extraction module 80 pulls the requested data from system information 210, which may be centralized data of the system, system administration data, and/or data from assets of the system. - As another example, the
data input module 250 gathers data from one or more external data feeds 259. A source of an external data feed includes one or more business associate computing devices 23, one or more publicly available servers 27, and/or one or more subscriber servers 28. Other sources of external data feeds 259 include bot computing devices 25 and/or bad actor computing devices 26. Typically, the data input module 250 does not seek data inputs from bot computing devices 25 and/or bad actor computing devices 26 except under certain circumstances involving specific types of cybersecurity risks. - As another example, the
data input module 250 gathers system proficiency data 260 from one or more system proficiency resources 22. As a specific example, for a data request that includes desired data, the data input module 250 addresses one or more system proficiency resources 22 to obtain the desired system proficiency data 260. For example, system proficiency data 260 includes information regarding best-in-class practices (for system requirements, for system design, for system implementation, and/or for system operation), governmental and/or regulatory requirements, security risk awareness and/or risk remediation information, security risk avoidance, performance optimization information, system development guidelines, software development guidelines, hardware requirements, networking requirements, networking guidelines, and/or other system proficiency guidance. - As another example, the
data input module 250 gathers storeddata 261 from thedatabase 275. The storeddata 261 is previously stored data that is unique to thesystem 11, is data from other systems, is previously processed data, is previously stored system proficiency data, and/or is previously stored data that assists in the current evaluation of the system. - The
data input module 250 provides the gathered data to thepre-processing module 251. Based on the pre-processing parameters 264 (e.g., normalize, parse, tag, de-duplication, sort, filter, etc.), thepre-processing module 251 processes the gathered data to producepre-processed data 267. Thepre-processed data 267 may be stored in thedatabase 275 and later retrieved as storeddata 261. - The
analysis modeling module 253 retrieves storeddata 261 and/or storedanalytics 262 from thedatabase 275. Theanalysis modeling module 253 operates to increase the artificial intelligence of theanalysis system 10. For example, theanalysis modeling module 253 evaluates stored data from one or more systems in a variety of ways to test the evaluation processes of the analysis system. As a more specific example, theanalysis modeling module 253 models the evaluation of understanding of the policies, processes, documentation, and automation regarding the assets built for an engineering department across multiple systems to identify commonalities and/or deviations. Theanalysis modeling module 253 interprets the commonalities and/or deviations to adjust parameters of the evaluation of understanding and models how the adjustments affect the evaluation of understanding. If the adjustments have a positive effect, theanalysis modeling module 253 stores them asanalytics 262 and/oranalysis modeling 268 in thedatabase 275. - The
data analysis module 252 receives the pre-processed data 267, the data analysis parameters 265, and may further receive optional analysis modeling data 268. The data analysis parameters 265 include the identity of the selected evaluation categories (e.g., identify, protect, detect, respond, and recover), the identity of the selected evaluation sub-categories, the identity of the selected evaluation sub-sub-categories, the identity of the selected analysis metrics (e.g., process, policy, procedure, automation, certification, and documentation), grading parameters for the selected analysis metrics (e.g., a scoring scale for each type of analysis metric), the identity of the selected analysis perspective (e.g., understanding, implementation, operation, and self-analysis), and/or the identity of the selected analysis viewpoint (e.g., disclosed, discovered, and desired). - The
data analysis module 252 generates one or more ratings 219 for the pre-processed data 267 based on the data analysis parameters 265. The data analysis module 252 may adjust the generation of the one or more ratings 219 based on the analysis modeling data 268. For example, the data analysis module 252 evaluates the understanding of the policies, processes, documentation, and automation regarding the assets built for an engineering department based on the pre-processed data 267 to produce at least one evaluation rating 219. - Continuing with this example, the
analysis modeling 268 is regarding the evaluation of understanding of the policies, processes, documentation, and automation regarding the assets built for an engineering department of a plurality of different organizations operating on a plurality of different systems. The modeling indicates that if processes are well understood, the understanding of the policies is less significant in the overall understanding. In this instance, the data analysis module 252 may adjust its evaluation rating of the understanding to a more favorable rating if the pre-processed data 267 correlates with the modeling (e.g., good understanding of processes).
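A minimal sketch of such a modeling-driven adjustment is shown below; the 0-100 scale, the 70-point threshold, and the 10-point bump are assumptions made for illustration and are not disclosed values.

```python
# Hypothetical sketch: adjust an understanding rating using an analysis-modeling hint.
def adjust_rating(base_rating: float, process_rating: float,
                  process_understood_threshold: float = 70.0,
                  adjustment: float = 10.0, ceiling: float = 100.0) -> float:
    """If modeling indicates that well-understood processes outweigh policy gaps,
    nudge the overall understanding rating toward a more favorable value."""
    if process_rating >= process_understood_threshold:
        return min(base_rating + adjustment, ceiling)
    return base_rating

if __name__ == "__main__":
    print(adjust_rating(base_rating=55.0, process_rating=85.0))  # 65.0
```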
- The data analysis module 252 provides the rating(s) 219 to the data output module 255 and to the evaluation processing module 254. The data output module 255 provides the rating(s) 219 as an output 269 to the system user interface module 81. The system user interface module 81 provides a graphical rendering of the rating(s) 219. - The
evaluation processing module 254 processes the rating(s) 219 based on the evaluation parameters 266 to identify deficiencies 232 and/or to determine auto-corrections 235. The evaluation parameters 266 provide guidance on how to evaluate the rating(s) 219 and whether to obtain data (e.g., pre-processed data, stored data, etc.) to assist in the evaluation. The evaluation guidance includes how deficiencies are to be identified. For example, deficiencies may be identified based on the disclosed data, based on the discovered data, based on a difference between the disclosed and discovered data, based on a difference between the disclosed and desired data, and/or based on a difference between the discovered and desired data. The evaluation guidance further includes whether auto-correction is enabled. The evaluation parameters 266 may further include deficiency parameters, which provide a level of tolerance between the disclosed, discovered, and/or desired data when determining deficiencies.
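The tolerance-based comparison described above can be sketched as follows; the numeric encoding of the compared values and the tolerance handling are assumptions made for this example.

```python
# Hypothetical sketch: flag deficiencies when disclosed, discovered, and desired values
# differ by more than a tolerance.
def find_deficiencies(disclosed: dict, discovered: dict, desired: dict,
                      tolerance: float = 0.0) -> list[str]:
    deficiencies = []
    for key in set(disclosed) | set(discovered) | set(desired):
        pairs = [
            ("disclosed vs discovered", disclosed.get(key), discovered.get(key)),
            ("disclosed vs desired", disclosed.get(key), desired.get(key)),
            ("discovered vs desired", discovered.get(key), desired.get(key)),
        ]
        for label, a, b in pairs:
            if a is not None and b is not None and abs(a - b) > tolerance:
                deficiencies.append(f"{key}: {label} differ ({a} vs {b})")
    return deficiencies

if __name__ == "__main__":
    print(find_deficiencies(
        disclosed={"identified assets": 120},
        discovered={"identified assets": 131},
        desired={"identified assets": 131},
        tolerance=5,
    ))
```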
- The evaluation processing module 254 provides the deficiencies 232 and/or the auto-corrections 235 to the data output module 255. The data output module 255 provides the deficiencies 232 and/or the auto-corrections 235 as an output 269 to the system user interface module 81 and to the remediation module 257. The system user interface module 81 provides a graphical rendering of the deficiencies 232 and/or the auto-corrections 235. - The
remediation module 257 interprets the deficiencies 232 and the auto-corrections 235 to identify auto-corrections to be performed within the system. For example, if a deficiency is a computing device having an outdated user software application, the remediation module 257 coordinates obtaining a current copy of the user software application, installing it on the computing device, and updating maintenance logs.
FIG. 36 is a schematic block diagram of an embodiment of a portion of ananalysis system 10 coupled to a portion of thesystem 11. In particular, thedata output module 255 of theanalysis system 10 is coupled to a plurality of remediation modules 257-1 through 257-n. Eachremediation module 257 is coupled to one or more system assets 280-1 through 280-n. - A
remediation module 257 receives a corresponding portion of the output 269. For example, remediation module 257-1 receives output 269-1, which is regarding an evaluation rating, a deficiency, and/or an auto-correction of system asset 280-1. Remediation module 257-1 may auto-correct a deficiency of the system asset or a system element thereof. Alternatively or in addition, the remediation module 257-1 may quarantine the system asset or system element thereof if the deficiency cannot be auto-corrected and the deficiency exposes the system to undesired risks, undesired liability, and/or undesired performance degradation.
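The auto-correct-or-quarantine decision described above is sketched below under stated assumptions; the Deficiency fields and the string outputs are illustrative only.

```python
# Hypothetical sketch of a remediation decision: auto-correct when possible,
# quarantine when the deficiency cannot be corrected and exposes undesired risk.
from dataclasses import dataclass

@dataclass
class Deficiency:
    asset_id: str
    description: str
    auto_correctable: bool
    exposes_risk: bool

def remediate(deficiency: Deficiency) -> str:
    if deficiency.auto_correctable:
        return f"auto-correct {deficiency.asset_id}: {deficiency.description}"
    if deficiency.exposes_risk:
        return f"quarantine {deficiency.asset_id}: {deficiency.description}"
    return f"report {deficiency.asset_id}: {deficiency.description}"

if __name__ == "__main__":
    print(remediate(Deficiency("280-1", "outdated user software application", True, False)))
    print(remediate(Deficiency("280-2", "unauthorized asset discovered", False, True)))
```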
FIG. 37 is a schematic block diagram of another embodiment of a portion of ananalysis system 10 coupled to a portion of thesystem 11. In particular, thedata input module 250 of theanalysis system 10 is coupled to a plurality of data extraction modules 80-1 through 80-n. Eachdata extraction module 80 is coupled to asystem data source 290 of thesystem 11. Each of the system data sources producesystem information 210 regarding a corresponding portion of the system. A system data source 290-1 through 290-n may be an Azure EventHub, Cisco Advanced Malware Protection (AMP), Cisco Email Security Appliance (ESA), Cisco Umbrella, NetFlow, and/or Syslog. In addition, a system data source may be a system asset, a system element, and/or a storage devicestoring system information 210. - An extraction
data migration module 293 coordinates the collection ofsystem information 210 as extracted data 291-1 through 291-n. An extractiondata coordination module 292 coordinates the forwarding of the extracteddata 291 asdata 258 to thedata input module 250. -
FIG. 38 is a schematic block diagram of an embodiment of a data extraction module 80 of an analysis system 10 coupled to a system 11. The data extraction module 80 includes one or more tool interface modules 311, one or more processing modules 312, and one or more network interfaces 313. The network interface 313 provides a network connection that allows the data extraction module 80 to be coupled to the one or more computing entities 16 of the analysis system 10. The tool interface 311 allows the data extraction module 80 to interact with tools of the system 11 to obtain system information from system data sources 290. - The
system 11 includes one or more tools that can be accessed by thedata extraction module 80 to obtain system information from one or more data sources 290-1 through 290-n. The tools include one or moredata segmentation tools 300, one or moreboundary detection tools 301, one or moredata protection tools 302, one or moreinfrastructure management tools 303, one ormore encryption tools 304, one or moreexploit protection tools 305, one or moremalware protection tools 306, one or moreidentity management tools 307, one or moreaccess management tools 308, one or more system monitoring tools, and/or one or morevulnerability management tools 310. - A system tool may also be an infrastructure management tool, a network monitoring tool, a network strategy and planning tool, a network managing tool, a Simple Network Management Protocol (SNMP) tool, a telephony monitoring tool, a firewall monitoring tool, a bandwidth monitoring tool, an IT asset inventory management tool, a network discovery tool, a network asset discovery tool, a software discovery tool, a security discovery tool, an infrastructure discovery tool, Security Information & Event Management (SIEM) tool, a data crawler tool, and/or other type of tool to assist in discovery of assets, functions, security issues, implementation of the system, and/or operation of the system.
- Depending on the data gathering parameters, the
tool interface 311 engages a system tool to retrieve system information. For example, thetool interface 311 engages the identity management tool to identify assets in the engineering department. Theprocessing module 312 coordinates requests from theanalysis system 10 and responds to theanalysis system 10. -
FIG. 39 is a schematic block diagram of another embodiment of ananalysis system 10 that includes one ormore computing entities 16, one ormore databases 275, one or moredata extraction modules 80, and one or more systemuser interface modules 81. The computing entity(ies) 16 is configured to include adata input module 250, apre-processing module 251, adata analysis module 252, ananalytics modeling module 253, adata output module 255, and acontrol module 256. Thedatabase 275, which includes one or more databases, stores the private data for a plurality of systems (e.g., systems A-x) and storesanalytical data 270 of theanalysis system 10. - This embodiment operates similarly to the embodiment of
FIG. 35 with the removal of theevaluation module 254, which producesdeficiencies 232 and auto-corrections 235, and the removal of theremediation modules 257. As such, thisanalysis system 10 producesevaluation ratings 219 as theoutput 269. -
FIG. 40 is a schematic block diagram of another embodiment of ananalysis system 10 that is similar to the embodiment ofFIG. 39 . This embodiment does not include apre-processing module 251. As such, the data collected by thedata input module 250 is provided directly to thedata analysis module 252. -
FIG. 41 is a schematic block diagram of an embodiment of adata analysis module 252 of ananalysis system 10. Thedata analysis module 252 includes adata module 321 and an analysis &score module 336. Thedata module 321 includes a data parsemodule 320, one or more data storage modules 322-334, and asource data matrix 335. A data storage module 322-334 may be implemented in a variety of ways. For example, a data storage module is a buffer. As another example, a data storage module is a section of memory (45, 56, 57, and/or 62 of theFIG. 2 series) of a computing device (e.g., an allocated, or ad hoc, addressable section of memory). As another example, a data storage module is a storage unit (e.g., a computing device used primarily for storage). As yet another example, a data storage module is a section of a database (e.g., an allocated, or ad hoc, addressable section of a database). - The
data module 321 operates to provide the analyze & scoremodule 336 withsource data 337 selected from incoming data based on one or moredata analysis parameters 265. The data analysis parameter(s) 265 indicate(s) how the incoming data is to be parsed (if at all) and how it is to be stored within the data storage modules 322-334. Adata analysis parameter 265 includes systemaspect storage parameters 345, evaluationaspect storage parameters 346, and evaluationmetric storage parameters 347. A systemaspect storage parameter 345 may be null or includes information to identify one or more system aspects (e.g., system element, system criteria, and system mode), how the data relating to system aspects is to be parsed, and how the system aspect parsed data is to be stored. - An evaluation
aspect storage parameter 346 may be null or includes information to identify one or more evaluation aspects (e.g., evaluation perspective, evaluation viewpoint, and evaluation category), how the data relating to evaluation aspects is to be parsed, and how the evaluation aspect parsed data is to be stored. An evaluationmetric storage parameter 347 may be null or includes information to identify one or more evaluation metrics (e.g., process, policy, procedure, certification, documentation, and automation), how the data relating to evaluation metrics is to be parsed, and how the evaluation metric parsed data is to be stored. Note that thedata module 321 interprets thedata analysis parameters 265 collectively such that parsing and storage are consistent with the parameters. - The
data parsing module 320 parses incoming data in accordance with the systemaspect storage parameters 345, evaluationaspect storage parameters 346, and evaluationmetric storage parameters 347, which generally correspond to what part of the system is being evaluation, how the system is being evaluated, the manner of evaluation, and/or a desired analysis output. As such, incoming data may be parsed in a variety of ways. The data storage modules 322-334 are assigned to store parsed data in accordance with the storage parameters 345-347. For example, the incoming data, which includespre-processed data 267, otherexternal feed data 259,data 258 received via a data extraction module, storeddata 261, and/orsystem proficiency data 260, is parsed based on system criteria (of the system aspect) and evaluation viewpoint (of the evaluation aspect). As a more specific example, the incoming data is parsed into, and stored, as follows: -
- disclosed guideline data that is stored in a disclosed guideline
data storage module 322; - discovered guideline data that is stored in a discovered guideline data storage module 323;
- desired guideline data that is stored in a desired guideline data storage module 324;
- disclosed system requirement (sys. req.) data that is stored in a disclosed system requirement data storage module 325;
- discovered system requirement (sys. req.) data that is stored in a discovered system requirement data storage module 326;
- desired system requirement (sys. req.) data that is stored in a desired system requirement
data storage module 327; - disclosed design and/or build data that is stored in a disclosed design and/or build
data storage module 328; - discovered design and/or build data that is stored in a discovered design and/or build data storage module 329;
- desired design and/or build data that is stored in a desired design and/or build
data storage module 330; - disclosed system operation data that is stored in a disclosed system operation
data storage module 331; - discovered system operation data that is stored in a discovered system operation data storage module 332;
- desired system operation data that is stored in a desired system operation data storage module 333; and/or
- other data that is stored in another data storage module 334.
- disclosed guideline data that is stored in a disclosed guideline
- As another example of parsing, the incoming data is parsed based on a combination of one or more system aspects (e.g., system elements, system criteria, and system mode) or sub-system aspects thereof, one or more evaluation aspects (e.g., evaluation perspective, evaluation viewpoint, and evaluation category) or sub-evaluation aspects thereof, and/or one or more evaluation rating metrics (e.g., process, policy, procedure, certification, documentation, and automation) or sub-evaluation rating metrics thereof. As a specific example, the incoming data is parsed based on the evaluation rating metrics, creating processed parsed data, policy parsed data, procedure parsed data, certification parsed data, documentation parsed data, and automation parsed data. As another specific example, the incoming data is parsed based on the evaluation category of identify and its sub-categories of asset management, business environment, governance, risk assessment, risk management, access control, awareness &, training, and/or data security.
- As another example of parsing, the incoming data is not parsed, or is minimally parsed. As a specific example, the data is parsed based on timestamps: data from one time period (e.g., a day) is parsed from data of another time period (e.g., a different day).
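As a non-limiting illustration of parsing incoming data into viewpoint and criteria buckets of the kind described above, the sketch below routes records into keyed collections; the record fields, bucket keys, and "other" fallback are assumptions for this example and do not describe the disclosed data storage modules 322-334.

```python
# Hypothetical sketch: route incoming records into buckets keyed by
# evaluation viewpoint and system criteria.
from collections import defaultdict

VIEWPOINTS = {"disclosed", "discovered", "desired"}
CRITERIA = {"guideline", "system requirement", "design/build", "system operation"}

def parse_into_buckets(records: list[dict]) -> dict[tuple[str, str], list[dict]]:
    buckets: dict[tuple[str, str], list[dict]] = defaultdict(list)
    for record in records:
        viewpoint = record.get("viewpoint", "other")
        criteria = record.get("criteria", "other")
        key = (viewpoint if viewpoint in VIEWPOINTS else "other",
               criteria if criteria in CRITERIA else "other")
        buckets[key].append(record)
    return buckets

if __name__ == "__main__":
    incoming = [
        {"viewpoint": "disclosed", "criteria": "guideline", "text": "asset naming guideline"},
        {"viewpoint": "discovered", "criteria": "system operation", "text": "observed backup schedule"},
        {"viewpoint": "desired", "criteria": "design/build", "text": "reference architecture"},
    ]
    for key, items in parse_into_buckets(incoming).items():
        print(key, len(items))
```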
- The
source data matrix 335, which may be a configured processing module, retrievessource data 337 from the data storage modules 322-334. The selection corresponds to the analysis being performed by the analyze & scoremodule 336. For example, if the analyze & scoremodule 336 is evaluating the understanding of the policies, processes, documentation, and automation regarding the assets built for the engineering department, then thesource data 337 would be data specific to policies, processes, documentation, and automation regarding the assets built for the engineering department. - The analyze & score
module 336 generates one ormore ratings 219 for thesource data 337 in accordance with thedata analysis parameters 265 andanalysis modeling 268. Thedata analysis parameters 265 includes systemaspect analysis parameters 342, evaluationaspect analysis parameters 343, and evaluationmetric analysis parameters 344. The analyze & scoremodule 336 is discussed in greater detail with reference toFIG. 42 . -
FIG. 42 is a schematic block diagram of an embodiment of an analyze and score module 336, which includes a matrix module 341 and a scoring module 348. The matrix module 341 processes an evaluation mode matrix, an evaluation perspective matrix, an evaluation viewpoint matrix, and an evaluation categories matrix to produce a scoring input. The scoring module 348 includes an evaluation metric matrix to process the scoring input data in accordance with the analysis modeling 268 to produce the rating(s) 219.
- For example, the matrix module 341 configures the matrices based on the system aspect analysis parameters 342 and the evaluation aspect analysis parameters 343 to process the source data 337 to produce the scoring input data. As a specific example, the system aspect analysis parameters 342 and the evaluation aspect analysis parameters 343 indicate assets as the evaluation mode, understanding as the evaluation perspective, discovered as the evaluation viewpoint, and identify as the evaluation category.
- Accordingly, the matrix module 341 communicates with the source data matrix module 335 of the data module 321 to obtain source data 337 relevant to assets, understanding, discovered, and identify. The matrix module 341 may organize the source data 337 using an organization scheme (e.g., by asset type, by evaluation metric type, by evaluation sub-categories, etc.) or keep the source data 337 as a collection of data. The matrix module 341 provides the scoring input data 344 as a collection of data or as organized data to the scoring module 348.
- Continuing with the example, the scoring module 348 receives the scoring input data 344 and evaluates it in accordance with the evaluation metric analysis parameters 344 and the analysis modeling 268 to produce the rating(s) 219. As a specific example, the evaluation metric analysis parameters 344 indicate analyzing the scoring input data with respect to processes. In this instance, the analysis modeling 268 provides a scoring mechanism for evaluating the scoring input data with respect to processes to the scoring module 348. For instance, the analysis modeling 268 includes six levels regarding processes and a corresponding numerical rating: none (e.g., 0), inconsistent (e.g., 10), repeatable (e.g., 20), standardized (e.g., 30), measured (e.g., 40), and optimized (e.g., 50).
- In addition, the analysis modeling 268 includes analysis protocols for interpreting the scoring input data to determine its level and corresponding rating. For example, if there are no processes regarding identifying assets of the discovered data, then an understanding level of processes would be none (e.g., 0), since there are no processes. As another example, if there are some processes regarding identifying assets of the discovered data, but there are gaps in the processes (e.g., they identify some assets, but not all, and/or do not produce consistent results), then an understanding level of processes would be inconsistent (e.g., 10). To determine if there are gaps in the processes, the scoring module 348 executes the processes of the discovered data to identify assets. The scoring module 348 also executes one or more asset discovery tools to identify assets and then compares the two results. If there are inconsistencies in the identified assets, then there are gaps in the processes.
- As a further example, the processes regarding identifying assets of the discovered data are repeatable (e.g., produce consistent results, but there are variations in the processes from process to process, and/or the processes are not all regulated) but not standardized (e.g., produce consistent results, there are no appreciable variations in the processes from process to process, and/or the processes are regulated). If the processes are repeatable but not standardized, the scoring module establishes an understanding level of the processes as repeatable (e.g., 20).
- If the processes are standardized, the scoring module then determines whether the processes are measured (e.g., precise, exact, and/or calculated to the task of identifying assets). If not, the scoring module establishes an understanding level of the processes as standardized (e.g., 30).
- If the processes are measured, the scoring module then determines whether the processes are optimized (e.g., up-to-date and improvement assessed on a regular basis as part of system protocols). If not, the scoring module establishes an understanding level of the processes as measured (e.g., 40). If so, the scoring module establishes an understanding level of the processes as optimized (e.g., 50).
FIG. 43 is a diagram of an example of system aspect, evaluation aspect, evaluation rating metric, and analysis system output options of an analysis system 10 for analyzing a system 11, or portion thereof. The system aspect corresponds to what part of the system is to be evaluated by the analysis system. The evaluation aspect indicates how the system aspect is to be evaluated. The evaluation rating metric indicates the manner of evaluation of the system aspect in accordance with the evaluation aspect. The analysis system output indicates the type of output to be produced by the analysis system based on the evaluation of the system aspect in accordance with the evaluation aspect as per the evaluation rating metric.
- The system aspect includes system elements, system criteria, and system modes. A system element includes one or more system assets, which may be a physical asset and/or a conceptual asset. For example, a physical asset is a computing entity, a computing device, a user software application, a system software application (e.g., operating system, etc.), a software tool, a network software application, a security software application, a system monitoring software application, and the like. As another example, a conceptual asset is a hardware architecture (e.g., identification of a system's physical components, their capabilities, and their relationship to each other) and/or sub-architectures thereof and a software architecture (e.g., fundamental structures for the system's software, their requirements, and inter-relational operations) and sub-architectures thereof.
- A system element and/or system asset is identifiable in a variety of ways. For example, it can be identified by an organization identifier (ID), which would be associated with most, if not all, system elements of a system. As another example, a system element and/or system asset can be identified by a division ID, where the division is one of a plurality of divisions in the organization. As another example, a system element and/or system asset can be identified by a department ID, where the department is one of a plurality of departments in a division. As a further example, a system element and/or system asset can be identified by a group ID, where the group is one of a plurality of groups in a department. As a still further example, a system element and/or system asset can be identified by a sub-group ID, where the sub-group is one of a plurality of sub-groups in a group. With this type of identifier, a collection of system elements and/or system assets can be selected for evaluation by using an organization ID, a division ID, a department ID, a group ID, or a sub-group ID.
- A system element and/or system asset may also be identified based on a user ID, a serial number, vendor data, an IP address, etc. For example, a computing device has a serial number and vendor data. As such, the computing device can be identified for evaluation by its serial number and/or the vendor data. As another example, a software application has a serial number and vendor data. As such, the software application can be identified for evaluation by its serial number and/or the vendor data.
- In addition, an identifier of one system element and/or system asset may link to one or more other system elements and/or system assets. For example, a computing device has a device ID, a user ID, and/or a serial number to identify it. The computing device also includes a plurality of software applications, each with its own serial number. In this example, the software identifiers are linked to the computing device identifier since the software is loaded on the computing device. This type of identifier allows a single system asset to be identified for evaluation.
- The system criteria include information regarding the development, operation, and/or maintenance of the system 11. For example, a system criterion is a guideline, a system requirement, a system design component, a system build component, the system itself, or system operation. Guidelines, system requirements, system design, system build, and system operation were discussed with reference to FIG. 25.
- The system mode indicates whether the assets of the system, the system functions of the system, and/or the security functions of the system are to be evaluated. Assets, system functions, and security functions have been previously discussed with reference to one or more of FIGS. 7-24 and 32-34.
- The evaluation aspect, which indicates how the system aspect is to be evaluated, includes evaluation perspective, evaluation viewpoint, and evaluation category. The evaluation perspective includes understanding (e.g., how well the system is known, how well it should be known, etc.); implementation, which includes design and/or build (e.g., how well is the system designed, how well should it be designed); system performance and/or system operation (e.g., how well does the system perform and/or operate, how well should it perform and/or operate); and self-analysis (e.g., how self-aware is the system, how self-healing is the system, how self-updating is the system).
- The evaluation viewpoint includes disclosed data, discovered data, and desired data. Disclosed data is the known data of the system at the outset of an analysis, which is typically supplied by a system administrator and/or is obtained from data files of the system. Discovered data is the data discovered about the system by the analysis system during the analysis. Desired data is the data obtained by the analysis system from system proficiency resources regarding desired guidelines, system requirements, system design, system build, and/or system operation. Differences in disclosed, discovered, and desired data are evaluated to support generating an evaluation rating, to identify deficiencies, and/or to determine and provide auto-corrections.
- The evaluation category includes an identify category, a protect category, a detect category, a respond category, and a recover category. In general, the identify category is regarding identifying assets, system functions, and/or security functions of the system; the protect category is regarding protecting assets, system functions, and/or security functions of the system from issues that may adversely affect them; the detect category is regarding detecting issues that may adversely affect, or have adversely affected, assets, system functions, and/or security functions of the system; the respond category is regarding responding to issues that may adversely affect, or have adversely affected, assets, system functions, and/or security functions of the system; and the recover category is regarding recovering from issues that have adversely affected assets, system functions, and/or security functions of the system. Each category includes one or more sub-categories and each sub-category may include one or more sub-sub-categories as discussed with reference to FIGS. 44-49.
- The evaluation rating metric includes process, policy, procedure, certification, documentation, and automation. The evaluation rating metric may include more or fewer topics. The analysis system output options include evaluation rating, deficiency identification, and deficiency auto-correction.
- With such a significant number of options for the system aspect, the evaluation aspect, the evaluation rating metrics, and the analysis system output, the analysis system can analyze a system in thousands of combinations, or more. For example, the analysis system 10 could provide an evaluation rating for the entire system with respect to its vulnerability to cyber-attacks. The analysis system 10 could also identify deficiencies in the system's cybersecurity processes, policies, documentation, implementation, operation, assets, and/or security functions based on the evaluation rating. The analysis system 10 could further auto-correct at least some of the deficiencies in the system's cybersecurity processes, policies, documentation, implementation, operation, assets, and/or security functions.
- As another example, the analysis system 10 could evaluate the system's requirements for proper use of software (e.g., authorized to use, valid copy, current version) by analyzing every computing device in the system as to the system's software use requirements. From this analysis, the analysis system generates an evaluation rating. The analysis system 10 could also identify deficiencies in the compliance with the system's software use requirements (e.g., unauthorized use, invalid copy, outdated copy). The analysis system 10 could further auto-correct at least some of the deficiencies in compliance with the system's software use requirements (e.g., remove invalid copies, update outdated copies).
FIG. 44 is a diagram of another example of system aspects, evaluation aspects, evaluation rating metrics, and analysis system output options of an analysis system 10. This diagram is similar to FIG. 43 with the exception that this figure illustrates sub-categories and sub-sub-categories. Each evaluation category includes sub-categories, which, in turn, include their own sub-sub-categories. The various categories, sub-categories, and sub-sub-categories correspond to the categories, sub-categories, and sub-sub-categories identified in the "Framework for Improving Critical Infrastructure Cybersecurity," Version 1.1, Apr. 16, 2018, by the National Institute of Standards and Technology (NIST).
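For illustration only, the category-to-sub-category grouping shown in FIGS. 45-49 can be represented as a simple lookup structure such as the partial sketch below; the data layout is an assumption, and only the sub-category level is listed.

```python
# Partial, illustrative mapping of evaluation categories to sub-categories,
# following the groupings shown in FIGS. 45-49 (sub-sub-categories omitted).
EVALUATION_CATEGORIES: dict[str, list[str]] = {
    "identify": ["asset management", "business environment", "governance",
                 "risk assessment", "risk management", "access control",
                 "awareness & training", "data security"],
    "protect": ["information protection processes and procedures",
                "maintenance", "protective technology"],
    "detect": ["anomalies and events", "security continuous monitoring",
               "detection processes"],
    "respond": ["response planning", "communications", "analysis",
                "mitigation", "improvements"],
    "recover": ["recovery plan", "improvements", "communications"],
}

def sub_categories(category: str) -> list[str]:
    return EVALUATION_CATEGORIES.get(category, [])

if __name__ == "__main__":
    print(sub_categories("detect"))
```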
FIG. 45 is a diagram of an example of an identification evaluation category that includes a plurality of sub-categories and each sub-category includes its own plurality of sub-sub-categories. The identify category includes the sub-categories of asset management, business environment, governance, risk management, access control, awareness & training, and data security. - The asset management sub-category includes the sub-sub categories of HW inventoried, SW inventoried, data flow mapped out, external systems cataloged, resources have been prioritized, and security roles have been established. The business environment sub-category includes the sub-sub categories of supply chain roles defined, industry critical infrastructure identified, business priorities established, critical services identified, and resiliency requirements identified.
- The governance sub-category includes the sub-sub categories of security policies are established, security factors aligned, and legal requirements are identified. The risk assessment sub-category includes the sub-sub categories of vulnerabilities identified, external sources are leveraged, threats are identified, business impacts are identified, risk levels are identified, and risk responses are identified. The risk management sub-category includes the sub-sub categories of risk management processes are established, risk tolerances are established, and risk tolerances are tied to business environment.
- The access control sub-category includes the sub-sub categories of remote access control is defined, permissions are defined, and network integrity is defined. The awareness & training sub-category includes the sub-sub categories of users are trained, user privileges are known, third party responsibilities are known, executive responsibilities are known, and IT and security responsibilities are known. The data security sub-category includes the sub-sub categories of data at rest protocols are established, data in transit protocols are established, formal asset management protocols are established, adequate capacity of the system is established, data leak prevention protocols are established, integrity checking protocols are established, and use and development separation protocols are established.
-
FIG. 46 is a diagram of an example of a protect evaluation category that includes a plurality of sub-categories and each sub-category includes its own plurality of sub-sub-categories. The protect category includes the sub-categories of information protection processes and procedures, maintenance, and protective technology.
- The information protection processes and procedures sub-category includes the sub-sub-categories of baseline configuration of IT/industrial controls is established, system life cycle management is established, configuration control processes are established, backups of information are implemented, policy & regulations for the physical operation environment are established, improving protection processes are established, communication regarding effective protection technologies is embraced, response and recovery plans are established, cybersecurity is included in human resources practices, and vulnerability management plans are established.
- The maintenance sub-category includes the sub-sub-categories of system maintenance & repair of organizational assets programs are established and remote maintenance of organizational assets is established. The protective technology sub-category includes the sub-sub-categories of audit and recording policies are practiced, removable media is protected & use policies are established, access to systems and assets is controlled, and communications and control networks are protected.
FIG. 47 is a diagram of an example of a detect evaluation category that includes a plurality of sub-categories and each sub-category includes its own plurality of sub-sub-categories. The detect category includes the sub-categories of anomalies and events, security continuous monitoring, and detection processes.
- The anomalies and events sub-category includes the sub-sub-categories of a baseline of network operations and expected data flows is monitored, detected events are analyzed, event data are aggregated and correlated, impact of events is determined, and incident alert thresholds are established. The security continuous monitoring sub-category includes the sub-sub-categories of the network is monitored to detect potential cybersecurity attacks, the physical environment is monitored for cybersecurity events, personnel activity is monitored for cybersecurity events, malicious code is detected, unauthorized mobile code is detected, external service provider activity is monitored for cybersecurity events, monitoring for unauthorized personnel, connections, devices, and software is performed, and vulnerability scans are performed. The detection processes sub-category includes the sub-sub-categories of roles and responsibilities for detection are defined, detection activities comply with applicable requirements, detection processes are tested, event detection information is communicated, and detection processes are routinely improved.
FIG. 48 is a diagram of an example of a respond evaluation category that includes a plurality of sub-categories and each sub-category includes its own plurality of sub-sub-categories. The respond category includes the sub-categories of response planning, communications, analysis, mitigation, and improvements.
- The response planning sub-category includes the sub-sub-category of the response plan is executed during and/or after an event. The communications sub-category includes the sub-sub-categories of personnel roles and order of operation are established, events are reported consistent with established criteria, information is shared consistently per the response plan, coordination with stakeholders is consistent with the response plan, and voluntary information is shared with external stakeholders.
- The analysis sub-category includes the sub-sub-categories of notifications from detection systems are investigated, impact of the incident is understood, forensics are performed, and incidents are categorized per the response plan. The mitigation sub-category includes the sub-sub-categories of incidents are contained, incidents are mitigated, and newly identified vulnerabilities are processed. The improvements sub-category includes the sub-sub-categories of response plans incorporate lessons learned and response strategies are updated.
FIG. 49 is a diagram of an example of a recover evaluation category that includes a plurality of sub-categories and each sub-category includes its own plurality of sub-sub-categories. The recover category includes the sub-categories of recovery plan, improvements, and communication. The recovery plan sub-category includes the sub-sub-category of the recovery plan is executed during and/or after an event.
- The improvements sub-category includes the sub-sub-categories of recovery plans incorporate lessons learned and recovery strategies are updated. The communications sub-category includes the sub-sub-categories of public relations are managed, reputation after an event is repaired, and recovery activities are communicated.
FIG. 50 is a diagram of an example of system aspects, evaluation aspects, evaluation rating metrics, and analysis system output options of an analysis system 10 for analyzing a system 11, or portion thereof. For instance, the analysis system 10 is evaluating the understanding of the guidelines for identifying assets, protecting the assets from issues, detecting issues that may affect or are affecting the assets, responding to issues that may affect or are affecting the assets, and recovering from issues that affected the assets of a department based on disclosed data.
- For this specific example, the analysis system 10 obtains disclosed data from the system regarding the guidelines associated with the assets of the department. From the disclosed data, the analysis system renders an evaluation rating for the understanding of the guidelines for identifying assets. The analysis system renders a second evaluation rating for the understanding of the guidelines regarding protection of the assets from issues. The analysis system renders a third evaluation rating for the understanding of the guidelines regarding detection of issues that may affect or are affecting the assets.
- The analysis system renders a fourth evaluation rating for the understanding of the guidelines regarding responding to issues that may affect or are affecting the assets. The analysis system renders a fifth evaluation rating for the understanding of the guidelines regarding recovery from issues that affected the assets of a department based on disclosed data. The analysis system may render an overall evaluation rating for the understanding of the guidelines based on the first through fifth evaluation ratings.
- As another example, the analysis system 10 evaluates the understanding of the guidelines used to determine what assets should be included in the department, how the assets should be protected from issues, how issues that may affect or are affecting the assets are detected, how to respond to issues that may affect or are affecting the assets, and how the assets will recover from issues that may affect or are affecting them based on disclosed data. In this example, the analysis system renders an evaluation rating for the understanding of the guidelines regarding what assets should be in the department. The analysis system renders a second evaluation rating for the understanding of the guidelines regarding how the assets should be protected from issues. The analysis system renders a third evaluation rating for the understanding of the guidelines regarding how to detect issues that may affect or are affecting the assets.
- The analysis system renders a fourth evaluation rating for the understanding of the guidelines regarding how to respond to issues that may affect or are affecting the assets. The analysis system renders a fifth evaluation rating for the understanding of the guidelines regarding how to recover from issues that affected the assets of a department based on disclosed data. The analysis system may render an overall evaluation rating for the understanding based on the first through fifth evaluation ratings.
FIG. 51 is a diagram of an example of system aspects, evaluation aspects, evaluation rating metrics, and analysis system output options of an analysis system 10 for analyzing a system 11, or portion thereof. For instance, the analysis system 10 is evaluating the understanding of the system design for identifying assets, protecting the assets from issues, detecting issues that may affect or are affecting the assets, responding to issues that may affect or are affecting the assets, and recovering from issues that affected the assets of a department based on disclosed data.
- For this specific example, the analysis system 10 obtains disclosed data from the system regarding the system design associated with the assets of the department. From the disclosed data, the analysis system renders an evaluation rating for the understanding of the system design for identifying assets. The analysis system renders a second evaluation rating for the understanding of the system design regarding protection of the assets from issues. The analysis system renders a third evaluation rating for the understanding of the system design regarding detection of issues that may affect or are affecting the assets.
- The analysis system renders a fourth evaluation rating for the understanding of the system design regarding responding to issues that may affect or are affecting the assets. The analysis system renders a fifth evaluation rating for the understanding of the system design regarding recovery from issues that affected the assets of a department based on disclosed data. The analysis system may render an overall evaluation rating for the understanding based on the first through fifth evaluation ratings.
- As another example, the analysis system 10 evaluates the understanding of the system design used to determine what assets should be included in the department, how the assets should be protected from issues, how issues that may affect or are affecting the assets are detected, how to respond to issues that may affect or are affecting the assets, and how the assets will recover from issues that may affect or are affecting them based on disclosed data. In this example, the analysis system renders an evaluation rating for the understanding of the system design regarding what assets should be in the department. The analysis system renders a second evaluation rating for the understanding of the system design regarding how the assets should be protected from issues. The analysis system renders a third evaluation rating for the understanding of the system design regarding how to detect issues that may affect or are affecting the assets.
- The analysis system renders a fourth evaluation rating for the understanding of the system design regarding how to respond to issues that may affect or are affecting the assets. The analysis system renders a fifth evaluation rating for the understanding of the system design regarding how to recover from issues that affected the assets of a department based on disclosed data. The analysis system may render an overall evaluation rating for the understanding based on the first through fifth evaluation ratings.
FIG. 52 is a diagram of an example of system aspects, evaluation aspects, evaluation rating metrics, and analysis system output options of an analysis system 11 for analyzing a system 11, or portion thereof. For instance, analysis system 11 is evaluating the understanding of the guidelines, system requirements, and system design for identifying assets, protecting the assets from issues, detecting issues that may affect or are affecting the assets, responding to issues that may affect or are affecting the assets, and recovering from issues that affected the assets of a department based on disclosed data and discovered data. - For this specific example, the analysis system 10 obtains disclosed data and discovered data from the system regarding guidelines, system requirements, and system design associated with the assets of the department. From the disclosed data and discovered data, the analysis system renders one or more first evaluation ratings (e.g., one for each of guidelines, system requirements, and system design, or one for all three) for the understanding of the guidelines, system requirements, and system design for identifying assets. The analysis system renders one or more second evaluation ratings for the understanding of the guidelines, system requirements, and system design regarding protection of the assets from issues. The analysis system renders one or more third evaluation ratings for the understanding of the guidelines, system requirements, and system design regarding detection of issues that may affect or are affecting the assets. - The analysis system renders one or more fourth evaluation ratings for the understanding of the guidelines, system requirements, and system design regarding responding to issues that may affect or are affecting the assets. The analysis system renders one or more fifth evaluation ratings for the understanding of the guidelines, system requirements, and system design regarding recovery from issues that affected the assets of a department based on disclosed data. The analysis system may render an overall evaluation rating for the understanding based on the one or more first through one or more fifth evaluation ratings.
- The
analysis system 11 may further render an understanding evaluation rating regarding how well the discovered data correlates with the disclosed data. In other words, it evaluates the knowledge level of the system. In this example, the analysis system compares the disclosed data with the discovered data. If they substantially match, the understanding of the system would receive a relatively high evaluation rating. The more the disclosed data differs from the discovered data, the lower the understanding evaluation rating will be. - As another example, the
analysis system 11 evaluates the understanding of guidelines, system requirements, and system design used to determine what assets should be included in the department, how the assets should be protected from issues, how issues that may affect or are affecting the assets are detected, how to respond to issues that may affect or are affecting the assets, and how the assets will recover from issues that may affect or are affecting them, based on disclosed data and discovered data. In this example, the analysis system renders one or more first evaluation ratings for the understanding of the guidelines, system requirements, and system design regarding what assets should be in the department. The analysis system renders one or more second evaluation ratings for the understanding of the guidelines, system requirements, and system design regarding how the assets should be protected from issues. The analysis system renders one or more third evaluation ratings for the understanding of the guidelines, system requirements, and system design regarding how to detect issues that may affect or are affecting the assets. - The analysis system renders one or more fourth evaluation ratings for the understanding of the guidelines, system requirements, and system design regarding how to respond to issues that may affect or are affecting the assets. The analysis system renders one or more fifth evaluation ratings for the understanding of the guidelines, system requirements, and system design regarding how to recover from issues that affected the assets of a department based on disclosed data. The analysis system may render an overall evaluation rating for the understanding of the guidelines, system requirements, and system design based on the one or more first through the one or more fifth evaluation ratings.
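- The disclosed-versus-discovered comparison described above lends itself to a simple illustration: the less the discovered data overlaps with the disclosed data, the lower the understanding evaluation rating. In the sketch below, the set-of-facts data model, the overlap measure, and the 0-100 scale are illustrative assumptions, not the claimed rating calculation.

    # Hypothetical sketch: rate how well disclosed data matches discovered data.
    # Each view of the system is modeled as a set of (attribute, value) facts;
    # the 0-100 scale and the overlap measure are assumptions.

    def understanding_rating(disclosed, discovered):
        """Return 100 when the two views match exactly, lower as they diverge."""
        if not disclosed and not discovered:
            return 100.0
        matched = len(disclosed & discovered)
        combined = len(disclosed | discovered)
        return 100.0 * matched / combined

    disclosed = {("x-type computers", 12), ("encryption", "128-bit AES")}
    discovered = {("x-type computers", 12), ("encryption", "128-bit AES"),
                  ("z-type computers", 3)}
    print(round(understanding_rating(disclosed, discovered), 1))   # 66.7 -> partial knowledge of the system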
-
FIG. 53 is a diagram of an example of system aspects, evaluation aspects, evaluation rating metrics, and analysis system output options of an analysis system 11 for analyzing a system 11, or portion thereof. For instance, analysis system 11 is evaluating the implementation for and operation of identifying assets of a department, protecting the assets from issues, detecting issues that may affect or are affecting the assets, responding to issues that may affect or are affecting the assets, and recovering from issues that affected the assets per the guidelines, system requirements, system design, system build, and resulting system based on disclosed data and discovered data. - For this specific example, the
analysis system 10 obtains disclosed data and discovered data from the system regarding the guidelines, system requirements, system design, system build, and resulting system associated with the assets of the department. From the disclosed data and discovered data, the analysis system renders one or more first evaluation ratings (e.g., one for each of guidelines, system requirements, system design, system build, resulting system with respect to each of implementation and operation or one for all of them) for the implementation and operation of identifying the assets per the guidelines, system requirements, system design, system build, and resulting system. The analysis system renders one or more second evaluation ratings for the implementation and operation of protecting the assets from issues per the guidelines, system requirements, system design, system build, and resulting system. - The analysis system renders one or more third evaluation ratings for the implementation and operation of detecting issues that may affect or are affecting the assets per the guidelines, system requirements, system design, system build, and resulting system. The analysis system renders one or more fourth evaluation ratings for the implementation and operation of responding to issues that may affect or are affecting the assets per the guidelines, system requirements, system design, system build, and resulting system.
- The analysis system renders one or more fifth evaluation ratings for the implementation and operation of recovering from issues that may affect or are affecting the assets per the guidelines, system requirements, system design, system build, and resulting system. The analysis system may render an overall evaluation rating for the implementation and/or performance based on the one or more first through one or more fifth evaluation ratings.
-
FIG. 54 is a diagram of an example of system aspects, evaluation aspects, evaluation rating metrics, and analysis system output options of an analysis system 11 for analyzing a system 11, or portion thereof. For instance, analysis system 11 is evaluating the implementation for and operation of identifying assets of a department, protecting the assets from issues, detecting issues that may affect or are affecting the assets, responding to issues that may affect or are affecting the assets, and recovering from issues that affected the assets per the guidelines, system requirements, system design, system build, and resulting system based on discovered data and desired data. - For this specific example, the analysis system 10 obtains discovered data and desired data from the system regarding the guidelines, system requirements, system design, system build, and resulting system associated with the assets of the department. From the discovered data and desired data, the analysis system renders one or more first evaluation ratings (e.g., one for each of guidelines, system requirements, system design, system build, and resulting system with respect to each of implementation and operation, or one for all of them) for the implementation and operation of identifying the assets per the guidelines, system requirements, system design, system build, and resulting system. The analysis system renders one or more second evaluation ratings for the implementation and operation of protecting the assets from issues per the guidelines, system requirements, system design, system build, and resulting system. - The analysis system renders one or more third evaluation ratings for the implementation and operation of detecting issues that may affect or are affecting the assets per the guidelines, system requirements, system design, system build, and resulting system. The analysis system renders one or more fourth evaluation ratings for the implementation and operation of responding to issues that may affect or are affecting the assets per the guidelines, system requirements, system design, system build, and resulting system.
- The analysis system renders one or more fifth evaluation ratings for the implementation and operation of recovering from issues that may affect or are affecting the assets per the guidelines, system requirements, system design, system build, and resulting system. The analysis system may render an overall evaluation rating for the implementation and/or performance based on the one or more first through one or more fifth evaluation ratings.
- The
analysis system 11 may further render an implementation and/or operation evaluation rating regarding how well the discovered data correlates with the desired data. In other words, it evaluates the level of implementation and operation of the system. In this example, the analysis system compares the discovered data with the desired data. If they substantially match, the implementation and/or operation of the system would receive a relatively high evaluation rating. The more the discovered data differs from the desired data, the lower the implementation and/or operation evaluation rating will be. -
FIG. 55 is a diagram of an example of system aspects, evaluation aspects, evaluation rating metrics, and analysis system output options of an analysis system 11 for analyzing a system 11, or portion thereof. For instance, analysis system 11 is evaluating the system's self-evaluation for identifying assets, protecting the assets from issues, detecting issues that may affect or are affecting the assets, responding to issues that may affect or are affecting the assets, and recovering from issues that affected the assets of a department based on disclosed data and discovered data per the guidelines, system requirements, and system design. - For this specific example, the analysis system 10 obtains disclosed data and discovered data from the system regarding the guidelines, system requirements, and system design associated with the assets of the department. From the disclosed data and discovered data, the analysis system renders one or more first evaluation ratings (e.g., one for each of guidelines, system requirements, and system design, or one for all three) for the self-evaluation of identifying assets per the guidelines, system requirements, and system design. For instance, what resources does the system have with respect to its guidelines, system requirements, and/or system design for self-identification of assets? - The analysis system renders one or more second evaluation ratings for the self-evaluation of protecting the assets from issues per the guidelines, system requirements, and system design. The analysis system renders one or more third evaluation ratings for the self-evaluation of detecting issues that may affect or are affecting the assets per the guidelines, system requirements, and system design.
- The analysis system renders one or more fourth evaluation ratings for the self-evaluation of responding to issues that may affect or are affecting the assets per the guidelines, system requirements, and system design. The analysis system renders one or more fifth evaluation ratings for the self-evaluation of recovering from issues that affected the assets per the guidelines, system requirements, and system design. The analysis system may render an overall evaluation rating for the self-evaluation based on the one or more first through one or more fifth evaluation ratings.
-
FIG. 56 is a diagram of an example of system aspects, evaluation aspects, evaluation rating metrics, and analysis system output options of an analysis system 11 for analyzing a system 11, or portion thereof. For instance, analysis system 11 is evaluating the understanding of the guidelines, system requirements, system design, system build, and resulting system for identifying assets, protecting the assets from issues, detecting issues that may affect or are affecting the assets, responding to issues that may affect or are affecting the assets, and recovering from issues that affected the assets of a department based on disclosed data and discovered data. - For this specific example, the
analysis system 10 obtains disclosed data and discovered data from the system regarding guidelines, system requirements, system design, system build, and resulting system associated with the assets of the department. As a specific example, the disclosed data includes guidelines that certain types of data shall be encrypted; a system requirement that specifies 128-bit Advanced Encryption Standard (AES) for “y” types of documents; a system design that includes 12 “x” type computers that are to be loaded with 128-bit AES software by company “M”, version 2.0 or newer; and a system build and resulting system that includes 12 “x” type computers that have 128-bit AES software by company “M”, version 2.1. - For this specific example, the discovered data includes the same guideline as the disclosed data; a first system requirement that specifies 128-bit Advanced Encryption Standard (AES) for “y” types of documents and a second system requirement that specifies 256-bit Advanced Encryption Standard (AES) for “A” types of documents; a system design that includes 12 “x” type computers that are to be loaded with 128-bit AES software by company “M”, version 2.0 or newer, and 3 “z” type computers that are to be loaded with 256-bit AES software by company “N” version 3.0 or newer; and a system build and resulting system that includes 10 “x” type computers that have 128-bit AES software by company “M” version 2.1, 2 “x” type computers that have 128-bit AES software by company “M” version 1.3, 2 “z” type computers that have 256-bit AES software by company “N” version 3.1, and 1 “z” type computer that has 256-bit AES software by company “K” version 0.1.
- From just the disclosed data, the analysis system would render a relatively high evaluation rating for the understanding of the guidelines, system requirements, system design, system build, and resulting system associated with the assets of the department. The relatively high evaluation rating would be warranted since the system build and resulting system included what was in the system design (e.g., 12 "x" type computers that have 128-bit AES software by company "M", version 2.1). Further, the system design is consistent with the system requirements (e.g., 128-bit Advanced Encryption Standard (AES) for "y" types of documents), which is consistent with the guidelines (e.g., certain types of data shall be encrypted).
- From the discovered data, however, the analysis system would render a relatively low evaluation rating for the understanding of the guidelines, system requirements, system design, system build, and resulting system associated with the assets of the department. The relatively low evaluation rating would be warranted since the system build and resulting system is not consistent with the system design (e.g., is missing 2 “x” type computers with the right encryption software, only has 2 “z” type computers with the right software, and has a “z” type computer with the wrong software).
- The analysis system would also process the evaluation ratings from the disclosed data and from the discovered data to produce an overall evaluation rating for the understanding of the guidelines, system requirements, system design, system build, and resulting system associated with the assets of the department. In this instance, the disclosed data does not substantially match the discovered data, which indicates a lack of understanding of what's really in the system (i.e., knowledge of the system). Further, since the evaluation rating from the discovered data was low, the analysis system would produce a low overall evaluation rating for the understanding.
-
FIG. 57 is a diagram of an extension of the example ofFIG. 56 . In this example, the analysis system processes the data and/or evaluation ratings to identify deficiencies and/or auto-corrections of at least some of the deficiencies. As shown, the disclosed data includes: -
- guidelines that certain types of data shall be encrypted;
- a system requirement that specifies 128-bit Advanced Encryption Standard (AES) for “y” types of documents;
- a system design that includes 12 “x” type computers that are to be loaded with 128-bit AES software by company “M”, version 2.0 or newer; and
- a system build and resulting system that includes 12 “x” type computers that have 128-bit AES software by company “M”, version 2.1.
- As is also shown, the discovered data includes:
-
- the same guideline as the disclosed data;
- a first system requirement that specifies 128-bit Advanced Encryption Standard (AES) for “y” types of documents and a second system requirement that specifies 256-bit Advanced Encryption Standard (AES) for “A” types of documents;
- a system design that includes 12 “x” type computers that are to be loaded with 128-bit AES software by company “M”, version 2.0 or newer, and 3 “z” type computers that are to be loaded with 256-bit AES software by company “N”, version 3.0 or newer; and
- a system build and resulting system that includes:
- 10 “x” type computers that have 128-bit AES software by company “M”, version 2.1;
- 2 “x” type computers that have 128-bit AES software by company “M”, version 1.3;
- 2 “z” type computers that have 256-bit AES software by company “N”, version 3.1; and
- 1 “z” type computer that has 256-bit AES software by company “K”, version 0.1.
- From this data, the analysis system identifies
deficiencies 232 and, when possible, provides auto-corrections 235. For example, the analysis system determines that the system requirements also included a requirement for 256-bit AES for "A" type documents. The analysis system can auto-correct this deficiency by updating the knowledge of the system to include the missing requirement. This may include updating one or more policies, one or more processes, one or more procedures, and/or updating documentation. - As another example, the analysis system identifies the deficiency that the design further included 3 "z" type computers that are to be loaded with 256-bit AES software by company "N", version 3.0 or newer. The analysis system can auto-correct this deficiency by updating the knowledge of the system to include the three "z" type computers with the correct software. Again, this may include updating one or more policies, one or more processes, one or more procedures, and/or updating documentation.
- As another example, the analysis system identifies the deficiency of 2 “x” type computers having old versions of the encryption software (e.g., have version 1.3 of company M's 128-bit AES software instead of a version 2.0 or newer). The analysis system can auto-correct this deficiency by updating the version of software for the two computers.
- As another example, the analysis system identifies the deficiency that 1 "z" type computer has the wrong encryption software (e.g., it has version 0.1 from company K and not version 3.0 or newer from company N). The analysis system can auto-correct this deficiency by replacing the wrong encryption software with the correct encryption software.
- As another example, the analysis system identifies the deficiency that 1 "z" type computer is missing from the system. The analysis system cannot auto-correct this deficiency since it involves missing hardware. In this instance, the analysis system notifies a system admin of the missing computer.
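- The deficiency handling just described can be pictured with a brief sketch: the discovered build is checked against the discovered system design, software shortfalls are queued as auto-corrections, and missing hardware is reported to an administrator instead. The record layout, the counts, and the variable names below are illustrative assumptions, not the disclosed implementation.

    # Hypothetical sketch of FIG. 57-style deficiency handling: compare the
    # discovered system design against the discovered build, auto-correct
    # software deficiencies, and report hardware deficiencies that cannot be
    # corrected automatically. All data values are illustrative.

    design = {        # computer type -> (required count, required vendor, minimum version)
        "x": (12, "M", 2.0),
        "z": (3, "N", 3.0),
    }
    build = ([("x", "M", 2.1)] * 10 + [("x", "M", 1.3)] * 2 +
             [("z", "N", 3.1), ("z", "K", 0.1)])   # (type, vendor, version) per machine found

    deficiencies, auto_corrections = [], []
    for ctype, (count, vendor, min_ver) in design.items():
        found = [c for c in build if c[0] == ctype]
        if len(found) < count:                     # missing hardware: cannot auto-correct
            deficiencies.append(f"{count - len(found)} '{ctype}' computer(s) missing; notify admin")
        for _, v, ver in found:
            if v != vendor:                        # wrong software: replace it
                auto_corrections.append(f"replace '{ctype}' software ({v} {ver} -> {vendor} >= {min_ver})")
            elif ver < min_ver:                    # out-of-date software: update it
                auto_corrections.append(f"update '{ctype}' software {ver} -> >= {min_ver}")

    print(deficiencies)        # one 'z' computer reported as missing
    print(auto_corrections)    # two version updates and one software replacement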
-
FIG. 58 is a schematic block diagram of an embodiment of an evaluation processing module 254 that includes a plurality of comparators 360-362, a plurality of analyzers 363-365, and a deficiency correction module 366. In general, the evaluation processing module 254 identifies deficiencies 232 and, when possible, determines auto-corrections 235 from the ratings 219 and/or inputted data (e.g., disclosed data, discovered data, and/or desired data) based on evaluation parameters 266 (e.g., disclosed to discovered deficiency criteria 368, discovered to desired deficiency criteria 370, disclosed to desired deficiency criteria 372, disclosed to discovered compare criteria 373, discovered to desired compare criteria 374, and disclosed to desired compare criteria 375). - In an example, comparator 360 compares disclosed data and/or ratings 338 and discovered data and/or ratings 339 based on the disclosed to discovered compare criteria 373 to produce, if any, one or more disclosed to discovered differences 367. As a more specific example, the analysis system evaluates disclosed, discovered, and/or desired data to produce one or more evaluation ratings regarding the understanding of the guidelines, system requirements, system design, system build, and resulting system associated with identifying the assets of the department. - Each of the disclosed data, discovered data, and desired data includes data regarding the guidelines, system requirements, system design, system build, and/or resulting system associated with identifying the assets of the department and/or the assets of the department. Recall that disclosed data is the known data of the system at the outset of an analysis, which is typically supplied by a system administrator and/or is obtained from data files of the system. The discovered data is the data discovered about the system by the analysis system during the analysis. The desired data is the data obtained by the analysis system from system proficiency resources regarding desired guidelines, system requirements, system design, system build, and/or system operation.
- For the understanding of the guidelines, system requirements, system design, system build, and resulting system associated with identifying the assets of the department, the analysis system may produce one or more evaluation ratings. For example, the analysis system produces an evaluation rating for:
-
- understanding of the guidelines with respect to identifying assets of the department from the disclosed data;
- understanding of the guidelines with respect to identifying assets of the department from the discovered data;
- understanding of the guidelines with respect to identifying assets of the department from the desired data;
- understanding of the system requirements with respect to identifying assets of the department from the disclosed data;
- understanding of the system requirements with respect to identifying assets of the department from the discovered data;
- understanding of the system requirements with respect to identifying assets of the department from the desired data;
- understanding of the system design with respect to identifying assets of the department from the disclosed data;
- understanding of the system design with respect to identifying assets of the department from the discovered data;
- understanding of the system design with respect to identifying assets of the department from the desired data;
- understanding of the system build with respect to identifying assets of the department from the disclosed data;
- understanding of the system build with respect to identifying assets of the department from the discovered data;
- understanding of the system build with respect to identifying assets of the department from the desired data;
- understanding of the resulting system with respect to identifying assets of the department from the disclosed data;
- understanding of the resulting system with respect to identifying assets of the department from the discovered data;
- understanding of the resulting system with respect to identifying assets of the department from the desired data; and/or
- an overall understanding of identifying the assets of the department.
- The disclosed to discovered compare
criteria 373 specifies the evaluation ratings to be compared and/or which data of the disclosed data is to be compared to data of the discovered data. For example, the disclosed to discovered compare criteria 373 indicates that the "understanding of the system design with respect to identifying assets of the department from the disclosed data" is to be compared to the "understanding of the system design with respect to identifying assets of the department from the discovered data". As another example, the disclosed to discovered compare criteria 373 indicates that data regarding system design of the disclosed data is to be compared with the data regarding the system design of the discovered data. - In accordance with the disclosed to discovered compare criteria 373 and for this specific example, the comparator 360 compares the "understanding of the system design with respect to identifying assets of the department from the disclosed data" with the "understanding of the system design with respect to identifying assets of the department from the discovered data" to produce, if any, one or more understanding differences. The comparator 360 also compares the data regarding system design of the disclosed data with the data regarding the system design of the discovered data to produce, if any, one or more data differences. The comparator 360 outputs the one or more understanding differences and/or the one or more data differences as the disclosed to discovered differences 367. - The
analyzer 363 analyzes the disclosed to discovered differences 367 in accordance with the disclosed to discovered deficiency criteria 368 to determine whether a difference 367 constitutes a deficiency. If so, the analyzer 363 includes it in the disclosed to discovered deficiencies 232-1. The disclosed to discovered deficiency criteria 368 correspond to the disclosed to discovered compare criteria 373 and specify how the differences 367 are to be analyzed to determine if they constitute deficiencies 232-1. - As an example, the disclosed to discovered
deficiency criteria 368 specify a series of comparative thresholds based on the impact the differences have on the system. The range of impact is from none to significant, with as many granular levels in between as desired. For differences that have a significant impact on the system, the comparative threshold is set to trigger a deficiency for virtually any difference. For example, if the difference regards system security, then the threshold is set such that any difference is a deficiency. - As another example, if the difference regards inconsequential information, then the threshold is set to not identify the difference as a deficiency. For example, the discovered data includes a PO date of Nov. 2, 2020 for a specific purchase order and the disclosed data didn't include a PO date, but the rest of the information regarding the PO is the same for the disclosed and discovered data. In this instance, the missing PO date is inconsequential and would not be identified as a deficiency.
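- A small sketch of the impact-graded criteria described above follows: each difference is mapped to an impact level, and only differences at or above a configured level are flagged as deficiencies. The impact scale, field names, and example records are hypothetical.

    # Hypothetical sketch: impact-graded deficiency criteria. A difference
    # becomes a deficiency only if its impact level meets the configured
    # threshold. The impact levels and example differences are illustrative.

    IMPACT = {"none": 0, "low": 1, "moderate": 2, "significant": 3}

    def is_deficiency(difference, threshold="low"):
        return IMPACT[difference["impact"]] >= IMPACT[threshold]

    differences = [
        {"field": "encryption_software_version", "impact": "significant"},  # security-related
        {"field": "po_date", "impact": "none"},                             # inconsequential
    ]
    print([d["field"] for d in differences if is_deficiency(d)])
    # ['encryption_software_version'] -- the missing PO date is not flagged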
- The
deficiency correction module 366 receives the disclosed to discovered deficiencies 232-1, if any, and determines whether one or more of the deficiencies 232-1 can be auto-corrected to produce an auto-correction 235. In many instances, software deficiencies are auto-correctable (e.g., wrong software, missing software, out-of-date software, etc.) while hardware deficiencies are not auto-correctable (e.g., wrong computing device, missing computing device, missing network connection, etc.). - The
comparator 361 functions similarly to the comparator 360 to produce discovered to desired differences 369 based on the discovered data and/or rating 339 and the desired data and/or rating 340 in accordance with the discovered to desired compare criteria 374. The analyzer 364 functions similarly to the analyzer 363 to produce discovered to desired deficiencies 232-2 from the discovered to desired differences 369 in accordance with the discovered to desired deficiency criteria 370. The deficiency correction module 366 auto-corrects, when possible, the discovered to desired deficiencies 232-2 to produce auto-corrections 235. - The comparator 362 functions similarly to the comparator 360 to produce disclosed to desired differences 371 based on the disclosed data and/or rating 338 and the desired data and/or rating 340 in accordance with the disclosed to desired compare criteria 375. The analyzer 365 functions similarly to the analyzer 363 to produce disclosed to desired deficiencies 232-3 from the disclosed to desired differences 371 in accordance with the disclosed to desired deficiency criteria 372. The deficiency correction module 366 auto-corrects, when possible, the disclosed to desired deficiencies 232-3 to produce auto-corrections 235. - While the examples were for the understanding of the system with respect to identifying assets of the department, the evaluation processing module 254 processes any combination of system aspects, evaluation aspects, and evaluation metrics in a similar manner. For example, the evaluation processing module 254 processes the implementation of the system with respect to identifying assets of the department to identify deficiencies 232 and auto-corrections in the implementation. As another example, the evaluation processing module 254 processes the operation of the system with respect to identifying assets of the department to identify deficiencies 232 and auto-corrections in the operation of the system. -
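- The comparator/analyzer/deficiency-correction flow of FIG. 58 can be summarized with a brief sketch in which all three comparisons (disclosed-to-discovered, discovered-to-desired, and disclosed-to-desired) share one code path. The class and function names below are illustrative assumptions, not the module interfaces themselves.

    # Hypothetical sketch of the FIG. 58 flow: a comparator produces differences,
    # an analyzer keeps only those that meet the deficiency criteria, and a
    # correction step splits auto-correctable deficiencies from those that must
    # be reported. Names and data values are illustrative assumptions.

    from dataclasses import dataclass

    @dataclass
    class Difference:
        key: str
        left: object
        right: object

    def compare(left, right):
        """Comparator: report keys whose values differ between two data views."""
        keys = set(left) | set(right)
        return [Difference(k, left.get(k), right.get(k))
                for k in keys if left.get(k) != right.get(k)]

    def analyze(differences, criteria):
        """Analyzer: keep only differences that the deficiency criteria flag."""
        return [d for d in differences if criteria(d)]

    def correct(deficiencies, auto_correctable):
        """Deficiency correction: split auto-corrections from reportable items."""
        auto = [d for d in deficiencies if auto_correctable(d)]
        report = [d for d in deficiencies if not auto_correctable(d)]
        return auto, report

    # Example: disclosed versus discovered view of one portion of a system.
    disclosed = {"aes_version": 2.1, "device_count": 12}
    discovered = {"aes_version": 1.3, "device_count": 11}

    diffs = compare(disclosed, discovered)
    found = analyze(diffs, criteria=lambda d: True)   # treat every difference as material
    auto, report = correct(found, auto_correctable=lambda d: d.key == "aes_version")
    print([d.key for d in auto], [d.key for d in report])   # ['aes_version'] ['device_count']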
FIG. 59 is a state diagram of an example of the analysis system analyzing a system. From a start state 380, the analysis proceeds to an understanding of the system state 381 or to a test operations of the assets, system functions, and/or security functions of a system state 386 based on the desired analysis to be performed. For testing the understanding, the analysis proceeds to state 381 where the understanding of the assets, system functions, and/or security functions of the system is evaluated. This may be done via documentation of the system, policies of the supported business, based upon a question and answer session with personnel of the owner/operator of the system, and/or as discussed herein. - If the understanding of the system is inadequate, the analysis proceeds to the determine deficiencies in the understanding of the system state 382. In this state 382, the deficiencies in understanding are determined by processing differences and/or as discussed herein. - From state 382, corrections required in understanding the system are identified and operation proceeds to state 383 in which a report is generated regarding understanding deficiencies and/or corrective measures to be taken. In addition, a report is generated and sent to the owner/operator of the other system. If there are no understanding deficiencies and/or corrective measures, no auto correction is needed, and operations are complete at the done state. - If an autocorrect can be done, operation proceeds to state 384 where the analysis system updates a determined ability to understand the other system. Corrections are then implemented, and operation proceeds back to state 381. Note that corrections may be automatically performed for some deficiencies but not others, depending upon the nature of the deficiency. - From state 381, if the tested understanding of the system is adequate, operation proceeds to state 385 where a report is generated regarding an adequate understanding of the system and the report is sent. From state 385, if operation is complete, operations proceed to the done state. Alternately, from state 385 operation may proceed to state 386 where testing of the assets, system functions, and/or security functions of the other system is performed. If testing of the assets, system functions, and/or security functions of the system results in an adequate test result, operation proceeds to state 390 where a report is generated indicating adequate implementation and/or operation of the system and the report is sent. - Alternately, at state 386, if the testing of the system results in an inadequate result, operations proceed to state 387 where deficiencies in the assets, system functions, and/or security functions of the system are tested. At state 387, differences are compared to identify deficiencies in the assets, system functions, and/or security functions. The analysis then proceeds from state 387 to state 388 where a report is generated regarding corrective measures to be taken in response to the assets, system functions, and/or security functions deficiencies. The report is then sent to the owner/operator. If there are no deficiencies and/or corrective measures, no auto correction is needed, and operations are complete at the done state. If autocorrect is required, operation proceeds to state 389 where the analysis system updates assets, system functions, and/or security functions of the system. Corrections are then implemented and the analysis proceeds to state 386. Note that corrections may be automatically performed for some deficiencies but not others, depending upon the nature of the deficiency. -
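- One way to picture the FIG. 59 flow in code is as a transition table; the outcome labels and the dictionary layout below are illustrative assumptions, while the state numbers follow the description above.

    # Hypothetical sketch of the FIG. 59 state flow as a transition table: each
    # state maps an outcome to the next state. State numbers follow the
    # description; the outcome labels are assumptions.

    TRANSITIONS = {
        "start(380)": {"test_understanding": "evaluate_understanding(381)",
                       "test_operation": "test_system(386)"},
        "evaluate_understanding(381)": {"inadequate": "determine_deficiencies(382)",
                                        "adequate": "report_adequate_understanding(385)"},
        "determine_deficiencies(382)": {"corrections_identified": "report_understanding(383)"},
        "report_understanding(383)": {"auto_correct": "update_understanding(384)",
                                      "no_correction": "done"},
        "update_understanding(384)": {"corrected": "evaluate_understanding(381)"},
        "report_adequate_understanding(385)": {"complete": "done",
                                               "continue": "test_system(386)"},
        "test_system(386)": {"adequate": "report_adequate_system(390)",
                             "inadequate": "determine_system_deficiencies(387)"},
        "determine_system_deficiencies(387)": {"identified": "report_system(388)"},
        "report_system(388)": {"auto_correct": "update_system(389)",
                               "no_correction": "done"},
        "update_system(389)": {"corrected": "test_system(386)"},
    }

    def step(state, outcome):
        """Advance the analysis one transition; unknown transitions end at 'done'."""
        return TRANSITIONS.get(state, {}).get(outcome, "done")

    # Example walk: an inadequate understanding that is auto-corrected and re-tested.
    state = "start(380)"
    for outcome in ("test_understanding", "inadequate", "corrections_identified",
                    "auto_correct", "corrected"):
        state = step(state, outcome)
    print(state)   # evaluate_understanding(381)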
FIG. 60 is a logic diagram of an example of an analysis system analyzing a system, or portion thereof. The method includes the analysis system obtaining system proficiency understanding data regarding the assets of the system (step 400) and obtaining data regarding the owner/operator's understanding of the assets (step 401). System proficiencies of step 400 include industry best practices and regulatory requirements, for example. The data obtained from the system at step 401 is based upon data received regarding the system or received by probing the system. - The data collected at steps 400 and 401 is compared. If the comparison is favorable, as determined at step 403, meaning that the system proficiency understanding compares favorably to the data regarding understanding, operation is complete, a report is generated (step 412), and the report is sent (step 413). If the comparison is not favorable, as determined at step 403, operation continues with identifying deficiencies in the understanding of the system (step 404), identifying corrective measures (step 405), generating a corresponding report (step 412), and sending the report (step 413). - The method also includes the analysis system obtaining system proficiency understanding data of the system functions and/or security functions implementation and/or operation of the system (step 406) and obtaining data regarding the owner/operator's understanding of the system functions and/or security functions implementation and/or operation of the system (step 407). System proficiencies of step 406 include industry best practices and regulatory requirements, for example. The data obtained from the system at step 407 is based upon data received regarding the system or received by probing the system. - The data collected at steps 406 and 407 is compared. If the comparison is favorable, as determined at step 415, meaning that the system proficiency understanding compares favorably to the data regarding understanding, operation is complete, a report is generated (step 412), and the report is sent (step 413). If the comparison is not favorable, as determined at step 415, operation continues with identifying deficiencies in the understanding of the system (step 416), identifying corrective measures (step 417), generating a corresponding report (step 412), and sending the report (step 413). - The method further includes the analysis system comparing the understanding of the physical structure (obtained at step 401) with the understanding of the system functions and/or security functions implementation and/or operation (obtained at step 406) at step 408. Step 408 essentially determines whether the understanding of the assets corresponds with the understanding of the system functions and/or security functions of the implementation and/or operation of the system. If the comparison is favorable, as determined at step 409, a report is generated (step 412), and the report is sent (step 413). If the comparison is not favorable, as determined at step 409, the method continues with identifying imbalances in the understanding (step 410), identifying corrective measures (step 411), generating a corresponding report (step 412), and sending the report (step 413). -
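- As a rough sketch of the comparisons in FIG. 60, proficiency data (best practices, regulatory requirements) can be checked against the owner/operator's understanding data, with unmet items reported as deficiencies and corrective measures; the set-based data model, the favorability threshold, and the report fields are assumptions.

    # Hypothetical sketch of a FIG. 60-style comparison: system proficiency data
    # is compared with the owner/operator's understanding; items not covered by
    # the understanding become deficiencies with suggested corrective measures.
    # The data model and the 90% favorability threshold are assumptions.

    def compare_understanding(proficiency, understanding, favorable_ratio=0.9):
        missing = sorted(proficiency - understanding)
        coverage = 1.0 if not proficiency else len(proficiency & understanding) / len(proficiency)
        report = {"favorable": coverage >= favorable_ratio, "coverage": round(coverage, 2)}
        if not report["favorable"]:
            report["deficiencies"] = missing
            report["corrective_measures"] = [f"document and train on '{m}'" for m in missing]
        return report

    proficiency = {"asset inventory", "encryption policy", "incident response plan"}
    understanding = {"asset inventory", "encryption policy"}
    print(compare_understanding(proficiency, understanding))
    # coverage 0.67 -> not favorable; 'incident response plan' is reported as a deficiency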
FIG. 61 is a logic diagram of another example of an analysis system analyzing a system, or portion thereof. The method begins at step 420 where the analysis system determines a system evaluation mode (e.g., assets, system functions, and/or security functions) for analysis. The method continues at step 421 where the analysis system determines a system evaluation level (e.g., the system or a portion thereof). For instance, the analysis system identifies one or more system elements for evaluation. - The method continues at step 422 where the analysis system determines an analysis perspective (e.g., understanding, implementation, operation, and/or self-evaluate). The method continues at step 423 where the analysis system determines an analysis viewpoint (e.g., disclosed, discovered, and/or desired). The method continues at step 424 where the analysis system determines a desired output (e.g., evaluation rating, deficiencies, and/or auto-corrections). - The method continues at step 425 where the analysis system determines what data to gather based on the preceding determinations. The method continues at step 426 where the analysis system gathers data in accordance with the determination made in step 425. The method continues at step 427 where the analysis system determines whether the gathered data is to be pre-processed. - If yes, the method continues at step 428 where the analysis system determines data pre-processing functions (e.g., normalize, parse, tag, and/or de-duplicate). The method continues at step 429 where the analysis system pre-processes the data based on the pre-processing functions to produce pre-processed data. Whether the data is pre-processed or not, the method continues at step 430 where the analysis system determines one or more evaluation categories (e.g., identify, protect, detect, respond, and/or recover) and/or sub-categories for evaluation. Note that this may be done prior to step 425 and be part of determining the data to gather. - The method continues at step 431 where the analysis system analyzes the data in accordance with the determined evaluation categories and in accordance with a selected evaluation metric (e.g., process, policy, procedure, automation, certification, and/or documentation) to produce analysis results. The method continues at step 432 where the analysis system processes the analysis results to produce the desired output (e.g., evaluation rating, deficiencies, and/or auto-correct). The method continues at step 433 where the analysis system determines whether to end the method or repeat it for another analysis of the system. -
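- The determinations of FIG. 61 amount to assembling an analysis configuration and then running gather, optional pre-processing, analysis, and output stages. In the sketch below the enumerations, the field names, and the placeholder scoring are illustrative assumptions, not the claimed method.

    # Hypothetical sketch of the FIG. 61 flow: select mode, level, perspective,
    # viewpoint, output, categories, and metric, then gather, optionally
    # pre-process, analyze, and produce the desired output. The enumerations and
    # the placeholder scoring are illustrative assumptions.

    from dataclasses import dataclass

    @dataclass
    class AnalysisConfig:
        mode: str = "security functions"     # assets | system functions | security functions
        level: str = "department"            # the system or a portion thereof
        perspective: str = "understanding"   # understanding | implementation | operation | self-evaluate
        viewpoint: str = "discovered"        # disclosed | discovered | desired
        output: str = "evaluation rating"    # evaluation rating | deficiencies | auto-corrections
        categories: tuple = ("identify", "protect", "detect", "respond", "recover")
        metric: str = "process"              # process | policy | procedure | automation | certification | documentation

    def analyze_category(data, category):
        # Placeholder scoring: fraction of gathered items tagged with this category.
        relevant = [d for d in data if category in d.get("tags", ())]
        return round(100.0 * len(relevant) / max(len(data), 1), 1)

    def run_analysis(cfg, gather, preprocess=None):
        data = gather(cfg)                   # gather per the determinations
        if preprocess:                       # optional pre-processing
            data = preprocess(data)
        results = {c: analyze_category(data, c) for c in cfg.categories}
        return {"output": cfg.output, "results": results}

    def sample_gather(cfg):
        return [{"tags": ("detect",)}, {"tags": ("detect", "respond")}, {"tags": ("recover",)}]

    print(run_analysis(AnalysisConfig(), sample_gather)["results"]["detect"])   # 66.7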
FIG. 62 is a logic diagram of another example of an analysis system analyzing a system or portion thereof. The method begins at step 440 where the analysis system determines physical assets of the system, or portion thereof, to analyze (e.g., assets in the resulting system). Recall that a physical asset is a computing entity, a computing device, a user software application, a system software application (e.g., operating system, etc.), a software tool, a network software application, a security software application, a system monitoring software application, and the like. - The method continues at step 441 where the analysis system ascertains implementation of the system, or portion thereof (e.g., assets designed to be, and/or built, in the system). The method continues at step 442 where the analysis system correlates components of the assets to components of the implementation (e.g., do the assets of the actual system correlate with assets designed/built to be in the system). - The method continues at step 443 where the analysis system scores the components of the physical assets in accordance with the mapped components of the implementation. For example, the analysis system scores how well the assets of the actual system correlate with assets designed/built to be in the system. The scoring may be based on one or more evaluation metrics (e.g., process, policy, procedure, automation, certification, and/or documentation). The method continues at step 444 where the analysis system performs a function on the scores to obtain a result (e.g., an evaluation rating, identified deficiencies, and/or auto-correction of deficiencies). - The method continues at step 445 where the analysis system determines whether the result is equal to or greater than a target result (e.g., the evaluation rating is a certain value). If yes, the method continues at step 446 where the analysis system indicates that the system, or portion thereof, passes this particular test. If the results are less than the target result, the method continues at step 447 where the analysis system identifies vulnerabilities in the physical assets and/or in the implementation. For example, the analysis system determines that a security software application is missing from several computing devices in the system, or portion thereof, being analyzed. - The method continues at step 448 where the analysis system determines, if possible, corrective measures of the identified vulnerabilities. The method continues at step 449 where the analysis system determines whether the corrective measures can be done automatically. If not, the method continues at step 451 where the analysis system reports the corrective measures. If yes, the method continues at step 450 where the analysis system auto-corrects the vulnerabilities. -
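- The correlate, score, compare-to-target, and correct pattern of FIG. 62 (and, with different inputs, FIGS. 63-67) can be sketched as follows; the scoring rule, the target of 90, and the convention that software items are auto-correctable are illustrative assumptions.

    # Hypothetical sketch of the FIG. 62 pattern: correlate actual physical
    # assets with the assets the implementation says should exist, score the
    # correlation, and either pass, auto-correct, or report corrective measures.
    # The same skeleton applies to operation, system functions, and security
    # functions (FIGS. 63-67) by changing what is correlated. Names are assumptions.

    def analyze(actual, intended, target=90.0):
        # Correlate: which intended components are present exactly as specified?
        matched = [k for k, v in intended.items() if actual.get(k) == v]
        score = round(100.0 * len(matched) / max(len(intended), 1), 1)
        if score >= target:
            return {"result": "pass", "score": score}
        vulnerabilities = {k: v for k, v in intended.items() if actual.get(k) != v}
        auto, manual = {}, {}
        for k, v in vulnerabilities.items():                        # corrective measures
            (auto if k.startswith("software:") else manual)[k] = v  # software assumed automatable
        return {"result": "fail", "score": score,
                "auto_corrected": auto, "reported": manual}

    intended = {"software:antivirus": "v5", "software:aes": "128-bit",
                "hardware:firewall": "model-F"}
    actual = {"software:antivirus": "v4", "software:aes": "128-bit"}
    print(analyze(actual, intended))
    # score 33.3 -> fail; the antivirus version is auto-corrected and the
    # missing firewall is reported for manual correction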
FIG. 63 is a logic diagram of another example of an analysis system analyzing a system or portion thereof. The method begins at step 460 where the analysis system determines physical assets of the system, or portion thereof, to analyze (e.g., assets and their intended operation). The method continues at step 461 where the analysis system ascertains operation of the system, or portion thereof (e.g., the operations actually performed by the assets). The method continues at step 462 where the analysis system correlates components of the assets to components of operation (e.g., do the identified operations of the assets correlate with the operations actually performed by the assets). - The method continues at step 463 where the analysis system scores the components of the physical assets in accordance with the mapped components of the operation. For example, the analysis system scores how well the identified operations of the assets correlate with operations actually performed by the assets. The scoring may be based on one or more evaluation metrics (e.g., process, policy, procedure, automation, certification, and/or documentation). The method continues at step 464 where the analysis system performs a function on the scores to obtain a result (e.g., an evaluation rating, identified deficiencies, and/or auto-correction of deficiencies). - The method continues at step 465 where the analysis system determines whether the result is equal to or greater than a target result (e.g., the evaluation rating is a certain value). If yes, the method continues at step 466 where the analysis system indicates that the system, or portion thereof, passes this particular test. If the results are less than the target result, the method continues at step 467 where the analysis system identifies vulnerabilities in the physical assets and/or in the operation. - The method continues at step 468 where the analysis system determines, if possible, corrective measures of the identified vulnerabilities. The method continues at step 469 where the analysis system determines whether the corrective measures can be done automatically. If not, the method continues at step 471 where the analysis system reports the corrective measures. If yes, the method continues at step 470 where the analysis system auto-corrects the vulnerabilities. -
FIG. 64 is a logic diagram of another example of an analysis system analyzing a system or portion thereof. The method begins at step 480 where the analysis system determines system functions of the system, or portion thereof, to analyze. The method continues at step 481 where the analysis system ascertains implementation of the system, or portion thereof (e.g., system functions designed to be, and/or built, in the system). The method continues at step 482 where the analysis system correlates components of the system functions to components of the implementation (e.g., do the system functions of the actual system correlate with system functions designed/built to be in the system). - The method continues at step 483 where the analysis system scores the components of the system functions in accordance with the mapped components of the implementation. For example, the analysis system scores how well the system functions of the actual system correlate with system functions designed/built to be in the system. The scoring may be based on one or more evaluation metrics (e.g., process, policy, procedure, automation, certification, and/or documentation). The method continues at step 484 where the analysis system performs a function on the scores to obtain a result (e.g., an evaluation rating, identified deficiencies, and/or auto-correction of deficiencies). - The method continues at step 485 where the analysis system determines whether the result is equal to or greater than a target result (e.g., the evaluation rating is a certain value). If yes, the method continues at step 486 where the analysis system indicates that the system, or portion thereof, passes this particular test. If the results are less than the target result, the method continues at step 487 where the analysis system identifies vulnerabilities in the physical assets and/or in the implementation. - The method continues at step 488 where the analysis system determines, if possible, corrective measures of the identified vulnerabilities. The method continues at step 489 where the analysis system determines whether the corrective measures can be done automatically. If not, the method continues at step 491 where the analysis system reports the corrective measures. If yes, the method continues at step 490 where the analysis system auto-corrects the vulnerabilities. -
FIG. 65 is a logic diagram of another example of an analysis system analyzing a system or portion thereof. The method begins at step 500 where the analysis system determines system functions of the system, or portion thereof, to analyze. The method continues at step 501 where the analysis system ascertains operation of the system, or portion thereof (e.g., the operations associated with the system functions). The method continues at step 502 where the analysis system correlates components of the system functions to components of operation (e.g., do the identified operations of the system functions correlate with the operations actually performed to provide the system functions). - The method continues at step 503 where the analysis system scores the components of the system functions in accordance with the mapped components of the operation. For example, the analysis system scores how well the identified operations to support the system functions correlate with operations actually performed to support the system functions. The scoring may be based on one or more evaluation metrics (e.g., process, policy, procedure, automation, certification, and/or documentation). The method continues at step 504 where the analysis system performs a function on the scores to obtain a result (e.g., an evaluation rating, identified deficiencies, and/or auto-correction of deficiencies). - The method continues at step 505 where the analysis system determines whether the result is equal to or greater than a target result (e.g., the evaluation rating is a certain value). If yes, the method continues at step 506 where the analysis system indicates that the system, or portion thereof, passes this particular test. If the results are less than the target result, the method continues at step 507 where the analysis system identifies vulnerabilities in the physical assets and/or in the operation. - The method continues at step 508 where the analysis system determines, if possible, corrective measures of the identified vulnerabilities. The method continues at step 509 where the analysis system determines whether the corrective measures can be done automatically. If not, the method continues at step 511 where the analysis system reports the corrective measures. If yes, the method continues at step 510 where the analysis system auto-corrects the vulnerabilities. -
FIG. 66 is a logic diagram of another example of an analysis system analyzing a system or portion thereof. The method begins at step 520 where the analysis system determines security functions of the system, or portion thereof, to analyze. The method continues at step 521 where the analysis system ascertains implementation of the system, or portion thereof (e.g., security functions designed to be, and/or built, in the system). The method continues at step 522 where the analysis system correlates components of the security functions to components of the implementation (e.g., do the security functions of the actual system correlate with security functions designed/built to be in the system). - The method continues at step 523 where the analysis system scores the components of the security functions in accordance with the mapped components of the implementation. For example, the analysis system scores how well the security functions of the actual system correlate with security functions designed/built to be in the system. The scoring may be based on one or more evaluation metrics (e.g., process, policy, procedure, automation, certification, and/or documentation). The method continues at step 524 where the analysis system performs a function on the scores to obtain a result (e.g., an evaluation rating, identified deficiencies, and/or auto-correction of deficiencies). - The method continues at step 525 where the analysis system determines whether the result is equal to or greater than a target result (e.g., the evaluation rating is a certain value). If yes, the method continues at step 526 where the analysis system indicates that the system, or portion thereof, passes this particular test. If the results are less than the target result, the method continues at step 527 where the analysis system identifies vulnerabilities in the physical assets and/or in the implementation. - The method continues at step 528 where the analysis system determines, if possible, corrective measures of the identified vulnerabilities. The method continues at step 529 where the analysis system determines whether the corrective measures can be done automatically. If not, the method continues at step 531 where the analysis system reports the corrective measures. If yes, the method continues at step 530 where the analysis system auto-corrects the vulnerabilities. -
FIG. 67 is a logic diagram of another example of an analysis system analyzing a system or portion thereof. The method begins at step 540 where the analysis system determines security functions of the system, or portion thereof, to analyze. The method continues at step 541 where the analysis system ascertains operation of the system, or portion thereof (e.g., the operations associated with the security functions). The method continues at step 542 where the analysis system correlates components of the security functions to components of operation (e.g., do the identified operations of the security functions correlate with the operations actually performed to provide the security functions). - The method continues at step 543 where the analysis system scores the components of the security functions in accordance with the mapped components of the operation. For example, the analysis system scores how well the identified operations to support the security functions correlate with operations actually performed to support the security functions. The scoring may be based on one or more evaluation metrics (e.g., process, policy, procedure, automation, certification, and/or documentation). The method continues at step 544 where the analysis system performs a function on the scores to obtain a result (e.g., an evaluation rating, identified deficiencies, and/or auto-correction of deficiencies). - The method continues at step 545 where the analysis system determines whether the result is equal to or greater than a target result (e.g., the evaluation rating is a certain value). If yes, the method continues at step 546 where the analysis system indicates that the system, or portion thereof, passes this particular test. If the results are less than the target result, the method continues at step 547 where the analysis system identifies vulnerabilities in the physical assets and/or in the operation. - The method continues at step 548 where the analysis system determines, if possible, corrective measures of the identified vulnerabilities. The method continues at step 549 where the analysis system determines whether the corrective measures can be done automatically. If not, the method continues at step 551 where the analysis system reports the corrective measures. If yes, the method continues at step 550 where the analysis system auto-corrects the vulnerabilities. -
FIG. 68 is a logic diagram of an example of an analysis system determining an issue detection rating for a system, or portion thereof. Issue detection rating is also referred to herein as detect rating and/or detection rating. An issue includes an event, incident, alert, anomalous activity and/or behavior, and any other system occurrence of interest. The method begins atstep 560 where the analysis system determines a system aspect (seeFIG. 69 ) of a system for an issue detection evaluation. An issue detection evaluation includes evaluating the system aspect's anomalies and event awareness, the system aspect's continuous security monitoring, and/or the system aspect's issue detection process analysis. - The method continues at
step 561 where the analysis system determines at least one evaluation perspective for use in performing the issue detection evaluation on the system aspect. An evaluation perspective is an understanding perspective, an implementation perspective, an operation (e.g., an operational performance) perspective, or a self-analysis perspective. An understanding perspective is with regard to how well the assets, system functions, and/or security functions are understood. An implementation perspective is with regard to how well the assets, system functions, and/or security functions are implemented. An operation perspective (also referred to herein as a performance perspective) is with regard to how well the assets, system functions, and/or security functions operate. A self-analysis (or self-evaluation) perspective is with regard to how well the system self-evaluates the understanding, implementation, and/or operation of assets, system functions, and/or security functions. - The method continues at
step 562 where the analysis system determines at least one evaluation viewpoint for use in performing the issue detection evaluation on the system aspect. An evaluation viewpoint is a disclosed viewpoint, a discovered viewpoint, or a desired viewpoint. A disclosed viewpoint is with regard to analyzing the system aspect based on the disclosed data. A discovered viewpoint is with regard to analyzing the system aspect based on the discovered data. A desired viewpoint is with regard to analyzing the system aspect based on the desired data. - The method continues at
step 563 where the analysis system obtains issue detection data regarding the system aspect in accordance with the at least one evaluation perspective and the at least one evaluation viewpoint. Issue detection data is data obtained about the system aspect. The obtaining of issue detection data will be discussed in greater detail with reference toFIGS. 72-79 . - The method continues at
step 564 where the analysis system calculates an issue detection rating as a measure of system issue detection maturity for the system aspect based on the issue detection data, the at least one evaluation perspective, the at least one evaluation viewpoint, and at least one evaluation rating metric. An evaluation rating metric is a process rating metric, a policy rating metric, a procedure rating metric, a certification rating metric, a documentation rating metric, or an automation rating metric. The calculating of an issue detection rating will be discussed in greater detail with reference to FIGS. 80-108. As used herein, maturity refers to level of development, level of operation reliability, level of operation predictability, level of operation repeatability, level of understanding, level of implementation, level of advanced technologies, level of operation efficiency, level of proficiency, and/or state-of-the-art level. -
FIG. 69 is a logic diagram of a further example of an analysis system determining an issue detection rating for a system, or portion thereof; in particular, for determining a system aspect. The method begins at step 565 where the analysis system determines at least one system element of the system. A system element includes one or more system assets, each of which is a physical asset and/or a conceptual asset. For example, a physical asset is a computing entity, a computing device, a user software application, a system software application (e.g., operating system, etc.), a software tool, a network software application, a security software application, a system monitoring software application, and the like. A system element includes an organization identifier, a division identifier, a department identifier, a group identifier, a sub-group identifier, a device identifier, a software identifier, and/or an internet protocol address identifier. - The method continues at
step 566 where the analysis system determines at least one system criteria of the system. A system criteria is system guidelines, system requirements, system design, system build, or resulting system. Evaluation based on system criteria assists with determining where a deficiency originated and/or how it might be corrected. For example, if the system requirements were lacking a requirement for detecting a particular type of threat, the lack of system requirements could be identified and corrected. - The method continues at
step 567 where the analysis system determines at least one system mode of the system. A system mode is assets, system functions, or system security. The method continues at step 568 where the analysis system determines the system aspect based on the at least one system element, the at least one system criteria, and the at least one system mode. As an example, a system aspect is determined to be for assets with respect to system requirements of system elements in a particular division. As another example, a system aspect is determined to be for system functions with respect to system design and/or system build of system elements in a particular division. As yet another example, a system aspect is determined to be for assets, system functions, and security functions with respect to guidelines, system requirements, system design, system build, and resulting system of system elements in the organization (e.g., the entire system/enterprise). -
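As a non-limiting illustration of the preceding examples, a system aspect can be represented programmatically as a combination of at least one system element, at least one system criteria, and at least one system mode. The following Python sketch uses hypothetical names (SystemAspect, build_system_aspects) and simple string identifiers that do not appear in the present disclosure:

    from dataclasses import dataclass
    from itertools import product

    # Values taken from the examples above; the tuples themselves are illustrative.
    SYSTEM_CRITERIA = ("guidelines", "system requirements", "system design",
                       "system build", "resulting system")
    SYSTEM_MODES = ("assets", "system functions", "security functions")

    @dataclass
    class SystemAspect:
        """One combination of system element, system criteria, and system mode."""
        element_id: str  # e.g., an organization, division, department, group, or device identifier
        criteria: str    # one of SYSTEM_CRITERIA
        mode: str        # one of SYSTEM_MODES

    def build_system_aspects(element_ids, criteria, modes):
        """Expand the selected elements, criteria, and modes into individual aspects."""
        return [SystemAspect(e, c, m) for e, c, m in product(element_ids, criteria, modes)]

    # Example: assets with respect to system requirements of system elements in a division.
    aspects = build_system_aspects(["division-7"], ["system requirements"], ["assets"])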
FIG. 70 is a schematic block diagram of an example of an analysis system 10 determining an issue detection rating for a system, or portion thereof. In this example, the control module 256 receives an input 271 from the system user interface module 81 loaded on a system 11. The input 271 identifies the system aspect to be analyzed and how it is to be analyzed. - The
control module 256 determines one or more system elements, one or more system criteria, and one or more system modes based on the system aspect. The control module 256 also determines one or more evaluation perspectives, one or more evaluation viewpoints, and/or one or more evaluation rating metrics from the input. As an example, the input 271 could specify the evaluation perspective(s), the evaluation viewpoint(s), the evaluation rating metric(s), and/or analysis output(s). As another example, the input 271 indicates a desired analysis output (e.g., an evaluation rating, deficiencies identified, and/or deficiencies auto-corrected). From this input, the control module 256 determines the evaluation perspective(s), the evaluation viewpoint(s), and the evaluation rating metric(s) to fulfill the desired analysis output. - In addition, the
control module 256 generates data gathering parameters 263, pre-processing parameters 264, data analysis parameters 265, and/or evaluation parameters 266 as discussed with reference to FIG. 35. The data input module 250 obtains issue detection data in accordance with the data gathering parameters 263 from the data extraction module(s) 80 loaded on the system 11, other external feeds 258, and/or system proficiency data 260. - The
pre-processing module 251 processes the issue detection data in accordance with the pre-processing parameters 264 to produce pre-processed data 414. The issue detection data and/or the pre-processed data 414 may be stored in the database 275. The data analysis module 252 calculates an issue detection rating 219 based on the pre-processed data 414 in accordance with the data analysis parameters 265 and the analysis modeling 268. - If the requested analysis output was for an evaluation rating only, the
data output module 255 outputs the issue detection rating 219 as the output 269. The system user interface module 81 renders a graphical representation of the issue detection rating and the database 275 stores it. - If the requested analysis output included issue detection deficiencies, then the
evaluation processing module 254 evaluates the issue detection rating 219 and may further evaluate the pre-processed data to identify one or more deficiencies 232. In addition, the evaluation processing module 254 determines whether a deficiency can be auto-corrected and, if so, determines the auto-correction 235. In this instance, the data output module 255 outputs the issue detection rating 219, the deficiencies 232, and the auto-corrections 235 as output 269 to the database 275, the system user interface module 81, and the remediation module 257. - The system
user interface module 81 renders a graphical representation of the issue detection rating, the deficiencies, and/or the auto-corrections. The database 275 stores the issue detection rating, the deficiencies, and/or the auto-corrections. The remediation module 257 processes the auto-corrections 235 within the system 11, verifies the auto-corrections, and then records the execution of each auto-correction and its verification. -
FIG. 71 is a diagram of an example of system aspects, evaluation aspects, evaluation rating metrics, and analysis system output options of an analysis system 10 for analyzing a system 11, or portion thereof. For instance, the analysis system 10 is evaluating, with respect to process, policy, procedure, certification, documentation, and/or automation, the understanding of the system build for issue detection (e.g., the "detect" evaluation category) security functions of an organization (e.g., the entire enterprise) based on disclosed data to produce an evaluation rating. - For this specific example, the
analysis system 10 obtains disclosed data from the system regarding the system build associated with the issue detection security functions of the organization. From the disclosed data, the analysis system renders a first evaluation rating for the understanding of the system build for issue detection security functions with respect to an evaluation rating metric of process. The analysis system renders a second evaluation rating for the understanding of the system build for issue detection security functions with respect to an evaluation rating metric of policy. The analysis system renders a third evaluation rating for the understanding of the system build for issue detection security functions with respect to an evaluation rating metric of procedure. The analysis system renders a fourth evaluation rating for the understanding of the system build for issue detection security functions with respect to an evaluation rating metric of certification. The analysis system renders a fifth evaluation rating for the understanding of the system build for issue detection security functions with respect to an evaluation rating metric of documentation. The analysis system renders a sixth evaluation rating for the understanding of the system build for issue detection security functions with respect to an evaluation rating metric of automation. - The
analysis system 10 generates the issue detection rating for the understanding of the system build for issue detection security functions based on the six evaluation ratings. As an example, each of the six evaluation rating metrics has a maximum potential rating (e.g., 50 for process, 20 for policy, 15 for procedure, 10 for certification, 20 for documentation, and 20 for automation), which yields a maximum cumulative rating of 135. Continuing with this example, the first evaluation rating based on process is 35; the second evaluation rating based on policy is 10; the third evaluation rating based on procedure is 10; the fourth evaluation rating based on certification is 10; the fifth evaluation rating based on documentation is 15; and the sixth evaluation rating based on automation is 20, resulting in a cumulative score of 100 out of a possible 135. This rating indicates that there is room for improvement and provides a basis for identifying deficiencies. -
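The cumulative rating in the preceding example can be reproduced with straightforward arithmetic, as shown in the following non-limiting Python sketch. The per-metric maxima and scores are taken from the example above; the function name is hypothetical:

    # Maximum potential rating per evaluation rating metric (from the example above).
    MAX_RATING = {"process": 50, "policy": 20, "procedure": 15,
                  "certification": 10, "documentation": 20, "automation": 20}

    # Evaluation ratings produced for the understanding of the system build.
    scores = {"process": 35, "policy": 10, "procedure": 10,
              "certification": 10, "documentation": 15, "automation": 20}

    def cumulative_rating(scores, maxima):
        """Sum the per-metric evaluation ratings and report them against the maximum possible."""
        return sum(scores.values()), sum(maxima.values())

    total, possible = cumulative_rating(scores, MAX_RATING)
    print(f"issue detection rating: {total} out of {possible}")  # 100 out of 135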
FIG. 72 is a logic diagram of a further example of an analysis system determining an issue detection rating for a system, or portion thereof; in particular, for obtaining issue detection data, which is a collection of issue detection information. The method begins at step 570 where the analysis system determines data gathering parameters regarding the system aspect in accordance with the at least one evaluation perspective, the at least one evaluation viewpoint, and the at least one evaluation rating metric. The generation of data gathering parameters will be discussed in greater detail with reference to FIG. 74. - The method continues at
step 571 where the analysis system identifies system elements of the system aspect based on the data gathering parameters and obtains issue detection information from the system elements in accordance with the data gathering parameters. The obtaining of the issue detection information is discussed in greater detail with reference to FIG. 73. - The method continues at
step 572 where the analysis system records the issue detection information from the system elements to produce the issue detection data. As an example, the analysis system stores the issue detection information in the database. As another example, the analysis system temporarily stores the issue detection information in the data input module. As yet another example, the analysis system uses some other mechanism for retaining a record of the issue detection information. Examples of issue detection information are provided with reference to FIGS. 75-79. -
FIG. 73 is a logic diagram of a further example of an analysis system determining an issue detection rating for a system, or portion thereof; in particular, for obtaining the issue detection information. The method begins at step 573 where the analysis system probes (e.g., push and/or pull information requests) a system element in accordance with the data gathering parameters to obtain a system element data response. The analysis system would do this for most, if not all, of the system elements of the system aspect (e.g., the system, or portion of the system, being evaluated). - The method continues at
step 574 where the analysis system identifies vendor information from the system element data response. For example, vendor information includes a vendor name, a model name, a product name, a serial number, a purchase date, and/or other information to identify the system element. The method continues at step 575 where the analysis system tags the system element data response with the vendor information. -
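As a non-limiting sketch of the probing and tagging steps above, the following Python code shows one way a system element data response could be obtained and tagged with vendor information. The element identifiers, vendor fields, and function names are hypothetical; a real probe would query an agent, an API, or a discovery tool rather than return placeholder values:

    from dataclasses import dataclass, field

    @dataclass
    class ElementDataResponse:
        element_id: str
        payload: dict
        vendor_info: dict = field(default_factory=dict)

    def probe_element(element_id, data_gathering_parameters):
        """Issue a push and/or pull information request to a system element (placeholder)."""
        payload = {"requested": list(data_gathering_parameters),
                   "raw": f"response from {element_id}"}
        return ElementDataResponse(element_id, payload)

    def tag_with_vendor_info(response):
        """Identify vendor information in the response and tag the response with it."""
        response.vendor_info = {  # placeholder values; a real implementation parses the payload
            "vendor name": "ExampleVendor",
            "model name": "X-1000",
            "serial number": response.element_id.upper(),
        }
        return response

    responses = [tag_with_vendor_info(probe_element(e, ["system requirements"]))
                 for e in ("firewall-01", "log-server-02")]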
FIG. 74 is a logic diagram of a further example of an analysis system determining an issue detection rating for a system, or portion thereof; in particular, for determining the data gathering parameters for the evaluation category of issue detection (i.e., detect). The method begins at step 576 where the analysis system, for the system aspect, ascertains the identity of one or more system elements of the system aspect. For a system element of the system aspect, the method continues at step 577 where the analysis system determines a first data gathering parameter based on at least one system criteria (e.g., guidelines, system requirements, system design, system build, and/or resulting system) of the system aspect. For example, if the selected system criteria is system requirements, then the first data gathering parameter would be to search for system requirement information. - The method continues at
step 578 where the analysis system determines a second data gathering parameter based on at least one system mode (e.g., assets, system functions, and/or security functions). For example, if the selected system mode is system functions, then the second data gathering parameter would be to search for system function information. - The method continues at
step 579 where the analysis system determines a third data gathering parameter based on the at least one evaluation perspective (e.g., understanding, implementation, operation, and/or self-evaluation). For example, if the selected evaluation perspective is operation, then the third data gathering parameter would be to search for information regarding operation of the system aspect. - The method continues at
step 580 where the analysis system determines a fourth data gathering parameter based on the at least one evaluation viewpoint (e.g., disclosed data, discovered data, and/or desired data). For example, if the selected evaluation viewpoint is disclosed and discovered data, then the fourth data gathering parameter would be to obtain disclosed data and to obtain discovered data. - The method continues at
step 581 where the analysis system determines a fifth data gathering parameter based on the at least one evaluation rating metric (e.g., process, policy, procedure, certification, documentation, and/or automation). For example, if the selected evaluation rating metrics are process, policy, procedure, certification, documentation, and automation, then the fifth data gathering parameter would be to search for data regarding process, policy, procedure, certification, documentation, and automation. - The analysis system generates the data gathering parameters from the first through fifth data gathering parameters. For example, the data gathering parameters include searching for information regarding processes, policies, procedures, certifications, documentation, and/or automation (fifth parameter) pertaining to issue detection (the selected evaluation category) system requirements (first parameter) for system operation (third parameter) of system functions (second parameter) from disclosed and discovered data (fourth parameter).
-
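As a non-limiting illustration of the first through fifth data gathering parameters described above, the following Python sketch assembles them into a single search specification. The dictionary layout and function name are assumptions for illustration; the present disclosure does not prescribe a data format:

    def build_data_gathering_parameters(criteria, modes, perspectives, viewpoints, metrics):
        """Combine the five data gathering parameter types into one specification."""
        return {
            "system criteria (first parameter)": list(criteria),
            "system modes (second parameter)": list(modes),
            "evaluation perspectives (third parameter)": list(perspectives),
            "evaluation viewpoints (fourth parameter)": list(viewpoints),
            "evaluation rating metrics (fifth parameter)": list(metrics),
        }

    # Example matching the text above: issue detection system requirements for system
    # operation of system functions from disclosed and discovered data.
    parameters = build_data_gathering_parameters(
        criteria=["system requirements"],
        modes=["system functions"],
        perspectives=["operation"],
        viewpoints=["disclosed", "discovered"],
        metrics=["process", "policy", "procedure", "certification", "documentation", "automation"],
    )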
FIG. 75 is a diagram of an example of issue detection data for use by an analysis system to generate an issue detection rating for a system, or portion thereof. For the evaluation category of issue detection (i.e., detect), the sub-categories and/or sub-sub categories are cues for determining what data to gather for an issue detection evaluation. The sub-categories include anomalies and event awareness, continuous security monitoring, and issue detection processes analysis. As such, the issue detection data may include a collection of anomalies and event awareness information, continuous security monitoring information, and issue detection processes analysis information. - The anomalies and event awareness sub-category includes the sub-sub categories of baseline of network operations and expected data flows establishment and management, detected event analysis, event data aggregation and correlation, event impact determination, and incident alert thresholds establishment. The anomalies and event awareness information includes information regarding how these sub-sub categories are implemented (e.g., understood to be implemented, actually implemented, etc.).
- The continuous security monitoring sub-category includes the sub-sub categories of network monitoring, physical environment monitoring, personnel activity monitoring, malicious code detection, unauthorized mobile code detection, external service provider activity monitoring, monitoring for unauthorized personnel, monitoring for unauthorized connections, monitoring for unauthorized devices, monitoring for unauthorized software, and vulnerability scanning. The continuous security monitoring information includes information regarding how these sub-sub categories are implemented (e.g., understood to be implemented, actually implemented, etc.).
- The issue detection processes analysis sub-category includes the sub-sub categories of roles and responsibilities for issue detection are defined, issue detection activities comply with all applicable requirements (i.e., compliance verification), issue detection process testing, event detection information communication, and issue detection processes improvement. The issue detection processes analysis information includes information regarding how these sub-sub categories are implemented (e.g., understood to be implemented, actually implemented, etc.).
-
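As a non-limiting illustration, the sub-categories and sub-sub categories listed above can be held in a simple structure and used as cues for what issue detection data to gather. The Python mapping below abbreviates some of the sub-sub category names; the structure itself is an assumption, not a format defined by the present disclosure:

    DETECT_CUES = {
        "anomalies and event awareness": (
            "baseline of network operations and expected data flows",
            "detected event analysis",
            "event data aggregation and correlation",
            "event impact determination",
            "incident alert thresholds establishment",
        ),
        "continuous security monitoring": (
            "network monitoring", "physical environment monitoring",
            "personnel activity monitoring", "malicious code detection",
            "unauthorized mobile code detection",
            "external service provider activity monitoring",
            "monitoring for unauthorized personnel, connections, devices, and software",
            "vulnerability scanning",
        ),
        "issue detection processes analysis": (
            "roles and responsibilities for issue detection",
            "compliance verification",
            "issue detection process testing",
            "event detection information communication",
            "issue detection processes improvement",
        ),
    }

    def data_gathering_cues(sub_categories=None):
        """Return the sub-sub categories to use as data gathering cues, optionally filtered."""
        selected = sub_categories if sub_categories is not None else DETECT_CUES
        return {name: DETECT_CUES[name] for name in selected}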
FIG. 76 is a diagram of another example of issue detection data for use by an analysis system to generate an issue detection rating for a system, or portion thereof. In this example, the issue detection data includes one or more diagrams, one or more reports, one or more logs, one or more application information records, one or more user information records, one or more device information records, one or more system tool records, one or more system plans, and/or one or more other documents regarding a system aspect. - A diagram is a data flow diagram, an HLD diagram, an LLD diagram, a DLD diagram, an operation flowchart, a security operations center (SOC) processes diagram, a software architecture diagram, a hardware architecture diagram, and/or other diagram regarding the design, build, and/or operation of the system, or a portion thereof. A report may be a compliance report, a security report (e.g., risk assessment, user behavioral analysis, exchange traffic and use, etc.), a sales report, a trend report, an inventory summary, or any kind of report generated by an aspect of the system and/or an outside source regarding the aspect of the system. For example, a compliance report is a report created to ensure system compliance with industry standards, laws, rules, and regulations set by government agencies and regulatory bodies. A compliance report may include information to ensure compliance with the Payment Card Industry Data Security Standard (PCI-DSS), the Health Insurance Portability and Accountability Act (HIPAA), etc. A security report summarizes a system's weaknesses based on security scan results and other analytic tools.
- Logs include machine data generated by applications or the infrastructure used to run applications. Logs create a record of events that happen on system components such as network devices (e.g., firewall, router, switch, load balancer, etc.), user applications, databases, servers, and operating systems. An event is a change in the normal behavior of a given system, process, environment, or workflow. Events can be positive, neutral, or negative. An average organization experiences thousands of events every day (e.g., an email, update to firewalls, etc.). Logs include log files, event logs, transaction logs, message logs, etc. For example, a log file is a file that records events that occur in an operating system, other software, or between different users of communication software. A transaction log is a file of communications between a system and the users of that system, or a data collection method that automatically captures the type, content, or time transactions made by a person from a terminal with that system. As another example, an event log provides information about network traffic, usage, and other conditions. For example, an event log may capture all logon sessions to a network, along with account lockouts, failed password attempts, application events, etc. An event log stores this data for retrieval by security professionals or automated security systems.
- An application information record is a record regarding one or more applications of the system. For example, an application information record includes a list of user applications and/or a list of system applications (e.g., operating systems) of the system or a portion thereof. A list of applications includes vendor information (e.g., name, address, contact person information, etc.), a serial number, a software description, a software model number, a version, a generation, a purchase date, an installation date, a service date, use and/or user information, and/or other mechanism for identifying applications.
- A user information record is a record regarding one or more users of the system. For example, a user information record may include one or more lists of users and affiliations of users with the system, or portion thereof. As another example, a user information record includes a log of use of the one or more assets by a user or others. A user information record may also include privileges and/or restrictions imposed on the use of the one or more assets (e.g., access control lists). A user information record may include roles and responsibilities of personnel (e.g., from a personnel handbook, access control list, data flow information, human resources documentation, and/or other system information).
- A device information record is a record regarding one or more devices of the system. For example, a device information record may include a list of network devices (e.g., hardware and/or software), a list of user devices (e.g., hardware and/or software), a list of security devices (e.g., hardware and/or software), a list of servers (e.g., hardware and/or software), and/or a list of any other devices of the system or a portion thereof. Each list includes device information pertaining to the devices in the list.
- The device information may include vendor information (e.g., name, address, contact person information, etc.), a serial number, a device description, a device model number, a version, a generation, a purchase date, an installation date, a service date, and/or other mechanism for identifying a device. The device information may also include purchases, installation notes, maintenance records, data use information, and age information. A purchase is a purchase order, a purchase fulfillment document, a bill of lading, a quote, a receipt, and/or other information regarding purchases of assets of the system, or a portion thereof. An installation note is a record regarding the installation of an asset of the system, or portion thereof. A maintenance record is a record regarding each maintenance service performed on an asset of the system, or portion thereof.
- As an example, a list of user devices and associated user device information may include (e.g., in tabular form) user ID, user level, user role, hardware (HW) information, IP address, user application software (SW) information, device application software (SW) information, device use information, and/or device maintenance information. A user ID may include an individual identifier of a user and may further include an organization ID, a division ID, a department ID, a group ID, and/or a sub-group ID associated with the user.
- A user level (e.g., C-Level, director level, general level) includes options for data access privileges, data access restrictions, network access privileges, network access restrictions, server access privileges, server access restrictions, storage access privileges, storage access restrictions, required user applications, required device applications, and/or prohibited user applications.
- A user role (e.g., project manager, engineer, quality control, administration) includes further options for data access privileges, data access restrictions, network access privileges, network access restrictions, server access privileges, server access restrictions, storage access privileges, storage access restrictions, required user applications, required device applications, and/or prohibited user applications.
- The HW information field may store information regarding the hardware of the device. For example, the HW information includes information regarding a computing device such as vendor information, a serial number, a description of the computing device, a computing device model number, a version of the computing device, a generation of the computing device, and/or other mechanism for identifying a computing device. The HW information may further store information regarding the components of the computing device such as the motherboard, the processor, video graphics card, network card, connection ports, and/or memory.
- The user application SW information field may store information regarding the user applications installed on the user's computing device. For example, the user application SW information includes information regarding a SW program (e.g., spreadsheet, word processing, database, email, etc.) such as vendor information, a serial number, a description of the program, a program model number, a version of the program, a generation of the program, and/or other mechanism for identifying a program. The device SW information may include similar information, but for device applications (e.g., operating system, drivers, security, etc.).
- The device use data field may store data regarding the use of the device (e.g., use of the computing device and software running on it). For example, the device use data includes a log of use of a user application, or program (e.g., time of day, duration of use, date information, etc.). As another example, the device use data includes a log of data communications to and from the device. As yet another example, the device use data includes a log of network accesses. As a further example, the device use data includes a log of server access (e.g., local and/or remote servers). As still further example, the device use data includes a log of storage access (e.g., local and/or remote memory).
- The maintenance field stores data regarding the maintenance of the device and/or its components. As an example, the maintenance data includes a purchase date, purchase information, an installation date, installation notes, a service date, services notes, and/or other maintenance data of the device and/or its components.
- Note that, when device information records include application information, the application information records may include at least a portion of the device information records and vice versa.
- A system tool record is a record regarding one or more tools of the system. For example, a system tool record includes a list of system tools of the system or a portion thereof and information related to the system tools. System tools may include one or more security tools, one or more data segmentation tools, one or more boundary detection tools, one or more data protection tools, one or more infrastructure management tools, one or more encryption tools, one or more exploit protection tools, one or more malware protection tools, one or more identity management tools, one or more access management tools, one or more system monitoring tools, one or more verification tools, and/or one or more vulnerability management tools.
- A system tool may also be an infrastructure management tool, a network monitoring tool, a network strategy and planning tool, a network managing tool, a Simple Network Management Protocol (SNMP) tool, a telephony monitoring tool, a firewall monitoring tool, a bandwidth monitoring tool, an IT asset inventory management tool, a network discovery tool, a network asset discovery tool, a software discovery tool, a security discovery tool, an infrastructure discovery tool, Security Information & Event Management (SIEM) tool, a data crawler tool, and/or other type of tool to assist in discovery of assets, functions, security issues, implementation of the system, and/or operation of the system.
- System tool information may include vendor information (e.g., name, address, contact person information, etc.), a serial number, a system tool description, a system tool model number, a version, a generation, a purchase date, an installation date, a service date, and/or other mechanism for identifying a system tool. Note that the system tool may be implemented by one or more devices and/or applications included in one or more of the device information records and the application information records such that some overlap in system tool records, device information records, and application information records may occur.
- System plans include business plans, operational plans, system security plans, system design specifications, etc. For example, a system security plan (SSP) is a document that identifies the functions and features of a system, including all its hardware and software installed on the system. System design specifications may include security specifications, hardware specifications, software specifications, data flow specifications, business operation specifications, build specifications, and/or other specifications regarding the system, or a portion thereof.
- Other documents may include internal documents such as memos, calendar entries, notes, emails, training materials, etc. Training materials may include at least a portion of a personnel handbook, presentation slides from a cybersecurity training session, links to videos and/or audio tutorials, handouts from a cybersecurity training session, user manuals regarding security devices and/or tools, training email communications, etc.
-
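As a non-limiting illustration of the device and user information records described above with reference to FIG. 76, the following Python sketch models one row of a user device list. The field names mirror the tabular example above; the class name and example values are hypothetical:

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class UserDeviceRecord:
        """One entry of a user device list with associated user device information."""
        user_id: str                 # may embed organization, division, department, and group IDs
        user_level: str              # e.g., C-level, director level, general level
        user_role: str               # e.g., project manager, engineer, quality control
        hw_info: Dict[str, str]      # vendor, serial number, model, components, etc.
        ip_address: str
        user_application_sw: List[str] = field(default_factory=list)
        device_application_sw: List[str] = field(default_factory=list)
        device_use_log: List[str] = field(default_factory=list)
        maintenance: List[str] = field(default_factory=list)

    record = UserDeviceRecord(
        user_id="org-1/div-7/user-1234",
        user_level="general level",
        user_role="engineer",
        hw_info={"vendor": "ExampleVendor", "model": "L-14", "serial": "SN0001"},
        ip_address="10.0.7.21",
        user_application_sw=["spreadsheet 3.2", "email client 9.1"],
    )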
FIG. 77 is another example of issue detection data for use by an analysis system to generate an issue detection rating for a system, or portion thereof. In this example, disclosed anomalies and event awareness information is gathered from various disclosed data sources of an organization as a portion of the disclosed issue detection information collected as issue detection data. - As shown, disclosed organization diagrams relevant to anomalies and event awareness include a network operations diagram and expected data flow diagrams for users and systems of the organization. Disclosed system tools relevant to anomalies and event awareness include a network monitoring tool and a log management tool. For example, the network monitoring tool may be a simple network management protocol (SNMP). In SNMP, monitored devices are installed with an agent software and a network management system monitors each device and communicates information from those devices to an administrator. As another example, the network monitoring tool may include NetFlow monitoring. NetFlow is a standard for collecting network traffic statistics (e.g., IP addresses, ports, protocol, timestamp, number of bytes, packets, flags, etc.) from routers, switches, specialized network probes, and/or other network devices. With NetFlow monitoring, security teams can detect changes in network behavior to identify anomalies indicative of a security breach.
- A log management tool generally performs a variety of functions such as log collection, central aggregation, storage and retention of logs, log rotation, analysis, and log reporting. The log management tool may be a Security Information & Event Management (SIEM) tool. A SIEM tool allows an organization to monitor network activity and implements a variety of security techniques such as log management, security event management, security information management, and security event correlation.
- Disclosed device information and application information relevant to anomalies and event awareness include a list of security devices (software and/or hardware), a list of network devices (software and/or hardware), a list of system applications (e.g., operating systems), a list of user applications, a list of servers (software and/or hardware), and a list of storage devices (software and/or hardware).
- One or more of the disclosed devices or applications may be associated with a system security tool. For example, security devices may include one or more log forwarding devices and a centralized log server required for a log management tool to collect and store data from various sources. As another example, a network packet collector is a security device required to collect network traffic for input to the network monitoring tool. An entire detailed accounting of organization network devices, applications, operating systems, servers, databases, etc., is relevant to anomalies and event awareness because these items generate logs and/or are monitored for anomalous activity.
- Portions of the disclosed system security plan relevant to anomalies and event awareness include security analyst team roles and responsibilities as well as any specific processes, procedures, plans, data flows, policies, and requirements related to detecting anomalies and events. The security analyst team roles and responsibilities may include day to day activities of each team member, verification processes, information sharing requirements, hierarchy of roles, user privileges, and reporting requirements.
- In this example, the disclosed system security plan discloses a network monitoring process, a log management process, and event impact and incident alert processes. A network monitoring process may include a requirement to send network monitoring alerts generated by the network monitoring tool to a security analyst team for further analysis and remediation. Similarly, a log management process may include a requirement to send alerts generated by the log management tool to a security analyst team for further analysis and remediation. Event impact and incident alert processes may include log analysis procedures to determine event impact and methods to establish appropriate incident alert settings. An incident is a change in a system that negatively impacts the organization. For example, an incident might take place when a cyberattack occurs. An incident alert is a notification of a cybersecurity incident.
- Disclosed reports relevant to anomalies and event awareness include security reports generated by one or more of the log management tool, the network monitoring tool, and the security analyst team. Additionally, compliance reports may contain relevant information to anomalies and event awareness such as previous security issues and application of remedies. Numerous other pieces of disclosed anomalies and event awareness information may be gathered from various information sources of the organization. The above examples are far from exhaustive.
-
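As a non-limiting sketch, disclosed data sources such as those discussed above for anomalies and event awareness can be checked against an expected set to highlight gaps. The expected set below is illustrative only and is not a list defined by the present disclosure:

    EXPECTED_AWARENESS_SOURCES = {
        "network monitoring tool",
        "log management tool",
        "network operations diagram",
        "expected data flow diagrams",
        "security analyst team roles and responsibilities",
        "event impact and incident alert processes",
    }

    def missing_awareness_sources(disclosed_sources):
        """Return expected anomalies and event awareness sources absent from the disclosed data."""
        return sorted(EXPECTED_AWARENESS_SOURCES - set(disclosed_sources))

    disclosed = {"network monitoring tool", "log management tool", "network operations diagram"}
    print(missing_awareness_sources(disclosed))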
FIG. 78 is another example of issue detection data for use by an analysis system to generate an issue detection rating for a system, or portion thereof. In this example, disclosed continuous security monitoring information is gathered from various disclosed data sources of an organization as a portion of the disclosed issue detection information recorded as issue detection data. - As shown, one or more portions of a disclosed system security plan relevant to continuous security monitoring include specific processes (e.g., network monitoring processes), procedures, plans, data flows, policies (e.g., physical environment threat protections), and requirements related to continuous security monitoring. As an example, the system security plan includes policies that all user devices must have antivirus software installed, that the antivirus software be centrally managed, and that unknown networks are prohibited. As another example, a network monitoring process may include a requirement to send alerts from a network monitoring tool to a security analyst team for further analysis and remediation. As another example, physical environment threat protections may include policies against eating or drinking around equipment, fire hazard avoidance policies, maintenance of building security and environmental threat alarms, etc.
- Disclosed device information relevant to continuous security monitoring includes a list of security devices such as firewalls and intrusion detection systems, antivirus software and associated devices, malware protection software and associated devices, automatic update and automatic installation features of required software, and on-site alarm systems. Disclosed system tools relevant to continuous security monitoring include a network monitoring tool and any tools pertaining to use and deployment of malware protection software. Other examples of continuous security monitoring tools not disclosed here may include a network behavior anomaly detection (NBAD) tool for personnel and network monitoring, a cloud access security broker (CASB) tool for external service provider activity monitoring, etc. NBAD continuously tracks network characteristics such as traffic volume, bandwidth use, and protocol use in real time and generates an alarm if a strange event or trend indicative of a potential security threat is detected. CASB is on-premises or cloud-hosted software placed between cloud service consumers and cloud service providers to enforce security, compliance, and governance policies for cloud applications.
- Disclosed training materials relevant to continuous security monitoring include documents related to phishing attack awareness training, email and website security training materials, physical environment security training materials, dates and times of cybersecurity training sessions, and the participants involved in the disclosed cybersecurity training sessions.
- Disclosed user information and application information relevant to continuous security monitoring may include user affiliations with devices (hardware and/or software), access control lists, and personnel roles. For example, based on a user's role within the organization, the user may have certain software requirements, access privileges, access restrictions, etc. Numerous other pieces of disclosed continuous security monitoring information may be gathered from various information sources of the organization. The above examples are far from exhaustive.
-
FIG. 79 is another example of issue detection data for use by an analysis system to generate an issue detection rating for a system, or portion thereof. In this example, disclosed issue detection processes analysis information is gathered from various disclosed data sources of an organization as a portion of the disclosed issue detection information recorded as issue detection data. - As shown, portions of the system security plan relevant to issue detection processes analysis include specific processes, procedures, plans, data flows, policies, and requirements generally or specifically related to issue detection. As an example, the system security plan includes documented issue detection policies and processes and issue detection requirements (e.g., based on compliance requirements). System diagrams relevant to issue detection processes analysis include issue detection flowcharts that visually illustrate processes described in the system security plan. System tools relevant to issue detection processes include verification tools for testing and improving the issue detection processes of the system.
- Portions of personnel handbooks and/or human resources (HR) documents relevant to issue detection processes analysis include the roles and responsibilities of those involved in issue detection processes and any personnel guidelines. For example, the portions of personnel handbooks and/or HR documents include the roles and responsibilities of the IT department, the roles and responsibilities of the security analyst team, the roles and responsibilities of the legal department, and issue detection process guidelines.
- Disclosed training materials relevant to issue detection processes analysis include all issue detection training materials, dates and times of cybersecurity training sessions, and the participants involved in the disclosed cybersecurity training sessions.
- Disclosed reports and internal documents relevant to issue detection processes include compliance reports and personnel communications. Compliance reports include details regarding issue detection processes analysis conducted and the results of such analysis. Personnel communications may include any relevant issue detection process analysis information. For example, an issue detection process may specify that the security analyst team must notify legal of incidents. An email communication from the security analyst team to the legal department regarding an incident is a disclosed personnel communication that demonstrates implementation of this process. Numerous other pieces of disclosed issue detection processes analysis information may be gathered from various information sources of the organization. The above examples are far from exhaustive.
-
FIG. 80 is a logic diagram of a further example of an analysis system determining an issue detection rating for a system, or portion thereof; in particular, for calculating the issue detection rating. The method begins at step 590 where the analysis system selects and performs at least two of steps 591-596. At step 591, the analysis system generates a policy rating for the system aspect based on the issue detection data and policy analysis parameters and in accordance with the at least one evaluation perspective, the at least one evaluation viewpoint, and policy as the evaluation rating metric. At step 592, the analysis system generates a documentation rating for the system aspect based on the issue detection data and documentation analysis parameters and in accordance with the at least one evaluation perspective, the at least one evaluation viewpoint, and documentation as the evaluation rating metric. - At
step 593, the analysis system generates an automation rating for the system aspect based on the issue detection data and automation analysis parameters and in accordance with the at least one evaluation perspective, the at least one evaluation viewpoint, and automation as the evaluation rating metric. At step 594, the analysis system generates a process rating for the system aspect based on the issue detection data and process analysis parameters and in accordance with the at least one evaluation perspective, the at least one evaluation viewpoint, and process as the evaluation rating metric. - At
step 595, the analysis system generates a certification rating for the system aspect based on the issue detection data and certification analysis parameters and in accordance with the at least one evaluation perspective, the at least one evaluation viewpoint, and certification as the evaluation rating metric. At step 596, the analysis system generates a procedure rating for the system aspect based on the issue detection data and procedure analysis parameters and in accordance with the at least one evaluation perspective, the at least one evaluation viewpoint, and procedure as the evaluation rating metric. - The method continues at
step 597 where the analysis system generates the issue detection rating based on the at least two ratings selected and generated from among the process rating, the policy rating, the documentation rating, the automation rating, the procedure rating, and the certification rating. For example, the issue detection rating is a summation of the at least two individual evaluation metric ratings. As another example, the analysis system performs a mathematical and/or logical function (e.g., a weighted average, standard deviation, statistical analysis, trending, etc.) on the at least two individual evaluation metric ratings to produce the issue detection rating. -
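As a non-limiting illustration of step 597, the following Python sketch combines two or more individual evaluation metric ratings into an issue detection rating, either by summation or by a weighted average. The function name and the weighted-average option are illustrative of the mathematical and/or logical functions mentioned above, not an exhaustive implementation:

    def aggregate_metric_ratings(ratings, weights=None):
        """Combine at least two evaluation metric ratings into an issue detection rating."""
        if len(ratings) < 2:
            raise ValueError("at least two evaluation metric ratings are required")
        if weights is None:
            return sum(ratings.values())                      # simple summation
        total_weight = sum(weights[m] for m in ratings)       # weighted average
        return sum(ratings[m] * weights[m] for m in ratings) / total_weight

    print(aggregate_metric_ratings({"process": 35, "policy": 10}))                               # 45
    print(aggregate_metric_ratings({"process": 35, "policy": 10}, {"process": 2, "policy": 1}))  # 26.66...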
FIG. 81 is a schematic block diagram of an embodiment of a scoring module of the data analysis module 252 that includes a process rating module 601, a policy rating module 602, a procedure rating module 603, a certification rating module 604, a documentation rating module 605, an automation rating module 606, and a cumulative rating module 607. In general, the data scoring module generates an issue detection rating 608 from a collection of data based on data analysis parameters 265. - The
process rating module 601 evaluates the collection ofdata 600, or portion thereof, (e.g., pre-processed data ofFIG. 35 ) to produce a process evaluation rating in accordance with process analysis parameters of thedata analysis parameters 265. The process analysis parameters indicate how the collection of data is to be evaluated with respect to processes of the system, or portion thereof. As an example, the process analysis parameters include: -
- an instruction to compare processes of the
data 600 with a list of processes the system, or portion thereof, should have; - an instruction to count the number of processes of
data 600 and compare it with a quantity of processes the system, or portion thereof, should have; - an instruction to determine last revisions of processes of
data 600 and/or to determine an age of last revisions; - an instruction to determine frequency of use of processes of
data 600; - an instruction to determine a volume of access of processes of
data 600; - an instruction to evaluate a process of
data 600 with respect to a checklist regarding content of the process (e.g., what should be in the process); - a scaling factor based on the size of the system, or portion thereof;
- a scaling factor based on the size of the organization;
- an instruction to compare a balance of local processes with respect to system-wide processes;
- an instruction to compare topics of the processes of
data 600 with desired topics for processes (which may be at least partially derived from the evaluation category and/or sub-categories); and/or - an instruction to evaluate language use within processes of
data 600.
- an instruction to compare processes of the
- The
process rating module 601 can rate the data 600 at three levels. The first level is whether the system has processes, has the right number of processes, and/or has processes that address the right topics. The second level digs into the processes themselves to determine whether they adequately cover the requirements of the system. The third level evaluates how well the processes are used and how well they are adhered to. -
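As a non-limiting illustration of the first of the three levels, the following Python sketch rates whether the system has the processes it should have. The linear scaling and the maximum rating of 50 are assumptions for illustration; the present disclosure does not specify a scoring formula:

    def first_level_process_rating(found_processes, required_processes, max_rating=50):
        """Rate the presence of required processes in the gathered data (level one)."""
        required = set(required_processes)
        if not required:
            return max_rating
        coverage = len(required & set(found_processes)) / len(required)
        return round(coverage * max_rating)

    required = ["incident alerting", "log review", "event correlation", "escalation"]
    found = ["incident alerting", "log review", "escalation"]
    print(first_level_process_rating(found, required))  # 38, since 3 of the 4 required processes are present

- As an example, the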
process rating module 601 generates a process evaluation rating based on a comparison of the processes of the data 600 with a list of processes the system, or portion thereof, should have. If all of the processes on the list are found in the data 600, then the process evaluation rating is high. The fewer processes on the list that are found in the data 600, the lower the process evaluation rating will be. - As another example, the
process rating module 601 generates a process evaluation rating based on a determination of the last revisions of processes of data 600 and/or an age of the last revisions. As a specific example, if processes are revised at a rate that corresponds to the rate of revision in the industry, then a relatively high process evaluation rating would be produced. As another specific example, if processes are revised at a much lower rate than the rate of revision in the industry, then a relatively low process evaluation rating would be produced (implying a lack of attention to the processes). As yet another specific example, if processes are revised at a much higher rate than the rate of revision in the industry, then a relatively low process evaluation rating would be produced (implying the processes are inaccurate, incomplete, and/or created with a lack of knowledge as to what is needed). - As another example, the
process rating module 601 generates a process evaluation rating based on a determination of frequency of use of processes of data 600. As a specific example, if processes are used at a frequency (e.g., x times per week) that corresponds to the frequency of use in the industry, then a relatively high process evaluation rating would be produced. As another specific example, if processes are used at a much lower frequency than the frequency of use in the industry, then a relatively low process evaluation rating would be produced (implying a lack of using and adhering to the processes). As yet another specific example, if processes are used at a much higher frequency than the frequency of use in the industry, then a relatively low process evaluation rating would be produced (implying the processes are inaccurate, incomplete, and/or difficult to use). - As another example, the
process rating module 601 generates a process evaluation rating based on an evaluation of a process of data 600 with respect to a checklist regarding content of the process (e.g., what should be in the process, which may be based, at least in part, on an evaluation category, sub-category, and/or sub-sub category). As a specific example, the topics contained in the process of data 600 are compared to a checklist of desired topics for such a process. If all of the topics on the checklist are found in the process of data 600, then the process evaluation rating is high. The fewer topics on the checklist that are found in the process of data 600, the lower the process evaluation rating will be. - As another example, the
process rating module 601 generates a process evaluation rating based on a comparison of the balance between local processes of data 600 and system-wide processes of data 600. As a specific example, most security processes should be system-wide. Thus, if only a small percentage (e.g., less than 10%) of security processes are local, then a relatively high process evaluation rating will be generated. Conversely, the greater the percentage of local security processes, the lower the process evaluation rating will be. - As another example, the
process rating module 601 generates a process evaluation rating based on an evaluation of language use within processes of data 600. As a specific example, most security requirements are mandatory. Thus, the more a process relies on the word "may" (which implies optionality) versus the word "shall" (which implies a mandate), the lower the process evaluation rating will be. -
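As a non-limiting illustration of the language use evaluation, the following Python sketch lowers the rating as permissive wording ("may") crowds out mandatory wording ("shall"). The threshold and scoring formula are assumptions; the present disclosure only indicates that heavy use of "may" lowers the rating:

    import re

    def language_use_rating(process_text, max_rating=10, may_threshold=0.25):
        """Rate a process based on its balance of mandatory versus optional language."""
        mays = len(re.findall(r"\bmay\b", process_text, re.IGNORECASE))
        shalls = len(re.findall(r"\bshall\b", process_text, re.IGNORECASE))
        total = mays + shalls
        if total == 0:
            return max_rating // 2  # no modal language found either way
        may_fraction = mays / total
        if may_fraction <= may_threshold:
            return max_rating
        return max(0, round(max_rating * (1 - may_fraction)))

    sample = ("Alerts shall be reviewed daily. Analysts may escalate incidents. "
              "Logs shall be retained for one year. Reports shall be filed weekly.")
    print(language_use_rating(sample))  # 10, since mandatory wording dominates

- The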
process rating module 601 may perform a plurality of the above examples of process evaluation to produce a plurality of process evaluation ratings. The process rating module 601 may output the plurality of process evaluation ratings to the cumulative rating module 607. Alternatively, the process rating module 601 may perform a function (e.g., a weighted average, standard deviation, statistical analysis, etc.) on the plurality of process evaluation ratings to produce a single process evaluation rating that is provided to the cumulative rating module 607. - The
policy rating module 602 evaluates the collection ofdata 600, or portion thereof, (e.g., pre-processed data ofFIG. 35 ) to produce a policy evaluation rating in accordance with policy analysis parameters of thedata analysis parameters 265. The policy analysis parameters indicate how the collection of data is to be evaluated with respect to policies of the system, or portion thereof. As an example, the policy analysis parameters include: -
- an instruction to compare policies of the
data 600 with a list of policies the system, or portion thereof, should have; - an instruction to count the number of policies of
data 600 and compare it with a quantity of policies the system, or portion thereof, should have; - an instruction to determine last revisions of policies of
data 600 and/or to determine an age of last revisions; - an instruction to determine frequency of use of policies of
data 600; - an instruction to determine a volume of access of policies of
data 600; - an instruction to evaluate a policy of
data 600 with respect to a checklist regarding content of the policy (e.g., what should be in the policy); - a scaling factor based on the size of the system, or portion thereof;
- a scaling factor based on the size of the organization;
- an instruction to compare a balance of local policies with respect to system-wide policies;
- an instruction to compare topics of the policies of
data 600 with desired topics for policies (which may be at least partially derived from the evaluation category and/or sub-categories); and/or - an instruction to evaluate language use within policies of
data 600.
- an instruction to compare policies of the
- The
policy rating module 602 can rate the data 600 at three levels. The first level is whether the system has policies, has the right number of policies, and/or has policies that address the right topics. The second level digs into the policies themselves to determine whether they adequately cover the requirements of the system. The third level evaluates how well the policies are used and how well they are adhered to. - The
procedure rating module 603 evaluates the collection ofdata 600, or portion thereof, (e.g., pre-processed data ofFIG. 35 ) to produce a procedure evaluation rating in accordance with procedure analysis parameters of thedata analysis parameters 265. The procedure analysis parameters indicate how the collection of data is to be evaluated with respect to procedures of the system, or portion thereof. As an example, the procedure analysis parameters include: -
- an instruction to compare procedures of the
data 600 with a list of procedures the system, or portion thereof, should have; - an instruction to count the number of procedures of
data 600 and compare it with a quantity of procedures the system, or portion thereof, should have; - an instruction to determine last revisions of procedures of
data 600 and/or to determine an age of last revisions; - an instruction to determine frequency of use of procedures of
data 600; - an instruction to determine a volume of access of procedures of
data 600; - an instruction to evaluate a procedure of
data 600 with respect to a checklist regarding content of the procedure (e.g., what should be in the procedure); - a scaling factor based on the size of the system, or portion thereof;
- a scaling factor based on the size of the organization;
- an instruction to compare a balance of local procedures with respect to system-wide procedures;
- an instruction to compare topics of the procedures of
data 600 with desired topics for procedures (which may be at least partially derived from the evaluation category and/or sub-categori es); and/or - an instruction to evaluate language use within procedures of
data 600.
- an instruction to compare procedures of the
- The
procedure rating module 603 can rate the data 600 at three levels. The first level is whether the system has procedures, has the right number of procedures, and/or has procedures that address the right topics. The second level digs into the procedures themselves to determine whether they adequately cover the requirements of the system. The third level evaluates how well the procedures are used and how well they are adhered to. - The
certification rating module 604 evaluates the collection ofdata 600, or portion thereof, (e.g., pre-processed data ofFIG. 35 ) to produce a certification evaluation rating in accordance with certification analysis parameters of thedata analysis parameters 265. The certification analysis parameters indicate how the collection of data is to be evaluated with respect to certifications of the system, or portion thereof. As an example, the certification analysis parameters include: -
- an instruction to compare certifications of the
data 600 with a list of certifications the system, or portion thereof, should have; - an instruction to count the number of certifications of
data 600 and compare it with a quantity of certifications the system, or portion thereof, should have; - an instruction to determine last revisions of certifications of
data 600 and/or to determine an age of last revisions; - an instruction to evaluate a certification of
data 600 with respect to a checklist regarding content of the certification (e.g., what should be certified and/or how it should be certified); - a scaling factor based on the size of the system, or portion thereof;
- a scaling factor based on the size of the organization; and
- an instruction to compare a balance of local certifications with respect to system-wide certifications.
- The
certification rating module 604 can rate the data 600 at three levels. The first level is that the system has certifications, the system has the right number of certifications, and/or the system has certifications that address the right topics. The second level digs into the certifications themselves to determine whether they adequately cover the requirements of the system. The third level evaluates how well the certifications are maintained and updated. - The
documentation rating module 605 evaluates the collection of data 600, or portion thereof, (e.g., pre-processed data of FIG. 35) to produce a documentation evaluation rating in accordance with documentation analysis parameters of the data analysis parameters 265. The documentation analysis parameters indicate how the collection of data is to be evaluated with respect to documentation of the system, or portion thereof. As an example, the documentation analysis parameters include: -
- an instruction to compare documentation of the
data 600 with a list of documentation the system, or portion thereof, should have; - an instruction to count the number of documentation of
data 600 and compare it with a quantity of documentation the system, or portion thereof, should have; - an instruction to determine last revisions of documentation of
data 600 and/or to determine an age of last revisions; - an instruction to determine frequency of use and/or creation of documentation of
data 600; - an instruction to determine a volume of access of documentation of
data 600; - an instruction to evaluate a document of
data 600 with respect to a checklist regarding content of the document (e.g., what should be in the document); - a scaling factor based on the size of the system, or portion thereof;
- a scaling factor based on the size of the organization;
- an instruction to compare a balance of local documents with respect to system-wide documents;
- an instruction to compare topics of the documentation of
data 600 with desired topics for documentation (which may be at least partially derived from the evaluation category and/or sub-categories); and/or - an instruction to evaluate language use within documentation of
data 600.
- The
documentation rating module 605 can rate the data 600 at three levels. The first level is that the system has documentation, the system has the right number of documents, and/or the system has documents that address the right topics. The second level digs into the documents themselves to determine whether they adequately cover the requirements of the system. The third level evaluates how well the documentation is used and how well it is maintained. - The
automation rating module 606 evaluates the collection ofdata 600, or portion thereof, (e.g., pre-processed data ofFIG. 35 ) to produce an automation evaluation rating in accordance with automation analysis parameters of thedata analysis parameters 265. The automation analysis parameters indicate how the collection of data is to be evaluated with respect to automation of the system, or portion thereof. As an example, the automation analysis parameters include: -
- an instruction to compare automation of the
data 600 with a list of automation the system, or portion thereof, should have; - an instruction to count the number of automation of
data 600 and compare it with a quantity of automation the system, or portion thereof, should have; - an instruction to determine last revisions of automation of
data 600 and/or to determine an age of last revisions; - an instruction to determine frequency of use of automation of
data 600; - an instruction to determine a volume of access of automation of
data 600; - an instruction to evaluate an automation of
data 600 with respect to a checklist regarding content of the automation (e.g., what the automation should do); - a scaling factor based on the size of the system, or portion thereof;
- a scaling factor based on the size of the organization;
- an instruction to compare a balance of local automation with respect to system-wide automation;
- an instruction to compare topics of the automation of
data 600 with desired topics for automation (which may be at least partially derived from the evaluation category and/or sub-categories); and/or - an instruction to evaluate operation use of automation of
data 600.
- The
automation rating module 606 can rate the data 600 at three levels. The first level is that the system has automation, the system has the right number of automations, and/or the system has automations that address the right topics. The second level digs into the automations themselves to determine whether they adequately cover the requirements of the system. The third level evaluates how well the automations are used and how well they are adhered to. - The
cumulative rating module 607 receives one or more process evaluation ratings, one or more policy evaluation ratings, one or more procedure evaluation ratings, one or more certification evaluation ratings, one or more documentation evaluation ratings, and/or one or more automation evaluation ratings. The cumulative rating module 607 may output the evaluation ratings it receives as the issue detection rating 608. Alternatively, the cumulative rating module 607 performs a function (e.g., a weighted average, standard deviation, statistical analysis, etc.) on the evaluation ratings it receives to produce the issue detection rating 608. -
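As a hedged illustration of the alternative described above, a weighted average over the received evaluation ratings could be computed as sketched below; the category names, rating values, and weights are hypothetical examples, and other functions (e.g., standard deviation or another statistical analysis) could be substituted.

```python
def cumulative_issue_detection_rating(ratings, weights=None):
    """Combine category ratings (e.g., process, policy, procedure, certification,
    documentation, automation) into a single issue detection rating via a
    weighted average. Ratings and weights are illustrative only."""
    if weights is None:
        weights = {name: 1.0 for name in ratings}  # equal weighting by default
    total_weight = sum(weights[name] for name in ratings)
    return sum(ratings[name] * weights[name] for name in ratings) / total_weight

ratings = {"process": 30, "policy": 15, "procedure": 20,
           "certification": 10, "documentation": 15, "automation": 5}
print(cumulative_issue_detection_rating(
    ratings,
    {"process": 2.0, "policy": 1.0, "procedure": 1.0,
     "certification": 0.5, "documentation": 1.0, "automation": 0.5}))
```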
FIG. 82 is a schematic block diagram of another embodiment of adata analysis module 252 that is similar to the data analysis module ofFIG. 81 . In this embodiment, thedata analysis module 252 includes adata parsing module 609, which parses thedata 600 into process data, policy data, procedure data, certification data, documentation data, and/or automation data. -
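For illustration only, the kind of parsing performed by the data parsing module 609 could resemble the following keyword-based sketch; the keyword table and the bucketing rule are assumptions made for the example and are not asserted to be the disclosed implementation.

```python
# Hypothetical keyword table mapping artifact categories to tell-tale terms.
KEYWORDS = {
    "process": ("process", "workflow"),
    "policy": ("policy",),
    "procedure": ("procedure", "runbook"),
    "certification": ("certification", "certificate"),
    "documentation": ("document", "manual", "diagram"),
    "automation": ("automation", "script", "scheduled job"),
}

def parse_data(items):
    """Bucket free-form artifact descriptions into the categories used by the rating modules."""
    buckets = {category: [] for category in KEYWORDS}
    buckets["unclassified"] = []
    for item in items:
        text = item.lower()
        for category, terms in KEYWORDS.items():
            if any(term in text for term in terms):
                buckets[category].append(item)
                break
        else:
            buckets["unclassified"].append(item)
    return buckets

print(parse_data(["Network monitoring procedure", "Malware protection policy", "Asset inventory"]))
```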
FIG. 83 is a logic diagram of a further example of an analysis system determining an issue detection rating for a system, or portion thereof; in particular generating a process rating. The method begins atstep 610 where the analysis system generates a first process rating based on a first combination of a system criteria (e.g., system requirements), of a system mode (e.g., system functions), of an evaluation perspective (e.g., implementation), and of an evaluation viewpoint (e.g., disclosed data). - The method continues at
step 611 where the analysis system generates a second process rating based on a second combination of a system criteria (e.g., system design), of a system mode (e.g., system functions), of an evaluation perspective (e.g., implementation), and of an evaluation viewpoint (e.g., disclosed data). The method continues atstep 612 where the analysis system generates the process rating based on the first and second process ratings. -
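A minimal sketch of combining the first and second process ratings of FIG. 83 follows, assuming a plain average over the per-combination ratings; the combination tuples and rating values are hypothetical, and other aggregation functions could be used.

```python
def combined_process_rating(rating_per_combination):
    """Combine process ratings computed for different (system criteria, system mode,
    evaluation perspective, evaluation viewpoint) combinations. A plain average is
    used here purely for illustration."""
    return sum(rating_per_combination.values()) / len(rating_per_combination)

ratings = {
    ("system requirements", "system functions", "implementation", "disclosed data"): 30,
    ("system design", "system functions", "implementation", "disclosed data"): 20,
}
print(combined_process_rating(ratings))  # -> 25.0
```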
FIG. 84 is a logic diagram of a further example of generating a process rating for understanding of system build for issue detection security functions of an organization. The method begins atstep 613 where the analysis system identifies processes regarding issue detection security functions from the data. The method continues atstep 614 where the analysis system generates a process rating from the data. Examples of this were discussed with reference toFIG. 81 . - The method also continues at
step 615 where the analysis system determines use of the issue detection processes. The method continues atstep 616 where the analysis system generates a process rating based on use. Examples of this were discussed with reference toFIG. 81 . - The method also continues at
step 617 where the analysis system determines consistency of applying the issue detection processes. The method continues atstep 618 where the analysis system generates a process rating based on consistency of use. Examples of this were discussed with reference toFIG. 81 . The method continues atstep 619 where the analysis system generates the process rating based on the process rating from the data, the process rating based on use, and the process rating based on consistency of use. -
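The combination performed at step 619 could, for example, be a weighted average of the three sub-ratings, as in the following sketch; the equal weighting shown is an assumption rather than a disclosed requirement.

```python
def process_rating(from_data, based_on_use, based_on_consistency, weights=(1.0, 1.0, 1.0)):
    """Fold the three sub-ratings of FIG. 84 (identified processes, their use, and the
    consistency of their application) into one process rating."""
    parts = (from_data, based_on_use, based_on_consistency)
    return sum(p * w for p, w in zip(parts, weights)) / sum(weights)

print(process_rating(from_data=40, based_on_use=30, based_on_consistency=20))  # -> 30.0
```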
FIG. 85 is a logic diagram of a further example of generating a process rating for understanding of verifying issue detection security functions of an organization. The method begins atstep 620 where the analysis system identifies processes to verify issue detection security functions from the data. The method continues atstep 621 where the analysis system generates a process rating from the data. Examples of this were discussed with reference toFIG. 81 . - The method also continues at
step 622 where the analysis system determines use of the processes to verify the issue detection security functions. The method continues atstep 623 where the analysis system generates a process rating based on use of the verify processes. Examples of this were discussed with reference toFIG. 81 . - The method also continues at
step 624 where the analysis system determines consistency of applying the verifying processes to issue detection security functions. The method continues atstep 625 where the analysis system generates a process rating based on consistency of use. Examples of this were discussed with reference toFIG. 81 . The method continues atstep 626 where the analysis system generates the process rating based on the process rating from the data, the process rating based on use, and the process rating based on consistency of use. -
FIG. 86 is a diagram of an example of issue detection data for use by an analysis system to generate an issue detection rating for a system, or portion thereof. The issue detection data includes a collection ofdata 600, which includes disclosed data, discovered data, and/or desired data. - For example, data 600 includes one or more network operation and data flow establishment and management processes, one or more data protection tools, one or more vulnerability management tools, one or more network monitoring tools, one or more access management tools, one or more boundary protection tools, network operation and data flow establishment and management implementation, device information, one or more system plans, one or more diagrams, one or more event detection analysis processes, one or more event detection analysis tools, event detection analysis implementation, one or more event data aggregation and correlation processes, one or more event data aggregation and correlation tools, event data aggregation and correlation implementation, one or more logs, one or more event impact determination processes, one or more event impact determination tools, event impact determination implementation, one or more incident alert threshold establishment processes, one or more incident alert threshold establishment tools, incident alert threshold establishment implementation, one or more network monitoring processes, network monitoring implementation, one or more network physical environment monitoring processes, one or more physical environment monitoring tools, physical environment monitoring implementation, one or more personnel activity monitoring processes, one or more personnel activity monitoring tools, personnel activity monitoring implementation, one or more malicious code detection processes, one or more malware protection tools, one or more exploit protection tools, malicious code detection implementation, one or more unauthorized mobile code detection processes, one or more unauthorized mobile code detection tools, unauthorized mobile code detection implementation, one or more external service provider monitoring processes, one or more external service provider monitoring tools, external service provider monitoring implementation, one or more unauthorized aspect (e.g., personnel, connections, devices, and software) monitoring processes, unauthorized aspect monitoring implementation, one or more vulnerability scanning processes, vulnerability scanning implementation, one or more verification processes, one or more verification tools, assigned issue detection roles and responsibilities, user information, one or more training documents, one or more issue detection process compliance verification processes, one or more reports, one or more issue detection process testing processes, one or more issue detection process communication processes, and/or one or more issue detection process improvement processes.
- The
data 600 may further include one or more network operation and data flow establishment and management policies, one or more event detection analysis policies, one or more event data aggregation and correlation policies, one or more event impact determination policies, one or more incident alert threshold establishment policies, one or more network monitoring policies, one or more network physical environment monitoring policies, one or more personnel activity monitoring policies, one or more malicious code detection policies, one or more unauthorized mobile code detection policies, one or more external service provider monitoring policies, one or more unauthorized aspect (e.g., personnel, connections, devices, and software) monitoring policies, one or more vulnerability scanning policies, one or more verification policies, one or more issue detection process compliance verification policies, one or more issue detection process testing policies, one or more issue detection process communication policies, and/or one or more issue detection process improvement policies. - The
data 600 may still include one or more network operation and data flow establishment and management procedures, one or more event detection analysis procedures, one or more event data aggregation and correlation procedures, one or more event impact determination procedures, one or more incident alert threshold establishment procedures, one or more network monitoring procedures, one or more network physical environment monitoring procedures, one or more personnel activity monitoring procedures, one or more malicious code detection procedures, one or more unauthorized mobile code detection procedures, one or more external service provider monitoring procedures, one or more unauthorized aspect (e.g., personnel, connections, devices, and software) monitoring procedures, one or more vulnerability scanning procedures, one or more verification procedures, one or more issue detection process compliance verification procedures, one or more issue detection process testing procedures, one or more issue detection process communication procedures, and/or one or more issue detection process improvement procedures. - The
data 600 may still include one or more network operation and data flow establishment and management documents, one or more event detection analysis documents, one or more event data aggregation and correlation documents, one or more event impact determination documents, one or more incident alert threshold establishment documents, one or more network monitoring documents, one or more network physical environment monitoring documents, one or more personnel activity monitoring documents, one or more malicious code detection documents, one or more unauthorized mobile code detection documents, one or more external service provider monitoring documents, one or more unauthorized aspect (e.g., personnel, connections, devices, and software) monitoring documents, one or more vulnerability scanning documents, one or more verification documents, one or more issue detection process compliance verification documents, one or more issue detection process testing documents, one or more issue detection process communication documents, and/or one or more issue detection process improvement documents. - The
data 600 may still include one or more network operation and data flow establishment and management certifications, one or more event detection analysis certifications, one or more event data aggregation and correlation certifications, one or more event impact determination certifications, one or more incident alert threshold establishment certifications, one or more network monitoring certifications, one or more network physical environment monitoring certifications, one or more personnel activity monitoring certifications, one or more malicious code detection certifications, one or more unauthorized mobile code detection certifications, one or more external service provider monitoring certifications, one or more unauthorized aspect (e.g., personnel, connections, devices, and software) monitoring certifications, one or more vulnerability scanning certifications, one or more verification certifications, one or more issue detection process compliance verification certifications, one or more issue detection process testing certifications, one or more issue detection process communication certifications, and/or one or more issue detection process improvement certifications. - The
data 600 may still include one or more network operation and data flow establishment and management automations, one or more event detection analysis automations, one or more event data aggregation and correlation automations, one or more event impact determination automations, one or more incident alert threshold establishment automations, one or more network monitoring automations, one or more network physical environment monitoring automations, one or more personnel activity monitoring automations, one or more malicious code detection automations, one or more unauthorized mobile code detection automations, one or more external service provider monitoring automations, one or more unauthorized aspect (e.g., personnel, connections, devices, and software) monitoring automations, one or more vulnerability scanning automations, one or more verification automations, one or more issue detection process compliance verification automations, one or more issue detection process testing automations, one or more issue detection process communication automations, and/or one or more issue detection process improvement automations. - In this example the blue shaded boxes (e.g., network operation and data flow establishment and management processes, event detection analysis processes, etc.) are data that is directly relevant to the
process rating module 601. The light green shaded boxes (e.g., data protection tools, vulnerability management tools, etc.) are data that may be relevant to theprocess rating module 601. Implementations (e.g., network operation and data flow establishment and management implementation, event detection analysis implementation, etc.) refer to how a particular process is performed within the system aspect. Other data (e.g., diagrams, reports, etc.) provides information regarding one or more of the processes, tools, and implementations. The data listed is exemplary and not intended to be an exhaustive list. - There may be overlap and/or redundancies with respect to the processes, tools, and implementations listed. For example, a security information event management (SIEM) tool may include a combination of tools such as vulnerability management, vulnerability scanning, network monitoring, event detection analysis, event aggregation and correlation, event impact determination, etc. As another example, a network monitoring tool may be considered a boundary protection tool and vice versa. Tools may be shared across processes. For example, while vulnerability management tools are listed near network operation and data flow establishment and management processes, several other processes may rely on vulnerability management tools such as vulnerability scanning processes and event awareness analysis processes.
- In one embodiment, the
process rating module 601 rates how well processes are used for identified tasks. For example, theprocess rating module 601 rates how well the network operation and data flow establishment and management tools (e.g., data protection tools, vulnerability management tools, network monitoring tools, access management tools, boundary protection tools, etc.) are used in accordance with the network operation and data flow establishment and management processes. - In another embodiment, the
process rating module 601 rates the consistency of application of processes in implementation. For example, the process rating module rates the consistency of use of the network operation and data flow establishment and management processes to use the network operation and data flow establishment and management tools in the network operation and data flow establishment and management implementation (e.g., to establish and manage the network operation and expected data flows of the system). -
FIG. 87 is a logic diagram of an example of generating a process rating by the analysis system; in particular the process rating module generating a process rating based on data. The method begins at step 630 where the analysis system determines whether there is at least one process in the collection of data. Note that the threshold number in this step could be greater than one. If there are no processes, the method continues at step 631 where the analysis system generates a process rating of 0 (and/or a word rating of “none”). - If there is at least one process, the method continues at
step 632 where the analysis system determines whether the processes are repeatable. In this instance, repeatable processes produce consistent results, include variations from process to process, are not routinely reviewed in an organized manner, and/or are not all regulated. For example, when the number of processes is below a desired number of processes, the analysis system determines that the processes are not repeatable (e.g., with too few processes, repeatable outcomes cannot be obtained). As another example, when the processes of the data 600 do not include one or more processes on a list of processes the system should have, the analysis system determines that the processes are not repeatable (e.g., with missing processes, repeatable outcomes cannot be obtained). - If the processes are not repeatable, the method continues at
step 633 where the analysis system generates a process rating of 10 (and/or a word rating of “inconsistent”). If, however, the processes are at least repeatable, the method continues atstep 634 where the analysis system determines whether the processes are standardized. In this instance, standardized includes repeatable plus there are no appreciable variations in the processes from process to process, and/or the processes are regulated. - If the processes are not standardized, the method continues at
step 635 where the analysis system generates a process rating of 20 (and/or a word rating of “repeatable”). If, however, the processes are at least standardized, the method continues atstep 636 where the analysis system determines whether the processes are measured. In this instance, measured includes standardized plus precise, exact, and/or calculated to specific needs, concerns, and/or functioning of the system. - If the processes are not measured, the method continues at
step 637 where the analysis system generates a process rating of 30 (and/or a word rating of “standardized”). If, however, the processes are at least measured, the method continues atstep 638 where the analysis system determines whether the processes are optimized. In this instance, optimized includes measured plus processes are up-to-date and/or process improvement assessed on a regular basis as part of system protocols. - If the processes are not optimized, the method continues at
step 639 where the analysis system generates a process rating of 40 (and/or a word rating of “measured”). If the processes are optimized, the method continues at step 640 where the analysis system generates a process rating of 50 (and/or a word rating of “optimized”). Note that the numerical ratings are example values and could be other values. Further note that the number of levels of process rating may be more or less than the six shown. - For this method, distinguishing between repeatable, standardized, measured, and optimized is interpretative based on the manner in which the
data 600 was analyzed. As an example, weighting factors on certain types of analysis affect the level. As a specific example, weighting factors for analysis to determine last revisions of processes, age of last revisions, content verification of processes with respect to a checklist, balance of local processes and system-wide processes, topic verification of the processes with respect to desired topics, and/or process language evaluation will affect the resulting level. -
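For illustration, the tiered determination of FIG. 87 could be coded as a ladder of checks, as sketched below; the predicate functions stand in for the repeatable, standardized, measured, and optimized determinations described above and are hypothetical placeholders.

```python
def rate_processes(processes, is_repeatable, is_standardized, is_measured, is_optimized):
    """Walk the FIG. 87 ladder and return the first tier whose check fails.
    The numeric values mirror the example ratings used in the text."""
    if not processes:
        return 0, "none"
    if not is_repeatable(processes):
        return 10, "inconsistent"
    if not is_standardized(processes):
        return 20, "repeatable"
    if not is_measured(processes):
        return 30, "standardized"
    if not is_optimized(processes):
        return 40, "measured"
    return 50, "optimized"

# Example with trivial stand-in predicates.
print(rate_processes(["patch management"],
                     is_repeatable=lambda p: len(p) >= 1,
                     is_standardized=lambda p: False,
                     is_measured=lambda p: False,
                     is_optimized=lambda p: False))  # -> (20, 'repeatable')
```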
FIG. 88 is a logic diagram of an example of generating a process rating by the analysis system; in particular the process rating module generating a process rating based on use of processes. The method begins atstep 641 where the analysis system determines whether at least one process in the collection of data has been used. Note that the threshold number in this step could be greater than one. If no processes have been used, the method continues atstep 642 where the analysis system generates a process rating of 0 (and/or a word rating of “none”). - If at least one process is used, the method continues at
step 643 where the analysis system determines whether the use of the processes is repeatable. In this instance, repeatable use of processes is consistent use, but with variations from process to process, use is not routinely reviewed or verified in an organized manner, and/or use is not regulated. - If the use of processes is not repeatable, the method continues at
step 644 where the analysis system generates a process rating of 10 (and/or a word rating of “inconsistent”). If, however, the use of processes is at least repeatable, the method continues atstep 645 where the analysis system determines whether the use of processes is standardized. In this instance, standardized includes repeatable plus there are no appreciable variations in the use of processes from process to process, and/or the use of processes is regulated. - If the use of processes is not standardized, the method continues at
step 646 where the analysis system generates a process rating of 20 (and/or a word rating of “repeatable”). If, however, the use of processes is at least standardized, the method continues atstep 647 where the analysis system determines whether the use of processes is measured. In this instance, measured includes standardized plus use is precise, exact, and/or calculated to specific needs, concerns, and/or functioning of the system. - If the use of processes is not measured, the method continues at
step 648 where the analysis system generates a process rating of 30 (and/or a word rating of “standardized”). If, however, the use of processes is at least measured, the method continues atstep 649 where the analysis system determines whether the use of processes is optimized. In this instance, optimized includes measured plus use of processes are up-to-date and/or improving use of processes is assessed on a regular basis as part of system protocols. - If the use of processes is not optimized, the method continues at
step 650 where the analysis system generates a process rating of 40 (and/or a word rating of “measured”). If the use of processes is optimized, the method continues at step 651 where the analysis system generates a process rating of 50 (and/or a word rating of “optimized”). Note that the numerical ratings are example values and could be other values. Further note that the number of levels of process rating may be more or less than the six shown. -
FIG. 89 is a logic diagram of an example of generating a process rating by the analysis system; in particular the process rating module generating a process rating based on consistency of application of processes. The method begins at step 652 where the analysis system determines whether at least one process in the collection of data has been consistently applied. Note that the threshold number in this step could be greater than one. If no processes have been consistently applied, the method continues at step 653 where the analysis system generates a process rating of 0 (and/or a word rating of “none”). - If at least one process has been consistently applied, the method continues at
step 654 where the analysis system determines whether the consistent application of processes is repeatable. In this instance, repeatable consistency of application of processes means that a process is consistently applied for a given circumstance of the system (e.g., determining software applications for like devices in a department), but with variations from process to process, application of processes is not routinely reviewed or verified in an organized manner, and/or application of processes is not regulated. - If the consistency of application of processes is not repeatable, the method continues at
step 655 where the analysis system generates a process rating of 10 (and/or a word rating of “inconsistent”). If, however, the consistency of application of processes is at least repeatable, the method continues atstep 656 where the analysis system determines whether the consistency of application of processes is standardized. In this instance, standardized includes repeatable plus there are no appreciable variations in the application of processes from process to process, and/or the application of processes is regulated. - If the consistency of application of processes is not standardized, the method continues at
step 657 where the analysis system generates a process rating of 20 (and/or a word rating of “repeatable”). If, however, the consistency of application of processes is at least standardized, the method continues atstep 658 where the analysis system determines whether the consistency of application of processes is measured. In this instance, measured includes standardized plus application of processes is precise, exact, and/or calculated to specific needs, concerns, and/or functioning of the system. - If the consistency of application of processes is not measured, the method continues at
step 659 where the analysis system generates a process rating of 30 (and/or a word rating of “standardized”). If, however, the consistency of application of processes is at least measured, the method continues atstep 660 where the analysis system determines whether the consistency of application of processes is optimized. In this instance, optimized includes measured plus application of processes is up-to-date and/or improving application of processes is assessed on a regular basis as part of system protocols. - If the consistency of application of processes is not optimized, the method continues at
step 661 where the analysis system generates a process rating of 40 (and/or a word rating of “measured”). If the consistency of application of processes is optimized, the method continues at step 662 where the analysis system generates a process rating of 50 (and/or a word rating of “optimized”). Note that the numerical ratings are example values and could be other values. Further note that the number of levels of process rating may be more or less than the six shown. -
FIG. 90 is a logic diagram of a further example of an analysis system determining an issue detection rating for a system, or portion thereof; in particular generating a policy rating. The method begins atstep 670 where the analysis system generates a first policy rating based on a first combination of a system criteria (e.g., system requirements), of a system mode (e.g., system functions), of an evaluation perspective (e.g., implementation), and of an evaluation viewpoint (e.g., disclosed data). - The method continues at
step 671 where the analysis system generates a second policy rating based on a second combination of a system criteria (e.g., system design), of a system mode (e.g., system functions), of an evaluation perspective (e.g., implementation), and of an evaluation viewpoint (e.g., disclosed data). The method continues atstep 672 where the analysis system generates the policy rating based on the first and second policy ratings. -
FIG. 91 is a logic diagram of a further example of generating a policy rating for understanding of system build for issue detection security functions of an organization. The method begins atstep 673 where the analysis system identifies policies regarding issue detection security functions from the data. The method continues atstep 674 where the analysis system generates a policy rating from the data. Examples of this were discussed with reference toFIG. 81 . - The method also continues at
step 675 where the analysis system determines use of the policies for issue detection. The method continues atstep 676 where the analysis system generates a policy rating based on use. Examples of this were discussed with reference toFIG. 81 . - The method also continues at
step 677 where the analysis system determines consistency of applying the policies for issue detection. The method continues atstep 678 where the analysis system generates a policy rating based on consistency of use. Examples of this were discussed with reference toFIG. 81 . The method continues atstep 679 where the analysis system generates the policy rating based on the policy rating from the data, the policy rating based on use, and the policy rating based on consistency of use. -
FIG. 92 is a logic diagram of a further example of generating a policy rating for understanding of verifying issue detection security functions of an organization. The method begins atstep 680 where the analysis system identifies policies to verify the issue detection security functions from the data. The method continues atstep 681 where the analysis system generates a policy rating from the data. Examples of this were discussed with reference toFIG. 81 . - The method also continues at
step 682 where the analysis system determines use of the policies to verify the issue detection security functions. The method continues atstep 683 where the analysis system generates a policy rating based on use of the verify policies. Examples of this were discussed with reference toFIG. 81 . - The method also continues at
step 684 where the analysis system determines consistency of applying the verifying policies to issue detection security functions. The method continues atstep 685 where the analysis system generates a policy rating based on consistency of use. Examples of this were discussed with reference toFIG. 81 . The method continues atstep 686 where the analysis system generates the policy rating based on the policy rating from the data, the policy rating based on use, and the policy rating based on consistency of use. -
FIG. 93 is a logic diagram of an example of generating a policy rating by the analysis system; in particular the policy rating module generating a policy rating based ondata 600. The method begins atstep 687 where the analysis system determines whether there is at least one policy in the collection of data. Note that the threshold number in this step could be greater than one. If there are no policies, the method continues atstep 688 where the analysis system generates a policy rating of 0 (and/or a word rating of “none”). - If there is at least one policy, the method continues at
step 689 where the analysis system determines whether the policies are defined. In this instance, defined policies include sufficient detail to produce consistent results, include variations from policy to policy, are not routinely reviewed in an organized manner, and/or are not all regulated. For example, when the number of policies is below a desired number of policies, the analysis system determines that the policies are not defined (e.g., with too few policies, consistent outcomes cannot be obtained). As another example, when the policies of the data 600 do not include one or more policies on a list of policies the system should have, the analysis system determines that the policies are not defined (e.g., with missing policies, consistent outcomes cannot be obtained). - If the policies are not defined, the method continues at
step 690 where the analysis system generates a policy rating of 5 (and/or a word rating of “informal”). If, however, the policies are at least defined, the method continues atstep 691 where the analysis system determines whether the policies are audited. In this instance, audited includes defined plus the policies are routinely reviewed, and/or the policies are regulated. - If the policies are not audited, the method continues at
step 692 where the analysis system generates a policy rating of 10 (and/or a word rating of “defined”). If, however, the policies are at least audited, the method continues atstep 693 where the analysis system determines whether the policies are embedded. In this instance, embedded includes audited plus are systematically rooted in most, if not all, aspects of the system. - If the policies are not embedded, the method continues at
step 694 where the analysis system generates a policy rating of 15 (and/or a word rating of “audited”). If the policies are embedded, the method continues at step 695 where the analysis system generates a policy rating of 20 (and/or a word rating of “embedded”). Note that the numerical ratings are example values and could be other values. Further note that the number of levels of policy rating may be more or less than the five shown. - For this method, distinguishing between defined, audited, and embedded is interpretative based on the manner in which the
data 600 was analyzed. As an example, weighting factors on certain types of analysis affect the level. As a specific example, weighting factors for analysis to determine last revisions of policies, age of last revisions, content verification of policies with respect to a checklist, balance of local policies and system-wide policies, topic verification of the policies with respect to desired topics, and/or policy language evaluation will affect the resulting level. -
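As a hedged sketch of the weighting described in the preceding paragraph, weighted per-analysis scores could be summed and thresholded into a policy level; the score names, weights, and thresholds below are hypothetical assumptions, not disclosed values.

```python
def policy_level(analysis_scores, weights,
                 thresholds=((15, "embedded"), (10, "audited"), (5, "defined"))):
    """Map weighted per-analysis scores (e.g., revision age, checklist coverage,
    topic coverage, language quality) onto a policy rating level."""
    score = sum(analysis_scores[name] * weights.get(name, 1.0) for name in analysis_scores)
    for cutoff, label in thresholds:
        if score >= cutoff:
            return score, label
    return score, "informal"

scores = {"revision_age": 3, "checklist": 4, "topics": 2, "language": 1}
print(policy_level(scores, weights={"checklist": 2.0}))  # -> (14.0, 'audited')
```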
FIG. 94 is a logic diagram of an example of generating a policy rating by the analysis system; in particular the policy rating module generating a policy rating based on use of the policies. The method begins at step 696 where the analysis system determines whether there is at least one use of a policy. Note that the threshold number in this step could be greater than one. If there are no uses of policies, the method continues at step 697 where the analysis system generates a policy rating of 0 (and/or a word rating of “none”). - If there is at least one use of a policy, the method continues at
step 698 where the analysis system determines whether the use of policies is defined. In this instance, defined use of policies include sufficient detail on how and/or when to use a policy, include variations in use from policy to policy, use of policies is not routinely reviewed in an organized manner, and/or use of policies is not regulated. - If the use of policies is not defined, the method continues at
step 699 where the analysis system generates a policy rating of 5 (and/or a word rating of “informal”). If, however, the use of policies is at least defined, the method continues atstep 700 where the analysis system determines whether the use of policies is audited. In this instance, audited includes defined plus the use of policies is routinely reviewed, and/or the use of policies is regulated. - If the use of policies is not audited, the method continues at
step 701 where the analysis system generates a policy rating of 10 (and/or a word rating of “defined”). If, however, the use of policies is at least audited, the method continues atstep 702 where the analysis system determines whether the use of policies is embedded. In this instance, embedded includes audited plus use of policies is systematically rooted in most, if not all, aspects of the system. - If the use of policies is not embedded, the method continues at
step 703 where the analysis system generates a policy rating of 15 (and/or a word rating of “audited”). If the use of policies is embedded, the method continues at step 704 where the analysis system generates a policy rating of 20 (and/or a word rating of “embedded”). Note that the numerical ratings are example values and could be other values. Further note that the number of levels of policy rating may be more or less than the five shown. -
FIG. 95 is a logic diagram of an example of generating a policy rating by the analysis system; in particular the policy rating module generating a policy rating based on consistent application of policies. The method begins at step 705 where the analysis system determines whether there is at least one consistent application of a policy. Note that the threshold number in this step could be greater than one. If there is no consistent application of a policy, the method continues at step 706 where the analysis system generates a policy rating of 0 (and/or a word rating of “none”). - If there is at least one consistent application of a policy, the method continues at
step 707 where the analysis system determines whether the consistent application of policies is defined. In this instance, defined application of policies include sufficient detail on when policies apply, includes application variations from policy to policy, application of policies is not routinely reviewed in an organized manner, and/or application of policies is not regulated. - If the application of policies is not defined, the method continues at
step 708 where the analysis system generates a policy rating of 5 (and/or a word rating of “informal”). If, however, the application of policies is at least defined, the method continues at step 709 where the analysis system determines whether the application of policies is audited. In this instance, audited includes defined plus the application of policies is routinely reviewed, and/or the application of policies is regulated. - If the application of policies is not audited, the method continues at
step 710 where the analysis system generates a policy rating of 10 (and/or a word rating of “defined”). If, however, the application of policies is at least audited, the method continues atstep 711 where the analysis system determines whether the application of policies is embedded. In this instance, embedded includes audited plus application of policies is systematically rooted in most, if not all, aspects of the system. - If the application of policies is not embedded, the method continues at
step 712 where the analysis system generates a policy rating of 15 (and/or a word rating of “audited”). If the application of policies is embedded, the method continues at step 713 where the analysis system generates a policy rating of 20 (and/or a word rating of “embedded”). Note that the numerical ratings are example values and could be other values. Further note that the number of levels of policy rating may be more or less than the five shown. -
FIG. 96 is a logic diagram of a further example of an analysis system determining an issue detection rating for a system, or portion thereof; in particular generating a documentation rating. The method begins atstep 720 where the analysis system generates a first documentation rating based on a first combination of a system criteria (e.g., system requirements), of a system mode (e.g., system functions), of an evaluation perspective (e.g., implementation), and of an evaluation viewpoint (e.g., disclosed data). - The method continues at
step 721 where the analysis system generates a second documentation rating based on a second combination of a system criteria (e.g., system design), of a system mode (e.g., system functions), of an evaluation perspective (e.g., implementation), and of an evaluation viewpoint (e.g., disclosed data). The method continues atstep 722 where the analysis system generates the documentation rating based on the first and second documentation ratings. -
FIG. 97 is a logic diagram of a further example of generating a documentation rating for understanding of system build of issue detection security functions of an organization. The method begins atstep 723 where the analysis system identifies documentation regarding issue detection security functions from the data. The method continues atstep 724 where the analysis system generates a documentation rating from the data. Examples of this were discussed with reference toFIG. 81 . - The method also continues at
step 725 where the analysis system determines use of the documentation for issue detection security functions. The method continues atstep 726 where the analysis system generates a documentation rating based on use. Examples of this were discussed with reference toFIG. 81 . - The method also continues at
step 727 where the analysis system determines consistency of applying the documentation for issue detection security functions. The method continues atstep 728 where the analysis system generates a documentation rating based on consistency of use. Examples of this were discussed with reference toFIG. 81 . The method continues atstep 729 where the analysis system generates the documentation rating based on the documentation rating from the data, the documentation rating based on use, and the documentation rating based on consistency of use. -
FIG. 98 is a logic diagram of a further example of generating a documentation rating for understanding of verifying issue detection security functions of an organization. The method begins at step 730 where the analysis system identifies documentation to verify the issue detection security functions from the data. The method continues at step 731 where the analysis system generates a documentation rating from the data. Examples of this were discussed with reference to FIG. 81. - The method also continues at
step 732 where the analysis system determines use of the documentation to verify issue detection security functions. The method continues atstep 733 where the analysis system generates a documentation rating based on use of the verify documentation. Examples of this were discussed with reference toFIG. 81 . - The method also continues at
step 734 where the analysis system determines consistency of applying the verifying documentation for issue detection security functions. The method continues atstep 735 where the analysis system generates a documentation rating based on consistency of use. Examples of this were discussed with reference toFIG. 81 . The method continues atstep 736 where the analysis system generates the documentation rating based on the documentation rating from the data, the documentation rating based on use, and the documentation rating based on consistency of use. -
FIG. 99 is a logic diagram of an example of generating a documentation rating by the analysis system; in particular the documentation rating module generating a documentation rating based ondata 600. The method begins atstep 737 where the analysis system determines whether there is at least one document in the collection of data. Note that the threshold number in this step could be greater than one. If there are no documents, the method continues atstep 738 where the analysis system generates a documentation rating of 0 (and/or a word rating of “none”). - If there is at least one document, the method continues at
step 739 where the analysis system determines whether the documents are formalized. In this instance, formalized documents include sufficient detail to produce consistent documentation, include form variations from document to document, are not routinely reviewed in an organized manner, and/or formation of documents is not regulated. - If the documents are not formalized, the method continues at
step 740 where the analysis system generates a documentation rating of 5 (and/or a word rating of “informal”). If, however, the documents are at least formalized, the method continues atstep 741 where the analysis system determines whether the documents are metric & reporting. In this instance, metric & reporting includes formal plus the documents are routinely reviewed, and/or the formation of documents is regulated. - If the documents are not metric & reporting, the method continues at
step 742 where the analysis system generates a documentation rating of 10 (and/or a word rating of “formal”). If, however, the documents are at least metric & reporting, the method continues at step 743 where the analysis system determines whether the documents are improved. In this instance, improved includes metric & reporting plus document formation is systematically rooted in most, if not all, aspects of the system. - If the documents are not improved, the method continues at
step 744 where the analysis system generates a documentation rating of 15 (and/or a word rating of “metric & reporting”). If the documents are improved, the method continues at step 745 where the analysis system generates a documentation rating of 20 (and/or a word rating of “improvement”). Note that the numerical ratings are example values and could be other values. Further note that the number of levels of documentation rating may be more or less than the five shown. - For this method, distinguishing between formalized, metric & reporting, and improvement is interpretative based on the manner in which the
data 600 was analyzed. As an example, weighting factors on certain types of analysis affect the level. As a specific example, weighting factors for analysis to determine last revisions of documents, age of last revisions, content verification of documents with respect to a checklist, balance of local documents and system-wide documents, topic verification of the documents with respect to desired topics, and/or document language evaluation will affect the resulting level. -
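For illustration, the pairing of numerical ratings with word ratings used throughout FIGS. 99-101 could be kept in a simple lookup, as sketched below; the helper name and the nearest-tier behavior are assumptions made for the example.

```python
# Hypothetical lookup pairing the numeric documentation ratings of FIG. 99 with
# their word ratings; the values mirror the example numbers used in the text.
DOCUMENTATION_WORD_RATINGS = {0: "none", 5: "informal", 10: "formal",
                              15: "metric & reporting", 20: "improvement"}

def word_rating(numeric_rating, table=DOCUMENTATION_WORD_RATINGS):
    """Return the word rating for the closest tabulated numeric rating."""
    closest = min(table, key=lambda value: abs(value - numeric_rating))
    return table[closest]

print(word_rating(15))  # -> 'metric & reporting'
print(word_rating(12))  # -> 'formal' (10 is the nearest tabulated tier)
```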
FIG. 100 is a logic diagram of an example of generating a documentation rating by the analysis system; in particular the documentation rating module generating a documentation rating based on use of documents. The method begins atstep 746 where the analysis system determines whether there is at least one use of a document. Note that the threshold number in this step could be greater than one. If there are no use of documents, the method continues atstep 747 where the analysis system generates a documentation rating of 0 (and/or a word rating of “none”). - If there is at least one use of a document, the method continues at
step 748 where the analysis system determines whether the use of the documents is formalized. In this instance, formalized use of documents include sufficient detail regarding how to use the documentation, include use variations from document to document, use of documents is not routinely reviewed in an organized manner, and/or use of documents is not regulated. - If the use of documents is not formalized, the method continues at
step 749 where the analysis system generates a documentation rating of 5 (and/or a word rating of “informal”). If, however, the use of documents is at least formalized, the method continues atstep 750 where the analysis system determines whether the use of the documents is metric & reporting. In this instance, metric & reporting includes formal plus use of documents is routinely reviewed, and/or the use of documents is regulated. - If the use of documents is not metric & reporting, the method continues at
step 751 where the analysis system generates a documentation rating of 10 (and/or a word rating of “formal”). If, however, the use of documents is at least metric & reporting, the method continues at step 752 where the analysis system determines whether the use of documents is improved. In this instance, improved includes metric & reporting plus use of documents is systematically rooted in most, if not all, aspects of the system. - If the use of documents is not improved, the method continues at
step 753 where the analysis system generates a documentation rating of 15 (and/or a word rating of “metric & reporting”). If the use of documents is improved, the method continues at step 754 where the analysis system generates a documentation rating of 20 (and/or a word rating of “improvement”). Note that the numerical ratings are example values and could be other values. Further note that the number of levels of documentation rating may be more or less than the five shown. -
FIG. 101 is a logic diagram of an example of generating a documentation rating by the analysis system; in particular the documentation rating module generating a documentation rating based on application of documents. The method begins atstep 755 where the analysis system determines whether there is at least one application of a document. Note that the threshold number in this step could be greater than one. If there are no applications of documents, the method continues atstep 756 where the analysis system generates a documentation rating of 0 (and/or a word rating of “none”). - If there is at least one application of a document, the method continues at
step 757 where the analysis system determines whether the application of the documents is formalized. In this instance, formalized application of documents include sufficient detail regarding how to apply the documentation, include application variations from document to document, application of documents is not routinely reviewed in an organized manner, and/or application of documents is not regulated. - If the application of documents is not formalized, the method continues at
step 758 where the analysis system generates a documentation rating of 5 (and/or a word rating of “informal”). If, however, the application of documents is at least formalized, the method continues atstep 759 where the analysis system determines whether the application of the documents is metric & reporting. In this instance, metric & reporting includes formal plus application of documents is routinely reviewed, and/or the application of documents is regulated. - If the application of documents is not metric & reporting, the method continues at
step 760 where the analysis system generates a documentation rating of 10 (and/or a word rating of “formal”). If, however, the application of documents is at least metric & reporting, the method continues at step 761 where the analysis system determines whether the application of documents is improved. In this instance, improved includes metric & reporting plus application of documents is systematically rooted in most, if not all, aspects of the system. - If the application of documents is not improved, the method continues at
step 762 where the analysis system generates a documentation rating of 15 (and/or a word rating of “metric & reporting”). If the application of documents is improved, the method continues at step 763 where the analysis system generates a documentation rating of 20 (and/or a word rating of “improvement”). Note that the numerical ratings are example values and could be other values. Further note that the number of levels of documentation rating may be more or less than the five shown. -
FIG. 102 is a logic diagram of a further example of an analysis system determining an issue detection rating for a system, or portion thereof; in particular generating an automation rating. The method begins atstep 764 where the analysis system generates a first automation rating based on a first combination of a system criteria (e.g., system requirements), of a system mode (e.g., system functions), of an evaluation perspective (e.g., implementation), and of an evaluation viewpoint (e.g., disclosed data). - The method continues at
step 765 where the analysis system generates a second automation rating based on a second combination of a system criteria (e.g., system design), of a system mode (e.g., system functions), of an evaluation perspective (e.g., implementation), and of an evaluation viewpoint (e.g., disclosed data). The method continues atstep 766 where the analysis system generates the automation rating based on the first and second automation ratings. -
FIG. 103 is a logic diagram of a further example of generating an automation rating for understanding of system build of issue detection security functions of an organization. The method begins atstep 767 where the analysis system identifies automation regarding issue detection security functions from the data. The method continues atstep 768 where the analysis system generates an automation rating from the data. Examples of this were discussed with reference toFIG. 81 . - The method also continues at
step 769 where the analysis system determines use of the automation for issue detection security functions. The method continues at step 770 where the analysis system generates an automation rating based on use. Examples of this were discussed with reference to FIG. 81. - The method also continues at
step 771 where the analysis system determines consistency of applying the automation for issue detection security functions. The method continues at step 772 where the analysis system generates an automation rating based on consistency of use. Examples of this were discussed with reference to FIG. 81. The method continues at step 773 where the analysis system generates the automation rating based on the automation rating from the data, the automation rating based on use, and the automation rating based on application (i.e., consistency of use). -
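One way to read FIG. 103 (and FIG. 104, described next) is as three sub-ratings, one from the data, one from use, and one from consistency of use, folded into a single automation rating. The weighted average below is only an assumed combining function offered as a sketch; the figure does not fix the aggregation.

```python
def combine_automation_ratings(from_data: float, from_use: float, from_consistency: float,
                               weights: tuple[float, float, float] = (1.0, 1.0, 1.0)) -> float:
    """Fold the three sub-ratings of FIG. 103 into one automation rating.

    A weighted average is assumed purely for illustration; any other
    aggregation (minimum, product, etc.) would fit the figure equally well.
    """
    ratings = (from_data, from_use, from_consistency)
    return sum(r * w for r, w in zip(ratings, weights)) / sum(weights)


print(combine_automation_ratings(10, 5, 5))  # about 6.67 on the example 0-10 scale
```
-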
FIG. 104 is a logic diagram of a further example of generating an automation rating for understanding of system build of issue detection security functions of an organization. The method begins at step 774 where the analysis system identifies automation to verify issue detection security functions from the data. The method continues at step 775 where the analysis system generates an automation rating from the data. Examples of this were discussed with reference to FIG. 81. - The method also continues at
step 776 where the analysis system determines use of the automation for issue detection security functions. The method continues at step 777 where the analysis system generates an automation rating based on use of the verify automation. Examples of this were discussed with reference to FIG. 81. - The method also continues at
step 778 where the analysis system determines consistency of applying the verifying automation for issue detection security functions. The method continues at step 779 where the analysis system generates an automation rating based on consistency of use. Examples of this were discussed with reference to FIG. 81. The method continues at step 780 where the analysis system generates the automation rating based on the automation rating from the data, the automation rating based on use, and the automation rating based on consistency of use. -
FIG. 105 is a logic diagram of an example of generating an automation rating by the analysis system; in particular the automation rating module generating an automation rating based on data 600. The method begins at step 781 where the analysis system determines whether there is available automation for a particular system aspect, system criteria, and/or system mode. If automation is not available, the method continues at step 782 where the analysis system generates an automation rating of 10 (and/or a word rating of “unavailable”). - If automation is available, the method continues at
step 783 where the analysis system determines whether there is at least one automation in the data. If not, the method continues at step 784 where the analysis system generates an automation rating of 0 (and/or a word rating of “none”). - If there is at least one automation, the method continues at
step 785 where the analysis system determines whether full automation is found in the data. In this instance, full automation refers to all of the automation techniques that are available for the system being in the data 600. - If the automation is not full, the method continues at
step 786 where the analysis system generates an automation rating of 5 (and/or a word rating of “partial”). If, however, the automation is full, the method continues at step 787 where the analysis system generates an automation rating of 10 (and/or a word rating of “full”). Note that the numerical ratings are example values and could be other values. Further note that the number of levels of automation may be more or less than the four shown. -
FIG. 106 is a logic diagram of an example of generating an automation rating by the analysis system; in particular the automation rating module generating an automation rating based on use. The method begins at step 788 where the analysis system determines whether there is available automation for a particular system aspect, system criteria, and/or system mode. If automation is not available, the method continues at step 789 where the analysis system generates an automation rating of 10 (and/or a word rating of “unavailable”). - If automation is available, the method continues at
step 790 where the analysis system determines whether there is at least one use of automation. If not, the method continues at step 791 where the analysis system generates an automation rating of 0 (and/or a word rating of “none”). - If there is at least one use of automation, the method continues at
step 792 where the analysis system determines whether automation is fully used. In this instance, full use of automation refers to the automation techniques that the system has being fully used. - If the use of automation is not full, the method continues at
step 793 where the analysis system generates an automation rating of 5 (and/or a word rating of “partial”). If, however, the use of automation is full, the method continues at step 794 where the analysis system generates an automation rating of 10 (and/or a word rating of “full”). -
FIG. 107 is a logic diagram of an example of generating an automation rating by the analysis system; in particular the automation rating module generating an automation rating based on application of automation. The method begins at step 795 where the analysis system determines whether there is available automation for a particular system aspect, system criteria, and/or system mode. If automation is not available, the method continues at step 796 where the analysis system generates an automation rating of 10 (and/or a word rating of “unavailable”). - If automation is available, the method continues at
step 797 where the analysis system determines whether there is at least one application of automation. If not, the method continues at step 798 where the analysis system generates an automation rating of 0 (and/or a word rating of “none”). - If there is at least one application of automation, the method continues at
step 799 where the analysis system determines whether automation is fully applied. In this instance, full application of automation refers to the automation techniques of the system being applied to achieve consistent use. - If the application of automation is not full, the method continues at
step 800 where the analysis system generates an automation rating of 5 (and/or a word rating of “partial”). If, however, the application of automation is full, the method continues at step 801 where the analysis system generates an automation rating of 10 (and/or a word rating of “full”). -
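The tiered logic of FIGS. 105-107 shares one skeleton, which the following hedged sketch expresses using the figures' example values; the three flags are assumed summaries of the collected data (automation found in the data for FIG. 105, use of automation for FIG. 106, and application, i.e., consistency of use, for FIG. 107).

```python
def automation_rating(available: bool, any_found: bool, full: bool) -> tuple[int, str]:
    """Shared skeleton for FIGS. 105-107 using the figures' example values."""
    if not available:
        return 10, "unavailable"   # no automation exists for this aspect, so no penalty
    if not any_found:
        return 0, "none"
    if not full:
        return 5, "partial"
    return 10, "full"


print(automation_rating(available=True, any_found=True, full=False))  # (5, 'partial')
```
-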
FIG. 108 is a logic diagram of a further example of an analysis system determining an issue detection rating for a system, or portion thereof; in particular the analysis system identifying system elements for use in the issue detection evaluation. The method begins at step 810 where the analysis system activates at least one detection tool (e.g., cybersecurity tool, end point tool, network tool, IP address tool, hardware detection tool, software detection tool, etc.) based on the system aspect. - The method continues at
step 811 where the analysis system determines whether an identified system element has already been identified for the system aspect (e.g., is already in the collection of data 600 and/or is part of the gathered data). If yes, the method continues at step 812 where the analysis system determines whether the identifying of system elements is done. If not, the method repeats at step 811. If the identifying of system elements is done, the method continues at step 813 where the analysis system determines whether to end the method or repeat it for another system aspect, or portion thereof. - If, at
step 811, the identified system element is not included in the collection of data, the method continues at step 814 where the analysis system determines whether the potential system element is already identified as being a part of the system aspect, but not included in the collection of data 600 (e.g., is it cataloged as being part of the system?). If yes, the method continues at step 815 where the analysis system adds the identified system element to the collection of data 600. - If, at
step 814, the system element is not cataloged as being part of the system, the method continues at step 816 where the analysis system obtains data regarding the potential system element. For example, the analysis system obtains a device ID, a user ID, a device serial number, a device description, a software ID, a software serial number, a software description, vendor information and/or other data regarding the system element. - The method continues at
step 817 where the analysis system verifies the potential system element based on the data. For example, the analysis system verifies one or more of a device ID, a user ID, a device serial number, a device description, a software ID, a software serial number, a software description, vendor information and/or other data regarding the system element to establish that the system element is a part of the system. When the potential system element is verified, the method continues at step 818 where the analysis system adds the system element as a part of the system aspect (e.g., catalogs it as part of the system and/or adds it to the collection of data 600). -
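As an illustrative sketch only, the reconciliation loop of FIG. 108 could be organized as follows; the detected, collection, catalog, fetch_details, and verify names are hypothetical hooks into the analysis system and the detection tools, not elements defined by the figure.

```python
def reconcile_detected_elements(detected, collection, catalog, fetch_details, verify):
    """Sketch of FIG. 108: fold detection-tool findings into the collected data.

    `detected` is an iterable of element identifiers reported by detection tools,
    `collection` is the gathered data (a dict standing in for data 600), `catalog`
    is the set of elements already known to belong to the system, and
    `fetch_details` / `verify` are assumed callables that return element data
    (device ID, serial number, vendor information, etc.) and check that data
    against the system, respectively.
    """
    for element in detected:
        if element in collection:
            continue                       # already part of the gathered data (step 811)
        if element in catalog:
            collection[element] = {}       # known system element, add it (steps 814-815)
            continue
        details = fetch_details(element)   # obtain device/user/software IDs, vendor info (step 816)
        if verify(element, details):       # establish it is part of the system (step 817)
            catalog.add(element)
            collection[element] = details  # catalog it and add it to the collection (step 818)
    return collection, catalog
```
-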
FIG. 109 is a diagram of an example of system aspects, evaluation aspects, evaluation rating metrics, and analysis system output options of an analysis system 10 for analyzing a system 11, or portion thereof. For instance, analysis system 10 is evaluating, with respect to process, policy, procedure, certification, documentation, and/or automation, the understanding and implementation of the guidelines, system requirements, system design, and/or system build for issue detection security functions of an organization based on disclosed data and discovered data to produce an evaluation rating. - For this example, the
analysis system 10 can generate one or a plurality of issue detection evaluation ratings for implementation of the guidelines, system requirements, system design, and/or system build for issue detection security functions of an organization based on disclosed data and discovered data in accordance with the evaluation rating metrics of process, policy, procedure, certification, documentation, and/or automation. A few, but far from exhaustive, examples are shown in FIGS. 113-117. -
FIG. 110 is an example of issue detection data for use by an analysis system to generate an issue detection rating for a system, or portion thereof. FIG. 110 is similar to FIG. 77 except, in addition to disclosed information regarding system build of issue detection security functions of an organization, here, the issue detection information further includes discovered information regarding system build, system design, and implementation of issue detection security functions of the organization. Discovered information is the information discovered about the system by the analysis system during the analysis. - In this example, the discovered and disclosed anomalies and event awareness information is gathered from various disclosed and discovered data sources as a portion of the discovered and disclosed issue detection information recorded as issue detection data. The disclosed portion of the discovered and disclosed anomalies and event awareness information may include information obtained from disclosed diagrams, a list of system tools, disclosed device information, disclosed application information, the system security plan, and disclosed reports relevant to anomalies and event awareness as discussed with reference to
FIG. 77. - The discovered portion of the discovered and disclosed anomalies and event awareness information is obtained from analyzing at least a portion of the disclosed information and from engagement with system devices, operating systems, networks, system tools, one or more departments (e.g., the IT department), one or more groups, one or more users (e.g., discovered user data), etc. For example, by engaging in a network analysis, the analysis system may discover devices (e.g., discovered devices) connected to an organization's network that were not in the disclosed information.
- By engaging with devices of the organization, the analysis system is operable to analyze whether the devices have any hardware and/or software issues. For example, hardware and/or software may be improperly installed, old, and/or misused. As another example, hardware may not efficiently support software.
- By engaging with various system tools, the analysis system can determine how well the system tools are working and if they are executing their intended purposes. For example, the analysis system can engage with the log management tool, gather logs from system components, and determine log management processes to assess the log management tool's capabilities. As an example, the analysis system may discover that not all log data is being collected and additional log collection forwarding devices (e.g., hardware and/or software) are needed. As another example, the analysis system may discover that while the log management tool aggregates event data from various sources, the event analysis component of the log management tool is limited (e.g., the security analyst team is short staffed and unable to manage the data). When the analysis system outputs include deficiency identification and/or auto-correction, the analysis system is operable to recommend or auto-install a SIEM tool or similar event aggregation and analysis tool to remedy the issue.
- By engaging with various departments, groups, users (e.g., discovered user data such as user communications (e.g., email content, memos, internal documents, text messages, etc.)), etc., the analysis system develops an understanding of system data flow, network operations, personnel roles and responsibilities, and execution of disclosed processes, policies, procedures, etc. For example, a disclosed issue detection policy may require the security analyst team to alert the legal department upon discovery of an incident. However, the analysis system discovers that there is no clear definition of the term incident in the disclosed information and the issue detection policy is flawed. As another example, the analysis system may discover that the incident alert threshold is set too low and is creating noise and alarm fatigue. Numerous other pieces of discovered and disclosed anomalies and event awareness information may be gathered from various information sources. The above examples are far from exhaustive.
-
FIG. 111 is an example of issue detection data for use by an analysis system to generate an issue detection rating for a system, or portion thereof. FIG. 111 is similar to FIG. 78 except, in addition to disclosed information regarding system build of issue detection security functions of an organization, here, the issue detection information further includes discovered information regarding system build, system design, and implementation of issue detection security functions of the organization. Discovered information is the information discovered about the system by the analysis system during the analysis. - In this example, discovered and disclosed continuous security monitoring information is gathered from various data sources as a portion of the discovered and disclosed issue detection information recorded as issue detection data. The disclosed portion of the discovered and disclosed continuous security monitoring information may include the disclosed system security plan, disclosed list of system tools, disclosed device information, training materials, and disclosed user information relevant to continuous security monitoring as discussed with reference to
FIG. 78. - The discovered portion of the discovered and disclosed continuous security monitoring information is obtained from analyzing at least a portion of the disclosed information and from engagement with the system devices, applications, networks, system tools, one or more departments (e.g., the IT department), one or more groups, one or more users (e.g., discovered user data), etc.
- For example, by engaging with the IT department, the analysis system develops an understanding of security monitoring data flows, personnel roles and responsibilities, and execution of disclosed continuous security monitoring processes, policies, procedures, etc.
- By engaging in a network analysis, the analysis system may discover devices (e.g., discovered devices) connected to an organization's network that were not in the disclosed information. By engaging with system devices, the analysis system is operable to analyze whether the devices have any hardware and/or software issues. For example, a disclosed malicious code detection policy may state that all system devices require current antivirus software; however, the analysis system analyzes system devices to determine that several devices have no antivirus software installed or have an old version installed.
- By engaging with various system tools, the analysis system can determine how well the system tools are working and if they are executing their intended purposes. For example, the analysis system can engage with the network monitoring tool, gather network traffic data, and determine network monitoring processes to assess the network monitoring tool's capabilities. As an example, the analysis system may discover that an important device is not being monitored. As another example, the analysis system may discover that the network monitoring tool has a negative performance impact on the devices where it is implemented. As another example, the analysis system may discover that external service provider activity is not monitored. When the analysis system outputs include deficiency identification and/or auto-correction, the analysis system is operable to recommend and/or auto-install remedies such as an external service provider activity monitoring tool (e.g., a cloud access security broker (CASB)) in this instance.
- As another example, the analysis system can assess the quality and performance of an alarm system installed at a facility for purposes of physical environment monitoring. As a further example, the analysis system can assess the antivirus and malware protection software implemented by the system to determine whether it is appropriate for the organization and has the necessary features (e.g., auto-update, central management, etc.).
- By engaging with various groups, users (e.g., discovered user data such as user communications (e.g., email content, memos, internal documents, text messages, etc.)), etc., the analysis system develops an understanding of system data flow, personnel roles and responsibilities, and execution of disclosed processes, policies, procedures, etc. For example, disclosed personnel training materials may disclose policies regarding user uploads and network connections. By engaging with devices and analyzing communications on the network, the analysis system can assess whether the users are adhering to the practices outlined in the training materials (e.g., are users performing unauthorized uploads, are users connecting to unknown networks, are users falling victim to phishing emails, etc.). Numerous other pieces of discovered and disclosed continuous security monitoring information may be gathered from various information sources. The above examples are far from exhaustive.
-
FIG. 112 is an example of issue detection data for use by an analysis system to generate an issue detection rating for a system, or portion thereof. FIG. 112 is similar to FIG. 79 except, in addition to disclosed information regarding system build of issue detection security functions of an organization, here, the issue detection information further includes discovered information regarding system build, system design, and implementation of issue detection security functions of the organization. Discovered information is the information discovered about the system by the analysis system during the analysis. - In this example, discovered and disclosed issue detection processes analysis information is gathered from various sources as a portion of the discovered and disclosed issue detection information recorded as issue detection data. The disclosed portion of the discovered and disclosed issue detection processes analysis information may include the disclosed system security plan, disclosed diagrams, a disclosed list of system tools, personnel handbooks, human resources (HR) documents, personnel training materials, reports, and other disclosed internal documents as discussed with reference to
FIG. 79. - The discovered portion of the discovered and disclosed issue detection processes analysis information is obtained from analyzing at least a portion of the disclosed information and from engagement with the system devices, operating systems, networks, system tools, one or more departments (e.g., the IT department), one or more groups, one or more users (e.g., discovered user data), etc.
- For example, by engaging with system devices and various system tools, the analysis system can determine how well the system tools are working and if they are executing their intended purposes. For example, the analysis system can engage with verification tools, gather network and data flow information, and determine issue detection processes to assess the verification tools' capabilities. As an example, the analysis system may discover that vulnerability scanning processes are insufficient for the system or a portion thereof (e.g., scanning is too infrequent).
- By engaging with system devices and various departments (e.g., the IT department, the HR department, etc.), the analysis system develops an understanding of system data flow, personnel roles and responsibilities, and execution of disclosed processes, policies, procedures, etc. For example, the analysis system can analyze whether there is overlap in the roles and responsibilities of the security analyst team, whether security reports are sent to the right parties, whether personnel act in accordance with their roles, etc.
- By analyzing disclosed personnel training materials and reports and by engaging with users of the system (e.g., discovered user data such as user communications (e.g., email content, memos, internal documents, text messages, etc.)), the analysis system can assess whether training occurs often enough, covers the correct topics, involves the right people, and whether the matters discussed are enforced. Numerous other pieces of discovered and disclosed issue detection processes analysis information may be gathered from various information sources. The above examples are far from exhaustive.
-
FIG. 113 is a diagram of an example of producing a plurality of issue detection ratings and combining them into one rating. In this example, sixteen individual ratings are generated and then, via the cumulative rating module 607, are combined into one issue detection rating 608. A first individual issue detection rating is generated from a combination of organization, guidelines, security functions, understanding, and disclosed data. A second individual issue detection rating is generated from a combination of organization, guidelines, security functions, implementation, and disclosed data. A third individual issue detection rating is generated from a combination of organization, guidelines, security functions, understanding, and discovered data. A fourth individual issue detection rating is generated from a combination of organization, guidelines, security functions, implementation, and discovered data. The remaining twelve individual issue detection ratings are generated from the combinations shown. -
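A compact way to picture FIG. 113, offered only as a sketch, is as an average over the sixteen combinations of system criteria, evaluation perspective, and evaluation viewpoint; the rate_fn callable is a hypothetical stand-in for the individual rating modules, and the plain average stands in for cumulative rating module 607, whose combining function the figure leaves open.

```python
from itertools import product
from typing import Callable


def cumulative_issue_detection_rating(rate_fn: Callable[[str, str, str, str], float]) -> float:
    """Sketch of FIG. 113: sixteen individual ratings combined into one issue detection rating 608."""
    criteria = ("guidelines", "system requirements", "system design", "system build")
    perspectives = ("understanding", "implementation")
    viewpoints = ("disclosed data", "discovered data")
    ratings = [rate_fn(c, "security functions", p, v)
               for c, p, v in product(criteria, perspectives, viewpoints)]
    return sum(ratings) / len(ratings)


# Usage with a stub rating function (every combination rated 50 on an assumed 0-100 scale):
print(cumulative_issue_detection_rating(lambda crit, mode, persp, view: 50.0))  # 50.0
```
-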
FIG. 114 is a diagram of another example of producing a plurality of issue detection ratings and combining them into one rating. In this example, two individual ratings are generated and then, via the cumulative rating module 607, are combined into one issue detection rating 608. A first individual issue detection rating is generated from a combination of organization, guidelines, security functions, understanding, and disclosed data. A second individual issue detection rating is generated from a combination of organization, guidelines, security functions, implementation, and disclosed data. This allows for a comparison between the understanding of the issue detection security functions of the organization from the guidelines of the disclosed data and the implementation of the issue detection security functions of the organization from the guidelines of the disclosed data. This comparison provides a metric for determining how well the guidelines were understood and how well they were used and/or applied. -
FIG. 115 is a diagram of another example of producing a plurality of issue detection ratings and combining them into one rating. In this example, two individual ratings are generated and then, via the cumulative rating module 607, are combined into one issue detection rating 608. A first individual issue detection rating is generated from a combination of organization, guidelines, security functions, understanding, and disclosed data. A second individual issue detection rating is generated from a combination of organization, guidelines, security functions, understanding, and discovered data. This allows for a comparison between the understanding of the issue detection security functions of the organization from the guidelines of the disclosed data and the understanding of the issue detection security functions of the organization from the guidelines of the discovered data. This comparison provides a metric for determining how well the guidelines were believed to be understood and how well they were actually understood. -
FIG. 116 is a diagram of another example of producing a plurality of issue detection ratings and combining them into one rating. In this example, two individual ratings are generated and then, via the cumulative rating module 607, are combined into one issue detection rating 608. A first individual issue detection rating is generated from a combination of organization, guidelines, security functions, understanding, and disclosed data. A second individual issue detection rating is generated from a combination of organization, system requirements, security functions, understanding, and disclosed data. This allows for a comparison between the understanding of the issue detection security functions of the organization from the guidelines of the disclosed data and the understanding of the issue detection security functions of the organization from the system requirements of the disclosed data. This comparison provides a metric for determining how well the guidelines were converted into the system requirements. -
FIG. 117 is a diagram of another example of producing a plurality of issue detection ratings and combining them into one rating. In this example, four individual ratings are generated and then, via the cumulative rating module 607, are combined into one issue detection rating 608. A first individual issue detection rating is generated from a combination of organization, guidelines, security functions, understanding, and disclosed data. A second individual issue detection rating is generated from a combination of organization, system requirements, security functions, understanding, and disclosed data. A third individual issue detection rating is generated from a combination of organization, system design, security functions, understanding, and disclosed data. A fourth individual issue detection rating is generated from a combination of organization, system build, security functions, understanding, and disclosed data. - This allows for a comparison between the understanding of the issue detection security functions of the organization from the guidelines of the disclosed data, the understanding of the issue detection security functions of the organization from the system requirements of the disclosed data, the understanding of the issue detection security functions of the organization from the system design of the disclosed data, and the understanding of the issue detection security functions of the organization from the system build of the disclosed data. This comparison provides a metric for determining how well the guidelines, system requirements, system design, and/or system build were understood with respect to each and how well they were used and/or applied.
-
FIG. 118 is a logic diagram of a further example of an analysis system determining an issue detection rating for a system, or portion thereof, and determining deficiencies. The method includes one or more of steps 820-823. At step 820, the analysis system determines a system criteria deficiency (e.g., guidelines, system requirements, system design, system build, and/or resulting system) of the system aspect based on the issue detection rating and the issue detection data. Examples have been discussed with reference to one or more preceding figures. - At
step 821, the analysis system determines a system mode deficiency (e.g., assets, system functions, and/or security functions) of the system aspect based on the issue detection rating and the issue detection data. At step 822, the analysis system determines an evaluation perspective deficiency (e.g., understanding, implementation, operation, and/or self-analysis) of the system aspect based on the issue detection rating and the issue detection data. At step 823, the analysis system determines an evaluation viewpoint deficiency (e.g., disclosed, discovered, and/or desired) of the system aspect based on the issue detection rating and the issue detection data. Examples have been discussed with reference to one or more preceding figures. -
FIG. 119 is a logic diagram of a further example of an analysis system determining an issue detection rating for a system, or portion thereof, and determining auto-corrections. The method begins at step 824 where the analysis system determines a deficiency of the system aspect based on the issue detection rating and/or the issue detection data as discussed with reference to FIG. 118. The method continues at step 825 where the analysis system determines whether the deficiency is auto-correctable. For example, is the deficiency regarding software and, if so, can it be auto-corrected? - If the deficiency is not auto-correctable, the method continues at
step 826 where the analysis system includes the identified deficiency in a report. If, however, the deficiency is auto-correctable, the method continues at step 827 where the analysis system auto-corrects the deficiency. The method continues at step 828 where the analysis system includes the identified deficiency and auto-correction in a report. Examples of auto-correction have been discussed with reference to one or more preceding Figures. -
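A minimal sketch of the decision flow of FIG. 119 follows; the is_auto_correctable, auto_correct, and report callables are assumed hooks into the analysis system rather than named elements of the figure.

```python
def handle_deficiency(deficiency, is_auto_correctable, auto_correct, report) -> bool:
    """Sketch of FIG. 119: report a deficiency and auto-correct it when possible."""
    if not is_auto_correctable(deficiency):   # step 825
        report(deficiency, corrected=False)   # step 826: report the deficiency only
        return False
    auto_correct(deficiency)                  # step 827: apply the correction
    report(deficiency, corrected=True)        # step 828: report the deficiency and the auto-correction
    return True
```
-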
FIG. 120 is a logic diagram of another example of an analysis system determining an issue detection rating for a system, or portion thereof. The method begins at step 830 where the analysis system selects a system, or portion thereof, to evaluate issue detection of the system, or portion thereof, with respect to assets, system functions, and/or security functions. For example, the analysis system selects one or more system elements, one or more system criteria, and/or one or more system modes for the system, or portion thereof, to be evaluated. As another example, the analysis system selects one or more evaluation perspectives, one or more evaluation viewpoints, and/or detect as the evaluation category. The analysis system may further select one or more detect sub-categories and/or one or more sub-sub-categories. - As another example of selecting the system or portion thereof, the analysis system selects the entire system (e.g., the organization/enterprise), selects a division of an organization operating the system, selects a department of a division, selects a group of a department, or selects a sub-group of a group. As another example of selecting the system or portion thereof, the analysis system selects one or more physical assets and/or one or more conceptual assets.
- The method continues at
step 831 where the analysis system obtains issue detection information regarding the system, or portion thereof. The issue detection information includes information representative of an organization's understanding of issue detection of the system, or portion thereof, with respect to the assets, the system functions, and/or the security functions. In an example, the analysis system obtains the issue detection information (e.g., disclosed data from the system) by receiving it from a system admin computing entity. In another example, the analysis system obtains the issue detection information by gathering it from one or more computing entities of the system. - The method continues at
step 832 where the analysis system engages with the system, or portion thereof, to produce system issue detection data (e.g., discovered data) regarding the system, or portion thereof, with respect to the assets, the system functions, and/or the security functions. Engaging the system, or portion thereof, will be discussed in greater detail with reference to FIG. 121. - The method continues at
step 833 where the analysis system calculates an issue detection rating regarding the issue detection of the system, or portion thereof, based on the issue detection information, the system issue detection data, and issue detection processes, issue detection policies, issue detection documentation, and/or issue detection automation. The issue detection rating may be indicative of a variety of factors of the system, or portion thereof. For example, the issue detection rating indicates how well the issue detection information reflects an understanding of issue detection with respect to security functions of the system, or portion thereof. As another example, the issue detection rating indicates how well the issue detection information reflects an understanding of issue detection with respect to system functions of the system, or portion thereof. - As another example, the issue detection rating indicates how well the issue detection information reflects an understanding of issue detection with respect to assets of the system, or portion thereof. As another example, the issue detection rating indicates how well the issue detection information reflects intended implementation of issue detection with respect to assets of the system, or portion thereof. As another example, the issue detection rating indicates how well the issue detection information reflects intended operation of issue detection with respect to assets of the system, or portion thereof.
- As another example, the issue detection rating indicates how well the issue detection information reflects intended implementation of issue detection with respect to system functions of the system, or portion thereof. As another example, the issue detection rating indicates how well the issue detection information reflects intended operation of issue detection with respect to system functions of the system, or portion thereof.
- As another example, the issue detection rating indicates how well the issue detection information reflects intended implementation of issue detection with respect to security functions of the system, or portion thereof. As another example, the issue detection rating indicates how well the issue detection information reflects intended operation of issue detection with respect to security functions of the system, or portion thereof.
- The method continues at
step 834 where the analysis system gathers desired system issue detection data from one or more system proficiency resources. The method continues at step 835 where the analysis system calculates a second issue detection rating regarding a desired level of issue detection of the system, or portion thereof, based on the issue detection information, the issue detection data, the desired issue detection data, and the issue detection processes, the issue detection policies, the issue detection documentation, and/or the issue detection automation. The second issue detection rating is regarding a comparison of desired data with the disclosed data and/or discovered data. -
FIG. 121 is a logic diagram of a further example of an analysis system determining an issue detection rating for a system, or portion thereof; in particular engaging the system, or portion thereof, to obtain data. The method begins at step 836 where the analysis system interprets the issue detection information to identify components (e.g., computing device, HW, SW, server, etc.) of the system, or portion thereof. The method continues at step 837 where the analysis system queries a component regarding implementation, function, and/or operation of the component. - The method continues at
step 838 where the analysis system evaluates a response from the component for concurrence with a portion of the issue detection information relevant to the component. The method continues at step 839 where the analysis system determines whether the response concurs with a portion of the issue detection information. If the response concurs, the method continues at step 840 where the analysis system adds a data element (e.g., a record entry, a note, set a flag, etc.) to the system issue detection data regarding the substantial concurrence of the response from the component with the portion of the issue detection information relevant to the component. - If the response does not concur, the method continues at
step 841 where the analysis system adds a data element (e.g., a record entry in a table, a note, set a flag, etc.) to the system issue detection data regarding the response from the component not substantially concurring with the portion of the issue detection information relevant to the component. The non-concurrence is indicative of a deviation in the implementation, function, and/or operation of the component as identified in the response from the disclosed implementation, function, and/or operation of the component as contained in the issue detection information. For example, the deviation is different HW, different SW, different network access, different data access, different data flow, coupling to different other components, and/or other differences. - The method continues in
FIG. 122 at step 842 where the analysis system queries the component and/or another component regarding a cause for the deviation. The method continues at step 843 where the analysis system updates a data element to include an indication of the one or more causes for a deviation, wherein a cause for the deviation is based on responses from the component and/or the other component. - The method continues at
step 844 where the analysis system determines whether the deviation is a communication deviation. If yes, the method continues at step 845 where the analysis system evaluates a response from the device to ascertain an error of the issue detection information regarding the device and/or the communication between the device and the component. The method continues at step 846 where the analysis system determines one or more causes of the error of the communication deviation. - If the deviation is not a communication deviation, the method continues at
step 847 where the analysis system determines whether the deviation is a system function deviation. If yes, the method continues at step 848 where the analysis system evaluates a response from the device to ascertain an error of the issue detection information regarding the device and/or the system function of the device. The method continues at step 849 where the analysis system determines one or more causes of the error of the system function deviation. - If the deviation is not a system function deviation, the method continues at
step 850 where the analysis system determines whether the deviation is a security function deviation. If yes, the method continues at step 851 where the analysis system evaluates a response from the device to ascertain an error of the issue detection information regarding the device and/or the security function of the device. The method continues at step 852 where the analysis system determines one or more causes of the error of the security function deviation. - If the deviation is not a security function deviation, the method continues at
step 853 where the analysis system evaluates a device response from the device to ascertain an error of the issue detection information regarding the device and/or of the device. The method continues at step 854 where the analysis system determines one or more causes of the error of the information and/or of the device. -
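The concurrence test of FIG. 121 (steps 837-841), which feeds the cause-finding queries of FIG. 122, could be sketched as follows; query, disclosed_info, and issue_detection_data are assumed stand-ins for the component query mechanism, the disclosed description of the component, and the system issue detection data.

```python
def check_component(component_id, query, disclosed_info, issue_detection_data) -> bool:
    """Sketch of steps 837-841 of FIG. 121: compare a component's response with disclosed data."""
    response = query(component_id)                      # step 837: query the component
    disclosed = disclosed_info.get(component_id, {})
    # The component concurs when every disclosed attribute matches its response (steps 838-839).
    deviations = {k: {"disclosed": v, "reported": response.get(k)}
                  for k, v in disclosed.items() if response.get(k) != v}
    issue_detection_data.append({
        "component": component_id,
        "concurs": not deviations,        # step 840 (concurrence) vs. step 841 (non-concurrence)
        # Deviations (different HW, SW, network access, data flow, etc.) feed the
        # communication / system function / security function cause analysis of FIG. 122.
        "deviations": deviations or None,
    })
    return not deviations
```
-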
FIG. 123 is a logic diagram of a further example of an analysis system determining an issue detection rating for a system, or portion thereof, and in particular for calculating the issue detection rating. The method begins at steps 855 and 856. At step 855, the analysis system processes the issue detection information into policy related issue detection information, process related issue detection information, documentation related issue detection information, and/or automation related issue detection information. At step 856, the analysis system processes the system issue detection data into policy related system issue detection data, process related system issue detection data, documentation related system issue detection data, and/or automation related system issue detection data. - The method continues at steps 857-860. At
step 857, the analysis system evaluates the process related issue detection information with respect to the process related system issue detection data to produce a process issue detection rating. At step 858, the analysis system evaluates the policy related issue detection information with respect to the policy related system issue detection data to produce a policy issue detection rating. At step 859, the analysis system evaluates the documentation related issue detection information with respect to the documentation related system issue detection data to produce a documentation issue detection rating. At step 860, the analysis system evaluates the automation related issue detection information with respect to the automation related system issue detection data to produce an automation issue detection rating. - The method continues at
step 861 where the analysis system generates an issue detection rating based on the automation issue detection rating, the documentation issue detection rating, the process issue detection rating, and the policy issue detection rating. For example, the analysis system performs a function on the automation issue detection rating, the documentation issue detection rating, the process issue detection rating, and the policy issue detection rating to produce the issue detection rating. The function is a weighted average, standard deviation, statistical analysis, trending, and/or other mathematical function. -
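Because the text names a weighted average as one candidate function, the combining of step 861 could be sketched as below; the equal default weights are an assumption, and any of the other named functions (standard deviation, trending, etc.) could be substituted.

```python
def issue_detection_rating(process_r: float, policy_r: float,
                           documentation_r: float, automation_r: float,
                           weights: dict[str, float] | None = None) -> float:
    """Sketch of step 861: combine the four per-metric ratings with a weighted average."""
    weights = weights or {"process": 1.0, "policy": 1.0, "documentation": 1.0, "automation": 1.0}
    ratings = {"process": process_r, "policy": policy_r,
               "documentation": documentation_r, "automation": automation_r}
    return sum(ratings[k] * weights[k] for k in ratings) / sum(weights.values())


# Example: the process rating weighted twice as heavily as the other three ratings.
print(issue_detection_rating(80, 60, 70, 90,
                             {"process": 2, "policy": 1, "documentation": 1, "automation": 1}))  # 76.0
```
-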
FIG. 124 is a logic diagram of a further example of an analysis system determining an issue detection rating for a system, or portion thereof, and in particular engaging the system to obtain issue detection data. The method begins at step 862 where the analysis system determines data gathering criteria and/or parameters. The determining of data gathering parameters has been discussed with reference to one or more preceding Figures and one or more subsequent Figures. - The method continues at
step 863 where the analysis system identifies a user device and queries it for data in accordance with the data gathering parameters. The method continues at step 864 where the analysis system obtains a data response from the user device. The data response includes data regarding the user device relevant to the data gathering parameters. An example of user device data was discussed with reference to one or more of FIGS. 75-79. - The method continues at step 865 where the analysis system catalogs the user device (e.g., records it as being part of the system, or portion thereof, if not already cataloged). The method continues at
step 866 where the analysis system identifies vendor information regarding the user device. The method continues at step 867 where the analysis system tags the data regarding the user device with the vendor information. This enables data to be sorted, searched, etc. based on vendor information. - The method continues at
step 868 where the analysis system determines whether data has been received from all relevant user devices. If not, the method repeats at step 863. If yes, the method continues at step 869 where the analysis system identifies a storage device and queries it for data in accordance with the data gathering parameters. The method continues at step 870 where the analysis system obtains a data response from the storage device. The data response includes data regarding the storage device relevant to the data gathering parameters. An example of storage device data was discussed with reference to one or more of FIGS. 75-79. - The method continues at step 871 where the analysis system catalogs the storage device (e.g., records it as being part of the system, or portion thereof, if not already cataloged) when the storage device responds. The method continues at
step 872 where the analysis system identifies vendor information regarding the storage device. The method continues at step 873 where the analysis system tags the data regarding the storage device with the vendor information. The method continues at step 874 where the analysis system determines whether data has been received from all relevant storage devices. If not, the method repeats at step 869. - If yes, the method continues at
step 875 where the analysis system identifies a server device and queries it for data in accordance with the data gathering parameters. The method continues at step 876 where the analysis system obtains a data response from the server device. The data response includes data regarding the server device relevant to the data gathering parameters. An example of server device data was discussed with reference to one or more of FIGS. 75-79. The method continues at step 877 where the analysis system catalogs the server device (e.g., records it as being part of the system, or portion thereof, if not already cataloged) when the server device responds. - The method continues at
step 878 where the analysis system identifies vendor information regarding the server device. The method continues at step 879 where the analysis system tags the data regarding the server device with the vendor information. The method continues at step 880 of FIG. 125 where the analysis system determines whether data has been received from all relevant server devices. If not, the method repeats at step 875. - If yes, the method continues at
step 881 where the analysis system identifies a security device and queries it for data in accordance with the data gathering parameters. The method continues at step 882 where the analysis system obtains a data response from the security device. The data response includes data regarding the security device relevant to the data gathering parameters. An example of security device data was discussed with reference to one or more of FIGS. 75-79. - The method continues at
step 883 where the analysis system catalogs the security device (e.g., records it as being part of the system, or portion thereof, if not already cataloged) when the security device responds. The method continues at step 884 where the analysis system identifies vendor information regarding the security device. The method continues at step 885 where the analysis system tags the data regarding the security device with the vendor information. The method continues at step 886 where the analysis system determines whether data has been received from all relevant security devices. If not, the method repeats at step 881. - If yes, the method continues at
step 887 where the analysis system identifies a security tool and queries it for data in accordance with the data gathering parameters. The method continues at step 888 where the analysis system obtains a data response from the security tool. The data response includes data regarding the security tool relevant to the data gathering parameters. An example of security tool data was discussed with reference to one or more of FIGS. 75-79. - The method continues at
step 889 where the analysis system catalogs the security tool (e.g., records it as being part of the system, or portion thereof, if not already cataloged) when the security tool responds via hardware on which the tool operates. - The method continues at
step 890 where the analysis system identifies vendor information regarding the security tool. The method continues at step 891 where the analysis system tags the data regarding the security tool with the vendor information. The method continues at step 892 where the analysis system determines whether data has been received from all relevant security tools. If not, the method repeats at step 887. - If yes, the method continues at
step 893 where the analysis system identifies a network device and queries it for data in accordance with the data gathering parameters. The method continues at step 894 where the analysis system obtains a data response from the network device. The data response includes data regarding the network device relevant to the data gathering parameters. An example of network device data was discussed with reference to one or more of FIGS. 75-79. The method continues at step 895 where the analysis system catalogs the network device (e.g., records it as being part of the system, or portion thereof, if not already cataloged) when the network device responds. - The method continues at
step 896 where the analysis system identifies vendor information regarding the network device. The method continues at step 897 where the analysis system tags the data regarding the network device with the vendor information. The method continues at step 898 of FIG. 126 where the analysis system determines whether data has been received from all relevant network devices. If not, the method repeats at step 893. - If yes, the method continues at
step 899 where the analysis system identifies another device (e.g., any other device that is part of the system, interfaces with the system, uses the system, and/or supports the system) and queries it for data in accordance with the data gathering parameters. The method continues at step 900 where the analysis system obtains a data response from the other device. The data response includes data regarding the other device relevant to the data gathering parameters. An example of other device data was discussed with reference to one or more of FIGS. 75-79. - The method continues at
step 901 where the analysis system catalogs the other device (e.g., records it as being part of the system, or portion thereof, if not already cataloged) when the other device responds. The method continues at step 902 where the analysis system identifies vendor information regarding the other device. The method continues at step 903 where the analysis system tags the data regarding the other device with the vendor information. The method continues at step 904 where the analysis system determines whether data has been received from all relevant other devices. If not, the method repeats at step 899. - If yes, the method continues at
step 905 where the analysis system identifies another tool (e.g., any other tool that is part of the system, interprets the system, monitors the system, and/or supports the system) and queries it for data in accordance with the data gathering parameters. The method continues at step 906 where the analysis system obtains a data response from the other tool. The data response includes data regarding the other tool relevant to the data gathering parameters. An example of other tool data was discussed with reference to one or more of FIGS. 75-79. The method continues at step 907 where the analysis system catalogs the other tool (e.g., records it as being part of the system, or portion thereof, if not already cataloged) when the other tool responds via hardware on which the tool operates. - The method continues at
step 908 where the analysis system identifies vendor information regarding the other tool. The method continues at step 909 where the analysis system tags the data regarding the other tool with the vendor information. The method continues at step 910 where the analysis system determines whether data has been received from all relevant other tools. If not, the method repeats at step 905. If yes, the method continues at step 911 where the analysis system ends the process or repeats it for another part of the system. -
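The per-device-type loop of FIGS. 124-126 repeats the same query, catalog, and vendor-tag pattern for each kind of device or tool, which the following sketch captures; device_types, discover, query, vendor_lookup, and catalog are hypothetical hooks into the analysis system, not elements defined by the figures.

```python
def gather_issue_detection_data(device_types, discover, query, vendor_lookup, catalog):
    """Sketch of FIGS. 124-126: gather, catalog, and vendor-tag data per device type.

    `device_types` is an ordered sequence such as ("user device", "storage device",
    "server device", "security device", "security tool", "network device",
    "other device", "other tool"); `discover(kind)` yields devices or tools of that
    kind, `query(device)` returns a data response per the data gathering parameters,
    and `vendor_lookup(device)` returns vendor information used to tag the data.
    """
    gathered = []
    for kind in device_types:
        for device in discover(kind):
            response = query(device)                        # data response per the gathering parameters
            catalog.setdefault(kind, set()).add(device)     # record as part of the system if not cataloged
            gathered.append({"kind": kind, "device": device,
                             "vendor": vendor_lookup(device),  # enables sorting/searching by vendor
                             "data": response})
    return gathered
```
-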
FIG. 127 is a logic diagram of a further example of an analysis system determining an issue detection rating for a system, or portion thereof and, in particular, identifying a device or a tool relevant to issue detection. The method begins at step 920 where the analysis system determines whether a device (e.g., hardware and/or software) or tool is already included in the issue detection information (e.g., the disclosed data for a particular analysis of the system, or portion thereof). If yes, the method continues at step 921 where the analysis module determines whether it's done with identifying devices or tools relevant to issue detection. If yes, the method is ended. If not, the method repeats at step 920. - If the device or tool is not in the issue detection information, the method continues at
step 922 where the analysis system engages one or more detection (or discovery) tools to detect a device and/or a tool relevant to issue detection. Examples of detection tools were discussed with reference to one or more preceding figures. The method continues at step 923 where the analysis system determines whether the detection tool(s) has identified a device (e.g., hardware and/or software). If not, the method continues at step 924 where the analysis system determines whether the detection tool(s) has identified a tool. If not, the method repeats at step 921. - If a tool is identified, the method continues at
step 925 where the analysis system obtains a data response from the tool, via hardware on which the tool operates, in regard to a data gathering request. The data response includes data regarding the tool. Examples of the data regarding the tool were discussed with reference to one or more of FIGS. 75-79. - The method continues with
step 926 where the analysis system determines whether the tool is cataloged (e.g., is part of the system, but is not included in the issue detection information for this particular evaluation). If yes, the method continues at step 927 where the analysis system adds the tool to the issue detection information and the method continues at step 921. - If the tool is not cataloged, the method continues at
step 928 where the analysis system verifies the tool as being part of the system and then catalogs it as part of the system. The method continues at step 929 where the analysis system identifies vendor information regarding the tool. The method continues at step 930 where the analysis system tags the data regarding the tool with the vendor information. The method repeats at step 921. - If, at
step 923, a device is identified, the method continues at step 931 where the analysis system obtains a data response from the device in regard to a data gathering request. The data response includes data regarding the device. Examples of the data regarding the device were discussed with reference to one or more of FIGS. 75-79. - The method continues with
step 933 where the analysis system determines whether the device (e.g., hardware and/or software) is cataloged (e.g., is part of the system, but is not included in the issue detection information for this particular evaluation). If yes, the method continues at step 932 where the analysis system adds the device to the issue detection information and the method continues at step 921. - If the device is not cataloged, the method continues at
step 934 where the analysis system verifies the device as being part of the system and then catalogs it as part of the system. The method continues at step 935 where the analysis system identifies vendor information regarding the device. The method continues at step 936 where the analysis system tags the data regarding the device with the vendor information. The method repeats at step 921. -
FIG. 128 is a logic diagram of a further example of an analysis system determining an issue detection rating for a system, or portion thereof, and, in particular, identifying a device or a tool relevant to issue detection. The method begins at step 940 where the analysis system determines whether a device (e.g., hardware and/or software) or tool is already included in the issue detection information (e.g., the disclosed data for a particular analysis of the system, or portion thereof). If yes, the method continues at step 941 where the analysis module determines whether it is done identifying devices or tools. If yes, the method is ended. If not, the method repeats at step 940. - If the device or tool is not in the issue detection information, the method continues at
step 942 where the analysis system interprets data from an identified device and/or tool (e.g., one already in the issue detection information) with regard to another device or tool. For example, the analysis system looks for data regarding an identified device exchanging data with the device being reviewed. As another example, the analysis system looks for data regarding a tool being used on the device under review to repair a software issue. - The method continues at
step 943 where the analysis system determines whether the data has identified such a device (e.g., hardware and/or software). If not, the method continues at step 944 where the analysis system determines whether the data has identified such a tool. If not, the method repeats at step 941. - If a tool is identified, the method continues at
step 945 where the analysis system obtains a data response from the tool, via hardware on which the tool operates, in regard to a data gathering request. The data response includes data regarding the tool. Examples of the data regarding the tool were discussed with reference to one or more of FIGS. 75-79. - The method continues at
step 947 where the analysis system determines whether the tool is cataloged (e.g., is part of the system, but is not included in the issue detection information for this particular evaluation). If yes, the method continues at step 946 where the analysis system adds the tool to the issue detection information and the method continues at step 941. - If the tool is not cataloged, the method continues at
step 948 where the analysis system verifies the tool as being part of the system and then catalogs it as part of the system. The method continues at step 949 where the analysis system identifies vendor information regarding the tool. The method continues at step 950 where the analysis system tags the data regarding the tool with the vendor information. The method repeats at step 941. - If, at
step 943, a device is identified, the method continues at step 951 where the analysis system obtains a data response from the device in regard to a data gathering request. The data response includes data regarding the device. Examples of the data regarding the device were discussed with reference to one or more of FIGS. 75-79. - The method continues with
step 953 where the analysis system determines whether the device (e.g., hardware and/or software) is cataloged (e.g., is part of the system, but is not included in the issue detection information for this particular evaluation). If yes, the method continues at step 952 where the analysis system adds the device to the issue detection information and the method continues at step 941. - If the device is not cataloged, the method continues at
step 954 where the analysis system verifies the device as being part of the system and then catalogs it as part of the system. The method continues at step 955 where the analysis system identifies vendor information regarding the device. The method continues at step 956 where the analysis system tags the data regarding the device with the vendor information. The method repeats at step 941. -
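As a further non-limiting illustration, the interpretation step of FIG. 128 (inferring a not-yet-identified device or tool from data already gathered from identified devices and/or tools) may be sketched as follows. The record fields ("peer", "repair_tool") and the function name are hypothetical assumptions introduced only for this sketch.

```python
# Hypothetical sketch of the FIG. 128 inference step (cf. steps 942-944); names are illustrative only.

def infer_candidates(issue_detection_data, known_names):
    """Return names of devices/tools referenced by identified items but not yet identified themselves."""
    candidates = set()
    for record in issue_detection_data:            # data responses already in the issue detection information
        peer = record.get("peer")                  # e.g., a device the identified device exchanged data with
        tool = record.get("repair_tool")           # e.g., a tool used on the device under review to repair software
        for name in (peer, tool):
            if name and name not in known_names:   # a further device or tool is identified
                candidates.add(name)
    return candidates

# Example with fabricated log-style records (illustrative only); prints the two names not yet identified.
records = [
    {"device": "fw-01", "peer": "db-07"},
    {"device": "srv-03", "repair_tool": "patch-helper"},
]
print(infer_candidates(records, known_names={"fw-01", "srv-03"}))
```

Candidates returned by such a routine would then be processed substantially as in the data gathering, cataloging, and vendor-tagging steps described above.
-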
FIG. 129 is a logic diagram of a further example of an analysis system determining an issue detection rating for a system, or portion thereof, and, in particular, generating data gathering criteria (or parameters). The method begins at step 960 where the analysis system determines whether the current analysis is for the entire system or a portion thereof. If the analysis is for the entire system, the method continues at step 962 where the analysis system prepares to analyze the entire system. If the analysis is for a portion of the system, the method continues at step 961 where the analysis system determines the particular section (e.g., identifies one or more system elements). - The method continues at
step 963 where the analysis system determines whether the current analysis has identified evaluation criteria (e.g., guidelines, system requirements, system design, system build, and/or resulting system). If yes, the method continues at step 964 where the analysis system determines the specific evaluation criteria. If not, the method continues at step 965 where the analysis system determines a set of default evaluation criteria (e.g., one or more of the evaluation criteria). - The method continues at
step 966 where the analysis system determines whether the current analysis has identified an evaluation mode (e.g., assets, system functions, and/or security functions). If yes, the method continues at step 966 where the analysis system determines the specific evaluation mode(s). If not, the method continues at step 967 where the analysis system determines a set of default evaluation modes (e.g., one or more of the evaluation modes). - The method continues at
step 968 where the analysis system determines whether the current analysis has identified an evaluation perspective (e.g., understanding, implementation, and/or operation). If yes, the method continues at step 969 where the analysis system determines the specific evaluation perspective(s). If not, the method continues at step 970 where the analysis system determines a set of default evaluation perspectives (e.g., one or more of the evaluation perspectives). - The method continues at
step 971 where the analysis system determines whether the current analysis has identified an evaluation viewpoint (e.g., disclosed, discovered, desired, and/or self-analysis). If yes, the method continues at step 972 where the analysis system determines the specific evaluation viewpoint(s). If not, the method continues at step 973 where the analysis system determines a set of default evaluation viewpoints (e.g., one or more of the evaluation viewpoints). - The method continues at
step 974 where the analysis system determines whether the current analysis has identified an evaluation category and/or sub-categories (e.g., categories include identify, protect, detect, respond, and/or recover). If yes, the method continues at step 975 where the analysis system determines one or more specific evaluation categories and/or sub-categories. If not, the method continues at step 977 where the analysis system determines a set of default evaluation categories and/or sub-categories (e.g., one or more of the evaluation categories and/or sub-categories). The method continues at step 976 where the analysis system determines the data gathering criteria (or parameters) based on the determination made in the previous steps. -
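As a non-limiting sketch only, the identified-versus-default selection of FIG. 129 may be expressed as follows. The dictionary keys and the DEFAULTS table are assumptions that merely mirror the examples given in the text and are not a definition of the figure.

```python
# Hypothetical sketch of FIG. 129: assemble data gathering criteria (or parameters) from whatever the
# current analysis has identified, falling back to defaults otherwise. Names are illustrative only.

DEFAULTS = {
    "evaluation_criteria":     ["guidelines", "system requirements", "system design",
                                "system build", "resulting system"],
    "evaluation_modes":        ["assets", "system functions", "security functions"],
    "evaluation_perspectives": ["understanding", "implementation", "operation"],
    "evaluation_viewpoints":   ["disclosed", "discovered", "desired", "self-analysis"],
    "evaluation_categories":   ["identify", "protect", "detect", "respond", "recover"],
}

def data_gathering_criteria(analysis: dict, scope="entire system"):
    """Use identified selections where present, default sets where not (cf. steps 960-977)."""
    criteria = {"scope": analysis.get("scope", scope)}   # entire system or a particular section
    for key, default in DEFAULTS.items():
        criteria[key] = analysis.get(key, default)       # identified selection vs. default set
    return criteria                                       # combined data gathering criteria (cf. step 976)

# Example: only a portion of the system and one evaluation perspective were identified.
print(data_gathering_criteria({"scope": "accounting department",
                               "evaluation_perspectives": ["implementation"]}))
```
-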
FIG. 130 is a logic diagram of another example of an analysis system determining an issue detection rating for a system, or portion thereof. The method begins at step 1000 where the analysis system selects a system, or portion thereof, to evaluate anomaly and event awareness of the system, or portion thereof. For example, the analysis system selects one or more system elements, one or more system criteria, and/or one or more system modes for the system, or portion thereof, to be evaluated. As another example, the analysis system selects one or more evaluation perspectives and/or one or more evaluation viewpoints. The analysis system may further select one or more sub-sub categories of the anomaly and event awareness sub-category of the issue detection (e.g., detect) category. - As another example of selecting the system or portion thereof, the analysis system selects the entire system, selects a division of an organization operating the system, selects a department of a division, selects a group of a department, or selects a sub-group of a group. As another example of selecting the system or portion thereof, the analysis system selects one or more physical assets and/or one or more conceptual assets.
- The method continues at
step 1001 where the analysis system obtains anomaly and event awareness information regarding the system, or portion thereof. The anomaly and event awareness information includes information representative of an organization's understanding of the system, or portion thereof, with respect to establishment and management of network operations and expected data flows, detected event analysis, event data aggregation and correlation, event impact determination, and incident alert threshold establishment. In an example, the analysis system obtains the anomaly and event awareness information (e.g., disclosed data from the system) by receiving it from a system admin computing entity. In another example, the analysis system obtains the anomaly and event awareness information by gathering it from one or more computing entities of the system. - The method continues at
step 1002 where the analysis system engages with the system, or portion thereof, to produce system anomaly and event awareness data (e.g., discovered data) regarding the system, or portion thereof, with respect to the establishment and management of network operations and expected data flows, the detected event analysis, the event data aggregation and correlation, the event impact determination, and the incident alert threshold establishment. Engaging the system, or portion thereof, was discussed with reference to FIG. 121. - The method continues at
step 1003 where the analysis system calculates an anomaly and event awareness rating regarding the anomaly and event awareness of the system, or portion thereof, based on the anomaly and event awareness information, the system anomaly and event awareness data, and anomaly and event awareness processes, anomaly and event awareness policies, anomaly and event awareness documentation, and/or anomaly and event awareness automation. The anomaly and event awareness rating may be indicative of a variety of factors of the system, or portion thereof. For example, the anomaly and event awareness rating indicates how well the anomaly and event awareness information reflects an understanding of the anomaly and event awareness with respect to assets of the system, or portion thereof. As another example, the anomaly and event awareness rating indicates how well the anomaly and event awareness information reflects an understanding of the anomaly and event awareness with respect to system functions of the system, or portion thereof. - As another example, the anomaly and event awareness rating indicates how well the anomaly and event awareness information reflects an understanding of the anomaly and event awareness with respect to the security functions of the system, or portion thereof. As another example, the anomaly and event awareness rating indicates how well the anomaly and event awareness information reflects intended implementation of the anomaly and event awareness with respect to the assets of the system, or portion thereof. As another example, the anomaly and event awareness rating indicates how well the anomaly and event awareness information reflects intended operation of the anomaly and event awareness with respect to the assets of the system, or portion thereof.
- As another example, the anomaly and event awareness rating indicates how well the anomaly and event awareness information reflects intended implementation of the anomaly and event awareness with respect to the system functions of the system, or portion thereof. As another example, the anomaly and event awareness rating indicates how well the anomaly and event awareness information reflects intended operation of the anomaly and event awareness with respect to the system functions of the system, or portion thereof.
- As another example, the anomaly and event awareness rating indicates how well the anomaly and event awareness information reflects intended implementation of the anomaly and event awareness with respect to the security functions of the system, or portion thereof. As another example, the anomaly and event awareness rating indicates how well the anomaly and event awareness information reflects intended operation of the anomaly and event awareness with respect to the security functions of the system, or portion thereof.
- The method continues at
step 1004 where the analysis system gathers desired system anomaly and event awareness data from one or more system proficiency resources. The method continues at step 1005 where the analysis system calculates a second anomaly and event awareness rating regarding a desired level of anomaly and event awareness of the system, or portion thereof, based on the anomaly and event awareness information, the system anomaly and event awareness data, the desired anomaly and event awareness data, and the anomaly and event awareness processes, the anomaly and event awareness policies, the anomaly and event awareness documentation, and/or the anomaly and event awareness automation. The second anomaly and event awareness rating is regarding a comparison of desired data with the disclosed data and/or discovered data. -
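The disclosure does not prescribe a particular formula for the ratings of FIG. 130. Purely as an illustrative assumption, a simple coverage-style scoring over the listed anomaly and event awareness topics might look as follows, with the second rating expressed relative to the desired data; the topic list mirrors the text, and the scoring itself is hypothetical.

```python
# Illustrative sketch only of the two ratings of FIG. 130; the scoring scheme is an assumption.

TOPICS = ["network operations and expected data flows", "detected event analysis",
          "event data aggregation and correlation", "event impact determination",
          "incident alert threshold establishment"]

def coverage(data: dict, topics) -> float:
    """Fraction of topics for which some supporting evidence is present in the given data set."""
    return sum(1 for t in topics if data.get(t)) / len(topics)

def anomaly_event_awareness_ratings(disclosed, discovered, desired):
    # First rating: blend of disclosed (cf. step 1001) and discovered (cf. step 1002) coverage.
    first = (coverage(disclosed, TOPICS) + coverage(discovered, TOPICS)) / 2
    # Second rating: comparison against the desired level (cf. steps 1004-1005), capped at 1.0.
    second = min(1.0, first / max(coverage(desired, TOPICS), 1e-9))
    return round(first, 2), round(second, 2)

# Example with fabricated evidence flags (illustrative only).
disclosed  = {t: True for t in TOPICS[:4]}
discovered = {t: True for t in TOPICS[:3]}
desired    = {t: True for t in TOPICS}
print(anomaly_event_awareness_ratings(disclosed, discovered, desired))
```
-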
FIG. 131 is a logic diagram of an example of an analysis system determining an anomaly and event awareness evaluation rating for a system, or portion thereof. The method begins at step 1006 where the analysis system determines a system aspect (see FIG. 69) of a system for an anomaly and event awareness evaluation. An anomaly and event awareness evaluation includes evaluating the system's establishment and management of network operations and expected data flows, detected event analysis, event data aggregation and correlation, event impact determination, and incident alert threshold establishment. - The method continues at
step 1007 where the analysis system determines at least one evaluation perspective for use in performing the anomaly and event awareness evaluation on the system aspect. An evaluation perspective is an understanding perspective, an implementation perspective, an operation perspective, or a self-analysis perspective. An understanding perspective is with regard to how well the assets, system functions, and/or security functions with respect to anomaly and event awareness are understood. An implementation perspective is with regard to how well the assets, system functions, and/or security functions with respect to anomaly and event awareness are implemented. An operation perspective is with regard to how well the assets, system functions, and/or security functions with respect to anomaly and event awareness operate. A self-analysis (or self-evaluation) perspective is with regard to how well the system self-evaluates the understanding, implementation, and/or operation of assets, system functions, and/or security functions with respect to anomaly and event awareness. - The method continues at
step 1008 where the analysis system determines at least one evaluation viewpoint for use in performing the anomaly and event awareness evaluation on the system aspect. An evaluation viewpoint is a disclosed viewpoint, a discovered viewpoint, or a desired viewpoint. A disclosed viewpoint is with regard to analyzing the system aspect based on the disclosed data. A discovered viewpoint is with regard to analyzing the system aspect based on the discovered data. A desired viewpoint is with regard to analyzing the system aspect based on the desired data. - The method continues at
step 1009 where the analysis system obtains anomaly and event awareness data regarding the system aspect in accordance with the at least one evaluation perspective and the at least one evaluation viewpoint. Anomaly and event awareness data is data obtained regarding the system aspect. - The method continues at
step 1010 where the analysis system calculates an anomaly and event awareness rating as a measure of system anomaly and event awareness maturity for the system aspect based on the anomaly and event awareness data, the at least one evaluation perspective, the at least one evaluation viewpoint, and at least one evaluation rating metric. An evaluation rating metric is a process rating metric, a policy rating metric, a procedure rating metric, a certification rating, a documentation rating metric, or an automation rating metric. -
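As another non-limiting sketch, the rating calculation of FIG. 131 may be viewed as an aggregation over the selected evaluation perspectives, evaluation viewpoints, and evaluation rating metrics. The averaging used below is an assumption made only to illustrate the inputs named in steps 1006-1010; any suitable aggregation could be used in its place.

```python
# Hypothetical sketch of the FIG. 131 maturity rating; the aggregation is an assumption, not the claimed method.
from statistics import mean

RATING_METRICS = ["process", "policy", "procedure", "certification", "documentation", "automation"]

def maturity_rating(scores, perspectives, viewpoints, metrics=RATING_METRICS):
    """scores[(perspective, viewpoint, metric)] -> value in [0, 1] derived from the anomaly and
    event awareness data; combinations with no score are counted as 0."""
    cells = [scores.get((p, v, m), 0.0)
             for p in perspectives for v in viewpoints for m in metrics]
    return round(mean(cells), 2) if cells else 0.0

# Example: one perspective, one viewpoint, two scored metrics (values are fabricated).
scores = {("implementation", "discovered", "process"): 0.9,
          ("implementation", "discovered", "automation"): 0.5}
print(maturity_rating(scores, ["implementation"], ["discovered"]))   # 0.23 averaged over six metrics
```
-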
FIG. 132 is a logic diagram of another example of an analysis system determining an issue detection rating for a system, or portion thereof. The method begins at step 1020 where the analysis system selects a system, or portion thereof, to evaluate continuous security monitoring of the system, or portion thereof. For example, the analysis system selects one or more system elements, one or more system criteria, and/or one or more system modes for the system, or portion thereof, to be evaluated. As another example, the analysis system selects one or more evaluation perspectives and/or one or more evaluation viewpoints. The analysis system may further select one or more sub-sub categories of the continuous security monitoring sub-category of the issue detection (e.g., detect) category. - As another example of selecting the system or portion thereof, the analysis system selects the entire system, selects a division of an organization operating the system, selects a department of a division, selects a group of a department, or selects a sub-group of a group. As another example of selecting the system or portion thereof, the analysis system selects one or more physical assets and/or one or more conceptual assets.
- The method continues at
step 1021 where the analysis system obtains continuous security monitoring information regarding the system, or portion thereof. The continuous security monitoring information includes information representative of an organization's understanding of the system, or portion thereof, with respect to network monitoring, physical environment monitoring, personnel activity monitoring, malicious code detection, unauthorized mobile code detection, external service provider activity monitoring, unauthorized personnel monitoring, unauthorized connection monitoring, unauthorized device monitoring, unauthorized software monitoring, and vulnerability scanning. - In an example, the analysis system obtains the continuous security monitoring information (e.g., disclosed data from the system) by receiving it from a system admin computing entity. In another example, the analysis system obtains the continuous security monitoring information by gathering it from one or more computing entities of the system.
- The method continues at
step 1022 where the analysis system engages with the system, or portion thereof, to produce system continuous security monitoring data (e.g., discovered data) regarding the system, or portion thereof, with respect to the network monitoring, the physical environment monitoring, the personnel activity monitoring, the malicious code detection, the unauthorized mobile code detection, the external service provider activity monitoring, the unauthorized personnel monitoring, the unauthorized connection monitoring, the unauthorized device monitoring, the unauthorized software monitoring, and the vulnerability scanning. Engaging the system, or portion thereof, was discussed with reference to FIG. 121. - The method continues at
step 1023 where the analysis system calculates a continuous security monitoring rating regarding the continuous security monitoring of the system, or portion thereof, based on the continuous security monitoring information, the system continuous security monitoring data, and continuous security monitoring processes, continuous security monitoring policies, continuous security monitoring documentation, and/or continuous security monitoring automation. - The continuous security monitoring rating may be indicative of a variety of factors of the system, or portion thereof. For example, the continuous security monitoring rating indicates how well the continuous security monitoring information reflects an understanding of the continuous security monitoring with respect to assets of the system, or portion thereof. As another example, the continuous security monitoring rating indicates how well the continuous security monitoring information reflects an understanding of the continuous security monitoring with respect to system functions of the system, or portion thereof.
- As another example, the continuous security monitoring rating indicates how well the continuous security monitoring information reflects an understanding of the continuous security monitoring with respect to the security functions of the system, or portion thereof. As another example, the continuous security monitoring rating indicates how well the continuous security monitoring information reflects intended implementation of the continuous security monitoring with respect to the assets of the system, or portion thereof. As another example, the continuous security monitoring rating indicates how well the continuous security monitoring information reflects intended operation of the continuous security monitoring with respect to the assets of the system, or portion thereof.
- As another example, the continuous security monitoring rating indicates how well the continuous security monitoring information reflects intended implementation of the continuous security monitoring with respect to the system functions of the system, or portion thereof. As another example, the continuous security monitoring rating indicates how well the continuous security monitoring information reflects intended operation of the continuous security monitoring with respect to the system functions of the system, or portion thereof.
- As another example, the continuous security monitoring rating indicates how well the continuous security monitoring information reflects intended implementation of the continuous security monitoring with respect to the security functions of the system, or portion thereof. As another example, the continuous security monitoring rating indicates how well the continuous security monitoring information reflects intended operation of the continuous security monitoring with respect to the security functions of the system, or portion thereof.
- The method continues at
step 1024 where the analysis system gathers desired continuous security monitoring data from one or more system proficiency resources. The method continues at step 1025 where the analysis system calculates a second continuous security monitoring rating regarding a desired level of continuous security monitoring of the system, or portion thereof, based on the continuous security monitoring information, the system continuous security monitoring data, the desired continuous security monitoring data, and the continuous security monitoring processes, the continuous security monitoring policies, the continuous security monitoring documentation, and/or the continuous security monitoring automation. The second continuous security monitoring rating is regarding a comparison of desired data with the disclosed data and/or discovered data. -
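As a further non-limiting sketch for this sub-category, the continuous security monitoring topics of FIG. 132 may be represented as data, and the disclosed information compared with the discovered data topic by topic. The gap-flagging logic below is an assumption introduced for illustration and is not the claimed rating calculation; the topic list mirrors the text.

```python
# Illustrative sketch only: comparing disclosed claims against discovered evidence per monitoring topic.

CSM_TOPICS = [
    "network monitoring", "physical environment monitoring", "personnel activity monitoring",
    "malicious code detection", "unauthorized mobile code detection",
    "external service provider activity monitoring", "unauthorized personnel monitoring",
    "unauthorized connection monitoring", "unauthorized device monitoring",
    "unauthorized software monitoring", "vulnerability scanning",
]

def csm_gaps(disclosed: set, discovered: set, topics=CSM_TOPICS):
    """Topics the organization claims to cover (disclosed) but for which engagement with the system
    produced no supporting data (discovered), plus topics covered in neither view."""
    claimed_not_observed = [t for t in topics if t in disclosed and t not in discovered]
    not_covered = [t for t in topics if t not in disclosed and t not in discovered]
    return {"claimed_not_observed": claimed_not_observed, "not_covered": not_covered}

# Example with fabricated coverage sets (illustrative only).
print(csm_gaps(disclosed={"network monitoring", "vulnerability scanning"},
               discovered={"network monitoring"}))
```
-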
FIG. 133 is a logic diagram of an example of an analysis system determining a continuous security monitoring evaluation rating for a system, or portion thereof. The method begins at step 1026 where the analysis system determines a system aspect (see FIG. 69) of a system for a continuous security monitoring evaluation. A continuous security monitoring evaluation includes evaluating the system's network monitoring, physical environment monitoring, personnel activity monitoring, malicious code detection, unauthorized mobile code detection, external service provider activity monitoring, unauthorized personnel monitoring, unauthorized connection monitoring, unauthorized device monitoring, unauthorized software monitoring, and vulnerability scanning. - The method continues at
step 1027 where the analysis system determines at least one evaluation perspective for use in performing the continuous security monitoring evaluation on the system aspect. An evaluation perspective is an understanding perspective, an implementation perspective, an operation perspective, or a self-analysis perspective. An understanding perspective is with regard to how well the assets, system functions, and/or security functions with respect to continuous security monitoring are understood. An implementation perspective is with regard to how well the assets, system functions, and/or security functions with respect to continuous security monitoring are implemented. An operation perspective is with regard to how well the assets, system functions, and/or security functions with respect to continuous security monitoring operate. A self-analysis (or self-evaluation) perspective is with regard to how well the system self-evaluates the understanding, implementation, and/or operation of assets, system functions, and/or security functions with respect to continuous security monitoring. - The method continues at
step 1028 where the analysis system determines at least one evaluation viewpoint for use in performing the continuous security monitoring evaluation on the system aspect. An evaluation viewpoint is a disclosed viewpoint, a discovered viewpoint, or a desired viewpoint. A disclosed viewpoint is with regard to analyzing the system aspect based on the disclosed data. A discovered viewpoint is with regard to analyzing the system aspect based on the discovered data. A desired viewpoint is with regard to analyzing the system aspect based on the desired data. - The method continues at
step 1029 where the analysis system obtains continuous security monitoring data regarding the system aspect in accordance with the at least one evaluation perspective and the at least one evaluation viewpoint. Continuous security monitoring data is data obtained regarding the system aspect. - The method continues at
step 1030 where the analysis system calculates a continuous security monitoring rating as a measure of system continuous security monitoring maturity for the system aspect based on the continuous security monitoring data, the at least one evaluation perspective, the at least one evaluation viewpoint, and at least one evaluation rating metric. An evaluation rating metric is a process rating metric, a policy rating metric, a procedure rating metric, a certification rating, a documentation rating metric, or an automation rating metric. -
FIG. 134 is a logic diagram of another example of an analysis system determining an issue detection rating for a system, or portion thereof. The method begins at step 1040 where the analysis system selects a system, or portion thereof, to evaluate issue detection processes analysis of the system, or portion thereof. For example, the analysis system selects one or more system elements, one or more system criteria, and/or one or more system modes for the system, or portion thereof, to be evaluated. As another example, the analysis system selects one or more evaluation perspectives and/or one or more evaluation viewpoints. The analysis system may further select one or more sub-sub categories of the issue detection processes analysis sub-category of the issue detection (e.g., detect) category. - As another example of selecting the system or portion thereof, the analysis system selects the entire system, selects a division of an organization operating the system, selects a department of a division, selects a group of a department, or selects a sub-group of a group. As another example of selecting the system or portion thereof, the analysis system selects one or more physical assets and/or one or more conceptual assets.
- The method continues at
step 1041 where the analysis system obtains issue detection processes analysis information regarding the system, or portion thereof. The issue detection processes analysis information includes information representative of an organization's understanding of the system, or portion thereof, with respect to defined issue detection roles, defined issue detection responsibilities, issue detection compliance verification, testing of issue detection processes, event detection communication, and continuous improvement of issue detection processes. In an example, the analysis system obtains the issue detection processes analysis information (e.g., disclosed data from the system) by receiving it from a system admin computing entity. In another example, the analysis system obtains the issue detection processes analysis information by gathering it from one or more computing entities of the system. - The method continues at
step 1042 where the analysis system engages with the system, or portion thereof, to produce system issue detection processes analysis data (e.g., discovered data) regarding the system, or portion thereof, with respect to the defined issue detection roles, the defined issue detection responsibilities, the issue detection compliance verification, the testing of issue detection processes, the event detection communication, and the continuous improvement of issue detection processes. Engaging the system, or portion thereof, was discussed with reference to FIG. 121. - The method continues at
step 1043 where the analysis system calculates an issue detection processes analysis rating regarding the issue detection processes analysis of the system, or portion thereof, based on the issue detection processes analysis information, the system issue detection processes analysis data, and issue detection processes analysis processes, issue detection processes analysis policies, issue detection processes analysis documentation, and/or issue detection processes analysis automation. - The issue detection processes analysis rating may be indicative of a variety of factors of the system, or portion thereof. For example, the issue detection processes analysis rating indicates how well the issue detection processes analysis information reflects an understanding of the issue detection processes analysis with respect to assets of the system, or portion thereof. As another example, the issue detection processes analysis rating indicates how well the issue detection processes analysis information reflects an understanding of the issue detection processes analysis with respect to system functions of the system, or portion thereof.
- As another example, the issue detection processes analysis rating indicates how well the issue detection processes analysis information reflects an understanding of the issue detection processes analysis with respect to the security functions of the system, or portion thereof. As another example, the issue detection processes analysis rating indicates how well the issue detection processes analysis information reflects intended implementation of the issue detection processes analysis with respect to the assets of the system, or portion thereof. As another example, the issue detection processes analysis rating indicates how well the issue detection processes analysis information reflects intended operation of the issue detection processes analysis with respect to the assets of the system, or portion thereof.
- As another example, the issue detection processes analysis rating indicates how well the issue detection processes analysis information reflects intended implementation of the issue detection processes analysis with respect to the system functions of the system, or portion thereof. As another example, the issue detection processes analysis rating indicates how well the issue detection processes analysis information reflects intended operation of the issue detection processes analysis with respect to the system functions of the system, or portion thereof.
- As another example, the issue detection processes analysis rating indicates how well the issue detection processes analysis information reflects intended implementation of the issue detection processes analysis with respect to the security functions of the system, or portion thereof. As another example, the issue detection processes analysis rating indicates how well the issue detection processes analysis information reflects intended operation of the issue detection processes analysis with respect to the security functions of the system, or portion thereof.
- The method continues at
step 1044 where the analysis system gathers desired system issue detection processes analysis data from one or more system proficiency resources. The method continues at step 1045 where the analysis system calculates a second issue detection processes analysis rating regarding a desired level of issue detection processes analysis of the system, or portion thereof, based on the issue detection processes analysis information, the system issue detection processes analysis data, the desired issue detection processes analysis data, and the issue detection processes analysis processes, the issue detection processes analysis policies, the issue detection processes analysis documentation, and/or the issue detection processes analysis automation. The second issue detection processes analysis rating is regarding a comparison of desired data with the disclosed data and/or discovered data. -
FIG. 135 is a logic diagram of an example of an analysis system determining an issue detection processes analysis rating for a system, or portion thereof. The method begins at step 1046 where the analysis system determines a system aspect (see FIG. 69) of a system for an issue detection processes analysis evaluation. An issue detection processes analysis evaluation includes evaluating the system's defined issue detection roles, defined issue detection responsibilities, issue detection compliance verification, testing of issue detection processes, event detection communication, and continuous improvement of issue detection processes. - The method continues at
step 1047 where the analysis system determines at least one evaluation perspective for use in performing the issue detection processes analysis evaluation on the system aspect. An evaluation perspective is an understanding perspective, an implementation perspective, an operation perspective, or a self-analysis perspective. An understanding perspective is with regard to how well the assets, system functions, and/or security functions with respect to issue detection processes analysis are understood. An implementation perspective is with regard to how well the assets, system functions, and/or security functions with respect to issue detection processes analysis are implemented. An operation perspective is with regard to how well the assets, system functions, and/or security functions with respect to issue detection processes analysis operate. A self-analysis (or self-evaluation) perspective is with regard to how well the system self-evaluates the understanding, implementation, and/or operation of assets, system functions, and/or security functions with respect to issue detection processes analysis. - The method continues at
step 1048 where the analysis system determines at least one evaluation viewpoint for use in performing the issue detection processes analysis evaluation on the system aspect. An evaluation viewpoint is a disclosed viewpoint, a discovered viewpoint, or a desired viewpoint. A disclosed viewpoint is with regard to analyzing the system aspect based on the disclosed data. A discovered viewpoint is with regard to analyzing the system aspect based on the discovered data. A desired viewpoint is with regard to analyzing the system aspect based on the desired data. - The method continues at
step 1049 where the analysis system obtains issue detection processes analysis data regarding the system aspect in accordance with the at least one evaluation perspective and the at least one evaluation viewpoint. Issue detection processes analysis data is data obtained regarding the system aspect. - The method continues at
step 1050 where the analysis system calculates an issue detection processes analysis rating as a measure of system issue detection processes analysis maturity for the system aspect based on the issue detection processes analysis data, the at least one evaluation perspective, the at least one evaluation viewpoint, and at least one evaluation rating metric. An evaluation rating metric is a process rating metric, a policy rating metric, a procedure rating metric, a certification rating, a documentation rating metric, or an automation rating metric. - It is noted that terminologies as may be used herein such as bit stream, stream, signal sequence, etc. (or their equivalents) have been used interchangeably to describe digital information whose content corresponds to any of a number of desired types (e.g., data, video, speech, text, graphics, audio, etc. any of which may generally be referred to as ‘data’).
- As may be used herein, the terms "substantially" and "approximately" provide an industry-accepted tolerance for their corresponding terms and/or relativity between items. For some industries, an industry-accepted tolerance is less than one percent and, for other industries, the industry-accepted tolerance is 10 percent or more. Other examples of industry-accepted tolerance range from less than one percent to fifty percent. Industry-accepted tolerances correspond to, but are not limited to, component values, integrated circuit process variations, temperature variations, rise and fall times, thermal noise, dimensions, signaling errors, dropped packets, temperatures, pressures, material compositions, and/or performance metrics. Within an industry, tolerance variances of accepted tolerances may be more or less than a percentage level (e.g., dimension tolerance of less than +/-1%). Some relativity between items may range from a difference of less than a percentage level to a few percent. Other relativity between items may range from a difference of a few percent to a magnitude of differences.
- As may also be used herein, the term(s) “configured to”, “operably coupled to”, “coupled to”, and/or “coupling” includes direct coupling between items and/or indirect coupling between items via an intervening item (e.g., an item includes, but is not limited to, a component, an element, a circuit, and/or a module) where, for an example of indirect coupling, the intervening item does not modify the information of a signal but may adjust its current level, voltage level, and/or power level. As may further be used herein, inferred coupling (i.e., where one element is coupled to another element by inference) includes direct and indirect coupling between two items in the same manner as “coupled to”.
- As may even further be used herein, the term "configured to", "operable to", "coupled to", or "operably coupled to" indicates that an item includes one or more of power connections, input(s), output(s), etc., to perform, when activated, one or more of its corresponding functions and may further include inferred coupling to one or more other items. As may still further be used herein, the term "associated with" includes direct and/or indirect coupling of separate items and/or one item being embedded within another item.
- As may be used herein, the term "compares favorably" indicates that a comparison between two or more items, signals, etc., provides a desired relationship. For example, when the desired relationship is that
signal 1 has a greater magnitude than signal 2, a favorable comparison may be achieved when the magnitude of signal 1 is greater than that of signal 2 or when the magnitude of signal 2 is less than that of signal 1. As may be used herein, the term "compares unfavorably" indicates that a comparison between two or more items, signals, etc., fails to provide the desired relationship. - As may be used herein, one or more claims may include, in a specific form of this generic form, the phrase "at least one of a, b, and c" or of this generic form "at least one of a, b, or c", with more or less elements than "a", "b", and "c". In either phrasing, the phrases are to be interpreted identically. In particular, "at least one of a, b, and c" is equivalent to "at least one of a, b, or c" and shall mean a, b, and/or c. As an example, it means: "a" only, "b" only, "c" only, "a" and "b", "a" and "c", "b" and "c", and/or "a", "b", and "c".
- As may also be used herein, the terms “processing module”, “processing circuit”, “processor”, “processing circuitry”, and/or “processing unit” may be a single processing device or a plurality of processing devices. Such a processing device may be a microprocessor, micro-controller, digital signal processor, microcomputer, central processing unit, field programmable gate array, programmable logic device, state machine, logic circuitry, analog circuitry, digital circuitry, and/or any device that manipulates signals (analog and/or digital) based on hard coding of the circuitry and/or operational instructions. The processing module, module, processing circuit, processing circuitry, and/or processing unit may be, or further include, memory and/or an integrated memory element, which may be a single memory device, a plurality of memory devices, and/or embedded circuitry of another processing module, module, processing circuit, processing circuitry, and/or processing unit. Such a memory device may be a read-only memory, random access memory, volatile memory, non-volatile memory, static memory, dynamic memory, flash memory, cache memory, and/or any device that stores digital information. Note that if the processing module, module, processing circuit, processing circuitry, and/or processing unit includes more than one processing device, the processing devices may be centrally located (e.g., directly coupled together via a wired and/or wireless bus structure) or may be distributedly located (e.g., cloud computing via indirect coupling via a local area network and/or a wide area network). Further note that if the processing module, module, processing circuit, processing circuitry and/or processing unit implements one or more of its functions via a state machine, analog circuitry, digital circuitry, and/or logic circuitry, the memory and/or memory element storing the corresponding operational instructions may be embedded within, or external to, the circuitry comprising the state machine, analog circuitry, digital circuitry, and/or logic circuitry. Still further note that, the memory element may store, and the processing module, module, processing circuit, processing circuitry and/or processing unit executes, hard coded and/or operational instructions corresponding to at least some of the steps and/or functions illustrated in one or more of the Figures. Such a memory device or memory element can be included in an article of manufacture.
- One or more embodiments have been described above with the aid of method steps illustrating the performance of specified functions and relationships thereof. The boundaries and sequence of these functional building blocks and method steps have been arbitrarily defined herein for convenience of description. Alternate boundaries and sequences can be defined so long as the specified functions and relationships are appropriately performed. Any such alternate boundaries or sequences are thus within the scope and spirit of the claims. Further, the boundaries of these functional building blocks have been arbitrarily defined for convenience of description. Alternate boundaries could be defined as long as the certain significant functions are appropriately performed. Similarly, flow diagram blocks may also have been arbitrarily defined herein to illustrate certain significant functionality.
- To the extent used, the flow diagram block boundaries and sequence could have been defined otherwise and still perform the certain significant functionality. Such alternate definitions of both functional building blocks and flow diagram blocks and sequences are thus within the scope and spirit of the claims. One of average skill in the art will also recognize that the functional building blocks, and other illustrative blocks, modules and components herein, can be implemented as illustrated or by discrete components, application specific integrated circuits, processors executing appropriate software and the like or any combination thereof.
- In addition, a flow diagram may include a “start” and/or “continue” indication. The “start” and “continue” indications reflect that the steps presented can optionally be incorporated in or otherwise used in conjunction with one or more other routines. In addition, a flow diagram may include an “end” and/or “continue” indication. The “end” and/or “continue” indications reflect that the steps presented can end as described and shown or optionally be incorporated in or otherwise used in conjunction with one or more other routines. In this context, “start” indicates the beginning of the first step presented and may be preceded by other activities not specifically shown. Further, the “continue” indication reflects that the steps presented may be performed multiple times and/or may be succeeded by other activities not specifically shown. Further, while a flow diagram indicates a particular ordering of steps, other orderings are likewise possible provided that the principles of causality are maintained.
- The one or more embodiments are used herein to illustrate one or more aspects, one or more features, one or more concepts, and/or one or more examples. A physical embodiment of an apparatus, an article of manufacture, a machine, and/or of a process may include one or more of the aspects, features, concepts, examples, etc. described with reference to one or more of the embodiments discussed herein. Further, from figure to figure, the embodiments may incorporate the same or similarly named functions, steps, modules, etc. that may use the same or different reference numbers and, as such, the functions, steps, modules, etc. may be the same or similar functions, steps, modules, etc. or different ones.
- While the transistors in the above described figure(s) is/are shown as field effect transistors (FETs), as one of ordinary skill in the art will appreciate, the transistors may be implemented using any type of transistor structure including, but not limited to, bipolar, metal oxide semiconductor field effect transistors (MOSFET), N-well transistors, P-well transistors, enhancement mode, depletion mode, and zero voltage threshold (VT) transistors.
- Unless specifically stated to the contrary, signals to, from, and/or between elements in a figure of any of the figures presented herein may be analog or digital, continuous time or discrete time, and single-ended or differential. For instance, if a signal path is shown as a single-ended path, it also represents a differential signal path. Similarly, if a signal path is shown as a differential path, it also represents a single-ended signal path. While one or more particular architectures are described herein, other architectures can likewise be implemented that use one or more data buses not expressly shown, direct connectivity between elements, and/or indirect coupling between other elements as recognized by one of average skill in the art.
- The term “module” is used in the description of one or more of the embodiments. A module implements one or more functions via a device such as a processor or other processing device or other hardware that may include or operate in association with a memory that stores operational instructions. A module may operate independently and/or in conjunction with software and/or firmware. As also used herein, a module may contain one or more sub-modules, each of which may be one or more modules.
- As may further be used herein, a computer readable memory includes one or more memory elements. A memory element may be a separate memory device, multiple memory devices, or a set of memory locations within a memory device. Such a memory device may be a read-only memory, random access memory, volatile memory, non-volatile memory, static memory, dynamic memory, flash memory, cache memory, and/or any device that stores digital information. The memory device may be in the form of a solid-state memory, a hard drive memory, cloud memory, thumb drive, server memory, computing device memory, and/or other physical medium for storing digital information.
- While particular combinations of various functions and features of the one or more embodiments have been expressly described herein, other combinations of these features and functions are likewise possible. The present disclosure is not limited by the particular examples disclosed herein and expressly incorporates these other combinations.
Claims (22)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/301,356 US20210297432A1 (en) | 2020-03-20 | 2021-03-31 | Generation of an anomalies and event awareness evaluation regarding a system aspect of a system |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US202062992661P | 2020-03-20 | 2020-03-20 | |
US17/247,701 US11892924B2 (en) | 2020-03-20 | 2020-12-21 | Generation of an issue detection evaluation regarding a system aspect of a system |
US17/301,356 US20210297432A1 (en) | 2020-03-20 | 2021-03-31 | Generation of an anomalies and event awareness evaluation regarding a system aspect of a system |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/247,701 Continuation US11892924B2 (en) | 2020-03-20 | 2020-12-21 | Generation of an issue detection evaluation regarding a system aspect of a system |
Publications (1)
Publication Number | Publication Date |
---|---|
US20210297432A1 true US20210297432A1 (en) | 2021-09-23 |
Family
ID=77747895
Family Applications (33)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/128,471 Pending US20210294713A1 (en) | 2020-03-20 | 2020-12-21 | Generation of an identification evaluation regarding a system aspect of a system |
US17/247,710 Active 2041-02-03 US11789833B2 (en) | 2020-03-20 | 2020-12-21 | Generation of an issue recovery evaluation regarding a system aspect of a system |
US17/128,491 Active 2042-11-21 US11960373B2 (en) | 2020-03-20 | 2020-12-21 | Function evaluation of a system or portion thereof |
US17/247,706 Active 2041-02-27 US11775405B2 (en) | 2020-03-20 | 2020-12-21 | Generation of an issue response evaluation regarding a system aspect of a system |
US17/247,714 Pending US20210294904A1 (en) | 2020-03-20 | 2020-12-21 | Generation of an asset evaluation regarding a system aspect of a system |
US17/247,705 Active 2041-06-05 US11645176B2 (en) | 2020-03-20 | 2020-12-21 | Generation of a protection evaluation regarding a system aspect of a system |
US17/247,702 Active 2042-06-15 US11954003B2 (en) | 2020-03-20 | 2020-12-21 | High level analysis system with report outputting |
US17/128,509 Active 2041-04-22 US11698845B2 (en) | 2020-03-20 | 2020-12-21 | Evaluation rating of a system or portion thereof |
US17/247,701 Active 2041-09-12 US11892924B2 (en) | 2020-03-20 | 2020-12-21 | Generation of an issue detection evaluation regarding a system aspect of a system |
US17/301,352 Active US11775406B2 (en) | 2020-03-20 | 2021-03-31 | Generation of an issue recovery plan evaluation regarding a system aspect of a system |
US17/301,349 Active 2041-10-30 US11994968B2 (en) | 2020-03-20 | 2021-03-31 | High level analysis system with report outputting |
US17/301,361 Pending US20210295232A1 (en) | 2020-03-20 | 2021-03-31 | Generation of evaluation regarding fulfillment of business operation objectives of a system aspect of a system |
US17/219,561 Active US11775404B2 (en) | 2020-03-20 | 2021-03-31 | Generation of an issue response planning evaluation regarding a system aspect of a system |
US17/301,356 Pending US20210297432A1 (en) | 2020-03-20 | 2021-03-31 | Generation of an anomalies and event awareness evaluation regarding a system aspect of a system |
US17/301,368 Active US11704212B2 (en) | 2020-03-20 | 2021-03-31 | Evaluation of processes of a system or portion thereof |
US17/219,655 Pending US20210297439A1 (en) | 2020-03-20 | 2021-03-31 | Business operation function evaluation of a system or portion thereof |
US17/350,113 Active US11704213B2 (en) | 2020-03-20 | 2021-06-17 | Evaluation of policies of a system or portion thereof |
US17/304,783 Active US11693751B2 (en) | 2020-03-20 | 2021-06-25 | Generation of an issue response analysis evaluation regarding a system aspect of a system |
US17/304,771 Pending US20210329018A1 (en) | 2020-03-20 | 2021-06-25 | Generation of a continuous security monitoring evaluation regarding a system aspect of a system |
US17/360,815 Active US11853181B2 (en) | 2020-03-20 | 2021-06-28 | Generation of an asset management evaluation regarding a system aspect of a system |
US17/305,018 Active 2041-04-12 US11734140B2 (en) | 2020-03-20 | 2021-06-29 | Method and apparatus for system protection maintenance analysis |
US17/305,005 Active 2041-01-13 US11734139B2 (en) | 2020-03-20 | 2021-06-29 | Method and apparatus for system information protection processes and procedures analysis |
US17/305,086 Active 2041-11-04 US11947434B2 (en) | 2020-03-20 | 2021-06-30 | System under test analysis method to detect deficiencies and/or auto-corrections |
US17/444,163 Pending US20210357511A1 (en) | 2020-03-20 | 2021-07-30 | Method and apparatus for system protection technology analysis |
US18/059,544 Pending US20230090011A1 (en) | 2020-03-20 | 2022-11-29 | Evaluation output of a system or portions thereof |
US18/162,540 Active US11899548B2 (en) | 2020-03-20 | 2023-01-31 | Generation of an issue recovery improvement evaluation regarding a system aspect of a system |
US18/193,523 Active US11966308B2 (en) | 2020-03-20 | 2023-03-30 | Generation of an issue response communications evaluation regarding a system aspect of a system |
US18/141,444 Active US12066909B2 (en) | 2020-03-20 | 2023-04-30 | Generation of a mitigation evaluation regarding a system aspect of a system |
US18/315,620 Pending US20230289269A1 (en) | 2020-03-20 | 2023-05-11 | Evaluation of documentation of a system or portion thereof |
US18/203,171 Pending US20230315597A1 (en) | 2020-03-20 | 2023-05-30 | Evaluation rating regarding fulfillment of business operation objectives of a system aspect of a system |
US18/334,042 Active US12072779B2 (en) | 2020-03-20 | 2023-06-13 | Generation of an issue recovery communications evaluation regarding a system aspect of a system |
US18/507,520 Pending US20240078161A1 (en) | 2020-03-20 | 2023-11-13 | Generation of a business environment evaluation regarding a system aspect of a system |
US18/646,896 Pending US20240281349A1 (en) | 2020-03-20 | 2024-04-26 | Auto-correcting deficiencies of a system |
Family Applications Before (13)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/128,471 Pending US20210294713A1 (en) | 2020-03-20 | 2020-12-21 | Generation of an identification evaluation regarding a system aspect of a system |
US17/247,710 Active 2041-02-03 US11789833B2 (en) | 2020-03-20 | 2020-12-21 | Generation of an issue recovery evaluation regarding a system aspect of a system |
US17/128,491 Active 2042-11-21 US11960373B2 (en) | 2020-03-20 | 2020-12-21 | Function evaluation of a system or portion thereof |
US17/247,706 Active 2041-02-27 US11775405B2 (en) | 2020-03-20 | 2020-12-21 | Generation of an issue response evaluation regarding a system aspect of a system |
US17/247,714 Pending US20210294904A1 (en) | 2020-03-20 | 2020-12-21 | Generation of an asset evaluation regarding a system aspect of a system |
US17/247,705 Active 2041-06-05 US11645176B2 (en) | 2020-03-20 | 2020-12-21 | Generation of a protection evaluation regarding a system aspect of a system |
US17/247,702 Active 2042-06-15 US11954003B2 (en) | 2020-03-20 | 2020-12-21 | High level analysis system with report outputting |
US17/128,509 Active 2041-04-22 US11698845B2 (en) | 2020-03-20 | 2020-12-21 | Evaluation rating of a system or portion thereof |
US17/247,701 Active 2041-09-12 US11892924B2 (en) | 2020-03-20 | 2020-12-21 | Generation of an issue detection evaluation regarding a system aspect of a system |
US17/301,352 Active US11775406B2 (en) | 2020-03-20 | 2021-03-31 | Generation of an issue recovery plan evaluation regarding a system aspect of a system |
US17/301,349 Active 2041-10-30 US11994968B2 (en) | 2020-03-20 | 2021-03-31 | High level analysis system with report outputting |
US17/301,361 Pending US20210295232A1 (en) | 2020-03-20 | 2021-03-31 | Generation of evaluation regarding fulfillment of business operation objectives of a system aspect of a system |
US17/219,561 Active US11775404B2 (en) | 2020-03-20 | 2021-03-31 | Generation of an issue response planning evaluation regarding a system aspect of a system |
Family Applications After (19)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/301,368 Active US11704212B2 (en) | 2020-03-20 | 2021-03-31 | Evaluation of processes of a system or portion thereof |
US17/219,655 Pending US20210297439A1 (en) | 2020-03-20 | 2021-03-31 | Business operation function evaluation of a system or portion thereof |
US17/350,113 Active US11704213B2 (en) | 2020-03-20 | 2021-06-17 | Evaluation of policies of a system or portion thereof |
US17/304,783 Active US11693751B2 (en) | 2020-03-20 | 2021-06-25 | Generation of an issue response analysis evaluation regarding a system aspect of a system |
US17/304,771 Pending US20210329018A1 (en) | 2020-03-20 | 2021-06-25 | Generation of a continuous security monitoring evaluation regarding a system aspect of a system |
US17/360,815 Active US11853181B2 (en) | 2020-03-20 | 2021-06-28 | Generation of an asset management evaluation regarding a system aspect of a system |
US17/305,018 Active 2041-04-12 US11734140B2 (en) | 2020-03-20 | 2021-06-29 | Method and apparatus for system protection maintenance analysis |
US17/305,005 Active 2041-01-13 US11734139B2 (en) | 2020-03-20 | 2021-06-29 | Method and apparatus for system information protection processes and procedures analysis |
US17/305,086 Active 2041-11-04 US11947434B2 (en) | 2020-03-20 | 2021-06-30 | System under test analysis method to detect deficiencies and/or auto-corrections |
US17/444,163 Pending US20210357511A1 (en) | 2020-03-20 | 2021-07-30 | Method and apparatus for system protection technology analysis |
US18/059,544 Pending US20230090011A1 (en) | 2020-03-20 | 2022-11-29 | Evaluation output of a system or portions thereof |
US18/162,540 Active US11899548B2 (en) | 2020-03-20 | 2023-01-31 | Generation of an issue recovery improvement evaluation regarding a system aspect of a system |
US18/193,523 Active US11966308B2 (en) | 2020-03-20 | 2023-03-30 | Generation of an issue response communications evaluation regarding a system aspect of a system |
US18/141,444 Active US12066909B2 (en) | 2020-03-20 | 2023-04-30 | Generation of a mitigation evaluation regarding a system aspect of a system |
US18/315,620 Pending US20230289269A1 (en) | 2020-03-20 | 2023-05-11 | Evaluation of documentation of a system or portion thereof |
US18/203,171 Pending US20230315597A1 (en) | 2020-03-20 | 2023-05-30 | Evaluation rating regarding fulfillment of business operation objectives of a system aspect of a system |
US18/334,042 Active US12072779B2 (en) | 2020-03-20 | 2023-06-13 | Generation of an issue recovery communications evaluation regarding a system aspect of a system |
US18/507,520 Pending US20240078161A1 (en) | 2020-03-20 | 2023-11-13 | Generation of a business environment evaluation regarding a system aspect of a system |
US18/646,896 Pending US20240281349A1 (en) | 2020-03-20 | 2024-04-26 | Auto-correcting deficiencies of a system |
Country Status (2)
Country | Link |
---|---|
US (33) | US20210294713A1 (en) |
WO (1) | WO2021188356A1 (en) |
Families Citing this family (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11327475B2 (en) | 2016-05-09 | 2022-05-10 | Strong Force Iot Portfolio 2016, Llc | Methods and systems for intelligent collection and analysis of vehicle data |
US11112784B2 (en) | 2016-05-09 | 2021-09-07 | Strong Force Iot Portfolio 2016, Llc | Methods and systems for communications in an industrial internet of things data collection environment with large data sets |
US11108798B2 (en) | 2018-06-06 | 2021-08-31 | Reliaquest Holdings, Llc | Threat mitigation system and method |
US11709946B2 (en) | 2018-06-06 | 2023-07-25 | Reliaquest Holdings, Llc | Threat mitigation system and method |
US11146584B1 (en) * | 2018-08-16 | 2021-10-12 | 5thColumn LLC | Methods, apparatuses, systems and devices for network security |
US20210294713A1 (en) * | 2020-03-20 | 2021-09-23 | 5thColumn LLC | Generation of an identification evaluation regarding a system aspect of a system |
JP7501615B2 (en) * | 2020-04-24 | 2024-06-18 | 日本電気株式会社 | SECURITY INSPECTION DEVICE, SECURITY INSPECTION METHOD, AND PROGRAM |
US11741228B2 (en) * | 2020-08-25 | 2023-08-29 | Bank Of America Corporation | System for generating computing network segmentation and isolation schemes using dynamic and shifting classification of assets |
US11689574B2 (en) * | 2021-03-09 | 2023-06-27 | International Business Machines Corporation | Optimizing security and event information |
US20220385683A1 (en) * | 2021-05-28 | 2022-12-01 | Sophos Limited | Threat management using network traffic to determine security states |
CN113961555B (en) * | 2021-11-15 | 2024-06-28 | 中国建设银行股份有限公司 | Data correction method and device, storage medium and electronic equipment |
CN114401113B (en) * | 2021-12-16 | 2023-06-27 | 中国人民解放军战略支援部队信息工程大学 | Network security policy AI autonomous defense method and system based on security ontology modeling |
CN113946836B (en) * | 2021-12-20 | 2022-04-19 | 中山大学 | Method, system, equipment and medium for evaluating toughness of information system |
US11894940B2 (en) | 2022-05-10 | 2024-02-06 | Google Llc | Automated testing system for a video conferencing system |
US20230421563A1 (en) * | 2022-06-22 | 2023-12-28 | Stripe, Inc. | Managing access control using policy evaluation mode |
US12003362B2 (en) * | 2022-07-27 | 2024-06-04 | Rapid7, Inc. | Machine learning techniques for associating assets related to events with addressable computer network assets |
US11972296B1 (en) * | 2023-05-03 | 2024-04-30 | The Strategic Coach Inc. | Methods and apparatuses for intelligently determining and implementing distinct routines for entities |
CN116451911A (en) * | 2023-06-16 | 2023-07-18 | 中国人民解放军战略支援部队航天工程大学 | Equipment technical system evaluation method and system based on capability gap |
CN116955967B (en) * | 2023-09-20 | 2023-12-08 | 成都无糖信息技术有限公司 | System and method for simulating investigation and adjustment in network target range |
Family Cites Families (286)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6298445B1 (en) * | 1998-04-30 | 2001-10-02 | Netect, Ltd. | Computer security |
US6581166B1 (en) * | 1999-03-02 | 2003-06-17 | The Foxboro Company | Network fault detection and recovery |
IL159332A0 (en) * | 1999-10-31 | 2004-06-01 | Insyst Ltd | A knowledge-engineering protocol-suite |
US7159237B2 (en) * | 2000-03-16 | 2007-01-02 | Counterpane Internet Security, Inc. | Method and system for dynamic network intrusion monitoring, detection and response |
US6907403B1 (en) * | 2000-07-13 | 2005-06-14 | C4Cast.Com, Inc. | Identifying industry sectors using statistical clusterization |
US6850920B2 (en) * | 2001-05-01 | 2005-02-01 | The Regents Of The University Of California | Performance analysis of distributed applications using automatic classification of communication inefficiencies |
US20030115094A1 (en) * | 2001-12-18 | 2003-06-19 | Ammerman Geoffrey C. | Apparatus and method for evaluating the performance of a business |
US20030177414A1 (en) * | 2002-03-14 | 2003-09-18 | Sun Microsystems Inc., A Delaware Corporation | Model for performance tuning applications |
WO2003081493A1 (en) * | 2002-03-22 | 2003-10-02 | Mitsubishi Denki Kabushiki Kaisha | Business profit improvement support system |
US20030187675A1 (en) * | 2002-03-29 | 2003-10-02 | Stefan Hack | Business process valuation tool |
US7290275B2 (en) * | 2002-04-29 | 2007-10-30 | Schlumberger Omnes, Inc. | Security maturity assessment method |
US7373666B2 (en) * | 2002-07-01 | 2008-05-13 | Microsoft Corporation | Distributed threat management |
US6795793B2 (en) * | 2002-07-19 | 2004-09-21 | Med-Ed Innovations, Inc. | Method and apparatus for evaluating data and implementing training based on the evaluation of the data |
US7809595B2 (en) * | 2002-09-17 | 2010-10-05 | Jpmorgan Chase Bank, Na | System and method for managing risks associated with outside service providers |
US8359650B2 (en) * | 2002-10-01 | 2013-01-22 | Skybox Security Inc. | System, method and computer readable medium for evaluating potential attacks of worms |
US6816813B2 (en) * | 2002-10-15 | 2004-11-09 | The Procter & Gamble Company | Process for determining competing cause event probability and/or system availability during the simultaneous occurrence of multiple events |
US7734637B2 (en) * | 2002-12-05 | 2010-06-08 | Borland Software Corporation | Method and system for automatic detection of monitoring data sources |
JP4089427B2 (en) * | 2002-12-26 | 2008-05-28 | 株式会社日立製作所 | Management system, management computer, management method and program |
JP2004252947A (en) * | 2003-01-27 | 2004-09-09 | Fuji Xerox Co Ltd | Evaluation device and method |
US7246156B2 (en) * | 2003-06-09 | 2007-07-17 | Industrial Defender, Inc. | Method and computer program product for monitoring an industrial network |
US20070050777A1 (en) * | 2003-06-09 | 2007-03-01 | Hutchinson Thomas W | Duration of alerts and scanning of large data stores |
US20070113272A2 (en) * | 2003-07-01 | 2007-05-17 | Securityprofiling, Inc. | Real-time vulnerability monitoring |
US9100431B2 (en) * | 2003-07-01 | 2015-08-04 | Securityprofiling, Llc | Computer program product and apparatus for multi-path remediation |
WO2005065392A2 (en) * | 2003-12-30 | 2005-07-21 | Bacon Charles F | System and method for adaptive decision making analysis and assessment |
US20060005246A1 (en) * | 2004-02-09 | 2006-01-05 | Dalton Thomas R | System for providing security vulnerability identification, certification, and accreditation |
US8201257B1 (en) * | 2004-03-31 | 2012-06-12 | Mcafee, Inc. | System and method of managing network security risks |
US20050234767A1 (en) * | 2004-04-15 | 2005-10-20 | Bolzman Douglas F | System and method for identifying and monitoring best practices of an enterprise |
US20070180490A1 (en) * | 2004-05-20 | 2007-08-02 | Renzi Silvio J | System and method for policy management |
US20060021021A1 (en) * | 2004-06-08 | 2006-01-26 | Rajesh Patel | Security event data normalization |
JP4251636B2 (en) * | 2004-07-30 | 2009-04-08 | インターナショナル・ビジネス・マシーンズ・コーポレーション | Organization efficiency evaluation device, organization efficiency evaluation service method, information transmission frequency generation device, information transmission frequency generation method, program, and recording medium |
US20060047535A1 (en) * | 2004-08-26 | 2006-03-02 | Paiva Fredricksen Group, Llc | Method and system of business management |
JP4304535B2 (en) * | 2004-11-17 | 2009-07-29 | 日本電気株式会社 | Information processing apparatus, program, modular system operation management system, and component selection method |
EP1899875A4 (en) * | 2004-12-13 | 2010-01-06 | Lawrence R Guinta | Critically/vulnerability/risk logic analysis methodology for business enterprise and cyber security |
WO2006071985A2 (en) * | 2004-12-29 | 2006-07-06 | Alert Logic, Inc. | Threat scoring system and method for intrusion detection security networks |
US20100070348A1 (en) * | 2005-02-17 | 2010-03-18 | Abhijit Nag | Method and apparatus for evaluation of business performances of business enterprises |
US20060229921A1 (en) * | 2005-04-08 | 2006-10-12 | Mr. Patrick Colbeck | Business Control System |
US20060235733A1 (en) * | 2005-04-13 | 2006-10-19 | Marks Eric A | System and method for providing integration of service-oriented architecture and Web services |
US7925594B2 (en) * | 2005-07-19 | 2011-04-12 | Infosys Technologies Ltd. | System and method for providing framework for business process improvement |
US8150816B2 (en) * | 2005-12-29 | 2012-04-03 | Nextlabs, Inc. | Techniques of optimizing policies in an information management system |
US8301490B2 (en) * | 2006-02-21 | 2012-10-30 | Cornford Alan B | Methods and systems for optimization of innovative capacity and economic value from knowledge, human and risk capital asset sets |
US8112306B2 (en) * | 2006-02-22 | 2012-02-07 | Verint Americas, Inc. | System and method for facilitating triggers and workflows in workforce optimization |
US7689494B2 (en) * | 2006-03-23 | 2010-03-30 | Advisor Software Inc. | Simulation of portfolios and risk budget analysis |
US9077715B1 (en) * | 2006-03-31 | 2015-07-07 | Symantec Corporation | Social trust based security model |
US7831464B1 (en) * | 2006-04-06 | 2010-11-09 | ClearPoint Metrics, Inc. | Method and system for dynamically representing distributed information |
US7934253B2 (en) | 2006-07-20 | 2011-04-26 | Trustwave Holdings, Inc. | System and method of securing web applications across an enterprise |
US20080027784A1 (en) * | 2006-07-31 | 2008-01-31 | Jenny Siew Hoon Ang | Goal-service modeling |
US8775402B2 (en) * | 2006-08-15 | 2014-07-08 | Georgia State University Research Foundation, Inc. | Trusted query network systems and methods |
JP4374378B2 (en) * | 2006-12-21 | 2009-12-02 | 株式会社日立製作所 | Operation performance evaluation apparatus, operation performance evaluation method, and program |
US20080183552A1 (en) | 2007-01-30 | 2008-07-31 | Pied Piper Management Company | Method for evaluating, analyzing, and benchmarking business sales performance |
US9069967B2 (en) * | 2007-02-16 | 2015-06-30 | Veracode, Inc. | Assessment and analysis of software security flaws |
US8613080B2 (en) * | 2007-02-16 | 2013-12-17 | Veracode, Inc. | Assessment and analysis of software security flaws in virtual machines |
US20080249825A1 (en) * | 2007-04-05 | 2008-10-09 | Infosys Technologies Ltd. | Information technology maintenance system framework |
EP2145309A1 (en) * | 2007-05-10 | 2010-01-20 | Pensions First Group Llp | Pension fund systems |
US7971180B2 (en) * | 2007-06-13 | 2011-06-28 | International Business Machines Corporation | Method and system for evaluating multi-dimensional project plans for implementing packaged software applications |
US8166551B2 (en) * | 2007-07-17 | 2012-04-24 | Oracle International Corporation | Automated security manager |
KR100815280B1 (en) * | 2007-08-31 | 2008-03-19 | 주식회사 소프트웨어품질연구소 | Software process maturity management method and computer readable recording medium thereof |
US8819655B1 (en) * | 2007-09-17 | 2014-08-26 | Symantec Corporation | Systems and methods for computer program update protection |
GB0718259D0 (en) * | 2007-09-19 | 2007-10-31 | Olton Ltd | Apparatus and method for information processing |
US20090112809A1 (en) * | 2007-10-24 | 2009-04-30 | Caterpillar Inc. | Systems and methods for monitoring health of computing systems |
US8006116B1 (en) * | 2008-03-31 | 2011-08-23 | Symantec Corporation | Systems and methods for storing health information with computing-system backups |
US20090271152A1 (en) * | 2008-04-28 | 2009-10-29 | Alcatel | Load testing mechanism for server-based applications |
US8949187B1 (en) * | 2008-05-30 | 2015-02-03 | Symantec Corporation | Systems and methods for creating and managing backups based on health information |
US8271949B2 (en) * | 2008-07-31 | 2012-09-18 | International Business Machines Corporation | Self-healing factory processes in a software factory |
US20170070361A1 (en) | 2008-08-11 | 2017-03-09 | Ken Sundermeyer | Data model for home automation |
US8261342B2 (en) | 2008-08-20 | 2012-09-04 | Reliant Security | Payment card industry (PCI) compliant architecture and associated methodology of managing a service infrastructure |
US8145428B1 (en) * | 2008-09-29 | 2012-03-27 | QRI Group, LLC | Assessing petroleum reservoir reserves and potential for increasing ultimate recovery |
US9781148B2 (en) * | 2008-10-21 | 2017-10-03 | Lookout, Inc. | Methods and systems for sharing risk responses between collections of mobile communications devices |
PT104272A (en) * | 2008-12-01 | 2010-06-01 | Paulo Alexandre Cardoso | MULTIFUNCTIONAL STRUCTURES, SECTIONS OF TRAFFIC INFRASTRUCTURES INCLUDING THESE STRUCTURES AND MANAGEMENT PROCESS OF THESE SECTIONS |
US20100250310A1 (en) * | 2009-03-30 | 2010-09-30 | Michael Locherer | Monitoring organizational information for fast decision making |
JP5559306B2 (en) * | 2009-04-24 | 2014-07-23 | アルグレス・インコーポレイテッド | Enterprise information security management software for predictive modeling using interactive graphs |
US8880682B2 (en) * | 2009-10-06 | 2014-11-04 | Emc Corporation | Integrated forensics platform for analyzing IT resources consumed to derive operational and architectural recommendations |
US10019677B2 (en) * | 2009-11-20 | 2018-07-10 | Alert Enterprise, Inc. | Active policy enforcement |
US10027711B2 (en) * | 2009-11-20 | 2018-07-17 | Alert Enterprise, Inc. | Situational intelligence |
US20110131247A1 (en) * | 2009-11-30 | 2011-06-02 | International Business Machines Corporation | Semantic Management Of Enterprise Resourses |
WO2011082380A1 (en) * | 2009-12-31 | 2011-07-07 | Fiberlink Communications Corporation | Consolidated security application dashboard |
US9098333B1 (en) * | 2010-05-07 | 2015-08-04 | Ziften Technologies, Inc. | Monitoring computer process resource usage |
US10805331B2 (en) * | 2010-09-24 | 2020-10-13 | BitSight Technologies, Inc. | Information technology security assessment system |
US8539546B2 (en) * | 2010-10-22 | 2013-09-17 | Hitachi, Ltd. | Security monitoring apparatus, security monitoring method, and security monitoring program based on a security policy |
US8438275B1 (en) * | 2010-11-05 | 2013-05-07 | Amazon Technologies, Inc. | Formatting data for efficient communication over a network |
US8359016B2 (en) * | 2010-11-19 | 2013-01-22 | Mobile Iron, Inc. | Management of mobile applications |
US8869307B2 (en) * | 2010-11-19 | 2014-10-21 | Mobile Iron, Inc. | Mobile posture-based policy, remediation and access control for enterprise resources |
US9026862B2 (en) * | 2010-12-02 | 2015-05-05 | Robert W. Dreyfoos | Performance monitoring for applications without explicit instrumentation |
US8613086B2 (en) * | 2011-01-31 | 2013-12-17 | Bank Of America Corporation | Ping and scan of computer systems |
US8380838B2 (en) * | 2011-04-08 | 2013-02-19 | International Business Machines Corporation | Reduction of alerts in information technology systems |
US20170024827A1 (en) * | 2011-05-19 | 2017-01-26 | Aon Singapore Centre For Innovation Strategy And Management Pte., Ltd. | Dashboard interface, platform, and environment for supporting complex transactions and deriving insights therefrom |
DE102012007527A1 (en) * | 2011-05-27 | 2012-11-29 | BGW AG Management Advisory Group St. Gallen-Wien | Computer-assisted IP rights assessment process and system and method for establishing Intellectual Property Rights Valuation Index |
US9118702B2 (en) * | 2011-05-31 | 2015-08-25 | Bce Inc. | System and method for generating and refining cyber threat intelligence data |
EP2551773B1 (en) * | 2011-07-29 | 2024-03-06 | Tata Consultancy Services Ltd. | Data audit module for application software |
US20140207486A1 (en) | 2011-08-31 | 2014-07-24 | Lifeguard Health Networks, Inc. | Health management system |
US10031646B2 (en) * | 2011-09-07 | 2018-07-24 | Mcafee, Llc | Computer system security dashboard |
US8819491B2 (en) * | 2011-09-16 | 2014-08-26 | Tripwire, Inc. | Methods and apparatus for remediation workflow |
US9225772B2 (en) * | 2011-09-26 | 2015-12-29 | Knoa Software, Inc. | Method, system and program product for allocation and/or prioritization of electronic resources |
US8856936B2 (en) * | 2011-10-14 | 2014-10-07 | Albeado Inc. | Pervasive, domain and situational-aware, adaptive, automated, and coordinated analysis and control of enterprise-wide computers, networks, and applications for mitigation of business and operational risks and enhancement of cyber security |
US9058486B2 (en) * | 2011-10-18 | 2015-06-16 | Mcafee, Inc. | User behavioral risk assessment |
US9686293B2 (en) * | 2011-11-03 | 2017-06-20 | Cyphort Inc. | Systems and methods for malware detection and mitigation |
US20150088597A1 (en) * | 2011-12-02 | 2015-03-26 | Tailored Solutions and Consulting, Inc. | Method, system, and apparatus for managing corporate risk |
US20130208880A1 (en) * | 2011-12-22 | 2013-08-15 | Shoregroup, Inc. | Method and apparatus for evolutionary contact center business intelligence |
US8930758B2 (en) * | 2012-01-16 | 2015-01-06 | Siemens Aktiengesellschaft | Automated testing of mechatronic systems |
US8990392B1 (en) * | 2012-04-11 | 2015-03-24 | NCC Group Inc. | Assessing a computing resource for compliance with a computing resource policy regime specification |
CN102768635B (en) * | 2012-06-07 | 2015-02-11 | 北京奇虎科技有限公司 | Computer health index display equipment and computer health index display method |
US8862948B1 (en) * | 2012-06-28 | 2014-10-14 | Emc Corporation | Method and apparatus for providing at risk information in a cloud computing system having redundancy |
US20140081680A1 (en) * | 2012-08-08 | 2014-03-20 | Mastercard International Incorporated | Methods and systems for evaluating technology assets using data sets to generate evaluation outputs |
US8756698B2 (en) * | 2012-08-10 | 2014-06-17 | Nopsec Inc. | Method and system for managing computer system vulnerabilities |
US9258321B2 (en) * | 2012-08-23 | 2016-02-09 | Raytheon Foreground Security, Inc. | Automated internet threat detection and mitigation system and associated methods |
US20190394243A1 (en) * | 2012-09-28 | 2019-12-26 | Rex Wiig | System and method of a requirement, active compliance and resource management for cyber security application |
US11080718B2 (en) * | 2012-09-28 | 2021-08-03 | Rex Wiig | System and method of a requirement, active compliance and resource management for cyber security application |
US20190394242A1 (en) * | 2012-09-28 | 2019-12-26 | Rex Wig | System and method of a requirement, active compliance and resource management for cyber security application |
EP2901612A4 (en) * | 2012-09-28 | 2016-06-15 | Level 3 Communications Llc | Apparatus, system and method for identifying and mitigating malicious network threats |
US9058359B2 (en) * | 2012-11-09 | 2015-06-16 | International Business Machines Corporation | Proactive risk analysis and governance of upgrade process |
US20140137257A1 (en) * | 2012-11-12 | 2014-05-15 | Board Of Regents, The University Of Texas System | System, Method and Apparatus for Assessing a Risk of One or More Assets Within an Operational Technology Infrastructure |
WO2014093935A1 (en) * | 2012-12-16 | 2014-06-19 | Cloud 9 Llc | Vital text analytics system for the enhancement of requirements engineering documents and other documents |
US9294495B1 (en) | 2013-01-06 | 2016-03-22 | Spheric Security Solutions | System and method for evaluating and enhancing the security level of a network system |
US11843625B2 (en) * | 2013-01-06 | 2023-12-12 | Security Inclusion Now Usa Llc | System and method for evaluating and enhancing the security level of a network system |
US20140258305A1 (en) * | 2013-03-06 | 2014-09-11 | Tremus, Inc. D/B/A Trustfactors, Inc. | Systems and methods for providing contextual trust scores |
US9324119B2 (en) * | 2013-03-15 | 2016-04-26 | Alert Enterprise | Identity and asset risk score intelligence and threat mitigation |
US10075384B2 (en) * | 2013-03-15 | 2018-09-11 | Advanced Elemental Technologies, Inc. | Purposeful computing |
US9721086B2 (en) * | 2013-03-15 | 2017-08-01 | Advanced Elemental Technologies, Inc. | Methods and systems for secure and reliable identity-based computing |
US11226995B2 (en) * | 2013-03-15 | 2022-01-18 | International Business Machines Corporation | Generating business intelligence geospatial elements |
JP6114818B2 (en) * | 2013-04-05 | 2017-04-12 | 株式会社日立製作所 | Management system and management program |
CN104123603A (en) * | 2013-04-28 | 2014-10-29 | 成都勤智数码科技股份有限公司 | Service monitoring platform based on knowledge base |
US9558346B1 (en) * | 2013-05-28 | 2017-01-31 | EMC IP Holding Company LLC | Information processing systems with security-related feedback |
US9449176B1 (en) * | 2013-06-06 | 2016-09-20 | Phillip M. Adams | Computer system vulnerability analysis apparatus and method |
CN105556526B (en) * | 2013-09-30 | 2018-10-30 | 安提特软件有限责任公司 | Non-transitory machine readable media, the system and method that layering threatens intelligence are provided |
US9628356B2 (en) * | 2013-10-10 | 2017-04-18 | Ixia | Methods, systems, and computer readable media for providing user interfaces for specification of system under test (SUT) and network tap topology and for presenting topology specific test results |
WO2015054617A1 (en) * | 2013-10-11 | 2015-04-16 | Ark Network Security Solutions, Llc | Systems and methods for implementing modular computer system security solutions |
US9264494B2 (en) * | 2013-10-21 | 2016-02-16 | International Business Machines Corporation | Automated data recovery from remote data object replicas |
EP2871577B1 (en) * | 2013-11-06 | 2017-08-09 | Software AG | Complex event processing (CEP) based system for handling performance issues of a CEP system and corresponding method |
JP5946068B2 (en) * | 2013-12-17 | 2016-07-05 | インターナショナル・ビジネス・マシーンズ・コーポレーションInternational Business Machines Corporation | Computation method, computation apparatus, computer system, and program for evaluating response performance in a computer system capable of operating a plurality of arithmetic processing units on a computation core |
US9652464B2 (en) * | 2014-01-30 | 2017-05-16 | Nasdaq, Inc. | Systems and methods for continuous active data security |
US9009827B1 (en) * | 2014-02-20 | 2015-04-14 | Palantir Technologies Inc. | Security sharing system |
US20150309912A1 (en) * | 2014-04-24 | 2015-10-29 | Tu Nguyen | Electronics Recycling Retail Desktop Verification Device |
US20150317337A1 (en) * | 2014-05-05 | 2015-11-05 | General Electric Company | Systems and Methods for Identifying and Driving Actionable Insights from Data |
US9413780B1 (en) * | 2014-05-06 | 2016-08-09 | Synack, Inc. | Security assessment incentive method for promoting discovery of computer software vulnerabilities |
US9854057B2 (en) * | 2014-05-06 | 2017-12-26 | International Business Machines Corporation | Network data collection and response system |
US9503467B2 (en) * | 2014-05-22 | 2016-11-22 | Accenture Global Services Limited | Network anomaly detection |
US20150347933A1 (en) * | 2014-05-27 | 2015-12-03 | International Business Machines Corporation | Business forecasting using predictive metadata |
WO2015184123A1 (en) * | 2014-05-28 | 2015-12-03 | Siemens Schweiz Ag | System and method for fault analysis and prioritization |
US8881281B1 (en) * | 2014-05-29 | 2014-11-04 | Singularity Networks, Inc. | Application and network abuse detection with adaptive mitigation utilizing multi-modal intelligence data |
JP6387777B2 (en) * | 2014-06-13 | 2018-09-12 | 富士通株式会社 | Evaluation program, evaluation method, and evaluation apparatus |
US10212176B2 (en) * | 2014-06-23 | 2019-02-19 | Hewlett Packard Enterprise Development Lp | Entity group behavior profiling |
US9465550B1 (en) * | 2014-06-25 | 2016-10-11 | Emc Corporation | Techniques for improved service level objective fulfillment |
US9118714B1 (en) * | 2014-07-23 | 2015-08-25 | Lookingglass Cyber Solutions, Inc. | Apparatuses, methods and systems for a cyber threat visualization and editing user interface |
US9166999B1 (en) * | 2014-07-25 | 2015-10-20 | Fmr Llc | Security risk aggregation, analysis, and adaptive control |
US9472077B2 (en) * | 2014-08-01 | 2016-10-18 | Francis Joseph Coviello | Surveillance of a secure area |
US9817737B2 (en) * | 2014-09-16 | 2017-11-14 | Spirent Communications, Inc. | Systems and methods of building sequenceable test methodologies |
US20160092658A1 (en) * | 2014-09-25 | 2016-03-31 | Marianne LEENAERTS | Method of evaluating information technologies |
US9584536B2 (en) * | 2014-12-12 | 2017-02-28 | Fortinet, Inc. | Presentation of threat history associated with network activity |
EP3239839A4 (en) * | 2014-12-22 | 2018-08-22 | Nec Corporation | Operation management device, operation management method, and recording medium in which operation management program is recorded |
US9531757B2 (en) | 2015-01-20 | 2016-12-27 | Cisco Technology, Inc. | Management of security policies across multiple security products |
US20160226893A1 (en) * | 2015-01-30 | 2016-08-04 | Wipro Limited | Methods for optimizing an automated determination in real-time of a risk rating of cyber-attack and devices thereof |
US9928369B2 (en) * | 2015-02-09 | 2018-03-27 | Cisco Technologies, Inc. | Information technology vulnerability assessment |
WO2016148199A1 (en) * | 2015-03-16 | 2016-09-22 | 国立大学法人大阪大学 | Method for evaluating dual-task performance and system for evaluating dual-task performance |
US9336268B1 (en) * | 2015-04-08 | 2016-05-10 | Pearson Education, Inc. | Relativistic sentiment analyzer |
US20160306690A1 (en) * | 2015-04-20 | 2016-10-20 | S2 Technologies, Inc. | Integrated test design, automation, and analysis |
US9836598B2 (en) * | 2015-04-20 | 2017-12-05 | Splunk Inc. | User activity monitoring |
US10592673B2 (en) * | 2015-05-03 | 2020-03-17 | Arm Limited | System, device, and method of managing trustworthiness of electronic devices |
DK3292471T3 (en) * | 2015-05-04 | 2022-02-21 | Syed Kamran Hasan | METHOD AND DEVICE FOR MANAGING SECURITY IN A COMPUTER NETWORK |
US9983919B2 (en) * | 2015-05-19 | 2018-05-29 | The United States Of America, As Represented By The Secretary Of The Navy | Dynamic error code, fault location, and test and troubleshooting user experience correlation/visualization systems and methods |
US10042697B2 (en) * | 2015-05-28 | 2018-08-07 | Oracle International Corporation | Automatic anomaly detection and resolution system |
US9697355B1 (en) | 2015-06-17 | 2017-07-04 | Mission Secure, Inc. | Cyber security for physical systems |
US9787709B2 (en) * | 2015-06-17 | 2017-10-10 | Bank Of America Corporation | Detecting and analyzing operational risk in a network environment |
US11068827B1 (en) * | 2015-06-22 | 2021-07-20 | Wells Fargo Bank, N.A. | Master performance indicator |
US11282017B2 (en) * | 2015-07-11 | 2022-03-22 | RiskRecon Inc. | Systems and methods for monitoring information security effectiveness |
WO2017019871A1 (en) | 2015-07-28 | 2017-02-02 | Masterpeace Solutions Ltd. | Consistently configuring devices in close physical proximity |
US10198582B2 (en) * | 2015-07-30 | 2019-02-05 | IOR Analytics, LLC | Method and apparatus for data security analysis of data flows |
US10466914B2 (en) * | 2015-08-31 | 2019-11-05 | Pure Storage, Inc. | Verifying authorized access in a dispersed storage network |
US9699205B2 (en) * | 2015-08-31 | 2017-07-04 | Splunk Inc. | Network security system |
US10332116B2 (en) * | 2015-10-06 | 2019-06-25 | Netflix, Inc. | Systems and methods for fraudulent account detection and management |
US9367810B1 (en) * | 2015-10-13 | 2016-06-14 | PagerDuty, Inc. | Operations maturity model |
US10305922B2 (en) * | 2015-10-21 | 2019-05-28 | Vmware, Inc. | Detecting security threats in a local network |
US11025674B2 (en) * | 2015-10-28 | 2021-06-01 | Qomplx, Inc. | Cybersecurity profiling and rating using active and passive external reconnaissance |
US10673887B2 (en) * | 2015-10-28 | 2020-06-02 | Qomplx, Inc. | System and method for cybersecurity analysis and score generation for insurance purposes |
US11184401B2 (en) * | 2015-10-28 | 2021-11-23 | Qomplx, Inc. | AI-driven defensive cybersecurity strategy analysis and recommendation system |
US11968239B2 (en) * | 2015-10-28 | 2024-04-23 | Qomplx Llc | System and method for detection and mitigation of data source compromises in adversarial information environments |
US20220014560A1 (en) * | 2015-10-28 | 2022-01-13 | Qomplx, Inc. | Correlating network event anomalies using active and passive external reconnaissance to identify attack information |
US10917428B2 (en) * | 2015-10-28 | 2021-02-09 | Qomplx, Inc. | Holistic computer system cybersecurity evaluation and scoring |
US10365962B2 (en) * | 2015-11-16 | 2019-07-30 | Pearson Education, Inc. | Automated testing error assessment system |
US9703683B2 (en) * | 2015-11-24 | 2017-07-11 | International Business Machines Corporation | Software testing coverage |
US10346246B2 (en) * | 2015-11-30 | 2019-07-09 | International Business Machines Corporation | Recovering data copies in a dispersed storage network |
US10860448B2 (en) * | 2016-01-13 | 2020-12-08 | Micro Focus Llc | Determining a functional state of a system under test |
US10419458B2 (en) * | 2016-01-21 | 2019-09-17 | Cyiot Ltd | Distributed techniques for detecting atypical or malicious wireless communications activity |
US20170237752A1 (en) * | 2016-02-11 | 2017-08-17 | Honeywell International Inc. | Prediction of potential cyber security threats and risks in an industrial control system using predictive cyber analytics |
US10073753B2 (en) * | 2016-02-14 | 2018-09-11 | Dell Products, Lp | System and method to assess information handling system health and resource utilization |
EP3420700B8 (en) * | 2016-02-24 | 2022-05-18 | Mandiant, Inc. | Systems and methods for attack simulation on a production network |
US20170249644A1 (en) * | 2016-02-25 | 2017-08-31 | Mcs2, Llc | Methods and systems for storing and visualizing managed compliance plans |
US10536478B2 (en) | 2016-02-26 | 2020-01-14 | Oracle International Corporation | Techniques for discovering and managing security of applications |
US20170257304A1 (en) * | 2016-03-07 | 2017-09-07 | General Electric Company | Systems and methods for monitoring system performance and availability |
US11327475B2 (en) | 2016-05-09 | 2022-05-10 | Strong Force Iot Portfolio 2016, Llc | Methods and systems for intelligent collection and analysis of vehicle data |
US10419455B2 (en) | 2016-05-10 | 2019-09-17 | Allstate Insurance Company | Cyber-security presence monitoring and assessment |
CA2968710A1 (en) * | 2016-05-31 | 2017-11-30 | Valarie Ann Findlay | Security threat information gathering and incident reporting systems and methods |
US11012466B2 (en) * | 2016-07-13 | 2021-05-18 | Indrasoft, Inc. | Computerized system and method for providing cybersecurity detection and response functionality |
US10104114B2 (en) * | 2016-07-29 | 2018-10-16 | Rohde & Schwarz Gmbh & Co. Kg | Method and apparatus for testing a security of communication of a device under test |
US10339152B2 (en) * | 2016-08-29 | 2019-07-02 | International Business Machines Corporation | Managing software asset environment using cognitive distributed cloud infrastructure |
US10748074B2 (en) * | 2016-09-08 | 2020-08-18 | Microsoft Technology Licensing, Llc | Configuration assessment based on inventory |
US20180115464A1 (en) * | 2016-10-26 | 2018-04-26 | SignifAI Inc. | Systems and methods for monitoring and analyzing computer and network activity |
US10320849B2 (en) * | 2016-11-07 | 2019-06-11 | Bank Of America Corporation | Security enhancement tool |
US10122744B2 (en) * | 2016-11-07 | 2018-11-06 | Bank Of America Corporation | Security violation assessment tool to compare new violation with existing violation |
US20180137288A1 (en) * | 2016-11-15 | 2018-05-17 | ERPScan B.V. | System and method for modeling security threats to prioritize threat remediation scheduling |
US10209314B2 (en) | 2016-11-21 | 2019-02-19 | Battelle Energy Alliance, Llc | Systems and methods for estimation and prediction of battery health and performance |
EP3545418A4 (en) * | 2016-11-22 | 2020-08-12 | AON Global Operations PLC, Singapore Branch | Systems and methods for cybersecurity risk assessment |
US10841337B2 (en) * | 2016-11-28 | 2020-11-17 | Secureworks Corp. | Computer implemented system and method, and computer program product for reversibly remediating a security risk |
US20180150256A1 (en) * | 2016-11-29 | 2018-05-31 | Intel Corporation | Technologies for data deduplication in disaggregated architectures |
US10171510B2 (en) * | 2016-12-14 | 2019-01-01 | CyberSaint, Inc. | System and method for monitoring and grading a cybersecurity framework |
US11010260B1 (en) * | 2016-12-30 | 2021-05-18 | EMC IP Holding Company LLC | Generating a data protection risk assessment score for a backup and recovery storage system |
US10581896B2 (en) * | 2016-12-30 | 2020-03-03 | Chronicle Llc | Remedial actions based on user risk assessments |
US10237294B1 (en) * | 2017-01-30 | 2019-03-19 | Splunk Inc. | Fingerprinting entities based on activity in an information technology environment |
US10997532B2 (en) * | 2017-02-03 | 2021-05-04 | The Dun And Bradstreet Corporation | System and method for assessing and optimizing master data maturity |
US20180268339A1 (en) * | 2017-03-17 | 2018-09-20 | Hristo Tanev Malchev | Analytical system for performance improvement and forecasting |
KR20180106533A (en) | 2017-03-20 | 2018-10-01 | 장경애 | Data Value evaluation system through detailed analysis of data governance data |
US20190018729A1 (en) * | 2017-04-14 | 2019-01-17 | Microsoft Technology Licensing, Llc | Anomaly remediation using device analytics |
US10656987B1 (en) * | 2017-04-26 | 2020-05-19 | EMC IP Holding Company LLC | Analysis system and method |
US10084825B1 (en) * | 2017-05-08 | 2018-09-25 | Fortinet, Inc. | Reducing redundant operations performed by members of a cooperative security fabric |
US10587644B1 (en) * | 2017-05-11 | 2020-03-10 | Ca, Inc. | Monitoring and managing credential and application threat mitigations in a computer system |
AU2017279806B2 (en) * | 2017-05-29 | 2023-10-12 | Saltor Pty Ltd | Method and system for abnormality detection |
US10218697B2 (en) * | 2017-06-09 | 2019-02-26 | Lookout, Inc. | Use of device risk evaluation to manage access to services |
US10565537B1 (en) * | 2017-06-14 | 2020-02-18 | William Spencer Askew | Systems, methods, and apparatuses for optimizing outcomes in a multi-factor system |
US11005879B2 (en) * | 2017-06-29 | 2021-05-11 | Webroot Inc. | Peer device protection |
US11165800B2 (en) * | 2017-08-28 | 2021-11-02 | Oracle International Corporation | Cloud based security monitoring using unsupervised pattern recognition and deep learning |
KR101908353B1 (en) * | 2017-09-01 | 2018-10-22 | 대한민국(우정사업본부) | Company evaluation system and evaluation method therefor |
US10972445B2 (en) * | 2017-11-01 | 2021-04-06 | Citrix Systems, Inc. | Dynamic crypto key management for mobility in a cloud environment |
US20190138997A1 (en) * | 2017-11-06 | 2019-05-09 | Adp, Llc | Network Competitive Resource Allocation System |
US11354301B2 (en) * | 2017-11-13 | 2022-06-07 | LendingClub Bank, National Association | Multi-system operation audit log |
US20190164015A1 (en) | 2017-11-28 | 2019-05-30 | Sigma Ratings, Inc. | Machine learning techniques for evaluating entities |
US10691585B2 (en) * | 2017-11-30 | 2020-06-23 | The University Of Massachusetts | Efficient software testing system |
US10810502B2 (en) * | 2017-12-01 | 2020-10-20 | Sap Se | Computing architecture deployment configuration recommendation using machine learning |
US11294800B2 (en) * | 2017-12-07 | 2022-04-05 | The Johns Hopkins University | Determining performance of autonomy decision-making engines |
US20190182196A1 (en) * | 2017-12-13 | 2019-06-13 | Knowmail S.A.L LTD. | Handling Communication Messages Based on Urgency |
US10693897B2 (en) * | 2017-12-14 | 2020-06-23 | Facebook, Inc. | Behavioral and account fingerprinting |
US10989538B2 (en) * | 2017-12-15 | 2021-04-27 | Uatc, Llc | IMU data offset compensation for an autonomous vehicle |
US11431740B2 (en) * | 2018-01-02 | 2022-08-30 | Criterion Systems, Inc. | Methods and systems for providing an integrated assessment of risk management and maturity for an organizational cybersecurity/privacy program |
US10592938B2 (en) * | 2018-01-31 | 2020-03-17 | Aon Risk Consultants, Inc. | System and methods for vulnerability assessment and provisioning of related services and products for efficient risk suppression |
US11630758B2 (en) * | 2018-02-06 | 2023-04-18 | Siemens Aktiengesellschaft | Artificial intelligence enabled output space exploration for guided test case generation |
US20200401703A1 (en) * | 2018-02-14 | 2020-12-24 | New Context Services, Inc. | Security assessment platform |
US10684935B2 (en) * | 2018-03-16 | 2020-06-16 | Cisco Technology, Inc. | Deriving the shortest steps to reproduce a device failure condition |
US20190294536A1 (en) * | 2018-03-26 | 2019-09-26 | Ca, Inc. | Automated software deployment and testing based on code coverage correlation |
US20190294531A1 (en) * | 2018-03-26 | 2019-09-26 | Ca, Inc. | Automated software deployment and testing based on code modification and test failure correlation |
EP3785415B1 (en) * | 2018-04-23 | 2022-12-14 | Telefonaktiebolaget LM Ericsson (publ) | Apparatus and method for evaluating multiple aspects of the security for virtualized infrastructure in a cloud environment |
US20210173010A1 (en) * | 2018-05-16 | 2021-06-10 | Advantest Corporation | Diagnostic tool for traffic capture with known signature database |
US10250010B1 (en) * | 2018-05-17 | 2019-04-02 | Globalfoundries Inc. | Control of VCSEL-based optical communications system |
US11108798B2 (en) * | 2018-06-06 | 2021-08-31 | Reliaquest Holdings, Llc | Threat mitigation system and method |
US11283840B2 (en) * | 2018-06-20 | 2022-03-22 | Tugboat Logic, Inc. | Usage-tracking of information security (InfoSec) entities for security assurance |
US10917439B2 (en) * | 2018-07-16 | 2021-02-09 | Securityadvisor Technologies, Inc. | Contextual security behavior management and change execution |
US10873594B2 (en) * | 2018-08-02 | 2020-12-22 | Rohde & Schwarz Gmbh & Co. Kg | Test system and method for identifying security vulnerabilities of a device under test |
US20200053117A1 (en) * | 2018-08-07 | 2020-02-13 | Telesis Corporation | Method, system, and/or software for finding and addressing an information/data or related system's security risk, threat, vulnerability, or similar event, in a computing device or system |
US10812521B1 (en) * | 2018-08-10 | 2020-10-20 | Amazon Technologies, Inc. | Security monitoring system for internet of things (IOT) device environments |
US11797684B2 (en) * | 2018-08-28 | 2023-10-24 | Eclypsium, Inc. | Methods and systems for hardware and firmware security monitoring |
US11134087B2 (en) * | 2018-08-31 | 2021-09-28 | Forcepoint, LLC | System identifying ingress of protected data to mitigate security breaches |
US11750633B2 (en) * | 2018-09-27 | 2023-09-05 | Riskq, Inc. | Digital asset based cyber risk algorithmic engine, integrated cyber risk methodology and automated cyber risk management system |
US11263207B2 (en) * | 2018-10-11 | 2022-03-01 | Kyndryl, Inc. | Performing root cause analysis for information technology incident management using cognitive computing |
US11258827B2 (en) * | 2018-10-19 | 2022-02-22 | Oracle International Corporation | Autonomous monitoring of applications in a cloud environment |
US11093885B2 (en) * | 2019-01-22 | 2021-08-17 | International Business Machines Corporation | Operations augmented enterprise collaborative recommender engine |
EP3921811A4 (en) * | 2019-02-06 | 2023-03-15 | Foretellix Ltd. | Simulation and validation of autonomous vehicle system and components |
US11106789B2 (en) * | 2019-03-05 | 2021-08-31 | Microsoft Technology Licensing, Llc | Dynamic cybersecurity detection of sequence anomalies |
US10877875B2 (en) * | 2019-03-05 | 2020-12-29 | Verizon Patent And Licensing Inc. | Systems and methods for automated programmatic test generation and software validation |
US10491603B1 (en) * | 2019-03-07 | 2019-11-26 | Lookout, Inc. | Software component substitution based on rule compliance for computing device context |
US10785230B1 (en) * | 2019-03-07 | 2020-09-22 | Lookout, Inc. | Monitoring security of a client device to provide continuous conditional server access |
US11818129B2 (en) * | 2019-03-07 | 2023-11-14 | Lookout, Inc. | Communicating with client device to determine security risk in allowing access to data of a service provider |
US11425158B2 (en) * | 2019-03-19 | 2022-08-23 | Fortinet, Inc. | Determination of a security rating of a network element |
WO2020205507A1 (en) * | 2019-04-01 | 2020-10-08 | Raytheon Company | Adaptive, multi-layer enterprise data protection & resiliency platform |
US11360190B2 (en) * | 2019-04-20 | 2022-06-14 | The United States Of America, As Represented By The Secretary Of The Navy | Hardware in the loop simulation and test system that includes a phased array antenna simulation system providing dynamic range and angle of arrival signals simulation for input into a device under test (DUT) that includes a phased array signal processing system along with related methods |
US11587101B2 (en) * | 2019-05-28 | 2023-02-21 | DeepRisk.ai, LLC | Platform for detecting abnormal entities and activities using machine learning algorithms |
US11516228B2 (en) * | 2019-05-29 | 2022-11-29 | Kyndryl, Inc. | System and method for SIEM rule sorting and conditional execution |
US11757907B1 (en) * | 2019-06-18 | 2023-09-12 | Cytellix Corporation | Cybersecurity threat intelligence and remediation system |
US11457009B2 (en) * | 2019-07-17 | 2022-09-27 | Infiltron Holdings, Inc. | Systems and methods for securing devices in a computing environment |
US11334410B1 (en) * | 2019-07-22 | 2022-05-17 | Intuit Inc. | Determining aberrant members of a homogenous cluster of systems using external monitors |
US20210035116A1 (en) * | 2019-07-31 | 2021-02-04 | Bidvest Advisory Services (Pty) Ltd | Platform for facilitating an automated it audit |
US11023511B1 (en) * | 2019-07-31 | 2021-06-01 | Splunk Inc. | Mobile device composite interface for dual-sourced incident management and monitoring system |
US12039546B2 (en) * | 2019-09-13 | 2024-07-16 | Referentia Systems Incorporated | Systems and methods for managing and monitoring continuous attestation of security requirements |
US11755744B2 (en) * | 2019-11-07 | 2023-09-12 | Oracle International Corporation | Application programming interface specification inference |
US11874929B2 (en) * | 2019-12-09 | 2024-01-16 | Accenture Global Solutions Limited | Method and system for automatically identifying and correcting security vulnerabilities in containers |
US20210200595A1 (en) * | 2019-12-31 | 2021-07-01 | Randori Inc. | Autonomous Determination of Characteristic(s) and/or Configuration(s) of a Remote Computing Resource to Inform Operation of an Autonomous System Used to Evaluate Preparedness of an Organization to Attacks or Reconnaissance Effort by Antagonistic Third Parties |
US11647037B2 (en) * | 2020-01-30 | 2023-05-09 | Hewlett Packard Enterprise Development Lp | Penetration tests of systems under test |
US11307975B2 (en) * | 2020-02-20 | 2022-04-19 | International Business Machines Corporation | Machine code analysis for identifying software defects |
US11194704B2 (en) * | 2020-03-16 | 2021-12-07 | International Business Machines Corporation | System testing infrastructure using combinatorics |
US11418531B2 (en) * | 2020-03-18 | 2022-08-16 | Cyberlab Inc. | System and method for determining cybersecurity rating and risk scoring |
US10936462B1 (en) * | 2020-04-29 | 2021-03-02 | Split Software, Inc. | Systems and methods for real-time application anomaly detection and configuration |
US12061232B2 (en) * | 2020-09-21 | 2024-08-13 | Tektronix, Inc. | Margin test data tagging and predictive expected margins |
US11528152B2 (en) * | 2020-10-12 | 2022-12-13 | Raytheon Company | Watermarking for electronic device tracking or verification |
US20220156372A1 (en) * | 2020-11-13 | 2022-05-19 | Sophos Limited | Cybersecurity system evaluation and configuration |
US11379352B1 (en) * | 2020-12-15 | 2022-07-05 | International Business Machines Corporation | System testing infrastructure with hidden variable, hidden attribute, and hidden value detection |
US20220211278A1 (en) * | 2021-01-07 | 2022-07-07 | Toyota Motor Engineering & Manufacturing North America, Inc. | Emotion-aware smart refrigerator |
US20220222231A1 (en) * | 2021-01-13 | 2022-07-14 | Coupang Corp. | Computerized systems and methods for using artificial intelligence to optimize database parameters |
US11681964B2 (en) * | 2021-03-15 | 2023-06-20 | Cerner Innovation, Inc. | System and method for optimizing design, workflows, performance, and configurations based on design elements |
US11558412B1 (en) * | 2021-03-29 | 2023-01-17 | Splunk Inc. | Interactive security visualization of network entity data |
US11544138B2 (en) * | 2021-05-27 | 2023-01-03 | Dell Products L.P. | Framework for anomaly detection and resolution prediction |
US11734141B2 (en) * | 2021-07-14 | 2023-08-22 | International Business Machines Corporation | Dynamic testing of systems |
- 2020
- 2020-12-21 US US17/128,471 patent/US20210294713A1/en active Pending
- 2020-12-21 US US17/247,710 patent/US11789833B2/en active Active
- 2020-12-21 US US17/128,491 patent/US11960373B2/en active Active
- 2020-12-21 US US17/247,706 patent/US11775405B2/en active Active
- 2020-12-21 US US17/247,714 patent/US20210294904A1/en active Pending
- 2020-12-21 US US17/247,705 patent/US11645176B2/en active Active
- 2020-12-21 US US17/247,702 patent/US11954003B2/en active Active
- 2020-12-21 US US17/128,509 patent/US11698845B2/en active Active
- 2020-12-21 US US17/247,701 patent/US11892924B2/en active Active
- 2021
- 2021-03-11 WO PCT/US2021/021898 patent/WO2021188356A1/en active Application Filing
- 2021-03-31 US US17/301,352 patent/US11775406B2/en active Active
- 2021-03-31 US US17/301,349 patent/US11994968B2/en active Active
- 2021-03-31 US US17/301,361 patent/US20210295232A1/en active Pending
- 2021-03-31 US US17/219,561 patent/US11775404B2/en active Active
- 2021-03-31 US US17/301,356 patent/US20210297432A1/en active Pending
- 2021-03-31 US US17/301,368 patent/US11704212B2/en active Active
- 2021-03-31 US US17/219,655 patent/US20210297439A1/en active Pending
- 2021-06-17 US US17/350,113 patent/US11704213B2/en active Active
- 2021-06-25 US US17/304,783 patent/US11693751B2/en active Active
- 2021-06-25 US US17/304,771 patent/US20210329018A1/en active Pending
- 2021-06-28 US US17/360,815 patent/US11853181B2/en active Active
- 2021-06-29 US US17/305,018 patent/US11734140B2/en active Active
- 2021-06-29 US US17/305,005 patent/US11734139B2/en active Active
- 2021-06-30 US US17/305,086 patent/US11947434B2/en active Active
- 2021-07-30 US US17/444,163 patent/US20210357511A1/en active Pending
- 2022
- 2022-11-29 US US18/059,544 patent/US20230090011A1/en active Pending
- 2023
- 2023-01-31 US US18/162,540 patent/US11899548B2/en active Active
- 2023-03-30 US US18/193,523 patent/US11966308B2/en active Active
- 2023-04-30 US US18/141,444 patent/US12066909B2/en active Active
- 2023-05-11 US US18/315,620 patent/US20230289269A1/en active Pending
- 2023-05-30 US US18/203,171 patent/US20230315597A1/en active Pending
- 2023-06-13 US US18/334,042 patent/US12072779B2/en active Active
- 2023-11-13 US US18/507,520 patent/US20240078161A1/en active Pending
- 2024
- 2024-04-26 US US18/646,896 patent/US20240281349A1/en active Pending
Patent Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040088601A1 (en) * | 2002-10-31 | 2004-05-06 | General Electric Company | Method, system and program product for establishing a self-diagnosing and self-repairing automated system |
US20070005767A1 (en) * | 2005-07-04 | 2007-01-04 | Sampige Sahana P | Method and apparatus for automated testing of a utility computing system |
US20090313198A1 (en) * | 2008-06-17 | 2009-12-17 | Yutaka Kudo | Methods and systems for performing root cause analysis |
US20140365637A1 (en) * | 2013-06-11 | 2014-12-11 | Vmware, Inc. | Methods and systems for reducing metrics used to monitor resources |
US20150089304A1 (en) * | 2013-09-20 | 2015-03-26 | Oracle International Corporation | User-directed logging and auto-correction |
US20160239363A1 (en) * | 2015-02-13 | 2016-08-18 | Fujitsu Limited | Analysis device and information processing system |
US20180107544A1 (en) * | 2015-03-31 | 2018-04-19 | International Business Machines Corporation | Selecting a storage error abatement alternative in a dispersed storage network |
US20160335166A1 (en) * | 2015-05-14 | 2016-11-17 | Cisco Technology, Inc. | Smart storage recovery in a distributed storage system |
US20180060452A1 (en) * | 2016-08-23 | 2018-03-01 | Ca, Inc. | System and Method for Generating System Testing Data |
US20200301798A1 (en) * | 2017-12-08 | 2020-09-24 | Huawei Technologies Co., Ltd. | Fault injection system and method of fault injection |
WO2021188356A1 (en) * | 2020-03-20 | 2021-09-23 | 5thColumn LLC | Evaluation rating of a system or portion thereof |
US11314576B2 (en) * | 2020-03-31 | 2022-04-26 | Accenture Global Solutions Limited | System and method for automating fault detection in multi-tenant environments |
Non-Patent Citations (1)
Title |
---|
A. Erdmann and D. Weber, "PerDaCol and PerfAnalysis - A Tool Set for Performance Measurement Data Collection and Evaluation of Real-Time Communication Systems," 2009 Sixth International Conference on the Quantitative Evaluation of Systems, Budapest, Hungary, 2009, pp. 213-214 (Year: 2009) * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11398960B1 (en) * | 2021-04-09 | 2022-07-26 | EMC IP Holding Company LLC | System and method for self-healing of upgrade issues on a customer environment |
US11625292B2 (en) | 2021-05-27 | 2023-04-11 | EMC IP Holding Company LLC | System and method for self-healing of upgrade issues on a customer environment and synchronization with a production host environment |
US20230024602A1 (en) * | 2021-07-21 | 2023-01-26 | Box, Inc. | Identifying and resolving conflicts in access permissions during migration of data and user accounts |
CN115204527A (en) * | 2022-09-15 | 2022-10-18 | 万链指数(青岛)信息科技有限公司 | Enterprise operation health index evaluation system based on big data |
Also Published As
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11892924B2 (en) | | Generation of an issue detection evaluation regarding a system aspect of a system |
US20220035929A1 (en) | | Evaluating a system aspect of a system |
US20210406385A1 (en) | | Analysis unit for analyzing a system or portion thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: 5THCOLUMN LLC, ILLINOIS
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HICKS, RAYMOND;PISANI, RYAN MICHAEL;MCNEELA, THOMAS JAMES;REEL/FRAME:055813/0050
Effective date: 20201218
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
AS | Assignment |
Owner name: 5C PARTNERS LLC, ILLINOIS
Free format text: CHANGE OF NAME;ASSIGNOR:5THCOLUMN LLC;REEL/FRAME:062261/0809
Effective date: 20201211

Owner name: UNCOMMONX INC., ILLINOIS
Free format text: CHANGE OF NAME;ASSIGNOR:5THCOLUMN INC.;REEL/FRAME:062266/0364
Effective date: 20201211

Owner name: 5THCOLUMN INC., ILLINOIS
Free format text: CONTRIBUTION AGREEMENT;ASSIGNOR:5C PARTNERS LLC;REEL/FRAME:062266/0269
Effective date: 20201211
|
AS | Assignment |
Owner name: UNCOMMONX INC., ILLINOIS
Free format text: CHANGE OF NAME;ASSIGNOR:5THCOLUMN INC.;REEL/FRAME:062267/0546
Effective date: 20210806
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
AS | Assignment |
Owner name: RN COLUMN INVESTORS LLC, AS COLLATERAL AGENT, ILLINOIS
Free format text: SECURITY INTEREST;ASSIGNOR:UNCOMMONX INC.;REEL/FRAME:064576/0516
Effective date: 20230811
|
AS | Assignment |
Owner name: SW7, LLC, AS COLLATERAL AGENT, ILLINOIS
Free format text: INTELLECTUAL PROPERTY SECURITY AGREEMENT;ASSIGNOR:UNCOMMONX INC.;REEL/FRAME:064659/0492
Effective date: 20230811
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: ADVISORY ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: ADVISORY ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: ADVISORY ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |