US20230110815A1 - AI platform with customizable virtue scoring models and methods for use therewith - Google Patents
- Publication number: US20230110815A1 (application US17/820,398)
- Authority: US (United States)
- Prior art keywords: data, virtue, model, score, machine
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06N20/00—Machine learning
- G06Q10/06393—Score-carding, benchmarking or key performance indicator [KPI] analysis
- G06F9/451—Execution arrangements for user interfaces
- G06N5/045—Explanation of inference; Explainable artificial intelligence [XAI]; Interpretable artificial intelligence
Definitions
- the present disclosure relates to processing systems and applications used in the development, analysis and/or use of artificial intelligence models or other content.
- FIG. 1 A presents a block diagram representation of an example system.
- FIG. 1 B presents a block diagram representation of an example artificial intelligence (AI) development platform.
- FIG. 1 C presents a block diagram representation of an example system.
- FIG. 1 D presents a block diagram representation of an example content analysis platform.
- FIG. 1 E presents a block diagram representation of an example client device.
- FIG. 2 A presents a flowchart representation of an example method.
- FIG. 2 B presents a flowchart representation of an example method.
- FIG. 2 C presents a flowchart representation of an example method.
- FIG. 2 D presents a flowchart representation of an example method.
- FIG. 3 A presents a block diagram representation of an example AI auto detection model.
- FIG. 3 B presents a block diagram representation of an example auto-mapping function.
- FIG. 3 C presents a block diagram representation of an example virtue scoring model.
- FIG. 3 D presents a block diagram representation of an example survey creation widget.
- FIG. 3 E presents a block diagram representation of an example of control panel generation tools.
- FIG. 3 F presents a pictorial representation of an example of a content analysis control panel.
- FIGS. 4 A- 4 Y present graphical diagram representations of example screen displays or portions thereof.
- FIGS. 5 A- 5 D present graphical diagram representations of example screen displays or portions thereof.
- FIGS. 6 A- 6 F present graphical diagram representations of example screen displays or portions thereof.
- FIG. 1 A presents a block diagram representation of an example system in accordance with various embodiments.
- a system 850 is presented that includes an AI development platform 800 that communicates with client devices 825 via a network 115 .
- the network 115 can be the Internet or other wide area or local area network, either public or private.
- the client devices 825 can be computing devices of users such as AI developers or administrators of databases, social media platforms or other sources of AI or media content.
- IBM has AI Fairness 360 for bias.
- IBM also has AI Explainability 360 for increasing transparency.
- Audit-AI offers statistical bias detection.
- LIME provides software for explaining and visualizing model predictions, which can help increase fairness.
- SHAP uses game theory to explain the output of black-box models.
- XAI tools address dynamic systems. The problem is that most AI developers do not want to switch from one platform or toolkit to another, and another again.
- the AI development platform 800 and system 850 make these technological improvements to computer technology by reworking the AI infrastructure from the ground up, building AI ethics into the work experience, and streamlining the process of achieving safe and effective algorithms for ML developers.
- AI development platform 800 provides a “one stop shop” for building robust and certifiable AI systems.
- while the primary goal of the AI development platform 800 is to provide a software as a service (SaaS) platform to an ethical AI community, it may be used in conjunction with social media platforms such as Instagram, Facebook, LinkedIn, GitHub, etc.
- This platform could also be used by AI ethicists to audit their own systems of AI development. Users can use the framework and publicly post their decisions along the way for human in the loop feedback from a community through the posting of problems, questions, reviews, etc.
- the systems described herein improve computer technology by providing a user interface with many new features and combinations that improve the user experience, increase user efficiency and generate more accurate, more robust and more virtuous results.
- the AI development platform 800 includes:
- the AI development platform 800 facilitates the development of a training dataset associated with at least one of the plurality of client devices 825 via dataset development tools 802 .
- the resulting dataset can be stored, for example, in a database 819 associated with the AI development platform 800 .
- the AI development platform 800 also provides access to a plurality of auto machine learning tools 804, such as DataRobot, H2O.ai and/or other auto machine learning tools to facilitate the development of an AI model.
- the AI development platform 800 includes a set of control panel generation tools 806 that facilitate the generation and user-customization of a graphical user interface (GUI) based content analysis control panel.
- the AI development platform 800 also includes a plurality of AI analysis tools/widgets 808 that implement, for example, auto detection and mapping tools such as AI models, statistical functions or other AI or functions that analyze input datasets to automatically identify and/or map data associated with protected attributes, key performance indicators and/or other metrics.
- AI analysis tools/widgets 808 can also include a plurality of standard virtue scoring models that each generate a corresponding virtue score.
- Such standard virtue scoring models include a responsibility model, an equitability (or bias) model, a reliability model, an explainability model, a robustness model, a traceability model and/or other models that generate virtue scores such as a responsibility score, an equitability (or bias) score, a reliability score, an explainability score, and/or other morality or virtue score.
- the AI analysis tools/widgets 808 can include tools to facilitate the generation of one or more virtue scoring models, such as ML or other AI models that are generated based on survey data and the collection of corresponding survey results.
- the AI analysis tools/widgets 808 can include survey widgets and other tools to facilitate the generation of user-customized virtue scoring models that can differ from each of the standard virtue scoring models, and that are implemented via ML or other AI models that are generated based on user-customized survey data and the collection of corresponding survey results.
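- As a loose illustration (not taken from the patent), the sketch below shows how standard and user-customized virtue scoring models might sit behind a common scoring interface; the class and function names are hypothetical.

```python
# Minimal sketch, assuming each virtue scoring model exposes a common
# score(content) -> float interface; all names here are illustrative only.
from typing import Callable, Dict

VirtueModel = Callable[[dict], float]  # content features -> score in [0, 1]

class VirtueModelRegistry:
    """Holds standard models plus any user-customized models."""

    def __init__(self) -> None:
        self._models: Dict[str, VirtueModel] = {}

    def register(self, name: str, model: VirtueModel) -> None:
        self._models[name] = model

    def score_all(self, content: dict) -> Dict[str, float]:
        return {name: model(content) for name, model in self._models.items()}

registry = VirtueModelRegistry()
registry.register("equitability", lambda c: 1.0 - c.get("bias_estimate", 0.0))
registry.register("explainability", lambda c: c.get("explained_fraction", 0.0))
print(registry.score_all({"bias_estimate": 0.12, "explained_fraction": 0.8}))
```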
- the AI development platform 800 also provides access to a version control repository 812 , such as a Git repository or other version control system for storing and managing a plurality of versions of the training dataset and the AI model.
- the AI development platform 800 also provides access to one or more machine learning management tools 810 to perform other management operations associated with the AI model, training dataset, etc.
- the content analysis control panel generated via the set of control panel generation tools 806 operates in conjunction with the AI analysis tools/widgets 808 to provide a graphical user interface that aids the user by gathering and presenting AI data and/or other content for analysis, the creation of custom virtue scoring models, the selection of particular virtue scoring models (either custom or preset) to be used, and the presentation of virtue scores and other analysis results.
- the content analysis control panel operates via the control panel generation tools 806 and associated AI analysis tools/widgets 808 to:
- the AI development platform 800 operates to perform operations that include:
- while the learning and collaboration subsystem 811, the platform access subsystem 813, subscription and billing subsystem 815, the privacy management system 817 and the database 819, the dataset development tools 802, AutoML tools 804, control panel generation tools 806, AI analysis tools/widgets 808, ML management tools 810 and the version control repository 812 are shown as being internal to the AI development platform 800, in other examples any subset of these elements can be implemented external to the AI development platform 800 and coupled to the other components via the network 115. Furthermore, the AI development platform 800 can be implemented in a cloud computing configuration with any or all of these elements implemented within the cloud.
- FIG. 1 B presents a block diagram representation of an AI development platform 800 in accordance with various embodiments.
- the AI development platform 800 includes a network interface 820 such as a 3G, 4G, 5G or other cellular wireless transceiver, a Bluetooth transceiver, a WiFi transceiver, UltraWideBand transceiver, WIMAX transceiver, ZigBee transceiver or other wireless interface, a Universal Serial Bus (USB) interface, an IEEE 1394 Firewire interface, an Ethernet interface or other wired interface and/or other network card or modem for communicating via the network 115.
- the AI development platform 800 also includes a processing module 830 and memory module 840 that stores an operating system (O/S) 844 such as an Apple, Unix, Linux or Microsoft operating system or other operating system, the learning and collaboration subsystem 811 , the platform access subsystem 813 , subscription and billing subsystem 815 , the privacy management system 817 and the database 819 , the dataset development tools 802 , AutoML tools 804 , control panel generation tools 806 , AI analysis tools/widgets 808 , ML management tools 810 and the version control repository 812 .
- the O/S 844 , the learning and collaboration subsystem 811 , the platform access subsystem 813 , subscription and billing subsystem 815 , the privacy management system 817 and the database 819 , the dataset development tools 802 , AutoML tools 804 , control panel generation tools 806 , AI analysis tools/widgets 808 , ML management tools 810 and the version control repository 812 each include operational instructions that, when executed by the processing module 830 , cooperate to configure the processing module 830 into a special purpose device to perform the particular functions of the AI development platform 800 described herein.
- the AI development platform 800 may include a user interface (I/F) 862 such as a display device, touch screen, key pad, touch pad, joy stick, thumb wheel, a mouse, one or more buttons, a speaker, a microphone, an accelerometer, gyroscope or other motion or position sensor, video camera or other interface devices that provide information to a user of the AI development platform 800 and that generate data in response to the user's interaction with AI development platform 800 .
- the processing module 830 can be implemented via a single processing device or a plurality of processing devices.
- processing devices can include a microprocessor, micro-controller, digital signal processor, microcomputer, central processing unit, quantum computing device, field programmable gate array, programmable logic device, state machine, logic circuitry, analog circuitry, digital circuitry, and/or any device that manipulates signals (analog and/or digital) based on operational instructions that are stored in a memory, such as memory 840 .
- the memory module 840 can include a hard disc drive or other disc drive, read-only memory, random access memory, volatile memory, non-volatile memory, static memory, dynamic memory, flash memory, cache memory, and/or any device that stores digital information.
- when the processing module 830 implements one or more of its functions via a state machine, analog circuitry, digital circuitry, and/or logic circuitry, the memory storing the corresponding operational instructions may be embedded within, or external to, the circuitry comprising the state machine, analog circuitry, digital circuitry, and/or logic circuitry. While a particular bus architecture is presented that includes a single bus 860, other architectures are possible including additional data buses and/or direct connectivity between one or more elements. Further, the AI development platform 800 can include one or more additional elements that are not specifically shown.
- FIG. 1 C presents a block diagram representation of an example system.
- a content analysis system 865 is shown that includes several elements of the AI development platform 800 that are referred to by common reference numerals.
- FIG. 1 D presents a block diagram representation of an example content analysis platform 875 that includes several elements of the AI development platform 800 that are referred to by common reference numerals.
- the content analysis system 865 includes content analysis tools/widgets 808′ that include the same or similar tools to the AI analysis tools/widgets 808, but that operate on media content or other content data, whether AI generated or not.
- FIG. 1 E presents a block diagram representation of an example client device in accordance with various embodiments.
- a client device 825 is presented that includes a network interface 220 such as a 3G, 4G, 5G or other cellular wireless transceiver, a Bluetooth transceiver, a WiFi transceiver, UltraWideBand transceiver, WIMAX transceiver, ZigBee transceiver or other wireless interface, a Universal Serial Bus (USB) interface, an IEEE 1394 Firewire interface, an Ethernet interface or other wired interface and/or other network card or modem for communicating via network 115.
- the client device 825 also includes a processing module 230 and memory module 240 that stores an operating system (O/S) 244 such as an Apple, Unix, Linux or Microsoft operating system or other operating system, training data 120 , and one or more gaming applications 248 .
- the O/S 244 and the gaming application(s) 248 each include operational instructions that, when executed by the processing module 230, cooperate to configure the processing module into a special purpose device to perform the particular functions of the client device 825 described herein.
- the client device 825 also includes a user interface (I/F) 262 such as a display device, touch screen, key pad, touch pad, joy stick, thumb wheel, a mouse, one or more buttons, a speaker, a microphone, an accelerometer, gyroscope or other motion or position sensor, video camera or other interface devices that provide information to a user of the client device 825 and that generate data in response to the user's interaction with the client device 825 .
- the processing module 230 can be implemented via a single processing device or a plurality of processing devices.
- processing devices can include a microprocessor, micro-controller, digital signal processor, microcomputer, central processing unit, quantum computing device, field programmable gate array, programmable logic device, state machine, logic circuitry, analog circuitry, digital circuitry, and/or any device that manipulates signals (analog and/or digital) based on operational instructions that are stored in a memory, such as memory 240 .
- the memory module 240 can include a hard disc drive or other disc drive, read-only memory, random access memory, volatile memory, non-volatile memory, static memory, dynamic memory, flash memory, cache memory, and/or any device that stores digital information.
- when the processing device implements one or more of its functions via a state machine, analog circuitry, digital circuitry, and/or logic circuitry, the memory storing the corresponding operational instructions may be embedded within, or external to, the circuitry comprising the state machine, analog circuitry, digital circuitry, and/or logic circuitry.
- while a particular bus architecture is presented that includes a single bus 260, other architectures are possible including additional data buses and/or direct connectivity between one or more elements.
- the client device 825 can include one or more additional elements that are not specifically shown.
- the client device 825 operates, via network interface 220 and network 115, in conjunction with the AI development platform 800 and/or content analysis platform 875.
- the client device 825 operates to display a graphical user interface, such as a content analysis control panel or other user interface.
- the client device 825 displays a content analysis control panel based on content analysis control panel data generated by either the AI development platform 800 or the content analysis platform 875 and, in particular, the graphical user interface can display one or more screen displays based on data generated by the AI development platform 800 and/or content analysis platform 875.
- the graphical user interface can operate in response to interactions by a user to generate input data that is sent to the AI development platform 800 and/or content analysis platform 875 to control the operation of the AI development platform 800 and/or content analysis platform 875 and/or to provide other input.
- FIG. 2 A presents a flowchart representation of an example method in accordance with various embodiments.
- a method 600 is presented for use with any of the functions and features discussed in conjunction with FIGS. 1 A- 1 E .
- a system comprising a network interface configured to communicate via a network, at least one processor and a non-transitory machine-readable storage medium can store operational instructions that, when executed by the at least one processor, cause the at least one processor to perform operations that include any of the method steps described below.
- Step 602 includes providing, via a system that includes a processor and a network interface, an AI development platform that includes: a platform access subsystem that provides secure access to the AI development platform to a plurality of client devices via the network interface; a learning and collaboration subsystem that provides a network-based forum that facilitates a collaborative development of machine learning tools via the plurality of client devices and that provides access to a library of AI tutorials and a database of AI news; a subscription and billing subsystem that controls access to the AI development platform via each of the plurality of client devices in conjunction with subscription information associated with each of the plurality of client devices, and that further generates billing information associated with each of the plurality of client devices in accordance with the subscription information; and a privacy management system that protects the privacy of machine learning development data associated with each of the plurality of client devices.
- Step 604 includes facilitating, via the AI development platform, the development of a training dataset associated with at least one of the plurality of client devices.
- Step 606 includes providing, via the AI development platform, access to a plurality of auto machine learning tools to facilitate the development of an AI model.
- Step 608 includes providing, via the AI development platform, access to a plurality of AI analysis widgets to facilitate the evaluation of the AI model, wherein the plurality of AI analysis widgets include a plurality of virtue scoring models that predict virtue scores for the AI model associated with the plurality of virtues.
- Step 610 includes providing, via the AI development platform, access to a version control repository for storing a plurality of versions of the training dataset and the AI model.
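- A brief sketch of how a platform might bundle the resources named in steps 602 through 610 is shown below; the class and method names are hypothetical stand-ins rather than the patent's implementation.

```python
# Illustrative composition only; the subsystem and tool names mirror the
# description above, but this class and its methods are invented examples.
from dataclasses import dataclass, field

@dataclass
class AIDevelopmentPlatform:
    auto_ml_tools: list = field(default_factory=lambda: ["DataRobot", "H2O.ai"])
    analysis_widgets: list = field(default_factory=list)   # virtue scoring models, etc.
    version_control: dict = field(default_factory=dict)    # dataset/model versions

    def store_version(self, name: str, artifact: object) -> None:
        """Store a new version of a training dataset or AI model (step 610)."""
        self.version_control.setdefault(name, []).append(artifact)

platform = AIDevelopmentPlatform()
platform.store_version("training_dataset", {"rows": 10_000, "version": 1})
platform.store_version("ai_model", {"algo": "gradient_boosting", "version": 1})
print(list(platform.version_control))
```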
- FIG. 2 B presents a flowchart representation of an example method in accordance with various embodiments.
- a method 620 is presented for use with any of the functions and features discussed in conjunction with FIGS. 1 A- 1 E and/or the method of FIG. 2 A .
- a system comprising a network interface configured to communicate via a network, at least one processor and a non-transitory machine-readable storage medium can store operational instructions that, when executed by the at least one processor, cause the at least one processor to perform operations that include any of the method steps described below.
- Step 622 includes generating, via a machine that includes at least one processor and a non-transitory machine-readable storage medium and utilizing a graphical user interface, a content analysis control panel.
- Step 624 includes receiving, via the machine, customization data that indicates a plurality of virtue scoring models, and presentation parameters associated with the plurality of virtue scoring models.
- Step 626 includes receiving, via the machine, content data.
- Step 628 includes generating, via the machine, predicted virtue score data associated with the content data for each of the plurality of virtue scoring models.
- Step 630 includes facilitating display, via the content analysis control panel and in accordance with the customization data, of the predicted virtue score data associated with the content data for each of the plurality of virtue scoring models.
- the plurality of virtue scoring models include a plurality of artificial intelligence (AI) models that are each trained based on survey data to generate portions of the predicted virtue score data indicating a corresponding one of a plurality of scores.
- the plurality of AI models includes a responsibility model and the plurality of scores includes a responsibility score that is based on an amount the content data addresses legal or ethical principles.
- the plurality of AI models includes an equitability model and the plurality of scores includes an equitability score that is based on an amount of bias in the content data.
- the plurality of AI models includes a reliability model and the plurality of scores includes a reliability score that indicates variations in others of the plurality of scores.
- the plurality of AI models includes an explainability model and the plurality of scores includes an explainability score associated with the content data.
- the plurality of AI models includes a morality model and the plurality of scores includes a morality score associated with the content data.
- the method can further include generating improvement data associated with at least one of the plurality of scores.
- the content data is an Artificial Intelligence (AI) model.
- the presentation parameters include a customized selection of at least one of: at least one statistic, at least one chart, or at least one graph.
- the method can further include displaying, via the content analysis control panel and in accordance with the customization data, at least one of: at least one protected attribute, or at least one key performance indicator.
- the method can further include facilitating selection of the content data from at least one of: an AI model, or a content source.
- the method can further include generating, based on user input, survey data corresponding to a survey; collecting survey results data in response to the survey; and facilitating generation of a custom virtue scoring model of the plurality of virtue scoring models.
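- The following sketch (with invented names and data) shows one way steps 622 through 630 could fit together: customization data selects which virtue scoring models to run and how to present them, content data is scored, and a display payload is assembled for the control panel.

```python
# Rough sketch of steps 622-630; all function, key and model names are
# hypothetical stand-ins, not the patent's actual interfaces.
from typing import Callable, Dict

def run_content_analysis(
    content: dict,
    customization: dict,
    models: Dict[str, Callable[[dict], float]],
) -> dict:
    selected = customization.get("virtue_models", list(models))   # step 624
    scores = {name: models[name](content) for name in selected}   # step 628
    return {
        "scores": scores,
        "presentation": customization.get("presentation", {"chart": "bar"}),
    }                                                              # rendered in step 630

models = {
    "responsibility": lambda c: 0.9 if c.get("addresses_legal_principles") else 0.4,
    "equitability": lambda c: 1.0 - c.get("bias_estimate", 0.0),
}
panel_payload = run_content_analysis(
    {"bias_estimate": 0.2, "addresses_legal_principles": True},    # step 626
    {"virtue_models": ["responsibility", "equitability"],
     "presentation": {"chart": "gauge"}},
    models,
)
print(panel_payload)
```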
- FIG. 2 C presents a flowchart representation of an example method in accordance with various embodiments.
- a method 640 is presented for use with any of the functions and features discussed in conjunction with FIGS. 1 A- 1 E and/or the methods of FIGS. 2 A and/or 2 B .
- a system comprising a network interface configured to communicate via a network, at least one processor and a non-transitory machine-readable storage medium can store operational instructions that, when executed by the at least one processor, cause the at least one processor to perform operations that include any of the method steps described below.
- Step 642 includes generating, via a machine that includes at least one processor and a non-transitory machine-readable storage medium and utilizing a graphical user interface, custom survey data in response to user interactions with the graphical user interface.
- Step 644 includes receiving, via the machine and responsive to the custom survey data, survey results data.
- Step 646 includes generating, utilizing machine learning and via the machine, a customized virtue scoring model based on the custom survey data and the survey results data.
- Step 648 includes receiving, via the machine, content data.
- Step 650 includes generating, via the machine and utilizing the customized virtue scoring model, predicted virtue score data associated with the content data.
- Step 652 includes facilitating display, via the graphical user interface, of the predicted virtue score data associated with the content data.
- the customized virtue scoring model includes an artificial intelligence (AI) model that is trained based on at least one of: the custom survey data or the survey results data.
- the AI model includes a responsibility model and the predicted virtue score data indicates a responsibility score that is based on an amount the content data addresses legal or ethical principles.
- the AI model includes an equitability model and the predicted virtue score data indicates an equitability score that is based on an amount of bias in the content data.
- the AI model includes a reliability model and the predicted virtue score data indicates a reliability score that indicates variations in other virtue scores.
- the AI model includes an explainability model and the predicted virtue score data indicates an explainability score associated with the content data.
- the AI model includes a morality model and the predicted virtue score data indicates a morality score associated with the content data.
- the method further includes generating improvement data associated with the predicted virtue score data.
- the content data is an Artificial Intelligence (AI) model.
- the method further includes facilitating selection of the content data from at least one of: an AI model, or a content source.
- the customized virtue scoring model includes an artificial intelligence (AI) model and wherein generating the customized virtue scoring model includes providing access to a plurality of AI analysis widgets to facilitate an evaluation of the AI model.
- the plurality of AI analysis widgets include a plurality of virtue scoring models that predict virtue scores for the AI model associated with a plurality of virtues.
- the customized virtue scoring model includes an artificial intelligence (AI) model and wherein the method further comprises providing access to a version control repository for storing a plurality of versions of a training dataset and a plurality of versions of the AI model.
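- As a hedged sketch of steps 642 through 652, the example below fits a simple regression model to invented survey-derived labels so that a custom virtue score can be predicted for new content data; the features, data and choice of scikit-learn are illustrative assumptions, not the patent's method.

```python
# Survey participants rate example content items against a user-defined
# virtue; a regression model is then fit so the platform can predict that
# virtue score for new content. Feature names and data are invented.
from sklearn.linear_model import LinearRegression
import numpy as np

# Each row describes one content item; each label is the participants'
# average rating for the custom virtue on a 0-1 scale (survey results data).
features = np.array([
    [0.10, 0.80],   # [estimated bias, fraction of decisions explained]
    [0.40, 0.30],
    [0.05, 0.95],
    [0.60, 0.20],
])
avg_survey_ratings = np.array([0.85, 0.40, 0.95, 0.25])

custom_virtue_model = LinearRegression().fit(features, avg_survey_ratings)

# Step 650: predict the custom virtue score for newly received content data.
new_content = np.array([[0.20, 0.70]])
print(round(float(custom_virtue_model.predict(new_content)[0]), 2))
```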
- FIG. 2 D presents a flowchart representation of an example method in accordance with various embodiments.
- a method 660 is presented for use with any of the functions and features discussed in conjunction with FIGS. 1 A- 1 E and/or the methods of FIGS. 2 A, 2 B and/or 2 C .
- a system comprising a network interface configured to communicate via a network, at least one processor and a non-transitory machine-readable storage medium can store operational instructions that, when executed by the at least one processor, cause the at least one processor to perform operations that include any of the method steps described below.
- Step 662 includes generating, via a machine that includes at least one processor and a non-transitory machine-readable storage medium and utilizing a graphical user interface, a content analysis control panel.
- Step 664 includes receiving, via the machine, content data.
- Step 666 includes detecting, via one or more AI models implemented via the machine, detection data that includes first portions of the content data associated with a protected attribute and second portions of the content data associated with a predetermined metric.
- Step 668 includes generating, via the machine, analysis data associated with the protected attribute and the predetermined metric.
- Step 670 includes facilitating display, via the content analysis control panel, of the analysis data associated with the protected attribute and the predetermined metric.
- the protected attribute is a potential source of discrimination.
- the potential source of discrimination is at least one of: gender, race, age, religion, ethnicity, sexual preference, or disability.
- the predetermined metric is a key performance indicator that varies based on the potential source of discrimination.
- the predetermined metric is a term that varies based on the potential source of discrimination.
- the predetermined metric indicates at least one grade point average.
- the predetermined metric indicates at least one salary.
- the predetermined metric indicates at least one job offer.
- the predetermined metric indicates at least one loan approval or disapproval.
- the predetermined metric indicates at least one credit score.
- the predetermined metric indicates at least one job promotion.
- the predetermined metric indicates at least one arrest.
- FIG. 3 A presents a block diagram representation of an example AI auto-detection model.
- an AI auto-detection model 302 is shown that is an example of an AI analysis tool/widget 808 and/or a content analysis tool/widget 808 ′.
- AI auto-detection model 302 is trained via training data 306 to recognize portions of input data 300 that contain, or are predicted to contain, one or more protected attributes or other metrics.
- the input data 300 can be AI input/output data of an underlying AI process to be analyzed and/or content data from other media content from a media source to be analyzed.
- the protected attributes can include terms related to gender, sex, race, age, religion, ethnicity, sexual preference, disabilities or other terms associated with potential sources of discrimination.
- the metrics can, for example, include one or more terms, key performance indicators (KPIs) or other factors that could be present in the input data 300 and vary based on such sources of discrimination. Examples of such metrics include grade point average, salary, job offers, loan approvals or disapprovals, credit scores, promotions, arrests, etc., depending on the type of data being analyzed.
- the AI auto-detection model 302 uses deep layered natural language processing or other AI that is trained based on training data 306 that contains these terms, regional variations, common or expected misspellings of these terms, alternative terms, etc.
- the AI auto-detection model 302 generates detection data 304 , such as columnar or tabular data containing labels that indicate the terms identified in the input data 300 .
- while the AI auto-detection model 302 is shown as a single model, it may contain a plurality of individual AI models, for example, each trained to recognize one corresponding term to be detected.
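- The patent describes the AI auto-detection model 302 as a trained NLP/AI model; the simple keyword matcher below is only a stand-in (with invented term lists) to show the general shape of detection data 304 as labeled, tabular output.

```python
# Simplified stand-in for auto-detection: flag dataset columns whose names
# match protected-attribute or metric terms, including a few variants.
import re

PROTECTED_TERMS = {
    "gender": ["male", "female", "gender", "sex"],
    "age": ["age", "dob", "date of birth", "birthdate"],
}
METRIC_TERMS = {
    "salary": ["salary", "compensation", "pay"],
    "loan_approval": ["loan approved", "loan denial", "credit decision"],
}

def detect(columns: list[str]) -> list[dict]:
    detection_data = []
    for col in columns:
        for label, variants in {**PROTECTED_TERMS, **METRIC_TERMS}.items():
            if any(re.search(rf"\b{re.escape(v)}\b", col.lower()) for v in variants):
                kind = "protected_attribute" if label in PROTECTED_TERMS else "metric"
                detection_data.append({"column": col, "label": label, "type": kind})
    return detection_data

print(detect(["Applicant Gender", "Annual Salary", "DOB", "Loan Approved?"]))
```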
- FIG. 3 B presents a block diagram representation of an example auto-mapping function.
- an auto-mapping function 312 is shown that is a further example of an AI analysis tool/widget 808 and/or a content analysis tool/widget 808 ′.
- the auto-mapping function 312 operates on the detection data 304 and applies a continuous distribution, categorical distribution, binned distribution or other statistical analysis to generate analysis data indicating statistics and/or other values regarding protected attributes and metrics.
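- A minimal, hypothetical sketch of the kind of statistical analysis the auto-mapping function 312 could apply is shown below: a categorical distribution of a detected metric (loan approval) grouped by a detected protected attribute (gender), using invented data.

```python
# Group an invented loan-approval metric by a protected attribute and
# compute the approval rate per group (one possible form of analysis data).
from collections import Counter, defaultdict

records = [
    {"gender": "female", "loan_approved": True},
    {"gender": "female", "loan_approved": False},
    {"gender": "male", "loan_approved": True},
    {"gender": "male", "loan_approved": True},
]

counts = defaultdict(Counter)
for row in records:
    counts[row["gender"]][row["loan_approved"]] += 1

analysis_data = {
    group: c[True] / (c[True] + c[False]) for group, c in counts.items()
}
print(analysis_data)  # e.g. {'female': 0.5, 'male': 1.0}
```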
- Illustrative examples include:
- FIG. 3 C presents a block diagram representation of an example virtue scoring model.
- a virtue scoring model 322 is shown that is a further example of an AI analysis tool/widget 808 and/or a content analysis tool/widget 808 ′.
- the virtue scoring model 322 is trained, for example, via training data 326 to generate a virtue score 324 corresponding to a particular virtue in response to content data 320 such as analysis data 314 , AI output data of an underlying AI process to be analyzed and/or content data from other media content from a media source to be analyzed.
- while the virtue scoring model 322 is shown as a single model, it may contain a plurality of individual models, each corresponding to a different standard or customized virtue score 324.
- Examples of the virtue scoring model(s) 322 include:
- FIG. 3 D presents a block/flow diagram representation of an example survey creation process.
- the AI development platform 800 and content analysis platform 875 are operable to generate customized virtue scoring models that are trained or otherwise generated based on custom survey data and the survey results data.
- a survey creation widget 342 that is a further example of an AI analysis tool/widget 808 and/or a content analysis tool/widget 808 ′ is used to create a custom survey 344 based on custom survey data 340 input by the user via, for example, the content analysis control panel.
- the survey results data 348 are generated based on survey input 346 from survey participants.
- while custom survey 344 is shown as a single survey, the survey creation widget 342 can be used to generate multiple custom surveys for multiple custom virtue scoring models. Furthermore, survey results data 348 and custom survey data 340 generated in this fashion can also be used to train any of the standard virtue scoring models discussed above.
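- The sketch below (with invented question text and ratings) illustrates the data flow of FIG. 3 D: custom survey data 340 defines the questions, survey input 346 holds participant answers, and aggregated survey results data 348 can serve as training labels.

```python
# Hypothetical survey data structures; the patent does not prescribe this format.
from statistics import mean

custom_survey_data = {
    "virtue": "Virtue I",
    "questions": ["How fair is this model's output?", "How clear is its reasoning?"],
    "scale": (1, 5),
}

survey_input = [  # one dict of answers per participant
    {"How fair is this model's output?": 4, "How clear is its reasoning?": 3},
    {"How fair is this model's output?": 5, "How clear is its reasoning?": 4},
]

survey_results_data = {
    q: mean(p[q] for p in survey_input) for q in custom_survey_data["questions"]
}
print(survey_results_data)  # per-question averages usable as training labels
```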
- FIG. 3 E presents a pictorial/block diagram representation of an example of control panel generation tools 806 .
- control panel generation tools 806 store control panel settings and customization parameters 352 that are generated via interaction with the user and user input 350.
- the control panel generation tools 806 generate content analysis control panel data 354 , based on further user input 350 and the AI analysis tools/widgets 808 or content analysis tools/widgets 808 ′.
- This content analysis control panel data 354 is formatted for display via a display device of a client device, such as client device 825 to reproduce the content analysis control panel 360 .
- An example screen display is shown in FIG. 3 F .
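- As an assumed illustration only, content analysis control panel data 354 could take the form of a display-ready payload such as the one below; the field names and structure are hypothetical.

```python
# Invented example of a display-ready control panel payload combining stored
# customization parameters 352 with widget output.
import json

control_panel_data = {
    "panel": "medical treatment selection pipeline",
    "settings": {"theme": "default", "refresh": "daily"},   # parameters 352
    "widgets": [
        {"type": "virtue_score", "virtue": "equitability", "value": 0.82,
         "presentation": {"chart": "gauge"}},
        {"type": "bias_monitor", "protected_attribute": "gender",
         "metric": "loan_approval", "presentation": {"chart": "bar"}},
    ],
}
print(json.dumps(control_panel_data, indent=2))  # sent to client device 825
```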
- the content analysis control panel 360 generated via the set of control panel generation tools 806 operates in conjunction with the AI analysis tools/widgets 808 to provide a graphical user interface that aids the user by gathering and presenting AI data and/or other content for analysis, the creation of custom virtue scoring models, the selection of particular virtue scoring models (either custom or preset) to be used, and the presentation of virtue scores and other analysis results.
- the content analysis control panel 360 operates via the control panel generation tools 806 and associated AI analysis tools/widgets 808 to:
- FIGS. 4 A- 4 Y and 5 A- 5 D present graphical diagram representations of example screen displays or portions thereof corresponding to a content analysis control panel.
- FIG. 4 A presents a screen display of a content analysis control panel (CACP) of a User “Jane Doe”.
- the CACP includes a news feed that shows various AI related articles that can be individually accessed and read by the user.
- in FIG. 4 B, the user has accessed a drop-down menu and chosen to create a new AI pipeline.
- in FIG. 4 C, a popup window is shown that allows the user to input a title and description of the new pipeline.
- in FIG. 4 D, the CACP is shown after the user has chosen to name the new pipeline “medical treatment selection pipeline”.
- the screen display indicates that there is currently no data for the pipeline and prompts the user to import data in order to get started.
- the user has the option of dragging and dropping a dataset into the window or using an API of the system.
- input datasets can, for example, be in columnar format with columns representing different datatypes. Input datasets can be static, continuously updated, and/or updated periodically (e.g., once a day, once a week, once a month, etc.).
- in FIG. 4 F, the user has elected to view a history of datasets that have been entered, their respective dates and who they were added by (“Rory”, in this case).
- the user has customized the CACP by entering customization data to select and generate two particular virtue scoring models for the selected pipeline: a responsible/responsibility scoring model and an equitable/equitability scoring model.
- the user has selected presentation parameters, either default or customized, for each scoring model to indicate how the virtue scores will be displayed, for example, by particular graphs, charts, or other graphics or visual indications.
- the CACP prompts the user to fill out a survey in order to train the responsibility scoring model.
- Equitability scores are presented in a window below in the chosen presentation format along with an overall fairness index in the upper right portion of the screen. This fairness index can be generated based on a function/combination of the user-selected virtues or based on all virtues, depending on the implementation.
- robustness and traceability scoring models are also available, as well as links to tools that assist the user in improving responsibility, equitability, robustness and/or other standard virtues.
- the user is given the options to retrain or deploy any of the selected virtue scoring models. Icons can also be provided allowing the user to seek human in the loop (HIL) feedback and/or to share results with private groups, public groups, social media, etc.
- the equitability scoring window/bias monitor is selected and several different data overviews are presented in various and possibly user selected formats.
- the explainability scoring window/bias monitor is selected and several different data overviews are presented in various and possibly user selected formats.
- in FIG. 4 I, a macro-view of a data overview is shown that breaks down a “loan” metric into four different components.
- in FIG. 4 J, a micro-view is shown where a total/overall score (“good”) is presented along with a breakdown of various inputs/features that contribute to that score.
- a prompt is provided that allows the user to retrain the explainer (e.g., the explainability virtue scoring model).
- the user can query the system on the effects of selected features and how to change certain features to receive certain scores, for example.
- the user is presented an option to create an extension.
- FIG. 4 K presents an interface on the CACP that uses the control panel generation tools to permit the user to create one or more customized control panels. Templates are available related to the categories “healthcare” and “finance” for users that want to start from a pre-existing control panel configuration, as well as a blank template for users that wish to start from scratch. As indicated, control panels can be designated as either public or private. As shown at the bottom of the screen display, a user who does not see a feature he/she wants may add feedback for the administrator of the platform to perhaps include it in a later release.
- in FIG. 4 L, the user has selected to create a new control panel and is prompted to enter a control panel name. The user is also allowed to create a scoring model for a new/customized virtue.
- in FIG. 4 M, the user has used a survey widget to create a survey for a new virtue, “Virtue I”.
- in FIG. 4 N, an example of the survey widget is shown.
- in FIG. 4 O, the user selects the audience for completing the survey based on particular names and email addresses, i.e., to generate survey input/results.
- the user can select an existing crowd (employees, for example), create a new crowd as shown in FIG. 4 P, or proceed with a general crowd source.
- a screen display generated by the survey widget for a new survey is presented in FIG. 4 Q .
- FIG. 4 R presents a cloud portal of the CACP that presents various service guides and a link to the news feed of FIG. 4 A .
- in FIG. 4 S, the user is selecting to access the API reference materials.
- FIG. 4 T presents a static/predetermined survey magnitude slider that can be used to customize an AI analysis widget corresponding to the bias monitor and equitability scoring model to enable virtue tracing based on scoring magnitudes.
- FIG. 4 U presents a widget creator that allows a user to create/customize his/her own widgets.
- in FIGS. 4 V- 4 X, the user has selected different output formats for display in conjunction with the AI analysis widget corresponding to the bias monitor and equitability scoring model.
- FIG. 4 Y presents a billing and payment screen.
- the survey widget configures a survey for multiple-choice questions.
- the survey widget configures a survey with multiple-choice questions with answers input by users via slider-bars.
- the survey widget configures a survey with short answers input by users.
- API options and instructions are provided to facilitate the input of datasets.
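- The platform's actual API is not specified in this description; the following is only a hypothetical example of how a columnar dataset might be uploaded to a REST endpoint, with an invented URL, token and field names.

```python
# Hypothetical dataset upload; the endpoint, auth scheme and fields are
# placeholders, not a documented interface of the platform.
import requests

API_URL = "https://example.invalid/api/v1/pipelines/123/datasets"  # placeholder
HEADERS = {"Authorization": "Bearer <your-token>"}                  # placeholder

with open("applicants.csv", "rb") as f:
    response = requests.post(
        API_URL,
        headers=HEADERS,
        files={"dataset": ("applicants.csv", f, "text/csv")},
        data={"update_schedule": "daily"},   # e.g. static, daily, weekly, monthly
    )
print(response.status_code)
```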
- FIGS. 6 A- 6 F present graphical diagram representations of example screen displays or portions thereof of another example content analysis control panel.
- example screen displays are presented as part of the graphical user interface implemented via the AI development platform 800 .
- the AI development platform 800 supports a communal development framework that allows users to view repositories on people's walls, view other profiles to see public work, promote trust through transparency, allow people to be involved in decisions, add friends and follow people and organizational work, approve/disapprove work, and borrow others' code by forking or cloning their repositories.
- This communal development framework also supports AI ethics discussion in ethics forums and/or other forums where a user posts a question, others can answer, and users can comment on questions and answers. Documentation can be provided in a “Learn” section which includes information on how to use Version Control, the Data API, an AI moral insight model, etc. In various embodiments, only users/subscribers are allowed to post, but others can look at questions and answers.
- this communal development framework also supports a news feed that allows users to educate themselves on machine learning, ethics, current events in AI ethics, etc. Users can also create their own content. Tools can be provided to aid users in setting the tone of their contributions and otherwise to provide a guide on how to post.
- This communal development framework also supports organizational billing for cloud services allowing users to, for example, choose their organization with billing credentials and print out a quick report. Variable subscription plans can be offered that allow users to subscribe to the specific services and/or level of use they may need.
- as used herein, the terms “widget”, “tool” and “toolkit” correspond to a website, utility, platform, computer, cloud device and/or software routine that performs one or more specific functions.
- the terms “substantially” and “approximately” provide an industry-accepted tolerance for their corresponding terms and/or relativity between items.
- for some industries, an industry-accepted tolerance is less than one percent and, for other industries, the industry-accepted tolerance is 10 percent or more.
- Other examples of industry-accepted tolerance range from less than one percent to fifty percent.
- Industry-accepted tolerances correspond to, but are not limited to, component values, integrated circuit process variations, temperature variations, rise and fall times, thermal noise, dimensions, signaling errors, dropped packets, temperatures, pressures, material compositions, and/or performance metrics.
- tolerance variances of accepted tolerances may be more or less than a percentage level (e.g., dimension tolerance of less than +/−1%). Some relativity between items may range from a difference of less than a percentage level to a few percent. Other relativity between items may range from a difference of a few percent to magnitude of differences.
- the term(s) “configured to”, “operably coupled to”, “coupled to”, and/or “coupling” includes direct coupling between items and/or indirect coupling between items via an intervening item (e.g., an item includes, but is not limited to, a component, an element, a circuit, and/or a module) where, for an example of indirect coupling, the intervening item does not modify the information of a signal but may adjust its current level, voltage level, and/or power level.
- inferred coupling (i.e., where one element is coupled to another element by inference) includes direct and indirect coupling between two items in the same manner as “coupled to”.
- the term “configured to”, “operable to”, “coupled to”, or “operably coupled to” indicates that an item includes one or more of power connections, input(s), output(s), etc., to perform, when activated, one or more of its corresponding functions and may further include inferred coupling to one or more other items.
- the term “associated with”, includes direct and/or indirect coupling of separate items and/or one item being embedded within another item.
- the term “compares favorably”, indicates that a comparison between two or more items, signals, etc., indicates an advantageous relationship that would be evident to one skilled in the art in light of the present disclosure, and based, for example, on the nature of the signals/items that are being compared.
- the term “compares unfavorably”, indicates that a comparison between two or more items, signals, etc., fails to provide such an advantageous relationship and/or that provides a disadvantageous relationship.
- Such an item/signal can correspond to one or more numeric values, one or more measurements, one or more counts and/or proportions, one or more types of data, and/or other information with attributes that can be compared to a threshold, to each other and/or to attributes of other information to determine whether a favorable or unfavorable comparison exists.
- Examples of such an advantageous relationship can include: one item/signal being greater than (or greater than or equal to) a threshold value, one item/signal being less than (or less than or equal to) a threshold value, one item/signal being greater than (or greater than or equal to) another item/signal, one item/signal being less than (or less than or equal to) another item/signal, one item/signal matching another item/signal, one item/signal substantially matching another item/signal within a predefined or industry accepted tolerance such as 1%, 5%, 10% or some other margin, etc.
- a favorable comparison may be achieved when the magnitude of signal 1 is greater than that of signal 2 or when the magnitude of signal 2 is less than that of signal 1.
- the comparison of the inverse or opposite of items/signals and/or other forms of mathematical or logical equivalence can likewise be used in an equivalent fashion.
- the comparison to determine if a signal X>5 is equivalent to determining if −X<−5.
- the comparison to determine if signal A matches signal B can likewise be performed by determining −A matches −B or not(A) matches not(B).
- the determination that a particular relationship is present can be utilized to automatically trigger a particular action. Unless expressly stated to the contrary, the absence of that particular condition may be assumed to imply that the particular action will not automatically be triggered.
- the determination that a particular relationship is present can be utilized as a basis or consideration to determine whether to perform one or more actions. Note that such a basis or consideration can be considered alone or in combination with one or more other bases or considerations to determine whether to perform the one or more actions. In one example where multiple bases or considerations are used to determine whether to perform one or more actions, the respective bases or considerations are given equal weight in such determination. In another example where multiple bases or considerations are used to determine whether to perform one or more actions, the respective bases or considerations are given unequal weight in such determination.
- one or more claims may include, in a specific form of this generic form, the phrase “at least one of a, b, and c” or of this generic form “at least one of a, b, or c”, with more or less elements than “a”, “b”, and “c”.
- the phrases are to be interpreted identically.
- “at least one of a, b, and c” is equivalent to “at least one of a, b, or c” and shall mean a, b, and/or c.
- it means: “a” only, “b” only, “c” only, “a” and “b”, “a” and “c”, “b” and “c”, and/or “a”, “b”, and “c”.
- a processing module may be a single processing device or a plurality of processing devices.
- a processing device may be a microprocessor, micro-controller, digital signal processor, microcomputer, central processing unit, field programmable gate array, programmable logic device, state machine, logic circuitry, analog circuitry, digital circuitry, and/or any device that manipulates signals (analog and/or digital) based on hard coding of the circuitry and/or operational instructions.
- the processing module, module, processing circuit, processing circuitry, and/or processing unit may be, or further include, memory and/or an integrated memory element, which may be a single memory device, a plurality of memory devices, and/or embedded circuitry of another processing module, module, processing circuit, processing circuitry, and/or processing unit.
- a memory device may be a read-only memory, random access memory, volatile memory, non-volatile memory, static memory, dynamic memory, flash memory, cache memory, and/or any device that stores digital information.
- if the processing module, module, processing circuit, processing circuitry, and/or processing unit includes more than one processing device, the processing devices may be centrally located (e.g., directly coupled together via a wired and/or wireless bus structure) or may be distributedly located (e.g., cloud computing via indirect coupling via a local area network and/or a wide area network).
- if the processing module, module, processing circuit, processing circuitry and/or processing unit implements one or more of its functions via a state machine, analog circuitry, digital circuitry, and/or logic circuitry, the memory and/or memory element storing the corresponding operational instructions may be embedded within, or external to, the circuitry comprising the state machine, analog circuitry, digital circuitry, and/or logic circuitry.
- the memory element may store, and the processing module, module, processing circuit, processing circuitry and/or processing unit executes, hard coded and/or operational instructions corresponding to at least some of the steps and/or functions illustrated in one or more of the Figures.
- Such a memory device or memory element can be included in an article of manufacture.
- a flow diagram may include a “start” and/or “continue” indication.
- the “start” and “continue” indications reflect that the steps presented can optionally be incorporated in or otherwise used in conjunction with one or more other routines.
- a flow diagram may include an “end” and/or “continue” indication.
- the “end” and/or “continue” indications reflect that the steps presented can end as described and shown or optionally be incorporated in or otherwise used in conjunction with one or more other routines.
- the “start” indication reflects the beginning of the first step presented and may be preceded by other activities not specifically shown.
- the “continue” indication reflects that the steps presented may be performed multiple times and/or may be succeeded by other activities not specifically shown.
- while a flow diagram indicates a particular ordering of steps, other orderings are likewise possible provided that the principles of causality are maintained.
- the one or more embodiments are used herein to illustrate one or more aspects, one or more features, one or more concepts, and/or one or more examples.
- a physical embodiment of an apparatus, an article of manufacture, a machine, and/or of a process may include one or more of the aspects, features, concepts, examples, etc. described with reference to one or more of the embodiments discussed herein.
- the embodiments may incorporate the same or similarly named functions, steps, modules, etc. that may use the same or different reference numbers and, as such, the functions, steps, modules, etc. may be the same or similar functions, steps, modules, etc. or different ones.
- signals to, from, and/or between elements in a figure of any of the figures presented herein may be analog or digital, continuous time or discrete time, and single-ended or differential.
- where a signal path is shown as a single-ended path, it also represents a differential signal path.
- where a signal path is shown as a differential path, it also represents a single-ended signal path.
- the term “module” is used in the description of one or more of the embodiments.
- a module implements one or more functions via a device such as a processor or other processing device or other hardware that may include or operate in association with a memory that stores operational instructions.
- a module may operate independently and/or in conjunction with software and/or firmware.
- a module may contain one or more sub-modules, each of which may be one or more modules.
- a computer readable memory includes one or more memory elements.
- a memory element may be a separate memory device, multiple memory devices, or a set of memory locations within a memory device.
- Such a memory device may be a read-only memory, random access memory, volatile memory, non-volatile memory, static memory, dynamic memory, flash memory, cache memory, a quantum register or other quantum memory and/or any other device that stores data in a non-transitory manner.
- the memory device may be in a form of a solid-state memory, a hard drive memory or other disk storage, cloud memory, thumb drive, server memory, computing device memory, and/or other non-transitory medium for storing data.
- the storage of data includes temporary storage (i.e., data is lost when power is removed from the memory element) and/or persistent storage (i.e., data is retained when power is removed from the memory element).
- a transitory medium shall mean one or more of: (a) a wired or wireless medium for the transportation of data as a signal from one computing device to another computing device for temporary storage or persistent storage; (b) a wired or wireless medium for the transportation of data as a signal within a computing device from one element of the computing device to another element of the computing device for temporary storage or persistent storage; (c) a wired or wireless medium for the transportation of data as a signal from one computing device to another computing device for processing the data by the other computing device; and (d) a wired or wireless medium for the transportation of data as a signal within a computing device from one element of the computing device to another element of the computing device for processing the data by the other element of the computing device.
- a non-transitory computer readable memory is substantially equivalent to a computer readable memory.
- One or more functions described herein may be implemented via artificial intelligence (AI), such as support vector machines (SVMs), Bayesian networks, genetic algorithms, feature learning, sparse dictionary learning, preference learning, deep learning and other machine learning techniques that are trained using training data via unsupervised, semi-supervised, supervised and/or reinforcement learning, and/or other AI.
- the human mind is not equipped to perform such AI techniques, not only due to the complexity of these techniques, but also because artificial intelligence, by its very definition, requires “artificial” (i.e., machine/non-human) intelligence.
- One or more functions associated with the methods and/or processes described herein can be implemented as a large-scale system that is operable to receive, transmit and/or process data on a large-scale.
- a large-scale refers to a large amount of data, such as one or more kilobytes, megabytes, gigabytes, terabytes or more of data that are received, transmitted and/or processed.
- Such receiving, transmitting and/or processing of data cannot practically be performed by the human mind on a large-scale within a reasonable period of time, such as within a second, a millisecond, microsecond, a real-time basis or other high speed required by the machines that generate the data, receive the data, convey the data, store the data and/or use the data.
- One or more functions associated with the methods and/or processes described herein can require data to be manipulated in different ways within overlapping time spans.
- the human mind is not equipped to perform such different data manipulations independently, contemporaneously, in parallel, and/or on a coordinated basis within a reasonable period of time, such as within a second, a millisecond, microsecond, a real-time basis or other high speed required by the machines that generate the data, receive the data, convey the data, store the data and/or use the data.
- One or more functions associated with the methods and/or processes described herein can be implemented in a system that is operable to electronically receive digital data via a wired or wireless communication network and/or to electronically transmit digital data via a wired or wireless communication network. Such receiving and transmitting cannot practically be performed by the human mind because the human mind is not equipped to electronically transmit or receive digital data, let alone to transmit and receive digital data via a wired or wireless communication network.
- One or more functions associated with the methods and/or processes described herein can be implemented in a system that is operable to electronically store digital data in a memory device. Such storage cannot practically be performed by the human mind because the human mind is not equipped to electronically store digital data.
- One or more functions associated with the methods and/or processes described herein may operate to cause an action by a processing module directly in response to a triggering event—without any intervening human interaction between the triggering event and the action. Any such actions may be identified as being performed “automatically”, “automatically based on” and/or “automatically in response to” such a triggering event. Furthermore, any such actions identified in such a fashion specifically preclude the operation of human activity with respect to these actions—even if the triggering event itself may be causally connected to a human activity of some kind.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Software Systems (AREA)
- Business, Economics & Management (AREA)
- Human Resources & Organizations (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- Computing Systems (AREA)
- Medical Informatics (AREA)
- Evolutionary Computation (AREA)
- Data Mining & Analysis (AREA)
- Artificial Intelligence (AREA)
- Strategic Management (AREA)
- Entrepreneurship & Innovation (AREA)
- Development Economics (AREA)
- Educational Administration (AREA)
- Economics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Game Theory and Decision Science (AREA)
- Marketing (AREA)
- General Business, Economics & Management (AREA)
- Tourism & Hospitality (AREA)
- Quality & Reliability (AREA)
- Operations Research (AREA)
- Human Computer Interaction (AREA)
- Computational Linguistics (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
A system operates by: generating, via a machine that includes at least one processor and a non-transitory machine-readable storage medium and utilizing a graphical user interface, custom survey data in response to user interactions with the graphical user interface; receiving, via the machine and responsive to the custom survey data, survey results data; generating, utilizing machine learning and via the machine, a customized virtue scoring model based on the custom survey data and the survey results data; receiving, via the machine, content data; generating, via the machine and utilizing the customized virtue scoring model, predicted virtue score data associated with the content data; and facilitating display, via the graphical user interface, the predicted virtue score data associated with the content data.
Description
- The present U.S. Utility Patent application claims priority pursuant to 35 U.S.C. § 119(e) to U.S. Provisional Application No. 63/262,395, entitled “AI PLATFORM WITH CUSTOMIZABLE CONTENT ANALYSIS CONTROL PANEL AND METHODS FOR USE THEREWITH”, filed Oct. 12, 2021; U.S. Provisional Application No. 63/262,396, entitled “AI PLATFORM WITH CUSTOMIZABLE VIRTUE SCORING MODELS AND METHODS FOR USE THEREWITH”, filed Oct. 12, 2021; and U.S. Provisional Application No. 63/262,397, entitled “AI PLATFORM WITH AUTOMATIC ANALYSIS DATA AND METHODS FOR USE THEREWITH”, filed Oct. 12, 2021, all of which are hereby incorporated herein by reference in their entirety and made part of the present U.S. Utility Patent Application for all purposes.
- The present disclosure relates to processing systems and applications used in the development, analysis and/or use of artificial intelligence models or other content.
-
FIG. 1A presents a block diagram representation of an example system. -
FIG. 1B presents a block diagram representation of an example artificial intelligence (AI) development platform. -
FIG. 1C presents a block diagram representation of an example system. -
FIG. 1D presents a block diagram representation of an example content analysis platform. -
FIG. 1E presents a block diagram representation of an example client device. -
FIG. 2A presents a flowchart representation of an example method. -
FIG. 2B presents a flowchart representation of an example method. -
FIG. 2C presents a flowchart representation of an example method. -
FIG. 2D presents a flowchart representation of an example method. -
FIG. 3A presents a block diagram representation of an example AI auto detection model. -
FIG. 3B presents a block diagram representation of an example auto-mapping function. -
FIG. 3C presents a block diagram representation of an example virtue scoring model. -
FIG. 3D presents a block diagram representation of an example survey creation widget. -
FIG. 3E presents a block diagram representation of an example of control panel generation tools. -
FIG. 3F presents a pictorial representation of an example of a content analysis control panel. -
FIGS. 4A-4Y present graphical diagram representations of example screen displays or portions thereof. -
FIGS. 5A-5D present graphical diagram representations of example screen displays or portions thereof. -
FIGS. 6A-6F present graphical diagram representations of example screen displays or portions thereof. -
FIG. 1A presents a block diagram representation of an example system in accordance with various embodiments. In particular, a system 850 is presented that includes an AI development platform 800 that communicates with client devices 825 via a network 115. The network 115 can be the Internet or other wide area or local area network, either public or private. The client devices 825 can be computing devices of users such as AI developers or administrators of databases, social media platforms or other sources of AI or media content. - As AI development accelerates at an unprecedented rate, many machine learning (ML) engineers are beginning to require knowledge in a diverse range of fields including AI ethics, MLOps, and AutoML. Currently there are just scattered, disparate toolkits, which can lead developers to make poor decisions due to lack of experience and accountability.
- There are also increased regulatory requirements under way in places like the EU and a potential need to meet some standards of quality in the near future. IBM has Fairness 360 for bias. IBM also has the Explainability Toolkit for increasing transparency. There is Audit-AI for statistical bias detection. Lime has software for visualizing bias to increase fairness. There is SHAP, which uses game theory to explain the output of black box models. There is XAI for dynamic systems. The problem is that most AI developers do not want to switch from one platform or toolkit to another, and another again. The
AI development platform 800 and system 850 make these technological improvements to computer technology by reworking the AI infrastructure from the ground up, building AI ethics into the work experience, and streamlining the process to achieve safe and effective algorithms for ML developers. It provides a "one stop shop" for building robust and certifiable AI systems. Although the primary goal of the AI development platform 800 is to provide a software as a service (SaaS) platform to an ethical AI community, it may be used in conjunction with social media platforms such as Instagram, Facebook, LinkedIn, GitHub, etc. This platform could also be used by AI ethicists to audit their own systems of AI development. Users can use the framework and publicly post their decisions along the way for human in the loop feedback from a community through the posting of problems, questions, reviews, etc. Furthermore, the systems described herein improve computer technology by providing a user interface with many new features and combinations that improve the user experience, increase user efficiency and generate more accurate, more robust and more virtuous results. - The
AI development platform 800 includes: -
- a. a
platform access subsystem 813 that provides secure access to the AI development platform to a plurality ofclient devices 825 via thenetwork 115; - b. a learning and
collaboration subsystem 811 that provides a network-based forum that facilitates a collaborative development of machine learning models or other AI tools via the plurality ofclient devices 825 and that, for example, provides access to a library of AI tutorials, a database of AI news, a forum for questions and answers regarding machine learning, including the use of specific machine learning techniques and/or whether or not particular process is fair, biased, transparent, secure, safe, etc., and/or a database of documentation regarding theAI development platform 800 including, for example, instructions on what the platform is, why it is, what is in it, who it is for, when to use it, and how to use it and further including instructions on the use of the various and subsystems, and/or how to access and operate the various customizations, interconnected tools/widgets and other features via theAI development platform 800; - c. a subscription and
billing subsystem 815 that controls access to theAI development platform 800 via each of the plurality ofclient devices 825 in conjunction with subscription information associated with each of the plurality ofclient devices 825 and further, that generates billing information associated with each of the plurality ofclient devices 825 in accordance with the subscription information; and - d. a
privacy management system 817 that protects the privacy of machine learning development data associated with each of the plurality ofclient devices 825.
- a. a
- In operation, the
AI development platform 800 facilitates the development of a training dataset associated with at least one of the plurality ofclient devices 825 viadataset development tools 802. The resulting dataset can be stored, for example, in adatabase 819 associated with theAI development platform 800. TheAI development platform 800 also provides access to a plurality of automachine learning tools 804, such as DataRobot, H20.ai and/or other auto machine learning tools to facilitate the development of an AI model. TheAI development platform 800 includes a set of controlpanel generation tools 806 that facilitate the generation and user-customization of a graphical user interface (GUI) based content analysis control panel. - The
AI development platform 800 also includes a plurality of AI analysis tools/widgets 808 that implement, for example, auto detection and mapping tools such as AI models, statistical functions or other AI or functions that analyze input datasets to automatically identify and/or map data associated with protected attributes, key performance indicators and/or other metrics. In addition, the AI analysis tools/widgets 808 can also include a plurality of standard virtue scoring models that each generate a corresponding virtue score. Examples of such standard virtue scoring models include a responsibility model, an equitability (or bias) model, a reliability model, an explainability model, a robustness model, a traceability model and/or other models that generate virtue scores such as a responsibility score, an equitability (or bias) score, a reliability score, an explainability score, and/or other morality or virtue score. In addition, the AI analysis tools/widgets 808 can include tools to facilitate the generation of one or more virtue scoring models, such as ML or other AI models that are generated based on survey data and the collection of corresponding survey results. Furthermore, the AI analysis tools/widgets 808 can include survey widgets and other tools to facilitate the generation of user-customized virtue scoring models that can differ from each of the standard virtue scoring models, and that are implemented via ML or other AI models that are generated based on user-customized survey data and the collection of corresponding survey results. - The
AI development platform 800 also provides access to aversion control repository 812, such as a Git repository or other version control system for storing and managing a plurality of versions of the training dataset and the AI model. TheAI development platform 800 also provides access to one or more machinelearning management tools 810 to perform other management operations associated with the AI model, training dataset, etc. - In operation, the content analysis control panel generated via the set of control
panel generation tools 806 operates in conjunction with the AI analysis tools/widgets 808 to provide a graphical user interface that aids the user by gathering and presenting AI data and/or other content for analysis, the creation of custom virtue scoring models, the selection of particular virtue scoring models (either custom or preset) to be used, and the presentation of virtue scores and other analysis results. For example, the content analysis control panel operates via the controlpanel generation tools 806 and associated AI analysis tools/widgets 808 to: -
- guide the user through customization of control panel settings and customization parameters used to generate the content analysis control panel;
- facilitate the selection of data sets from an AI model or content source in addition to the selection of protected attributes, key performance indicators and/or other metrics;
- identify, map and present data associated with the protected attributes, key performance indicators and/or other metrics including a customized selection of statistics, charts, graphs and/or other visualizations;
- facilitate the generation of survey data and collection of survey results data to facilitate the generation of custom and/or standard virtue scoring models;
- generate and present virtue scores associated with a selected group of customized and/or standard virtue scoring models including a customized selection of statistics, charts, graphs and/or other visualizations of each score; and
- generate and present suggested improvements to any of the virtue scores associated with any of a selected group of virtue scoring models.
- In an example of operation, the
AI development platform 800 operates to perform operations that include: -
- generating, via a machine that includes at least one processor and a non-transitory machine-readable storage medium and utilizing a graphical user interface, a content analysis control panel;
- receiving, via the machine, customization data that indicates a plurality of virtue scoring models, and presentation parameters associated with the plurality of scoring models;
- receiving, via the machine, content data from an AI model or media source;
- generating, via the machine, predicted virtue score data associated with the content data for each of the plurality of virtue scoring models; and
- facilitating display, via the content analysis control panel and in accordance with settings and other customization data, the predicted virtue score data associated with the content data for each of the plurality of virtue scoring models.
- In another example of operation, the
AI development platform 800 operates to perform operations that include: -
- generating, via a machine that includes at least one processor and a non-transitory machine-readable storage medium and utilizing a graphical user interface, custom survey data in response to user interactions with the graphical user interface;
- receiving, via the machine and responsive to the custom survey data, survey results data;
- generating, utilizing machine learning and via the machine, a customized virtue scoring model based on the custom survey data and the survey results data;
- receiving, via the machine, content data from an AI model or media source;
- generating, via the machine and utilizing the customized virtue scoring model, predicted virtue score data associated with the content data; and
- facilitating display, via the graphical user interface, the predicted virtue score data associated with the content data.
- In a further example of operation, the
AI development platform 800 operates to perform operations that include: -
- generating, via a machine that includes at least one processor and a non-transitory machine-readable storage medium and utilizing a graphical user interface, a content analysis control panel;
- receiving, via the machine, content data from an AI model or media source;
- detecting, via one or more AI models implemented via the machine, detection data that includes first portions of the content data associated with a protected attribute and second portions of the content data associated with a predetermined metric;
- generating, via the machine, analysis data associated with the protected attribute and the predetermined metric; and
- facilitating display, via the content analysis control panel, the analysis data associated with the protected attribute and the predetermined metric.
- It should be noted that while the learning and
collaboration subsystem 811, theplatform access subsystem 813, subscription andbilling subsystem 815, theprivacy management system 817 and thedatabase 819, thedataset development tools 802,AutoML tools 804, controlpanel generation tools 806, AI analysis tools/widgets 808,ML management tools 810 and theversion control repository 812 are shown as being internal to theAI development platform 800, in other examples, any subset of the various elements ofAI development platform 800 can be implemented external to theAI development platform 800 and coupled to the other components via thenetwork 115. Furthermore, theAI development platform 800 can be implemented in a cloud computing configuration with any or all of the various elements ofAI development platform 800 implemented within the cloud. -
FIG. 1B presents a block diagram representation of an AI development platform 800 in accordance with various embodiments. In particular, the AI development platform 800 includes a network interface 820 such as a 3G, 4G, 5G or other cellular wireless transceiver, a Bluetooth transceiver, a WiFi transceiver, UltraWideBand transceiver, WIMAX transceiver, ZigBee transceiver or other wireless interface, a Universal Serial Bus (USB) interface, an IEEE 1394 Firewire interface, an Ethernet interface or other wired interface and/or other network card or modem for communicating via the network 115. - The
AI development platform 800 also includes aprocessing module 830 andmemory module 840 that stores an operating system (O/S) 844 such as an Apple, Unix, Linux or Microsoft operating system or other operating system, the learning andcollaboration subsystem 811, theplatform access subsystem 813, subscription andbilling subsystem 815, theprivacy management system 817 and thedatabase 819, thedataset development tools 802,AutoML tools 804, controlpanel generation tools 806, AI analysis tools/widgets 808,ML management tools 810 and theversion control repository 812. In particular, the O/S 844, the learning andcollaboration subsystem 811, theplatform access subsystem 813, subscription andbilling subsystem 815, theprivacy management system 817 and thedatabase 819, thedataset development tools 802,AutoML tools 804, controlpanel generation tools 806, AI analysis tools/widgets 808,ML management tools 810 and theversion control repository 812 each include operational instructions that, when executed by theprocessing module 830, cooperate to configure theprocessing module 830 into a special purpose device to perform the particular functions of theAI development platform 800 described herein. - The
AI development platform 800 may include a user interface (I/F) 862 such as a display device, touch screen, key pad, touch pad, joy stick, thumb wheel, a mouse, one or more buttons, a speaker, a microphone, an accelerometer, gyroscope or other motion or position sensor, video camera or other interface devices that provide information to a user of theAI development platform 800 and that generate data in response to the user's interaction withAI development platform 800. - The
processing module 830 can be implemented via a single processing device or a plurality of processing devices. Such processing devices can include a microprocessor, micro-controller, digital signal processor, microcomputer, central processing unit, quantum computing device, field programmable gate array, programmable logic device, state machine, logic circuitry, analog circuitry, digital circuitry, and/or any device that manipulates signals (analog and/or digital) based on operational instructions that are stored in a memory, such asmemory 840. Thememory module 840 can include a hard disc drive or other disc drive, read-only memory, random access memory, volatile memory, non-volatile memory, static memory, dynamic memory, flash memory, cache memory, and/or any device that stores digital information. Note that when the processing device implements one or more of its functions via a state machine, analog circuitry, digital circuitry, and/or logic circuitry, the memory storing the corresponding operational instructions may be embedded within, or external to, the circuitry comprising the state machine, analog circuitry, digital circuitry, and/or logic circuitry. While a particular bus architecture is presented that includes asingle bus 860, other architectures are possible including additional data buses and/or direct connectivity between one or more elements. Further, theAI development platform 800 can include one or more additional elements that are not specifically shown. -
FIG. 1C presents a block diagram representation of an example system. In particular, a content analysis system 865 is shown that includes several elements of the AI development platform 800 that are referred to by common reference numerals. Similarly, FIG. 1D presents a block diagram representation of an example content analysis platform 875 that includes several elements of the AI development platform 800 that are referred to by common reference numerals. - While the discussions of the
AI development platform 800 have focused on the development and analysis of AI models, it should be noted that many of the elements of the AI development platform 800 also apply to the analysis of other media content that may or may not be AI related. The content analysis system 865, for example, includes content analysis tools/widgets 808′ that include the same or similar tools to the AI analysis tools/widgets 808, but that operate on media content or other content data, be it AI generated or not. -
FIG. 1E presents a block diagram representation of an example client device in accordance with various embodiments. In particular, a client device 825 is presented that includes a network interface 220 such as a 3G, 4G, 5G or other cellular wireless transceiver, a Bluetooth transceiver, a WiFi transceiver, UltraWideBand transceiver, WIMAX transceiver, ZigBee transceiver or other wireless interface, a Universal Serial Bus (USB) interface, an IEEE 1394 Firewire interface, an Ethernet interface or other wired interface and/or other network card or modem for communicating via network 115. - The
client device 825 also includes aprocessing module 230 andmemory module 240 that stores an operating system (O/S) 244 such as an Apple, Unix, Linux or Microsoft operating system or other operating system,training data 120, and one ormore gaming applications 248. In particular, the O/S 244 andgaming application 248 each include operational instructions that, when executed by theprocessing module 230, cooperate to configure the processing module into a special purpose device to perform the particular functions of theclient device 825 described herein. - The
client device 825 also includes a user interface (I/F) 262 such as a display device, touch screen, key pad, touch pad, joy stick, thumb wheel, a mouse, one or more buttons, a speaker, a microphone, an accelerometer, gyroscope or other motion or position sensor, video camera or other interface devices that provide information to a user of theclient device 825 and that generate data in response to the user's interaction with theclient device 825. - The
processing module 230 can be implemented via a single processing device or a plurality of processing devices. Such processing devices can include a microprocessor, micro-controller, digital signal processor, microcomputer, central processing unit, quantum computing device, field programmable gate array, programmable logic device, state machine, logic circuitry, analog circuitry, digital circuitry, and/or any device that manipulates signals (analog and/or digital) based on operational instructions that are stored in a memory, such asmemory 240. Thememory module 240 can include a hard disc drive or other disc drive, read-only memory, random access memory, volatile memory, non-volatile memory, static memory, dynamic memory, flash memory, cache memory, and/or any device that stores digital information. Note that when the processing device implements one or more of its functions via a state machine, analog circuitry, digital circuitry, and/or logic circuitry, the memory storing the corresponding operational instructions may be embedded within, or external to, the circuitry comprising the state machine, analog circuitry, digital circuitry, and/or logic circuitry. While a particular bus architecture is presented that includes asingle bus 260, other architectures are possible including additional data buses and/or direct connectivity between one or more elements. Further, theclient device 825 can include one or more additional elements that are not specifically shown. - The
client device 825 operates, vianetwork interface 220,network 115 andAI development platform 800 and/orcontent analysis platform 875. In various embodiments, theclient device 825 operates to display a graphical user interface, such as a content analysis control panel or other user interface. For example, theclient device 825 displays a content analysis control panel based on content analysis control panel data generated by either theAI analysis platform 800 or thecontent analysis platform 875 and, in particular, the graphical user interface can display one or more screen displays based on data generated by theAI development platform 800 and/orcontent analysis platform 875. Furthermore, the graphical user interface can operate in response to interactions by a user to generate input data that is sent to theAI development platform 800 and/orcontent analysis platform 875 to control the operation of theAI development platform 800 and/orcontent analysis platform 875 and/or to provide other input. -
FIG. 2A presents a flowchart representation of an example method in accordance with various embodiments. In particular, a method 600 is presented for use with any of the functions and features discussed in conjunction with FIGS. 1A-1E. Furthermore, a system comprising a network interface configured to communicate via a network, at least one processor and a non-transitory machine-readable storage medium can store operational instructions that, when executed by the at least one processor, cause the at least one processor to perform operations that include any of the method steps described below.
- Step 604 includes facilitating, via the AI development platform, the development of a training dataset associated with at least one of the plurality of client devices. Step 606 includes providing, via the AI development platform, access to a plurality of auto machine learning tools to facilitate the development of an AI model. Step 608 includes providing, via the AI development platform, access to a plurality of AI analysis widgets to facilitate the evaluation of the AI model, wherein the plurality of AI analysis widgets include a plurality of virtue scoring models that predict virtue scores for the AI model associated with the plurality of virtues. Step 610 includes providing, via the AI development platform, access to a version control repository for storing a plurality of versions of the training dataset and the AI model.
-
FIG. 2B presents a flowchart representation of an example method in accordance with various embodiments. In particular, a method 620 is presented for use with any of the functions and features discussed in conjunction with FIGS. 1A-1E and/or the method of FIG. 2A. Furthermore, a system comprising a network interface configured to communicate via a network, at least one processor and a non-transitory machine-readable storage medium can store operational instructions that, when executed by the at least one processor, cause the at least one processor to perform operations that include any of the method steps described below.
- Step 626 includes receiving, via the machine, content data. Step 628 includes generating, via the machine, predicted virtue score data associated with the content data for each of the plurality of virtue scoring models. Step 630 includes facilitating display, via the content analysis control panel and in accordance with the customization data, the predicted virtue score data associated with the content data for each of the plurality of virtue scoring models.
- In addition or in the alternative, the plurality of virtue scoring models include a plurality of artificial intelligence (AI) models that are each trained based on survey data to generate portions of the predicted virtue score data indicating a corresponding one of a plurality of scores.
- In addition or in the alternative, the plurality of AI models includes a responsibility model and the plurality of scores includes a responsibility score that is based on an amount the content data addresses legal or ethical principles.
- In addition or in the alternative, the plurality of AI models includes an equitability model and the plurality of scores includes an equitability score that is based on an amount of bias in the content data.
- In addition or in the alternative, the plurality of AI models includes a reliability model and the plurality of scores includes a reliability score that indicates variations in others of the plurality of scores.
- In addition or in the alternative, the plurality of AI models includes an explainability model and the plurality of scores includes an explainability score associated with the content data.
- In addition or in the alternative, the plurality of AI models includes a morality model and the plurality of scores includes a morality score associated with the content data.
- In addition or in the alternative, the method can further include generating improvement data associated with at least one of the plurality of scores.
- In addition or in the alternative, the content data is an Artificial Intelligence (AI) model.
- In addition or in the alternative, the presentation parameters includes a customized selection of at least one of: at least one statistic, at least one chart, or at least one graph.
- In addition or in the alternative, the method can further include displaying, via the content analysis control panel and in accordance with the customization data, of at least one of: at least one protected attribute, or at least one key performance indicator.
- In addition or in the alternative, the method can further include facilitating selection of the content data from at least one of: an AI model, or a content source.
- In addition or in the alternative, the method can further include generating, based on user input, survey data corresponding to a survey; collecting survey results data in response to the survey; and facilitating generation of a custom virtue scoring model of the plurality of virtue scoring models.
-
FIG. 2C presents a flowchart representation of an example method in accordance with various embodiments. In particular, a method 640 is presented for use with any of the functions and features discussed in conjunction with FIGS. 1A-1E and/or the methods of FIGS. 2A and/or 2B. Furthermore, a system comprising a network interface configured to communicate via a network, at least one processor and a non-transitory machine-readable storage medium can store operational instructions that, when executed by the at least one processor, cause the at least one processor to perform operations that include any of the method steps described below.
- Step 646 includes generating, utilizing machine learning and via the machine, a customized virtue scoring model based on the custom survey data and the survey results data. Step 648 includes receiving, via the machine, content data. Step 650 includes generating, via the machine and utilizing the customized virtue scoring model, predicted virtue score data associated with the content data. Step 652 includes facilitating display, via the graphical user interface, the predicted virtue score data associated with the content data.
- In addition or in the alternative, the customized virtue scoring model includes an artificial intelligence (AI) model that is trained based on at least one of: the custom survey data or the survey results data.
- In addition or in the alternative, the AI model includes a responsibility model and the predicted virtue score data indicates a responsibility score that is based on an amount the content data addresses legal or ethical principles.
- In addition or in the alternative, the AI model includes an equitability model and the predicted virtue score data indicates an equitability score that is based on an amount of bias in the content data.
- In addition or in the alternative, the AI model includes a reliability model and the predicted virtue score data indicates a reliability score that indicates variations in an others virtue scores.
- In addition or in the alternative, the AI model includes an explainability model and the predicted virtue score data indicates an explainability score associated with the content data.
- In addition or in the alternative, the AI model includes a morality model and the predicted virtue score data indicates a morality score associated with the content data.
- In addition or in the alternative, the method further includes generating improvement data associated with and the predicted virtue score data.
- In addition or in the alternative, the content data is an Artificial Intelligence (AI) model.
- In addition or in the alternative, the method further includes facilitating selection of the content data from at least one of: an AI model, or a content source.
- In addition or in the alternative, the customized virtue scoring model includes an artificial intelligence (AI) model and wherein generating the customized virtue scoring model includes providing access to a plurality of AI analysis widgets to facilitate an evaluation of the AI model.
- In addition or in the alternative, the plurality of AI analysis widgets include a plurality of virtue scoring models that predict virtue scores for the AI model associated with a plurality of virtues.
- In addition or in the alternative, the customized virtue scoring model includes an artificial intelligence (AI) model and wherein the method further comprises providing access to a version control repository for storing a plurality of versions of a training dataset and a plurality of version of the AI model.
-
FIG. 2D presents a flowchart representation of an example method in accordance with various embodiments. In particular, a method 660 is presented for use with any of the functions and features discussed in conjunction with FIGS. 1A-1E and/or the methods of FIGS. 2A, 2B and/or 2C. Furthermore, a system comprising a network interface configured to communicate via a network, at least one processor and a non-transitory machine-readable storage medium can store operational instructions that, when executed by the at least one processor, cause the at least one processor to perform operations that include any of the method steps described below.
- Step 666 includes detecting, via one or more AI models implemented via the machine, detection data that includes first portions of the content data associated with a protected attribute and second portions of the content data associated with a predetermined metric. Step 668 includes generating, via the machine, analysis data associated with the protected attribute and the predetermined metric. Step 670 includes facilitating display, via the content analysis control panel, the analysis data associated with the protected attribute and the predetermined metric.
- In addition or in the alternative, the protected attribute is a potential source of discrimination.
- In addition or in the alternative, the potential source of discrimination is at least one of: gender, race, age, religion, ethnicity, sexual preference, or disability.
- In addition or in the alternative, the predetermined metric is a key performance indicator that varies based on the potential source of discrimination.
- In addition or in the alternative, the predetermined metric is a term that varies based on the potential source of discrimination.
- In addition or in the alternative, the predetermined metric indicates at least one grade point average.
- In addition or in the alternative, the predetermined metric indicates at least one salary.
- In addition or in the alternative, the predetermined metric indicates at least one job offers.
- In addition or in the alternative, the predetermined metric indicates at least one loan approval or disapproval.
- In addition or in the alternative, the predetermined metric indicates at least one credit score.
- In addition or in the alternative, the predetermined metric indicates at least one job promotion.
- In addition or in the alternative, the predetermined metric indicates at least one arrest.
-
FIG. 3A presents a block diagram representation of an example AI auto-detection model. In particular, an AI auto-detection model 302 is shown that is an example of an AI analysis tool/widget 808 and/or a content analysis tool/widget 808′. AI auto-detection model 302 is trained viatraining data 306 to recognize portions ofinput data 300 that contain or are predicted to contain, one or more protected attributes or other metrics. In various examples, theinput data 300 can be AI input/output data of an underlying AI process to be analyzed and/or content data from other media content from a media source to be analyzed. - In various examples, the protected attributes can include terms related to gender, sex, race, age, religion, ethnicity, sexual preference, disabilities or other terms associated with potential sources of discrimination. The metrics can, for example, include one or more terms, key performance indicators (KPIs) or other factors that could be present in the
input data 300 and can vary based on such sources of discrimination. Examples of such metrics include grade point average, salary, job offers, loan approvals or disapprovals, credit scores, promotions, arrests, etc., depending on the type of data being analyzed. - In various examples, the AI auto-
detection model 302 uses deep layered natural language processing or other AI that is trained based on training data 306 that contains these terms, region variations, common or expected misspellings of these terms, alternative terms, etc. In operation, the AI auto-detection model 302 generates detection data 304, such as columnar or tabular data containing labels that indicate the terms identified in the input data 300. While the AI auto-detection model 302 is shown as a single model, the AI auto-detection model 302 may contain a plurality of individual AI models, for example, each trained to recognize one corresponding term to be detected.
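- To make the inputs and outputs of the AI auto-detection model 302 concrete, the following is a rule-based stand-in that labels column headers of the input data 300, tolerating misspellings via fuzzy matching; the disclosed model would instead learn such mappings from the training data 306, and the lexicon below is purely illustrative.

```python
# Rule-based stand-in for the trained auto-detection model (illustrative only).
from difflib import get_close_matches

LEXICON = {
    "gender": "protected_attribute", "race": "protected_attribute",
    "age": "protected_attribute", "salary": "metric",
    "gpa": "metric", "credit score": "metric", "lsat": "metric",
}


def detect_labels(column_headers, cutoff=0.8):
    """Return detection data: header -> detected label (or None if unrecognized)."""
    detection_data = {}
    for header in column_headers:
        match = get_close_matches(header.lower(), LEXICON.keys(), n=1, cutoff=cutoff)
        detection_data[header] = LEXICON[match[0]] if match else None
    return detection_data


print(detect_labels(["Gendre", "Salary", "notes"]))  # the misspelled "Gendre" still maps to gender
```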
FIG. 3B presents a block diagram representation of an example auto-mapping function. In particular, an auto-mapping function 312 is shown that is a further example of an AI analysis tool/widget 808 and/or a content analysis tool/widget 808′. - In various embodiments, the auto-
mapping function 312 operates on the detection data 304 and applies a continuous distribution, categorical distribution, binned distribution or other statistical analysis to generate analysis data indicating statistics and/or other values regarding protected attributes and metrics. Illustrative examples include:
- race=36% white/Asian, 64% other races
- Age=23% over 65
- GPA=3.213+/−0.53
- LSAT score=32+/−7
- Etc.
The auto-mapping function 312 can be implemented via one or more parametric or non-parametric statistical functions. In other examples, the auto-mapping function 312 can be implemented via AI techniques and optionally be trained based on training data 316 to generate the analysis data 314. While the auto-mapping function 312 is shown as a single function, the auto-mapping function 312 may contain a plurality of individual functions, for example, each operable to generate statistics or other analysis data 314 for a corresponding term or set of terms indicated by the detection data 304.
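- A minimal sketch of the kind of analysis data 314 the auto-mapping function 312 could emit for a detected column is shown below: category proportions for discrete values and a mean with standard deviation for numeric values. The formatting mirrors the illustrative examples above but is otherwise an assumption.

```python
# Hypothetical auto-mapping of one detected column into summary statistics.
from statistics import mean, stdev


def auto_map(column_values):
    if all(isinstance(v, (int, float)) for v in column_values):
        return f"{mean(column_values):.3f}+/-{stdev(column_values):.2f}"
    total = len(column_values)
    counts = {v: column_values.count(v) for v in set(column_values)}
    return {value: f"{100 * count / total:.0f}%" for value, count in counts.items()}


print(auto_map(["white/Asian", "other", "other"]))  # e.g. {'white/Asian': '33%', 'other': '67%'}
print(auto_map([3.1, 3.4, 2.9, 3.6]))               # '3.250+/-0.31'
```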
-
FIG. 3C presents a block diagram representation of an example virtue scoring model. In particular, avirtue scoring model 322 is shown that is a further example of an AI analysis tool/widget 808 and/or a content analysis tool/widget 808′. In operation, thevirtue scoring model 322 is trained, for example, viatraining data 326 to generate avirtue score 324 corresponding to a particular virtue in response tocontent data 320 such asanalysis data 314, AI output data of an underlying AI process to be analyzed and/or content data from other media content from a media source to be analyzed. While thevirtue scoring model 322 is shown as a single model, thevirtue scoring model 322 may contain a plurality of individual models, each corresponding to a different standard or customizedvirtue score 324. - Examples of the virtue scoring model(s) 322 include:
-
- A responsibility scoring model trained to generate a
virtue score 324 indicating a responsibility score or other metric that indicates, for example, how well underlying AI or other content is addressing legal and/or ethical principles; - An equitability scoring model trained to generate a
virtue score 324 indicating an equitability score or other metric, that indicates, for example, an amount (or lack of) bias in the underlying AI or other content data; - A reliability scoring model or other function that generates a
virtue score 324 indicating that identifies variations or drift inother virtue scores 324 or other changes in AI input or output data from the training set that can. For example, indicate the need to retrain the underlying AI or investigate the cause of changes in scores in content data; - An explainability scoring model trained to generate a
virtue score 324 indicating an explainability score or other metric indicating, for example, how transparent an underlying AI process is; - One or more sub-models relating to portions of the results above, that for example, can be used to construct overall virtue scores 324; and
- One or more user customized virtues, trained for example, based on results from user defined surveys to generate
other virtue scores 324 that are different than those listed above and address a particular user problem or concern.
- A responsibility scoring model trained to generate a
-
FIG. 3D presents a block/flow diagram representation of an example survey creation process. As previously discussed, theAI development platform 800 andcontent analysis platform 875 are operable to generate customized virtue scoring models that are trained or otherwise generated based on custom survey data and the survey results data. In the example, asurvey creation widget 342 that is a further example of an AI analysis tool/widget 808 and/or a content analysis tool/widget 808′ is used to create a custom survey 344 based oncustom survey data 340 input by the user via, for example, the content analysis control panel. The survey resultsdata 348 are generated based onsurvey input 346 from survey participants. - While the custom survey 344 is shown as a single survey, the
survey creation widget 342 can be used to generate multiple custom surveys for multiple custom virtue scoring models. Furthermore,survey data results 348 andcustom survey data 340 generated in this fashion can also be used to train any of the standard virtue scoring models discussed above. -
FIG. 3E presents a pictorial/block diagram representation of an example of controlpanel generation tools 806. In the example shown control panel generation tools store control panel setting and customization parameters 352 that are generated via interaction the user and user input 350. In operation, the controlpanel generation tools 806 generate content analysiscontrol panel data 354, based on further user input 350 and the AI analysis tools/widgets 808 or content analysis tools/widgets 808′. This content analysiscontrol panel data 354 is formatted for display via a display device of a client device, such asclient device 825 to reproduce the content analysis control panel 360. An example screen display is shown inFIG. 3F . - As previously discussed, the content analysis control panel 360 generated via the set of control
panel generation tools 806 operates in conjunction with the AI analysis tools/widgets 808 to provide a graphical user interface that aids the user by gathering and presenting AI data and/or other content for analysis, the creation of custom virtue scoring models, the selection of particular virtue scoring models (either custom or preset) to be used, and the presentation of virtue scores and other analysis results. For example, the content analysis control panel 360 operates via the controlpanel generation tools 806 and associated AI analysis tools/widgets 808 to: -
- guide the user through customization of control panel settings and customization parameters 352 used to generate the content analysis control panel 360;
- facilitate the selection of data sets from an AI model or content source in addition to the selection of protected attributes, key performance indicators and/or other metrics;
- identify, map and present data associated with the protected attributes, key performance indicators and/or other metrics including a customized selection of statistics, charts, graphs and/or other visualizations;
- facilitate the generation of survey data and collection of survey results data to facilitate the generation of custom and/or standard virtue scoring models;
- generate and present virtue scores associated with a selected group of virtue scoring models including a customized selection of statistics, charts, graphs and/or other visualizations of each score; and
- generate and present suggested improvements to any of the virtue scores associated with a selected group of virtue scoring models.
-
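- The configuration sketch below is purely illustrative of one possible shape for control panel settings and customization parameters 352 and for the content analysis control panel data 354 derived from them. The field names are assumptions of the sketch, while the pipeline name, metrics and protected attribute mirror the example screen displays described below.

```python
import json
from dataclasses import dataclass, field, asdict
from typing import Dict, List

@dataclass
class ControlPanelSettings:
    """One hypothetical shape for control panel settings/customization parameters."""
    pipeline_name: str
    selected_virtue_models: List[str]      # standard and/or custom scoring models
    protected_attributes: List[str]        # attributes tracked by the bias monitor
    key_performance_indicators: List[str]  # metrics surfaced on the panel
    presentation: Dict[str, str] = field(default_factory=dict)  # chart/graph choices

settings = ControlPanelSettings(
    pipeline_name="medical treatment selection pipeline",
    selected_virtue_models=["responsibility", "equitability"],
    protected_attributes=["race"],
    key_performance_indicators=["last", "ugpa", "zfgpa"],
    presentation={"equitability": "bar_chart", "responsibility": "gauge"},
)

# Serializing the settings is one way control panel data could be produced for
# rendering on a client device.
print(json.dumps(asdict(settings), indent=2))
```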
FIGS. 4A-4V and 5A-5E present graphical diagram representations of example screen displays or portions thereof corresponding to a content analysis control panel. In particular, FIG. 4A presents a screen display of a content analysis control panel (CACP) of a User "Jane Doe". The CACP includes a news feed that shows various AI related articles that can be individually accessed and read by the user. In FIG. 4B, the user has accessed a drop-down menu and chosen to create a new AI pipeline. In FIG. 4C, a popup window is shown that allows the user to input a title and description of the new pipeline. In FIG. 4D, the CACP is shown after the user has chosen to name the new pipeline, "medical treatment selection pipeline". The screen display indicates that there is currently no data for the pipeline and prompts the user to import data in order to get started. In particular, the user has the option of dragging and dropping a data set into the window or using an API of the system. -
In FIG. 4E, the user has imported a dataset and auto-detection and auto-mapping have been performed by the AI analysis widgets/tools 808 to categorize the metrics "last", "ugpa" and "zfgpa" by race, either "white/Asian" or "other". Input data sets can, for example, be in columnar format with columns representing different datatypes. Input data sets can be static, continuously updated and/or updated periodically (e.g., once a day, once a week, once a month, etc.). In FIG. 4F, the user has elected to view a history of datasets that have been entered, their respective dates and who they were added by ("Rory", in this case). -
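- As a minimal sketch of the kind of auto-detection and auto-mapping described above, and not the disclosed implementation, the fragment below infers which columns of an imported columnar data set are numeric metrics and which column is a low-cardinality protected attribute, and then maps each metric by group. The detection heuristic and the sample values are assumptions; only the column names mirror the example above.

```python
import pandas as pd

# Hypothetical imported data set in columnar format.
df = pd.DataFrame({
    "last":  [42.0, 38.5, 44.0, 36.0],
    "ugpa":  [3.6, 3.1, 3.8, 2.9],
    "zfgpa": [0.8, -0.2, 1.1, -0.5],
    "race":  ["white/Asian", "other", "white/Asian", "other"],
})

# Auto-detection: treat numeric columns as metrics and low-cardinality
# text columns as candidate protected attributes (a simple heuristic).
metrics = [c for c in df.columns if pd.api.types.is_numeric_dtype(df[c])]
protected = [c for c in df.columns
             if df[c].dtype == object and df[c].nunique() <= 5]

# Auto-mapping: summarize each metric by protected-attribute group.
for attribute in protected:
    print(df.groupby(attribute)[metrics].mean())
```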
In FIG. 4G, the user has customized the CACP by entering customization data to select and generate two particular virtue scoring models for the selected pipeline, a responsible/responsibility scoring model and an equitable/equitability scoring model. - Furthermore, the user has selected presentation parameters, either default or customized, for each scoring model to indicate how the virtue scores will be displayed, for example, by particular graphs, charts, or other graphics or visual indications. In this case, the CACP prompts the user to fill out a survey in order to train the responsibility scoring model. Equitability scores are presented in a window below in the chosen presentation format along with an overall fairness index in the upper right portion of the screen. This fairness index can be generated based on a function/combination of the user-selected virtues or based on all virtues, depending on the implementation.
- As shown in the panel on the right, robustness and traceability scoring models are also available, as well as links to tools that assist the user in improving responsibility, equitability, robustness and/or other standard virtues. As shown in the panel on the left, the user is given the options to retrain or deploy any of the selected virtue scoring models. Icons can also be provided allowing the user to seek human-in-the-loop (HIL) feedback and/or to share results with private groups, public groups, social media, etc.
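- Purely as an illustrative sketch of one statistic an equitability scoring model or bias monitor could report, and not the equitability score defined by the platform, the fragment below computes a demographic parity difference between protected-attribute groups and shows how a larger gap could map to a lower score. The data and the simple 1 - gap mapping are assumptions of the sketch.

```python
import pandas as pd

def demographic_parity_difference(df: pd.DataFrame, outcome: str, attribute: str) -> float:
    """Absolute gap in favorable-outcome rates between protected-attribute groups."""
    rates = df.groupby(attribute)[outcome].mean()
    return float(rates.max() - rates.min())

# Hypothetical model decisions (1 = favorable outcome) by protected group.
decisions = pd.DataFrame({
    "selected": [1, 0, 1, 1, 0, 1, 0, 0],
    "race": ["white/Asian", "white/Asian", "white/Asian", "white/Asian",
             "other", "other", "other", "other"],
})

gap = demographic_parity_difference(decisions, "selected", "race")
# A larger gap could be mapped to a lower equitability score, e.g. 1 - gap.
print(gap, 1.0 - gap)  # 0.5 0.5
```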
- In
FIG. 4H, the equitability scoring window/bias monitor is selected and several different data overviews are presented in various and possibly user-selected formats. In FIGS. 4I and 4J, the explainability scoring window/bias monitor is selected and several different data overviews are presented in various and possibly user-selected formats. In FIG. 4I, a macro-view of a data overview is shown that breaks down a "loan" metric into four different components. In FIG. 4J, a micro-view is shown where a total/overall score ("good") is presented along with a breakdown of various inputs/features that contribute to that score. A prompt is provided that allows the user to retrain the explainer (e.g., the explainability virtue scoring model). In the panel on the right, the user can query the system on the effects of selected features and how to change certain features to receive certain scores, for example. In addition, the user is presented an option to create an extension. -
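- The micro-view described above can be thought of as a per-feature contribution breakdown of a single prediction. The sketch below illustrates that idea for a simple linear model, where each feature's contribution is its coefficient times its deviation from the average input; the model, feature names and data are assumptions of the sketch and stand in for whatever explainer the platform actually employs.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training data for a "loan"-style score with three input features.
X = np.array([[0.2, 0.9, 0.1],
              [0.8, 0.3, 0.7],
              [0.5, 0.5, 0.4],
              [0.9, 0.2, 0.8]])
y = np.array([0.35, 0.75, 0.50, 0.85])
feature_names = ["income", "debt_ratio", "history_length"]  # illustrative names only

model = LinearRegression().fit(X, y)

# Micro-view: break one prediction down into per-feature contributions relative
# to the average input (contribution = coefficient * deviation from the mean).
sample = np.array([0.7, 0.4, 0.6])
baseline = X.mean(axis=0)
contributions = model.coef_ * (sample - baseline)

print("total score:", float(model.predict(sample.reshape(1, -1))[0]))
for name, contribution in zip(feature_names, contributions):
    print(f"{name}: {contribution:+.3f}")
```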
FIG. 4K presents an interface on the CACP that uses the control panel generation tools to permit the user to create one or more customized control panels. Templates are available related to the categories "healthcare" and "finance" for users that want to start from a pre-existing control panel configuration, as well as a blank template for users that wish to start from scratch. As indicated, control panels can be designated as either public or private. As shown at the bottom of the screen display, a user that does not see a desired feature may add feedback for the administrator of the platform to perhaps include it in a later release. -
In FIG. 4L, the user has selected to create a new control panel and is prompted to enter a control panel name. The user is also allowed to create a scoring model for a new/customized virtue. In FIG. 4M, the user has used a survey widget to create a survey for a new virtue, "Virtue I". In FIG. 4N, an example of the survey widget is shown. In FIG. 4O, the user selects the audience for completing the survey based on particular names and email addresses (i.e., to generate survey input/results). The user can select an existing crowd (employees, for example), create a new crowd as shown in FIG. 4P, or proceed with a general crowd source. A screen display generated by the survey widget for a new survey is presented in FIG. 4Q. -
FIG. 4R presents a cloud portal of the CACP that presents various service guides and a link to the news feed of FIG. 4A. In FIG. 4S, the user is selecting to access the API reference materials. FIG. 4T presents a static/predetermined survey magnitude slider that can be used to customize an AI analysis widget corresponding to the bias monitor and equitability scoring model to enable virtue tracing based on scoring magnitudes. FIG. 4U presents a widget creator that allows a user to create/customize his/her own widgets. In FIGS. 4V-4X, the user has selected different output formats for display in conjunction with the AI analysis widget corresponding to the bias monitor and equitability scoring model. FIG. 4Y presents a billing and payment screen. -
In FIG. 5A, the survey widget configures a survey for multiple-choice questions. In FIG. 5B, the survey widget configures a survey with multiple-choice questions with answers input by users via slider-bars. In FIG. 5C, the survey widget configures a survey with short answers input by users. In FIG. 5D, API options and instructions are provided to facilitate the input of datasets. -
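- As a minimal, hypothetical sketch only, the structure below shows one way a survey widget could represent the multiple-choice, slider-bar and short-answer question types described above, along with lightweight validation of survey input; the schema and field names are assumptions rather than a disclosed format.

```python
import json

# Hypothetical survey definition mixing the question types described above.
custom_survey = {
    "title": "Virtue I survey",
    "questions": [
        {"type": "multiple_choice",
         "prompt": "Does this decision treat all applicants consistently?",
         "choices": ["Yes", "No", "Unsure"]},
        {"type": "slider",
         "prompt": "Rate how well this outcome reflects Virtue I.",
         "min": 1, "max": 5},
        {"type": "short_answer",
         "prompt": "Briefly explain your rating."},
    ],
}

def validate_answer(question, answer):
    """Lightweight validation of survey input against the question type."""
    if question["type"] == "multiple_choice":
        return answer in question["choices"]
    if question["type"] == "slider":
        return question["min"] <= answer <= question["max"]
    return isinstance(answer, str) and answer.strip() != ""

print(json.dumps(custom_survey, indent=2))
print(validate_answer(custom_survey["questions"][1], 4))  # True
```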
FIGS. 6A-6F present graphical diagram representations of example screen displays or portions thereof of another example content analysis control panel. In particular, example screen displays are presented as part of the graphical user interface implemented via the AI development platform 800. -
In various embodiments, the AI development platform 800 supports a communal development framework that allows users to view repositories on people's walls, view other profiles to see public work, promote trust through transparency, allow people to be involved in decisions, add friends and follow people and organizational work, approve/disapprove work, and borrow others' code by forking or cloning their repositories. This communal development framework also supports AI ethics discussion in ethics forums and/or other forums where a user posts a question, others can answer, and users can comment on questions and answers. Documentation can be provided in a "Learn" section, which includes information on AI, how to use Version Control, the Data API, an AI moral insight model, etc. In various embodiments, only users/subscribers are allowed to post, but others can look at questions and answers. - In various embodiments, this communal development framework also supports a news feed that allows users to educate themselves on machine learning, ethics, current events in AI ethics, etc. Users can also create their own content. Tools can be provided to aid users in setting the tone of their contributions and otherwise to provide a guide on how to post. This communal development framework also supports organizational billing for cloud services, allowing users to, for example, choose their organization with billing credentials and print out a quick report. Variable subscription plans can be offered that allow users to subscribe to the specific services and/or level of use they may need.
- As used herein the terms “widget”, “tool” and “toolkit” correspond to a website, utility, platform, computer, cloud device and/or software routine that performs one or more specific functions.
- It is noted that terminologies as may be used herein such as bit stream, stream, signal sequence, etc. (or their equivalents) have been used interchangeably to describe digital information whose content corresponds to any of a number of desired types (e.g., data, video, speech, text, graphics, audio, etc. any of which may generally be referred to as ‘data’).
- As may be used herein, the terms “substantially” and “approximately” provide an industry-accepted tolerance for its corresponding term and/or relativity between items. For some industries, an industry-accepted tolerance is less than one percent and, for other industries, the industry-accepted tolerance is 10 percent or more. Other examples of industry-accepted tolerance range from less than one percent to fifty percent. Industry-accepted tolerances correspond to, but are not limited to, component values, integrated circuit process variations, temperature variations, rise and fall times, thermal noise, dimensions, signaling errors, dropped packets, temperatures, pressures, material compositions, and/or performance metrics. Within an industry, tolerance variances of accepted tolerances may be more or less than a percentage level (e.g., dimension tolerance of less than +/−1%). Some relativity between items may range from a difference of less than a percentage level to a few percent. Other relativity between items may range from a difference of a few percent to magnitude of differences.
- As may also be used herein, the term(s) “configured to”, “operably coupled to”, “coupled to”, and/or “coupling” includes direct coupling between items and/or indirect coupling between items via an intervening item (e.g., an item includes, but is not limited to, a component, an element, a circuit, and/or a module) where, for an example of indirect coupling, the intervening item does not modify the information of a signal but may adjust its current level, voltage level, and/or power level. As may further be used herein, inferred coupling (i.e., where one element is coupled to another element by inference) includes direct and indirect coupling between two items in the same manner as “coupled to”.
- As may even further be used herein, the term "configured to", "operable to", "coupled to", or "operably coupled to" indicates that an item includes one or more of power connections, input(s), output(s), etc., to perform, when activated, one or more of its corresponding functions and may further include inferred coupling to one or more other items. As may still further be used herein, the term "associated with" includes direct and/or indirect coupling of separate items and/or one item being embedded within another item.
- As may be used herein, the term "compares favorably", indicates that a comparison between two or more items, signals, etc., indicates an advantageous relationship that would be evident to one skilled in the art in light of the present disclosure, and based, for example, on the nature of the signals/items that are being compared. As may be used herein, the term "compares unfavorably", indicates that a comparison between two or more items, signals, etc., fails to provide such an advantageous relationship and/or that provides a disadvantageous relationship. Such an item/signal can correspond to one or more numeric values, one or more measurements, one or more counts and/or proportions, one or more types of data, and/or other information with attributes that can be compared to a threshold, to each other and/or to attributes of other information to determine whether a favorable or unfavorable comparison exists. Examples of such an advantageous relationship can include: one item/signal being greater than (or greater than or equal to) a threshold value, one item/signal being less than (or less than or equal to) a threshold value, one item/signal being greater than (or greater than or equal to) another item/signal, one item/signal being less than (or less than or equal to) another item/signal, one item/signal matching another item/signal, one item/signal substantially matching another item/signal within a predefined or industry accepted tolerance such as 1%, 5%, 10% or some other margin, etc. Furthermore, one skilled in the art will recognize that such a comparison between two items/signals can be performed in different ways. For example, when the advantageous relationship is that
signal 1 has a greater magnitude than signal 2, a favorable comparison may be achieved when the magnitude of signal 1 is greater than that of signal 2 or when the magnitude of signal 2 is less than that of signal 1. Similarly, one skilled in the art will recognize that the comparison of the inverse or opposite of items/signals and/or other forms of mathematical or logical equivalence can likewise be used in an equivalent fashion. For example, the comparison to determine if a signal X>5 is equivalent to determining if −X<−5, and the comparison to determine if signal A matches signal B can likewise be performed by determining −A matches −B or not(A) matches not(B). As may be discussed herein, the determination that a particular relationship is present (either favorable or unfavorable) can be utilized to automatically trigger a particular action. Unless expressly stated to the contrary, the absence of that particular condition may be assumed to imply that the particular action will not automatically be triggered. In other examples, the determination that a particular relationship is present (either favorable or unfavorable) can be utilized as a basis or consideration to determine whether to perform one or more actions. Note that such a basis or consideration can be considered alone or in combination with one or more other bases or considerations to determine whether to perform the one or more actions. In one example where multiple bases or considerations are used to determine whether to perform one or more actions, the respective bases or considerations are given equal weight in such determination. In another example where multiple bases or considerations are used to determine whether to perform one or more actions, the respective bases or considerations are given unequal weight in such determination. - As may be used herein, one or more claims may include, in a specific form of this generic form, the phrase "at least one of a, b, and c" or of this generic form "at least one of a, b, or c", with more or less elements than "a", "b", and "c". In either phrasing, the phrases are to be interpreted identically. In particular, "at least one of a, b, and c" is equivalent to "at least one of a, b, or c" and shall mean a, b, and/or c. As an example, it means: "a" only, "b" only, "c" only, "a" and "b", "a" and "c", "b" and "c", and/or "a", "b", and "c".
- As may also be used herein, the terms “processing module”, “processing circuit”, “processor”, “processing circuitry”, and/or “processing unit” may be a single processing device or a plurality of processing devices. Such a processing device may be a microprocessor, micro-controller, digital signal processor, microcomputer, central processing unit, field programmable gate array, programmable logic device, state machine, logic circuitry, analog circuitry, digital circuitry, and/or any device that manipulates signals (analog and/or digital) based on hard coding of the circuitry and/or operational instructions. The processing module, module, processing circuit, processing circuitry, and/or processing unit may be, or further include, memory and/or an integrated memory element, which may be a single memory device, a plurality of memory devices, and/or embedded circuitry of another processing module, module, processing circuit, processing circuitry, and/or processing unit. Such a memory device may be a read-only memory, random access memory, volatile memory, non-volatile memory, static memory, dynamic memory, flash memory, cache memory, and/or any device that stores digital information. Note that if the processing module, module, processing circuit, processing circuitry, and/or processing unit includes more than one processing device, the processing devices may be centrally located (e.g., directly coupled together via a wired and/or wireless bus structure) or may be distributedly located (e.g., cloud computing via indirect coupling via a local area network and/or a wide area network). Further note that if the processing module, module, processing circuit, processing circuitry and/or processing unit implements one or more of its functions via a state machine, analog circuitry, digital circuitry, and/or logic circuitry, the memory and/or memory element storing the corresponding operational instructions may be embedded within, or external to, the circuitry comprising the state machine, analog circuitry, digital circuitry, and/or logic circuitry. Still further note that, the memory element may store, and the processing module, module, processing circuit, processing circuitry and/or processing unit executes, hard coded and/or operational instructions corresponding to at least some of the steps and/or functions illustrated in one or more of the Figures. Such a memory device or memory element can be included in an article of manufacture.
- One or more embodiments have been described above with the aid of method steps illustrating the performance of specified functions and relationships thereof. The boundaries and sequence of these functional building blocks and method steps have been arbitrarily defined herein for convenience of description. Alternate boundaries and sequences can be defined so long as the specified functions and relationships are appropriately performed. Any such alternate boundaries or sequences are thus within the scope and spirit of the claims. Further, the boundaries of these functional building blocks have been arbitrarily defined for convenience of description. Alternate boundaries could be defined as long as the certain significant functions are appropriately performed. Similarly, flow diagram blocks may also have been arbitrarily defined herein to illustrate certain significant functionality.
- To the extent used, the flow diagram block boundaries and sequence could have been defined otherwise and still perform the certain significant functionality. Such alternate definitions of both functional building blocks and flow diagram blocks and sequences are thus within the scope and spirit of the claims. One of average skill in the art will also recognize that the functional building blocks, and other illustrative blocks, modules and components herein, can be implemented as illustrated or by discrete components, application specific integrated circuits, processors executing appropriate software and the like or any combination thereof.
- In addition, a flow diagram may include a “start” and/or “continue” indication. The “start” and “continue” indications reflect that the steps presented can optionally be incorporated in or otherwise used in conjunction with one or more other routines. In addition, a flow diagram may include an “end” and/or “continue” indication. The “end” and/or “continue” indications reflect that the steps presented can end as described and shown or optionally be incorporated in or otherwise used in conjunction with one or more other routines. In this context, “start” indicates the beginning of the first step presented and may be preceded by other activities not specifically shown. Further, the “continue” indication reflects that the steps presented may be performed multiple times and/or may be succeeded by other activities not specifically shown. Further, while a flow diagram indicates a particular ordering of steps, other orderings are likewise possible provided that the principles of causality are maintained.
- The one or more embodiments are used herein to illustrate one or more aspects, one or more features, one or more concepts, and/or one or more examples. A physical embodiment of an apparatus, an article of manufacture, a machine, and/or of a process may include one or more of the aspects, features, concepts, examples, etc. described with reference to one or more of the embodiments discussed herein. Further, from figure to figure, the embodiments may incorporate the same or similarly named functions, steps, modules, etc. that may use the same or different reference numbers and, as such, the functions, steps, modules, etc. may be the same or similar functions, steps, modules, etc. or different ones.
- Unless specifically stated to the contrary, signals to, from, and/or between elements in a figure of any of the figures presented herein may be analog or digital, continuous time or discrete time, and single-ended or differential. For instance, if a signal path is shown as a single-ended path, it also represents a differential signal path. Similarly, if a signal path is shown as a differential path, it also represents a single-ended signal path. While one or more particular architectures are described herein, other architectures can likewise be implemented that use one or more data buses not expressly shown, direct connectivity between elements, and/or indirect coupling between other elements as recognized by one of average skill in the art.
- The term “module” is used in the description of one or more of the embodiments. A module implements one or more functions via a device such as a processor or other processing device or other hardware that may include or operate in association with a memory that stores operational instructions. A module may operate independently and/or in conjunction with software and/or firmware. As also used herein, a module may contain one or more sub-modules, each of which may be one or more modules.
- As may further be used herein, a computer readable memory includes one or more memory elements. A memory element may be a separate memory device, multiple memory devices, or a set of memory locations within a memory device. Such a memory device may be a read-only memory, random access memory, volatile memory, non-volatile memory, static memory, dynamic memory, flash memory, cache memory, a quantum register or other quantum memory and/or any other device that stores data in a non-transitory manner. Furthermore, the memory device may be in a form of a solid-state memory, a hard drive memory or other disk storage, cloud memory, thumb drive, server memory, computing device memory, and/or other non-transitory medium for storing data. The storage of data includes temporary storage (i.e., data is lost when power is removed from the memory element) and/or persistent storage (i.e., data is retained when power is removed from the memory element). As used herein, a transitory medium shall mean one or more of: (a) a wired or wireless medium for the transportation of data as a signal from one computing device to another computing device for temporary storage or persistent storage; (b) a wired or wireless medium for the transportation of data as a signal within a computing device from one element of the computing device to another element of the computing device for temporary storage or persistent storage; (c) a wired or wireless medium for the transportation of data as a signal from one computing device to another computing device for processing the data by the other computing device; and (d) a wired or wireless medium for the transportation of data as a signal within a computing device from one element of the computing device to another element of the computing device for processing the data by the other element of the computing device. As may be used herein, a non-transitory computer readable memory is substantially equivalent to a computer readable memory. A non-transitory computer readable memory can also be referred to as a non-transitory computer readable storage medium.
- One or more functions associated with the methods and/or processes described herein can be implemented via a processing module that operates via the non-human “artificial” intelligence (AI) of a machine. Examples of such AI include machines that operate via anomaly detection techniques, decision trees, association rules, expert systems and other knowledge-based systems, computer vision models, artificial neural networks, convolutional neural networks, support vector machines (SVMs), Bayesian networks, genetic algorithms, feature learning, sparse dictionary learning, preference learning, deep learning and other machine learning techniques that are trained using training data via unsupervised, semi-supervised, supervised and/or reinforcement learning, and/or other AI. The human mind is not equipped to perform such AI techniques, not only due to the complexity of these techniques, but also due to the fact that artificial intelligence, by its very definition—requires “artificial” intelligence—i.e. machine/non-human intelligence.
- One or more functions associated with the methods and/or processes described herein can be implemented as a large-scale system that is operable to receive, transmit and/or process data on a large-scale. As used herein, a large-scale refers to a large number of data, such as one or more kilobytes, megabytes, gigabytes, terabytes or more of data that are received, transmitted and/or processed. Such receiving, transmitting and/or processing of data cannot practically be performed by the human mind on a large-scale within a reasonable period of time, such as within a second, a millisecond, microsecond, a real-time basis or other high speed required by the machines that generate the data, receive the data, convey the data, store the data and/or use the data.
- One or more functions associated with the methods and/or processes described herein can require data to be manipulated in different ways within overlapping time spans. The human mind is not equipped to perform such different data manipulations independently, contemporaneously, in parallel, and/or on a coordinated basis within a reasonable period of time, such as within a second, a millisecond, microsecond, a real-time basis or other high speed required by the machines that generate the data, receive the data, convey the data, store the data and/or use the data.
- One or more functions associated with the methods and/or processes described herein can be implemented in a system that is operable to electronically receive digital data via a wired or wireless communication network and/or to electronically transmit digital data via a wired or wireless communication network. Such receiving and transmitting cannot practically be performed by the human mind because the human mind is not equipped to electronically transmit or receive digital data, let alone to transmit and receive digital data via a wired or wireless communication network.
- One or more functions associated with the methods and/or processes described herein can be implemented in a system that is operable to electronically store digital data in a memory device. Such storage cannot practically be performed by the human mind because the human mind is not equipped to electronically store digital data.
- One or more functions associated with the methods and/or processes described herein may operate to cause an action by a processing module directly in response to a triggering event—without any intervening human interaction between the triggering event and the action. Any such actions may be identified as being performed “automatically”, “automatically based on” and/or “automatically in response to” such a triggering event. Furthermore, any such actions identified in such a fashion specifically preclude the operation of human activity with respect to these actions—even if the triggering event itself may be causally connected to a human activity of some kind.
- While particular combinations of various functions and features of the one or more embodiments have been expressly described herein, other combinations of these features and functions are likewise possible. The present disclosure is not limited by the particular examples disclosed herein and expressly incorporates these other combinations.
Claims (20)
1. A method comprising:
generating, via a machine that includes at least one processor and a non-transitory machine-readable storage medium and utilizing a graphical user interface, custom survey data in response to user interactions with the graphical user interface;
receiving, via the machine and responsive to the custom survey data, survey results data;
generating, utilizing machine learning and via the machine, a customized virtue scoring model based on the custom survey data and the survey results data;
receiving, via the machine, content data;
generating, via the machine and utilizing the customized virtue scoring model, predicted virtue score data associated with the content data; and
facilitating display, via the graphical user interface, the predicted virtue score data associated with the content data.
2. The method of claim 1 , wherein the customized virtue scoring model includes an artificial intelligence (AI) model that is trained based on at least one of: the custom survey data or the survey results data.
3. The method of claim 2 , wherein the AI model includes a responsibility model and the predicted virtue score data indicates a responsibility score that is based on an amount the content data addresses legal or ethical principles.
4. The method of claim 2 , wherein the AI model includes an equitability model and the predicted virtue score data indicates an equitability score that is based on an amount of bias in the content data.
5. The method of claim 2, wherein the AI model includes a reliability model and the predicted virtue score data indicates a reliability score that indicates variations in other virtue scores.
6. The method of claim 2 , wherein the AI model includes an explainability model and the predicted virtue score data indicates an explainability score associated with the content data.
7. The method of claim 2 , wherein the AI model includes a morality model and the predicted virtue score data indicates a morality score associated with the content data.
8. The method of claim 2 , further comprising:
generating improvement data associated with the predicted virtue score data.
9. The method of claim 1 , wherein the content data is an Artificial Intelligence (AI) model.
10. The method of claim 1 , further comprising:
facilitating selection of the content data from at least one of: an AI model, or a content source.
11. The method of claim 1 , wherein the customized virtue scoring model includes an artificial intelligence (AI) model and wherein generating the customized virtue scoring model includes providing access to a plurality of AI analysis widgets to facilitate an evaluation of the AI model.
12. The method of claim 11 , wherein the plurality of AI analysis widgets include a plurality of virtue scoring models that predict virtue scores for the AI model associated with a plurality of virtues.
13. The method of claim 1, wherein the customized virtue scoring model includes an artificial intelligence (AI) model and wherein the method further comprises providing access to a version control repository for storing a plurality of versions of a training dataset and a plurality of versions of the AI model.
14. A system comprises:
a network interface configured to communicate via a network;
at least one processor;
a non-transitory machine-readable storage medium that stores operational instructions that, when executed by the at least one processor, cause the at least one processor to perform operations that include:
generating, via a machine that includes at least one processor and a non-transitory machine-readable storage medium and utilizing a graphical user interface, custom survey data in response to user interactions with the graphical user interface;
receiving, via the machine and responsive to the custom survey data, survey results data;
generating, utilizing machine learning and via the machine, a customized virtue scoring model based on the custom survey data and the survey results data;
receiving, via the machine, content data;
generating, via the machine and utilizing the customized virtue scoring model, predicted virtue score data associated with the content data; and
facilitating display, via the graphical user interface, the predicted virtue score data associated with the content data.
15. The system of claim 14 , wherein the customized virtue scoring model includes an artificial intelligence (AI) model that is trained based on at least one of: the custom survey data or the survey results data.
16. The system of claim 15 , wherein the AI model includes a responsibility model and the predicted virtue score data indicates a responsibility score that is based on an amount the content data addresses legal or ethical principles.
17. The system of claim 15 , wherein the AI model includes an equitability model and the predicted virtue score data indicates an equitability score that is based on an amount of bias in the content data.
18. The system of claim 15, wherein the AI model includes a reliability model and the predicted virtue score data indicates a reliability score that indicates variations in other virtue scores.
19. The system of claim 15 , wherein the AI model includes an explainability model and the predicted virtue score data indicates an explainability score associated with the content data.
20. The system of claim 15 , wherein the AI model includes a morality model and the predicted virtue score data indicates a morality score associated with the content data.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/820,398 US20230110815A1 (en) | 2021-10-12 | 2022-08-17 | Ai platform with customizable virtue scoring models and methods for use therewith |
PCT/US2022/041024 WO2023064037A1 (en) | 2021-10-12 | 2022-08-22 | Artificial intelligence platform and methods for use therewith |
EP22881529.6A EP4416658A1 (en) | 2021-10-12 | 2022-08-22 | Artificial intelligence platform and methods for use therewith |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US202163262397P | 2021-10-12 | 2021-10-12 | |
US202163262396P | 2021-10-12 | 2021-10-12 | |
US202163262395P | 2021-10-12 | 2021-10-12 | |
US17/820,398 US20230110815A1 (en) | 2021-10-12 | 2022-08-17 | Ai platform with customizable virtue scoring models and methods for use therewith |
Publications (1)
Publication Number | Publication Date |
---|---|
US20230110815A1 true US20230110815A1 (en) | 2023-04-13 |
Family
ID=85796964
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/820,407 Pending US20230111112A1 (en) | 2021-10-12 | 2022-08-17 | Ai platform with automatic analysis data and methods for use therewith |
US17/820,386 Pending US20230114826A1 (en) | 2021-10-12 | 2022-08-17 | Ai platform with customizable content analysis control panel and methods for use therewith |
US17/820,398 Pending US20230110815A1 (en) | 2021-10-12 | 2022-08-17 | Ai platform with customizable virtue scoring models and methods for use therewith |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/820,407 Pending US20230111112A1 (en) | 2021-10-12 | 2022-08-17 | Ai platform with automatic analysis data and methods for use therewith |
US17/820,386 Pending US20230114826A1 (en) | 2021-10-12 | 2022-08-17 | Ai platform with customizable content analysis control panel and methods for use therewith |
Country Status (3)
Country | Link |
---|---|
US (3) | US20230111112A1 (en) |
EP (1) | EP4416658A1 (en) |
WO (1) | WO2023064037A1 (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11806629B2 (en) * | 2020-03-24 | 2023-11-07 | Virtuous AI, Inc. | Artificial intelligence models for moral insight prediction and methods for use therewith |
US12061970B1 (en) * | 2023-10-06 | 2024-08-13 | Broadridge Financial Solutions, Inc. | Systems and methods of large language model driven orchestration of task-specific machine learning software agents |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180373781A1 (en) * | 2017-06-21 | 2018-12-27 | Yogesh PALRECHA | Data handling methods and system for data lakes |
US10587464B2 (en) * | 2017-07-21 | 2020-03-10 | Accenture Global Solutions Limited | Automatic provisioning of a software development environment |
AU2019372358A1 (en) * | 2018-11-01 | 2021-05-20 | Everbridge, Inc. | Analytics dashboards for critical event management software systems, and related software |
US20200250525A1 (en) * | 2019-02-04 | 2020-08-06 | Pathtronic Inc. | Lightweight, highspeed and energy efficient asynchronous and file system-based ai processing interface framework |
-
2022
- 2022-08-17 US US17/820,407 patent/US20230111112A1/en active Pending
- 2022-08-17 US US17/820,386 patent/US20230114826A1/en active Pending
- 2022-08-17 US US17/820,398 patent/US20230110815A1/en active Pending
- 2022-08-22 WO PCT/US2022/041024 patent/WO2023064037A1/en active Application Filing
- 2022-08-22 EP EP22881529.6A patent/EP4416658A1/en active Pending
Also Published As
Publication number | Publication date |
---|---|
US20230111112A1 (en) | 2023-04-13 |
WO2023064037A1 (en) | 2023-04-20 |
EP4416658A1 (en) | 2024-08-21 |
US20230114826A1 (en) | 2023-04-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11277452B2 (en) | Digital processing systems and methods for multi-board mirroring of consolidated information in collaborative work systems | |
US11775890B2 (en) | Digital processing systems and methods for map-based data organization in collaborative work systems | |
US11935080B2 (en) | Reinforcement machine learning for personalized intelligent alerting | |
US20230114826A1 (en) | Ai platform with customizable content analysis control panel and methods for use therewith | |
US9619531B2 (en) | Device, method and user interface for determining a correlation between a received sequence of numbers and data that corresponds to metrics | |
US20180005161A1 (en) | System and method for determining user metrics | |
US11238383B2 (en) | Systems and methods for creating and managing user teams of user accounts | |
US9183592B2 (en) | Systems and methods for graphically enabled retirement planning | |
US11238409B2 (en) | Techniques for extraction and valuation of proficiencies for gap detection and remediation | |
US11663839B1 (en) | Polarity semantics engine analytics platform | |
Kolyshkina et al. | Interpretability of machine learning solutions in public healthcare: The CRISP-ML approach | |
US20210150443A1 (en) | Parity detection and recommendation system | |
US10134009B2 (en) | Methods and systems of providing supplemental informaton | |
US11270213B2 (en) | Systems and methods for extracting specific data from documents using machine learning | |
US9213472B2 (en) | User interface for providing supplemental information | |
US20200111046A1 (en) | Automated and intelligent time reallocation for agenda items | |
US20160124585A1 (en) | Typeahead features | |
US20220207445A1 (en) | Systems and methods for dynamic relationship management and resource allocation | |
US20240009575A1 (en) | Ethical ai development platform and methods for use therewith | |
Shaw et al. | Participation inequality in the gig economy | |
US20240233219A1 (en) | System and method for improved data structures and related interfaces | |
US20210390263A1 (en) | System and method for automated decision making | |
US20180060434A1 (en) | Measuring member value in social networks | |
US20150134415A1 (en) | Automated Process for Obtaining, Analyzing and Displaying Data in Story Form | |
EP3846092A1 (en) | Device and method for promoting eco-friendly actions and helping to achieve predetermined environmental goals |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: VIRTUOUS AI, INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:DONOVAN, RORY;REEL/FRAME:060846/0303 Effective date: 20220816 |
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |