WO2022225793A1 - Artificial intelligence system design tool - Google Patents

Artificial intelligence system design tool

Info

Publication number
WO2022225793A1
Authority
WO
WIPO (PCT)
Prior art keywords
tool, data, user, project, input
Application number
PCT/US2022/024875
Other languages
English (en)
Inventor
Christine Meinders
Original Assignee
Christine Meinders
Priority claimed from US17/234,752 (published as US20210240892A1)
Application filed by Christine Meinders
Priority to AU2022260264A (published as AU2022260264A1)
Priority to EP22792236.6A (published as EP4327229A1)
Priority to CA3217360A (published as CA3217360A1)
Publication of WO2022225793A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00 Computer-aided design [CAD]
    • G06F 30/20 Design optimisation, verification or simulation
    • G06F 30/27 Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model

Definitions

  • the present invention is directed to artificial intelligence design tools.
  • AI systems often suffer from severe biases in how the systems are constructed.
  • the systems are trained on datasets, and the datasets can have inherent limitations in the data provided.
  • conventional face recognition software protocols might be trained on primarily Caucasian faces and have trouble recognizing other races.
  • conventional voice recognition systems (often seen in smart assistants) were predominantly trained with primarily male voices.
  • a method provides for receiving input, at an interface on a computing device.
  • the input includes a dataset, an analysis for the dataset, and an output medium.
  • the method then provides for selecting, based on the received input, at least one algorithm from a plurality of algorithms.
  • the method then provides for processing, via the computing device, the received input with the at least one algorithm to yield an output.
  • the output is provided at the interface on the computing device.
  • selecting at least one algorithm includes determining whether the received input corresponds to requirements associated with each algorithm in the plurality of algorithms. The method then provides for selecting algorithms of the plurality of algorithms, based on determining that the received input corresponds to requirements associated with the selected algorithms.
  • the input includes a format for the output, a supplementary dataset, a type of the dataset, and/or input consideration variables.
  • the at least one algorithm includes an artificial intelligence model.
  • the artificial intelligence model can be selected from a plurality of artificial intelligence approaches, including: an artificial narrow intelligence approach, a non-symbolic artificial intelligence approach, a symbolic artificial intelligence approach, a hybrid symbolic and non-symbolic artificial intelligence approach, and a statistical artificial intelligence approach.
  • the at least one algorithm includes a machine learning model.
  • the machine learning model can be selected from a plurality of machine learning models, including: a decision tree, a Bayesian network, an artificial neural network, a support vector machine, a convolutional neural network, and a capsule network.
  • the machine learning model was trained on the received input.
  • the machine learning model was trained, via the computing device, on a subset of a database of artificial intelligence systems.
  • the subset can include artificial intelligence systems with datasets comprising metadata corresponding to metadata of the received dataset and/or the output medium.
  • the output includes an indication of whether the at least one algorithm successfully processed the received input.
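As an illustration of the selection-and-processing flow described above, the following is a minimal Python sketch. The names (Algorithm, UserInput, matches_requirements, select_and_process) are assumptions made for this sketch, not terms or an implementation from the disclosure:

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict, List

@dataclass
class Algorithm:
    """One algorithm in the tool's plurality of algorithms (illustrative)."""
    name: str
    requirements: Dict[str, Any]         # e.g. {"output_medium": "binary_classifier"}
    run: Callable[["UserInput"], Any]    # processes the received input

@dataclass
class UserInput:
    dataset: Any
    analysis: str          # the analysis requested for the dataset
    output_medium: str

def matches_requirements(user_input: UserInput, algo: Algorithm) -> bool:
    """Determine whether the received input corresponds to an algorithm's requirements."""
    wanted = algo.requirements.get("output_medium")
    return wanted is None or wanted == user_input.output_medium

def select_and_process(user_input: UserInput,
                       algorithms: List[Algorithm]) -> Dict[str, dict]:
    """Select every matching algorithm, process the input with each, and
    report whether processing succeeded (the 'indication' in the output)."""
    results = {}
    for algo in (a for a in algorithms if matches_requirements(user_input, a)):
        try:
            results[algo.name] = {"output": algo.run(user_input), "success": True}
        except Exception as err:
            results[algo.name] = {"output": None, "success": False, "error": str(err)}
    return results
```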
  • the method includes additional steps.
  • the additional steps can provide for determining, via the computing device, whether the output comprises at least one bias in a plurality of biases. For example, the present disclosure searches for an unwanted bias (a bias unwanted by the user). Based on determining that the output comprises the at least one bias, the method provides for identifying a portion of the received input which corresponds to the determined bias. The method then provides for displaying the identified portion of the received input at the interface on the computing device.
  • the method provides for removing the identified portion from the received input.
  • the method then provides for retrieving, via the computing device, supplementary input data in a database of artificial intelligence systems.
  • the supplementary input data corresponds to the identified portion of the received input and does not comprise the at least one bias.
  • the method then provides for displaying the supplementary input data at the interface on the computing device.
  • the method additionally provides for receiving a request, via the interface on the computing device, to process a second selection of input data.
  • the second selection of input data includes the received input with the supplementary input data in place of the identified portion.
  • the method then provides for processing, via the computing device, the second selection of input data with the at least one algorithm. This yields a second output.
  • the method provides for displaying the second output at the interface on the computing device.
  • the second output can be a revision of the first output.
  • identifying the portion of the received input corresponding to the determined bias includes processing metadata associated with each element of the received input.
  • the metadata can include AI tagging, or identification of biases in the plurality of biases corresponding to each element of the received input.
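A hedged sketch of the bias-remediation loop described in the preceding bullets, assuming input portions and database entries carry AI-tagging metadata under hypothetical keys (id, content, metadata, biases) that are not specified by the disclosure:

```python
from typing import Dict, List, Set

def find_biased_portions(portions: List[dict], unwanted: Set[str]) -> List[dict]:
    """Flag portions of the received input whose metadata (AI tags)
    identify at least one unwanted bias."""
    return [p for p in portions if unwanted & set(p["metadata"].get("biases", []))]

def build_second_selection(portions: List[dict], unwanted: Set[str],
                           database: List[dict]) -> List[dict]:
    """Swap flagged portions for supplementary data from the database of AI
    systems that covers the same content but does not comprise the bias."""
    flagged = find_biased_portions(portions, unwanted)
    print("flagged portions:", [p["id"] for p in flagged])  # shown at the interface
    replacements: Dict[str, dict] = {}
    for portion in flagged:
        candidates = [d for d in database
                      if d["content"] == portion["content"]
                      and not (unwanted & set(d["metadata"].get("biases", [])))]
        if candidates:
            replacements[portion["id"]] = candidates[0]
    # the received input with supplementary data in place of the identified portions
    return [replacements.get(p["id"], p) for p in portions]
```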
  • a method provides for receiving input, at an interface on a computing device.
  • the input includes a dataset, an analysis for the dataset, an output medium, and/or a processed output.
  • the processed output includes an artificial intelligence system based on the dataset, the analysis for the dataset, and the output medium.
  • the method provides for determining, via the computing device, whether metadata associated with the received input comprises at least one bias in a plurality of biases.
  • the method then provides for identifying a portion of the received input corresponding to the at least one bias.
  • the method then provides for displaying, at the interface on the computing device, the identified portion and the at least one bias.
  • the method provides for retrieving, via the computing device, supplementary input data from a database of artificial intelligence systems.
  • the supplementary input data corresponds to the identified portion of the received input and does not comprise the at least one bias.
  • the method then provides for displaying the supplementary input data at the interface on the computing device.
  • the method provides for receiving a request, via the interface on the computing device, to process a second selection of input data.
  • the second selection of input data includes the received input with the supplementary input data in place of the identified portion.
  • the method then provides for processing, via the computing device, the second selection of input data to yield an output.
  • the method then provides for displaying the output at the interface on the computing device.
  • a third embodiment of the present disclosure provides for a non-transitory computer-readable medium.
  • the non-transitory computer-readable medium includes embedded computer-readable code.
  • the code, when loaded on a computing device, causes the computing device to perform a series of steps.
  • the steps include receiving input, at an interface on the computing device.
  • the input includes a dataset, an analysis for the dataset, and/or an output medium.
  • the steps then provide for selecting, based on the received input, at least one algorithm from a plurality of algorithms.
  • the steps then provide for processing, via the computing device, the received input with the at least one algorithm to yield an output.
  • the steps then provide for displaying the output at the interface on the computing device.
  • the steps provide for determining, via the computing device, whether the output comprises at least one bias in a plurality of biases. The steps then provide for identifying a portion of the received input corresponding to the determined bias, based on determining that the output comprises at least one bias. The steps then provide for displaying the identified portion of the received input at the interface on the computing device.
  • the steps provide for removing the identified portion from the received input to yield updated input.
  • the steps then provide for retrieving, via the computing device, supplementary input data in a database of artificial intelligence systems.
  • the supplementary input data corresponds to the identified portion of the received input and does not comprise the at least one bias.
  • the steps then provide for displaying the supplementary input data at the interface on the computing device.
  • the steps provide for receiving a request, via the interface on the computing device, to process a second selection of input data.
  • the second selection of input data includes the received input with the supplementary input data in place of the identified portion.
  • the steps then provide for processing, via the computing device, the second selection of input data with the at least one algorithm to yield a second output.
  • the second output is displayed at the interface on the computing device.
  • identifying the portion of the received input corresponding to the determined bias further includes processing metadata associated with each element of the received input.
  • the metadata includes identification of biases in the plurality of biases corresponding to each element of the received input.
  • the present disclosure refers to various machine learning or artificial intelligence algorithms or models. Any machine learning or artificial intelligence algorithm, as known in the art, can be used to perform various steps of the present disclosure, as would be readily apparent to one skilled in the art.
  • the at least one algorithm (discussed above) is created from a learning algorithm.
  • the present disclosure uses “algorithms” and “models” interchangeably.
  • the disclosed tool allows users to define the type of artificial intelligence or artificial life they are designing within. Conventionally, users only design with artificial narrow intelligence and artificial life, but the present disclosure provides examples of artificial general intelligence and artificial super intelligence to reference additional approaches to AI.
  • the AI tool also includes symbolic, non-symbolic and statistical systems.
  • the present disclosure refers to various systems and medium. Any system and/or output medium can be used by the disclosed AI tool, as would be readily contemplated by one skilled in the art.
  • FIG. 1 shows an exemplary methodology for creating an AI system, according to an embodiment of the present disclosure.
  • FIGs. 2A-2G demonstrate exemplary input selections for an AI interface, according to various embodiments of the present disclosure.
  • FIG. 3 shows an exemplary methodology for identifying a bias in a created AI system, according to an embodiment of the present disclosure.
  • FIG. 4 shows an exemplary methodology for identifying a bias in an externally created AI system, according to an embodiment of the present disclosure.
  • FIG. 5 shows an exemplary methodology for removing a bias in a created AI system, according to an embodiment of the present disclosure.
  • FIGs. 6A-6B show an exemplary methodology for a user to build an AI system, according to an embodiment of the present disclosure.
  • FIG. 7 shows an exemplary system for building and/or evaluating an AI system, according to an embodiment of the present disclosure.
  • FIG. 8 shows an exemplary comparison of how AI data specific to one medium is used in a variety of mediums, according to an embodiment of the present disclosure.
  • FIGs. 9A-9C show exemplary input selections in an AI interface, according to an embodiment of the present disclosure.
  • Fig. 10 illustrates a conceptual map of a collaborative software tool, according to an embodiment of the present disclosure.
  • Fig. 11 illustrates fundamentals of the collaborative software tool in use, according to an embodiment of the present disclosure.
  • Fig. 12 illustrates a process of designing social technologies using the tool, according to an embodiment of the present disclosure.
  • Fig. 13 illustrates a system map, according to an embodiment of the present disclosure.
  • Fig. 14 illustrates an exemplary project representation, according to an embodiment of the present disclosure.
  • Fig. 15 illustrates an exemplary user interface screen of the login page, according to an embodiment of the present disclosure.
  • Fig. 16 illustrates an exemplary user interface of a section where users can select whether to remake (transform), create (encode), deconstruct (decode) or contribute to a social technology project, according to an embodiment of the present disclosure.
  • Fig. 17 illustrates an exemplary user interface where the project creator is asked if they will enter as an individual or as a group, according to an embodiment of the present disclosure.
  • Fig. 18 illustrates an exemplary user interface illustrating a discover section of the tool, according to an embodiment of the present disclosure.
  • Fig. 19 is an exemplary user interface section of the tool where the user or group identifies the intended audience of the project, according to an embodiment of the present disclosure.
  • Fig. 20A illustrates an exemplary screen shot relating to the frames section of the tool showing aspects of frames, traces and consent, according to an embodiment of the present disclosure.
  • Fig. 20B illustrates an exemplary screen shot relating to the frames section of the tool, according to an embodiment of the present disclosure.
  • Fig. 20C illustrates an exemplary screen shot relating to the frames section of the tool and aspects of flagging and rating, according to an embodiment of the present disclosure.
  • Fig. 20D illustrates an exemplary screen shot relating to the frames section of the tool and aspects of content, according to an embodiment of the present disclosure.
  • Figs. 20E and 20F illustrate exemplary screen shots relating to the frames section of the tool, according to an embodiment of the present disclosure.
  • Figs. 20G-20M illustrate exemplary screen shots relating to the frames section of the tool, according to an embodiment of the present disclosure.
  • Figs. 21A-H illustrate exemplary user interfaces relating to the blueprint section, according to embodiments of the present disclosure.
  • Fig. 22 illustrates an exemplary use of the tool to shift an existing frame or recommend a new frame, according to an embodiment of the present disclosure.
  • Figs. 23A and 23B illustrate an example of the scaffolding section of the tool, according to embodiments of the present disclosure.
  • Figs. 24A-F are exemplary user interfaces relating to the data implementation section of the tool, according to embodiments of the present disclosure.
  • Figs. 24G-J are exemplary user interfaces relating to the model implementation section of the tool, according to embodiments of the present disclosure.
  • Fig. 25 illustrates an example of the activation (testing) section of the tool, according to an embodiment of the present disclosure.
  • Figs. 26A-E illustrate alternative variations of the audit section of the tool, according to embodiments of the present disclosure.
  • Figs. 27A-D are exemplary user interfaces that illustrate a project page with data collection sections, frame collection sections and voting sections, according to embodiments of the present disclosure.
  • Fig. 28 is an exemplary user interface that illustrates card selection states within the tool, according to an embodiment of the present disclosure.
  • Fig. 29 illustrates an example of adding an image or other content to a blank or new card, according to an embodiment of the present disclosure.
  • AI systems, interfaces and experiences are becoming a foundational part of the research, design and development of products and experiences.
  • the technical requirements of AI thinking can be challenging for those without programming experience. Therefore, the present disclosure provides an AI design tool for individuals to understand and engage in not only the user experience of AI, but also the design of the systems and culture of AI. Additionally, this tool will use a deep learning architecture to find relationships from user-uploaded data.
  • the disclosed design tool provides a place for AI design thinking and creation that helps design teams, researchers, and developers start to make a space for inclusive AI design thinking. Accordingly, one embodiment of the present disclosure provides for an electronic tool for standardizing the AI design process; this tool helps users understand the different types and technical inputs for designing AI (algorithms, systems, agents, projects, experiences) and stresses the importance of culture and assumptions embedded in the design process.
  • This AI Design Tool helps designers, researchers, and developers build AI systems from technical and conceptual perspectives.
  • the exemplary AI design tool provides for at least three modes, including (1) a design/prototyping mode, (2) a cultural probe mode, and (3) a playful exploration mode.
  • the design/prototyping mode provides a technically accurate design, while still incorporating prompts for culture, bias and transparency. Some examples of the design/prototyping mode provide for localization and varying levels of connectivity, according to user preferences.
  • the cultural probe mode looks at the cultural and social considerations/biases in AI systems that were already created (either by the AI design tool or by another, external system). The cultural probe mode therefore helps researchers identify bias in an existing system, remove unwanted or potential bias, and design further AI systems for transparency and opportunities for localization.
  • the playful exploration mode allows users to build a new AI system that is primarily for learning purposes and does not need to include technically-perfect constructs.
  • the disclosed AI design tool provides a variety of benefits to overcome the limitations of conventional AI systems.
  • the disclosed AI design tool can be used by users to learn about AI systems generally.
  • the tool can identify and correct problematic assumptions implicit in conventional AI products.
  • the tool can provide ease of access to construct new AI systems without the biases of conventional systems.
  • FIG. 1 shows a methodology 100 for creating an AI system using the disclosed AI design tool.
  • Methodology 100 begins at step 110 by receiving input.
  • the input can be received at an interface for an artificial intelligence tool on a computing device (as discussed further with respect to FIG. 7).
  • the input includes a dataset, an analysis for the dataset, and an output medium.
  • the input can include additional selections from a user related to the type of analysis, additional datasets, and acceptable output mediums (as discussed further with respect to FIGs. 2A-2G).
  • the input further includes a format for the output, a supplementary dataset, a type of the dataset, metadata corresponding to the dataset, and input consideration variables.
  • a user “tags” the input dataset as including certain biases. For example, the user identifies the input dataset as being trained on only men, or only people of a particular race/ethnicity. In another example, the user identifies the analysis to be used on the database as created by only creators located in the Western Hemisphere.
  • the tool prompts a user to choose whether to disclose or not disclose the uploaded data.
  • the received input can include APIs, real-time sensor information, existing datasets, or a newly created dataset.
  • methodology 100 provides for selecting an algorithm and/or model based on the received input.
  • more than one algorithm can be selected.
  • the algorithm can be selected from a plurality of algorithms stored at the artificial intelligence tool.
  • the methodology 100 can provide for any artificial intelligence approach, including an artificial narrow intelligence approach, an artificial general intelligence approach, an artificial super intelligence approach, a non-symbolic artificial intelligence approach, a symbolic artificial intelligence approach, a hybrid symbolic and non-symbolic artificial intelligence approach, a statistical artificial intelligence approach, and/or any other AI approach as known in the art.
  • the machine learning model can include any of: a decision tree, a Bayesian network, an artificial neural network, a support vector machine, a convolutional neural network, and a capsule network.
  • an algorithm provided by a selected machine learning model was trained on the received input.
  • the artificial intelligence tool comprises a database of pre-existing AI systems and datasets.
  • the selected machine learning model was trained on a subset of these pre-existing AI systems and datasets, and can have been trained only on AI systems and datasets which have metadata corresponding to metadata of the input dataset and the output medium.
  • the artificial intelligence tool determines whether the received input corresponds to requirements associated with each algorithm in the plurality of algorithms. For example, if the user wishes to build an AI system with a binary classifier as the output medium, the artificial intelligence tool will select a machine learning algorithm with a binary classifier. The artificial intelligence tool can verify that the dataset can be classified as a binary output.
  • Some examples of step 120 further include pre-processing the data. For example, the artificial intelligence tool identifies variables in the input dataset; these variables can correspond to variables that will be used by the selected algorithm.
  • in some examples of step 120, the algorithm is selected by an artificial intelligence process, as would be readily contemplated by one skilled in the art.
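As one illustration of the binary-classifier check and variable pre-processing described above, a short sketch using pandas. The function names and example columns are assumptions for illustration, not the tool's API:

```python
import pandas as pd

def verify_binary_output(df: pd.DataFrame, target: str) -> bool:
    """Verify the dataset can be classified as a binary output."""
    return df[target].dropna().nunique() == 2

def identify_variables(df: pd.DataFrame, target: str) -> list:
    """Pre-processing: identify the input variables the selected
    algorithm would consume."""
    return [col for col in df.columns if col != target]

# Hypothetical input dataset with two feature columns and a binary label.
df = pd.DataFrame({"pitch": [0.2, 0.9, 0.4],
                   "volume": [1.0, 0.3, 0.7],
                   "label": [0, 1, 0]})
assert verify_binary_output(df, "label")
print(identify_variables(df, "label"))   # ['pitch', 'volume']
```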
  • methodology 100 provides for processing the received input with the selected algorithm. This yields an output.
  • the output can be an AI system which is displayable on the output medium and is trained by the input dataset.
  • methodology 100 additionally provides an indication of whether the selected algorithm successfully processed the received input.
  • at step 140, methodology 100 provides for displaying the output.
  • the output can be displayed in the output medium.
  • the output can be an AI system.
  • the output medium can be any of the output formats discussed below with respect to screen 200F of FIG. 2F or screen 900C of FIG. 9C.
  • in some examples, the output is provided without being displayed.
  • the system can provide for haptic feedback, tactile output, and/or auditory output. Any other sensory output or XR output can also be provided for by the AI tool.
  • the output is experienced in real life, augmented reality, virtual reality, or any other emerging reality.
  • FIGs. 2A-2G demonstrate exemplary input selections for an AI interface, according to various embodiments of the present disclosure.
  • FIG. 2A shows an interface selection screen 201 and screen 202.
  • Screen 201 prompts a user to select between an artificial narrow intelligence 271, an artificial general intelligence 272, an artificial super intelligence 273, a dynamical systems/embodied and embedded cognition 274, a software (e.g., cellular automata) 275, a hardware 276 (e.g., robots), and a wetware 277 (e.g., synthetic biology).
  • FIG. 2B shows an interface selection screen 200B which prompts the user to select an existing application, algorithm or hardware device.
  • a user chooses one of: a body 210, a smart home device (e.g., Alexa) 211, an algorithm 212, an autonomous car interface 213, a chatbot 214, an infrastructure 215, and a wearable 216. Therefore, the disclosed AI design tool provides an interface to integrate with, and modify, existing AI materials.
  • although a select number of existing AI materials are shown in screen 200B, the present disclosure contemplates that any existing AI material, as known in the art, can be included on an exemplary screen 200B.
  • FIG. 2C shows an interface selection screen 200C which prompts the user to select an input.
  • the input can be a dataset 219 (e.g., big data 220, little data 221, or device specific data 222) and a consideration 227 (e.g., a social consideration 223, a cultural consideration 224, an ethical consideration 225, or a creative consideration 226).
  • Social considerations 223 include, for example, job loss probability of an industry due to automation of a task, inclusion of a particular societal group or demographic, and bias towards or against a particular societal group or demographic.
  • Cultural considerations 224 include, for example, facial/expression data, audio data, and internet of things products.
  • Cultural considerations further include determining how emotion and feelings vary across cultures (or how various social preferences are location and cultural specific).
  • Ethical considerations 225 include any determinations that must be made on a right or wrong (e.g., binary) basis. For example, ethical considerations 225 should be used for designing an AI system that produces autonomous car decision making.
  • Creative considerations 226 include the user’s desire for computational creativity, exploratory learning of AI development, a user’s intention to transform particular data, a generational criterion, or an evaluative criterion.
  • selection screen 200C provides a variety of datatypes and potential considerations to choose from when a user is building an AI system. Upon receiving datatype(s) and a consideration, the artificial intelligence tool can eventually evaluate whether the final, created AI system achieves the selected consideration 227.
  • FIG. 2D shows an interface selection screen 200D which prompts the user to select a type of learning algorithm.
  • the learning algorithm can be a supervised algorithm 230 or an unsupervised algorithm 231.
  • the user makes a second selection, including reinforcement learning 232, a support vector machine 233, a classifier 234, a clustering technique 235, and a caring-for algorithm 236.
  • An exemplary caring-for algorithm 236 provides automated plant watering.
  • Additional caring-for algorithms 236 can be provided for personnel or other system tasks.
  • FIG. 2E shows an interface selection screen 200E which prompts a user to select an intent for the AI system.
  • the intent can be a physical intent 241, a social intent 242, an emotional intent 243, a creative intent 244, an ethical intent 245, a cultural intent 246, or a personal assistant intent 247.
  • a physical intent 241 corresponds to an AI system which is configured to provide some physical response to a user.
  • a physical response can include haptic feedback such as a jarring vibration and an emoji visual.
  • a social intent 242 corresponds to an AI system which is configured to facilitate political or socio-political activism.
  • an exemplary AI system with a social intent can facilitate participation in political rallies.
  • An emotional intent 243 can correspond to an AI system which is responsive to a user’s emotions. Emotional intent 243 can be problematic if a user does not know who designed the emotions database and model, and from which cultural perspective; additionally, a user can prefer to opt in or consent to the utilization of an emotionally responsive AI.
  • an exemplary AI system with an emotional intent 243 provides sounds according to a user's mood, light changes according to a user's mood, and scent generation based on a user's mood.
  • a creative intent 244 corresponds to an AI system which does not need to correspond directly to algorithm accuracy, and can be used for user learning.
  • An ethical intent 245 corresponds to an AI system which must take into account ethical considerations.
  • a cultural intent 246 corresponds to an AI system which must take into account cultural norms of different societal groups.
  • a smart assistant intent 247 corresponds to an AI system which is configured to provide assistance to a user.
  • an AI system with a smart assistant intent 247 assists a user with travel arrangements (e.g., booking flights, checking the weather, booking a cab).
  • FIG. 2F shows an interface selection screen 200F with exemplary output formats.
  • the output formats can include printed language 250, synthetic speech 251, physical object manipulation 252, a device change 253, AI tagging 254, a report summary 255, and exportable code output or data production 256.
  • Interface selection screen 200F prompts a selection of a specific material / form for the constructed AI system.
  • Printed language 250 can include modifying language, or producing culturally / socially specific language.
  • Synthetic speech 251 can include determining when users communicate or when the system communicates (e.g., via a synthetic speech system). In some examples, synthetic speech 251 modifies how language is personalized to users, in a transparent way. For example, a user can opt in to choosing a specific type of speech or producing culturally / socially specific language.
  • Physical object manipulation 252 can include manipulating objects in the real or virtual worlds.
  • Device change 253 can include pitch changing software.
  • AI tagging 254 can include tagging input data, output data, or a model.
  • Exportable code output or data production 256 can include an existing product that the user may export or link out to alternative databases or models.
  • FIG. 2G shows an interface selection screen 200G with exemplary behaviors.
  • interface selection screen 200G corresponds to a sociocultural design tool.
  • exemplary behaviors include, for example, physical behaviors 260, social behaviors 261, and emotional behaviors 262.
  • a physical behavior 260 corresponds to an AI system which is configured to provide physical feedback to a user.
  • the physical behavior 260 of the AI system can include, for example, physical touch, talking, movement of devices controlled by the AI system, and smiling emojis.
  • a social behavior 261 corresponds to an AI system which is configured to provide social feedback to a user.
  • the social behavior 261 of the AI system can include, for example, mirroring a user’s behavior, identifying particular aspects of a user’s behavior, or subverting particular actions of a user.
  • An emotional behavior 262 corresponds to an AI system which is configured to provide emotional feedback to a user.
  • the emotional behavior 262 of the AI system can include, for example, identifying that a user is internalizing certain feelings, that a user is externalizing certain feelings, and that a user is acting defiant.
  • a user can make more than one selection on any of screens 200A-200G. Although particular options are shown in each of screens 200A-200G, the present disclosure contemplates each of the screens 200A-200G can include any selections as known in the art.
  • an exemplary interface screen provides a text box.
  • a user can enter text related to a prompt; the disclosed tool can analyze the text with any algorithm discussed herein to provide additional learning for the disclosed tool or additional data for any aspect of the disclosed tool.
  • the artificial intelligence tool prompts the user for particular selections based on the user’s previous input. For example, if the user makes selections in accordance with building an interface for Alexa, the artificial intelligence tool prompts the user to choose social considerations 223 on FIG. 2C and emotional intent on FIG. 2E.
  • the artificial intelligence tool collects usage data of user selections on screens 200A-200G over a plurality of usage instances.
  • the artificial intelligence tool learns patterns of the user according to the user selections (learning, for example, via a machine learning model as discussed further below).
  • the artificial intelligence tool thereby identifies inherent biases of the user according to the user selections.
  • the artificial intelligence tool can then prompt the user on the various screens 200A-200G.
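One plausible way to implement the usage-pattern learning described above is a simple tally of selections across usage instances, flagging heavily skewed choices for the tool to prompt about. This is a hypothetical sketch; the class, method names, and threshold are assumptions, and the disclosure's own approach may instead use a trained machine learning model:

```python
from collections import Counter
from typing import Dict

class SelectionHistory:
    """Tallies a user's choices on screens 200A-200G across usage instances."""

    def __init__(self) -> None:
        self.counts: Dict[str, Counter] = {}   # screen -> Counter of options

    def record(self, screen: str, selection: str) -> None:
        self.counts.setdefault(screen, Counter())[selection] += 1

    def skewed_choices(self, threshold: float = 0.8, min_uses: int = 5) -> Dict[str, str]:
        """Flag any option chosen in more than `threshold` of instances on a
        screen: a simple proxy for an inherent preference/bias the tool
        could prompt the user about on later screens."""
        flags = {}
        for screen, counter in self.counts.items():
            total = sum(counter.values())
            for option, n in counter.items():
                if total >= min_uses and n / total > threshold:
                    flags[screen] = option
        return flags
```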
  • FIG. 3 shows an exemplary methodology 300 for identifying a bias in a created AI system. For example, the created AI system can be the output displayed at step 140 of FIG. 1.
  • Methodology 300 begins at step 310 by receiving an output.
  • methodology 300 provides for determining whether the output has a bias.
  • the artificial intelligence tool can search for any bias in a plurality of biases (e.g., social biases, cultural biases, gender biases, racial biases, and interaction biases created through usage over time).
  • the artificial intelligence tool retrieves metadata or tagging of the input dataset to determine whether there are inherent limitations of the input dataset (e.g., was the dataset trained on only people of a particular race, gender, world view, geography, or any other limitation as known in the art).
  • the methodology 300 searches only for an unwanted bias. For example, the user can select biases that the artificial intelligence tool should identify. In other examples, the methodology 300 provides for suggesting what bias is likely, even if no bias is identified.
  • methodology 300 can provide for displaying, at step 340, that no bias was identified.
  • step 330 identifies a portion of the received input corresponding to the bias.
  • the artificial intelligence tool can provide for processing metadata associated with each element of the received input.
  • the metadata can include identification of biases corresponding to each element of the received input.
  • Step 330 can identify the portion of the input dataset which has the bias identified at step 320.
  • methodology 300 provides for displaying the identified portion and the bias.
  • the identified portion and the bias can be displayed at an interface display at a user’s computing device.
  • FIG. 4 shows an exemplary methodology 400 for identifying a bias in an externally created AI system.
  • Methodology 400 receives an artificial intelligence system as the input dataset at 410.
  • Step 410 can additionally, or alternatively, receive a dataset, an analysis for the dataset, an output medium, an algorithm/model, and a processed output.
  • the processed output can be an artificial intelligence system based on the dataset, the analysis for the dataset, and the output medium.
  • methodology 400 provides for determining, via the disclosed artificial intelligence tool, whether metadata associated with the received input from step 410 has a bias.
  • Methodology 400 provides similar bias identification and display (steps 430 and 440) as methodology 300.
  • methodology 400 provides a method for analyzing existing artificial intelligence systems and identifying whether the existing system contains hidden limitations or biases.
  • the disclosed AI tool provides for deconstructing problematic approaches to the design and development of conventional AI systems, while designing for new knowledge systems.
  • FIG. 5 shows an exemplary methodology 500 for removing an unwanted bias in a created AI system.
  • Methodology 500 begins at step 510 with removing an identified portion from a received input.
  • the disclosed tool can provide for removing a portion of the data from the received input corresponding to an unwanted bias.
  • the identified portion can be identified according to steps 330 and 430 of FIGs. 3 and 4, respectively.
  • Methodology 500 then proceeds to step 520 which provides for retrieving supplementary input data.
  • the supplementary input data can be any of the input data discussed above with respect to step 110 of FIG. 1.
  • the disclosed tool can retrieve supplementary input data from a database of AI systems.
  • the supplementary input data corresponds to the identified portion of the received input and does not include the selected bias.
  • the disclosed tool identifies that a facial recognition AI system comprises a dataset of Caucasian faces with little other racial diversity. Therefore, the disclosed tool retrieves a dataset of faces comprising a greater amount of racial diversity. In another example, the disclosed tool retrieves an AI facial recognition system, which was trained on a dataset of faces with greater levels of racial diversity than the original AI facial recognition system.
  • Methodology 500 then proceeds to step 530 which provides for receiving a request to process a second selection of input data including the supplementary input data (retrieved at step 520).
  • the user can select the supplementary input data at a user interface (for example, the interface screens as discussed with respect to FIGs. 2A-2G).
  • Methodology 500 can then proceed to process the second selection of input data to yield a second output (step 540) and display the second output (step 550).
  • Steps 540 and 550 of methodology 500 can correspond to steps 130 and 140 of methodology 100, as discussed above with respect to FIG. 1.
  • FIG. 5 shows an exemplary methodology 500 which provides for minimizing biases in created AI systems.
  • the disclosed design tool identifies that an artificial intelligence voice recognition system was trained by white male voices (and no other types of voices). Such an artificial intelligence voice recognition system might prioritize enunciation, choose a loud voice over a soft voice, etc.
  • the disclosed design tool can identify and provide these biases to a user.
  • the disclosed design tool can suggest adjustments to the artificial intelligence voice recognition system; for example, adjusting the data set to include women, or artificially decreasing the volume and modifying the enunciation.
  • a user can use a neural network to analyze a dataset via the disclosed AI tool.
  • the user then switches to a classification algorithm.
  • the tool can provide for displaying the output from the neural network compared against the output from the classification algorithm.
  • the tool can identify the changes and determine which algorithm provided a more accurate output.
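A runnable sketch of this two-algorithm comparison using scikit-learn, with an MLP standing in for the neural network and a decision tree standing in for the classification algorithm. The synthetic data and model choices are assumptions for illustration only:

```python
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a user-uploaded dataset.
X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

results = {}
for name, model in {
    "neural_network": MLPClassifier(max_iter=1000, random_state=0),
    "classification_algorithm": DecisionTreeClassifier(random_state=0),
}.items():
    model.fit(X_tr, y_tr)
    results[name] = accuracy_score(y_te, model.predict(X_te))

# The tool could display both outputs side by side and report the more
# accurate algorithm to the user.
print(results)
```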
  • FIG. 7 shows a system 700 for building and/or evaluating an AI system.
  • the system 700 includes a plurality of users 701a, 701b, 701c ... 701n; a plurality of user AI creation devices 702a, 702b, 702c ... 702n; a network 703; and an external computing device 704.
  • the plurality of users 701a, 701b, 701c ... 701n each have an associated user AI creation device 702a, 702b, 702c ... 702n.
  • the user AI creation devices 702a, 702b, 702c ... 702n can include a software application running the disclosed AI tool, according to any of the embodiments discussed herein.
  • the users 701a, 701b, 701c ... 701n are connected to a network 703.
  • the external computing device 704 can facilitate information exchange between the plurality of user AI creation devices 702a, 702b, 702c ... 702n and the users 701a, 701b, 701c ... 701n.
  • the database is uploaded to the external computing device 704.
  • the users 701a, 701b, 701c ... 701n can choose for their associated user AI creation devices 702a, 702b, 702c ... 702n to be disconnected from the network 703.
  • the users 701a, 701b, 701c ... 701n selectively choose which information/data is shared by their associated user AI creation devices 702a, 702b, 702c ... 702n with the network 703.
  • in FIGs. 9A-9C, additional interface screens are shown for an exemplary embodiment of the disclosed AI tool.
  • the disclosed AI tool prompts the user to select profile info 902; smart home and/or internet of things product inputs 904; emotions analysis 906; little data 908; touch 910; and mapping data 912.
  • the disclosed AI tool prompts the user to select an analysis algorithm, including any of a swarm theory 914, a sorting algorithm 916, a neural network 918, a searching algorithm 920, a watching algorithm 922, and a linear regression analysis 924.
  • the disclosed AI tool prompts the user to select an output.
  • the output can include any system or medium through which the user intends to interact with the product provided by the AI tool.
  • screen 900C shows an autonomous car 926, a surveillance camera 928, an art generation product 930, an ocean product 932, an algorithm 934, a music generator 936, a digital profile or wearable device 938, a plant growth model 940, a fraud detection product 942, a chatbot or robot 944, a quilting design product 946, and an artificial intelligence healthcare product 948.
  • any selections can be provided to a user, as known in the art.
  • any machine learning or artificial intelligence algorithm as known in the art, can be used in the various embodiments of the present disclosure.
  • although particular systems and mediums are shown in screen 900C, the present disclosure contemplates that any system and/or output medium can be used by the disclosed AI tool, according to the various embodiments of the present disclosure.
  • the disclosed AI tool can be an in-browser generator and/or a software application, which can be used in Virtual Reality, XR, Augmented Reality and/or real life.
  • the present disclosure also contemplates that the disclosed AI tool can be operated in any form as known in the art. In other examples, it could be any computer program running on any computing device.
  • FIGs. 6A-6B show an exemplary methodology 600 for a user to build an AI system, according to another embodiment of the present disclosure.
  • Methodology 600 can correspond to the design/prototyping mode discussed above.
  • a user starts with a predetermined AI design question or approach. For example, a user can intend to create transparency regarding the utilization of emotions analysis in voice interfaces.
  • methodology 600 uses deep learning, the design question, any keywords and/or input data (whether user created or uploaded from an existing dataset) to (1) identify patterns, and (2) make comparisons with both labeled and unlabeled data in order to create new labels, relationships, models and/or context.
  • the user identifies the material.
  • Materials take both physical and digital forms in the design.
  • the hardware of a product may lend itself to the utilization of specific data/models/algorithms intended for that specific product.
  • digital material includes a software application, a hardware device, or any other product utilizing Artificial Intelligence.
  • the materials comprise the form of the system; with more embodied AI devices, the materials and form themselves affect how the disclosed AI tool produces output.
  • the materials can produce the form.
  • at step 602, the user makes decisions regarding how and what will be designed. For example, if the user wants to design for a product like Amazon's cloud-based voice service, Alexa, only specific design choices will be available based on that product.
  • the design tool (or service) fetches the requirements for the integration at step 604.
  • the design tool can also retrieve any tagging information related to the material (or product) chosen in step 602.
  • the user is then prompted to include data via one or more of these options: existing data sets (step 618) or user-created data sets (step 612).
  • the user can select real time data from a sensor or data from an API.
  • the input can also include any AI tagging (or metadata) provided by any other product.
  • the user creates a specific data type and then uploads the data type at step 614, having it verified by the service/design tool at step 616. Therefore, the data type conforms to the material chosen in step 602.
  • the user can upload pre-existing data sets that conform to the new data type.
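A minimal sketch of the verification at step 616, assuming each material implies a set of required fields; the material names and field names below are hypothetical, not drawn from the disclosure:

```python
REQUIRED_FIELDS = {                      # hypothetical per-material requirements
    "voice_assistant": {"audio", "transcript"},
    "wearable": {"timestamp", "sensor_value"},
}

def verify_dataset(material: str, records: list) -> bool:
    """Step 616 (sketch): verify every uploaded record carries the fields
    implied by the material chosen at step 602."""
    required = REQUIRED_FIELDS.get(material, set())
    return all(required <= set(record) for record in records)

ok = verify_dataset("wearable", [{"timestamp": 0, "sensor_value": 3.2}])
print(ok)  # True: the uploaded data conforms to the chosen material
```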
  • Exemplary datasets include:
  • multiple datasets can be used at step 620.
  • in methodology 600, the user is then prompted to enter a “consideration input” and/or an intent at a subsequent step, where they can add cultural context, ethics, etc. (any variable that should be considered in the design process). These input considerations will be output at the end, and can also be used to highlight information throughout the design that might be relevant to that consideration. There are several benefits to entering the “consideration input”; the primary benefit is to build ethics, culture and bias controls into the design. In some examples of methodology 600, the users are reminded to design with and for these input considerations throughout the design process, not only at this step.
  • the user is provided with learning algorithms, which are populated by the material.
  • Inputs to the learning algorithms can be existing datasets, user uploaded data sets, real-time sensor information, API’s, and the design question/key words (or any other input as discussed above with respect to step 110 of FIG. 1).
  • the input can also include any AI tagging (meta data) provided by a specific product (discussed further below).
  • the user can train data locally with open source SDKs and/or scale using cloud services.
  • the user identifies an intent, which reflects the intention of the design.
  • the tool prompts users to identify a personal culture of the users, and/or a culture that the user is designing for.
  • the tool can analyze and adapt later prompts to the user based on this input.
  • at step 626, the user identifies the format of the output.
  • the service feeds a dataset to integrate and display in the sample output.
  • One output, shown at step 628, is a prototype built on an SDK with the data the user suggested (in the form of suggested code, API, AI tagging and/or written information). Additional outputs (not shown) can include hardware, physical material, or auditory noise.
  • Another output is auto-generated analysis/visualization (a report summary with visuals), shown at step 630.
  • This report can include technical and social/cultural considerations. In the report, the output can also highlight issues of concern with the AI design process or designed biases in data, models and demographic information about the creators.
  • An exemplary output according to step 630 can provide a recommendation to utilize pitch changing to identify the presence of an algorithm (earcon). The report can include suggestions of pitch changing libraries. Step 630 can further provide for populating the output.
  • the service then displays the sample output.
  • the tool provides AI tagging (also referred to as meta tagging).
  • AI tagging includes receiving content descriptors of (1) the algorithms/models, (2) input data used in the design of existing AI systems, (3) the demographic information of the humans or machines proposing the AI system, and (4) who created the materials and form of the AI system.
  • the disclosed tool uses the AI tags to increase algorithmic transparency by providing data, algorithm / model information in the design and development process of an AI system.
  • the disclosed tool also provides for tagging created AI systems with the demographics of the creators, content descriptors of the algorithms used, and/or content descriptors of the input data used.
  • the disclosed AI tool provides pre-built non-technical considerations for AI system design, giving these considerations equal importance to the technical algorithm selection.
  • Output from the disclosed AI tool therefore reduces unwanted bias that exists in conventionally-designed AI systems.
  • the output can be displayed, felt, or heard through various devices (e.g., phones, embedded haptics in clothing, and/or sound produced in location-specific ML systems).
  • Examples of this AI tagging include:
  • AI data: “gesture data from UCLA: trained on: gender (90% male-identified, …”
  • AI tagging is incorporated at the beginning of the AI design process (e.g. before step 110 of FIG. 1, or before steps 310 and 410 of FIGs. 3 and 4, respectively), when the user imports information from a specific product and/or dataset.
  • AI tagging is output from the design tool.
  • the disclosed AI tool receives AI tagging data from a user at home, at a worksite, through a user's mobile device, through a scanner, or through an RFID chip embedded in a computing device.
  • the user can access the AI tags through any of these devices, or while viewing a system in augmented realities.
  • a user receives a text message identifying the bias. Any other method for uploading an AI tag or displaying AI tagging can be used as well, as contemplated by one skilled in the art.
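Drawing on the partial example above (“gesture data from UCLA: trained on: gender (90% male-identified, …”), one possible schema for an AI tag might look like the following sketch. Every field name here is an assumption made for illustration; the disclosure does not specify a data format:

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class AITag:
    """Content descriptors an AI tag might carry (all field names assumed)."""
    data_description: str                   # what data the system was trained on
    trained_on: Dict[str, Dict[str, float]]  # demographic composition of the data
    model_descriptors: List[str]            # descriptors of the algorithms/models
    creator_demographics: Dict[str, str]    # who proposed/created the system
    materials_and_form: str                 # who created the materials and form

tag = AITag(
    data_description="gesture data from UCLA",
    trained_on={"gender": {"male-identified": 0.90}},  # from the example above
    model_descriptors=["supervised learning"],
    creator_demographics={"region": "undisclosed"},
    materials_and_form="mobile sensor hardware",
)
```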
  • the disclosed AI tool provides for collecting data insights from multiple and varied realities in order to expand the reach of an AI system beyond conventional AI systems. This data provides more holistic cultural perspectives on the roles of user bodies, location, thinking about feelings, and user interaction with color; this holistic perspective provided by the disclosed AI tool provides a different cultural perspective for users than conventional AI systems.
  • the disclosed AI tool receives user-rated data (e.g., embodied data sorting) or other reviews of conventionally-designed AI systems. The disclosed tool then identifies patterns in sorting to determine how the salience of objects and media varies across cultures.
  • the AI tool collects location-based feeling placement, when users identify where they would like to tag a feeling, by dropping a color-coded feeling in a specific location. Users can leave information in locations that can then be collected and used for a more complicated AI system, which can build across multiple data streams, across multiple realities to focus on more embodied AI experiences.
  • virtual reality data can be collected similarly to augmented reality data, as would be readily contemplated by one skilled in the art.
  • FIG. 8 provides a chart 800 showing how different data can be collected across realities.
  • chart 800 shows the datasets: media 802, behavior 804, material 806, reality type 808, and artificial intelligence model 810.
  • An exemplary artificial intelligence model 810 can include one type of media 802, one behavior 804, one material 806, and one reality type 808.
  • a collection and unsupervised learning artificial intelligence model can use textual media, throwing behavior, a phone as material, and data collected from real life.
  • a data-sorting artificial intelligence model can receive media input from textual media, audio media, video media, and 3-D object media.
  • the data-sorting artificial intelligence model can use visio-spatial sort behavior, use headset/controller material, and a virtual reality implementation.
  • an AI system which provides output for an individual experience can use 3-D objects in physical space, can cause the items to be placed and/or received, can use a phone or tablet, and can provide augmented realities.
  • an AI system which provides output for a collective experience can use a photon (i.e., electric communication) and a phone/tablet.
  • the AI system is provided in Internet of Things augmented reality.
  • the AI design tool provides an interactive experience for a group of users around the world (for example, the group of users can be diverse).
  • the AI design tool provides a set of questions to the group of users and receives personal refinement from each user.
  • the set of questions can be directed towards the user’s feelings.
  • the questions range from general cultural concepts of feelings (e.g., “How would your community describe ‘feeling average’?”) to more personal ideas about how the users feel (e.g., “How do you know you feel blue or melancholy?”).
  • the AI design tool collects responses over an extended period of time. This information can be sorted or analyzed using various models, including supervised learning or unsupervised learning. For example, the AI design tool groups together keywords from the iterations of questions (much like a flocking algorithmic script).
  • the AI design tool (1) predicts which questions a particular user will be comfortable answering, according to the groupings; and (2) prompts a user to consent to any of a plurality of public disclosures of the user's data after the user has honestly answered the question. Therefore, unlike conventional data collection systems, which require a user to opt in to disclosure before the user has provided any information, the disclosed design tool provides a platform for users to first disclose their information and then decide what they are interested in sharing. The disclosed design tool therefore ensures greater accuracy in user responses over conventional systems.
  • with the grouped questions and the user responses, the disclosed AI tool examines emotional and behavioral patterns to determine future questions and to determine which questions should be provided to which users. Therefore, the disclosed AI tool provides a system for users to engage with feelings and develop their emotional health.
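One way such keyword grouping could be sketched is shown below. The disclosure compares its grouping to a flocking algorithmic script; this example substitutes a standard TF-IDF plus k-means clustering, which is an assumption for illustration, not the patent's method, and the sample responses are invented:

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

responses = [
    "feeling average means an ordinary, steady day",
    "melancholy feels like a quiet heaviness",
    "a steady, ordinary mood, nothing remarkable",
    "blue days feel heavy and slow",
]

vectors = TfidfVectorizer().fit_transform(responses)
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

# Responses in the same group share keywords; the tool could use such
# groupings to predict which follow-up questions a user is comfortable
# answering, prompting for consent to share only after an answer is given.
print(list(zip(responses, groups)))
```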
  • the AI design tool reveals the assumptions in the design and development of conventional systems by increasing AI literacy through user workshops reliant on the disclosed tool. Using this tool, conventional approaches to AI development and design can be deconstructed; the tool can create new approaches; and the tool redefines and provides alternatives to existing problematic knowledge systems.
  • the disclosed AI tool can identify response patterns to show traits of reported feelings across cultures and different demographics.
  • the speculative, interactive, and design practices of the disclosed AI tool provide alternatives to conventional approaches for mental health diagnosis and treatment.
  • the disclosed AI tool collects and organizes different types of data across different realities or environments.
  • the AI design tool can collect data from crowd sourcing, embodied data sorting in virtual reality, and location-based feeling placement in augmented reality (e.g., the user drops a color-coded feeling in specific locations).
  • the disclosed AI tool can use the data from each reality to provide a different strength for data collection.
  • the disclosed AI tool provides an interface for users to see how their responses to a question compare with (1) their previous response and (2) other responses around the world.
  • Some embodiments include keyword search options and visualizations.
  • a tool according to the present disclosure develops an AI tool to diagnose depression; the developed tool has a lower bias than conventional diagnostic methods.
  • the disclosed tool provides embodiments focusing on mental health for bots, browsers, digital materials, smart materials, haptics, handwriting, spoken words, and locations.
  • An exemplary tool can take as input: (1) crowdsourced data about user feelings, (2) user thoughts about their feelings, (3) location data, (4) varied voluntary demographic information, and (5) clinical research regarding keyword patterns found in existing diagnostic systems and assessments.
  • the present tool provides for unwanted bias reduction by examining who designed the data collection, who contributed to the data, who created the models, which models were used, and why.
  • the exemplary tool provides supervised and unsupervised learning with more data collection.
• the disclosed AI tool selects the algorithm to analyze the data based on the AI's database collection.
• the exemplary tool provides a plurality of output options, including (1) visualization, (2) alternative information for inputs, (3) new words, (4) new classifications, (5) new language of emotions, (6) data from a contextual normalcy (according to the contextual normalcy embodiment discussed above), (7) data from an augmented reality distributed emotion application, and (8) intelligent location-based experiences.
  • an embodiment of the present tool provides data primarily focused on individual and collective cultures as well as project-defined communities and teams.
  • Embodiments of the invention are directed to a collaborative software tool that facilitates new ways to create and shape emerging technologies.
  • the collaborative software tool places equal weight on both social and technical components of the design process, centers multiple perspectives and community driven design, and is explicitly exploratory (not prescriptive).
  • Fig. 10 illustrates a conceptual map of the collaborative software tool 1000.
  • the tool 1000 includes tools for data mapping 1004, model mapping 1008, and form and material mapping 1012.
  • the tool allows for creating AI 1016, exploring concepts 1020, exploring humanistic lenses 1024, and implementation using the final data models 1028.
  • the tool 1000 may also include analyzing AI 1032, which includes project mapping 1036.
  • the project mapping 1036 may include: a concept section 1040, humanities section 1044 and AI section 1048.
  • the concept section 1040 shows the initial concept in the blueprint or audit of the AI.
  • the humanities section 1044 shows the humanities lenses used in the blueprint or audit of the AI.
  • the AI section 1048 includes all of the data, models, and form and material and also shows the blueprint or audit of the final AI.
  • the sections are modular and are movable in the tool. Additional details about the tool are disclosed hereinafter.
  • Fig. 11 illustrates an example of the fundamentals in action with the mapping sections implemented 1100.
  • the explore concepts step includes exploring the methodologies of research 1104.
  • the process continues by exploring humanistic lenses 1108.
  • the process continues by using a training model inference 1112.
  • the process continues with Critical AI 1116.
  • Fig. 12 illustrates a process of designing social technologies using the tool and various sections of the tool 1200. It will be appreciated that the sections are modular and that the order of the process is not limited to that shown in Fig. 12.
  • a user may access the tool at a landing 1204 where the user is prompted to sign in or access the tool anonymously 1208. The user then selects that they want to encode 1212 to indicate that they are designing social technologies.
  • the designers may also enter into the project by trying to understand an existing project (decode) 1214, transform or modify an existing one (like remixing) 1216, or contribute a perspective, data or model, or any type of insight into an existing project 1218.
  • the tool may begin with a discover process 1220, in which designers are asked to think about the discovery process (e.g., who are they designing with, who are they designing for) and the context (e.g., projects focused on intelligence vs. life).
  • the designers are asked to describe or define the project 1224.
• the designers are also prompted to identify themselves or, in some cases, may wish to be protected and not identify themselves.
  • the community is defined or the individuals on the project may also be identified in this section.
  • the user also decides the “governing system” of the project 1228, including which parts of the project are open to additional outside collaboration.
  • the users implement the data collection system, where the user decides which data they want attached to the project in various sections.
• the users decide the duration of the data usage, the rules for usage, and which pieces of data (or information) will be used.
  • the project may be attached to the tool’s database but may also link out to other databases.
  • the user may also prototype what it would be like to use a different database (e.g., relational vs. object oriented, etc.).
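As an illustration of this kind of database prototyping, here is a minimal sketch in which a project talks to interchangeable relational and object-oriented backends through one small interface; the class names, methods, and keys are hypothetical, not the tool's actual storage layer.

```python
# Sketch only: swap database backends behind one tiny interface so a
# designer can prototype "what it would be like" to use a different store.
import sqlite3

class RelationalStore:
    def __init__(self):
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE items (key TEXT PRIMARY KEY, value TEXT)")
    def put(self, key, value):
        self.conn.execute("INSERT OR REPLACE INTO items VALUES (?, ?)", (key, value))
    def get(self, key):
        row = self.conn.execute("SELECT value FROM items WHERE key=?", (key,)).fetchone()
        return row[0] if row else None

class ObjectStore:
    def __init__(self):
        self.objects = {}  # object-oriented stand-in: values stored directly
    def put(self, key, value):
        self.objects[key] = value
    def get(self, key):
        return self.objects.get(key)

# The project code uses the same two methods either way, so backends can
# be swapped and compared during prototyping.
for store in (RelationalStore(), ObjectStore()):
    store.put("dataset", "crowdsourced-feelings-v1")
    print(type(store).__name__, store.get("dataset"))
```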
  • the designers are also asked to identify who they are creating the project for (i.e., the audience) 1232, such that it can be determined whether there are any potential disconnects between the people creating the tools and the people using or being affected by these tools.
• the process also includes a blueprint section.
• part 1 of the blueprint 1240 focuses on what the project does (e.g., inference) and part 2 of the blueprint 1244 focuses on how it works (e.g., training).
  • the blueprint section is interchangeable, and can take on its own form.
  • the tool also includes a data collection section 1250.
• In the data collection process 1250, the user is asked to think critically about data collection, the type of data, who decides what becomes data, and who creates the data question.
• the user either retrains, identifies, or creates "rules" 1254.
• Rules may include artificial intelligence, life, and knowledge models, which may include: deep learning, machine learning models, algorithms, and community- or individually-defined rules that may become a model or algorithm. Users are prompted to think about the knowledge system, the transfer of knowledge to information, and the exploration of how information becomes data.
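For illustration, a minimal sketch of how a community- or individually-defined rule might be represented and composed into a simple rule-based model; the Rule structure and example predicates are assumptions, not the tool's actual schema.

```python
# Sketch only: community-defined rules composed into a rule-based "model".
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    author: str                      # who contributed the rule (for traces)
    predicate: Callable[[dict], bool]

rules = [
    Rule("has_consent", "community", lambda record: record.get("consent", False)),
    Rule("within_duration", "community", lambda record: record.get("age_days", 0) <= 90),
]

def rule_model(record: dict) -> bool:
    """A record is usable only if every community-defined rule admits it."""
    return all(rule.predicate(record) for rule in rules)

print(rule_model({"consent": True, "age_days": 30}))   # True
print(rule_model({"consent": False, "age_days": 30}))  # False
```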
• the data collection portion may also be included in the blueprint section.
  • the user also explores both form and material 1258.
  • a user might want to speak to form only, or material only, or both form and material.
  • the users can create or use community and individual contributed design patterns and explorations. These patterns and/or behaviors may also incorporate a combination of patterns across realities.
  • one pattern in AR may affect or incorporate information from VR, or IRL.
  • the design or learning output from one reality (VR) may generate or be an input for another reality (AR).
• the tool may also include a frames section 1260 and/or a frameshifts section 1262.
  • the frames section 1260 or frameshifts section 1262 may include frames, lenses, filters, etc. that can be applied to the project holistically or on each section. These frames may represent current thinking about a technology from a specific perspective, or they may offer shifts in the way the technology or parts of the technology are framed or understood. The frames may also encourage designers or contributors to think critically about a technology project.
  • the user may select frames or add frames 1261.
  • a user may select frames or add frames 1263.
  • the process also includes a prototype section 1270, scaffolding section 1274, implementation section 1279, activation section (also referred to as testing) 1282, and audit section 1286, discussed in further detail hereinafter.
  • the tool may also include a reflections section 1290 and a tracing section 1294.
  • the scaffolding section 1274 includes choosing software 1275 and performing an internal software integration 1276. An external software integration 1277 may also be done.
  • a paper prototype 1278 may also be generated during the scaffolding section.
  • the implementation section 1279 may include choosing data 1280 and choosing a model 1281.
  • Fig. 13 illustrates a system map 1300 according to embodiments of the present disclosure.
• In addition to making a project (through decode, encode, or transform), the tool also provides a path for "donation" or "contribution" to existing projects, or to the tool's public library. These donations can be in the form of, for example, data or "frames". This allows those contributing or donating to consent to who or what projects they are donating to. In this example, the creator enters the landing page 1204, signs in 1208, and chooses the contribute option 1218.
  • the user may choose to contribute to a project that they have been invited to
  • the user may perform a project search 1312.
  • the project search may be done using, for example, browsing public projects 1316, by key word 1320, or by invite code 1324.
• the creator may contribute data, frames, or shifts of frames.
  • Fig. 14 illustrates an exemplary project representation 1400.
  • a user can add modules / sections to the headers 1404. This is the “at a glance” view of the data, algorithms, software and lenses used in the project. Similar to a QR code, it is a quick read for someone new to the project.
• a visualization / sound (earcon) of the project summary shows the project at a glance (visual, sound, etc.) as a poiesis / imprint. This can be realized on paper, on a screen, using VR and AR, or can be embedded or stored in objects. Users add the headers above based on their project. Additional categories can also be pulled from the methods / landing page.
  • Fig. 15 illustrates an exemplary user interface screen of the login page 1500.
• the user is asked to enter their own demographic information, and may add their own identifying information. They are also asked to share this information in each relevant section. Here, they can identify what groups they are a part of or would like to join.
  • Fig. 16 illustrates an exemplary user interface of a section where users can select whether to remake (transform), create (encode), deconstruct (decode) or contribute to a social technology project 1600.
• a project can be driven by individuals, nonprofits, or self-defined communities. The tool allows a community to contribute their info and collaborate together. The mechanics of community-sourced input are expressed in how a team may collaborate from the very beginning or pull from community-sourced data, with community-defined leaders, creating checkpoints specific to each project.
  • Fig. 17 illustrates an exemplary project setup user interface 1700 where the project creator is asked if they will enter as an individual or as a group. If the user is part of a group, they are further requested to enter the group name. The creator can enter in an existing project name or add an invite code.
  • Fig. 18 illustrates an exemplary user interface illustrating a discover section of the tool 1800.
  • the user is prompted with several questions and asked to think about the project (e.g., who is making it and who is it for).
• the user is presented with various approaches to AI via checkboxes, or they may write in a response.
  • the user is able to describe their projects, and keywords may be used later for recommendations at certain stages of the projects.
  • the user can write in questions / considerations, find more questions, and/or upvote existing questions. Similar to other sections of the tool, they may upvote or downvote specific questions.
  • Fig. 19 is an exemplary user interface section 1900 of the tool where the user or group identifies the intended audience of the project. This helps people recognize if the project is being designed for the user or group that they are or are not a part of. Additionally, the designers can upvote/downvote questions, write in questions, and/or find questions that fit with the project.
• Figs. 20A-20M illustrate exemplary screen shots relating to the frames section of the tool 2000.
• the social and community created frames 2004 help prototype through possible design futures. These frames may be used to describe existing approaches to technology, or may offer a "frame" shift: a new approach to creating a tech project. Examples of frames 2004 include privacy, bias, time, level of connectivity, and the like. The frames 2004 may directly affect a project if it requires changes in data or models, as discussed in further detail with respect to the data/model mapping section.
  • Fig. 20A illustrates an exemplary screen shot relating to the frames section of the tool 2000.
• the discover section, blueprint, shifts, prototype, reflections, audit, and any additional sections illustrate a drop-down section for traces 2008 (tracing the attributions and inspiration of those who directly contributed to the portion of the project being created, showing what they did).
  • the consent icon 2012 is off in this subsection of frames which is shown in a later screen as a pop out.
  • the circle icons 2014 in the top right represent contributors.
  • Traces 2008 show a ledger of inspiration and attribution 2016 for each section.
  • one aspect of the frames section 2000 includes flagging and comments on sections of project 2016.
  • the dark triangle on the right 2018 signals that someone has flagged a section for review.
  • a comment may be added to provide context as to why the section has been flagged for review.
  • the flagging and comments section 2016 may also incorporate rating systems used internally or when active as a user research tool when reviewing a corporate prototype. It will be appreciated that the flagging and commenting may also be a part of other sections of the tool (e.g., blueprint, discover, etc.).
• Fig. 20D illustrates another exemplary screen shot relating to the frames section of the tool 2000, including an exemplary screenshot of the consent section 2020.
• the user can identify existing areas in which they give their consent to have their contributions used; they may also write in new suggested areas, identify specific pieces of data to be shared, and specify how that information is shared (for example, whether it is a local ML system or cloud-connected).
  • the frames (filter, lenses) 2004 can be grouped into categories with similar concepts 2030a-f.
  • a new frame can be created within a pre-existing category 2030a-f or a new category 2032.
  • Each individual or group of people may contribute or add frames as they see fit.
  • the individuals or groups adding frames may also be identified.
  • the frames may be individual frames or a grouping of frames (pack).
• As shown in Figs. 20F and 20G, the frames may be searched.
  • the frames may be searched by category (e.g., bias) 2140.
  • a user may enter a search term in a search field 2044, which returns frames relating to that search term.
  • the search term “bias” returns different frames relating to the “bias” search term.
  • the frames may be indexed by title, contributors, descriptions, etc. such that a user can search by any of the title, contributor or description and any matching frames will be returned.
• the frames may be further filtered by frame name or frame description 2148, as shown in Fig. 20G.
• the frame entry point examples identify where individuals can search for and add frames.
  • the card may include the contributors, descriptors and other relevant information (on the front or back of the card); this information is indexed for search purposes.
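A minimal sketch of such indexing, assuming a toy frame schema: frame cards are indexed by title, contributors, and description so that any of those fields can be matched by search. The field names and data are illustrative, not the tool's actual schema.

```python
# Sketch only: index frame cards by title, contributor, and description.
from collections import defaultdict

frames = [
    {"title": "Privacy", "contributors": ["Ada"], "description": "data minimization lens"},
    {"title": "Bias", "contributors": ["Grace"], "description": "who is missing from the data"},
]

index = defaultdict(set)
for i, frame in enumerate(frames):
    tokens = [frame["title"], frame["description"], *frame["contributors"]]
    for token in " ".join(tokens).lower().split():
        index[token].add(i)

def search(term: str):
    """Return frames whose title, description, or contributors match the term."""
    return [frames[i] for i in index.get(term.lower(), set())]

print(search("bias"))  # -> the Bias frame card
```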
• the contributors may also be grouped into larger groups (i.e., multiple contributor icons nested within a single group icon).
• Fig. 20I is a detailed view of an exemplary screen shot of a user interface for creating an exemplary frame 2004.
  • the frame card information and headers can be added or renamed 2054.
  • Examples of the frame card information 2054 include the frame title, frame description, contributors, frame category, examples, critical questions, inspiration/attribution and whether the frame is public or private.
• a graphic or other files 2058 may also be added to the frame. These files may be, for example, direct audio / video, images, videos, APIs, sound, sensor info, etc. It will be appreciated that the files may be uploaded or searched for and taken from other frames, data, input, cards, etc.
  • the user can lock in 2160 one frame or many frames as a group of frames. Comments can also be added to one frame or a group of frames.
  • the frame may be displayed with a description of the frame only and selecting on the frame will display additional information about the frame.
  • users can also search by philosophy.
  • a user may search for frames relating to the values or philosophy of an organization.
• the search may be for a feminist.ai philosophy as a whole or may pull specific pieces of the philosophy.
• Fig. 20L illustrates a detailed view of a frame card 2004 contributed to by Feminist.AI members. This exemplary card incorporates principle #8 (a visualization of the Feminist.AI philosophy that people directly affected by tech should be making the tech).
  • the frames can be used to help remake a project (such as a search algorithm) to identify current assumptions in the design process through different frames (lenses).
  • the current frames 2004 are pulled directly from the book Algorithms of Oppression by Safiya U. Noble as applied by (or processed by) the Feminist.AI community. All too often in tech, there is a prominent focus on the software — and its developers are well compensated for their labor regardless of whether the things they create have problematic aspects to them.
  • the tool fully integrates this course correction into the design process in a way that is publicly recognized and can be financially compensated using the frames 2004. This can be incorporated in the blueprint section or other sections/actions of the tool.
• Figs. 21A-H illustrate exemplary user interfaces relating to the blueprint section 2100.
• In Fig. 21A, the "create a blueprint" section of the tool is shown.
  • users can identify what is going into the system or add what they think goes into the system and open it up to community to contribute.
• In the actions section, the tool looks at functionality as well. Users may also rearrange the cards, which may include input, actions, output, data, rules, and form/material, as shown in Fig. 21A.
  • the user defines the rules, but then the rules are broken down into functionality and model.
  • the input may “interact with the actions/functionality”, and has the form of a specific output or outputs.
  • Fig. 21C illustrates an exemplary input card 2140.
  • the card may include images, but similar to the frames card, users may add an API, sound, sensor input, live video, audio, mp3, etc.
  • the uploads may be audio / video, image, video, API, sound, and/or sensor info that can be uploaded or searched for / taken from other frames, data, input, cards, etc.
• the blueprint section 2100 includes concept inference 2150 and concept training 2152.
  • users define rules when it is implemented.
  • the rules are broken into functionality and machine learning (ML) models.
  • the input 2156 “interacts with the actions/functionality” 2158, and has the form of a specific output or outputs 2160.
• In concept training 2152, the concept is trained using one or more of training data 2162, ML models 2164, and data form relationship information 2166.
  • this section provides an example of the position/orientation map pop-up 2170 of Fig. 21F in a collapsed state.
  • Fig. 21F shows the position / orientation map 2170 in the expanded state.
  • the position / orientation map 2170 identifies where the participant is in the design process.
• the blueprint section 2100 includes two aspects: what it does (inference) 2150 and how it works (training) 2152. It will be appreciated that the input data and material may affect the form or experience of the project.
  • Fig. 21H illustrates an example of the data collection card 2180.
• Users may add elements to this card 2180 or remove elements from this card 2180. As shown in the exemplary card 2180, the users may provide information about "what is beauty", and images or other files (e.g., APIs, sound, sensor input, live video, audio, mp3, etc.) can be added.
  • the data in the data collection card can be source data or “retraining” data.
  • Fig. 22 illustrates an exemplary use of the tool to shift an existing frame or recommend a new frame (i.e., frameshift) 2200.
  • a user can add as many layers (grouping) of frames or individual frames as needed.
• the first row illustrates a set of frames that offers new perspectives from the Algorithms of Oppression lenses, based on the book by Safiya U. Noble; here we ask: what if we designed for positive representation, multiple culturally situated searches, and search as a form of community power?
  • the second row illustrates additional frames suggested for the project.
  • people may revisit previous project information - like the frames - and may offer a shift in that thinking.
  • the frames may come from an individual, a book, a group of people, etc. and people may login and suggest or remove frames.
  • Figs. 23 A-23B, 24A-J and 25 illustrate examples of the prototype section of the tool, which includes scaffolding, implementation (data and model), and activation (also understood as testing) modules.
  • Figs. 23A and 23B illustrate an example of the scaffolding section of the tool
  • a user identifies integration with potential software.
• the user identifies the compatibility with other tools or sections of the collaborative software tool (and the ability to integrate with those sections). For example, if a user is using Runway ML but wants to upload audio data, when they get to data uploads, a warning will appear about compatibility and make suggestions regarding other options.
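For illustration, a minimal sketch of this kind of compatibility check; the capability table below is invented for the example and does not describe the actual capabilities of Runway ML or any other listed tool.

```python
# Sketch only: warn when the chosen software does not support the
# uploaded data type, and suggest alternatives. Capabilities are invented.
SUPPORTED_DATA = {
    "runway_ml": {"image", "video"},
    "wekinator": {"sensor", "audio"},
    "p5js": {"image", "audio"},
}

def check_compatibility(software: str, data_type: str) -> str:
    supported = SUPPORTED_DATA.get(software, set())
    if data_type in supported:
        return f"{software} accepts {data_type} data."
    alternatives = [s for s, types in SUPPORTED_DATA.items() if data_type in types]
    return (f"Warning: {software} may not accept {data_type} data. "
            f"Consider: {', '.join(alternatives) or 'no known alternative'}.")

print(check_compatibility("runway_ml", "audio"))
```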
• the project creator may also use the models or algorithms built within (or local to) the tool itself.
  • Figs. 24A-F are exemplary user interfaces relating to the data uploading implementation section of the tool 2400 and
  • Figs. 24G-J are exemplary user interfaces relating to the model implementation section of the tool 2450.
  • a user can create
  • Fig. 24B illustrates an exemplary upload process.
  • the user can see the history of the model, how it was made, who contributed, and at what point it was changed or updated.
• the user may load a CSV or Excel file exported from Unity, containing movement information (e.g., information from an accelerometer sensor) and ratings of various objects in Unity (sound, visuals, videos, patterns, etc.), focusing on more embodied approaches to data collection.
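A minimal sketch of loading such a Unity export with pandas; the column names (t, accel_x, accel_y, accel_z, rating) are assumptions, since an actual export would define its own schema.

```python
# Sketch only: load an accelerometer-style CSV exported from Unity and
# derive a simple embodied-data feature (movement magnitude per sample).
import pandas as pd

df = pd.read_csv("unity_export.csv")  # assumed columns: t, accel_x, accel_y, accel_z, rating

df["magnitude"] = (df[["accel_x", "accel_y", "accel_z"]] ** 2).sum(axis=1) ** 0.5
print(df[["t", "magnitude", "rating"]].head())
```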
  • the user may also upload data from those in their network (it may only be available to them via their drive).
  • Fig. 24C illustrates an example where the user can explore or map the data and model section.
• the user may incorporate a data repo that already exists (e.g., via GitHub, GitLab, or internally with a group data network).
  • the user may also see previous versions of the data (e.g., data updated or retrained by the Feminist.AI community).
  • the user can add their own data to the data repo, and work with their own forked data.
  • the user can select the model within the repository — then the toolkit shows the history of the model, how it was made, who contributed, and at what point it was changed or updated.
  • Figs. 24 G-J show the model implementation section 2450.
  • the model implementation may compare different models with one dataset, or one model with different datasets.
• the user can run inference on various models (with one dataset) and can also engage in the reverse (using various datasets on one model and comparing). If a user selects a model that is not supported by the software selected in the scaffolding section, the tool will throw an error in a modal requesting that the user select another model.
  • the search is indexed on the models and the data repository, allowing the users to search either by model name or data repository name (and can compare different data types with the same model relevant to the individual or community created frames (lenses)). After selecting a model, the users can run the inference.
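For illustration, a minimal sketch of comparing several queued models on one dataset; the model set, dataset, and metric are illustrative choices, not the tool's fixed behavior.

```python
# Sketch only: compare queued models on one dataset with cross-validation.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
queued_models = {
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "logistic_regression": LogisticRegression(max_iter=1000),
    "svm": SVC(),
}
for name, model in queued_models.items():
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean accuracy {score:.3f}")
```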
  • the inference opens in a modal shown in Fig. 24H.
  • the “edit design” button opens a project drawer that allows users to edit cards to match the model.
  • Fig. 24H illustrates an example of a model instance being run.
• Fig. 24I illustrates an example of an implementation model search. In this section, users can queue various models and retrain with a specific dataset. The dataset is disabled until the model is selected.
  • Fig. 24J illustrates an implementation model search where a particular model has been selected.
  • Fig. 25 illustrates an example of the activation (testing) section 2500 of the tool.
  • the user identifies any software integrations, models and data. If there are any issues here, suggestions are made regarding alternative models, data, etc. User may be redirected to other sections or required to update data or data parameters. Here, the user may connect their project with external software (if it has not already been connected).
• Figs. 26A-E illustrate alternative variations of the audit section (2600, 2610, etc.).
  • This section may also include an at-a-glance section (project representation) - where there is a sensory (or accessible) entry point to the project - through a visualization, sound or sensor output that represents the whole project - through visuals or other sensory inputs/outputs, where the user can add information that is important to the final project representation card.
• In the audit section of the tool (in this case, the project representation), the user can see the state of diversity of the dataset and the various considerations. On the screen, this is the section that "pops out" to represent various projects.
  • the final snapshot identifies who was involved and their decision making. The final project asks people to socially position their work and describe where it lives.
  • the audit section allows for the project to be viewed holistically, commented on, connected with - here the project can show some of the frames and considerations made during the design process, look at the consent section, inspiration, comments, flagging and rating sections, as well as ask people to consider the social implications of the project, where it may live, and if there is a disconnect between the users and creators.
  • This audit section provides a view (or recap) of the data, algorithms, software and lenses used in the project.
• the visualization / sound (earcon) of the project summary shows the project representation (visual, sound, etc.) as a poiesis / imprint and can be realized on paper, on a screen, via VR and AR, or may be embedded or stored in objects. Users can add headers to the project representation section of the audit section based on the project and any additional categories pulled from the methods / landing page.
  • Figs. 26A-C also include the orientation/position map.
• Figs. 27A-D are exemplary user interfaces that illustrate a project page 2700, which incorporates a data collection section (for a specific project or related projects), showing that projects can be grouped by non-profits, academic institutions, project type, the individuals creating or owning the project, and the like.
  • the tool provides a place to collect data and make critical projects with input from communities, as shown in Figs. 27A and 27B.
• a user can contribute information, which reframes the process of search, inspired by Algorithms of Oppression by Safiya U. Noble and contributed by the Feminist.AI community. Communities contribute, and this information can be used in education programming of the tool.
  • Projects from this information page can be remixed or used in the tool (if consent is given).
• a user can click on existing models or suggest new ones in the sources of model section 2730, like the anti-freshness algorithm, or link out to something like Runway ML or p5.js to incorporate with the project. Rather than using Google search to reimagine Google search, a user can do it with, for example, Wekinator, Runway ML, or p5.js.
• In the data collection information, a user can contribute information to existing Algorithms of Oppression-inspired or Feminist.AI projects. For example, they can contribute by checking the plus button 2735 in the data collection section 2720 to launch a data contribution card 2740, as shown in Fig. 27D.
  • Fig. 28 illustrates the card selection states within the tool, according to an embodiment of the present disclosure.
  • exemplary card selection states include unselected 2804, hover 2808 and selected 2812.
  • Fig. 29 illustrates examples of types of data collection cards including blank cards 2904, cards with an added image or other input (such as a sound, API, etc.) 2908, and a new card 2912.
• steps 120 and 130 of Fig. 1, steps 320 and 330 of Fig. 3, steps 420 and 430 of Fig. 4, steps 510 and 540 of Fig. 5, and steps of the processes of Figs. 12 and 13 can be performed by a supervised or unsupervised algorithm.
• the system may utilize more basic machine learning tools, including (1) decision trees ("DT"), (2) Bayesian networks ("BN"), (3) artificial neural networks ("ANN"), or (4) support vector machines ("SVM").
• deep learning algorithms or other more sophisticated machine learning algorithms, e.g., convolutional neural networks ("CNN") or capsule networks ("CapsNet"), may be used.
  • DT are classification graphs that match input data to questions asked at each consecutive step in a decision tree.
  • the DT program moves down the “branches” of the tree based on the answers to the questions (e.g., First branch: Does the dataset comprise widely representative data? yes or no. Branch two: Is the dataset missing a specific racial/ethnic group? yes or no, etc.).
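A minimal sketch of such a tree, with the branch questions above encoded as hypothetical 0/1 dataset properties; the toy labels are invented for the example.

```python
# Sketch only: a decision tree over yes/no dataset properties that mirror
# the branch questions above.
from sklearn.tree import DecisionTreeClassifier, export_text

# columns: [widely_representative, missing_racial_ethnic_group]
X = [[1, 0], [1, 1], [0, 0], [0, 1]]
y = [1, 0, 0, 0]  # 1 = dataset acceptable under these example rules

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["widely_representative", "missing_group"]))
print(tree.predict([[1, 0]]))  # follows the branches to a classification
```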
• Bayesian networks ("BN") model the likelihood that something is true given independent variables, and are based purely on probabilistic relationships that determine the likelihood of one variable based on another or others. For example, BN can model the relationships between input datasets, output datasets, material, and any other information as contemplated by the present disclosure. Using an efficient BN algorithm, an inference can be made based on the input data.
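For illustration, the probabilistic reasoning a BN performs can be reduced to a single application of Bayes' rule; the probabilities below are invented for the example.

```python
# Sketch only: Bayes' rule with invented probabilities.
p_biased = 0.3                      # P(dataset is biased)
p_flag_given_biased = 0.8           # P(audit flag | biased)
p_flag_given_unbiased = 0.1         # P(audit flag | not biased)

p_flag = (p_flag_given_biased * p_biased
          + p_flag_given_unbiased * (1 - p_biased))
p_biased_given_flag = p_flag_given_biased * p_biased / p_flag
print(f"P(biased | audit flag) = {p_biased_given_flag:.3f}")  # ~0.774
```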
• ANN are computational models inspired by an animal's central nervous system. They map inputs to outputs through a network of nodes. However, unlike BN, in ANN the nodes do not necessarily represent any actual variable. Accordingly, an ANN may have a hidden layer of nodes that are not represented by a known variable to an observer. ANNs are capable of pattern recognition, and their computing methods make it easier to understand a complex and unclear process, such as predicting a body position of the user based on a variety of input data.
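A minimal sketch of an ANN forward pass with one hidden layer whose nodes, as noted above, carry no named meaning; the weights are random stand-ins rather than learned values.

```python
# Sketch only: one-hidden-layer forward pass with random weights.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3)          # e.g., sensor inputs describing body position
W1 = rng.normal(size=(4, 3))    # hidden layer: 4 nodes with no named meaning
W2 = rng.normal(size=(2, 4))    # output layer: 2 classes

hidden = np.tanh(W1 @ x)        # hidden activations
logits = W2 @ hidden
probs = np.exp(logits) / np.exp(logits).sum()  # softmax over outputs
print(probs)
```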
• Support vector machines came about from a framework utilizing machine learning statistics and vector spaces (a linear algebra concept that signifies the number of dimensions in linear space) equipped with some kind of limit-related structure. In some cases, they may determine a new coordinate system that easily separates inputs into two classifications. For example, an SVM could identify a line that separates two sets of points originating from different classifications of events.
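For illustration, a minimal sketch of an SVM identifying a separating line between two toy point classes; the points are invented for the example.

```python
# Sketch only: linear SVM separating two toy point classes.
from sklearn.svm import SVC

X = [[0, 0], [0.5, 0.5], [1, 1], [3, 3], [3.5, 3.5], [4, 4]]
y = [0, 0, 0, 1, 1, 1]

svm = SVC(kernel="linear").fit(X, y)
w, b = svm.coef_[0], svm.intercept_[0]
print(f"separating line: {w[0]:.2f}*x1 + {w[1]:.2f}*x2 + {b:.2f} = 0")
print(svm.predict([[2, 2], [0.2, 0.1]]))
```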
• Deep neural networks ("DNN"), such as a Convolutional Neural Network ("CNN"), a Restricted Boltzmann Machine ("RBM"), or a Long Short Term Memory ("LSTM") network, may also be used.
  • Machine learning models require training data to identify the features of interest that they are designed to detect. For instance, various methods may be utilized to form the machine learning models, including applying randomly assigned initial weights for the network and applying gradient descent using back propagation for deep learning algorithms. In other examples, a neural network with one or two hidden layers can be used without training using this technique.
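A minimal sketch of that procedure (randomly assigned initial weights, then gradient descent) for a single linear neuron on toy data; it illustrates the training loop, not a full deep network with backpropagation through many layers.

```python
# Sketch only: random initial weights refined by gradient descent.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
y = X @ np.array([2.0, -1.0]) + 0.5           # hidden "true" relationship

w = rng.normal(size=2)                         # randomly assigned initial weights
b = 0.0
lr = 0.1
for _ in range(200):
    pred = X @ w + b
    err = pred - y
    # gradient step for mean squared error (constant factor folded into lr)
    w -= lr * (X.T @ err) / len(y)
    b -= lr * err.mean()
print(w, b)  # approaches [2.0, -1.0] and 0.5
```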
  • the machine learning model can be trained using labeled data, or data that represents certain user input. In other examples, the data will only be labeled with the outcome and the various relevant data may be input to train the machine learning algorithm.
• various machine learning models may be utilized that input various data disclosed herein. In some examples, the input data will be labeled by having an expert in the field label the relevant regulations according to the particular situation. Accordingly, the input to the machine learning algorithm for training identifies various legal regulations as 'relevant' or 'non-relevant'.
• Supervised Learning: The disclosed AI tool provides for using supervised learning to engage in classification. For example, the tool pairs keywords from questions with the primary feeling word in a particular question, and uses these pairs as training data.
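For illustration, a minimal sketch of such supervised classification on toy keyword/feeling pairs; the data and model choice are assumptions, not the tool's actual pipeline.

```python
# Sketch only: question keywords labeled with a primary feeling word.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

questions = [
    "alone missed friends tonight",
    "excited trip celebrate news",
    "worried deadline sleep late",
    "grateful family dinner laugh",
]
feelings = ["sad", "joy", "anxious", "joy"]   # primary feeling word per question

clf = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(questions, feelings)
print(clf.predict(["missed friends and felt alone"]))  # -> likely "sad"
```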
• Unsupervised Learning: In another embodiment of the disclosed tool, the tool removes keyword pairs and determines what patterns emerge.
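A minimal sketch of this unsupervised variant on the same kind of toy responses, with the labels removed and clusters left to emerge:

```python
# Sketch only: cluster unlabeled responses to see emergent groupings.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

responses = [
    "alone missed friends tonight",
    "excited trip celebrate news",
    "worried deadline sleep late",
    "grateful family dinner laugh",
]
X = TfidfVectorizer().fit_transform(responses)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # emergent groupings, with no feeling labels provided
```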
  • a local hardware model can be used to provide various embodiments of the present disclosure.
• the disclosed AI tool can be provided on an electromechanical device that allows a user to create and follow associative trails of links and personal annotations while interacting with the disclosed AI tool.
• Such a local hardware model can mimic the associative processes of the human brain (or mirror other natural systems), and allow a user to better learn how to construct and deconstruct AI systems.
• electro-mechanical controls and display devices can be integrated into a desk.
  • Such a local hardware model can provide haptic, tactile, auditory, physical, and visual feedback to a user. Feedback can additionally be provided across realities.
  • the disclosure herein may be implemented with any type of hardware and/or software, and may be a pre-programmed general purpose computing device.
  • the system may be implemented using a server, a personal computer, a portable computer, a thin client, or any suitable device or devices.
  • the disclosure and/or components thereof may be a single device at a single location, or multiple devices at a single, or multiple, locations that are connected together using any appropriate communication protocols over any communication medium such as electric cable, fiber optic cable, or in a wireless manner.
• The tool may be described in terms of modules which perform particular functions. It should be understood that these modules are merely schematically illustrated based on their function for clarity purposes only, and do not necessarily represent specific hardware or software. In this regard, these modules may be hardware and/or software implemented to substantially perform the particular functions discussed. Moreover, the modules may be combined together within the disclosure, or divided into additional modules based on the particular function desired. Thus, the disclosure should not be construed to limit the present invention, but merely be understood to illustrate one example implementation thereof.
  • the computing system can include clients and servers.
  • a client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
  • a server transmits data (e.g., an HTML page) to a client device (e.g., for purposes of displaying data to and receiving user input from a user interacting with the client device).
• Data generated at the client device (e.g., a result of the user interaction) can be received from the client device at the server.
  • the present disclosure contemplates any of the following networks (or a combination of the networks), including: a distributed network, a decentralized network, an edge network, a federated network, and/or a mesh network.
  • Implementations of the subject matter described in this specification can be implemented in a computing system that includes a backend component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a frontend component, e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the subject matter described in this specification, or any combination of one or more such backend, middleware, or frontend components.
  • the components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network.
• Examples of communication networks include a local area network ("LAN") and a wide area network ("WAN"), an inter-network (e.g., the Internet), mesh networks, and peer-to-peer networks (e.g., ad hoc peer-to-peer networks).
  • Implementations of the subject matter and the operations described in this specification can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them.
  • Implementations of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions, encoded on computer storage medium for execution by, or to control the operation of, data processing apparatus.
  • the program instructions can be encoded on an artificially- generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal that is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus.
  • a computer storage medium can be, or be included in, a computer-readable storage device, a computer-readable storage substrate, a random or serial access memory array or device, or a combination of one or more of them.
• While a computer storage medium is not a propagated signal, a computer storage medium can be a source or destination of computer program instructions encoded in an artificially-generated propagated signal.
  • the computer storage medium can also be, or be included in, one or more separate physical components or media (e.g., multiple CDs, disks, or other storage devices).
• the term "data processing apparatus" encompasses all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, a system on a chip, or multiple ones, or combinations, of the foregoing.
  • the apparatus can include special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit).
  • the apparatus can also include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, a cross-platform runtime environment, a virtual machine, or a combination of one or more of them.
  • the apparatus and execution environment can realize various different computing model infrastructures, such as web services, distributed computing and grid computing infrastructures.
  • a computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, declarative or procedural languages, and it can be deployed in any form, including as a standalone program or as a module, component, subroutine, object, or other unit suitable for use in a computing environment.
  • a computer program may, but need not, correspond to a file in a file system.
  • a program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, subprograms, or portions of code).
  • a computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
  • the processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform actions by operating on input data and generating output.
  • the processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit).
  • processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer.
  • a processor will receive instructions and data from a read-only memory or a random access memory or both.
  • the essential elements of a computer are a processor for performing actions in accordance with instructions and one or more memory devices for storing instructions and data.
  • a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks.
  • a computer need not have such devices.
  • a computer can be embedded in another device, e.g., a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device (e.g., a universal serial bus (USB) flash drive), to name just a few.
• Devices suitable for storing computer program instructions and data include all forms of nonvolatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
  • the processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
  • processors include AI hardware devices.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Hardware Design (AREA)
  • Geometry (AREA)
  • Stored Programmes (AREA)
  • User Interface Of Digital Computer (AREA)
  • Information Transfer Systems (AREA)
  • Machine Translation (AREA)
  • Paper (AREA)

Abstract

The present disclosure relates to artificial intelligence systems and to methods of receiving and analyzing data. An exemplary method provides for receiving input at an interface on a computing device. The input includes a dataset, an analysis for the dataset, and an output medium. The method then provides for selecting, based on the received input, at least one algorithm from a plurality of algorithms. The method then provides for processing, via the computing device, the received input with the at least one algorithm to yield an output. The output is provided at the interface on the computing device.
PCT/US2022/024875 2021-04-19 2022-04-14 Tool for designing artificial intelligence systems WO2022225793A1 (fr)

Priority Applications (3)

Application Number Priority Date Filing Date Title
AU2022260264A AU2022260264A1 (en) 2021-04-19 2022-04-14 Tool for designing artificial intelligence systems
EP22792236.6A EP4327229A1 (fr) 2021-04-19 2022-04-14 Tool for designing artificial intelligence systems
CA3217360A CA3217360A1 (fr) 2021-04-19 2022-04-14 Tool for designing artificial intelligence systems

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US17/234,752 2021-04-19
US17/234,752 US20210240892A1 (en) 2018-04-11 2021-04-19 Tool for designing artificial intelligence systems

Publications (1)

Publication Number Publication Date
WO2022225793A1 true WO2022225793A1 (fr) 2022-10-27

Family

ID=83722570

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2022/024875 WO2022225793A1 (fr) 2021-04-19 2022-04-14 Tool for designing artificial intelligence systems

Country Status (4)

Country Link
EP (1) EP4327229A1 (fr)
AU (1) AU2022260264A1 (fr)
CA (1) CA3217360A1 (fr)
WO (1) WO2022225793A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120159389A1 (en) * 2010-10-25 2012-06-21 Innovatia Inc. System and method for dynamic generation of procedures
US20180029293A1 (en) * 2015-02-18 2018-02-01 Technische Universität München Method and device for producing a three-dimensional object
US20190121855A1 (en) * 2017-10-20 2019-04-25 ConceptDrop Inc. Machine Learning System for Optimizing Projects
US20190318262A1 (en) * 2018-04-11 2019-10-17 Christine Meinders Tool for designing artificial intelligence systems


Also Published As

Publication number Publication date
CA3217360A1 (fr) 2022-10-27
EP4327229A1 (fr) 2024-02-28
AU2022260264A1 (en) 2023-11-16

Similar Documents

Publication Publication Date Title
US20190318262A1 (en) Tool for designing artificial intelligence systems
Akerkar Artificial intelligence for business
Li et al. A survey of data-driven and knowledge-aware explainable ai
Cashman et al. A User‐based Visual Analytics Workflow for Exploratory Model Analysis
Wang et al. Learning performance prediction via convolutional GRU and explainable neural networks in e-learning environments
Stacchio et al. Empowering digital twins with eXtended reality collaborations
Seebacher Predictive intelligence for data-driven managers
Li et al. Autonomous GIS: the next-generation AI-powered GIS
US20230186117A1 (en) Automated cloud data and technology solution delivery using dynamic minibot squad engine machine learning and artificial intelligence modeling
Angamuthu et al. Integrating multi-criteria decision-making with hybrid deep learning for sentiment analysis in recommender systems
Afsari et al. Artificial Intelligence Platform for Low-Cost Robotics
Chen et al. A bibliometric review of soft computing for recommender systems and sentiment analysis
US20210240892A1 (en) Tool for designing artificial intelligence systems
WO2022225793A1 (fr) Outil de conception de systèmes d'intelligence artificielle
CN112256917B (zh) User interest recognition method, apparatus, device and computer-readable storage medium
Melcher et al. Codeless Deep Learning with KNIME: Build, train, and deploy various deep neural network architectures using KNIME Analytics Platform
de Lucena Framework for collaborative knowledge management in organizations
Shao et al. Visual explanation for open-domain question answering with bert
Casillo et al. The Role of AI in Improving Interaction With Cultural Heritage: An Overview
Devi et al. SoloDB for social media’s big data using deep natural language with AI applications and Industry 5.0
JP2020046956A (ja) Machine learning system
US20230093468A1 (en) Cognitive image searching based on personalized image components of a composite image
Roy et al. MultiMICS: a contextual multifaceted intelligent multimedia information fusion paradigm
Zhao et al. Research on emotion-embedded design flow based on deep learning technology
Braunschweig Artificial Intelligence: Current challenges and Inria's engagement-Inria white paper

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22792236

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 3217360

Country of ref document: CA

WWE Wipo information: entry into national phase

Ref document number: 2022260264

Country of ref document: AU

Ref document number: 805153

Country of ref document: NZ

Ref document number: AU2022260264

Country of ref document: AU

ENP Entry into the national phase

Ref document number: 2022260264

Country of ref document: AU

Date of ref document: 20220414

Kind code of ref document: A

WWE Wipo information: entry into national phase

Ref document number: 2022792236

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2022792236

Country of ref document: EP

Effective date: 20231120